Sample records for node-local storage approaches

  1. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
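
    A minimal Python sketch of the node-local checkpoint caching idea described above (this is an illustration, not SCR's actual C API; the paths and the periodic-flush policy are assumptions):

      import os, shutil

      NODE_LOCAL = "/tmp/ckpt"          # fast node-local storage (RAM disk or local SSD)
      PARALLEL_FS = "/tmp/pfs_ckpt"     # stand-in for the slow shared parallel file system

      def write_checkpoint(step, rank, data, flush_every=10):
          """Cache the checkpoint on node-local storage; copy it to the
          parallel file system only every `flush_every` steps."""
          os.makedirs(NODE_LOCAL, exist_ok=True)
          local_path = os.path.join(NODE_LOCAL, f"ckpt_{step}_{rank}.bin")
          with open(local_path, "wb") as f:
              f.write(data)              # scalable: every node writes locally
          if step % flush_every == 0:    # occasional flush guards against whole-job loss
              os.makedirs(PARALLEL_FS, exist_ok=True)
              shutil.copy(local_path, PARALLEL_FS)
          return local_path

      write_checkpoint(step=10, rank=0, data=b"application state")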

  2. Hybrid swarm intelligence optimization approach for optimal data storage position identification in wireless sensor networks.

    PubMed

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made data placement a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches.
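
    A minimal Python sketch of plain particle swarm optimization applied to storage-node placement, assuming a rate-weighted squared-distance energy model (the paper's hybrid PSO and fuzzy C-means clustering add more machinery; the coordinates and rates below are invented):

      import random

      producers = [(0, 0), (4, 1)]      # hypothetical node coordinates
      consumers = [(5, 5)]
      rates     = {(0, 0): 2.0, (4, 1): 1.0, (5, 5): 1.5}   # data rates (assumed)

      def energy_cost(pos):
          # Energy modelled as rate-weighted squared distance, a common radio
          # model; the paper's exact cost function is not given in the abstract.
          return sum(rates[p] * ((pos[0]-p[0])**2 + (pos[1]-p[1])**2)
                     for p in producers + consumers)

      def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
          parts = [[random.uniform(0, 5), random.uniform(0, 5)] for _ in range(n_particles)]
          vels  = [[0.0, 0.0] for _ in range(n_particles)]
          pbest = [p[:] for p in parts]                 # per-particle best positions
          gbest = min(pbest, key=energy_cost)           # swarm-wide best position
          for _ in range(iters):
              for i, p in enumerate(parts):
                  for d in range(2):
                      vels[i][d] = (w*vels[i][d]
                                    + c1*random.random()*(pbest[i][d]-p[d])
                                    + c2*random.random()*(gbest[d]-p[d]))
                      p[d] += vels[i][d]
                  if energy_cost(p) < energy_cost(pbest[i]):
                      pbest[i] = p[:]
              gbest = min(pbest, key=energy_cost)
          return gbest

      print(pso())   # approximate energy-optimal storage-node position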

  3. Development of climate data storage and processing model

    NASA Astrophysics Data System (ADS)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a "shared nothing" distributed computing architecture and assumes a computing network where each node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data, providing programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data is represented by collections of netCDF files stored in a hierarchy of directories within a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed according to the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing, their locations, as well as descriptions and run options of the software components for data analysis and visualization. Together, the model and the metadata database will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
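
    A minimal Python sketch of the kind of metadata lookup such a database enables, mapping space-time features of netCDF collections to their storage locations (the field names and records are invented; the actual schema is not given in the abstract):

      # Toy metadata index over netCDF file collections.
      catalog = [
          {"dataset": "reanalysis_t2m", "node": "node-03",
           "path": "/data/reanalysis/t2m", "years": (1979, 2016),
           "bbox": (50.0, 60.0, 60.0, 90.0)},   # (lat_min, lat_max, lon_min, lon_max)
      ]

      def find(year, lat, lon):
          """Return storage locations of datasets covering the requested point in space-time."""
          hits = []
          for rec in catalog:
              y0, y1 = rec["years"]
              la0, la1, lo0, lo1 = rec["bbox"]
              if y0 <= year <= y1 and la0 <= lat <= la1 and lo0 <= lon <= lo1:
                  hits.append((rec["node"], rec["path"]))
          return hits

      print(find(2010, 56.5, 85.0))   # -> [('node-03', '/data/reanalysis/t2m')]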

  4. Proposal for massively parallel data storage system

    NASA Technical Reports Server (NTRS)

    Mansuripur, M.

    1992-01-01

    An architecture for integrating large numbers of data storage units (drives) to form a distributed mass storage system is proposed. The network of interconnected units consists of nodes and links. At each node there resides a controller board, a data storage unit and, possibly, a local/remote user-terminal. The links (twisted-pair wires, coax cables, or fiber-optic channels) provide the communications backbone of the network. There is no central controller for the system as a whole; all decisions regarding allocation of resources, routing of messages and data-blocks, creation and distribution of redundant data-blocks throughout the system (for protection against possible failures), frequency of backup operations, etc., are made locally at individual nodes. The system can handle as many user-terminals as there are nodes in the network. Various users compete for resources by sending their requests to the local controller-board and receiving allocations of time and storage space. In principle, each user can have access to the entire system, and all drives can run in parallel to service the requests of one or more users. The system is expandable up to a maximum number of nodes, determined by the number of routing-buffers built into the controller boards. Additional drives, controller-boards, user-terminals, and links can simply be plugged into an existing system in order to expand its capacity.

  5. Hybrid Swarm Intelligence Optimization Approach for Optimal Data Storage Position Identification in Wireless Sensor Networks

    PubMed Central

    Mohanasundaram, Ranganathan; Periasamy, Pappampalayam Sanmugam

    2015-01-01

    The current high-profile debate with regard to data storage and its growth has made data placement a strategic task in the world of networking. It mainly depends on the sensor nodes called producers, the base stations, and the consumers (users and sensor nodes) that retrieve and use the data. The main concern dealt with here is to find an optimal data storage position in wireless sensor networks. Earlier works did not utilize swarm intelligence based optimization approaches to find the optimal data storage positions. To achieve this goal, an efficient swarm intelligence approach is used to choose suitable positions for a storage node. Thus, a hybrid particle swarm optimization algorithm has been used to find suitable positions for storage nodes while minimizing the total energy cost of data transmission. Clustering-based distributed data storage is utilized, solving the clustering problem with the fuzzy C-means algorithm. This research work also considers the data rates and locations of multiple producers and consumers to find optimal data storage positions. The algorithm is implemented in a network simulator and the experimental results show that the proposed clustering and swarm intelligence based ODS strategy is more effective than earlier approaches. PMID:25734182

  6. Compression in wearable sensor nodes: impacts of node topology.

    PubMed

    Imtiaz, Syed Anas; Casson, Alexander J; Rodriguez-Villegas, Esther

    2014-04-01

    Wearable sensor nodes monitoring the human body must operate autonomously for very long periods of time. Online, low-power data compression embedded within the sensor node is therefore essential to minimize data storage/transmission overheads. This paper presents a low-power MSP430 compressive sensing implementation for providing such compression, focusing particularly on the impact of the sensor node architecture on the compression performance. Compression power performance is compared for four different sensor nodes incorporating different strategies for wireless transmission and on-sensor-node local storage of data. The results demonstrate that the compressive sensing used must be designed differently depending on the underlying node topology, and that the compression strategy should not be guided only by signal processing considerations. We also provide a practical overview of state-of-the-art sensor node topologies. Wireless transmission of data is often preferred as it offers increased flexibility during use, but generally at the cost of increased power consumption. We demonstrate that wireless sensor nodes can benefit greatly from the use of compressive sensing and can now achieve power consumptions comparable to, or better than, the use of local memory.
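
    A minimal Python/numpy illustration of the compressive sensing operation such a node performs (a stand-in for the MSP430 fixed-point code; the dimensions and the +/-1 sensing matrix are assumptions):

      import numpy as np

      rng = np.random.default_rng(0)
      N, M = 256, 64                      # N samples compressed to M measurements (M << N)

      x = np.sin(2*np.pi*5*np.arange(N)/N)        # toy biosignal window
      Phi = rng.choice([-1.0, 1.0], size=(M, N))  # random +/-1 sensing matrix: on-node
                                                  # work reduces to additions/subtractions
      y = Phi @ x                                 # compressed measurements to store/transmit

      print(f"compression ratio: {N/M:.1f}x, bytes out: {y.nbytes}")
      # Reconstruction (e.g. l1-minimization) happens off-node, where power is plentiful.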

  7. Using IKAROS as a data transfer and management utility within the KM3NeT computing model

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2016-04-01

    KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that - located at the bottom of the Mediterranean Sea - will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. IKAROS is a framework that enables creating scalable storage formations on demand and helps address several limitations that current file systems face when dealing with very large scale infrastructures. It enables creating ad-hoc nearby storage formations and can use a huge number of I/O nodes in order to increase the available bandwidth (I/O and network). IKAROS unifies remote and local access in the overall data flow by permitting direct access to each I/O node. In this way we can handle the overall data flow at the network layer, limiting the interaction with the operating system. This approach allows virtually connecting, at the user level, the several different computing facilities used (Grids, Clouds, HPCs, data centers, local computing clusters and personal storage devices), on demand, based on need, by using well-known standards and protocols like HTTP.
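
    A minimal Python sketch of the direct-access idea: fetch different byte ranges of one file from several I/O nodes in parallel over plain HTTP range requests (the URLs and file are hypothetical; IKAROS's own endpoints are not described in the abstract):

      from concurrent.futures import ThreadPoolExecutor
      from urllib.request import Request, urlopen

      io_nodes = ["http://io1.example.org", "http://io2.example.org"]  # hypothetical
      FILE, SIZE = "/store/run42.dat", 2 * 1024 * 1024

      def fetch_range(args):
          node, start, end = args
          req = Request(node + FILE, headers={"Range": f"bytes={start}-{end}"})
          with urlopen(req) as resp:                    # standard HTTP range request
              return start, resp.read()

      half = SIZE // 2
      tasks = [(io_nodes[0], 0, half - 1), (io_nodes[1], half, SIZE - 1)]
      with ThreadPoolExecutor() as pool:
          chunks = sorted(pool.map(fetch_range, tasks)) # each I/O node serves its part
      data = b"".join(c for _, c in chunks)             # aggregate bandwidth of both links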

  8. Tier 3 batch system data locality via managed caches

    NASA Astrophysics Data System (ADS)

    Fischer, Max; Giffels, Manuel; Jung, Christopher; Kühn, Eileen; Quast, Günter

    2015-05-01

    Modern data processing increasingly relies on data locality for performance and scalability, whereas the common HEP approaches aim for uniform resource pools with minimal locality, recently even across site boundaries. To combine the advantages of both, the High-Performance Data Analysis (HPDA) Tier 3 concept opportunistically establishes data locality via coordinated caches. In accordance with HEP Tier 3 activities, the design incorporates two major assumptions: First, only a fraction of the data is accessed regularly and is thus the deciding factor for overall throughput. Second, data access may fall back to non-local, making permanent local data availability an inefficient resource usage strategy. Based on this, the HPDA design generically extends available storage hierarchies into the batch system. Using the batch system itself for scheduling file locality, an array of independent caches on the worker nodes is dynamically populated with high-profile data. Cache state information is exposed to the batch system both for managing caches and for scheduling jobs. As a result, users directly work with a regular, adequately sized storage system. However, their automated batch processes are presented with local replications of data whenever possible.
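
    A minimal Python sketch of cache-aware job scheduling of the kind described: the scheduler prefers the worker whose local cache already holds the job's files, then records the newly cached data (the data structures are assumptions, not the HPDA implementation):

      caches = {                      # batch system's view of per-worker cache state
          "worker1": {"/data/a.root", "/data/b.root"},
          "worker2": {"/data/c.root"},
      }

      def pick_worker(job_files):
          """Schedule onto the worker with the most requested files already cached."""
          return max(caches, key=lambda w: len(caches[w] & job_files))

      job = {"/data/a.root", "/data/c.root"}
      w = pick_worker(job)
      caches[w] |= job                # cache is populated with the job's high-profile data
      print(w)                        # -> 'worker1'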

  9. An elementary quantum network using robust nuclear spin qubits in diamond

    NASA Astrophysics Data System (ADS)

    Kalb, Norbert; Reiserer, Andreas; Humphreys, Peter; Blok, Machiel; van Bemmelen, Koen; Twitchen, Daniel; Markham, Matthew; Taminiau, Tim; Hanson, Ronald

    Quantum registers containing multiple robust qubits can form the nodes of future quantum networks for computation and communication. Information storage within such nodes must be resilient to any type of local operation. Here we demonstrate multiple robust memories by employing five nuclear spins adjacent to a nitrogen-vacancy defect centre in diamond. We characterize the storage of quantum superpositions and their resilience to entangling attempts with the electron spin of the defect centre. The storage fidelity is found to be limited by the probabilistic electron spin reset after failed entangling attempts. Control over multiple memories is then utilized to encode states in decoherence-protected subspaces with increased robustness. Furthermore, we demonstrate memory control in two optically linked network nodes and characterize the storage capabilities of both memories in terms of the process fidelity with the identity. These results pave the way towards multi-qubit quantum algorithms in a remote network setting.

  10. Energy Options for Wireless Sensor Nodes.

    PubMed

    Knight, Chris; Davidson, Joshua; Behrens, Sam

    2008-12-08

    Reduction in the size and power consumption of consumer electronics has opened up many opportunities for low-power wireless sensor networks. One of the major challenges is supporting battery-operated devices as the number of nodes in a network grows. The two main alternatives are to utilize higher energy density sources of stored energy, or to generate power at the node from local forms of energy. This paper reviews the state-of-the-art technology in the fields of both energy storage and energy harvesting for sensor nodes. The options discussed for energy storage include batteries, capacitors, fuel cells, heat engines and betavoltaic systems. The field of energy harvesting is discussed with reference to photovoltaics, temperature gradients, fluid flow, pressure variations and vibration harvesting.

  11. Energy Options for Wireless Sensor Nodes

    PubMed Central

    Knight, Chris; Davidson, Joshua; Behrens, Sam

    2008-01-01

    Reduction in the size and power consumption of consumer electronics has opened up many opportunities for low-power wireless sensor networks. One of the major challenges is supporting battery-operated devices as the number of nodes in a network grows. The two main alternatives are to utilize higher energy density sources of stored energy, or to generate power at the node from local forms of energy. This paper reviews the state-of-the-art technology in the fields of both energy storage and energy harvesting for sensor nodes. The options discussed for energy storage include batteries, capacitors, fuel cells, heat engines and betavoltaic systems. The field of energy harvesting is discussed with reference to photovoltaics, temperature gradients, fluid flow, pressure variations and vibration harvesting. PMID:27873975

  12. A study of the Immune Epitope Database for some fungi species using network topological indices.

    PubMed

    Vázquez-Prieto, Severo; Paniagua, Esperanza; Solana, Hugo; Ubeira, Florencio M; González-Díaz, Humberto

    2017-08-01

    In recent years, the encoding of system structure information with different network topological indices has been a very active field of research. In the present study, we assembled for the first time a complex network using data obtained from the Immune Epitope Database for fungi species, and we then considered the general topology, the node degree distribution, and the local structure of this network. We also calculated eight node centrality measures for the observed network and compared it with three theoretical models. In view of the results obtained, we expect that the present approach can become a valuable tool for exploring the complexity of this database, as well as for the storage, manipulation, comparison, and retrieval of the information contained therein.
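
    A minimal Python sketch of the kind of topological analysis described, using networkx on a toy epitope-style graph (the edges are invented for illustration; the real network is built from Immune Epitope Database records):

      import networkx as nx
      from collections import Counter

      # Toy graph: epitopes linked to the fungal species they occur in.
      G = nx.Graph([("epitope1", "speciesA"), ("epitope1", "speciesB"),
                    ("epitope2", "speciesA"), ("epitope3", "speciesA")])

      degree_distribution = Counter(d for _, d in G.degree())  # node degree distribution
      centrality = nx.betweenness_centrality(G)                # one of several node measures

      print(degree_distribution)                  # Counter({1: 3, 2: 1, 3: 1})
      print(max(centrality, key=centrality.get))  # most central node: 'speciesA'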

  13. A Wearable Wireless Sensor Network for Indoor Smart Environment Monitoring in Safety Applications

    PubMed Central

    Antolín, Diego; Medrano, Nicolás; Calvo, Belén; Pérez, Francisco

    2017-01-01

    This paper presents the implementation of a wearable wireless sensor network aimed at monitoring harmful gases in industrial environments. The proposed solution is based on a customized wearable sensor node using a low-power, low-rate wireless personal area network (LR-WPAN) communications protocol, which as a first approach measures CO2 concentration, and employs different low-power strategies for the appropriate energy handling that is essential to achieving long battery life. These wearable nodes are connected to a deployed static network, and a web-based application allows data storage, remote control and monitoring of the complete network. The result is a complete and versatile remote web application with a locally implemented decision-making system, which allows early detection of hazardous situations for exposed workers. PMID:28216556

  14. A Wearable Wireless Sensor Network for Indoor Smart Environment Monitoring in Safety Applications.

    PubMed

    Antolín, Diego; Medrano, Nicolás; Calvo, Belén; Pérez, Francisco

    2017-02-14

    This paper presents the implementation of a wearable wireless sensor network aimed at monitoring harmful gases in industrial environments. The proposed solution is based on a customized wearable sensor node using a low-power, low-rate wireless personal area network (LR-WPAN) communications protocol, which as a first approach measures CO₂ concentration, and employs different low-power strategies for the appropriate energy handling that is essential to achieving long battery life. These wearable nodes are connected to a deployed static network, and a web-based application allows data storage, remote control and monitoring of the complete network. The result is a complete and versatile remote web application with a locally implemented decision-making system, which allows early detection of hazardous situations for exposed workers.

  15. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
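
    A minimal Python sketch of the strided-write/sequential-migrate pattern the patent describes (a stand-in, not the patented implementation; block size and file paths are assumptions):

      import os

      BLOCK = 4096                              # per-process checkpoint block (assumed)

      def write_strided(ssd_file, rank, nranks, blocks):
          """Each MPI process writes its blocks to the shared file at offsets
          rank*BLOCK, (rank+nranks)*BLOCK, ... (a strided layout)."""
          fd = os.open(ssd_file, os.O_CREAT | os.O_RDWR)
          for i, block in enumerate(blocks):
              os.pwrite(fd, block, (i * nranks + rank) * BLOCK)
          os.close(fd)

      def migrate(ssd_file, disk_file):
          """Asynchronously drain the shared file to magnetic disk, sequentially."""
          with open(ssd_file, "rb") as src, open(disk_file, "wb") as dst:
              while chunk := src.read(1 << 20):  # large sequential reads/writes
                  dst.write(chunk)

      # /tmp paths stand in for the solid-state node and the magnetic disk storage.
      write_strided("/tmp/ssd_ckpt.shared", rank=0, nranks=4, blocks=[b"x" * BLOCK])
      migrate("/tmp/ssd_ckpt.shared", "/tmp/disk_ckpt.shared")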

  16. Hybrid energy storage system for wireless sensor node powered by aircraft specific thermoelectric energy harvesting

    NASA Astrophysics Data System (ADS)

    Thangaraj, K.; Elefsiniotis, A.; Aslam, S.; Becker, Th.; Schmid, U.; Lees, J.; Featherston, C. A.; Pullin, R.

    2013-05-01

    This paper describes an approach for efficiently storing the energy harvested from a thermoelectric module for powering autonomous wireless sensor nodes for aeronautical health monitoring applications. A representative temperature difference was created across a thermoelectric generator (TEG) by attaching a thermal mass and a cavity containing a phase change material to one side, and a heat source (representing the aircraft fuselage) to the other. Batteries and supercapacitors are popular choices of storage device, but neither represents the ideal solution; supercapacitors have a lower energy density than batteries, and batteries have a lower power density than supercapacitors. When using only a battery for storage, the runtime of a typical sensor node is reduced by internal impedance, high resistance and other internal losses. Supercapacitors may overcome some of these problems, but generally do not provide sufficient long-term energy to allow advanced health monitoring applications to operate over extended periods. A hybrid energy storage unit can provide both energy density and power density to the wireless sensor node simultaneously. Techniques such as acoustic-ultrasonic, acoustic-emission, strain, crack-wire and wireless window-shading sensing require storage approaches that can provide immediate energy on demand, usually in short, high-intensity bursts, sustained over long periods of time. This requirement is a significant constraint for battery-only and supercapacitor-only solutions, and the storage unit should be able to store up to 40-50 J of energy.

  17. The Role of Energy Reservoirs in Distributed Computing: Manufacturing, Implementing, and Optimizing Energy Storage in Energy-Autonomous Sensor Nodes

    NASA Astrophysics Data System (ADS)

    Cowell, Martin Andrew

    The world already hosts more internet-connected devices than people, and that ratio is only increasing. These devices seamlessly integrate with people's lives to collect rich data and give immediate feedback about complex systems in business, health care, transportation, and security. Every aspect of global economies is integrating distributed computing into its industrial systems, and these systems benefit from rich datasets. Managing the power demands of these distributed computers will be paramount to ensuring the continued operation of these networks, and is elegantly addressed by including local energy harvesting and storage on a per-node basis. By replacing non-rechargeable batteries with energy harvesting, wireless sensor nodes can increase their lifetimes by an order of magnitude. This work investigates the coupling of high-power energy storage with energy harvesting technologies to power wireless sensor nodes, with sections covering device manufacturing, system integration, and mathematical modeling. First we consider the energy storage mechanisms of supercapacitors and batteries, and identify favorable characteristics in both reservoir types. We then discuss experimental methods used to manufacture high-power supercapacitors in our labs. We go on to detail the integration of our fabricated devices with collaborating labs to create functional sensor node demonstrations. With the practical knowledge gained through in-lab manufacturing and system integration, we build mathematical models to aid in device and system design. First, we model the mechanism of energy storage in porous graphene supercapacitors to aid in component architecture optimization. We then model the operation of entire sensor nodes for the purpose of optimally sizing the energy harvesting and energy reservoir components. In consideration of deploying these sensor nodes in real-world environments, we model the operation of our energy harvesting and power management systems subject to spatially and temporally varying energy availability in order to understand sensor node reliability. Looking to the future, we see an opportunity for further research to implement machine learning algorithms to control the energy resources of distributed computing networks.
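
    A minimal Python sketch of the reservoir-sizing question the thesis models: simulate a node's energy balance under a time-varying harvest profile and find the smallest reservoir that never browns out (all numbers are invented for illustration):

      harvest = [0.0, 0.0, 5.0, 12.0, 9.0, 2.0, 0.0, 0.0]   # mJ harvested per interval (assumed)
      LOAD = 3.0                                            # mJ drawn per interval

      def survives(capacity, days=3):
          stored = capacity                                  # start fully charged
          for _ in range(days):
              for h in harvest:
                  stored = min(capacity, stored + h) - LOAD  # harvest in, load out
                  if stored < 0:
                      return False                           # node browns out
          return True

      # Optimal sizing: smallest reservoir that rides through the harvest profile.
      size = next(c for c in range(1, 100) if survives(float(c)))
      print(size, "mJ reservoir suffices")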

  18. Approach to Privacy-Preserve Data in Two-Tiered Wireless Sensor Network Based on Linear System and Histogram

    NASA Astrophysics Data System (ADS)

    Dang, Van H.; Wohlgemuth, Sven; Yoshiura, Hiroshi; Nguyen, Thuc D.; Echizen, Isao

    Wireless sensor networks (WSNs) have been one of the key technologies for the future, with broad applications from the military to everyday life [1,2,3,4,5]. There are two kinds of WSN models: models with sensors for sensing data and a sink for receiving and processing queries from users, and models with special additional nodes capable of storing large amounts of data from sensors and processing queries from the sink. Among the latter type, a two-tiered model [6,7] has been widely adopted because of its storage and energy saving benefits for weak sensors, as proved by the advent of commercial storage node products such as Stargate [8] and RISE. However, by concentrating storage in certain nodes, this model becomes more vulnerable to attack. Our novel technique, called zip-histogram, contributes to solving the problems of previous studies [6,7] by protecting the stored data's confidentiality and integrity (including data from the sensors and queries from the sink) against attackers who might target storage nodes in two-tiered WSNs.

  19. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.

    PubMed

    Cheng, Jing; Xia, Linyuan

    2016-08-31

    Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build the mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by insignificant searches. Extensive experiments were conducted to study the effects of parameters like anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm.
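
    A minimal Python sketch of Cuckoo Search applied to localization: minimize the mismatch between measured anchor ranges and a candidate position. The paper's step-size and mutation modifications are represented here only by a shrinking step (their exact form is not in the abstract), and the anchor geometry is invented:

      import random, math

      anchors = [(0, 0), (10, 0), (0, 10)]               # anchor positions (assumed)
      true_node = (3.0, 4.0)
      meas = [math.dist(true_node, a) for a in anchors]  # noiseless ranges for the demo

      def fitness(p):   # residual between measured and hypothesised distances
          return sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, meas))

      def cuckoo_search(n_nests=15, iters=200, pa=0.25):
          nests = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(n_nests)]
          for t in range(iters):
              step = 1.0 * (1 - t / iters)               # step size shrinks over time
              for i, (x, y) in enumerate(nests):
                  cand = (x + step * random.gauss(0, 1), y + step * random.gauss(0, 1))
                  if fitness(cand) < fitness(nests[i]):
                      nests[i] = cand
              nests.sort(key=fitness)                    # abandon a fraction pa of the
              for i in range(int((1 - pa) * n_nests), n_nests):   # worst nests: escape
                  nests[i] = (random.uniform(0, 10), random.uniform(0, 10))  # local optima
          return min(nests, key=fitness)

      print(cuckoo_search())   # should approach the true position (3, 4)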

  20. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network

    PubMed Central

    Cheng, Jing; Xia, Linyuan

    2016-01-01

    Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build the mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by insignificant searches. Extensive experiments were conducted to study the effects of parameters like anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm. PMID:27589756

  1. Quantum storage of entangled telecom-wavelength photons in an erbium-doped optical fibre

    NASA Astrophysics Data System (ADS)

    Saglamyurek, Erhan; Jin, Jeongwan; Verma, Varun B.; Shaw, Matthew D.; Marsili, Francesco; Nam, Sae Woo; Oblak, Daniel; Tittel, Wolfgang

    2015-02-01

    The realization of a future quantum Internet requires the processing and storage of quantum information at local nodes and interconnecting distant nodes using free-space and fibre-optic links. Quantum memories for light are key elements of such quantum networks. However, to date, neither an atomic quantum memory for non-classical states of light operating at a wavelength compatible with standard telecom fibre infrastructure, nor a fibre-based implementation of a quantum memory, has been reported. Here, we demonstrate the storage and faithful recall of the state of a 1,532 nm wavelength photon entangled with a 795 nm photon, in an ensemble of cryogenically cooled erbium ions doped into a 20-m-long silica fibre, using a photon-echo quantum memory protocol. Despite its currently limited efficiency and storage time, our broadband light-matter interface brings fibre-based quantum networks one step closer to reality.

  2. SSL: Signal Similarity-Based Localization for Ocean Sensor Networks.

    PubMed

    Chen, Pengpeng; Ma, Honglu; Gao, Shouwan; Huang, Yan

    2015-11-24

    Nowadays, wireless sensor networks are often deployed on the sea surface for ocean scientific monitoring. One of the important challenges is localizing the nodes' positions. Existing localization schemes can be roughly divided into two types: range-based and range-free. The range-based localization approaches depend heavily on extra hardware capabilities, while range-free ones often suffer from poor accuracy and low scalability, far from practical for ocean monitoring applications. In response to the above limitations, this paper proposes a novel signal similarity-based localization (SSL) technology, which localizes the nodes' positions by fully utilizing the similarity of received signal strength and the open-air characteristics of the sea surface. In the localization process, we first estimate the relative distance between neighboring nodes by comparing the similarity of received signal strength, and then calculate the relative distance for non-neighboring nodes with the shortest path algorithm. After that, the nodes' relative relation map of the whole network can be obtained. Given at least three anchors, the physical locations of the nodes can be finally determined based on multi-dimensional scaling (MDS) technology. The design is evaluated by two types of ocean experiments: a zonal network and a non-regular network using 28 nodes. Results show that the proposed design improves the localization accuracy compared to typical connectivity-based approaches and confirms its effectiveness for large-scale ocean sensor networks.
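
    A minimal Python sketch of the relative-map step: turn pairwise distance estimates (direct for neighbours, shortest-path sums for the rest) into node coordinates with multi-dimensional scaling. The neighbour distances below are invented, standing in for the paper's RSS-similarity ranging:

      import numpy as np
      from scipy.sparse.csgraph import shortest_path
      from sklearn.manifold import MDS

      INF = np.inf                           # non-neighbours: distance initially unknown
      D = np.array([[0.0, 1.0, INF, INF],
                    [1.0, 0.0, 1.2, INF],
                    [INF, 1.2, 0.0, 0.9],
                    [INF, INF, 0.9, 0.0]])   # neighbour range estimates (invented)

      full = shortest_path(D, directed=False)          # fill in non-neighbour distances
      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(full) # relative relation map
      print(coords)   # anchors would then fix the map's rotation and translation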

  3. Distributed Power Allocation for Wireless Sensor Network Localization: A Potential Game Approach.

    PubMed

    Ke, Mingxing; Li, Ding; Tian, Shiwei; Zhang, Yuli; Tong, Kaixiang; Xu, Yuhua

    2018-05-08

    The problem of distributed power allocation in wireless sensor network (WSN) localization systems is investigated in this paper using the game-theoretic approach. Existing research focuses on the minimization of the localization errors of individual agent nodes over all anchor nodes subject to power budgets. When the service area and the distribution of target nodes are considered, finding the optimal trade-off between localization accuracy and power consumption becomes a new critical task. To cope with this issue, we propose a power allocation game where each anchor node minimizes the squared position error bound (SPEB) of the service area penalized by its individual power. Meanwhile, it is proven that the power allocation game is an exact potential game which has at least one pure Nash equilibrium (NE). In addition, we prove the existence of an ϵ-equilibrium point, which is a refinement of NE, and show that the better-response dynamic approach can reach the end solution. Analytical and simulation results demonstrate that: (i) when prior distribution information is available, the proposed strategies have better localization accuracy than the uniform strategies; (ii) when prior distribution information is unknown, the performance of the proposed strategies outperforms power management strategies based on the second-order cone program (SOCP) for particular agent nodes after obtaining the estimated distribution of agent nodes. In addition, the proposed strategies also provide an instructive trade-off between power consumption and localization accuracy.
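
    A minimal Python sketch of the better-response dynamic used to reach an equilibrium of a potential game: players take turns switching to any strategy that lowers their own cost, which in an exact potential game monotonically decreases the potential until no player can improve. The cost below is an invented stand-in for the SPEB-plus-power objective (it happens to be an exact potential game itself, so the loop terminates):

      import random

      POWERS = [0.0, 0.5, 1.0, 1.5, 2.0]        # discrete power levels (assumed)
      n_anchors = 3

      def cost(i, profile):
          # Invented stand-in: localization error falls with total power,
          # while each anchor pays for its own power (error penalized by power).
          total = sum(profile)
          return 1.0 / (1.0 + total) + 0.4 * profile[i]

      profile = [random.choice(POWERS) for _ in range(n_anchors)]
      improved = True
      while improved:                            # better-response dynamics
          improved = False
          for i in range(n_anchors):
              for p in POWERS:
                  trial = profile[:i] + [p] + profile[i+1:]
                  if cost(i, trial) < cost(i, profile) - 1e-12:
                      profile, improved = trial, True
      print(profile)                             # a pure Nash equilibrium profile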

  4. Partial Storage Optimization and Load Control Strategy of Cloud Data Centers

    PubMed Central

    2015-01-01

    We present a novel approach to solving cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing Data as a Service (DaaS) on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers collaborate to provide the data to the cloud clients in a faster manner. PMID:25973444

  5. Partial storage optimization and load control strategy of cloud data centers.

    PubMed

    Al Nuaimi, Klaithem; Mohamed, Nader; Al Nuaimi, Mariam; Al-Jaroodi, Jameela

    2015-01-01

    We present a novel approach to solving cloud storage issues and provide a fast load balancing algorithm. Our approach is based on partitioning and concurrent dual-direction download of the files from multiple cloud nodes. Partitions of the files are saved on the cloud rather than the full files, which provides a good optimization of cloud storage usage. Only partial replication is used in this algorithm to ensure the reliability and availability of the data. Our focus is to improve the performance and optimize the storage usage by providing Data as a Service (DaaS) on the cloud. This algorithm solves the problem of having to fully replicate large data sets, which uses up a lot of precious space on the cloud nodes. Reducing the space needed will help in reducing the cost of providing such space. Moreover, performance is also increased, since multiple cloud servers collaborate to provide the data to the cloud clients in a faster manner.
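
    A minimal Python sketch of the concurrent dual-direction idea: one node streams a partition from the front while another streams it from the back, meeting in the middle (plain byte-slicing stand-in; the paper's protocol details are not in the abstract):

      from concurrent.futures import ThreadPoolExecutor

      partition = bytes(range(256)) * 4            # a stored file partition (toy data)
      mid = len(partition) // 2

      def serve(replica, start, end, backwards):
          """One cloud node streams its byte range; one of the two reads backwards."""
          chunk = replica[start:end]
          return chunk[::-1] if backwards else chunk

      with ThreadPoolExecutor() as pool:
          head = pool.submit(serve, partition, 0, mid, False)              # node A: front half
          tail = pool.submit(serve, partition, mid, len(partition), True)  # node B: back half
          data = head.result() + tail.result()[::-1]   # reassemble in original order

      assert data == partition                     # full file, yet no node held a full replica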

  6. Influence of reserpine on in vivo localization of injected lymph node cells in the mouse.

    PubMed Central

    Bellavia, A; Micklem, H S

    1987-01-01

    The effects of reserpine, and of other agents that affect the storage and availability of 5-hydroxytryptamine (5HT), on the localization of injected 51Cr-labelled syngeneic lymph node cells have been investigated. A high dose (5 mg/kg) of reserpine given to the recipients reduced localization in the lymph nodes and prevented the usual accumulation of lymphocytes in lymph nodes draining the site of an antigen (sheep erythrocytes: SE) injection. These effects were partially reversible by the monoamine oxidase inhibitor nialamide. This dose of reserpine produced deep sedation throughout the period of the experiment. Lower doses, up to 2.5 mg/kg, produced little sedation and had no effect on the localization of lymphocytes. Other workers had previously reported reduced localization of cells in delayed-type hypersensitivity (DTH) lesions after treatment of the recipients with 5 mg/kg reserpine, and had interpreted this in terms of a role of 5HT in promoting vascular permeability and egress of blood cells. The effect of lower doses of reserpine was not reported. We suggest that the effects on cell localization in both sets of experiments may have been secondary to the general state of sedation and not attributable to a direct local influence of 5HT. Other effects of reserpine included prolonged retention of lymphocytes in the lungs and blood, and a reduction of cellularity and DNA synthesis in the thymus, spleen and lymph nodes. PMID:3817871

  7. Paging memory from random access memory to backing storage in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Inglett, Todd A; Ratterman, Joseph D; Smith, Brian E

    2013-05-21

    Paging memory from random access memory (`RAM`) to backing storage in a parallel computer that includes a plurality of compute nodes, including: executing a data processing application on a virtual machine operating system in a virtual machine on a first compute node; providing, by a second compute node, backing storage for the contents of RAM on the first compute node; and swapping, by the virtual machine operating system in the virtual machine on the first compute node, a page of memory from RAM on the first compute node to the backing storage on the second compute node.
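
    A minimal Python sketch of the remote-paging idea in the patent: a compute node evicts a RAM page to a partner node that acts as backing storage, and faults it back on access (the classes, page size and eviction policy are invented for illustration):

      PAGE_SIZE = 4096

      class BackingStoreNode:
          """Second compute node: donates its RAM as backing storage for a neighbour."""
          def __init__(self):
              self.pages = {}
          def store(self, page_no, data):
              self.pages[page_no] = data
          def load(self, page_no):
              return self.pages.pop(page_no)

      class PagedRAM:
          """First compute node: keeps at most `resident` pages, swaps the rest out."""
          def __init__(self, backing, resident=2):
              self.backing, self.resident, self.ram = backing, resident, {}
          def write(self, page_no, data):
              if page_no not in self.ram and len(self.ram) >= self.resident:
                  victim = next(iter(self.ram))          # trivial eviction policy
                  self.backing.store(victim, self.ram.pop(victim))   # page out
              self.ram[page_no] = data
          def read(self, page_no):
              if page_no not in self.ram:                # page fault
                  self.write(page_no, self.backing.load(page_no))    # page in
              return self.ram[page_no]

      ram = PagedRAM(BackingStoreNode())
      for i in range(4):
          ram.write(i, bytes(PAGE_SIZE))
      assert ram.read(0) == bytes(PAGE_SIZE)             # transparently paged back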

  8. I-DWRL: Improved Dual Wireless Radio Localization Using Magnetometer.

    PubMed

    Aziz, Abdul; Kumar, Ramesh; Joe, Inwhee

    2017-11-15

    In the dual wireless radio localization (DWRL) technique, each sensor node is equipped with two ultra-wide band (UWB) radios; the distance between the two radios is a few tens of centimeters. For localization, the DWRL technique must use at least two pre-localized nodes to fully localize an unlocalized node. Moreover, in the DWRL technique two sensor nodes cannot properly communicate location information unless each of the four UWB radios of the two communicating nodes can reach the remaining three radios. In this paper, we propose an improved DWRL (I-DWRL) algorithm in which a magnetometer sensor is mounted on one of the UWB radios of every sensor node. This addition of a magnetometer improves the DWRL algorithm such that only one localized sensor node is required for the localization of an unlocalized sensor node, and localization can be achieved even when some of the four radios of two nodes are unable to communicate with the remaining three radios. The results show that, with the use of a magnetometer, a greater number of nodes can be localized with a smaller transmission range, less energy and a shorter period of time. In comparison with the conventional DWRL algorithm, our I-DWRL not only maintains the localization error but also requires around half the semi-localizations, 60% of the time, 70% of the energy and a shorter communication range to fully localize an entire network. Moreover, I-DWRL can localize more nodes even when the transmission range is not sufficient for the DWRL algorithm.
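
    A minimal Python sketch of why a compass reference lets a single localized neighbour suffice: with a range estimate and an absolute (magnetometer-referenced) bearing, the unknown position follows from simple trigonometry. The values are invented, and I-DWRL's actual geometry additionally exploits the two on-node radios:

      import math

      known = (10.0, 5.0)          # a single already-localized node
      rng_m = 7.2                  # UWB range estimate to the unknown node (assumed)
      bearing_deg = 30.0           # magnetometer-referenced bearing, degrees from east

      theta = math.radians(bearing_deg)
      unknown = (known[0] + rng_m * math.cos(theta),
                 known[1] + rng_m * math.sin(theta))
      print(unknown)               # one neighbour suffices: no second anchor needed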

  9. I-DWRL: Improved Dual Wireless Radio Localization Using Magnetometer

    PubMed Central

    Aziz, Abdul; Kumar, Ramesh; Joe, Inwhee

    2017-01-01

    In the dual wireless radio localization (DWRL) technique, each sensor node is equipped with two ultra-wide band (UWB) radios; the distance between the two radios is a few tens of centimeters. For localization, the DWRL technique must use at least two pre-localized nodes to fully localize an unlocalized node. Moreover, in the DWRL technique two sensor nodes cannot properly communicate location information unless each of the four UWB radios of the two communicating nodes can reach the remaining three radios. In this paper, we propose an improved DWRL (I-DWRL) algorithm in which a magnetometer sensor is mounted on one of the UWB radios of every sensor node. This addition of a magnetometer improves the DWRL algorithm such that only one localized sensor node is required for the localization of an unlocalized sensor node, and localization can be achieved even when some of the four radios of two nodes are unable to communicate with the remaining three radios. The results show that, with the use of a magnetometer, a greater number of nodes can be localized with a smaller transmission range, less energy and a shorter period of time. In comparison with the conventional DWRL algorithm, our I-DWRL not only maintains the localization error but also requires around half the semi-localizations, 60% of the time, 70% of the energy and a shorter communication range to fully localize an entire network. Moreover, I-DWRL can localize more nodes even when the transmission range is not sufficient for the DWRL algorithm. PMID:29140291

  10. Entanglement distillation between solid-state quantum network nodes.

    PubMed

    Kalb, N; Reiserer, A A; Humphreys, P C; Bakermans, J J W; Kamerling, S J; Nickerson, N H; Benjamin, S C; Twitchen, D J; Markham, M; Hanson, R

    2017-06-02

    The impact of future quantum networks hinges on high-quality quantum entanglement shared between network nodes. Unavoidable imperfections necessitate a means to improve remote entanglement by local quantum operations. We realize entanglement distillation on a quantum network primitive of distant electron-nuclear two-qubit nodes. The heralded generation of two copies of a remote entangled state is demonstrated through single-photon-mediated entangling of the electrons and robust storage in the nuclear spins. After applying local two-qubit gates, single-shot measurements herald the distillation of an entangled state with increased fidelity that is available for further use. The key combination of generating, storing, and processing entangled states should enable the exploration of multiparticle entanglement on an extended quantum network.

  11. Minimally buffered data transfers between nodes in a data communications network

    DOEpatents

    Miller, Douglas R.

    2015-06-23

    Methods, apparatus, and products for minimally buffered data transfers between nodes in a data communications network are disclosed that include: receiving, by a messaging module on an origin node, a storage identifier, an origin data type, and a target data type, the storage identifier specifying application storage containing data, the origin data type describing a data subset contained in the origin application storage, the target data type describing the arrangement of the data subset in application storage on a target node; creating, by the messaging module, origin metadata describing the origin data type; selecting, by the messaging module from the origin application storage in dependence upon the origin metadata and the storage identifier, the data subset; and transmitting, by the messaging module to the target node, the selected data subset for storing in the target application storage in dependence upon the target data type without temporarily buffering the data subset.
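
    A minimal Python sketch of the selection step the patent describes: a datatype, encoded here as (offset, length) pairs, picks the subset straight out of application storage via zero-copy views, so no intermediate staging buffer is filled (the datatype encoding is an assumption):

      app_storage = bytearray(b"AAAAbbbbCCCCddddEEEE")     # origin application storage

      origin_dtype = [(0, 4), (8, 4), (16, 4)]             # (offset, length) pairs:
                                                           # the data subset to send
      def select(storage, dtype):
          """Yield the described subset without copying into an intermediate buffer."""
          view = memoryview(storage)
          for off, length in dtype:
              yield view[off:off + length]                 # zero-copy slices

      # 'Transmit' each piece directly; the target applies its own datatype on receipt.
      wire = b"".join(select(app_storage, origin_dtype))
      print(wire)                                          # b'AAAACCCCEEEE'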

  12. The Localized Discovery and Recovery for Query Packet Losses in Wireless Sensor Networks with Distributed Detector Clusters

    PubMed Central

    Teng, Rui; Leibnitz, Kenji; Miura, Ryu

    2013-01-01

    An essential application of wireless sensor networks is to successfully respond to user queries. Query packet losses occur during query dissemination due to wireless communication problems such as interference, multipath fading, packet collisions, etc. The loss of query messages at sensor nodes results in the failure of sensor nodes to report the requested data. Hence, the reliable and successful dissemination of query messages to sensor nodes is a non-trivial problem. The target of this paper is to enable highly successful query delivery to sensor nodes by localized and energy-efficient discovery and recovery of query losses. We adopt local and collective cooperation among sensor nodes to increase the success rate of distributed discoveries and recoveries. To enable scalability in the operations of discovery and recovery, we employ a distributed name resolution mechanism at each sensor node that allows sensor nodes to self-detect correlated queries and query losses, and then efficiently respond to the query losses locally. We prove that the collective discovery of query losses has a high impact on the success of query dissemination and reveal that scalability can be achieved by using the proposed approach. We further study the novel features of cooperation and competition in collective recovery at the PHY and MAC layers, and show that an appropriate number of detectors can achieve an optimal recovery success rate. We evaluate the proposed approach with both mathematical analyses and computer simulations. The proposed approach enables a high rate of successful delivery of query messages and results in short route lengths for recovering from query losses. The proposed approach is scalable and operates in a fully distributed manner. PMID:23748172

  13. Identifying influential nodes in complex networks: A node information dimension approach

    NASA Astrophysics Data System (ADS)

    Bian, Tian; Deng, Yong

    2018-04-01

    In the field of complex networks, how to identify influential nodes is a significant issue in analyzing the structure of a network. Existing methods that identify influential nodes based on the local dimension do not take the global structure information of complex networks into consideration. In this paper, a node information dimension is proposed by synthesizing the local dimensions at different topological distance scales. A case study of the Netscience network is used to illustrate the efficiency and practicability of the proposed method.
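
    A minimal Python sketch of the local-dimension estimate that such measures build on: count the nodes within topological distance r of a node and fit the log-log growth rate. The paper's exact synthesis into an information dimension is not given in the abstract, so only the local-dimension step is shown, on a standard test graph:

      import math
      import networkx as nx

      G = nx.les_miserables_graph()        # a standard test network

      def local_dimension(G, node, r_max=3):
          """Slope of ln N(r) vs ln r, where N(r) = #nodes within distance r."""
          lengths = nx.single_source_shortest_path_length(G, node, cutoff=r_max)
          rs = range(1, r_max + 1)
          xs = [math.log(r) for r in rs]
          ys = [math.log(sum(1 for d in lengths.values() if 0 < d <= r)) for r in rs]
          n = len(xs)                      # least-squares slope of the log-log fit
          mx, my = sum(xs)/n, sum(ys)/n
          return sum((x-mx)*(y-my) for x, y in zip(xs, ys)) / sum((x-mx)**2 for x in xs)

      ranked = sorted(G.nodes, key=lambda v: local_dimension(G, v), reverse=True)
      print(ranked[:5])                    # candidate influential nodes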

  14. Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs.

    PubMed

    Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang

    2017-06-17

    Fog (from core to edge) computing is a newly emerging computing platform which utilizes the large number of network devices at the edge of a network to provide ubiquitous computing, and thus has great development potential. However, the issue of security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of the huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiency of the LM scheme, which demands large storage space and limits network lifetime. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (areas at a long distance from the sink), while a small marking probability is applied to nodes in nearby areas (areas at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information is migrated to nodes at a longer distance from the sink to increase the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12-1.28 times the amount of marking information that the equal probability marking approach achieves, and has 1.15-1.26 times the storage utilization efficiency compared with other schemes.
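
    A minimal Python sketch of the unequal-marking idea: the marking probability grows with a node's distance from the sink, so remote nodes (which have spare energy and storage) mark packets more often. The linear ramp is an assumption; the abstract does not give the exact function:

      import random

      MAX_HOPS = 10

      def marking_prob(hops_to_sink, p_near=0.1, p_far=0.8):
          """Linear ramp: near the sink mark rarely, far from it mark often."""
          frac = hops_to_sink / MAX_HOPS
          return p_near + (p_far - p_near) * frac

      def forward(packet_marks, node_id, hops_to_sink):
          if random.random() < marking_prob(hops_to_sink):
              packet_marks.append(node_id)       # traceback info accumulates en route
          return packet_marks

      marks = []
      for hop, node in enumerate(["n9", "n7", "n4", "n2"]):   # route toward the sink
          forward(marks, node, hops_to_sink=9 - 2 * hop)
      print(marks)    # nodes far from the sink appear with higher probability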

  15. Unequal Probability Marking Approach to Enhance Security of Traceback Scheme in Tree-Based WSNs

    PubMed Central

    Huang, Changqin; Ma, Ming; Liu, Xiao; Liu, Anfeng; Zuo, Zhengbang

    2017-01-01

    Fog (from core to edge) computing is a newly emerging computing platform, which utilizes a large number of network devices at the edge of a network to provide ubiquitous computing, thus having great development potential. However, the issue of security poses an important challenge for fog computing. In particular, the Internet of Things (IoT) that constitutes the fog computing platform is crucial for preserving the security of a huge number of wireless sensors, which are vulnerable to attack. In this paper, a new unequal probability marking approach is proposed to enhance the security performance of logging and migration traceback (LM) schemes in tree-based wireless sensor networks (WSNs). The main contribution of this paper is to overcome the deficiency of the LM scheme that has a higher network lifetime and large storage space. In the unequal probability marking logging and migration (UPLM) scheme of this paper, different marking probabilities are adopted for different nodes according to their distances to the sink. A large marking probability is assigned to nodes in remote areas (areas at a long distance from the sink), while a small marking probability is applied to nodes in nearby area (areas at a short distance from the sink). This reduces the consumption of storage and energy in addition to enhancing the security performance, lifetime, and storage capacity. Marking information will be migrated to nodes at a longer distance from the sink for increasing the amount of stored marking information, thus enhancing the security performance in the process of migration. The experimental simulation shows that for general tree-based WSNs, the UPLM scheme proposed in this paper can store 1.12–1.28 times the amount of stored marking information that the equal probability marking approach achieves, and has 1.15–1.26 times the storage utilization efficiency compared with other schemes. PMID:28629135

  16. A Trust-Based Secure Routing Scheme Using the Traceback Approach for Energy-Harvesting Wireless Sensor Networks.

    PubMed

    Tang, Jiawei; Liu, Anfeng; Zhang, Jian; Xiong, Neal N; Zeng, Zhiwen; Wang, Tian

    2018-03-01

    The Internet of Things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). The main contributions of TBSR are: (a) the source nodes send data and notification to the sinks through disjoint paths, separately; in such a mechanism, the data and notification can be verified independently to ensure their security; (b) the data and notification adopt a dynamic probability of marking and logging approach during routing, so that, when attacked, the network can adopt the traceback approach to locate and clear malicious nodes to ensure security. The probability of marking is determined by the remaining battery level; when nodes harvest more energy, the probability of marking is higher, which improves network security. If the probability of marking is higher, more nodes on the data packet routing path will be marked, and the sink will be more likely to trace back the data packet routing path and find malicious nodes according to this notification. When data packets are routed again, they tend to bypass these malicious nodes, which makes the success rate of routing higher and leads to improved network security. When the battery level is low, the probability of marking is decreased, which saves energy. For logging, when the battery level is high, the network adopts a larger probability of marking and a smaller probability of logging to transmit notification to the sink, which reserves enough storage space to meet the storage demand for periods when the battery level is low; when the battery level is low, increasing the probability of logging reduces energy consumption. Once the remaining battery level is high enough, nodes send the notification that was logged before to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme is improved comprehensively; it can effectively increase the quantity of notification received by the sink by 20%, increase energy efficiency by 11%, reduce the maximum storage capacity needed by nodes by 33.3% and improve the success rate of routing by approximately 16.30%.

  17. A Trust-Based Secure Routing Scheme Using the Traceback Approach for Energy-Harvesting Wireless Sensor Networks

    PubMed Central

    Tang, Jiawei; Zhang, Jian; Zeng, Zhiwen; Wang, Tian

    2018-01-01

    The Internet of Things (IoT) is composed of billions of sensing devices that are subject to threats stemming from increasing reliance on communications technologies. A Trust-Based Secure Routing (TBSR) scheme using the traceback approach is proposed to improve the security of data routing and maximize the use of available energy in Energy-Harvesting Wireless Sensor Networks (EHWSNs). The main contributions of TBSR are: (a) the source nodes send data and notification to the sinks through disjoint paths, separately; in such a mechanism, the data and notification can be verified independently to ensure their security; (b) the data and notification adopt a dynamic probability of marking and logging approach during routing, so that, when attacked, the network can adopt the traceback approach to locate and clear malicious nodes to ensure security. The probability of marking is determined by the remaining battery level; when nodes harvest more energy, the probability of marking is higher, which improves network security. If the probability of marking is higher, more nodes on the data packet routing path will be marked, and the sink will be more likely to trace back the data packet routing path and find malicious nodes according to this notification. When data packets are routed again, they tend to bypass these malicious nodes, which makes the success rate of routing higher and leads to improved network security. When the battery level is low, the probability of marking is decreased, which saves energy. For logging, when the battery level is high, the network adopts a larger probability of marking and a smaller probability of logging to transmit notification to the sink, which reserves enough storage space to meet the storage demand for periods when the battery level is low; when the battery level is low, increasing the probability of logging reduces energy consumption. Once the remaining battery level is high enough, nodes send the notification that was logged before to the sink. Compared with past solutions, our results indicate that the performance of the TBSR scheme is improved comprehensively; it can effectively increase the quantity of notification received by the sink by 20%, increase energy efficiency by 11%, reduce the maximum storage capacity needed by nodes by 33.3% and improve the success rate of routing by approximately 16.30%. PMID:29494561

  18. Secure Localization in the Presence of Colluders in WSNs

    PubMed Central

    Barbeau, Michel; Corriveau, Jean-Pierre; Garcia-Alfaro, Joaquin; Yao, Meng

    2017-01-01

    We address the challenge of correctly estimating the position of wireless sensor network (WSN) nodes in the presence of malicious adversaries. We consider adversarial situations during the execution of node localization under three classes of colluding adversaries. We describe a decentralized algorithm that aims at determining the position of nodes in the presence of such colluders, which are assumed to either forge or manipulate the information they exchange with the other nodes of the WSN. This algorithm allows location-unknown nodes to successfully detect adversaries within their communication range. Numerical simulations are reported to validate the approach. Results show the validity of the proposal, both in terms of localization and adversary detection. PMID:28817077

  19. Localization-Free Detection of Replica Node Attacks in Wireless Sensor Networks Using Similarity Estimation with Group Deployment Knowledge

    PubMed Central

    Ding, Chao; Yang, Lijun; Wu, Meng

    2017-01-01

    Due to the unattended nature and poor security guarantees of wireless sensor networks (WSNs), adversaries can easily make replicas of compromised nodes and place them throughout the network to launch various types of attacks. Such an attack is dangerous because it enables the adversaries to control large numbers of nodes and extend the damage of attacks to most of the network at quite limited cost. To stop the node replica attack, we propose a location similarity-based detection scheme using deployment knowledge. Compared with prior solutions, our scheme provides extra functionality that prevents replicas from generating false location claims, without deploying resource-consuming localization techniques on the resource-constrained sensor nodes. We evaluate the security performance of our proposal under different attack strategies through heuristic analysis, and show that our scheme achieves secure and robust replica detection by increasing the cost of node replication. Additionally, we evaluate the impact of the network environment on the proposed scheme through theoretical analysis and simulation experiments, and show that our scheme achieves effectiveness and efficiency with substantially lower communication, computational, and storage overhead than prior works under different situations and attack strategies. PMID:28098846

  20. Localization-Free Detection of Replica Node Attacks in Wireless Sensor Networks Using Similarity Estimation with Group Deployment Knowledge.

    PubMed

    Ding, Chao; Yang, Lijun; Wu, Meng

    2017-01-15

    Due to the unattended nature and poor security guarantees of wireless sensor networks (WSNs), adversaries can easily make replicas of compromised nodes and place them throughout the network to launch various types of attacks. Such an attack is dangerous because it enables the adversaries to control large numbers of nodes and extend the damage of attacks to most of the network at quite limited cost. To stop the node replica attack, we propose a location similarity-based detection scheme using deployment knowledge. Compared with prior solutions, our scheme provides extra functionality that prevents replicas from generating false location claims without deploying resource-consuming localization techniques on the resource-constrained sensor nodes. We evaluate the security performance of our proposal under different attack strategies through heuristic analysis, and show that our scheme achieves secure and robust replica detection by increasing the cost of node replication. Additionally, we evaluate the impact of the network environment on the proposed scheme through theoretical analysis and simulation experiments, and show that our scheme achieves effectiveness and efficiency with substantially lower communication, computational, and storage overhead than prior works under different situations and attack strategies.

  1. Collaborative localization in wireless sensor networks via pattern recognition in radio irregularity using omnidirectional antennas.

    PubMed

    Jiang, Joe-Air; Chuang, Cheng-Long; Lin, Tzu-Shiang; Chen, Chia-Pang; Hung, Chih-Hung; Wang, Jiing-Yi; Liu, Chang-Wang; Lai, Tzu-Yun

    2010-01-01

    In recent years, various received signal strength (RSS)-based localization estimation approaches for wireless sensor networks (WSNs) have been proposed. RSS-based localization is regarded as a low-cost solution for many location-aware applications in WSNs. In previous studies, the radiation patterns of all sensor nodes are assumed to be spherical, which is an oversimplification of the radio propagation model in practical applications. In this study, we present an RSS-based cooperative localization method that estimates the unknown coordinates of sensor nodes in a network. An arrangement of two external low-cost omnidirectional dipole antennas is developed based on the distance-power gradient model. A modified robust regression is also proposed to determine the relative azimuth and distance between a sensor node and a fixed reference node. In addition, a cooperative localization scheme that incorporates estimates from multiple fixed reference nodes is presented to improve the accuracy of the localization. The proposed method is tested via computer-based analysis and field tests. Experimental results demonstrate that the proposed low-cost method is a useful solution for localizing sensor nodes in unknown or changing environments.
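
    The distance-power gradient model mentioned above is conventionally the log-distance path-loss law, RSS(d) = RSS(d0) - 10 n log10(d/d0). Below is a minimal sketch of inverting it to estimate range; the reference power and path-loss exponent are assumed values that a real deployment would calibrate:

      import math

      def distance_from_rss(rss_dbm, rss_d0=-40.0, d0=1.0, n=2.5):
          # Invert RSS(d) = RSS(d0) - 10*n*log10(d/d0):
          #   d = d0 * 10**((RSS(d0) - RSS(d)) / (10*n)).
          # rss_d0 (dBm at reference distance d0) and the path-loss
          # exponent n are site-specific calibration parameters.
          return d0 * 10 ** ((rss_d0 - rss_dbm) / (10 * n))

      print(round(distance_from_rss(-65.0), 2))  # ~10 m for these values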

  2. Grid data access on widely distributed worker nodes using scalla and SRM

    NASA Astrophysics Data System (ADS)

    Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.

    2008-07-01

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning of files), storage policies or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing the 350 TB Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and approach on how to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past two years. Integration details, future plans and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  3. Theoretical Analysis of Local Search and Simple Evolutionary Algorithms for the Generalized Travelling Salesperson Problem.

    PubMed

    Pourhassan, Mojgan; Neumann, Frank

    2018-06-22

    The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.

  4. Parallel compression of data chunks of a shared data object using a log-structured file system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be decompressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
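
    As a rough illustration of the client-side flow this record describes (compress on write, decompress on read), the sketch below uses zlib and a plain dictionary as a stand-in for the storage node; all function names are hypothetical:

      import zlib

      def write_chunk(storage, offset, chunk):
          # Client side: compress the chunk before shipping it to the
          # storage node, which appends it log-structured-style.
          storage[offset] = zlib.compress(chunk)
          return len(storage[offset])

      def read_chunk(storage, offset):
          # Client side: decompress on read, as the abstract describes.
          return zlib.decompress(storage[offset])

      store = {}
      n = write_chunk(store, 0, b"checkpoint data " * 64)
      assert read_chunk(store, 0) == b"checkpoint data " * 64
      print("stored", n, "bytes compressed")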

  5. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
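
    A minimal sketch of the store-with-checksum/verify-on-read cycle, using SHA-256 and an in-memory dictionary as a stand-in for the shared data object; the names are illustrative and are not PLFS APIs:

      import hashlib

      def put_chunk(shared_object, offset, chunk):
          # Client: store the chunk together with its checksum value.
          shared_object[offset] = (chunk, hashlib.sha256(chunk).hexdigest())

      def get_chunk(shared_object, offset):
          # Reader: recompute the checksum to verify data integrity.
          chunk, stored = shared_object[offset]
          if hashlib.sha256(chunk).hexdigest() != stored:
              raise IOError("checksum mismatch: data corrupted")
          return chunk

      obj = {}
      put_chunk(obj, 0, b"block of checkpoint data")
      print(get_chunk(obj, 0))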

  6. Data Access Based on a Guide Map of the Underwater Wireless Sensor Network

    PubMed Central

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Wang, Hongbin; Cheng, Albert M. K.

    2017-01-01

    Underwater wireless sensor networks (UWSNs) represent an area of increasing research interest, as data storage, discovery, and querying in UWSNs are persistent challenges. In this paper, a data access based on a guide map (DAGM) method is proposed for UWSNs. In DAGM, the metadata describe the abstracts of the data content and the storage location. The center ring is composed of the nodes on the shortest average data query path in the network and stores the metadata, and the data guide map organizes, diffuses and synchronizes the metadata in the center ring, providing the most time-saving and energy-efficient data query service for the user. In this method, the data is first stored in the UWSN: the storage node is determined, the data is transmitted from the sensor node (the data generation source) to the storage node, and metadata is generated for it. Then, the metadata is sent to the center ring node nearest to the storage node, and the data guide map organizes the metadata, diffusing and synchronizing it to the other center ring nodes. Finally, when a query arrives at any user node, the data guide map selects the center ring node nearest to the user to process the query, and a data transmission route offering the shortest transmission delay and lowest energy consumption is generated according to the storage location abstract in the metadata. Hence, application data transmission from the storage node to the user is completed. The simulation results demonstrate that DAGM has advantages with respect to data access time and network energy consumption. PMID:29039757
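
    The store/query flow can be caricatured as follows. This sketch assumes 2D Euclidean distances and full metadata synchronization across the ring, which simplifies DAGM's delay- and energy-aware routing:

      def nearest(ring, pos):
          # Euclidean nearest center-ring node to a position (a
          # simplification of DAGM's delay/energy-aware selection).
          return min(ring, key=lambda r: (r["x"] - pos[0]) ** 2
                                       + (r["y"] - pos[1]) ** 2)

      def store(ring, storage_node, data_id):
          # Register metadata (content abstract + storage location) on
          # the ring node nearest to the storage node, then let the
          # guide map diffuse/synchronize it to the other ring nodes.
          meta = {"id": data_id, "loc": (storage_node["x"], storage_node["y"])}
          nearest(ring, meta["loc"])["metadata"][data_id] = meta
          for r in ring:
              r["metadata"][data_id] = meta

      def query(ring, user_pos, data_id):
          # A user asks its nearest ring node for the storage location.
          return nearest(ring, user_pos)["metadata"][data_id]["loc"]

      ring = [{"x": x, "y": 0, "metadata": {}} for x in (0, 50, 100)]
      store(ring, {"x": 90, "y": 10}, "sample-42")
      print(query(ring, (5, 5), "sample-42"))  # -> (90, 10)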

  7. Data Access Based on a Guide Map of the Underwater Wireless Sensor Network.

    PubMed

    Wei, Zhengxian; Song, Min; Yin, Guisheng; Song, Houbing; Wang, Hongbin; Ma, Xuefei; Cheng, Albert M K

    2017-10-17

    Underwater wireless sensor networks (UWSNs) represent an area of increasing research interest, as data storage, discovery, and querying in UWSNs are persistent challenges. In this paper, a data access based on a guide map (DAGM) method is proposed for UWSNs. In DAGM, the metadata describe the abstracts of the data content and the storage location. The center ring is composed of the nodes on the shortest average data query path in the network and stores the metadata, and the data guide map organizes, diffuses and synchronizes the metadata in the center ring, providing the most time-saving and energy-efficient data query service for the user. In this method, the data is first stored in the UWSN: the storage node is determined, the data is transmitted from the sensor node (the data generation source) to the storage node, and metadata is generated for it. Then, the metadata is sent to the center ring node nearest to the storage node, and the data guide map organizes the metadata, diffusing and synchronizing it to the other center ring nodes. Finally, when a query arrives at any user node, the data guide map selects the center ring node nearest to the user to process the query, and a data transmission route offering the shortest transmission delay and lowest energy consumption is generated according to the storage location abstract in the metadata. Hence, application data transmission from the storage node to the user is completed. The simulation results demonstrate that DAGM has advantages with respect to data access time and network energy consumption.

  8. Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakl, Pavel; /Prague, Inst. Phys.; Lauret, Jerome

    2011-11-10

    Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning of files), storage policies or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing the 350 TB Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and approach on how to make access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past two years. Integration details, future plans and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.

  9. Active Flash: Out-of-core Data Analytics on Flash Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boboila, Simona; Kim, Youngjae; Vazhkudai, Sudharshan S

    2012-01-01

    Next generation science will increasingly come to rely on the ability to perform efficient, on-the-fly analytics of data generated by high-performance computing (HPC) simulations, modeling complex physical phenomena. Scientific computing workflows are stymied by the traditional chaining of simulation and data analysis, creating multiple rounds of redundant reads and writes to the storage system, which grows in cost with the ever-increasing gap between compute and storage speeds in HPC clusters. Recent HPC acquisitions have introduced compute node-local flash storage as a means to alleviate this I/O bottleneck. We propose a novel approach, Active Flash, to expedite data analysis pipelines by migrating to the location of the data, the flash device itself. We argue that Active Flash has the potential to enable true out-of-core data analytics by freeing up both the compute core and the associated main memory. By performing analysis locally, dependence on limited bandwidth to a central storage system is reduced, while allowing this analysis to proceed in parallel with the main application. In addition, offloading work from the host to the more power-efficient controller reduces peak system power usage, which is already in the megawatt range and poses a major barrier to HPC system scalability. We propose an architecture for Active Flash, explore energy and performance trade-offs in moving computation from host to storage, demonstrate the ability of appropriate embedded controllers to perform data analysis and reduction tasks at speeds sufficient for this application, and present a simulation study of Active Flash scheduling policies. These results show the viability of the Active Flash model, and its capability to potentially have a transformative impact on scientific data analysis.

  10. Dynamic storage in resource-scarce browsing multimedia applications

    NASA Astrophysics Data System (ADS)

    Elenbaas, Herman; Dimitrova, Nevenka

    1998-10-01

    In the convergence of information and entertainment there is a conflict between the consumer's expectation of fast access to high quality multimedia content through narrow bandwidth channels and the size of this content. During the retrieval and presentation of a multimedia application, two problems have to be solved: the limited bandwidth during transmission of the retrieved multimedia content and the limited memory for temporary caching. In this paper we propose an approach for latency optimization in information browsing applications. We propose a method for flattening hierarchically linked documents in a manner convenient for network transport over slow channels to minimize browsing latency. Flattening of the hierarchy involves linearization, compression and bundling of the document nodes. After the transfer, the compressed hierarchy is stored on a local device where it can be partly unbundled to fit the caching limits at the local site while giving the user access to the content.
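
    A toy version of the flatten-then-partially-unbundle idea, assuming a JSON tree and zlib; the real system's linearization order and bundle format are not specified in the abstract:

      import json, zlib

      def flatten(doc):
          # Linearize the document tree depth-first, then compress and
          # bundle the nodes into one blob suited to a slow channel.
          nodes = []
          def walk(node, parent):
              nodes.append({"id": len(nodes), "parent": parent,
                            "content": node["content"]})
              me = nodes[-1]["id"]
              for child in node.get("children", []):
                  walk(child, me)
          walk(doc, None)
          return zlib.compress(json.dumps(nodes).encode())

      def unbundle(blob, limit):
          # Partially unpack at the local site: keep only the first
          # `limit` nodes to respect the cache size.
          return json.loads(zlib.decompress(blob))[:limit]

      doc = {"content": "root",
             "children": [{"content": "a"}, {"content": "b"}]}
      print(unbundle(flatten(doc), limit=2))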

  11. Scalable cloud without dedicated storage

    NASA Astrophysics Data System (ADS)

    Batkovich, D. V.; Kompaniets, M. V.; Zarochentsev, A. K.

    2015-05-01

    We present a prototype of a scalable computing cloud. It is intended to be deployed on a cluster without separate dedicated storage. The dedicated storage is replaced by distributed software storage. In addition, all cluster nodes are used both as computing nodes and as storage nodes. This solution increases utilization of the cluster resources as well as improves fault tolerance and performance of the distributed storage. Another advantage of this solution is high scalability with a relatively low initial and maintenance cost. The solution is built on the basis of open source components like OpenStack, CEPH, etc.

  12. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility - PROOF is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.

  13. Routing in Mobile Wireless Sensor Networks: A Leader-Based Approach.

    PubMed

    Burgos, Unai; Amozarrain, Ugaitz; Gómez-Calzado, Carlos; Lafuente, Alberto

    2017-07-07

    This paper presents a leader-based approach to routing in Mobile Wireless Sensor Networks (MWSNs). Using local information from neighbour nodes, a leader election mechanism maintains a spanning tree in order to provide the necessary adaptations for efficient routing upon the connectivity changes resulting from the mobility of sensors or sink nodes. We present two protocols following the leader election approach, which have been implemented using Castalia and OMNeT++. The protocols have been evaluated, alongside other reference MWSN routing protocols, to analyse the impact of network size and node velocity on performance, which has demonstrated the validity of our approach.

  14. Serving by local consensus in the public service location game.

    PubMed

    Sun, Yi-Fan; Zhou, Hai-Jun

    2016-09-02

    We discuss the issue of distributed and cooperative decision-making in a network game of public service location. Each node of the network can decide to host a certain public service, incurring a construction cost and serving all the neighboring nodes and itself. A pure consumer node has to pay a tax, and the collected tax is evenly distributed to all the hosting nodes to remedy their construction costs. If all nodes make individual best-response decisions, the system gets trapped in an inefficient situation of high tax level. Here we introduce a decentralized local-consensus selection mechanism which requires nodes to recommend their neighbors of highest local impact as candidate servers, and a node may become a server only if all its non-server neighbors give their assent. We demonstrate that although this mechanism involves only information exchange among neighboring nodes, it leads to socially efficient solutions with a tax level approaching the lowest possible value. Our results may help in understanding and improving collective problem-solving in various networked social and robotic systems.
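
    The recommend-and-assent mechanism can be sketched as below. This is a deliberate simplification: local impact is taken to be node degree, and the tax dynamics that drive the full game are omitted:

      def choose_servers(adj, impact):
          # Each node recommends its highest-impact neighbour (or
          # itself); a node becomes a server only if every non-server
          # neighbour recommended it. Iterate until no change.
          servers, changed = set(), True
          while changed:
              changed = False
              rec = {v: max([v] + adj[v], key=impact) for v in adj}
              for v in adj:
                  if v in servers:
                      continue
                  voters = [u for u in adj[v] if u not in servers]
                  if voters and all(rec[u] == v for u in voters):
                      servers.add(v)
                      changed = True
          return servers

      adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
      # degree as the (assumed) local-impact measure
      print(choose_servers(adj, impact=lambda v: len(adj[v])))  # {2}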

  15. Serving by local consensus in the public service location game

    PubMed Central

    Sun, Yi-Fan; Zhou, Hai-Jun

    2016-01-01

    We discuss the issue of distributed and cooperative decision-making in a network game of public service location. Each node of the network can decide to host a certain public service, incurring a construction cost and serving all the neighboring nodes and itself. A pure consumer node has to pay a tax, and the collected tax is evenly distributed to all the hosting nodes to remedy their construction costs. If all nodes make individual best-response decisions, the system gets trapped in an inefficient situation of high tax level. Here we introduce a decentralized local-consensus selection mechanism which requires nodes to recommend their neighbors of highest local impact as candidate servers, and a node may become a server only if all its non-server neighbors give their assent. We demonstrate that although this mechanism involves only information exchange among neighboring nodes, it leads to socially efficient solutions with a tax level approaching the lowest possible value. Our results may help in understanding and improving collective problem-solving in various networked social and robotic systems. PMID:27586793

  16. Serving by local consensus in the public service location game

    NASA Astrophysics Data System (ADS)

    Sun, Yi-Fan; Zhou, Hai-Jun

    2016-09-01

    We discuss the issue of distributed and cooperative decision-making in a network game of public service location. Each node of the network can decide to host a certain public service, incurring a construction cost and serving all the neighboring nodes and itself. A pure consumer node has to pay a tax, and the collected tax is evenly distributed to all the hosting nodes to remedy their construction costs. If all nodes make individual best-response decisions, the system gets trapped in an inefficient situation of high tax level. Here we introduce a decentralized local-consensus selection mechanism which requires nodes to recommend their neighbors of highest local impact as candidate servers, and a node may become a server only if all its non-server neighbors give their assent. We demonstrate that although this mechanism involves only information exchange among neighboring nodes, it leads to socially efficient solutions with a tax level approaching the lowest possible value. Our results may help in understanding and improving collective problem-solving in various networked social and robotic systems.

  17. NATIONAL WATER INFORMATION SYSTEM OF THE U. S. GEOLOGICAL SURVEY.

    USGS Publications Warehouse

    Edwards, Melvin D.

    1985-01-01

    The National Water Information System (NWIS) has been designed as an interactive, distributed data system. It will integrate the existing, diverse data-processing systems into a common system. It will also provide easier, more flexible use as well as more convenient access and expanded computing, dissemination, and data-analysis capabilities. The NWIS is being implemented as part of a Distributed Information System (DIS) being developed by the Survey's Water Resources Division. The NWIS will be implemented on each node of the distributed network for the local processing, storage, and dissemination of hydrologic data collected within the node's area of responsibility. The processor at each node will also be used to perform hydrologic modeling, statistical data analysis, text editing, and some administrative work.

  18. Remote direct memory access

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.

    2012-12-11

    Methods, parallel computers, and computer program products are disclosed for remote direct memory access. Embodiments include transmitting, from an origin DMA engine on an origin compute node to a plurality of target DMA engines on target compute nodes, a request to send message, the request to send message specifying data to be transferred from the origin DMA engine to data storage on each target compute node; receiving, by each target DMA engine on each target compute node, the request to send message; preparing, by each target DMA engine, to store data according to the data storage reference and the data length, including assigning a base storage address for the data storage reference; sending, by one or more of the target DMA engines, an acknowledgment message acknowledging that all the target DMA engines are prepared to receive a data transmission from the origin DMA engine; receiving, by the origin DMA engine, the acknowledgement message from the one or more of the target DMA engines; and transferring, by the origin DMA engine, data to data storage on each of the target compute nodes according to the data storage reference using a single direct put operation.
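
    The request/acknowledge/direct-put handshake reads roughly as follows, with in-memory lists standing in for target-node storage; the steps mirror the claim language, and all names are illustrative:

      def rdma_put(origin, targets, data):
          # 1. request to send: announce the data length to each target.
          for t in targets:
              t["expect"] = len(data)
          # 2. each target DMA engine assigns a base storage address
          #    for the incoming data and acknowledges readiness.
          acks = 0
          for t in targets:
              t["base"] = len(t["memory"])
              t["memory"].extend([None] * t["expect"])
              acks += 1
          # 3. once all acknowledgements arrive, the origin engine
          #    transfers the data with a single direct put per target.
          if acks == len(targets):
              for t in targets:
                  for i, byte in enumerate(data):
                      t["memory"][t["base"] + i] = byte

      targets = [{"memory": []}, {"memory": []}]
      rdma_put({"id": 0}, targets, b"payload")
      print(bytes(targets[0]["memory"]), bytes(targets[1]["memory"]))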

  19. An Obstacle-Tolerant Path Planning Algorithm for Mobile-Anchor-Node-Assisted Localization

    PubMed Central

    Tsai, Rong-Guei

    2018-01-01

    The location information obtained using a sensor is a critical requirement in wireless sensor networks. Numerous localization schemes have been proposed, among which mobile-anchor-node-assisted localization (MANAL) can reduce costs and overcome environmental constraints. A mobile anchor node (MAN) provides its own location information to assist the localization of sensor nodes. Numerous path planning schemes have been proposed for MANAL, but most scenarios assume the absence of obstacles in the environment. However, in a realistic environment, sensor nodes cannot be located when obstacles block the path traversed by the MAN, preventing a sensor from receiving the three location messages it needs from the MAN. This study proposes the obstacle-tolerant path planning (OTPP) approach to solve the sensor location problem caused by obstacle blockage. OTPP can approximate the optimal number of beacon points and plan the path accordingly, thereby ensuring that all the unknown nodes receive three location messages from the MAN while reducing the number of packets the MAN must broadcast. Experimental results demonstrate that OTPP performs better than Z-curves because it reduces the total number of beacon points utilized and is thus more suitable in an obstacle-present environment. Compared to the Z-curve, OTPP can reduce localization error and improve localization coverage. PMID:29547582

  20. Direct memory access transfer completion notification

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Parker, Jeffrey J.

    2010-08-17

    Methods, apparatus, and products are disclosed for DMA transfer completion notification that include: inserting, by an origin DMA engine on an origin compute node in an injection FIFO buffer, a data descriptor for an application message to be transferred to a target compute node on behalf of an application on the origin compute node; inserting, by the origin DMA engine, a completion notification descriptor in the injection FIFO buffer after the data descriptor for the message, the completion notification descriptor specifying an address of a completion notification field in application storage for the application; transferring, by the origin DMA engine to the target compute node, the message in dependence upon the data descriptor; and notifying, by the origin DMA engine, the application that the transfer of the message is complete, including performing a local direct put operation to store predesignated notification data at the address of the completion notification field.

  1. Data oriented job submission scheme for the PHENIX user analysis in CCJ

    NASA Astrophysics Data System (ADS)

    Nakamura, T.; En'yo, H.; Ichihara, T.; Watanabe, Y.; Yokkaichi, S.

    2011-12-01

    The RIKEN Computing Center in Japan (CCJ) has been developed to make it possible to analyze the huge amount of data collected by the PHENIX experiment at RHIC. The collected raw data or reconstructed data are transferred via SINET3 with 10 Gbps bandwidth from Brookhaven National Laboratory (BNL) using GridFTP. The transferred data are first stored in the hierarchical storage management system (HPSS) prior to user analysis. Since the size of the data grows steadily year by year, the concentration of access requests to the data servers became a serious bottleneck. To eliminate this I/O-bound problem, 18 computing nodes with a total of 180 TB of local disks were introduced to store the data a priori. We added some setup to the batch job scheduler (LSF) so that users can specify the required data already distributed to the local disks. The locations of the data are obtained automatically from a database, and jobs are dispatched to the appropriate node which has the required data. To avoid multiple accesses to a local disk from several jobs on a node, lock files and access control lists are employed. As a result, each job can handle a local disk exclusively. Indeed, the total throughput improved drastically compared to the preexisting nodes in CCJ, and users can analyze about 150 TB of data within 9 hours. We report this successful job submission scheme and the features of the PC cluster.
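
    The data-oriented dispatch logic can be approximated by the sketch below, where an in-memory table stands in for the file-location database and a set of busy nodes mimics the per-disk lock files; the actual system uses LSF plus lock files and access control lists:

      locations = {                  # stand-in for the location database
          "run7.root": ["node03", "node11"],
          "run8.root": ["node05"],
      }
      busy = set()                   # nodes whose local disk is locked

      def dispatch(job):
          # Send the job to a node that already holds its input file
          # and whose local disk is free (lock-file exclusivity).
          for node in locations[job["input"]]:
              if node not in busy:
                  busy.add(node)     # acquire the per-disk 'lock file'
                  return node
          return None                # all replicas busy: requeue

      def finish(node):
          busy.discard(node)         # release the lock at job end

      n = dispatch({"input": "run7.root"})
      print("dispatched to", n)
      finish(n)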

  2. A real-space stochastic density matrix approach for density functional electronic structure.

    PubMed

    Beck, Thomas L

    2015-12-21

    The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful in a future of increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral, as opposed to the extensive matrix operations in traditional approaches.

  3. Hybrid data storage system in an HPC exascale environment

    DOEpatents

    Bent, John M.; Faibish, Sorin; Gupta, Uday K.; Tzelnic, Percy; Ting, Dennis P. J.

    2015-08-18

    A computer-executable method, system, and computer program product for managing I/O requests from a compute node in communication with a data storage system, including a first burst buffer node and a second burst buffer node, the computer-executable method, system, and computer program product comprising striping data on the first burst buffer node and the second burst buffer node, wherein a first portion of the data is communicated to the first burst buffer node and a second portion of the data is communicated to the second burst buffer node, processing the first portion of the data at the first burst buffer node, and processing the second portion of the data at the second burst buffer node.
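
    A minimal picture of striping one write across two burst buffer nodes; the round-robin policy and fixed stripe size below are assumptions made for illustration:

      def stripe_write(data, buffers, stripe=4):
          # Round-robin the stripes of one I/O request across the burst
          # buffer nodes; each node then processes its own portion.
          for i in range(0, len(data), stripe):
              buffers[(i // stripe) % len(buffers)].append(data[i:i + stripe])

      bb1, bb2 = [], []              # the two burst buffer nodes
      stripe_write(b"ABCDEFGHIJKLMNOP", [bb1, bb2])
      print(bb1)                     # [b'ABCD', b'IJKL']
      print(bb2)                     # [b'EFGH', b'MNOP']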

  4. Dispatching packets on a global combining network of a parallel computer

    DOEpatents

    Almasi, Gheorghe [Ardsley, NY; Archer, Charles J [Rochester, MN

    2011-07-19

    Methods, apparatus, and products are disclosed for dispatching packets on a global combining network of a parallel computer comprising a plurality of nodes connected for data communications using the network capable of performing collective operations and point to point operations that include: receiving, by an origin system messaging module on an origin node from an origin application messaging module on the origin node, a storage identifier and an operation identifier, the storage identifier specifying storage containing an application message for transmission to a target node, and the operation identifier specifying a message passing operation; packetizing, by the origin system messaging module, the application message into network packets for transmission to the target node, each network packet specifying the operation identifier and an operation type for the message passing operation specified by the operation identifier; and transmitting, by the origin system messaging module, the network packets to the target node.

  5. Dynamic resource allocation scheme for distributed heterogeneous computer systems

    NASA Technical Reports Server (NTRS)

    Liu, Howard T. (Inventor); Silvester, John A. (Inventor)

    1991-01-01

    This invention relates to resource allocation in computer systems, and more particularly, to a method and associated apparatus for shortening response time and improving the efficiency of a heterogeneous distributed networked computer system by reallocating the jobs queued up for busy nodes to idle, or less-busy, nodes. In accordance with the algorithm (SIDA for short), load-sharing is initiated by the server device in a manner such that extra overhead is not imposed on the system during heavily-loaded conditions. The algorithm employed in the present invention uses a dual-mode, server-initiated approach. Jobs are transferred from heavily burdened nodes (i.e., over a high threshold limit) to lightly burdened nodes at the initiation of the receiving node when: (1) a job finishes at a node which is burdened below a pre-established threshold level, or (2) a node is idle for a period of time as established by a wakeup timer at the node. The invention uses a combination of the local queue length and the local service rate ratio at each node as the workload indicator.
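
    A sketch of the server-initiated, dual-mode transfer rule. The workload indicator here is taken as queue length divided by service rate, and the thresholds are arbitrary; both are assumptions standing in for the patent's exact combination:

      HIGH, LOW = 5, 1               # queue-length thresholds (assumed)

      def workload(node):
          # Queue length scaled by the service-rate ratio: faster
          # nodes look proportionally less busy.
          return len(node["queue"]) / node["service_rate"]

      def pull_job(receiver, nodes):
          # Receiver-initiated transfer: a lightly loaded node (job
          # just finished, or wakeup timer fired) takes a job from the
          # most burdened node above the HIGH threshold.
          if workload(receiver) > LOW:
              return
          donor = max(nodes, key=workload)
          if len(donor["queue"]) > HIGH:
              receiver["queue"].append(donor["queue"].pop())

      a = {"queue": list(range(8)), "service_rate": 1.0}
      b = {"queue": [], "service_rate": 2.0}
      pull_job(b, [a, b])            # idle b wakes up and pulls from a
      print(len(a["queue"]), len(b["queue"]))  # 7 1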

  6. Applying network theory to animal movements to identify properties of landscape space use.

    PubMed

    Bastille-Rousseau, Guillaume; Douglas-Hamilton, Iain; Blake, Stephen; Northrup, Joseph M; Wittemyer, George

    2018-04-01

    Network (graph) theory is a popular analytical framework to characterize the structure and dynamics among discrete objects and is particularly effective at identifying critical hubs and patterns of connectivity. The identification of such attributes is a fundamental objective of animal movement research, yet network theory has rarely been applied directly to animal relocation data. We develop an approach that allows the analysis of movement data using network theory by defining occupied pixels as nodes and connections among these pixels as edges. We first quantify node-level (local) metrics and graph-level (system) metrics on simulated movement trajectories to assess the ability of these metrics to recover known properties in movement paths. We then apply our framework to empirical data from African elephants (Loxodonta africana), giant Galapagos tortoises (Chelonoidis spp.), and mule deer (Odocoileus hemionus). Our results indicate that certain node-level metrics, namely degree, weight, and betweenness, perform well in capturing local patterns of space use, such as the definition of core areas and paths used for inter-patch movement. These metrics were generally applicable across data sets, indicating their robustness to assumptions structuring analysis or strategies of movement. Other metrics capture local patterns effectively, but were sensitive to specified graph properties, indicating case-specific applications. Our analysis indicates that graph-level metrics are unlikely to outperform other approaches for the categorization of general movement strategies (central place foraging, migration, nomadism). By identifying critical nodes, our approach provides a robust quantitative framework to identify local properties of space use that can be used to evaluate the effect of the loss of specific nodes on range-wide connectivity. Our network approach is intuitive, and can be implemented across imperfectly sampled or large-scale data sets efficiently, providing a framework for conservationists to analyze movement data. Functions created for the analyses are available within the R package moveNT. © 2018 by the Ecological Society of America.
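
    The pixels-as-nodes construction can be reproduced in a few lines. The sketch below uses Python and networkx rather than the authors' R package moveNT, and the pixel size is an arbitrary assumption:

      import networkx as nx

      def trajectory_to_graph(points, pixel=1.0):
          # Discretize relocations into pixels (nodes) and add an edge
          # for each consecutive move, accumulating edge weights.
          g = nx.DiGraph()
          cells = [(int(x // pixel), int(y // pixel)) for x, y in points]
          for a, b in zip(cells, cells[1:]):
              w = g.edges[a, b]["weight"] + 1 if g.has_edge(a, b) else 1
              g.add_edge(a, b, weight=w)
          return g

      track = [(0.2, 0.3), (1.1, 0.4), (1.4, 1.2), (0.7, 0.9), (1.2, 0.1)]
      g = trajectory_to_graph(track)
      print(nx.betweenness_centrality(g))  # a node-level (local) metric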

  7. Method and apparatus for offloading compute resources to a flash co-processing appliance

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing -bung

    2015-10-13

    Solid-State Drive (SSD) burst buffer nodes are interposed into a parallel supercomputing cluster to enable fast burst checkpointing of cluster memory to or from nearby interconnected solid-state storage, with asynchronous migration between the burst buffer nodes and slower, more distant disk storage. The SSD nodes also perform tasks offloaded from the compute nodes or associated with the checkpoint data. For example, the data for the next job is preloaded on the SSD node and uploaded very quickly to the respective compute node just before the next job starts. During a job, the SSD nodes perform fast visualization and statistical analysis on the checkpoint data. The SSD nodes can also perform data reduction and encryption of the checkpoint data.

  8. 3D Kirchhoff depth migration algorithm: A new scalable approach for parallelization on multicore CPU based cluster

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Londhe, Ashutosh; Srivastava, Abhishek; Sirasala, Kirannmayi M.; Khonde, Kiran

    2017-03-01

    In this article, a new scalable 3D Kirchhoff depth migration algorithm is presented for state-of-the-art multicore CPU based clusters. Parallelization of 3D Kirchhoff depth migration is challenging due to its high demands on compute time, memory, storage and I/O, along with the need for their effective management. The most resource-intensive modules of the algorithm are traveltime calculation and migration summation, which exhibit an inherent trade-off between compute time and the other resources. The parallelization strategy of the algorithm largely depends on the storage of calculated traveltimes and the mechanism for feeding them to the migration process. The presented work is an extension of our previous work, wherein a 3D Kirchhoff depth migration application for multicore CPU based parallel systems had been developed. Recently, we have worked on improving the parallel performance of this application by re-designing the parallelization approach. The new algorithm can efficiently migrate both prestack and poststack 3D data. It exhibits flexibility for migrating large numbers of traces within the available node memory and with minimal requirements on storage, I/O and inter-node communication. The resulting application is tested using 3D Overthrust data on PARAM Yuva II, which is a Xeon E5-2670 based multicore CPU cluster with 16 cores/node and 64 GB shared memory. The parallel performance of the algorithm is studied through different numerical experiments, and the scalability results show striking improvement over the previous version. An impressive 49.05X speedup with 76.64% efficiency is achieved for 3D prestack data and a 32.00X speedup with 50.00% efficiency for 3D poststack data, using 64 nodes. The results also demonstrate the effectiveness and robustness of the improved algorithm, with high scalability and efficiency on a multicore CPU cluster.

  9. A universal computer control system for motors

    NASA Technical Reports Server (NTRS)

    Szakaly, Zoltan F. (Inventor)

    1991-01-01

    A control system for a multi-motor system such as a space telerobot, having a remote computational node and a local computational node interconnected with one another by a high speed data link, is described. A Universal Computer Control System (UCCS) for the telerobot is located at each node. Each node is provided with a multibus computer system which is characterized by a plurality of processors connected to a common bus, including at least one command processor. The command processor communicates over the bus with a plurality of joint controller cards. A plurality of direct current torque motors, of the type used in telerobot joints and telerobot hand-held controllers, are connected to the controller cards and respond to digital control signals from the command processor. Essential motor operating parameters are sensed by analog sensing circuits, and the sensed analog signals are converted to digital signals for storage at the controller cards, where such signals can be read during an address read/write cycle of the command processor.

  10. Flexible embedding of networks

    NASA Astrophysics Data System (ADS)

    Fernandez-Gracia, Juan; Buckee, Caroline; Onnela, Jukka-Pekka

    We introduce a model for embedding one network into another, focusing on the case where network A is much bigger than network B. Nodes from network A are assigned to the nodes in network B using an algorithm in which we control the extent of localization of node placement in network B with a single parameter. Starting from an unassigned node in network A, called the source node, we first map this node to a randomly chosen node in network B, called the target node. We then assign the neighbors of the source node to the neighborhood of the target node using a random walk based approach. To assign each neighbor of the source node to one of the nodes in network B, we perform a random walk starting from the target node with stopping probability α. We repeat this process until all nodes in network A have been mapped to the nodes of network B. The simplicity of the model allows us to calculate key quantities of interest in closed form. By varying the parameter α, we are able to produce embeddings from very local (α = 1) to very global (α → 0). We show how our calculations fit the simulated results, and we apply the model to study how social networks are embedded in geography and how the neurons of C. elegans are embedded in the surrounding volume.
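
    The placement rule is easy to state in code. A sketch under the stated model, with a toy triangle as network B; the adjacency structure and names are illustrative:

      import random

      def assign_neighbors(source_nbrs, target, adj_b, alpha):
          # Map each neighbour of the source node (network A) to a node
          # of network B via a random walk from the target node that
          # stops with probability alpha at each step:
          # alpha = 1 -> fully local placement, alpha -> 0 -> global.
          placement = {}
          for v in source_nbrs:
              node = target
              while random.random() > alpha:   # keep walking w.p. 1-alpha
                  node = random.choice(adj_b[node])
              placement[v] = node
          return placement

      adj_b = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # network B: a triangle
      print(assign_neighbors(["a1", "a2", "a3"],
                             target=0, adj_b=adj_b, alpha=0.5))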

  11. Energy-aware scheduling of surveillance in wireless multimedia sensor networks.

    PubMed

    Wang, Xue; Wang, Sheng; Ma, Junjie; Sun, Xinyao

    2010-01-01

    Wireless sensor networks involve a large number of sensor nodes with limited energy supply, which impacts the behavior of their application. In wireless multimedia sensor networks, sensor nodes are equipped with audio and visual information collection modules. Multimedia contents are ubiquitously retrieved in surveillance applications. To solve the energy problems during target surveillance with wireless multimedia sensor networks, an energy-aware sensor scheduling method is proposed in this paper. Sensor nodes which acquire acoustic signals are deployed randomly in the sensing fields. Target localization is based on the signal energy feature provided by multiple sensor nodes, employing particle swarm optimization (PSO). During the target surveillance procedure, sensor nodes are adaptively grouped in a totally distributed manner. Specifically, the target motion information is extracted by a forecasting algorithm based on the hidden Markov model (HMM). The forecasting results are utilized to awaken sensor nodes in the vicinity of the future target position. According to two properties, the signal energy feature and residual energy, the sensor nodes decide separately whether to participate in target detection, using a fuzzy control approach. Meanwhile, the local routing scheme for data transmission towards the observer is discussed. Experimental results demonstrate the efficiency of energy-aware scheduling of surveillance in wireless multimedia sensor networks, where significant energy savings are achieved by the sensor awakening approach and data transmission paths are calculated with low computational complexity.

  12. Solar micro-power system for self-powered wireless sensor nodes

    NASA Astrophysics Data System (ADS)

    He, Yongtai; Li, Yangqiu; Liu, Lihui; Wang, Lei

    2008-10-01

    In self-powered wireless sensor nodes, the efficiency for environmental energy harvesting, storage and management determines the lifetime and environmental adaptability of the sensor nodes. However, the method of improving output efficiency for traditional photovoltaic power generation is not suitable for a solar micro-power system due to the special requirements for its application. This paper presents a solar micro-power system designed for a solar self-powered wireless sensor node. The Maximum Power Point Tracking (MPPT) of solar cells and energy storage are realized by the hybrid energy storage structure and "window" control. Meanwhile, the mathematical model of energy harvesting, storing and management is formulated. In the novel system, the output conversion efficiency of solar cells is 12%.

  13. Sphincter-sparing local excision and hypofractionated radiation therapy for anorectal melanoma: a 20-year experience.

    PubMed

    Kelly, Patrick; Zagars, Gunar K; Cormier, Janice N; Ross, Merrick I; Guadagnolo, B Ashleigh

    2011-10-15

    Anorectal melanoma is a rare disease with a poor prognosis. Because survival is determined by distant failure, many centers have adopted sphincter-sparing excision for primary tumor control. However, this approach is associated with high rates of local failure (∼50%). In this study, the authors report their 20-year experience with sphincter-sparing excision combined with radiation therapy (RT) for the treatment of localized anorectal melanoma. The authors reviewed the records of 54 patients with localized anorectal melanoma who were treated at the University of Texas MD Anderson Cancer Center from 1989 to 2008. All patients underwent definitive local excision with or without sentinel lymph node biopsy or lymph node dissection. RT (25-36 grays in 5-6 fractions) was delivered to extended fields that targeted the primary site and draining pelvic/inguinal lymphatics in 39 patients and to limited fields that targeted only the primary site in 15 patients. The 5-year rates of local control (LC), lymph node control (NC), and sphincter preservation were 82%, 88%, and 96%, respectively. However, because of the high rate of distant metastasis, the overall survival (OS) rate at 5 years was only 30%. Although there were no significant differences in LC, NC, or OS based on RT field extent, patients who received extended-field RT had higher rates of lymphedema than patients who received limited-field RT. The current results indicated that combined sphincter-sparing local excision and RT is a well tolerated approach that provides effective LC for patients with anorectal melanoma. Inclusion of the inguinal lymph node basins in the RT fields did not improve outcomes and was associated with an increased risk of lymphedema. Copyright © 2011 American Cancer Society.

  14. A new adaptive mesh refinement strategy for numerically solving evolutionary PDE's

    NASA Astrophysics Data System (ADS)

    Burgarelli, Denise; Kischinhevsky, Mauricio; Biezuner, Rodney Josue

    2006-11-01

    A graph-based implementation of quadtree meshes for adaptive mesh refinement (AMR) in the finite volume solution of evolutionary partial differential equations is discussed. The technique displays a plug-in feature that allows replacement of a group of cells in any region of interest by another one with arbitrary refinement, with only local changes occurring in the data structure. The data structure is also specially designed to minimize the number of operations needed in the AMR. Implementation of the new scheme allows flexibility in the levels of refinement of adjacent regions. Moreover, storage requirements and computational cost compare competitively with mesh refinement schemes based on hierarchical trees. Low storage is achieved because only the child nodes are stored when a refinement takes place. These nodes become part of a graph structure, thus motivating the denomination autonomous leaves graph (ALG) for the new scheme. Neighbors can then be reached without accessing their parent nodes. Additionally, linear-system solvers based on the minimization of functionals can be easily employed. ALG was not conceived with any particular problem or geometry in mind and can thus be applied to the study of several phenomena. Some test problems are used to illustrate the effectiveness of the technique.
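
    A skeletal version of the plug-in refinement: the four children replace the parent in the leaves graph and inherit its neighbour links, so neighbours remain reachable without parent nodes. Linking every child to every former neighbour is a conservative simplification of ALG's adjacency bookkeeping:

      class Cell:
          def __init__(self, x, y, size):
              self.x, self.y, self.size = x, y, size
              self.neighbors = set()      # graph edges among leaves only

      def refine(cell):
          # Replace a leaf by its four children; only the children are
          # stored (the low-storage property), spliced into the graph.
          h = cell.size / 2
          kids = [Cell(cell.x + dx, cell.y + dy, h)
                  for dx in (0, h) for dy in (0, h)]
          for a in kids:                  # link siblings to each other
              for b in kids:
                  if a is not b:
                      a.neighbors.add(b)
          for n in cell.neighbors:        # splice into parent's place
              n.neighbors.discard(cell)
              for k in kids:
                  n.neighbors.add(k)
                  k.neighbors.add(n)
          return kids

      leaves = refine(Cell(0, 0, 1.0))
      print(len(leaves), [len(k.neighbors) for k in leaves])  # 4 [3,3,3,3]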

  15. Relative Localization in Wireless Sensor Networks for Measurement of Electric Fields under HVDC Transmission Lines

    PubMed Central

    Cui, Yong; Wang, Qiusheng; Yuan, Haiwen; Song, Xiao; Hu, Xuemin; Zhao, Luxing

    2015-01-01

    In the wireless sensor networks (WSNs) for electric field measurement system under the High-Voltage Direct Current (HVDC) transmission lines, it is necessary to obtain the electric field distribution with multiple sensors. The location information of each sensor is essential to the correct analysis of measurement results. Compared with the existing approach which gathers the location information by manually labelling sensors during deployment, the automatic localization can reduce the workload and improve the measurement efficiency. A novel and practical range-free localization algorithm for the localization of one-dimensional linear topology wireless networks in the electric field measurement system is presented. The algorithm utilizes unknown nodes' neighbor lists based on the Received Signal Strength Indicator (RSSI) values to determine the relative locations of nodes. The algorithm is able to handle the exceptional situation of the output permutation which can effectively improve the accuracy of localization. The performance of this algorithm under real circumstances has been evaluated through several experiments with different numbers of nodes and different node deployments in the China State Grid HVDC test base. Results show that the proposed algorithm achieves an accuracy of over 96% under different conditions. PMID:25658390
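
    One heuristic way to recover a 1D ordering from RSSI neighbour lists is sketched below. This is an illustration of the idea only, not the paper's algorithm, and it omits the exceptional-permutation handling the abstract mentions:

      def relative_order(rssi):
          # Take the node with the weakest overall neighbourhood signal
          # as one end of the line, then hop greedily to the
          # strongest-RSSI unvisited neighbour.
          start = min(rssi, key=lambda v: sum(rssi[v].values()))
          order, current = [start], start
          while len(order) < len(rssi):
              current = max((u for u in rssi[current] if u not in order),
                            key=lambda u: rssi[current][u])
              order.append(current)
          return order

      # illustrative RSSI (dBm) between four nodes on a line A-B-C-D
      rssi = {"A": {"B": -50, "C": -70, "D": -90},
              "B": {"A": -50, "C": -52, "D": -72},
              "C": {"A": -70, "B": -52, "D": -51},
              "D": {"A": -90, "B": -72, "C": -51}}
      print(relative_order(rssi))   # ['D', 'C', 'B', 'A'] (line order)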

  16. Relative localization in wireless sensor networks for measurement of electric fields under HVDC transmission lines.

    PubMed

    Cui, Yong; Wang, Qiusheng; Yuan, Haiwen; Song, Xiao; Hu, Xuemin; Zhao, Luxing

    2015-02-04

    In the wireless sensor networks (WSNs) for electric field measurement system under the High-Voltage Direct Current (HVDC) transmission lines, it is necessary to obtain the electric field distribution with multiple sensors. The location information of each sensor is essential to the correct analysis of measurement results. Compared with the existing approach which gathers the location information by manually labelling sensors during deployment, the automatic localization can reduce the workload and improve the measurement efficiency. A novel and practical range-free localization algorithm for the localization of one-dimensional linear topology wireless networks in the electric field measurement system is presented. The algorithm utilizes unknown nodes' neighbor lists based on the Received Signal Strength Indicator (RSSI) values to determine the relative locations of nodes. The algorithm is able to handle the exceptional situation of the output permutation which can effectively improve the accuracy of localization. The performance of this algorithm under real circumstances has been evaluated through several experiments with different numbers of nodes and different node deployments in the China State Grid HVDC test base. Results show that the proposed algorithm achieves an accuracy of over 96% under different conditions.

  17. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.

    PubMed

    Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha

    2017-04-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than network attached storage.
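
    The hierarchy-preserving row-key idea can be illustrated directly. The delimiter and zero-padding widths below are assumptions; the point is that lexicographic key order matches the project/subject/session/scan/slice hierarchy, so related rows collocate under HBase's sorted storage:

      def row_key(project, subject, session, scan, slice_idx):
          # Keys sharing a prefix sort together, so all rows of one
          # project/subject/session land in the same key range.
          return "{}|{}|{}|{:04d}|{:05d}".format(
              project, subject, session, scan, slice_idx)

      k = row_key("ProjA", "Subj012", "Sess01", 3, 42)
      print(k)   # ProjA|Subj012|Sess01|0003|00042
      # lexicographic order == hierarchy order, which range scans exploit
      assert row_key("ProjA", "Subj012", "Sess01", 3, 1) < k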

  18. [Technical points of laparoscopic splenic hilar lymph node dissection--The original intention of CLASS-04 research design].

    PubMed

    Huang, Changming; Lin, Mi

    2018-02-25

    According to Japanese gastric cancer treatment guidelines, the standard operation for locally advanced upper third gastric cancer is the total gastrectomy with D2 lymphadenectomy, which includes the dissection of the splenic hilar lymph nodes. With the development of minimally invasive ideas and surgical techniques, laparoscopic spleen-preserving splenic hilar lymph node dissection is gradually accepted. It needs high technical requirements and should be carried out by surgeons with rich experience of open operation and skilled laparoscopic techniques. Based on being familiar with the anatomy of the splenic hilum, we should choose a reasonable surgical approach and standardized operating procedure. A favorable left-sided approach is used to perform the laparoscopic spleen-preserving splenic hilar lymph node dissection in the Department of Gastric Surgery, Fujian Medical University Union Hospital. This means that the membrane of the pancreas is separated at the superior border of the pancreatic tail in order to reach the posterior pancreatic space, revealing the end of the splenic vessels' trunk. The short gastric vessels are severed at their roots. This enables complete removal of the splenic hilar lymph nodes and stomach. At the same time, based on the rich clinical practice of laparoscopic gastric cancer surgery, we have summarized an effective operating procedure called Huang's three-step maneuver. The first step is the dissection of the lymph nodes in the inferior pole region of the spleen. The second step is the dissection of the lymph nodes in the trunk of the splenic artery region. The third step is the dissection of the lymph nodes in the superior pole region of the spleen. It simplifies the procedure, reduces the difficulty of the operation, improves the efficiency of the operation, and ensures the safety of the operation. To further explore the safety of laparoscopic spleen-preserving splenic hilar lymph node dissection for locally advanced upper third gastric cancer, in 2016, we launched a multicenter phase II trial of the safety and feasibility of laparoscopic spleen-preserving No.10 lymph node dissection for locally advanced upper third gastric cancer (CLASS-04). Through this multicenter prospective study, we try to provide scientific theoretical basis and clinical experience for the promotion and application of the operation, and also to standardize and popularize the laparoscopic spleen-preserving splenic hilar lymph node dissection to promote its development. At present, the enrollment of the study has been completed, and the preliminary results also suggested that laparoscopic spleen-preserving No.10 lymph node dissection for locally advanced upper third gastric cancer was safe and feasible. We believe that with the improvement of the standardized operation training system, the progress of laparoscopic technology and the promotion of Huang's three-step maneuver, laparoscopic spleen-preserving splenic hilar lymph node dissection will also become one of the standard treatments for locally advanced upper third gastric cancer.

  19. GPS-Free Localization Algorithm for Wireless Sensor Networks

    PubMed Central

    Wang, Lei; Xu, Qingzheng

    2010-01-01

    Localization is one of the most fundamental problems in wireless sensor networks, since the locations of the sensor nodes are critical to both network operations and most application-level tasks. A GPS-free localization scheme for wireless sensor networks is presented in this paper. First, we develop a standardized clustering-based approach for local coordinate system formation, wherein a multiplication factor is introduced to regulate the number of master and slave nodes and the degree of connectivity among master nodes. Second, using homogeneous coordinates, we derive a transformation matrix between two Cartesian coordinate systems to efficiently merge them into a global coordinate system and effectively overcome the flip ambiguity problem. The algorithm operates asynchronously without a centralized controller, and does not require that the locations of the sensors be known a priori. A set of parameter-setting guidelines for the proposed algorithm is derived based on a probability model, and the energy requirements are also investigated. A simulation analysis of a specific numerical example is conducted to validate the mathematical analysis. We also compare the performance of the proposed algorithm under a variety of multiplication factor, node density, and node communication radius scenarios. Experiments show that our algorithm outperforms existing mechanisms in terms of accuracy and convergence time. PMID:22219694
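
    As a sketch of the coordinate-merging step, the snippet below estimates the rigid transform between two local coordinate systems from nodes known in both, using the standard SVD-based (Kabsch) construction. This is not necessarily the authors' exact homogeneous-coordinate derivation; the determinant sign correction is one common way to avoid the flip ambiguity mentioned above:

    ```python
    import numpy as np

    def merge_transform(P, Q):
        """Estimate R, t with Q ~= P @ R.T + t from matched points P -> Q."""
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # force a rotation, not a reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        return R, q_mean - R @ p_mean

    # Three nodes observed in both local coordinate systems:
    P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    theta = np.pi / 3
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    Q = P @ R_true.T + np.array([2.0, -1.0])

    R, t = merge_transform(P, Q)
    print(np.allclose(P @ R.T + t, Q))  # True: the two systems merge consistently
    ```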

  20. Epidemic spreading in metapopulation networks with heterogeneous infection rates

    NASA Astrophysics Data System (ADS)

    Gong, Yong-Wang; Song, Yu-Rong; Jiang, Guo-Ping

    2014-12-01

    In this paper, we study epidemic spreading in metapopulation networks, wherein each node represents a subpopulation symbolizing a city or an urban area and the links connecting nodes correspond to human traveling routes among cities. Unlike previous studies, we introduce a heterogeneous infection rate to characterize the effect of nodes' local properties, such as population density, individual health habits, and social conditions, on epidemic infectivity. By means of a mean-field approach and Monte Carlo simulations, we explore how the heterogeneity of the infection rate affects the epidemic dynamics, and find that large fluctuations of the infection rate have a profound impact on the epidemic threshold as well as on the temporal behavior of the prevalence above the epidemic threshold. This work can refine our understanding of epidemic spreading in metapopulation networks with the effect of nodes' local properties.

  1. Supplying the power requirements to a sensor network using radio frequency power transfer.

    PubMed

    Percy, Steven; Knight, Chris; Cooray, Francis; Smart, Ken

    2012-01-01

    Wireless power transmission is a method of supplying power to small electronic devices when there is no wired connection. One way to increase the range of these systems is to use a directional transmitting antenna; the problem with this approach is that power can be transmitted only through a narrow beam directly ahead, requiring the transmitter to always be aligned with the sensor node position. The work outlined in this article describes the design and testing of an autonomous radio frequency power transfer system that is capable of rotating the base transmitter to track the position of sensor nodes and transferring power to each sensor node. The system's base station monitors the nodes' energy levels and forms a charge queue to plan the charging order and maintain the energy levels of the nodes. Results show a radio frequency harvesting circuit with a measured S11 value of -31.5 dB and a conversion efficiency of 39.1%. Simulation and experimentation verified the level of power transfer and efficiency. The results of this work show a small network of three nodes with different storage types powered by a central base node.
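
    The charge queue lends itself to a brief sketch. The scheduling policy and threshold below are invented for illustration (the article does not publish its queueing code); the point is simply that the base station orders charging by reported energy level:

    ```python
    import heapq

    def plan_charging(reports, low_water=20.0):
        """reports: node id -> battery level (%). Charge the neediest first."""
        queue = [(level, node) for node, level in reports.items() if level < low_water]
        heapq.heapify(queue)                 # min-heap: lowest energy at the top
        order = []
        while queue:
            _, node = heapq.heappop(queue)   # rotate the transmitter, then charge
            order.append(node)
        return order

    print(plan_charging({"n1": 35.0, "n2": 12.5, "n3": 7.0}))  # ['n3', 'n2']
    ```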

  2. Node 3 Relocation Environmental Control and Life Support System Modification Kit Verification and Updated Status

    NASA Technical Reports Server (NTRS)

    Williams, David E.; Spector, Lawrence N.

    2010-01-01

    Node 1 (Unity) flew to International Space Station (ISS) on Flight 2A. Node 1 was the first module of the United States On-Orbit Segment (USOS) launched to ISS. The Node 1 ISS Environmental Control and Life Support (ECLS) design featured limited ECLS capability. The main purpose of Node 1 was to provide internal storage by providing four stowage rack locations within the module and to allow docking of multiple modules and a truss segment to it. The ECLS subsystems inside Node 1 were routed through the element prior to launch to allow for easy integration of the attached future elements, particularly the Habitation Module which was planned to be located at the nadir docking port of Node 1. After Node 1 was on-orbit, the Program decided not to launch the Habitation Module and instead to replace it with Node 3 (Tranquility). In 2007, the Program became concerned with a potential Russian docking port approach issue for the Russian FGB nadir docking port after Node 3 is attached to Node 1. To resolve this concern, the Program decided to relocate Node 3 from Node 1 nadir to Node 1 port. To support the movement of Node 3, the Program decided to build a modification kit for Node 1, an on-orbit feedthrough leak test device, and new vestibule jumpers to support the ECLS part of the relocation. This paper provides a design overview of the modification kit for Node 1, a summary of the Node 1 ECLS re-verification to support the Node 3 relocation from Node 1 nadir to Node 1 port, and a status of the ECLS modification kit installation into Node 1.

  3. In-network Coding for Resilient Sensor Data Storage and Efficient Data Mule Collection

    NASA Astrophysics Data System (ADS)

    Albano, Michele; Gao, Jie

    In a sensor network of n nodes in which k have sensed interesting data, we perform in-network erasure coding such that each node stores a linear combination of all the network data with random coefficients. This scheme greatly improves data resilience to node failures: as long as there are k nodes that survive an attack, all the data produced in the sensor network can be recovered with high probability. The in-network coding storage scheme also improves the data collection rate by mobile mules and allows for easy scheduling of data mules.
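
    The recovery guarantee can be illustrated in a few lines. Practical schemes operate over finite fields such as GF(2^8); the sketch below uses real coefficients only to keep the example short, and the k-of-n recovery works the same way:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    k, n = 4, 10
    data = rng.integers(0, 100, size=k).astype(float)  # readings from the k sources

    coeffs = rng.normal(size=(n, k))   # each node draws its own random coefficients
    stored = coeffs @ data             # node i keeps (coeffs[i], stored[i])

    survivors = [1, 4, 6, 9]           # any k nodes that survive an attack
    recovered = np.linalg.solve(coeffs[survivors], stored[survivors])
    print(np.allclose(recovered, data))  # True (coefficient matrix invertible w.h.p.)
    ```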

  4. Node 3 Relocation Environmental Control and Life Support System Modification Kit Verification and Updated Status

    NASA Technical Reports Server (NTRS)

    Williams, David E.; Spector, Lawrence N.

    2009-01-01

    Node 1 (Unity) flew to International Space Station (ISS) on Flight 2A. Node 1 was the first module of the United States On-Orbit Segment (USOS) launched to ISS. The Node 1 ISS Environmental Control and Life Support (ECLS) design featured limited ECLS capability. The main purpose of Node 1 was to provide internal storage by providing four stowage rack locations within the module and to allow docking of multiple modules and a truss segment to it. The ECLS subsystems inside Node 1 were routed through the element prior to launch to allow for easy integration of the attached future elements, particularly the Habitation Module which was planned to be located at the nadir docking port of Node 1. After Node 1 was on-orbit, the Program decided not to launch the Habitation Module and instead, to replace it with Node 3 (Tranquility). In 2007, the Program became concerned with a potential Russian docking port approach issue for the Russian FGB nadir docking port after Node 3 is attached to Node 1. To solve this concern the Program decided to relocate Node 3 from Node 1 nadir to Node 1 port. To support the movement of Node 3 the Program decided to build a modification kit for Node 1, an on-orbit feedthrough leak test device, and new vestibule jumpers to support the ECLS part of the relocation. This paper provides a design overview of the modification kit, a summary of the Node 1 ECLS re-verification to support the Node 3 relocation from Node 1 nadir to Node 1 port, and a status of the ECLS modification kit installation into Node 1.

  5. Use of edge-based finite elements for solving three dimensional scattering problems

    NASA Technical Reports Server (NTRS)

    Chatterjee, A.; Jin, J. M.; Volakis, John L.

    1991-01-01

    Edge-based finite elements are free from the drawbacks associated with node-based vectorial finite elements and are, therefore, ideal for solving 3-D scattering problems. The finite element discretization using edge elements is checked by solving for the resonant frequencies of a closed, inhomogeneously filled metallic cavity. Great improvements in accuracy are observed when compared to the classical node-based approach, with no penalty in terms of computational time and with the expected absence of spurious modes. A performance comparison between edge-based tetrahedra and rectangular brick elements is carried out, and tetrahedral elements are found to be more accurate than rectangular bricks for a given storage requirement. A detailed formulation for the scattering problem with various approaches for terminating the finite element mesh is also presented.

  6. Feature Geo Analytics and Big Data Processing: Hybrid Approaches for Earth Science and Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.

    2016-12-01

    Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships, or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster with 14 GB of RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.

  7. Distributed Transforms for Efficient Data Gathering in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Ortega, Antonio (Inventor); Shen, Godwin (Inventor); Narang, Sunil K. (Inventor); Perez-Trufero, Javier (Inventor)

    2014-01-01

    Devices, systems, and techniques for data-collecting networks such as wireless sensor networks are disclosed. A described technique includes detecting one or more remote nodes included in the wireless sensor network using a local power level that controls a radio range of the local node. The technique includes transmitting a local outdegree. The local outdegree can be based on a quantity of the one or more remote nodes. The technique includes receiving one or more remote outdegrees from the one or more remote nodes. The technique includes determining a local node type of the local node based on detecting a node type of the one or more remote nodes, using the one or more remote outdegrees, and using the local outdegree. The technique includes adjusting characteristics, including an energy usage characteristic and a data compression characteristic, of the wireless sensor network by selectively modifying the local power level and selectively changing the local node type.

  8. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.
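
    The flavor of such models can be conveyed with a deliberately simplified, hypothetical sketch (the paper's actual wall-clock and resource-time models are more detailed): under NFS every byte crosses the shared link, while a data-local framework only moves the non-local fraction, so short jobs over large data become transfer-bound in one case and compute-bound in the other:

    ```python
    def wall_clock_nfs(n_jobs, gb_per_job, t_proc_s, cores, net_gb_per_s=1.25):
        """All input crosses the shared NFS link (assumed bandwidth in GB/s)."""
        transfer = n_jobs * gb_per_job / net_gb_per_s   # serialized on the link
        compute = n_jobs * t_proc_s / cores
        return max(transfer, compute)

    def wall_clock_datalocal(n_jobs, gb_per_job, t_proc_s, cores,
                             locality=0.9, net_gb_per_s=1.25):
        """Only the non-local fraction of reads crosses the network."""
        transfer = (1 - locality) * n_jobs * gb_per_job / net_gb_per_s
        compute = n_jobs * t_proc_s / cores
        return max(transfer, compute)

    # "Short" jobs on "large" data: NFS is transfer-bound, data-local is not.
    print(wall_clock_nfs(10000, 0.5, t_proc_s=30, cores=209))        # 4000.0 s
    print(wall_clock_datalocal(10000, 0.5, t_proc_s=30, cores=209))  # ~1435.4 s
    ```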

  9. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  10. Evaluation of the matrix exponential for use in ground-water-flow and solute-transport simulations; theoretical framework

    USGS Publications Warehouse

    Umari, A.M.; Gorelick, S.M.

    1986-01-01

    It is possible to obtain analytic solutions to the groundwater flow and solute transport equations if space variables are discretized but time is left continuous. From these solutions, hydraulic head and concentration fields for any future time can be obtained without 'marching' through intermediate time steps. This analytical approach involves matrix exponentiation and is referred to as the Matrix Exponential Time Advancement (META) method. Two algorithms are presented for the META method, one for symmetric and the other for non-symmetric exponent matrices. A numerical accuracy indicator, referred to as the matrix condition number, was defined and used to determine the maximum number of significant figures that may be lost in the META method computations. The relative computational and storage requirements of the META method with respect to the time-marching method increase with the number of nodes in the discretized problem. The potentially greater accuracy of the META method, and the associated greater reliability through use of the matrix condition number, have to be weighed against these increased relative computational and storage requirements as the number of nodes becomes large. For a particular number of nodes, the META method may be computationally more efficient than the time-marching method, depending on the size of the time steps used in the latter. A numerical example illustrates application of the META method to a sample ground-water-flow problem. (Author's abstract)
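
    The core of the META method fits in a short sketch: once space is discretized, dh/dt = A h has the closed-form solution h(t) = e^{At} h(0), so the head field at any future time is computed directly. The toy operator below is for illustration only, not the report's test problem:

    ```python
    import numpy as np
    from scipy.linalg import expm

    n = 5
    A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # diffusion-like operator
    h0 = np.ones(n)

    t = 3.7
    h_meta = expm(A * t) @ h0        # jump straight to time t, no intermediate steps

    dt, h = 1e-4, h0.copy()          # explicit time marching, for comparison
    for _ in range(int(round(t / dt))):
        h = h + dt * (A @ h)

    print(np.max(np.abs(h_meta - h)))  # small, and shrinks further as dt -> 0
    ```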

  11. Network Coding Opportunities for Wireless Grids Formed by Mobile Devices

    NASA Astrophysics Data System (ADS)

    Nielsen, Karsten Fyhn; Madsen, Tatiana K.; Fitzek, Frank H. P.

    Wireless grids have potential in sharing communication, computational, and storage resources, making these networks more powerful, more robust, and less cost-intensive. However, to enjoy the benefits of cooperative resource sharing, a number of issues should be addressed, and the cost of the wireless link should be taken into account. We focus on the question of how nodes can efficiently communicate and distribute data in a wireless grid. We show the potential of a network coding approach when nodes have the possibility to combine packets, thus increasing the amount of information per transmission. Our implementation demonstrates the feasibility of network coding for wireless grids formed by mobile devices.
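
    The per-transmission gain from combining packets can be shown with the classic XOR example (a minimal sketch, not the authors' implementation): a node holding packets from A and B broadcasts their XOR once, and each neighbor that already has one packet recovers the other:

    ```python
    def xor_packets(p1: bytes, p2: bytes) -> bytes:
        """Combine two equal-length packets into one coded transmission."""
        return bytes(a ^ b for a, b in zip(p1, p2))

    pkt_a = b"data from node A"
    pkt_b = b"data from node B"

    coded = xor_packets(pkt_a, pkt_b)          # one broadcast instead of two
    print(xor_packets(coded, pkt_a) == pkt_b)  # True: A's holder decodes B
    print(xor_packets(coded, pkt_b) == pkt_a)  # True: B's holder decodes A
    ```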

  12. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.

    2014-03-01

    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  13. Cooperative Search and Rescue with Artificial Fishes Based on Fish-Swarm Algorithm for Underwater Wireless Sensor Networks

    PubMed Central

    Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua

    2014-01-01

    This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The movement of the mobile nodes in the mission space resembles the behavior of a fish swarm in water, so we treat each mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm based algorithm is designed that requires only local information at each fish node and maximizes the joint detection probability of distress signals. The formation of the nodes is also considered in the searching control approach and is optimized by the fish-swarm algorithm. Simulation results cover two schemes, preset routes and random walks, and show that the control scheme is adaptive and effective. PMID:24741341

  14. Cooperative search and rescue with artificial fishes based on fish-swarm algorithm for underwater wireless sensor networks.

    PubMed

    Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua

    2014-01-01

    This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The movement of the mobile nodes in the mission space resembles the behavior of a fish swarm in water, so we treat each mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm based algorithm is designed that requires only local information at each fish node and maximizes the joint detection probability of distress signals. The formation of the nodes is also considered in the searching control approach and is optimized by the fish-swarm algorithm. Simulation results cover two schemes, preset routes and random walks, and show that the control scheme is adaptive and effective.

  15. Design and Training of Limited-Interconnect Architectures

    DTIC Science & Technology

    1991-07-16

    and signal processing. Neuromorphic (brain-like) models allow an alternative for achieving real-time operation for such tasks, while having a ... compact and robust architecture. Neuromorphic models consist of interconnections of simple computational nodes. In this approach, each node computes a ... operational performance. II. Research Objectives. The research objectives were: 1. Development of on-chip local training rules specifically designed for

  16. Decentralized semi-active damping of free structural vibrations by means of structural nodes with an on/off ability to transmit moments

    NASA Astrophysics Data System (ADS)

    Poplawski, Blazej; Mikułowski, Grzegorz; Mróz, Arkadiusz; Jankowski, Łukasz

    2018-02-01

    This paper proposes, tests numerically, and verifies experimentally a decentralized control algorithm with local feedback for semi-active mitigation of free vibrations in frame structures. The algorithm aims at transferring the vibration energy of low-order, lightly damped structural modes into high-frequency modes of vibration, where it is quickly damped by natural mechanisms of material damping. Such an approach to vibration mitigation, known as the prestress-accumulation release (PAR) strategy, has previously been applied only in global control schemes to the fundamental vibration mode of a cantilever beam. In contrast, the decentralization and local feedback allow the approach proposed here to be applied to more complex frame structures and vibration patterns, where global control ceases to be intuitively obvious. The actuators (truss-frame nodes with a controllable ability to transmit moments) are essentially unblockable hinges that become unblocked only for very short time periods in order to trigger local modal transfer of energy. The paper proposes a computationally simple model of the controllable nodes, specifies the control performance measure, yields basic characteristics of the optimum control, proposes the control algorithm, and then tests it in numerical and experimental examples.

  17. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    PubMed

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
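
    The cost/benefit reasoning can be sketched with hypothetical figures (the paper's actual formulae and AWS prices are not reproduced here): cloud execution shortens wall-clock time by roughly the node count, while the dollar cost scales with node-hours, giving a break-even job size:

    ```python
    import math

    def local_serial_hours(n_jobs, hours_per_job):
        return n_jobs * hours_per_job

    def cloud_hours(n_jobs, hours_per_job, n_nodes, overhead_h=0.2):
        # overhead_h: cluster spin-up and S3 staging (assumed figure)
        return overhead_h + math.ceil(n_jobs / n_nodes) * hours_per_job

    def cloud_cost_usd(wall_h, n_nodes, usd_per_node_hour=0.10):
        return wall_h * n_nodes * usd_per_node_hour

    n_jobs, hpj, nodes = 500, 0.5, 50
    t_cloud = cloud_hours(n_jobs, hpj, nodes)
    print(local_serial_hours(n_jobs, hpj))          # 250.0 hours serially
    print(t_cloud, cloud_cost_usd(t_cloud, nodes))  # 5.2 hours, 26.0 USD
    ```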

  18. Performance management of high performance computing for medical image processing in Amazon Web Services

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  19. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services

    PubMed Central

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-01-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline. PMID:27127335

  20. Storing files in a parallel computing system based on user or application specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.

    2016-03-29

    Techniques are provided for storing files in a parallel computing system based on a user specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
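
    A toy rendering of the interface makes the idea concrete. The spec format and names below are invented (the patent does not prescribe JSON-like dictionaries or glob patterns); the point is that the application declares placement and a daemon routes each file to a tier:

    ```python
    from fnmatch import fnmatch

    spec = {
        "checkpoint/*.chk": {"tier": "flash", "replicas": 1},
        "results/*.h5":     {"tier": "disk",  "replicas": 2},
    }

    def place(path, spec):
        """Return the (tier, replicas) the specification assigns to a file."""
        for pattern, rule in spec.items():
            if fnmatch(path, pattern):
                return rule["tier"], rule["replicas"]
        return "disk", 1   # default when no rule matches

    print(place("checkpoint/step100.chk", spec))  # ('flash', 1)
    print(place("results/run7.h5", spec))         # ('disk', 2)
    ```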

  1. Community detection using preference networks

    NASA Astrophysics Data System (ADS)

    Tasgin, Mursel; Bingol, Haluk O.

    2018-04-01

    Community detection is the task of identifying clusters or groups of nodes in a network, where nodes within the same group are more connected with each other than with nodes in different groups. It has practical uses in identifying similar functions or roles of nodes in many biological, social, and computer networks. With the availability of very large networks in recent years, the performance and scalability of community detection algorithms become crucial: if the time complexity of an algorithm is high, it cannot run on large networks. In this paper, we propose a new community detection algorithm, which takes a local approach and is able to run on large networks. The method is simple and effective: given a network, the algorithm constructs a preference network of nodes in which each node has a single outgoing edge pointing to the node it prefers to be in the same community with. In such a preference network, each connected component is a community. Selection of the preferred node is performed using similarity-based metrics. We use two alternatives for this purpose, both computable within the 1-neighborhood of a node: the number of common neighbors of the selector node and its neighbors, and the spread capability of neighbors around the selector node, which is calculated by the gossip algorithm of Lind et al. Our algorithm is tested on both computer-generated LFR networks and real-life networks with ground-truth community structure. It identifies communities accurately and quickly, and it is local, scalable, and suitable for distributed execution on large networks.
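
    The first preference metric admits a compact sketch: each node points at the neighbor with which it shares the most common neighbors, and the connected components of the resulting preference network are the communities (tie-breaking here is arbitrary, and the gossip-based second metric is omitted):

    ```python
    from collections import defaultdict

    def communities(adj):
        """adj: node -> set of neighbors (undirected). Returns node groups."""
        pref = {u: max(nbrs, key=lambda v: len(adj[u] & adj[v]))
                for u, nbrs in adj.items()}        # one outgoing edge per node
        parent = {u: u for u in adj}               # union-find over preference edges
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in pref.items():
            parent[find(u)] = find(v)
        groups = defaultdict(set)
        for u in adj:
            groups[find(u)].add(u)
        return list(groups.values())

    # Two triangles joined by a single bridge edge form two communities:
    adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4},
           4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
    print(communities(adj))   # two communities: {1, 2, 3} and {4, 5, 6}
    ```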

  2. Implementation of bipartite or remote unitary gates with repeater nodes

    NASA Astrophysics Data System (ADS)

    Yu, Li; Nemoto, Kae

    2016-08-01

    We propose some protocols to implement various classes of bipartite unitary operations on two remote parties with the help of repeater nodes in-between. We also present a protocol to implement a single-qubit unitary with parameters determined by a remote party with the help of up to three repeater nodes. It is assumed that the neighboring nodes are connected by noisy photonic channels, and the local gates can be performed quite accurately, while the decoherence of memories is significant. A unitary is often a part of a larger computation or communication task in a quantum network, and to reduce the amount of decoherence in other systems of the network, we focus on the goal of saving the total time for implementing a unitary including the time for entanglement preparation. We review some previously studied protocols that implement bipartite unitaries using local operations and classical communication and prior shared entanglement, and apply them to the situation with repeater nodes without prior entanglement. We find that the protocols using piecewise entanglement between neighboring nodes often require less total time compared to preparing entanglement between the two end nodes first and then performing the previously known protocols. For a generic bipartite unitary, as the number of repeater nodes increases, the total time could approach the time cost for direct signal transfer from one end node to the other. We also prove some lower bounds of the total time when there are a small number of repeater nodes. The application to position-based cryptography is discussed.

  3. Efficiently sphere-decodable physical layer transmission schemes for wireless storage networks

    NASA Astrophysics Data System (ADS)

    Lu, Hsiao-Feng Francis; Barreal, Amaro; Karpuk, David; Hollanti, Camilla

    2016-12-01

    Three transmission schemes over a new type of multiple-access channel (MAC) model with inter-source communication links are proposed and investigated in this paper. This new channel model is well motivated by, e.g., wireless distributed storage networks, where communication to repair a lost node takes place from helper nodes to a repairing node over a wireless channel. Since in many wireless networks nodes can come and go in an arbitrary manner, there must be an inherent capability of inter-node communication between every pair of nodes. Assuming that communication is possible between every pair of helper nodes, the newly proposed schemes are based on various smart time-sharing and relaying strategies. In other words, certain helper nodes will be regarded as relays, thereby converting the conventional uncooperative multiple-access channel to a multiple-access relay channel (MARC). The diversity-multiplexing gain tradeoff (DMT) of the system together with efficient sphere-decodability and low structural complexity in terms of the number of antennas required at each end is used as the main design objectives. While the optimal DMT for the new channel model is fully open, it is shown that the proposed schemes outperform the DMT of the simple time-sharing protocol and, in some cases, even the optimal uncooperative MAC DMT. While using a wireless distributed storage network as a motivating example throughout the paper, the MAC transmission techniques proposed here are completely general and as such applicable to any MAC communication with inter-source communication links.

  4. [Sentinel node in melanoma and breast cancer. Current considerations].

    PubMed

    Vidal-Sicart, S; Vilalta Solsona, A; Alonso Vargas, M I

    2015-01-01

    The main objectives of sentinel node (SN) biopsy are to avoid unnecessary lymphadenectomies and to identify the 20-25% of patients with occult regional metastatic involvement. This technique reduces the morbidity associated with lymphadenectomy and increases the identification rate of occult lymphatic metastases by offering the pathologist the lymph node or nodes with the highest probability of containing metastatic cells. Pre-surgical lymphoscintigraphy is considered a "road map" to guide the surgeon towards the sentinel nodes and to localize unpredictable lymphatic drainage patterns. The advantages of SPECT/CT include a better SN detection rate than planar images, the ability to detect SNs in studies that are difficult to interpret, better SN depiction, especially in sites close to the injection site, and better anatomic localization. These advantages may result in a change in the patient's clinical management in both melanoma and breast cancer. Correct SN evaluation by pathology enables stratification of the tumoral load, with further prognostic implications. The use of intraoperative imaging devices allows the surgeon a better surgical approach and precise SN localization. Several studies report the added value of such devices for excising additional sentinel nodes and for complete monitoring of the whole procedure. New techniques using fluorescent or hybrid tracers are currently being developed. Copyright © 2014 Elsevier España, S.L.U. and SEMNIM. All rights reserved.

  5. Semantic Visualization of Wireless Sensor Networks for Elderly Monitoring

    NASA Astrophysics Data System (ADS)

    Stocklöw, Carsten; Kamieth, Felix

    In the area of Ambient Intelligence, Wireless Sensor Networks are commonly used for user monitoring purposes such as health monitoring and user localization. Existing work on visualization of wireless sensor networks focuses mainly on displaying individual nodes and logical, graph-based topologies. This way, the relation to the real-world deployment is lost. This paper presents a novel approach to the visualization of wireless sensor networks and interaction with complex services on the nodes. The environment is realized as a 3D model, and multiple nodes that are worn by a single individual are grouped together to provide an intuitive interface for end users. We describe application examples and show that our approach allows easier access to network information and functionality by comparing it with existing solutions.

  6. A distributed transmit beamforming synchronization strategy for multi-element radar systems

    NASA Astrophysics Data System (ADS)

    Xiao, Manlin; Li, Xingwen; Xu, Jikang

    2017-02-01

    Distributed transmit beamforming has recently been discussed as an energy-effective technique in wireless communication systems. Common to various techniques is that the destination node transmits a beacon signal or feedback to assist the source nodes in synchronizing their signals. However, this approach is not appropriate for a radar system, since the destination is a non-cooperative target at an unknown location. In this paper, we propose a novel synchronization strategy for a distributed multiple-element beamforming radar system. Source nodes estimate parameters of beacon signals transmitted by the other nodes to obtain their local synchronization information. The channel information of the phase propagation delay is conveyed to the nodes via the reflected beacon signals as well. Next, each node generates appropriate parameters to form a beamforming signal at the target. The transmit beamforming signals of all nodes combine coherently at the target, compensating for the different propagation delays. We analyse the influence of the local oscillator accuracy and the parameter estimation errors on the performance of the proposed synchronization scheme. The results of numerical simulations illustrate that this synchronization scheme is effective in enabling transmit beamforming in a distributed multi-element radar system.

  7. Adaptive Peer Sampling with Newscast

    NASA Astrophysics Data System (ADS)

    Tölgyesi, Norbert; Jelasity, Márk

    The peer sampling service is a middleware service that provides random samples from a large decentralized network to support gossip-based applications such as multicast, data aggregation, and overlay topology management. Lightweight gossip-based implementations of the peer sampling service have been shown to provide good-quality random sampling while also being extremely robust to many failure scenarios, including node churn and catastrophic failure. We identify two problems with these approaches. The first problem is related to message drop failures: if a node experiences a higher-than-average message drop rate, then the probability of sampling this node in the network will decrease. The second problem is that the application layer at different nodes might request random samples at very different rates, which can result in very poor random sampling, especially at nodes with high request rates. We propose solutions for both problems. We focus on Newscast, a robust implementation of the peer sampling service. Our solution is based on simple extensions of the protocol and an adaptive self-control mechanism for its parameters: without involving failure detectors, nodes passively monitor local protocol events and use them as feedback for a local control loop that self-tunes the protocol parameters. The proposed solution is evaluated by simulation experiments.

  8. Distributed adaptive diagnosis of sensor faults using structural response data

    NASA Astrophysics Data System (ADS)

    Dragos, Kosmas; Smarsly, Kay

    2016-10-01

    The reliability and consistency of wireless structural health monitoring (SHM) systems can be compromised by sensor faults, leading to miscalibrations, corrupted data, or even data loss. Several research approaches towards fault diagnosis, referred to as ‘analytical redundancy’, have been proposed that analyze the correlations between different sensor outputs. In wireless SHM, most analytical redundancy approaches require centralized data storage on a server for data analysis, while other approaches exploit the on-board computing capabilities of wireless sensor nodes, analyzing the raw sensor data directly on board. However, using raw sensor data poses an operational constraint due to the limited power resources of wireless sensor nodes. In this paper, a new distributed autonomous approach towards sensor fault diagnosis based on processed structural response data is presented. The inherent correlations among Fourier amplitudes of acceleration response data, at peaks corresponding to the eigenfrequencies of the structure, are used for diagnosis of abnormal sensor outputs at a given structural condition. Representing an entirely data-driven analytical redundancy approach that does not require any a priori knowledge of the monitored structure or of the SHM system, artificial neural networks (ANN) are embedded into the sensor nodes enabling cooperative fault diagnosis in a fully decentralized manner. The distributed analytical redundancy approach is implemented into a wireless SHM system and validated in laboratory experiments, demonstrating the ability of wireless sensor nodes to self-diagnose sensor faults accurately and efficiently with minimal data traffic. Besides enabling distributed autonomous fault diagnosis, the embedded ANNs are able to adapt to the actual condition of the structure, thus ensuring accurate and efficient fault diagnosis even in case of structural changes.
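
    The analytical-redundancy principle can be sketched with a linear model standing in for the embedded ANNs (a simplification; the paper trains neural networks on board): a node learns to predict its own Fourier peak amplitude from its neighbors' amplitudes and flags a fault when the residual exceeds a threshold:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    gains = np.array([0.9, 1.1, 1.3, 0.7])       # toy mode-shape ordinates
    modal = rng.uniform(1.0, 2.0, size=200)      # modal amplitude per event
    X = np.outer(modal, gains) + 0.01 * rng.normal(size=(200, 4))

    neighbors, own = X[:, :3], X[:, 3]           # predict sensor 4 from sensors 1-3
    w, *_ = np.linalg.lstsq(neighbors, own, rcond=None)
    threshold = 4 * np.std(own - neighbors @ w)

    def diagnose(sample):
        residual = abs(sample[3] - sample[:3] @ w)
        return "faulty" if residual > threshold else "healthy"

    print(diagnose(1.5 * gains))                 # healthy: consistent amplitudes
    bad = 1.5 * gains
    bad[3] = 0.0                                 # sensor 4 output drops out
    print(diagnose(bad))                         # faulty
    ```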

  9. Tuning HDF5 subfiling performance on parallel file systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byna, Suren; Chaarawi, Mohamad; Koziol, Quincey

    Subfiling is a technique used on parallel file systems to reduce locking and contention issues when multiple compute nodes interact with the same storage target node. Subfiling provides a compromise between the single-shared-file approach, which instigates lock contention problems on parallel file systems, and having one file per process, which results in a massive and unmanageable number of files. In this paper, we evaluate and tune the performance of the recently implemented subfiling feature in HDF5. Specifically, we explain the implementation strategy of the subfiling feature in HDF5, provide examples of using the feature, and evaluate and tune the parallel I/O performance of this feature with the parallel file systems of the Cray XC40 system at NERSC (Cori), which include a burst buffer storage and a Lustre disk-based storage. We also evaluate I/O performance on the Cray XC30 system, Edison, at NERSC. Our results show a 1.2X to 6X performance advantage with subfiling compared to writing a single shared HDF5 file. We present our exploration of configurations, such as the number of subfiles and the number of Lustre storage targets for storing files, as optimization parameters to obtain superior I/O performance. Based on this exploration, we discuss recommendations for achieving good I/O performance as well as limitations of using the subfiling feature.
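
    The compromise itself is easy to sketch with plain parallel HDF5 (a conceptual illustration only; HDF5's actual subfiling feature has its own API): ranks are grouped onto a handful of subfiles via sub-communicators, instead of one shared file or one file per process:

    ```python
    # Assumes mpi4py and a parallel (MPI-enabled) h5py build are installed.
    from mpi4py import MPI
    import h5py
    import numpy as np

    comm = MPI.COMM_WORLD
    ranks_per_subfile = 4
    subfile_id = comm.rank // ranks_per_subfile

    # Ranks sharing a subfile open it collectively through a sub-communicator,
    # reducing lock contention on any single storage target.
    subcomm = comm.Split(color=subfile_id, key=comm.rank)
    with h5py.File(f"out.{subfile_id}.h5", "w", driver="mpio", comm=subcomm) as f:
        dset = f.create_dataset("x", (subcomm.size, 1024), dtype="f8")
        dset[subcomm.rank] = np.random.rand(1024)   # each rank writes its own row
    ```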

  10. Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    Description of NREL's Peregrine high-performance computing system (website text; the page's hardware table did not survive extraction). Peregrine provides /home, /nopt, and /projects file systems, mounted on all nodes, for short-term and longer-term (/projects) storage. The surviving table fragments list per-node columns for core count, memory, and peak double-precision performance, and describe nodes with Intel Xeon E5-2670 "Sandy Bridge" processors and 64 GB of memory.

  11. Synthesis of natural flows at selected sites in the upper Missouri River basin, Montana, 1928-89

    USGS Publications Warehouse

    Cary, L.E.; Parrett, Charles

    1996-01-01

    Natural monthly streamflows were synthesized for the years 1928-89 for 43 sites in the upper Missouri River Basin upstream from Fort Peck Lake in Montana. The sites are represented as nodes in a streamflow accounting model being developed by the Bureau of Reclamation. Recorded and historical flows at most sites have been affected by human activities including reservoir storage, diversions for irrigation, and municipal use. Natural flows at the sites were synthesized by eliminating the effects of these activities. Recorded data at some sites do not include the entire study period. The missing flows at these sites were estimated using a statistical procedure. The methods of synthesis varied, depending on upstream activities and information available. Recorded flows were transferred to nodes that did not have streamflow-gaging stations from the nearest station with a sufficient length of record. The flows at one node were computed as the sum of flows from three upstream tributaries. Monthly changes in reservoir storage were computed from month-end contents. The changes in storage were corrected for the effects of evaporation and precipitation using pan-evaporation and precipitation data from climate stations. Irrigation depletions and consumptive use by the three largest municipalities were computed. Synthesized natural flow at most nodes was computed by adding algebraically the upstream depletions and changes in reservoir storage to recorded or historical flow at the nodes.
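
    The accounting arithmetic reduces to a simple monthly balance. The sketch below is a hedged simplification (the report's corrections and sign conventions are more detailed), with all terms in consistent volume units such as acre-feet per month:

    ```python
    def natural_flow(recorded, irrigation_depletion, municipal_use,
                     storage_end, storage_start, evaporation, precipitation):
        """Add human effects back onto the recorded monthly flow at a node."""
        delta_storage = storage_end - storage_start       # from month-end contents
        net_reservoir_loss = evaporation - precipitation  # pan-evap and climate data
        return (recorded + irrigation_depletion + municipal_use
                + delta_storage + net_reservoir_loss)

    print(natural_flow(recorded=12000, irrigation_depletion=1500, municipal_use=200,
                       storage_end=45000, storage_start=44000,
                       evaporation=900, precipitation=300))  # 15300
    ```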

  12. Reputation-Based Secure Sensor Localization in Wireless Sensor Networks

    PubMed Central

    He, Jingsha; Xu, Jing; Zhu, Xingye; Zhang, Yuqiang; Zhang, Ting; Fu, Wanqing

    2014-01-01

    Location information of sensor nodes in wireless sensor networks (WSNs) is very important, for it makes information that is collected and reported by the sensor nodes spatially meaningful for applications. Since most current sensor localization schemes rely on location information that is provided by beacon nodes for the regular sensor nodes to locate themselves, the accuracy of localization depends on the accuracy of location information from the beacon nodes. Therefore, the security and reliability of the beacon nodes become critical in the localization of regular sensor nodes. In this paper, we propose a reputation-based security scheme for sensor localization to improve the security and the accuracy of sensor localization in hostile or untrusted environments. In our proposed scheme, the reputation of each beacon node is evaluated based on a reputation evaluation model so that regular sensor nodes can get credible location information from highly reputable beacon nodes to accomplish localization. We also perform a set of simulation experiments to demonstrate the effectiveness of the proposed reputation-based security scheme. And our simulation results show that the proposed security scheme can enhance the security and, hence, improve the accuracy of sensor localization in hostile or untrusted environments. PMID:24982940
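
    One plausible way to act on the reputations, sketched below, is to weight a linearized least-squares position fix by each beacon's reputation score so that poorly reputed beacons contribute little (the paper's reputation evaluation model itself is not reproduced here):

    ```python
    import numpy as np

    def localize(beacons, dists, reputations):
        """Linearize (x-xi)^2 + (y-yi)^2 = di^2 against the first beacon."""
        x1, y1 = beacons[0]
        A, b, w = [], [], []
        for (xi, yi), di, ri in zip(beacons[1:], dists[1:], reputations[1:]):
            A.append([2 * (xi - x1), 2 * (yi - y1)])
            b.append(dists[0]**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
            w.append(ri)
        A, b, W = np.array(A), np.array(b), np.diag(w)
        return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)  # weighted least squares

    beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
    true_pos = np.array([3.0, 4.0])
    dists = [float(np.hypot(*(true_pos - np.array(bc)))) for bc in beacons]
    reps = [1.0, 0.9, 0.8, 0.2]        # the last beacon has a poor reputation
    print(localize(beacons, dists, reps))   # ~[3. 4.]
    ```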

  13. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    PubMed

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
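
    The master/worker pattern itself is compact. In the sketch below a local process pool stands in for the EC2 cluster, and a toy exponential path-length tally stands in for EGS5 (names and physics are illustrative only); each worker receives an independent seed, and the master aggregates the partial tallies:

    ```python
    import numpy as np
    from multiprocessing import Pool

    def worker(args):
        seed, n_histories = args
        rng = np.random.default_rng(seed)
        paths = rng.exponential(scale=1.0, size=n_histories)  # toy free paths
        return paths.sum(), n_histories

    if __name__ == "__main__":
        n_workers, n_total = 8, 1_000_000
        jobs = [(seed, n_total // n_workers) for seed in range(n_workers)]
        with Pool(n_workers) as pool:
            partials = pool.map(worker, jobs)   # distribute, then aggregate
        total, count = map(sum, zip(*partials))
        print(total / count)   # ~1.0: statistics match a single-threaded run
    ```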

  14. The Global File System

    NASA Technical Reports Server (NTRS)

    Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.

    1996-01-01

    The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network-like fiber channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.

  15. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine

    PubMed Central

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2016-01-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., “short” processing times and/or “large” datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply “large scale” processing transitions into “big data” and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging. PMID:28736473

  16. C-fuzzy variable-branch decision tree with storage and classification error rate constraints

    NASA Astrophysics Data System (ADS)

    Yang, Shiueng-Bien

    2009-10-01

    The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to its classification error rate. However, the CFDT design does not consider the classification time taken to classify the input vector. Thus, the CFDT can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases: growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree. Additionally, the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.

  17. Toward seamless wearable sensing: Automatic on-body sensor localization for physical activity monitoring.

    PubMed

    Saeedi, Ramyar; Purath, Janet; Venkatasubramanian, Krishna; Ghasemzadeh, Hassan

    2014-01-01

    Mobile wearable sensors have demonstrated great potential in a broad range of applications in healthcare and wellness. These technologies are known for their potential to revolutionize the way next-generation medical services are supplied and consumed by providing more effective interventions, improving health outcomes, and substantially reducing healthcare costs. Despite this potential, utilization of these sensor devices is currently limited to lab settings and highly controlled clinical trials. A major obstacle to widespread utilization of these systems is that the sensors need to be used in predefined locations on the body in order to provide accurate outcomes such as the type of physical activity performed by the user. This has reduced users' willingness to utilize such technologies. In this paper, we propose a novel signal processing approach that leverages feature selection algorithms for accurate and automatic localization of wearable sensors. Our results based on real data collected using wearable motion sensors demonstrate that the proposed approach can perform sensor localization with 98.4% accuracy, which is 30.7% more accurate than an approach without a feature selection mechanism. Furthermore, utilizing our node localization algorithm helps the activity recognition algorithm achieve 98.8% accuracy (an increase from 33.6% for the system without node localization).
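
    As a hedged sketch of the described pipeline (feature selection followed by a location classifier), the following uses scikit-learn on synthetic data; the feature count k, the random-forest choice and all names are our assumptions, not the authors' configuration.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      # X: windowed motion-sensor features; y: on-body location labels
      # (e.g. 0=wrist, 1=ankle, ...). Synthetic stand-in data here.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 40))
      y = rng.integers(0, 4, size=300)

      model = make_pipeline(SelectKBest(f_classif, k=10),
                            RandomForestClassifier(n_estimators=100, random_state=0))
      print(cross_val_score(model, X, y, cv=5).mean())  # chance-level on noise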

  18. Collaborative Localization Algorithms for Wireless Sensor Networks with Reduced Localization Error

    PubMed Central

    Sahoo, Prasan Kumar; Hwang, I-Shyan

    2011-01-01

    Localization is an important research issue in Wireless Sensor Networks (WSNs). Though the Global Positioning System (GPS) can be used to locate the position of the sensors, it is unfortunately limited to outdoor applications and is costly and power consuming. In order to find the location of sensor nodes without the help of GPS, collaboration among nodes is essential so that localization can be accomplished efficiently. In this paper, novel localization algorithms are proposed to find possible location information of the normal nodes in a collaborative manner for an outdoor environment with the help of a few beacon and anchor nodes. In our localization scheme, at most three beacon nodes need to collaborate to find the accurate location information of any normal node. Besides, analytical methods are designed to calculate and reduce the localization error using a probability distribution function. Performance evaluation of our algorithm shows that there is a tradeoff between the number of deployed beacon nodes and the localization error, and that the average localization time of the network increases with the number of normal nodes deployed over a region. PMID:22163738
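
    The geometric core of such a scheme, fixing a normal node's position from three or more collaborating beacons with known positions and range estimates, is standard linearized trilateration; a minimal sketch (function name and example coordinates are ours, not the paper's):

      import numpy as np

      def trilaterate(beacons, dists):
          # Least-squares position from >= 3 beacon positions and ranges:
          # subtracting the first range equation from the others linearizes them.
          beacons = np.asarray(beacons, dtype=float)
          d = np.asarray(dists, dtype=float)
          p0, d0 = beacons[0], d[0]
          A = 2.0 * (beacons[1:] - p0)
          b = (np.sum(beacons[1:] ** 2, axis=1) - np.sum(p0 ** 2)
               - d[1:] ** 2 + d0 ** 2)
          pos, *_ = np.linalg.lstsq(A, b, rcond=None)
          return pos

      print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.07, 7.07, 7.07]))  # ~[5. 5.]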

  19. Rapid self-organised initiation of ad hoc sensor networks close above the percolation threshold

    NASA Astrophysics Data System (ADS)

    Korsnes, Reinert

    2010-07-01

    This work shows the potential for rapid self-organisation of sensor networks in which nodes collaborate to relay messages to a common data-collecting unit (sink node). The study problem is, in the sense of graph theory, to find a shortest path tree spanning a weighted graph. This is a well-studied problem, for which, for example, Dijkstra’s algorithm provides a solution for non-negative edge weights. The present contribution shows by simulation examples that simple modifications of known distributed approaches can provide significant improvements in performance. Phase transition phenomena, which are known to take place in networks close to percolation thresholds, may explain these observations. An initial method, which here serves as a reference, assumes the sink node starts organisation of the network (tree) by transmitting a control message advertising its availability to its neighbours. These neighbours then advertise their current cost estimate for routing a message to the sink. A node which in this way receives a message implying an improved route to the sink advertises its new finding and remembers which neighbouring node the message came from. This activity proceeds until there are no more improvements to advertise to neighbours. The result is a tree network for cost-effective transmission of messages to the sink (root). This distributed approach admits simple improvements which are of interest when minimisation of storage and communication of network information is a concern. Fast organisation of the network takes place when the number k of connections for each node (degree) is close above its critical value for global network percolation and at the same time there is a threshold for the nodes to decide to advertise network route updates.
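
    The reference protocol, in which the sink advertises first and every node re-advertises whenever it learns a cheaper route, can be mimicked sequentially in a few lines (a schematic stand-in for the distributed message passing; the adjacency format and toy graph are our assumptions):

      def build_sink_tree(adj, sink):
          # adj: node -> {neighbour: link_cost}. Returns (parent, cost)
          # defining the shortest-path tree rooted at the sink.
          cost = {sink: 0.0}
          parent = {sink: None}
          pending = [sink]                      # nodes with news to advertise
          while pending:
              u = pending.pop(0)
              for v, w in adj[u].items():
                  c = cost[u] + w
                  if c < cost.get(v, float("inf")):
                      cost[v], parent[v] = c, u
                      pending.append(v)         # v advertises its improved route
          return parent, cost

      adj = {"sink": {"a": 1, "b": 4}, "a": {"sink": 1, "b": 1},
             "b": {"sink": 4, "a": 1}}
      print(build_sink_tree(adj, "sink"))       # b routes via a at cost 2, not 4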

  20. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    PubMed

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

    Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their advantages of convenient application and independence from hardware devices. Unfortunately, RSSI values are susceptible to fluctuation under the influence of non-line-of-sight (NLOS) conditions in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian mixed model based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixed model (GMM). A dissimilarity matrix is built to generate relative coordinates of nodes using a multi-dimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization estimation well without requiring prior knowledge. Experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
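
    Setting aside the GMM smoothing of the RSSI values, the MDS-plus-transformation backbone of such an approach can be sketched with standard tools; the synthetic five-node layout and the affine anchor alignment below are our simplifications, not the paper's exact procedure.

      import numpy as np
      from sklearn.manifold import MDS

      # Synthetic layout: nodes 0-2 are anchors with known positions, 3-4 targets.
      true = np.array([[0, 0], [8, 0], [0, 8], [5, 3], [2, 6]], dtype=float)
      D = np.linalg.norm(true[:, None] - true[None, :], axis=-1)  # dissimilarities

      rel = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)   # relative coordinates

      # Affine map taking the anchors' relative coords to their actual coords,
      # applied to every node (the coordinate-transformation step).
      A = np.hstack([rel[:3], np.ones((3, 1))])
      T, *_ = np.linalg.lstsq(A, true[:3], rcond=None)
      est = np.hstack([rel, np.ones((len(rel), 1))]) @ T
      print(np.round(est, 2))                      # approximately recovers `true`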

  1. [11C]choline-PET-guided helical tomotherapy and estramustine in a patient with pelvic-recurrent prostate cancer: local control and toxicity profile after 24 months.

    PubMed

    Alongi, Filippo; Schipani, Stefano; Samanes Gajate, Ana Maria; Rosso, Alberto; Cozzarini, Cesare; Fiorino, Claudio; Alongi, Pierpaolo; Picchio, Maria; Gianolli, Luigi; Messa, Cristina; Di Muzio, Nadia

    2010-01-01

    [11C]choline positron emission tomography can be useful to detect metastatic disease and to localize isolated lymph node relapse after primary treatment in case of prostate-specific antigen failure. In case of lymph node failure in prostate cancer patients, surgery or radiotherapy can be proposed with a curative intent. Some reports have suggested that radiotherapy could have a role in local control of oligometastatic lymph node disease. This is the first reported case of [11C]choline positron emission tomography-guided helical tomotherapy concomitant with estramustine for the treatment of pelvic-recurrent prostate cancer. At 24 months after the end of helical tomotherapy, prostate-specific antigen was undetectable and no late toxicities were recorded. A disease-free survival of 24 months, in the absence of any type of systemic therapy, is uncommon in metastatic prostate cancer. The therapeutic approach of the case report is discussed and a literature review on the issue is presented.

  2. A connectionist model for diagnostic problem solving

    NASA Technical Reports Server (NTRS)

    Peng, Yun; Reggia, James A.

    1989-01-01

    A competition-based connectionist model for solving diagnostic problems is described. The problems considered are computationally difficult in that (1) multiple disorders may occur simultaneously and (2) a global optimum in the space exponential to the total number of possible disorders is sought as a solution. The diagnostic problem is treated as a nonlinear optimization problem, and global optimization criteria are decomposed into local criteria governing node activation updating in the connectionist model. Nodes representing disorders compete with each other to account for each individual manifestation, yet complement each other to account for all manifestations through parallel node interactions. When equilibrium is reached, the network settles into a locally optimal state. Three randomly generated examples of diagnostic problems, each of which has 1024 cases, were tested, and the decomposition plus competition plus resettling approach yielded very high accuracy.

  3. Trust index based fault tolerant multiple event localization algorithm for WSNs.

    PubMed

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength is attenuated in inverse proportion to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, in order to improve the accuracy of event localization and the performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the performance of fault tolerance in multiple event source localization. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms.
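
    The per-cycle trust update of the third phase can be expressed schematically as below; the additive step size, the [0, 1] bounds and all names are illustrative assumptions rather than TISNAP's exact rule.

      def update_trust(trust, consistent, delta=0.05):
          # Nodes whose binary report matched the fused event estimate gain
          # trust; nodes that contradicted it lose trust, clamped to [0, 1].
          for node, ok in consistent.items():
              step = delta if ok else -delta
              trust[node] = min(1.0, max(0.0, trust[node] + step))
          return trust

      trust = {"n1": 0.8, "n2": 0.8}
      print(update_trust(trust, {"n1": True, "n2": False}))  # n1 ~0.85, n2 ~0.75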

  4. Trust Index Based Fault Tolerant Multiple Event Localization Algorithm for WSNs

    PubMed Central

    Xu, Xianghua; Gao, Xueyong; Wan, Jian; Xiong, Naixue

    2011-01-01

    This paper investigates the use of wireless sensor networks for multiple event source localization using binary information from the sensor nodes. The events continually emit signals whose strength is attenuated in inverse proportion to the distance from the source. In this context, faults occur due to various reasons and are manifested when a node reports a wrong decision. In order to reduce the impact of node faults on the accuracy of multiple event localization, we introduce a trust index model to evaluate the fidelity of the information which the nodes report and use in the event detection process, and propose the Trust Index based Subtract on Negative Add on Positive (TISNAP) localization algorithm, which reduces the impact of faulty nodes on event localization by decreasing their trust index, in order to improve the accuracy of event localization and the performance of fault tolerance for multiple event source localization. The algorithm includes three phases: first, the sink identifies the cluster nodes to determine the number of events that occurred in the entire region by analyzing the binary data reported by all nodes; then, it constructs the likelihood matrix related to the cluster nodes and estimates the location of all events according to the alarmed status and trust index of the nodes around the cluster nodes. Finally, the sink updates the trust index of all nodes according to the fidelity of their information in the previous reporting cycle. The algorithm improves the accuracy of localization and the performance of fault tolerance in multiple event source localization. The experimental results show that even when the probability of node fault is close to 50%, the algorithm can still accurately determine the number of events and achieves better localization accuracy than other algorithms. PMID:22163972

  5. Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service

    PubMed Central

    Bao, Shunxing; Plassard, Andrew J.; Landman, Bennett A.; Gokhale, Aniruddha

    2017-01-01

    Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based “medical image processing-as-a-service” offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop’s distributed file system. Despite this promise, HBase’s load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. Experimental results for an illustrative sample of imaging data reveal that our new HBase policy results in a three-fold time improvement in conversion of classic DICOM to NiFTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach even for relatively small file sets. Moreover, file access latency is lower than that of network-attached storage. PMID:28884169
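
    A row key honouring the project/subject/session/scan/slice hierarchy might look like the sketch below, so that lexicographically adjacent keys, and hence HBase regions, keep related images together; the delimiter and field widths are our guesses, not necessarily the paper's exact design.

      def imaging_row_key(project, subject, session, scan, slice_idx):
          # Fixed-width, hierarchy-ordered key: lexicographic sorting groups a
          # subject's sessions, scans and slices into contiguous key ranges.
          return "{}|{}|{}|{}|{:06d}".format(project, subject, session,
                                             scan, slice_idx)

      print(imaging_row_key("projA", "subj0012", "sess01", "T1w", 42))
      # projA|subj0012|sess01|T1w|000042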

  6. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    DOEpatents

    Faraj, Ahmad

    2013-02-12

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer, each node including at least two processing cores, that include: performing, for each node, a local reduction operation using allreduce contribution data for the cores of that node, yielding, for each node, a local reduction result for one or more representative cores for that node; establishing one or more logical rings among the nodes, each logical ring including only one of the representative cores from each node; performing, for each logical ring, a global allreduce operation using the local reduction result for the representative cores included in that logical ring, yielding a global allreduce result for each representative core included in that logical ring; and performing, for each node, a local broadcast operation using the global allreduce results for each representative core on that node.
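
    The three phases can be mimicked sequentially in plain Python (a schematic stand-in for the parallel implementation; scalar per-core contributions are assumed):

      def allreduce(node_core_data):
          # Phase 1: local reduction across the cores of each compute node.
          local = [sum(cores) for cores in node_core_data]
          n = len(local)
          # Phase 2: logical ring over one representative core per node;
          # after n-1 shift-and-add steps each representative holds the total.
          acc = local[:]
          msg = local[:]
          for _ in range(n - 1):
              msg = [msg[(i - 1) % n] for i in range(n)]
              acc = [a + m for a, m in zip(acc, msg)]
          # Phase 3: local broadcast of the global result to every core.
          return [[acc[i]] * len(cores) for i, cores in enumerate(node_core_data)]

      print(allreduce([[1, 2], [3, 4], [5]]))   # every core sees 15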

  7. Optical Circuit Switched Protocol

    NASA Technical Reports Server (NTRS)

    Monacos, Steve P. (Inventor)

    2000-01-01

    The present invention is a system and method embodied in an optical circuit switched protocol for the transmission of data through a network. The optical circuit switched protocol is an all-optical circuit switched network and includes novel optical switching nodes for transmitting optical data packets within a network. Each optical switching node comprises a detector for receiving the header, header detection logic for translating the header into routing information and eliminating the header, and a controller for receiving the routing information and configuring an all optical path within the node. The all optical path located within the node is solely an optical path without having electronic storage of the data and without having optical delay of the data. Since electronic storage of the header is not necessary and the initial header is eliminated by the first detector of the first switching node, multiple identical headers are sent throughout the network so that subsequent switching nodes can receive and read the header for setting up an optical data path.

  8. Cation binding at the node of Ranvier: I. Localization of binding sites during development.

    PubMed

    Zagoren, J C; Raine, C S; Suzuki, K

    1982-06-17

    Cations are known to bind to the node of Ranvier and the paranodal regions of myelinated fibers. The integrity of these specialized structures is essential for normal conduction. Sites of cation binding can be microscopically identified by the electron-dense histochemical reaction product formed by the precipitate of copper sulfate/potassium ferrocyanide. This technique was used to study the distribution of cation binding during normal development of myelinating fibers. Sciatic nerves of C57BL mice, at 1, 3, 5, 6, 7, 8, 9, 13, 16, 18, 24 and 30 days of age, were prepared for electron microscopy following fixation in phosphate-buffered 2.5% glutaraldehyde and 1% osmic acid, microdissection and incubation in phosphate-buffered 0.1 M cupric sulfate followed by 0.1 M potassium ferrocyanide. Localization of reaction product was studied by light and electron microscopy. By light microscopy, no reaction product was observed prior to 9 days of age. At 13 days, a few nodes and paranodes exhibited reaction product. This increased in frequency and intensity up to 30 days when almost all nodes or paranodes exhibited reaction product. Ultrastructurally, diffuse reaction product was first observed at 3 days of age in the axoplasm of the node, in the paranodal extracellular space of the terminal loops, in the Schwann cell proper and in the terminal loops of Schwann cell cytoplasm. When myelinated axons fulfilled the criteria for mature nodes, reaction product was no longer observed in the Schwann cell cytoplasm, while the intensity of reaction product in the nodal axoplasm and paranodal extracellular space of the terminal loops increased. Reaction product in the latter site appeared to be interrupted by the transverse bands. These results suggest that cation binding accompanies nodal maturity and that the Schwann cell may play a role in production or storage of the cation binding substance during myelinogenesis and development.

  9. International Space Station Environmental Control and Life Support System Acceptance Testing for Node 1 Temperature and Humidity Control Subsystem

    NASA Technical Reports Server (NTRS)

    Williams, David E.

    2011-01-01

    The International Space Station (ISS) Node 1 Environmental Control and Life Support (ECLS) System is comprised of five subsystems: Atmosphere Control and Storage (ACS), Atmosphere Revitalization (AR), Fire Detection and Suppression (FDS), Temperature and Humidity Control (THC), and Water Recovery and Management (WRM). This paper will provide a summary of the Node 1 ECLS THC subsystem design and a detailed discussion of the ISS ECLS Acceptance Testing methodology utilized for this subsystem.

  10. Identifying and characterizing key nodes among communities based on electrical-circuit networks.

    PubMed

    Zhu, Fenghui; Wang, Wenxu; Di, Zengru; Fan, Ying

    2014-01-01

    Complex networks with community structures are ubiquitous in the real world. Despite many approaches developed for detecting communities, we continue to lack tools for identifying overlapping and bridging nodes that play crucial roles in the interactions and communications among communities in complex networks. Here we develop an algorithm based on local flow conservation to identify and distinguish the two types of nodes effectively and efficiently. Our method is applicable in both undirected and directed networks without a priori knowledge of the community structure. Our method bypasses the extremely challenging problem of partitioning communities in the presence of overlapping nodes that may belong to multiple communities. Because overlapping and bridging nodes are of paramount importance in maintaining the function of many social and biological networks, our tools open new avenues towards understanding and controlling real complex networks with communities and their key nodes.

  11. Peregrine System Configuration | High-Performance Computing | NREL

    Science.gov Websites

    Compute nodes and storage are connected by a high-speed InfiniBand network. Compute nodes are diskless; directories are mounted on all nodes, along with a file system dedicated to shared projects. Nodes have processors with 64 GB of memory.

  12. A novel topology control approach to maintain the node degree in dynamic wireless sensor networks.

    PubMed

    Huang, Yuanjiang; Martínez, José-Fernán; Díaz, Vicente Hernández; Sendra, Juana

    2014-03-07

    Topology control is an important technique for improving the connectivity and reliability of Wireless Sensor Networks (WSNs) by adjusting the communication range of wireless sensor nodes. In this paper, a novel Fuzzy-logic Topology Control (FTC) is proposed to achieve any desired average node degree by adaptively changing the communication range, thus improving network connectivity, which is the main target of FTC. FTC is a fully localized control algorithm and does not rely on the location information of neighbors. Instead of requiring hand-designed membership functions and if-then rules for the fuzzy-logic controller, FTC is constructed from a training data set to facilitate the design process. FTC is shown to be accurate and stable, with a short settling time. In order to compare it with other representative localized algorithms (NONE, FLSS, k-Neighbor and LTRT), FTC is evaluated through extensive simulations. The simulation results show that: firstly, similar to the k-Neighbor algorithm, FTC best achieves the desired average node degree as node density varies; secondly, FTC is comparable to FLSS and k-Neighbor in terms of energy efficiency, but is better than LTRT and NONE; thirdly, FTC has the lowest average maximum communication range of all the algorithms, which indicates that the most energy-consuming node in the network consumes the least power.
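
    FTC itself is a trained fuzzy controller, but the feedback idea it automates, growing the communication range when the observed degree is below target and shrinking it otherwise, reduces to a proportional rule like the following (entirely our simplification, not the paper's controller):

      def adjust_range(r, observed_degree, target_degree,
                       r_min=1.0, r_max=100.0, gain=0.1):
          # Too few neighbours -> grow the range; too many -> shrink it,
          # clamped to the radio's physical limits.
          r_new = r * (1.0 + gain * (target_degree - observed_degree))
          return min(r_max, max(r_min, r_new))

      r = 20.0
      for deg in [2, 3, 5, 6]:        # degrees observed over successive rounds
          r = adjust_range(r, deg, target_degree=5)
      print(round(r, 2))              # adapted communication range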

  13. Real time network traffic monitoring for wireless local area networks based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Balouchestani, Mohammadreza

    2017-05-01

    A wireless local area network (WLAN) is an important type of wireless network which connects different wireless nodes in a local area. WLANs suffer from significant problems such as network load imbalance, high energy consumption, and heavy sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, which is a good record for WLANs. The proposed architecture also increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, which provides a good foundation for establishing high-quality local area networks. This architecture enables continuous data acquisition and compression of WLAN signals in a form that is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate the compressed version of the input signal. At the receiver side of each wireless node, a reconstruction algorithm is applied in order to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS). This ability allows the Bit Error Rate (BER) to be reduced by 15% at each wireless node.
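
    The receiver-side reconstruction step can be illustrated with a generic sparse-recovery sketch (random Gaussian sensing matrix, identity sparsity basis, OMP solver; all stand-in choices of ours, and the analog-CS front end is not modelled):

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(0)
      n, m, k = 256, 64, 5          # signal length, measurements, sparsity
      x = np.zeros(n)
      x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
      Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # sensing matrix at the node
      y = Phi @ x                                  # compressed samples

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
      x_hat = omp.fit(Phi, y).coef_
      print(float(np.max(np.abs(x_hat - x))))      # near zero: recovery succeeded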

  14. Chemical applicability domain of the Local Lymph Node Assay (LLNA) for skin sensitisation potency. Part 2. The biological variability of the murine Local Lymph Node Assay (LLNA) for skin sensitisation.

    PubMed

    Roberts, David W; Api, Anne Marie; Aptula, Aynur O

    2016-10-01

    The Local Lymph Node Assay (LLNA) is the most common in vivo regulatory toxicology test for skin sensitisation, quantifying potency as the EC3, the concentration of chemical giving a threefold increase in thymidine uptake in the local lymph node. Existing LLNA data can, along with clinical data, provide useful comparator information on the potency of sensitisers. An understanding of the biological variability of data from LLNA studies is important for those developing non-animal-based risk assessment approaches for skin allergy. Here, an existing set of 94 EC3 values for 12 chemicals, all tested at least three times in the same vehicle, has been analysed by calculating standard deviations (SD) for the logEC3 values. The SDs range from 0.08 to 0.22. The overall SD for the 94 logEC3 values is 0.147. Thus the 95% confidence limits (2×SD) for LLNA EC3 values are within a factor of 2, comparable to those for physico-chemical measurements such as partition coefficients and solubility. The residual SDs of Quantitative Mechanistic Models (QMMs) based on physical organic chemistry parameters are similar to the overall SD of the LLNA, indicating that QMMs of this type are unlikely to be bettered for predictive accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.
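
    The "within a factor of 2" statement follows directly from the overall SD on the log10 scale; a quick check using only the numbers quoted in the abstract:

      sd_log_ec3 = 0.147               # overall SD of the 94 logEC3 values
      factor = 10 ** (2 * sd_log_ec3)  # 95% limits taken as 2 SDs on log scale
      print(round(factor, 2))          # ~1.97, i.e. within a factor of about 2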

  15. Optimally Distributed Kalman Filtering with Data-Driven Communication †

    PubMed Central

    Dormann, Katharina

    2018-01-01

    For multisensor data fusion, distributed state estimation techniques that enable a local processing of sensor data are the means of choice in order to minimize storage and communication costs. In particular, a distributed implementation of the optimal Kalman filter has recently been developed. A significant disadvantage of this algorithm is that the fusion center needs access to each node so as to compute a consistent state estimate, which requires full communication each time an estimate is requested. In this article, different extensions of the optimally distributed Kalman filter are proposed that employ data-driven transmission schemes in order to reduce communication expenses. As a first relaxation of the full-rate communication scheme, it can be shown that each node only has to transmit every second time step without endangering consistency of the fusion result. Also, two data-driven algorithms are introduced that even allow for lower transmission rates, and bounds are derived to guarantee consistent fusion results. Simulations demonstrate that the data-driven distributed filtering schemes can outperform a centralized Kalman filter that requires each measurement to be sent to the center node. PMID:29596392
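
    The two transmission regimes discussed can be stated as simple predicates (schematic only; the consistency guarantees and bounds derived in the article are not reproduced here):

      def transmit_half_rate(step):
          # First relaxation: each node reports only every second time step.
          return step % 2 == 0

      def transmit_event_triggered(innovation, threshold):
          # Data-driven rule (our schematic reading): report only when the local
          # measurement deviates enough from what the center can already predict.
          return abs(innovation) > threshold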

  16. A two-stage approach to removing noise from recorded music

    NASA Astrophysics Data System (ADS)

    Berger, Jonathan; Goldberg, Maxim J.; Coifman, Ronald C.; Goldberg, Maxim J.; Coifman, Ronald C.

    2004-05-01

    A two-stage algorithm for removing noise from recorded music signals (first proposed in Berger et al., ICMC, 1995) is described and updated. The first stage selects the ``best'' local trigonometric basis for the signal and models noise as the part having high entropy [see Berger et al., J. Audio Eng. Soc. 42(10), 808-818 (1994)]. In the second stage, the original source and the model of the noise obtained from the first stage are expanded into dyadic trees of smooth local sine bases. The best basis for the source signal is extracted using a relative entropy function (the Kullback-Leibler distance) to compare the sum of the costs of the children nodes to the cost of their parent node; energies of the noise in corresponding nodes of the model noise tree are used as weights. The talk will include audio examples of various stages of the method and proposals for further research.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Song; Wang, Yihong; Luo, Wei

    In virtualized data centers, virtual disk images (VDIs) serve as the containers in the virtual environment, so their access performance is critical for the overall system performance. Some distributed VDI chunk storage systems have been proposed in order to alleviate the I/O bottleneck for VM management. As the system scales up to a large number of running VMs, however, the overall network traffic inevitably becomes unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing the VMs. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve the above performance issue. In comparison with existing research, our solution is able to dynamically balance the traffic workloads in accessing VDI chunks, based on the run-time network state. Specifically, compute nodes with lightly loaded traffic will be adaptively assigned more chunk access requests from remote VMs and vice versa, which can effectively eliminate the above problem and thus improve the I/O performance of VMs. We also implement a prototype based on our ACStor design, and evaluate it with various benchmarks on a real cluster with 32 nodes and a simulated platform with 256 nodes. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8x performance gain in VM booting time and VM I/O throughput, in comparison with other state-of-the-art approaches.

  18. A Collaborative Secure Localization Algorithm Based on Trust Model in Underwater Wireless Sensor Networks

    PubMed Central

    Han, Guangjie; Liu, Li; Jiang, Jinfang; Shu, Lei; Rodrigues, Joel J.P.C.

    2016-01-01

    Localization is one of the hottest research topics in Underwater Wireless Sensor Networks (UWSNs), since many important applications of UWSNs, e.g., event sensing, target tracking and monitoring, require the location information of sensor nodes. Nowadays, a large number of localization algorithms have been proposed for UWSNs, and how to improve location accuracy is well studied. However, few of them take location reliability or security into consideration. In this paper, we propose a Collaborative Secure Localization algorithm based on a Trust model (CSLT) for UWSNs to ensure location security. Based on the trust model, the secure localization process can be divided into the following five sub-processes: trust evaluation of anchor nodes, initial localization of unknown nodes, trust evaluation of reference nodes, selection of reference nodes, and secondary localization of unknown nodes. Simulation results demonstrate that the proposed CSLT algorithm performs better than the compared related works in terms of location security, average localization accuracy and localization ratio. PMID:26891300

  19. A Comparative Study on Two Typical Schemes for Securing Spatial-Temporal Top-k Queries in Two-Tiered Mobile Wireless Sensor Networks.

    PubMed

    Ma, Xingpo; Liu, Xingjian; Liang, Junbin; Li, Yin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-03-15

    A novel network paradigm for mobile edge computing, namely TMWSNs (two-tiered mobile wireless sensor networks), has recently been proposed for its high scalability and robustness. However, only a few works have considered the security of TMWSNs. In fact, the storage nodes, which are located at the upper layer of TMWSNs, are prone to being attacked by adversaries because they play a key role in bridging the sensor nodes and the sink, and their compromise may lead to the disclosure of all data stored on them as well as other potentially devastating results. In this paper, we make a comparative study of two typical schemes, EVTopk and VTMSN, which have recently been proposed for securing Top-k queries in TMWSNs, through both theoretical analysis and extensive simulations, aiming at identifying their advantages and disadvantages. We find that both schemes raise communication costs to an unsatisfactory degree. Specifically, the extra communication cost brought about by transmitting the proof information uses up more than 40% of the total communication cost between the sensor nodes and the storage nodes, and 80% of that between the storage nodes and the sink. We discuss the corresponding reasons and present our suggestions, hoping that this will inspire researchers studying this subject.

  20. The raw disk i/o performance of compaq storage works RAID arrays under tru64 unix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, A C

    2000-10-19

    We report on the raw disk i/o performance of a set of Compaq StorageWorks RAID arrays connected to our cluster of Compaq ES40 computers via Fibre Channel. The best cumulative peak sustained data rate is 117 MB/s per node for reads and 77 MB/s per node for writes. This value occurs for a configuration in which a node has two Fibre Channel interfaces to a switch, which in turn has two connections to each of two Compaq StorageWorks RAID arrays. Each RAID array has two HSG80 RAID controllers controlling (together) two 5+p RAID chains. A 10% more space-efficient arrangement using a single 11+p RAID chain in place of the two 5+p chains is 25% slower for reads and 40% slower for writes.

  1. Color Filtering Localization for Three-Dimensional Underwater Acoustic Sensor Networks

    PubMed Central

    Liu, Zhihua; Gao, Han; Wang, Wuling; Chang, Shuai; Chen, Jiaxing

    2015-01-01

    Accurate localization of mobile nodes has been an important and fundamental problem in underwater acoustic sensor networks (UASNs). The detection information returned from a mobile node is meaningful only if its location is known. In this paper, we propose two localization algorithms based on color filtering technology, called PCFL and ACFL. PCFL and ACFL aim at collaboratively accomplishing accurate localization of underwater mobile nodes with minimum energy expenditure. Both adopt, as the current sampling area, the overlapping signal region of the task anchors that can communicate directly with the mobile node. PCFL employs the projected distances between each of the task projections and the mobile node, while ACFL adopts the direct distance between each of the task anchors and the mobile node. A proportion factor of distance is also proposed to weight the RGB values. By comparing the nearness degrees of the RGB sequences between the samples and the mobile node, samples can be filtered out. The normalized nearness degrees are considered as the weighted standards to calculate the coordinates of the mobile nodes. The simulation results show that the proposed methods have excellent localization performance and can localize the mobile node in a timely way. The average localization error of PCFL is decreased by about 30.4% compared to the AFLA method. PMID:25774706

  2. Water Catchment and Storage Monitoring

    NASA Astrophysics Data System (ADS)

    Bruenig, Michael; Dunbabin, Matt; Moore, Darren

    2010-05-01

    Sensors and Sensor Networks technologies provide the means for comprehensive understanding of natural processes in the environment by radically increasing the availability of empirical data about the natural world. This step change is achieved through a dramatic reduction in the cost of data acquisition and many orders of magnitude increase in the spatial and temporal granularity of measurements. Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) is undertaking a strategic research program developing wireless sensor network technology for environmental monitoring. As part of this research initiative, we are engaging with government agencies to densely monitor water catchments and storages, thereby enhancing understanding of the environmental processes that affect water quality. In the Gold Coast hinterland in Queensland, Australia, we are building sensor networks to monitor restoration of rainforest within the catchment, and to monitor methane flux release and water quality in the water storages. This poster will present our ongoing work in this region of eastern Australia. The Springbrook plateau in the Gold Coast hinterland lies within a World Heritage listed area, has uniquely high rainfall, hosts a wide range of environmental gradients, and forms part of the catchment for Gold Coast's water storages. Parts of the plateau are being restored from agricultural grassland to native rainforest vegetation. Since April 2008, we have had a 10-node, multi-hop sensor network deployed there to monitor microclimate variables. This network will be expanded to 50 nodes in February 2010, and to around 200 nodes and 1000 sensors by mid-2011, spread over an area of approximately 0.8 square kilometers. The extremely dense microclimate sensing will improve knowledge of the environmental factors that enhance or inhibit the regeneration of native rainforest. The final network will also include nodes with acoustic and image sensing capability for monitoring higher level parameters such as fauna diversity. The regenerating rainforest environment presents a number of interesting challenges for wireless sensor networks related to energy harvesting and to reliable low-power wireless communications through dense and wet vegetation. Located downstream from the Springbrook plateau, the Little Nerang and Hinze dams are the two major water supply storages for the Gold Coast region. In September 2009 we fitted methane, light, wind, and sonar sensors to our autonomous electric boat platform and successfully demonstrated autonomous collection of methane flux release data on Little Nerang Dam. Sensor and boat status data were relayed back to a human operator on the shore of the dam via a small network of our Fleck™ nodes. The network also included 4 floating nodes each fitted with a string of 6 temperature sensors for profiling temperature at different water depths. We plan to expand the network further during 2010 to incorporate floating methane nodes, additional temperature sensing nodes, as well as land-based microclimate nodes. The overall monitoring system will provide significant data to understand the connected catchment-to-storage system and will provide continuous data to monitor and understand change trends within this world heritage area.

  3. Data-based reconstruction of complex geospatial networks, nodal positioning and detection of hidden nodes

    PubMed Central

    Su, Ri-Qi; Wang, Wen-Xu; Wang, Xiao; Lai, Ying-Cheng

    2016-01-01

    Given a complex geospatial network with nodes distributed in a two-dimensional region of physical space, can the locations of the nodes be determined and their connection patterns be uncovered based solely on data? We consider the realistic situation where time series/signals can be collected from a single location. A key challenge is that the signals collected are necessarily time delayed, due to the varying physical distances from the nodes to the data collection centre. To meet this challenge, we develop a compressive-sensing-based approach enabling reconstruction of the full topology of the underlying geospatial network and more importantly, accurate estimate of the time delays. A standard triangularization algorithm can then be employed to find the physical locations of the nodes in the network. We further demonstrate successful detection of a hidden node (or a hidden source or threat), from which no signal can be obtained, through accurate detection of all its neighbouring nodes. As a geospatial network has the feature that a node tends to connect with geophysically nearby nodes, the localized region that contains the hidden node can be identified. PMID:26909187

  4. A time-domain finite element boundary integral approach for elastic wave scattering

    NASA Astrophysics Data System (ADS)

    Shi, F.; Lowe, M. J. S.; Skelton, E. A.; Craster, R. V.

    2018-04-01

    The response of complex scatterers, such as rough or branched cracks, to incident elastic waves is required in many areas of industrial importance such as those in non-destructive evaluation and related fields; we develop an approach to generate accurate and rapid simulations. To achieve this we develop, in the time domain, an implementation to efficiently couple the finite element (FE) method within a small local region, and the boundary integral (BI) globally. The FE explicit scheme is run in a local box to compute the surface displacement of the scatterer, by giving forcing signals to excitation nodes, which can lie on the scatterer itself. The required input forces on the excitation nodes are obtained with a reformulated FE equation, according to the incident displacement field. The surface displacements computed by the local FE are then projected, through time-domain BI formulae, to calculate the scattering signals with different modes. This new method yields huge improvements in the efficiency of FE simulations for scattering from complex scatterers. We present results using different shapes and boundary conditions, all simulated using this approach in both 2D and 3D, and then compare with full FE models and theoretical solutions to demonstrate the efficiency and accuracy of this numerical approach.

  5. Homemade Buckeye-Pi: A Learning Many-Node Platform for High-Performance Parallel Computing

    NASA Astrophysics Data System (ADS)

    Amooie, M. A.; Moortgat, J.

    2017-12-01

    We report on the "Buckeye-Pi" cluster, the supercomputer developed in The Ohio State University School of Earth Sciences from 128 inexpensive Raspberry Pi (RPi) 3 Model B single-board computers. Each RPi is equipped with fast Quad Core 1.2GHz ARMv8 64bit processor, 1GB of RAM, and 32GB microSD card for local storage. Therefore, the cluster has a total RAM of 128GB that is distributed on the individual nodes and a flash capacity of 4TB with 512 processors, while it benefits from low power consumption, easy portability, and low total cost. The cluster uses the Message Passing Interface protocol to manage the communications between each node. These features render our platform the most powerful RPi supercomputer to date and suitable for educational applications in high-performance-computing (HPC) and handling of large datasets. In particular, we use the Buckeye-Pi to implement optimized parallel codes in our in-house simulator for subsurface media flows with the goal of achieving a massively-parallelized scalable code. We present benchmarking results for the computational performance across various number of RPi nodes. We believe our project could inspire scientists and students to consider the proposed unconventional cluster architecture as a mainstream and a feasible learning platform for challenging engineering and scientific problems.
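
    A minimal MPI job of the kind such a cluster runs, written with mpi4py (a toy Monte Carlo estimate of pi, our example rather than the group's subsurface-flow code):

      # run with e.g.: mpiexec -n 4 python pi_estimate.py
      import random
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      samples = 100_000
      random.seed(rank)               # a different stream on each node
      hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                 for _ in range(samples))

      total = comm.reduce(hits, op=MPI.SUM, root=0)
      if rank == 0:
          print(4.0 * total / (samples * size))   # converges towards pi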

  6. A convex optimization method for self-organization in dynamic (FSO/RF) wireless networks

    NASA Astrophysics Data System (ADS)

    Llorca, Jaime; Davis, Christopher C.; Milner, Stuart D.

    2008-08-01

    Next generation communication networks are becoming increasingly complex systems. Previously, we presented a novel physics-based approach to model dynamic wireless networks as physical systems which react to local forces exerted on network nodes. We showed that under clear atmospheric conditions the network communication energy can be modeled as the potential energy of an analogous spring system and presented a distributed mobility control algorithm where nodes react to local forces driving the network to energy minimizing configurations. This paper extends our previous work by including the effects of atmospheric attenuation and transmitted power constraints in the optimization problem. We show how our new formulation still results in a convex energy minimization problem. Accordingly, an updated force-driven mobility control algorithm is presented. Forces on mobile backbone nodes are computed as the negative gradient of the new energy function. Results show how in the presence of atmospheric obscuration stronger forces are exerted on network nodes that make them move closer to each other, avoiding loss of connectivity. We show results in terms of network coverage and backbone connectivity and compare the developed algorithms for different scenarios.
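
    Under the clear-air quadratic energy model, the force on each node is the sum of spring pulls over its links, and one control cycle is a step along the negative gradient; a minimal sketch assuming unit spring constants and a toy three-node topology of our own:

      import numpy as np

      def step_positions(pos, links, k=1.0, fixed=(), lr=0.05):
          # Force on node i is -dE/dpos_i for E = sum 0.5*k*|pos_i - pos_j|^2.
          force = np.zeros_like(pos)
          for i, j in links:
              pull = k * (pos[j] - pos[i])
              force[i] += pull
              force[j] -= pull
          for i in fixed:              # e.g. terminal nodes that must not move
              force[i] = 0.0
          return pos + lr * force

      pos = np.array([[0.0, 0.0], [10.0, 0.0], [3.0, 8.0]])  # node 2 is a relay
      for _ in range(200):
          pos = step_positions(pos, links=[(0, 2), (1, 2)], fixed=(0, 1))
      print(np.round(pos[2], 2))       # relay settles midway: [5. 0.]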

  7. Ensuring Data Storage Security in Tree cast Routing Architecture for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Kumar, K. E. Naresh; Sagar, U. Vidya; Waheed, Mohd. Abdul

    2010-10-01

    This paper presents recent advances in technology that have made possible low-cost, low-power wireless sensors with efficient energy consumption. A network of such nodes can coordinate among themselves for distributed sensing and processing of certain data. We propose an architecture that provides a stateless solution for efficient routing in wireless sensor networks. This type of architecture is known as Tree Cast. We propose a unique method of address allocation, building up multiple disjoint trees which are geographically intertwined and rooted at the data sink. Using these trees, messages can be routed to and from the sink node without maintaining any routing state in the sensor nodes. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, this routing architecture moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this paper, we focus on data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in this architecture, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attacks, and even server colluding attacks.

  8. Mesh refinement strategy for optimal control problems

    NASA Astrophysics Data System (ADS)

    Paiva, L. T.; Fontes, F. A. C. C.

    2013-10-01

    Direct methods are becoming the most used technique to solve nonlinear optimal control problems. Regular time meshes having equidistant spacing are frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behavior. One way to improve the solution is to select a new mesh with a greater number of nodes. Another way involves adaptive mesh refinement. In this case, the mesh nodes have non-equidistant spacing, which allows non-uniform node collocation. In the method presented in this paper, a time mesh refinement strategy based on the local error is developed. After computing a solution on a coarse mesh, the local error is evaluated, which gives information about the subintervals of the time domain where refinement is needed. This procedure is repeated until the local error reaches a user-specified threshold. The technique is applied to solve the car-like vehicle problem aiming at minimum consumption. The approach developed in this paper leads to results with greater accuracy and yet lower overall computational time as compared to using time meshes having equidistant spacing.
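
    The refinement loop itself is short: estimate the local error on each subinterval, bisect the offenders, and repeat until the user threshold is met. A minimal sketch (the names, the bisection choice and the toy error estimator are our assumptions):

      import numpy as np

      def refine_mesh(mesh, local_error_fn, tol, max_iter=10):
          # Bisect every subinterval whose local error estimate exceeds tol.
          for _ in range(max_iter):
              err = local_error_fn(mesh)          # one estimate per subinterval
              bad = np.flatnonzero(err > tol)
              if bad.size == 0:
                  return mesh
              mids = 0.5 * (mesh[bad] + mesh[bad + 1])
              mesh = np.sort(np.concatenate([mesh, mids]))
          return mesh

      # Toy estimate: error grows towards t = 1, so nodes concentrate there.
      err_fn = lambda m: np.diff(m) ** 2 * (0.5 * (m[:-1] + m[1:]))
      refined = refine_mesh(np.linspace(0.0, 1.0, 5), err_fn, tol=1e-3)
      print(len(refined), refined.round(3))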

  9. Using GIS databases for simulated nightlight imagery

    NASA Astrophysics Data System (ADS)

    Zollweg, Joshua D.; Gartley, Michael; Roskovensky, John; Mercier, Jeffery

    2012-06-01

    Proposed is a new technique for simulating nighttime scenes with realistically modelled urban radiance. While nightlight imagery is commonly used to measure urban sprawl,1 it is uncommon to use urbanization as a metric to develop synthetic nighttime scenes. In the developed methodology, the open-source Open Street Map (OSM) Geographic Information System (GIS) database is used. The database is comprised of many nodes, which are used to define the position of different types of streets, buildings, and other features. These nodes are the driver used to model urban nightlights, given several assumptions. The first assumption is that the spatial distribution of nodes is closely related to the spatial distribution of nightlights. Work by Roychowdhury et al. has demonstrated the relationship between urban lights and development.2 So, the real assumption being made is that the density of nodes corresponds to development, which is reasonable. Secondly, the local density of nodes must relate directly to the upwelled radiance within the given locality. Testing these assumptions using Albuquerque and Indianapolis as example cities revealed that different types of nodes produce more realistic results than others. Residential street nodes offered the best performance of any single node type among the types tested in this investigation. Other node types, however, still provide useful supplementary data. Using streets and buildings defined in the OSM database allowed automated generation of simulated nighttime scenes of Albuquerque and Indianapolis in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The simulation was compared to real data from the recently deployed National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible Infrared Imager Radiometer Suite (VIIRS) platform. As a result of the comparison, correction functions were used to correct for discrepancies between simulated and observed radiance. Future work will include investigating more advanced approaches for mapping the spatial extent of nightlights, based on the distribution of different node types in local neighbourhoods. This will allow the spectral profile of each region to be dynamically adjusted, in addition to simply modifying the magnitude of a single source type.

  10. Random Time Identity Based Firewall In Mobile Ad hoc Networks

    NASA Astrophysics Data System (ADS)

    Suman, Patel, R. B.; Singh, Parvinder

    2010-11-01

    A mobile ad hoc network (MANET) is a self-organizing network of mobile routers and associated hosts connected by wireless links. MANETs are highly flexible and adaptable but at the same time are highly prone to security risks due to the open medium, dynamically changing network topology, cooperative algorithms, and lack of centralized control. A firewall is an effective means of protecting a local network from network-based security threats and forms a key component of MANET security architecture. This paper presents a review of firewall implementation techniques in MANETs and their relative merits and demerits. A new approach is proposed to select MANET nodes at random for firewall implementation. This approach randomly selects a new node as the firewall after a fixed time, based on the critical values of certain parameters such as power backup. It effectively balances power and resource utilization across the entire MANET because the responsibility of implementing the firewall is shared equally among all the nodes. At the same time, it ensures improved security for MANETs against outside attacks, as an intruder will not be able to find the entry point into the MANET due to the random selection of nodes for firewall implementation.

  11. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, finding a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework for finding an optimal storage tank scheme using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); and (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and that the optimization converges quickly from the preliminary scheme. The optimized scheme is better than the preliminary scheme at reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    NASA Astrophysics Data System (ADS)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. This work was presented in part at the 2010 Annual Meeting of the American Association of Physicists in Medicine (AAPM), Philadelphia, PA.
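
    The reported timings are consistent with a simple "parallel part plus fixed overhead" model; a back-of-envelope check using only the abstract's numbers (attributing the whole residual to fixed overhead is our assumption):

      t1_min = 2.58 * 60                # single-machine run, in minutes
      t100_min = 3.3                    # measured with 100 cloud nodes
      overhead_min = t100_min - t1_min / 100
      print(round(t1_min / t100_min))   # ~47x, the reported speed-up
      print(round(overhead_min, 2))     # ~1.75 min of setup/aggregation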

  13. [Management of penile cancer patients: new aspects of a rare tumour entity].

    PubMed

    Roiner, M; Maurer, O; Lebentrau, S; Gilfrich, C; Schäfer, C; Haberl, C; Brookman-May, S D; Burger, M; May, M; Hakenberg, O W

    2018-06-01

    Over the past few decades, some principles in the treatment of penile cancer have changed fundamentally. While 15 years ago a negative surgical margin of at least 2 cm was considered mandatory, organ-sparing surgery permitting minimal negative surgical margins has a high priority nowadays. The current treatment principle requires as much organ preservation as possible and as much radicality as necessary. The implementation of organ-sparing and reconstructive surgical techniques has improved the quality of life of surviving patients. However, oncological and functional outcomes are still unsatisfactory. Alongside adequate local treatment of the primary tumour, consistent management of the inguinal lymph nodes is of fundamental prognostic significance. In particular, in tumours staged T1b and upwards, even clinically inconspicuous inguinal lymph nodes require a surgical approach. Sentinel node biopsy, minimally invasive surgical techniques and modified inguinal lymphadenectomy have reduced morbidity compared with conventional inguinal lymph node dissection. Multimodal treatment with surgery and chemotherapy is required in all patients with lymph node-positive disease; neoadjuvant chemotherapy has been established for patients with locally advanced lymph node disease, and adjuvant treatment after radical inguinal lymphadenectomy for lymph node-positive disease. An increasing understanding of the underlying tumour biology, in particular the role of human papilloma virus (HPV) and epidermal growth factor receptor (EGFR) status, has led to a new pathological classification and may further enhance treatment options. This review summarises current aspects of the therapeutic management of penile cancer. © Georg Thieme Verlag KG Stuttgart · New York.

  14. A Social Potential Fields Approach for Self-Deployment and Self-Healing in Hierarchical Mobile Wireless Sensor Networks

    PubMed Central

    González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina

    2017-01-01

    Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low-cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage. PMID:28075364
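
    The social potential fields rule can be sketched as each node summing pairwise inverse-power forces, repulsive at short range and attractive at long range, and taking a small step along the net force. The constants and exponents below are hypothetical, and the original algorithm's obstacle handling and routing hierarchy are not reproduced.

      import numpy as np

      # Inverse-power force law: repulsive at short range, attractive at long range.
      C_REP, C_ATT, P_REP, P_ATT = 1.0, 0.2, 3.0, 1.5   # hypothetical constants

      def social_force(node, others):
          """Net force on `node` (2D position) from all other node positions."""
          force = np.zeros(2)
          for other in others:
              diff = node - other
              r = np.linalg.norm(diff)
              if r < 1e-9:
                  continue
              magnitude = C_REP / r**P_REP - C_ATT / r**P_ATT
              force += magnitude * diff / r   # >0 pushes apart, <0 pulls together
          return force

      # One deployment step: every node takes a small step along its net force.
      positions = np.random.rand(10, 2) * 20.0
      step = 0.1
      forces = [social_force(p, np.delete(positions, i, axis=0))
                for i, p in enumerate(positions)]
      positions += step * np.array(forces)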

  15. A Social Potential Fields Approach for Self-Deployment and Self-Healing in Hierarchical Mobile Wireless Sensor Networks.

    PubMed

    González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina

    2017-01-09

    Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low-cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage.

  16. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1991-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LANs) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, make up most of the LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described here. An application-level, or functional, description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.

  17. A Novel Topology Control Approach to Maintain the Node Degree in Dynamic Wireless Sensor Networks

    PubMed Central

    Huang, Yuanjiang; Martínez, José-Fernán; Díaz, Vicente Hernández; Sendra, Juana

    2014-01-01

    Topology control is an important technique for improving the connectivity and reliability of Wireless Sensor Networks (WSNs) by adjusting the communication range of wireless sensor nodes. In this paper, a novel Fuzzy-logic Topology Control (FTC) is proposed to achieve any desired average node degree by adaptively changing the communication range, thus improving network connectivity, which is the main target of FTC. FTC is a fully localized control algorithm and does not rely on the location information of neighbors. Instead of hand-designing membership functions and if-then rules for the fuzzy-logic controller, FTC is constructed from a training data set to facilitate the design process. FTC is shown to be accurate and stable, with a short settling time. To compare it with other representative localized algorithms (NONE, FLSS, k-Neighbor and LTRT), FTC is evaluated through extensive simulations. The simulation results show that: firstly, similar to the k-Neighbor algorithm, FTC is the best at achieving the desired average node degree as node density varies; secondly, FTC is comparable to FLSS and k-Neighbor in terms of energy efficiency, but better than LTRT and NONE; thirdly, FTC has the lowest average maximum communication range of all the algorithms, which indicates that the most energy-consuming node in the network consumes the least power. PMID:24608008
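
    The feedback loop at the heart of this idea, each node adjusting its own communication range until its observed neighbor count matches the desired degree, can be sketched without the fuzzy controller. In the hedged example below, a simple proportional update stands in for FTC's trained fuzzy-logic controller; all parameters are illustrative.

      import math
      import random

      def neighbor_count(node, nodes, radius):
          return sum(1 for other in nodes
                     if other is not node and math.dist(node, other) <= radius)

      def adjust_ranges(nodes, target_degree, gain=0.3, rounds=50, r0=10.0):
          """Each node locally nudges its own radius toward the target degree.
          A proportional rule stands in for FTC's fuzzy-logic controller."""
          radii = [r0] * len(nodes)
          for _ in range(rounds):
              for i, node in enumerate(nodes):
                  error = target_degree - neighbor_count(node, nodes, radii[i])
                  radii[i] = max(1.0, radii[i] + gain * error)
          return radii

      nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
      radii = adjust_ranges(nodes, target_degree=6)
      print(sum(neighbor_count(n, nodes, r) for n, r in zip(nodes, radii)) / 50)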

  18. Unification of two fractal families

    NASA Astrophysics Data System (ADS)

    Liu, Ying

    1995-06-01

    Barnsley and Hurd classify fractal images into two families: iterated function system fractals (IFS fractals) and fractal transform fractals, or local iterated function system fractals (LIFS fractals). We will call IFS fractals class 2 fractals, and LIFS fractals class 3 fractals. In this paper, we unify these two approaches together with another family of fractals, the class 5 fractals. The basic idea is as follows: a dynamical system can be represented by a digraph, and the nodes in a digraph can be divided into two parts: transient states and persistent states. For bilevel images, a persistent node is a black pixel and a transient node is a white pixel. For images with more than two gray levels, a stochastic digraph is used; a transient node is a pixel with an intensity of 0, and the intensity of a persistent node is determined by a relative frequency. In this way, the two families of fractals can be generated in a similar manner. We first present a classification of dynamical systems and introduce the transformation based on digraphs, then unify the two approaches for fractal binary images. We compare the decoding algorithms of the two families. Finally, we generalize the discussion to continuous-tone images.

  19. Cervical lymphadenopathy in the dental patient: a review of clinical approach.

    PubMed

    Parisi, Ernesta; Glick, Michael

    2005-06-01

    Lymph node enlargement may be an incidental finding on examination, or may be associated with a patient complaint. Over half of all patients examined each day are likely to have enlarged lymph nodes in the head and neck region, yet there are no written guidelines specifying when further evaluation of lymphadenopathy is necessary. With such a high frequency of occurrence, oral health care providers need to be able to determine when lymphadenopathy should be investigated further. Although most cervical lymphadenopathy is the result of a benign infectious etiology, clinicians should search for a precipitating cause and examine other nodal locations to exclude generalized lymphadenopathy. Lymph nodes larger than 1 cm in diameter are generally considered abnormal. Malignancy should be considered when palpable lymph nodes are identified in the supraclavicular region, or when nodes are rock-hard or rubbery in consistency, or fixed. Patients with unexplained localized cervical lymphadenopathy presenting with a benign clinical picture should be observed for a 2- to 4-week period. Generalized lymphadenopathy should prompt further clinical investigation. This article reviews common causes of lymphadenopathy, and presents a methodical clinical approach to a patient with cervical lymphadenopathy.

  20. An adaptive neural swarm approach for intrusion defense in ad hoc networks

    NASA Astrophysics Data System (ADS)

    Cannady, James

    2011-06-01

    Wireless sensor networks (WSN) and mobile ad hoc networks (MANET) are being increasingly deployed in critical applications due to the flexibility and extensibility of the technology. While these networks possess numerous advantages over traditional wireless systems in dynamic environments, they are still vulnerable to many of the same types of host-based and distributed attacks common to those systems. Unfortunately, the limited power and bandwidth available in WSNs and MANETs, combined with the dynamic connectivity that is a defining characteristic of the technology, make it extremely difficult to utilize traditional intrusion detection techniques. This paper describes an approach to accurately and efficiently detect potentially damaging activity in WSNs and MANETs. It enables the network as a whole to recognize attacks, anomalies, and potential vulnerabilities in a distributed manner that reflects the autonomic processes of biological systems. Each component of the network recognizes activity in its local environment and then contributes to the overall situational awareness of the entire system. The approach utilizes agent-based swarm intelligence to adaptively identify potential data sources on each node and on adjacent nodes throughout the network. The swarm agents then self-organize into modular neural networks that utilize a reinforcement learning algorithm to identify relevant behavior patterns in the data without supervision. Once the modular neural networks have established interconnectivity, both locally and with neighboring nodes, the analysis of events within the network can be conducted collectively in real time. The approach has been shown to be extremely effective in identifying distributed network attacks.

  1. Automatic Network Fingerprinting through Single-Node Motifs

    PubMed Central

    Echtermeyer, Christoph; da Fontoura Costa, Luciano; Rodrigues, Francisco A.; Kaiser, Marcus

    2011-01-01

    Complex networks have been characterised by their specific connectivity patterns (network motifs), but their building blocks can also be identified and described by node-motifs—a combination of local network features. One technique to identify single node-motifs has been presented by Costa et al. (L. D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett., 87, 1, 2009). Here, we first suggest improvements to the method, including how its parameters can be determined automatically. Such automatic routines make high-throughput studies of many networks feasible. Second, the new routines are validated on different network series. Third, we provide an example of how the method can be used to analyse network time series. In conclusion, we provide a robust method for systematically discovering and classifying characteristic nodes of a network. In contrast to classical motif analysis, our approach can identify individual components (here: nodes) that are specific to a network. Such special nodes, like hubs before them, may be found to play critical roles in real-world networks. PMID:21297963

  2. A Localization Method for Underwater Wireless Sensor Networks Based on Mobility Prediction and Particle Swarm Optimization Algorithms

    PubMed Central

    Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei

    2016-01-01

    Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Most current localization algorithms in this field do not give enough consideration to the mobility of the nodes. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on a Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated by using the spatial correlation of underwater objects' mobility, and then their locations can be predicted. The range-based PSO algorithm may cause considerable energy consumption and its computational complexity is somewhat high; nevertheless, because the number of beacon nodes is relatively small, the calculation for the large number of unknown nodes remains succinct, so this method can markedly decrease the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and better localization coverage rate compared with some other widely used localization methods in this field. PMID:26861348
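
    The range-based PSO step, searching for the position that best explains measured distances to reference nodes, can be sketched as follows. The anchor layout, noise level and swarm parameters below are illustrative, and the mobility-prediction stage of MP-PSO is not shown.

      import numpy as np

      rng = np.random.default_rng(0)
      anchors = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 80.0]])  # hypothetical
      true_pos = np.array([40.0, 30.0])
      ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 1.0, 3)

      def fitness(p):
          # Residual between measured ranges and distances from candidate p.
          return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

      # Minimal PSO: inertia w, cognitive c1, social c2.
      pos = rng.uniform(0, 100, (30, 2))
      vel = np.zeros((30, 2))
      pbest = pos.copy()
      pbest_val = np.array([fitness(p) for p in pos])
      gbest = pbest[pbest_val.argmin()].copy()
      w, c1, c2 = 0.7, 1.5, 1.5
      for _ in range(100):
          r1, r2 = rng.random((30, 2)), rng.random((30, 2))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          vals = np.array([fitness(p) for p in pos])
          mask = vals < pbest_val
          pbest[mask], pbest_val[mask] = pos[mask], vals[mask]
          gbest = pbest[pbest_val.argmin()].copy()
      print("estimated position:", gbest)   # close to true_pos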

  3. A scalable architecture for online anomaly detection of WLCG batch jobs

    NASA Astrophysics Data System (ADS)

    Kuehn, E.; Fischer, M.; Giffels, M.; Jung, C.; Petzold, A.

    2016-10-01

    For data centres it is increasingly important to monitor the network usage, and to learn from network usage patterns. In particular, configuration issues or misbehaving batch jobs preventing a smooth operation need to be detected as early as possible. At the GridKa data and computing centre we therefore operate a tool, BPNetMon, for monitoring traffic data and characteristics of WLCG batch jobs and pilots locally on different worker nodes. On the one hand, local information alone is not sufficient to detect anomalies, for several reasons: e.g. the underlying job distribution on a single worker node might change, or there might be a local misconfiguration. On the other hand, a centralised anomaly detection approach does not scale with regard to network communication or computational costs. We therefore propose a scalable architecture based on concepts of a super-peer network.

  4. Localization with a mobile beacon in underwater acoustic sensor networks.

    PubMed

    Lee, Sangho; Kim, Kiseon

    2012-01-01

    Localization is one of the most important issues associated with underwater acoustic sensor networks, especially when sensor nodes are randomly deployed. Given that it is difficult to deploy beacon nodes at predetermined locations, localization schemes with a mobile beacon on the sea surface or along the planned path are inherently convenient, accurate, and energy-efficient. In this paper, we propose a new range-free Localization with a Mobile Beacon (LoMoB). The mobile beacon periodically broadcasts a beacon message containing its location. Sensor nodes are individually localized by passively receiving the beacon messages without inter-node communications. For location estimation, a set of potential locations are obtained as candidates for a node's location and then the node's location is determined through the weighted mean of all the potential locations with the weights computed based on residuals.
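
    The final estimation step described here, a residual-weighted mean over candidate locations, is compact enough to sketch. In the hedged example below, the candidate set and residuals are assumed inputs; how LoMoB derives candidates from passively received beacon messages is not reproduced.

      import numpy as np

      def lomob_estimate(candidates, residuals, eps=1e-9):
          """Weighted mean of potential locations: smaller residual => larger weight."""
          candidates = np.asarray(candidates, dtype=float)
          weights = 1.0 / (np.asarray(residuals, dtype=float) + eps)
          weights /= weights.sum()
          return weights @ candidates

      # Candidates (hypothetical) derived from passively received beacon messages.
      cands = [(10.0, 12.0), (11.5, 11.0), (9.0, 13.5)]
      resid = [0.4, 0.1, 0.8]
      print(lomob_estimate(cands, resid))   # pulled toward the low-residual candidate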

  5. Localization with a Mobile Beacon in Underwater Acoustic Sensor Networks

    PubMed Central

    Lee, Sangho; Kim, Kiseon

    2012-01-01

    Localization is one of the most important issues associated with underwater acoustic sensor networks, especially when sensor nodes are randomly deployed. Given that it is difficult to deploy beacon nodes at predetermined locations, localization schemes with a mobile beacon on the sea surface or along the planned path are inherently convenient, accurate, and energy-efficient. In this paper, we propose a new range-free Localization with a Mobile Beacon (LoMoB). The mobile beacon periodically broadcasts a beacon message containing its location. Sensor nodes are individually localized by passively receiving the beacon messages without inter-node communications. For location estimation, a set of potential locations are obtained as candidates for a node's location and then the node's location is determined through the weighted mean of all the potential locations with the weights computed based on residuals. PMID:22778597

  6. Investigation of Storage Options for Scientific Computing on Grid and Cloud Facilities

    NASA Astrophysics Data System (ADS)

    Garzoglio, Gabriele

    2012-12-01

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on "bare metal" nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O-intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is a decrease in total read throughput as the number of client processes increases, which occurs in some implementations but not in others.

  7. Interdependent Multi-Layer Networks: Modeling and Survivability Analysis with Applications to Space-Based Networks

    PubMed Central

    Castet, Jean-Francois; Saleh, Joseph H.

    2013-01-01

    This article develops a novel approach and algorithmic tools for the modeling and survivability analysis of networks with heterogeneous nodes, and examines their application to space-based networks. Space-based networks (SBNs) allow the sharing of spacecraft on-orbit resources, such as data storage, processing, and downlink. Each spacecraft in the network can have different subsystem composition and functionality, thus resulting in node heterogeneity. Most traditional survivability analyses of networks assume node homogeneity and, as a result, are not suited to the analysis of SBNs. This work proposes that heterogeneous networks can be modeled as interdependent multi-layer networks, which enables their survivability analysis. The multi-layer aspect captures the breakdown of the network according to common functionalities across the different nodes, and it allows the emergence of homogeneous sub-networks, while the interdependency aspect constrains the network to capture the physical characteristics of each node. Definitions of primitives of failure propagation are devised. Formal characterization of interdependent multi-layer networks, as well as algorithmic tools for the analysis of failure propagation across the network, are developed and illustrated with space applications. The SBN applications considered consist of several networked spacecraft that can tap into each other's Command and Data Handling subsystem, in case of failure of their own, including the Telemetry, Tracking and Command, the Control Processor, and the Data Handling sub-subsystems. Various design insights are derived and discussed, and the capability to perform trade-space analysis with the proposed approach for various network characteristics is indicated. The selected results shown here quantify the incremental survivability gains (with respect to a particular class of threats) of the SBN over the traditional monolith spacecraft. Failure of the connectivity between nodes is also examined, and the results highlight the importance of the reliability of the wireless links between spacecraft (nodes) for enabling any survivability improvements for space-based networks. PMID:23599835
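
    The failure-propagation primitives described here can be sketched as a breadth-first traversal over dependency edges in a layered node set. The spacecraft/subsystem layout below is hypothetical and far simpler than the paper's formal characterization; it only illustrates how a single fault cascades within one spacecraft's layers.

      from collections import deque

      # Each (spacecraft, subsystem) pair is a node; edges express "failure of X
      # causes failure of Y". The layout is hypothetical and heavily simplified.
      dependents = {
          ("sat1", "TTC"): [("sat1", "CP")],   # Control Processor needs TT&C
          ("sat1", "CP"):  [("sat1", "DH")],   # Data Handling needs the CP
          ("sat2", "TTC"): [("sat2", "CP")],
          ("sat2", "CP"):  [("sat2", "DH")],
      }

      def propagate(initial_failures, dependents):
          """Breadth-first propagation of failures along dependency edges."""
          failed = set(initial_failures)
          queue = deque(initial_failures)
          while queue:
              node = queue.popleft()
              for dep in dependents.get(node, []):
                  if dep not in failed:
                      failed.add(dep)
                      queue.append(dep)
          return failed

      print(propagate([("sat1", "TTC")], dependents))
      # sat1's CP and DH fail in cascade; sat2's chain survives, and the SBN's
      # cross-links (sat1 tapping sat2's DH) are what restore the lost function.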

  8. Interdependent multi-layer networks: modeling and survivability analysis with applications to space-based networks.

    PubMed

    Castet, Jean-Francois; Saleh, Joseph H

    2013-01-01

    This article develops a novel approach and algorithmic tools for the modeling and survivability analysis of networks with heterogeneous nodes, and examines their application to space-based networks. Space-based networks (SBNs) allow the sharing of spacecraft on-orbit resources, such as data storage, processing, and downlink. Each spacecraft in the network can have different subsystem composition and functionality, thus resulting in node heterogeneity. Most traditional survivability analyses of networks assume node homogeneity and, as a result, are not suited to the analysis of SBNs. This work proposes that heterogeneous networks can be modeled as interdependent multi-layer networks, which enables their survivability analysis. The multi-layer aspect captures the breakdown of the network according to common functionalities across the different nodes, and it allows the emergence of homogeneous sub-networks, while the interdependency aspect constrains the network to capture the physical characteristics of each node. Definitions of primitives of failure propagation are devised. Formal characterization of interdependent multi-layer networks, as well as algorithmic tools for the analysis of failure propagation across the network, are developed and illustrated with space applications. The SBN applications considered consist of several networked spacecraft that can tap into each other's Command and Data Handling subsystem, in case of failure of their own, including the Telemetry, Tracking and Command, the Control Processor, and the Data Handling sub-subsystems. Various design insights are derived and discussed, and the capability to perform trade-space analysis with the proposed approach for various network characteristics is indicated. The selected results shown here quantify the incremental survivability gains (with respect to a particular class of threats) of the SBN over the traditional monolith spacecraft. Failure of the connectivity between nodes is also examined, and the results highlight the importance of the reliability of the wireless links between spacecraft (nodes) for enabling any survivability improvements for space-based networks.

  9. Electro-mechanical analysis of composite and sandwich multilayered structures by shell elements with node-dependent kinematics

    NASA Astrophysics Data System (ADS)

    Carrera; Valvano; Kulikov

    2018-01-01

    In this work, a new class of finite elements for the analysis of composite and sandwich shells embedding piezoelectric skins and patches is proposed. The main idea of model coupling is developed through the concept of node-dependent kinematics, whereby the same finite element can adopt, at each node, a different approximation of the main unknowns by setting a node-wise through-the-thickness approximation basis. In a global/local approach scenario, the computational costs can be reduced drastically by assuming refined theories only in those zones/nodes of the structural domain where the resulting strain and stress states, and their electro-mechanical coupling, present a complex distribution. Several numerical investigations are carried out to validate the accuracy and efficiency of the present shell element. An accurate representation of mechanical stresses and electric displacements in localized zones is possible at reduced computational cost if the higher-order kinematic capabilities are distributed appropriately. On the contrary, the accuracy of the solution in terms of mechanical displacements and electric potential values depends on the global approximation over the whole structure. The efficacy of the present node-dependent variable kinematic models thus depends on the characteristics of the problem under consideration as well as on the required analysis type.

  10. A Comparative Study on Two Typical Schemes for Securing Spatial-Temporal Top-k Queries in Two-Tiered Mobile Wireless Sensor Networks

    PubMed Central

    Liu, Xingjian; Liang, Junbin; Li, Ran; Ma, Wenpeng; Qi, Chuanda

    2018-01-01

    A novel network paradigm of mobile edge computing, namely TMWSNs (two-tiered mobile wireless sensor networks), has been proposed by researchers in recent years for its high scalability and robustness. However, only a few works have considered the security of TMWSNs. In fact, the storage nodes, which are located at the upper layer of TMWSNs, are prone to attack by adversaries because they play a key role in bridging the sensor nodes and the sink; their compromise may lead to the disclosure of all data stored on them as well as other potentially devastating results. In this paper, we make a comparative study of two typical schemes, EVTopk and VTMSN, which have been proposed recently for securing top-k queries in TMWSNs, through both theoretical analysis and extensive simulations, aiming to identify their disadvantages and possible improvements. We find that both schemes raise communication costs to an unsatisfactory degree: the extra communication cost brought about by transmitting the proof information uses up more than 40% of the total communication cost between the sensor nodes and the storage nodes, and 80% of that between the storage nodes and the sink. We discuss the corresponding reasons and present our suggestions, hoping to inspire researchers working on this subject. PMID:29543745

  11. Contrast-enhanced ultrasound mapping of sentinel lymph nodes in oral tongue cancer-a pilot study.

    PubMed

    Gvetadze, Shalva R; Xiong, Ping; Lv, Mingming; Li, Jun; Hu, Jingzhou; Ilkaev, Konstantin D; Yang, Xin; Sun, Jian

    2017-03-01

    To assess the usefulness of contrast-enhanced ultrasound (CEUS) with peritumoral injection of a microbubble contrast agent for detecting the sentinel lymph nodes in oral tongue carcinoma. The study was carried out on 12 patients with T1-2cN0 oral tongue cancer. A radical resection of the primary disease was planned; a modified radical supraomohyoid neck dissection was reserved for patients with larger lesions (T2, n = 8). The treatment plan and execution were not influenced by the sentinel node mapping outcome. The SonoVue™ contrast agent (Bracco Imaging, Milan, Italy) was utilized. After detection, the position and radiologic features of the sentinel nodes were recorded. The identification rate of the sentinel nodes was 91.7%; one patient failed to demonstrate any enhanced areas. A total of 15 sentinel nodes were found in the remaining 11 cases, a mean of 1.4 nodes per patient. The sentinel nodes were localized as follows: Level IA, 1 node (6.7%); Level IB, 11 nodes (73.3%); Level IIA, 3 nodes (20.0%). No contrast-related adverse effects were observed. For oral tongue tumours, CEUS is a feasible and potentially widely available approach to sentinel node mapping. Further clinical research is required to establish the role of CEUS detection of the sentinel nodes in oral cavity cancers.

  12. CSRQ: Communication-Efficient Secure Range Queries in Two-Tiered Sensor Networks

    PubMed Central

    Dai, Hua; Ye, Qingqun; Yang, Geng; Xu, Jia; He, Ruiliang

    2016-01-01

    In recent years, we have seen many applications of secure queries in two-tiered wireless sensor networks, where storage nodes are responsible for storing data from nearby sensor nodes and answering queries from the Sink. It is critical to protect data security from a compromised storage node. In this paper, the Communication-efficient Secure Range Query (CSRQ)—a privacy- and integrity-preserving range query protocol—is proposed to prevent attackers from gaining information about both the data collected by sensor nodes and the queries issued by the Sink. To preserve privacy and integrity, in addition to employing encoding mechanisms, a novel data structure called the encrypted constraint chain is proposed, which embeds the information needed for integrity verification. The Sink can use this encrypted constraint chain to verify the query result. The performance evaluation shows that CSRQ has lower communication cost than the current range query protocols. PMID:26907293

  13. Sentinel lymph node scintigraphy in cutaneous melanoma using a planar calibration phantom filled with Tc-99m pertechnetate solution for body contouring.

    PubMed

    Peştean, Claudiu; Bărbuş, Elena; Piciu, Andra; Larg, Maria Iulia; Sabo, Alexandrina; Moisescu-Goia, Cristina; Piciu, Doina

    2016-01-01

    Melanoma is a disease with an increasing incidence worldwide. Sentinel lymph node scintigraphy is a diagnostic tool that offers important information regarding the localization of the sentinel lymph nodes, providing important input for establishing a pertinent and personalized therapeutic strategy. The gold standard in body contouring for sentinel lymph node scintigraphy is to use a planar flood source of Cobalt-57 (Co-57) placed behind the patient, against the gamma camera. The purpose of the study was to determine the performance of the procedure using a flood calibration planar phantom filled with an aqueous solution of Technetium-99m (Tc-99m), in comparison with the published data obtained with the gold standard. The study was conducted in the Department of Nuclear Medicine of the Oncology Institute "Prof. Dr. Ion Chiricuţă" Cluj-Napoca in 95 patients, 31 males and 64 females. The localization of the lesions, grouped by anatomical region, was as follows: 23 on the lower limbs, 17 on the upper limbs, 45 on the thorax and 10 on the abdomen. The calibration flood phantom containing an aqueous solution of Tc-99m pertechnetate was used as a planar source to visualize the body contour of the patients for proper anatomical localization of the detected sentinel lymph nodes. The radiopharmaceutical uptake in sentinel lymph nodes was recorded in serial images following peritumoral injection of 1 ml of Tc-99m albumin nanocolloid solution with an activity of 1 mCi (37 MBq). The protocol consisted of early planar images acquired within 15 minutes post-injection and delayed images at 2-3 hours, with additional images at 6-7 hours when necessary. The acquisition matrix was 128×128 pixels for an acquisition time of 5-7 minutes. The skin projection of the sentinel lymph nodes was marked on the skin, and surgical removal of the detected sentinel lymph nodes was performed the next day using a gamma probe for detection and measurements. Sentinel lymph nodes were detected in 92 cases and confirmed with the gamma probe during the surgical procedure. The localization of the lymph nodes was as follows: for tumors localized on the lower limb, 23 lymph nodes were localized in the inguinal region; for tumors localized on the upper limb, 17 lymph nodes were localized in the axilla; for tumors localized on the thorax, 40 lymph nodes were localized in the axilla and 3 in the inguinal region; for tumors localized on the abdomen, 1 lymph node was localized in the axilla and 8 in the inguinal region. Regarding the negative sentinel lymph node cases, 2 were registered for primary lesions localized on the thorax and 1 for a lesion localized on the abdomen. According to histology, 26 cases revealed lymphatic metastatic invasion. Dose rates measured at 1 m from the calibration phantom had an average value of 3.46 μSv/h (SD 0.19), and at 1.4 m the value was 2.57 μSv/h (SD 0.22). Dose rates measured at the same distances from the Co-57 planar flood source had average values of 32.5 μSv/h (SD 0.11) and 24.1 μSv/h (SD 0.14), respectively. The planar calibration flood phantom is an effective tool for body contouring in sentinel lymph node scintigraphy and offers accurate anatomical information to efficiently localize the detected sentinel lymph nodes in melanoma, used and reported here for the first time as a pertinent alternative in our department.

  14. Compressive sensing of high betweenness centrality nodes in networks

    NASA Astrophysics Data System (ADS)

    Mahyar, Hamidreza; Hasheminezhad, Rouzbeh; Ghalebi K., Elahe; Nazemian, Ali; Grosu, Radu; Movaghar, Ali; Rabiee, Hamid R.

    2018-05-01

    Betweenness centrality is a prominent centrality measure expressing the importance of a node within a network in terms of the fraction of shortest paths passing through that node. Nodes with high betweenness centrality have significant impacts on the spread of influence and ideas in social networks, the user activity in mobile phone networks, the contagion process in biological networks, and the bottlenecks in communication networks. Thus, identifying the k highest betweenness centrality nodes in a network is of great interest in many applications. In this paper, we introduce CS-HiBet, a new method to efficiently detect the top-k betweenness centrality nodes in networks using compressive sensing. CS-HiBet can operate as a distributed algorithm, using only the local information available at each node. Hence, it is applicable to large real-world and unknown networks in which global approaches are usually unrealizable. The performance of the proposed method is evaluated by extensive simulations on several synthetic and real-world networks. The experimental results demonstrate that CS-HiBet outperforms the best existing methods with notable improvements.
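
    As a point of reference, the exact computation that CS-HiBet approximates, ranking nodes by betweenness and keeping the k highest, can be written in a few lines with networkx; the compressive-sensing recovery itself is beyond this sketch.

      import networkx as nx

      def top_k_betweenness(G, k):
          """Exact baseline that CS-HiBet approximates: rank nodes by the fraction
          of shortest paths passing through them and keep the k highest."""
          bc = nx.betweenness_centrality(G)   # Brandes' algorithm
          return sorted(bc, key=bc.get, reverse=True)[:k]

      G = nx.watts_strogatz_graph(n=200, k=6, p=0.1, seed=42)
      print(top_k_betweenness(G, k=5))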

  15. Nonvolatile memory with Co-SiO2 core-shell nanocrystals as charge storage nodes in floating gate

    NASA Astrophysics Data System (ADS)

    Liu, Hai; Ferrer, Domingo A.; Ferdousi, Fahmida; Banerjee, Sanjay K.

    2009-11-01

    In this letter, we report a nanocrystal floating-gate memory with Co-SiO2 core-shell nanocrystals as charge storage nodes. Using a water-in-oil microemulsion scheme, Co-SiO2 core-shell nanocrystals were synthesized and closely packed to achieve a high-density matrix in the floating gate without aggregation. The insulator shell also helps to increase the thermal stability of the nanocrystal metal core during the fabrication process, improving memory performance.

  16. Forming an ad-hoc nearby storage, based on IKAROS and social networking services

    NASA Astrophysics Data System (ADS)

    Filippidis, Christos; Cotronis, Yiannis; Markou, Christos

    2014-06-01

    We present an ad-hoc "nearby" storage, based on IKAROS and social networking services such as Facebook. By design, IKAROS is able to increase or decrease the number of nodes of the I/O system instance on the fly, without bringing everything down or losing data. IKAROS can decide the file partition distribution schema by taking into account requests from the user or an application, as well as a domain or Virtual Organization policy. In this way, it is possible to form multiple instances of smaller-capacity, higher-bandwidth storage utilities capable of responding in an ad-hoc manner. This approach, focusing on flexibility, can scale both up and down and so can provide more cost-effective infrastructures for both large-scale and smaller-size systems. A set of experiments is performed comparing IKAROS with PVFS2, using multiple client requests under the HPC IOR benchmark and MPICH2.

  17. Age-related changes in localization of injected radiolabelled lymphocytes in the lymph nodes of antigen-stimulated mice.

    PubMed Central

    Inchley, C J; Micklem, H S; Barrett, J; Hunter, J; Minty, C

    1976-01-01

    The localization of i.v. injected syngeneic lymph node cells, radiolabelled with 51Cr or 75Se-L-selenomethionine, was studied in male CBA/H mice aged between 3 and 30 months. The following results were obtained. (1) Localization of cells from young adult donors was greater in the s.c. lymph nodes of old than of young recipients, the main increase being between 15 and 17 months of age. Increases in lymph node weight and DNA-synthesis were also seen at this time; but the rise in cell localization was significant even when calculated per unit of tissue weight. Splenic localization either declined slightly with age or, like the liver, showed no significant change. (2) Local antigenic stimulation by a single injection of sheep erythrocytes into one front footpad, 24 hr before lymph node cell injection, resulted in increased localization in the regional lymph nodes of 3-17 month old, but rarely of 24-30 month old mice. (3) No consistent differences in localization were observed between lymph node cells from 4-month and 25-month old donors. Both age-related and antigen-related increases in cell localization were at least partly attributable to an enhanced rate of entry of lymphocytes from the blood to the lymph nodes. Although the changes underlying the decline in antigen-related localization of cells in old recipients have still to be clarified, it is probable that the defective immune responses of old mice result partly from this decline. PMID:991459

  18. Exploring novel key regulators in breast cancer network.

    PubMed

    Ali, Shahnawaz; Malik, Md Zubbair; Singh, Soibam Shyamchand; Chirom, Keilash; Ishrat, Romana; Singh, R K Brojen

    2018-01-01

    The breast cancer network constructed from 70 experimentally verified genes is found to follow a hierarchical scale-free nature, with heterogeneous modular organization and diverse leading hubs. The topological parameters of this network (degree distributions, clustering coefficient, connectivity and centralities) obey fractal rules, indicating the absence of the centrality-lethality rule and efficient communication among the components. From a network-theoretic approach, we identified, out of a large number of leading hubs, a few key regulators that are deeply rooted from the top to the bottom of the network, serve as the backbone of the network, and are possible target genes. However, p53, which is one of these key regulators, is found to be of low rank and keeps a low profile, but directly cross-talks with the important genes BRCA1 and BRCA2. The popularity of these hubs changes in an unpredictable way at various levels of organization, showing a disassortative nature. The local community paradigm approach in this network shows strong correlation of nodes in the majority of modules/sub-modules (fast communication among nodes) and weak correlation of nodes in only a few modules/sub-modules (slow communication among nodes) at various levels of network organization.

  19. New approach to predict photoallergic potentials of chemicals based on murine local lymph node assay.

    PubMed

    Maeda, Yosuke; Hirosaki, Haruka; Yamanaka, Hidenori; Takeyoshi, Masahiro

    2018-05-23

    Photoallergic dermatitis caused by pharmaceuticals and other consumer products is a very important issue in human health. However, the S10 guidelines of the International Conference on Harmonization do not recommend the existing prediction methods for photoallergy because of their low predictability for human cases. We applied the local lymph node assay (LLNA), a reliable, quantitative skin sensitization prediction test, to develop a new photoallergy prediction method. This method involves a three-step approach: (1) ultraviolet (UV) absorption analysis; (2) determination of the no observed adverse effect level for skin phototoxicity based on the LLNA; and (3) photoallergy evaluation based on the LLNA. The photoallergic potential of chemicals was evaluated by comparing lymph node cell proliferation among groups treated with chemicals at minimal effect levels for skin sensitization and skin phototoxicity, under UV irradiation (UV+) or non-UV irradiation (UV-). A case showing a significant difference (P < .05) in lymph node cell proliferation rates between the UV- and UV+ groups was considered positive for a photoallergic reaction. Of the 13 chemicals tested, seven human photoallergens tested positive, and the other six, which have no evidence of causing photoallergic dermatitis or of UV absorption, tested negative. Among these chemicals, doxycycline hydrochloride and minocycline hydrochloride are both tetracycline antibiotics with different photoallergic properties, and the new method clearly distinguished between them. These findings suggest the high predictability of our method; it is therefore promising and effective for predicting human photoallergens. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Clinical and histopathological factors affecting failed sentinel node localization in axillary staging for breast cancer.

    PubMed

    Dordea, Matei; Colvin, Hugh; Cox, Phil; Pujol Nicolas, Andrea; Kanakala, Venkat; Iwuchukwu, Obi

    2013-04-01

    Sentinel lymph node biopsy (SLNB) has become the standard of care in axillary staging of clinically node-negative breast cancer patients. Our aim was to analyze the reasons for failure of SLN localization by means of a multivariate analysis of clinical and histopathological factors. We performed a review of 164 consecutive breast cancer patients who underwent SLNB with a superficial injection technique. In 9 of 164 patients no nodes were identified; in 7 of these 9 patients, no evidence of radioactivity or blue dye was observed. Age and nodal status were the only statistically significant factors (p < 0.05). For every unit increase in age there was a 9% reduced chance of failed SLN localization, and patients with negative nodal status had a 90% lower risk of failed sentinel node localization than patients with macroscopic or extracapsular nodal invasion. The results suggest that altered lymphatic dynamics secondary to tumour burden may play a role in failed sentinel node localization. We showed that in all failed localizations the radiocolloid persisted around the injection site, showing only limited local diffusion. While clinical and histopathological data may provide some clues as to why sentinel node localization fails, we further hypothesize that the integrity of peri-areolar lymphatics is important for successful localization. Copyright © 2012 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  1. [Effects of herbicide on grape leaf photosynthesis and nutrient storage].

    PubMed

    Tan, Wei; Wang, Hui; Zhai, Heng

    2011-09-01

    Taking three adjacent vineyards as test objects, this paper studied the effects of applying herbicide during the growth season on the leaf photosynthetic apparatus and branch nutrient storage of the grape Kyoho (Vitis vinifera × Vitis labrusca). In vineyards T1 and T2, where herbicide was applied in 2009, the net photosynthesis rate (Pn) of grape leaves decreased significantly compared with that in vineyard CK, where manual weeding was implemented. The leaves at the fourth node in vineyard T1 and those at the sixth node in vineyard T2 had the largest decrease in Pn (40.5% and 32.1%, respectively). Herbicide had only slight effects on leaf stomatal conductance (Gs). In T1, where herbicide application continued in 2010, Pn was still significantly lower than in CK; in T2, where manual weeding was implemented in 2010, the Pn and Gs of top- and middle-node leaves were slightly higher than those in T1, but Pn was still lower than in CK, showing the after-effects of herbicide residues. The herbicide application in 2009 decreased the leaf maximum photochemical efficiency of PS II (Fv/Fm) and the performance index (PI) while increasing the relative variable fluorescence at the J step and K step, indicating damage to electron transport at the PS II centre and to the oxygen-evolving complex. Herbicide application decreased the pigment content of middle-node leaves in a dose-dependent manner. Applying herbicide enhanced leaf catalase and peroxidase activities significantly and increased the superoxide dismutase (SOD) activity of middle-node leaves, but decreased the SOD activity of top- and bottom-node leaves. After treatment with herbicide, the ascorbate peroxidase (APX) activity of middle- and bottom-node leaves increased, but that of top-node leaves decreased. Herbicide treatment aggravated leaf lipid peroxidation, and reduced the storage of soluble sugar, starch, free amino acids, and soluble protein in branches.

  2. Fuzzy mobile-robot positioning in intelligent spaces using wireless sensor networks.

    PubMed

    Herrero, David; Martínez, Humberto

    2011-01-01

    This work presents the development and experimental evaluation of a method based on fuzzy logic to locate mobile robots in an Intelligent Space using wireless sensor networks (WSNs). The problem consists of locating a mobile node using only inter-node range measurements, which are estimated from radio frequency signal strength attenuation. The sensor model of these measurements is very noisy and unreliable. The proposed method uses fuzzy logic to model and deal with such uncertain information. In addition, the proposed approach is compared with a probabilistic technique, showing that the fuzzy approach is able to handle highly uncertain situations that are difficult to manage with well-known localization methods.
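
    The noisy sensor model described here is commonly a log-distance path-loss law, and the fuzzy treatment attaches membership degrees to the resulting range estimates. A hedged sketch with hypothetical calibration constants and a generic triangular membership function, not the paper's actual rule base:

      import math
      import random

      # Log-distance path-loss model: rssi = P0 - 10*eta*log10(d/d0) + noise.
      P0, ETA, D0 = -40.0, 2.5, 1.0   # hypothetical calibration constants

      def rssi_from_distance(d):
          return P0 - 10 * ETA * math.log10(d / D0) + random.gauss(0, 3.0)

      def distance_from_rssi(rssi):
          return D0 * 10 ** ((P0 - rssi) / (10 * ETA))

      def triangular_membership(x, lo, peak, hi):
          """Fuzzy degree to which range estimate x belongs to a distance class."""
          if x <= lo or x >= hi:
              return 0.0
          return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

      d_hat = distance_from_rssi(rssi_from_distance(12.0))   # noisy estimate
      print(d_hat, triangular_membership(d_hat, 5.0, 12.0, 20.0))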

  3. Localization of antigen-specific lymphocytes following lymph node challenge.

    PubMed Central

    Liu, H; Splitter, G A

    1986-01-01

    The effect of subcutaneous injections of Brucella abortus strain 19 antigen on the specific localization of autologous lymphocytes in the regional nodes of calves was analysed by fluorescent labelling and flow cytometry. Both in vitro and in vivo FITC labelling of lymphocytes indicated the preferential migration of lymphocytes from a previously challenged lymph node to a recently challenged lymph node. However, lymphocytes from a lymph node challenged with B. abortus failed to localize preferentially in a lymph node challenged with a control antigen, Listeria monocytogenes. Lymph node cells, enriched for T lymphocytes and isolated from primary stimulated or secondary challenged B. abortus lymph nodes, could proliferate when cultured with autologous antigen-pulsed macrophages. The kinetics of [3H]thymidine incorporation in lymphocytes from secondarily challenged lymph nodes occurred earlier and to a greater extent when compared with lymphocytes from primary challenged lymph nodes. Our data show that the accumulation of B. abortus-specific lymphocytes in secondarily challenged lymph nodes is increased by the presence of the specific antigen. PMID:2426183

  4. ACStor: Optimizing Access Performance of Virtual Disk Images in Clouds

    DOE PAGES

    Wu, Song; Wang, Yihong; Luo, Wei; ...

    2017-03-02

    In virtualized data centers, virtual disk images (VDIs) serve as the containers in the virtual environment, so their access performance is critical for overall system performance. Distributed VDI chunk storage systems have been proposed to alleviate the I/O bottleneck of VM management. As the system scales up to a large number of running VMs, however, the overall network traffic inevitably becomes unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing the VMs. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve this performance issue. In comparison with the existing research, our solution dynamically balances the traffic workloads of VDI chunk accesses based on the run-time network state. Specifically, compute nodes with lightly loaded traffic are adaptively assigned more chunk access requests from remote VMs, and vice versa, which effectively eliminates the above problem and thus improves the I/O performance of VMs. We also implement a prototype based on our ACStor design and evaluate it with various benchmarks on a real cluster with 32 nodes and a simulated platform with 256 nodes. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8× gain in VM booting time and VM I/O throughput in comparison with other state-of-the-art approaches.
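
    The load-aware dispatch idea, routing each chunk request to the replica holder with the lightest current traffic, can be sketched with a small dispatcher. The class below is a hypothetical illustration, not ACStor's implementation; simple counters stand in for the run-time network state.

      class ChunkDispatcher:
          """Route each chunk request to the least-loaded node holding a replica.
          Hypothetical sketch: counters stand in for run-time network state."""

          def __init__(self, replicas):
              self.replicas = replicas   # chunk_id -> list of node ids
              self.load = {n: 0 for nodes in replicas.values() for n in nodes}

          def dispatch(self, chunk_id, cost=1):
              node = min(self.replicas[chunk_id], key=self.load.get)
              self.load[node] += cost    # account for the new transfer
              return node

          def complete(self, node, cost=1):
              self.load[node] -= cost    # transfer finished

      d = ChunkDispatcher({"c1": ["n1", "n2"], "c2": ["n2", "n3"]})
      print([d.dispatch("c1"), d.dispatch("c2"), d.dispatch("c1")])
      # -> ['n1', 'n2', 'n1']: requests spread toward lightly loaded nodes,
      #    with ties broken by replica list order.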

  5. Surgery of the vulva in vulvar cancer.

    PubMed

    Micheletti, Leonardo; Preti, Mario

    2014-10-01

    The standard radical mutilating surgery for the treatment of invasive vulval carcinoma is today being replaced by a conservative and individualised approach. Surgical conservative modifications that are currently considered safe with regard to the vulval lesion are separate skin vulval-groin incisions, drawn according to the lesion diameter, and wide local radical excision or partial radical vulvectomy with 1-2 cm of clinically clear surgical margins. Regarding inguinofemoral lymph node management, surgical conservative modifications that do not compromise patient survival are omission of groin lymphadenectomy only when tumour stromal invasion is ≤ 1 mm, unilateral groin lymphadenectomy only in well-lateralised early lesions, and total or radical inguinofemoral lymphadenectomy with preservation of the femoral fascia when full groin resection is needed. Sentinel lymph node dissection is a promising technique, but it should not be routinely employed outside referral centres. Pelvic nodes are better managed by radiation. Locally advanced vulval carcinoma can be managed by ultraradical surgery, exclusive radiotherapy or chemoradiation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. BFL: a node and edge betweenness based fast layout algorithm for large scale networks

    PubMed Central

    Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru

    2009-01-01

    Background: Network visualization serves as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization and biological node and graph attributes, and/or are not available for large-scale networks of more than 10000 elements. Results: To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm for calculating betweenness to minimize the preprocessing cost. Using this metric, we also devise a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n^2) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion: Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimization. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673

  7. Experimental and computational analysis of a large protein network that controls fat storage reveals the design principles of a signaling network.

    PubMed

    Al-Anzi, Bader; Arpp, Patrick; Gerges, Sherif; Ormerod, Christopher; Olsman, Noah; Zinn, Kai

    2015-05-01

    An approach combining genetic, proteomic, computational, and physiological analysis was used to define a protein network that regulates fat storage in budding yeast (Saccharomyces cerevisiae). A computational analysis of this network shows that it is not scale-free, and is best approximated by the Watts-Strogatz model, which generates "small-world" networks with high clustering and short path lengths. The network is also modular, containing energy level sensing proteins that connect to four output processes: autophagy, fatty acid synthesis, mRNA processing, and MAP kinase signaling. The importance of each protein to network function is dependent on its Katz centrality score, which is related both to the protein's position within a module and to the module's relationship to the network as a whole. The network is also divisible into subnetworks that span modular boundaries and regulate different aspects of fat metabolism. We used a combination of genetics and pharmacology to simultaneously block output from multiple network nodes. The phenotypic results of this blockage define patterns of communication among distant network nodes, and these patterns are consistent with the Watts-Strogatz model.
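
    The Katz centrality score used here to rank proteins can be made concrete with a short power-iteration sketch. The adjacency matrix below is a toy stand-in for the fat-storage protein network, and the alpha and beta values are arbitrary illustrative choices.

      import numpy as np

      def katz_centrality(A, alpha=0.1, beta=1.0, iters=200, tol=1e-8):
          """Katz centrality by fixed-point iteration: x = alpha*A@x + beta,
          summing walks of all lengths with attenuation alpha (alpha must be
          below 1/lambda_max of A for convergence)."""
          x = np.ones(A.shape[0])
          for _ in range(iters):
              x_new = alpha * A @ x + beta
              if np.linalg.norm(x_new - x, 1) < tol:
                  break
              x = x_new
          return x / np.linalg.norm(x)

      # Toy adjacency matrix standing in for the fat-storage protein network.
      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
      print(katz_centrality(A))   # node 2, the best-connected, scores highest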

  8. Sampling versus systematic full lymphatic dissection in surgical treatment of non-small cell lung cancer.

    PubMed

    Koulaxouzidis, Georgios; Karagkiouzis, Grigorios; Konstantinou, Marios; Gkiozos, Ioannis; Syrigos, Konstantinos

    2013-04-22

    The extent of mediastinal lymph node assessment during surgery for non-small cell lung cancer remains controversial. Different techniques are used, ranging from simple visual inspection of the unopened mediastinum to an extended bilateral lymph node dissection, and different terms are used to define these techniques. Sampling is the removal of one or more lymph nodes under the guidance of pre-operative findings. Systematic (full) nodal dissection is the removal of all mediastinal tissue containing the lymph nodes systematically within anatomical landmarks. A Medline search was conducted to identify articles in the English language that addressed the role of mediastinal lymph node resection in the treatment of non-small cell lung cancer. Arguments in favor of full lymphatic dissection include complete resection, improved nodal staging and better local control due to resection of undetected micrometastases. Arguments against routine full lymphatic dissection are increased morbidity, increased operative time, and lack of evidence of improved survival. For complete resection of non-small cell lung cancer, many authors recommend systematic nodal dissection as the standard approach during surgery, and suggest that this both provides adequate nodal staging and guarantees complete resection. Whether extending the lymph node dissection influences survival or recurrence rate is still not known. There are valid arguments in its favor, in terms not only of improved local control but also of improved long-term survival. However, the impact of lymph node dissection on long-term survival should be further assessed by large-scale multicenter randomized trials.

  9. Cloud object store for archive storage of high performance computing data using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-06-30

    Cloud object storage is enabled for archived data, such as checkpoints and results, of high performance computing applications using a middleware process. A plurality of archived files, such as checkpoint files and results, generated by a plurality of processes in a parallel computing system are stored by obtaining the plurality of archived files from the parallel computing system; converting the plurality of archived files to objects using a log structured file system middleware process; and providing the objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  10. Extended mesometrial resection (EMMR): Surgical approach to the treatment of locally advanced cervical cancer based on the theory of ontogenetic cancer fields.

    PubMed

    Wolf, Benjamin; Ganzer, Roman; Stolzenburg, Jens-Uwe; Hentschel, Bettina; Horn, Lars-Christian; Höckel, Michael

    2017-08-01

    Based on ontogenetic-anatomic considerations, we have introduced total mesometrial resection (TMMR) and laterally extended endopelvic resection (LEER) as surgical treatments for patients with cancer of the uterine cervix, FIGO stages IB1-IVA. For a subset of patients with locally advanced disease, we have sought to develop an operative strategy characterized by the resection of additional tissue at risk for tumor infiltration as compared to TMMR, but less than in LEER, preserving urinary bladder function. We conducted a prospective single-center study to evaluate the feasibility of extended mesometrial resection (EMMR) and therapeutic lymph node dissection as a surgical treatment approach for patients with cervical cancer fixed to the urinary bladder and/or its mesenteries as determined by intraoperative evaluation. None of the patients received postoperative adjuvant radiotherapy. 48 consecutive patients were accrued into the trial. Median tumor size was 5 cm, and 85% of all patients were found to have lymph node metastases. Complete tumor resection (R0) was achieved in all cases. Recurrence-free survival at 5 years was 54.1% (95% CI 38.3-69.9). The overall survival rate was 62.6% (95% CI 45.6-79.6) at 5 years. Perioperative morbidity represented by grade II and III complications (determined by the Franco-Italian glossary) occurred in 25% and 15% of patients, respectively. We demonstrate in this study the feasibility of EMMR as a surgical treatment approach for patients with locally advanced cervical cancer and regional lymph node invasion without the necessity for postoperative adjuvant radiation.

  11. A Hybrid DV-Hop Algorithm Using RSSI for Localization in Large-Scale Wireless Sensor Networks.

    PubMed

    Cheikhrouhou, Omar; M Bhatti, Ghulam; Alroobaea, Roobaea

    2018-05-08

    With the increasing realization of the Internet-of-Things (IoT) and the rapid proliferation of wireless sensor networks (WSN), estimating the location of wireless sensor nodes is emerging as an important issue. Traditional ranging-based localization algorithms use triangulation for estimating the physical location of only those wireless nodes that are within one-hop distance from the anchor nodes. Multi-hop localization algorithms, on the other hand, aim at localizing wireless nodes that may physically reside multiple hops away from anchor nodes. These latter algorithms have attracted growing interest from the research community due to the smaller number of anchor nodes required. One such algorithm, known as DV-Hop (Distance Vector Hop), has gained popularity due to its simplicity and lower cost. However, DV-Hop suffers from reduced accuracy due to the fact that it exploits only the network topology (i.e., number of hops to anchors) rather than the distances between pairs of nodes. In this paper, we propose an enhanced DV-Hop localization algorithm that also uses the RSSI values associated with links between one-hop neighbors. Moreover, we exploit already localized nodes by promoting them to become additional anchor nodes. Our simulations have shown that the proposed algorithm significantly outperforms the original DV-Hop localization algorithm and two of its recently published variants, namely RSSI Auxiliary Ranging and the Selective 3-Anchor DV-hop algorithm. More precisely, in some scenarios, the proposed algorithm improves the localization accuracy by almost 95%, 90% and 70% as compared to the basic DV-Hop, Selective 3-Anchor, and RSSI DV-Hop algorithms, respectively.
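
    The enhanced algorithm builds on classic DV-Hop, whose three steps (hop-count flooding, per-anchor hop-size estimation, and multilateration) are easy to sketch. The following Python/NumPy illustration covers only the basic DV-Hop that the paper improves upon, not the RSSI-enhanced variant; it assumes a connected deployment, at least three anchors, and made-up coordinates:

    ```python
    import numpy as np
    import networkx as nx

    def lateration(anchor_xy, dists):
        """Least-squares position from anchors and distance estimates,
        linearized by subtracting the last circle equation."""
        A = 2.0 * (anchor_xy[-1] - anchor_xy[:-1])
        b = (dists[:-1] ** 2 - dists[-1] ** 2
             - (anchor_xy[:-1] ** 2).sum(axis=1) + (anchor_xy[-1] ** 2).sum())
        return np.linalg.lstsq(A, b, rcond=None)[0]

    def dv_hop(G, anchor_pos):
        """Classic DV-Hop over connectivity graph G. anchor_pos maps
        anchor node -> (x, y); returns estimates for the other nodes."""
        anchors = list(anchor_pos)
        hops = {a: nx.single_source_shortest_path_length(G, a) for a in anchors}

        # Step 2: each anchor converts its hop counts to the other anchors
        # into an average distance-per-hop.
        hop_size = {}
        for a in anchors:
            dist = sum(np.linalg.norm(np.subtract(anchor_pos[a], anchor_pos[b]))
                       for b in anchors if b != a)
            hop_size[a] = dist / sum(hops[a][b] for b in anchors if b != a)

        # Step 3: unknowns scale hop counts by the nearest anchor's hop
        # size, then multilaterate.
        xy = np.array([anchor_pos[a] for a in anchors], dtype=float)
        est = {}
        for n in G.nodes:
            if n in anchor_pos or any(n not in hops[a] for a in anchors):
                continue  # skip anchors and unreachable nodes
            nearest = min(anchors, key=lambda a: hops[a][n])
            d = np.array([hops[a][n] * hop_size[nearest] for a in anchors])
            est[n] = lateration(xy, d)
        return est

    # Toy deployment: 40 nodes, radio range 3, first 4 nodes are anchors.
    rng = np.random.default_rng(1)
    pts = rng.uniform(0, 10, (40, 2))
    G = nx.Graph((i, j) for i in range(40) for j in range(i + 1, 40)
                 if np.linalg.norm(pts[i] - pts[j]) < 3.0)
    est = dv_hop(G, {i: pts[i] for i in range(4)})
    k = next(iter(est))
    print(est[k], "vs true", pts[k])
    ```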

  12. Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.

    PubMed

    Monica, Stefania; Ferrari, Gianluigi

    2018-05-17

    Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
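
    The paper's core idea, fitting a linear model of the UWB range error by least squares and subtracting it before localization, can be shown in a few lines. A hedged sketch with synthetic calibration numbers standing in for the paper's measurement campaign; the fitted coefficients are whatever the data yield:

    ```python
    import numpy as np

    # Synthetic calibration pairs: true distances vs raw UWB range estimates.
    true_d = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    meas_d = np.array([1.08, 2.15, 3.19, 4.31, 5.36, 6.44])

    # Least-squares fit of a linear error model: error = a * range + b.
    # The model is expressed in the *measured* range, since the true
    # distance is unknown at runtime.
    a, b = np.polyfit(meas_d, meas_d - true_d, deg=1)

    def correct(d_hat):
        """Subtract the modeled bias from a raw UWB range estimate."""
        return d_hat - (a * d_hat + b)

    print(correct(3.2))  # corrected range, ready for the localization step
    ```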

  13. Enabling High-performance Interactive Geoscience Data Analysis Through Data Placement and Movement Optimization

    NASA Astrophysics Data System (ADS)

    Zhu, F.; Yu, H.; Rilee, M. L.; Kuo, K. S.; Yu, L.; Pan, Y.; Jiang, H.

    2017-12-01

    Since the establishment of data archive centers and the standardization of file formats, scientists are required to search metadata catalogs for the data they need and download the data files to their local machines to carry out data analysis. This approach has facilitated data discovery and access for decades, but it inevitably leads to data transfer from data archive centers to scientists' computers through low-bandwidth Internet connections. Data transfer becomes a major performance bottleneck in such an approach. Combined with generally constrained local compute/storage resources, these factors limit the extent of scientists' studies and deprive them of timely outcomes. Thus, this conventional approach is not scalable with respect to either the volume or the variety of geoscience data. A much more viable solution is to couple analysis and storage systems to minimize data transfer. In our study, we compare loosely coupled approaches (exemplified by Spark and Hadoop) and tightly coupled approaches (exemplified by parallel distributed database management systems, e.g., SciDB). In particular, we investigate the optimization of data placement and movement to effectively tackle the variety challenge, and broaden the use of parallelization to address the volume challenge. Our goal is to enable high-performance interactive analysis for a good portion of geoscience data analysis exercises. We show that tightly coupled approaches can concentrate data traffic between local storage systems and compute units, and thereby optimize bandwidth utilization to achieve better throughput. Based on our observations, we develop a geoscience data analysis system that tightly couples analysis engines with storage and has direct access to the detailed map of data partition locations. Through an innovative data partitioning and distribution scheme, our system has demonstrated scalable and interactive performance in real-world geoscience data analysis applications.

  14. On Designing Thermal-Aware Localized QoS Routing Protocol for in-vivo Sensor Nodes in Wireless Body Area Networks.

    PubMed

    Monowar, Muhammad Mostafa; Bajaber, Fuad

    2015-06-15

    In this paper, we address the thermal rise and Quality-of-Service (QoS) provisioning issues for an intra-body Wireless Body Area Network (WBAN) with in-vivo sensor nodes. We propose a thermal-aware QoS routing protocol, called TLQoS, that facilitates the system in achieving the desired QoS in terms of delay and reliability for diverse traffic types, avoids the formation of highly heated nodes known as hotspots, and keeps the temperature rise along the network at an acceptable level. TLQoS exploits a modular architecture wherein different modules perform integrated operations to provide multiple QoS services with lower temperature rise. To address the challenges of the highly dynamic wireless environment inside the human body, TLQoS implements potential-based localized routing that requires only local neighborhood information. TLQoS avoids routing-loop formation and reduces the number of hop traversals by exploiting a hybrid potential and tuning a configurable parameter. We perform extensive simulations of TLQoS, and the results show that TLQoS achieves significant performance improvements over state-of-the-art approaches.
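
    The abstract only outlines TLQoS, but potential-based localized routing is simple to illustrate: each node picks a next hop using nothing beyond its neighbor table, trading progress toward the sink against thermal load. The following is a hypothetical Python sketch, not the TLQoS specification; the weight alpha, the hotspot threshold t_max, and the neighbor-tuple layout are all invented for illustration:

    ```python
    def next_hop(neighbors, alpha=0.7, t_max=1.5):
        """Pick a next hop from purely local information. Each neighbor
        entry is (node_id, hop_potential_toward_sink, temperature_rise).
        Neighbors at or above the hotspot threshold t_max are excluded;
        alpha trades progress toward the sink against thermal load."""
        cool = [n for n in neighbors if n[2] < t_max]
        if not cool:
            return None  # back off until a neighbor cools down
        return min(cool, key=lambda n: alpha * n[1] + (1 - alpha) * n[2])[0]

    # B is closest to the sink but already a hotspot, so A is selected.
    print(next_hop([("A", 3, 0.4), ("B", 2, 1.6), ("C", 4, 0.1)]))  # -> A
    ```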

  15. On Designing Thermal-Aware Localized QoS Routing Protocol for in-vivo Sensor Nodes in Wireless Body Area Networks

    PubMed Central

    Monowar, Muhammad Mostafa; Bajaber, Fuad

    2015-01-01

    In this paper, we address the thermal rise and Quality-of-Service (QoS) provisioning issues for an intra-body Wireless Body Area Network (WBAN) with in-vivo sensor nodes. We propose a thermal-aware QoS routing protocol, called TLQoS, that facilitates the system in achieving the desired QoS in terms of delay and reliability for diverse traffic types, avoids the formation of highly heated nodes known as hotspots, and keeps the temperature rise along the network at an acceptable level. TLQoS exploits a modular architecture wherein different modules perform integrated operations to provide multiple QoS services with lower temperature rise. To address the challenges of the highly dynamic wireless environment inside the human body, TLQoS implements potential-based localized routing that requires only local neighborhood information. TLQoS avoids routing-loop formation and reduces the number of hop traversals by exploiting a hybrid potential and tuning a configurable parameter. We perform extensive simulations of TLQoS, and the results show that TLQoS achieves significant performance improvements over state-of-the-art approaches. PMID:26083228

  16. Wireless Sensor Networks - Node Localization for Various Industry Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derr, Kurt; Manic, Milos

    Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem-dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS), that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require a higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrarily shaped domains. AFECETS simulation results show that the algorithm 1) provides a significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.

  17. Wireless Sensor Networks - Node Localization for Various Industry Problems

    DOE PAGES

    Derr, Kurt; Manic, Milos

    2015-06-01

    Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem-dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS), that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require a higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrarily shaped domains. AFECETS simulation results show that the algorithm 1) provides a significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.

  18. Public storage for the Open Science Grid

    NASA Astrophysics Data System (ADS)

    Levshina, T.; Guru, A.

    2014-06-01

    The Open Science Grid infrastructure does not provide efficient means to manage public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators and VO support personnel is required to allocate or rescind storage space. One of the main requirements for Public Storage implementation is that it should use SRM or GridFTP protocols to access the Storage Elements provided by the OSG Sites and not put any additional burden on sites. By policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by the job on a worker node for subsequent download to the user's local institution. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.

  19. A Two-Phase Time Synchronization-Free Localization Algorithm for Underwater Sensor Networks.

    PubMed

    Luo, Junhai; Fan, Liying

    2017-03-30

    Underwater Sensor Networks (UWSNs) can enable a broad range of applications such as resource monitoring, disaster prevention, and navigation assistance. Sensor node localization in UWSNs is an especially relevant topic. Global Positioning System (GPS) information is not suitable for use in UWSNs because of underwater propagation problems. Hence, localization algorithms proposed for UWSNs that rely on precise time synchronization between sensor nodes are not feasible. In this paper, we propose a localization algorithm called the Two-Phase Time Synchronization-Free Localization Algorithm (TP-TSFLA). TP-TSFLA contains two phases, namely, a range-based estimation phase and a range-free evaluation phase. In the first phase, we address a time synchronization-free localization scheme based on the Particle Swarm Optimization (PSO) algorithm to obtain the coordinates of the unknown sensor nodes. In the second phase, we propose a Circle-based Range-Free Localization Algorithm (CRFLA) to locate the unlocalized sensor nodes which could not obtain location information in the first phase. In the second phase, sensor nodes which are localized in the first phase act as new anchor nodes to help realize localization. Hence, the algorithm uses a small number of mobile beacons to obtain location information without any other anchor nodes. Besides, to improve the precision of the range-free method, an extension of CRFLA is developed by designing a coordinate adjustment scheme. The simulation results show that TP-TSFLA can achieve a relatively high localization ratio without time synchronization.
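
    The first (range-based) phase boils down to minimizing range residuals with PSO. Below is a generic PSO fit to mobile-beacon ranges, a sketch using standard inertia and attraction constants rather than anything taken from the paper; beacon positions and noise levels are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def pso_localize(beacon_xy, dists, iters=200, swarm=30):
        """Estimate one node's position by PSO: minimize the squared
        mismatch between measured beacon ranges and the ranges a
        candidate position would produce."""
        def cost(p):  # p has shape (swarm, 2)
            r = np.linalg.norm(p[:, None, :] - beacon_xy[None], axis=2)
            return ((r - dists) ** 2).sum(axis=1)

        pos = rng.uniform(0.0, 100.0, size=(swarm, 2))  # candidate positions
        vel = np.zeros_like(pos)
        best_p, best_c = pos.copy(), cost(pos)          # personal bests
        g = best_p[best_c.argmin()].copy()              # global best

        for _ in range(iters):
            r1, r2 = rng.random((2, swarm, 1))
            vel = 0.7 * vel + 1.5 * r1 * (best_p - pos) + 1.5 * r2 * (g - pos)
            pos = pos + vel
            c = cost(pos)
            better = c < best_c
            best_p[better], best_c[better] = pos[better], c[better]
            g = best_p[best_c.argmin()].copy()
        return g

    beacons = np.array([[10.0, 20.0], [80.0, 15.0], [45.0, 90.0]])
    truth = np.array([40.0, 35.0])
    ranges = np.linalg.norm(beacons - truth, axis=1) + rng.normal(0, 0.5, 3)
    print(pso_localize(beacons, ranges))  # should land near (40, 35)
    ```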

  20. A Two-Phase Time Synchronization-Free Localization Algorithm for Underwater Sensor Networks

    PubMed Central

    Luo, Junhai; Fan, Liying

    2017-01-01

    Underwater Sensor Networks (UWSNs) can enable a broad range of applications such as resource monitoring, disaster prevention, and navigation assistance. Sensor node localization in UWSNs is an especially relevant topic. Global Positioning System (GPS) information is not suitable for use in UWSNs because of underwater propagation problems. Hence, localization algorithms proposed for UWSNs that rely on precise time synchronization between sensor nodes are not feasible. In this paper, we propose a localization algorithm called the Two-Phase Time Synchronization-Free Localization Algorithm (TP-TSFLA). TP-TSFLA contains two phases, namely, a range-based estimation phase and a range-free evaluation phase. In the first phase, we address a time synchronization-free localization scheme based on the Particle Swarm Optimization (PSO) algorithm to obtain the coordinates of the unknown sensor nodes. In the second phase, we propose a Circle-based Range-Free Localization Algorithm (CRFLA) to locate the unlocalized sensor nodes which could not obtain location information in the first phase. In the second phase, sensor nodes which are localized in the first phase act as new anchor nodes to help realize localization. Hence, the algorithm uses a small number of mobile beacons to obtain location information without any other anchor nodes. Besides, to improve the precision of the range-free method, an extension of CRFLA is developed by designing a coordinate adjustment scheme. The simulation results show that TP-TSFLA can achieve a relatively high localization ratio without time synchronization. PMID:28358342

  1. The European approach to in-transit melanoma lesions.

    PubMed

    Hoekstra, H J

    2008-05-01

    The biological behavior of melanoma is unpredictable. Three to five per cent of melanoma patients will develop in-transit lesions, and the median time to recurrence ranges between 13 and 16 months. At the time of recurrence, the risk of occult nodal metastasis, with clinically negative regional lymph nodes, is as high as 50%. The risk of in-transit lesions depends on the tumor biology and not on the surgical approach to the regional lymph nodes. The high incidence of in-transit lesions at the lower limb may be caused by gravity and delayed lymphatic drainage. The treatment of limited disease is local excision, laser ablation, or cryosurgery, while multiple in-transit lesions or bulky disease located in a limb can be successfully treated with regional chemotherapy: a therapeutic isolated limb perfusion or infusion with melphalan, or a combination of melphalan and tumor necrosis factor (TNF) alpha. If locoregional treatment or dacarbazine-based systemic treatment fails, novel systemic treatment strategies with vaccines, antibodies and gene therapy are currently under investigation.

  2. From link-prediction in brain connectomes and protein interactomes to the local-community-paradigm in complex networks

    PubMed Central

    Cannistraci, Carlo Vittorio; Alanis-Lobato, Gregorio; Ravasi, Timothy

    2013-01-01

    Growth and remodelling impact the network topology of complex systems, yet a general theory explaining how new links arise between existing nodes has been lacking, and little is known about the topological properties that facilitate link-prediction. Here we investigate the extent to which the connectivity evolution of a network might be predicted by mere topological features. We show how a link/community-based strategy triggers substantial prediction improvements because it accounts for the singular topology of several real networks organised in multiple local communities - a tendency here named local-community-paradigm (LCP). We observe that LCP networks are mainly formed by weak interactions and characterise heterogeneous and dynamic systems that use self-organisation as a major adaptation strategy. These systems seem designed for global delivery of information and processing via multiple local modules. Conversely, non-LCP networks have steady architectures formed by strong interactions, and seem designed for systems in which information/energy storage is crucial. PMID:23563395

  3. From link-prediction in brain connectomes and protein interactomes to the local-community-paradigm in complex networks.

    PubMed

    Cannistraci, Carlo Vittorio; Alanis-Lobato, Gregorio; Ravasi, Timothy

    2013-01-01

    Growth and remodelling impact the network topology of complex systems, yet a general theory explaining how new links arise between existing nodes has been lacking, and little is known about the topological properties that facilitate link-prediction. Here we investigate the extent to which the connectivity evolution of a network might be predicted by mere topological features. We show how a link/community-based strategy triggers substantial prediction improvements because it accounts for the singular topology of several real networks organised in multiple local communities - a tendency here named local-community-paradigm (LCP). We observe that LCP networks are mainly formed by weak interactions and characterise heterogeneous and dynamic systems that use self-organisation as a major adaptation strategy. These systems seem designed for global delivery of information and processing via multiple local modules. Conversely, non-LCP networks have steady architectures formed by strong interactions, and seem designed for systems in which information/energy storage is crucial.

  4. A Mobile Anchor Assisted Localization Algorithm Based on Regular Hexagon in Wireless Sensor Networks

    PubMed Central

    Rodrigues, Joel J. P. C.

    2014-01-01

    Localization is one of the key technologies in wireless sensor networks (WSNs), since it provides fundamental support for many location-aware protocols and applications. Constraints of cost and power consumption make it infeasible to equip each sensor node in the network with a global positioning system (GPS) unit, especially for large-scale WSNs. A promising method to localize unknown nodes is to use several mobile anchors equipped with GPS units that move among unknown nodes and periodically broadcast their current locations to help nearby unknown nodes with localization. This paper proposes a mobile anchor assisted localization algorithm based on a regular hexagon (MAALRH) in two-dimensional WSNs, which can cover the whole monitoring area with a boundary compensation method. Unknown nodes calculate their positions by using trilateration. We compare the MAALRH with HILBERT, CIRCLES, and S-CURVES algorithms in terms of localization ratio, localization accuracy, and path length. Simulations show that the MAALRH can achieve a high localization ratio and localization accuracy when the communication range is not smaller than the trajectory resolution. PMID:25133212
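
    Trilateration from three broadcast anchor positions, the final step each unknown node performs in MAALRH, has a small closed form. A sketch assuming noiseless ranges; the anchor coordinates and distances below are made up:

    ```python
    import numpy as np

    def trilaterate(p1, p2, p3, d1, d2, d3):
        """2D point at distances d1, d2, d3 from anchors p1, p2, p3,
        found by subtracting the first circle equation from the other
        two, which leaves a 2x2 linear system."""
        p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
        A = 2.0 * np.array([p2 - p1, p3 - p1])
        b = np.array([d1**2 - d2**2 + p2 @ p2 - p1 @ p1,
                      d1**2 - d3**2 + p3 @ p3 - p1 @ p1])
        return np.linalg.solve(A, b)

    # Three virtual anchor positions broadcast along a mobile anchor's path.
    print(trilaterate((0, 0), (10, 0), (0, 10),
                      50 ** 0.5, 50 ** 0.5, 50 ** 0.5))  # -> [5. 5.]
    ```

    With noisy ranges or more than three anchors, the same linearization extends to an overdetermined system solved by least squares.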

  5. Sentinel node localization in oral cavity and oropharynx squamous cell cancer.

    PubMed

    Taylor, R J; Wahl, R L; Sharma, P K; Bradford, C R; Terrell, J E; Teknos, T N; Heard, E M; Wolf, G T; Chepeha, D B

    2001-08-01

    To evaluate the feasibility and predictive ability of the sentinel node localization technique for patients with squamous cell carcinoma of the oral cavity or oropharynx and clinically negative necks. Prospective, efficacy study comparing the histopathologic status of the sentinel node with that of the remaining neck dissection specimen. Tertiary referral center. Patients with T1 or T2 disease and clinically negative necks were eligible for the study. Nine previously untreated patients with oral cavity or oropharyngeal squamous cell carcinoma were enrolled in the study. Unfiltered technetium Tc 99m sulfur colloid injections of the primary tumor and lymphoscintigraphy were performed on the day before surgery. Intraoperatively, the sentinel node(s) was localized with a gamma probe and removed after tumor resection and before neck dissection. The primary outcome was the negative predictive value of the histopathologic status of the sentinel node for predicting cervical metastases. Sentinel nodes were identified in 9 previously untreated patients. In 5 patients, there were no positive nodes. In 4 patients, the sentinel nodes were the only histopathologically positive nodes. In previously untreated patients, the sentinel node technique had a negative predictive value of 100% for cervical metastasis. Our preliminary investigation shows that sentinel node localization is technically feasible in head and neck surgery and is predictive of cervical metastasis. The sentinel node technique has the potential to decrease the number of neck dissections performed in clinically negative necks, thus reducing the associated morbidity for patients in this group.

  6. A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak

    2002-01-01

    In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  7. A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  8. Enterprise Architecture as a Tool of Navy METOC Transformation

    DTIC Science & Technology

    2006-09-01

    [Figure residue: architecture diagram showing the METOC Enterprise Service Integration Layer (MESIL) and METOC Enterprise Service Bus (ESB) linking Production Center Nodes and METOC Edge Nodes, each with a local ESB implementation and infrastructure, governed by NCOW and SOA tenets; top-down analysis.]

  9. Link prediction based on local community properties

    NASA Astrophysics Data System (ADS)

    Yang, Xu-Hua; Zhang, Hai-Feng; Ling, Fei; Cheng, Zhi; Weng, Guo-Qing; Huang, Yu-Jiao

    2016-09-01

    The link prediction algorithm is one of the key technologies for revealing the inherent rules of network evolution. This paper proposes a novel link prediction algorithm based on the properties of the local community, which is composed of the common neighbor nodes of any two nodes in the network and the links between these nodes. By referring to the node degree and the condition of assortativity or disassortativity in a network, we comprehensively consider the effect of the shortest path and the edge clustering coefficient within the local community on node similarity. We numerically show that the proposed method provides good link prediction results.
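
    The exact similarity index is not reproduced in the abstract, but its ingredients, common neighbors plus the structure of the local community they form, can be illustrated with a generic local-community score written in the spirit of the paper rather than its formula:

    ```python
    import networkx as nx
    from itertools import combinations

    def local_community_score(G, u, v):
        """Similarity of a candidate link (u, v) from its local community:
        the common neighbors of u and v plus the links among them."""
        cn = set(G[u]) & set(G[v])
        # Links internal to the community reward tightly knit neighborhoods
        # over incidental shared contacts.
        internal = sum(1 for a, b in combinations(cn, 2) if G.has_edge(a, b))
        return len(cn) * (1 + internal)

    G = nx.karate_club_graph()
    missing = [(u, v) for u, v in combinations(G, 2) if not G.has_edge(u, v)]
    best = max(missing, key=lambda p: local_community_score(G, *p))
    print("top predicted link:", best)
    ```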

  10. Performing a global barrier operation in a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
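
    The claimed mechanism maps naturally onto MPI communicators. Below is a minimal mpi4py sketch of the hierarchy (one node-local barrier joined by all tasks, a global barrier joined only by per-node masters); it must be launched under an MPI runner, e.g. mpirun, and it illustrates the idea rather than the patented implementation:

    ```python
    from mpi4py import MPI

    world = MPI.COMM_WORLD

    # Group the tasks that share a physical compute node into one communicator.
    node = world.Split_type(MPI.COMM_TYPE_SHARED)
    is_master = node.Get_rank() == 0  # one designated master task per node

    # Communicator containing only the master tasks.
    masters = world.Split(0 if is_master else MPI.UNDEFINED, world.Get_rank())

    node.Barrier()          # every task joins the single local barrier first
    if is_master:
        masters.Barrier()   # masters join the global barrier last
    node.Barrier()          # release the non-master tasks afterwards
    ```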

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Performing a global barrier operation in a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.

  12. Combining registration and active shape models for the automatic segmentation of the lymph node regions in head and neck CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Antong; Deeley, Matthew A.; Niermann, Kenneth J.

    2010-12-15

    Purpose: Intensity-modulated radiation therapy (IMRT) is the state-of-the-art technique for head and neck cancer treatment. It requires precise delineation of the target to be treated and structures to be spared, which is currently done manually. The process is a time-consuming task of which the delineation of lymph node regions is often the longest step. Atlas-based delineation has been proposed as an alternative, but, in the authors' experience, this approach is not accurate enough for routine clinical use. Here, the authors improve the atlas-based segmentation results obtained for level II-IV lymph node regions using an active shape model (ASM) approach. Methods: An average image volume was first created from a set of head and neck patient images with minimally enlarged nodes. The average image volume was then registered using affine, global, and local nonrigid transformations to the other volumes to establish a correspondence between surface points in the atlas and surface points in each of the other volumes. Once the correspondence was established, the ASMs were created for each node level. The models were then used to first constrain the results obtained with an atlas-based approach and then to iteratively refine the solution. Results: The method was evaluated through a leave-one-out experiment. The ASM- and atlas-based segmentations were compared to manual delineations via the Dice similarity coefficient (DSC) for volume overlap and the Euclidean distance between manual and automatic 3D surfaces. The mean DSC value obtained with the ASM-based approach is 10.7% higher than with the atlas-based approach; the mean and median surface errors were decreased by 13.6% and 12.0%, respectively. Conclusions: The ASM approach is effective in reducing segmentation errors in areas of low CT contrast where purely atlas-based methods are challenged. Statistical analysis shows that the improvements brought by this approach are significant.

  13. Spreading to localized targets in complex networks

    NASA Astrophysics Data System (ADS)

    Sun, Ye; Ma, Long; Zeng, An; Wang, Wen-Xu

    2016-12-01

    As an important type of dynamics on complex networks, spreading is widely used to model many real processes such as the epidemic contagion and information propagation. One of the most significant research questions in spreading is to rank the spreading ability of nodes in the network. To this end, substantial effort has been made and a variety of effective methods have been proposed. These methods usually define the spreading ability of a node as the number of finally infected nodes given that the spreading is initialized from the node. However, in many real cases such as advertising and news propagation, the spreading only aims to cover a specific group of nodes. Therefore, it is necessary to study the spreading ability of nodes towards localized targets in complex networks. In this paper, we propose a reversed local path algorithm for this problem. Simulation results show that our method outperforms the existing methods in identifying the influential nodes with respect to these localized targets. Moreover, the influential spreaders identified by our method can effectively avoid infecting the non-target nodes in the spreading process.
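
    The reversed local path algorithm itself is not specified in the abstract, but the evaluation setup, scoring a spreader by how many target nodes it eventually infects, is easy to reproduce. A Monte-Carlo sketch with an independent-cascade style spreading model; the graph, target set, and infection probability are invented, and simulation here only evaluates spreaders rather than ranking them the paper's way:

    ```python
    import random
    import networkx as nx

    def target_coverage(G, seed, targets, beta=0.2, trials=50):
        """Monte-Carlo estimate of how many target nodes a spreader
        infects on average under one-shot independent-cascade spreading."""
        total = 0
        for _ in range(trials):
            infected, frontier = {seed}, [seed]
            while frontier:
                nxt = []
                for u in frontier:
                    for v in G[u]:
                        if v not in infected and random.random() < beta:
                            infected.add(v)
                            nxt.append(v)
                frontier = nxt
            total += len(infected & targets)
        return total / trials

    random.seed(7)
    G = nx.barabasi_albert_graph(300, 3, seed=1)
    targets = set(range(250, 300))           # the localized target group
    candidates = sorted(G, key=G.degree, reverse=True)[:20]
    best = max(candidates, key=lambda n: target_coverage(G, n, targets))
    print("best spreader for these targets:", best)
    ```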

  14. Real-Time CORBA

    DTIC Science & Technology

    2000-10-01

    control systems and prototyped the approach by porting the ILU ORB from Xerox to the Lynx real-time operating system. They then provided a distributed... compliant real-time operating system, a real-time ORB, and an ODMG-compliant real-time ODBMS [12]. The MITRE system is an infrastructure for... the server's local operating system can handle. For instance, on a node controlled by the VxWorks real-time operating system with 256 local

  15. Location estimation in wireless sensor networks using spring-relaxation technique.

    PubMed

    Zhang, Qing; Foh, Chuan Heng; Seet, Boon-Chong; Fong, A C M

    2010-01-01

    Accurate and low-cost autonomous self-localization is a critical requirement of various applications of a large-scale distributed wireless sensor network (WSN). Due to its massive deployment of sensors, explicit measurements based on specialized localization hardware such as the Global Positioning System (GPS) are not practical. In this paper, we propose a low-cost WSN localization solution. Our design uses received signal strength indicators for ranging, lightweight distributed algorithms based on the spring-relaxation technique for location computation, and a cooperative approach to achieve a certain location estimation accuracy with a low number of nodes with known locations. We provide an analysis to show the suitability of the spring-relaxation technique for WSN localization with the cooperative approach, and perform simulation experiments to illustrate its accuracy in localization.
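
    The spring-relaxation update rule is compact: every ranged neighbor pair acts as a spring whose force is proportional to the disagreement between the measured and the current geometric distance, and non-anchor nodes drift along the summed forces. A small NumPy sketch with invented ranges and step size, not the paper's exact formulation:

    ```python
    import numpy as np

    def spring_relax(est, ranges, anchors, iters=500, step=0.1):
        """Spring-relaxation sketch: est maps node -> initial [x, y] guess,
        ranges maps (u, v) -> measured distance between neighbors, and
        anchor nodes stay pinned. Each virtual spring pushes or pulls a
        node until geometric and measured distances agree."""
        for _ in range(iters):
            force = {n: np.zeros(2) for n in est}
            for (u, v), d in ranges.items():
                delta = est[v] - est[u]
                cur = np.linalg.norm(delta) + 1e-9
                f = step * (cur - d) * delta / cur  # Hooke-style restoring force
                force[u] += f
                force[v] -= f
            for n, f in force.items():
                if n not in anchors:
                    est[n] = est[n] + f
        return est

    est = {"a": np.array([0.0, 0.0]), "b": np.array([4.0, 0.0]),
           "x": np.array([1.0, 1.0])}
    ranges = {("a", "x"): 2.5, ("b", "x"): 2.5}
    print(spring_relax(est, ranges, anchors={"a", "b"})["x"])  # near (2, 1.5)
    ```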

  16. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and meeting the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by building an actual Hadoop service system, testing the storage efficiency for different image data and multiple users, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.

  17. The active control strategy on the output power for photovoltaic-storage systems based on extended PQ-QV-PV Node

    NASA Astrophysics Data System (ADS)

    Xu, Chen; Zhou, Bao-Rong; Zhai, Jian-Wei; Zhang, Yong-Jun; Yi, Ying-Qi

    2017-05-01

    In order to solve the problem of voltage exceeding specified limits and improve the penetration of photovoltaics in the distribution network, we can make full use of the active power regulation ability of energy storage (ES) and the reactive power regulation ability of the grid-connected photovoltaic inverter to provide active and reactive power support for the distribution network. A strategy for actively controlling the output power of photovoltaic-storage systems, based on an extended PQ-QV-PV node, is derived by analyzing the voltage-regulating mechanism at the point of common coupling (PCC) of photovoltaics with energy storage (PVES), controlling both the photovoltaic inverter and the energy storage. The strategy sets a small voltage fluctuation range for each photovoltaic unit by converting the PCC type among PQ, PV and QV. The simulation results indicate that the active control method provides a better solution to the problem of voltage exceeding specified limits when photovoltaics are connected to the electric distribution network.

  18. An efficient parallel algorithm: Poststack and prestack Kirchhoff 3D depth migration using flexi-depth iterations

    NASA Astrophysics Data System (ADS)

    Rastogi, Richa; Srivastava, Abhishek; Khonde, Kiran; Sirasala, Kirannmayi M.; Londhe, Ashutosh; Chavhan, Hitesh

    2015-07-01

    This paper presents an efficient parallel 3D Kirchhoff depth migration algorithm suitable for the current class of multicore architectures. The fundamental Kirchhoff depth migration algorithm exhibits inherent parallelism; however, when it comes to 3D data migration, the resource requirements of the algorithm grow with the data size. This challenges its practical implementation even on current-generation high performance computing systems. Therefore a smart parallelization approach is essential for handling 3D data migration. The most compute-intensive part of the Kirchhoff depth migration algorithm is the calculation of traveltime tables, due to its resource requirements in memory/storage and I/O. In the current research work, we target this area and develop a competent parallel algorithm for post- and prestack 3D Kirchhoff depth migration, using hybrid MPI+OpenMP programming techniques. We introduce a concept of flexi-depth iterations while depth migrating data in parallel imaging space, using optimized traveltime table computations. This concept provides flexibility to the algorithm by migrating data in a number of depth iterations, which depends upon the available node memory and the size of the data to be migrated during runtime. Furthermore, it minimizes the requirements of storage, I/O and inter-node communication, thus making it advantageous over the conventional parallelization approaches. The developed parallel algorithm is demonstrated and analysed on Yuva II, a PARAM-series supercomputer. Optimization, performance and scalability experiment results, along with the migration outcome, show the effectiveness of the parallel algorithm.

  19. Synchronizing compute node time bases in a parallel computer

    DOEpatents

    Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

    2015-01-27

    Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

  20. Synchronizing compute node time bases in a parallel computer

    DOEpatents

    Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

    2014-12-30

    Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

  1. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1992-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LAN's) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.

  2. Need to improve SWMM's subsurface flow routing algorithm for green infrastructure modeling

    EPA Science Inventory

    SWMM can simulate various subsurface flows, including groundwater (GW) release from a subcatchment to a node, percolation out of storage units and low impact development (LID) controls, and rainfall derived inflow and infiltration (RDII) at a node. Originally, the subsurface flow...

  3. Wide-area-distributed storage system for a multimedia database

    NASA Astrophysics Data System (ADS)

    Ueno, Masahiro; Kinoshita, Shigechika; Kuriki, Makato; Murata, Setsuko; Iwatsu, Shigetaro

    1998-12-01

    We have developed a wide-area-distributed storage system for multimedia databases, which minimizes the possibility of simultaneous failure of multiple disks in the event of a major disaster. It features a RAID system whose member disks are spatially distributed over a wide area. Each node has a device that includes the controller of the RAID and the controller of the member disks controlled by other nodes. The devices in a node are connected to a computer using fiber optic cables and communicate using fiber-channel technology. Any computer at a node can utilize multiple devices connected by optical fibers as a single 'virtual disk.' The advantage of this system structure is that devices and fiber optic cables are shared by the computers. In this report, we first describe the proposed system and the prototype used for testing. We then discuss its performance, i.e., how read and write throughputs are affected by data-access delay, the RAID level, and queuing.

  4. Layered Location-Based Security Mechanism for Mobile Sensor Networks: Moving Security Areas.

    PubMed

    Wang, Ze; Zhang, Haijuan; Wu, Luqiang; Zhou, Chang

    2015-09-25

    Network security is one of the most important issues in mobile sensor networks (MSNs). Networks are particularly vulnerable in hostile environments because of many factors, such as uncertain mobility, limitations on computation, and the need for storage in mobile nodes. Though some location-based security mechanisms can resist some malicious attacks, they are only suitable for static networks and may sometimes require large amounts of storage. To solve these problems, using location information, which is one of the most important properties in outdoor wireless networks, a security mechanism called a moving security area (MSA) is proposed to resist malicious attacks by using mobile nodes' dynamic location-based keys. The security mechanism is layered by performing different detection schemes inside or outside the MSA. The location-based private keys will be updated only at the appropriate moments, considering the balance of cost and security performance. By transferring parts of the detection tasks from ordinary nodes to the sink node, the memory requirements are distributed to different entities to save limited energy.
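
    As a toy illustration of the idea of dynamic location-based keys (the MSA scheme's actual key-update rules are richer than this), one can derive a key by hashing a pre-shared secret together with the node's quantized grid cell, so the key changes whenever the node moves into a new cell. All names and the cell size in this Python sketch are hypothetical:

    ```python
    import hmac, hashlib

    def location_key(master_secret: bytes, x: float, y: float, cell=10.0):
        """Derive a dynamic key from a pre-shared secret and the node's
        quantized grid cell (hypothetical scheme for illustration)."""
        cell_id = f"{int(x // cell)}:{int(y // cell)}".encode()
        return hmac.new(master_secret, cell_id, hashlib.sha256).digest()

    k1 = location_key(b"preloaded-secret", 12.4, 37.9)
    k2 = location_key(b"preloaded-secret", 14.0, 33.1)  # same 10 m cell
    assert k1 == k2  # nodes in one cell agree on the key until they move
    ```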

  5. A modified dual-level algorithm for large-scale three-dimensional Laplace and Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Li, Junpu; Chen, Wen; Fu, Zhuojia

    2018-01-01

    A modified dual-level algorithm is proposed in this article. With the help of the dual-level structure, the fully-populated interpolation matrix on the fine level is transformed into a locally supported sparse matrix, which overcomes the severe ill-conditioning and excessive storage requirements of a fully-populated interpolation matrix. The kernel-independent fast multipole method is adopted to expedite the solution of the linear equations on the coarse level. Numerical experiments with up to 2 million fine-level nodes have been carried out successfully. It is noted that the proposed algorithm merely needs to place 2-3 coarse-level nodes per wavelength in each direction to obtain a reasonable solution, which is almost down to the minimum requirement allowed by Shannon's sampling theorem. In a real human head model example, it is observed that the proposed algorithm can simulate computationally very challenging exterior high-frequency harmonic acoustic wave propagation up to 20,000 Hz.

  6. Development and Utilization of an Ex Vivo Bromodeoxyuridine Local Lymph Node Assay (LLNA) Protocol for Assessing Potential Chemical Sensitizers

    EPA Science Inventory

    The murine local lymph node assay (LLNA) is widely used to identify chemicals that may cause allergic contact dermatitis. Exposure to a dermal sensitizer results in proliferation of local lymph node T cells, which has traditionally been measured by in vivo incorporation of [3H]m...

  7. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the data access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which fits the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking-triangle search for the containing triangle, and finally the NNI interpolation.

  8. Investigation of storage options for scientific computing on Grid and Cloud facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garzoglio, Gabriele

    In recent years, several new storage technologies, such as Lustre, Hadoop, OrangeFS, and BlueArc, have emerged. While several groups have run benchmarks to characterize them under a variety of configurations, more work is needed to evaluate these technologies for the use cases of scientific computing on Grid clusters and Cloud facilities. This paper discusses our evaluation of the technologies as deployed on a test bed at FermiCloud, one of the Fermilab infrastructure-as-a-service Cloud facilities. The test bed consists of 4 server-class nodes with 40 TB of disk space and up to 50 virtual machine clients, some running on the storage server nodes themselves. With this configuration, the evaluation compares the performance of some of these technologies when deployed on virtual machines and on bare metal nodes. In addition to running standard benchmarks such as IOZone to check the sanity of our installation, we have run I/O intensive tests using physics-analysis applications. This paper presents how the storage solutions perform in a variety of realistic use cases of scientific computing. One interesting difference among the storage systems tested is found in a decrease in total read throughput with increasing number of client processes, which occurs in some implementations but not others.

  9. Financial aspects of sentinel lymph node biopsy in early breast cancer.

    PubMed

    Severi, S; Gazzoni, E; Pellegrini, A; Sansovini, M; Raulli, G; Corbelli, C; Altini, M; Paganelli, G

    2012-02-01

    At present, early breast cancer is treated with conservative surgery of the primary lesion (BCS) along with axillary staging by sentinel lymph node biopsy (SLNB). Although the scintigraphic method is standardized, its surgical application differs in patient compliance, work organization, costs, and diagnosis related group (DRG) reimbursements. We compared four surgical protocols presently used in our region: (A) traditional BCS with axillary lymph node dissection (ALND); (B) BCS with SLNB and concomitant ALND for positive sentinel nodes (SN); (C) BCS and SLNB under local anaesthesia with subsequent ALND under general anaesthesia according to the SN result; (D) SLNB under local anaesthesia with subsequent BCS under local anaesthesia for negative SN, or ALND under general anaesthesia for positive SN. For each protocol, patient compliance, use of consumables, and the resources and time spent by various dedicated professionals were analyzed. Furthermore, a detailed breakdown of 1-/2-day hospitalization costs was calculated using specific DRGs. We reported a mean cost variation that ranged from 1,634 to 2,221 Euros (protocols C and D). The number of procedures performed and the pathologists' results are the most significant variables affecting the rate of DRG reimbursements, which were highest for protocol D and lowest for protocol B. In our experience protocol C is the most suitable in terms of patient compliance, impact of surgical procedures, and work organization, and is granted by an appropriate DRG. We observed that a multidisciplinary approach enhances overall patient care and that a revaluation of DRG reimbursements is opportune.

  10. Massively parallel processor networks with optical express channels

    DOEpatents

    Deri, R.J.; Brooks, E.D. III; Haigh, R.E.; DeGroot, A.J.

    1999-08-24

    An optical method for separating and routing local and express channel data comprises interconnecting the nodes in a network with fiber optic cables. A single fiber optic cable carries both express channel traffic and local channel traffic, e.g., in a massively parallel processor (MPP) network. Express channel traffic is placed on, or filtered from, the fiber optic cable at a light frequency or a color different from that of the local channel traffic. The express channel traffic is thus placed on a light carrier that skips over the local intermediate nodes one-by-one by reflecting off of selective mirrors placed at each local node. The local-channel-traffic light carriers pass through the selective mirrors and are not reflected. A single fiber optic cable can thus be threaded throughout a three-dimensional matrix of nodes with the x,y,z directions of propagation encoded by the color of the respective light carriers for both local and express channel traffic. Thus frequency division multiple access is used to hierarchically separate the local and express channels to eliminate the bucket brigade latencies that would otherwise result if the express traffic had to hop between every local node to reach its ultimate destination.

  11. Massively parallel processor networks with optical express channels

    DOEpatents

    Deri, Robert J.; Brooks, III, Eugene D.; Haigh, Ronald E.; DeGroot, Anthony J.

    1999-01-01

    An optical method for separating and routing local and express channel data comprises interconnecting the nodes in a network with fiber optic cables. A single fiber optic cable carries both express channel traffic and local channel traffic, e.g., in a massively parallel processor (MPP) network. Express channel traffic is placed on, or filtered from, the fiber optic cable at a light frequency or a color different from that of the local channel traffic. The express channel traffic is thus placed on a light carrier that skips over the local intermediate nodes one-by-one by reflecting off of selective mirrors placed at each local node. The local-channel-traffic light carriers pass through the selective mirrors and are not reflected. A single fiber optic cable can thus be threaded throughout a three-dimensional matrix of nodes with the x,y,z directions of propagation encoded by the color of the respective light carriers for both local and express channel traffic. Thus frequency division multiple access is used to hierarchically separate the local and express channels to eliminate the bucket brigade latencies that would otherwise result if the express traffic had to hop between every local node to reach its ultimate destination.

  12. Three-dimensional local grid refinement for block-centered finite-difference groundwater models using iteratively coupled shared nodes: A new method of interpolation and analysis of errors

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2004-01-01

    This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.

  13. Design strategy for integrating DSA via patterning in sub-7 nm interconnects

    NASA Astrophysics Data System (ADS)

    Karageorgos, Ioannis; Ryckaert, Julien; Tung, Maryann C.; Wong, H.-S. P.; Gronheid, Roel; Bekaert, Joost; Karageorgos, Evangelos; Croes, Kris; Vandenberghe, Geert; Stucchi, Michele; Dehaene, Wim

    2016-03-01

    In recent years, major advancements have been made in the directed self-assembly (DSA) of block copolymers (BCPs). As a result, the insertion of DSA for IC fabrication is being actively considered for the sub-7 nm nodes. At these nodes the DSA technology could alleviate costs for multiple patterning and limit the number of litho masks that would be required per metal layer. One of the most straightforward approaches for DSA implementation would be for via patterning through templated DSA, where hole patterns are readily accessible through templated confinement of cylindrical-phase BCP materials. Our in-house studies show that decomposition of via layers in realistic circuits below the 7 nm node would require many multi-patterning steps (or colors) using 193 nm immersion lithography. Even the use of EUV might require double patterning at these dimensions, since the minimum via distance would be smaller than the EUV resolution. The grouping of vias through templated DSA can resolve local conflicts in high-density areas. This way, the number of required colors can be significantly reduced. For the implementation of this approach, a DSA-aware mask decomposition is required. In this paper, our design approach for DSA via patterning in sub-7 nm nodes is discussed. We propose options to expand the list of DSA-compatible via patterns (DSA letters) and we define matching cost formulas for the optimal DSA-aware layout decomposition. The flowchart of our proposed tool is also presented.

  14. Patterns of Local-Regional Management Following Neoadjuvant Chemotherapy in Breast Cancer: Results From ACOSOG Z1071 (Alliance)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haffty, Bruce G., E-mail: hafftybg@cinj.rutgers.edu; McCall, Linda M.; Ballman, Karla V.

    2016-03-01

    Purpose: American College of Surgeons Oncology Group Z1071 was a prospective trial evaluating the false negative rate of sentinel lymph node (SLN) surgery after neoadjuvant chemotherapy (NAC) in breast cancer patients with initial node-positive disease. Radiation therapy (RT) decisions were made at the discretion of treating physicians, providing an opportunity to evaluate variability in practice patterns following NAC. Methods and Materials: Of 756 patients enrolled from July 2009 to June 2011, 685 met all eligibility requirements. Surgical approach, RT, and radiation field design were analyzed based on presenting clinical and pathologic factors. Results: Of 401 node-positive patients, mastectomy was performed in 148 (36.9%), mastectomy with immediate reconstruction in 107 (26.7%), and breast-conserving surgery (BCS) in 146 patients (36.4%). Of the 284 pathologically node-negative patients, mastectomy was performed in 84 (29.6%), mastectomy with immediate reconstruction in 69 (24.3%), and BCS in 131 patients (46.1%). Bilateral mastectomy rates were higher in women undergoing reconstruction than in those without (66.5% vs 32.2%, respectively, P<.0001). Use of internal mammary RT was low (7.8%-11.2%) and did not differ between surgical approaches. The supraclavicular RT rate ranged from 46.6% to 52.2% and did not differ between surgical approaches, but supraclavicular RT was omitted in 193 of 408 node-positive patients (47.3%). Axillary RT was more frequent in patients who remained node-positive (P=.002); however, 22% of patients who converted to node-negative still received axillary RT. Post-mastectomy RT was more frequently omitted after reconstruction than after mastectomy alone (23.9% vs 12.1%, respectively, P=.002) and was omitted in 19 of 107 patients (17.8%) with residual node-positive disease in the reconstruction group. Conclusions: Most clinically node-positive patients treated with NAC undergoing mastectomy receive RT. RT is less common in patients undergoing reconstruction, and there is wide variability in RT fields. These practice patterns conflict with expert recommendations and ongoing trial guidelines. There is a significant need for greater uniformity and guidelines regarding RT following NAC.

  15. Partnership For Edge Physics Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Manish

    In this effort, we will extend our prior work as part of CPES (i.e., DART and DataSpaces) to support in-situ tight coupling between application codes that exploits data locality and core-level parallelism to maximize on-chip data exchange and reuse. This will be accomplished by mapping coupled simulations so that the data exchanges are more localized within the nodes. Coupled simulation workflows can more effectively utilize the resources available on emerging HEC platforms if they can be mapped and executed to exploit data locality as well as the communication patterns between application components. Scheduling and running such workflows requires an extended framework that provides a unified hybrid abstraction to enable coordination and data sharing across computation tasks running on heterogeneous multi-core-based systems, together with a data-locality-based dynamic task scheduling approach to increase on-chip or intra-node data exchanges and in-situ execution. This effort will extend our prior work as part of CPES (i.e., DART and DataSpaces), which provided a simple virtual shared-space abstraction hosted at the staging nodes, to support application coordination, data sharing, and active data processing services. Moreover, it will transparently manage the low-level operations associated with inter-application data exchange, such as data redistribution, and will enable running coupled simulation workflows on multi-core computing platforms.

  16. Exploiting Spatial Channel Occupancy Information in WLANs

    DTIC Science & Technology

    2014-05-15

    Unlike in wired networks, each node in a wireless network observes a different medium depending on its location. As a result, standard local... wireless LANs [15, 23, 29]. In [23], Li et al. model the throughput of an 802.11 network using full spatial information. Their approach is from a...

  17. Precipitation Variability and Projection Uncertainties in Climate Change Adaptation: Go Local!

    EPA Science Inventory

    Presentation agenda includes: Regional and local climate change effects: The relevance; Variability and uncertainty in decision-making and adaptation approaches; Adaptation attributes for the U.S. Southwest: Water availability, storage capacity, and related; EPA research...

  18. Integration of Geo-Sensor Feeds and Event Consumer Services for Real-Time Representation of Iot Nodes

    NASA Astrophysics Data System (ADS)

    Isikdag, U.; Pilouk, M.

    2016-06-01

    More and more devices are connected to the Internet every day. The Internet of Things (IoT) is an architecture in which online devices can communicate and interact with each other in real time. At the same time, with the development of IoT-related technologies, information about devices (i.e., Things) can be acquired by humans in real time. The implementation of IoT-related technologies requires new approaches to be investigated for novel system architectures. These architectures need three main abilities: first, the ability to store and query information coming from millions of devices in real time; second, the ability to interact with a large number of devices seamlessly, regardless of their hardware and software platforms; and finally, the ability to visualise and present information coming from millions of sensors in real time. The paper provides an architectural approach and implementation tests for the storage, exposition, and presentation of large amounts of real-time geo-information coming from multiple IoT nodes (and sensors).

  19. Voice over internet protocol with prepaid calling card solutions

    NASA Astrophysics Data System (ADS)

    Gunadi, Tri

    2001-07-01

    VoIP technology is growing rapidly, and it has a large network impact on PT Telkom Indonesia, the largest telecommunication operator in Indonesia. Telkom has adopted VoIP together with one other technology, the Intelligent Network (IN). We develop these technologies together in one service product, called the Internet Prepaid Calling Card (IPCC). The IPCC is a new breakthrough for Indonesian telecommunication services, especially for VoIP and prepaid calling card solutions. The network architecture of Indonesian telecommunication consists of three layers: the Local, Tandem, and Trunk Exchange layers. Network development research for the IPCC architecture focuses on an overlay hierarchy spanning the Internet and the PSTN. With this design hierarchy, the goal of interworking the PSTN, VoIP, and the IN calling card becomes reality. The overlay design for the IPCC is not placed at the Trunk Exchange; in this new architecture, the overlay sits on the Tandem and Local Exchanges to speed up call processing. Two nodes are added: a Gateway (GW) and a Card Management Center (CMC). The GW interfaces between the PSTN and the Internet network using ISDN-PRA and Ethernet. Its other functions are bridging circuit-based (PSTN) and packet-based (VoIP) traffic and real-time billing processing. The CMC is used for data storage, PIN validation, report activation, the tariff system, directory numbers, and all administrative transactions. With these two nodes added, the IPCC service is offered to the market.

  20. Integration of a Decentralized Linear-Quadratic-Gaussian Control into GSFC's Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Carpenter, J. Russell

    1999-01-01

    A decentralized control is investigated for applicability to the autonomous formation flying control algorithm developed by GSFC for the New Millennium Program Earth Observer-1 (EO-1) mission. This decentralized framework has the following characteristics: The approach is non-hierarchical, and coordination by a central supervisor is not required; Detected failures degrade the system performance gracefully; Each node in the decentralized network processes only its own measurement data, in parallel with the other nodes; Although the total computational burden over the entire network is greater than it would be for a single, centralized controller, fewer computations are required locally at each node; Requirements for data transmission between nodes are limited to only the dimension of the control vector, at the cost of maintaining a local additional data vector. The data vector compresses all past measurement history from all the nodes into a single vector of the dimension of the state; and The approach is optimal with respect to standard cost functions. The current approach is valid for linear time-invariant systems only. Similar to the GSFC formation flying algorithm, the extension to linear time-varying LQG systems requires that each node propagate its filter covariance forward (navigation) and its controller Riccati matrix backward (guidance) at each time step. Extension of the GSFC algorithm to non-linear systems can also be accomplished via linearization about a reference trajectory in the standard fashion, or linearization about the current state estimate as with the extended Kalman filter. To investigate the feasibility of the decentralized integration with the GSFC algorithm, an existing centralized LQG design for a single-spacecraft orbit control problem is adapted to the decentralized framework while using the GSFC algorithm's state transition matrices and framework. The existing GSFC design uses reference trajectories for each spacecraft in the formation and, by appropriate choice of coordinates and simplified measurement modeling, is formulated as a linear time-invariant system. Results for improvements to the GSFC algorithm and a multiple-satellite formation will be addressed. The goal of this investigation is to progressively relax the assumptions that result in linear time-invariance, ultimately to the point of linearization of the non-linear dynamics about the current state estimate as in the extended Kalman filter. An assessment will then be made of the feasibility of the decentralized approach for the realistic formation flying application of the EO-1/Landsat 7 formation flying experiment.
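
    For reference, the forward covariance propagation and backward Riccati recursion mentioned above take the standard discrete-time form below. This is generic textbook notation (A, B, H system matrices; Q, R noise covariances; Q_x, R_u cost weights; S_N terminal cost); the paper's node-local variants are not reproduced here.

    ```latex
    % Forward (navigation): Kalman filter covariance recursion
    \begin{align*}
    P_{k|k-1} &= A P_{k-1|k-1} A^{\top} + Q \\
    K_k &= P_{k|k-1} H^{\top} \left( H P_{k|k-1} H^{\top} + R \right)^{-1} \\
    P_{k|k} &= (I - K_k H)\, P_{k|k-1}
    \end{align*}
    % Backward (guidance): control Riccati recursion from terminal cost S_N
    \begin{align*}
    S_k = A^{\top} S_{k+1} A
          - A^{\top} S_{k+1} B \left( B^{\top} S_{k+1} B + R_u \right)^{-1}
            B^{\top} S_{k+1} A + Q_x
    \end{align*}
    ```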

  1. Lessons Learned from the Node 1 Atmosphere Control and Storage and Water Recovery and Management Subsystem Design

    NASA Technical Reports Server (NTRS)

    Williams, David E.

    2010-01-01

    Node 1 flew to the International Space Station (ISS) on Flight 2A during December 1998. To date, the National Aeronautics and Space Administration (NASA) has learned many lessons from this module, based on its history of approximately two years of acceptance testing on the ground and its twelve years on-orbit so far. This paper will provide an overview of the ISS Environmental Control and Life Support (ECLS) design of the Node 1 Atmosphere Control and Storage (ACS) and Water Recovery and Management (WRM) subsystems, and it will document some of the lessons that have been learned to date for these subsystems based on problems encountered prelaunch, problems encountered on-orbit, and operational problems/concerns. It is hoped that documenting these lessons learned from ISS will help in preventing them in future Programs.

  2. Lessons Learned from the Node 1 Atmosphere Control and Storage and Water Recovery and Management Subsystem Design

    NASA Technical Reports Server (NTRS)

    Williams, David E.

    2011-01-01

    Node 1 flew to the International Space Station (ISS) on Flight 2A during December 1998. To date, the National Aeronautics and Space Administration (NASA) has learned many lessons from this module, based on its history of approximately two years of acceptance testing on the ground and its twelve years on-orbit so far. This paper will provide an overview of the ISS Environmental Control and Life Support (ECLS) design of the Node 1 Atmosphere Control and Storage (ACS) and Water Recovery and Management (WRM) subsystems, and it will document some of the lessons that have been learned to date for these subsystems based on problems encountered prelaunch, problems encountered on-orbit, and operational problems/concerns. It is hoped that documenting these lessons learned from ISS will help in preventing them in future Programs.

  3. Cloud object store for checkpoints of high performance computing applications using decoupling middleware

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-04-19

    Cloud object storage is enabled for checkpoints of high performance computing applications using a middleware process. A plurality of files, such as checkpoint files, generated by a plurality of processes in a parallel computing system are stored by obtaining said plurality of files from said parallel computing system; converting said plurality of files to objects using a log structured file system middleware process; and providing said objects for storage in a cloud object storage system. The plurality of processes may run, for example, on a plurality of compute nodes. The log structured file system middleware process may be embodied, for example, as a Parallel Log-Structured File System (PLFS). The log structured file system middleware process optionally executes on a burst buffer node.

  4. A template-based approach for parallel hexahedral two-refinement

    DOE PAGES

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    2016-10-17

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.

  5. A template-based approach for parallel hexahedral two-refinement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owen, Steven J.; Shih, Ryan M.; Ernst, Corey D.

    Here, we provide a template-based approach for generating locally refined all-hex meshes. We focus specifically on refinement of initially structured grids utilizing a 2-refinement approach where uniformly refined hexes are subdivided into eight child elements. The refinement algorithm consists of identifying marked nodes that are used as the basis for a set of four simple refinement templates. The target application for 2-refinement is a parallel grid-based all-hex meshing tool for high performance computing in a distributed environment. The result is a parallel consistent locally refined mesh requiring minimal communication and where minimum mesh quality is greater than scaled Jacobian 0.3 prior to smoothing.

  6. Low-Latency and Energy-Efficient Data Preservation Mechanism in Low-Duty-Cycle Sensor Networks.

    PubMed

    Jiang, Chan; Li, Tao-Shen; Liang, Jun-Bin; Wu, Heng

    2017-05-06

    As in traditional wireless sensor networks (WSNs), nodes in low-duty-cycle sensor networks (LDC-WSNs) have only limited memory and energy. Unlike in WSNs, however, nodes in LDC-WSNs sleep most of the time to preserve energy. This sleeping causes serious data transmission delays. Yet each source node that has sensed data needs to quickly disseminate its data to other nodes in the network for redundant storage; otherwise, data could be lost if its source node were destroyed by external forces in a harsh environment. This quick-dissemination requirement conflicts with the sleeping delay in the network. How to quickly disseminate all the source data to all the nodes with limited memory in the network for effective preservation is a challenging issue. In this paper, a low-latency and energy-efficient data preservation mechanism in LDC-WSNs is proposed. The mechanism is totally distributed. The data can be disseminated through the network with low latency by using a revised probabilistic broadcasting mechanism, and then stored by the nodes using LT (Luby Transform) codes, a well-known class of rateless erasure codes. After data dissemination and storage complete, some nodes may be destroyed by external forces. If a mobile sink enters the network at any time and from any place to collect the data, it can recover all of the source data by visiting a small portion of surviving nodes in the network. Theoretical analyses and simulation results show that our mechanism outperforms existing mechanisms in data dissemination delay and energy efficiency.
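
    As a sketch of the storage step: an LT-coded packet XORs a random subset of source blocks whose size is drawn from a soliton-style degree distribution, and a decoder that knows the seed can re-derive the subset. The encoder below is a generic illustration (using the ideal soliton distribution for brevity), not the paper's exact mechanism.

    ```python
    # Minimal LT (Luby Transform) encoder sketch for redundant storage.
    import random

    def ideal_soliton(k):
        # P(d=1) = 1/k, P(d) = 1/(d(d-1)) for d = 2..k
        return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

    def lt_encode(blocks, seed):
        k = len(blocks)
        rng = random.Random(seed)
        degree = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
        neighbors = rng.sample(range(k), degree)
        packet = bytes(len(blocks[0]))          # all-zero packet
        for i in neighbors:
            packet = bytes(a ^ b for a, b in zip(packet, blocks[i]))
        return seed, packet   # the seed lets a decoder re-derive the neighbor set

    source = [bytes([i]) * 16 for i in range(8)]          # 8 toy source blocks
    encoded = [lt_encode(source, s) for s in range(20)]   # redundant packets
    ```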

  7. A coherent Ising machine for 2000-node optimization problems

    NASA Astrophysics Data System (ADS)

    Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L.; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki

    2016-11-01

    The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
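
    The mapping the authors rely on is standard: a maximum-cut instance becomes an Ising ground-state search by assigning each graph node a spin, so that maximizing the cut is equivalent to minimizing the Ising energy.

    ```latex
    % For a graph G=(V,E), assign each node a spin s_i \in \{-1,+1\}.
    % The cut size and the Ising energy are related by
    \begin{align*}
    \mathrm{CUT}(s) &= \tfrac{1}{2} \sum_{(i,j) \in E} \left( 1 - s_i s_j \right), \\
    H(s) &= \sum_{(i,j) \in E} s_i s_j
           \;\Longrightarrow\; \max_s \mathrm{CUT}(s) \;\equiv\; \min_s H(s),
    \end{align*}
    % so finding the Ising ground state solves the maximum-cut instance.
    ```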

  8. Electronic circuit for measuring series connected electrochemical cell voltages

    DOEpatents

    Ashtiani, Cyrus N.; Stuart, Thomas A.

    2000-01-01

    An electronic circuit for measuring voltage signals in an energy storage device is disclosed. The electronic circuit includes a plurality of energy storage cells forming the energy storage device. A voltage divider circuit is connected to at least one of the energy storage cells. A current regulating circuit is provided for regulating the current through the voltage divider circuit. A voltage measurement node is associated with the voltage divider circuit for producing a voltage signal which is proportional to the voltage across the energy storage cell.
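
    The proportionality in the last sentence is the usual divider relation; with illustrative resistor names R1 and R2 (not taken from the patent), the measured node voltage and the recovered cell voltage are:

    ```latex
    \begin{equation*}
    V_{\text{node}} = V_{\text{cell}} \cdot \frac{R_2}{R_1 + R_2},
    \qquad\text{so}\qquad
    V_{\text{cell}} = V_{\text{node}} \cdot \frac{R_1 + R_2}{R_2}.
    \end{equation*}
    ```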

  9. Lymph node detection in IASLC-defined zones on PET/CT images

    NASA Astrophysics Data System (ADS)

    Song, Yihua; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2016-03-01

    Lymph node detection is challenging due to the low contrast between lymph nodes and surrounding soft tissues and the variation in nodal size and shape. In this paper, we propose several novel ideas which are combined into a system that operates on positron emission tomography/computed tomography (PET/CT) images to detect abnormal thoracic nodes. First, our previous Automatic Anatomy Recognition (AAR) approach is modified so that lymph node zones, predominantly following International Association for the Study of Lung Cancer (IASLC) specifications, are modeled as objects arranged in a hierarchy along with key anatomic anchor objects. This fuzzy anatomy model, built from diagnostic CT images, is then deployed on PET/CT images for automatically recognizing the zones. A novel globular filter (g-filter) to detect blob-like objects over a specified range of sizes is designed to detect the most likely locations and sizes of diseased nodes. Abnormal nodes within each automatically localized zone are subsequently detected via combined use of different items of information at various scales: lymph node zone model poses found at recognition, indicating the geographic layout of node clusters at the global level; the g-filter response, which homes in on and carefully selects node-like globular objects at the node level; and CT and PET gray values, but within only the most plausible nodal regions for node presence, at the voxel level. The models are built from 25 diagnostic CT scans and refined for an object hierarchy based on a separate set of 20 diagnostic CT scans. Node detection is tested on an additional set of 20 PET/CT scans. Our preliminary results indicate node detection sensitivity and specificity at around 90% and 85%, respectively.
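
    The g-filter itself is not specified in this abstract; a common stand-in for detecting blob-like objects over a range of sizes is scale-normalized Laplacian-of-Gaussian (LoG) filtering, sketched below under that assumption.

    ```python
    # Multi-scale blob response: keep the strongest scale-normalized LoG
    # response (and the scale that produced it) at each voxel.
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def blob_response(volume, sigmas):
        best = np.full(volume.shape, -np.inf)
        best_sigma = np.zeros(volume.shape)
        for s in sigmas:
            # minus sign: bright blobs give negative LoG, flip to positive
            r = -(s ** 2) * gaussian_laplace(volume.astype(float), sigma=s)
            mask = r > best
            best[mask] = r[mask]
            best_sigma[mask] = s
        return best, best_sigma

    # toy example: a bright sphere of radius ~4 voxels in a 32^3 volume
    vol = np.zeros((32, 32, 32))
    z, y, x = np.ogrid[:32, :32, :32]
    vol[(z - 16) ** 2 + (y - 16) ** 2 + (x - 16) ** 2 < 16] = 1.0
    resp, scale = blob_response(vol, sigmas=[1, 2, 3, 4])
    print("peak response at sigma =",
          scale[np.unravel_index(resp.argmax(), resp.shape)])
    ```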

  10. Spatial network surrogates for disentangling complex system structure from spatial embedding of nodes

    NASA Astrophysics Data System (ADS)

    Wiedermann, Marc; Donges, Jonathan F.; Kurths, Jürgen; Donner, Reik V.

    2016-04-01

    Networks with nodes embedded in a metric space have gained increasing interest in recent years. The effects of spatial embedding on the networks' structural characteristics, however, are rarely taken into account when studying their macroscopic properties. Here, we propose a hierarchy of null models to generate random surrogates from a given spatially embedded network that can preserve certain global and local statistics associated with the nodes' embedding in a metric space. Comparing the original network's and the resulting surrogates' global characteristics allows one to quantify to what extent these characteristics are already predetermined by the spatial embedding of the nodes and links. We apply our framework to various real-world spatial networks and show that the proposed models capture macroscopic properties of the networks under study much better than standard random network models that do not account for the nodes' spatial embedding. Depending on the actual performance of the proposed null models, the networks are categorized into different classes. Since many real-world complex networks are in fact spatial networks, the proposed approach is relevant for disentangling the underlying complex system structure from spatial embedding of nodes in many fields, ranging from social systems over infrastructure and neurophysiology to climatology.
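
    An illustrative sketch of the simplest member of such a null-model hierarchy: node coordinates stay fixed while edges are rewired uniformly at random, so differences in network statistics between the original and the surrogate reflect structure beyond mere node placement. This is a generic example, not one of the paper's specific models.

    ```python
    import itertools
    import random
    from math import dist

    def random_surrogate(n_nodes, edges, seed=0):
        """Same node set and edge count; uniformly random edge placement."""
        rng = random.Random(seed)
        all_pairs = list(itertools.combinations(range(n_nodes), 2))
        return rng.sample(all_pairs, len(edges))

    rng = random.Random(42)
    coords = [(rng.random(), rng.random()) for _ in range(50)]  # spatial embedding
    edges = [(i, i + 1) for i in range(49)]                     # toy chain network
    surrogate = random_surrogate(50, edges)

    mean_len = lambda es: sum(dist(coords[i], coords[j]) for i, j in es) / len(es)
    print(f"mean link length: original {mean_len(edges):.3f}, "
          f"surrogate {mean_len(surrogate):.3f}")
    ```

    Stricter members of such a hierarchy would additionally preserve, e.g., the degree sequence or the link-length distribution, isolating progressively finer spatial effects.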

  11. Terascale Cluster for Advanced Turbulent Combustion Simulations

    DTIC Science & Technology

    2008-07-25

    We have given the name CATS (for Combustion And Turbulence Simulator) to the terascale system that was obtained through this grant. CATS ... InfiniBand interconnect. CATS includes an interactive login node and a file server, each holding in excess of 1 terabyte of file storage. The 35 active... compute nodes of CATS enable us to run up to 140-core parallel MPI batch jobs; one node is reserved to run the scheduler. CATS is operated and...

  12. Spatial analysis of bus transport networks using network theory

    NASA Astrophysics Data System (ADS)

    Shanmukhappa, Tanuja; Ho, Ivan Wang-Hei; Tse, Chi Kong

    2018-07-01

    In this paper, we analyze the bus transport network (BTN) structure considering the spatial embedding of the network for three cities, namely, Hong Kong (HK), London (LD), and Bengaluru (BL). We propose a novel approach called supernode graph structuring for modeling the bus transport network. A static demand estimation procedure is proposed to assign the node weights by considering the points of interest (POIs) and the population distribution in the city over various localized zones. In addition, the end-to-end delay is proposed as a parameter to measure the topological efficiency of the bus networks, instead of the shortest distance measure used in previous works. With the aid of supernode graph representation, important network parameters are analyzed for the directed, weighted, and geo-referenced bus transport networks. It is observed that the supernode concept has significant advantages in analyzing the inherent topological behavior. For instance, the scale-free and small-world behavior becomes evident with supernode representation as compared to conventional or regular graph representation for the Hong Kong network. Significant improvement in clustering, reduction in path length, and increase in centrality values are observed in all three networks with supernode representation. The correlation between topologically central nodes and geographically central nodes reveals the interesting fact that the proposed static demand estimation method for assigning node weights aids in better identifying the geographically significant nodes in the network. The impact of these geographically significant nodes on the local traffic behavior is demonstrated by simulation using the SUMO (Simulation of Urban Mobility) tool, which is also supported by real-world empirical data, and our results indicate that the traffic speed around a particular bus stop can reach a jammed state from a free-flow state due to the presence of these geographically important nodes. A comparison of the simulation and the empirical data provides useful information on how bus operators can better plan their routes and deploy stops considering the geographically significant nodes.

  13. Concurrent hypercube system with improved message passing

    NASA Technical Reports Server (NTRS)

    Peterson, John C. (Inventor); Tuazon, Jesus O. (Inventor); Lieberman, Don (Inventor); Pniel, Moshe (Inventor)

    1989-01-01

    A network of microprocessors, or nodes, are interconnected in an n-dimensional cube having bidirectional communication links along the edges of the n-dimensional cube. Each node's processor network includes an I/O subprocessor dedicated to controlling communication of message packets along a bidirectional communication link with each end thereof terminating at an I/O controlled transceiver. Transmit data lines are directly connected from a local FIFO through each node's communication link transceiver. Status and control signals from the neighboring nodes are delivered over supervisory lines to inform the local node that the neighbor node's FIFO is empty and the bidirectional link between the two nodes is idle for data communication. A clocking line between neighbors clocks a message into an empty FIFO at a neighbor's node and vice versa. Either neighbor may acquire control over the bidirectional communication link at any time, and thus each node has circuitry for checking whether or not the communication link is busy or idle, and whether or not the receive FIFO is empty. Likewise, each node can empty its own FIFO and in turn deliver a status signal to a neighboring node indicating that the local FIFO is empty. The system includes features of automatic message rerouting, block message transfer, and automatic parity checking and generation.
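
    For context, the neighbor structure of such an n-dimensional cube is compact to express: each node's neighbors differ in exactly one address bit, and the minimal routing distance is the Hamming distance of the addresses. A small sketch of the topology (not drawn from the patent itself):

    ```python
    # Hypercube addressing: neighbors via XOR with a one-hot bit mask.
    def neighbors(node: int, n_dims: int) -> list[int]:
        return [node ^ (1 << d) for d in range(n_dims)]

    def hops(src: int, dst: int) -> int:
        # Minimal routing distance = Hamming distance of the addresses.
        return bin(src ^ dst).count("1")

    assert neighbors(0b000, 3) == [0b001, 0b010, 0b100]
    assert hops(0b000, 0b111) == 3
    ```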

  14. Estimation of distributed Fermat-point location for wireless sensor networking.

    PubMed

    Huang, Po-Hsian; Chen, Jiann-Liang; Larosa, Yanuarius Teofilus; Chiang, Tsui-Lien

    2011-01-01

    This work presents a localization scheme for use in wireless sensor networks (WSNs) based on a proposed connectivity-based RF localization strategy called the distributed Fermat-point location estimation algorithm (DFPLE). DFPLE estimates location using the triangle formed by the intersections of three neighboring beacon nodes. The Fermat point is determined as the point minimizing the total distance to the three vertices of the triangle. The estimated location area is then refined using the Fermat point to minimize the error in estimating sensor node locations. DFPLE solves the problems of large errors and poor performance encountered by localization schemes based on a bounding box algorithm. Performance analysis of a 200-node development environment reveals, first, that when the number of sensor nodes is below 150, the mean error decreases rapidly as the node density increases, and when the number of sensor nodes exceeds 170, the mean error remains below 1% as the node density increases. Second, when the number of beacon nodes is less than 60, normal nodes lack sufficient beacon nodes to enable their locations to be estimated; however, the mean error changes only slightly as the number of beacon nodes increases above 60. Simulation results revealed that the proposed algorithm for estimating sensor positions is more accurate than existing algorithms and improves upon conventional bounding box strategies.
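
    The Fermat point of a triangle (the point minimizing total distance to the three vertices, when all angles are below 120 degrees) coincides with the geometric median and can be computed with Weiszfeld's iteration. The sketch below shows only that step, not the full DFPLE refinement procedure.

    ```python
    from math import hypot

    def fermat_point(vertices, iters=100, eps=1e-9):
        """Weiszfeld iteration for the geometric median of three points."""
        x = sum(v[0] for v in vertices) / 3.0   # start at the centroid
        y = sum(v[1] for v in vertices) / 3.0
        for _ in range(iters):
            wsum = wx = wy = 0.0
            for (vx, vy) in vertices:
                d = hypot(x - vx, y - vy)
                if d < eps:                     # sitting on a vertex: done
                    return (x, y)
                w = 1.0 / d                     # inverse-distance weight
                wsum += w; wx += w * vx; wy += w * vy
            x, y = wx / wsum, wy / wsum
        return (x, y)

    print(fermat_point([(0, 0), (4, 0), (2, 3)]))
    ```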

  15. Learning about and Practice of Designing Local Data Bases as an Harmonizing Factor.

    ERIC Educational Resources Information Center

    Neelameghan, A.

    This paper provides information workers with some practical approaches to the design, development, and use of local databases that form components of information storage and retrieval systems (ISR) and of automated library operations. Topics discussed include: (1) course objectives for the design and development of local databases for library and…

  16. Evaluation of a toxicogenomic approach to the local lymph node assay (LLNA).

    PubMed

    Boverhof, Darrell R; Gollapudi, B Bhaskar; Hotchkiss, Jon A; Osterloh-Quiroz, Mandy; Woolhiser, Michael R

    2009-02-01

    Genomic technologies have the potential to enhance and complement existing toxicology endpoints; however, assessment of these approaches requires a systematic evaluation including a robust experimental design with genomic endpoints anchored to traditional toxicology endpoints. The present study was conducted to assess the sensitivity of genomic responses when compared with the traditional local lymph node assay (LLNA) endpoint of lymph node cell proliferation and to evaluate the responses for their ability to provide insights into mode of action. Female BALB/c mice were treated with the sensitizer trimellitic anhydride (TMA), following the standard LLNA dosing regimen, at doses of 0.1, 1, or 10%, and traditional tritiated thymidine ((3)HTdR) incorporation and gene expression responses were monitored in the auricular lymph nodes. Additional mice, dosed with either vehicle or 10% TMA and sacrificed on day 4 or 10, were also included to examine temporal effects on gene expression. Analysis of (3)HTdR incorporation revealed TMA-induced stimulation indices of 2.8, 22.9, and 61.0 relative to vehicle, with an EC3 of 0.11%. Examination of the dose-response gene expression responses identified 9, 833, and 2122 differentially expressed genes relative to vehicle for the 0.1, 1, and 10% TMA dose groups, respectively. Calculation of EC3 values for differentially expressed genes did not identify a response that was more sensitive than the (3)HTdR value, although a number of genes displayed comparable sensitivity. Examination of temporal responses revealed 1760, 1870, and 953 differentially expressed genes at the 4-, 6-, and 10-day time points, respectively. Functional analysis revealed many responses displaying dose- and time-specific induction patterns within the functional categories of cellular proliferation and immune response, including numerous immunoglobulin genes which were highly induced at the day 10 time point. Overall, these experiments have systematically illustrated the potential utility of genomic endpoints to enhance the LLNA and support further exploration of this approach through examination of a more diverse array of chemicals.
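
    The reported EC3 of 0.11% is reproducible from the stimulation indices given above via the standard LLNA linear interpolation between the two doses bracketing a stimulation index of 3:

    ```python
    # Standard LLNA EC3 linear interpolation between the bracketing doses.
    def ec3(dose_lo, si_lo, dose_hi, si_hi):
        # (dose_lo, si_lo) has SI < 3; (dose_hi, si_hi) has SI >= 3
        return dose_lo + (3.0 - si_lo) / (si_hi - si_lo) * (dose_hi - dose_lo)

    # TMA data from this study: SI of 2.8 at 0.1% and 22.9 at 1%
    print(f"EC3 = {ec3(0.1, 2.8, 1.0, 22.9):.2f}%")   # -> 0.11%, as reported
    ```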

  17. Merkel cell carcinoma: An algorithm for multidisciplinary management and decision-making.

    PubMed

    Prieto, Isabel; Pérez de la Fuente, Teresa; Medina, Susana; Castelo, Beatriz; Sobrino, Beatriz; Fortes, Jose R; Esteban, David; Cassinello, Fernando; Jover, Raquel; Rodríguez, Nuria

    2016-02-01

    Merkel cell carcinoma (MCC) is a rare and aggressive neuroendocrine tumor of the skin. Therapeutic approach is often unclear, and considerable controversy exists regarding MCC pathogenesis and optimal management. Due to its rising incidence and poor prognosis, it is imperative to establish the optimal therapy for both the tumor and the lymph node basin, and for treatment to include sentinel node biopsy. Sentinel node biopsy is currently the most consistent predictor of survival for MCC patients, although there are conflicting views and a lack of awareness regarding node management. Tumor and node management involve different specialists, and their respective decisions and interventions are interrelated. No effective systemic treatment has been made available to date, and therefore patients continue to experience distant failure, often without local failure. This review aims to improve multidisciplinary decision-making by presenting scientific evidence of the contributions of each team member implicated in MCC management. Following this review of previously published research, the authors conclude that multidisciplinary team management is beneficial for care, and propose a multidisciplinary decision algorithm for managing this tumor. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Use of an ex vivo local lymph node assay to assess contact hypersensitivity potential.

    PubMed

    Piccotti, Joseph R; Kawabata, Thomas T

    2008-07-01

    The local lymph node assay (LLNA) is used to assess the contact hypersensitivity potential of compounds. In the standard assay, mice are treated topically with test compound to the dorsum of both ears on Days 1-3. The induction of a hypersensitivity response is assessed on Day 6 by injecting [(3)H]-thymidine into a tail vein and measuring thymidine incorporation into DNA of lymph node cells draining the ears. The ex vivo LLNA is conducted similarly except that lymphocyte proliferation is assessed after in vitro incubation of lymph node cells with [(3)H]-thymidine, which significantly reduces the amount of radioactive waste. The current study tested the use of this approach for hazard assessment of contact hypersensitivity and to estimate allergenic potency. Female BALB/c mice were treated on Days 1-3 with two nonsensitizers (4'-methoxyacetophenone, diethyl phthalate), three weak sensitizers (hydroxycitronellal, eugenol, citral), one weak-to-moderate sensitizer (hexylcinnamic aldehyde), two moderate sensitizers (isoeugenol, phenyl benzoate), and one strong sensitizer (dinitrochlorobenzene). On Day 6, isolated lymph node cells were incubated overnight with [(3)H]-thymidine, and thymidine incorporation was measured by liquid scintillation spectrophotometry. The ex vivo LLNA accurately distinguished the contact sensitizers from the nonsensitizing chemicals and correctly ranked the relative potency of the compounds tested. The EC3 values, i.e., the effective concentration of test substance needed to induce a stimulation index of 3, were as follows: 4'-methoxyacetophenone (> 50%), diethyl phthalate (> 50%), hydroxycitronellal (20.4%), eugenol (13.6%), citral (8.9%), isoeugenol (3.8%), hexylcinnamic aldehyde (2.7%), phenyl benzoate (2%), and dinitrochlorobenzene (0.02%). In addition, low inter-animal and inter-experiment variability was seen with 25% hexylcinnamic aldehyde (assay positive control). The results of the ex vivo LLNA in the current study were consistent with published reports using the standard LLNA and provided further evidence that supports the use of this alternative approach to assess the skin sensitization potential of test compounds.

  19. Anchor Node Localization for Wireless Sensor Networks Using Video and Compass Information Fusion

    PubMed Central

    Pescaru, Dan; Curiac, Daniel-Ioan

    2014-01-01

    Distributed sensing, computing and communication capabilities of wireless sensor networks require, in most situations, an efficient node localization procedure. In the case of random deployments in harsh or hostile environments, a general localization process within global coordinates is based on a set of anchor nodes able to determine their own position using GPS receivers. In this paper we propose another anchor node localization technique that can be used when GPS devices cannot accomplish their mission or are considered to be too expensive. This novel technique is based on the fusion of video and compass data acquired by the anchor nodes and is especially suitable for video- or multimedia-based wireless sensor networks. For these types of wireless networks the presence of video cameras is intrinsic, while the presence of digital compasses is also required for identifying the cameras' orientations. PMID:24594614

  20. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a non-invasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Methods Source imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies, in which the underlying network (nodes and connectivity pattern) is known, were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results Localization errors of network nodes were less than 5 mm, and normalized connectivity errors were ~20%, in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion Our study indicates that combined source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node locations and internodal connectivity). Significance The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473

  1. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source-imaging algorithms both to find the network nodes [regions of interest (ROIs)] and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis on the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of underlying brain networks. Computer simulation studies, in which the underlying network (nodes and connectivity pattern) is known, were performed; additionally, this approach has been evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or Magnetoencephalography (MEG). Localization errors of network nodes were less than 5 mm, and normalized connectivity errors were ∼20%, in estimating underlying brain networks in simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combined source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node locations and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.
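
    A minimal pairwise Granger-causality test of the kind applied to extracted source time-courses asks whether adding the lagged history of one series improves an autoregressive prediction of another. The sketch below is a generic bivariate F-test, not the authors' pipeline.

    ```python
    import numpy as np

    def granger_f(x, y, p=5):
        """F-statistic for 'x Granger-causes y' with p lags."""
        T = len(y)
        Y = y[p:]
        lags_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
        lags_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
        ones = np.ones((T - p, 1))
        X_r = np.hstack([ones, lags_y])            # restricted: own past only
        X_f = np.hstack([ones, lags_y, lags_x])    # full: plus x's past
        rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
        rss_r, rss_f = rss(X_r), rss(X_f)
        df_num, df_den = p, T - p - 2 * p - 1
        return ((rss_r - rss_f) / df_num) / (rss_f / df_den)

    # toy check: y is driven by x at lag 1, so F(x->y) should dominate F(y->x)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)
    y = np.convolve(x, [0, 0.8], mode="full")[:500] + 0.5 * rng.standard_normal(500)
    print(f"F(x->y) = {granger_f(x, y):.1f},  F(y->x) = {granger_f(y, x):.1f}")
    ```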

  2. [Feasibility analysis of sentinel lymph node biopsy in patients with breast cancer after local lumpectomy].

    PubMed

    Wang, J; Wang, X; Wang, W Y; Liu, J Q; Xing, Z Y; Wang, X

    2016-07-01

    To explore the feasibility, safety, and clinical application value of sentinel lymph node biopsy (SLNB) in patients with breast cancer after local lumpectomy. Clinical data of 195 patients who previously received local lumpectomy from January 2005 to April 2015 were retrospectively analyzed. All the patients with pathologic stage T1-2N0M0 breast cancer underwent SLNB. Methylene blue, carbon nanoparticle suspension, technetium-99m-labeled dextran, or a combination of these were used as tracers in the SLNB. The interval from lumpectomy to SLNB was 1-91 days (mean, 18.3 days), and the maximum diameter of tumors before the first operation was 0.2-4.5 cm (mean, 1.8 cm). The sentinel lymph node was successfully found in all the cases, and the detection rate was 100%. Forty-two patients received axillary lymph node dissection (ALND), and 19 patients had pathologically positive sentinel lymph nodes, giving an accuracy of 97.6%, sensitivity of 95.0%, a false negative rate of 5.0%, specificity of 100%, and a false positive rate of 0. Logistic regression analysis suggested that the age of patients was significantly associated with sentinel lymph node metastasis after local lumpectomy. For early breast cancer after breast tumor biopsy, the influence of local lumpectomy on the detection rate and accuracy of sentinel lymph node biopsy is not significant. Sentinel lymph node biopsy with an appropriately chosen tracing technique may still provide a high detection rate and accuracy.

  3. Dual-modality imaging with 99mTc and fluorescent indocyanine green using surface-modified silica nanoparticles for biopsy of the sentinel lymph node: an animal study

    PubMed Central

    2013-01-01

    Background We propose a new approach to facilitate sentinel node biopsy examination by multimodality imaging in which radioactive and near-infrared (NIR) fluorescent nanoparticles depict deeply situated sentinel nodes and fluorescent nodes with anatomical resolution in the surgical field. For this purpose, we developed polyamidoamine (PAMAM)-coated silica nanoparticles loaded with technetium-99m (99mTc) and indocyanine green (ICG). Methods We conducted animal studies to test the feasibility and utility of this dual-modality imaging probe. The mean diameter of the PAMAM-coated silica nanoparticles was 30 to 50 nm, as evaluated from the images of transmission electron microscopy and scanning electron microscopy. The combined labeling with 99mTc and ICG was verified by thin-layer chromatography before each experiment. A volume of 0.1 ml of the nanoparticle solution (7.4 MBq, except for one rat that was injected with 3.7 MBq, and 1 μg of an ICG derivative [ICG-sulfo-OSu]) was injected submucosally into the tongue of six male Wistar rats. Results Scintigraphic images showed increased accumulation of 99mTc in the neck of four of the six rats. Nineteen lymph nodes were identified in the dissected neck of the six rats, and a contact radiographic study showed three nodes with a marked increase in uptake and three nodes with a weak uptake. NIR fluorescence imaging provided real-time clear fluorescent images of the lymph nodes in the neck with anatomical resolution. Six lymph nodes showed weak (+) to strong (+++) fluorescence, whereas other lymph nodes showed no fluorescence. Nodes showing increased radioactivity coincided with the fluorescent nodes. The radioactivity of 15 excised lymph nodes from the four rats was assayed using a gamma well counter. Comparisons of the levels of radioactivity revealed a large difference between the high-fluorescence-intensity group (four lymph nodes; mean, 0.109% ± 0.067%) and the low- or no-fluorescence-intensity group (11 lymph nodes; mean, 0.001% ± 0.000%, p < 0.05). Transmission electron microscopy revealed that small black granules were localized to and dispersed within the cytoplasm of macrophages in the lymph nodes. Conclusion Although further studies are needed to determine the appropriate dose of the dual-imaging nanoparticle probe for effective sensitivity and safety, the results of this animal study revealed a novel method for improved node detection by a dual-modality approach for sentinel lymph node biopsy. PMID:23618132

  4. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks

    PubMed Central

    Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-01-01

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained properties of an underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, make target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the particle weights. Thus, particle effectiveness is enhanced, avoiding the “particle degeneracy” problem and improving localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network. PMID:29267252

  5. A Support Vector Learning-Based Particle Filter Scheme for Target Localization in Communication-Constrained Underwater Acoustic Sensor Networks.

    PubMed

    Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping

    2017-12-21

    Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained properties of an underwater environment, such as the restricted communication capacity of sensor nodes and sensing noises, make target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with a short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively updates the particle weights. Thus, particle effectiveness is enhanced, avoiding the "particle degeneracy" problem and improving localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
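
    For intuition, the weight-update/resample cycle that the LSSVR-enhanced scheme builds on is the generic bootstrap particle filter below, here applied to range-based localization with hypothetical anchor positions and noise levels; the LSSVR observation model itself is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # sensor nodes
    true_pos = np.array([40.0, 60.0])
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 2.0, 3)

    N = 2000
    particles = rng.uniform(0, 100, size=(N, 2))   # uniform prior over the field
    sigma = 2.0                                    # assumed range-noise std dev

    for _ in range(10):                            # a few filter iterations
        pred = np.linalg.norm(particles[:, None, :] - anchors[None], axis=2)
        loglik = -0.5 * np.sum((pred - ranges) ** 2, axis=1) / sigma ** 2
        w = np.exp(loglik - loglik.max()); w /= w.sum()   # particle weights
        idx = rng.choice(N, size=N, p=w)                  # resample
        particles = particles[idx] + rng.normal(0, 1.0, (N, 2))  # jitter/move

    print("estimate:", particles.mean(axis=0), "truth:", true_pos)
    ```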

  6. Anchor-free localization method for mobile targets in coal mine wireless sensor networks.

    PubMed

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequences. Firstly, a coal mine wireless sensor network is constructed in underground mines based on ZigBee technology. A non-metric MDS algorithm is then used to estimate the reference nodes' locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and experiments in a mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines.
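
    For intuition about the MDS step, the sketch below shows the classical (metric) relative of non-metric MDS, which recovers relative node coordinates from a matrix of pairwise distances via double centering; the paper's non-metric, rank-based variant is not reproduced here.

    ```python
    import numpy as np

    def classical_mds(D, dim=2):
        """Recover dim-D coordinates (up to rotation/translation) from distances."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
        B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
        vals, vecs = np.linalg.eigh(B)
        order = np.argsort(vals)[::-1][:dim]         # largest eigenvalues first
        return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

    # toy check: recover a 4-node unit square from its distance matrix
    pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
    D = np.linalg.norm(pts[:, None] - pts[None], axis=2)
    print(classical_mds(D))
    ```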

  7. Exploiting on-node heterogeneity for in-situ analytics of climate simulations via a functional partitioning framework

    NASA Astrophysics Data System (ADS)

    Sapra, Karan; Gupta, Saurabh; Atchley, Scott; Anantharaj, Valentine; Miller, Ross; Vazhkudai, Sudharshan

    2016-04-01

    Efficient resource utilization is critical for improved end-to-end computing and workflow of scientific applications. Heterogeneous node architectures, such as the GPU-enabled Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), present us with further challenges. In many HPC applications on Titan, the accelerators are the primary compute engines, while the CPUs orchestrate the offloading of work onto the accelerators and move the output back to main memory. In applications that do not exploit GPUs, on the other hand, CPU usage is dominant while the GPUs sit idle. We utilized the Heterogeneous Functional Partitioning (HFP) runtime framework, which can optimize resource usage on a compute node to expedite an application's end-to-end workflow. This approach differs from existing techniques for in-situ analyses in that it provides a framework for on-the-fly, on-node analysis by dynamically exploiting under-utilized resources. We have implemented in the Community Earth System Model (CESM) a new concurrent diagnostic processing capability enabled by the HFP framework. Various single-variate statistics, such as means and distributions, are computed in situ by launching HFP tasks on the GPU via the node-local HFP daemon. Since our current configuration of CESM does not use GPU resources heavily, we can move these tasks to the GPU using the HFP framework. Each rank running the atmospheric model in CESM pushes the variables of interest via HFP function calls to the HFP daemon. This node-local daemon is responsible for receiving the data from the main program and launching the designated analytics tasks on the GPU. We have implemented these analytics tasks in C and use OpenACC directives to enable GPU acceleration. This methodology is also advantageous when executing GPU-enabled configurations of CESM, where the CPUs are idle during portions of the runtime. Our results demonstrate that it is more efficient to use the HFP framework to offload these tasks to GPUs than to perform them in the main application: we observe increased resource utilization and overall productivity when the HFP framework manages the end-to-end workflow.

  8. Software Defined Cyberinfrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, Ian; Blaiszik, Ben; Chard, Kyle

    Within and across thousands of science labs, researchers and students struggle to manage data produced in experiments, simulations, and analyses. Largely manual research data lifecycle management processes mean that much time is wasted, research results are often irreproducible, and data sharing and reuse remain rare. In response, we propose a new approach to data lifecycle management in which researchers are empowered to define the actions to be performed at individual storage systems when data are created or modified: actions such as analysis, transformation, copying, and publication. We term this approach software-defined cyberinfrastructure because users can implement powerful data management policies by deploying rules to local storage systems, much as software-defined networking allows users to configure networks by deploying rules to switches. We argue that this approach can enable a new class of responsive distributed storage infrastructure that will accelerate research innovation by allowing any researcher to associate data workflows with data sources, whether local or remote, for such purposes as data ingest, characterization, indexing, and sharing. We report on early experiments with this approach in the context of experimental science, in which a simple if-trigger-then-action (IFTA) notation is used to define rules.
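
    The if-trigger-then-action idea maps naturally onto a tiny rule engine. The sketch below is a hypothetical Python illustration of the concept only; the Rule class, the event fields, and the .h5 publication rule are invented for the example and are not the IFTA notation used in the paper.

        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Rule:
            """An if-trigger-then-action rule attached to a storage system."""
            trigger: Callable[[dict], bool]    # predicate over a file-system event
            action: Callable[[dict], None]     # data-management action to perform

        def dispatch(event: dict, rules: list) -> None:
            """Run the action of every rule whose trigger matches the event."""
            for rule in rules:
                if rule.trigger(event):
                    rule.action(event)

        # Hypothetical rule: when a new .h5 file appears, index and publish it.
        rules = [
            Rule(
                trigger=lambda e: e["type"] == "create" and e["path"].endswith(".h5"),
                action=lambda e: print(f"index and publish {e['path']}"),
            )
        ]
        dispatch({"type": "create", "path": "/data/run42.h5"}, rules)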

  9. Voltage control in pulsed system by predict-ahead control

    DOEpatents

    Payne, Anthony N.; Watson, James A.; Sampayan, Stephen E.

    1994-01-01

    A method and apparatus for predict-ahead pulse-to-pulse voltage control in a pulsed power supply system is disclosed. A DC power supply network is coupled to a resonant charging network via a first switch. The resonant charging network is coupled at a node to a storage capacitor. An output load is coupled to the storage capacitor via a second switch. A de-Q-ing network is coupled to the resonant charging network via a third switch. The trigger for the third switch is a derived function of the initial voltage of the power supply network, the initial voltage of the storage capacitor, and the present voltage of the storage capacitor. A first trigger closes the first switch and charges the capacitor. The third trigger is asserted according to the derived function to close the third switch. When the third switch is closed, the first switch opens and voltage on the node is regulated. The second trigger may be thereafter asserted to discharge the capacitor into the output load.

  10. Voltage control in pulsed system by predict-ahead control

    DOEpatents

    Payne, A.N.; Watson, J.A.; Sampayan, S.E.

    1994-09-13

    A method and apparatus for predict-ahead pulse-to-pulse voltage control in a pulsed power supply system is disclosed. A DC power supply network is coupled to a resonant charging network via a first switch. The resonant charging network is coupled at a node to a storage capacitor. An output load is coupled to the storage capacitor via a second switch. A de-Q-ing network is coupled to the resonant charging network via a third switch. The trigger for the third switch is a derived function of the initial voltage of the power supply network, the initial voltage of the storage capacitor, and the present voltage of the storage capacitor. A first trigger closes the first switch and charges the capacitor. The third trigger is asserted according to the derived function to close the third switch. When the third switch is closed, the first switch opens and voltage on the node is regulated. The second trigger may be thereafter asserted to discharge the capacitor into the output load. 4 figs.

  11. Sentinel Node Detection in Head and Neck Malignancies: Innovations in Radioguided Surgery

    PubMed Central

    Vermeeren, L.; Klop, W. M. C.; van den Brekel, M. W. M.; Balm, A. J. M.; Nieweg, O. E.; Valdés Olmos, R. A.

    2009-01-01

    Sentinel node mapping is becoming a routine procedure for staging of various malignancies, because it can determine lymph node status more precisely. Due to anatomical problems, localizing sentinel nodes in the head and neck region on the basis of conventional images can be difficult. New diagnostic tools can provide better visualization of sentinel nodes. In an attempt to keep up with possible scientific progress, this article reviews new and innovative tools for sentinel node localization in this specific area. The overview comprises a short introduction of the sentinel node procedure as well as indications in the head and neck region. Then the results of SPECT/CT for sentinel node detection are described. Finally, a portable gamma camera to enable intraoperative real-time imaging with improved sentinel node detection is described. PMID:20016804

  12. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  13. System for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2006-07-04

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  14. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Elmore, Mark Thomas [Oak Ridge, TN; Reed, Joel Wesley [Knoxville, TN; Treadwell, Jim N [Louisville, TN; Samatova, Nagiza Faridovna [Oak Ridge, TN

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  15. Data Acquisition Based on Stable Matching of Bipartite Graph in Cooperative Vehicle–Infrastructure Systems †

    PubMed Central

    Tang, Xiaolan; Hong, Donghui; Chen, Wenlong

    2017-01-01

    Existing studies on data acquisition in vehicular networks often take the mobile vehicular nodes as data carriers. However, their autonomous movements, limited resources and security risks impact the quality of services. In this article, we propose a data acquisition model using stable matching of bipartite graph in cooperative vehicle-infrastructure systems, namely, DAS. Contents are distributed to roadside units, while vehicular nodes support supplementary storage. The original distribution problem is formulated as a stable matching problem of bipartite graph, where the data and the storage cells compose two sides of vertices. Regarding the factors relevant with the access ratio and delay, the preference rankings for contents and roadside units are calculated, respectively. With a multi-replica preprocessing algorithm to handle the potential one-to-many mapping, the matching problem is addressed in polynomial time. In addition, vehicular nodes carry and forward assistant contents to deliver the failed packets because of bandwidth competition. Furthermore, an incentive strategy is put forward to boost the vehicle cooperation and to achieve a fair bandwidth allocation at roadside units. Experiments show that DAS achieves a high access ratio and a small storage cost with an acceptable delay. PMID:28594359
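
    The core step above, a stable matching between contents and storage cells, can be illustrated with the classic Gale-Shapley algorithm. The sketch below is a one-to-one simplification with made-up preference lists; the paper derives its preference rankings from access ratio and delay and adds a multi-replica preprocessing step that is omitted here.

        def stable_match(content_prefs, cell_prefs):
            """Gale-Shapley: contents propose to storage cells; returns a stable
            one-to-one assignment {content: cell}. Preferences are ranked lists."""
            free = list(content_prefs)                      # unmatched contents
            next_choice = {c: 0 for c in content_prefs}     # next cell to try
            engaged = {}                                    # cell -> content
            rank = {cell: {c: i for i, c in enumerate(p)}   # each cell's ranking
                    for cell, p in cell_prefs.items()}
            while free:
                c = free.pop()
                cell = content_prefs[c][next_choice[c]]
                next_choice[c] += 1
                if cell not in engaged:
                    engaged[cell] = c
                elif rank[cell][c] < rank[cell][engaged[cell]]:
                    free.append(engaged[cell])              # displace weaker match
                    engaged[cell] = c
                else:
                    free.append(c)                          # rejected, try next cell
            return {c: cell for cell, c in engaged.items()}

        # Toy usage with invented contents and roadside-unit storage cells.
        prefs_content = {"video": ["rsu1", "rsu2"], "map": ["rsu1", "rsu2"]}
        prefs_cell = {"rsu1": ["map", "video"], "rsu2": ["video", "map"]}
        print(stable_match(prefs_content, prefs_cell))  # {'map': 'rsu1', 'video': 'rsu2'}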

  16. Horizontally scaling dCache SRM with the Terracotta platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Crawford, M.; Moibenko, A.

    2011-01-01

    The dCache disk caching file system has been chosen by a majority of LHC experiment Tier 1 centers for their data storage needs. It is also deployed at many Tier 2 centers. The Storage Resource Manager (SRM) is a standardized grid storage interface and a single point of remote entry into dCache, and hence is a critical component. SRM must scale to increasing transaction rates and remain resilient against changing usage patterns. The initial implementation of the SRM service in dCache suffered from an inability to support clustered deployment, and its performance was limited by the hardware of a single node. Using the Terracotta platform, we added the ability to horizontally scale the dCache SRM service to run on multiple nodes in a cluster configuration, coupled with network load balancing. This gives site administrators the ability to increase the performance and reliability of the SRM service to face the ever-increasing requirements of LHC data handling. In this paper we describe the previous limitations of the SRM server architecture and how the Terracotta platform allowed us to readily convert a single-node service into a highly scalable clustered application.

  17. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are treated as particles with masses, connected to their neighbor nodes by virtual springs. The virtual springs force the particles to move from randomly set initial positions toward the true node positions. Therefore, a blind node's position can be determined by the LASM algorithm by calculating the forces exerted by its neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite increases in the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
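
    The spring-model idea can be illustrated with a few lines of force relaxation. This is a schematic sketch, not the LASM algorithm itself: the learning rate, step count, and toy three-node topology are assumptions, and the paper's three patches (local-optimum avoidance, bad-node rejection, node variation) are omitted.

        import numpy as np

        rng = np.random.default_rng(1)

        def spring_localize(pos, edges, measured, steps=500, lr=0.05):
            """Relax a virtual spring system: each edge (i, j) pulls or pushes
            its two nodes toward the measured inter-node distance. pos is an
            (n, 2) array of position estimates, updated in place."""
            for _ in range(steps):
                for (i, j), d0 in zip(edges, measured):
                    delta = pos[j] - pos[i]
                    d = np.linalg.norm(delta) + 1e-9     # avoid division by zero
                    force = (d - d0) * delta / d         # Hooke's law along the edge
                    pos[i] += lr * force                 # both endpoints move toward
                    pos[j] -= lr * force                 # the measured separation
            return pos

        # Toy usage: three nodes whose pairwise ranges form an equilateral triangle.
        edges = [(0, 1), (1, 2), (0, 2)]
        measured = [1.0, 1.0, 1.0]
        pos = rng.random((3, 2))                         # random initial guesses
        pos = spring_localize(pos, edges, measured)
        print([round(float(np.linalg.norm(pos[j] - pos[i])), 3) for i, j in edges])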

  18. Fragmenting networks by targeting collective influencers at a mesoscopic level.

    PubMed

    Kobayashi, Teruyoshi; Masuda, Naoki

    2016-11-25

    A practical approach to protecting networks against epidemic processes such as spreading of infectious diseases, malware, and harmful viral information is to remove some influential nodes beforehand to fragment the network into small components. Because determining the optimal order to remove nodes is a computationally hard problem, various approximate algorithms have been proposed to efficiently fragment networks by sequential node removal. Morone and Makse proposed an algorithm employing the non-backtracking matrix of given networks, which outperforms various existing algorithms. In fact, many empirical networks have community structure, compromising the assumption of local tree-like structure on which the original algorithm is based. We develop an immunization algorithm by synergistically combining the Morone-Makse algorithm and coarse graining of the network in which we regard a community as a supernode. In this way, we aim to identify nodes that connect different communities at a reasonable computational cost. The proposed algorithm works more efficiently than the Morone-Makse and other algorithms on networks with community structure.
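
    As a rough illustration of targeting influencers at the mesoscopic level, the sketch below coarse-grains a graph into communities and removes the nodes with the most inter-community edges. It is a simplified proxy, not the Morone-Makse collective-influence algorithm or the authors' exact coarse-graining; the community detector and the planted-partition test graph are arbitrary choices for the example.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        def bridge_targets(G, k):
            """Pick k nodes to remove, preferring nodes whose edges cross
            community boundaries, i.e. nodes that glue the coarse-grained
            supernodes together."""
            label = {}
            for i, comm in enumerate(greedy_modularity_communities(G)):
                for v in comm:
                    label[v] = i
            cross = {v: sum(label[u] != label[v] for u in G[v]) for v in G}
            return sorted(G, key=lambda v: cross[v], reverse=True)[:k]

        # Toy usage: four planted communities; target the nodes bridging them.
        G = nx.planted_partition_graph(4, 25, p_in=0.3, p_out=0.02, seed=7)
        targets = bridge_targets(G, k=5)
        G.remove_nodes_from(targets)
        print(max(len(c) for c in nx.connected_components(G)))  # largest component left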

  19. Fragmenting networks by targeting collective influencers at a mesoscopic level

    NASA Astrophysics Data System (ADS)

    Kobayashi, Teruyoshi; Masuda, Naoki

    2016-11-01

    A practical approach to protecting networks against epidemic processes such as spreading of infectious diseases, malware, and harmful viral information is to remove some influential nodes beforehand to fragment the network into small components. Because determining the optimal order to remove nodes is a computationally hard problem, various approximate algorithms have been proposed to efficiently fragment networks by sequential node removal. Morone and Makse proposed an algorithm employing the non-backtracking matrix of given networks, which outperforms various existing algorithms. In fact, many empirical networks have community structure, compromising the assumption of local tree-like structure on which the original algorithm is based. We develop an immunization algorithm by synergistically combining the Morone-Makse algorithm and coarse graining of the network in which we regard a community as a supernode. In this way, we aim to identify nodes that connect different communities at a reasonable computational cost. The proposed algorithm works more efficiently than the Morone-Makse and other algorithms on networks with community structure.

  20. Fragmenting networks by targeting collective influencers at a mesoscopic level

    PubMed Central

    Kobayashi, Teruyoshi; Masuda, Naoki

    2016-01-01

    A practical approach to protecting networks against epidemic processes such as spreading of infectious diseases, malware, and harmful viral information is to remove some influential nodes beforehand to fragment the network into small components. Because determining the optimal order to remove nodes is a computationally hard problem, various approximate algorithms have been proposed to efficiently fragment networks by sequential node removal. Morone and Makse proposed an algorithm employing the non-backtracking matrix of given networks, which outperforms various existing algorithms. In fact, many empirical networks have community structure, compromising the assumption of local tree-like structure on which the original algorithm is based. We develop an immunization algorithm by synergistically combining the Morone-Makse algorithm and coarse graining of the network in which we regard a community as a supernode. In this way, we aim to identify nodes that connect different communities at a reasonable computational cost. The proposed algorithm works more efficiently than the Morone-Makse and other algorithms on networks with community structure. PMID:27886251

  1. Active pixel sensor array with electronic shuttering

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor)

    2002-01-01

    An active pixel cell includes an electronic shuttering capability. The cell can be shuttered to prevent additional charge accumulation. One mode transfers the current charge to a storage node that is blocked against accumulation of optical radiation. The charge is sampled from a floating node. Since the charge is stored, the node can be sampled at the beginning and the end of every cycle. Another aspect allows charge to spill out of the well whenever the charge amount exceeds some threshold, thereby providing anti-blooming.

  2. [The incidence of human papilloma virus associated vulvar cancer in younger women is increasing and wide local excision with sentinel lymph node biopsy has become standard].

    PubMed

    Fehr, Mathias K

    2011-10-01

    Sentinel lymph node (SLN) dissections have been shown to be sensitive for the evaluation of nodal basins for metastatic disease and are associated with decreased short-term and long-term morbidity when compared with complete lymph node dissection. There has been increasing interest in the use of SLN technology in gynecologic cancers. This review assesses the current evidence-based literature on the use of SLN dissections in gynecologic malignancies. Recent literature continues to support the safety and feasibility of SLN biopsy for early-stage vulvar cancer, with negative predictive values approaching 100 % and low false-negative rates. For endometrial cancer, in contrast, most studies have reported low false-negative rates but variable sensitivities and low detection rates of the sentinel node. Studies examining the utility of SLN biopsy in early-stage cervical cancer remain promising, with detection rates and sensitivities greater than 90 % and low false-negative rates for stage 1B1 tumors. SLN dissections have been shown to be effective and safe in certain, select vulvar cancer patients and can be considered an alternative surgical approach for these patients. For endometrial and cervical cancer, SLN dissection continues to show encouraging results but needs further investigation.

  3. Does communication help people coordinate?

    PubMed Central

    2017-01-01

    Theoretical and experimental investigations have consistently demonstrated that collective performance in a variety of tasks can be significantly improved by allowing communication. We present the results of the first experiment systematically investigating the value of communication in networked consensus. The goal of all tasks in our experiments is for subjects to reach global consensus, even though nodes can only observe the choices of their immediate neighbors. Unlike previous networked consensus tasks, our experiments allow subjects to communicate either with their immediate neighbors (locally) or with the entire network (globally). Moreover, we consider treatments in which essentially arbitrary messages can be sent, as well as those in which only one type of message is allowed, informing others about a node's local state. We find that local communication adds minimal value: the fraction of games solved is essentially identical to treatments with no communication. The ability to communicate globally, in contrast, offers a significant performance improvement. In addition, we find that constraining people to only exchange messages about local state is significantly better than unconstrained communication. We observe that individual behavior is qualitatively consistent across settings: people clearly react to the messages they receive in all communication settings. However, we find that messages received in local communication treatments are relatively uninformative, whereas global communication offers a substantial information advantage. Exploring mixed communication settings, in which only a subset of agents are global communicators, we find that a significant number of global communicators is needed for performance to approach the success achieved when everyone communicates globally. However, global communicators have a significant advantage: a small, tightly connected minority of globally communicating nodes can successfully steer outcomes towards their preferences, although this can be significantly mitigated when all other nodes have the ability to communicate locally with their neighbors. PMID:28178295

  4. CONSIDERATIONS ON ANATOMY AND PHYSIOLOGY OF LYMPH VESSELS OF UPPER AERO DIGESTIVE ORGANS AND CERVICAL SATELLITE LYMPH NODE GROUP.

    PubMed

    Ciupilan, Corina; Stan, C I

    2016-01-01

    The almost constant locoregional spread of cancers of the upper aerodigestive organs requires the same special attention to cervical lymph node metastases as to the primary neoplastic focus. Surgical therapy, alone or in combination, has a mutilating, damaging character, resulting in the loss of an organ and its function, most often with social implications, involving physical distortions with aesthetic consequences that make the reintegration of the individual into society questionable. The problem of cervical lymph node metastases is vast and complex, which is why we approached several anatomical and physiological aspects of the lymph vessels of the aerodigestive organs. Among the elements available during treatment, the site of the tumour, its histologic grade, and its infiltrative nature each significantly influence the possibility of developing metastases.

  5. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signatures of known attack traffic; instead, it is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage, and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. This minimizes the need to move large amounts of audit-log data through resource-limited nodes and locates routines closer to that data. Performance of the unsupervised algorithms is evaluated against the network intrusions of black hole, flooding, Sybil and other denial-of-service attacks in simulations of published scenarios. Results for scenarios with intentionally malfunctioning sensors show the robustness of the two-stage approach to intrusion anomalies.
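
    The two-stage framework described above, clustering to compact the data followed by an SVM-based anomaly detector, can be sketched with scikit-learn. Everything here is illustrative: the synthetic "normal" and "attack" feature vectors, the cluster count, and the one-class SVM hyperparameters are assumptions, and the paper's mobile-agent distribution across WSN nodes is not modeled.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(2)
        normal = rng.normal(0.0, 1.0, size=(500, 8))    # cross-layer features, normal
        attack = rng.normal(4.0, 1.0, size=(20, 8))     # anomalous traffic, held out

        # Stage 1: unsupervised clustering compacts the traffic to a few centroids.
        kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(normal)

        # Stage 2: a one-class SVM learns the signature of normal traffic only.
        detector = OneClassSVM(nu=0.05, gamma="scale").fit(kmeans.cluster_centers_)

        print(detector.predict(attack[:5]))             # -1 flags anomalies, +1 normal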

  6. Magnitude of pseudopotential localization errors in fixed node diffusion quantum Monte Carlo

    DOE PAGES

    Kent, Paul R.; Krogel, Jaron T.

    2017-06-22

    Growth in computational resources has led to the application of real-space diffusion quantum Monte Carlo to increasingly heavy elements. Although generally assumed to be small, we find that when using standard techniques, the pseudopotential localization error can be large, on the order of an electron volt for an isolated cerium atom. We formally show that the localization error can be reduced to zero with improvements to the Jastrow factor alone, and we define a metric of Jastrow sensitivity that may be useful in the design of pseudopotentials. We employ an extrapolation scheme to extract the bare fixed-node energy and estimate the localization error in both the locality approximation and the T-moves scheme for the Ce atom in charge states 3+/4+. The locality approximation exhibits the lowest Jastrow sensitivity and generally smaller localization errors than T-moves, although the locality approximation energy approaches the localization-free limit from above/below for the 3+/4+ charge state. We find that energy-minimized Jastrow factors including three-body electron-electron-ion terms are the most effective at reducing the localization error for both the locality approximation and T-moves for the case of the Ce atom. Less complex or variance-minimized Jastrows are generally less effective. Finally, our results suggest that further improvements to Jastrow factors and trial wavefunction forms may be needed to reduce localization errors to chemical accuracy when medium-core pseudopotentials are applied to heavy elements such as Ce.

  7. Protocol for multiple node network

    NASA Technical Reports Server (NTRS)

    Kirkham, Harold (Inventor)

    1995-01-01

    The invention is a multiple interconnected network of intelligent message-repeating remote nodes which employs an antibody recognition message termination process performed by all remote nodes and a remote node polling process performed by other nodes which are master units controlling remote nodes in respective zones of the network assigned to respective master nodes. Each remote node repeats only those messages originated in the local zone, to provide isolation among the master nodes.

  8. Protocol for multiple node network

    NASA Technical Reports Server (NTRS)

    Kirkham, Harold (Inventor)

    1994-01-01

    The invention is a multiple interconnected network of intelligent message-repeating remote nodes which employs an antibody recognition message termination process performed by all remote nodes and a remote node polling process performed by other nodes which are master units controlling remote nodes in respective zones of the network assigned to respective master nodes. Each remote node repeats only those messages originated in the local zone, to provide isolation among the master nodes.

  9. File Transfers from Peregrine to the Mass Storage System - Gyrfalcon

    Science.gov Websites

    Transfers can be performed from a login node or, interactively, from a data-transfer queue node. Large numbers of files should first be combined into container files using the tar command (for example, $ cd /scratch//directory1 followed by tar). The rsync command is convenient for handling a large number of files.

  10. Laparoscopic completion radical cholecystectomy for T2 gallbladder cancer.

    PubMed

    Gumbs, Andrew A; Hoffman, John P

    2010-12-01

    The role of minimally invasive surgery in the surgical management of gallbladder cancer is a matter of controversy. Because of the authors' growing experience with laparoscopic liver and pancreatic surgery, they have begun offering patients laparoscopic completion partial hepatectomies of the gallbladder bed with laparoscopic hepatoduodenal lymphadenectomy. The video shows the steps needed to perform laparoscopic resection of the residual gallbladder bed, the hepatoduodenal lymph nodes, and the residual cystic duct stump in a setting with a positive cystic stump margin. The skin and fascia around the previous extraction site are resected, and this site is used for specimen retrieval during the second operation. To date, three patients have undergone laparoscopic radical cholecystectomy with hepatoduodenal lymph node dissection for gallbladder cancer. The average number of lymph nodes retrieved was 3 (range, 1-6), and the average estimated blood loss was 117 ml (range, 50-200 ml). The average operative time was 227 min (range, 120-360 min), and the average hospital length of stay was 4 days (range, 3-5 days). No morbidity or mortality was observed during 90 days of follow-up for each patient. Although controversy exists as to the best surgical approach for gallbladder cancer diagnosed after routine laparoscopic cholecystectomy, the minimally invasive approach seems feasible and safe, even after previous hepatobiliary surgery. If the previous extraction site cannot be ascertained, all port sites can be excised locally. Larger studies are needed to determine whether the minimally invasive approach to postoperatively diagnosed early-stage gallbladder cancer has any drawbacks.

  11. A Self-Organizing Incremental Neural Network based on local distribution learning.

    PubMed

    Xing, Youlu; Shi, Xiaofeng; Shen, Furao; Zhou, Ke; Zhao, Jinxi

    2016-12-01

    In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, which is called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network. The nodes of the network store rich local information regarding the learning data. The adaptive vigilance parameter guarantees that LD-SOINN is able to add new nodes for new knowledge automatically and that the number of nodes will not grow unlimitedly. While the learning process continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A denoising process based on density is designed to reduce the influence of noise. Experiments show that the LD-SOINN performs well on both artificial and real-world data. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. [Systemic approach to ecological safety at radiation-hazardous facilities involved in the localization of low- and medium-level radioactive waste].

    PubMed

    Veselov, E I

    2011-01-01

    The article specifies a systemic approach to the ecological safety of radiation-hazardous facilities. The authors present the stages of work and an algorithm of decisions for preserving the reliability of storage for radiation-hazardous waste. The findings are that ecological safety can be provided through three approaches: complete removal of the radiation-hazardous waste, removal of the more dangerous waste from the existing buildings, or increasing the reliability of prolonged localization of the radiation-hazardous waste in its initial place. The systemic approach presented could be implemented at various radiation-hazardous facilities.

  13. Salient object detection: manifold-based similarity adaptation approach

    NASA Astrophysics Data System (ADS)

    Zhou, Jingbo; Ren, Yongfeng; Yan, Yunyang; Gao, Shangbing

    2014-11-01

    A saliency detection algorithm based on manifold-based similarity adaptation is proposed. The proposed algorithm is divided into three steps. First, we segment an input image into superpixels, which are represented as the nodes in a graph. Second, a new similarity measurement is used: the weight matrix of the graph, which indicates the similarities between the nodes, is computed with a similarity-based method that also captures the manifold structure of the image patches, with the graph edges determined in a data-adaptive manner in terms of both similarity and manifold structure. Third, we use a local reconstruction method as a diffusion method to obtain the saliency maps. The objective function in the proposed method is based on local reconstruction, with which the estimated weights capture the manifold structure. Experiments on four benchmark databases demonstrate the accuracy and robustness of the proposed method.

  14. Advances in radioguided surgery in oncology.

    PubMed

    Valdés Olmos, Renato A; Vidal-Sicart, Sergi; Manca, Gianpiero; Mariani, Giuliano; León-Ramírez, Luisa F; Rubello, Domenico; Giammarile, Francesco

    2017-09-01

    The sentinel lymph node (SLN) biopsy is probably the most well-known radioguided technique in surgical oncology. Today SLN biopsy reduces the morbidity associated with lymphadenectomy and increases the identification rate of occult lymphatic metastases by offering the pathologist the lymph nodes with the highest probability of containing metastatic cells. These advantages may result in a change in clinical management both in melanoma and breast cancer patients. The SLN evaluation by pathology currently implies tumor burden stratification for further prognostic information. The concept of SLN biopsy includes pre-surgical lymphoscintigraphy as a "roadmap" to guide the surgeon toward the SLNs and to localize unpredictable lymphatic drainage patterns. In addition to planar images, SPECT/CT improves SLN detection, especially in sites closer to the injection site, providing anatomic landmarks which are helpful in localizing SLNs in difficult to interpret studies. The use of intraoperative imaging devices allows a better surgical approach and SLN localization. Several studies report the value of such devices for excision of additional sentinel nodes and for monitoring the whole procedure. The combination of preoperative imaging and radioguided localization constitutes the basis for a whole spectrum of basic and advanced nuclear medicine procedures, which recently have been encompassed under the term "guided intraoperative scintigraphic tumor targeting" (GOSTT). Beyond SLN biopsy, GOSTT includes procedures based on the detection of target lesions with visible uptake of tumor-seeking radiotracers on SPECT/CT or PET/CT, enabling their subsequent radioguided excisional biopsy for diagnostic or therapeutic purposes. The incorporation of new PET-tracers into nuclear medicine has reinforced this field, delineating new strategies for radioguided excision. In cases with insufficient lesion uptake after systemic radiotracer administration, intralesional injection of a tracer without migration may enable subsequent excision of the targeted tissue. This approach has been helpful in non-palpable breast cancer and in solitary pulmonary nodules. The introduction of allied technologies like fluorescence constitutes a recent advance aimed at refining the search for SLNs and tracer-avid lesions in the operating theatre in combination with radioguidance.

  15. Performances of multiprocessor multidisk architectures for continuous media storage

    NASA Astrophysics Data System (ADS)

    Gennart, Benoit A.; Messerli, Vincent; Hersch, Roger D.

    1996-03-01

    Multimedia interfaces increase the need for large image databases, capable of storing and reading streams of data with strict synchronicity and isochronicity requirements. In order to fulfill these requirements, we consider a parallel image server architecture which relies on arrays of intelligent disk nodes, each disk node being composed of one processor and one or more disks. This contribution analyzes through bottleneck performance evaluation and simulation the behavior of two multi-processor multi-disk architectures: a point-to-point architecture and a shared-bus architecture similar to current multiprocessor workstation architectures. We compare the two architectures on the basis of two multimedia algorithms: the compute-bound frame resizing by resampling and the data-bound disk-to-client stream transfer. The results suggest that the shared bus is a potential bottleneck despite its very high hardware throughput (400 Mbytes/s) and that an architecture with addressable local memories located closely to their respective processors could partially remove this bottleneck. The point-to-point architecture is scalable and able to sustain high throughputs for simultaneous compute-bound and data-bound operations.

  16. Mechanistic applicability domain classification of a local lymph node assay dataset for skin sensitization.

    PubMed

    Roberts, David W; Patlewicz, Grace; Kern, Petra S; Gerberick, Frank; Kimber, Ian; Dearman, Rebecca J; Ryan, Cindy A; Basketter, David A; Aptula, Aynur O

    2007-07-01

    The goal of eliminating animal testing in the predictive identification of chemicals with the intrinsic ability to cause skin sensitization is an important target, the attainment of which has recently been brought into even sharper relief by the EU Cosmetics Directive and the requirements of the REACH legislation. Development of alternative methods requires that the chemicals used to evaluate and validate novel approaches comprise not only confirmed skin sensitizers and non-sensitizers but also substances that span the full chemical mechanistic spectrum associated with skin sensitization. To this end, a recently published database of more than 200 chemicals tested in the mouse local lymph node assay (LLNA) has been examined in relation to various chemical reaction mechanistic domains known to be associated with sensitization. It is demonstrated here that the dataset does cover the main reaction mechanistic domains. In addition, it is shown that assignment to a reaction mechanistic domain is a critical first step in a strategic approach to understanding, ultimately on a quantitative basis, how chemical properties influence the potency of skin sensitizing chemicals. This understanding is necessary if reliable non-animal approaches, including (quantitative) structure-activity relationships (Q)SARs, read-across, and experimental chemistry based models, are to be developed.

  17. BridgeRank: A novel fast centrality measure based on local structure of the network

    NASA Astrophysics Data System (ADS)

    Salavati, Chiman; Abdollahpouri, Alireza; Manbari, Zhaleh

    2018-04-01

    Ranking nodes in complex networks has become an important task in many application domains. In a complex network, influential nodes are those that have the greatest spreading ability. Thus, identifying influential nodes based on their spreading ability is a fundamental task in applications such as viral marketing. One of the most important centrality measures for ranking nodes is closeness centrality, which is effective but suffers from high computational complexity, O(n³). This paper improves on closeness centrality by utilizing the local structure of nodes and presents a new ranking algorithm, called BridgeRank centrality. The proposed method computes a local centrality value for each node. For this purpose, communities are first detected, and the relationships between communities are ignored. Then, by applying a centrality measure within each community, one best critical node is extracted from each community. Finally, the nodes are ranked by computing the sum of the shortest path lengths from each node to the obtained critical nodes. We have also modified the proposed method by weighting the original BridgeRank and selecting several nodes from each community based on the density of that community. Our method can find the best nodes with high spreading ability at low time complexity, which makes it applicable to large-scale networks. To evaluate the performance of the proposed method, we use the SIR diffusion model. Experiments on real and artificial networks show that our method identifies influential nodes efficiently and achieves better performance than other recent methods.
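
    The BridgeRank recipe, one critical node per community and ranking by summed distance to those nodes, is compact enough to sketch with networkx. The choices below (greedy modularity for community detection, highest degree as the per-community critical node, the karate-club test graph) are assumptions made for illustration rather than the authors' exact configuration.

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        def bridgerank(G):
            """Rank nodes by the summed shortest-path length to one critical
            node per community; a smaller sum means a more influential node
            under this proxy."""
            critical = [max(c, key=G.degree)
                        for c in greedy_modularity_communities(G)]
            total = {v: 0 for v in G}
            for c in critical:
                lengths = nx.shortest_path_length(G, source=c)
                for v in G:
                    total[v] += lengths.get(v, len(G))   # penalize unreachable nodes
            return sorted(G, key=lambda v: total[v])

        G = nx.karate_club_graph()
        print(bridgerank(G)[:5])    # the five top-ranked nodes under this sketch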

  18. In vitro autoradiographic localization of angiotensin-converting enzyme in sarcoid lymph nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, R.K.; Chai, S.Y.; Dunbar, M.S.

    1986-09-01

    Angiotensin-converting enzyme (ACE) was localized in sarcoid lymph nodes by an in vitro autoradiographic technique using a synthetic ACE inhibitor of high affinity, ¹²⁵I-labelled 351A. The lymph nodes were from seven patients with active sarcoidosis who underwent mediastinoscopy and from six control subjects who had nodes resected at either mediastinoscopy or laparotomy. Angiotensin-converting enzyme was localized in the epithelioid cells of sarcoid granulomata in markedly increased amounts compared with control nodes, where it was restricted to vessels and some histiocytes. In sarcoid lymph nodes, there was little ACE present in lymphocytes or fibrous tissue. Sarcoid nodes with considerable fibrosis had much less intense ACE activity than the nonfibrotic nodes. The specific activity of ACE measured by an enzymatic assay in both the control and sarcoid lymph nodes closely reflected the ACE activity demonstrated by autoradiography. Sarcoid lymph nodes with fibrosis had an ACE specific activity of half that of nonfibrotic nodes (p less than 0.05). There was a 15-fold increase in specific ACE activity in sarcoid nodes (p less than 0.05) compared to normal. Serum ACE was significantly higher in those sarcoid patients whose lymph nodes were not fibrosed compared with those with fibrosis (p less than 0.01). This technique offers many advantages over the use of polyclonal antibodies. 351A is a highly specific ACE inhibitor, chemically defined and in limitless supply. This method enables the quantitation of results, and autoradiographs may be stored indefinitely for later comparison.

  19. Energy storage requirements of dc microgrids with high penetration renewables under droop control

    DOE PAGES

    Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; ...

    2015-01-09

    Energy storage is an important design component in microgrids with high-penetration renewable sources, needed to maintain the system because of the highly variable and sometimes stochastic nature of the sources. Storage devices can be distributed close to the sources and/or at the microgrid bus. In addition, storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but the energy storage optimization becomes more difficult. Our paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, the droop settings of the sources can be chosen to minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
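
    For orientation, the droop law underlying this analysis is a one-line relation: each source lowers its voltage set point in proportion to its output power. The sketch below shows the basic P-V droop computation with illustrative numbers only; the paper's contribution, choosing the droop gains to jointly minimize bus-voltage variation and storage capacity, is not reproduced here.

        def droop_voltage(v_nominal, droop_gain, power_out):
            """Basic P-V droop law: a source lowers its voltage set point in
            proportion to the power it supplies, so parallel sources share
            load without any central coordination."""
            return v_nominal - droop_gain * power_out

        # Illustrative numbers on a 380 V dc bus: at the same bus voltage
        # (370 V here), the source with the smaller droop gain carries more load.
        for gain, power in [(0.02, 500.0), (0.05, 200.0)]:
            print(f"gain={gain}: V = {droop_voltage(380.0, gain, power):.1f} V")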

  20. The respiratory local lymph node assay as a tool to study respiratory sensitizers.

    PubMed

    Arts, Josje H E; de Jong, Wim H; van Triel, Jos J; Schijf, Marcel A; de Klerk, Arja; van Loveren, Henk; Kuper, C Frieke

    2008-12-01

    The local lymph node assay (LLNA) is used to test the potential of low molecular weight (LMW) compounds to induce sensitization via the skin. In the present study, a respiratory LLNA was developed. Male BALB/c mice were exposed head/nose-only on three consecutive days for 45, 90, 180, or 360 min/day to various LMW allergens. Ear application (skin LLNA) was used as a positive control. Negative controls were exposed to the vehicle. Three days after the last exposure, proliferation was determined in the draining mandibular lymph nodes, and the respiratory tract was examined microscopically. Upon inhalation, the allergens trimellitic anhydride, phthalic anhydride, hexamethylene diisocyanate, toluene diisocyanate, isophorone diisocyanate (IPDI), dinitrochlorobenzene, and oxazolone were positive and showed stimulation indices (SIs) of up to 11, whereas trimeric IPDI, formaldehyde, and methyl salicylate were negative (viz. SI < 3). All compounds, except trimeric IPDI, induced histopathological lesions predominantly in the upper respiratory tract. Exposure by inhalation is a realistic approach to testing respiratory allergens. However, because of the local toxicity, the dose that can be applied is generally much lower than can be achieved by skin application. It is concluded that strong LMW allergens, regardless of their immunological nature, can sensitize the body not only via the skin but also via the respiratory tract. In addition, the contact allergens were as potent as the respiratory allergens, although the potency ranking differed from that in a skin LLNA.

  1. Building Intrusion Detection with a Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Wälchli, Markus; Braun, Torsten

    This paper addresses the detection and reporting of abnormal building access with a wireless sensor network. A common office room, offering space for two working persons, has been monitored with ten sensor nodes and a base station. The task of the system is to report suspicious office occupation, such as office searching by thieves; normal office occupation, on the other hand, should not raise alarms. In order to save communication energy, the system provides all nodes with adaptive short-term memory, so that a set of sensor activation patterns can be temporarily learned. The local memory is implemented as an Adaptive Resonance Theory (ART) neural network. Unknown event patterns detected at the sensor node level are reported to the base station, where the system-wide anomaly detection is performed. The anomaly detector is lightweight and completely self-learning. The system can run autonomously, or it could be used as a triggering system to turn on an additional high-resolution system on demand. Our building monitoring system has proven to work reliably in the different evaluated scenarios. Communication costs of up to 90% could be saved compared to a threshold-based approach without local memory.

  2. Mean PB To Failure - Initial results from a long-term study of disk storage patterns at the RACF

    NASA Astrophysics Data System (ADS)

    Caramarcu, C.; Hollowell, C.; Rao, T.; Strecker-Kellogg, W.; Wong, A.; Zaytsev, S. A.

    2015-12-01

    The RACF (RHIC-ATLAS Computing Facility) has operated a large, multi-purpose dedicated computing facility since the mid-1990s, serving a worldwide, geographically diverse scientific community that is a major contributor to various HEPN projects. A central component of the RACF is the Linux-based worker node cluster that is used for both computing and data storage purposes. It currently has nearly 50,000 computing cores and over 23 PB of storage capacity distributed over 12,000+ (non-SSD) disk drives. The majority of the 12,000+ disk drives provide a cost-effective solution for dCache/XRootD-managed storage, and a key concern is the reliability of this solution over the lifetime of the hardware, particularly as the number of disk drives and the storage capacity of individual drives grow. We report initial results of a long-term study to measure the lifetime PB read/written for disk drives in the worker node cluster. We discuss the historical disk drive mortality rate, disk drive manufacturers' published MPTF (Mean PB to Failure) data, and how they correlate with our results. The results help the RACF understand the productivity and reliability of its storage solutions and have implications for other highly available storage systems (NFS, GPFS, CVMFS, etc.) with large I/O requirements.

  3. Signaling completion of a message transfer from an origin compute node to a target compute node

    DOEpatents

    Blocksome, Michael A [Rochester, MN; Parker, Jeffrey J [Rochester, MN

    2011-05-24

    Signaling completion of a message transfer from an origin node to a target node includes: sending, by an origin DMA engine, an RTS message, the RTS message specifying an application message for transfer to the target node from the origin node; receiving, by the origin DMA engine, a remote get message containing a data descriptor for the message and a completion notification descriptor, the completion notification descriptor specifying a local direct put transfer operation for transferring data locally on the origin node; inserting, by the origin DMA engine in an injection FIFO buffer, the data descriptor followed by the completion notification descriptor; transferring, by the origin DMA engine to the target node, the message in dependence upon the data descriptor; and notifying, by the origin DMA engine, the application that transfer of the message is complete in dependence upon the completion notification descriptor.

  4. Signaling completion of a message transfer from an origin compute node to a target compute node

    DOEpatents

    Blocksome, Michael A [Rochester, MN

    2011-02-15

    Signaling completion of a message transfer from an origin node to a target node includes: sending, by an origin DMA engine, an RTS message, the RTS message specifying an application message for transfer to the target node from the origin node; receiving, by the origin DMA engine, a remote get message containing a data descriptor for the message and a completion notification descriptor, the completion notification descriptor specifying a local memory FIFO data transfer operation for transferring data locally on the origin node; inserting, by the origin DMA engine in an injection FIFO buffer, the data descriptor followed by the completion notification descriptor; transferring, by the origin DMA engine to the target node, the message in dependence upon the data descriptor; and notifying, by the origin DMA engine, the application that transfer of the message is complete in dependence upon the completion notification descriptor.

  5. Lake and wetland ecosystem services measuring water storage and local climate regulation

    NASA Astrophysics Data System (ADS)

    Wong, Christina P.; Jiang, Bo; Bohn, Theodore J.; Lee, Kai N.; Lettenmaier, Dennis P.; Ma, Dongchun; Ouyang, Zhiyun

    2017-04-01

    Developing interdisciplinary methods to measure ecosystem services is a scientific priority; however, progress remains slow, in part because we lack ecological production functions (EPFs) to quantitatively link ecohydrological processes to human benefits. In this study, we tested a new approach, combining a process-based model with regression models, to create EPFs to evaluate water storage and local climate regulation from a green infrastructure project on the Yongding River in Beijing, China. Seven artificial lakes and wetlands were established to improve local water storage and human comfort; evapotranspiration (ET) regulates both services. Managers want to minimize the trade-off between water losses and cooling to sustain water supplies while lowering the heat index (HI) to improve human comfort. We selected human benefit indicators using water storage targets and Beijing's HI, and used the Variable Infiltration Capacity model to determine the change in ET from the new ecosystems. We created EPFs to quantify the ecosystem services as marginal values [Δfinal ecosystem service/Δecohydrological process]: (1) Δwater loss (lake evaporation/volume)/Δdepth and (2) Δsummer HI/ΔET. We estimate that the new ecosystems increased local ET by 0.7 mm/d (20.3 W/m²) on the Yongding River. However, the ET rates are causing water storage shortfalls while producing no improvements in human comfort. The shallow lakes/wetlands are vulnerable to drying when inflow rates fluctuate; low depths lead to higher evaporative losses, causing water storage shortfalls with minimal cooling effects. We recommend managers make the lakes deeper to increase water storage, and plant shade trees to improve human comfort in the parks.

  6. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at participating nodes. Therefore, the feature-extraction method based on the Haar DWT is presented that employs a maximum-entropy measure to determine significant wavelet coefficients. Features are formed by calculating the energy of coefficients grouped around the competing clusters. A DWT-based feature extraction algorithm used for vehicle classification in WSNs can be enhanced by an added rule for selecting the optimal number of resolution levels to improve the correct classification rate and reduce energy consumption expended in local algorithm computations. Published field trial data for vehicular ground targets, measured with multiple sensor types, are used to evaluate the wavelet-assisted algorithms. Extracted features are used in established target recognition routines, e.g., the Bayesian minimum-error-rate classifier, to compare the effects on the classification performance of the wavelet compression. Simulations of feature sets and recognition routines at different resolution levels in target scenarios indicate the impact on classification rates, while formulas are provided to estimate reduction in resource use due to distributed compression.
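
    To make the wavelet-energy feature idea concrete, the sketch below computes a plain level-by-level Haar transform and the detail-band energies of a signal. It is a single-node simplification: the distributed, in-cluster lifting transform, the maximum-entropy coefficient selection, and the cluster grouping described in the record above are not reproduced, and the test signal is synthetic.

        import numpy as np

        def haar_level(x):
            """One Haar DWT level: pairwise averages (approximation) and
            pairwise differences (detail), each scaled by 1/sqrt(2)."""
            x = x[: len(x) // 2 * 2].reshape(-1, 2)
            approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
            detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
            return approx, detail

        def wavelet_energy_features(signal, levels=3):
            """Energy of the detail band at each resolution level: a compact
            feature vector for downstream target classification."""
            feats, approx = [], np.asarray(signal, dtype=float)
            for _ in range(levels):
                approx, detail = haar_level(approx)
                feats.append(float(np.sum(detail ** 2)))
            return feats

        # Toy usage: a noisy 40 Hz tone sampled at 512 points.
        t = np.linspace(0.0, 1.0, 512)
        sig = np.sin(2 * np.pi * 40 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
        print(wavelet_energy_features(sig))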

  7. Non-volatile memory for checkpoint storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumrich, Matthias A.; Chen, Dong; Cipolla, Thomas M.

    A system, method and computer program product for supporting system initiated checkpoints in high performance parallel computing systems and storing of checkpoint data to a non-volatile memory storage device. The system and method generates selective control signals to perform checkpointing of system related data in presence of messaging activity associated with a user application running at the node. The checkpointing is initiated by the system such that checkpoint data of a plurality of network nodes may be obtained even in the presence of user applications running on highly parallel computers that include ongoing user messaging activity. In one embodiment, the non-volatile memory is a pluggable flash memory card.

  8. Data Summarization in the Node by Parameters (DSNP): Local Data Fusion in an IoT Environment.

    PubMed

    Maschi, Luis F C; Pinto, Alex S R; Meneguette, Rodolfo I; Baldassin, Alexandro

    2018-03-07

    With the advent of the Internet of Things, billions of objects or devices are inserted into the global computer network, generating and processing data at a volume never imagined before. This paper proposes a way to collect and process local data through a data fusion technology called summarization. The main feature of the proposal is the local data fusion, through parameters provided by the application, ensuring the quality of data collected by the sensor node. In the evaluation, the sensor node was compared when performing the data summary with another that performed a continuous recording of the collected data. Two sets of nodes were created, one with a sensor node that analyzed the luminosity of the room, which in this case obtained a reduction of 97% in the volume of data generated, and another set that analyzed the temperature of the room, obtaining a reduction of 80% in the data volume. Through these tests, it has been proven that the local data fusion at the node can be used to reduce the volume of data generated, consequently decreasing the volume of messages generated by IoT environments.
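
    One plausible reading of "summarization by parameters" is a node that forwards a reading only when it differs from the last reported value by more than an application-supplied tolerance. The sketch below illustrates that idea only; the class name and threshold rule are assumptions, not the paper's DSNP algorithm.

        class Summarizer:
            """Suppress readings that are redundant within a tolerance
            supplied by the application (illustrative rule)."""

            def __init__(self, tolerance):
                self.tolerance = tolerance
                self.last = None

            def offer(self, value):
                if self.last is None or abs(value - self.last) > self.tolerance:
                    self.last = value
                    return value          # forward to the sink
                return None               # suppress the redundant reading

        readings = [21.0, 21.1, 21.0, 23.5, 23.6, 23.4]
        node = Summarizer(tolerance=1.0)
        sent = [v for v in readings if node.offer(v) is not None]
        print(sent)   # [21.0, 23.5] -> 4 of 6 messages suppressed on this trace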

  9. Influence of Time-Series Normalization, Number of Nodes, Connectivity and Graph Measure Selection on Seizure-Onset Zone Localization from Intracranial EEG.

    PubMed

    van Mierlo, Pieter; Lie, Octavian; Staljanssens, Willeke; Coito, Ana; Vulliémoz, Serge

    2018-04-26

    We investigated the influence of processing steps in the estimation of multivariate directed functional connectivity during seizures recorded with intracranial EEG (iEEG) on seizure-onset zone (SOZ) localization. We studied the effect of (i) the number of nodes, (ii) time-series normalization, (iii) the choice of multivariate time-varying connectivity measure: the Adaptive Directed Transfer Function (ADTF) or Adaptive Partial Directed Coherence (APDC) and (iv) the graph theory measure: outdegree or shortest path length. First, simulations were performed to quantify the influence of the various processing steps on the accuracy of SOZ localization. Afterwards, the SOZ was estimated from a 113-electrode iEEG seizure recording and compared with the resection that rendered the patient seizure-free. The simulations revealed that the ADTF is preferred over the APDC to localize the SOZ from ictal iEEG recordings. Normalizing the time series before analysis resulted in an increase of 25-35% in correctly localized SOZs, while adding more nodes to the connectivity analysis led to a moderate decrease of 10% when comparing 128 with 32 input nodes. The real-seizure connectivity estimates localized the SOZ inside the resection area using the ADTF coupled to outdegree or shortest path length. Our study showed that normalizing the time series is an important pre-processing step, while adding nodes to the analysis only marginally affected the SOZ localization. The study shows that directed multivariate Granger-based connectivity analysis is feasible with many input nodes (> 100) and that normalization of the time series before connectivity analysis is preferred.

  10. Environmental Data Store (EDS): A multi-node Data Storage Facility for diverse sets of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Piasecki, M.; Ji, P.

    2014-12-01

    Geoscience data comes in many flavors determined by data type: continuous data on a grid or mesh, discrete data collected at points (either one-time samples or streams coming off sensors), and digital files of any type, such as text files, WORD or EXCEL documents, or audio and video files. We present a storage facility that is comprised of six nodes, each specialized to host a certain data type: grid-based data (netCDF on a THREDDS server), GIS data (shapefiles using GeoServer), point time series data (CUAHSI ODM), sample data (EDBS), and any digital data (RAMADDA), plus a server for remote sensing data and its products. While there is overlap in data type storage capabilities (rasters can go into several of these nodes), we prefer to use dedicated storage facilities that are a) freeware, b) have a good degree of maturity, and c) have shown their utility for storing a certain type. In addition, this arrangement allows us to place these commonly used software stacks and storage solutions side-by-side to develop interoperability strategies. We have used a DRUPAL-based system to handle user registration and authentication, and also use the system for data submission and data search. In support of this system we developed an extensive controlled vocabulary system that is an amalgamation of various CVs used in the geoscience community, in order to achieve as high a degree of recognition as possible: the CF conventions, CUAHSI CVs, NASA (GCMD), EPA and USGS taxonomies, and GEMET, in addition to ontological representations such as SWEET.

  11. Indocyanine green SPY elite-assisted sentinel lymph node biopsy in cutaneous melanoma.

    PubMed

    Korn, Jason M; Tellez-Diaz, Alejandra; Bartz-Kurycki, Marisa; Gastman, Brian

    2014-04-01

    Sentinel lymph node biopsy is the standard of care for intermediate-depth and high-risk thin melanomas. Recently, indocyanine green and near-infrared imaging have been used to aid in sentinel node biopsy. The present study aimed to determine the feasibility of sentinel lymph node biopsy with indocyanine green SPY Elite navigation and to critically evaluate the technique compared with the standard modalities. A retrospective review of 90 consecutive cutaneous melanoma patients who underwent sentinel lymph node biopsy was performed. Two cohorts were formed: group A, which had sentinel lymph node biopsy performed with blue dye and radioisotope; and group B, which had sentinel lymph node biopsy performed with radioisotope and indocyanine green SPY Elite navigation. The cohorts were compared to assess for differences in localization rates, sensitivity and specificity of sentinel node identification, and length of surgery. The sentinel lymph node localization rate was 79.4 percent using the blue dye method, 98.0 percent using the indocyanine green fluorescence method, and 97.8 percent using the radioisotope/handheld gamma probe method. Indocyanine green fluorescence detected more sentinel lymph nodes than the vital dye method alone (p = 0.020). A trend toward a reduction in length of surgery was noted in the SPY Elite cohort. Sentinel lymph node mapping and localization in cutaneous melanoma with the indocyanine green SPY Elite navigation system is technically feasible and may offer several advantages over current modalities, including higher sensitivity and specificity, decreased number of lymph nodes sampled, decreased operative time, and potentially lower false-negative rates. Diagnostic, II.

  12. Extraperitoneal lymph node dissection in locally advanced cervical cancer; the prognostic factors associated with survival

    PubMed Central

    Köse, Mehmet Faruk; Kiseli, Mine; Kimyon, Günsu; Öcalan, Reyhan; Yenen, Müfit Cemal; Tulunay, Gökhan; Turan, Ahmet Taner; Üreyen, Işın; Boran, Nurettin

    2017-01-01

    Objective: Surgical staging was recently recommended for the decision of treatment in locally advanced cervical cancer. We aimed to investigate clinical outcomes as well as factors associated with overall survival (OS) in patients with locally advanced cervical cancer who had undergone extraperitoneal lymph node dissection and were managed according to their lymph node status. Material and Methods: The medical records of 233 women with stage IIb-IVa cervical cancer who were clinically staged and underwent extraperitoneal lymph node dissection were retrospectively reviewed. Paraaortic lymph node status determined the appropriate radiotherapeutic treatment field. Surgery-related complications and clinical outcomes were evaluated. Results: The median age of the patients was 52 years (range, 26-88 years) and the median follow-up time was 28.4 months (range, 3-141 months). Thirty-one patients had laparoscopic extraperitoneal lymph node dissection and 202 patients underwent laparotomy. The number of paraaortic lymph nodes extracted was similar for both techniques. Sixty-two (27%) of the 233 patients had paraaortic lymph node metastases. The 3-year and 5-year OS rates were 55.1% and 46.5%, respectively. The stage of disease, number of metastatic paraaortic lymph nodes, tumor type, and paraaortic lymph node status were associated with OS. In multivariate Cox regression analyses, tumor type, stage, and presence of paraaortic lymph node metastases were the independent prognostic factors of OS. Conclusion: Paraaortic lymph node metastasis is the most important prognostic factor affecting survival. Surgery would give hints about the prognosis and treatment planning of the patient. PMID:28400350

  13. The Added Value of a Single-photon Emission Computed Tomography-Computed Tomography in Sentinel Lymph Node Mapping in Patients with Breast Cancer and Malignant Melanoma.

    PubMed

    Bennie, George; Vorster, Mariza; Buscombe, John; Sathekge, Mike

    2015-01-01

    Single-photon emission computed tomography-computed tomography (SPECT-CT) allows for physiological and anatomical co-registration in sentinel lymph node (SLN) mapping and offers additional benefits over conventional planar imaging. However, the clinical relevance of these reported benefits, when weighed against the added costs and radiation burden, remains somewhat uncertain. This study aimed to evaluate the possible added value of SPECT-CT and intra-operative gamma-probe use over planar imaging alone in the South African setting. Eighty patients with breast cancer or malignant melanoma underwent both planar and SPECT-CT imaging for SLN mapping. We assessed and compared the number of nodes detected on each study, false positive and negative findings, and changes in surgical approach and/or patient management. In all cases where a sentinel node was identified, SPECT-CT was more accurate anatomically. There was a significant change in surgical approach in 30 cases - breast cancer (n = 13; P < 0.001) and malignant melanoma (n = 17; P < 0.0002). In 4 cases a node not identified on planar imaging was seen on SPECT-CT. In 16 cases additional echelon nodes were identified. False positives were excluded by SPECT-CT in 12 cases. The addition of SPECT-CT and intra-operative gamma-probe use to planar imaging offers important benefits in patients who present with breast cancer and melanoma. These benefits include increased nodal detection, elimination of false positives and negatives, and improved anatomical localization that ultimately aids and expedites surgical management. This had been demonstrated previously in the context of industrialized countries and has now also been confirmed in the setting of an emerging-market nation.

  14. Paving the Way Towards Reactive Planar Spanner Construction in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Frey, Hannes; Rührup, Stefan

    A spanner is a subgraph of a given graph that preserves the original graph's shortest path lengths up to a constant factor. Planar spanners and their distributed construction are of particular interest for geographic routing, which is an efficient localized routing scheme for wireless ad hoc and sensor networks. Planarity of the network graph is a key criterion for guaranteed delivery, while the spanner property supports efficiency in terms of path length. We consider the problem of reactive local spanner construction, where a node's local topology is determined on demand. Known message-efficient reactive planarization algorithms do not preserve the spanner property, while reactive spanner constructions with a low message overhead have not been described so far. We introduce the concept of direct planarization, which may be an enabler of efficient reactive spanner construction. Given an edge, nodes check for all incident intersecting edges a certain geometric criterion and withdraw the edge if this criterion is not satisfied. We use this concept to derive a generic reactive topology control mechanism and consider two geometric criteria. Simulation results show that direct planarization increases the performance of localized geographic routing by providing shorter paths than existing reactive approaches.

  15. IoT-based flood embankments monitoring system

    NASA Astrophysics Data System (ADS)

    Michta, E.; Szulim, R.; Sojka-Piotrowska, A.; Piotrowski, K.

    2017-08-01

    In the paper, a concept of a flood embankment monitoring system based on the Internet of Things approach and Cloud Computing technologies will be presented. The proposed system consists of sensors, IoT nodes, gateways and cloud-based services. Nodes communicate with the sensors measuring certain physical parameters describing the state of the embankments and communicate with the gateways. Gateways are specialized active devices responsible for direct communication with the nodes; they collect sensor data, preprocess the data, apply local rules and communicate with the cloud services using communication APIs delivered by cloud service providers. An architecture for all of the system components will be proposed, comprising a description of the IoT device functionalities, their communication model, and software modules and services based on a public cloud computing platform such as Microsoft Azure. The most important aspects of maintaining the communication in a secure way will also be shown.
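
    The gateway role described above (collect readings from the nodes, preprocess, apply local rules, push the rest to the cloud API) can be sketched briefly. The rule, field names and payload shapes are illustrative assumptions; the cloud side would be whatever ingestion API the chosen platform, such as Microsoft Azure, provides.

        def gateway_cycle(readings, alarm_level_m=4.5):
            """Local rule: alarming water levels go upstream immediately;
            normal readings are batched into one aggregate message."""
            alarms = [r for r in readings if r['level_m'] >= alarm_level_m]
            normal = [r['level_m'] for r in readings if r['level_m'] < alarm_level_m]
            messages = [{'type': 'alarm', **r} for r in alarms]
            if normal:
                messages.append({'type': 'aggregate',
                                 'mean_level_m': sum(normal) / len(normal),
                                 'count': len(normal)})
            return messages   # handed to the cloud provider's ingestion API

        readings = [{'sensor': 's1', 'level_m': 3.9}, {'sensor': 's2', 'level_m': 4.7}]
        print(gateway_cycle(readings))   # one alarm for s2, one aggregate for s1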

  16. Data management routines for reproducible research using the G-Node Python Client library

    PubMed Central

    Sobolev, Andrey; Stoewer, Adrian; Pereira, Michael; Kellner, Christian J.; Garbers, Christian; Rautenberg, Philipp L.; Wachtler, Thomas

    2014-01-01

    Structured, efficient, and secure storage of experimental data and associated meta-information constitutes one of the most pressing technical challenges in modern neuroscience, and does so particularly in electrophysiology. The German INCF Node aims to provide open-source solutions for this domain that support the scientific data management and analysis workflow, and thus facilitate future data access and reproducible research. G-Node provides a data management system, accessible through an application interface, that is based on a combination of standardized data representation and flexible data annotation to account for the variety of experimental paradigms in electrophysiology. The G-Node Python Library exposes these services to the Python environment, enabling researchers to organize and access their experimental data using their familiar tools while gaining the advantages that centralized storage entails. The library provides powerful query features, including data slicing and selection by metadata, as well as fine-grained permission control for collaboration and data sharing. Here we demonstrate key actions in working with experimental neuroscience data, such as building a metadata structure, organizing recorded data in datasets, annotating data, or selecting data regions of interest, that can be automated to a large degree using the library. Compliant with existing de-facto standards, the G-Node Python Library is compatible with many Python tools in the field of neurophysiology and thus enables seamless integration of data organization into the scientific data workflow. PMID:24634654

  17. Data management routines for reproducible research using the G-Node Python Client library.

    PubMed

    Sobolev, Andrey; Stoewer, Adrian; Pereira, Michael; Kellner, Christian J; Garbers, Christian; Rautenberg, Philipp L; Wachtler, Thomas

    2014-01-01

    Structured, efficient, and secure storage of experimental data and associated meta-information constitutes one of the most pressing technical challenges in modern neuroscience, and does so particularly in electrophysiology. The German INCF Node aims to provide open-source solutions for this domain that support the scientific data management and analysis workflow, and thus facilitate future data access and reproducible research. G-Node provides a data management system, accessible through an application interface, that is based on a combination of standardized data representation and flexible data annotation to account for the variety of experimental paradigms in electrophysiology. The G-Node Python Library exposes these services to the Python environment, enabling researchers to organize and access their experimental data using their familiar tools while gaining the advantages that centralized storage entails. The library provides powerful query features, including data slicing and selection by metadata, as well as fine-grained permission control for collaboration and data sharing. Here we demonstrate key actions in working with experimental neuroscience data, such as building a metadata structure, organizing recorded data in datasets, annotating data, or selecting data regions of interest, that can be automated to a large degree using the library. Compliant with existing de-facto standards, the G-Node Python Library is compatible with many Python tools in the field of neurophysiology and thus enables seamless integration of data organization into the scientific data workflow.

  18. A Hybrid Spatio-Temporal Data Indexing Method for Trajectory Databases

    PubMed Central

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-01-01

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services or location intelligence. Trajectory data is increasing massively and is semantically complicated, which poses a great challenge for spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. The hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in several aspects, such as generation efficiency, query performance and supported query types. PMID:25051028
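
    The pre-insertion grouping step can be made concrete with a small sketch: consecutive trajectory points are merged into one candidate leaf entry while both the spatial and the temporal gap stay small. Thresholds, tuple layout and function names are illustrative assumptions, not the paper's exact spatio-temporal semantics.

        import math

        def group_points(points, max_gap_m=100.0, max_gap_s=60.0):
            """Group consecutive (x, y, t) points; each group would become one
            R-tree leaf entry (its MBR plus time span)."""
            groups, current = [], [points[0]]
            for (x0, y0, t0), (x1, y1, t1) in zip(points, points[1:]):
                near = math.hypot(x1 - x0, y1 - y0) <= max_gap_m
                soon = (t1 - t0) <= max_gap_s
                if near and soon:
                    current.append((x1, y1, t1))
                else:
                    groups.append(current)            # close the current node
                    current = [(x1, y1, t1)]
            groups.append(current)
            return groups

        track = [(0, 0, 0), (40, 30, 20), (90, 60, 45), (900, 700, 500)]
        print(len(group_points(track)))   # 2: the last point starts a new node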

  19. A hybrid spatio-temporal data indexing method for trajectory databases.

    PubMed

    Ke, Shengnan; Gong, Jun; Li, Songnian; Zhu, Qing; Liu, Xintao; Zhang, Yeting

    2014-07-21

    In recent years, there has been tremendous growth in the field of indoor and outdoor positioning sensors, which continuously produce huge volumes of trajectory data used in many fields such as location-based services or location intelligence. Trajectory data is increasing massively and is semantically complicated, which poses a great challenge for spatio-temporal data indexing. This paper proposes a spatio-temporal data indexing method, named HBSTR-tree, which is a hybrid index structure comprising a spatio-temporal R-tree, a B*-tree and a hash table. To improve the index generation efficiency, rather than directly inserting trajectory points, we group consecutive trajectory points as nodes according to their spatio-temporal semantics and then insert them into the spatio-temporal R-tree as leaf nodes. The hash table is used to manage the latest leaf nodes to reduce the frequency of insertion. A new spatio-temporal interval criterion and a new node-choosing sub-algorithm are also proposed to optimize spatio-temporal R-tree structures. In addition, a B*-tree sub-index of leaf nodes is built to query the trajectories of targeted objects efficiently. Furthermore, a database storage scheme based on a NoSQL-type DBMS is also proposed for the purpose of cloud storage. Experimental results prove that HBSTR-tree outperforms TB*-tree in several aspects, such as generation efficiency, query performance and supported query types.

  20. Peer-to-peer architecture for multi-departmental distributed PACS

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Heuberger, Joris; Pysher, Lance; Ratib, Osman

    2006-03-01

    We have elected to explore peer-to-peer technology as an alternative to centralized PACS architecture in response to the increasing requirements for wide access to images inside and outside a radiology department, the goal being to allow users across the enterprise to access any study anytime without the need for prefetching or routing of images from a central archive. Images can be accessed between different workstations and local storage nodes. We implemented "Bonjour", a remote file access technology developed by Apple that allows applications to share data and files remotely with optimized data access and data transfer. Our open-source image display platform, OsiriX, was adapted to allow sharing of local DICOM images through direct access to each local SQL database, making them accessible from any other OsiriX workstation over the network. A server version of the OsiriX Core Data database also allows access to distributed archive servers in the same way. The infrastructure implemented allows fast and efficient access to any image anywhere, anytime, independently of the actual physical location of the data. It also benefits from the performance of distributed low-cost, high-capacity storage servers that can provide efficient caching of PACS data, which was found to be 10 to 20 times faster than accessing the same data from the central PACS archive. It is particularly suitable for large hospitals and academic environments where clinical conferences, interdisciplinary discussions and successive sessions of image processing are often part of complex workflows for patient management and decision making.

  1. Overlapping communities from dense disjoint and high total degree clusters

    NASA Astrophysics Data System (ADS)

    Zhang, Hongli; Gao, Yang; Zhang, Yue

    2018-04-01

    Communities play an important role in sociology and biology, and especially in computer science, where systems are often represented as networks; community detection is therefore of great importance in these domains. A community is a dense subgraph of the whole graph with more links between its members than from its members to outside nodes, and nodes in the same community probably share common properties or play similar roles in the graph. Communities overlap when nodes in a graph belong to multiple communities. A vast variety of overlapping community detection methods have been proposed in the literature, and local expansion is one of the most successful techniques for dealing with large networks. This paper presents a density-based seeding method, in which dense disjoint local clusters are searched for and selected as seeds. The proposed method selects a seed by the total degree and density of local clusters, utilizing merely local structures of the network. Furthermore, this paper proposes a novel community refining phase via minimizing the conductance of each community, through which the quality of identified communities is largely improved in linear time. Experimental results on synthetic networks show that the proposed seeding method outperforms other state-of-the-art seeding methods and that the proposed refining method largely enhances the quality of the identified communities. Experimental results on real graphs with ground-truth communities show that the proposed approach outperforms other state-of-the-art overlapping community detection algorithms; in particular, it is more than two orders of magnitude faster than the existing global algorithms with higher quality, and it obtains a much more accurate community structure than the current local algorithms without any a priori information.
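
    The refining phase rests on the standard conductance measure: the number of edges leaving a community divided by the smaller of the volumes inside and outside it. A minimal sketch follows; the greedy single-pass sweep and the adjacency representation are illustrative simplifications of the paper's linear-time procedure.

        def conductance(adj, community):
            """cut(S, V-S) / min(vol(S), vol(V-S)) for an undirected graph
            given as a dict mapping each node to a set of neighbours."""
            S = set(community)
            cut = sum(1 for u in S for v in adj[u] if v not in S)
            vol_in = sum(len(adj[u]) for u in S)
            vol_out = sum(len(adj[u]) for u in adj if u not in S)
            denom = min(vol_in, vol_out)
            return cut / denom if denom else 0.0

        def refine(adj, community):
            """One greedy pass: absorb a frontier node whenever doing so
            lowers the community's conductance (simplified refinement)."""
            S = set(community)
            frontier = {v for u in S for v in adj[u]} - S
            for v in sorted(frontier):
                if conductance(adj, S | {v}) < conductance(adj, S):
                    S.add(v)
            return S

        # Two triangles joined by the edge 3-4; {1, 2} is an incomplete seed.
        adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
        print(refine(adj, {1, 2}))   # {1, 2, 3}: absorbing node 3 lowers conductance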

  2. The Design and Application of Data Storage System in Miyun Satellite Ground Station

    NASA Astrophysics Data System (ADS)

    Xue, Xiping; Su, Yan; Zhang, Hongbo; Liu, Bin; Yao, Meijuan; Zhao, Shu

    2015-04-01

    China launched the Chang'E-3 satellite in 2013, achieving the first soft landing on the Moon for a Chinese lunar probe. The Miyun satellite ground station first used a SAN storage network system based on StorNext sharing software in the Chang'E-3 mission, and the system performance fully meets the application requirements of the station's data storage. The StorNext file system is a high-performance shared file system; it supports multiple servers accessing the file system under different operating systems at the same time, and supports access to data over a variety of topologies, such as SAN and LAN. StorNext focuses on data protection and big data management. Quantum has announced that it has sold more than 70,000 licenses of the StorNext file system worldwide, and its customer base is growing, which marks its leading position in big data management. The responsibilities of the Miyun satellite ground station are the reception of Chang'E-3 satellite downlink data and the management of local data storage. The station mainly carries out exploration mission management and the receiving and management of observation data, and provides comprehensive, centralized monitoring and control functions for the data receiving equipment. The ground station applied the SAN storage network system based on StorNext shared software to receive and manage data reliably. The computer system at the Miyun ground station is composed of business servers, application workstations and storage equipment, so the storage system needs a shared file system which supports heterogeneous operating systems. In practical applications, 10 nodes simultaneously write data to the file system through 16 channels, and the maximum data transfer rate of each channel is up to 15 MB/s; thus the network throughput of the file system must be no less than 240 MB/s. At the same time, the maximum size of each data file is up to 810 GB. The storage system as planned requires that 10 nodes simultaneously write data to the file system through 16 channels with 240 MB/s network throughput; as integrated, the sharing system can provide a 1020 MB/s write speed. When the master storage server fails, the backup storage server takes over the service, client reads and writes are not affected, and the switching time is less than 5 s. The designed and integrated storage system meets user requirements. Nevertheless, an all-fibre approach is expensive in a SAN, and the SCSI hard disk transfer rate may still be the bottleneck of the entire storage system. StorNext can provide users with efficient sharing, management and automatic archiving of large numbers of files together with hardware solutions, and it occupies a leading position in big data management; however, it has drawbacks: firstly, the software is expensive and is licensed per site, so when the network scale is large the purchase cost becomes very high; secondly, configuring the parameters of the StorNext software places high demands on the skills of technical staff, and when a problem occurs it is difficult to troubleshoot.

  3. Elastic extension of a local analysis facility on external clouds for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Ciaschini, V.; Codispoti, G.; Rinaldi, L.; Aiftimiei, D. C.; Bonacorsi, D.; Calligola, P.; Dal Pra, S.; De Girolamo, D.; Di Maria, R.; Grandi, C.; Michelotto, D.; Panella, M.; Taneja, S.; Semeria, F.

    2017-10-01

    The computing infrastructures serving the LHC experiments have been designed to cope at most with the average amount of data recorded. Usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the data availability for physics analysis. In order to cope with the production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present the proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, for the LHC experiments hosted at the site, on an external OpenStack infrastructure. We focus on the Cloud Bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on an OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage.

  4. Neural node network and model, and method of teaching same

    DOEpatents

    Parlos, A.G.; Atiya, A.F.; Fernandez, B.; Tsai, W.K.; Chong, K.T.

    1995-12-26

    The present invention is a fully connected feed forward network that includes at least one hidden layer. The hidden layer includes nodes in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device occurring in the feedback path (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit from all the other nodes within the same layer. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing. 21 figs.
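
    A forward pass with local feedback and crosstalk can be sketched compactly: with one recurrent weight matrix, the diagonal carries each node's unit-delayed self-feedback and the off-diagonal entries carry the crosstalk from its layer-mates. Sizes, initialization and the tanh transfer function are illustrative assumptions; training (back propagation or the preferred feed forward gradient descent) is omitted.

        import numpy as np

        rng = np.random.default_rng(0)

        class RecurrentHiddenLayer:
            """Hidden layer whose nodes receive their own unit-delayed output
            (diagonal of W_rec) plus the delayed outputs of all other nodes
            in the layer (off-diagonal crosstalk)."""

            def __init__(self, n_in, n_hidden):
                self.W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
                self.W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
                self.b = np.zeros(n_hidden)
                self.y_prev = np.zeros(n_hidden)      # the unit-delay memory

            def step(self, x):
                y = np.tanh(self.W_in @ x + self.W_rec @ self.y_prev + self.b)
                self.y_prev = y                       # becomes the next delayed input
                return y

        layer = RecurrentHiddenLayer(n_in=3, n_hidden=4)
        for x in np.eye(3):                           # a toy input sequence
            print(layer.step(x))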

  5. Neural node network and model, and method of teaching same

    DOEpatents

    Parlos, Alexander G.; Atiya, Amir F.; Fernandez, Benito; Tsai, Wei K.; Chong, Kil T.

    1995-01-01

    The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.

  6. [Ambulatory surgical treatment for breast carcinoma].

    PubMed

    Barillari, P; Leuzzi, R; Bassiri-Gharb, A; D'Angelo, F; Aurello, P; Naticchioni, E

    2001-02-01

    The aim of the study is to demonstrate the feasibility and the oncologic effectiveness of quadrantectomy plus sentinel node biopsy performed under local anesthesia, and to demonstrate its economic and psychologic advantages. From October 1996 to March 2000, 71 patients with clinical T1 N0 breast cancer underwent quadrantectomy or tumor resection plus sentinel node biopsy, together with biopsy of clinically suspicious axillary nodes, under local anesthesia at the Casa di Cura "Villa Mafalda" in Rome. Twenty tumors were T1a, 26 T1b and 25 T1c. A mean of 2 sentinel nodes (range 1-4) and a mean of 8 axillary nodes were removed during the procedure. In 2 cases sentinel nodes were not identified. Intraoperative histologic examination showed metastatic sentinel nodes in 11 cases; an axillary node dissection was performed in all these cases (>12 nodes) and no other metastatic nodes were found. In all patients clinically suspicious nodes were removed. In two cases no evidence of metastasis was found in the sentinel nodes, while histologic examination revealed micrometastasis in one node in one patient and two metastatic nodes in another. Fifty-three patients rated the overall surgical, anesthetic and recovery experience as "very satisfactory", 13 as "satisfactory" and 5 as "unsatisfactory". Patients typically expressed their pleasure at the possibility of returning home and stressed the ease of recovery.

  7. Enabling Controlling Complex Networks with Local Topological Information.

    PubMed

    Li, Guoqi; Deng, Lei; Xiao, Gaoxi; Tang, Pei; Wen, Changyun; Hu, Wuhua; Pei, Jing; Shi, Luping; Stanley, H Eugene

    2018-03-15

    Complex networks characterize the nature of internal/external interactions in real-world systems, including social, economic, biological, ecological, and technological networks. Two issues remain obstacles to achieving control of large-scale networks: structural controllability, which describes the ability to guide a dynamical system from any initial state to any desired final state in finite time with a suitable choice of inputs; and optimal control, which is a typical control approach to minimize the cost of driving the network to a predefined state with a given number of control inputs. For large complex networks without global information of the network topology, both problems remain essentially open. Here we combine graph theory and control theory to tackle the two problems in one go, using only local network topology information. For the structural controllability problem, a distributed local-game matching method is proposed, where every node plays a simple Bayesian game with local information and local interactions with adjacent nodes, ensuring a suboptimal solution at linear complexity. Starting from any structural controllability solution, a minimizing-longest-control-path method can efficiently reach a good solution for optimal control in large networks. Our results provide solutions for distributed complex network control and demonstrate a way to link structural controllability and optimal control together.
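
    For orientation, the global (non-distributed) baseline of the structural controllability problem is a maximum matching: nodes left unmatched on their incoming side must be driven directly. A sketch with networkx, assuming the usual bipartite out-copy/in-copy construction; the paper's contribution is a distributed local-game approximation of this, which is not reproduced here.

        import networkx as nx

        def driver_nodes(nodes, edges):
            """Minimum driver-node set via maximum bipartite matching on the
            out-copy/in-copy graph built from the directed edges."""
            B = nx.Graph()
            out = {u: ('out', u) for u in nodes}
            inn = {v: ('in', v) for v in nodes}
            B.add_nodes_from(out.values(), bipartite=0)
            B.add_nodes_from(inn.values(), bipartite=1)
            B.add_edges_from((out[u], inn[v]) for u, v in edges)
            match = nx.bipartite.hopcroft_karp_matching(B, top_nodes=set(out.values()))
            matched_in = {match[o] for o in out.values() if o in match}
            return [v for v in nodes if inn[v] not in matched_in]

        # A directed path 1 -> 2 with branches 2 -> 3 and 2 -> 4: only one of
        # 3 or 4 can be matched, so two driver nodes are needed.
        print(driver_nodes([1, 2, 3, 4], [(1, 2), (2, 3), (2, 4)]))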

  8. Programming Sustainable Urban Nodes for Spontaneous, Intensive Urban Environments

    NASA Astrophysics Data System (ADS)

    Szubryt-Obrycka, Adriana

    2017-10-01

    Urban development nowadays, not only in Poland but throughout the world, is an important issue for planners, municipal authorities and residents themselves. New structures generated in spontaneous urban and suburban areas constitute randomly scattered seeds of excessive residential and sparse commercial functions, the latter appearing most often as temporary or even ephemeral installations emerging where they are temporarily needed; the more important specialized services are provided only rarely. Sound thinking about creating cities involves simultaneously providing the different basic functions required by local communities, while at the same time recognizing temporal fluctuations and distinguishing which kinds of amenities have to be provided in a particular area permanently (such as medical care, preventive services and schools), with others retaining a mobile, non-formal character. An even greater problem is the restoration of urban structures in areas affected by natural disasters or in leftover areas that were previously war zones, where similar deficits have a significantly higher impact and are a potential cause of a higher toll in human lives if no functional nodes providing essential functions survive. The Ariadne's Thread is a research project which proposes infrastructure and nodes for such urban areas. It develops a new framework for creating nodes aimed not only at fulfilling the basic needs of people but also at achieving social integration and building stability for fragile communities. The aim of the paper is to describe the process of identifying the relationship between the needs of the inhabitants and both the programmatic and the ideological approach to the Ariadne's Thread (AT) node, ultimately giving its architectural interpretation. The paper will introduce the process of recognition of local needs and the interpretive and/or participatory mechanisms of establishing the node as a response to this recognition, containing conceptual programming, socio-cultural programming, and functional programming (services). Then, the aspect of permanence or temporality will be addressed to determine the choice of appropriate technologies used to convey programmatic assertions into physical solutions. The nodes are meant to be as lightweight installations in the area as possible, but at the same time durable and of good enough quality to support positive social effects and reinforce the building of social capital in the area. The author believes that this emergency-based AT node scenario can, after being only slightly adjusted to local standards, be extrapolated to unbalanced housing areas resulting from urban sprawl. But the main goal is to allow for efficient interventions in areas in dire need and in poor environments with limited resources or limited funds.

  9. The EPOS e-Infrastructure

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Bailo, Daniele

    2014-05-01

    The European Plate Observing System (EPOS) is integrating geoscientific information concerning earth movements in Europe. We are approaching the end of the PP (Preparatory Project) phase and in October 2014 expect to continue with the full project within ESFRI (European Strategic Framework for Research Infrastructures). The key aspects of EPOS concern providing services to allow homogeneous access by end-users over heterogeneous data, software, facilities, equipment and services. The e-infrastructure of EPOS is the heart of the project, since it integrates the work on organisational, legal, economic and scientific aspects. Following the creation of an inventory of relevant organisations, persons, facilities, equipment, services, datasets and software (RIDE), the scale of integration required became apparent. The EPOS e-infrastructure architecture has been developed systematically, based on recorded primary (user) requirements and secondary (interoperation with other systems) requirements, through Strawman, Woodman and Ironman phases, with the specification - and developed confirmatory prototypes - becoming more precise and progressively moving from paper to implemented system. The EPOS architecture is based on global core services (Integrated Core Services - ICS) which access thematic nodes (domain-specific European-wide collections, called Thematic Core Services - TCS), national nodes and specific institutional nodes. The key aspect is the metadata catalogue. In one dimension it is described on 3 levels: (1) discovery metadata, using well-known and commonly used standards such as DC (Dublin Core), to enable users (via an intelligent user interface) to search for objects within the EPOS environment relevant to their needs; (2) contextual metadata, providing the context of the object described in the catalogue to enable a user or the system to determine the relevance of the discovered object(s) to their requirement - the context includes projects, funding, organisations involved, persons involved, related publications, facilities, equipment and others, and utilises the CERIF (Common European Research Information Format) standard (see www.eurocris.org); (3) detailed metadata, which is specific to a domain or to a particular object and includes the schema describing the object to processing software. The other dimension of the metadata concerns the objects described. These are classified into users, services (including software), data and resources (computing, data storage, instruments and scientific equipment). An alternative architecture has been considered: brokering. This technique has been used especially in North American geoscience projects to interoperate datasets. It involves writing software to interconvert between any two node datasets; given n nodes, this implies writing n*(n-1) convertors. EPOS Working Group 7 (e-infrastructures and virtual community), which deals with the design and implementation of a prototype of the EPOS services, chose an approach which endows the system with extreme flexibility and sustainability: the Metadata Catalogue approach. With the use of the catalogue the EPOS system can: 1. interoperate with software, services, users, organisations, facilities, equipment etc. as well as datasets; 2. avoid writing n*(n-1) software convertors and instead, through the information contained in the catalogue, generate only n convertors (a hub-and-spoke pattern sketched after this abstract) - a huge saving, especially in maintenance as the datasets (or other node resources) evolve, and we are working on (semi-)automation of convertor generation by metadata mapping, which is leading-edge computer science research; 3. make extensive use of contextual metadata, which enables a user or a machine to: (i) improve discovery of resources at nodes; (ii) improve precision and recall in search; (iii) drive the systems for identification, authentication, authorisation, security and privacy, recording the relevant attributes of the node resources and of the user; and (iv) manage provenance and long-term digital preservation. The linkage between the Integrated Services, which provide the integration of data and services, and the diverse Thematic Service Nodes is provided by means of a compatibility layer, which includes the aforementioned metadata catalogue. This layer provides 'connectors' to make local data, software and services available through the EPOS Integrated Services layer. In conclusion, we believe the EPOS e-infrastructure architecture is fit for purpose, including long-term sustainability and pan-European access to data and services.
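
    The n-versus-n*(n-1) argument is easy to see in code: with a canonical pivot format (CERIF-like), every format registers one importer and one exporter, and any-to-any conversion falls out by composition. The registry API, format names and pivot layout below are illustrative assumptions only.

        class Catalogue:
            """Hub-and-spoke conversion through a canonical pivot format:
            n importer/exporter pairs replace n*(n-1) pairwise convertors."""

            def __init__(self):
                self.to_pivot, self.from_pivot = {}, {}

            def register(self, fmt, importer, exporter):
                self.to_pivot[fmt] = importer
                self.from_pivot[fmt] = exporter

            def convert(self, record, src, dst):
                return self.from_pivot[dst](self.to_pivot[src](record))

        cat = Catalogue()
        cat.register('dc',  lambda r: {'title': r['dc:title']},
                            lambda p: {'dc:title': p['title']})
        cat.register('csv', lambda r: {'title': r[0]},
                            lambda p: [p['title']])
        print(cat.convert({'dc:title': 'EPOS'}, 'dc', 'csv'))   # ['EPOS']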

  10. A Localization-Free Interference and Energy Holes Minimization Routing for Underwater Wireless Sensor Networks.

    PubMed

    Khan, Anwar; Ahmedy, Ismail; Anisi, Mohammad Hossein; Javaid, Nadeem; Ali, Ihsan; Khan, Nawsher; Alsaqer, Mohammed; Mahmood, Hasan

    2018-01-09

    Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay.
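
    The abstract does not give the holding-time formula, so the sketch below is only a generic depth-based rule of the kind used in this protocol family: a candidate relay offering more progress towards the surface waits less, transmits first, and thereby suppresses the other candidates. All constants and names are assumptions, not the LF-IEHM definition.

        def holding_time(depth_sender_m, depth_self_m, t_max_s=2.0, range_m=100.0):
            """Illustrative rule, NOT the exact LF-IEHM formula: holding time
            shrinks linearly with the depth advance the node offers."""
            advance = max(0.0, min(range_m, depth_sender_m - depth_self_m))
            return t_max_s * (1.0 - advance / range_m)

        # The node 80 m shallower than the sender fires after 0.4 s; the one
        # only 10 m shallower waits 1.8 s and cancels on overhearing the relay.
        print(holding_time(200.0, 120.0), holding_time(200.0, 190.0))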

  11. A Localization-Free Interference and Energy Holes Minimization Routing for Underwater Wireless Sensor Networks

    PubMed Central

    Khan, Anwar; Anisi, Mohammad Hossein; Javaid, Nadeem; Khan, Nawsher; Alsaqer, Mohammed; Mahmood, Hasan

    2018-01-01

    Interference and energy holes formation in underwater wireless sensor networks (UWSNs) threaten the reliable delivery of data packets from a source to a destination. Interference also causes inefficient utilization of the limited battery power of the sensor nodes in that more power is consumed in the retransmission of the lost packets. Energy holes are dead nodes close to the surface of water, and their early death interrupts data delivery even when the network has live nodes. This paper proposes a localization-free interference and energy holes minimization (LF-IEHM) routing protocol for UWSNs. The proposed algorithm overcomes interference during data packet forwarding by defining a unique packet holding time for every sensor node. The energy holes formation is mitigated by a variable transmission range of the sensor nodes. As compared to the conventional routing protocols, the proposed protocol does not require the localization information of the sensor nodes, which is cumbersome and difficult to obtain, as nodes change their positions with water currents. Simulation results show superior performance of the proposed scheme in terms of packets received at the final destination and end-to-end delay. PMID:29315247

  12. An Energy Centric Cluster-Based Routing Protocol for Wireless Sensor Networks.

    PubMed

    Hosen, A S M Sanwar; Cho, Gi Hwan

    2018-05-11

    Clustering is an effective way to prolong the lifetime of a wireless sensor network (WSN). The common approach is to elect cluster heads to take routing and controlling duty, and to periodically rotate each cluster head's role to distribute energy consumption among nodes. However, a significant amount of energy dissipates due to control message overhead, which results in a shorter network lifetime. This paper proposes an energy-centric cluster-based routing mechanism in WSNs. To begin with, cluster heads are elected based on the higher ranks of the nodes, where the rank is defined by residual energy and average distance from the member nodes. With the role of data aggregation and data forwarding, a cluster head acts as a caretaker for the cluster-head election in the next round, where the rank information is piggybacked along with the local data sent during intra-cluster communication. This reduces the number of control messages needed for the cluster-head election as well as for cluster formation. Simulation results show that our proposed protocol reduces the energy consumption among nodes and achieves a significant improvement in the network lifetime.
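
    A toy version of the rank-based election, assuming one plausible combination of the two stated criteria (residual energy divided by mean distance to the other members); the abstract does not specify the exact weighting, so the formula, field names and data are illustrative.

        import math

        def rank(node, cluster):
            """Rank = residual energy / mean distance to the other members
            (one plausible reading of the abstract's two criteria)."""
            dists = [math.dist(node['pos'], m['pos'])
                     for m in cluster if m is not node]
            return node['energy'] / (sum(dists) / len(dists))

        def elect_head(cluster):
            return max(cluster, key=lambda n: rank(n, cluster))

        cluster = [
            {'id': 'a', 'energy': 0.9, 'pos': (0.0, 0.0)},
            {'id': 'b', 'energy': 0.7, 'pos': (1.0, 0.0)},
            {'id': 'c', 'energy': 0.8, 'pos': (0.5, 1.0)},
        ]
        print(elect_head(cluster)['id'])   # 'a': most energy, central enough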

  13. An Energy Centric Cluster-Based Routing Protocol for Wireless Sensor Networks

    PubMed Central

    Hosen, A. S. M. Sanwar; Cho, Gi Hwan

    2018-01-01

    Clustering is an effective way to prolong the lifetime of a wireless sensor network (WSN). The common approach is to elect cluster heads to take routing and controlling duty, and to periodically rotate each cluster head’s role to distribute energy consumption among nodes. However, a significant amount of energy dissipates due to control message overhead, which results in a shorter network lifetime. This paper proposes an energy-centric cluster-based routing mechanism in WSNs. To begin with, cluster heads are elected based on the higher ranks of the nodes, where the rank is defined by residual energy and average distance from the member nodes. With the role of data aggregation and data forwarding, a cluster head acts as a caretaker for the cluster-head election in the next round, where the rank information is piggybacked along with the local data sent during intra-cluster communication. This reduces the number of control messages needed for the cluster-head election as well as for cluster formation. Simulation results show that our proposed protocol reduces the energy consumption among nodes and achieves a significant improvement in the network lifetime. PMID:29751663

  14. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    NASA Astrophysics Data System (ADS)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a preliminary results processing method is usually needed. This paper proposes a novel preliminary results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. This method exploits the label dependency and considers the hierarchical interaction between nodes when making decisions based on the Bayesian network in its first phase. In the second phase, this method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances the HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), whose hierarchy is a directed acyclic graph and therefore more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated with the GO.
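
    The second phase (adjusting results to respect the hierarchy constraint) can be illustrated with the standard true-path repair: visit the GO DAG in topological order and cap each node's score at the minimum of its parents' already-capped scores. This is a generic stand-in under that assumption, not the paper's exact Bayesian-network procedure.

        def topo_order(parents):
            """Topological order of a DAG given a child -> [parents] map;
            every node must appear as a key."""
            order, seen = [], set()

            def visit(n):
                if n not in seen:
                    seen.add(n)
                    for p in parents[n]:
                        visit(p)
                    order.append(n)

            for n in parents:
                visit(n)
            return order

        def enforce_hierarchy(scores, parents):
            """Cap each node at the minimum of its parents so no child
            outscores an ancestor (the hierarchy constraint)."""
            capped = {}
            for n in topo_order(parents):
                cap = min((capped[p] for p in parents[n]), default=1.0)
                capped[n] = min(scores[n], cap)
            return capped

        parents = {'root': [], 'f1': ['root'], 'f2': ['root'], 'f3': ['f1', 'f2']}
        scores = {'root': 0.9, 'f1': 0.4, 'f2': 0.8, 'f3': 0.7}
        print(enforce_hierarchy(scores, parents))   # 'f3' is capped to 0.4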

  15. Nanoparticle Transport from Mouse Vagina to Adjacent Lymph Nodes

    PubMed Central

    Ballou, Byron; Andreko, Susan K.; Osuna-Highley, Elvira; McRaven, Michael; Catalone, Tina; Bruchez, Marcel P.; Hope, Thomas J.; Labib, Mohamed E.

    2012-01-01

    To test the feasibility of localized intravaginal therapy directed to neighboring lymph nodes, the transport of quantum dots across the vaginal wall was investigated. Quantum dots instilled into the mouse vagina were transported across the vaginal mucosa into draining lymph nodes, but not into distant nodes. Most of the particles were transported to the lumbar nodes; far fewer were transported to the inguinal nodes. A low level of transport was evident at 4 hr after intravaginal instillation, and transport peaked at about 36 hr after instillation. Transport was greatly enhanced by prior vaginal instillation of Nonoxynol-9. Hundreds of micrograms of nanoparticles/kg tissue (ppb) were found in the lumbar lymph nodes at 36 hr post-instillation. Our results imply that targeted transport of microbicides or immunogens from the vagina to local lymph organs is feasible. They also offer an in vivo model for assessing the toxicity of compounds intended for intravaginal use. PMID:23284844

  16. A distributed approach to the OPF problem

    NASA Astrophysics Data System (ADS)

    Erseghe, Tomaso

    2015-12-01

    This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resource and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to always ensure convergence. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderate sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees a performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In the comparison with the literature, mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational complexity effort.
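
    The flavour of the method (augmented Lagrangian with ADMM-style alternation but a steadily increasing penalty) is easiest to see on a toy consensus problem rather than the OPF itself. Here each "node" holds a private quadratic cost and all must agree on a shared value; every quantity below is illustrative.

        import numpy as np

        a = np.array([1.0, 4.0, 7.0])       # each node's private target
        x = np.zeros_like(a)                # local copies
        z, lam, rho = 0.0, np.zeros_like(a), 1.0

        for _ in range(40):
            # Local step: node i minimises (x_i - a_i)^2
            #             + lam_i (x_i - z) + rho/2 (x_i - z)^2 in closed form.
            x = (2.0 * a - lam + rho * z) / (2.0 + rho)
            # Coordination step: agree on the average of the corrected copies.
            z = float(np.mean(x + lam / rho))
            # Multiplier update, then grow the penalty (the paper's key twist).
            lam += rho * (x - z)
            rho *= 1.1

        print(z, x)   # all approach mean(a) = 4.0, the consensus optimum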

  17. An Approach Using Parallel Architecture to Storage DICOM Images in Distributed File System

    NASA Astrophysics Data System (ADS)

    Soares, Tiago S.; Prado, Thiago C.; Dantas, M. A. R.; de Macedo, Douglas D. J.; Bauer, Michael A.

    2012-02-01

    Telemedicine is a very important area in the medical field that is expanding daily, motivated by many researchers interested in improving medical applications. In Brazil, a project started in 2005 in the State of Santa Catarina developed a server called the CyclopsDCMServer, whose purpose is to employ HDF for the manipulation of medical images (DICOM) using a distributed file system. Since then, much research has been initiated in order to seek better performance. Our approach for this server represents an additional parallel implementation of I/O operations, since HDF version 5 has an essential feature for our work: support for parallel I/O based upon the MPI paradigm. Early experiments using four parallel nodes provide good performance when compared to the serial HDF implemented in the CyclopsDCMServer.
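
    The HDF5/MPI combination the authors rely on looks, in outline, like the following (h5py built against parallel HDF5, plus mpi4py; the file name and shapes are illustrative). Every rank opens the same file collectively and then writes its own slab independently:

        # Run with e.g.: mpiexec -n 4 python store_frames.py
        from mpi4py import MPI
        import h5py
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # All ranks open the file through the MPI-IO driver (collective call).
        with h5py.File('frames.h5', 'w', driver='mpio', comm=comm) as f:
            # Dataset creation is collective too: same arguments on every rank.
            dset = f.create_dataset('pixels', shape=(size, 512, 512), dtype='uint16')
            # Each rank then writes its own image slab in parallel.
            dset[rank] = np.full((512, 512), rank, dtype='uint16')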

  18. Energy Harvesting Hybrid Acoustic-Optical Underwater Wireless Sensor Networks Localization.

    PubMed

    Saeed, Nasir; Celik, Abdulkadir; Al-Naffouri, Tareq Y; Alouini, Mohamed-Slim

    2017-12-26

    Underwater wireless technologies demand higher data rates for ocean exploration. Currently, large coverage is achieved by acoustic sensor networks with low data rate, high cost, high latency, high power consumption, and negative impact on marine mammals. Meanwhile, optical communication for underwater networks has the advantage of a higher data rate, albeit over limited communication distances. Moreover, energy consumption is another major problem for underwater sensor networks, due to limited battery power and the difficulty of replacing or recharging the battery of a sensor node. The ultimate solution to this problem is to add energy harvesting capability to the acoustic-optical sensor nodes. Localization of underwater sensor networks is of utmost importance because the data collected from underwater sensor nodes is useful only if the location of the nodes is known. Therefore, a novel localization technique for energy harvesting hybrid acoustic-optical underwater wireless sensor networks (AO-UWSNs) is proposed. An AO-UWSN employs optical communication for a higher data rate over a short transmission distance and acoustic communication for a low data rate and long transmission distance. A hybrid received signal strength (RSS) based localization technique is proposed to localize the nodes in AO-UWSNs. The proposed technique combines the noisy RSS based measurements from acoustic communication and optical communication and estimates the final locations of acoustic-optical sensor nodes. A weighted multiple observations paradigm is proposed for hybrid estimated distances to suppress the noisy observations and give more importance to the accurate observations. Furthermore, a closed form solution for the Cramer-Rao lower bound (CRLB) is derived for the localization accuracy of the proposed technique.
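
    The fusion step (combine noisy acoustic distances with sparser but more precise optical ones, weighting accurate observations more) maps naturally onto weighted nonlinear least squares. A sketch under assumed geometry, weights and noise values; the paper's actual weighting paradigm and CRLB analysis are not reproduced here.

        import numpy as np
        from scipy.optimize import least_squares

        anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
        d_acoustic = np.array([57.5, 71.3, 73.0])    # long range, noisier
        d_optical = np.array([56.6, np.nan, 72.0])   # short range, precise, may miss
        w_ac, w_op = 1.0, 5.0                        # favour accurate observations

        def residuals(p):
            r = []
            for anc, da, do in zip(anchors, d_acoustic, d_optical):
                est = np.linalg.norm(p - anc)
                r.append(w_ac * (est - da))
                if not np.isnan(do):                 # optical link not always there
                    r.append(w_op * (est - do))
            return r

        p_hat = least_squares(residuals, x0=np.array([50.0, 50.0])).x
        print(p_hat)   # close to the true position (40, 40) used to fake the data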

  19. Energy Harvesting Hybrid Acoustic-Optical Underwater Wireless Sensor Networks Localization

    PubMed Central

    Saeed, Nasir; Celik, Abdulkadir; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2017-01-01

    Underwater wireless technologies for ocean exploration demand higher data rates. Currently, large coverage is achieved by acoustic sensor networks, at the price of low data rates, high cost, high latency, high power consumption, and a negative impact on marine mammals. Optical communication for underwater networks, meanwhile, offers higher data rates, albeit over limited communication distances. Moreover, energy consumption is another major problem for underwater sensor networks, owing to limited battery power and the difficulty of replacing or recharging the battery of a sensor node. The ultimate solution to this problem is to add energy harvesting capability to the acoustic-optical sensor nodes. Localization of underwater sensor networks is of utmost importance because the data collected from underwater sensor nodes are useful only if the locations of the nodes are known. Therefore, a novel localization technique for energy harvesting hybrid acoustic-optical underwater wireless sensor networks (AO-UWSNs) is proposed. An AO-UWSN employs optical communication for a higher data rate over short transmission distances and acoustic communication for a lower data rate over long transmission distances. A hybrid received signal strength (RSS) based localization technique is proposed to localize the nodes in AO-UWSNs. The proposed technique combines the noisy RSS-based measurements from acoustic and optical communication and estimates the final locations of the acoustic-optical sensor nodes. A weighted multiple observations paradigm is proposed for the hybrid estimated distances to suppress noisy observations and give more weight to accurate ones. Furthermore, a closed-form solution for the Cramér-Rao lower bound (CRLB) is derived for the localization accuracy of the proposed technique. PMID:29278405

  20. Metric Ranking of Invariant Networks with Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tao, Changxia; Ge, Yong; Song, Qinbao

    The management of large-scale distributed information systems relies on the effective use and modeling of monitoring data collected at various points in those systems. A promising approach is to discover invariant relationships among the monitoring data and generate invariant networks, where a node is a monitoring data source (metric) and a link indicates an invariant relationship between two metrics. Such an invariant network representation can help system experts to localize and diagnose system faults by examining broken invariant relationships and their related metrics, because system faults usually propagate among the monitoring data and eventually lead to broken invariant relationships. However, at any one time there are usually many broken links (invariant relationships) within an invariant network, and without proper guidance it is difficult for system experts to inspect this large number of broken links manually. A critical challenge, then, is how to effectively and efficiently rank the metrics (nodes) of an invariant network according to their anomaly levels; the ranked list of metrics gives system experts useful guidance for localizing and diagnosing system faults. To this end, we propose to model the nodes and the broken links as a Markov random field (MRF), and develop an iterative algorithm to infer the anomaly of each node based on belief propagation (BP). Finally, we validate the proposed algorithm on both real-world and synthetic data sets to illustrate its effectiveness.
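
    A drastically simplified message-passing sketch conveys the flavor of the ranking step (this is a toy heuristic in the spirit of the approach, not the authors' MRF/BP algorithm):

        import numpy as np

        # adjacency[i][j] = 1 if an invariant link exists between metrics i, j;
        # broken[i][j] = 1 if that link is currently broken.
        adjacency = np.array([[0, 1, 1, 0],
                              [1, 0, 1, 1],
                              [1, 1, 0, 0],
                              [0, 1, 0, 0]])
        broken = np.array([[0, 1, 0, 0],
                           [1, 0, 1, 0],
                           [0, 1, 0, 0],
                           [0, 0, 0, 0]])

        deg = np.maximum(adjacency.sum(axis=1), 1)
        belief = broken.sum(axis=1) / deg          # prior: share of broken links
        for _ in range(20):
            # a broken link raises suspicion in proportion to the neighbor's belief
            msg = (broken * belief).sum(axis=1) / deg
            belief = 0.5 * belief + 0.5 * msg

        print('metrics ranked by anomaly level:', np.argsort(-belief))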

  1. 30 CFR 57.6130 - Explosive material storage facilities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... local authorities for over-the-road use. Facilities other than magazines used to store blasting agents... or other appropriate warning signs that indicate the contents and are visible from each approach. ...

  2. 30 CFR 57.6130 - Explosive material storage facilities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... local authorities for over-the-road use. Facilities other than magazines used to store blasting agents... or other appropriate warning signs that indicate the contents and are visible from each approach. ...

  3. 30 CFR 57.6130 - Explosive material storage facilities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... local authorities for over-the-road use. Facilities other than magazines used to store blasting agents... or other appropriate warning signs that indicate the contents and are visible from each approach. ...

  4. The predictive factors for lymph node metastasis in early gastric cancer: A clinical study.

    PubMed

    Wang, Yinzhong

    2015-01-01

    To detect the clinicopathological factors associated with lymph node metastases in early gastric cancer, we retrospectively evaluated the distribution of metastatic nodes in 198 patients with early gastric cancer treated in our hospital between May 2008 and January 2015. Clinicopathological factors including age, gender, tumor location, tumor size, macroscopic type, depth of invasion, histological type and venous invasion were studied, and the relationship between these parameters and lymph node metastases was analyzed. Lymph node metastasis was detected in 28 of the 198 patients. Univariate analysis revealed a close relationship between tumor size, depth of invasion, histological type, venous invasion, local ulceration and lymph node metastases. Multivariate analysis revealed that these five factors were independent risk factors for lymph node metastases. The clinicopathological parameters of tumor size, depth of invasion, local ulceration, histological type and venous invasion are closely correlated with lymph node metastases and should receive close attention in patients with early gastric cancer.

  5. Automatic abdominal lymph node detection method based on local intensity structure analysis from 3D x-ray CT images

    NASA Astrophysics Data System (ADS)

    Nakamura, Yoshihiko; Nimura, Yukitaka; Kitasaka, Takayuki; Mizuno, Shinji; Furukawa, Kazuhiro; Goto, Hidemi; Fujiwara, Michitaka; Misawa, Kazunari; Ito, Masaaki; Nawano, Shigeru; Mori, Kensaku

    2013-03-01

    This paper presents an automated method for abdominal lymph node detection to aid preoperative diagnosis in abdominal cancer surgery. In abdominal cancer surgery, surgeons must resect not only tumors and metastases but also lymph nodes that might harbor a metastasis. This procedure is called lymphadenectomy, or lymph node dissection. Insufficient lymphadenectomy carries a high risk of relapse, but excessive resection decreases a patient's quality of life. It is therefore important to identify the location and structure of lymph nodes in order to make a suitable surgical plan. The proposed method consists of candidate lymph node detection followed by false positive reduction. Candidate lymph nodes are detected using a multi-scale blob-like enhancement filter based on local intensity structure analysis. To reduce false positives, the method uses a support vector machine classifier with texture and shape information. The experimental results show that it detects 70.5% of the lymph nodes with 13.0 false positives per case.
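
    The candidate-detection stage can be illustrated with an off-the-shelf multi-scale blob detector (a sketch using scikit-image's Laplacian-of-Gaussian detector as a stand-in for the paper's intensity-structure filter, on synthetic data):

        import numpy as np
        from skimage.feature import blob_log

        image = np.zeros((128, 128))
        image[60:66, 60:66] = 1.0      # a small bright, lymph-node-like blob

        # Search for blob-like structures across a range of scales (sigmas).
        candidates = blob_log(image, min_sigma=2, max_sigma=8, num_sigma=7,
                              threshold=0.1)
        for y, x, sigma in candidates:
            print(f'candidate at ({y:.0f}, {x:.0f}), scale {sigma:.1f}')
        # A second-stage SVM on texture/shape features would then prune
        # false positives, as the paper describes.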

  6. Scale-Free Compact Routing Schemes in Networks of Low Doubling Dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konjevod, Goran; Richa, Andréa W.; Xia, Donglin

    In this work, we consider compact routing schemes in networks of low doubling dimension, where the doubling dimension is the least value α such that any ball in the network can be covered by at most 2^α balls of half the radius. There are two variants of routing-scheme design: (i) labeled (name-dependent) routing, in which the designer is allowed to rename the nodes so that the names (labels) can contain additional routing information, for example, topological information; and (ii) name-independent routing, which works on top of the arbitrary original node names in the network, that is, the node names are independent of the routing scheme. In this article, given any constant ε ∈ (0, 1) and an n-node edge-weighted network of doubling dimension α ∈ O(log log n), we present: a (1 + ε)-stretch labeled compact routing scheme with ⌈log n⌉-bit routing labels, O(log² n / log log n)-bit packet headers, and ((1/ε)^O(α) log³ n)-bit routing information at each node; and a (9 + ε)-stretch name-independent compact routing scheme with O(log² n / log log n)-bit packet headers and ((1/ε)^O(α) log³ n)-bit routing information at each node. In addition, we prove a lower bound: any name-independent routing scheme with o(n^((ε/60)²)) bits of storage at each node has stretch no less than 9 - ε for any ε ∈ (0, 8). Therefore, our name-independent routing scheme achieves asymptotically optimal stretch with polylogarithmic storage at each node and polylogarithmic packet headers. Note that both schemes are scale-free in the sense that their space requirements do not depend on the normalized diameter Δ of the network. Finally, we also present a simpler non-scale-free (9 + ε)-stretch name-independent compact routing scheme with improved space requirements if Δ is polynomial in n.

  7. Scale-Free Compact Routing Schemes in Networks of Low Doubling Dimension

    DOE PAGES

    Konjevod, Goran; Richa, Andréa W.; Xia, Donglin

    2016-06-15

    In this work, we consider compact routing schemes in networks of low doubling dimension, where the doubling dimension is the least value α such that any ball in the network can be covered by at most 2^α balls of half the radius. There are two variants of routing-scheme design: (i) labeled (name-dependent) routing, in which the designer is allowed to rename the nodes so that the names (labels) can contain additional routing information, for example, topological information; and (ii) name-independent routing, which works on top of the arbitrary original node names in the network, that is, the node names are independent of the routing scheme. In this article, given any constant ε ∈ (0, 1) and an n-node edge-weighted network of doubling dimension α ∈ O(log log n), we present: a (1 + ε)-stretch labeled compact routing scheme with ⌈log n⌉-bit routing labels, O(log² n / log log n)-bit packet headers, and ((1/ε)^O(α) log³ n)-bit routing information at each node; and a (9 + ε)-stretch name-independent compact routing scheme with O(log² n / log log n)-bit packet headers and ((1/ε)^O(α) log³ n)-bit routing information at each node. In addition, we prove a lower bound: any name-independent routing scheme with o(n^((ε/60)²)) bits of storage at each node has stretch no less than 9 - ε for any ε ∈ (0, 8). Therefore, our name-independent routing scheme achieves asymptotically optimal stretch with polylogarithmic storage at each node and polylogarithmic packet headers. Note that both schemes are scale-free in the sense that their space requirements do not depend on the normalized diameter Δ of the network. Finally, we also present a simpler non-scale-free (9 + ε)-stretch name-independent compact routing scheme with improved space requirements if Δ is polynomial in n.

  8. Thyroglobulin assay in fluids from lymph node fine needle-aspiration washout: influence of pre-analytical conditions.

    PubMed

    Casson, Florence Boux de; Moal, Valérie; Gauchez, Anne-Sophie; Moineau, Marie-Pierre; Sault, Corinne; Schlageter, Marie-Hélène; Massart, Catherine

    2017-04-01

    The aim of this study was to evaluate the pre-analytical factors contributing to uncertainty in thyroglobulin measurement in fluids from fine-needle aspiration (FNA) washout of cervical lymph nodes. We studied the pre-analytical stability, under different conditions, of 41 samples prepared from concentrated solutions of thyroglobulin (FNA washout or certified standard) diluted in physiological saline solution or in a buffer containing 6% albumin. In this buffer, no changes in thyroglobulin concentration were observed over time in any of the storage conditions tested. In albumin-free saline solution, thyroglobulin recovery rates depended on initial sample concentrations and on the storage conditions (in conventional storage tubes, mean recovery was 56% after 3 hours of storage at room temperature and 19% after 24 hours of storage for concentrations ranging from 2 to 183 μg/L; recovery was 95% after 3 or 24 hours of storage at room temperature for a concentration of 5,656 μg/L). We show that these results are due to non-specific adsorption of thyroglobulin onto the storage tubes, which depends on the sample protein concentration. We also show that possible contamination of fluids from FNA washout by plasma proteins does not always adequately prevent this adsorption. In conclusion, non-specific adsorption onto storage tubes strongly contributes to uncertainty in thyroglobulin measurement in physiological saline solution. For FNA washout, it is therefore recommended to use a protein-containing buffer provided by the laboratory.

  9. New modeling method for the dielectric relaxation of a DRAM cell capacitor

    NASA Astrophysics Data System (ADS)

    Choi, Sujin; Sun, Wookyung; Shin, Hyungsoon

    2018-02-01

    This study proposes a new method for automatically synthesizing an equivalent circuit for the dielectric relaxation (DR) characteristic of dynamic random access memory (DRAM) without frequency-dependent capacitance measurements. Charge loss due to DR can be observed as a voltage drop at the storage node, and this phenomenon can be analyzed with an equivalent circuit. The Havriliak-Negami model is used to accurately determine the electrical characteristic parameters of the equivalent circuit. The DRAM sensing operation is performed in HSPICE simulations to verify the new method. The simulations demonstrate that the storage node voltage drop resulting from DR, and the resulting reduction in the sensing voltage margin, which has a critical impact on DRAM read operation, can be accurately estimated using this method.
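
    For reference, the Havriliak-Negami model expresses the complex permittivity as eps(w) = eps_inf + d_eps / (1 + (j·w·tau)^alpha)^beta; a small numerical sketch (with illustrative parameter values, not fitted DRAM data) is:

        import numpy as np

        eps_inf, d_eps, tau, alpha, beta = 3.9, 0.4, 1e-6, 0.8, 0.9

        w = np.logspace(2, 9, 8)   # angular frequencies [rad/s]
        eps = eps_inf + d_eps / (1 + (1j * w * tau) ** alpha) ** beta
        for wi, ei in zip(w, eps):
            # real part: storage; negative imaginary part: loss
            print(f"w={wi:.1e}  eps'={ei.real:.4f}  eps''={-ei.imag:.4f}")

    Fitting these parameters and then synthesizing an R-C network that reproduces the resulting dispersion is the kind of step the paper automates.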

  10. Non-animal assessment of skin sensitization hazard: Is an integrated testing strategy needed, and if so what should be integrated?

    PubMed

    Roberts, David W; Patlewicz, Grace

    2018-01-01

    There is an expectation that, to meet regulatory requirements and avoid or minimize animal testing, integrated approaches to testing and assessment will be needed that rely on assays representing key events (KEs) in the skin sensitization adverse outcome pathway. Three non-animal assays have been formally validated and adopted for regulatory use: the direct peptide reactivity assay (DPRA), the KeratinoSens™ assay and the human cell line activation test (h-CLAT). There have been many efforts to develop integrated approaches to testing and assessment, with the "two out of three" approach attracting much attention. Here a set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performance of the three individual non-animal assays, their binary combinations and the "two out of three" approach in predicting skin sensitization potential. The most predictive approach was to use both the DPRA and the h-CLAT as follows: (1) perform the DPRA; if positive, classify as sensitizing; and (2) if negative, perform the h-CLAT; a positive outcome denotes a sensitizer, a negative outcome a non-sensitizer. With this approach, 85% (local lymph node assay) and 93% (human) of non-sensitizer predictions were correct, whereas the "two out of three" approach had 69% (local lymph node assay) and 79% (human) of non-sensitizer predictions correct. The findings are consistent with the argument, supported by published quantitative mechanistic models, that only the first KE needs to be modeled. All three assays model this KE to an extent. The value of using more than one assay depends on how the different assays compensate for each other's technical limitations. Copyright © 2017 John Wiley & Sons, Ltd.
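
    The decision logic of the winning tiered approach, contrasted with the "two out of three" majority vote, fits in a few lines (function and argument names are hypothetical):

        def tiered_prediction(dpra_positive: bool, hclat_positive: bool) -> str:
            # Step 1: a positive DPRA immediately classifies as sensitizing.
            if dpra_positive:
                return 'sensitizer'
            # Step 2: only a negative DPRA defers to the h-CLAT result.
            return 'sensitizer' if hclat_positive else 'non-sensitizer'

        def two_out_of_three(dpra: bool, keratinosens: bool, hclat: bool) -> str:
            # Majority vote over the three validated assays.
            votes = sum([dpra, keratinosens, hclat])
            return 'sensitizer' if votes >= 2 else 'non-sensitizer'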

  11. A Comprehensive Review of Contemporary Role of Local Treatment of the Primary Tumor and/or the Metastases in Metastatic Prostate Cancer

    PubMed Central

    Aoun, Fouad; Peltier, Alexandre; van Velthoven, Roland

    2014-01-01

    To provide an overview of the currently available literature regarding local control of the primary tumor and oligometastases in metastatic prostate cancer, and salvage lymph node dissection for clinical lymph node relapse after curative treatment of prostate cancer. Evidence Acquisition. A systematic literature search was conducted in 2014 to identify abstracts, original articles, review articles, research articles, and editorials relevant to local control in metastatic prostate cancer. Evidence Synthesis. Local control of the primary tumor in metastatic prostate cancer remains experimental, with a low level of evidence. The concept is supported by a growing body of genetic and molecular research, as well as by analogy with other cancers. There is only one retrospective observational population-based study showing prolonged survival. To eradicate oligometastases, several options exist, with excellent local control rates. Stereotactic body radiotherapy is a safe, well tolerated, and efficacious treatment for lymph node and bone lesions; both biochemical and clinical progression are slowed, with a median time to initiation of ADT of 2 years. Salvage lymph node dissection is feasible in patients with clinical lymph node relapse after curative local treatment. Conclusion. Despite encouraging mid-term oncologic results, a complete cure remains elusive in metastatic prostate cancer patients. Further advances in imaging are crucial in order to rapidly evolve beyond the proof of concept. PMID:25485280

  12. Network module detection: Affinity search technique with the multi-node topological overlap measure

    PubMed Central

    Li, Ai; Horvath, Steve

    2009-01-01

    Background Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. Findings We adapt network neighborhood analysis for use in module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Conclusion Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/MTOM/ PMID:19619323

  13. Network module detection: Affinity search technique with the multi-node topological overlap measure.

    PubMed

    Li, Ai; Horvath, Steve

    2009-07-20

    Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. We adapt network neighborhood analysis for use in module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/MTOM/
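
    The pairwise topological overlap measure that the multi-node version generalizes can be computed directly from an adjacency matrix (a sketch of the standard two-node formula; the P-point extension follows the authors' earlier work and is not shown):

        import numpy as np

        # TOM_ij = (sum_u a_iu * a_uj + a_ij) / (min(k_i, k_j) + 1 - a_ij)
        def topological_overlap(A: np.ndarray) -> np.ndarray:
            shared = A @ A                        # counts of shared neighbors
            k = A.sum(axis=1)                     # node connectivities
            tom = (shared + A) / (np.minimum.outer(k, k) + 1.0 - A)
            np.fill_diagonal(tom, 1.0)
            return tom

        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        print(topological_overlap(A))   # 1 - TOM is the clustering dissimilarity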

  14. Unusual metastasis of left colon cancer: considerations on two cases.

    PubMed

    Gubitosi, Adelmo; Moccia, Giancarlo; Malinconico, Francesca Antonella; Gilio, Francesco; Iside, Giovanni; Califano, Umberto G A; Foroni, Fabrizio; Ruggiero, Roberto; Docimo, Giovanni; Parmeggiani, Domenico; Agresti, Massimo

    2009-04-01

    Usually, left colon cancer metastasizes to the liver, abdominal lymph nodes and lungs; other localizations are quite rare. Nevertheless, some uncommon metastasis sites are reported in the literature, such as the peritoneum, ovaries, uterus, kidney, testis, bones, thyroid, oral cavity and central nervous system. We report two cases of unusual localization of left colon cancer metastases, one into the retroperitoneal space and the other at the left axillary lymph nodes and between the liver and pancreas. In the first case the diffusion pathway may have been the lymphatic mesocolic vessels, partially left in place by the previous surgery. In the second case the alleged metastatic lane may have run through the periumbilical lymph nodes to the parasternal lymph nodes and then to the internal mammary ones, finally reaching the axillary lymph nodes.

  15. Comparisons of node-based and element-based approaches of assigning bone material properties onto subject-specific finite element models.

    PubMed

    Chen, G; Wu, F Y; Liu, Z C; Yang, K; Cui, F

    2015-08-01

    Subject-specific finite element (FE) models can be generated from computed tomography (CT) datasets of a bone. A key step is assigning material properties automatically onto the finite element model, which remains a great challenge. This paper proposes a node-based assignment approach and compares it with the element-based approach in the literature. Both approaches were implemented using ABAQUS. The assignment procedure is divided into two steps: generating a data file of the image intensity of a bone in a MATLAB program, and reading the data file into ABAQUS via user subroutines. The node-based approach assigns the material properties to each node of the finite element mesh, while the element-based approach assigns them directly to each integration point of an element. Both approaches are independent of the element type. A number of FE meshes were tested and both approaches give accurate solutions; comparatively, the node-based approach involves less programming effort. The node-based approach is also independent of the type of analysis; it has been tested on the nonlinear analysis of a Sawbone femur. The node-based approach substantially improves the level of automation of the assignment of bone material properties. It is the simplest and most powerful approach that is applicable to many types of analyses and elements. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
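
    Outside of any particular solver, the node-based idea reduces to sampling the intensity field at node coordinates and converting intensity to a modulus (a hedged sketch using a generic density-modulus power law; the coefficients and the MATLAB/ABAQUS plumbing of the paper are not reproduced here):

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        ct = np.random.rand(64, 64, 64) * 1500.0        # stand-in CT volume
        sample = RegularGridInterpolator([np.arange(64)] * 3, ct)

        nodes = np.array([[10.5, 20.2, 30.7],
                          [40.0, 12.3, 55.1]])          # mesh node coordinates
        rho = 0.001 * sample(nodes)                     # intensity -> density
        E = 6850.0 * rho ** 1.49                        # density -> modulus [MPa]
        print(E)   # per-node moduli, ready to be read in via user subroutines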

  16. A Low-Storage-Consumption XML Labeling Method for Efficient Structural Information Extraction

    NASA Astrophysics Data System (ADS)

    Liang, Wenxin; Takahashi, Akihiro; Yokota, Haruo

    Recently, labeling methods that extract and reconstruct the structural information of XML data, which is important for many applications such as XPath querying and keyword search, have become more attractive. To achieve efficient structural information extraction, in this paper we propose the C-DO-VLEI code, a novel update-friendly bit-vector encoding scheme based on register-length bit operations combined with the properties of Dewey order numbers, which cannot be implemented in other relevant existing schemes such as ORDPATH. The proposed method also achieves lower storage consumption because it requires neither a prefix schema nor any reserved codes for node insertion. We performed experiments to evaluate and compare the performance and storage consumption of the proposed method with those of the ORDPATH method. Experimental results show that the execution times for extracting depth information and parent node labels using the C-DO-VLEI code are about 25% and 15% lower, respectively, and the average label size is about 24% smaller, compared with ORDPATH.

  17. Towards understanding the behavior of physical systems using information theory

    NASA Astrophysics Data System (ADS)

    Quax, Rick; Apolloni, Andrea; Sloot, Peter M. A.

    2013-09-01

    One of the goals of complex network analysis is to identify the most influential nodes, i.e., the nodes that dictate the dynamics of other nodes. In the case of autonomous systems or transportation networks, highly connected hubs play a preeminent role in diffusing the flow of information and viruses; in contrast, in language evolution most linguistic norms come from peripheral nodes that have only a few contacts. Clearly, a topological analysis of the interactions alone is not sufficient to identify the nodes that drive the state of the network. Here we show how information theory can be used to quantify how the dynamics of individual nodes propagate through a system. We interpret the state of a node as a storage of information about the state of other nodes, quantified in terms of Shannon information. This information is transferred through interactions and lost due to noise, and we calculate how far it can travel through a network. We apply this concept to a model of opinion formation in a complex social network to calculate the impact of each node by measuring how long its opinion is remembered by the network. Counter-intuitively, we find that the dynamics of opinions are determined neither by the hubs nor by the peripheral nodes, but rather by nodes with intermediate connectivity.

  18. Disruptions of brain structural network in end-stage renal disease patients with long-term hemodialysis and normal-appearing brain tissues.

    PubMed

    Chou, Ming-Chung; Ko, Chih-Hung; Chang, Jer-Ming; Hsieh, Tsyh-Jyi

    2018-05-04

    End-stage renal disease (ESRD) patients on hemodialysis have been shown to exhibit silent white-matter alterations that are likely to disrupt brain structural networks. The purpose of this study was therefore to investigate the disruption of brain structural networks in ESRD patients. Thirty-three ESRD patients with normal-appearing brain tissues and 29 age- and gender-matched healthy controls were enrolled in this study and underwent both cognitive ability screening instrument (CASI) assessment and diffusion tensor imaging (DTI) acquisition. The brain structural connectivity network was constructed using probabilistic tractography with the automatic anatomical labeling template. Graph-theory analysis was performed to detect alterations of node strength, node degree, node local efficiency, and node clustering coefficient in ESRD patients. Correlational analysis was performed to explore the relationship between network measures, CASI score, and dialysis duration. Structural connectivity, node strength, node degree, and node local efficiency were significantly decreased, whereas the node clustering coefficient was significantly increased, in ESRD patients compared with healthy controls. The disrupted local structural networks were generally associated with common neurological complications of ESRD, but the correlational analysis did not reveal significant correlations between network measures, CASI score, and dialysis duration. Graph-theory analysis was helpful for investigating disruptions of the brain structural network in ESRD patients with normal-appearing brain tissues. Copyright © 2018. Published by Elsevier Masson SAS.
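
    The network measures named above are readily computed on a weighted graph (a toy networkx illustration; the study's networks came from probabilistic tractography):

        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from([(0, 1, 0.8), (1, 2, 0.5),
                                   (0, 2, 0.9), (2, 3, 0.4)])

        strength = dict(G.degree(weight='weight'))      # node strength
        degree = dict(G.degree())                       # node degree
        clustering = nx.clustering(G, weight='weight')  # clustering coefficient
        avg_local_eff = nx.local_efficiency(G)          # network-average local efficiency
        print(strength, degree, clustering, avg_local_eff)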

  19. Constructing Social Networks From Secondary Storage With Bulk Analysis Tools

    DTIC Science & Technology

    2016-06-01

    Two objectives motivated this thesis... The thesis finds that classic measures of centrality are effective for identifying important nodes and close associates, and that further study of modularity classes may be a promising method of partitioning complex components. Ground truth was determined by interviews with the owners, and the data can be used for future study in this area.

  20. Architecture of a spatial data service system for statistical analysis and visualization of regional climate changes

    NASA Astrophysics Data System (ADS)

    Titov, A. G.; Okladnikov, I. G.; Gordov, E. P.

    2017-11-01

    The use of large geospatial datasets in climate change studies requires the development of a set of Spatial Data Infrastructure (SDI) elements, including geoprocessing and cartographical visualization web services. This paper presents the architecture of a geospatial OGC web service system as an integral part of the general architecture of a virtual research environment (VRE) for statistical processing and visualization of meteorological and climatic data. The architecture is a set of interconnected standalone SDI nodes with corresponding data storage systems. Each node runs specialized software, such as a geoportal, cartographical web services (WMS/WFS), a metadata catalog, and a MySQL database of technical metadata describing the geospatial datasets available on the node. It also contains geospatial data processing services (WPS) based on a modular computing backend that implements the statistical processing functionality, thus providing analysis of large datasets with results available for visualization and export to files in standard formats (XML, binary, etc.). Several cartographical web services have been developed in a system prototype to provide capabilities for working with raster and vector geospatial data through OGC web services. The distributed architecture presented allows easy addition of new nodes, computing and data storage systems, and provides a solid computational infrastructure for regional climate change studies based on modern Web and GIS technologies.
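
    A client would typically pull a rendered layer from such a node with a standard WMS GetMap request (the endpoint and layer name below are hypothetical):

        import requests

        params = {
            'SERVICE': 'WMS', 'VERSION': '1.3.0', 'REQUEST': 'GetMap',
            'LAYERS': 'air_temperature_mean', 'STYLES': '',
            'CRS': 'EPSG:4326', 'BBOX': '50,60,70,90',
            'WIDTH': 800, 'HEIGHT': 400, 'FORMAT': 'image/png',
        }
        r = requests.get('https://sdi-node.example.org/wms', params=params)
        with open('map.png', 'wb') as f:
            f.write(r.content)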

  1. Optimisation of the usage of LHC and local computing resources in a multidisciplinary physics department hosting a WLCG Tier-2 centre

    NASA Astrophysics Data System (ADS)

    Barberis, Stefano; Carminati, Leonardo; Leveraro, Franco; Mazza, Simone Michele; Perini, Laura; Perlz, Francesco; Rebatto, David; Tura, Ruggero; Vaccarossa, Luca; Villaplana, Miguel

    2015-12-01

    We present the approach of the University of Milan Physics Department and the local unit of INFN to allow and encourage the sharing among different research areas of computing, storage and networking resources (the largest ones being those composing the Milan WLCG Tier-2 centre and tailored to the needs of the ATLAS experiment). Computing resources are organised as independent HTCondor pools, with a global master in charge of monitoring them and optimising their usage. The configuration has to provide satisfactory throughput for both serial and parallel (multicore, MPI) jobs. A combination of local, remote and cloud storage options are available. The experience of users from different research areas operating on this shared infrastructure is discussed. The promising direction of improving scientific computing throughput by federating access to distributed computing and storage also seems to fit very well with the objectives listed in the European Horizon 2020 framework for research and development.

  2. The use of ethanol:diethylphthalate as a vehicle for the local lymph node assay.

    PubMed

    Betts, Catherine J; Beresford, L; Dearman, R J; Lalko, J; Api, A P; Kimber, I

    2007-02-01

    The murine local lymph node assay (LLNA) is used for the prospective identification of contact allergens. The method is not only accepted as a stand-alone test for the identification of contact allergenic hazard but is also used increasingly for measuring the relative potency of skin-sensitizing chemicals as an integral component of the risk assessment process. During the development and validation phases of the method, a list of standard vehicles for use in the LLNA was identified, among them the most commonly used vehicle, acetone/olive oil (4:1, AOO). We have now explored the performance in the LLNA of a non-standard vehicle, ethanol:diethyl phthalate (1:3, EtOH:DEP), which is used frequently to evaluate dermal effects of fragrance materials in both human and experimental studies. The current investigations demonstrate that EtOH:DEP induces background proliferative responses in lymph nodes comparable with those of the standard vehicle AOO. Moreover, expected levels of activity are observed when EtOH:DEP is used to deliver a known contact allergen in the LLNA. The conclusion drawn is that EtOH:DEP provides a suitable vehicle for use in the LLNA and that the approach described provides a basis for the future evaluation of novel vehicles.

  3. Partially Decentralized Control Architectures for Satellite Formations

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Bauer, Frank H.

    2002-01-01

    In a partially decentralized control architecture, more than one but fewer than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such an architecture, based on a reliability versus cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal arise only through common biases and process noise. Covariance and Monte Carlo analyses of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.
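
    The flavor of the reliability-versus-cost trade is easy to reproduce (a back-of-the-envelope sketch with illustrative numbers, not the paper's cost model): the formation stays supervised as long as at least one of the k supervisor nodes survives, so system reliability is 1 - (1 - r)^k.

        r = 0.95             # single-supervisor reliability over the mission
        cost_per_sup = 3.0   # assumed extra cost of supervisory capability
        for k in range(1, 6):
            system_reliability = 1.0 - (1.0 - r) ** k
            print(k, round(system_reliability, 6), k * cost_per_sup)
        # A handful of supervisors already push reliability close to 1 at a
        # fraction of the cost of making every node a supervisor.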

  4. DMA engine for repeating communication patterns

    DOEpatents

    Chen, Dong; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard; Vranas, Pavlos

    2010-09-21

    A parallel computer system is constructed as a network of interconnected compute nodes that operates a global message-passing application performing communications across the network. Each compute node includes one or more individual processors with memories which run local instances of the global message-passing application, carrying out local processing operations independent of processing operations carried out at other compute nodes. Each compute node also includes a DMA engine constructed to interact with the application via Injection FIFO Metadata describing multiple Injection FIFOs, where each Injection FIFO may contain an arbitrary number of message descriptors, in order to process messages with a fixed processing overhead irrespective of the number of message descriptors included in the Injection FIFO.

  5. Quantifying and Mapping the Supply of and Demand for Carbon Storage and Sequestration Service from Urban Trees.

    PubMed

    Zhao, Chang; Sander, Heather A

    2015-01-01

    Studies that assess the distribution of benefits provided by ecosystem services across urban areas are increasingly common. Nevertheless, current knowledge of both the supply and demand sides of ecosystem services remains limited, leaving a gap in our understanding of the balance between ecosystem service supply and demand that restricts our ability to assess and manage these services. The present study seeks to fill this gap by developing and applying an integrated approach to quantifying the supply and demand of a key ecosystem service, carbon storage and sequestration, at the local level. This approach follows three basic steps: (1) quantifying and mapping service supply based upon Light Detection and Ranging (LiDAR) processing and allometric models, (2) quantifying and mapping demand for carbon sequestration using an indicator based on local anthropogenic CO2 emissions, and (3) mapping a supply-to-demand ratio. We illustrate this approach using a portion of the Twin Cities Metropolitan Area of Minnesota, USA. Our results indicate that 1735.69 million kg of carbon are stored by urban trees in our study area. Annually, 33.43 million kg of carbon are sequestered by trees, whereas 3087.60 million kg of carbon are emitted by human sources. The carbon sequestration service provided by urban trees in the study location thus plays a minor role in combating climate change, offsetting approximately 1% of local anthropogenic carbon emissions per year, although avoided emissions via storage in trees are substantial. Our supply-to-demand ratio map provides insight into the balance between carbon sequestration supply in urban trees and demand for such sequestration at the local level, pinpointing critical locations where higher levels of supply and demand exist. Such a ratio map could help planners and policy makers to assess and manage the supply of and demand for carbon sequestration.

  6. Use of Local Intelligence to Reduce Energy Consumption of Wireless Sensor Nodes in Elderly Health Monitoring Systems

    PubMed Central

    Lampoltshammer, Thomas J.; de Freitas, Edison Pignaton; Nowotny, Thomas; Plank, Stefan; da Costa, João Paulo Carvalho Lustosa; Larsson, Tony; Heistracher, Thomas

    2014-01-01

    The percentage of elderly people in European countries is increasing. This situation affects socio-economic structures and creates demand for resourceful solutions such as Ambient Assisted Living (AAL), a possible methodology for fostering health care for elderly people. In this context, sensor-based devices play a leading role in monitoring, e.g., the health conditions of elderly people in order to alert care personnel in case of an incident. However, the adoption of such devices strongly depends on the comfort of wearing them. In most cases, the bottleneck is battery lifetime, which limits the effectiveness of the system. In this paper we propose an approach that reduces the energy consumption of sensor nodes by using local intelligence on the sensors. By increasing the intelligence of the sensor node, a substantial decrease in the necessary communication payload can be achieved. The results show a significant potential to save energy and decrease the actual size of the sensor device units. PMID:24618777

  7. Use of local intelligence to reduce energy consumption of wireless sensor nodes in elderly health monitoring systems.

    PubMed

    Lampoltshammer, Thomas J; Pignaton de Freitas, Edison; Nowotny, Thomas; Plank, Stefan; da Costa, João Paulo Carvalho Lustosa; Larsson, Tony; Heistracher, Thomas

    2014-03-11

    The percentage of elderly people in European countries is increasing. This situation affects socio-economic structures and creates demand for resourceful solutions such as Ambient Assisted Living (AAL), a possible methodology for fostering health care for elderly people. In this context, sensor-based devices play a leading role in monitoring, e.g., the health conditions of elderly people in order to alert care personnel in case of an incident. However, the adoption of such devices strongly depends on the comfort of wearing them. In most cases, the bottleneck is battery lifetime, which limits the effectiveness of the system. In this paper we propose an approach that reduces the energy consumption of sensor nodes by using local intelligence on the sensors. By increasing the intelligence of the sensor node, a substantial decrease in the necessary communication payload can be achieved. The results show a significant potential to save energy and decrease the actual size of the sensor device units.
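
    The payload reduction can be illustrated by pushing a simple decision rule onto the node so that only events, not raw samples, are transmitted (names and thresholds below are illustrative):

        # Transmit only on alarm-state changes instead of streaming samples.
        def monitor(samples, lo=50, hi=120, radio_send=print):
            previous_alarm = False
            for t, heart_rate in enumerate(samples):
                alarm = heart_rate < lo or heart_rate > hi
                if alarm and not previous_alarm:
                    radio_send(f't={t}: alert, HR={heart_rate}')
                previous_alarm = alarm

        monitor([72, 75, 130, 131, 80, 45, 71])   # 2 packets instead of 7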

  8. The informational architecture of the cell.

    PubMed

    Walker, Sara Imari; Kim, Hyunju; Davies, Paul C W

    2016-03-13

    We compare the informational architecture of biological and random networks to identify informational features that may distinguish biological networks from random ones. The study presented here focuses on the Boolean network model for regulation of the cell cycle of the fission yeast Schizosaccharomyces pombe. We compare calculated values of local and global information measures for the fission yeast cell cycle with the same measures applied to two different classes of random networks: Erdös-Rényi and scale-free. We report patterns in local information processing and storage that do indeed distinguish biological from random networks, associated with the control nodes that regulate the function of the fission yeast cell-cycle network. Conversely, we find that integrated information, which serves as a global measure of 'emergent' information processing, does not differ from random for the case presented. We discuss implications for our understanding of the informational architecture of the fission yeast cell-cycle network in particular, and more generally for illuminating any distinctive physics that may be operative in life. © 2016 The Author(s).

  9. Holographic memory for high-density data storage and high-speed pattern recognition

    NASA Astrophysics Data System (ADS)

    Gu, Claire

    2002-09-01

    As computers and the internet become faster, more and more information is transmitted, received, and stored every day. The demand for high-density, fast-access data storage is pushing scientists and engineers to explore all possible approaches, including magnetic, mechanical, and optical ones. Optical data storage has already demonstrated its potential in the competition against other storage technologies, with CD and DVD showing their advantages in the computer and entertainment markets. What motivates the use of optical waves to store and access information is the same as the motivation for optical communication: light, or an optical wave, has an enormous capacity (bandwidth) to carry information because of its short wavelength and parallel nature. In optical storage there are two types of mechanism, namely localized and holographic memories. What gives holographic data storage an advantage over localized bit storage is the natural ability to read the stored information in parallel, thereby meeting the demand for fast access. Another unique feature that makes holographic data storage attractive is its capability of performing associative recall at an incomparable speed. Volume holographic memory is therefore particularly suitable for high-density data storage and high-speed pattern recognition. In this paper, we review previous work on volume holographic memories and discuss the challenges for this technology to become a reality.

  10. Anchor-Free Localization Method for Mobile Targets in Coal Mine Wireless Sensor Networks

    PubMed Central

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make precise localization difficult to apply in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequences. First, a coal mine wireless sensor network is constructed in underground mines based on ZigBee technology. Then a non-metric MDS algorithm is used to estimate the reference nodes' locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 physical ZigBee nodes, and experiments in a mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines. PMID:22574048
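
    The non-metric MDS step can be sketched with scikit-learn: it recovers a relative map of the nodes from a dissimilarity matrix whose rank order, rather than absolute scale, is trusted (synthetic data; the rank-sequence refinement of the paper is not shown):

        import numpy as np
        from sklearn.manifold import MDS

        true = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)
        D = np.linalg.norm(true[:, None] - true[None, :], axis=-1)
        D += np.random.rand(*D.shape) * 0.5        # measurement noise
        D = (D + D.T) / 2
        np.fill_diagonal(D, 0)

        mds = MDS(n_components=2, metric=False, dissimilarity='precomputed',
                  random_state=0)
        relative_map = mds.fit_transform(D)        # positions up to rotation
        print(relative_map)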

  11. Leveraging Cloud Computing to Improve Storage Durability, Availability, and Cost for MER Maestro

    NASA Technical Reports Server (NTRS)

    Chang, George W.; Powell, Mark W.; Callas, John L.; Torres, Recaredo J.; Shams, Khawaja S.

    2012-01-01

    The Maestro for MER (Mars Exploration Rover) software is the premiere operation and activity planning software for the Mars rovers, and it is required to deliver all of the processed image products to scientists on demand. These data span multiple storage arrays sized at 2 TB, and a backup scheme ensures data are not lost. In a catastrophe, these data would currently be restored at 20 GB/hour, taking several days for a full restoration. A seamless solution provides access to highly durable, highly available, scalable, and cost-effective storage capabilities. This approach also employs a novel technique that enables storage of the majority of the data in the cloud and some data locally. This feature is used to store the most recent data locally in order to guarantee utmost reliability in case of an outage or disconnection from the Internet. It also obviates any changes to the software that generates the most recent data set, as it retains the same interface to the file system as it had before the updates.
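
    The read path of such a hybrid scheme amounts to a local cache with a cloud fallback (a sketch using boto3; the bucket and directory names are hypothetical, not the mission's configuration):

        import os
        import boto3

        s3 = boto3.client('s3')

        def fetch(product, cache_dir='/data/recent', bucket='mer-products'):
            local_path = os.path.join(cache_dir, product)
            if os.path.exists(local_path):   # hot data: guaranteed-local copy
                return local_path
            # cold data: pull from durable, highly available cloud storage
            s3.download_file(bucket, product, local_path)
            return local_path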

  12. Improved Survival in Male Melanoma Patients in the Era of Sentinel Node Biopsy.

    PubMed

    Koskivuo, I; Vihinen, P; Mäki, M; Talve, L; Vahlberg, T; Suominen, E

    2017-03-01

    Sentinel node biopsy is a standard method for nodal staging in patients with clinically localized cutaneous melanoma, but its survival advantage remains unresolved. The aim of this case-control study was to investigate the survival benefit of sentinel node biopsy. A total of 305 prospective melanoma patients undergoing sentinel node biopsy were compared with 616 retrospective control patients with clinically localized melanoma who had not undergone sentinel node biopsy. Survival differences were calculated with a median follow-up time of 71 months in sentinel node biopsy patients and 74 months in control patients. Analyses were performed overall and separately for males and females. Overall, there were no differences in relapse-free survival or cancer-specific survival between sentinel node biopsy patients and control patients. Male sentinel node biopsy patients had significantly higher relapse-free survival (P = 0.021) and cancer-specific survival (P = 0.024) than control patients. In females, no differences were found. Cancer-specific survival rates at 5 years were 87.8% in sentinel node biopsy patients and 85.2% in controls overall, with 88.3% in male sentinel node biopsy patients versus 80.6% in male controls, and 87.3% in female sentinel node biopsy patients versus 89.8% in female controls. Sentinel node biopsy did not improve survival in melanoma patients overall; while females showed no survival differences, males had significantly improved relapse-free survival and cancer-specific survival following sentinel node biopsy.

  13. Strategies for synchronisation in an evolving telecommunications network

    NASA Astrophysics Data System (ADS)

    Avery, Rob

    1992-06-01

    The achievement of precise synchronization in the telecommunications environment is addressed. Transmitting timing from node to node has been an inherent problem for all digital networks: traditional network equipment used to transfer synchronization, such as digital switching systems, adds impairments to the once-traceable signal, and as synchronization signals are passed from node to node they lose stability by passing through intervening clocks. Timing must be an integral part of all new network and service deployments. New transmission methods, such as the Synchronous Digital Hierarchy (SDH), survivable network topologies, and the issues that arise from them necessitate a review of current network synchronization strategies. The challenges facing the network are itemized, and it is demonstrated why localized Primary Reference Clocks (PRCs) in key nodes, together with a Synchronization Supply Unit (SSU) clock architecture of transit and local node clocks, constitute a technically and economically viable solution to the issues facing network planners today.

  14. Integration of XRootD into the cloud infrastructure for ALICE data analysis

    NASA Astrophysics Data System (ADS)

    Kompaniets, Mikhail; Shadura, Oksana; Svirin, Pavlo; Yurchenko, Volodymyr; Zarochentsev, Andrey

    2015-12-01

    Cloud technologies allow easy load balancing between different tasks and projects. From the viewpoint of data analysis in the ALICE experiment, a cloud makes it possible to deploy software using the CERN Virtual Machine (CernVM) and the CernVM File System (CVMFS), to run different (including outdated) versions of software for long-term data preservation, and to dynamically allocate resources for different computing activities, e.g. a grid site, an ALICE Analysis Facility (AAF), and possible usage by local projects or other LHC experiments. We present a cloud solution for Tier-3 sites based on OpenStack and Ceph distributed storage, with an integrated XRootD-based storage element (SE). A key feature of the solution is that Ceph is used as a backend for the Cinder block storage service of OpenStack and, at the same time, as a storage backend for XRootD, with the redundancy and availability of data preserved by the Ceph settings. For faster and easier OpenStack deployment, the Packstack solution, which is based on the Puppet configuration management system, was applied. Ceph installation and configuration operations are structured, converted to Puppet manifests describing node configurations, and integrated into Packstack. This solution can be easily deployed, maintained, and used even by small groups with limited computing resources and by small organizations, which often lack IT support. The proposed infrastructure has been tested on two different clouds (SPbSU & BITP) and integrates successfully with the ALICE data analysis model.

  15. Parallel file system with metadata distributed across partitioned key-value store

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).

  16. Database Objects vs Files: Evaluation of alternative strategies for managing large remote sensing data

    NASA Astrophysics Data System (ADS)

    Baru, Chaitan; Nandigam, Viswanath; Krishnan, Sriram

    2010-05-01

    Increasingly, the geoscience user community expects modern IT capabilities to be available in service of their research and education activities, including the ability to easily access and process large remote sensing datasets via online portals such as GEON (www.geongrid.org) and OpenTopography (opentopography.org). However, serving such datasets via online data portals presents a number of challenges. In this talk, we evaluate the pros and cons of alternative storage strategies for the management and processing of such datasets, using binary large object (BLOB) implementations in database systems versus implementation as Hadoop files in the Hadoop Distributed File System (HDFS). The storage and I/O requirements for providing online access to large datasets dictate the need for declustering data across multiple disks, for capacity as well as for bandwidth and response-time performance. This requires partitioning larger files into sets of smaller files, and is accompanied by the concomitant requirement of managing large numbers of files. Storing these sub-files as BLOBs in a shared-nothing database implemented across a cluster provides the advantage that all of the distributed storage management is done by the DBMS. Furthermore, subsetting and processing routines can be implemented as user-defined functions (UDFs) on these BLOBs and would run in parallel across the set of nodes in the cluster. On the other hand, there are both storage overheads and constraints, and software licensing dependencies created by such an implementation. Another approach is to store the files in an external filesystem with pointers to them from within database tables. The filesystem may be a regular UNIX filesystem, a parallel filesystem, or HDFS. In the HDFS case, HDFS provides the file management capability, while the subsetting and processing routines are implemented as Hadoop programs using the MapReduce model. Hadoop and its related software libraries are freely available. Another consideration is the strategy used for partitioning large data collections, and large datasets within collections, using round-robin, hash, or range partitioning methods. Each has different characteristics in terms of spatial locality of data and the resulting degree of declustering of the computations on the data. Furthermore, we have observed that, in practice, there can be large variations in the frequency of access to different parts of a large data collection or dataset, creating "hotspots" in the data. We evaluate the ability of the different approaches to deal effectively with such hotspots, along with alternative strategies for handling them.
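
    The three partitioning strategies mentioned can be stated in a few lines each (node counts and key ranges are illustrative):

        def round_robin(i, node_count):        # i-th sub-file in arrival order
            return i % node_count

        def hash_partition(key, node_count):   # even spread, locality destroyed
            return hash(key) % node_count      # (a stable hash in practice)

        def range_partition(x, boundaries):    # locality preserved, but hotspots
            for node, upper in enumerate(boundaries):   # possible
                if x < upper:
                    return node
            return len(boundaries)

        print(round_robin(7, 4),
              hash_partition('tile_42_17', 4),
              range_partition(35.7, [10.0, 20.0, 30.0, 40.0]))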

  17. A local immunization strategy for networks with overlapping community structure

    NASA Astrophysics Data System (ADS)

    Taghavian, Fatemeh; Salehi, Mostafa; Teimouri, Mehdi

    2017-02-01

    Since full-coverage treatment is not feasible with limited resources, an immunization strategy is needed to distribute the available vaccines effectively. The structure of the contact network among people has a significant impact on the spread of infectious diseases (such as SARS and influenza) in a population. Network-based immunization strategies therefore aim to reduce the spreading rate by removing vaccinated nodes from the contact network; such strategies try to identify the nodes that are most important for epidemic spreading over the network. In this paper, we address the effect of nodes that overlap several communities on epidemic spreading. The proposed strategy is an optimized random-walk based selection of these nodes. The whole process is local, i.e., it requires contact network information only at the level of nodes, so it is applicable to large-scale and unknown networks, where global methods are usually unrealizable. Our simulation results on different synthetic and real networks show that the proposed method outperforms existing local methods in most cases. In particular, for networks with strong community structure, high overlapping membership of nodes, or small communities, the proposed method shows better performance.
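
    A generic random-walk selection heuristic conveys the idea of local, walk-based targeting (a stand-in sketch, not the authors' optimized strategy): frequently visited nodes tend to include the bridging, community-overlapping ones.

        import random
        import networkx as nx

        def random_walk_targets(G, n_vaccines, walk_len=2000, seed=0):
            rng = random.Random(seed)
            visits = {v: 0 for v in G}
            node = rng.choice(list(G))
            for _ in range(walk_len):
                node = rng.choice(list(G.neighbors(node)))   # purely local step
                visits[node] += 1
            return sorted(visits, key=visits.get, reverse=True)[:n_vaccines]

        G = nx.karate_club_graph()
        print(random_walk_targets(G, n_vaccines=3))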

  18. a Weighted Local-World Evolving Network Model Based on the Edge Weights Preferential Selection

    NASA Astrophysics Data System (ADS)

    Li, Ping; Zhao, Qingzhen; Wang, Haitang

    2013-05-01

    In this paper, we use an edge-weight preferential attachment mechanism to build a new local-world evolutionary model for weighted networks. Unlike previous work, the local-world of our model consists of edges instead of nodes. At each time step, we connect a new node to two existing nodes in the local-world through edge-weight preferential selection. Theoretical analysis and numerical simulations show that the scale of the local-world affects the weight distribution, the strength distribution and the degree distribution. We present simulations of the clustering coefficient and of the dynamics of infectious disease spreading. The weight dynamics of our network model can portray the structure of realistic networks such as the neural network of the nematode C. elegans and online social networks.
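
    A minimal growth-step sketch under stated assumptions (networkx, unit weights for new edges, the local-world drawn as a random subset of existing edges); the paper's rule for updating existing weights is omitted.

        import random
        import networkx as nx

        def grow_step(G: nx.Graph, local_world_size: int) -> None:
            # local-world: a random subset of existing *edges* (not nodes)
            edges = random.sample(list(G.edges(data="weight")),
                                  min(local_world_size, G.number_of_edges()))
            # edge-weight preferential selection within the local-world
            u, v, _ = random.choices(edges, weights=[w for _, _, w in edges])[0]
            new = G.number_of_nodes()
            G.add_edge(new, u, weight=1.0)   # attach the new node to both endpoints
            G.add_edge(new, v, weight=1.0)

        G = nx.complete_graph(5)
        nx.set_edge_attributes(G, 1.0, "weight")
        for _ in range(100):
            grow_step(G, local_world_size=10)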

  19. Key-value store with internal key-value storage interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Ting, Dennis P. J.

    A key-value store is provided having one or more key-value storage interfaces. A key-value store on at least one compute node comprises a memory for storing a plurality of key-value pairs; and an abstract storage interface comprising a software interface module that communicates with at least one persistent storage device providing a key-value interface for persistent storage of one or more of the plurality of key-value pairs, wherein the software interface module provides the one or more key-value pairs to the at least one persistent storage device in a key-value format. The abstract storage interface optionally processes one or more batch operations on the plurality of key-value pairs. A distributed embodiment for a partitioned key-value store is also provided.
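
    The pattern described can be sketched in a few lines; the class and method names below are illustrative stand-ins, not the patented API.

        from abc import ABC, abstractmethod

        class KeyValueBackend(ABC):
            """Abstract storage interface to a persistent key-value capable device."""
            @abstractmethod
            def put(self, key: bytes, value: bytes) -> None: ...
            @abstractmethod
            def get(self, key: bytes) -> bytes: ...

        class ComputeNodeKVStore:
            def __init__(self, backend: KeyValueBackend):
                self._cache = {}          # memory-resident pairs on the compute node
                self._backend = backend   # persistent device exposing a KV interface

            def put(self, key: bytes, value: bytes) -> None:
                self._cache[key] = value
                self._backend.put(key, value)   # pass through in key-value format

            def batch_put(self, pairs) -> None:
                for key, value in pairs:        # optional batch operation
                    self.put(key, value)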

  20. Cavity-based quantum networks with single atoms and optical photons

    NASA Astrophysics Data System (ADS)

    Reiserer, Andreas; Rempe, Gerhard

    2015-10-01

    Distributed quantum networks will allow users to perform tasks and to interact in ways which are not possible with present-day technology. Their implementation is a key challenge for quantum science and requires the development of stationary quantum nodes that can send and receive as well as store and process quantum information locally. The nodes are connected by quantum channels for flying information carriers, i.e., photons. These channels serve both to directly exchange quantum information between nodes and to distribute entanglement over the whole network. In order to scale such networks to many particles and long distances, an efficient interface between the nodes and the channels is required. This article describes the cavity-based approach to this goal, with an emphasis on experimental systems in which single atoms are trapped in and coupled to optical resonators. Besides being conceptually appealing, this approach is promising for quantum networks on larger scales, as it gives access to long qubit coherence times and high light-matter coupling efficiencies. Thus, it allows one to generate entangled photons at the push of a button, to reversibly map the quantum state of a photon onto an atom, to transfer and teleport quantum states between remote atoms, to entangle distant atoms, to detect optical photons nondestructively, and to perform entangling quantum gates between an atom and one or several photons; it even provides a route toward efficient heralded quantum memories for future repeaters. The presented general protocols and the identification of key parameters are applicable to other experimental systems.

  1. Network-based study of Lagrangian transport and mixing

    NASA Astrophysics Data System (ADS)

    Padberg-Gehle, Kathrin; Schneide, Christiane

    2017-10-01

    Transport and mixing processes in fluid flows are crucially influenced by coherent structures and the characterization of these Lagrangian objects is a topic of intense current research. While established mathematical approaches such as variational methods or transfer-operator-based schemes require full knowledge of the flow field or at least high-resolution trajectory data, this information may not be available in applications. Recently, different computational methods have been proposed to identify coherent behavior in flows directly from Lagrangian trajectory data, that is, numerical or measured time series of particle positions in a fluid flow. In this context, spatio-temporal clustering algorithms have been proven to be very effective for the extraction of coherent sets from sparse and possibly incomplete trajectory data. Inspired by these recent approaches, we consider an unweighted, undirected network, where Lagrangian particle trajectories serve as network nodes. A link is established between two nodes if the respective trajectories come close to each other at least once in the course of time. Classical graph concepts are then employed to analyze the resulting network. In particular, local network measures such as the node degree, the average degree of neighboring nodes, and the clustering coefficient serve as indicators of highly mixing regions, whereas spectral graph partitioning schemes allow us to extract coherent sets. The proposed methodology is very fast to run and we demonstrate its applicability in two geophysical flows - the Bickley jet as well as the Antarctic stratospheric polar vortex.
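
    The network construction is compact enough to sketch directly; the code below assumes numpy and networkx, with trajectories given as an (n_particles, n_times, dim) array and an invented proximity threshold eps.

        import numpy as np
        import networkx as nx

        def trajectory_network(X: np.ndarray, eps: float) -> nx.Graph:
            """Trajectories are nodes; link i and j if they ever come within eps."""
            n = X.shape[0]
            G = nx.Graph()
            G.add_nodes_from(range(n))
            for i in range(n):
                for j in range(i + 1, n):
                    # minimum distance between the two trajectories over all times
                    if np.min(np.linalg.norm(X[i] - X[j], axis=1)) < eps:
                        G.add_edge(i, j)
            return G

    High node degree and clustering then flag strongly mixing regions, while spectral partitioning of the resulting graph extracts coherent sets, as described above.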

  2. Connectivity disruption sparks explosive epidemic spreading.

    PubMed

    Böttcher, L; Woolley-Meza, O; Goles, E; Helbing, D; Herrmann, H J

    2016-04-01

    We investigate the spread of an infection or other malfunction of cascading nature when a system component can recover only if it remains reachable from a functioning central component. We consider the susceptible-infected-susceptible model, typical of mathematical epidemiology, on a network. Infection spreads from infected to healthy nodes, with the addition that infected nodes can only recover when they remain connected to a predefined central node, through a path that contains only healthy nodes. In this system, clusters of infected nodes will absorb their noninfected interior because no path exists between the central node and encapsulated nodes. This gives rise to the simultaneous infection of multiple nodes. Interestingly, the system converges to only one of two stationary states: either the whole population is healthy or it becomes completely infected. This simultaneous cluster infection can give rise to discontinuous jumps of different sizes in the number of failed nodes. Larger jumps emerge at lower infection rates. The network topology has an important effect on the nature of the transition: we observed hysteresis for networks with dominating local interactions. Our model shows how local spread can abruptly turn uncontrollable when it disrupts connectivity at a larger spatial scale.
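
    A minimal Python sweep of this dynamics might look as follows, assuming networkx and synchronous updates; the rates beta and mu and the update order are illustrative choices, not the paper's.

        import random
        import networkx as nx

        def sweep(G: nx.Graph, state: dict, central, beta=0.3, mu=0.2) -> dict:
            healthy = [n for n in G if state[n] == "S"]
            sub = G.subgraph(healthy)
            # healthy nodes reachable from the central node through healthy paths
            reachable = (set(nx.node_connected_component(sub, central))
                         if state[central] == "S" else set())
            new = dict(state)
            for n in G:
                if state[n] == "S":
                    if n not in reachable:
                        new[n] = "I"   # encapsulated healthy interior is absorbed
                    elif any(state[m] == "I" for m in G.neighbors(n)) and random.random() < beta:
                        new[n] = "I"   # ordinary SIS infection from a neighbor
                else:
                    # recovery requires a healthy path back to the central node
                    if any(m in reachable for m in G.neighbors(n)) and random.random() < mu:
                        new[n] = "S"
            return new

        G = nx.erdos_renyi_graph(50, 0.1, seed=1)
        state = {n: "S" for n in G}
        state[1] = "I"
        state = sweep(G, state, central=0)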

  3. A critical review of variables affecting the accuracy and false-negative rate of sentinel node biopsy procedures in early breast cancer.

    PubMed

    Vijayakumar, Vani; Boerner, Philip S; Jani, Ashesh B; Vijayakumar, Srinivasan

    2005-05-01

    Radionuclide sentinel lymph node localization and biopsy is a staging procedure that is being increasingly used to evaluate patients with invasive breast cancer who have clinically normal axillary nodes. The most important prognostic indicator in patients with invasive breast cancer is the axillary node status, which must also be known for correct staging, and influences the selection of adjuvant therapies. The accuracy of sentinel lymph node localization depends on a number of factors, including the injection method, the operating surgeon's experience and the hospital setting. The efficacy of sentinel lymph node mapping can be determined by two measures: the sentinel lymph node identification rate and the false-negative rate. Of these, the false-negative rate is the most important, based on a review of 92 studies. As sentinel lymph node procedures vary widely, nuclear medicine physicians and radiologists must be acquainted with the advantages and disadvantages of the various techniques. In this review, the factors that influence the success of different techniques are examined, and studies which have investigated false-negative rates and/or sentinel lymph node identification rates are summarized.

  4. Bayesian-based localization of wireless capsule endoscope using received signal strength.

    PubMed

    Nadimi, Esmaeil S; Blanes-Vidal, Victoria; Tarokh, Vahid; Johansen, Per Michael

    2014-01-01

    In wireless body area sensor networking (WBASN) applications such as gastrointestinal (GI) tract monitoring using wireless video capsule endoscopy (WCE), the performance of the out-of-body wireless link propagating through different body media (i.e. blood, fat, muscle and bone) is still under investigation. Most localization algorithms are vulnerable to variations in the path-loss coefficient, resulting in unreliable location estimates. In this paper, we propose a novel robust probabilistic Bayesian-based approach using received-signal-strength (RSS) measurements that accounts for Rayleigh fading, a variable path-loss exponent, and uncertainty in the location information received from the neighboring nodes and anchors. The results of this study showed that the localization root mean square error of our Bayesian-based method was 1.6 mm, which is very close to the optimal Cramer-Rao lower bound (CRLB) and significantly smaller than that of other existing localization approaches (i.e. classical MDS (64.2 mm), dwMDS (32.2 mm), MLE (36.3 mm) and POCS (2.3 mm)).
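
    The RSS model underlying such approaches can be sketched as a log-distance path-loss curve with a Gaussian likelihood; the numerical constants below are invented for illustration, and the fading model is simplified relative to the paper's Rayleigh treatment.

        import numpy as np

        def rss_mean(d, p0=-40.0, n_path=3.5, d0=0.05):
            """Expected RSS (dBm) at distance d (m): p0 at reference d0, exponent n_path."""
            return p0 - 10.0 * n_path * np.log10(d / d0)

        def log_likelihood(pos, anchors, rss_obs, sigma=4.0):
            """Log-likelihood of a candidate capsule position given anchor RSS readings."""
            d = np.linalg.norm(anchors - pos, axis=1)
            return -0.5 * np.sum(((rss_obs - rss_mean(d)) / sigma) ** 2)

    A Bayesian estimator then combines this likelihood with a prior over positions (and over the path-loss exponent, in the uncertainty-aware variant described above) to obtain the posterior location estimate.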

  5. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications.

    PubMed

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-08-06

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.

  6. A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications

    PubMed Central

    Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod

    2016-01-01

    In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), and Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495

  7. Adjuvant radiation therapy for malignant Abrikossoff's tumor: a case report about a femoral triangle localisation.

    PubMed

    Marchand Crety, C; Garbar, C; Madelis, G; Guillemin, F; Soibinet Oudot, P; Eymard, J C; Servagi Vernat, S

    2018-06-20

    Granular cell (Abrikossoff's) tumors are usually benign; rare malignant forms account for 1 to 3% of reported cases. Pelvic locations are exceptional. We report the case of a 43-year-old patient with an Abrikossoff's tumor of the right femoral triangle, initially diagnosed as benign on biopsy. The patient underwent a surgical tumorectomy and inguinal lymph node resection. Histologically, the tumor met sufficient criteria for a diagnosis of malignancy: nuclear pleomorphism, tumor cell spindling, and vesicular nuclei with large nucleoli. Moreover, five lymph nodes were metastatic. Immunohistochemistry findings confirmed the diagnosis of granular cell tumor, with positivity for S100 protein and CD68. The mitotic index was nevertheless low, with a Ki67 labeling index of 1-2%. A wide surgical revision with inguinal lymph node dissection followed by radiotherapy was decided by the oncology committee. Adjuvant radiotherapy of 50 Gy in conventional fractionation was delivered to the tumor bed and right inguinal area with the aim of reducing the risk of local recurrence. There was no recurrence at the latest follow-up (10 months after radiotherapy). Adjuvant radiotherapy seems an appropriate, if controversial, therapeutic approach, given that some authors report effectiveness against local disease progression.

  8. Resource-Efficient, Hierarchical Auto-Tuning of a Hybrid Lattice Boltzmann Computation on the Cray XT4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Computational Research Division, Lawrence Berkeley National Laboratory; NERSC, Lawrence Berkeley National Laboratory; Computer Science Department, University of California, Berkeley

    2009-05-04

    We apply auto-tuning to a hybrid MPI-pthreads lattice Boltzmann computation running on the Cray XT4 at the National Energy Research Scientific Computing Center (NERSC). Previous work showed that multicore-specific auto-tuning can improve the performance of lattice Boltzmann magnetohydrodynamics (LBMHD) by a factor of 4x when running on dual- and quad-core Opteron dual-socket SMPs. We extend these studies to the distributed memory arena via a hybrid MPI/pthreads implementation. In addition to conventional auto-tuning at the local SMP node, we tune at the message-passing level to determine the optimal aspect ratio as well as the correct balance between MPI tasks and threads per MPI task. Our study presents a detailed performance analysis when moving along an isocurve of constant hardware usage: fixed total memory, total cores, and total nodes. Overall, our work points to approaches for improving intra- and inter-node efficiency on large-scale multicore systems for demanding scientific applications.

  9. Modeling complexity in engineered infrastructure system: Water distribution network as an example

    NASA Astrophysics Data System (ADS)

    Zeng, Fang; Li, Xiang; Li, Ke

    2017-02-01

    The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of demand and rigid engineering solutions. Engineering complex systems therefore requires a method that balances holism and reductionism. To model the growth of water distribution networks, a complex network model was developed that combines local optimization rules with engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in several structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and network are important for simulating real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. By contrast, the improvement in efficiency achievable through engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.

  10. Metastatic eccrine porocarcinoma: report of a case and review of the literature

    PubMed Central

    2011-01-01

    Eccrine porocarcinoma (EPC) is a rare type of skin cancer arising from the intraepidermal portion of eccrine sweat glands or acrosyringium, representing 0.005-0.01% of all cutaneous tumors. About 20% of EPC will recur and about 20% will metastasize to regional lymph nodes. There is a mortality rate of 67% in patients with lymph node metastases. Although rare, the occurrence of distant metastases has been reported. We report the case of a patient with EPC of the left arm, with axillary nodal involvement and subsequent local relapse, treated by complete lymph node dissection and electrochemotherapy (ECT). EPC is an unusual tumor to diagnose. Neither chemotherapy nor radiation therapy has been proven to be of clinical benefit in treating metastatic disease. Although in the current case the short follow-up period is a limitation, we consider in the management of EPC a therapeutic approach involving surgery and ECT, because of its aggressive potential for locoregional metastatic spread. PMID:21410982

  11. Metastatic eccrine porocarcinoma: report of a case and review of the literature.

    PubMed

    Marone, Ugo; Caracò, Corrado; Anniciello, Anna Maria; Di Monta, Gianluca; Chiofalo, Maria Grazia; Di Cecilia, Maria Luisa; Mozzillo, Nicola

    2011-03-16

    Eccrine porocarcinoma (EPC) is a rare type of skin cancer arising from the intraepidermal portion of eccrine sweat glands or acrosyringium, representing 0.005-0.01% of all cutaneous tumors. About 20% of EPC will recur and about 20% will metastasize to regional lymph nodes. There is a mortality rate of 67% in patients with lymph node metastases. Although rare, the occurrence of distant metastases has been reported. We report the case of a patient with EPC of the left arm, with axillary nodal involvement and subsequent local relapse, treated by complete lymph node dissection and electrochemotherapy (ECT). EPC is an unusual tumor to diagnose. Neither chemotherapy nor radiation therapy has been proven to be of clinical benefit in treating metastatic disease. Although in the current case the short follow-up period is a limitation, we consider in the management of EPC a therapeutic approach involving surgery and ECT, because of its aggressive potential for locoregional metastatic spread.

  12. A Study on Run Time Assurance for Complex Cyber Physical Systems

    DTIC Science & Technology

    2013-04-18

    safety verification approach was applied to synchronization of distributed local clocks of the nodes on a CAN bus by Jiang et al. [36]. The class of...mode of interaction between the instrumented system and the checker, we distinguish between synchronous and asynchronous monitoring. In synchronous ...occurred. Synchronous monitoring may deliver a higher degree of assurance than the asynchronous one, because it can block a dangerous action. However

  13. The local lymph node assay and skin sensitization: a cut-down screen to reduce animal requirements?

    PubMed

    Kimber, Ian; Dearman, Rebecca J; Betts, Catherine J; Gerberick, G Frank; Ryan, Cindy A; Kern, Petra S; Patlewicz, Grace Y; Basketter, David A

    2006-04-01

    The local lymph node assay (LLNA), an alternative approach to skin-sensitization testing, has made a significant contribution to animal welfare by permitting a reduction and refinement of animal use. Although there is clearly an aspiration to eliminate the use of animals in such tests, it is also appropriate to consider other opportunities for refinement and reduction of animal use. We have therefore explored the use of a modified version of the LLNA for screening purposes when there is a need to evaluate the sensitizing activity of a large number of chemicals, as will be the case under the auspices of registration, evaluation and authorization of chemicals (REACH). Using an existing LLNA database of 211 chemicals, we have examined whether a cut-down assay comprising a single high-dose group and a concurrent vehicle control would provide a realistic approach for screening chemicals for sensitizing potential. The analyses reported here suggest this is the case. We speculate that the animal welfare benefits may be enhanced further by reducing the number of animals per experimental group. However, a detailed evaluation will be necessary to provide reassurance that a reduction in group size would provide adequate sensitivity across a range of skin sensitization potencies.

  14. Evaluation of the performance of the reduced local lymph node assay for skin sensitization testing.

    PubMed

    Ezendam, Janine; Muller, Andre; Hakkert, Betty C; van Loveren, Henk

    2013-06-01

    The local lymph node assay (LLNA) is the preferred method for classification of sensitizers within REACH. To reduce the number of mice used for the identification of sensitizers, the reduced LLNA was proposed, which uses only the high dose group of the LLNA. To evaluate the performance of this method for classification, LLNA data from REACH registrations were used, and classification based on all dose groups was compared to classification based on the high dose group. We confirmed previous examinations of the reduced LLNA showing that this method is less sensitive than the full LLNA. The reduced LLNA misclassified 3.3% of the sensitizers identified in the LLNA; misclassification occurred in all potency classes, with no clear association with irritant properties. It is therefore not possible to predict beforehand which substances might be misclassified. Another limitation of the reduced LLNA is that skin-sensitizing potency cannot be assessed. For these reasons, it is not recommended to use the reduced LLNA as a stand-alone assay for skin sensitization testing within REACH. In the future, the reduced LLNA might be of added value in a weight-of-evidence approach to confirm negative results obtained with non-animal approaches. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Improved methods for estimating local terrestrial water dynamics from GRACE in the Northern High Plains

    NASA Astrophysics Data System (ADS)

    Seyoum, Wondwosen M.; Milewski, Adam M.

    2017-12-01

    Investigating terrestrial water cycle dynamics is vital for understanding recent climatic variability and human impacts on the hydrologic cycle. In this study, a downscaling approach was developed and tested to improve the applicability of terrestrial water storage (TWS) anomaly data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission for understanding local terrestrial water cycle dynamics in the Northern High Plains region. A non-parametric, artificial neural network (ANN)-based model was utilized to downscale GRACE data by integrating it with hydrological variables (e.g. soil moisture) derived from satellite and land surface model data. The downscaling model, constructed through calibration and sensitivity analysis, was used to estimate the TWS anomaly for watersheds ranging from 5000 to 20,000 km2 in the study area. The downscaled water storage anomaly data were evaluated using water storage data derived from (1) an integrated hydrologic model, (2) a land surface model (e.g. Noah), and (3) storage anomalies calculated from in-situ groundwater level measurements. Results demonstrate that the ANN predicts the monthly TWS anomaly within the uncertainty (conservative error estimate = 34 mm) for most of the watersheds. Seasonal groundwater storage anomalies (GWSA) derived from the ANN correlated well (r ≈ 0.85) with GWSAs calculated from in-situ groundwater level measurements for watersheds as small as 6000 km2. At the local scale, the ANN-downscaled TWSA matches Noah-based TWSA more closely than the standard GRACE-extracted TWSA does. Moreover, the ANN-downscaled change in TWS replicated the water storage variability resulting from the combined effect of climatic and human impacts (e.g. abstraction). The implications of utilizing finer-resolution GRACE data for improving local and regional water resources management decisions and applications are clear, particularly in areas lacking in-situ hydrologic monitoring networks.
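
    A minimal sketch of the downscaling idea, assuming scikit-learn and invented placeholder arrays for the predictors: train an ANN to map coarse hydrologic variables to GRACE TWSA, then apply it to watershed-scale inputs. The feature count, network size, and data are assumptions, not the paper's configuration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        # X: (n_months, n_features) coarse-pixel predictors; y: (n_months,) GRACE TWSA
        X_train, y_train = np.random.rand(120, 4), np.random.rand(120)   # placeholders

        scaler = StandardScaler().fit(X_train)
        ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
        ann.fit(scaler.transform(X_train), y_train)

        # At prediction time, feed watershed-scale predictors to obtain downscaled TWSA
        X_watershed = np.random.rand(12, 4)
        twsa_downscaled = ann.predict(scaler.transform(X_watershed))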

  16. Astronomy In The Cloud: Using Mapreduce For Image Coaddition

    NASA Astrophysics Data System (ADS)

    Wiley, Keith; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-01-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computational challenges such as anomaly detection, classification, and moving object tracking. Since such studies require the highest quality data, methods such as image coaddition, i.e., registration, stacking, and mosaicing, will be critical to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources, e.g., asteroids, or transient objects, e.g., supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this paper we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data is partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources, i.e., platforms where Hadoop is offered as a service. We report on our experience implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multi-terabyte imaging dataset provides a good testbed for algorithm development since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image coaddition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance. This work is funded by the NSF and by NASA.

  17. Astronomy in the Cloud: Using MapReduce for Image Co-Addition

    NASA Astrophysics Data System (ADS)

    Wiley, K.; Connolly, A.; Gardner, J.; Krughoff, S.; Balazinska, M.; Howe, B.; Kwon, Y.; Bu, Y.

    2011-03-01

    In the coming decade, astronomical surveys of the sky will generate tens of terabytes of images and detect hundreds of millions of sources every night. The study of these sources will involve computation challenges such as anomaly detection and classification and moving-object tracking. Since such studies benefit from the highest-quality data, methods such as image co-addition, i.e., astrometric registration followed by per-pixel summation, will be a critical preprocessing step prior to scientific investigation. With a requirement that these images be analyzed on a nightly basis to identify moving sources such as potentially hazardous asteroids or transient objects such as supernovae, these data streams present many computational challenges. Given the quantity of data involved, the computational load of these problems can only be addressed by distributing the workload over a large number of nodes. However, the high data throughput demanded by these applications may present scalability challenges for certain storage architectures. One scalable data-processing method that has emerged in recent years is MapReduce, and in this article we focus on its popular open-source implementation called Hadoop. In the Hadoop framework, the data are partitioned among storage attached directly to worker nodes, and the processing workload is scheduled in parallel on the nodes that contain the required input data. A further motivation for using Hadoop is that it allows us to exploit cloud computing resources: i.e., platforms where Hadoop is offered as a service. We report on our experience of implementing a scalable image-processing pipeline for the SDSS imaging database using Hadoop. This multiterabyte imaging data set provides a good testbed for algorithm development, since its scope and structure approximate future surveys. First, we describe MapReduce and how we adapted image co-addition to the MapReduce framework. Then we describe a number of optimizations to our basic approach and report experimental results comparing their performance.
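
    The co-addition step maps naturally onto MapReduce. The pure-Python sketch below mimics the pattern without the Hadoop API: the mapper emits (tile, flux, weight) records for each registered exposure and the reducer forms the per-pixel weighted sum; astrometric registration is omitted and all names are illustrative.

        import numpy as np
        from collections import defaultdict

        def map_image(image, tile_id):
            """Mapper: register `image` to the tile grid (omitted here), then
            emit (tile, (flux, weight)) for each overlapping tile."""
            yield tile_id, (image, np.ones_like(image))

        def reduce_tile(records):
            """Reducer: per-pixel weighted sum over all images covering the tile."""
            flux = sum(f for f, _ in records)
            weight = sum(w for _, w in records)
            return flux / np.maximum(weight, 1e-12)

        # Driver: the shuffle phase groups mapper output by tile key
        groups = defaultdict(list)
        for image in np.random.rand(3, 4, 4):          # three toy 4x4 exposures
            for tile, rec in map_image(image, tile_id=0):
                groups[tile].append(rec)
        coadds = {tile: reduce_tile(recs) for tile, recs in groups.items()}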

  18. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks.

    PubMed

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-03-24

    Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of devices arising from their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified the issue that the resulting polynomial value is either storage-intensive or infeasible to compute when large values are multiplied. The cost grows further when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we propose an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, with the group head acting as the responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
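
    The symmetric-polynomial idea at the core of such schemes can be sketched as follows (a Blundo-style construction, used here as a stand-in for EKM's group polynomial; the modulus and degree are toy-sized): both members of a pair evaluate their share at the other's identifier and obtain the same key.

        import random

        P = 2_147_483_647          # prime modulus (toy size, not production)
        T = 3                      # polynomial degree (collusion threshold)
        # symmetric coefficient matrix: a[i][j] == a[j][i], so f(x, y) == f(y, x)
        a = [[0] * (T + 1) for _ in range(T + 1)]
        for i in range(T + 1):
            for j in range(i, T + 1):
                a[i][j] = a[j][i] = random.randrange(P)

        def share(node_id):
            """Share given to a node: the univariate polynomial g(y) = f(node_id, y)."""
            return [sum(a[i][j] * pow(node_id, i, P) for i in range(T + 1)) % P
                    for j in range(T + 1)]

        def pairwise_key(my_share, other_id):
            """Evaluate g(other_id): both sides obtain f(id1, id2) = f(id2, id1)."""
            return sum(c * pow(other_id, j, P) for j, c in enumerate(my_share)) % P

        g_alice, g_bob = share(17), share(42)
        assert pairwise_key(g_alice, 42) == pairwise_key(g_bob, 17)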

  19. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks

    PubMed Central

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-01-01

    Wireless Sensor Networks (WSNs) consist of lightweight devices that measure sensitive data and are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) face severe security and privacy issues because of the direct accessibility of devices arising from their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored polynomial distribution-based key establishment schemes and identified the issue that the resulting polynomial value is either storage-intensive or infeasible to compute when large values are multiplied. The cost grows further when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we propose an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, with the group head acting as the responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed using Rubin logic, which guarantees that the protocol strongly attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during the authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure. PMID:28338632

  20. Experience with local lymph node assay performance standards using standard radioactivity and nonradioactive cell count measurements.

    PubMed

    Basketter, David; Kolle, Susanne N; Schrage, Arnhild; Honarvar, Naveed; Gamer, Armin O; van Ravenzwaay, Bennard; Landsiedel, Robert

    2012-08-01

    The local lymph node assay (LLNA) is the preferred test for identification of skin-sensitizing substances by measuring radioactive thymidine incorporation into the lymph node. To facilitate acceptance of nonradioactive variants, validation authorities have published harmonized minimum performance standards (PS) that the alternative endpoint assay must meet. In the present work, these standards were applied to a variant of the LLNA based on lymph node cell counts (LNCC), run in parallel as a control with the standard radioactivity-based LLNA, with threshold concentrations (EC3) determined for the sensitizers. Of the 22 PS chemicals tested in this study, 21 yielded the same results from standard radioactivity and cell count measurements; only 2-mercaptobenzothiazole was positive by LLNA but negative by LNCC. Of the 16 PS positives, 15 were positive by LLNA and 14 by LNCC; methylmethacrylate was not identified as a sensitizer by either measurement. Two of the six PS negatives tested negative in our study by both LLNA and LNCC. Of the four PS negatives which were positive in our study, chlorobenzene and methyl salicylate were tested at higher concentrations than the published PS, whereas the corresponding concentrations resulted in consistent negative results. Methylmethacrylate and nickel chloride tested positive within the concentration range used for the published PS. The results indicate that cell counts and radioactivity measurements are in good agreement within the same LLNA for the 22 PS test substances. Comparisons with the published PS results may, however, require balanced analysis rather than a simple checklist approach. Copyright © 2011 John Wiley & Sons, Ltd.

  1. Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1999-01-01

    A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, system performance degrades gracefully under detected failures, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Its disadvantages are that a fully connected communications network is required, that the total operations performed over all the nodes exceed those of a centralized controller, and that the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit) and, by appropriate choice of coordinates and measurements, is formulated as a linear time-invariant system.

  2. Efficient implementation of multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2012-01-10

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.
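
    The data flow of the claimed method can be sketched in single-process numpy, with a transpose standing in for the all-to-all network redistribution; on the real machine each "node" holds a slab of rows and the transpose is a network exchange.

        import numpy as np

        def fft2_by_transpose(A: np.ndarray) -> np.ndarray:
            step1 = np.fft.fft(A, axis=1)              # each node transforms its local rows
            redistributed = step1.T.copy()             # "all-to-all": rows become columns
            step2 = np.fft.fft(redistributed, axis=1)  # transform along the second dimension
            return step2.T                             # restore the original layout

        A = np.random.rand(8, 8)
        assert np.allclose(fft2_by_transpose(A), np.fft.fft2(A))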

  3. Efficient implementation of a multidimensional fast fourier transform on a distributed-memory parallel multi-node computer

    DOEpatents

    Bhanot, Gyan V [Princeton, NJ; Chen, Dong [Croton-On-Hudson, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2008-01-01

    The present invention is directed to a method, system and program storage device for efficiently implementing a multidimensional Fast Fourier Transform (FFT) of a multidimensional array comprising a plurality of elements initially distributed in a multi-node computer system comprising a plurality of nodes in communication over a network, comprising: distributing the plurality of elements of the array in a first dimension across the plurality of nodes of the computer system over the network to facilitate a first one-dimensional FFT; performing the first one-dimensional FFT on the elements of the array distributed at each node in the first dimension; re-distributing the one-dimensional FFT-transformed elements at each node in a second dimension via "all-to-all" distribution in random order across other nodes of the computer system over the network; and performing a second one-dimensional FFT on elements of the array re-distributed at each node in the second dimension, wherein the random order facilitates efficient utilization of the network thereby efficiently implementing the multidimensional FFT. The "all-to-all" re-distribution of array elements is further efficiently implemented in applications other than the multidimensional FFT on the distributed-memory parallel supercomputer.

  4. 75 FR 37443 - National Toxicology Program (NTP); NTP Interagency Center for the Evaluation of Alternative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-29

    ... Nonradioactive Versions of the Murine Local Lymph Node Assay for Assessing Allergic Contact Dermatitis Hazard... nonradioactive versions of the Local Lymph Node Assay (LLNA) for assessing allergic contact dermatitis (ACD... Nonradioactive Alternative Test Method to Assess the Allergic Contact Dermatitis Potential of Chemicals and...

  5. Development of an ex vivo BrdU labeling procedure for the murine LLNA

    EPA Science Inventory

    The murine local lymph node assay (LLNA) is widely used to identify chemicals that may cause allergic contact dermatitis. Exposure to a dermal sensitizer results in proliferation of local lymph node T cells, which has traditionally been measured by in vivo incorporation of [3H]m...

  6. DIETARY VITAMIN A ENHANCES SENSITIVITY OF THE LOCAL LYMPH NODE ASSAY

    EPA Science Inventory

    Murine assays such as the mouse ear swelling test (MEST) and the local lymph node assay (LLNA) are popular alternatives to guinea pig models for the identification of contact sensitizers, yet there has been concern over the effectiveness of these assays to detect weak and moderat...

  7. Converged photonic data storage and switch platform for exascale disaggregated data centers

    NASA Astrophysics Data System (ADS)

    Pitwon, R.; Wang, K.; Worrall, A.

    2017-02-01

    We report on a converged optically enabled Ethernet storage, switch and compute platform, which could support future disaggregated data center architectures. The platform includes optically enabled Ethernet switch controllers, an advanced electro-optical midplane and optically interchangeable generic end node devices. We demonstrate system level performance using optically enabled Ethernet disk drives and micro-servers across optical links of varied lengths.

  8. Predictive capacity of a non-radioisotopic local lymph node assay using flow cytometry, LLNA:BrdU-FCM: Comparison of a cutoff approach and inferential statistics.

    PubMed

    Kim, Da-Eun; Yang, Hyeri; Jang, Won-Hee; Jung, Kyoung-Mi; Park, Miyoung; Choi, Jin Kyu; Jung, Mi-Sook; Jeon, Eun-Young; Heo, Yong; Yeo, Kyung-Wook; Jo, Ji-Hoon; Park, Jung Eun; Sohn, Soo Jung; Kim, Tae Sung; Ahn, Il Young; Jeong, Tae-Cheon; Lim, Kyung-Min; Bae, SeungJin

    2016-01-01

    In order for a novel test method to be applied for regulatory purposes, its reliability and relevance, i.e., reproducibility and predictive capacity, must be demonstrated. Here, we examine the predictive capacity of a novel non-radioisotopic local lymph node assay, LLNA:BrdU-FCM (5-bromo-2'-deoxyuridine-flow cytometry), with a cutoff approach and inferential statistics as a prediction model. 22 reference substances in OECD TG429 were tested with a concurrent positive control, hexylcinnamaldehyde 25% (PC), and the stimulation index (SI) representing the fold increase in lymph node cells over the vehicle control was obtained. The optimal cutoff SI (2.7 ≤ cutoff < 3.5), with respect to predictive capacity, was obtained by a receiver operating characteristic curve, which produced 90.9% accuracy for the 22 substances. To address the inter-test variability in responsiveness, SI values standardized with PC were employed to obtain the optimal percentage cutoff (42.6 ≤ cutoff < 57.3% of PC), which produced 86.4% accuracy. A test substance may be diagnosed as a sensitizer if a statistically significant increase in SI is elicited. The parametric one-sided t-test and non-parametric Wilcoxon rank-sum test produced 77.3% accuracy. Similarly, a test substance could be defined as a sensitizer if the SI means of the vehicle control, and of the low, middle, and high concentrations were statistically significantly different, which was tested using ANOVA or Kruskal-Wallis, with post hoc analysis, Dunnett, or DSCF (Dwass-Steel-Critchlow-Fligner), respectively, depending on the equal variance test, producing 81.8% accuracy. The absolute SI-based cutoff approach produced the best predictive capacity, however the discordant decisions between prediction models need to be examined further. Copyright © 2015 Elsevier Inc. All rights reserved.
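
    The two cutoff-based prediction models compared above reduce to a few lines of Python; the cutoff values below are drawn from the reported optimal ranges, while the data handling is illustrative.

        import numpy as np

        def stimulation_index(treated_counts, vehicle_counts):
            """SI: fold increase in lymph node cell counts over the vehicle control."""
            return np.mean(treated_counts) / np.mean(vehicle_counts)

        def classify_absolute(si, cutoff=3.0):
            """Absolute SI cutoff; the reported optimum lies in [2.7, 3.5)."""
            return si >= cutoff

        def classify_pc_standardized(si, si_pc, cutoff_pct=50.0):
            """PC-standardized cutoff; the reported optimum lies in [42.6, 57.3)% of PC."""
            return 100.0 * si / si_pc >= cutoff_pct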

  9. Towards Accurate Node-Based Detection of P2P Botnets

    PubMed Central

    2014-01-01

    Botnets are a serious security threat to the current Internet infrastructure. In this paper, we propose a novel direction for P2P botnet detection called node-based detection. This approach focuses on the network characteristics of individual nodes. Based on our model, we examine node's flows and extract the useful features over a given time period. We have tested our approach on real-life data sets and achieved detection rates of 99-100% and low false positives rates of 0–2%. Comparison with other similar approaches on the same data sets shows that our approach outperforms the existing approaches. PMID:25089287

  10. Re-Engineering the Tropical Rainfall Measuring Mission (TRMM) Satellite Utilizing Goddard Space Flight Center (GSFC) Mission Services Center (GMSEC) Middleware Based Technology to Enable Lights Out Operations and Autonomous Re-Dump of Lost Telemetry Data

    NASA Technical Reports Server (NTRS)

    Marius, Julio L.; Busch, Jim

    2008-01-01

    The Tropical Rainfall Measuring Mission (TRMM) spacecraft was launched in November of 1996 in order to obtain unique three dimensional radar cross sectional observations of cloud structures with particular interest in hurricanes. The TRMM mission life was recently extended with current estimates that operations will continue through the 2012-2013 timeframe. Faced with this extended mission profile, the project has embarked on a technology refresh and re-engineering effort. TRMM has recently implemented a re-engineering effort to expand a middleware based messaging architecture to enable fully redundant lights-out of flight operations activities. The middleware approach is based on the Goddard Mission Services Evolution Center (GMSEC) architecture, tools and associated open-source Applications Programming Interface (API). Middleware based messaging systems are useful in spacecraft operations and automation systems because private node based knowledge (such as that within a telemetry and command system) can be broadcast on the middleware messaging bus and hence enable collaborative decisions to be made by multiple subsystems. In this fashion, private data is made public and distributed within the local area network and multiple nodes can remain synchronized with other nodes. This concept is useful in a fully redundant architecture whereby one node is monitoring the processing of the 'prime' node so that in the event of a failure the backup node can assume operations of the prime, without loss of state knowledge. This paper will review and present the experiences, architecture, approach and lessons learned of the TRMM re-engineering effort centered on the GMSEC middleware architecture and tool suite. Relevant information will be presented that relates to the dual redundant parallel nature of the Telemetry and Command (T and C) and Front-End systems and how these systems can interact over a middleware bus to achieve autonomous operations including autonomous commanding to recover missing science data during the same spacecraft contact.

  11. A financial network perspective of financial institutions' systemic risk contributions

    NASA Astrophysics Data System (ADS)

    Huang, Wei-Qiang; Zhuang, Xin-Tian; Yao, Shuang; Uryasev, Stan

    2016-08-01

    This study considers the effects of financial institutions' local topology structure in the financial network on their systemic risk contribution, using data from the Chinese stock market. We first measure the systemic risk contribution with the Conditional Value-at-Risk (CoVaR), estimated by applying a dynamic conditional correlation multivariate GARCH model (DCC-MVGARCH). Financial networks are constructed from the dynamic conditional correlations (DCC) with the graph filtering method of minimum spanning trees (MSTs). We then investigate the dynamics of each institution's systemic risk contribution and of its local topology structure in the financial network. Finally, we analyze the quantitative relationships between local topology structure and systemic risk contribution with a panel data regression analysis. We find that financial institutions with greater node strength, larger node betweenness centrality, larger node closeness centrality and larger node clustering coefficient tend to be associated with larger systemic risk contributions.
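
    The network-construction step can be sketched as follows, with random data standing in for the DCC estimates; the correlation-to-distance mapping and MST filtering follow the standard recipe, and the topology measures are read off the filtered graph.

        import numpy as np
        import networkx as nx

        corr = np.corrcoef(np.random.rand(10, 250))    # stand-in for DCC estimates
        dist = np.sqrt(2.0 * (1.0 - corr))             # standard correlation distance

        G = nx.Graph()
        n = corr.shape[0]
        G.add_weighted_edges_from((i, j, dist[i, j])
                                  for i in range(n) for j in range(i + 1, n))
        mst = nx.minimum_spanning_tree(G)              # graph filtering step

        # local topology measures used as regressors
        strength = {v: sum(d["weight"] for _, _, d in mst.edges(v, data=True)) for v in mst}
        betweenness = nx.betweenness_centrality(mst)
        closeness = nx.closeness_centrality(mst)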

  12. Direct memory access transfer completion notification

    DOEpatents

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Parker, Jeffrey J [Rochester, MN

    2011-02-15

    DMA transfer completion notification includes: inserting, by an origin DMA engine on an origin node in an injection first-in-first-out (`FIFO`) buffer, a data descriptor for an application message to be transferred to a target node on behalf of an application on the origin node; inserting, by the origin DMA engine, a completion notification descriptor in the injection FIFO buffer after the data descriptor for the message, the completion notification descriptor specifying a packet header for a completion notification packet; transferring, by the origin DMA engine to the target node, the message in dependence upon the data descriptor; sending, by the origin DMA engine, the completion notification packet to a local reception FIFO buffer using a local memory FIFO transfer operation; and notifying, by the origin DMA engine, the application that transfer of the message is complete in response to receiving the completion notification packet in the local reception FIFO buffer.
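
    The mechanism can be caricatured in a few lines of Python; the queue names and string payloads are illustrative stand-ins for the hardware descriptors, not the patented interfaces.

        from collections import deque

        injection_fifo, reception_fifo = deque(), deque()

        def dma_send(message, target):
            """Queue the data descriptor, then the completion-notification descriptor."""
            injection_fifo.append(("data", message, target))
            injection_fifo.append(("notify", "completion-packet"))

        def dma_engine_step(network):
            kind, *payload = injection_fifo.popleft()
            if kind == "data":
                network.append(payload)            # transfer the message to the target
            else:
                reception_fifo.append(payload[0])  # local memory FIFO transfer

        network = []
        dma_send("checkpoint-block-0", target="node7")
        dma_engine_step(network)
        dma_engine_step(network)
        # the application learns the transfer is complete from the local reception FIFO
        assert reception_fifo.popleft() == "completion-packet"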

  13. An enhanced performance through agent-based secure approach for mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Bisen, Dhananjay; Sharma, Sanjeev

    2018-01-01

    This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc networks. In this approach, agent nodes are selected using optimal node reliability as a factor. This factor is calculated on the basis of node performance features such as degree difference, normalised distance value, energy level, mobility and the optimal hello interval of the node. After selection of agent nodes, malicious behaviour detection is performed using a fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, a comparative analysis against conventional schemes is carried out using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay and percentage of malicious detection.
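
    A weighted-sum sketch of the node reliability factor is shown below; the features mirror those listed above, but the weights, normalization, and inversion choices are assumptions for illustration, not the paper's calibration.

        def node_reliability(degree_diff, norm_distance, energy, mobility, hello_interval,
                             weights=(0.2, 0.2, 0.3, 0.2, 0.1)):
            """All features assumed pre-normalized to [0, 1]; higher score is more
            reliable. Mobility is inverted: highly mobile nodes make worse agents."""
            w1, w2, w3, w4, w5 = weights
            return (w1 * (1 - degree_diff) + w2 * (1 - norm_distance)
                    + w3 * energy + w4 * (1 - mobility) + w5 * hello_interval)

        # Agent selection: rank candidate neighbors and keep the top scorer(s)
        candidates = {"n1": node_reliability(0.1, 0.3, 0.9, 0.2, 0.8),
                      "n2": node_reliability(0.5, 0.6, 0.4, 0.7, 0.3)}
        agents = sorted(candidates, key=candidates.get, reverse=True)[:1]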

  14. Local Failure in Resected N1 Lung Cancer: Implications for Adjuvant Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Higgins, Kristin A., E-mail: kristin.higgins@duke.edu; Chino, Junzo P.; Berry, Mark

    2012-06-01

    Purpose: To evaluate actuarial rates of local failure in patients with pathologic N1 non-small-cell lung cancer and to identify clinical and pathologic factors associated with an increased risk of local failure after resection. Methods and Materials: All patients who underwent surgery for non-small-cell lung cancer with pathologically confirmed N1 disease at Duke University Medical Center from 1995-2008 were identified. Patients receiving any preoperative therapy or postoperative radiotherapy or with positive surgical margins were excluded. Local failure was defined as disease recurrence within the ipsilateral hilum, mediastinum, or bronchial stump/staple line. Actuarial rates of local failure were calculated with the Kaplan-Meier method. A Cox multivariate analysis was used to identify factors independently associated with a higher risk of local recurrence. Results: Among 1,559 patients who underwent surgery during the time interval, 198 met the inclusion criteria. Of these patients, 50 (25%) received adjuvant chemotherapy. Actuarial (5-year) rates of local failure, distant failure, and overall survival were 40%, 55%, and 33%, respectively. On multivariate analysis, factors associated with an increased risk of local failure included a video-assisted thoracoscopic surgery approach (hazard ratio [HR], 2.5; p = 0.01), visceral pleural invasion (HR, 2.1; p = 0.04), and increasing number of positive N1 lymph nodes (HR, 1.3 per involved lymph node; p = 0.02). Chemotherapy was associated with a trend toward decreased risk of local failure that was not statistically significant (HR, 0.61; p = 0.2). Conclusions: Actuarial rates of local failure in pN1 disease are high. Further investigation of conformal postoperative radiotherapy may be warranted.

  15. Lymphoscintigraphy and SPECT/CT in multicentric and multifocal breast cancer: does each tumour have a separate drainage pattern? Results of a Dutch multicentre study (MULTISENT).

    PubMed

    Brouwer, O R; Vermeeren, L; van der Ploeg, I M C; Valdés Olmos, R A; Loo, C E; Pereira-Bouda, L M; Smit, F; Neijenhuis, P; Vrouenraets, B C; Sivro-Prndelj, F; Jap-a-Joe, S M; Borgstein, P J; Rutgers, E J Th; Oldenburg, H S A

    2012-07-01

    To investigate whether lymphoscintigraphy and SPECT/CT after intralesional injection of radiopharmaceutical into each tumour separately in patients with multiple malignancies in one breast yields additional sentinel nodes compared to intralesional injection of the largest tumour only. Patients were included prospectively at four centres in The Netherlands. Lymphatic flow was studied using planar lymphoscintigraphy and SPECT/CT until 4 h after administration of (99m)Tc-nanocolloid in the largest tumour. Subsequently, the smaller tumour(s) was injected intratumorally followed by the same imaging sequence. Sentinel nodes were intraoperatively localized using a gamma ray detection probe and vital blue dye. Included in the study were 50 patients. Additional lymphatic drainage was depicted after the second and/or third injection in 32 patients (64%). Comparison of planar images and SPECT/CT images after consecutive injections enabled visualization of the number and location of additional sentinel nodes (32 axillary, 11 internal mammary chain, 2 intramammary, and 1 interpectoral). A sentinel node contained metastases in 17 patients (34%). In five patients with a tumour-positive node in the axilla that was visualized after the first injection, an additional involved axillary node was found after the second injection. In two patients, isolated tumour cells were found in sentinel nodes that were only visualized after the second injection, whilst the sentinel nodes identified after the first injection were tumour-negative. Lymphoscintigraphy and SPECT/CT after consecutive intratumoral injections of tracer enable lymphatic mapping of each tumour separately in patients with multiple malignancies within one breast. The high incidence of additional sentinel nodes draining from tumours other than the largest one suggests that separate tumour-related tracer injections may be a more accurate approach to mapping and sampling of sentinel nodes in patients with multicentric or multifocal breast cancer.

  16. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations. Part 1; Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2009-01-01

    Discretization of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Results from the first class indicate the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests are more discriminating. The node-centered scheme is always second order with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to the complexity of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or modifying the scheme stencil to reflect the direction of strong coupling.

  17. DPM — efficient storage in diverse environments

    NASA Astrophysics Data System (ADS)

    Hellmich, Martin; Furano, Fabrizio; Smith, David; Brito da Rocha, Ricardo; Álvarez Ayllón, Alejandro; Manzi, Andrea; Keeble, Oliver; Calvet, Ivan; Regala, Miguel Antonio

    2014-06-01

    Recent developments, including low-power devices, cluster file systems and cloud storage, represent an explosion in the possibilities for deploying and managing grid storage. In this paper we present how different technologies can be leveraged to build a storage service with differing cost, power, performance, scalability and reliability profiles, using the popular storage solution Disk Pool Manager (DPM/dmlite) as the enabling technology. The storage manager DPM is designed for these new environments, allowing users to scale up and down as they need and optimizing their computing centers' energy efficiency and costs. DPM runs on high-performance machines, profiting from multi-core and multi-CPU setups. It supports separating the database from the metadata server, the head node, largely reducing the head node's hard disk requirements. Since version 1.8.6, DPM has been released in EPEL and Fedora, simplifying distribution and maintenance, and it supports the ARM architecture besides i386 and x86_64, allowing it to run on the smallest low-power machines such as the Raspberry Pi or the CuBox. This usage is facilitated by the possibility to scale horizontally using a main database and a distributed memcached-powered namespace cache. Additionally, DPM supports a variety of storage pools in the backend, most importantly HDFS, S3-enabled storage, and cluster file systems, allowing users to fit their DPM installation exactly to their needs. In this paper, we investigate the power efficiency and total cost of ownership of various DPM configurations. We develop metrics to evaluate the expected performance of a setup in terms of both namespace and disk access, considering the overall cost including equipment, power consumption, and data/storage fees. The setups tested range from the lowest scale, using Raspberry Pis with only a 700 MHz single core and a 100 Mbps network connection, over conventional multi-core servers, to typical virtual machine instances in cloud settings. We evaluate combinations of different name server setups, for example load-balanced clusters, with different storage setups, from a classic local configuration to private and public clouds.
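
    The cost metrics described above can be made concrete with a small sketch. The Python function below is a minimal total-cost-of-ownership estimate that assumes only three cost components (equipment, electricity, and data/storage fees); the prices in the usage lines are hypothetical placeholders, not figures from the paper.

        def total_cost_of_ownership(equipment, power_watts, price_per_kwh,
                                    years, data_fees_per_year=0.0):
            """Rough TCO estimate: hardware + energy + data/storage fees."""
            energy_kwh = power_watts / 1000.0 * 24 * 365 * years
            return equipment + energy_kwh * price_per_kwh + data_fees_per_year * years

        # hypothetical comparison: Raspberry Pi node vs. conventional server
        pi_node = total_cost_of_ownership(50, 5, 0.20, years=3)
        server = total_cost_of_ownership(3000, 300, 0.20, years=3)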

  18. GPU-accelerated Modeling and Element-free Reverse-time Migration with Gauss Points Partition

    NASA Astrophysics Data System (ADS)

    Zhen, Z.; Jia, X.

    2014-12-01

    Element-free method (EFM) has been applied to seismic modeling and migration. Compared with the finite element method (FEM) and the finite difference method (FDM), it is much cheaper and more flexible because only information about the nodes and the boundary of the study area is required in the computation. In the EFM, the number of Gauss points should be consistent with the number of model nodes; otherwise the accuracy of the intermediate coefficient matrices would be harmed. Thus, when we increase the nodes of the velocity model in order to obtain higher resolution, the size of the computer's memory becomes a bottleneck. The original EFM can deal with at most 81×81 nodes in the case of 2 GB of memory, as tested by Jia and Hu (2006). In order to solve this problem of storage and computation efficiency, we propose a concept of Gauss points partition (GPP) and utilize GPUs to improve the computation efficiency. Considering the characteristics of the Gauss points, the GPP method does not influence the propagation of the seismic wave in the velocity model. To overcome the time-consuming computation of the stiffness matrix (K) and the mass matrix (M), we also use GPUs in our computation program. We employ the compressed sparse row (CSR) format to compress the intermediate sparse matrices and simplify the operations by solving the linear equations with the CULA Sparse Conjugate Gradient (CG) solver instead of the linear sparse solver PARDISO. It is observed that our strategy can significantly reduce the computational time of K and M compared with the algorithm based on the CPU. The model tested is the Marmousi model. The length of the model is 7425 m and the depth is 2990 m. We discretize the model with 595×298 nodes, 300×300 Gauss cells and 3×3 Gauss points in each cell. In contrast to the computational time of the conventional EFM, the GPUs-GPP approach substantially improves the efficiency. The speedup ratio for computing K and M is 120, and the speedup ratio for the RTM is 11.5. At the same time, the accuracy of imaging is not harmed. Another advantage of the GPUs-GPP method is its easy application in other numerical methods such as the FEM. Finally, in the GPUs-GPP method, the arrays require quite limited memory storage, which makes the method promising for dealing with large-scale 3D problems.
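
    The abstract pairs CSR storage of the sparse matrices with a Conjugate Gradient solve (via the CULA Sparse GPU library). A CPU analogue of the same idea, using SciPy in place of the GPU solver, might look as follows; the 3×3 matrix is a toy stand-in for the stiffness matrix K.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import cg

        # small symmetric positive-definite "stiffness-like" matrix in CSR form
        K = csr_matrix(np.array([[4.0, 1.0, 0.0],
                                 [1.0, 3.0, 1.0],
                                 [0.0, 1.0, 2.0]]))
        f = np.array([1.0, 2.0, 3.0])

        u, info = cg(K, f)  # info == 0 signals convergence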

  19. TU-AB-BRA-10: Prognostic Value of Intra-Radiation Treatment FDG-PET and CT Imaging Features in Locally Advanced Head and Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, J; Pollom, E; Durkee, B

    2015-06-15

    Purpose: To predict response to radiation treatment using computational FDG-PET and CT image features in locally advanced head and neck cancer (HNC). Methods: 68 patients with Stage III-IVB HNC treated with chemoradiation were included in this retrospective study. For each patient, we analyzed the primary tumor and lymph nodes on PET and CT scans acquired both prior to and during radiation treatment, which led to 8 combinations of image datasets. From each image set, we extracted high-throughput radiomic features of the following types: statistical, morphological, textural, histogram, and wavelet, resulting in a total of 437 features. We then performed unsupervised redundancy removal and a stability test on these features. To avoid over-fitting, we trained a logistic regression model with simultaneous feature selection based on the least absolute shrinkage and selection operator (LASSO). To objectively evaluate the prediction ability, we performed 5-fold cross validation (CV) with 50 random repeats of stratified bootstrapping. Feature selection and model training were conducted solely on the training set and independently validated on the holdout test set. The receiver operating characteristic (ROC) curve of the pooled results was generated, and the area under the ROC curve (AUC) was calculated as the figure of merit. Results: For predicting local-regional recurrence, our model built on pre-treatment PET of lymph nodes achieved the best performance (AUC=0.762) on 5-fold CV, which compared favorably with node volume and SUVmax (AUC=0.704 and 0.449, p<0.001). Wavelet coefficients turned out to be the most predictive features. Prediction of distant recurrence showed a similar trend, in which pre-treatment PET features of lymph nodes had the highest AUC of 0.705. Conclusion: The radiomics approach identified novel imaging features that are predictive of radiation treatment response. If prospectively validated in larger cohorts, they could aid in risk-adaptive treatment of HNC.
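
    A minimal sketch of the modelling step described above, L1-penalized (LASSO-style) logistic regression evaluated with cross-validated AUC, is shown below using scikit-learn. The data are synthetic with the same nominal dimensions (68 patients, 437 features), and the penalty strength C is an arbitrary placeholder rather than the study's tuned value.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(68, 437))    # 68 patients x 437 radiomic features
        y = rng.integers(0, 2, size=68)   # synthetic recurrence labels

        # the L1 penalty performs embedded feature selection, as in LASSO
        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()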

  20. Laparoscopic Pelvic Exenteration for Locally Advanced Rectal Cancer, Technique and Short-Term Outcomes.

    PubMed

    Pokharkar, Ashish; Kammar, Praveen; D'souza, Ashwin; Bhamre, Rahul; Sugoor, Pavan; Saklani, Avanish

    2018-05-09

    Over the last two decades, minimally invasive techniques have revolutionized the surgical field. In 2003, Pomel first described laparoscopic pelvic exenteration; since then, very few reports have described minimally invasive approaches for total pelvic exenteration. We report 10 cases of locally advanced rectal adenocarcinoma operated on between March 1, 2017 and November 11, 2017 at the Tata Memorial Hospital, Mumbai. All male patients had lower rectal cancer with prostate involvement on magnetic resonance imaging (MRI). One female patient had uterine and fornix involvement. All perioperative and intraoperative parameters were collected retrospectively from prospectively maintained electronic data. Nine male patients with a diagnosis of nonmetastatic locally advanced lower rectal adenocarcinoma were selected. All patients were operated on with a minimally invasive approach. All patients underwent abdominoperineal resection with a permanent sigmoid stoma. An ileal conduit was constructed with Bricker's procedure through a small infraumbilical incision (4-5 cm). Lateral pelvic lymph node dissection was done only when postchemoradiotherapy MRI showed enlarged pelvic nodes. All 10 patients received neoadjuvant chemoradiotherapy, whereas 8 patients received additional neoadjuvant chemotherapy. Mean body mass index was 21.73 (range 19.5-26.3). Mean blood loss was 1000 mL (range 300-2000 mL). Mean duration of surgery was 9.13 hours (range 7-13 hours). One patient developed paralytic ileus, which was managed conservatively. One patient developed intestinal obstruction due to herniation of small intestine behind the left ureter and ileal conduit; the same patient developed acute pyelonephritis, which was managed with antibiotics. Mean postoperative stay was 14.6 days (range 9-25 days). On postoperative histopathology, all margins were free of tumor in all cases. Minimally invasive approaches can be used safely for total pelvic exenteration in locally advanced lower rectal adenocarcinoma. All patients had a fast recovery with less blood loss. In all patients, R0 resection was achieved with adequate margins. Long-term oncological outcomes are still uncertain and will require further follow-up.

  1. KSC-07pd2416

    NASA Image and Video Library

    2007-09-10

    KENNEDY SPACE CENTER, FLA. -- In bay 3 of the Orbiter Processing Facility, a tool storage assembly unit is being moved for storage in Discovery's payload bay. The tools may be used on a spacewalk, yet to be determined, during mission STS-120. In an unusual operation, the payload bay doors had to be reopened after closure to accommodate the storage. Space shuttle Discovery is targeted to launch Oct. 23 to the International Space Station. It will carry the U.S. Node 2, a connecting module named Harmony, for assembly on the space station. Photo credit: NASA/Amanda Diller

  2. dCache on Steroids - Delegated Storage Solutions

    DOE PAGES

    Mkrtchyan, Tigran; Adeyemi, F.; Ashish, A.; ...

    2017-11-23

    For over a decade, dCache.org has delivered a robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. As a result, we will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  3. dCache on Steroids - Delegated Storage Solutions

    NASA Astrophysics Data System (ADS)

    Mkrtchyan, T.; Adeyemi, F.; Ashish, A.; Behrmann, G.; Fuhrmann, P.; Litvintsev, D.; Millar, P.; Rossi, A.; Sahakyan, M.; Starek, J.

    2017-10-01

    For over a decade, dCache.org has delivered a robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. We will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  4. dCache on Steroids - Delegated Storage Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkrtchyan, Tigran; Adeyemi, F.; Ashish, A.

    For over a decade, dCache.org has delivered a robust software used at more than 80 Universities and research institutes around the world, allowing these sites to provide reliable storage services for the WLCG experiments as well as many other scientific communities. The flexible architecture of dCache allows running it in a wide variety of configurations and platforms - from a SoC based all-in-one Raspberry-Pi up to hundreds of nodes in a multipetabyte installation. Due to lack of managed storage at the time, dCache implemented data placement, replication and data integrity directly. Today, many alternatives are available: S3, GlusterFS, CEPH and others. While such solutions position themselves as scalable storage systems, they cannot be used by many scientific communities out of the box. The absence of community-accepted authentication and authorization mechanisms, the use of product specific protocols and the lack of namespace are some of the reasons that prevent wide-scale adoption of these alternatives. Most of these limitations are already solved by dCache. By delegating low-level storage management functionality to the above-mentioned new systems and providing the missing layer through dCache, we provide a solution which combines the benefits of both worlds - industry standard storage building blocks with the access protocols and authentication required by scientific communities. In this paper, we focus on CEPH, a popular software for clustered storage that supports file, block and object interfaces. CEPH is often used in modern computing centers, for example as a backend to OpenStack services. We will show prototypes of dCache running with a CEPH backend and discuss the benefits and limitations of such an approach. As a result, we will also outline the roadmap for supporting ‘delegated storage’ within the dCache releases.

  5. Improved Object Localization Using Accurate Distance Estimation in Wireless Multimedia Sensor Networks

    PubMed Central

    Ur Rehman, Yasar Abbas; Tariq, Muhammad; Khan, Omar Usman

    2015-01-01

    Object localization plays a key role in many popular applications of Wireless Multimedia Sensor Networks (WMSN) and, as a result, it has acquired a significant status in the research community. A significant body of research performs this task without considering node orientation, object geometry and environmental variations. As a result, the localized object does not reflect real-world scenarios. In this paper, a novel object localization scheme for WMSN is proposed that utilizes range-free localization, computer vision, and principal component analysis based algorithms. The proposed approach provides the best possible approximation of the distance between a WMSN sink and an object, and of the orientation of the object, using image-based information. Simulation results report 99% efficiency and an error ratio of 0.01 (around 1 ft) when compared to other popular techniques. PMID:26528919
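
    One common way to estimate object orientation from image-based information, in line with the principal component analysis step the abstract mentions, is to take the principal axis of the object's pixel coordinates. The sketch below is a generic illustration of that idea, not the authors' algorithm; the mask is hypothetical.

        import numpy as np

        def object_orientation(mask):
            """Principal-axis angle (radians) of a binary object mask via PCA."""
            ys, xs = np.nonzero(mask)
            pts = np.column_stack((xs, ys)).astype(float)
            pts -= pts.mean(axis=0)                  # center the point cloud
            eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
            major = eigvecs[:, np.argmax(eigvals)]   # dominant eigenvector
            return np.arctan2(major[1], major[0])

        # usage with a hypothetical elongated horizontal object
        mask = np.zeros((50, 50), dtype=bool)
        mask[20:23, 10:40] = True
        angle = object_orientation(mask)             # approximately 0 radians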

  6. Effective Social Relationship Measurement and Cluster Based Routing in Mobile Opportunistic Networks †

    PubMed Central

    Zeng, Feng; Zhao, Nan; Li, Wenjia

    2017-01-01

    In mobile opportunistic networks, the social relationship among nodes has an important impact on data transmission efficiency. Motivated by the strong sharing ability of “circles of friends” in communication networks such as Facebook, Twitter, WeChat and so on, we take a real-life example to show that social relationships among nodes consist of explicit and implicit parts. The explicit part comes from direct contact among nodes, and the implicit part can be measured through the “circles of friends”. We present definitions of the explicit and implicit social relationships between two nodes, give adaptive weights to the explicit and implicit parts according to the contact features of nodes, and design a distributed mechanism to construct each node's “circle of friends”, which is used for the calculation of the implicit part of the social relationship between nodes. Based on this effective measurement of social relationships, we propose a social-based clustering and routing scheme, in which each node selects the nodes with close social relationships to form a local cluster, and a self-control method is used to keep all cluster members in close relationships with each other. A cluster-based message forwarding mechanism is designed for opportunistic routing, in which each node forwards a copy of the message only to nodes that have the destination node as a member of their local cluster. Simulation results show that the proposed social-based clustering and routing outperforms other classic routing algorithms. PMID:28498309
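
    The measurement described above, an explicit part from direct contact plus an implicit part measured through the “circles of friends”, can be sketched as follows. The overlap-based proxy for the implicit part and the scalar weighting are simplifying assumptions for illustration, not the paper's exact formulas.

        def implicit_part(friends_a, friends_b):
            """One plausible proxy: overlap of the two nodes' circles of friends."""
            union = friends_a | friends_b
            return len(friends_a & friends_b) / len(union) if union else 0.0

        def social_relationship(explicit, implicit, w_explicit):
            """Weighted combination of explicit and implicit parts; the weight
            would be adapted to the nodes' contact features in the paper."""
            return w_explicit * explicit + (1.0 - w_explicit) * implicit

        s = social_relationship(explicit=0.6,
                                implicit=implicit_part({'b', 'c', 'd'}, {'c', 'd', 'e'}),
                                w_explicit=0.7)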

  7. Effective Social Relationship Measurement and Cluster Based Routing in Mobile Opportunistic Networks.

    PubMed

    Zeng, Feng; Zhao, Nan; Li, Wenjia

    2017-05-12

    In mobile opportunistic networks, the social relationship among nodes has an important impact on data transmission efficiency. Motivated by the strong sharing ability of "circles of friends" in communication networks such as Facebook, Twitter, WeChat and so on, we take a real-life example to show that social relationships among nodes consist of explicit and implicit parts. The explicit part comes from direct contact among nodes, and the implicit part can be measured through the "circles of friends". We present definitions of the explicit and implicit social relationships between two nodes, give adaptive weights to the explicit and implicit parts according to the contact features of nodes, and design a distributed mechanism to construct each node's "circle of friends", which is used for the calculation of the implicit part of the social relationship between nodes. Based on this effective measurement of social relationships, we propose a social-based clustering and routing scheme, in which each node selects the nodes with close social relationships to form a local cluster, and a self-control method is used to keep all cluster members in close relationships with each other. A cluster-based message forwarding mechanism is designed for opportunistic routing, in which each node forwards a copy of the message only to nodes that have the destination node as a member of their local cluster. Simulation results show that the proposed social-based clustering and routing outperforms other classic routing algorithms.

  8. Wi-GIM system: a new wireless sensor network (WSN) for accurate ground instability monitoring

    NASA Astrophysics Data System (ADS)

    Mucchi, Lorenzo; Trippi, Federico; Schina, Rosa; Fornaciai, Alessandro; Gigli, Giovanni; Nannipieri, Luca; Favalli, Massimiliano; Marturia Alavedra, Jordi; Intrieri, Emanuele; Agostini, Andrea; Carnevale, Ennio; Bertolini, Giovanni; Pizziolo, Marco; Casagli, Nicola

    2016-04-01

    Landslides are among the most serious and common geologic hazards around the world. Their impact on human life is expected to increase in the near future as a consequence of human-induced climate change as well as population growth in proximity to unstable slopes. Therefore, developing better performing technologies for monitoring landslides, and providing local authorities with new instruments able to help them in the decision making process, is becoming more and more important. Recent progress in Information and Communication Technologies (ICT) allows us to extend the use of wireless technologies in landslide monitoring. In particular, developments in electronic components have lowered the price of the sensors and, at the same time, enabled more efficient wireless communications. In this work we present a new wireless sensor network (WSN) system, designed and developed for landslide monitoring in the framework of the EU Wireless Sensor Network for Ground Instability Monitoring - Wi-GIM project (LIFE12 ENV/IT/001033). We show the preliminary performance of the Wi-GIM system after the first period of monitoring on the active Roncovetro landslide and on a large subsiding area in the neighbourhood of the village of Sallent. The Roncovetro landslide is located in the province of Reggio Emilia (Italy) and moved an inferred volume of about 3 million cubic meters. Sallent is located at the centre of the Catalan evaporitic basin in Spain. The Wi-GIM WSN monitoring system consists of three levels: 1) the Master/Gateway level, which coordinates the WSN and performs data aggregation and local storage; 2) the Master/Server level, which takes care of acquiring and storing data on a remote server; 3) the Nodes level, based on a mesh of peripheral nodes, each consisting of a sensor board equipped with sensors and a wireless module. The nodes are located on the perimeter of the landslide and are able to create an ad-hoc WSN. The location of each sensor on the ground is determined by integrating ultra-wideband technology with radar technology; this integration pushes the accuracy towards the centimetre level. An extended Kalman filter is also used to reduce the noise and enhance the accuracy of the measurements. The sensor nodes are organized as a hierarchical cluster, composed of one master and several slave nodes. Landslide movement is detected by comparing, day by day, the x, y and z coordinates of each node. The 3D movements of each sensor during the monitoring period are represented as vectors and displayed on a Web-GIS accessible at the following link: www.life-wigim.eu.
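
    The day-by-day comparison of node coordinates described above amounts to computing displacement vectors between consecutive surveys. A minimal sketch, with hypothetical coordinates and an illustrative detection threshold:

        import numpy as np

        def daily_displacements(coords_day1, coords_day2):
            """3-D displacement vectors and magnitudes for each node."""
            d = np.asarray(coords_day2) - np.asarray(coords_day1)
            return d, np.linalg.norm(d, axis=1)

        vecs, mags = daily_displacements([[0.0, 0.0, 0.0], [5.0, 5.0, 1.0]],
                                         [[0.02, 0.01, 0.0], [5.0, 5.05, 0.98]])
        moving = mags > 0.03   # flag nodes exceeding a 3 cm threshold (hypothetical)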

  9. Clinical Response of Pelvic and Para-aortic Lymphadenopathy to a Radiation Boost in the Definitive Management of Locally Advanced Cervical Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rash, Dominique L.; Lee, Yongsook C.; Kashefi, Amir

    Purpose: Optimal treatment with radiation for metastatic lymphadenopathy in locally advanced cervical cancer remains controversial. We investigated the clinical dose response threshold for pelvic and para-aortic lymph node boost using radiographic imaging and clinical outcomes. Methods and Materials: Between 2007 and 2011, 68 patients were treated for locally advanced cervical cancer; 40 patients had clinically involved pelvic and/or para-aortic lymph nodes. Computed tomography (CT) or 18F-labeled fluorodeoxyglucose-positron emission tomography scans obtained pre- and postchemoradiation for 18 patients were reviewed to assess therapeutic radiographic response of individual lymph nodes. External beam boost doses to involved nodes were compared to treatment response, assessed by change in size of lymph nodes by short axis and change in standard uptake value (SUV). Patterns of failure, time to recurrence, overall survival (OS), and disease-free survival (DFS) were determined. Results: Sixty-four lymph nodes suspicious for metastatic involvement were identified. Radiation boost doses ranged from 0 to 15 Gy, with a mean total dose of 52.3 Gy. Pelvic lymph nodes were treated with a slightly higher dose than para-aortic lymph nodes: mean 55.3 Gy versus 51.7 Gy, respectively. There was no correlation between dose delivered and change in size of lymph nodes along the short axis. All lymph nodes underwent a decrease in SUV with a complete resolution of abnormal uptake observed in 68%. Decrease in SUV was significantly greater for lymph nodes treated with ≥54 Gy compared to those treated with <54 Gy (P=.006). Median follow-up was 18.7 months. At 2 years, OS and DFS for the entire cohort were 78% and 50%, respectively. Locoregional control at 2 years was 84%. Conclusions: A biologic response, as measured by the change in SUV for metastatic lymph nodes, was observed at a dose threshold of 54 Gy. We recommend that involved lymph nodes be treated to this minimum dose.

  10. Paired-agent fluorescent imaging to detect micrometastases in breast sentinel lymph node biopsy: experiment design and protocol development

    NASA Astrophysics Data System (ADS)

    Li, Chengyue; Xu, Xiaochun; Basheer, Yusairah; He, Yusheng; Sattar, Husain A.; Brankov, Jovan G.; Tichauer, Kenneth M.

    2018-02-01

    Sentinel lymph node status is a critical prognostic factor in breast cancer treatment and is essential to guide future adjuvant treatment. The estimation that 20-60% of micrometastases are missed by conventional pathology has created a demand for the development of more accurate approaches. Here, a paired-agent imaging approach is presented that employs a control imaging agent to allow rapid, quantitative mapping of microscopic populations of tumor cells in lymph nodes to guide pathology sectioning. To test the feasibility of this approach to identify micrometastases, healthy pig lymph nodes were stained with a targeted and control imaging agent solution to evaluate the potential for the agents to diffuse into and out of intact nodes. Aby-029, an anti-EGFR affibody, was labeled with IRDye 800CW (LICOR) as the targeted agent, and IRDye 700DX was hydrolyzed as the control agent. Lymph nodes were stained and rinsed by directly injecting the agents into the lymph nodes after immobilization in agarose gel. Subsequently, the lymph nodes were frozen-sectioned and imaged under an 80-μm resolution fluorescence imaging system (Pearl, LICOR) to confirm equivalence of the spatial distribution of both agents in the entire node. The binding potentials were acquired by a pixel-by-pixel calculation and were found to be 0.02 +/- 0.06 across the lymph node in the absence of binding. The results demonstrate this approach's potential to enhance the sensitivity of lymph node pathology by detecting fewer than 1000 cells in a whole human lymph node.
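
    Paired-agent methods of this kind typically compute a pixel-wise binding potential from the targeted and control images, commonly as BP = (targeted - control) / control. A minimal sketch of that calculation follows; the exact estimator used in the study may differ.

        import numpy as np

        def binding_potential(targeted, control, eps=1e-9):
            """Pixel-wise paired-agent binding potential, BP = (T - C) / C."""
            t = np.asarray(targeted, dtype=float)
            c = np.asarray(control, dtype=float)
            return (t - c) / (c + eps)   # eps guards against division by zero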

  11. Molecular markers to complement sentinel node status in predicting survival in patients with high-risk locally invasive melanoma.

    PubMed

    Rowe, Casey J; Tang, Fiona; Hughes, Maria Celia B; Rodero, Mathieu P; Malt, Maryrose; Lambie, Duncan; Barbour, Andrew; Hayward, Nicholas K; Smithers, B Mark; Green, Adele C; Khosrotehrani, Kiarash

    2016-08-01

    Sentinel lymph node status is a major prognostic marker in locally invasive cutaneous melanoma. However, this procedure is not always feasible, requires advanced logistics and carries rare but significant morbidity. Previous studies have linked markers of tumour biology to patient survival. In this study, we aimed to combine the predictive value of established biomarkers with clinical parameters as indicators of survival, in addition to or instead of sentinel node biopsy, in a cohort of high-risk melanoma patients. Patients with locally invasive melanomas undergoing sentinel lymph node biopsy were ascertained and prospectively followed. Information on mortality was validated through the National Death Index. Immunohistochemistry was used to analyse proteins previously reported to be associated with melanoma survival, namely Ki67, p16 and CD163. Evaluation and multivariate analyses according to the REMARK criteria were used to generate models to predict disease-free and melanoma-specific survival. A total of 189 patients with available archival material of their primary tumour were analysed. Our study sample was representative of the entire cohort (N = 559). Average Breslow thickness was 2.5 mm. Thirty-two (17%) patients in the study sample died from melanoma during the follow-up period. A prognostic score was developed and was strongly predictive of survival, independent of sentinel node status. The score allowed classification of the risk of melanoma death in sentinel node-negative patients. Combining clinicopathological factors and established biomarkers allows prediction of outcome in locally invasive melanoma and might be implemented in addition to sentinel node biopsy, or in cases when sentinel node biopsy cannot be performed. © 2016 UICC.

  12. G-Hash: Towards Fast Kernel-based Similarity Search in Large Graph Databases.

    PubMed

    Wang, Xiaohong; Smalter, Aaron; Huan, Jun; Lushington, Gerald H

    2009-01-01

    Structured data including sets, sequences, trees and graphs pose significant challenges to fundamental aspects of data management such as efficient storage, indexing, and similarity search. With the fast accumulation of graph databases, similarity search in graph databases has emerged as an important research topic. Graph similarity search has applications in a wide range of domains including cheminformatics, bioinformatics, sensor network management, social network management, and XML documents, among others. Most of the current graph indexing methods focus on subgraph query processing, i.e. determining the set of database graphs that contains the query graph, and hence do not directly support similarity search. In data mining and machine learning, various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models for supervised learning, graph kernel functions have (i) high computational complexity and (ii) non-trivial difficulty to be indexed in a graph database. Our objective is to bridge graph kernel functions and similarity search in graph databases by proposing (i) a novel kernel-based similarity measurement and (ii) an efficient indexing structure for graph data management. Our method of similarity measurement builds upon local features extracted from each node and its neighboring nodes in graphs. A hash table is utilized to support efficient storage and fast search of the extracted local features. Using the hash table, a graph kernel function is defined to capture the intrinsic similarity of graphs and to support fast similarity query processing. We have implemented our method, which we have named G-hash, and have demonstrated its utility on large chemical graph databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Most importantly, the new similarity measurement and the index structure are scalable to large databases, with smaller indexing size, faster index construction time, and faster query processing time as compared to state-of-the-art indexing methods such as C-tree, gIndex, and GraphGrep.
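
    The core idea, hashing local features built from each node and its neighbors so that graph similarity reduces to fast table lookups, can be sketched as follows. The feature definition and the intersection-style kernel here are simplified illustrations, not the exact G-hash construction.

        from collections import Counter

        def local_features(graph):
            """Per-node feature: own label plus the sorted labels of neighbors.
            `graph` maps node -> {'label': str, 'nbrs': [node, ...]}."""
            return Counter(
                (graph[n]['label'],
                 tuple(sorted(graph[m]['label'] for m in graph[n]['nbrs'])))
                for n in graph)

        def kernel_similarity(g1, g2):
            """Count matching local features via hash-table intersection."""
            f1, f2 = local_features(g1), local_features(g2)
            return sum(min(f1[k], f2[k]) for k in f1.keys() & f2.keys())

        g = {1: {'label': 'C', 'nbrs': [2]}, 2: {'label': 'O', 'nbrs': [1]}}
        sim = kernel_similarity(g, g)   # a graph is maximally similar to itself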

  13. Identification of hybrid node and link communities in complex networks

    PubMed Central

    He, Dongxiao; Jin, Di; Chen, Zheng; Zhang, Weixiong

    2015-01-01

    Identifying communities in complex networks is an effective means for analyzing complex systems, with applications in diverse areas such as social science, engineering, biology and medicine. Finding communities of nodes and finding communities of links are two popular schemes for network analysis. These schemes, however, have inherent drawbacks and are inadequate to capture complex organizational structures in real networks. We introduce a new scheme and an effective approach for identifying complex mixture structures of node and link communities, called hybrid node-link communities. A central piece of our approach is a probabilistic model that accommodates node, link and hybrid node-link communities. Our extensive experiments on various real-world networks, including a large protein-protein interaction network and a large network of semantically associated words, illustrated that the scheme for hybrid communities is superior in revealing network characteristics. Moreover, the new approach outperformed the existing methods for finding node or link communities separately. PMID:25728010

  14. Identification of hybrid node and link communities in complex networks.

    PubMed

    He, Dongxiao; Jin, Di; Chen, Zheng; Zhang, Weixiong

    2015-03-02

    Identifying communities in complex networks is an effective means for analyzing complex systems, with applications in diverse areas such as social science, engineering, biology and medicine. Finding communities of nodes and finding communities of links are two popular schemes for network analysis. These schemes, however, have inherent drawbacks and are inadequate to capture complex organizational structures in real networks. We introduce a new scheme and an effective approach for identifying complex mixture structures of node and link communities, called hybrid node-link communities. A central piece of our approach is a probabilistic model that accommodates node, link and hybrid node-link communities. Our extensive experiments on various real-world networks, including a large protein-protein interaction network and a large network of semantically associated words, illustrated that the scheme for hybrid communities is superior in revealing network characteristics. Moreover, the new approach outperformed the existing methods for finding node or link communities separately.

  15. Identification of hybrid node and link communities in complex networks

    NASA Astrophysics Data System (ADS)

    He, Dongxiao; Jin, Di; Chen, Zheng; Zhang, Weixiong

    2015-03-01

    Identifying communities in complex networks is an effective means for analyzing complex systems, with applications in diverse areas such as social science, engineering, biology and medicine. Finding communities of nodes and finding communities of links are two popular schemes for network analysis. These schemes, however, have inherent drawbacks and are inadequate to capture complex organizational structures in real networks. We introduce a new scheme and an effective approach for identifying complex mixture structures of node and link communities, called hybrid node-link communities. A central piece of our approach is a probabilistic model that accommodates node, link and hybrid node-link communities. Our extensive experiments on various real-world networks, including a large protein-protein interaction network and a large network of semantically associated words, illustrated that the scheme for hybrid communities is superior in revealing network characteristics. Moreover, the new approach outperformed the existing methods for finding node or link communities separately.

  16. Efficient DV-HOP Localization for Wireless Cyber-Physical Social Sensing System: A Correntropy-Based Neural Network Learning Scheme

    PubMed Central

    Xu, Yang; Luo, Xiong; Wang, Weiping; Zhao, Wenbing

    2017-01-01

    Integrating wireless sensor networks (WSN) into emerging computing paradigms, e.g., cyber-physical social sensing (CPSS), has witnessed growing interest, and a WSN can serve as a social network while receiving more attention from the social computing research field. Accordingly, the localization of sensor nodes has become an essential requirement for many applications over WSN, and the localization information of unknown nodes strongly affects the performance of the WSN. The received signal strength indication (RSSI), as a typical range-based algorithm for positioning sensor nodes in a WSN, can achieve accurate localization with hardware savings, but is sensitive to environmental noise. Moreover, the original distance vector hop (DV-HOP), an important range-free localization algorithm, is simple and inexpensive and does not depend on environmental factors, but performs poorly when anchor nodes are lacking. Motivated by these observations, various improved DV-HOP schemes with RSSI have been introduced, and we present a new neural network (NN)-based node localization scheme, named RHOP-ELM-RCC, through the use of DV-HOP, RSSI and a regularized correntropy criterion (RCC)-based extreme learning machine (ELM) algorithm (ELM-RCC). Firstly, the proposed scheme employs both RSSI and DV-HOP to evaluate the distances between nodes, enhancing the accuracy of distance estimation at a reasonable cost. Then, with the help of the ELM, which features fast learning speed, good generalization performance and minimal human intervention, a single hidden layer feedforward network (SLFN) based on ELM-RCC is used to carry out the optimization task of obtaining the locations of unknown nodes. Since the RSSI may be influenced by environmental noise and may introduce estimation errors, the RCC is exploited in the ELM instead of the mean square error (MSE) criterion, which is sensitive to noise; this makes the estimation more robust against outliers. Additionally, the least square estimation (LSE) in the ELM is replaced by the half-quadratic optimization technique. Simulation results show that our proposed scheme outperforms other traditional localization schemes. PMID:28085084
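
    For reference, the classic DV-HOP distance estimate on which the scheme builds works in two steps: each anchor computes its average per-hop distance from the other anchors, and an unknown node multiplies that hop size by its hop count to the anchor. A minimal sketch with hypothetical coordinates:

        import math

        def hop_size(anchor, other_anchors, hop_counts):
            """Average per-hop distance seen by one anchor (classic DV-HOP)."""
            total_dist = sum(math.dist(anchor, a) for a in other_anchors)
            return total_dist / sum(hop_counts)

        # anchor at (0, 0); two other anchors, 4 and 6 hops away respectively
        hs = hop_size((0, 0), [(40, 0), (0, 55)], [4, 6])
        d_est = hs * 3   # unknown node is 3 hops from this anchor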

  17. A range-based predictive localization algorithm for WSID networks

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Chen, Junjie; Li, Gang

    2017-11-01

    Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks integrated with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.

  18. Percolation of localized attack on isolated and interdependent random networks

    NASA Astrophysics Data System (ADS)

    Shao, Shuai; Huang, Xuqing; Stanley, H. Eugene; Havlin, Shlomo

    2014-03-01

    Percolation properties of isolated and interdependent random networks have been investigated extensively. The focus of these studies has been on random attacks, where each node in the network is attacked with the same probability, or targeted attacks, where each node is attacked with a probability that is a function of its centrality, such as degree. Here we discuss a new type of realistic attack, which we call a localized attack, where a group of neighboring nodes in the network is attacked. We attack a randomly chosen node, its neighbors, its neighbors' neighbors and so on, until a fraction (1 - p) of the network has been removed. This type of attack reflects damage due to localized disasters, such as earthquakes, floods and war zones, in real-world networks. We study, both analytically and by simulations, the impact of localized attack on percolation properties of random networks with arbitrary degree distributions and discuss in detail random regular (RR) networks, Erdős-Rényi (ER) networks and scale-free (SF) networks. We extend and generalize our theoretical and simulation results from single isolated networks to systems formed of interdependent networks.
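
    A localized attack as defined above, removing a random node, then its neighbors, then their neighbors, until a fraction (1 - p) of the network is gone, is straightforward to simulate. A sketch with networkx, measuring the surviving giant component on an Erdős-Rényi graph:

        import networkx as nx
        from collections import deque

        def localized_attack(G, p, start=None):
            """Remove a fraction (1 - p) of nodes by breadth-first expansion
            from a start node; return the relative giant-component size left."""
            n_remove = int((1 - p) * G.number_of_nodes())
            start = start if start is not None else next(iter(G.nodes))
            removed, queue, seen = [], deque([start]), {start}
            while queue and len(removed) < n_remove:
                u = queue.popleft()
                removed.append(u)
                for v in G.neighbors(u):
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
            H = G.copy()
            H.remove_nodes_from(removed)
            if H.number_of_nodes() == 0:
                return 0.0
            return len(max(nx.connected_components(H), key=len)) / G.number_of_nodes()

        frac = localized_attack(nx.erdos_renyi_graph(1000, 0.005, seed=1), p=0.6)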

  19. A Trust-Based Adaptive Probability Marking and Storage Traceback Scheme for WSNs

    PubMed Central

    Liu, Anfeng; Liu, Xiao; Long, Jun

    2016-01-01

    Security is a pivotal issue for wireless sensor networks (WSNs), which are emerging as a promising platform that enables a wide range of military, scientific, industrial and commercial applications. Traceback, a key cyber-forensics technology, can play an important role in tracing and locating a malicious source to guarantee cybersecurity. In this work, a trust-based adaptive probability marking and storage (TAPMS) traceback scheme is proposed to enhance security for WSNs. In the TAPMS scheme, the marking probability is adaptively adjusted according to the security requirements of the network, which can substantially reduce the number of marking tuples and improve network lifetime. More importantly, a high-trust node is selected to store the marking tuples, which avoids the problem of marking information being lost. Experimental results show that the total number of marking tuples can be reduced in the TAPMS scheme, thus improving network lifetime. At the same time, since the marking tuples are stored in high-trust nodes, storage reliability can be guaranteed, and the traceback time can be reduced by more than 80%. PMID:27043566
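
    The two key moves in the scheme, an adaptive marking probability and storage of marking tuples at high-trust nodes, can be sketched as follows. The trust selection rule, probability scaling, and data layout are illustrative assumptions, not the paper's exact design.

        import random

        def adaptive_mark_prob(security_level, base=0.05):
            """Higher security requirement -> higher marking probability."""
            return min(1.0, base * security_level)

        def maybe_mark(packet_id, node_id, neighbors, security_level):
            """With adaptive probability, record a marking tuple at the
            highest-trust neighboring node so the tuple is not lost."""
            if random.random() < adaptive_mark_prob(security_level):
                keeper = max(neighbors, key=lambda n: n['trust'])
                keeper['tuples'].append((node_id, packet_id))

        nodes = [{'trust': 0.4, 'tuples': []}, {'trust': 0.9, 'tuples': []}]
        maybe_mark(packet_id=17, node_id=3, neighbors=nodes, security_level=10)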

  20. Saliency Detection via Absorbing Markov Chain With Learnt Transition Probability.

    PubMed

    Lihe Zhang; Jianwu Ai; Bowen Jiang; Huchuan Lu; Xiukui Li

    2018-02-01

    In this paper, we propose a bottom-up saliency model based on an absorbing Markov chain (AMC). First, a sparsely connected graph is constructed to capture the local context information of each node. All image boundary nodes and all other nodes are treated, respectively, as the absorbing nodes and the transient nodes in the absorbing Markov chain. Then, the expected number of steps from each transient node before absorption can be used to represent the saliency value of that node. The absorbed time depends on the weights on the path and their spatial coordinates, which are completely encoded in the transition probability matrix. Considering the importance of this matrix, we adopt different hierarchies of deep features extracted from fully convolutional networks and learn a transition probability matrix, which is called the learnt transition probability matrix. Although this significantly improves performance, salient objects are not always uniformly highlighted. To solve this problem, an angular embedding technique is investigated to refine the saliency results. Based on pairwise local orderings, which are produced by the saliency maps of the AMC and boundary maps, we rearrange the global orderings (saliency values) of all nodes. Extensive experiments demonstrate that the proposed algorithm outperforms state-of-the-art methods on six publicly available benchmark data sets.
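
    The saliency value described above follows from standard absorbing-chain theory: with Q the transient-to-transient block of the transition matrix, the fundamental matrix is N = (I - Q)^(-1), and the expected steps to absorption are the row sums of N. A minimal sketch on a toy chain:

        import numpy as np

        def absorbed_times(P, transient_idx):
            """Expected steps before absorption for each transient node.
            P is the full row-stochastic transition matrix."""
            Q = P[np.ix_(transient_idx, transient_idx)]        # transient block
            N = np.linalg.inv(np.eye(len(transient_idx)) - Q)  # fundamental matrix
            return N.sum(axis=1)

        # toy chain: node 2 is absorbing; nodes 0 and 1 are transient
        P = np.array([[0.0, 0.5, 0.5],
                      [0.5, 0.0, 0.5],
                      [0.0, 0.0, 1.0]])
        t = absorbed_times(P, [0, 1])   # both equal 2.0 steps here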

  1. A transmission power optimization with a minimum node degree for energy-efficient wireless sensor networks with full-reachability.

    PubMed

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-03-20

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either causes interference with neighboring nodes or fails to link them. The optimization of transmission power depends on the expected node degree and the node distribution. In this study, an optimization approach for an energy-efficient and fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is developed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under various node-degree conditions is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real deployments.
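
    The link between transmission range, node density, and expected node degree can be made concrete with a common Poisson approximation: a node's expected degree is density * pi * r^2. Solving for the range that yields a desired minimum expected degree gives the sketch below; this is a textbook approximation, not the paper's adjustment model.

        import math

        def range_for_expected_degree(k, density):
            """Transmission range giving an expected degree of k when nodes
            are spread with the given spatial density (Poisson approximation:
            expected degree = density * pi * r**2)."""
            return math.sqrt(k / (density * math.pi))

        r = range_for_expected_degree(k=6, density=0.01)   # ~13.8 m at 0.01 nodes/m^2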

  2. A Transmission Power Optimization with a Minimum Node Degree for Energy-Efficient Wireless Sensor Networks with Full-Reachability

    PubMed Central

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-01-01

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Un-optimized transmission power of nodes either causes interference with neighboring nodes or fails to link them. The optimization of transmission power depends on the expected node degree and the node distribution. In this study, an optimization approach for an energy-efficient and fully reachable wireless sensor network is proposed. In the proposed approach, an adjustment model of the transmission range with a minimum node degree is developed that focuses on topology control and optimization of the transmission range according to node degree and node density. The model adjusts the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network. The two indices are compared to demonstrate the practicability of the framework through simulation results. Furthermore, the relationship between the indices under various node-degree conditions is analyzed to generalize the characteristics of node densities. The research results on the reliability and feasibility of the proposed approach will benefit future real deployments. PMID:23519351

  3. Training of Attentional Filtering, but Not of Memory Storage, Enhances Working Memory Efficiency by Strengthening the Neuronal Gatekeeper Network.

    PubMed

    Schmicker, Marlen; Schwefel, Melanie; Vellage, Anne-Katrin; Müller, Notger G

    2016-04-01

    Memory training (MT) in older adults with memory deficits often leads to frustration and, therefore, is usually not recommended. Here, we pursued an alternative approach and looked for transfer effects of 1-week attentional filter training (FT) on working memory performance and its neuronal correlates in young healthy humans. The FT effects were compared with pure MT, which lacked the necessity to filter out irrelevant information. Before and after training, all participants performed an fMRI experiment that included a combined task in which stimuli had to be both filtered based on color and stored in memory. We found that training induced processing changes by biasing either filtering or storage. FT induced larger transfer effects on the untrained cognitive function than MT. FT increased neuronal activity in frontal parts of the neuronal gatekeeper network, which is proposed to hinder irrelevant information from being unnecessarily stored in memory. MT decreased neuronal activity in the BG part of the gatekeeper network but enhanced activity in the parietal storage node. We take these findings as evidence that FT renders working memory more efficient by strengthening the BG-prefrontal gatekeeper network. MT, on the other hand, simply stimulates storage of any kind of information. These findings illustrate a tight connection between working memory and attention, and they may open up new avenues for ameliorating memory deficits in patients with cognitive impairments.

  4. Modeling Complex Dynamic Interactions of Nonlinear, Aeroelastic, Multistage, and Localization Phenomena in Turbine Engines

    DTIC Science & Technology

    2011-02-25

    fast method of predicting the number of iterations needed for converged results. A new hybrid technique is proposed to predict the convergence history...interchanging between the modes, whereas a smaller veering (or crossing) region shows fast mode switching. Then, the nonlinear vibration response of the...problems of interest involve dynamic (fast) crack propagation, then the nodes selected by the proposed approach at some time instant might not

  5. MotifNet: a web-server for network motif analysis.

    PubMed

    Smoly, Ilan Y; Lerman, Eugene; Ziv-Ukelson, Michal; Yeger-Lotem, Esti

    2017-06-15

    Network motifs are small topological patterns that recur in a network significantly more often than expected by chance. Their identification emerged as a powerful approach for uncovering the design principles underlying complex networks. However, available tools for network motif analysis typically require download and execution of computationally intensive software on a local computer. We present MotifNet, the first open-access web-server for network motif analysis. MotifNet allows researchers to analyze integrated networks, where nodes and edges may be labeled, and to search for motifs of up to eight nodes. The output motifs are presented graphically and the user can interactively filter them by their significance, number of instances, node and edge labels, and node identities, and view their instances. MotifNet also allows the user to distinguish between motifs that are centered on specific nodes and motifs that recur in distinct parts of the network. MotifNet is freely available at http://netbio.bgu.ac.il/motifnet. The website was implemented using ReactJs and supports all major browsers. The server interface was implemented in Python with data stored on a MySQL database. estiyl@bgu.ac.il or michaluz@cs.bgu.ac.il. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  6. Distributed fault detection over sensor networks with Markovian switching topologies

    NASA Astrophysics Data System (ADS)

    Ge, Xiaohua; Han, Qing-Long

    2014-05-01

    This paper deals with the distributed fault detection for discrete-time Markov jump linear systems over sensor networks with Markovian switching topologies. The sensors are scattered throughout the sensor field and the fault detectors are physically distributed via a communication network. The system dynamics changes and sensing topology variations are modeled by a discrete-time Markov chain with incomplete mode transition probabilities. Each of these sensor nodes first collects measurement outputs from all of its underlying neighboring nodes, processes these data in accordance with the Markovian switching topologies, and then transmits the processed data to the remote fault detector node. Network-induced delays and accumulated data packet dropouts are incorporated in the data transmission between the sensor nodes and the distributed fault detector nodes through the communication network. To generate localized residual signals, mode-independent distributed fault detection filters are proposed. By means of the stochastic Lyapunov functional approach, the residual system performance analysis is carried out such that the overall residual system is stochastically stable and the error between each residual signal and the fault signal is made as small as possible. Furthermore, a sufficient condition on the existence of the mode-independent distributed fault detection filters is derived in the simultaneous presence of incomplete mode transition probabilities, Markovian switching topologies, network-induced delays, and accumulated data packet dropouts. Finally, a stirred-tank reactor system is given to show the effectiveness of the developed theoretical results.

  7. Ultrasonographic identification of the anatomical landmarks that define cervical lymph nodes spaces.

    PubMed

    Lenghel, Lavinia Manuela; Baciuţ, Grigore; Botar-Jid, Carolina; Vasilescu, Dan; Bojan, Anca; Dudea, Sorin M

    2013-03-01

    The localization of cervical lymph nodes is extremely important in practice for the positive and differential diagnosis as well as the staging of cervical lymphadenopathies. Ultrasonography represents the first line imaging method in the diagnosis of cervical lymphadenopathies due to its excellent resolution and high diagnosis accuracy. The present paper aims to illustrate the ultrasonographic identification of the anatomical landmarks used for the definition of cervical lymphatic spaces. The application of standardized views allows a delineation of clear anatomical landmarks and an accurate localization of the cervical lymph nodes.

  8. Generating a fault-tolerant global clock using high-speed control signals for the MetaNet architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ofek, Y.

    1994-05-01

    This work describes a new technique, based on exchanging control signals between neighboring nodes, for constructing a stable and fault-tolerant global clock in a distributed system with an arbitrary topology. It is shown that it is possible to construct a global clock reference with a time step that is much smaller than the propagation delay over the network's links. The synchronization algorithm ensures that the global clock 'tick' has a stable periodicity, and therefore, it is possible to tolerate failures of links and of clocks that operate faster and/or slower than nominally specified, as well as hard failures. The approach taken in this work is to generate a global clock from the ensemble of the local transmission clocks and not to directly synchronize these high-speed clocks. The steady-state algorithm, which generates the global clock, is executed in hardware by the network interface of each node. At the network interface, it is possible to measure the propagation delay between neighboring nodes accurately, with a small error or uncertainty, and thereby to achieve global synchronization that is proportional to these error measurements. It is shown that the local clock drift (or rate uncertainty) has only a secondary effect on the maximum global clock rate. The synchronization algorithm can tolerate any physical failure. 18 refs.

  9. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data

    PubMed Central

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest. PMID:26958859

  10. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data.

    PubMed

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest.

  11. An Electronic-Nose Sensor Node Based on a Polymer-Coated Surface Acoustic Wave Array for Wireless Sensor Network Applications

    PubMed Central

    Tang, Kea-Tiong; Li, Cheng-Han; Chiu, Shih-Wen

    2011-01-01

    This study developed an electronic-nose sensor node based on a polymer-coated surface acoustic wave (SAW) sensor array. The sensor node comprised an SAW sensor array, a frequency readout circuit, and an Octopus II wireless module. The sensor array was fabricated on a large K2 128° YX LiNbO3 sensing substrate. On the surface of this substrate, an interdigital transducer (IDT) was produced with a Cr/Au film as its metallic structure. A mixed-mode frequency readout application specific integrated circuit (ASIC) was fabricated using a TSMC 0.18 μm process. The ASIC output was connected to a wireless module to transmit sensor data to a base station for data storage and analysis. This sensor node is applicable for wireless sensor network (WSN) applications. PMID:22163865

  12. An electronic-nose sensor node based on a polymer-coated surface acoustic wave array for wireless sensor network applications.

    PubMed

    Tang, Kea-Tiong; Li, Cheng-Han; Chiu, Shih-Wen

    2011-01-01

    This study developed an electronic-nose sensor node based on a polymer-coated surface acoustic wave (SAW) sensor array. The sensor node comprised an SAW sensor array, a frequency readout circuit, and an Octopus II wireless module. The sensor array was fabricated on a large K(2) 128° YX LiNbO3 sensing substrate. On the surface of this substrate, an interdigital transducer (IDT) was produced with a Cr/Au film as its metallic structure. A mixed-mode frequency readout application specific integrated circuit (ASIC) was fabricated using a TSMC 0.18 μm process. The ASIC output was connected to a wireless module to transmit sensor data to a base station for data storage and analysis. This sensor node is applicable for wireless sensor network (WSN) applications.

  13. Thoracic lymph node station recognition on CT images based on automatic anatomy recognition with an optimal parent strategy

    NASA Astrophysics Data System (ADS)

    Xu, Guoping; Udupa, Jayaram K.; Tong, Yubing; Cao, Hanqiang; Odhner, Dewey; Torigian, Drew A.; Wu, Xingyu

    2018-03-01

    Many papers have been published on the detection and segmentation of lymph nodes from medical images. However, this remains a challenging problem owing to low contrast with surrounding soft tissues and variations in lymph node size and shape on computed tomography (CT) images. It is particularly difficult on the low-dose CT of PET/CT acquisitions. In this study, we utilize our previous automatic anatomy recognition (AAR) framework to recognize the thoracic lymph node stations defined by the International Association for the Study of Lung Cancer (IASLC) lymph node map. The lymph node stations themselves are viewed as anatomic objects and are localized by using a one-shot method in the AAR framework. Two strategies are taken in this paper for integration into the AAR framework. The first is to combine some lymph node stations into composite lymph node stations according to their geometrical nearness. The other is to find the optimal parent (organ or union of organs) as an anchor for each lymph node station, based on the recognition error, and thereby to find an overall optimal hierarchy arranging anchor organs and lymph node stations. Based on 28 contrast-enhanced thoracic CT image data sets for model building and 12 independent data sets for testing, our results show that thoracic lymph node stations can be localized to within 2-3 voxels of the ground truth.

  14. Analysis of Energy Efficiency in WSN by Considering SHM Application

    NASA Astrophysics Data System (ADS)

    Kumar, Pawan; Naresh Babu, Merugu; Raju, Kota Solomon, Dr; Sharma, Sudhir Kumar, Dr; Jain, Vaibhav

    2017-08-01

    A Wireless Sensor Network is composed of a significant number of autonomous nodes deployed in an extensive or remote area. In a WSN, the sensor nodes have a limited transmission range, processing speed, and storage capability, and their energy resources are also limited. In a WSN, not all nodes are directly connected. The primary objective for any kind of WSN is to enhance and optimize the network lifetime, i.e., to minimize the energy consumption in the WSN. Among the many applications of WSNs, this paper focuses on Structural Health Monitoring, in which a 50-meter bridge is taken as a test application for simulation purposes.

  15. Development of Low Parasitic Light Sensitivity and Low Dark Current 2.8 μm Global Shutter Pixel †

    PubMed Central

    Yokoyama, Toshifumi; Tsutsui, Masafumi; Suzuki, Masakatsu; Nishi, Yoshiaki; Mizuno, Ikuo; Lahav, Assaf

    2018-01-01

    We developed a low parasitic light sensitivity (PLS) and low dark current 2.8 μm global shutter pixel. We propose a new inner lens design concept to realize both low PLS and high quantum efficiency (QE). 1/PLS is 7700 and QE is 62% at a wavelength of 530 nm. We also propose a new storage-gate based memory node for low dark current. P-type implants and negative gate biasing are introduced to suppress dark current at the surface of the memory node. This memory node structure shows the world's smallest dark current of 9.5 e−/s at 60 °C. PMID:29370146

  16. Development of Low Parasitic Light Sensitivity and Low Dark Current 2.8 μm Global Shutter Pixel.

    PubMed

    Yokoyama, Toshifumi; Tsutsui, Masafumi; Suzuki, Masakatsu; Nishi, Yoshiaki; Mizuno, Ikuo; Lahav, Assaf

    2018-01-25

    We developed a low parasitic light sensitivity (PLS) and low dark current 2.8 μm global shutter pixel. We propose a new inner lens design concept to realize both low PLS and high quantum efficiency (QE). 1/PLS is 7700 and QE is 62% at a wavelength of 530 nm. We also propose a new storage-gate based memory node for low dark current. P-type implants and negative gate biasing are introduced to suppress dark current at the surface of the memory node. This memory node structure shows the world's smallest dark current of 9.5 e−/s at 60 °C.

  17. A wireless modular multi-modal multi-node patch platform for robust biosignal monitoring.

    PubMed

    Pantelopoulos, Alexandros; Saldivar, Enrique; Roham, Masoud

    2011-01-01

    In this paper, a wireless modular, multi-modal, multi-node patch platform is described. The platform comprises a low-cost, semi-disposable patch design aimed at unobtrusive ambulatory monitoring of multiple physiological parameters. Owing to its modular design, it can be interfaced with various low-power RF communication and data storage technologies, while the fusion of multi-modal and multi-node data enables measurement of several biosignals from multiple on-body locations for robust feature extraction. Preliminary results are presented that illustrate the platform's capability to extract respiration rate from three independent metrics, which, combined, give a more robust estimate of the actual respiratory rate.
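
    The abstract does not name the three metrics, but the fusion idea can be illustrated with a minimal sketch: given three independent per-minute respiration-rate estimates, a median combines them so that one outlying channel cannot skew the result. The numbers below are invented.

```python
# Minimal sketch, assuming three independent respiration-rate estimates
# (e.g., from three hypothetical sensing channels). A median fuses them
# robustly: one outlying channel cannot skew the combined estimate.
import statistics

def fuse_respiration_rates(estimates: list[float]) -> float:
    return statistics.median(estimates)

print(fuse_respiration_rates([14.8, 15.2, 22.0]))  # 15.2 breaths/min
```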

  18. A data management proposal to connect in a hierarchical way nodes of the Spanish Long Term Ecological Research (LTER) network

    NASA Astrophysics Data System (ADS)

    Fuentes, Daniel; Pérez-Luque, Antonio J.; Bonet García, Francisco J.; Moreno-LLorca, Ricardo A.; Sánchez-Cano, Francisco M.; Suárez-Muñoz, María

    2017-04-01

    The Long Term Ecological Research (LTER) network aims to provide the scientific community, policy makers, and society with the knowledge and predictive understanding necessary to conserve, protect, and manage ecosystems. LTER is organized into networks ranging from the global to the national scale. At the top, the International Long Term Ecological Research (ILTER) Network coordinates ecological researchers and LTER research networks at local, regional, and global scales. In Spain, the Spanish Long Term Ecological Research (LTER-Spain) network was built to foster collaboration and coordination between long-term ecological researchers and networks on a local scale. Currently composed of nine nodes, this network facilitates data exchange, documentation, and preservation, encouraging the development of cross-disciplinary work. However, most nodes have no specific information systems, tools, or qualified personnel to manage their data for continued conservation, and there are no harmonized methodologies for long-term monitoring protocols. Hence, the main challenge is to place each node in its correct position in the network, providing the best tools to let nodes manage their data autonomously and making it easier for them to access information and knowledge in the network. This work proposes a connected structure composed of four LTER nodes located in southern Spain. The structure follows a hierarchical approach: some nodes create information, which is documented using metadata standards (such as the Ecological Metadata Language, EML), while other nodes gather metadata and information. We also take into account the capacity of each node to manage its own data and the premise that data and metadata must be maintained where they are generated. The current state of the nodes is as follows: two have their own information management systems (Sierra Nevada-Granada and the Doñana Long-Term Socio-ecological Research Platform), and another has no infrastructure to maintain its data (the Arid Iberian South East LTSER Platform). The last one (Environmental Information Network of Andalusia-REDIAM) acts as the coordinator, providing physical and logical support to the other nodes, and also gathers and distributes information "uphill" to the rest of the network (LTER Europe and ILTER). The development of the network has been divided into three stages. First, existing resources and data management requirements are identified in each node. Second, the software tools and interoperable standards needed to manage and exchange the data are selected, installed, and configured at each participant. Finally, once the network is fully set up, it is expected to expand across Spain with new nodes and to connect to other LTER and similar networks. This research has been funded by the ADAPTAMED (Protection of key ecosystem services by adaptive management of Climate Change endangered Mediterranean socioecosystems) Life EU project, the Sierra Nevada Global Change Observatory (LTER site), and eLTER (Integrated European Long Term Ecosystem & Socio-Ecological Research Infrastructure).

  19. Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection.

    PubMed

    Hu, Weiming; Gao, Jun; Wang, Yanguo; Wu, Ou; Maybank, Stephen

    2014-01-01

    Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used, with decision stumps as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
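
    The sketch below conveys the flavor of boosting with decision stumps in an online setting; it is a simplified illustration, not the paper's exact update rules (which use weighted Adaboost updates and, in the second algorithm, online GMMs). Each stump thresholds one feature, tracks its running error rate, and votes with an Adaboost-style log-odds weight.

```python
# Simplified flavor of an online boosted ensemble of decision stumps
# (not the paper's exact update rules): each stump thresholds one feature,
# keeps running error counts, and votes with a log-odds weight derived
# from its observed accuracy, as in Adaboost.
import math

class OnlineStump:
    def __init__(self, feature: int, threshold: float):
        self.feature, self.threshold = feature, threshold
        self.errors, self.seen = 0, 0

    def predict(self, x):            # label in {-1, +1}
        return 1 if x[self.feature] > self.threshold else -1

    def update(self, x, y):          # track running error rate online
        self.seen += 1
        if self.predict(x) != y:
            self.errors += 1

    def weight(self):                # Adaboost-style log-odds vote weight
        eps = (self.errors + 1) / (self.seen + 2)   # smoothed error
        return 0.5 * math.log((1 - eps) / eps)

def ensemble_predict(stumps, x):
    score = sum(s.weight() * s.predict(x) for s in stumps)
    return 1 if score >= 0 else -1

stumps = [OnlineStump(0, 0.5), OnlineStump(1, 0.0)]
stream = [((0.9, -1.0), 1), ((0.1, 0.4), -1), ((0.8, 0.2), 1)]
for x, y in stream:
    for s in stumps:
        s.update(x, y)
print(ensemble_predict(stumps, (0.7, -0.3)))  # 1 (positive class)
```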

  20. A Heterogeneous Wireless Identification Network for the Localization of Animals Based on Stochastic Movements

    PubMed Central

    Gutiérrez, Álvaro; González, Carlos; Jiménez-Leube, Javier; Zazo, Santiago; Dopico, Nelson; Raos, Ivana

    2009-01-01

    The improvement in the transmission range in wireless applications without the use of batteries remains a significant challenge in identification applications. In this paper, we describe a heterogeneous wireless identification network mostly powered by kinetic energy, which allows the localization of animals in open environments. The system relies on radio communications and a global positioning system. It is made up of primary and secondary nodes. Secondary nodes are kinetic-powered and take advantage of animal movements to activate the node and transmit a specific identifier, reducing the number of batteries of the system. Primary nodes are battery-powered and gather secondary-node transmitted information to provide it, along with position and time data, to a final base station in charge of the animal monitoring. The system allows tracking based on contextual information obtained from statistical data. PMID:22412344

  1. Fault-Tolerant Local-Area Network

    NASA Technical Reports Server (NTRS)

    Morales, Sergio; Friedman, Gary L.

    1988-01-01

    Local-area network (LAN) for computers prevents single-point failure from interrupting communication between nodes of network. Includes two complete cables, LAN 1 and LAN 2. Microprocessor-based slave switches link cables to network-node devices such as work stations, print servers, and file servers. Slave switches respond to commands from master switch, connecting nodes to two cable networks or disconnecting them so they are completely isolated. System monitor and control computer (SMC) acts as gateway, allowing nodes on either cable to communicate with each other and ensuring that LAN 1 and LAN 2 are fully used when functioning properly. Network monitors and controls itself, automatically routes traffic for efficient use of resources, and isolates and corrects its own faults, with potential dramatic reduction in time out of service.

  2. Neural networks for link prediction in realistic biomedical graphs: a multi-dimensional evaluation of graph embedding-based approaches.

    PubMed

    Crichton, Gamal; Guo, Yufan; Pyysalo, Sampo; Korhonen, Anna

    2018-05-21

    Link prediction in biomedical graphs has several important applications, including predicting Drug-Target Interactions (DTI), Protein-Protein Interaction (PPI) prediction, and Literature-Based Discovery (LBD). It can be done using a classifier to output the probability of link formation between nodes. Recently, several works have used neural networks to create node representations which allow rich inputs to neural classifiers. Preliminary works report promising results, but they did not use realistic settings such as time-slicing, evaluate performance with comprehensive metrics, or explain when or why neural network methods outperform. We investigated how inputs from four node representation algorithms affect the performance of a neural link predictor on random- and time-sliced biomedical graphs of real-world sizes (∼ 6 million edges) containing information relevant to DTI, PPI and LBD. We compared the performance of the neural link predictor to those of established baselines and report performance across five metrics. In random- and time-sliced experiments, when the neural network methods were able to learn good node representations and there was a negligible number of disconnected nodes, those approaches outperformed the baselines. In the smallest graph (∼ 15,000 edges) and in larger graphs with approximately 14% disconnected nodes, baselines such as Common Neighbours proved a justifiable choice for link prediction. At low recall levels (∼ 0.3) the approaches were mostly equal, but at higher recall levels across all nodes and in average performance at individual nodes, neural network approaches were superior. Analysis showed that neural network methods performed well on links between nodes with no previous common neighbours, potentially the most interesting links. Additionally, while neural network methods benefit from large amounts of data, they require considerable amounts of computational resources to utilise them. Our results indicate that when there is enough data for the neural network methods to use and there is a negligible number of disconnected nodes, those approaches outperform the baselines. At low recall levels the approaches are mostly equal, but at higher recall levels and in average performance at individual nodes, neural network approaches are superior. Performance at nodes without common neighbours, which indicate more unexpected and perhaps more useful links, accounts for this.
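
    A minimal sketch of the kind of neural link predictor evaluated here: a node pair is represented by the Hadamard product of the two node embeddings and scored with a logistic unit. The embeddings below are random stand-ins for the output of a representation algorithm such as node2vec or DeepWalk, and the weights would normally be trained on known links.

```python
# Minimal sketch of an embedding-based link predictor: node pairs are
# represented by the Hadamard product of their embeddings and scored
# with a logistic unit. Embeddings and weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
emb = {n: rng.normal(size=16) for n in ["drugA", "targetB", "geneC"]}
w = rng.normal(size=16) * 0.1  # classifier weights (would be trained)

def link_probability(u: str, v: str) -> float:
    pair = emb[u] * emb[v]                   # Hadamard pair representation
    return 1.0 / (1.0 + np.exp(-pair @ w))   # sigmoid score

print(link_probability("drugA", "targetB"))
```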

  3. A Metascalable Computing Framework for Large Spatiotemporal-Scale Atomistic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, K; Seymour, R; Wang, W

    2009-02-17

    A metascalable (or 'design once, scale on new architectures') parallel computing framework has been developed for large spatiotemporal-scale atomistic simulations of materials based on spatiotemporal data locality principles, which is expected to scale on emerging multipetaflops architectures. The framework consists of: (1) an embedded divide-and-conquer (EDC) algorithmic framework based on spatial locality to design linear-scaling algorithms for high complexity problems; (2) a space-time-ensemble parallel (STEP) approach based on temporal locality to predict long-time dynamics, while introducing multiple parallelization axes; and (3) a tunable hierarchical cellular decomposition (HCD) parallelization framework to map these O(N) algorithms onto a multicore cluster based on hybrid implementation combining message passing and critical section-free multithreading. The EDC-STEP-HCD framework exposes maximal concurrency and data locality, thereby achieving: (1) inter-node parallel efficiency well over 0.95 for 218 billion-atom molecular-dynamics and 1.68 trillion electronic-degrees-of-freedom quantum-mechanical simulations on 212,992 IBM BlueGene/L processors (superscalability); (2) high intra-node, multithreading parallel efficiency (nanoscalability); and (3) nearly perfect time/ensemble parallel efficiency (eon-scalability). The spatiotemporal scale covered by MD simulation on a sustained petaflops computer per day (i.e. petaflops·day of computing) is estimated as NT = 2.14 (e.g. N = 2.14 million atoms for T = 1 microsecond).

  4. A Node Localization Algorithm Based on Multi-Granularity Regional Division and the Lagrange Multiplier Method in Wireless Sensor Networks.

    PubMed

    Shang, Fengjun; Jiang, Yi; Xiong, Anping; Su, Wen; He, Li

    2016-11-18

    With the integrated development of the Internet, wireless sensor technology, cloud computing, and the mobile Internet, much attention has been given to research on and applications of the Internet of Things. A Wireless Sensor Network (WSN) is one of the key information technologies in the Internet of Things; it integrates multiple technologies to detect and gather information in a network environment through mutual cooperation, using a variety of methods to process and analyze data, implement awareness, and perform tests. This paper studies the localization of sensor nodes in a wireless sensor network. Firstly, a multi-granularity region partition is proposed to divide the localization region. In the range-based method, the Received Signal Strength Indicator (RSSI) is used to estimate distance, and the optimal RSSI value is computed by Gaussian fitting. Furthermore, a Voronoi diagram is used to divide the region: each anchor node is regarded as the center of a region, the whole positioning region is divided into several regions, the sub-regions of neighboring nodes are combined into triangles, and the unknown node is locked into the final area. Secondly, the multi-granularity regional division and the Lagrange multiplier method are used to calculate the final coordinates. Because nodes are influenced by many factors in practical applications, two kinds of positioning methods are designed. When the unknown node is inside a positioning unit, we use the method of vector similarity and then the centroid algorithm to calculate the final coordinates of the unknown node. When the unknown node is outside a positioning unit, we establish a Lagrange equation containing the constraint condition to calculate initial coordinates, and then use the Taylor expansion formula to correct them. In addition, this localization method has been validated in a real environment.
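
    The range-based step can be illustrated compactly. Under the standard log-distance path-loss model, an RSSI reading maps to a distance estimate, and a weighted centroid of the anchor positions then gives coarse coordinates. The constants below (reference RSSI at 1 m, path-loss exponent) are illustrative, not the Gaussian-fitted values from the paper, and the centroid shown is a simplification of the paper's two positioning methods.

```python
# Hedged sketch of the range-based step: the log-distance path-loss model
# converts an RSSI reading into a distance estimate, and a weighted
# centroid of the anchor positions gives coarse coordinates.
def rssi_to_distance(rssi: float, rssi_d0: float = -40.0,
                     d0: float = 1.0, n: float = 2.7) -> float:
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

def weighted_centroid(anchors):
    # anchors: list of ((x, y), rssi); nearer anchors weigh more
    weights = [1.0 / rssi_to_distance(r) for _, r in anchors]
    total = sum(weights)
    x = sum(w * p[0] for (p, _), w in zip(anchors, weights)) / total
    y = sum(w * p[1] for (p, _), w in zip(anchors, weights)) / total
    return x, y

# Biased toward the strongest anchor at (0, 0):
print(weighted_centroid([((0, 0), -55), ((10, 0), -70), ((0, 10), -68)]))
```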

  5. Relative value of physical examination, mammography, and breast sonography in evaluating the size of the primary tumor and regional lymph node metastases in women receiving neoadjuvant chemotherapy for locally advanced breast carcinoma.

    PubMed

    Herrada, J; Iyer, R B; Atkinson, E N; Sneige, N; Buzdar, A U; Hortobagyi, G N

    1997-09-01

    The purpose of this study was to correlate physical examination and sonographic and mammographic measurements of breast tumors and regional lymph nodes with pathological findings and to evaluate the effect of neoadjuvant chemotherapy on clinical Tumor-Node-Metastasis stage by noninvasive methods. This was a retrospective analysis of 100 patients with locally advanced breast cancer registered and treated in prospective trials of neoadjuvant chemotherapy. All patients received four cycles of a doxorubicin-containing regimen and had noninvasive evaluation of the primary tumor and regional lymph nodes before and after neoadjuvant chemotherapy by physical examination, sonography, and mammography and underwent breast surgery and axillary dissection within 5 weeks after completion of neoadjuvant chemotherapy. The correlations between clinical and pathological measurements were determined by Spearman rank correlation analysis. A proportional odds model was used to examine predictive values. Eighty-three patients had both a clinically detectable primary tumor and lymph node metastases. Sixty-four patients had a decrease in Tumor-Node-Metastasis stage after chemotherapy. For 54% of patients, there was concordance in clinical response between the primary tumor and lymph node compartment; for the rest, results were discordant. Physical examination correlated best with pathological findings in the measurement of the primary tumor (P = 0.0003), whereas sonography was the most accurate predictor of size for axillary lymph nodes (P = 0.0005). The combination of physical examination and mammography worked best for assessment of the primary tumor (P = 0.003), whereas combining physical examination with sonography gave optimal evaluation of regional lymph nodes (P = 0.0001). In conclusion, physical examination is the best noninvasive predictor of the real size of locally advanced primary breast cancer, whereas sonography correlates better with the real dimensions of axillary lymph nodes. The combination of physical examination with either mammography or sonography significantly improves the accuracy of noninvasive assessment of tumor dimensions.

  6. Outcomes of Node-positive Breast Cancer Patients Treated With Accelerated Partial Breast Irradiation Via Multicatheter Interstitial Brachytherapy: The Pooled Registry of Multicatheter Interstitial Sites (PROMIS) Experience.

    PubMed

    Kamrava, Mitchell; Kuske, Robert R; Anderson, Bethany; Chen, Peter; Hayes, John; Quiet, Coral; Wang, Pin-Chieh; Veruttipong, Darlene; Snyder, Margaret; Demanes, David J

    2018-06-01

    To report outcomes for breast-conserving therapy using adjuvant accelerated partial breast irradiation (APBI) with interstitial multicatheter brachytherapy in node-positive compared with node-negative patients. From 1992 to 2013, 1351 patients (1369 breast cancers) were treated with breast-conserving surgery and adjuvant APBI using interstitial multicatheter brachytherapy. A total of 907 patients (835 node negative, 59 N1a, and 13 N1mic) had >1 year of data available and nodal status information and are the subject of this analysis. Median age (range) was 59 years (22 to 90 y). T stage was 90% T1, and ER/PR/Her2 was positive in 87%, 71%, and 7%. The mean number of axillary nodes removed was 12 (SD, 6). Cox multivariate analysis for local/regional control was performed using age, nodal stage, ER/PR/Her2 receptor status, tumor size, grade, margin, and adjuvant chemotherapy/antiestrogen therapy. The mean (SD) follow-up was 7.5 years (4.6). The 5-year actuarial local control (95% confidence interval) in node-negative versus node-positive patients was 96.3% (94.5-97.5) versus 95.8% (87.6-98.6) (P=0.62). The 5-year actuarial regional control in node-negative versus node-positive patients was 98.5% (97.3-99.2) versus 96.7% (87.4-99.2) (P=0.33). The 5-year actuarial freedom from distant metastasis and cause-specific survival were significantly lower in node-positive versus node-negative patients at 92.3% (82.4-96.7) versus 97.8% (96.3-98.7) (P=0.006) and 91.3% (80.2-96.3) versus 98.7% (97.3-99.3) (P=0.0001). Overall survival was not significantly different. On multivariate analysis, age ≤50 years, Her2-positive status, positive margin status, and not receiving chemotherapy or antiestrogen therapy were associated with a higher risk of local/regional recurrence. Patients who have had an axillary lymph node dissection and limited node-positive disease may be candidates for treatment with APBI. Further research is ultimately needed to better define specific criteria for APBI in node-positive patients.

  7. Motion planning with complete knowledge using a colored SOM.

    PubMed

    Vleugels, J; Kok, J N; Overmars, M

    1997-01-01

    The motion planning problem requires that a collision-free path be determined for a robot moving amidst a fixed set of obstacles. Most neural network approaches to this problem are for the situation in which only local knowledge about the configuration space is available. The main goal of the paper is to show that neural networks are also suitable tools in situations with complete knowledge of the configuration space. In this paper we present an approach that combines a neural network and deterministic techniques. We define a colored version of Kohonen's self-organizing map that consists of two different classes of nodes. The network is presented with random configurations of the robot and, from this information, it constructs a road map of possible motions in the work space. The map is a growing network, and different nodes are used to approximate boundaries of obstacles and the Voronoi diagram of the obstacles, respectively. In a second phase, the positions of the two kinds of nodes are combined to obtain the road map. In this way a number of typical problems with small obstacles and passages are avoided, and the required number of nodes for a given accuracy is within reasonable limits. This road map is searched to find a motion connecting the given source and goal configurations of the robot. The algorithm is simple and general; the only specific computation that is required is a check for intersection of two polygons. We implemented the algorithm for planar robots allowing both translation and rotation and experiments show that compared to conventional techniques it performs well, even for difficult motion planning scenes.

  8. Garbage Collection in a Distributed Object-Oriented System

    NASA Technical Reports Server (NTRS)

    Gupta, Aloke; Fuchs, W. Kent

    1993-01-01

    An algorithm is described in this paper for garbage collection in distributed systems with object sharing across processor boundaries. The algorithm allows local garbage collection at each node in the system to proceed independently of local collection at the other nodes. It requires no global synchronization or knowledge of the global state of the system and exhibits the capability of graceful degradation. The concept of a specialized dump node is proposed to facilitate the collection of inaccessible circular structures. An experimental evaluation of the algorithm is also described. The algorithm is compared with a corresponding scheme that requires global synchronization. The results show that the algorithm works well in distributed processing environments even when the locality of object references is low.
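
    A toy version of the independence property is sketched below: each node mark-sweeps its own heap while conservatively treating objects referenced from other nodes as extra roots, so no global synchronization is required. (Circular structures reachable only through remote references escape such local collection, which is what the paper's specialized dump node addresses.) Names and structures here are hypothetical.

```python
# Illustrative per-node collection consistent with independent local GC:
# mark from local roots plus remotely referenced objects, then sweep.
def local_collect(heap: dict, local_roots: set, remote_referenced: set):
    """heap maps object id -> list of referenced object ids on this node."""
    marked, stack = set(), list(local_roots | remote_referenced)
    while stack:
        obj = stack.pop()
        if obj in marked or obj not in heap:
            continue
        marked.add(obj)
        stack.extend(heap[obj])
    for obj in list(heap):          # sweep unmarked local objects
        if obj not in marked:
            del heap[obj]
    return heap

heap = {"a": ["b"], "b": [], "c": ["d"], "d": [], "e": []}
print(local_collect(heap, {"a"}, {"c"}))  # 'e' is garbage -> removed
```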

  9. Key Management Scheme Based on Route Planning of Mobile Sink in Wireless Sensor Networks.

    PubMed

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Jiang, Shengming; Chen, Wei

    2016-01-29

    In many wireless sensor network application scenarios, the key management scheme with a Mobile Sink (MS) should be fully investigated. This paper proposes a key management scheme based on dynamic clustering and optimal route choice for the MS. The concept of the Traveling Salesman Problem with Neighbor areas (TSPN) is applied to dynamic clustering for data exchange, and selection probability is used in MS route planning. The proposed scheme extends static key management to dynamic key management by considering dynamic clustering and the mobility of MSs, which can effectively balance the total energy consumption of network activities. Considering the different resources available to member nodes and the sink node, the session key between a cluster head and the MS is established by a modified elliptic curve cryptography (ECC) encryption with a Diffie-Hellman key exchange (ECDH) algorithm, and the session key between a member node and its cluster head is built with a binary symmetric polynomial. Analysis of data storage security, data transfer security, and the dynamic key management mechanism shows that the proposed scheme helps improve the resilience of the network's key management system while satisfying higher connectivity and storage efficiency.
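
    The session-key step can be illustrated with a toy Diffie-Hellman exchange over a small prime field; the paper's scheme uses the elliptic-curve variant (ECDH), which replaces modular exponentiation with curve point multiplication, plus its own modifications. The parameters below are deliberately tiny and insecure, for demonstration only.

```python
# Toy Diffie-Hellman exchange over a small prime field, illustrating the
# session-key step; the paper's scheme uses ECDH on elliptic curves.
p, g = 0xFFFFFFFB, 5          # tiny public parameters (insecure, demo only)

a_secret, b_secret = 123456, 654321          # cluster head / mobile sink
A = pow(g, a_secret, p)                      # public values exchanged
B = pow(g, b_secret, p)

key_a = pow(B, a_secret, p)                  # both sides derive the same key
key_b = pow(A, b_secret, p)
assert key_a == key_b
print(hex(key_a))
```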

  10. Fast Entanglement Establishment via Local Dynamics for Quantum Repeater Networks

    NASA Astrophysics Data System (ADS)

    Gyongyosi, Laszlo; Imre, Sandor

    Quantum entanglement is a necessity for future quantum communication networks, quantum internet, and long-distance quantum key distribution. The current approaches of entanglement distribution require high-delay entanglement transmission, entanglement swapping to extend the range of entanglement, high-cost entanglement purification, and long-lived quantum memories. We introduce a fundamental protocol for establishing entanglement in quantum communication networks. The proposed scheme does not require entanglement transmission between the nodes, high-cost entanglement swapping, entanglement purification, or long-lived quantum memories. The protocol reliably establishes a maximally entangled system between the remote nodes via dynamics generated by local Hamiltonians. The method eliminates the main drawbacks of current schemes allowing fast entanglement establishment with a minimized delay. Our solution provides a fundamental method for future long-distance quantum key distribution, quantum repeater networks, quantum internet, and quantum-networking protocols. This work was partially supported by the GOP-1.1.1-11-2012-0092 project sponsored by the EU and European Structural Fund, by the Hungarian Scientific Research Fund - OTKA K-112125, and by the COST Action MP1006.

  11. The Impact of Definitive Local Therapy for Lymph Node-Positive Prostate Cancer: A Population-Based Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rusthoven, Chad G., E-mail: chad.rusthoven@ucdenver.edu; Carlson, Julie A.; Waxweiler, Timothy V.

    2014-04-01

    Purpose: To evaluate the survival outcomes for patients with lymph node-positive, nonmetastatic prostate cancer undergoing definitive local therapy (radical prostatectomy [RP], external beam radiation therapy [EBRT], or both) versus no local therapy (NLT) in the US population in the modern prostate specific antigen (PSA) era. Methods and Materials: The Surveillance, Epidemiology, and End Results database was queried for patients with T1-4N1M0 prostate cancer diagnosed from 1995 through 2005. To allow comparisons of equivalent datasets, patients were analyzed in separate clinical (cN+) and pathologically confirmed (pN+) lymph node-positive cohorts. Kaplan-Meier overall survival (OS) and prostate cancer-specific survival (PCSS) estimates were generated, with accompanying univariate log-rank and multivariate Cox proportional hazards comparisons. Results: A total of 796 cN+ and 2991 pN+ patients were evaluable. Among cN+ patients, 43% underwent EBRT and 57% had NLT. Outcomes for cN+ patients favored EBRT, with 10-year OS rates of 45% versus 29% (P<.001) and PCSS rates of 67% versus 53% (P<.001). Among pN+ patients, 78% underwent local therapy (RP 57%, EBRT 10%, or both 11%) and 22% had NLT. Outcomes for pN+ also favored local therapy, with 10-year OS rates of 65% versus 42% (P<.001) and PCSS rates of 78% versus 56% (P<.001). On multivariate analysis, local therapy in both the cN+ and pN+ cohorts remained independently associated with improved OS and PCSS (all P<.001). Local therapy was associated with favorable hazard ratios across subgroups, including patients aged ≥70 years and those with multiple positive lymph nodes. Among pN+ patients, no significant differences in survival were observed between RP versus EBRT and RP with or without adjuvant EBRT. Conclusions: In this large, population-based cohort, definitive local therapy was associated with significantly improved survival in patients with lymph node-positive prostate cancer.

  12. 38 CFR 4.116 - Schedule of ratings-gynecological conditions and disorders of the breast.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... removal of the entire breast, underlying pectoral muscles, and regional lymph nodes up to the... nodes (in continuity with the breast). Pectoral muscles are left intact. (3) Simple (or total... lymph nodes and muscles are left intact. (4) Wide local excision (including partial mastectomy...

  13. 38 CFR 4.116 - Schedule of ratings-gynecological conditions and disorders of the breast.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... removal of the entire breast, underlying pectoral muscles, and regional lymph nodes up to the... nodes (in continuity with the breast). Pectoral muscles are left intact. (3) Simple (or total... lymph nodes and muscles are left intact. (4) Wide local excision (including partial mastectomy...

  14. 38 CFR 4.116 - Schedule of ratings-gynecological conditions and disorders of the breast.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... removal of the entire breast, underlying pectoral muscles, and regional lymph nodes up to the... nodes (in continuity with the breast). Pectoral muscles are left intact. (3) Simple (or total... lymph nodes and muscles are left intact. (4) Wide local excision (including partial mastectomy...

  15. 9 CFR 311.9 - Actinomycosis and actinobacillosis.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., strictly localized, and without suppuration, fistulous tracts, or lymph node involvement, the tongue, if free from disease, may be passed, or, when the disease is slight and confined to the lymph nodes, the... corresponding lymph nodes, the head may be passed for human food after removal and condemnation of the tongue...

  16. Enterprise storage report for the 1990's

    NASA Technical Reports Server (NTRS)

    Moore, Fred

    1991-01-01

    Data processing has become an increasingly vital function, if not the most vital function, in most businesses today. No longer only a mainframe domain, the data processing enterprise also includes the midrange and workstation platforms, either local or remote. This expanded view of the enterprise has encouraged more and more businesses to take a strategic, long-range view of information management rather than the short-term tactical approaches of the past. Some of the significant aspects of data storage in the enterprise for the 1990's are highlighted.

  17. Hotspot detection using image pattern recognition based on higher-order local auto-correlation

    NASA Astrophysics Data System (ADS)

    Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki

    2011-04-01

    Below the 40nm design node, systematic variation due to lithography must be taken into consideration during the early stages of design. So far, litho-aware design using lithography simulation models has been widely applied to assure that designs print on silicon without error. However, the lithography simulation approach is very time consuming, and under time-to-market pressure, repetitive redesign by this approach may cause the market window to be missed. This paper proposes a fast hotspot detection support method based on flexible and intelligent image pattern recognition using Higher-Order Local Autocorrelation. Our method learns the geometrical properties of defect-free design data as normal patterns, and automatically detects design patterns with hotspots in the test data as abnormal patterns. The Higher-Order Local Autocorrelation method extracts features from the graphic image of the design pattern, and the computational cost of the extraction is constant regardless of the number of design pattern polygons. This approach reduces turnaround time (TAT) dramatically even on a single CPU, compared with the conventional simulation-based approach, and with distributed processing it has proven to deliver linear scalability with each additional CPU.
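
    A simplified sketch of the feature extraction: low-order HLAC features are sums of products of the image with shifted copies of itself, so their cost is linear in image size and independent of the number of polygons. Real HLAC uses a fixed bank of local mask patterns (and does not wrap at image borders as np.roll does); the shifts below are illustrative.

```python
# Simplified low-order HLAC features on a binary layout image: the k-th
# feature sums products of the image with a shifted copy of itself.
import numpy as np

def hlac_features(img: np.ndarray, shifts=((0, 1), (1, 0), (1, 1))):
    feats = [img.sum()]                       # 0th order: pixel count
    for dy, dx in shifts:                     # 1st-order autocorrelations
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        feats.append((img * shifted).sum())
    return np.array(feats)

pattern = np.zeros((8, 8), dtype=int)
pattern[2:6, 3:5] = 1                         # a small rectangular polygon
print(hlac_features(pattern))
```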

  18. Gps-Denied Geo-Localisation Using Visual Odometry

    NASA Astrophysics Data System (ADS)

    Gupta, Ashish; Chang, Huan; Yilmaz, Alper

    2016-06-01

    The primary method for geo-localization is based on GPS, which has issues of localization accuracy, power consumption, and unavailability. This paper proposes a novel approach to geo-localization of a mobile platform in a GPS-denied environment. Our approach has two principal components: public-domain transport network data, available in GIS databases or OpenStreetMap, and a trajectory of the mobile platform estimated using visual odometry and 3D view geometry. The transport map information is abstracted as a graph data structure, where various types of roads are modelled as graph edges and intersections are typically modelled as graph nodes. A real-time search for the trajectory in the graph yields the geo-location of the mobile platform. Our approach uses a simple visual sensor and has a low memory and computational footprint. In this paper, we demonstrate our method for trajectory estimation and provide examples of geo-localization using public-domain map data. With the rapid proliferation of visual sensors as part of automated driving technology and continuous growth in public-domain map data, our approach has the potential to augment, or even supplant, GPS-based navigation, since it functions in all environments.

  19. Geological Carbon Sequestration Storage Resource Estimates for the Ordovician St. Peter Sandstone, Illinois and Michigan Basins, USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnes, David; Ellett, Kevin; Leetaru, Hannes

    The Cambro-Ordovician strata of the Midwest of the United States is a primary target for potential geological storage of CO2 in deep saline formations. The objective of this project is to develop a comprehensive evaluation of the Cambro-Ordovician strata in the Illinois and Michigan Basins above the basal Mount Simon Sandstone since the Mount Simon is the subject of other investigations including a demonstration-scale injection at the Illinois Basin Decatur Project. The primary reservoir targets investigated in this study are the middle Ordovician St Peter Sandstone and the late Cambrian to early Ordovician Knox Group carbonates. The topic of this report is a regional-scale evaluation of the geologic storage resource potential of the St Peter Sandstone in both the Illinois and Michigan Basins. Multiple deterministic-based approaches were used in conjunction with the probabilistic-based storage efficiency factors published in the DOE methodology to estimate the carbon storage resource of the formation. Extensive data sets of core analyses and wireline logs were compiled to develop the necessary inputs for volumetric calculations. Results demonstrate how the range in uncertainty of storage resource estimates varies as a function of data availability and quality, and the underlying assumptions used in the different approaches. In the simplest approach, storage resource estimates were calculated from mapping the gross thickness of the formation and applying a single estimate of the effective mean porosity of the formation. Results from this approach led to storage resource estimates ranging from 3.3 to 35.1 Gt in the Michigan Basin, and 1.0 to 11.0 Gt in the Illinois Basin at the P10 and P90 probability level, respectively. The second approach involved consideration of the diagenetic history of the formation throughout the two basins and used depth-dependent functions of porosity to derive a more realistic spatially variable model of porosity rather than applying a single estimate of porosity throughout the entire potential reservoir domains. The second approach resulted in storage resource estimates of 3.0 to 31.6 Gt in the Michigan Basin, and 0.6 to 6.1 Gt in the Illinois Basin. The third approach attempted to account for the local-scale variability in reservoir quality as a function of both porosity and permeability by using core and log analyses to calculate explicitly the net effective porosity at multiple well locations, and interpolate those results throughout the two basins. This approach resulted in storage resource estimates of 10.7 to 34.7 Gt in the Michigan Basin, and 11.2 to 36.4 Gt in the Illinois Basin. A final approach used advanced reservoir characterization as the most sophisticated means to estimating storage resource by defining reservoir properties for multiple facies within the St Peter formation. This approach was limited to the Michigan Basin since the Illinois Basin data set did not have the requisite level of data quality and sampling density to support such an analysis. Results from this approach led to storage resource estimates of 15.4 Gt to 50.1 Gt for the Michigan Basin. The observed variability in results from the four different approaches is evaluated in the context of data and methodological constraints, leading to the conclusion that the storage resource estimates from the first two approaches may be conservative, whereas the net porosity based approaches may over-estimate the resource.
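
    The deterministic volumetric calculations referenced here follow the familiar DOE-style form G_CO2 = A x h x phi x rho x E (area x net thickness x effective porosity x CO2 density x storage efficiency). The worked example below uses invented inputs, not the report's data, simply to show how the pieces combine into a gigatonne-scale estimate.

```python
# Worked example of a DOE-style volumetric storage estimate,
# G_CO2 = A * h * phi * rho * E, with illustrative (not report) inputs.
area_m2 = 50_000e6        # 50,000 km^2 of formation area
thickness_m = 30.0        # mean net thickness
porosity = 0.10           # effective mean porosity
rho_co2 = 700.0           # kg/m^3, CO2 density at reservoir conditions
efficiency = 0.02         # P10-style storage efficiency factor

g_co2_kg = area_m2 * thickness_m * porosity * rho_co2 * efficiency
print(f"{g_co2_kg / 1e12:.1f} Gt CO2")   # 2.1 Gt (1 Gt = 1e12 kg)
```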

  20. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    PubMed

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
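
    A minimal sketch in the spirit of the outer PSO loop of Dual-PSO, in which mobile nodes act as particles that move toward higher measured pollutant concentration; the Gaussian 'plume' below is a stand-in for real spectrometer-derived measurements, and the PSO coefficients are generic textbook values rather than the paper's.

```python
# Minimal PSO sketch: particles (mobile nodes) seek the concentration peak.
import random

SOURCE = (40.0, 25.0)
def concentration(x, y):              # hypothetical pollutant field
    return 2.718 ** (-((x - SOURCE[0]) ** 2 + (y - SOURCE[1]) ** 2) / 200)

particles = [{"p": [random.uniform(0, 100), random.uniform(0, 100)],
              "v": [0.0, 0.0], "best": None, "best_f": -1} for _ in range(8)]
gbest, gbest_f = None, -1
for step in range(200):
    for pt in particles:              # evaluate fitness, update bests
        f = concentration(*pt["p"])
        if f > pt["best_f"]:
            pt["best"], pt["best_f"] = list(pt["p"]), f
        if f > gbest_f:
            gbest, gbest_f = list(pt["p"]), f
    for pt in particles:              # standard velocity/position update
        for d in range(2):
            r1, r2 = random.random(), random.random()
            pt["v"][d] = (0.7 * pt["v"][d]
                          + 1.5 * r1 * (pt["best"][d] - pt["p"][d])
                          + 1.5 * r2 * (gbest[d] - pt["p"][d]))
            pt["p"][d] += pt["v"][d]
print([round(c, 1) for c in gbest])   # converges near (40.0, 25.0)
```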

  1. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    PubMed Central

    Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-01-01

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths. PMID:29462929

  2. Local Higher-Order Graph Clustering

    PubMed Central

    Yin, Hao; Benson, Austin R.; Leskovec, Jure; Gleich, David F.

    2018-01-01

    Local graph clustering methods aim to find a cluster of nodes by exploring a small region of the graph. These methods are attractive because they enable targeted clustering around a given seed node and are faster than traditional global graph clustering methods because their runtime does not depend on the size of the input graph. However, current local graph partitioning methods are not designed to account for the higher-order structures crucial to the network, nor can they effectively handle directed networks. Here we introduce a new class of local graph clustering methods that address these issues by incorporating higher-order network information captured by small subgraphs, also called network motifs. We develop the Motif-based Approximate Personalized PageRank (MAPPR) algorithm that finds clusters containing a seed node with minimal motif conductance, a generalization of the conductance metric for network motifs. We generalize existing theory to prove the fast running time (independent of the size of the graph) and obtain theoretical guarantees on the cluster quality (in terms of motif conductance). We also develop a theory of node neighborhoods for finding sets that have small motif conductance, and apply these results to the case of finding good seed nodes to use as input to the MAPPR algorithm. Experimental validation on community detection tasks in both synthetic and real-world networks shows that our new framework MAPPR outperforms the current edge-based personalized PageRank methodology. PMID:29770258
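
    The quantity MAPPR minimizes can be computed directly on a toy graph. For the triangle motif, the cut counts triangles with endpoints on both sides of a partition, and the volume counts triangle endpoints inside a side; the sketch below evaluates this for a graph made of two triangles joined by an edge.

```python
# Sketch of triangle-motif conductance, the quantity MAPPR minimizes.
from itertools import combinations

edges = {("a","b"), ("b","c"), ("a","c"), ("c","d"), ("d","e"), ("e","f"), ("d","f")}
nodes = {u for e in edges for u in e}
adj = {n: set() for n in nodes}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

# Enumerate triangles: node triples whose three pairs are all adjacent.
triangles = [t for t in combinations(sorted(nodes), 3)
             if all(b in adj[a] for a, b in combinations(t, 2))]

def motif_conductance(S: set) -> float:
    cut = sum(1 for t in triangles if 0 < len(S & set(t)) < 3)
    vol = lambda side: sum(len(side & set(t)) for t in triangles)
    return cut / min(vol(S), vol(nodes - S))

print(triangles)                           # two triangles: abc and def
print(motif_conductance({"a", "b", "c"}))  # 0.0: no triangle is cut
```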

  3. Improved local and regional control with radiotherapy for Merkel cell carcinoma of the head and neck.

    PubMed

    Strom, Tobin; Naghavi, Arash O; Messina, Jane L; Kim, Sungjune; Torres-Roca, Javier F; Russell, Jeffery; Sondak, Vernon K; Padhya, Tapan A; Trotti, Andy M; Caudell, Jimmy J; Harrison, Louis B

    2017-01-01

    We hypothesized that radiotherapy (RT) would improve both local and regional control with Merkel cell carcinoma of the head and neck. A single-institution institutional review board-approved study was performed including 113 patients with nonmetastatic Merkel cell carcinoma of the head and neck. Postoperative RT was delivered to the primary tumor bed (71.7% cases) ± draining lymphatics (33.3% RT cases). Postoperative local RT was associated with improved local control (3-year actuarial local control 89.4% vs 68.1%; p = .005; Cox hazard ratio [HR] 0.18; 95% confidence interval [CI] = 0.06-0.55; p = .002). Similarly, regional RT was associated with improved regional control (3-year actuarial regional control 95.0% vs 66.7%; p = .008; Cox HR = 0.09; 95% CI = 0.01-0.69; p = .02). Regional RT played an important role for both clinical node-negative patients (3-year regional control 100% vs 44.7%; p = .03) and clinical/pathological node-positive patients (3-year regional control 90.9% vs 55.6%; p = .047). Local RT was beneficial for all patients with Merkel cell carcinoma of the head and neck, whereas regional RT was beneficial for clinical node-negative and clinical/pathological node-positive patients. © 2016 Wiley Periodicals, Inc. Head Neck 39: 48-55, 2017.

  4. Comparison of Node-Centered and Cell-Centered Unstructured Finite-Volume Discretizations: Viscous Fluxes

    NASA Technical Reports Server (NTRS)

    Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.

    2010-01-01

    Discretization of the viscous terms in current finite-volume unstructured-grid schemes is compared using node-centered and cell-centered approaches in two dimensions. Accuracy and complexity are studied for four nominally second-order accurate schemes: a node-centered scheme and three cell-centered schemes - a node-averaging scheme and two schemes with nearest-neighbor and adaptive compact stencils for least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class of tests involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds number turbulent flow simulations. Tests from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The tests of the second class are more discriminating. The node-centered scheme is always second order, with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes may degenerate on mixed grids, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme. For simulations on highly anisotropic curved grids, the least-square methods have to be amended, either by introducing a local mapping based on a distance function commonly available in practical schemes, or by modifying the scheme stencil to reflect the direction of strong coupling. The major conclusion is that the accuracies of the node-centered and the best cell-centered schemes are comparable at an equivalent number of degrees of freedom.
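
    The basic ingredient of the least-squares reconstructions compared in the paper can be sketched in a few lines: the gradient at a cell is the least-squares fit of solution differences to the offsets of neighbor cell centers. The stencil and values below are invented; for the linear field u = 1 + 2x + 3y the fit recovers the exact gradient (2, 3).

```python
# Minimal sketch of least-squares gradient reconstruction: fit grad(u)
# at a cell from differences to neighbor cell centers.
import numpy as np

center, u_c = np.array([0.0, 0.0]), 1.0
neighbors = [(np.array([1.0, 0.1]), 3.3),     # (position, u value)
             (np.array([-0.9, 0.2]), -0.2),
             (np.array([0.1, 1.1]), 4.5)]

A = np.array([p - center for p, _ in neighbors])   # offset matrix
b = np.array([u - u_c for _, u in neighbors])      # solution differences
grad, *_ = np.linalg.lstsq(A, b, rcond=None)
print(grad)   # [2. 3.], the gradient of u = 1 + 2x + 3y
```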

  5. Effects of awareness diffusion and self-initiated awareness behavior on epidemic spreading - An approach based on multiplex networks

    NASA Astrophysics Data System (ADS)

    Kan, Jia-Qian; Zhang, Hai-Feng

    2017-03-01

    In this paper, we study the interplay between epidemic spreading and the diffusion of awareness in multiplex networks. In the model, an infectious disease can spread in one network representing the paths of epidemic spreading (the contact network), leading to the diffusion of awareness in the other network (the information network); the diffusion of awareness then causes individuals to take social distancing measures, which in turn affects the epidemic spreading. As for the diffusion of awareness, we assume that, on the one hand, individuals can be informed by aware neighbors in the information network, and on the other hand, susceptible individuals can become aware spontaneously, induced by infected neighbors in the contact network (local information) or by mass media (global information). Through a Markov chain approach and numerical computations, we find that the density of infected individuals and the epidemic threshold can be affected by the structures of the two networks and the effective transmission rate of the awareness. However, we prove that although the introduction of self-awareness can lower the density of infection, it cannot increase the epidemic threshold, whether the information is local or global. This finding differs remarkably from many previous results on single-layer networks, where behavioral responses based on local information can alter the epidemic threshold. Furthermore, our results indicate that nodes with more neighbors (hub nodes) in the information network are more easily informed, so their risk of infection in the contact network can be effectively reduced.
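
    As a numeric companion to the Markov chain analysis, the sketch below computes the classic single-network epidemic threshold from the spectral radius of the adjacency matrix, beta_c = mu / lambda_max(A); in the two-layer model of the paper, awareness effectively rescales the entries of A, shifting (or, as the authors prove for self-awareness, failing to shift) this threshold. The small adjacency matrix is illustrative.

```python
# Numeric illustration: single-network epidemic threshold from the
# spectral radius of the adjacency matrix, beta_c = mu / lambda_max(A).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
mu = 0.4                                    # recovery probability
lam_max = np.max(np.linalg.eigvals(A).real)
print(f"epidemic threshold beta_c = {mu / lam_max:.3f}")  # ~0.156
```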

  6. Delivery of video-on-demand services using local storages within passive optical networks.

    PubMed

    Abeywickrama, Sandu; Wong, Elaine

    2013-01-28

    At present, distributed storage systems have been widely studied to alleviate Internet traffic build-up caused by high-bandwidth, on-demand applications. Distributed storage arrays located locally within the passive optical network were previously proposed to deliver Video-on-Demand services. As an added feature, a popularity-aware caching algorithm was also proposed to dynamically maintain the most popular videos in the storage arrays of such local storages. In this paper, we present a new dynamic bandwidth allocation algorithm to improve Video-on-Demand services over passive optical networks using local storages. The algorithm exploits the use of standard control packets to reduce the time taken for the initial request communication between the customer and the central office, and to maintain the set of popular movies in the local storage. We conduct packet level simulations to perform a comparative analysis of the Quality-of-Service attributes between two passive optical networks, namely the conventional passive optical network and one that is equipped with a local storage. Results from our analysis highlight that strategic placement of a local storage inside the network enables the services to be delivered with improved Quality-of-Service to the customer. We further formulate power consumption models of both architectures to examine the trade-off between enhanced Quality-of-Service performance versus the increased power requirement from implementing a local storage within the network.
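
    The popularity-aware caching idea can be sketched as follows. This is a hypothetical, simplified cache that keeps the most-requested titles in local storage; the eviction rule is an assumption for illustration, not the algorithm evaluated in the paper.

      from collections import Counter

      class PopularityCache:
          """Minimal sketch of a popularity-aware video cache for a local
          storage node: keep the `capacity` most-requested titles, evicting
          the least popular title once a newcomer overtakes it."""
          def __init__(self, capacity):
              self.capacity = capacity
              self.requests = Counter()   # long-run popularity estimate
              self.cached = set()

          def request(self, video):
              self.requests[video] += 1
              if video in self.cached:
                  return "hit: serve from local storage"
              if len(self.cached) < self.capacity:
                  self.cached.add(video)
              else:
                  coldest = min(self.cached, key=lambda v: self.requests[v])
                  if self.requests[video] > self.requests[coldest]:
                      self.cached.remove(coldest)
                      self.cached.add(video)
              return "miss: fetch from central office"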

  7. Quantifying and Mapping the Supply of and Demand for Carbon Storage and Sequestration Service from Urban Trees

    PubMed Central

    Zhao, Chang; Sander, Heather A.

    2015-01-01

    Studies that assess the distribution of benefits provided by ecosystem services across urban areas are increasingly common. Nevertheless, current knowledge of both the supply and demand sides of ecosystem services remains limited, leaving a gap in our understanding of the balance between ecosystem service supply and demand that restricts our ability to assess and manage these services. The present study seeks to fill this gap by developing and applying an integrated approach to quantifying the supply and demand of a key ecosystem service, carbon storage and sequestration, at the local level. This approach follows three basic steps: (1) quantifying and mapping service supply based upon Light Detection and Ranging (LiDAR) processing and allometric models, (2) quantifying and mapping demand for carbon sequestration using an indicator based on local anthropogenic CO2 emissions, and (3) mapping a supply-to-demand ratio. We illustrate this approach using a portion of the Twin Cities Metropolitan Area of Minnesota, USA. Our results indicate that 1735.69 million kg of carbon are stored by urban trees in our study area. Annually, 33.43 million kg of carbon are sequestered by trees, whereas 3087.60 million kg of carbon are emitted by human sources. Thus, the carbon sequestration service provided by urban trees in the study location plays a minor role in combating climate change, offsetting approximately 1% of local anthropogenic carbon emissions per year, although avoided emissions via storage in trees are substantial. Our supply-to-demand ratio map provides insight into the balance between carbon sequestration supply in urban trees and demand for such sequestration at the local level, pinpointing critical locations where higher levels of supply and demand exist. Such a ratio map could help planners and policy makers to assess and manage the supply of and demand for carbon sequestration. PMID:26317530
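
    The roughly 1% offset quoted above follows directly from the reported figures, as the short computation below shows (the variable names are ours):

      # Figures reported in the abstract (million kg C per year, study area)
      sequestration_supply = 33.43     # annual carbon uptake by urban trees
      emission_demand = 3087.60        # annual anthropogenic emissions (as C)

      ratio = sequestration_supply / emission_demand
      print(f"supply-to-demand ratio: {ratio:.4f} (~{100 * ratio:.1f}% offset)")
      # -> 0.0108, i.e. about 1.1%, consistent with the ~1% cited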

  8. Fiber-connected position localization sensor networks

    NASA Astrophysics Data System (ADS)

    Pan, Shilong; Zhu, Dan; Fu, Jianbin; Yao, Tingfeng

    2014-11-01

    Position localization has drawn great attention due to its wide applications in radars, sonars, electronic warfare, wireless communications and so on. Photonic approaches to position localization can achieve high resolution and also make it possible to move the signal processing from each sensor node to the central station, thanks to the low loss, immunity to electromagnetic interference (EMI) and broad bandwidth offered by photonic technologies. In this paper, we present a review of recent work on position localization based on photonic technologies. A fiber-connected ultra-wideband (UWB) sensor network using optical time-division multiplexing (OTDM) is proposed to realize high-resolution localization while moving the signal processing to the central station. A high spatial resolution of 3.9 cm is achieved. A wavelength-division multiplexed (WDM) fiber-connected sensor network is also demonstrated to realize localization independent of the received signal format.
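
    For orientation, the reported 3.9-cm figure is consistent with sub-nanosecond timing resolution under the usual time-difference relation for a fiber link; the numbers in the sketch below are illustrative assumptions, not values taken from the paper.

      # Sketch: spatial resolution of a time-difference measurement over fiber,
      # assuming the standard relation dx = v_g * dt / 2 with group velocity
      # v_g ~ c / 1.5 in silica fiber. The timing figure is assumed.
      c = 3.0e8                    # vacuum speed of light, m/s
      v_g = c / 1.5                # approximate group velocity in fiber
      dt = 390e-12                 # assumed achievable timing resolution, s
      dx = v_g * dt / 2            # position resolution along the path
      print(f"spatial resolution: {dx * 100:.1f} cm")   # ~3.9 cm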

  9. Simultaneously Discovering and Localizing Common Objects in Wild Images.

    PubMed

    Wang, Zhenzhen; Yuan, Junsong

    2018-09-01

    Motivated by the recent success of supervised and weakly supervised common object discovery, in this paper we take one step further to tackle common object discovery in a fully unsupervised way. Generally, object co-localization aims at simultaneously localizing objects of the same class across a group of images. Traditional object localization/detection usually trains specific object detectors which require bounding box annotations of object instances, or at least image-level labels to indicate the presence/absence of objects in an image. Given a collection of images without any annotations, our proposed fully unsupervised method simultaneously discovers the images that contain common objects and localizes the common objects within them. Without requiring knowledge of the total number of common objects, we formulate this unsupervised object discovery as a sub-graph mining problem on a weighted graph of object proposals, where nodes correspond to object proposals and edges represent the similarities between neighbouring proposals. The positive images and common objects are jointly discovered by finding sub-graphs of strongly connected nodes, with each sub-graph capturing one object pattern. The optimization problem can be efficiently solved by our proposed maximal-flow-based algorithm. Instead of assuming that each image contains only one common object, our proposed solution can better address wild images where each image may contain multiple common objects or even no common object. Moreover, our proposed method can be easily tailored to the task of image retrieval, in which the nodes correspond to the similarity between query and reference images. Extensive experiments on the PASCAL VOC 2007 and Object Discovery data sets demonstrate that even without any supervision, our approach can discover/localize common objects of various classes in the presence of scale, viewpoint, appearance variation, and partial occlusions. We also conduct broad experiments on the image retrieval benchmarks Holidays and Oxford5k to show that our proposed method, which considers both the similarity between query and reference images and the similarities among reference images, can help to improve the retrieval results significantly.
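
    The sub-graph mining step can be approximated with a much simpler stand-in: build the weighted proposal graph and extract groups of strongly connected proposals. The sketch below uses thresholded connected components as a crude approximation; the paper's actual method is a maximal-flow-based optimization, and the threshold and sizes here are invented for illustration.

      import networkx as nx

      def mine_object_patterns(similarity, threshold=0.6, min_size=3):
          """Simplified stand-in for the sub-graph mining step.

          similarity : dict mapping proposal-id pairs (i, j) to scores.
          Keeps strong edges and returns groups of strongly connected
          proposals as candidate common-object patterns."""
          g = nx.Graph()
          for (i, j), s in similarity.items():
              if s >= threshold:
                  g.add_edge(i, j, weight=s)
          return [c for c in nx.connected_components(g) if len(c) >= min_size]

      sims = {(0, 1): 0.9, (1, 2): 0.8, (0, 2): 0.7, (3, 4): 0.9,
              (4, 5): 0.85, (5, 3): 0.7, (2, 3): 0.2}   # one weak cross link
      print(mine_object_patterns(sims))   # -> [{0, 1, 2}, {3, 4, 5}]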

  10. Simplifying Logistics and Avoiding the Unnecessary in Patients With Breast Cancer Undergoing Sentinel Node Biopsy. A Prospective Feasibility Trial of the Preoperative Injection of Super Paramagnetic Iron Oxide Nanoparticles.

    PubMed

    Karakatsanis, A; Olofsson, H; Stålberg, P; Bergkvist, L; Abdsaleh, S; Wärnberg, F

    2018-06-01

    Sentinel node is routinely localized with the intraoperative use of a radioactive tracer, involving challenging logistics. Super paramagnetic iron oxide nanoparticle is a non-radioactive tracer with comparable performance that could allow for preoperative localization, would simplify the procedure, and could possibly be of value in axillary mapping before neoadjuvant treatment. The current trial aimed to test the a priori hypothesis that the injection of super paramagnetic iron oxide nanoparticles in the preoperative period for the localization of the sentinel node is feasible. This is a prospective feasibility trial, conducted from 9 September 2014 to 22 October 2014 at Uppsala University Hospital. In all, 12 consecutive patients with primary breast cancer planned for resection of the primary and sentinel node biopsy were recruited. Super paramagnetic iron oxide nanoparticles were injected at the preoperative visit in the outpatient clinic. The radioactive tracer (99mTc) and the blue dye were injected perioperatively in standard fashion. A volunteer was injected with super paramagnetic iron oxide nanoparticles to follow the decline in the magnetic signal in the sentinel node over time. The primary outcome was successful sentinel node detection. Detection after preoperative injection of super paramagnetic iron oxide nanoparticles (3-15 days before surgery) was successful in all cases (100%). In the volunteer, the axillary signal persisted for 4 weeks. No adverse effects were noted. Conclusion and relevance: Preoperative super paramagnetic iron oxide nanoparticle injection is feasible and leads to successful detection of the sentinel node. That may lead to simplified logistics as well as the identification, sampling, and marking of the sentinel node in patients planned for neoadjuvant treatment.

  11. Use of a Hybrid Edge Node-Centroid Node Approach to Thermal Modeling

    NASA Technical Reports Server (NTRS)

    Peabody, Hume L.

    2010-01-01

    A recent proposal submitted for an ESA mission required that models be delivered in ESARAD/ESATAN formats. ThermalDesktop was the preferable analysis code to be used for model development, with a conversion done as the final step before delivery. However, due to some differences between the capabilities of the two codes, a unique approach was developed to take advantage of the edge node capability of ThermalDesktop while maintaining the centroid node approach used by ESARAD. In essence, two separate meshes were used: one for conduction and one for radiation. The conduction calculations were eliminated from the radiation surfaces, and the capacitance and radiative calculations were eliminated from the conduction surfaces. The resulting conduction surface nodes were coincident with all nodes of the radiation surface and were subsequently merged, while the nodes along the edges remained free. Merging of nodes on the edges of adjacent surfaces provided the conductive links between surfaces. Lastly, all nodes along edges were placed into the subnetwork, and the resulting supernetwork included only the nodes associated with radiation surfaces. This approach had both benefits and disadvantages. The use of centroid, surface-based radiation reduces the overall size of the radiation network, which is often the most computationally intensive part of the modeling process. Furthermore, using the conduction surfaces and allowing ThermalDesktop to calculate the conduction network can save significant time by not having to manually generate the couplings. Finally, the resulting GMM/TMM models can be exported to formats which do not support edge nodes. One drawback, however, is the necessity to maintain two sets of surfaces. This requires additional care on the part of the analyst to ensure communication between the conductive and radiative surfaces in the resulting overall network. However, with more frequent use of this technique, the benefits of this approach can far outweigh the additional effort.
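
    The node-merging step at the heart of this technique is easy to sketch: find conduction-mesh nodes that coincide with radiation-mesh nodes and merge them, leaving unmatched (edge) nodes free. The function below is a hypothetical illustration of that matching, not ThermalDesktop's implementation.

      import numpy as np

      def merge_coincident_nodes(cond_nodes, rad_nodes, tol=1e-9):
          """Sketch of the node-merging step for a dual-mesh thermal model.

          cond_nodes, rad_nodes : (n, 3) and (m, 3) arrays of node
          coordinates for the conduction and radiation meshes. Returns pairs
          (i_cond, j_rad) of nodes coinciding within `tol`, to be merged
          into single network nodes; unmatched conduction nodes stay free
          (e.g. the surface-edge nodes that link adjacent surfaces)."""
          merged = []
          for i, xc in enumerate(cond_nodes):
              d = np.linalg.norm(rad_nodes - xc, axis=1)
              j = int(np.argmin(d))
              if d[j] <= tol:
                  merged.append((i, j))
          return merged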

  12. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by dynamically adjusting local routing strategies

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-03-16

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Each node implements a respective routing strategy for routing data through the network, the routing strategies not necessarily being the same in every node. The routing strategies implemented in the nodes are dynamically adjusted during application execution to shift network workload as required. Preferably, adjustment of routing strategies in selected nodes is performed at synchronization points. The network may be dynamically monitored, and routing strategies adjusted according to detected network conditions.
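
    A toy version of the per-node adjustment logic might look like the following; the strategy names and the congestion test are invented for illustration and are not the patented method.

      import random

      STRATEGIES = ["minimal_hop", "adaptive_detour", "dimension_ordered"]  # illustrative

      def adjust_strategy(node_stats, current, congestion_limit=0.8):
          """Sketch of per-node routing-strategy adjustment at a
          synchronization point: if any outgoing link observed since the
          last sync point is saturated, switch this node to a different
          strategy to shift load; otherwise keep the current one."""
          if max(node_stats["link_utilization"]) > congestion_limit:
              return random.choice([s for s in STRATEGIES if s != current])
          return current

      print(adjust_strategy({"link_utilization": [0.4, 0.95, 0.3]}, "minimal_hop"))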

  13. Adaptive triangular mesh generation

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Eiseman, P. R.

    1984-01-01

    A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.

  14. IgG4-related prostatitis progressed from localized IgG4-related lymphadenopathy.

    PubMed

    Li, Dujuan; Kan, Yunzhen; Fu, Fangfang; Wang, Shuhuan; Shi, Ligang; Liu, Jie; Kong, Lingfei

    2015-01-01

    Immunoglobulin G4-related disease (IgG4-RD) is a recently described inflammatory disease involving multiple organs. Prostate involvement with IgG4-RD is very rare. In this report, we describe a case of IgG4-related prostatitis that progressed from localized IgG4-related lymphadenopathy. The patient presented with urinary retention symptoms. MRI and CT examination revealed prostatic enlargement and multiple enlarged lymph nodes. Serum IgG4 levels were elevated. Prostatic tissue samples resected both this time and less than 1 year earlier showed the same histological type of prostatitis, with histopathologic and immunohistochemical findings characteristic of IgG4-RD. The right submandibular lymph nodes excised 2 years earlier were eventually proven to be follicular hyperplasia-type IgG4-related lymphadenopathy. This is the first case of IgG4-RD that began as localized IgG4-related lymphadenopathy and progressed into a systemic disease involving the prostate and multiple lymph nodes. The patient showed a good response to steroid therapy. This leads us to advocate a novel pathogenesis of prostatitis, and a novel therapeutic approach to it. Pathologists and urologists should consider this disease entity in patients with elevated serum IgG4 levels and symptoms of prostatic hyperplasia to avoid ineffective medical or unnecessary surgical treatment.

  15. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.
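
    The sampling methodology can be illustrated with a short sketch that reduces many bandwidth measurements to a distribution rather than a single number; the quantile summary below is our own minimal example, not the paper's full statistical treatment.

      import numpy as np

      def bandwidth_distribution(samples, quantiles=(0.1, 0.5, 0.9)):
          """Summarize delivered write bandwidth across repeated samples
          (per node group / storage target / time window) instead of
          trusting a single measurement on a shared, loaded machine."""
          s = np.asarray(samples, dtype=float)
          qs = np.quantile(s, quantiles)
          return {f"p{int(q * 100)}": v for q, v in zip(quantiles, qs)}

      # e.g. GB/s measured over many sampled intervals on a shared system
      obs = np.random.lognormal(mean=2.5, sigma=0.6, size=1000)
      print(bandwidth_distribution(obs))   # p10-p90 spread exposes stragglers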

  16. The production deployment of IPv6 on WLCG

    NASA Astrophysics Data System (ADS)

    Bernier, J.; Campana, S.; Chadwick, K.; Chudoba, J.; Dewhurst, A.; Eliáš, M.; Fayer, S.; Finnern, T.; Grigoras, C.; Hartmann, T.; Hoeft, B.; Idiculla, T.; Kelsey, D. P.; López Muñoz, F.; Macmahon, E.; Martelli, E.; Millar, A. P.; Nandakumar, R.; Ohrenberg, K.; Prelz, F.; Rand, D.; Sciabà, A.; Tigerstedt, U.; Voicu, R.; Walker, C. J.; Wildish, T.

    2015-12-01

    The world is rapidly running out of IPv4 addresses; the number of IPv6 end systems connected to the internet is increasing; WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only an IPv6 routable address. The HEPiX IPv6 Working Group has been investigating, testing and planning for dual-stack services on WLCG for several years. Following feedback from our working group, many of the storage technologies in use on WLCG have recently been made IPv6-capable. This paper presents the IPv6 requirements, tests and plans of the LHC experiments together with the tests performed on the group's IPv6 test-bed. This is primarily aimed at IPv6-only worker nodes or VMs accessing several different implementations of a global dual-stack federated storage service. Finally the plans for deployment of production dual-stack WLCG services are presented.
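
    A dual-stack client of the kind discussed here typically resolves both address families and tries IPv6 first, so IPv6-only worker nodes can still connect while dual-stack clients prefer IPv6. The sketch below shows the idea with the standard socket API; the host name and port are hypothetical.

      import socket

      def resolve_dual_stack(host, port):
          """Resolve a (hypothetical) dual-stack storage endpoint and order
          IPv6 addresses first; a client would then try them in order."""
          infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
          return sorted(infos, key=lambda ai: ai[0] != socket.AF_INET6)

      # Hypothetical endpoint name and port, for illustration only
      for family, _, _, _, addr in resolve_dual_stack("storage.example.org", 1094):
          print("IPv6" if family == socket.AF_INET6 else "IPv4", addr)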

  17. Improving local clustering based top-L link prediction methods via asymmetric link clustering information

    NASA Astrophysics Data System (ADS)

    Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan

    2018-02-01

    Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques. We note that clustering information plays an important role in solving the link prediction problem. In the previous literature, the node clustering coefficient appears frequently in link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish a node's different clustering abilities with respect to different node pairs. In this paper, we shift our focus from nodes to links and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, achieving especially remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
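
    For context, the node-clustering baseline that ALC refines can be written compactly: score a candidate link by summing the clustering coefficients of its common neighbors (the CCLP-style score). The sketch below shows that baseline, not the ALC coefficient itself.

      import networkx as nx

      def cclp_score(g, x, y, clustering=None):
          """Node-clustering-based link prediction score: sum the clustering
          coefficients of the common neighbors of x and y."""
          clustering = clustering or nx.clustering(g)
          return sum(clustering[z] for z in nx.common_neighbors(g, x, y))

      g = nx.karate_club_graph()
      c = nx.clustering(g)
      top = sorted(nx.non_edges(g),
                   key=lambda e: cclp_score(g, *e, clustering=c),
                   reverse=True)[:5]
      print(top)   # top-L candidate links under the node-clustering baseline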

  18. Neural network based feed-forward high density associative memory

    NASA Technical Reports Server (NTRS)

    Daud, T.; Moopenn, A.; Lamb, J. L.; Ramesham, R.; Thakoor, A. P.

    1987-01-01

    A novel thin-film approach to neural-network-based high-density associative memory is described. The information is stored locally in a memory matrix of passive, nonvolatile, binary connection elements with the potential to achieve a storage density of 10^9 bits/cm^2. Microswitches based on memory switching in thin-film hydrogenated amorphous silicon, and alternatively in manganese oxide, have been used as programmable read-only memory elements. Low-energy switching has been ascertained in both of these materials. Fabrication and testing of the memory matrix are described. High-speed associative recall approaching 10^7 bits/s and high storage capacity in such a connection-matrix memory system are also described.
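
    A binary connection matrix of this kind behaves like a Willshaw-style associative memory: store patterns by switching on the crossings of co-active lines, and recall with a single feed-forward thresholded pass. The sketch below is a minimal software analogue of such a matrix, not the hardware design described in the record.

      import numpy as np

      def store(patterns):
          """Binary connection matrix: a switch is ON if the two units are
          co-active in any stored pattern (a simplified software stand-in
          for the programmable binary microswitch matrix)."""
          W = np.zeros((patterns.shape[1],) * 2, dtype=bool)
          for p in patterns.astype(bool):
              W |= np.outer(p, p)
          return W

      def recall(W, cue, k):
          """One feed-forward pass: units receiving input from all k active
          cue lines fire (the classic Willshaw threshold rule)."""
          return (W.astype(int) @ cue) >= k

      pats = np.zeros((2, 16), dtype=int)
      pats[0, [1, 5, 9]] = 1
      pats[1, [2, 6, 12]] = 1
      W = store(pats)
      noisy = np.zeros(16, dtype=int); noisy[[1, 5]] = 1   # partial cue
      print(np.flatnonzero(recall(W, noisy, k=2)))          # -> [1 5 9]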

  19. Performance and policy dimensions in internet routing

    NASA Technical Reports Server (NTRS)

    Mills, David L.; Boncelet, Charles G.; Elias, John G.; Schragger, Paul A.; Jackson, Alden W.; Thyagarajan, Ajit

    1995-01-01

    The Internet Routing Project, referred to in this report as the 'Highball Project', has been investigating architectures suitable for networks spanning large geographic areas and capable of very high data rates. The Highball network architecture is based on high-speed crossbar switches, one of which is attached to each node, and an adaptive, distributed, TDMA scheduling algorithm. The scheduling algorithm controls the instantaneous configuration and dwell time of the switches. In order to send a single burst or a multi-burst packet, a reservation request is sent to all nodes. The scheduling algorithm then configures the switches immediately prior to the arrival of each burst, so it can be relayed immediately without requiring local storage. Reservations and housekeeping information are sent using a special broadcast-spanning-tree schedule. Progress to date in the Highball Project includes the design and testing of a suite of scheduling algorithms, construction of software reservation/scheduling simulators, and construction of a strawman hardware and software implementation. A prototype switch controller and timestamp generator have been completed and are in test. Detailed documentation on the algorithms, protocols and experiments conducted is given in various reports and papers published. Abstracts of this literature are included in the bibliography at the end of this report, which serves as an extended executive summary.
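
    The reservation-driven scheduling idea can be sketched as a deterministic slot assignment that every node computes identically from the broadcast requests, so bursts are relayed without local storage at intermediate nodes; the greedy rule below is an illustrative simplification of the Highball scheduling algorithms, not any of the actual ones.

      def schedule_bursts(requests, horizon):
          """Toy reservation-driven TDMA schedule: assign each burst the
          earliest slot in which both its source and destination crossbar
          ports are free. Deterministic, so all nodes agree on the result.

          requests : list of (burst_id, src_port, dst_port) tuples
          Returns {burst_id: slot}."""
          busy = [set() for _ in range(horizon)]    # ports in use per slot
          plan = {}
          for burst, src, dst in sorted(requests):  # identical order everywhere
              for t in range(horizon):
                  if src not in busy[t] and dst not in busy[t]:
                      busy[t] |= {src, dst}
                      plan[burst] = t
                      break
          return plan

      print(schedule_bursts([("b1", 0, 2), ("b2", 1, 2), ("b3", 0, 3)], horizon=4))
      # -> {'b1': 0, 'b2': 1, 'b3': 1}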
