Sample records for node search interface

  1. Developing A Web-based User Interface for Semantic Information Retrieval

    NASA Technical Reports Server (NTRS)

    Berrios, Daniel C.; Keller, Richard M.

    2003-01-01

While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.
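The node-type/attribute/link query pattern described above can be sketched as a tiny matcher. This is a minimal illustration only; the field names and structure are hypothetical and are not SemanticOrganizer's actual API.

```python
def build_query(node_type, attributes=(), links=()):
    # an ad-hoc semantic query: a node type, attribute constraints,
    # and link constraints (hypothetical schema, for illustration)
    return {"type": node_type,
            "attributes": dict(attributes),
            "links": [{"relation": r, "target": t} for r, t in links]}

def matches(node, query):
    # a node satisfies the query if its type, every queried attribute,
    # and every queried link constraint are all met
    return (node["type"] == query["type"]
            and all(node["attributes"].get(k) == v
                    for k, v in query["attributes"].items())
            and all(any(link == wanted for link in node["links"])
                    for wanted in query["links"]))
```

A query built this way can be evaluated against every node in the network to collect matches.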

  2. Matching pursuit parallel decomposition of seismic data

    NASA Astrophysics Data System (ADS)

    Li, Chuanhui; Zhang, Fanchang

    2017-07-01

In order to improve the computation speed of matching pursuit decomposition of seismic data, a matching pursuit parallel algorithm is designed in this paper. In every iteration we pick a fixed number of envelope peaks from the current signal, according to the number of compute nodes, and distribute them evenly among the compute nodes, which search for the optimal Morlet wavelets in parallel. With the help of parallel computer systems and the Message Passing Interface, the parallel algorithm exploits the advantages of parallel computing to significantly improve the computation speed of matching pursuit decomposition, and it also scales well. Moreover, having every compute node search for only one optimal Morlet wavelet per iteration is the most efficient implementation.
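The per-iteration work distribution can be sketched in plain Python. Real implementations spread the peaks across MPI ranks; here the compute nodes are simulated sequentially, and the real-valued Morlet-like wavelet and its parameters are illustrative assumptions, not the paper's exact formulation.

```python
import math

def morlet(t, scale, shift):
    # real-valued Morlet-like wavelet: a Gaussian-windowed cosine
    u = (t - shift) / scale
    return math.exp(-u * u / 2.0) * math.cos(5.0 * u)

def correlate(signal, scale, shift):
    # inner product between the signal and a candidate wavelet
    return sum(s * morlet(i, scale, shift) for i, s in enumerate(signal))

def node_search(signal, peaks, scales):
    # one "compute node" searches the optimal wavelet around its assigned peaks
    return [max(({"shift": p, "scale": sc, "score": abs(correlate(signal, sc, p))}
                 for sc in scales), key=lambda w: w["score"])
            for p in peaks]

def parallel_iteration(signal, peaks, scales, n_nodes):
    # distribute envelope peaks evenly across nodes (round-robin), as the
    # algorithm assigns peaks to ranks, then keep the best wavelet found
    results = []
    for rank in range(n_nodes):
        results.extend(node_search(signal, peaks[rank::n_nodes], scales))
    return max(results, key=lambda w: w["score"])
```

A full decomposition would subtract the winning wavelet from the signal before the next iteration.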

  3. SpicyNodes Radial Map Engine

    NASA Astrophysics Data System (ADS)

    Douma, M.; Ligierko, G.; Angelov, I.

    2008-10-01

    The need for information has increased exponentially over the past decades. The current systems for constructing, exploring, classifying, organizing, and searching information face the growing challenge of enabling their users to operate efficiently and intuitively in knowledge-heavy environments. This paper presents SpicyNodes, an advanced user interface for difficult interaction contexts. It is based on an underlying structure known as a radial map, which allows users to manipulate and interact in a natural manner with entities called nodes. This technology overcomes certain limitations of existing solutions and solves the problem of browsing complex sets of linked information. SpicyNodes is also an organic system that projects users into a living space, stimulating exploratory behavior and fostering creative thought. Our interactive radial layout is used for educational purposes and has the potential for numerous other applications.
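A radial map in its simplest form places a focus node at the origin and rings of linked nodes on concentric circles. The sketch below shows only that bare-bones layout geometry; the actual SpicyNodes engine is far more elaborate.

```python
import math

def radial_layout(focus, rings, ring_spacing=1.0):
    # focus at the origin; ring k of linked nodes sits on a circle of
    # radius k * ring_spacing, with nodes spaced at equal angles
    positions = {focus: (0.0, 0.0)}
    for k, nodes in enumerate(rings, start=1):
        radius = k * ring_spacing
        for i, node in enumerate(nodes):
            theta = 2.0 * math.pi * i / len(nodes)
            positions[node] = (radius * math.cos(theta), radius * math.sin(theta))
    return positions
```

Re-running the layout with a newly selected node as the focus gives the characteristic "re-center on click" interaction of radial browsers.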

  4. Glide dislocation nucleation from dislocation nodes at semi-coherent {111} Cu–Ni interfaces

    DOE PAGES

    Shao, Shuai; Wang, Jian; Beyerlein, Irene J.; ...

    2015-07-23

Using atomistic simulations and dislocation theory on a model system of semi-coherent {111} interfaces, we show that misfit dislocation nodes adopt multiple atomic arrangements corresponding to the creation and redistribution of excess volume at the nodes. We identified four distinctive node structures: volume-smeared nodes with (i) spiral or (ii) straight dislocation patterns, and volume-condensed nodes with (iii) triangular or (iv) hexagonal dislocation patterns. Volume-smeared nodes contain interfacial dislocations lying in the Cu–Ni interface, but volume-condensed nodes contain two sets of interfacial dislocations in the two adjacent interfaces and jogs across the atomic layer between the two adjacent interfaces. Finally, under biaxial tension/compression applied parallel to the interface, we show that the nucleation of lattice dislocations is preferred at the nodes and is correlated with the reduction of excess volume at the nodes.

  5. 2015 ESGF Progress Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, D. N.

    2015-06-22

The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration whose purpose is to develop the software infrastructure needed to facilitate and empower the study of climate change on a global scale. ESGF's architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. The cornerstones of its interoperability are the peer-to-peer messaging, which is continuously exchanged among all nodes in the federation; a shared architecture for search and discovery; and a security infrastructure based on industry standards. ESGF integrates popular application engines available from the open-source community with custom components (for data publishing, searching, user interface, security, and messaging) that were developed collaboratively by the team. The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP) output used by the Intergovernmental Panel on Climate Change assessment reports. ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community.

  6. A De-centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak

    2002-01-01

    In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper, we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is decentralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  7. A De-Centralized Scheduling and Load Balancing Algorithm for Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Arora, Manish; Das, Sajal K.; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

In the past two decades, numerous scheduling and load balancing techniques have been proposed for locally distributed multiprocessor systems. However, they all suffer from significant deficiencies when extended to a Grid environment: some use a centralized approach that renders the algorithm unscalable, while others assume the overhead involved in searching for appropriate resources to be negligible. Furthermore, classical scheduling algorithms do not consider a Grid node to be N-resource rich and merely work towards maximizing the utilization of one of the resources. In this paper we propose a new scheduling and load balancing algorithm for a generalized Grid model of N-resource nodes that not only takes into account the node and network heterogeneity, but also considers the overhead involved in coordinating among the nodes. Our algorithm is de-centralized, scalable, and overlaps the node coordination time with that of the actual processing of ready jobs, thus saving valuable clock cycles needed for making decisions. The proposed algorithm is studied by conducting simulations using the Message Passing Interface (MPI) paradigm.

  8. Method and apparatus for connecting finite element meshes and performing simulations therewith

    DOEpatents

    Dohrmann, Clark R.; Key, Samuel W.; Heinstein, Martin W.

    2003-05-06

The present invention provides a method of connecting dissimilar finite element meshes. A first mesh, designated the master mesh, and a second mesh, designated the slave mesh, each have interface surfaces proximal to the other. Each interface surface has a corresponding interface mesh comprising a plurality of interface nodes. Each slave interface node is assigned new coordinates locating the interface node on the interface surface of the master mesh. The slave interface surface is further redefined to be the projection of the slave interface mesh onto the master interface surface.
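The core geometric step, assigning a slave interface node new coordinates on the master surface, reduces locally to projecting a point onto a plane. The sketch below assumes the master surface is locally planar; the patented method handles general surfaces.

```python
def project_onto_plane(point, plane_point, normal):
    # project a slave interface node onto the (locally planar) master
    # interface surface defined by a point on the plane and its normal
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, normal))
    nn = sum(n * n for n in normal)
    return tuple(p - d * n / nn for p, n in zip(point, normal))
```

Applying this to every slave interface node yields the redefined slave interface surface lying on the master mesh.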

  9. Searching social networks for subgraph patterns

    NASA Astrophysics Data System (ADS)

    Ogaard, Kirk; Kase, Sue; Roy, Heather; Nagi, Rakesh; Sambhoos, Kedar; Sudit, Moises

    2013-06-01

    Software tools for Social Network Analysis (SNA) are being developed which support various types of analysis of social networks extracted from social media websites (e.g., Twitter). Once extracted and stored in a database such social networks are amenable to analysis by SNA software. This data analysis often involves searching for occurrences of various subgraph patterns (i.e., graphical representations of entities and relationships). The authors have developed the Graph Matching Toolkit (GMT) which provides an intuitive Graphical User Interface (GUI) for a heuristic graph matching algorithm called the Truncated Search Tree (TruST) algorithm. GMT is a visual interface for graph matching algorithms processing large social networks. GMT enables an analyst to draw a subgraph pattern by using a mouse to select categories and labels for nodes and links from drop-down menus. GMT then executes the TruST algorithm to find the top five occurrences of the subgraph pattern within the social network stored in the database. GMT was tested using a simulated counter-insurgency dataset consisting of cellular phone communications within a populated area of operations in Iraq. The results indicated GMT (when executing the TruST graph matching algorithm) is a time-efficient approach to searching large social networks. GMT's visual interface to a graph matching algorithm enables intelligence analysts to quickly analyze and summarize the large amounts of data necessary to produce actionable intelligence.
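The TruST heuristic itself is not reproduced here; the sketch below shows only the generic problem it accelerates, exhaustive search for a labeled subgraph pattern, which is workable for small patterns like those drawn in GMT.

```python
from itertools import permutations

def find_subgraph(g_nodes, g_edges, g_labels, p_nodes, p_edges, p_labels):
    # brute-force occurrence search for a labeled subgraph pattern;
    # edges are undirected and stored as frozensets of node pairs
    for cand in permutations(g_nodes, len(p_nodes)):
        mapping = dict(zip(p_nodes, cand))
        if (all(g_labels[mapping[p]] == p_labels[p] for p in p_nodes)
                and all(frozenset((mapping[a], mapping[b])) in g_edges
                        for a, b in p_edges)):
            return mapping
    return None
```

The factorial cost of this enumeration is exactly what truncated-search-tree heuristics like TruST are designed to avoid on large social networks.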

  10. A concept of volume rendering guided search process to analyze medical data set.

    PubMed

    Zhou, Jianlong; Xiao, Chun; Wang, Zhiyan; Takatsuka, Masahiro

    2008-03-01

This paper first presents a parallel-coordinates-based parameter control panel (PCP). The PCP is used to control the parameters of focal region-based volume rendering (FRVR) during data analysis. It uses a parallel-coordinates-style interface: different rendering parameters are represented as nodes on each axis, and renditions based on related parameters are connected using polylines to show the dependencies between renditions and parameters. Based on the PCP, a concept of a volume rendering guided search process is proposed. The search pipeline is divided into four phases. The different parameters of FRVR are recorded and modulated in the PCP during the search phases. The concept shows that volume visualization can play the role of guiding a search process in the rendition space, helping users efficiently find local structures of interest. The usability of the proposed approach is evaluated to show its effectiveness.

  11. Expected performance of m-solution backtracking

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.

    1986-01-01

This paper derives upper bounds on the expected number of search tree nodes visited during an m-solution backtracking search, a search which terminates after some preselected number m of problem solutions has been found. The search behavior is assumed to have a general probabilistic structure. The results are stated in terms of node expansion and contraction. A visited search tree node is said to be expanding if the mean number of its children visited by the search exceeds 1, and contracting otherwise. It is shown that if every node expands, or if every node contracts, then the number of search tree nodes visited by a search has an upper bound which is linear in the depth of the tree, in the mean number of children a node has, and in the number of solutions sought. Also derived are bounds linear in the depth of the tree in some situations where an upper portion of the tree contracts (expands) while the lower portion expands (contracts). While previous analyses of 1-solution backtracking have concluded that the expected performance is always linear in the tree depth, our model allows for superlinear expected performance.
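The expansion/contraction dichotomy is easy to probe with a small simulation. The i.i.d.-viable-children tree below is an illustrative special case of the paper's general probabilistic model, not its exact setting.

```python
import random

def backtrack_count(depth, branch, p_viable, m, rng):
    # count nodes visited by m-solution backtracking on a random tree:
    # each child is independently viable with probability p_viable, a
    # viable node at full depth is a solution; stop after m solutions
    visited, solutions = 0, 0
    def visit(level):
        nonlocal visited, solutions
        visited += 1
        if level == depth:
            solutions += 1
            return
        for _ in range(branch):
            if solutions >= m:
                return
            if rng.random() < p_viable:
                visit(level + 1)
    visit(0)
    return visited, solutions
```

With branch * p_viable < 1 every node contracts and visit counts stay modest in the depth; pushing the product above 1 makes the tree expand.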

  12. Virtual gap element approach for the treatment of non-matching interface using three-dimensional solid elements

    NASA Astrophysics Data System (ADS)

    Song, Yeo-Ul; Youn, Sung-Kie; Park, K. C.

    2017-10-01

A method for three-dimensional non-matching interface treatment with a virtual gap element is developed. When partitioned structures contain curved interfaces and have different brick meshes, the discretized models have gaps along the interfaces. As these gaps introduce unexpected errors, special treatments are required to handle them. In the present work, a virtual gap element is introduced to link the frame and surface domain nodes in the framework of the mortar method. Since the surface of a hexahedron element is quadrilateral, the gap element is pyramidal; it consists of four domain nodes and one frame node. A zero-strain condition in the gap element is utilized for the interpolation of frame nodes in terms of the domain nodes. This approach is taken to satisfy momentum and energy conservation. The present method is applicable not only to curved interfaces with gaps, but also to flat interfaces in three dimensions. Several numerical examples are given to demonstrate the effectiveness and accuracy of the proposed method.

  13. Integration and validation of a data grid software

    NASA Astrophysics Data System (ADS)

    Carenton-Madiec, Nicolas; Berger, Katharina; Cofino, Antonio

    2014-05-01

The Earth System Grid Federation (ESGF) Peer-to-Peer (P2P) is a software infrastructure for the management, dissemination, and analysis of model output and observational data. The ESGF grid is composed of several types of nodes with different roles. About 40 data nodes host model outputs and datasets using THREDDS catalogs. About 25 compute nodes offer remote visualization and analysis tools. About 15 index nodes crawl the data node catalogs and implement faceted, federated search in a web interface. About 15 identity provider nodes manage accounts, authentication, and authorization. Here we will present a full-scale test federation spread across different institutes in different countries, together with a Python test suite, both started in December 2013. The first objective of the test suite is to provide a simple tool that helps test and validate a single data node and its closest index, compute, and identity provider peers. The next objective is to run this test suite on every data node of the federation, thereby testing and validating every single node of the whole federation. The suite already uses the nosetests, requests, myproxy-logon, subprocess, selenium, and fabric Python libraries in order to test web front ends, back ends, and security services. The goal of this project is to improve the quality of deliverables in a small development team. The developers are widely spread around the world, working collaboratively and without hierarchy. This working arrangement highlighted the need for a federated integration-test and validation process.

  14. Archiving Spectral Libraries in the Planetary Data System

    NASA Astrophysics Data System (ADS)

    Slavney, S.; Guinness, E. A.; Scholes, D.; Zastrow, A.

    2017-12-01

    Spectral libraries are becoming popular candidates for archiving in PDS. With the increase in the number of individual investigators funded by programs such as NASA's PDART, the PDS Geosciences Node is receiving many requests for support from proposers wishing to archive various forms of laboratory spectra. To accommodate the need for a standardized approach to archiving spectra, the Geosciences Node has designed the PDS Spectral Library Data Dictionary, which contains PDS4 classes and attributes specifically for labeling spectral data, including a classification scheme for samples. The Reflectance Experiment Laboratory (RELAB) at Brown University, which has long been a provider of spectroscopy equipment and services to the science community, has provided expert input into the design of the dictionary. Together the Geosciences Node and RELAB are preparing the whole of the RELAB Spectral Library, consisting of many thousands of spectra collected over the years, to be archived in PDS. An online interface for searching, displaying, and downloading selected spectra is planned, using the Spectral Library metadata recorded in the PDS labels. The data dictionary and online interface will be extended to include spectral libraries submitted by other data providers. The Spectral Library Data Dictionary is now available from PDS at https://pds.nasa.gov/pds4/schema/released/. It can be used in PDS4 labels for reflectance spectra as well as for Raman, XRF, XRD, LIBS, and other types of spectra. Ancillary data such as images, chemistry, and abundance data are also supported. To help generate PDS4-compliant labels for spectra, the Geosciences Node provides a label generation program called MakeLabels (http://pds-geosciences.wustl.edu/tools/makelabels.html) which creates labels from a template, and which can be used for any kind of PDS4 label. For information, contact the Geosciences Node at geosci@wunder.wustl.edu.

  15. Numerical modelling of flow through foam's node.

    PubMed

    Anazadehsayed, Abdolhamid; Rezaee, Nastaran; Naser, Jamal

    2017-10-15

In this work, for the first time, a three-dimensional model to describe the dynamics of flow through the geometric Plateau border and node components of foam is presented. The model involves a microscopic-scale structure of one interior node and four Plateau borders at an angle of 109.5° from each other. The majority of the surfaces in the model form a liquid-gas interface where the boundary condition of stress balance between the surface and the bulk is applied. The three-dimensional Navier–Stokes equations, along with the continuity equation, are solved using the finite volume approach. The numerical results are validated against the available experimental results for the flow velocity and resistance in the interior nodes and Plateau borders. A qualitative illustration of flow in a node in different orientations is shown. The scaled resistance against the flow for different liquid-gas interface mobilities is studied, and the geometrical characteristics of the node and Plateau border components of the system are compared to investigate the Plateau-border-dominated and node-dominated flow regimes numerically. The findings show the values of the resistance in each component, in addition to the exact point where the flow regimes switch. Furthermore, a more accurate description of the effect of the liquid-gas interface on the foam flow, particularly in the presence of a node in the foam network, is obtained. The comparison of the available numerical results with ours shows that the velocity of the node-Plateau border system is lower than that of a single Plateau border system for mobile interfaces. This is because, despite the node's more relaxed geometrical structure, the constraining effect of merging and mixing of flow and the increased viscous damping in the node component result in the node-dominated regime. Moreover, we obtain an accurate updated correlation for the dependence of the scaled average velocity of the node-Plateau border system on the liquid-gas interface mobility described by the Boussinesq number.

  16. The IS-ENES climate4impact portal: bridging the CMIP5 and CORDEX data to impact users

    NASA Astrophysics Data System (ADS)

    Som de Cerff, Wim; Plieger, Maarten; Page, Christian; Tatarinova, Natalia; Hutjes, Ronald; de Jong, Fokke; Bärring, Lars; Sjökvist, Elin; Vega Saldarriaga, Manuel; Santiago Cofiño Gonzalez, Antonio

    2015-04-01

The aim of climate4impact (climate4impact.eu) is to enhance the use of climate research data and the interaction with climate effect/impact communities. The portal is based on 17 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of the use case owners. It has been developed within the IS-ENES European project and is currently operated and further developed in the IS-ENES2 project. As the climate impact community is very broad, the focus is mainly on the scientific impact community. Climate4impact is connected to the Earth System Grid Federation (ESGF) nodes containing global climate model (GCM) data from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and regional climate model (RCM) data from the Coordinated Regional Climate Downscaling Experiment (CORDEX). This global network of climate model data centers offers services for data description, discovery, and download. The climate4impact portal connects to these services using OpenID, and offers a user interface for searching, visualizing, and downloading global climate model data and more. A challenging task is to describe the available model data and how they can be used. The portal informs users about possible caveats when using climate model data. All impact use cases are described in the documentation section, using highlighted keywords pointing to detailed information in the glossary. Climate4impact currently has two main objectives. The first is a web interface that automatically generates a graphical user interface for WPS endpoints. The WPS calculates climate indices and subsets data using OpenClimateGIS/icclim on data stored in ESGF data nodes. Data are then transmitted from ESGF nodes over secured OpenDAP and become available in a new, per-user, secured OpenDAP server. The results can then be visualized again using ADAGUC WMS. 
Dedicated wizards for the processing of climate indices will be developed in close collaboration with users. The second is to expose climate4impact services as standardized services that can be used by other portals (such as the future Copernicus platform, developed in the EU FP7 CLIPC project). This has the advantage of adding interoperability between several portals, as well as enabling the design of specific portals aimed at different impact communities, either thematic or national. In the presentation the following subjects will be detailed: - Lessons learned developing climate4impact.eu - Download: directly from ESGF nodes and other THREDDS catalogs - Connection with the downscaling portal of the University of Cantabria - Experiences with the question-and-answer site via Askbot - Visualization: visualize data from ESGF data nodes using ADAGUC Web Map Services - Processing: transform data, subset, export into other formats, and perform climate index calculations using Web Processing Services implemented by PyWPS, based on NCAR NCPP OpenClimateGIS and IS-ENES2 icclim - Security: login using OpenID for access to the ESGF data nodes; the ESGF works in conjunction with several external websites and systems; the climate4impact portal uses X509-based short-lived credentials, generated on behalf of the user with a MyProxy service, and Single Sign-On (SSO) is used to make these websites and systems work together - Discovery: faceted search based on e.g. variable name, model, and institute using the ESGF search services; a catalog browser allows browsing through CMIP5 and any other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA).

  17. XSemantic: An Extension of LCA Based XML Semantic Search

    NASA Astrophysics Data System (ADS)

    Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai

One of the most convenient ways to query XML data is keyword search, because it requires no knowledge of the XML structure and no learning of a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node should be chosen as a return node and how much information should be included in the result. To address these challenges, we propose an XML semantic search based on keywords, called XSemantic. On the one hand, we give three definitions that complete the search in terms of semantics. First, through semantic term expansion, our system is robust to ambiguous keywords by using a domain ontology. Second, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Third, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that the search results with the highest relevance are presented to the users first. On the other hand, as in the LCA and proximity search approaches, we investigated the problem of how much information to include in the search results. We therefore introduce the notion of the Lowest Common Element Ancestor (LCEA) and define a simple rule without any requirement on schema information such as a DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact, meaningful results. The benefits of our proposed semantics are further demonstrated by the second experiment.
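The structural core of LCA-style keyword search, before any semantics are layered on, is finding the lowest common ancestor of the keyword matches. Below is a minimal sketch using only the standard library; XSemantic's LCEA adds schema-free rules and semantic ranking on top of this basic idea.

```python
import xml.etree.ElementTree as ET

def lca_return_node(xml_text, keywords):
    # return the tag of the deepest element whose subtree contains a
    # text match for every keyword: the classic LCA-style return node
    root = ET.fromstring(xml_text)
    parent = {child: p for p in root.iter() for child in p}
    def path(e):
        out = []
        while True:
            out.append(e)
            if e not in parent:
                return out[::-1]
            e = parent[e]
    # one root-to-match path per keyword (first match wins, for simplicity)
    paths = [path(next(e for e in root.iter() if e.text and kw in e.text))
             for kw in keywords]
    # walk the paths in lockstep; the last shared element is the LCA
    anc = root
    for elems in zip(*paths):
        if all(e is elems[0] for e in elems):
            anc = elems[0]
        else:
            break
    return anc.tag
```

Keywords matching inside one book element return that book; keywords spread across books fall back to the common ancestor, which is exactly the over-broad-result problem LCEA refines.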

  18. The climate4impact portal: bridging the CMIP5 and CORDEX data infrastructure to impact users

    NASA Astrophysics Data System (ADS)

    Plieger, Maarten; Som de Cerff, Wim; Pagé, Christian; Tatarinova, Natalia; Cofiño, Antonio; Vega Saldarriaga, Manuel; Hutjes, Ronald; de Jong, Fokke; Bärring, Lars; Sjökvist, Elin

    2015-04-01

The aim of climate4impact is to enhance the use of climate research data and the interaction with climate effect/impact communities. The portal is based on 21 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of the use case owners. It has been developed within the European projects IS-ENES and IS-ENES2 for more than 5 years, and its development currently continues within IS-ENES2 and CLIPC. As the climate impact community is very broad, the focus is mainly on the scientific impact community. This work has resulted in the ENES portal interface for climate impact communities, which can be visited at www.climate4impact.eu. Climate4impact is connected to the Earth System Grid Federation (ESGF) nodes containing global climate model (GCM) data from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and regional climate model (RCM) data from the Coordinated Regional Climate Downscaling Experiment (CORDEX). This global network of climate model data centers offers services for data description, discovery, and download. The climate4impact portal connects to these services using OpenID, and offers a user interface for searching, visualizing, and downloading global climate model data and more. A challenging task was to describe the available model data and how they can be used. The portal tries to inform users about possible caveats when using climate model data. All impact use cases are described in the documentation section, using highlighted keywords pointing to detailed information in the glossary. During the project, the content management system Drupal was used to enable partners to contribute to the documentation section. In this presentation the architecture and the following items will be detailed: - Visualization: visualize data from ESGF data nodes using ADAGUC Web Map Services 
- Processing: transform data, subset, export into other formats, and perform climate index calculations using Web Processing Services implemented by PyWPS, based on NCAR NCPP OpenClimateGIS and IS-ENES2 icclim - Security: login using OpenID for access to the ESGF data nodes; the ESGF works in conjunction with several external websites and systems; the climate4impact portal uses X509-based short-lived credentials, generated on behalf of the user with a MyProxy service, and Single Sign-On (SSO) is used to make these websites and systems work together - Discovery: faceted search based on e.g. variable name, model, and institute using the ESGF search services; a catalog browser allows browsing through CMIP5 and any other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA) - Download: directly from ESGF nodes and other THREDDS catalogs; this architecture will also be used for the future Copernicus platform, developed in the EU FP7 CLIPC project - Connection with the downscaling portal of the University of Cantabria - Experiences with the question-and-answer site via Askbot. The current main objectives for climate4impact can be summarized as two. The first is a web interface that automatically generates a graphical user interface for WPS endpoints. The WPS calculates climate indices and subsets data using OpenClimateGIS/icclim on data stored in ESGF data nodes. Data are then transmitted from ESGF nodes over secured OpenDAP and become available in a new, per-user, secured OpenDAP server. The results can then be visualized again using ADAGUC WMS. Dedicated wizards for the processing of climate indices will be developed in close collaboration with users. The second is to expose climate4impact services as standardized services that can be used by other portals. 
This has the advantage of adding interoperability between several portals, as well as enabling the design of specific portals aimed at different impact communities, either thematic or national.

  19. Relaxation mechanisms, structure and properties of semi-coherent interfaces

    DOE PAGES

    Shao, Shuai; Wang, Jian

    2015-10-15

In this work, using the Cu–Ni (111) semi-coherent interface as a model system, we combine atomistic simulations and defect theory to reveal the relaxation mechanisms, structure, and properties of semi-coherent interfaces. By calculating the generalized stacking fault energy (GSFE) profile of the interface, two stable structures and a high-energy structure are located. During the relaxation, the regions that possess the stable structures expand and develop into coherent regions; the regions with the high-energy structure shrink into the intersections of misfit dislocations (nodes). This process reduces the interface excess potential energy but increases the core energy of the misfit dislocations and nodes. The core width is dependent on the GSFE of the interface. The high-energy structure relaxes by relative rotation and dilatation between the crystals. The relative rotation is responsible for the spiral pattern at nodes. The relative dilatation is responsible for the creation of free volume at nodes, which facilitates the nodes' structural transformation. Several node structures have been observed and analyzed. In conclusion, the various structures have significant impact on the plastic deformation in terms of lattice dislocation nucleation, as well as on the point defect formation energies.

  1. Active Low Intrusion Hybrid Monitor for Wireless Sensor Networks

    PubMed Central

    Navia, Marlon; Campelo, Jose C.; Bonastre, Alberto; Ors, Rafael; Capella, Juan V.; Serrano, Juan J.

    2015-01-01

    Several systems have been proposed to monitor wireless sensor networks (WSN). These systems may be active (causing a high degree of intrusion) or passive (low observability inside the nodes). This paper presents the implementation of an active hybrid (hardware and software) monitor with low intrusion. It is based on the addition to the sensor node of a monitor node (hardware part) which, through a standard interface, is able to receive the monitoring information sent by a piece of software executed in the sensor node. The intrusion on time, code, and energy caused in the sensor nodes by the monitor is evaluated as a function of data size and the interface used. Then different interfaces, commonly available in sensor nodes, are evaluated: serial transmission (USART), serial peripheral interface (SPI), and parallel. The proposed hybrid monitor provides highly detailed information, barely disturbed by the measurement tool (interference), about the behavior of the WSN that may be used to evaluate many properties such as performance, dependability, security, etc. Monitor nodes are self-powered and may be removed after the monitoring campaign to be reused in other campaigns and/or WSNs. No other hardware-independent monitoring platforms with such low interference have been found in the literature. PMID:26393604
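The intrusion-versus-interface trade-off evaluated above can be pictured with a toy timing model. The bit rates below are illustrative round numbers for interfaces commonly found on sensor nodes, not the figures measured in the paper.

```python
# Illustrative peak bit rates (bits per second); real values depend on the
# MCU, its clocking, and the wiring between sensor node and monitor node.
INTERFACE_BITRATE = {
    "USART": 115_200,       # serial transmission
    "SPI": 4_000_000,       # serial peripheral interface
    "parallel": 8_000_000,  # 8-bit port toggled at 1 MHz
}

def intrusion_time_ms(data_bytes, interface):
    """Time the sensor node spends handing one monitoring record to the
    monitor node, i.e. the temporal intrusion of a single event."""
    return 1000.0 * data_bytes * 8 / INTERFACE_BITRATE[interface]
```

Under this model a 144-byte monitoring record costs 10 ms over the USART but well under a millisecond over SPI, matching the qualitative dependence on data size and interface described in the abstract.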

  2. Centrally managed unified shared virtual address space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkes, John

Systems, apparatuses, and methods for managing a unified shared virtual address space. A host may execute system software and manage a plurality of nodes coupled to the host. The host may send work tasks to the nodes, and for each node, the host may externally manage the node's view of the system's virtual address space. Each node may have a central processing unit (CPU) style memory management unit (MMU) with an internal translation lookaside buffer (TLB). In one embodiment, the host may be coupled to a given node via an input/output memory management unit (IOMMU) interface, where the IOMMU frontend interface shares the TLB with the given node's MMU. In another embodiment, the host may control the given node's view of virtual address space via memory-mapped control registers.
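A minimal sketch of the central idea, that the host, not the node, owns each node's view of the virtual address space, is given below. Page sizes and data structures are simplified for illustration and do not reflect the patented implementation.

```python
PAGE = 4096  # toy page size

class NodeMMU:
    """Toy MMU: translates through a host-managed page table via a TLB."""
    def __init__(self):
        self.page_table = {}   # virtual page -> physical page (host-written)
        self.tlb = {}          # small cache of recent translations

    def translate(self, vaddr):
        vpage, offset = divmod(vaddr, PAGE)
        if vpage not in self.tlb:              # TLB miss: walk the table
            self.tlb[vpage] = self.page_table[vpage]
        return self.tlb[vpage] * PAGE + offset

class Host:
    """Externally manages each node's mappings, as in the abstract."""
    def map_page(self, node, vpage, ppage):
        node.page_table[vpage] = ppage
        node.tlb.pop(vpage, None)              # invalidate any stale entry
```

The point of the sketch is the division of labor: the node only translates; every change to the mapping originates at the host.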

  3. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.

    PubMed

    Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
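The driver-layer decoupling described above can be pictured as an abstract interface with pluggable backends. The class names below are invented for illustration; PyEHR's actual driver API may differ.

```python
from abc import ABC, abstractmethod

class DriverInterface(ABC):
    """Common driver interface hiding the storage backend."""
    @abstractmethod
    def save(self, record): ...

    @abstractmethod
    def search(self, **criteria): ...

class InMemoryDriver(DriverInterface):
    """Stand-in backend; real drivers would wrap MongoDB or Elasticsearch
    behind the same two methods."""
    def __init__(self):
        self._records = []

    def save(self, record):
        self._records.append(record)

    def search(self, **criteria):
        # Return records matching every given field/value pair.
        return [r for r in self._records
                if all(r.get(k) == v for k, v in criteria.items())]
```

Because client code sees only `DriverInterface`, swapping one NoSQL backend for another requires no change above the driver layer.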

  4. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters.

    PubMed

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-10-28

    Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. 
Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist.
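The QS-search idea, segmenting the query set rather than the database, can be sketched as a round-robin split of FASTA records across compute nodes. This is a simplified illustration of the approach, not SS-Wrapper's actual code.

```python
def segment_queries(fasta_text, n_nodes):
    """Split a multi-record FASTA string into n_nodes chunks of whole
    records, assigning records round-robin for rough load balancing."""
    # Re-attach the '>' stripped by split(); drop the empty leading piece.
    records = [">" + r for r in fasta_text.split(">") if r.strip()]
    chunks = [[] for _ in range(n_nodes)]
    for i, record in enumerate(records):
        chunks[i % n_nodes].append(record)
    return ["".join(chunk) for chunk in chunks]
```

Each chunk can then be handed to an unmodified BLAST or HMMPFAM process on its own node, and the per-chunk reports merged afterwards, which is why the wrapper needs no changes to the underlying search program.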

  5. SS-Wrapper: a package of wrapper applications for similarity searches on Linux clusters

    PubMed Central

    Wang, Chunlin; Lefkowitz, Elliot J

    2004-01-01

    Background Large-scale sequence comparison is a powerful tool for biological inference in modern molecular biology. Comparing new sequences to those in annotated databases is a useful source of functional and structural information about these sequences. Using software such as the basic local alignment search tool (BLAST) or HMMPFAM to identify statistically significant matches between newly sequenced segments of genetic material and those in databases is an important task for most molecular biologists. Searching algorithms are intrinsically slow and data-intensive, especially in light of the rapid growth of biological sequence databases due to the emergence of high throughput DNA sequencing techniques. Thus, traditional bioinformatics tools are impractical on PCs and even on dedicated UNIX servers. To take advantage of larger databases and more reliable methods, high performance computation becomes necessary. Results We describe the implementation of SS-Wrapper (Similarity Search Wrapper), a package of wrapper applications that can parallelize similarity search applications on a Linux cluster. Our wrapper utilizes a query segmentation-search (QS-search) approach to parallelize sequence database search applications. It takes into consideration load balancing between each node on the cluster to maximize resource usage. QS-search is designed to wrap many different search tools, such as BLAST and HMMPFAM using the same interface. This implementation does not alter the original program, so newly obtained programs and program updates should be accommodated easily. Benchmark experiments using QS-search to optimize BLAST and HMMPFAM showed that QS-search accelerated the performance of these programs almost linearly in proportion to the number of CPUs used. 
We have also implemented a wrapper that utilizes a database segmentation approach (DS-BLAST) that provides a complementary solution for BLAST searches when the database is too large to fit into the memory of a single node. Conclusions Used together, QS-search and DS-BLAST provide a flexible solution to adapt sequential similarity searching applications in high performance computing environments. Their ease of use and their ability to wrap a variety of database search programs provide an analytical architecture to assist both the seasoned bioinformaticist and the wet-bench biologist. PMID:15511296

  6. HURON (HUman and Robotic Optimization Network) Multi-Agent Temporal Activity Planner/Scheduler

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Mrozinski, Joseph J.; Elfes, Alberto; Adumitroaie, Virgil; Shelton, Kacie E.; Smith, Jeffrey H.; Lincoln, William P.; Weisbin, Charles R.

    2012-01-01

HURON solves the problem of how to optimize a plan and schedule for assigning multiple agents to a temporal sequence of actions (e.g., science tasks). Developed as a generic planning and scheduling tool, HURON has been used to optimize space mission surface operations. The tool has also been used to analyze lunar architectures for a variety of surface operational scenarios in order to maximize return on investment and productivity. These scenarios include numerous science activities performed by a diverse set of agents: humans, teleoperated rovers, and autonomous rovers. Once given a set of agents, activities, resources, resource constraints, temporal constraints, and dependencies, HURON computes an optimal schedule that meets a specified goal (e.g., maximum productivity or minimum time), subject to the constraints. HURON performs planning and scheduling optimization as a graph search in state-space with forward progression. Each node in the graph contains a state instance. Starting with the initial node, a graph is automatically constructed with new successive nodes of each new state to explore. The optimization uses a set of pre-conditions and post-conditions to create the children states. The Python language was adopted to not only enable more agile development, but to also allow the domain experts to easily define their optimization models. A graphical user interface was also developed to facilitate real-time search information feedback and interaction by the operator in the search optimization process. The HURON package has many potential uses in the fields of Operations Research and Management Science where this technology applies to many commercial domains requiring optimization to reduce costs. For example, optimizing a fleet of transportation truck routes, aircraft flight scheduling, and other route-planning scenarios involving multiple agent task optimization would all benefit by using HURON.
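The planner's core mechanism, state-space graph search with forward progression driven by pre- and post-conditions, can be reduced to a few lines. The three actions below are invented stand-ins for science tasks, and the breadth-first search is a simplification of HURON's optimizing search.

```python
from collections import deque

# Hypothetical actions: (name, preconditions required, effects added).
ACTIONS = [
    ("deploy_rover", frozenset(), frozenset({"rover_deployed"})),
    ("collect_sample", frozenset({"rover_deployed"}),
     frozenset({"sample_collected"})),
    ("analyze_sample", frozenset({"sample_collected"}),
     frozenset({"sample_analyzed"})),
]

def plan(initial, goal):
    """Forward-progression search: starting from the initial state, expand
    children via actions whose preconditions hold, until the goal is met."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:                 # all goal facts achieved
            return path
        for name, pre, post in ACTIONS:
            if pre <= state:              # preconditions satisfied
                child = state | post      # postconditions create the child
                if child not in seen:
                    seen.add(child)
                    frontier.append((child, path + [name]))
    return None
```

Breadth-first order returns a shortest action sequence; an objective such as maximum productivity would replace the queue with a priority queue over a cost or utility function.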

  7. MotifNet: a web-server for network motif analysis.

    PubMed

    Smoly, Ilan Y; Lerman, Eugene; Ziv-Ukelson, Michal; Yeger-Lotem, Esti

    2017-06-15

    Network motifs are small topological patterns that recur in a network significantly more often than expected by chance. Their identification emerged as a powerful approach for uncovering the design principles underlying complex networks. However, available tools for network motif analysis typically require download and execution of computationally intensive software on a local computer. We present MotifNet, the first open-access web-server for network motif analysis. MotifNet allows researchers to analyze integrated networks, where nodes and edges may be labeled, and to search for motifs of up to eight nodes. The output motifs are presented graphically and the user can interactively filter them by their significance, number of instances, node and edge labels, and node identities, and view their instances. MotifNet also allows the user to distinguish between motifs that are centered on specific nodes and motifs that recur in distinct parts of the network. MotifNet is freely available at http://netbio.bgu.ac.il/motifnet . The website was implemented using ReactJs and supports all major browsers. The server interface was implemented in Python with data stored on a MySQL database. estiyl@bgu.ac.il or michaluz@cs.bgu.ac.il. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
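For intuition, counting one classic three-node motif, the feed-forward loop, in an edge list takes only a brute-force scan. This is practical for toy graphs only; MotifNet's server-side algorithms for motifs of up to eight nodes with labels are far more sophisticated.

```python
from itertools import permutations

def count_feed_forward_loops(edges):
    """Count instances of the 3-node feed-forward loop motif:
    a->b, b->c and a->c all present in the directed graph."""
    edge_set = set(edges)
    nodes = {n for e in edges for n in e}
    count = 0
    for a, b, c in permutations(nodes, 3):
        if (a, b) in edge_set and (b, c) in edge_set and (a, c) in edge_set:
            count += 1
    return count
```

Note that the 3-cycle a->b->c->a is a different motif and is not counted here, which illustrates why motif identity depends on the exact pattern of directed edges.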

  8. Administering truncated receive functions in a parallel messaging interface

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Administering truncated receive functions in a parallel messaging interface (`PMI`) of a parallel computer comprising a plurality of compute nodes coupled for data communications through the PMI and through a data communications network, including: sending, through the PMI on a source compute node, a quantity of data from the source compute node to a destination compute node; specifying, by an application on the destination compute node, a portion of the quantity of data to be received by the application on the destination compute node and a portion of the quantity of data to be discarded; receiving, by the PMI on the destination compute node, all of the quantity of data; providing, by the PMI on the destination compute node to the application on the destination compute node, only the portion of the quantity of data to be received by the application; and discarding, by the PMI on the destination compute node, the portion of the quantity of data to be discarded.
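The claimed semantics condense into a short sketch: the messaging layer receives the full quantity of data, delivers only the application-specified portion, and discards the rest. This is a model of the behavior described, not the patented PMI implementation.

```python
def truncated_receive(sent: bytes, wanted_len: int):
    """Model of a truncated receive: accept everything, deliver a prefix,
    discard the remainder (the application never sees it)."""
    received = bytes(sent)              # the PMI receives all of the data
    delivered = received[:wanted_len]   # portion handed to the application
    discarded = len(received) - len(delivered)  # portion silently dropped
    return delivered, discarded
```

The key property is that the sender's count and the receiver's count need not match: the interface absorbs the difference instead of raising an error.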

  9. Interface Supports Lightweight Subsystem Routing for Flight Applications

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Block, Gary L.; Ahmad, Mohammad; Whitaker, William D.; Dillon, James W.

    2010-01-01

A wireless avionics interface exploits the constrained nature of data networks in flight systems to use a lightweight routing method. This simplified routing means that a processor is not required, and the logic can be implemented as an intellectual property (IP) core in a field-programmable gate array (FPGA). The FPGA can be shared with the flight subsystem application. In addition, the router is aware of redundant subsystems, and can be configured to provide hot standby support as part of the interface. This simplifies implementation of flight applications requiring hot standby support. When a valid inbound packet is received from the network, the destination node address is inspected to determine whether the packet is to be processed by this node. Each node has routing tables for the next neighbor node to guide the packet to the destination node. If it is to be processed, the final packet destination is inspected to determine whether the packet is to be forwarded to another node, or routed locally. If the packet is local, it is sent to an Applications Data Interface (ADI), which is attached to a local flight application. Under this scheme, an interface can support many applications in a subsystem supporting a high level of subsystem integration. If the packet is to be forwarded to another node, it is sent to the outbound packet router. The outbound packet router receives packets from an ADI or a packet to be forwarded. It then uses a lookup table to determine the next destination for the packet. Upon detecting a remote subsystem failure, the routing table can be updated to autonomously bypass the failed subsystem.
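The two lookups described, "is this packet for me?" and then "which neighbor is next?", can be sketched as follows. Node names, table contents, and the packet layout are invented for illustration; the flight design implements this logic in FPGA fabric, not software.

```python
class RouterNode:
    """Lightweight router: local packets go to an ADI, all others are
    forwarded to the next neighbour from a static routing table."""
    def __init__(self, addr, next_hop, adis):
        self.addr = addr
        self.next_hop = next_hop   # destination node -> next neighbour
        self.adis = adis           # ADI name -> payloads delivered locally

    def route(self, packet):
        if packet["dest_node"] == self.addr:           # routed locally
            self.adis[packet["adi"]].append(packet["payload"])
            return "local"
        return self.next_hop[packet["dest_node"]]      # forward onward

# Hypothetical node "B" that reaches both "C" and "D" through neighbour "C".
node = RouterNode("B", {"C": "C", "D": "C"}, {"app1": []})
```

Bypassing a failed subsystem then amounts to rewriting entries in `next_hop`, with no change to the forwarding logic itself.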

  10. Method and systems for a radiation tolerant bus interface circuit

    NASA Technical Reports Server (NTRS)

    Kinstler, Gary A. (Inventor)

    2007-01-01

A bus management tool that allows communication to be maintained between a group of nodes operatively connected on two busses in the presence of radiation by transmitting periodically a first message from one to another of the nodes on one of the busses, determining whether the first message was received by the other of the nodes on the first bus, and when it is determined that the first message was not received by the other of the nodes, transmitting a recovery command to the other of the nodes on a second of the busses. Methods, systems, and articles of manufacture consistent with the present invention also provide for a bus recovery tool on the other node that re-initializes a bus interface circuit operatively connecting the other node to the first bus in response to the recovery command.

  11. A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data

    PubMed Central

    Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi

    2016-01-01

    This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR’s formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191

  12. blastjs: a BLAST+ wrapper for Node.js.

    PubMed

    Page, Martin; MacLean, Dan; Schudoma, Christian

    2016-02-27

To cope with the ever-increasing amount of sequence data generated in the field of genomics, the demand for efficient and fast database searches that drive functional and structural annotation in both large- and small-scale genome projects is on the rise. The tools of the BLAST+ suite are the most widely employed bioinformatic method for these database searches. Recent trends in bioinformatics application development show an increasing number of JavaScript apps that are based on modern frameworks such as Node.js. Until now, however, there has been no way to run database searches with the BLAST+ suite from a Node.js codebase. We developed blastjs, a Node.js library that wraps the search tools of the BLAST+ suite and thus makes it easy to add significant functionality to any Node.js-based application. blastjs is a library that allows the incorporation of BLAST+ functionality into bioinformatics applications based on JavaScript and Node.js. The library was designed to be as user-friendly as possible and therefore requires only a minimal amount of code in the client application. The library is freely available under the MIT license at https://github.com/teammaclean/blastjs.
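At its core, a wrapper of this kind assembles and spawns a BLAST+ command line. The sketch below (in Python rather than JavaScript, for consistency with the other examples in this collection) builds, but does not execute, a plausible blastn invocation using standard BLAST+ flags; blastjs's actual option handling may differ.

```python
def blastn_command(query, db, out, evalue=1e-5, outfmt=6):
    """Assemble a BLAST+ blastn argument vector; a wrapper library would
    hand this to a child process and watch for its completion."""
    return ["blastn",
            "-query", query,        # FASTA file of query sequences
            "-db", db,              # pre-formatted BLAST database
            "-out", out,            # report destination
            "-evalue", str(evalue), # significance cutoff
            "-outfmt", str(outfmt)] # 6 = tabular output
```

From a client's perspective the whole value of the wrapper is that this plumbing, plus process management and result collection, hides behind one function call.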

  13. Extending the granularity of representation and control for the MIL-STD CAIS 1.0 node model

    NASA Technical Reports Server (NTRS)

    Rogers, Kathy L.

    1986-01-01

The Common APSE (Ada Program Support Environment) Interface Set (CAIS) (DoD85) node model provides an excellent baseline for interfaces in a single-host development environment. To encompass the entire spectrum of computing, however, the CAIS model should be extended in four areas. It should provide the interface between the engineering workstation and the host system throughout the entire lifecycle of the system. It should provide a basis for communication and integration functions needed by distributed host environments. It should provide common interfaces for communications mechanisms to and among target processors. It should provide facilities for integration, validation, and verification of test beds extending to distributed systems on geographically separate processors with heterogeneous instruction set architectures (ISAs). Additions to the PROCESS NODE model to extend the CAIS into these four areas are proposed.

  14. Distributed Multihoming Routing Method by Crossing Control MIPv6 with SCTP

    NASA Astrophysics Data System (ADS)

    Shi, Hongbo; Hamagami, Tomoki

There are various wireless communication technologies, such as 3G and WiFi, in wide use around the world. Recently, not only laptops but also smart phones can be equipped with multiple wireless devices. Communication terminals implemented with multiple interfaces are usually called multi-homed nodes. Meanwhile, a multi-homed node with multiple interfaces can also be regarded as multiple single-homed nodes. For example, when a person uses a smart phone and a laptop to connect to the Internet concurrently, we may regard that person as a multi-homed node in the Internet. This paper proposes a new routing method, Multi-homed Mobile Cross-layer Control, to handle multi-homed mobile nodes. Our suggestion provides a distributed end-to-end routing method for handling communications among multi-homed nodes at the fundamental network layer.

  15. GENESI-DR Portal: a scientific gateway to distributed repositories

    NASA Astrophysics Data System (ADS)

    Goncalves, Pedro; Brito, Fabrice; D'Andria, Fabio; Cossu, Roberto; Fusco, Luigi

    2010-05-01

GENESI-DR (Ground European Network for Earth Science Interoperations - Digital Repositories) is a European Commission (EC)-funded project led by ESA and kicked off in early 2008; partners include space agencies (DLR, ASI, CNES), both space and non-space data providers such as ENEA (I), Infoterra (UK), K-SAT (N), NILU (N) and JRC (EU), and industry such as Elsag Datamat (I), CS (F) and TERRADUE (I). GENESI-DR intends to meet the challenge of shortening "time to science" for different Earth Science disciplines in the discovery, access and use (combining, integrating, processing, …) of historical and recent Earth-related data from space, airborne and in-situ sensors, which are archived in large distributed repositories. "Discovering" which data are available on a "geospatial web" is one of the main challenges ES scientists have to face today. Some well-known data sets are referred to in many places and available from many sources. For core information with a common purpose many copies are distributed, e.g., VMap0, Landsat, and SRTM. Other data sets in low or local demand may only be found in a few places and niche communities. Relevant services, results of analysis, applications and tools are accessible in a very scattered and uncoordinated way, often through individual initiatives from Earth Observation mission operators, scientific institutes dealing with ground measurements, service companies or data catalogues. In the discourse of Spatial Data Infrastructures, there are "catalogue services" - directories containing information on where spatial data and services can be found. For metadata "records" describing spatial data and services, there are "registries". The Geospatial industry coins specifications for search interfaces, where it might do better to reach out to other information retrieval and Internet communities. 
These considerations are the basis for the GENESI-DR scientific portal, which adopts a simple model allowing the geo-spatial classification and discovery of information as a loosely connected federation of nodes. This network had, however, to be resilient to node failures and able to scale with the growing addition of new information about data and services. The GENESI-DR scientific portal is still evolving as the project deploys the different components amongst the different partners, but the aim is to provide, through simple interfaces, the connection to information, the establishment of rights, access to it and, in some cases, the application of algorithms using the computing power available on the infrastructure. As information is discovered in the network, it can be further exploited, filtered or enhanced according to the user's goals. To implement this vision, two specialized graphical interfaces were designed on the portal. The first concentrates on the text-based search of information, while the second provides command and control of submissions and order status on a distributed processing environment. The text search uses natural language features that extract the spatial and temporal components from the user query. These are then propagated to the nodes by mapping them to OpenSearch extensions, and the results are returned to the user as an aggregated list of resources. These can either be access points to dataset series or services that can be further analysed and processed. At this stage, the user is presented with dedicated interfaces that correspond to the context of the action being performed. Be it bulk data download, data processing or data mining, the different services offer specialized interfaces that are integrated on the portal. Overall, the GENESI-DR project identifies best practices and a supporting context for the use of a minimal abstract model to loosely connect a federation of Digital Repositories.
The apparent lack of cost effectiveness of the Spatial Data Infrastructures effort in developing "catalogue services" is overcome by trimming the use cases to the most common and relevant ones. The GENESI-DR scientific portal is, as such, the visible front-end of a dedicated infrastructure providing transparent access to information and allowing Earth Science communities to easily and quickly derive objective information and share knowledge across all environmentally sensitive domains.
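In crude form, extracting the temporal component of a free-text query and mapping it onto OpenSearch parameters might look like the sketch below. The regular expression is simplistic and the parameter names follow the OpenSearch time extension convention, but this is an illustration of the idea, not the portal's actual parser.

```python
import re

def extract_opensearch_params(query):
    """Pull a crude temporal component (an ISO date range) out of a
    free-text query and map it to OpenSearch time-extension parameters;
    whatever remains becomes the search terms."""
    m = re.search(r"(\d{4}-\d{2}-\d{2})\s*(?:to|/)\s*(\d{4}-\d{2}-\d{2})",
                  query)
    params = {}
    if m:
        params["time:start"], params["time:end"] = m.group(1), m.group(2)
        query = (query[:m.start()] + query[m.end():]).strip()
    params["searchTerms"] = query
    return params
```

A fuller implementation would extract the spatial component as well (place names or bounding boxes mapped to the OpenSearch geo extension) before propagating the query to the federated nodes.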

  16. Capillary Flow Experiment in Node 2

    NASA Image and Video Library

    2013-06-15

Astronaut Karen Nyberg, Expedition 36 flight engineer, works on the Capillary Flow Experiment (CFE) Vane Gap-1 (VG-1) setup in the Node 2/Harmony. The CFE-2 vessel is used to observe fluid interface and critical wetting behavior in a cylindrical chamber with elliptic cross-section and an adjustable central perforated vane. The primary objective of the Vane Gap experiments is to determine equilibrium interface configurations and critical wetting conditions for interfaces between interior corners separated by a gap.

  17. Local area network with fault-checking, priorities, and redundant backup

    NASA Technical Reports Server (NTRS)

    Morales, Sergio (Inventor); Friedman, Gary L. (Inventor)

    1989-01-01

This invention is a redundant error detecting and correcting local area networked computer system having a plurality of nodes each including a network connector board within the node for connecting to an interfacing transceiver operably attached to a network cable. There is a first network cable disposed along a path to interconnect the nodes. The first network cable includes a plurality of first interfacing transceivers attached thereto. A second network cable is disposed in parallel with the first cable and, in like manner, includes a plurality of second interfacing transceivers attached thereto. There are a plurality of three position switches each having a signal input, three outputs for individual selective connection to the input, and a control input for receiving signals designating which of the outputs is to be connected to the signal input. Each of the switches includes means for designating a response address for responding to addressed signals appearing at the control input and each of the switches further has its signal input connected to a respective one of the input/output lines from the nodes. Also, one of the three outputs is connected to a respective one of the plurality of first interfacing transceivers. 
There is master switch control means having an output connected to the control inputs of the plurality of three position switches and an input for receiving directive signals for outputting addressed switch position signals to the three position switches as well as monitor and control computer means having a pair of network connector boards therein connected to respective ones of one of the first interfacing transceivers and one of the second interfacing transceivers and an output connected to the input of the master switch means for monitoring the status of the networked computer system by sending messages to the nodes and receiving and verifying messages therefrom and for sending control signals to the master switch to cause the master switch to cause respective ones of the nodes to use a desired one of the first and second cables for transmitting and receiving messages and for disconnecting desired ones of the nodes from both cables.

  18. ALMA Correlator Real-Time Data Processor

    NASA Astrophysics Data System (ADS)

    Pisano, J.; Amestica, R.; Perez, J.

    2005-10-01

The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system, the Correlator Data Processor (CDP), consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node, all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. 
The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
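The data rates quoted above are easy to sanity-check with simple arithmetic. The sketch below (illustrative only, not project code) reproduces the per-node input rate of roughly 64 MB/s (1 MB per 16 ms is exactly 62.5 MB/s) and shows that 16 compute nodes together approach the stated 1 GB/s aggregate:

```python
# Back-of-envelope check of the CDP data rates (a sketch, not ALMA code).
CHUNK_BYTES = 1 * 1024 * 1024      # 1 megabyte per chunk
CHUNK_PERIOD_S = 16e-3             # one chunk every 16 milliseconds
NUM_COMPUTE_NODES = 16

def per_node_rate_mb_s():
    """Input rate per compute node in MB/s."""
    return (CHUNK_BYTES / CHUNK_PERIOD_S) / (1024 * 1024)

def aggregate_rate_gb_s():
    """Aggregate correlator output across all compute nodes in GB/s."""
    return per_node_rate_mb_s() * NUM_COMPUTE_NODES / 1024

per_node = per_node_rate_mb_s()    # 62.5 MB/s, i.e. ~64 MB/s as quoted
aggregate = aggregate_rate_gb_s()  # just under 1 GB/s
```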

  19. A Multi-Purpose Modular Electronics Integration Node for Exploration Extravehicular Activity

    NASA Technical Reports Server (NTRS)

    Hodgson, Edward; Papale, William; Wichowski, Robert; Rosenbush, David; Hawes, Kevin; Stankiewicz, Tom

    2013-01-01

As NASA works to develop an effective integrated portable life support system design for exploration extravehicular activity (EVA), alternatives to the current system's electrical power and control architecture are needed to support new requirements for flexibility, maintainability, reliability, and reduced mass and volume. Experience with the current Extravehicular Mobility Unit (EMU) has demonstrated that the current architecture, based on a central power supply, monitoring, and control unit with dedicated analog wiring harness connections to active components in the system, has a significant impact on system packaging and seriously constrains design flexibility in adapting to component obsolescence and changing system needs over time. An alternative architecture based on the use of a digital data bus offers possible wiring harness and system power savings, but risks significant penalties in component complexity and cost. A hybrid architecture that relies on a set of electronic and power interface nodes serving functional modules within the Portable Life Support System (PLSS) is proposed to minimize both packaging and component-level penalties. A common interface node hardware design can further reduce penalties by reducing nonrecurring development costs, making miniaturization more practical, maximizing opportunities for maturation and reliability growth, providing enhanced fault tolerance, and providing stable design interfaces for system components and a central controller. Adaptation to varying specific module requirements can be achieved with modest changes in firmware code within the module. A preliminary design effort has developed a common set of hardware interface requirements and functional capabilities for such a node based on the anticipated modules comprising an exploration PLSS, and a prototype node has been designed, assembled, programmed, and tested.
One instance of such a node has been adapted to support testing the swingbed carbon dioxide and humidity control element in NASA's advanced PLSS 2.0 test article. This paper will describe the common interface node design concept, results of the prototype development and test effort, and plans for use in NASA PLSS 2.0 integrated tests.

  20. Are Bibliographic Management Software Search Interfaces Reliable?: A Comparison between Search Results Obtained Using Database Interfaces and the EndNote Online Search Function

    ERIC Educational Resources Information Center

    Fitzgibbons, Megan; Meert, Deborah

    2010-01-01

    The use of bibliographic management software and its internal search interfaces is now pervasive among researchers. This study compares the results between searches conducted in academic databases' search interfaces versus the EndNote search interface. The results show mixed search reliability, depending on the database and type of search…

  1. Geospatial-temporal semantic graph representations of trajectories from remote sensing and geolocation data

    DOEpatents

    Perkins, David Nikolaus; Brost, Randolph; Ray, Lawrence P.

    2017-08-08

Various technologies for facilitating analysis of large remote sensing and geolocation datasets to identify features of interest are described herein. A search query can be submitted to a computing system that executes searches over a geospatial-temporal semantic (GTS) graph to identify features of interest. The GTS graph comprises nodes corresponding to objects described in the remote sensing and geolocation datasets, and edges that indicate geospatial or temporal relationships between pairs of those nodes. Trajectory information is encoded in the GTS graph by the inclusion of movable nodes, which facilitates searches for features of interest in the datasets relative to moving objects such as vehicles.
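As a rough illustration of the node/edge structure described above, the sketch below builds a tiny GTS-style graph and queries a spatial relation. The schema, relation names, and node identifiers are assumptions for illustration only, not the patented design:

```python
# Minimal sketch of a geospatial-temporal semantic (GTS) graph
# (illustrative schema; not the implementation in the patent).
from collections import defaultdict

class GTSGraph:
    def __init__(self):
        self.nodes = {}                      # node_id -> attribute dict
        self.edges = defaultdict(list)       # node_id -> [(relation, other_id)]

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, a, relation, b):
        # Relations here are symmetric for simplicity.
        self.edges[a].append((relation, b))
        self.edges[b].append((relation, a))

    def neighbors(self, node_id, relation):
        """All nodes linked to node_id by the given spatial/temporal relation."""
        return [other for rel, other in self.edges[node_id] if rel == relation]

g = GTSGraph()
g.add_node("truck1", kind="vehicle", movable=True)    # a movable node
g.add_node("depot", kind="building", movable=False)
g.add_node("obs_t5", kind="observation", time=5)
g.add_edge("truck1", "near", "depot")                 # geospatial relation
g.add_edge("truck1", "observed_at", "obs_t5")         # temporal relation
```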

  2. High Speed All-Optical Data Distribution Network

    NASA Astrophysics Data System (ADS)

    Braun, Steve; Hodara, Henri

    2017-11-01

This article describes the performance and capabilities of an all-optical network featuring low-latency, high-speed file transfer between serially connected optical nodes. A basic component of the network is a network interface card (NIC) implemented through a unique planar lightwave circuit (PLC) that performs data add/drop and optical signal amplification. The network uses a linear bus topology with nodes in a "T" configuration, as described in the text. The signal is sent optically (hence, with minimal latency) to all nodes via wavelength division multiplexing (WDM), with each node receiver tuned to the wavelength of choice via an optical de-multiplexer. Each "T" node routes a portion of the signal to/from the bus through optical couplers, embedded in the NIC, to each of the 1 through n computers.

  3. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search.

    PubMed

    Liu, Meiqin; Zhang, Duo; Zhang, Senlin; Zhang, Qunfei

    2017-12-04

Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small number of nodes are selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, node depth adjustment systems have been developed and applied to issues of network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node: under the constraint of the moving range, the value of the FIM serves as the objective function to be minimized over the moving distance of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve search speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme.
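To make the optimization step concrete, here is a hedged sketch of a basic Harmony Search loop minimizing a one-dimensional objective over a bounded depth range. The objective `f(depth)` stands in for the paper's FIM-based metric, and all parameter values (harmony memory size, HMCR, pitch-adjustment rate, bandwidth) are illustrative assumptions; the paper's modified generating probability is not reproduced here:

```python
# Sketch of standard Harmony Search over a bounded scalar (node depth).
# Not the paper's improved variant; parameters are illustrative.
import random

def harmony_search(f, lo, hi, hm_size=8, hmcr=0.9, par=0.3, bw=0.5,
                   iters=300, seed=1):
    rng = random.Random(seed)
    memory = [rng.uniform(lo, hi) for _ in range(hm_size)]  # harmony memory
    for _ in range(iters):
        if rng.random() < hmcr:                  # pick from harmony memory
            x = rng.choice(memory)
            if rng.random() < par:               # pitch adjustment
                x = min(hi, max(lo, x + rng.uniform(-bw, bw)))
        else:                                    # random new harmony
            x = rng.uniform(lo, hi)
        worst = max(memory, key=f)
        if f(x) < f(worst):                      # replace the worst harmony
            memory[memory.index(worst)] = x
    return min(memory, key=f)

# Toy objective: best depth is 30 m (stand-in for the FIM metric).
best_depth = harmony_search(lambda d: (d - 30.0) ** 2, lo=0.0, hi=100.0)
```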

  4. Node Depth Adjustment Based Target Tracking in UWSNs Using Improved Harmony Search

    PubMed Central

    Zhang, Senlin; Zhang, Qunfei

    2017-01-01

Underwater wireless sensor networks (UWSNs) can provide a promising solution to underwater target tracking. Due to limited computation and bandwidth resources, only a small number of nodes are selected to track the target at each interval. How to improve tracking accuracy with a small number of nodes is a key problem. In recent years, node depth adjustment systems have been developed and applied to issues of network deployment and routing protocols. As far as we know, all existing tracking schemes keep underwater nodes static or moving with the water flow, and node depth adjustment has not yet been utilized for underwater target tracking. This paper studies a node depth adjustment method for target tracking in UWSNs. Firstly, since the Fisher Information Matrix (FIM) can quantify estimation accuracy, its relation to node depth is derived as a metric. Secondly, we formulate node depth adjustment as an optimization problem that determines the moving depth of each activated node: under the constraint of the moving range, the value of the FIM serves as the objective function to be minimized over the moving distance of the nodes. Thirdly, to efficiently solve the optimization problem, an improved Harmony Search (HS) algorithm is proposed, in which the generating probability is modified to improve search speed and accuracy. Finally, simulation results are presented to verify the performance of our scheme. PMID:29207541

  5. Military Standard Common APSE (Ada Programming Support Environment) Interface Set (CAIS).

    DTIC Science & Technology

    1985-01-01

QUEUE_BASE.LAST_KEY(QUEUE_NAME). LAST_RELATION(QUEUE_NAME). FILE_NODE. FORM. ATTRIBUTES. ACCESS_CONTROL. LEVEL); CLOSE(QUEUE_BASE); CLOSE(FILE_NODE... PROPOSED MIL-STD-CAIS 31 JANUARY 1985 procedure ITERATE (ITERATOR: out NODE_ITERATOR; NAME: NAME_STRING; KIND: NODE_KIND; KEY: RELATIONSHIP_KEY; PATTERN: R

  6. A bioinformatics knowledge discovery in text application for grid computing

    PubMed Central

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-01-01

Background A fundamental activity in biomedical research is Knowledge Discovery, the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of Information and Communication Technology resources in the life sciences. The goal of this work was to develop a software middleware solution that exploits knowledge discovery applications on scalable and distributed computing systems to achieve intensive use of ICT resources. Methods The development of a grid application for Knowledge Discovery in Text using a middleware-solution-based methodology is presented. The system must be able to model the user application and process jobs by creating many parallel jobs to distribute across the computational nodes. Finally, the system must be aware of the available computational resources and their status, and must be able to monitor the execution of parallel jobs. These operating requirements led to the design of a middleware that is specialized using user application modules. It included a graphical user interface to access a node search system, a load-balancing system, and a transfer optimizer to reduce communication costs. Results A prototype of the middleware solution and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out and results are shown for the named-entity recognition search for symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion In this paper we discuss the development of a grid application based on a middleware solution.
It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Databases computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities. PMID:19534749
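The core job-splitting idea described above, partitioning a document collection into parallel jobs, one batch per grid node, and evaluating the result as a speed-up factor, can be sketched as follows (names and the round-robin policy are illustrative assumptions; the actual middleware ran on Globus Toolkit 4 and is not shown):

```python
# Sketch of partitioning KDT jobs across grid nodes (illustrative only).
def split_jobs(doc_ids, num_nodes):
    """Round-robin partition of documents into one job list per node."""
    jobs = [[] for _ in range(num_nodes)]
    for i, doc in enumerate(doc_ids):
        jobs[i % num_nodes].append(doc)
    return jobs

def speedup(serial_time, parallel_time):
    """Speed-up factor used to evaluate a parallel run against a serial one."""
    return serial_time / parallel_time

# 5,000 documents (as in the reported test) spread over 8 hypothetical nodes.
batches = split_jobs(list(range(5000)), num_nodes=8)
```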

  7. A bioinformatics knowledge discovery in text application for grid computing.

    PubMed

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-06-16

A fundamental activity in biomedical research is Knowledge Discovery, the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of Information and Communication Technology resources in the life sciences. The goal of this work was to develop a software middleware solution that exploits knowledge discovery applications on scalable and distributed computing systems to achieve intensive use of ICT resources. The development of a grid application for Knowledge Discovery in Text using a middleware-solution-based methodology is presented. The system must be able to model the user application and process jobs by creating many parallel jobs to distribute across the computational nodes. Finally, the system must be aware of the available computational resources and their status, and must be able to monitor the execution of parallel jobs. These operating requirements led to the design of a middleware that is specialized using user application modules. It included a graphical user interface to access a node search system, a load-balancing system, and a transfer optimizer to reduce communication costs. A prototype of the middleware solution and its performance evaluation in terms of the speed-up factor are shown. It was written in Java on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out and results are shown for the named-entity recognition search for symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. In this paper we discuss the development of a grid application based on a middleware solution.
It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a Knowledge Discovery in Databases computation was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities.

  8. Single-agent parallel window search

    NASA Technical Reports Server (NTRS)

    Powley, Curt; Korf, Richard E.

    1991-01-01

    Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A(asterisk) (IDA-asterisk) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA-asterisk by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
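The serial building block of the approach above is IDA*, which the parallel variant runs with different cost thresholds per process. The sketch below is a compact IDA* skeleton that also demonstrates the node-ordering idea: among successors with equal f = g + h, the one with smaller h is expanded first. The toy graph and heuristic are illustrative assumptions:

```python
# Compact IDA* with (f, h) successor ordering (a sketch; toy problem).
GRAPH = {  # node -> [(neighbor, edge_cost)]
    "A": [("B", 1), ("C", 2)],
    "B": [("D", 3)],
    "C": [("D", 1)],
    "D": [],
}
H = {"A": 3, "B": 3, "C": 1, "D": 0}  # admissible heuristic to goal D

def ida_star(start, goal):
    threshold = H[start]
    while True:
        result = _dfs(start, goal, 0, threshold, [start])
        if isinstance(result, list):
            return result                  # solution path found
        if result == float("inf"):
            return None                    # no solution exists
        threshold = result                 # next iteration's cost threshold

def _dfs(node, goal, g, threshold, path):
    f = g + H[node]
    if f > threshold:
        return f                           # exceeded threshold: report min overrun
    if node == goal:
        return path
    next_threshold = float("inf")
    # Order successors by f = g + h; ties on f broken by smaller h.
    succs = sorted(GRAPH[node], key=lambda nc: (g + nc[1] + H[nc[0]], H[nc[0]]))
    for nbr, cost in succs:
        result = _dfs(nbr, goal, g + cost, threshold, path + [nbr])
        if isinstance(result, list):
            return result
        next_threshold = min(next_threshold, result)
    return next_threshold
```

A parallel window variant would launch `_dfs` with several candidate thresholds simultaneously instead of iterating them serially.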

  9. XML-based information system for planetary sciences

    NASA Astrophysics Data System (ADS)

    Carraro, F.; Fonte, S.; Turrini, D.

    2009-04-01

    EuroPlaNet (EPN in the following) has been developed by the planetological community under the "Sixth Framework Programme" (FP6 in the following), the European programme devoted to the improvement of the European research efforts through the creation of an internal market for science and technology. The goal of the EPN programme is the creation of a European network aimed to the diffusion of data produced by space missions dedicated to the study of the Solar System. A special place within the EPN programme is that of I.D.I.S. (Integrated and Distributed Information Service). The main goal of IDIS is to offer to the planetary science community a user-friendly access to the data and information produced by the various types of research activities, i.e. Earth-based observations, space observations, modeling, theory and laboratory experiments. During the FP6 programme IDIS development consisted in the creation of a series of thematic nodes, each of them specialized in a specific scientific domain, and a technical coordination node. The four thematic nodes are the Atmosphere node, the Plasma node, the Interiors & Surfaces node and the Small Bodies & Dust node. The main task of the nodes have been the building up of selected scientific cases related with the scientific domain of each node. The second work done by EPN nodes have been the creation of a catalogue of resources related to their main scientific theme. Both these efforts have been used as the basis for the development of the main IDIS goal, i.e. the integrated distributed service. An XML-based data model have been developed to describe resources using meta-data and to store the meta-data within an XML-based database called eXist. A search engine has been then developed in order to allow users to search resources within the database. Users can select the resource type and can insert one or more values or can choose a value among those present in a list, depending on selected resource. 
The system searches for all the resources containing the inserted values within the resources descriptions. An important facility of the IDIS search system is the multi-node search capability. This is due to the capacity of eXist to make queries on remote databases. This allows the system to show all resources which satisfy the search criteria on local node and to show how many resources are found on remote nodes, giving also a link to open the results page on remote nodes. During FP7 the development of the IDIS system will have the main goal to make the service Virtual Observatory compliant.

  10. Persistence and Adaptation in Immunity: T Cells Balance the Extent and Thoroughness of Search

    PubMed Central

    Fricke, G. Matthew; Letendre, Kenneth A.; Moses, Melanie E.; Cannon, Judy L.

    2016-01-01

    Effective search strategies have evolved in many biological systems, including the immune system. T cells are key effectors of the immune response, required for clearance of pathogenic infection. T cell activation requires that T cells encounter antigen-bearing dendritic cells within lymph nodes, thus, T cell search patterns within lymph nodes may be a crucial determinant of how quickly a T cell immune response can be initiated. Previous work suggests that T cell motion in the lymph node is similar to a Brownian random walk, however, no detailed analysis has definitively shown whether T cell movement is consistent with Brownian motion. Here, we provide a precise description of T cell motility in lymph nodes and a computational model that demonstrates how motility impacts T cell search efficiency. We find that both Brownian and Lévy walks fail to capture the complexity of T cell motion. Instead, T cell movement is better described as a correlated random walk with a heavy-tailed distribution of step lengths. Using computer simulations, we identify three distinct factors that contribute to increasing T cell search efficiency: 1) a lognormal distribution of step lengths, 2) motion that is directionally persistent over short time scales, and 3) heterogeneity in movement patterns. Furthermore, we show that T cells move differently in specific frequently visited locations that we call “hotspots” within lymph nodes, suggesting that T cells change their movement in response to the lymph node environment. Our results show that like foraging animals, T cells adapt to environmental cues, suggesting that adaption is a fundamental feature of biological search. PMID:26990103
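The movement model the analysis favors, a correlated random walk with lognormally distributed step lengths, can be sketched in a few lines. Parameter values (turning-angle spread, lognormal mu/sigma) are illustrative assumptions, not the fitted values from the study:

```python
# Sketch of a 2-D correlated random walk with lognormal step lengths.
import math, random

def correlated_random_walk(n_steps, turn_sd=0.3, mu=0.0, sigma=0.5, seed=7):
    rng = random.Random(seed)
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        heading += rng.gauss(0.0, turn_sd)    # directional persistence: small turns
        step = rng.lognormvariate(mu, sigma)  # heavy-tailed-ish step length
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

walk = correlated_random_walk(500)
```

Setting `turn_sd` large recovers nearly Brownian behavior; small values yield the short-timescale persistence described above.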

  11. DMA shared byte counters in a parallel computer

    DOEpatents

    Chen, Dong; Gara, Alan G.; Heidelberger, Philip; Vranas, Pavlos

    2010-04-06

A parallel computer system is constructed as a network of interconnected compute nodes. Each of the compute nodes includes at least one processor, a memory and a DMA engine. The DMA engine includes a processor interface for interfacing with the at least one processor, DMA logic, a memory interface for interfacing with the memory, a DMA network interface for interfacing with the network, injection and reception byte counters, injection and reception FIFO metadata, and status registers and control registers. The injection FIFO metadata maintains the memory locations of each injection FIFO, including its current head and tail, and the reception FIFO metadata maintains the memory locations of each reception FIFO, including its current head and tail. The injection byte counters and reception byte counters may be shared between messages.
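The bookkeeping described above, a FIFO with head/tail metadata plus a byte counter that several messages may share, can be modeled with a toy sketch (purely illustrative; not the hardware interface of the patented DMA engine):

```python
# Toy model of an injection FIFO with head/tail metadata and a shared
# byte counter (illustrative only).
class ByteCounter:
    """A byte counter that several messages may share."""
    def __init__(self):
        self.value = 0
    def add(self, n):
        self.value += n

class InjectionFifo:
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0            # next slot the DMA engine will drain
        self.tail = 0            # next slot the processor will fill
        self.count = 0

    def inject(self, descriptor, byte_counter):
        if self.count == len(self.slots):
            raise BufferError("FIFO full")
        self.slots[self.tail] = descriptor
        self.tail = (self.tail + 1) % len(self.slots)  # ring-buffer advance
        self.count += 1
        byte_counter.add(descriptor["nbytes"])         # counter may be shared

fifo = InjectionFifo(capacity=4)
shared = ByteCounter()
fifo.inject({"dest": 3, "nbytes": 256}, shared)   # two messages sharing
fifo.inject({"dest": 5, "nbytes": 512}, shared)   # one byte counter
```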

  12. Application of open source standards and technologies in the http://climate4impact.eu/ portal

    NASA Astrophysics Data System (ADS)

    Plieger, Maarten; Som de Cerff, Wim; Pagé, Christian; Tatarinova, Natalia

    2015-04-01

This presentation will demonstrate how to calculate and visualize the climate index SU (number of summer days) on the climate4impact portal. The following topics will be covered during the demonstration: - Security: Login using OpenID for access to the Earth System Grid Federation (ESGF) data nodes. The ESGF works in conjunction with several external websites and systems. The climate4impact portal uses X509-based short-lived credentials, generated on behalf of the user with a MyProxy service. Single Sign-On (SSO) is used to make these websites and systems work together. - Discovery: Faceted search based on e.g. variable name, model and institute using the ESGF search services. A catalog browser allows for browsing through CMIP5 and any other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA). - Processing using Web Processing Services (WPS): Transform data, subset, export into other formats, and perform climate index calculations using Web Processing Services implemented by PyWPS, based on NCAR NCPP OpenClimateGIS and IS-ENES2 ICCLIM. - Visualization using Web Map Services (WMS): Visualize data from ESGF data nodes using ADAGUC Web Map Services. The aim of climate4impact is to enhance the use of climate research data and the interaction with climate effect/impact communities. The portal is based on 21 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of use case owners. It has been developed within the European projects IS-ENES and IS-ENES2 for more than 5 years, and its development currently continues within IS-ENES2 and CLIPC. As the climate impact community is very broad, the focus is mainly on the scientific impact community. This work has resulted in the ENES portal interface for climate impact communities and can be visited at http://climate4impact.eu/ The current work on climate4impact has two main objectives.
The first is a web interface which automatically generates a graphical user interface for WPS endpoints. The WPS calculates climate indices and subsets data using OpenClimateGIS/ICCLIM on data stored in ESGF data nodes. Data is then transmitted from ESGF nodes over secured OpenDAP and becomes available in a new, per-user, secured OpenDAP server. The results can then be visualized again using ADAGUC WMS. Dedicated wizards for the processing of climate indices will be developed in close collaboration with users. The second is to expose climate4impact services as standardized services which can be used by other portals. This adds interoperability between several portals, and enables the design of specific portals aimed at different impact communities, either thematic or national.

  13. MTVis: tree exploration using a multitouch interface

    NASA Astrophysics Data System (ADS)

    Andrews, David; Teoh, Soon Tee

    2010-01-01

We present MTVis, a multi-touch interactive tree visualization system. The multi-touch display hardware is built using the LED-LP technology, and the tree layout is based on RINGS, but enhanced with multi-touch interactions. We describe the features of the system and how the multi-touch interface enhances the user's experience in exploring the tree data structure. In particular, the multi-touch interface allows the user to simultaneously control two child nodes of the root and rotate them so that some nodes are magnified while preserving the layout of the tree. We also describe the other meaningful touch-screen gestures users can employ to intuitively explore the tree.

  14. Wireless Avionics Packet to Support Fault Tolerance for Flight Applications

    NASA Technical Reports Server (NTRS)

    Block, Gary L.; Whitaker, William D.; Dillon, James W.; Lux, James P.; Ahmad, Mohammad

    2009-01-01

In this protocol and packet format, data traffic is monitored by all network interfaces to determine the health of transmitters and subsystems. When failures are detected, the network interface applies its recovery policies to provide continued service despite the presence of faults. The protocol, packet format, and interface are independent of the data link technology used. The current demonstration system supports both commercial off-the-shelf wireless connections and wired Ethernet connections. Other technologies such as 1553 or serial data links can be used for the network backbone. The Wireless Avionics packet is divided into three parts: a header, a data payload, and a checksum. The header has the following components: magic number, version, quality of service, time to live, sending transceiver, function code, payload length, source Application Data Interface (ADI) address, destination ADI address, sending node address, target node address, and a sequence number. The magic number is used to identify WAV packets and allows the packet format to be updated in the future. The quality of service field allows routing decisions to be made based on this value and can be used to route critical management data over a dedicated channel. The time-to-live value is used to discard misrouted packets, while the source transceiver field is updated at each hop. This information is used to monitor the health of each transceiver in the network. The function code identifies the packet type. Besides regular data packets, the system supports diagnostic packets for fault detection and isolation. The payload length specifies the number of data bytes in the payload, which supports variable-length packets in the network. The source ADI is the address of the originating interface.
This can be used by the destination application to identify the originating source of the packet where the address consists of a subnet, subsystem class within the subnet, a subsystem unit, and the local ADI number. The destination ADI is used to route the packet to its ultimate destination. At each hop, the sending interface uses the destination address to determine the next node for the data. The sending node is the node address of the interface that is broadcasting the packet. This field is used to determine the health of the subsystem that is sending the packet. In the case of a packet that traverses several intermediate nodes, it may be the node address of the intermediate node. The target node is the node address of the next hop for the packet. It may be an intermediate node, or the final destination for the packet. The sequence number is used to identify duplicate packets. Because each interface has multiple transceivers, the same packet will appear at both receivers. The sequence number allows the interface to correlate the reception and forward a single, unique packet for additional processing. The subnet field allows data traffic to be partitioned into segregated local networks to support large networks while keeping each subnet at a manageable size. This also keeps the routing table small enough so routing can be done by a simple table lookup in an FPGA device. The subsystem class identifies members of a set of redundant subsystems, and, in a hot standby configuration, all members of the subsystem class will receive the data packets. Only the active subsystem will generate data traffic. Specific units in a class of redundant units can be identified and, if the hot standby configuration is not used, packets will be directed to a specific subsystem unit.
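The header field list above lends itself to a simple pack/unpack sketch. Field widths, ordering, and the magic value below are assumptions for illustration; the real WAV packet layout is not specified in the abstract:

```python
# Sketch of packing/unpacking the WAV header fields (assumed widths/order).
import struct

# Header: magic, version, qos, ttl, sending transceiver, function code,
#         payload length, source ADI, destination ADI, sending node,
#         target node, sequence number. Big-endian, no padding.
HEADER = struct.Struct(">IBBBBBHHHHHI")
MAGIC = 0x57415650  # "WAVP" in ASCII -- an assumed magic value

def pack_header(**f):
    return HEADER.pack(MAGIC, f["version"], f["qos"], f["ttl"],
                       f["xcvr"], f["func"], f["payload_len"],
                       f["src_adi"], f["dst_adi"],
                       f["sending_node"], f["target_node"], f["seq"])

def unpack_header(buf):
    names = ("magic", "version", "qos", "ttl", "xcvr", "func", "payload_len",
             "src_adi", "dst_adi", "sending_node", "target_node", "seq")
    return dict(zip(names, HEADER.unpack(buf[:HEADER.size])))

hdr = pack_header(version=1, qos=2, ttl=8, xcvr=0, func=0,
                  payload_len=64, src_adi=10, dst_adi=20,
                  sending_node=3, target_node=4, seq=1)
```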

  15. Europlanet - Joining the European Planetary Research Information Service

    NASA Astrophysics Data System (ADS)

    Capria, M. T.; Chanteur, G.; Schmidt, W.

    2009-04-01

The "Europlanet Research Infrastructure - Europlanet RI", supported by the European Commission's Framework Programme 7, aims at integrating major parts of the distributed European planetary research infrastructure, with components as diverse as space exploration, ground-based observations, laboratory experiments and numerical modelling teams. A central part of Europlanet RI is the "Integrated and Distributed Information Service" or Europlanet-IDIS, which intends to provide easy Web-based access to information about scientists and teams working in related fields, observatories or laboratories with capabilities possibly beneficial to planetary research, modelling expertise useful for planetary science, and observations from space-based, ground-based or laboratory measurements. As far as the type of data and their access methods allow, IDIS will provide Virtual Observatory (VO)-like access to a variety of data from distributed sources, and tools to compare and integrate this information for further data analysis and research. IDIS itself provides a platform for information and data sharing and for data mining. It is structured as a network of thematic nodes, each concentrating on a subset of research areas in planetary sciences. But the most important elements of IDIS and the whole Europlanet RI are the individual scientists, institutes, laboratories, observatories and mission project teams. Without them the whole effort would remain an empty shell. How can an interested individual or team join this activity, and what benefits can be expected from the related effort? The poster gives detailed answers to these questions. Here are some highlights: 1. Locate from the Europlanet web pages (addresses below) the thematic node best related to one's own field of expertise. This might be more than one. 2.
Define which services you want to offer to the community: just the contact address, field of competence, off-line access to data on request, or even on-line searchable access to data to be integrated into the VO features of IDIS? Any combination and many more alternatives are possible. 3. Contact the staff of the selected node(s) to go through the details. 4. The node's expert team will evaluate the information to ensure that it is compliant with the minimum requirements for Europlanet information providers, such as a correct address, related field of competence, quality of data (if any), etc. 5. The new resource metadata (addresses, contents etc.) will be added to the IDIS system, including an update of the search facilities. 6. If data are offered for on-line access, the IDIS team will provide tools to generate a network-compatible generic interface. This one-time effort will make it possible to search the new data sets and combine them with related information from other sources. Benefits for the information provider: - wide advertisement of one's own resources and capabilities, with an increase in scientific references to one's own activities and publications - new co-operation possibilities with so-far-unknown teams. Team exchange might be financially supported by other segments of the Europlanet RI - strong arguments for new funding applications, and many more aspects. List of contact web-sites: Technical node for support and management aspects: http://www.europlanet-idis.fi/ Planetary Surfaces and Interiors node: http://europlanet.dlr.de/ Planetary Plasma node: http://europlanet-plasmanode.oeaw.ac.at/ Planetary Atmospheres node: http://idis.ipsl.jussieu.fr/ Virtual Observatory Paris Data Centre: http://vo.obspm.fr/ Small Bodies and Dust node: http://www.ifsi-roma.inaf.it/europlanet/

  16. Trail-Based Search for Efficient Event Report to Mobile Actors in Wireless Sensor and Actor Networks.

    PubMed

    Xu, Zhezhuang; Liu, Guanglun; Yan, Haotian; Cheng, Bin; Lin, Feilong

    2017-10-27

    In wireless sensor and actor networks, when an event is detected, the sensor node needs to transmit an event report to inform the actor. Since the actor moves in the network to execute missions, its location is always unavailable to the sensor nodes. A popular solution is the search strategy that can forward the data to a node without its location information. However, most existing works have not considered the mobility of the node, and thus generate significant energy consumption or transmission delay. In this paper, we propose the trail-based search (TS) strategy that takes advantage of the actor's mobility to improve the search efficiency. The main idea of TS is that, when the actor moves in the network, it can leave its trail composed of continuous footprints. The search packet with the event report is transmitted in the network to search for the actor or its footprints. Once an effective footprint is discovered, the packet will be forwarded along the trail until it is received by the actor. Moreover, we derive the condition to guarantee the trail connectivity, and propose the redundancy reduction scheme based on TS (TS-R) to reduce nontrivial transmission redundancy that is generated by the trail. Theoretical and numerical analysis is provided to prove the efficiency of TS. Compared with the well-known expanding ring search (ERS), TS significantly reduces the energy consumption and search delay.
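    The forwarding idea above can be sketched in a few lines. The grid model, the expanding search front, and all names below are illustrative assumptions, not the paper's protocol:

```python
# Minimal sketch of the trail-based search (TS) idea: a search ring expands
# from the reporting sensor until a footprint is hit, then the packet follows
# the trail to the actor's current position. Illustrative assumptions only.

def leave_trail(path):
    """Record the actor's footprints: position -> sequence index along the trail."""
    return {pos: i for i, pos in enumerate(path)}

def trail_based_search(start, trail, path, max_radius=10):
    """Expand a search ring until a footprint is found, then follow the trail.

    Returns (actor position, total hop count) or (None, None) on failure."""
    sx, sy = start
    for r in range(max_radius + 1):
        # Visit all grid cells at Chebyshev distance r (the expanding front).
        ring = [(sx + dx, sy + dy)
                for dx in range(-r, r + 1)
                for dy in range(-r, r + 1)
                if max(abs(dx), abs(dy)) == r]
        hits = [c for c in ring if c in trail]
        if hits:
            # Follow the trail forward from the newest footprint found.
            i = max(trail[c] for c in hits)
            hops_along_trail = len(path) - 1 - i
            return path[-1], r + hops_along_trail
    return None, None
```

    Compared with a plain expanding ring search, the search front only has to reach the trail, not the actor itself, which is the source of the energy savings the abstract reports.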

  17. Fast content-based image retrieval using dynamic cluster tree

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Sun, Jizhou; Wu, Rongteng; Zhang, Yaping

    2008-03-01

    A novel content-based image retrieval data structure is developed in the present work. It can improve searching efficiency significantly. All images are organized into a tree in which every node comprises images with similar features. Images in a child node have more similarity (less variance) among themselves than those in its parent: every node is a cluster and each of its children is a sub-cluster. The information contained in a node includes not only the number of images but also their center and variance. Upon the addition of new images, the tree structure changes dynamically to ensure the minimization of the total variance of the tree. A heuristic method has been designed to retrieve information from this tree. Given a sample image, the probability that a tree node contains similar images is computed from the center of the node and its variance. If the probability is higher than a certain threshold, the node is recursively checked to locate the similar images, as are those of its children nodes whose probability also exceeds the threshold. If insufficient similar images are found, a reduced threshold value is adopted to initiate a new search from the root node. The search terminates when it finds sufficient similar images or when the threshold value becomes too low to be meaningful. Experiments have shown that the proposed dynamic cluster tree improves searching efficiency notably.

  18. Thesaurus-Enhanced Search Interfaces.

    ERIC Educational Resources Information Center

    Shiri, Ali Asghar; Revie, Crawford; Chowdhury, Gobinda

    2002-01-01

    Discussion of user interfaces to information retrieval systems focuses on interfaces that incorporate thesauri as part of their searching and browsing facilities. Discusses research literature related to information searching behavior, information retrieval interface evaluation, search term selection, and query expansion; and compares thesaurus…

  19. Dynamic graph system for a semantic database

    DOEpatents

    Mizell, David

    2016-04-12

    A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
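    The entry-to-matrix mapping described in the patent can be sketched as a dictionary-of-rows structure; this is an illustrative stand-in for the compressed sparse form, and all names are invented:

```python
# Sketch of the patent's idea: each triple becomes one entry of a sparse
# adjacency matrix -- row = first node, column = second node,
# element = the value of the link connecting them.

class TripleMatrix:
    def __init__(self):
        self.rows = {}                     # row -> {column: value}, sparse form

    def add(self, node_a, node_b, link_value):
        """Store one entry: a link between two nodes with a value."""
        self.rows.setdefault(node_a, {})[node_b] = link_value

    def element(self, node_a, node_b):
        """Matrix element: the value of the link connecting the two nodes."""
        return self.rows.get(node_a, {}).get(node_b)

    def neighbors(self, node_a):
        """One sparse row: every node linked from node_a."""
        return list(self.rows.get(node_a, {}))
```

    Storing only the populated rows and columns is what makes the adjacency-matrix view practical for a large, sparsely connected semantic graph.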

  20. Dynamic graph system for a semantic database

    DOEpatents

    Mizell, David

    2015-01-27

    A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.

  1. Legacy2Drupal - Conversion of an existing oceanographic relational database to a semantically enabled Drupal content management system

    NASA Astrophysics Data System (ADS)

    Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.

    2009-12-01

    Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.

  2. Searches over graphs representing geospatial-temporal remote sensing data

    DOEpatents

    Brost, Randolph; Perkins, David Nikolaus

    2018-03-06

    Various technologies pertaining to identifying objects of interest in remote sensing images by searching over geospatial-temporal graph representations are described herein. Graphs are constructed by representing objects in remote sensing images as nodes, and connecting nodes with undirected edges representing either distance or adjacency relationships between objects and directed edges representing changes in time. Geospatial-temporal graph searches are made computationally efficient by taking advantage of characteristics of geospatial-temporal data in remote sensing images through the application of various graph search techniques.
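    A minimal sketch of such a graph, with undirected spatial edges and directed temporal edges, might look like this; the class, edge names, and example query are assumptions for illustration, not the patented method:

```python
# Sketch of a geospatial-temporal graph as described in the abstract: objects
# become nodes, undirected edges carry distance/adjacency relations, and
# directed edges represent changes in time.

class GeoTemporalGraph:
    def __init__(self):
        self.spatial = {}    # node -> set of spatially related nodes (undirected)
        self.temporal = {}   # node -> set of successor states in time (directed)

    def add_spatial_edge(self, a, b):
        self.spatial.setdefault(a, set()).add(b)
        self.spatial.setdefault(b, set()).add(a)

    def add_temporal_edge(self, earlier, later):
        self.temporal.setdefault(earlier, set()).add(later)

    def objects_near_that_changed(self, node):
        """Example search: spatial neighbours of `node` with a later state."""
        return {n for n in self.spatial.get(node, set()) if self.temporal.get(n)}
```

    Queries of this shape ("objects adjacent to a road that changed between two image dates") are the kind of geospatial-temporal pattern the abstract says the graph search makes efficient.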

  3. Underwater Sensor Network Redeployment Algorithm Based on Wolf Search

    PubMed Central

    Jiang, Peng; Feng, Yang; Wu, Feng

    2016-01-01

    This study addresses the optimization of node redeployment coverage in underwater wireless sensor networks. Given that nodes can easily become invalid in a poor environment, and given the large scale of underwater wireless sensor networks, an underwater sensor network redeployment algorithm was developed based on wolf search. The study applies the wolf search algorithm, combined with crowding degree control, to the deployment of underwater wireless sensor networks. The proposed algorithm uses nodes to ensure coverage of the events while avoiding premature convergence, and it achieves good coverage. In addition, considering that obstacles exist in the underwater environment, nodes are prevented from becoming invalid by imitating the mechanism of avoiding predators, which reduces the energy consumption of the network. Comparative analysis shows that the algorithm is simple and effective in wireless sensor network deployment. Compared with the optimized artificial fish swarm algorithm, the proposed algorithm exhibits advantages in network coverage, energy conservation, and obstacle avoidance. PMID:27775659
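    A toy sketch of a wolf-search-style redeployment loop follows: prey-seeking steps toward better coverage, plus an occasional escape jump imitating predator avoidance. The fitness function and every parameter here are illustrative assumptions, not the paper's algorithm:

```python
import random

# Toy wolf-search-style redeployment: each sensor node (wolf) moves to a
# candidate position only if coverage does not decrease, with an occasional
# larger random escape jump to avoid getting stuck.

def coverage(pos, events, radius=2.0):
    """Fitness: how many events this node position covers."""
    return sum(1 for e in events
               if (pos[0] - e[0]) ** 2 + (pos[1] - e[1]) ** 2 <= radius ** 2)

def redeploy(nodes, events, steps=50, visual=3.0, escape_p=0.1, rng=None):
    rng = rng or random.Random(0)
    nodes = [tuple(n) for n in nodes]
    for _ in range(steps):
        for i, pos in enumerate(nodes):
            if rng.random() < escape_p:               # escape: long random jump
                cand = (pos[0] + rng.uniform(-visual, visual),
                        pos[1] + rng.uniform(-visual, visual))
            else:                                     # prey-seeking: short step
                cand = (pos[0] + rng.uniform(-1, 1),
                        pos[1] + rng.uniform(-1, 1))
            if coverage(cand, events) >= coverage(pos, events):
                nodes[i] = cand                       # greedy acceptance
    return nodes
```

    Because moves are only accepted when coverage does not drop, per-node coverage is monotone non-decreasing over the run.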

  4. Cooperative Search and Rescue with Artificial Fishes Based on Fish-Swarm Algorithm for Underwater Wireless Sensor Networks

    PubMed Central

    Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua

    2014-01-01

    This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The movement of mobile nodes in the mission space resembles the behavior of a fish swarm in water, so we treat each mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm-based algorithm is designed that requires only local information at each fish node and maximizes the joint detection probability of distress signals. Optimization of the formation is also considered in the searching control approach and is carried out by the fish-swarm algorithm. Simulation results for two schemes, a preset route and random walks, show that the control scheme is adaptive and effective. PMID:24741341

  5. Cooperative search and rescue with artificial fishes based on fish-swarm algorithm for underwater wireless sensor networks.

    PubMed

    Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua

    2014-01-01

    This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The movement of mobile nodes in the mission space resembles the behavior of a fish swarm in water, so we treat each mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm-based algorithm is designed that requires only local information at each fish node and maximizes the joint detection probability of distress signals. Optimization of the formation is also considered in the searching control approach and is carried out by the fish-swarm algorithm. Simulation results for two schemes, a preset route and random walks, show that the control scheme is adaptive and effective.
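    The joint detection objective mentioned in the abstract can be illustrated directly: the swarm's figure of merit is the probability that at least one fish node detects a signal. The distance-based single-node detection model below is an assumption for illustration:

```python
# Sketch of the joint detection probability objective: each node detects a
# distress signal with a probability decaying with distance, and the swarm
# maximizes the probability that at least one node detects it.

def detect_prob(node, signal, max_range=5.0):
    """Single-node detection probability, decaying linearly with distance."""
    d = ((node[0] - signal[0]) ** 2 + (node[1] - signal[1]) ** 2) ** 0.5
    return max(0.0, 1.0 - d / max_range)

def joint_detection(nodes, signal):
    """Probability that at least one node detects the signal:
    1 - product of all individual miss probabilities."""
    miss = 1.0
    for n in nodes:
        miss *= 1.0 - detect_prob(n, signal)
    return 1.0 - miss
```

    Two nodes with individual detection probability 0.4 each give a joint probability of 1 - 0.6 * 0.6 = 0.64, which is the gain a cooperating formation buys over a single searcher.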

  6. Empowering smartphone users with sensor node for air quality measurement

    NASA Astrophysics Data System (ADS)

    Oletic, Dinko; Bilas, Vedran

    2013-06-01

    We present the architecture of a sensor node developed for use with smartphones for participatory sensing of air quality in urban environments. Our solution features inexpensive metal-oxide semiconductor (MOX) gas sensors for measurement of CO, O3, NO2 and VOC, along with sensors for ambient temperature and humidity. We focus on the design of the sensor interface, consisting of a power-regulated heater temperature control and a resistance-sensing circuit. The accuracy of the sensor interface is characterized and the power consumption of the sensor node is analysed. Preliminary data obtained from the CO gas sensors in laboratory conditions and during an outdoor field test are shown.

  7. Trail-Based Search for Efficient Event Report to Mobile Actors in Wireless Sensor and Actor Networks †

    PubMed Central

    Xu, Zhezhuang; Liu, Guanglun; Yan, Haotian; Cheng, Bin; Lin, Feilong

    2017-01-01

    In wireless sensor and actor networks, when an event is detected, the sensor node needs to transmit an event report to inform the actor. Since the actor moves in the network to execute missions, its location is always unavailable to the sensor nodes. A popular solution is the search strategy that can forward the data to a node without its location information. However, most existing works have not considered the mobility of the node, and thus generate significant energy consumption or transmission delay. In this paper, we propose the trail-based search (TS) strategy that takes advantage of the actor's mobility to improve the search efficiency. The main idea of TS is that, when the actor moves in the network, it can leave its trail composed of continuous footprints. The search packet with the event report is transmitted in the network to search for the actor or its footprints. Once an effective footprint is discovered, the packet will be forwarded along the trail until it is received by the actor. Moreover, we derive the condition to guarantee the trail connectivity, and propose the redundancy reduction scheme based on TS (TS-R) to reduce nontrivial transmission redundancy that is generated by the trail. Theoretical and numerical analysis is provided to prove the efficiency of TS. Compared with the well-known expanding ring search (ERS), TS significantly reduces the energy consumption and search delay. PMID:29077017

  8. Designing a Visual Interface for Online Searching.

    ERIC Educational Resources Information Center

    Lin, Xia

    1999-01-01

    "MedLine Search Assistant" is a new interface for MEDLINE searching that improves both search precision and recall by helping the user convert a free text search to a controlled vocabulary-based search in a visual environment. Features of the interface are described, followed by details of the conceptual design and the physical design of…

  9. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geospatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ananthakrishnan, Rachana; Bell, Gavin; Cinquini, Luca

    2013-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  10. The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geo-Spatial Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinquini, Luca; Crichton, Daniel; Miller, Neill

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  11. The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data

    NASA Technical Reports Server (NTRS)

    Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark

    2012-01-01

    The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).

  12. Harmonised information exchange between decentralised food composition database systems.

    PubMed

    Pakkala, H; Christensen, T; de Victoria, I Martínez; Presser, K; Kadvan, A

    2010-11-01

    The main aim of the European Food Information Resource (EuroFIR) project is to develop and disseminate a comprehensive, coherent and validated data bank for the distribution of food composition data (FCD). This can only be accomplished by harmonising food description and data documentation and by the use of standardised thesauri. The data bank is implemented through a network of local FCD storages (usually national) under the control and responsibility of the local (national) EuroFIR partner. The implementation of the system based on the EuroFIR specifications is under development. The data interchange happens through the EuroFIR Web Services interface, allowing the partners to implement their system using methods and software suitable for the local computer environment. The implementation uses common international standards, such as Simple Object Access Protocol, Web Service Description Language and Extensible Markup Language (XML). A specifically constructed EuroFIR search facility (eSearch) was designed for end users. The EuroFIR eSearch facility compiles queries using a specifically designed Food Data Query Language and sends a request to those network nodes linked to the EuroFIR Web Services that will most likely have the requested information. The retrieved FCD are compiled into a specifically designed data interchange format (the EuroFIR Food Data Transport Package) in XML, which is sent back to the EuroFIR eSearch facility as the query response. The same request-response operation happens in all the nodes that have been selected in the EuroFIR eSearch facility for a certain task. Finally, the FCD are combined by the EuroFIR eSearch facility and delivered to the food compiler. The implementation of FCD interchange using decentralised computer systems instead of traditional data-centre models has several advantages. First of all, the local partners have more control over their FCD, which will increase commitment and improve quality. 
Second, a multicentred solution is more economically viable than the creation of a centralised data bank, because of the lack of national political support for multinational systems.
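    The fan-out pattern described above, one request-response per selected node with the responses merged by eSearch, can be sketched as follows. The node names and query format are invented, and the real system exchanges XML transport packages over SOAP Web Services rather than Python callables:

```python
# Hedged sketch of the eSearch request-response pattern: the same query is
# sent to every selected network node, and the returned food-data records
# are combined into one result set tagged with their source node.

def esearch(query, nodes):
    """Fan the query out to every selected node and combine the responses.

    `nodes` maps a node name to a callable standing in for that node's
    Web Services endpoint."""
    combined = []
    for name, lookup in nodes.items():
        package = lookup(query)          # one request-response per node
        for record in package:
            combined.append({"source": name, **record})
    return combined
```

    Keeping the merge on the client side is what lets each partner retain control of its own food composition data, the main advantage the abstract claims for the decentralised design.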

  13. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search.

    PubMed

    Jay, Caroline; Harper, Simon; Dunlop, Ian; Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-14

    Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate the first time, as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multi-option user interface. Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F1,19=37.3, P<.001), with a main effect of task (F3,57=6.3, P<.001). 
Further, participants completed the task significantly faster using the Web search interface (F1,19=18.0, P<.001). There was also a main effect of task (F2,38=4.1, P=.025, Greenhouse-Geisser correction applied). Participants were also asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search, supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation.

  14. Natural Language Search Interfaces: Health Data Needs Single-Field Variable Search

    PubMed Central

    Smith, Sam; Sufi, Shoaib; Goble, Carole; Buchan, Iain

    2016-01-01

    Background Data discovery, particularly the discovery of key variables and their inter-relationships, is key to secondary data analysis, and in turn, the evolving field of data science. Interface designers have presumed that their users are domain experts, and so they have provided complex interfaces to support these "experts." Such interfaces hark back to a time when searches needed to be accurate the first time, as there was a high computational cost associated with each search. Our work is part of a governmental research initiative between the medical and social research funding bodies to improve the use of social data in medical research. Objective The cross-disciplinary nature of data science can make no assumptions regarding the domain expertise of a particular scientist, whose interests may intersect multiple domains. Here we consider the common requirement for scientists to seek archived data for secondary analysis. This has more in common with search needs of the "Google generation" than with their single-domain, single-tool forebears. Our study compares a Google-like interface with traditional ways of searching for noncomplex health data in a data archive. Methods Two user interfaces are evaluated for the same set of tasks in extracting data from surveys stored in the UK Data Archive (UKDA). One interface, Web search, is "Google-like," enabling users to browse, search for, and view metadata about study variables, whereas the other, traditional search, has a standard multi-option user interface. Results Using a comprehensive set of tasks with 20 volunteers, we found that the Web search interface met data discovery needs and expectations better than the traditional search. A task × interface repeated measures analysis showed a main effect indicating that answers found through the Web search interface were more likely to be correct (F 1,19=37.3, P<.001), with a main effect of task (F 3,57=6.3, P<.001). 
Further, participants completed the task significantly faster using the Web search interface (F 1,19=18.0, P<.001). There was also a main effect of task (F 2,38=4.1, P=.025, Greenhouse-Geisser correction applied). Participants were also asked to rate learnability, ease of use, and satisfaction. Paired mean comparisons showed that the Web search interface received significantly higher ratings than the traditional search interface for learnability (P=.002, 95% CI [0.6-2.4]), ease of use (P<.001, 95% CI [1.2-3.2]), and satisfaction (P<.001, 95% CI [1.8-3.5]). The results show superior cross-domain usability of Web search, which is consistent with its general familiarity and with enabling queries to be refined as the search proceeds, which treats serendipity as part of the refinement. Conclusions The results provide clear evidence that data science should adopt single-field natural language search interfaces for variable search, supporting in particular: query reformulation; data browsing; faceted search; surrogates; relevance feedback; summarization, analytics, and visual presentation. PMID:26769334

  15. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Data communications in a parallel active messaging interface ('PAMI') of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution of a compute node, including specification of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications; receiving an instruction characterized by instruction type, the instruction specifying a transmission of transfer data from an origin endpoint to a target endpoint; and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  16. Distributed downhole drilling network

    DOEpatents

    Hall, David R.; Hall, Jr., H. Tracy; Fox, Joe; Pixton, David S.

    2006-11-21

    A high-speed downhole network providing real-time data from downhole components of a drill string includes a bottom-hole node interfacing to a bottom-hole assembly located proximate the bottom end of the drill string. A top-hole node is connected proximate the top end of the drill string. One or several intermediate nodes are located along the drill string between the bottom-hole node and the top-hole node. The intermediate nodes are configured to receive and transmit data packets transmitted between the bottom-hole node and the top-hole node. A communications link, integrated into the drill string, is used to operably connect the bottom-hole node, the intermediate nodes, and the top-hole node. In selected embodiments, a personal or other computer may be connected to the top-hole node, to analyze data received from the intermediate and bottom-hole nodes.

  17. Detection of breast cancer metastasis in sentinel lymph nodes using intra-operative real time GeneSearch BLN Assay in the operating room: results of the Cardiff study.

    PubMed

    Mansel, Robert E; Goyal, Amit; Douglas-Jones, Anthony; Woods, Victoria; Goyal, Sumit; Monypenny, Ian; Sweetland, Helen; Newcombe, Robert G; Jasani, Bharat

    2009-06-01

    Intra-operative assessment is not routinely performed in the UK due to poor sensitivity of available methods and overburdened pathology resources. We conducted a prospective clinical feasibility study of the GeneSearch Breast Lymph Node (BLN) Assay (Veridex, LLC, Warren, NJ) to confirm its potential usefulness within the UK healthcare system. In the assay 50% of the lymph node was processed to detect the presence of cytokeratin-19 and mammaglobin mRNA. The assay was calibrated to detect metastases >0.2 mm. Assay results were compared to H&E performed on each face of approximately 2 mm alternating node slabs and 3 additional sections cut at approximately 150 microm interval from each face of the node slab. 124 sentinel lymph nodes were removed from 82 breast cancer patients. The assay correctly identified all 6 patients with sentinel node macrometastases (>2.0 mm), and 2 of 3 patients with sentinel node micrometastases (0.2-2.0 mm). Sentinel lymph nodes in 4 patients were assay positive but histology negative. Two of these four patients had isolated tumor cells seen by histology. The overall concordance with histology was 93.9% (77/82), with sensitivity of 88.9% (8/9, 95% CI 56.5-98%), specificity of 94.6% (69/73, 95% CI 86.7-97.8%), positive predictive value of 66.7% (8/12, 95% CI 39.1-86.2%) and negative predictive value of 98.6% (69/70, 95% CI 92.3-99.7%). The assay was performed in a median time of 32 min (range 26-69 min). Intra-operative assessment of sentinel lymph node can be performed rapidly and accurately using the GeneSearch BLN Assay.
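    The quoted concordance figures follow directly from the reported counts (8 true positives, 1 false negative, 69 true negatives, 4 false positives); a quick recomputation confirms them:

```python
# Recompute the diagnostic statistics quoted in the abstract from the 2x2
# counts: TP=8, FN=1, TN=69, FP=4 (82 patients total).

def diagnostics(tp, fn, tn, fp):
    """Standard diagnostic accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),          # 8/9  = 88.9%
        "specificity": tn / (tn + fp),          # 69/73 = 94.6%
        "ppv": tp / (tp + fp),                  # 8/12 = 66.7%
        "npv": tn / (tn + fn),                  # 69/70 = 98.6%
        "concordance": (tp + tn) / (tp + tn + fp + fn),  # 77/82 = 93.9%
    }
```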

  18. Adaptive triangular mesh generation

    NASA Technical Reports Server (NTRS)

    Erlebacher, G.; Eiseman, P. R.

    1984-01-01

    A general adaptive grid algorithm is developed on triangular grids. The adaptivity is provided by a combination of node addition, dynamic node connectivity and a simple node movement strategy. While the local restructuring process and the node addition mechanism take place in the physical plane, the nodes are displaced on a monitor surface, constructed from the salient features of the physical problem. An approximation to mean curvature detects changes in the direction of the monitor surface, and provides the pulling force on the nodes. Solutions to the axisymmetric Grad-Shafranov equation demonstrate the capturing, by triangles, of the plasma-vacuum interface in a free-boundary equilibrium configuration.
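
    A one-dimensional analogue of the adaptive strategy (invented for illustration; the paper works on 2-D triangular grids with a mean-curvature monitor) is to equidistribute a monitor function so that nodes concentrate where the solution varies fastest:

```python
# 1-D toy: place grid nodes so that the monitor weight w(x) = 1 + |f'(x)|
# is equidistributed, pulling nodes toward a sharp interior layer.

import math

f = lambda x: math.tanh(20 * (x - 0.5))                      # layer at x = 0.5
w = lambda x: 1 + abs(20 / math.cosh(20 * (x - 0.5)) ** 2)   # 1 + |f'|

# cumulative monitor integral on a fine background grid (trapezoid rule)
N_fine, n_nodes = 2000, 21
xs = [i / N_fine for i in range(N_fine + 1)]
cum = [0.0]
for i in range(N_fine):
    cum.append(cum[-1] + 0.5 * (w(xs[i]) + w(xs[i + 1])) / N_fine)

def equidistribute(n):
    targets = [cum[-1] * k / (n - 1) for k in range(n)]
    nodes, j = [], 0
    for t in targets:
        while j < N_fine and cum[j + 1] < t:
            j += 1
        # linear interpolation inside background cell j
        s = (t - cum[j]) / (cum[j + 1] - cum[j])
        nodes.append(xs[j] + s * (xs[j + 1] - xs[j]))
    return nodes

nodes = equidistribute(n_nodes)
near_layer = sum(1 for x in nodes if abs(x - 0.5) < 0.1)
print(near_layer, "of", n_nodes, "nodes lie within 0.1 of the layer")
```

    Most of the nodes end up inside the thin layer, the 1-D counterpart of triangles clustering at the plasma-vacuum interface.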

  19. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying, in dependence upon the call site statistics, a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.
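
    The statistics-driven algorithm selection described above might be sketched as follows; the `eager`/`rendezvous` protocol names and the 4 KiB threshold are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: gather per-call-site transfer statistics and use
# them to pick a data communications algorithm for that call site.

from collections import defaultdict

class CallSiteStats:
    def __init__(self, eager_limit=4096):
        self.eager_limit = eager_limit
        self.totals = defaultdict(lambda: [0, 0])  # site -> [count, bytes]

    def record(self, site, nbytes):
        self.totals[site][0] += 1
        self.totals[site][1] += nbytes

    def choose_algorithm(self, site):
        count, total = self.totals[site]
        if count == 0:
            return "eager"              # no history yet: cheap default
        avg = total / count
        # small average messages favor an eager protocol; large ones a
        # rendezvous protocol that avoids extra buffering
        return "eager" if avg <= self.eager_limit else "rendezvous"

stats = CallSiteStats()
for _ in range(10):
    stats.record("site_A", 256)        # many small messages
stats.record("site_B", 1 << 20)        # one 1 MiB message
print(stats.choose_algorithm("site_A"))  # eager
print(stats.choose_algorithm("site_B"))  # rendezvous
```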

  20. Node-making process in network meta-analysis of nonpharmacological treatment are poorly reported.

    PubMed

    James, Arthur; Yavchitz, Amélie; Ravaud, Philippe; Boutron, Isabelle

    2018-05-01

    To identify methods to support the node-making process in network meta-analyses (NMAs) of nonpharmacological treatments. We proceeded in two stages. First, we conducted a literature review of guidelines and methodological articles about NMAs to identify methods proposed to lump interventions into nodes. Second, we conducted a systematic review of NMAs of nonpharmacological treatments to extract methods used by authors to support their node-making process. MEDLINE and Google Scholar were searched to identify articles assessing NMA guidelines or methodology intended for NMA authors. MEDLINE, CENTRAL, and EMBASE were searched to identify reports of NMAs including at least one nonpharmacological treatment. Both searches involved articles available from database inception to March 2016. From the methodological review, we identified and extracted methods proposed to lump interventions into nodes. From the systematic review, the reporting of the network was assessed as long as the method described supported the node-making process. Among the 116 articles retrieved in the literature review, 12 (10%) discussed the concept of lumping or splitting interventions in NMAs. No consensual method was identified during the methodological review, and expert consensus was the only method proposed to support the node-making process. Among 5187 references for the systematic review, we included 110 reports of NMAs published between 2007 and 2016. The nodes were described in the introduction section of 88 reports (80%), which suggested that the node content might have been a priori decided before the systematic review. Nine reports (8.1%) described a specific process or justification to build nodes for the network. Two methods were identified: (1) fit a previously published classification and (2) expert consensus. Despite the importance of NMA in the delivery of evidence when several interventions are available for a single indication, recommendations on the reporting of the node-making process in NMAs are lacking, and reporting of the node-making process in NMAs seems insufficient. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. A study of the influence of task familiarity on user behaviors and performance with a MeSH term suggestion interface for PubMed bibliographic search.

    PubMed

    Tang, Muh-Chyun; Liu, Ying-Hsang; Wu, Wan-Ching

    2013-09-01

    Previous research has shown that information seekers in the biomedical domain need more support in formulating their queries. A user study was conducted to evaluate the effectiveness of a metadata-based query suggestion interface for PubMed bibliographic search. The study also investigated the impact of search task familiarity on search behaviors and the effectiveness of the interface. A real-user, real-request, real-system approach was used for the study. Unlike traditional IR evaluation, where assigned tasks are used, the participants were asked to search requests of their own. Forty-four researchers in the Health Sciences participated in the evaluation - each conducted two research requests of their own, alternately with the proposed interface and the PubMed baseline. Several performance criteria were measured to assess the potential benefits of the experimental interface, including users' assessment of their original and eventual queries, the perceived usefulness of the interfaces, satisfaction with the search results, and the average relevance score of the saved records. The results show that, when searching for an unfamiliar topic, users were more likely to change their queries, indicating the effect of familiarity on search behaviors. The results also show that the interface scored higher on several of the performance criteria, such as the "goodness" of the queries, perceived usefulness, and user satisfaction. Furthermore, in line with our hypothesis, the proposed interface was relatively more effective when less familiar search requests were attempted. Results indicate that there is a selective compatibility between search familiarity and search interface. One implication of the research for system evaluation is the importance of taking task familiarity into consideration when assessing the effectiveness of interactive IR systems. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Development of climate data storage and processing model

    NASA Astrophysics Data System (ADS)

    Okladnikov, I. G.; Gordov, E. P.; Titov, A. G.

    2016-11-01

    We present a storage and processing model for climate datasets elaborated in the framework of a virtual research environment (VRE) for climate and environmental monitoring and analysis of the impact of climate change on socio-economic processes on local and regional scales. The model is based on a "shared nothing" distributed computing architecture and assumes a computing network where each node is independent and self-sufficient. Each node holds dedicated software for the processing and visualization of geospatial data, providing programming interfaces to communicate with the other nodes. The nodes are interconnected by a local network or the Internet and exchange data and control instructions via SSH connections and web services. Geospatial data is represented by collections of netCDF files stored in a hierarchy of directories in the framework of a file system. To speed up data reading and processing, three approaches are proposed: precalculation of intermediate products, distribution of data across multiple storage systems (with or without redundancy), and caching and reuse of previously obtained products. For fast search and retrieval of the required data, a metadata database is developed according to the data storage and processing model. It contains descriptions of the space-time features of the datasets available for processing and their locations, as well as descriptions and run options of the software components for data analysis and visualization. The model and the metadata database together will provide a reliable technological basis for the development of a high-performance virtual research environment for climatic and environmental monitoring.
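
    A metadata database of the kind described can be illustrated with an in-memory SQLite table; the schema and the datasets below are invented for this sketch.

```python
# Toy metadata database: describe each dataset's space-time coverage and
# location, then find the files covering a requested point and period.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE dataset (
    name TEXT, variable TEXT, node TEXT, path TEXT,
    t_start INTEGER, t_end INTEGER,          -- years, for simplicity
    lat_min REAL, lat_max REAL, lon_min REAL, lon_max REAL)""")
rows = [
    ("global reanalysis", "t2m", "node-1", "/data/t2m_global.nc",
     1979, 2015, -90, 90, 0, 360),
    ("regional run", "t2m", "node-2", "/data/t2m_siberia.nc",
     2000, 2010, 50, 75, 60, 140),
]
con.executemany("INSERT INTO dataset VALUES (?,?,?,?,?,?,?,?,?,?)", rows)

def find(variable, year, lat, lon):
    q = """SELECT name, node, path FROM dataset
           WHERE variable = ? AND t_start <= ? AND ? <= t_end
             AND lat_min <= ? AND ? <= lat_max
             AND lon_min <= ? AND ? <= lon_max"""
    return con.execute(q, (variable, year, year, lat, lat, lon, lon)).fetchall()

hits = find("t2m", 2005, 60.0, 100.0)
print(hits)  # both datasets cover this point in 2005
```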

  3. On Building a Search Interface Discovery System

    NASA Astrophysics Data System (ADS)

    Shestakov, Denis

    A huge portion of the Web known as the deep Web is accessible via search interfaces to myriads of databases on the Web. While relatively good approaches for querying the contents of web databases have recently been proposed, they cannot be fully exploited while most search interfaces remain undiscovered. Thus, the automatic recognition of search interfaces to online databases is crucial for any application accessing the deep Web. This paper describes the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep web characterization surveys and for constructing directories of deep web resources.
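
    A minimal heuristic in the spirit of search interface recognition (the I-Crawler's actual classifier is not described in this abstract) is to flag pages whose HTML contains a form with a free-text input:

```python
# Toy search-interface detector: parse HTML and report whether any form
# contains a text or search input field.

from html.parser import HTMLParser

class FormFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.has_search_form = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.in_form = True
        elif tag == "input" and self.in_form:
            # an input with no type attribute defaults to a text field
            if a.get("type", "text") in ("text", "search"):
                self.has_search_form = True

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

def looks_like_search_interface(html):
    f = FormFinder()
    f.feed(html)
    return f.has_search_form

page = '<form action="/query"><input type="text" name="q"><input type="submit"></form>'
print(looks_like_search_interface(page))              # True
print(looks_like_search_interface("<p>no form</p>"))  # False
```

    A real classifier would, of course, also have to separate simple site-search boxes from interfaces to genuine web databases.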

  4. Assembly Mechanism of the Contractile Ring for Cytokinesis by Fission Yeast

    NASA Astrophysics Data System (ADS)

    Vavylonis, Dimitrios; Wu, Jian-Qiu; Huang, Xiaolei; O'Shaughnessy, Ben; Pollard, Thomas

    2008-03-01

    Animals and fungi assemble a contractile ring of actin filaments and the motor protein myosin to separate into individual daughter cells during cytokinesis. We studied the mechanism of contractile ring assembly in fission yeast with high-time-resolution confocal microscopy, computational image analysis methods, and numerical simulations. Approximately 63 nodes containing myosin, broadly distributed around the cell equator, assembled into a ring through stochastic motions, making many starts, stops, and changes of direction as they condensed. Estimates of node friction coefficients from the mean square displacement of stationary nodes imply that the forces driving node movement are greater than ~4 pN, similar to the forces produced by a few molecular motors. Skeletonization and topology analysis of images of cells expressing fluorescent actin filament markers showed transient linear elements extending in all directions from myosin nodes and establishing connections among them. We propose a model with traction between nodes depending on transient connections established by stochastic search and capture ("search, capture, pull and release"). Numerical simulations of the model using parameter values obtained from experiment successfully condense nodes into a continuous ring.
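
    The "search, capture, pull and release" mechanism can be caricatured in one dimension: nodes on the cell circumference form transient connections with nearby nodes and pull toward each other. All parameters below except the node count of 63 are invented for illustration.

```python
# Toy 1-D condensation: nodes on a circle of circumference L capture
# randomly chosen partners within a capture radius and pull together.

import math, random

random.seed(1)
L = 10.0        # circumference, arbitrary units
n = 63          # node count from the abstract
capture = 1.0   # capture radius of a transient connection
step = 0.1      # pull per capture event

pos = [random.uniform(0, L) for _ in range(n)]

def circ_dist(a, b):
    """Signed shortest displacement from a to b around the circle."""
    d = (b - a) % L
    return d if d <= L / 2 else d - L

def spread(pos):
    # mean resultant length: 1.0 when fully condensed, near 0 when uniform
    x = sum(math.cos(2 * math.pi * p / L) for p in pos) / len(pos)
    y = sum(math.sin(2 * math.pi * p / L) for p in pos) / len(pos)
    return math.hypot(x, y)

before = spread(pos)
for _ in range(400):
    i, j = random.sample(range(n), 2)      # stochastic search
    d = circ_dist(pos[i], pos[j])
    if abs(d) <= capture:                  # capture if close enough
        pos[i] = (pos[i] + step * d) % L   # pull together, then release
        pos[j] = (pos[j] - step * d) % L
after = spread(pos)
print(before, after)  # pulls typically raise the mean resultant length
```

    Each capture strictly shrinks the distance between the captured pair, which is the toy counterpart of nodes condensing into a ring.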

  5. Network module detection: Affinity search technique with the multi-node topological overlap measure

    PubMed Central

    Li, Ai; Horvath, Steve

    2009-01-01

    Background Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. Findings We adapt network neighborhood analysis for the use of module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Conclusion Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: PMID:19619323

  6. Network module detection: Affinity search technique with the multi-node topological overlap measure.

    PubMed

    Li, Ai; Horvath, Steve

    2009-07-20

    Many clustering procedures only allow the user to input a pairwise dissimilarity or distance measure between objects. We propose a clustering method that can input a multi-point dissimilarity measure d(i1, i2, ..., iP) where the number of points P can be larger than 2. The work is motivated by gene network analysis where clusters correspond to modules of highly interconnected nodes. Here, we define modules as clusters of network nodes with high multi-node topological overlap. The topological overlap measure is a robust measure of interconnectedness which is based on shared network neighbors. In previous work, we have shown that the multi-node topological overlap measure yields biologically meaningful results when used as input of network neighborhood analysis. We adapt network neighborhood analysis for the use of module detection. We propose the Module Affinity Search Technique (MAST), which is a generalized version of the Cluster Affinity Search Technique (CAST). MAST can accommodate a multi-node dissimilarity measure. Clusters grow around user-defined or automatically chosen seeds (e.g. hub nodes). We propose both local and global cluster growth stopping rules. We use several simulations and a gene co-expression network application to argue that the MAST approach leads to biologically meaningful results. We compare MAST with hierarchical clustering and partitioning around medoid clustering. Our flexible module detection method is implemented in the MTOM software which can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/MTOM/
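
    The seeded cluster-growth idea can be sketched with the standard pairwise topological overlap measure (the paper's multi-node generalization and exact stopping rules are more involved; the affinity threshold below is an invented local stopping rule):

```python
# Grow a module around a seed node, repeatedly adding the node with the
# highest average topological overlap to the current module, as long as
# that affinity stays above a threshold.

def topological_overlap(adj):
    n = len(adj)
    deg = [sum(row) for row in adj]
    tom = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                tom[i][j] = 1.0
                continue
            shared = sum(adj[i][u] * adj[u][j] for u in range(n))
            denom = min(deg[i], deg[j]) + 1 - adj[i][j]
            tom[i][j] = (shared + adj[i][j]) / denom
    return tom

def grow_module(adj, seed, threshold=0.5):
    tom = topological_overlap(adj)
    module = {seed}
    while True:
        best, best_aff = None, threshold
        for v in range(len(adj)):
            if v in module:
                continue
            aff = sum(tom[v][m] for m in module) / len(module)
            if aff > best_aff:
                best, best_aff = v, aff
        if best is None:            # local stopping rule: affinity floor
            return sorted(module)
        module.add(best)

# two triangles joined by a single bridge edge (2-3)
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(grow_module(adj, seed=0))  # [0, 1, 2]
print(grow_module(adj, seed=4))  # [3, 4, 5]
```

    The bridge edge alone gives low overlap, so growth stops at the triangle boundary, which is the point of using shared neighbors rather than direct adjacency.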

  7. Optimal Quantum Spatial Search on Random Temporal Networks

    NASA Astrophysics Data System (ADS)

    Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser

    2017-12-01

    To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous-time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdős-Rényi random graphs G(n, p), where p is the probability that any two given nodes are connected: after every time interval τ, a new graph G(n, p) replaces the previous one. We prove analytically that, for any given p, there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O(√n), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs is sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.

  8. The Portals 4.0 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2012-11-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  9. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment

    PubMed Central

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-01-01

    In the fog computing environment, encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement a search over encrypted data as a cloud server does. Since fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-forced data search and access authorization scheme spanning user-fog-cloud for resource-constrained end users. Compared to existing schemes supporting either only index encryption with search ability or only data encryption with fine-grained access control, the proposed hybrid scheme supports both abilities simultaneously; the index ciphertext and data ciphertext are constructed from a single ciphertext-policy attribute-based encryption (CP-ABE) primitive and share the same key pair, so data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, the resource-constrained end devices are allowed to rapidly assemble ciphertexts online and securely outsource most of the decryption task to fog nodes, and a mediated encryption mechanism is adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies in many fog nodes. The security and performance analysis shows that our scheme is suitable for a fog computing environment. PMID:28629131

  10. A Hybrid Scheme for Fine-Grained Search and Access Authorization in Fog Computing Environment.

    PubMed

    Xiao, Min; Zhou, Jing; Liu, Xuejiao; Jiang, Mingda

    2017-06-17

    In the fog computing environment, encrypted sensitive data may be transferred to multiple fog nodes on the edge of a network for low latency; thus, fog nodes need to implement a search over encrypted data as a cloud server does. Since fog nodes tend to provide service for IoT applications often running on resource-constrained end devices, it is necessary to design lightweight solutions. At present, there is little research on this issue. In this paper, we propose a fine-grained owner-forced data search and access authorization scheme spanning user-fog-cloud for resource-constrained end users. Compared to existing schemes supporting either only index encryption with search ability or only data encryption with fine-grained access control, the proposed hybrid scheme supports both abilities simultaneously; the index ciphertext and data ciphertext are constructed from a single ciphertext-policy attribute-based encryption (CP-ABE) primitive and share the same key pair, so data access efficiency is significantly improved and the cost of key management is greatly reduced. Moreover, in the proposed scheme, the resource-constrained end devices are allowed to rapidly assemble ciphertexts online and securely outsource most of the decryption task to fog nodes, and a mediated encryption mechanism is adopted to achieve instantaneous user revocation instead of re-encrypting ciphertexts with many copies in many fog nodes. The security and performance analysis shows that our scheme is suitable for a fog computing environment.

  11. Toward two-dimensional search engines

    NASA Astrophysics Data System (ADS)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2012-07-01

    We study the statistical properties of various directed networks using ranking of their nodes based on the dominant vectors of the Google matrix, known as PageRank and CheiRank. On average, PageRank orders nodes proportionally to their number of ingoing links, while CheiRank orders nodes proportionally to their number of outgoing links. In this way, the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of information flow on the PageRank-CheiRank plane are analyzed for networks of British, French and Italian universities, Wikipedia, the Linux kernel, gene regulation and other networks. Special emphasis is placed on British university networks using the large database publicly available in the UK. Methods of spam-link control are also analyzed.
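
    Two-dimensional ranking is easy to sketch: CheiRank is simply PageRank computed on the same network with every link inverted. The small graph below is invented; node 1 has the most ingoing links and node 0 the most outgoing.

```python
# PageRank by power iteration, with dangling-node mass redistributed
# uniformly; CheiRank = PageRank of the link-reversed graph.

def pagerank(links, n, alpha=0.85, iters=100):
    # links: list of (source, target) directed edges over nodes 0..n-1
    out = [0] * n
    for s, _ in links:
        out[s] += 1
    p = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - alpha) / n] * n
        dangling = alpha * sum(p[i] for i in range(n) if out[i] == 0) / n
        for s, t in links:
            new[t] += alpha * p[s] / out[s]
        p = [v + dangling for v in new]
    return p

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 1), (3, 1)]
n = 4
pr = pagerank(edges, n)                          # favors ingoing links
chei = pagerank([(t, s) for s, t in edges], n)   # favors outgoing links
print(max(range(n), key=lambda i: pr[i]))    # 1 (three incoming links)
print(max(range(n), key=lambda i: chei[i]))  # 0 (three outgoing links)
```

    Plotting each node at (PageRank rank, CheiRank rank) gives exactly the two-dimensional plane the abstract analyzes.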

  12. KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra aids in Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

    NASA Image and Video Library

    2004-02-03

    KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra aids in Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

  13. KENNEDY SPACE CENTER, FLA. - In the Space Station Processing Facility, workers check over the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

    NASA Image and Video Library

    2004-02-03

    KENNEDY SPACE CENTER, FLA. - In the Space Station Processing Facility, workers check over the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

  14. On Internet Symmetry and its Impact on Society

    NASA Astrophysics Data System (ADS)

    Wolff, S. S.

    2014-12-01

    The end-to-end principle, enunciated by Saltzer, Reed, and Clark in 1981, enabled an Internet implementation in which there was a symmetry among the network nodes in the sense that no node was architecturally distinguished. Each interface to the network had a unique and accessible address and could communicate on equal terms with any other interface or collection of interfaces. In this egalitarian implementation there was in consequence no architectural distinction between providers and consumers of content - any network node could play either role. As the Internet spread to university campuses, incoming students found 10 megabit Ethernet in the dorm - while their parents at home were still stuck with 56 kilobit dialup. In the two decades bisected by the millennium, this combination of speed and symmetry on campus and beyond led to a panoply of transformational Internet applications such as Internet video conferencing and billion-dollar industries like Google, Yahoo!, and Facebook. This talk places early Internet history in a social context, elaborates on the social and economic outcomes, defines "middlebox friction", discusses its erosive consequences, and suggests a solution to restore symmetry to the Internet-at-large.

  15. EXTENSIBLE DATABASE FRAMEWORK FOR MANAGEMENT OF UNSTRUCTURED AND SEMI-STRUCTURED DOCUMENTS

    NASA Technical Reports Server (NTRS)

    Gawdiak, Yuri O. (Inventor); La, Tracy T. (Inventor); Lin, Shu-Chun Y. (Inventor); Malof, David A. (Inventor); Tran, Khai Peter B. (Inventor)

    2005-01-01

    Method and system for querying a collection of unstructured or semi-structured documents to identify the presence of, and provide context and/or content for, keywords and/or keyphrases. The documents are analyzed and assigned a node structure, including an ordered sequence of mutually exclusive node segments or strings. Each node has an associated set of at least four, five or six attributes with node information and can represent a format marker or text, with the last node in any node segment usually being a text node. A keyword (or keyphrase) is specified, and the last node in each node segment is searched for a match with the keyword. When a match is found at a query node, or at a node determined with reference to a query node, the system displays the context and/or the content of the query node.
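
    The segment-based query idea might look like the following sketch; the node attributes are reduced to two and the documents are invented for illustration.

```python
# Toy version: a document is an ordered list of node segments, each a run
# of format-marker nodes ending in a text node. A keyword query searches
# the last node of every segment and reports its context.

from dataclasses import dataclass

@dataclass
class Node:
    kind: str      # "marker" (format) or "text"
    content: str

segments = [
    [Node("marker", "<h1>"), Node("text", "Corrosion Report")],
    [Node("marker", "<p>"), Node("text", "Sensor nodes detected pitting.")],
    [Node("marker", "<p>"), Node("text", "No anomalies on the wing spar.")],
]

def query(segments, keyword):
    hits = []
    for i, seg in enumerate(segments):
        last = seg[-1]
        if last.kind == "text" and keyword.lower() in last.content.lower():
            # context: the format markers leading up to the matching text
            context = " ".join(n.content for n in seg if n.kind == "marker")
            hits.append((i, context, last.content))
    return hits

print(query(segments, "pitting"))
# [(1, '<p>', 'Sensor nodes detected pitting.')]
```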

  16. Demonstrating the climate4impact portal: bridging the CMIP5 data infrastructure to impact users

    NASA Astrophysics Data System (ADS)

    Plieger, Maarten; Som de Cerff, Wim; Page, Christian; Hutjes, Ronald; de Jong, Fokke; Bärring, Lars; Sjökvist, Elin

    2013-04-01

    Together with seven other partners (CERFACS, CNRS-IPSL, SMHI, INHGA, CMCC, WUR, MF-CNRM), KNMI is involved in the FP7 project IS-ENES (http://is.enes.org), which supports the European climate modeling infrastructure, in the work package 'Bridging Climate Research Data and the Needs of the Impact Community'. The aim of this work package is to enhance the use of climate model data and to enhance the interaction with climate effect/impact communities. The portal is based on 17 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of use case owners. As the climate impact community is very broad, the focus is mainly on the scientific impact community. This work has resulted in a prototype portal, the ENES portal interface for climate impact communities, that can be visited at www.climate4impact.eu. The portal is connected to all Earth System Grid Federation (ESGF) nodes containing global climate model data (GCM data) from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and later from the Coordinated Regional Climate Downscaling Experiment (CORDEX). This global network of all major climate model data centers offers services for data description, discovery and download. The climate4impact portal connects to these services and offers a user interface for searching, visualizing and downloading global climate model data and more. During the project, the content management system Drupal was used to enable partners to contribute to the documentation section. The following topics will be demonstrated:
    - Security: Login using OpenID for access to the ESG data nodes. The ESG works in conjunction with several external websites and systems. The climate4impact portal uses X509-based short-lived credentials, generated on behalf of the user with a MyProxy service. Single Sign-On (SSO) is used to make these websites and systems work together.
    - Discovery: Facetted search based on e.g. variable name, model and institute using the ESG search services. A catalog browser allows for browsing through CMIP5 and other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA).
    - Download: Directly from ESG nodes and other THREDDS catalogs.
    - Visualization: Visualize any data directly using ADAGUC dynamic Web Map Services.
    - Transformation: Transform your data into other formats, and perform basic calculations and extractions using OGC Web Processing Services.
    The current portal is a prototype. It is built to explore state-of-the-art technologies to provide improved access to climate model data. The prototype will be evaluated and is the basis for development of an operational service. The portal and services provided will be sustained and supported during the development of these operational services (2013-2016) in the second phase of the FP7 IS-ENES project, IS-ENES2.
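
    Facetted search of the kind offered by the ESGF search services can be illustrated with a toy in-memory catalogue; the records below are invented.

```python
# Toy facetted search: filter records by facet values and count the
# remaining values of another facet (as a search UI would display).

catalogue = [
    {"variable": "tas", "model": "EC-EARTH", "institute": "KNMI"},
    {"variable": "pr",  "model": "EC-EARTH", "institute": "KNMI"},
    {"variable": "tas", "model": "CNRM-CM5", "institute": "CNRM"},
]

def facet_search(catalogue, **facets):
    return [r for r in catalogue
            if all(r.get(k) == v for k, v in facets.items())]

def facet_counts(records, facet):
    counts = {}
    for r in records:
        counts[r[facet]] = counts.get(r[facet], 0) + 1
    return counts

hits = facet_search(catalogue, variable="tas")
print(hits)                          # two matching records
print(facet_counts(hits, "model"))   # remaining model facet values
```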

  17. Supercomputer Environments

    DTIC Science & Technology

    1990-01-09

    data structures can easily be presented to the user interface. An emphasis of the Graph Browser was the realization of graph views and graph animation ... animation of the graph. Animation of the graph includes changing node shapes, changing node and arc colors, changing node and arc text, and making ... many graphs tend to be tree-like. Animation of a graph is a useful feature. One of the primary goals of GMB was to support animated graphs. For animation ...

  18. Data communications in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  19. Smart Sensor Network for Aircraft Corrosion Monitoring

    DTIC Science & Technology

    2010-02-01

    Network Elements – Hub, Network capable application processor (NCAP) – Node, Smart transducer interface module (STIM) – Corrosion Sensing and ... [diagram residue: IEEE 1451.0/1451.2/1451.3/1451.5/1451.6/1451.7 protocol stack with I/O, node processor, power, TEDS, and smart sensor hub (NCAP)]

  20. Theoretical Analysis of Local Search and Simple Evolutionary Algorithms for the Generalized Travelling Salesperson Problem.

    PubMed

    Pourhassan, Mojgan; Neumann, Frank

    2018-06-22

    The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
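
    To make the Node-Based neighbourhood concrete, here is a minimal sketch (ours, not Hu and Raidl's implementation): the cluster visiting order is held fixed, and the search repeatedly swaps which node is chosen within a cluster as long as the exchange shortens the tour.

```python
import math

def tour_length(tour, dist):
    """Length of the closed tour visiting the given nodes in order."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def node_based_local_search(clusters, order, choice, dist):
    """Keep the cluster order fixed; replace the chosen node inside one
    cluster at a time while the exchange improves the tour."""
    improved = True
    while improved:
        improved = False
        tour = [choice[c] for c in order]
        best = tour_length(tour, dist)
        for i, c in enumerate(order):
            for v in clusters[c]:
                if v == choice[c]:
                    continue
                cand = tour[:i] + [v] + tour[i + 1:]
                length = tour_length(cand, dist)
                if length < best - 1e-12:
                    choice[c], tour, best, improved = v, cand, length, True
    return [choice[c] for c in order]

# tiny instance: three clusters, cluster 0 has two candidate nodes
pts = {0: (0, 0), 1: (0, 10), 2: (1, 0), 3: (0, 1)}
dist = {a: {b: math.dist(pts[a], pts[b]) for b in pts} for a in pts}
clusters = {0: [0, 1], 1: [2], 2: [3]}
tour = node_based_local_search(clusters, [0, 1, 2], {0: 1, 1: 2, 2: 3}, dist)
```

    The Cluster-Based neighbourhood is the complementary move: permute the cluster order while keeping (or re-deriving) the chosen nodes.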

  1. The climate4impact portal: bridging CMIP5 data to impact users

    NASA Astrophysics Data System (ADS)

    Som de Cerff, Wim; Plieger, Maarten; Page, Christian; Hutjes, Ronald; de Jong, Fokke; Barring, Lars; Sjökvist, Elin

    2013-04-01

    Together with seven other partners (CERFACS, CNRS-IPSL, SMHI, INHGA, CMCC, WUR, MF-CNRM), KNMI is involved in the FP7 project IS-ENES (http://is.enes.org), which supports the European climate modeling infrastructure, in the work package 'Bridging Climate Research Data and the Needs of the Impact Community'. The aim of this work package is to enhance the use of climate model data and the interaction with climate effect/impact communities. The portal is based on 17 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of use case owners. As the climate impact community is very broad, the focus is mainly on the scientific impact community. This work has resulted in a prototype portal, the ENES portal interface for climate impact communities, that can be visited at www.climate4impact.eu. The portal is connected to all Earth System Grid Federation (ESGF) nodes containing global climate model data (GCM data) from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and later from the Coordinated Regional Climate Downscaling Experiment (CORDEX). This global network of all major climate model data centers offers services for data description, discovery and download. The climate4impact portal connects to these services and offers a user interface for searching, visualizing and downloading global climate model data and more. A challenging task was to describe the available model data and how it can be used. The portal tries to inform users about possible caveats when using model data. All impact use cases are described in the documentation section, using highlighted keywords pointing to detailed information in the glossary. The current portal is a prototype, built to explore state-of-the-art technologies to provide improved access to climate model data. The prototype will be evaluated and is the basis for development of an operational service.
The portal and services provided will be sustained and supported during the development of these operational services (2013-2016) in the second phase of the FP7 IS-ENES project, IS-ENES2. In this presentation the architecture and the following items will be detailed:
    • Security: login using OpenID for access to the ESGF data nodes. The ESGF works in conjunction with several external websites and systems; the portal provides access to several distributed archives, most importantly the ESGF nodes, and Single Sign-On (SSO) lets these websites and systems work together.
    • Discovery: intelligent search based on e.g. variable name, model, institute. A catalogue browser allows for browsing through CMIP5 and other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA).
    • Download: directly from ESGF nodes and other THREDDS catalogs.
    • Visualization: visualize any data directly on a map (ADAGUC Map services).
    • Transformation: transform your data into other formats, perform basic calculations and extractions.

  2. The portals 4.0.1 network programming interface.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian W.; Brightwell, Ronald Brian; Pedretti, Kevin

    2013-04-01

    This report presents a specification for the Portals 4.0 network programming interface. Portals 4.0 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4.0 is well suited to massively parallel processing and embedded systems. Portals 4.0 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4.0 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  3. Uploading, Searching and Visualizing of Paleomagnetic and Rock Magnetic Data in the Online MagIC Database

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Donadini, F.

    2007-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all available measurements and derived properties from paleomagnetic studies of directions and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and will soon implement two search nodes, one for paleomagnetism and one for rock magnetism. Currently the PMAG node is operational. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. Users can also browse the database by data type or by data compilation to view all contributions associated with well known earlier collections like PINT, GMPDB or PSVRL. The query result set is displayed in a digestible tabular format allowing the user to descend from locations to sites, samples, specimens and measurements. At each stage, the result set can be saved and, where appropriate, can be visualized by plotting global location maps, equal area, XY, age, and depth plots, or typical Zijderveld, hysteresis, magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (version 2.3) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. 
The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process tens of thousands of data records. The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. During the upload process the owner has the option of keeping the contribution private so it can be viewed in the context of other data sets and visualized using the suite of MagIC plotting tools. Alternatively, the new data can be password protected and shared with a group of users at the contributor's discretion. Once they are published and the owner is comfortable making the upload publicly accessible, the MagIC Editing Committee reviews the contribution for adherence to the MagIC data model and conventions to ensure a high level of data integrity.

  4. A finite element study on the mechanical response of the head-neck interface of hip implants under realistic forces and moments of daily activities: Part 1, level walking.

    PubMed

    Farhoudi, Hamidreza; Fallahnezhad, Khosro; Oskouei, Reza H; Taylor, Mark

    2017-11-01

    This paper investigates the mechanical response of a modular head-neck interface of hip joint implants under realistic loads of level walking. The realistic loads of the walking activity consist of three dimensional gait forces and the associated frictional moments. These forces and moments were extracted for a 32 mm metal-on-metal bearing couple. A previously reported geometry of a modular CoCr/CoCr head-neck interface with a proximal contact was used for this investigation. An explicit finite element analysis was performed to investigate the interface mechanical responses. To study the level of contribution and also the effect of superposition of the load components, three different scenarios of loading were studied: gait forces only, frictional moments only, and combined gait forces and frictional moments. Stress field, micro-motions, shear stresses and fretting work at the contacting nodes of the interface were analysed. Gait forces only were found to significantly influence the mechanical environment of the head-neck interface by temporarily extending the contacting area (8.43% of initially non-contacting surface nodes temporarily came into contact), and therefore changing the stress field and resultant micro-motions during the gait cycle. The frictional moments only did not cause considerable changes in the mechanical response of the interface (only 0.27% of the non-contacting surface nodes temporarily came into contact). However, when superposed with the gait forces, the mechanical response of the interface, particularly micro-motions and fretting work, changed compared to the forces-only case. The normal contact stresses and micro-motions obtained from this realistic load-controlled study were typically in the range of 0-275 MPa and 0-38 µm, respectively. These ranges were found comparable to previous experimental displacement-controlled pin/cylinder-on-disk fretting corrosion studies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Architecture and method for a burst buffer using flash technology

    DOEpatents

    Tzelnic, Percy; Faibish, Sorin; Gupta, Uday K.; Bent, John; Grider, Gary Alan; Chen, Hsing-bung

    2016-03-15

    A parallel supercomputing cluster includes compute nodes interconnected in a mesh of data links for executing an MPI job, and solid-state storage nodes each linked to a respective group of the compute nodes for receiving checkpoint data from the respective compute nodes, and magnetic disk storage linked to each of the solid-state storage nodes for asynchronous migration of the checkpoint data from the solid-state storage nodes to the magnetic disk storage. Each solid-state storage node presents a file system interface to the MPI job, and multiple MPI processes of the MPI job write the checkpoint data to a shared file in the solid-state storage in a strided fashion, and the solid-state storage node asynchronously migrates the checkpoint data from the shared file in the solid-state storage to the magnetic disk storage and writes the checkpoint data to the magnetic disk storage in a sequential fashion.
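
    The strided-write-then-sequential-migrate pattern in the claim can be sketched generically (stride size, rank layout, and names are illustrative, not from the patent): each MPI process writes its chunks at rank-offset strides into the shared file on solid-state storage, and the storage node later streams that file to magnetic disk front-to-back.

```python
import io

STRIDE = 8  # bytes each process owns per stripe (illustrative)

def strided_checkpoint(shared, rank, nprocs, payload):
    """Each process writes its chunks at rank-offset strides in the shared file."""
    for i in range(0, len(payload), STRIDE):
        stripe = i // STRIDE
        shared.seek((stripe * nprocs + rank) * STRIDE)
        shared.write(payload[i:i + STRIDE].ljust(STRIDE, b"\0"))

def migrate_sequentially(shared, disk):
    """The storage node asynchronously streams the shared file to disk in order."""
    shared.seek(0)
    disk.write(shared.read())

flash = io.BytesIO()                              # stands in for solid-state storage
strided_checkpoint(flash, 0, 2, b"A" * 16)        # rank 0 writes two stripes
strided_checkpoint(flash, 1, 2, b"B" * 16)        # rank 1 writes two stripes
disk = io.BytesIO()                               # stands in for magnetic disk
migrate_sequentially(flash, disk)
```

    The interleaving in `flash` (A-chunk, B-chunk, A-chunk, B-chunk) shows why the later migration is purely sequential: the strided layout was paid for once, at checkpoint time.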

  6. KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra (facing camera) aids in Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

    NASA Image and Video Library

    2004-02-03

    KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra (facing camera) aids in Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

  7. KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra talks to a technician (off-camera) during Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

    NASA Image and Video Library

    2004-02-03

    KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra talks to a technician (off-camera) during Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

  8. An object-oriented and quadrilateral-mesh based solution adaptive algorithm for compressible multi-fluid flows

    NASA Astrophysics Data System (ADS)

    Zheng, H. W.; Shu, C.; Chew, Y. T.

    2008-07-01

    In this paper, an object-oriented and quadrilateral-mesh based solution adaptive algorithm for the simulation of compressible multi-fluid flows is presented. The HLLC scheme (Harten, Lax and van Leer approximate Riemann solver with the Contact wave restored) is extended to adaptively solve the compressible multi-fluid flows under complex geometry on unstructured mesh. It is also extended to the second-order of accuracy by using MUSCL extrapolation. The node, edge and cell are arranged in such an object-oriented manner that each of them inherits from a basic object. A home-made double link list is designed to manage these objects so that the inserting of new objects and removing of the existing objects (nodes, edges and cells) are independent of the number of objects and only of the complexity of O( 1). In addition, the cells with different levels are further stored in different lists. This avoids the recursive calculation of solution of mother (non-leaf) cells. Thus, high efficiency is obtained due to these features. Besides, as compared to other cell-edge adaptive methods, the separation of nodes would reduce the memory requirement of redundant nodes, especially in the cases where the level number is large or the space dimension is three. Five two-dimensional examples are used to examine its performance. These examples include vortex evolution problem, interface only problem under structured mesh and unstructured mesh, bubble explosion under the water, bubble-shock interaction, and shock-interface interaction inside the cylindrical vessel. Numerical results indicate that there is no oscillation of pressure and velocity across the interface and it is feasible to apply it to solve compressible multi-fluid flows with large density ratio (1000) and strong shock wave (the pressure ratio is 10,000) interaction with the interface.
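
    The O(1) insertion and removal attributed to the home-made double link list is the standard sentinel-based pattern; a minimal sketch (names are ours, not the paper's):

```python
class Cell:
    """One list cell holding a mesh object (node, edge or cell)."""
    __slots__ = ("obj", "prev", "next")
    def __init__(self, obj=None):
        self.obj, self.prev, self.next = obj, None, None

class DoublyLinkedList:
    """Sentinel-based list: insert and unlink cost O(1) given the cell,
    independent of how many objects are stored."""
    def __init__(self):
        self.head = Cell()                       # sentinel
        self.head.prev = self.head.next = self.head

    def insert(self, obj):
        cell = Cell(obj)
        last = self.head.prev                    # append before the sentinel
        last.next, cell.prev = cell, last
        cell.next, self.head.prev = self.head, cell
        return cell          # caller keeps the handle for O(1) removal later

    def remove(self, cell):
        cell.prev.next, cell.next.prev = cell.next, cell.prev

    def items(self):
        cell = self.head.next
        while cell is not self.head:
            yield cell.obj
            cell = cell.next

lst = DoublyLinkedList()
handles = [lst.insert(name) for name in ("node", "edge", "cell")]
lst.remove(handles[1])                           # O(1): no search through the list
```

    The key design choice, as in the paper, is that each mesh object retains a handle to its own list cell, so adaptive refinement and coarsening never scan the list.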

  9. System for solving diagnosis and hitting set problems

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh (Inventor); Fijany, Amir (Inventor)

    2007-01-01

    The diagnosis problem arises when a system's actual behavior contradicts the expected behavior, thereby exhibiting symptoms (a collection of conflict sets). System diagnosis is then the task of identifying faulty components that are responsible for anomalous behavior. To solve the diagnosis problem, the present invention describes a method for finding the minimal set of faulty components (minimal diagnosis set) that explain the conflict sets. The method includes acts of creating a matrix of the collection of conflict sets, and then creating nodes from the matrix such that each node is a node in a search tree. A determination is made as to whether each node is a leaf node or has any children nodes. If any given node has children nodes, then the node is split until all nodes are leaf nodes. Information gathered from the leaf nodes is used to determine the minimal diagnosis set.
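
    A compact way to see the search-tree idea (our illustrative reconstruction, not the patented implementation): each tree node carries a partial hit set, children are obtained by splitting on the elements of the first conflict set not yet hit, leaf nodes yield candidate diagnoses, and a final filter keeps only the subset-minimal ones.

```python
def minimal_hitting_sets(conflicts):
    """Enumerate subset-minimal hitting sets of a collection of conflict sets."""
    sets = [frozenset(c) for c in conflicts]
    leaves = set()

    def expand(hit, remaining):
        rem = [c for c in remaining if not (c & hit)]  # conflicts still unhit
        if not rem:                                    # leaf node of the tree
            leaves.add(frozenset(hit))
            return
        for e in sorted(rem[0]):                       # split on first unhit conflict
            expand(hit | {e}, rem[1:])

    expand(frozenset(), sets)
    # keep only leaves with no strictly smaller leaf contained in them
    return {h for h in leaves if not any(o < h for o in leaves)}

# three pairwise conflicts: every minimal diagnosis needs two components
diagnoses = minimal_hitting_sets([{1, 2}, {2, 3}, {1, 3}])
```

    For the three pairwise conflicts above, the minimal diagnosis sets are exactly the two-element subsets {1,2}, {1,3} and {2,3}.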

  10. The PANTHER User Experience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coram, Jamie L.; Morrow, James D.; Perkins, David Nikolaus

    2015-09-01

    This document describes the PANTHER R&D Application, a proof-of-concept user interface application developed under the PANTHER Grand Challenge LDRD. The purpose of the application is to explore interaction models for graph analytics, drive algorithmic improvements from an end-user point of view, and support demonstration of PANTHER technologies to potential customers. The R&D Application implements a graph-centric interaction model that exposes analysts to the algorithms contained within the GeoGraphy graph analytics library. Users define geospatial-temporal semantic graph queries by constructing search templates based on nodes, edges, and the constraints among them. Users then analyze the results of the queries using both geo-spatial and temporal visualizations. Development of this application has made user experience an explicit driver for project and algorithmic level decisions that will affect how analysts one day make use of PANTHER technologies.

  11. The User Interface: How Does Your Product Look and Feel?

    ERIC Educational Resources Information Center

    Strukhoff, Roger

    1987-01-01

    Discusses the importance of user cordial interfaces to the successful marketing of optical data disk products, and describes features of several online systems. The topics discussed include full text searching, indexed searching, menu driven interfaces, natural language interfaces, computer graphics, and possible future developments. (CLB)

  12. Low latency messages on distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Saltz, Joel

    1993-01-01

    Many of the issues in developing an efficient interface for communication on distributed memory machines are described and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that will better match current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. Information that is transmitted is either data or procedures to be executed on the remote processor. The dual processor system is better suited for efficiently handling asynchronous communications compared to a single processor system. The ability to send either data or procedures provides flexibility for minimizing message latency, based on the type of communication being performed. The tests performed and the proposed interface are described.

  13. An Intuitive Dashboard for Bayesian Network Inference

    NASA Astrophysics Data System (ADS)

    Reddy, Vikas; Charisse Farr, Anna; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K. D. V.

    2014-03-01

    Current Bayesian network software packages provide good graphical interfaces for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing and at times it could be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user-interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using QT and SMILE libraries in C++.

  14. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  15. System for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2006-07-04

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  16. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Elmore, Mark Thomas [Oak Ridge, TN; Reed, Joel Wesley [Knoxville, TN; Treadwell, Jim N [Louisville, TN; Samatova, Nagiza Faridovna [Oak Ridge, TN

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  17. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.

    PubMed

    Cheng, Jing; Xia, Linyuan

    2016-08-31

    Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity, communication overhead in WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on the modification of step size, this approach enables the population to approach global optimal solution rapidly, and the fitness of each solution is employed to build mutation probability for avoiding local convergence. Further, the approach restricts the population in the certain range so that it can prevent the energy consumption caused by insignificant search. Extensive experiments were conducted to study the effects of parameters like anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase convergence rate but also reduce average localization error compared with standard CS algorithm and Particle Swarm Optimization (PSO) algorithm.
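
    As a rough, generic sketch of how cuckoo search applies to range-based localization (standard CS with a Gaussian random walk standing in for the full Lévy flight, and without the paper's step-size and mutation-probability modifications), one can minimize the squared range error to a few anchors:

```python
import math
import random

random.seed(7)
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_node = (3.0, 4.0)
ranges = [math.dist(true_node, a) for a in anchors]  # ideal, noise-free ranges

def error(p):
    """Sum of squared differences between measured and candidate ranges."""
    return sum((math.dist(p, a) - r) ** 2 for a, r in zip(anchors, ranges))

def clamp(v):
    return min(10.0, max(0.0, v))  # restrict search to the deployment area

def cuckoo_search(n_nests=20, iters=300, pa=0.25, step=0.5):
    nests = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(n_nests)]
    for _ in range(iters):
        # propose a new egg per nest by a random walk; keep it if it is better
        for i, (x, y) in enumerate(nests):
            cand = (clamp(x + step * random.gauss(0, 1)),
                    clamp(y + step * random.gauss(0, 1)))
            if error(cand) < error(nests[i]):
                nests[i] = cand
        # abandon the worst fraction pa of nests and rebuild them at random
        nests.sort(key=error)
        for i in range(int((1 - pa) * n_nests), n_nests):
            nests[i] = (random.uniform(0, 10), random.uniform(0, 10))
    return min(nests, key=error)

est = cuckoo_search()  # estimated node position
```

    With noise-free ranges and three non-collinear anchors the objective has a unique zero at the true position, so the estimate converges close to (3, 4).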

  18. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network

    PubMed Central

    Cheng, Jing; Xia, Linyuan

    2016-01-01

    Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity, communication overhead in WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on the modification of step size, this approach enables the population to approach global optimal solution rapidly, and the fitness of each solution is employed to build mutation probability for avoiding local convergence. Further, the approach restricts the population in the certain range so that it can prevent the energy consumption caused by insignificant search. Extensive experiments were conducted to study the effects of parameters like anchor density, node density and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase convergence rate but also reduce average localization error compared with standard CS algorithm and Particle Swarm Optimization (PSO) algorithm. PMID:27589756

  19. An MPI-IO interface to HPSS

    NASA Technical Reports Server (NTRS)

    Jones, Terry; Mark, Richard; Martin, Jeanne; May, John; Pierce, Elsie; Stanberry, Linda

    1996-01-01

    This paper describes an implementation of the proposed MPI-IO (Message Passing Interface - Input/Output) standard for parallel I/O. Our system uses third-party transfer to move data over an external network between the processors where it is used and the I/O devices where it resides. Data travels directly from source to destination, without the need for shuffling it among processors or funneling it through a central node. Our distributed server model lets multiple compute nodes share the burden of coordinating data transfers. The system is built on the High Performance Storage System (HPSS), and a prototype version runs on a Meiko CS-2 parallel computer.

  20. Development and Implementation of Production Area of Agricultural Product Data Collection System Based on Embedded System

    NASA Astrophysics Data System (ADS)

    Xi, Lei; Guo, Wei; Che, Yinchao; Zhang, Hao; Wang, Qiang; Ma, Xinming

    To solve problems in detecting the origin of agricultural products, this paper presents an embedded data terminal, applies middleware concepts, and provides a reusable module for long-range two-way data exchange between business equipment and data acquisition systems. The system consists of data collection nodes and data center nodes. A data collection node takes the embedded data terminal NetBoxII as its core and comprises a data acquisition interface layer, a control information layer and a data exchange layer; it reads data from different front-end acquisition devices and packs the data into TCP packets for exchange with data center nodes over the available physical link (GPRS/CDMA/Ethernet). A data center node comprises a data exchange layer, a data persistence layer and a business interface layer, which persist the collected data and provide standardized data to business systems based on the mapping between collected data and business data. Relying on public communication networks, the system establishes a flow of information between the origin-certification scene and the management center, realizing real-time collection, storage and processing of origin-certification data in the certification organization's databases and meeting the need for long-range monitoring of agricultural origin.
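
    The packing step described above can be illustrated with a fixed binary record layout (the field set and wire format below are our assumption for illustration, not the paper's actual protocol):

```python
import struct

# hypothetical record: node id, unix timestamp, temperature (deg C), humidity (%)
RECORD_FMT = "!HIff"              # network byte order, no padding: 14 bytes
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def pack_record(node_id, timestamp, temp, humidity):
    """Serialize one sample into the fixed-size frame sent over the TCP link."""
    return struct.pack(RECORD_FMT, node_id, timestamp, temp, humidity)

def unpack_record(frame):
    """Recover the sample fields at the data center node."""
    return struct.unpack(RECORD_FMT, frame)

frame = pack_record(42, 1_700_000_000, 21.5, 63.0)
fields = unpack_record(frame)
```

    A fixed-size frame like this keeps the receiving side trivial: the data center node reads the socket in multiples of `RECORD_SIZE` and unpacks each slice independently.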

  1. Evaluation of a Novel Conjunctive Exploratory Navigation Interface for Consumer Health Information: A Crowdsourced Comparative Study

    PubMed Central

    Cui, Licong; Carter, Rebecca

    2014-01-01

    Background Numerous consumer health information websites have been developed to provide consumers access to health information. However, lookup search is insufficient for consumers to take full advantage of these rich public information resources. Exploratory search is considered a promising complementary mechanism, but its efficacy has never before been rigorously evaluated for consumer health information retrieval interfaces. Objective This study aims to (1) introduce a novel Conjunctive Exploratory Navigation Interface (CENI) for supporting effective consumer health information retrieval and navigation, and (2) evaluate the effectiveness of CENI through a search-interface comparative evaluation using crowdsourcing with Amazon Mechanical Turk (AMT). Methods We collected over 60,000 consumer health questions from NetWellness, one of the first consumer health websites to provide high-quality health information. We designed and developed a novel conjunctive exploratory navigation interface to explore NetWellness health questions with health topics as dynamic and searchable menus. To investigate the effectiveness of CENI, we developed a second interface with keyword-based search only. A crowdsourcing comparative study was carefully designed to compare three search modes of interest: (A) the topic-navigation-based CENI, (B) the keyword-based lookup interface, and (C) either the most commonly available lookup search interface with Google, or the resident advanced search offered by NetWellness. To compare the effectiveness of the three search modes, 9 search tasks were designed with relevant health questions from NetWellness. Each task included a rating of difficulty level and questions for validating the quality of answers. Ninety anonymous and unique AMT workers were recruited as participants. Results Repeated-measures ANOVA analysis of the data showed the search modes A, B, and C had statistically significant differences among their levels of difficulty (P<.001). 
A one-tailed Wilcoxon signed-rank test between A and B showed that A was significantly easier than B (P<.001). One-tailed paired t tests between A and C showed that A was significantly easier than C (P<.001). Responses on the preferred search modes showed that 47.8% (43/90) of participants preferred A, 25.6% (23/90) preferred B, and 24.4% (22/90) preferred C. Participant comments on the preferred search modes indicated that CENI was easy to use, provided better organization of health questions by topics, allowed users to narrow down to the most relevant content quickly, and supported exploratory navigation by non-experts or those unsure how to initiate their search. Conclusions We presented a novel conjunctive exploratory navigation interface for consumer health information retrieval and navigation. Crowdsourcing permitted a carefully designed comparative search-interface evaluation to be completed in a timely and cost-effective manner with a relatively large number of participants recruited anonymously. Accounting for possible biases, our study has shown for the first time with crowdsourcing that the combination of exploratory navigation and lookup search is more effective than lookup search alone. PMID:24513593
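The paired significance tests reported above are straightforward to reproduce with the standard library. A minimal sketch, using hypothetical per-task difficulty ratings (the study's raw ratings are not given here); a one-tailed P value would then be read from the t distribution with n-1 degrees of freedom:

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic for ratings x vs. y (lower rating = easier)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = statistics.fmean(diffs)
    sd_d = statistics.stdev(diffs)          # sample standard deviation
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical per-task difficulty ratings (1 = very easy, 5 = very hard)
mode_a = [1, 2, 1, 2, 1, 2, 1, 1, 2]
mode_c = [3, 4, 2, 3, 4, 3, 2, 3, 4]
t_stat = paired_t(mode_a, mode_c)
print(round(t_stat, 2))   # strongly negative: mode A rated easier
```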

  2. Evaluation of a novel Conjunctive Exploratory Navigation Interface for consumer health information: a crowdsourced comparative study.

    PubMed

    Cui, Licong; Carter, Rebecca; Zhang, Guo-Qiang

    2014-02-10

    Numerous consumer health information websites have been developed to provide consumers access to health information. However, lookup search is insufficient for consumers to take full advantage of these rich public information resources. Exploratory search is considered a promising complementary mechanism, but its efficacy has never before been rigorously evaluated for consumer health information retrieval interfaces. This study aims to (1) introduce a novel Conjunctive Exploratory Navigation Interface (CENI) for supporting effective consumer health information retrieval and navigation, and (2) evaluate the effectiveness of CENI through a search-interface comparative evaluation using crowdsourcing with Amazon Mechanical Turk (AMT). We collected over 60,000 consumer health questions from NetWellness, one of the first consumer health websites to provide high-quality health information. We designed and developed a novel conjunctive exploratory navigation interface to explore NetWellness health questions with health topics as dynamic and searchable menus. To investigate the effectiveness of CENI, we developed a second interface with keyword-based search only. A crowdsourcing comparative study was carefully designed to compare three search modes of interest: (A) the topic-navigation-based CENI, (B) the keyword-based lookup interface, and (C) either the most commonly available lookup search interface with Google, or the resident advanced search offered by NetWellness. To compare the effectiveness of the three search modes, 9 search tasks were designed with relevant health questions from NetWellness. Each task included a rating of difficulty level and questions for validating the quality of answers. Ninety anonymous and unique AMT workers were recruited as participants. Repeated-measures ANOVA analysis of the data showed the search modes A, B, and C had statistically significant differences among their levels of difficulty (P<.001). 
A one-tailed Wilcoxon signed-rank test between A and B showed that A was significantly easier than B (P<.001). One-tailed paired t tests between A and C showed that A was significantly easier than C (P<.001). Responses on the preferred search modes showed that 47.8% (43/90) of participants preferred A, 25.6% (23/90) preferred B, and 24.4% (22/90) preferred C. Participant comments on the preferred search modes indicated that CENI was easy to use, provided better organization of health questions by topics, allowed users to narrow down to the most relevant content quickly, and supported exploratory navigation by non-experts or those unsure how to initiate their search. We presented a novel conjunctive exploratory navigation interface for consumer health information retrieval and navigation. Crowdsourcing permitted a carefully designed comparative search-interface evaluation to be completed in a timely and cost-effective manner with a relatively large number of participants recruited anonymously. Accounting for possible biases, our study has shown for the first time with crowdsourcing that the combination of exploratory navigation and lookup search is more effective than lookup search alone.

  3. Analytic Modeling of Pressurization and Cryogenic Propellant

    NASA Technical Reports Server (NTRS)

    Corpening, Jeremy H.

    2010-01-01

An analytic model for pressurization and cryogenic propellant conditions during all mission phases of any liquid-rocket-based vehicle has been developed and validated. The model assumes the propellant tanks to be divided into five nodes and also implements an empirical correlation for liquid stratification if desired. The five nodes include a tank wall node exposed to ullage gas, an ullage gas node, a saturated propellant vapor node at the liquid-vapor interface, a liquid node, and a tank wall node exposed to liquid. The conservation equations of mass and energy are then applied across all the node boundaries and, with the use of perfect gas assumptions, explicit solutions for ullage and liquid conditions are derived. All fluid properties are updated in real time using NIST REFPROP. Further, mass transfer at the liquid-vapor interface is included in the form of evaporation, bulk boiling of liquid propellant, and condensation given the appropriate conditions for each. Model validation has proven highly successful against previous analytic models and various Saturn-era test data, and reasonably successful against more recent LH2 tank self-pressurization ground test data. Finally, this model has been applied to numerous design iterations for the Altair Lunar Lander, Ares V Core Stage, and Ares V Earth Departure Stage in order to characterize helium and autogenous pressurant requirements, propellant lost to evaporation and thermodynamic venting to maintain propellant conditions, and non-uniform tank draining in configurations utilizing multiple LH2 or LO2 propellant tanks. In conclusion, this model provides an accurate and efficient means of analyzing multiple design configurations for any cryogenic propellant tank in launch, low-acceleration coast, or in-space maneuvering, and supplies the user with pressurization requirements, unusable propellants from evaporation and liquid stratification, and general ullage gas, liquid, and tank wall conditions as functions of time.
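The node/balance structure described above lends itself to a compact numerical sketch. The following is a minimal, hypothetical illustration of a single perfect-gas ullage node advanced by explicit mass and energy balances; the constants, flows, and heat leak are invented, and this is not the validated NASA model:

```python
# Minimal sketch of a perfect-gas ullage node update (hypothetical numbers):
# explicit time marching of mass and energy balances, with pressure
# recovered from the ideal gas law p = m*R*T/V.
R_HE = 2077.0        # J/(kg*K), helium gas constant
CV_HE = 3116.0       # J/(kg*K), helium specific heat at constant volume

def step_ullage(m, T, V, mdot_in, T_in, q_wall, dt):
    """Advance ullage mass/temperature one step; return (m, T, p)."""
    m_new = m + mdot_in * dt                       # mass balance
    cp = CV_HE + R_HE                              # perfect gas: cp = cv + R
    # Energy balance: incoming pressurant enthalpy plus wall heat leak.
    u = m * CV_HE * T + (mdot_in * cp * T_in + q_wall) * dt
    T_new = u / (m_new * CV_HE)
    p_new = m_new * R_HE * T_new / V               # perfect gas law
    return m_new, T_new, p_new

m, T, V = 0.5, 250.0, 2.0      # kg, K, m^3 (hypothetical initial state)
for _ in range(10):
    m, T, p = step_ullage(m, T, V, mdot_in=0.01, T_in=300.0, q_wall=50.0, dt=1.0)
print(f"ullage pressure after 10 s: {p:.0f} Pa")
```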

  4. Finding paths in tree graphs with a quantum walk

    NASA Astrophysics Data System (ADS)

    Koch, Daniel; Hillery, Mark

    2018-01-01

    We analyze the potential for different types of searches using the formalism of scattering random walks on quantum computers. Given a particular type of graph consisting of nodes and connections, a "tree maze," we would like to find a selected final node as quickly as possible, faster than any classical search algorithm. We show that this can be done using a quantum random walk, both through numerical calculations as well as by using the eigenvectors and eigenvalues of the quantum system.

  5. A Strategy Based on Protein-Protein Interface Motifs May Help in Identifying Drug Off-Targets

    PubMed Central

    Engin, H. Billur; Keskin, Ozlem; Nussinov, Ruth; Gursoy, Attila

    2014-01-01

Networks are increasingly used to study the impact of drugs at the systems level. From the algorithmic standpoint, a drug can ‘attack’ nodes or edges of a protein-protein interaction network. In this work, we propose a new network strategy, “The Interface Attack”, based on protein-protein interfaces. Similar interface architectures can occur between unrelated proteins. Consequently, in principle, a drug that binds to one has a certain probability of binding others. The interface attack strategy simultaneously removes from the network all interactions that consist of similar interface motifs. This strategy is inspired by network pharmacology and allows inferring potential off-targets. We introduce a network model which we call “Protein Interface and Interaction Network (P2IN)”, which is the integration of protein-protein interface structures and protein interaction networks. This interface-based network organization clarifies which protein pairs have structurally similar interfaces, and which proteins may compete to bind the same surface region. We built the P2IN of the p53 signaling network and performed network robustness analysis. We show that (1) ‘hitting’ frequent interfaces (a set of edges distributed around the network) might be as destructive as eliminating high-degree proteins (hub nodes); (2) frequent interfaces are not always topologically critical elements in the network; and (3) interface attack may reveal functional changes in the system better than attack of single proteins. In the off-target detection case study, we found that drugs blocking the interface between CDK6 and CDKN2D may also affect the interaction between CDK4 and CDKN2D. PMID:22817115

  6. The North Carolina State University Libraries Search Experience: Usability Testing Tabbed Search Interfaces for Academic Libraries

    ERIC Educational Resources Information Center

    Teague-Rector, Susan; Ballard, Angela; Pauley, Susan K.

    2011-01-01

    Creating a learnable, effective, and user-friendly library Web site hinges on providing easy access to search. Designing a search interface for academic libraries can be particularly challenging given the complexity and range of searchable library collections, such as bibliographic databases, electronic journals, and article search silos. Library…

  7. When is a search not a search? A comparison of searching the AMED complementary health database via EBSCOhost, OVID and DIALOG.

    PubMed

    Younger, Paula; Boddy, Kate

    2009-06-01

The researchers involved in this study work at Exeter Health Library and at the Complementary Medicine Unit, Peninsula College of Medicine and Dentistry (PCMD). Within this collaborative environment it is possible to access the electronic resources of three institutions. This includes access to AMED and other databases using different interfaces. The aim of this study was to investigate whether searching different interfaces to the AMED allied health and complementary medicine database produced the same results when using identical search terms. The following Internet-based AMED interfaces were searched: DIALOG DataStar; EBSCOhost and OVID SP_UI01.00.02. Search results from all three databases were saved in an EndNote database to facilitate analysis. A checklist was also compiled comparing interface features. In our initial search, DIALOG returned 29 hits, OVID 14, and EBSCOhost 8. If we assume that DIALOG returned 100% of potential hits, OVID initially returned only 48% of hits and EBSCOhost only 28%. In our search, a researcher using the EBSCOhost interface to carry out a simple search on AMED would miss over 70% of possible search hits. Subsequent EBSCOhost searches on different subjects failed to find between 21% and 86% of the hits retrieved using the same keywords via DIALOG DataStar. In two cases, the simple EBSCOhost search failed to find any of the results found via DIALOG DataStar. Depending on the interface, the number of hits retrieved from the same database with the same simple search can vary dramatically. Some simple searches fail to retrieve a substantial percentage of citations. This may result in an uninformed literature review, research funding application, or treatment intervention.
In addition to ensuring that keywords, spelling and medical subject headings (MeSH) accurately reflect the nature of the search, database users should include wildcards and truncation and adapt their search strategy substantially to retrieve the maximum number of appropriate citations possible. Librarians should be aware of these differences when making purchasing decisions, carrying out literature searches and planning user education.
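The retrieval gap the authors quantify is simple arithmetic over the reported hit counts, taking DIALOG as the 100% baseline:

```python
# Recomputing the coverage percentages quoted in the abstract, treating the
# DIALOG result count as the 100% baseline.
hits = {"DIALOG": 29, "OVID": 14, "EBSCOhost": 8}
baseline = hits["DIALOG"]
for interface, n in hits.items():
    pct = 100 * n / baseline
    print(f"{interface}: {n} hits ({pct:.0f}% of baseline)")
```

This reproduces the abstract's figures: OVID at 48% and EBSCOhost at 28% of the DIALOG hit count.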

  8. A Software Implementation of a Satellite Interface Message Processor.

    ERIC Educational Resources Information Center

    Eastwood, Margaret A.; Eastwood, Lester F., Jr.

    A design for network control software for a computer network is described in which some nodes are linked by a communications satellite channel. It is assumed that the network has an ARPANET-like configuration; that is, that specialized processors at each node are responsible for message switching and network control. The purpose of the control…

  9. A finite elements method to solve the Bloch-Torrey equation applied to diffusion magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Nguyen, Dang Van; Li, Jing-Rebecca; Grebenkov, Denis; Le Bihan, Denis

    2014-04-01

    The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite elements method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch-Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge-Kutta-Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.
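The "double node" treatment of a permeable interface can be illustrated in one dimension. The sketch below is a hypothetical finite-difference analogue (not the paper's finite element FEniCS solver): two compartments keep separate unknowns at the shared interface location, coupled only through a permeability flux, so the solution may jump across the interface:

```python
# 1D sketch of the 'double node' idea for a permeable interface
# (hypothetical parameters): two compartments share the location x = 1 but
# keep separate unknowns there, coupled by a flux kappa*(u_left - u_right).
D, kappa = 1.0, 0.5          # diffusivity, interface permeability
n, L = 50, 1.0               # points per compartment, compartment length
dx = L / (n - 1)
dt = 0.4 * dx * dx / D       # stable explicit step
left = [1.0] * n             # compartment 1 starts fully magnetized
right = [0.0] * n            # compartment 2 starts empty

def step(left, right):
    new_l, new_r = left[:], right[:]
    for i in range(1, n - 1):                 # interior diffusion updates
        new_l[i] = left[i] + D * dt / dx**2 * (left[i-1] - 2*left[i] + left[i+1])
        new_r[i] = right[i] + D * dt / dx**2 * (right[i-1] - 2*right[i] + right[i+1])
    flux = kappa * (left[-1] - right[0])      # leakage across the interface
    # Interface double nodes (half cells): interior diffusion + exchange flux.
    new_l[-1] = left[-1] + dt * (D * (left[-2] - left[-1]) / dx - flux) / (dx / 2)
    new_r[0] = right[0] + dt * (D * (right[1] - right[0]) / dx + flux) / (dx / 2)
    # Outer boundaries: zero-flux (reflecting).
    new_l[0], new_r[-1] = new_l[1], new_r[-2]
    return new_l, new_r

for _ in range(2000):
    left, right = step(left, right)
print(f"right-compartment mean after exchange: {sum(right)/n:.3f}")
```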

  10. Fast correspondences search in anatomical trees

    NASA Astrophysics Data System (ADS)

    dos Santos, Thiago R.; Gergel, Ingmar; Meinzer, Hans-Peter; Maier-Hein, Lena

    2010-03-01

Registration of multiple medical images commonly comprises the steps of feature extraction, correspondence search, and transformation computation. In this paper, we present a new method for a fast and pose-independent search of correspondences, using as features anatomical trees such as the bronchial system in the lungs or the vessel system in the liver. Our approach scores the similarities between the trees' nodes (bifurcations), taking into account both topological properties extracted from their graph representations and anatomical properties extracted from the trees themselves. The node assignment maximizes the global similarity (the sum of the scores of each pair of assigned nodes), assuring that the matches are distributed throughout the trees. Furthermore, the proposed method is able to deal with distortions in the data, such as noise, motion, and artifacts, and with problems associated with the extraction method, such as missing or false branches. According to an evaluation on swine lung data sets, the method requires less than one second on average to compute the matching and yields a high rate of correct matches compared to state-of-the-art work.
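The assignment step, maximizing the sum of pairwise node scores, can be sketched for toy trees by exhaustive search over permutations (real implementations use combinatorial optimization; the scores below are invented, not the paper's topological/anatomical features):

```python
from itertools import permutations

# Toy sketch of the correspondence-assignment step: given a similarity score
# for every (node_a, node_b) pair, pick the one-to-one assignment that
# maximizes the global similarity.
scores = {
    ("a1", "b1"): 0.9, ("a1", "b2"): 0.1, ("a1", "b3"): 0.2,
    ("a2", "b1"): 0.3, ("a2", "b2"): 0.8, ("a2", "b3"): 0.4,
    ("a3", "b1"): 0.2, ("a3", "b2"): 0.5, ("a3", "b3"): 0.7,
}
tree_a = ["a1", "a2", "a3"]
tree_b = ["b1", "b2", "b3"]

best = max(
    permutations(tree_b),
    key=lambda perm: sum(scores[(a, b)] for a, b in zip(tree_a, perm)),
)
matching = dict(zip(tree_a, best))
print(matching)   # {'a1': 'b1', 'a2': 'b2', 'a3': 'b3'}
```

Exhaustive search is factorial in the number of nodes; for real bronchial or vessel trees one would solve the same objective with an assignment algorithm instead.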

  11. Finding and Exploring Health Information with a Slider-Based User Interface.

    PubMed

    Pang, Patrick Cheong-Iao; Verspoor, Karin; Pearce, Jon; Chang, Shanton

    2016-01-01

    Despite the fact that search engines are the primary channel to access online health information, there are better ways to find and explore health information on the web. Search engines are prone to problems when they are used to find health information. For instance, users have difficulties in expressing health scenarios with appropriate search keywords, search results are not optimised for medical queries, and the search process does not account for users' literacy levels and reading preferences. In this paper, we describe our approach to addressing these problems by introducing a novel design using a slider-based user interface for discovering health information without the need for precise search keywords. The user evaluation suggests that the interface is easy to use and able to assist users in the process of discovering new information. This study demonstrates the potential value of adopting slider controls in the user interface of health websites for navigation and information discovery.

  12. Mirador: A Simple, Fast Search Interface for Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Strub, Richard; Seiler, Edward; Joshi, Talak; MacHarrie, Peter

    2008-01-01

A major challenge for remote sensing science researchers is searching and acquiring relevant data files for their research projects based on content, space and time constraints. Several structured query (SQ) and hierarchical navigation (HN) search interfaces have been developed to satisfy this requirement, yet the dominant search engines in the general domain are based on free-text search. The Goddard Earth Sciences Data and Information Services Center has developed a free-text search interface named Mirador that supports space-time queries, including a gazetteer and geophysical event gazetteer. In order to compensate for a slightly reduced search precision relative to SQ and HN techniques, Mirador uses several search optimizations to return results quickly. The quick response enables a more iterative search strategy than is available with many SQ and HN techniques.

  13. Online Exhibits & Concept Maps

    NASA Astrophysics Data System (ADS)

    Douma, M.

    2009-12-01

    Presenting the complexity of geosciences to the public via the Internet poses a number of challenges. For example, utilizing various - and sometimes redundant - Web 2.0 tools can quickly devour limited time. Do you tweet? Do you write press releases? Do you create an exhibit or concept map? The presentation will provide participants with a context for utilizing Web 2.0 tools by briefly highlighting methods of online scientific communication across several dimensions. It will address issues of: * breadth and depth (e.g. from narrow topics to well-rounded views), * presentation methods (e.g. from text to multimedia, from momentary to enduring), * sources and audiences (e.g. for experts or for the public, content developed by producers to that developed by users), * content display (e.g. from linear to non-linear, from instructive to entertaining), * barriers to entry (e.g. from an incumbent advantage to neophyte accessible, from amateur to professional), * cost and reach (e.g. from cheap to expensive), and * impact (e.g. the amount learned, from anonymity to brand awareness). Against this backdrop, the presentation will provide an overview of two methods of online information dissemination, exhibits and concept maps, using the WebExhibits online museum (www.webexhibits.org) and SpicyNodes information visualization tool (www.spicynodes.org) as examples, with tips on how geoscientists can use either to communicate their science. Richly interactive online exhibits can serve to engage a large audience, appeal to visitors with multiple learning styles, prompt exploration and discovery, and present a topic’s breadth and depth. WebExhibits, which was among the first online museums, delivers interactive information, virtual experiments, and hands-on activities to the public. 
While large, multidisciplinary exhibits on topics like “Color Vision and Art” or “Calendars Through the Ages” require teams of scholars, user interface experts, professional writers and editors, teachers, artists, and web designers, a smaller scale collaborative effort can result in an effective mini-exhibit. Online concept maps can present a large quantity of information in bite-size chunks, demonstrating interrelationships between pieces of information without inundating visitors. SpicyNodes uses radial mapping technology to enable visitors to learn about a topic or search for information in intuitive and organic ways. This online concept mapping tool can be used as a portal to invite exploration into topics, or as a means of displaying hierarchies of information. With nodes that contain text, audio, video, and links, interactive online concept maps especially engage visual, kinesthetic, and nonlinear learners. SpicyNodes is also useful for scientists who wish to complement papers, chapters, and books with an online interface that is especially appealing to nonlinear learners. Essentially, SpicyNodes shifts the burden of discovery from the reader to the author. For example, the author may create a nodemap on climate change with hundreds of nodes, but as visitors drill through the nodemap for information (e.g. from climate change to atmospheric gases to carbon dioxide), they see only a few nodes at a time and are not overwhelmed.

KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra (second from right) talks with workers in the Space Station Processing Facility about the Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

    NASA Image and Video Library

    2004-02-03

KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra (second from right) talks with workers in the Space Station Processing Facility about the Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

  15. Organizing Diverse, Distributed Project Information

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    2003-01-01

SemanticOrganizer is a software application designed to organize and integrate information generated within a distributed organization or as part of a project that involves multiple, geographically dispersed collaborators. SemanticOrganizer incorporates the capabilities of database storage, document sharing, hypermedia navigation, and semantic-interlinking into a system that can be customized to satisfy the specific information-management needs of different user communities. The program provides a centralized repository of information that is both secure and accessible to project collaborators via the World Wide Web. SemanticOrganizer's repository can be used to collect diverse information (including forms, documents, notes, data, spreadsheets, images, and sounds) from computers at collaborators' work sites. The program organizes the information using a unique network-structured conceptual framework, wherein each node represents a data record that contains not only the original information but also metadata (in effect, standardized data that characterize the information). Links among nodes express semantic relationships among the data records. The program features a Web interface through which users enter, interlink, and/or search for information in the repository. By use of this repository, the collaborators have immediate access to the most recent project information, as well as to archived information. A key advantage of SemanticOrganizer is its ability to interlink information in a natural fashion, using customized terminology and concepts that are familiar to a user community.
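The network-structured framework described above can be sketched as a small graph store. The record names and link types below are illustrative only, not SemanticOrganizer's actual schema:

```python
from collections import defaultdict

# Sketch of a node/link repository: nodes carry metadata records, typed
# links express semantic relationships, and queries walk the links.
nodes = {}                     # id -> metadata record
links = defaultdict(list)     # id -> [(relation, target_id), ...]

def add_node(node_id, **metadata):
    nodes[node_id] = metadata

def add_link(src, relation, dst):
    links[src].append((relation, dst))

def related(node_id, relation):
    """All nodes reachable from node_id via the given link type."""
    return [dst for rel, dst in links[node_id] if rel == relation]

add_node("sample-42", kind="rock sample", site="field site A")
add_node("photo-7", kind="image", format="jpeg")
add_node("note-3", kind="field note", author="a collaborator")
add_link("sample-42", "depicted-by", "photo-7")
add_link("sample-42", "described-by", "note-3")

print(related("sample-42", "depicted-by"))   # ['photo-7']
```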

  16. Planetary Data Systems (PDS) Imaging Node Atlas II

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice; McAuley, James M.

    2013-01-01

The Planetary Image Atlas (PIA) is a Rich Internet Application (RIA) that serves planetary imaging data to the science community and the general public. PIA also utilizes the USGS Unified Planetary Coordinate system (UPC) and the on-Mars map server. The Atlas was designed to provide the ability to search and filter through greater than 8 million planetary image files. This software is a three-tier Web application that contains a search engine backend (MySQL, Java), Web service interface (SOAP) between server and client, and a GWT Google Maps API client front end. This application allows for the search, retrieval, and download of planetary images and associated meta-data from the following missions: 2001 Mars Odyssey, Cassini, Galileo, LCROSS, Lunar Reconnaissance Orbiter, Mars Exploration Rover, Mars Express, Magellan, Mars Global Surveyor, Mars Pathfinder, Mars Reconnaissance Orbiter, MESSENGER, Phoenix, Viking Lander, Viking Orbiter, and Voyager. The Atlas utilizes the UPC to translate mission-specific coordinate systems into a unified coordinate system, allowing the end user to query across missions of similar targets. If desired, the end user can also use a mission-specific view of the Atlas. The mission-specific views rely on the same code base. This application is a major improvement over the initial version of the Planetary Image Atlas. It is a multi-mission search engine. This tool includes both basic and advanced search capabilities, providing a product search tool to interrogate the collection of planetary images. This tool lets the end user query information about each image, and ignores the data that the user has no interest in. Users can reduce the number of images to look at by defining an area of interest with latitude and longitude ranges.
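The area-of-interest query can be sketched as a simple range filter over image metadata. The field names and records below are hypothetical, not the Atlas schema:

```python
# Sketch of an area-of-interest filter: keep images whose center coordinates
# fall inside user-supplied latitude/longitude ranges. Records are invented.
images = [
    {"id": "PIA001", "mission": "Viking Orbiter", "lat": 18.4, "lon": 226.7},
    {"id": "PIA002", "mission": "Mars Odyssey",  "lat": -14.6, "lon": 175.5},
    {"id": "PIA003", "mission": "MESSENGER",     "lat": 60.2, "lon": 310.0},
]

def in_area(img, lat_range, lon_range):
    return (lat_range[0] <= img["lat"] <= lat_range[1]
            and lon_range[0] <= img["lon"] <= lon_range[1])

hits = [img["id"] for img in images if in_area(img, (-30, 30), (150, 250))]
print(hits)   # ['PIA001', 'PIA002']
```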

  17. Model of myosin node aggregation into a contractile ring: the effect of local alignment

    NASA Astrophysics Data System (ADS)

    Ojkic, Nikola; Wu, Jian-Qiu; Vavylonis, Dimitrios

    2011-09-01

    Actomyosin bundles frequently form through aggregation of membrane-bound myosin clusters. One such example is the formation of the contractile ring in fission yeast from a broad band of cortical nodes. Nodes are macromolecular complexes containing several dozens of myosin-II molecules and a few formin dimers. The condensation of a broad band of nodes into the contractile ring has been previously described by a search, capture, pull and release (SCPR) model. In SCPR, a random search process mediated by actin filaments nucleated by formins leads to transient actomyosin connections among nodes that pull one another into a ring. The SCPR model reproduces the transport of nodes over long distances and predicts observed clump-formation instabilities in mutants. However, the model does not generate transient linear elements and meshwork structures as observed in some wild-type and mutant cells during ring assembly. As a minimal model of node alignment, we added short-range aligning forces to the SCPR model representing currently unresolved mechanisms that may involve structural components, cross-linking and bundling proteins. We studied the effect of the local node alignment mechanism on ring formation numerically. We varied the new parameters and found viable rings for a realistic range of values. Morphologically, transient structures that form during ring assembly resemble those observed in experiments with wild-type and cdc25-22 cells. Our work supports a hierarchical process of ring self-organization involving components drawn together from distant parts of the cell followed by progressive stabilization.
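The added short-range aligning interaction can be illustrated with a toy relaxation. The parameters and the nematic-style torque below are invented for illustration, not the paper's fitted model, and the SCPR transport dynamics are omitted:

```python
import math

# Toy local-alignment rule: each node carries an orientation theta; nodes
# closer than r_align relax their orientations toward one another.
r_align, k_align, dt = 0.5, 2.0, 0.01

nodes = [
    {"x": 0.0, "y": 0.0, "theta": 0.0},
    {"x": 0.3, "y": 0.1, "theta": 1.0},
    {"x": 5.0, "y": 5.0, "theta": 2.0},   # too far away to interact
]

def align_step(nodes):
    torques = [0.0] * len(nodes)
    for i, a in enumerate(nodes):
        for j, b in enumerate(nodes):
            if i == j:
                continue
            if math.hypot(a["x"] - b["x"], a["y"] - b["y"]) < r_align:
                # Torque pulling theta_i toward theta_j.
                torques[i] += k_align * math.sin(b["theta"] - a["theta"])
    for node, tq in zip(nodes, torques):
        node["theta"] += dt * tq

for _ in range(500):
    align_step(nodes)

print(round(nodes[0]["theta"], 2), round(nodes[1]["theta"], 2))
```

The two nearby nodes converge to a common orientation (their average, 0.5), while the distant node is unaffected, which is the qualitative effect the added force term is meant to capture.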

  18. ESTminer: a Web interface for mining EST contig and cluster databases.

    PubMed

    Huang, Yecheng; Pumphrey, Janie; Gingle, Alan R

    2005-03-01

ESTminer is a Web application and database schema for interactive mining of expressed sequence tag (EST) contig and cluster datasets. The Web interface contains a query frame that allows the selection of contigs/clusters with specific cDNA library makeup or a threshold number of members. The results are displayed as color-coded tree nodes, where the color indicates the fractional size of each cDNA library component. The nodes are expandable, revealing library statistics as well as EST or contig members, with links to sequence data, GenBank records or user-configurable links. Also, the interface allows 'queries within queries', where the result set of a query is further filtered by the subsequent query. ESTminer is implemented in Java/JSP and the package, including MySQL and Oracle schema creation scripts, is available from http://cggc.agtec.uga.edu/Data/download.asp (contact: agingle@uga.edu).
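The two query dimensions, library makeup and member threshold, and the 'queries within queries' chaining can be sketched as successive filters. The dataset and function names are invented for illustration, not ESTminer's schema:

```python
# Sketch of chained contig queries: each filter takes the previous result
# set and narrows it further ('queries within queries').
contigs = {
    "contig1": {"libA": 12, "libB": 3},
    "contig2": {"libB": 5},
    "contig3": {"libA": 2, "libB": 2, "libC": 9},
}

def with_library(result, lib):
    """Contigs containing ESTs from the given cDNA library."""
    return {c: m for c, m in result.items() if lib in m}

def with_min_members(result, n):
    """Contigs with at least n member ESTs in total."""
    return {c: m for c, m in result.items() if sum(m.values()) >= n}

step1 = with_library(contigs, "libB")    # all three contigs contain libB
step2 = with_min_members(step1, 10)      # refine the previous result set
print(sorted(step2))   # ['contig1', 'contig3']
```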

  19. Prognostic Value of Lymph Node Yield and Density in Head and Neck Malignancies.

    PubMed

    Cheraghlou, Shayan; Otremba, Michael; Kuo Yu, Phoebe; Agogo, George O; Hersey, Denise; Judson, Benjamin L

    2018-06-01

    Objective Studies have suggested that the lymph node yield and lymph node density from selective or elective neck dissections are predictive of patient outcomes and may be used for patient counseling, treatment planning, or quality measurement. Our objective was to systematically review the literature and conduct a meta-analysis of studies that investigated the prognostic significance of lymph node yield and/or lymph node density after neck dissection for patients with head and neck cancer. Data Sources The Ovid/Medline, Ovid/Embase, and NLM PubMed databases were systematically searched on January 23, 2017, for articles published between January 1, 1946, and January 23, 2017. Review Methods We reviewed English-language original research that included survival analysis of patients undergoing neck dissection for a head and neck malignancy stratified by lymph node yield and/or lymph node density. Study data were extracted by 2 independent researchers (S.C. and M.O.). We utilized the DerSimonian and Laird random effects model to account for heterogeneity of studies. Results Our search yielded 350 nonduplicate articles, with 23 studies included in the final synthesis. Pooled results demonstrated that increased lymph node yield was associated with a significant improvement in survival (hazard ratio, 0.833; 95% CI, 0.790-0.879). Additionally, we found that increased lymph node density was associated with poorer survival (hazard ratio, 1.916; 95% CI, 1.637-2.241). Conclusions Increased nodal yield portends improved outcomes and may be a valuable quality indicator for neck dissections, while increased lymph node density is associated with diminished survival and may be used for postsurgical counseling and planning for adjuvant therapy.
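Lymph node density, one of the two quantities pooled above, is a simple ratio: metastatic (positive) nodes divided by the total number of nodes examined (the yield). A trivial sketch with hypothetical patient numbers:

```python
# Lymph node density = positive nodes / total nodes examined (the yield).
# The patient numbers below are hypothetical, not study data.
def lymph_node_density(positive_nodes, nodes_yielded):
    if nodes_yielded <= 0:
        raise ValueError("node yield must be positive")
    return positive_nodes / nodes_yielded

# e.g. 4 positive nodes found in a 32-node neck dissection
density = lymph_node_density(4, 32)
print(f"lymph node density: {density:.3f}")   # 0.125
```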

  20. Network Science Research Laboratory (NSRL) Telemetry Warehouse

    DTIC Science & Technology

    2016-06-01

Functionality and architecture of the NSRL Telemetry Warehouse are also described, as well as the web interface, data structure, security aspects, experiment controller, telemetry sensors, and custom data processing nodes...telemetry in comma-separated value (CSV) format from the web interface or via custom applications developed by researchers using the client application

  1. Exploring a Net Centric Architecture Using the Net Warrior Airborne Early Warning and Control Node

    DTIC Science & Technology

    2007-12-01

    implemented in different languages. Customisation Interfaces for customising components. User-friendly customisation tools will use these interfaces...Sun Enterprise Java Beans. Customisation Customisation in the context of components is defined in [Heineman & Councill 2001, p. 42] as ‘…the ability...of a consumer to adapt a component prior to its installation or use’. Customisation can be facilitated through the use of specialised interfaces

  2. Real-Time Distributed Algorithms for Visual and Battlefield Reasoning

    DTIC Science & Technology

    2006-08-01

    High-Level Task Definition Language, Graphical User Interface (GUI), Story Analysis, Story Interpretation, SensIT Nodes 16. SECURITY...or more actions to be taken in the event the conditions are satisfied. We developed graphical user interfaces that may be used to express such...actions to be taken in the event the conditions are satisfied. We developed graphical user interfaces that may be used to express such task

  3. Earthdata Search Summer ESIP Usability Workshop

    NASA Technical Reports Server (NTRS)

    Reese, Mark; Sirato, Jeff

    2017-01-01

    The Earthdata Search Client has undergone multiple rounds of usability testing during 2017 and the user feedback received has resulted in an enhanced user interface. This session will showcase the new Earthdata Search Client user interface and provide hands-on experience for participants to learn how to search, visualize and download data in the desired format.

  4. Quantum statistics in complex networks

    NASA Astrophysics Data System (ADS)

    Bianconi, Ginestra

    The Barabasi-Albert (BA) model for a complex network shows a characteristic power-law connectivity distribution typical of scale-free systems. The Ising model on the BA network shows that the ferromagnetic phase transition temperature depends logarithmically on its size. We have introduced a fitness parameter for the BA network which describes the different abilities of nodes to compete for links. This model predicts the formation of a scale-free network where each node increases its connectivity in time as a power law with an exponent depending on its fitness. This model includes the fact that the node connectivity and growth rate do not depend on the node age alone, and it reproduces nontrivial correlation properties of the Internet. We have proposed a model of bosonic networks by a generalization of the BA model where the properties of quantum statistics can be applied. We have introduced a fitness η_i = e^(−βε_i), where the temperature T = 1/β is determined by the noise in the system and the energy ε_i accounts for qualitative differences of each node for acquiring links. The results of this work show that a power-law network with exponent γ = 2 can give a Bose condensation where a single node grabs a finite fraction of all the links. In order to address the connection with self-organized processes we have introduced a model for a growing Cayley tree that generalizes the dynamics of invasion percolation. At each node we associate a parameter ε_i (called energy) such that the probability to grow for each node is given by π_i ∝ e^(−βε_i), where T = 1/β is a statistical parameter of the system determined by the noise, called the temperature. This model has been solved analytically with a similar mathematical technique as the bosonic scale-free networks and it shows the self-organization of the low-energy nodes at the interface. In the thermodynamic limit the Fermi distribution describes the probability of the energy distribution at the interface.
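A toy simulation of the fitness mechanism described in this abstract can make the dynamics concrete. This sketch assumes uniformly distributed node energies and preferential attachment weighted by fitness times degree; all parameters are illustrative:

```python
import math
import random

def bianconi_barabasi(n, m=2, beta=1.0, seed=42):
    """Grow a fitness network: a new node attaches its m links to existing
    node i with probability proportional to eta_i * k_i, where the fitness
    is eta_i = exp(-beta * eps_i) and eps_i is a uniform random energy."""
    rng = random.Random(seed)
    eta = [math.exp(-beta * rng.random()) for _ in range(n)]
    degree = [0] * n
    edges = []
    # seed the growth with a small clique of m + 1 nodes
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            edges.append((i, j)); degree[i] += 1; degree[j] += 1
    for t in range(m + 1, n):
        weights = [eta[i] * degree[i] for i in range(t)]
        targets = set()
        while len(targets) < m:                 # m distinct targets per new node
            targets.add(rng.choices(range(t), weights=weights)[0])
        for i in targets:
            edges.append((t, i)); degree[t] += 1; degree[i] += 1
    return degree, edges
```

Raising beta sharpens the fitness differences, which is the regime where the abstract's Bose-condensation effect (one node grabbing a finite fraction of links) emerges.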

  5. Research on Information Sharing Method for Future C2 in Network Centric Environment

    DTIC Science & Technology

    2011-06-01

    subscription (or search) request. Then, some of the information service nodes for future C2 deal with these users’ requests, locate, federated search the... federated search server is responsible for resolving the search requests sent out from the users, and executing the federated search. The information... federated search server, information filtering model, or information subscription matching algorithm (such as users subscribe the target information at two

  6. A Variational Nodal Approach to 2D/1D Pin Resolved Neutron Transport for Pressurized Water Reactors

    DOE PAGES

    Zhang, Tengfei; Lewis, E. E.; Smith, M. A.; ...

    2017-04-18

    A two-dimensional/one-dimensional (2D/1D) variational nodal approach is presented for pressurized water reactor core calculations without fuel-moderator homogenization. A 2D/1D approximation to the within-group neutron transport equation is derived and converted to an even-parity form. The corresponding nodal functional is presented and discretized to obtain response matrix equations. Within the nodes, finite elements in the x-y plane and orthogonal functions in z are used to approximate the spatial flux distribution. On the radial interfaces, orthogonal polynomials are employed; on the axial interfaces, piecewise constants corresponding to the finite elements eliminate the interface homogenization that has been a challenge for method of characteristics (MOC)-based 2D/1D approximations. The angular discretization utilizes an even-parity integral method within the nodes, and low-order spherical harmonics (P_N) on the axial interfaces. The x-y surfaces are treated with high-order P_N combined with quasi-reflected interface conditions. Furthermore, the method is applied to the C5G7 benchmark problems and compared to Monte Carlo reference calculations.

  7. A Variational Nodal Approach to 2D/1D Pin Resolved Neutron Transport for Pressurized Water Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Tengfei; Lewis, E. E.; Smith, M. A.

    A two-dimensional/one-dimensional (2D/1D) variational nodal approach is presented for pressurized water reactor core calculations without fuel-moderator homogenization. A 2D/1D approximation to the within-group neutron transport equation is derived and converted to an even-parity form. The corresponding nodal functional is presented and discretized to obtain response matrix equations. Within the nodes, finite elements in the x-y plane and orthogonal functions in z are used to approximate the spatial flux distribution. On the radial interfaces, orthogonal polynomials are employed; on the axial interfaces, piecewise constants corresponding to the finite elements eliminate the interface homogenization that has been a challenge for method of characteristics (MOC)-based 2D/1D approximations. The angular discretization utilizes an even-parity integral method within the nodes, and low-order spherical harmonics (P_N) on the axial interfaces. The x-y surfaces are treated with high-order P_N combined with quasi-reflected interface conditions. Furthermore, the method is applied to the C5G7 benchmark problems and compared to Monte Carlo reference calculations.

  8. The climate4impact portal: bridging the CMIP5 data infrastructure to impact users

    NASA Astrophysics Data System (ADS)

    Plieger, Maarten; Som de Cerff, Wim; Page, Christian; Hutjes, Ronald; de Jong, Fokke; Bärring, Lars; Sjökvist, Elin

    2013-04-01

    Together with seven other partners (CERFACS, CNRS-IPSL, SMHI, INHGA, CMCC, WUR, MF-CNRM), KNMI is involved in the FP7 project IS-ENES (http://is.enes.org), which supports the European climate modeling infrastructure, in the work package 'Bridging Climate Research Data and the Needs of the Impact Community'. The aim of this work package is to enhance the use of climate model data and to enhance the interaction with climate effect/impact communities. The portal is based on 17 impact use cases from 5 different European countries, and is evaluated by a user panel consisting of use case owners. As the climate impact community is very broad, the focus is mainly on the scientific impact community. This work has resulted in a prototype portal, the ENES portal interface for climate impact communities, that can be visited at www.climate4impact.eu. The portal is connected to all Earth System Grid Federation (ESGF) nodes containing global climate model data (GCM data) from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and later from the Coordinated Regional Climate Downscaling Experiment (CORDEX). This global network of all major climate model data centers offers services for data description, discovery and download. The climate4impact portal connects to these services and offers a user interface for searching, visualizing and downloading global climate model data and more. A challenging task was to describe the available model data and how it can be used. The portal tries to inform users about possible caveats when using GCM data. All impact use cases are described in the documentation section, using highlighted keywords pointing to detailed information in the glossary. During the project, the content management system Drupal was used to enable partners to contribute on the documentation section. In this presentation the architecture and following items will be detailed: - Security: Login using OpenID for access to the ESG data nodes. 
The ESG works in conjunction with several external websites and systems. The climate4impact portal uses X509-based short-lived credentials, generated on behalf of the user with a MyProxy service. Single Sign-On (SSO) is used to make these websites and systems work together. - Discovery: Faceted search based on e.g. variable name, model and institute using the ESG search services. A catalog browser allows for browsing through CMIP5 and other climate model data catalogues (e.g. ESSENCE, EOBS, UNIDATA). - Download: Directly from ESG nodes and other THREDDS catalogs. - Visualization: Visualize any data directly using ADAGUC dynamic Web Map Services. - Transformation: Transform your data into other formats, perform basic calculations and extractions using OGC Web Processing Services. The current portal is a prototype. It is built to explore state-of-the-art technologies to provide improved access to climate model data. The prototype will be evaluated and is the basis for development of an operational service. The portal and services provided will be sustained and supported during the development of these operational services (2013-2016) in the second phase of the FP7 IS-ENES project, IS-ENES2.

  9. Microgrid and Inverter Control and Simulator Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-13

    A collection of software that can simulate the operation of an inverter on a microgrid or control a real inverter. In addition, it can simulate the control of multiple nodes on a microgrid. Application: Simulation of inverters and microgrids; control of inverters on microgrids. The MMI submodule is designed to control custom inverter hardware, and to simulate that hardware. The INVERTER submodule is only the simulator code, and is of an earlier generation than the simulator in MMI. The MICROGRID submodule is an agent-based simulator of multiple nodes on a microgrid which presents a web interface. The WIND submodule produces movies of wind data with a web interface.

  10. A suffix arrays based approach to semantic search in P2P systems

    NASA Astrophysics Data System (ADS)

    Shi, Qingwei; Zhao, Zheng; Bao, Hu

    2007-09-01

    Building a semantic search system on top of peer-to-peer (P2P) networks is becoming an attractive and promising alternative scheme for reasons of scalability, data freshness, and search cost. In this paper, we present a Suffix Arrays based algorithm for Semantic Search (SASS) in P2P systems, which generates a distributed Semantic Overlay Network (SON) construction for full-text search in P2P networks. For each node through the P2P network, SASS distributes document indices based on a set of suffix arrays, by which clusters are created depending on words or phrases shared between documents; therefore, the search cost for a given query is decreased by only scanning semantically related documents. In contrast to recently announced SON schemes designed by using metadata or predefined classes, SASS is an unsupervised approach for decentralized generation of SONs. SASS is also an incremental, linear-time algorithm, which efficiently handles the problem of node updates in P2P networks. Our simulation results demonstrate that SASS yields high search efficiency in dynamic environments.
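The suffix-array machinery at the core of SASS can be illustrated in a few lines. This is a naive single-node sketch (the paper's distributed, incremental construction is more involved): build the sorted array of suffix start positions, then binary-search it for every occurrence of a pattern.

```python
def suffix_array(text):
    """Naive O(n^2 log n) suffix array: start positions of all suffixes,
    sorted lexicographically. Fine for a demo; real systems use O(n log n)
    or linear-time construction."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find(text, sa, pattern):
    """Return all start positions of pattern via two binary searches:
    matching suffixes occupy a contiguous run of the suffix array."""
    def prefix(i):
        return text[sa[i]:sa[i] + len(pattern)]
    lo, hi = 0, len(sa)
    while lo < hi:                       # lower bound of the run
        mid = (lo + hi) // 2
        if prefix(mid) < pattern:
            lo = mid + 1
        else:
            hi = mid
    first, hi = lo, len(sa)
    while lo < hi:                       # upper bound of the run
        mid = (lo + hi) // 2
        if prefix(mid) <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[i] for i in range(first, lo))
```

For example, `find("banana", suffix_array("banana"), "ana")` returns the occurrence positions `[1, 3]`.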

  11. OPUS - Outer Planets Unified Search with Enhanced Surface Geometry Parameters - Not Just for Rings

    NASA Astrophysics Data System (ADS)

    Gordon, Mitchell; Showalter, Mark Robert; Ballard, Lisa; Tiscareno, Matthew S.; Heather, Neil

    2016-10-01

    In recent years, with the massive influx of data into the PDS from a wide array of missions and instruments, finding the precise data you need has been an ongoing challenge. For remote sensing data obtained from Jupiter to Pluto, that challenge is being addressed by the Outer Planets Unified Search, more commonly known as OPUS. OPUS is a powerful search tool available at the PDS Ring-Moon Systems Node (RMS) - formerly the PDS Rings Node. While OPUS was originally designed with ring data in mind, its capabilities have been extended to include all of the targets within an instrument's field of view. OPUS provides preview images of search results, and produces a zip file for easy download of selected products, including a table of user-specified metadata. For Cassini ISS and Voyager ISS we have generated and include calibrated versions of every image. Currently OPUS supports data returned by Cassini ISS, UVIS, VIMS, and CIRS (Saturn data through June 2010), New Horizons Jupiter LORRI, Galileo SSI, Voyager ISS and IRIS, and Hubble (ACS, WFC3 and WFPC2). At the RMS Node, we have developed and incorporated into OPUS detailed geometric metadata, based on the most recent SPICE kernels, for all of the bodies in the Cassini Saturn observations. This extensive set of geometric metadata is unique to the RMS Node and enables search constraints such as latitudes and longitudes (Saturn, Titan, and icy satellites), viewing and illumination geometry (phase, incidence and emission angles), and distances and resolution. Our near-term plans include adding the full set of Cassini CIRS Saturn data (with enhanced geometry), New Horizons MVIC Jupiter encounter images, New Horizons LORRI and MVIC Pluto data, HST STIS observations, and Cassini and Voyager ring occultations. We also plan to develop enhanced geometric metadata for the New Horizons LORRI and MVIC instruments for both the Jupiter and the Pluto encounters. OPUS: http://pds-rings.seti.org/search/

  12. Implementation of Distributed Services for a Deep Sea Moored Instrument Network

    NASA Astrophysics Data System (ADS)

    Oreilly, T. C.; Headley, K. L.; Risi, M.; Davis, D.; Edgington, D. R.; Salamy, K. A.; Chaffey, M.

    2004-12-01

    The Monterey Ocean Observing System (MOOS) is a moored observatory network consisting of interconnected instrument nodes on the sea surface, midwater, and deep sea floor. We describe Software Infrastructure and Applications for MOOS ("SIAM"), which implement the management, control, and data acquisition infrastructure for the moored observatory. Links in the MOOS network include fiber-optic and 10-BaseT copper connections between the at-sea nodes. A Globalstar satellite transceiver or 900 MHz Freewave terrestrial line-of-sight RF modem provides the link to shore. All of these links support Internet protocols, providing TCP/IP connectivity throughout a system that extends from shore to sensor nodes at the air-sea interface, through the oceanic water column to a benthic network of sensor nodes extending across the deep sea floor. Exploiting this TCP/IP infrastructure as well as capabilities provided by MBARI's MOOS mooring controller, we use powerful Internet software technologies to implement a distributed management, control and data acquisition system for the moored observatory. The system design meets the demanding functional requirements specified for MOOS. Nodes and their instruments are represented by Java RMI "services" having well defined software interfaces. Clients anywhere on the network can interact with any node or instrument through its corresponding service. A client may be on the same node as the service, may be on another node, or may reside on shore. Clients may be human, e.g. when a scientist on shore accesses a deployed instrument in real-time through a user interface. Clients may also be software components that interact autonomously with instruments and nodes, e.g. for purposes such as system resource management or autonomous detection and response to scientifically interesting events. 
All electrical power to the moored network is provided by solar and wind energy, and the RF shore-to-mooring links are intermittent and relatively low-bandwidth connections. Thus power and wireless bandwidth are limited resources that constrain our choice of service technologies and wireless access strategy. We describe and evaluate system performance in light of actual deployment of observatory elements in Monterey Bay, and discuss how the system can be developed further. We also consider management and control strategies for the cable-to-shore observatory known as MARS ("Monterey Accelerated Research System"). The MARS cable will provide high power and continuous high-bandwidth connectivity between seafloor instrument nodes and shore, thus removing key limitations of the moored observatory. Moreover MARS functional requirements may differ significantly from MOOS requirements. In light of these differences, we discuss how elements of our MOOS moored observatory architecture might be adapted to MARS.
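The service-oriented pattern this record describes, instrument proxies with well-defined interfaces callable from local or shore-side clients, can be sketched with a simple RPC stand-in. This uses Python's xmlrpc in place of Java RMI, and the service name and methods are hypothetical, not the SIAM API:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

class InstrumentService:
    """Hypothetical instrument 'service': clients interact only through
    these methods, never with the hardware directly."""
    def __init__(self, name):
        self.name = name
        self.samples = []

    def get_name(self):
        return self.name

    def acquire(self, value):
        """Record a measurement; returns the running sample count."""
        self.samples.append(value)
        return len(self.samples)

    def latest(self):
        return self.samples[-1] if self.samples else 0.0

def start_service(name):
    """Expose the service over TCP/IP on an OS-assigned port, mirroring how
    a node-resident service would be reachable from anywhere on the network."""
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_instance(InstrumentService(name))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

A client, whether a shore-side user interface or an autonomous software component, would then talk to the instrument through `ServerProxy(f"http://host:{port}")`, exactly the location-transparency property the record attributes to its RMI services.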

  13. Querying Event Sequences by Exact Match or Similarity Search: Design and Empirical Evaluation

    PubMed Central

    Wongsuphasawat, Krist; Plaisant, Catherine; Taieb-Maimon, Meirav; Shneiderman, Ben

    2012-01-01

    Specifying event sequence queries is challenging even for skilled computer professionals familiar with SQL. Most graphical user interfaces for database search use an exact match approach, which is often effective, but near misses may also be of interest. We describe a new similarity search interface, in which users specify a query by simply placing events on a blank timeline and retrieve a similarity-ranked list of results. Behind this user interface is a new similarity measure for event sequences which the users can customize by four decision criteria, enabling them to adjust the impact of missing, extra, or swapped events or the impact of time shifts. We describe a use case with Electronic Health Records based on our ongoing collaboration with hospital physicians. A controlled experiment with 18 participants compared exact match and similarity search interfaces. We report on the advantages and disadvantages of each interface and suggest a hybrid interface combining the best of both. PMID:22379286
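A similarity measure of the general kind described here, penalizing missing, extra, and substituted events plus time shifts, can be sketched as a weighted alignment. This is a generic approximation rather than the paper's exact measure, and the four weights are illustrative:

```python
def sequence_distance(query, record, w_miss=1.0, w_extra=1.0,
                      w_sub=0.5, w_shift=0.1):
    """Weighted-alignment distance between two event sequences, each a list
    of (event_type, time) pairs. Lower means more similar. The four weights
    let a user tune the impact of missing query events, extra record events,
    substituted event types, and time shifts between matched events."""
    n, m = len(query), len(record)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n:   # query event left unmatched (missing from record)
                d[i + 1][j] = min(d[i + 1][j], d[i][j] + w_miss)
            if j < m:   # record event left unmatched (extra in record)
                d[i][j + 1] = min(d[i][j + 1], d[i][j] + w_extra)
            if i < n and j < m:
                (qe, qt), (re_, rt) = query[i], record[j]
                cost = w_shift * abs(qt - rt)
                if qe != re_:
                    cost += w_sub       # substitution of event type
                d[i + 1][j + 1] = min(d[i + 1][j + 1], d[i][j] + cost)
    return d[n][m]
```

A results list ranked by this distance would give the similarity-ranked retrieval the interface describes; setting `w_shift` to zero makes timing irrelevant, and raising `w_miss` makes omissions costly, mirroring the paper's four decision criteria.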

  14. Communities and classes in symmetric fractals

    NASA Astrophysics Data System (ADS)

    Krawczyk, Małgorzata J.

    2015-07-01

    Two aspects of fractal networks are considered: the community structure and the class structure, where classes of nodes appear as a consequence of a local symmetry of nodes. The analyzed systems are the networks constructed for two selected symmetric fractals: the Sierpinski triangle and the Koch curve. Communities are searched for by means of a set of differential equations. Overlapping nodes which belong to two different communities are identified by adding some noise to the initial connectivity matrix. Then, a node can be characterized by a spectrum of probabilities of belonging to different communities. Our main result is that the overlapping nodes with the same spectra belong to the same class.
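The networks studied in this record can be generated directly. A small sketch constructing the Sierpinski-triangle network by recursive subdivision, where coordinates serve only as node labels and the merge of the three half-scale copies happens automatically at shared corner points:

```python
def sierpinski_edges(level, a=(0.0, 0.0), b=(1.0, 0.0), c=(0.5, 1.0)):
    """Edge set of the Sierpinski-triangle network at a recursion level:
    three half-scale copies of the previous level, joined at the midpoints
    of the enclosing triangle's sides. Edges are unordered node pairs."""
    if level == 0:
        return {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
    def mid(p, q):
        return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    ab, bc, ac = mid(a, b), mid(b, c), mid(a, c)
    return (sierpinski_edges(level - 1, a, ab, ac)
            | sierpinski_edges(level - 1, ab, b, bc)
            | sierpinski_edges(level - 1, ac, bc, c))
```

At level L the network has 3^(L+1) edges and 3(3^L + 1)/2 nodes; the three outer corners have degree 2 while interior junctions have degree 4, which is exactly the kind of local symmetry that induces the node classes discussed in the abstract.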

  15. The Magnetics Information Consortium (MagIC) Online Database: Uploading, Searching and Visualizing Paleomagnetic and Rock Magnetic Data

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A.; Tauxe, L.; Constable, C.; Pisarevsky, S. A.; Jackson, M.; Solheid, P.; Banerjee, S.; Johnson, C.

    2006-12-01

    The Magnetics Information Consortium (MagIC) is commissioned to implement and maintain an online portal to a relational database populated by both rock and paleomagnetic data. The goal of MagIC is to archive all measurements and the derived properties for studies of paleomagnetic directions (inclination, declination) and intensities, and for rock magnetic experiments (hysteresis, remanence, susceptibility, anisotropy). MagIC is hosted under EarthRef.org at http://earthref.org/MAGIC/ and has two search nodes, one for paleomagnetism and one for rock magnetism. Both nodes provide query building based on location, reference, methods applied, material type and geological age, as well as a visual map interface to browse and select locations. The query result set is displayed in a digestible tabular format allowing the user to descend through hierarchical levels such as from locations to sites, samples, specimens, and measurements. At each stage, the result set can be saved and, if supported by the data, can be visualized by plotting global location maps, equal area plots, or typical Zijderveld, hysteresis, and various magnetization and remanence diagrams. User contributions to the MagIC database are critical to achieving a useful research tool. We have developed a standard data and metadata template (Version 2.1) that can be used to format and upload all data at the time of publication in Earth Science journals. Software tools are provided to facilitate population of these templates within Microsoft Excel. These tools allow for the import/export of text files and provide advanced functionality to manage and edit the data, and to perform various internal checks to maintain data integrity and prepare for uploading. The MagIC Contribution Wizard at http://earthref.org/MAGIC/upload.htm executes the upload and takes only a few minutes to process several thousand data records. 
The standardized MagIC template files are stored in the digital archives of EarthRef.org where they remain available for download by the public (in both text and Excel format). Finally, the contents of these template files are automatically parsed into the online relational database, making the data available for online searches in the paleomagnetic and rock magnetic search nodes. The MagIC database contains all data transferred from the IAGA paleomagnetic poles database (GPMDB), the lava flow paleosecular variation database (PSVRL), lake sediment database (SECVR) and the PINT database. Additionally, a substantial number of data compiled under the Time Averaged Field Investigations project is now included plus a significant fraction of the data collected at SIO and the IRM. Ongoing additions of legacy data include over 40 papers from studies on the Hawaiian Islands and Mexico, data compilations from archeomagnetic studies and updates to the lake sediment dataset.

  16. The OrbitOutlook Data Archive

    NASA Astrophysics Data System (ADS)

    Czajkowski, M.; Shilliday, A.; LoFaso, N.; Dipon, A.; Van Brackle, D.

    2016-09-01

    In this paper, we describe and depict the Defense Advanced Research Projects Agency (DARPA)'s OrbitOutlook Data Archive (OODA) architecture. OODA is the infrastructure that DARPA's OrbitOutlook program has developed to integrate diverse data from various academic, commercial, government, and amateur space situational awareness (SSA) telescopes. At the heart of the OODA system is its world model - a distributed data store built to quickly query big data quantities of information spread out across multiple processing nodes and data centers. The world model applies a multi-index approach where each index is a distinct view on the data. This allows for analysts and analytics (algorithms) to access information through queries with a variety of terms that may be of interest to them. Our indices include: a structured global-graph view of knowledge, a keyword search of data content, an object-characteristic range search, and a geospatial-temporal orientation of spatially located data. In addition, the world model applies a federated approach by connecting to existing databases and integrating them into one single interface as a "one-stop shopping place" to access SSA information. In addition to the world model, OODA provides a processing platform for various analysts to explore and analytics to execute upon this data. Analytic algorithms can use OODA to take raw data and build information from it. They can store these products back into the world model, allowing analysts to gain situational awareness with this information. Analysts in turn would help decision makers use this knowledge to address a wide range of SSA problems. OODA is designed to make it easy for software developers who build graphical user interfaces (GUIs) and algorithms to quickly get started with working with this data. 
This is done through a multi-language software development kit that includes multiple application program interfaces (APIs) and a data model with SSA concepts and terms such as: space observation, observable, measurable, metadata, track, space object, catalog, expectation, and maneuver.
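The multi-index idea described above, where each index is a distinct view on the same records, can be sketched compactly. Field names such as `magnitude` are hypothetical, not OODA's actual schema; the point is one store answering both keyword and range queries:

```python
import bisect
from collections import defaultdict

class MultiIndexStore:
    """Toy 'world model': every record is visible through two indices,
    a keyword index over free text and a sorted index for range search."""
    def __init__(self):
        self.records = []
        self.keyword = defaultdict(set)   # word -> set of record ids
        self.by_value = []                # sorted (value, id) pairs

    def add(self, record):
        rid = len(self.records)
        self.records.append(record)
        for word in record.get("text", "").lower().split():
            self.keyword[word].add(rid)
        if "magnitude" in record:
            bisect.insort(self.by_value, (record["magnitude"], rid))
        return rid

    def search_keyword(self, word):
        return sorted(self.keyword.get(word.lower(), set()))

    def search_range(self, lo, hi):
        i = bisect.bisect_left(self.by_value, (lo, -1))
        j = bisect.bisect_right(self.by_value, (hi, float("inf")))
        return sorted(rid for _, rid in self.by_value[i:j])
```

A federated layer like OODA's would expose many such views (including graph and geospatial-temporal ones) behind one interface, so an analyst can pick whichever query vocabulary fits the question.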

  17. Legacy2Drupal: Conversion of an existing relational oceanographic database to a Drupal 7 CMS

    NASA Astrophysics Data System (ADS)

    Work, T. T.; Maffei, A. R.; Chandler, C. L.; Groman, R. C.

    2011-12-01

    Content Management Systems (CMSs) such as Drupal provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have already designed and implemented customized schemas for their metadata. The NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) has ported an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal 7 Content Management System. This is an update on an effort described as a proof-of-concept in poster IN21B-1051, presented at AGU 2009. The BCO-DMO project has translated all the existing database tables, input forms, website reports, and other features present in the existing system into Drupal CMS features. The replacement features are made possible by the use of Drupal content types, CCK node-reference fields, a custom theme, and a number of other supporting modules. This presentation describes the process used to migrate content in the original BCO-DMO metadata database to Drupal 7, some problems encountered during migration, and the modules used to migrate the content successfully. Strategic use of Drupal 7 CMS features that enable three separate but complementary interfaces to provide access to oceanographic research metadata will also be covered: 1) a Drupal 7-powered user front-end; 2) RESTful JSON web services (providing a MapServer interface to the metadata and data); and 3) a SPARQL interface to a semantic representation of the repository metadata (this feeding a new faceted search capability currently under development). The existing BCO-DMO ontology, developed in collaboration with Rensselaer Polytechnic Institute's Tetherless World Constellation, makes strategic use of pre-existing ontologies and will be used to drive semantically-enabled faceted search capabilities planned for the site.
At this point, the use of semantic technologies included in the Drupal 7 core is anticipated. Using a public domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 7 that are designed to support semantically-enabled interfaces will help prepare the BCO-DMO and other science data repositories for interoperability between systems that serve ecosystem research data.

  18. Local thermal injury elicits immediate dynamic behavioural responses by corneal Langerhans cells

    PubMed Central

    Ward, Brant R; Jester, James V; Nishibu, Akiko; Vishwanath, Mridula; Shalhevet, David; Kumamoto, Tadashi; Petroll, W Matthew; Cavanagh, H Dwight; Takashima, Akira

    2007-01-01

    Langerhans cells (LCs) represent a special subset of immature dendritic cells (DCs) that reside in epithelial tissues at the environmental interfaces. Although dynamic interactions of mature DCs with T cells have been visualized in lymph nodes, the cellular behaviours linked with the surveillance of tissues for pathogenic signals, an important function of immature DCs, remain unknown. To visualize LCs in situ, bone marrow cells from C57BL/6 mice expressing the enhanced green fluorescent protein (EGFP) transgene were transplanted into syngeneic wild-type recipients. Motile activities of EGFP+ corneal LCs in intact organ cultures were then recorded by time lapse two-photon microscopy. At baseline, corneal LCs exhibited a unique motion, termed dendrite surveillance extension and retraction cycling habitude (dSEARCH), characterized by rhythmic extension and retraction of their dendritic processes through intercellular spaces between epithelial cells. Upon pinpoint injury produced by infrared laser, LCs showed augmented dSEARCH and amoeba-like lateral movement. Interleukin (IL)-1 receptor antagonist completely abrogated both injury-associated changes, suggesting roles for IL-1. In the absence of injury, exogenous IL-1 caused a transient increase in dSEARCH without provoking lateral migration, whereas tumour necrosis factor-α induced both changes. Our results demonstrate rapid cytokine-mediated behavioural responses by LCs to local tissue injury, providing new insights into the biology of LCs. PMID:17250587

  19. Interference Effects Redress over Power-Efficient Wireless-Friendly Mesh Networks for Ubiquitous Sensor Communications across Smart Cities.

    PubMed

    Santana, Jose; Marrero, Domingo; Macías, Elsa; Mena, Vicente; Suárez, Álvaro

    2017-07-21

    Ubiquitous sensing allows smart cities to take control of many parameters (e.g., road traffic, air or noise pollution levels, etc.). An inexpensive Wireless Mesh Network can be used as an efficient way to transport sensed data. When that mesh is autonomously powered (e.g., solar powered), it constitutes an ideal portable network system which can be deployed when needed. Nevertheless, its power consumption must be restrained to extend its operational cycle and for preserving the environment. To this end, our strategy fosters wireless interface deactivation among nodes which do not participate in any route. As we show, this contributes to a significant power saving for the mesh. Furthermore, our strategy is wireless-friendly, meaning that it gives priority to deactivation of nodes receiving (and also causing) interferences from (to) the rest of the smart city. We also show that a routing protocol can adapt to this strategy in which certain nodes deactivate their own wireless interfaces.
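The deactivation strategy lends itself to a short sketch: nodes that participate in no active route are candidates for switching off their wireless interface, ranked so the most interference-prone nodes go first. The interference levels are assumed to be measured elsewhere; this is a simplified reading of the strategy, not the paper's protocol:

```python
def nodes_to_deactivate(all_nodes, active_routes, interference):
    """Return idle mesh nodes in the order their wireless interfaces
    should be switched off: any node on no active route is a candidate,
    and nodes receiving/causing the most interference are ranked first.

    active_routes: iterable of routes, each a list of node ids.
    interference: dict mapping node id -> measured interference level."""
    in_use = set()
    for route in active_routes:
        in_use.update(route)                 # routing nodes must stay on
    idle = [n for n in all_nodes if n not in in_use]
    return sorted(idle, key=lambda n: interference.get(n, 0.0), reverse=True)
```

A wireless-friendly routing protocol of the kind the abstract mentions would simply avoid building routes through the nodes this function selects, letting them power down their radios.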

  20. Interference Effects Redress over Power-Efficient Wireless-Friendly Mesh Networks for Ubiquitous Sensor Communications across Smart Cities

    PubMed Central

    Marrero, Domingo; Macías, Elsa; Mena, Vicente

    2017-01-01

    Ubiquitous sensing allows smart cities to take control of many parameters (e.g., road traffic, air or noise pollution levels, etc.). An inexpensive Wireless Mesh Network can be used as an efficient way to transport sensed data. When that mesh is autonomously powered (e.g., solar powered), it constitutes an ideal portable network system which can be deployed when needed. Nevertheless, its power consumption must be restrained to extend its operational cycle and for preserving the environment. To this end, our strategy fosters wireless interface deactivation among nodes which do not participate in any route. As we show, this contributes to a significant power saving for the mesh. Furthermore, our strategy is wireless-friendly, meaning that it gives priority to deactivation of nodes receiving (and also causing) interferences from (to) the rest of the smart city. We also show that a routing protocol can adapt to this strategy in which certain nodes deactivate their own wireless interfaces. PMID:28754013

  1. Model Checking a Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2011-01-01

    This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period.

  2. RootJS: Node.js Bindings for ROOT 6

    NASA Astrophysics Data System (ADS)

    Beffart, Theo; Früh, Maximilian; Haas, Christoph; Rajgopal, Sachin; Schwabe, Jonas; Wolff, Christoph; Szuba, Marek

    2017-10-01

    We present rootJS, an interface making it possible to seamlessly integrate ROOT 6 into applications written for Node.js, the JavaScript runtime platform increasingly commonly used to create high-performance Web applications. ROOT features can be called both directly from Node.js code and by JIT-compiling C++ macros. All rootJS methods are invoked asynchronously and support callback functions, allowing non-blocking operation of Node.js applications using them. Last but not least, our bindings have been designed to be platform-independent and should therefore work on all systems supporting both ROOT 6 and Node.js. Thanks to rootJS it is now possible to create ROOT-aware Web applications taking full advantage of the high performance and extensive capabilities of Node.js. Examples include platforms for the quality assurance of acquired, reconstructed or simulated data, book-keeping and e-log systems, and even Web browser-based data visualisation and analysis.

  3. Self-Organizing OFDMA System for Broadband Communication

    NASA Technical Reports Server (NTRS)

    Roy, Aloke (Inventor); Anandappan, Thanga (Inventor); Malve, Sharath Babu (Inventor)

    2016-01-01

    Systems and methods for a self-organizing OFDMA system for broadband communication are provided. In certain embodiments, a communication node for a self-organizing network comprises a communication interface configured to transmit data to and receive data from a plurality of nodes; and a processing unit configured to execute computer readable instructions. Further, computer readable instructions direct the processing unit to identify a sub-region within a cell, wherein the communication node is located in the sub-region; and transmit at least one data frame, wherein the data from the communication node is transmitted at a particular time and frequency as defined within the at least one data frame, where the time and frequency are associated with the sub-region.
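The core idea, transmitting on a time/frequency resource tied to the node's sub-region, can be illustrated with a toy deterministic mapping (the scheme below is an assumption for illustration, not the patent's actual assignment logic):

```python
# Toy illustration (assumed, not from the patent): map each cell
# sub-region to a fixed (time slot, frequency channel) pair so nodes
# in different sub-regions never collide on the same resource.

def resource_for_subregion(subregion_id, n_slots, n_channels):
    """Deterministically assign a time/frequency resource to a sub-region."""
    slot = subregion_id % n_slots
    channel = (subregion_id // n_slots) % n_channels
    return slot, channel

# Nine sub-regions over three slots x three channels: all distinct.
assignments = {resource_for_subregion(s, 3, 3) for s in range(9)}
print(len(assignments))  # 9 distinct (slot, channel) pairs
```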

  4. Informedia at TRECVID 2003: Analyzing and Searching Broadcast News Video

    DTIC Science & Technology

    2004-11-03

    browsing interface to browse the top-ranked shots according to the different classifiers. Color and texture based image search engines were also optimized for better performance. This “new” interface was evaluated as

  5. A Semantic Approach for Knowledge Discovery to Help Mitigate Habitat Loss in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Maskey, M.; Graves, S.; Hardin, D.

    2008-12-01

    Noesis is a meta-search engine and a resource aggregator that uses domain ontologies to provide scoped search capabilities. Ontologies enable Noesis to help users refine their searches for information on the open web and in hidden web locations such as data catalogues with standardized but discipline-specific vocabularies. Through its ontologies, Noesis provides a guided refinement of search queries, which produces complete and accurate searches while reducing the user's burden to experiment with different search strings. All search results are organized by categories (e.g., all results from Google are grouped together) which may be selected or omitted according to the desire of the user. During the past two years, ontologies were developed for sea grasses in the Gulf of Mexico and were used to support a habitat restoration demonstration project. Currently, these ontologies are being augmented to address the special characteristics of mangroves. These new ontologies will extend the demonstration project to broader regions of the Gulf including protected mangrove locations in coastal Mexico. Noesis contributes to the decision-making process by producing a comprehensive list of relevant resources based on the semantic information contained in the ontologies. Ontologies are organized in tree-like taxonomies, where child nodes represent Specializations and parent nodes represent Generalizations of a node or concept. Specializations can be used to provide a more detailed search, while generalizations are used to make the search broader. Ontologies are also used to link two syntactically different terms to one semantic concept (synonyms). Appending a synonym to the query expands the search, thus providing better search coverage. Every concept has a set of properties that are neither in the same inheritance hierarchy (Specializations / Generalizations) nor equivalent (synonyms).
These are called Related Concepts and they are captured in the ontology through property relationships. By using Related Concepts users can search for resources with respect to a particular property. Noesis automatically generates searches that include all of these capabilities, removing the burden from the user and producing broader and more accurate search results. This presentation will demonstrate the features of Noesis and describe its application to habitat studies in the Gulf of Mexico.
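The query-expansion mechanics described above can be sketched with a toy ontology (the terms and structure below are illustrative only, not Noesis internals):

```python
# Minimal sketch of ontology-driven query expansion in the spirit of
# Noesis. The ontology entries here are hand-made illustrations.

ontology = {
    "seagrass": {
        "synonyms": ["sea grass"],
        "specializations": ["turtle grass", "shoal grass"],
        "generalizations": ["marine plant"],
    }
}

def expand_query(term, broaden=False):
    """Append synonyms, plus generalizations (broader search) or
    specializations (more detailed search), to the query string."""
    entry = ontology.get(term, {})
    terms = [term] + entry.get("synonyms", [])
    if broaden:
        terms += entry.get("generalizations", [])
    else:
        terms += entry.get("specializations", [])
    return " OR ".join(f'"{t}"' for t in terms)

print(expand_query("seagrass"))
print(expand_query("seagrass", broaden=True))
```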

  6. Adding a Visualization Feature to Web Search Engines: It’s Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Pak C.

    Since the first world wide web (WWW) search engine quietly entered our lives in 1994, the “information need” behind web searching has rapidly grown into a multi-billion dollar business that dominates the internet landscape, drives e-commerce traffic, propels the global economy, and affects the lives of the whole human race. Today’s search engines are faster, smarter, and more powerful than those released just a few years ago. With the vast investment pouring into research and development by leading web technology providers and the intense emotion behind corporate slogans such as “win the web” or “take back the web,” I can’t help but ask: why are we still using the very same “text-only” interface that was used 13 years ago to browse our search engine results pages (SERPs)? Why has the SERP interface technology lagged so far behind in the web evolution when the corresponding search technology has advanced so rapidly? In this article, I explore some current SERP interface issues, suggest a simple but practical visual-based interface design approach, and argue why a visual approach can be a strong candidate for tomorrow’s SERP interface.

  7. What a Difference a Tag Cloud Makes: Effects of Tasks and Cognitive Abilities on Search Results Interface Use

    ERIC Educational Resources Information Center

    Gwizdka, Jacek

    2009-01-01

    Introduction: The goal of this study is to expand our understanding of the relationships between selected tasks, cognitive abilities and search result interfaces. The underlying objective is to understand how to select search results presentation for tasks and user contexts. Method: Twenty-three participants conducted four search tasks of two types…

  8. Automatic detection of axillary lymphadenopathy on CT scans of untreated chronic lymphocytic leukemia patients

    NASA Astrophysics Data System (ADS)

    Liu, Jiamin; Hua, Jeremy; Chellappa, Vivek; Petrick, Nicholas; Sahiner, Berkman; Farooqui, Mohammed; Marti, Gerald; Wiestner, Adrian; Summers, Ronald M.

    2012-03-01

    Patients with chronic lymphocytic leukemia (CLL) have an increased frequency of axillary lymphadenopathy. Pretreatment CT scans can be used to upstage patients at the time of presentation and post-treatment CT scans can reduce the number of complete responses. In the current clinical workflow, the detection and diagnosis of lymph nodes is usually performed manually by examining all slices of CT images, which can be time consuming and highly dependent on the observer's experience. A system for automatic lymph node detection and measurement is desired. We propose a computer aided detection (CAD) system for axillary lymph nodes on CT scans in CLL patients. The lung is first automatically segmented and the patient's body in lung region is extracted to set the search region for lymph nodes. Multi-scale Hessian based blob detection is then applied to detect potential lymph nodes within the search region. Next, the detected potential candidates are segmented by fast level set method. Finally, features are calculated from the segmented candidates and support vector machine (SVM) classification is utilized for false positive reduction. Two blobness features, Frangi's and Li's, are tested and their free-response receiver operating characteristic (FROC) curves are generated to assess system performance. We applied our detection system to 12 patients with 168 axillary lymph nodes measuring greater than 10 mm. All lymph nodes are manually labeled as ground truth. The system achieved sensitivities of 81% and 85% at 2 false positives per patient for Frangi's and Li's blobness, respectively.
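The multi-scale Hessian blob-detection step in the pipeline above rests on a simple observation: at a bright blob, both eigenvalues of the image Hessian are negative. A minimal sketch of that eigenvalue test on a synthetic image follows (the actual Frangi and Li blobness measures used in the paper are more involved):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blobness(image, sigma):
    """Simple Hessian-based bright-blob response at scale sigma."""
    Hxx = gaussian_filter(image, sigma, order=(2, 0))
    Hyy = gaussian_filter(image, sigma, order=(0, 2))
    Hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian at each pixel.
    tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    l1 = (Hxx + Hyy) / 2 + tmp
    l2 = (Hxx + Hyy) / 2 - tmp
    # Bright blobs: both eigenvalues negative; respond with their product.
    return np.where((l1 < 0) & (l2 < 0), l1 * l2, 0.0)

# Synthetic 64x64 image with one bright Gaussian blob at (32, 32).
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 4.0 ** 2))
resp = blobness(img, sigma=4)
print(np.unravel_index(np.argmax(resp), resp.shape))  # near (32, 32)
```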

  9. Integrating Actionable User-defined Faceted Rules into the Hybrid Science Data System for Advanced Rapid Imaging & Analysis

    NASA Astrophysics Data System (ADS)

    Manipon, G. J. M.; Hua, H.; Owen, S. E.; Sacco, G. F.; Agram, P. S.; Moore, A. W.; Yun, S. H.; Fielding, E. J.; Lundgren, P.; Rosen, P. A.; Webb, F.; Liu, Z.; Smith, A. T.; Wilson, B. D.; Simons, M.; Poland, M. P.; Cervelli, P. F.

    2014-12-01

    The Hybrid Science Data System (HySDS) scalably powers the ingestion, metadata extraction, cataloging, high-volume data processing, and publication of the geodetic data products for the Advanced Rapid Imaging & Analysis for Monitoring Hazard (ARIA-MH) project at JPL. HySDS uses a heterogeneous set of worker nodes from private & public clouds as well as virtual & bare-metal machines to perform every aspect of the traditional science data system. For our science data users, the forefront of HySDS is the facet search interface, FacetView, which allows them to browse, filter, and access the published products. Users are able to explore the collection of product metadata information and apply multiple filters to constrain the result set down to their particular interests. It allows them to download these faceted products for further analysis and generation of derived products. However, we have also employed a novel approach to faceting where it is also used to apply constraints for custom monitoring of products, system resources, and triggers for automated data processing. The power of the facet search interface is well documented across various domains and its usefulness is rooted in the current state of existence of metadata. However, user needs usually extend beyond what is currently present in the data system. A user interested in synthetic aperture radar (SAR) data over Kilauea will download them from FacetView but would also want email notification of future incoming scenes. The user may even want that data pushed to a remote workstation for automated processing. Better still, these future products could trigger HySDS to run the user's analysis on its array of worker nodes, on behalf of the user, and ingest the resulting derived products. 
We will present our findings in integrating an ancillary, user-defined, system-driven processing system for HySDS that allows users to define faceted rules based on facet constraints and triggers actions when new SAR data products arrive that match the constraints. We will discuss use cases where users have defined rules for the automated generation of InSAR derived products: interferograms for California and Kilauea, time-series analyses, and damage proxy maps. These findings are relevant for science data system development of the proposed NASA-ISRO SAR mission.
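The user-defined faceted rules described above amount to matching incoming product metadata against stored facet constraints and firing actions on a match. A minimal sketch (field names and the trigger format are hypothetical, not HySDS internals):

```python
# Hypothetical sketch of user-defined faceted rules: a rule stores the
# facet constraints a user selected plus an action to trigger when a
# newly ingested product matches them.

def matches(product, constraints):
    """True if every facet constraint is satisfied by the metadata."""
    return all(product.get(k) == v for k, v in constraints.items())

def evaluate_rules(product, rules):
    """Return the actions triggered by a newly ingested product."""
    return [r["action"] for r in rules if matches(product, r["constraints"])]

rules = [
    {"constraints": {"instrument": "SAR", "site": "Kilauea"},
     "action": "email+interferogram"},
]
new_product = {"instrument": "SAR", "site": "Kilauea", "pass": "ascending"}
print(evaluate_rules(new_product, rules))  # ['email+interferogram']
```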

  10. An MPI + $X$ implementation of contact global search using Kokkos

    DOE PAGES

    Hansen, Glen A.; Xavier, Patrick G.; Mish, Sam P.; ...

    2015-10-05

    This paper describes an approach that seeks to parallelize the spatial search associated with computational contact mechanics. In contact mechanics, the purpose of the spatial search is to find “nearest neighbors,” which is the prelude to an imprinting search that resolves the interactions between the external surfaces of contacting bodies. In particular, we are interested in the contact global search portion of the spatial search associated with this operation on domain-decomposition-based meshes. Specifically, we describe an implementation that combines standard domain-decomposition-based MPI-parallel spatial search with thread-level parallelism (MPI-X) available on advanced computer architectures (those with GPU coprocessors). Our goal is to demonstrate the efficacy of the MPI-X paradigm in the overall contact search. Standard MPI-parallel implementations typically use a domain decomposition of the external surfaces of bodies within the domain in an attempt to efficiently distribute computational work. This decomposition may or may not be the same as the volume decomposition associated with the host physics. The parallel contact global search phase is then employed to find and distribute surface entities (nodes and faces) that are needed to compute contact constraints between entities owned by different MPI ranks without further inter-rank communication. Key steps of the contact global search include computing bounding boxes, building surface entity (node and face) search trees and finding and distributing entities required to complete on-rank (local) spatial searches. To enable source-code portability and performance across a variety of different computer architectures, we implemented the algorithm using the Kokkos hardware abstraction library. While we targeted development towards machines with a GPU accelerator per MPI rank, we also report performance results for OpenMP with a conventional multi-core compute node per rank. 
Results here demonstrate a 47% decrease in the time spent within the global search algorithm, comparing the reference ACME algorithm with the GPU implementation, on an 18M face problem using four MPI ranks. While further work remains to maximize performance on the GPU, this result illustrates the potential of the proposed implementation.
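The bounding-box step named above is the cheapest filter in the global search: two surface patches can only interact if their axis-aligned bounding boxes overlap in every dimension. A serial Python sketch of that test (standing in for the MPI+Kokkos implementation, purely for illustration):

```python
# Simplified sketch of the bounding-box phase of a contact global
# search: compute axis-aligned boxes for surface point sets and test
# them for overlap before any expensive local search.

def bbox(points):
    """Axis-aligned bounding box of a set of 3D points."""
    xs, ys, zs = zip(*points)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_overlap(a, b):
    """True if two axis-aligned boxes intersect in every dimension."""
    (alo, ahi), (blo, bhi) = a, b
    return all(alo[d] <= bhi[d] and blo[d] <= ahi[d] for d in range(3))

surf1 = [(0, 0, 0), (1, 1, 1)]
surf2 = [(0.5, 0.5, 0.5), (2, 2, 2)]   # overlaps surf1
surf3 = [(5, 5, 5), (6, 6, 6)]         # disjoint from surf1
b1, b2, b3 = bbox(surf1), bbox(surf2), bbox(surf3)
print(boxes_overlap(b1, b2), boxes_overlap(b1, b3))  # True False
```

Only pairs passing this test would be handed to the search-tree and imprinting stages.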

  11. Optimal random search for a single hidden target.

    PubMed

    Snider, Joseph

    2011-01-01

    A single target is hidden at a location chosen from a predetermined probability distribution. Then, a searcher must find a second probability distribution from which random search points are sampled such that the target is found in the minimum number of trials. Here it will be shown that if the searcher must get very close to the target to find it, then the best search distribution is proportional to the square root of the target distribution regardless of dimension. For a Gaussian target distribution, the optimum search distribution is approximately a Gaussian with a standard deviation that varies inversely with how close the searcher must be to the target to find it. For a network where the searcher randomly samples nodes and looks for the fixed target along edges, the optimum is either to sample a node with probability proportional to the square root of the out-degree plus 1 or not to do so at all.
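For a discrete target distribution the square-root rule is easy to verify numerically: if the target sits at location i with probability p_i and each independent search point lands on i with probability q_i (the target is found only on an exact hit), the expected number of trials is Σ_i p_i/q_i, which is minimized by q_i ∝ √p_i. A small check (the discrete setup is an illustrative simplification of the paper's continuous case):

```python
import math

def expected_trials(p, q):
    """E[trials] = sum_i p_i / q_i for a geometric search over sites."""
    return sum(pi / qi for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]                      # where the target hides
sqrt_q = [math.sqrt(x) for x in p]
total = sum(sqrt_q)
sqrt_q = [s / total for s in sqrt_q]     # q_i proportional to sqrt(p_i)

naive = expected_trials(p, p)            # search where the target is likely
optimal = expected_trials(p, sqrt_q)     # square-root rule
print(optimal < naive)  # True: sqrt-proportional sampling needs fewer trials
```

Note that searching with q = p always costs exactly n trials in expectation, while the square-root rule achieves (Σ_i √p_i)².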

  12. A Search Engine Features Comparison.

    ERIC Educational Resources Information Center

    Vorndran, Gerald

    Until recently, the World Wide Web (WWW) public access search engines have not included many of the advanced commands, options, and features commonly available with the for-profit online database user interfaces, such as DIALOG. This study evaluates the features and characteristics common to both types of search interfaces, examines the Web search…

  13. Evidence for global processing of complex visual displays

    NASA Technical Reports Server (NTRS)

    Munson, Robert C.; Horst, Richard L.

    1986-01-01

    'Polar graphic' displays, in which changes in system status are represented by distortions in the form of a geometric figure, were presented to subjects, and reaction time (RT) to discriminate system status was recorded. Of interest was the extent to which reaction time showed evidence of global processing of these displays as the number of nodes and difficulty of discrimination were varied. When discrimination of system status was easy, RT showed no increase with increasing number of nodes, providing evidence of global processing. When discrimination was difficult, systematic differences in RT as a function of the number of nodes suggested the invocation of other (local) processes, although the data were not consistent with a node-by-node search process.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, Brian; Brightwell, Ronald B.; Grant, Ryan

    This report presents a specification for the Portals 4 network programming interface. Portals 4 is intended to allow scalable, high-performance network communication between nodes of a parallel computing system. Portals 4 is well suited to massively parallel processing and embedded systems. Portals 4 represents an adaptation of the data movement layer developed for massively parallel processing platforms, such as the 4500-node Intel TeraFLOPS machine. Sandia's Cplant cluster project motivated the development of Version 3.0, which was later extended to Version 3.3 as part of the Cray Red Storm machine and XT line. Version 4 is targeted to the next generation of machines employing advanced network interface architectures that support enhanced offload capabilities.

  15. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
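The query-distribution idea is embarrassingly parallel: split the query sequences into roughly equal chunks, one per compute node, and let each node run its own independent BLAST search. A toy sketch of the split step (illustrative only, not DCBLAST's actual code):

```python
# Illustrative sketch of query-sequence distribution: round-robin the
# query ids across worker nodes so chunk sizes differ by at most one.

def split_queries(seq_ids, n_nodes):
    """Distribute query sequences across n_nodes worker nodes."""
    chunks = [[] for _ in range(n_nodes)]
    for i, sid in enumerate(seq_ids):
        chunks[i % n_nodes].append(sid)
    return chunks

queries = [f"seq{i}" for i in range(10)]
chunks = split_queries(queries, 4)
print([len(c) for c in chunks])  # [3, 3, 2, 2]
```

Each chunk would then be written out (e.g. as a FASTA file) and submitted as a separate BLAST job; results are simply concatenated afterwards because the searches are independent.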

  16. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  17. MrGrid: A Portable Grid Based Molecular Replacement Pipeline

    PubMed Central

    Reboul, Cyril F.; Androulakis, Steve G.; Phan, Jennifer M. N.; Whisstock, James C.; Goscinski, Wojtek J.; Abramson, David; Buckle, Ashley M.

    2010-01-01

    Background: The crystallographic determination of protein structures can be computationally demanding and for difficult cases can benefit from user-friendly interfaces to high-performance computing resources. Molecular replacement (MR) is a popular protein crystallographic technique that exploits the structural similarity between proteins that share some sequence similarity. But the need to trial permutations of search models, space group symmetries and other parameters makes MR time- and labour-intensive. However, MR calculations are embarrassingly parallel and thus ideally suited to distributed computing. In order to address this problem we have developed MrGrid, web-based software that allows multiple MR calculations to be executed across a grid of networked computers, allowing high-throughput MR. Methodology/Principal Findings: MrGrid is a portable web-based application written in Java/JSP and Ruby, taking advantage of Apple Xgrid technology. Designed to interface with a user-defined Xgrid resource, the package manages the distribution of multiple MR runs to the available nodes on the Xgrid. We evaluated MrGrid using 10 different protein test cases on a network of 13 computers, and achieved an average speed-up factor of 5.69. Conclusions: MrGrid enables the user to retrieve and manage the results of tens to hundreds of MR calculations quickly and via a single web interface, as well as broadening the range of strategies that can be attempted. This high-throughput approach allows parameter sweeps to be performed in parallel, improving the chances of MR success. PMID:20386612

  18. Efficient searching and annotation of metabolic networks using chemical similarity

    PubMed Central

    Pertusi, Dante A.; Stine, Andrew E.; Broadbelt, Linda J.; Tyo, Keith E.J.

    2015-01-01

    Motivation: The urgent need for efficient and sustainable biological production of fuels and high-value chemicals has elicited a wave of in silico techniques for identifying promising novel pathways to these compounds in large putative metabolic networks. To date, these approaches have primarily used general graph search algorithms, which are prohibitively slow as putative metabolic networks may exceed 1 million compounds. To alleviate this limitation, we report two methods—SimIndex (SI) and SimZyme—which use chemical similarity of 2D chemical fingerprints to efficiently navigate large metabolic networks and propose enzymatic connections between the constituent nodes. We also report a Byers–Waterman type pathway search algorithm for further paring down pertinent networks. Results: Benchmarking tests run with SI show it can reduce the number of nodes visited in searching a putative network by 100-fold with a computational time improvement of up to 10⁵-fold. Subsequent Byers–Waterman search application further reduces the number of nodes searched by up to 100-fold, while SimZyme demonstrates ∼90% accuracy in matching query substrates with enzymes. Using these modules, we have designed and annotated an alternative to the methylerythritol phosphate pathway to produce isopentenyl pyrophosphate with more favorable thermodynamics than the native pathway. These algorithms will have a significant impact on our ability to use large metabolic networks that lack annotation of promiscuous reactions. Availability and implementation: Python files will be available for download at http://tyolab.northwestern.edu/tools/. Contact: k-tyo@northwestern.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25417203
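The fingerprint-similarity idea behind SI can be shown with hand-made "fingerprints" (sets of on-bits) and a greedy best-first walk toward the compound most similar to the target. This toy sketch is an illustration of the principle, not the paper's actual algorithm or data:

```python
# Toy version of fingerprint-similarity-guided network navigation:
# Tanimoto similarity on bit sets steers a greedy walk through a
# hand-made reaction network (all names and data are illustrative).

def tanimoto(a, b):
    """Tanimoto coefficient of two fingerprints given as sets of on-bits."""
    return len(a & b) / len(a | b) if a | b else 1.0

def similarity_guided_path(network, fps, start, target, max_steps=10):
    """Greedy best-first walk: always step to the neighbor whose
    fingerprint is most similar to the target compound's."""
    path, node = [start], start
    for _ in range(max_steps):
        if node == target:
            return path
        node = max(network[node], key=lambda n: tanimoto(fps[n], fps[target]))
        path.append(node)
    return path

network = {"A": ["B", "C"], "B": ["D"], "C": ["A"], "D": []}
fps = {"A": {1, 2}, "B": {1, 2, 3}, "C": {7, 8}, "D": {1, 2, 3, 4}}
print(similarity_guided_path(network, fps, "A", "D"))  # ['A', 'B', 'D']
```

By ranking neighbors instead of expanding all of them, such a walk visits far fewer nodes than a breadth-first search of the same network, which is the source of SI's reported speedups.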

  19. Efficient searching and annotation of metabolic networks using chemical similarity.

    PubMed

    Pertusi, Dante A; Stine, Andrew E; Broadbelt, Linda J; Tyo, Keith E J

    2015-04-01

    The urgent need for efficient and sustainable biological production of fuels and high-value chemicals has elicited a wave of in silico techniques for identifying promising novel pathways to these compounds in large putative metabolic networks. To date, these approaches have primarily used general graph search algorithms, which are prohibitively slow as putative metabolic networks may exceed 1 million compounds. To alleviate this limitation, we report two methods--SimIndex (SI) and SimZyme--which use chemical similarity of 2D chemical fingerprints to efficiently navigate large metabolic networks and propose enzymatic connections between the constituent nodes. We also report a Byers-Waterman type pathway search algorithm for further paring down pertinent networks. Benchmarking tests run with SI show it can reduce the number of nodes visited in searching a putative network by 100-fold with a computational time improvement of up to 10⁵-fold. Subsequent Byers-Waterman search application further reduces the number of nodes searched by up to 100-fold, while SimZyme demonstrates ∼90% accuracy in matching query substrates with enzymes. Using these modules, we have designed and annotated an alternative to the methylerythritol phosphate pathway to produce isopentenyl pyrophosphate with more favorable thermodynamics than the native pathway. These algorithms will have a significant impact on our ability to use large metabolic networks that lack annotation of promiscuous reactions. Python files will be available for download at http://tyolab.northwestern.edu/tools/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Trident: scalable compute archives: workflows, visualization, and analysis

    NASA Astrophysics Data System (ADS)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The Astronomy scientific community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era requires new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise even to novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application work flows. Trident's microservices architecture is made up of light-weight services connected by a REST API and/or a message bus; web interface elements are built using NodeJS, AngularJS, and HighCharts JavaScript libraries, among others, while backend services are written in NodeJS, PHP/Zend, and Python. 
The software suite currently consists of (1) a simple work flow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor work flows and sub-work flows, (3) ImageX, an interactive image visualization service, (4) an authentication and authorization service, (5) a data service that handles archival, staging and serving of data products, and (6) a notification service that serves statistical collation and reporting needs of various projects. Several other additional components are under development. Trident is an umbrella project that evolved from the One Degree Imager, Portal, Pipeline, and Archive (ODI-PPA) project, which we had initially refactored toward (1) a powerful analysis/visualization portal for Globular Cluster System (GCS) survey data collected by IU researchers, (2) a data search and download portal for the IU Electron Microscopy Center's data (EMC-SCA), and (3) a prototype archive for the Ludwig Maximilian University's Wide Field Imager. The new Trident software has been used to deploy (1) a metadata quality control and analytics portal (RADY-SCA) for DICOM formatted medical imaging data produced by the IU Radiology Center, (2) several prototype work flows for different domains, (3) a snapshot tool within IU's Karst Desktop environment, and (4) a limited component-set to serve GIS data within the IU GIS web portal. Trident SCA systems leverage supercomputing and storage resources at Indiana University but can be configured to make use of any cloud/grid resource, from local workstations/servers to (inter)national supercomputing facilities such as XSEDE.

  1. Utilizing knowledge base of amino acids structural neighborhoods to predict protein-protein interaction sites.

    PubMed

    Jelínek, Jan; Škoda, Petr; Hoksza, David

    2017-12-06

    Protein-protein interactions (PPI) play a key role in various biochemical processes, and their identification is thus of great importance. Although computational prediction of which amino acids take part in a PPI has been an active field of research for some time, the quality of in-silico methods is still far from perfect. We have developed a novel prediction method called INSPiRE which benefits from a knowledge base built from data available in the Protein Data Bank. All proteins involved in PPIs were converted into labeled graphs with nodes corresponding to amino acids and edges to pairs of neighboring amino acids. The structural neighborhood of each node was then encoded into a bit string and stored in the knowledge base. When predicting PPIs, INSPiRE labels amino acids of unknown proteins as interface or non-interface based on how often their structural neighborhood appears as interface or non-interface in the knowledge base. We evaluated INSPiRE's behavior with respect to different types and sizes of the structural neighborhood. Furthermore, we examined the suitability of several different features for labeling the nodes. Our evaluations showed that INSPiRE clearly outperforms existing methods with respect to the Matthews correlation coefficient. In this paper we introduce a new knowledge-based method for identification of protein-protein interaction sites called INSPiRE. Its knowledge base utilizes structural patterns of known interaction sites in the Protein Data Bank which are then used for PPI prediction. Extensive experiments on several well-established datasets show that INSPiRE significantly surpasses existing PPI approaches.
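    The knowledge-base idea can be pictured with a small sketch (this is not INSPiRE's actual encoding; the BFS radius, bit width, and majority-vote rule here are invented for illustration): hash the labels in a node's structural neighborhood into a bit string, then label a query node by how its fingerprint was labeled in the knowledge base.

    ```python
    def neighborhood_fingerprint(adj, labels, node, radius=2, nbits=64):
        """Encode the labels found within `radius` hops of `node` as a bit string."""
        seen, frontier = {node}, {node}
        for _ in range(radius):                       # breadth-first expansion
            frontier = {m for n in frontier for m in adj[n]} - seen
            seen |= frontier
        bits = 0
        for n in seen:                                # one bit per observed label
            bits |= 1 << (hash(labels[n]) % nbits)
        return bits

    def predict_interface(knowledge_base, fingerprint):
        """knowledge_base maps fingerprint -> {"interface": count, "non-interface": count}."""
        votes = knowledge_base.get(fingerprint)
        if not votes:
            return "non-interface"                    # unseen pattern: default class
        return max(votes, key=votes.get)              # majority vote
    ```

    In a real system the fingerprints would come from every interface and non-interface residue in the Protein Data Bank; here the counts are supplied by hand.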

  2. An Improved Heuristic Method for Subgraph Isomorphism Problem

    NASA Astrophysics Data System (ADS)

    Xiang, Yingzhuo; Han, Jiesi; Xu, Haijiang; Guo, Xin

    2017-09-01

    This paper focuses on the subgraph isomorphism (SI) problem. We present an improved genetic algorithm, a heuristic method to search for the optimal solution. The contribution of this paper is that we design a dedicated crossover algorithm and a new fitness function to measure the evolution process. Experiments show that our improved genetic algorithm performs better than other heuristic methods. For a large graph, such as a subgraph of 40 nodes, our algorithm outperforms traditional tree search algorithms. We find that the performance of our improved genetic algorithm does not decrease as the number of nodes in the prototype graphs increases.
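    The abstract does not give the paper's exact crossover or fitness definitions, but a common formulation for GA-based subgraph matching (sketched here with invented details) scores a candidate node mapping by the fraction of pattern edges it preserves, and repairs one-point crossover so the child remains an injective mapping:

    ```python
    import random

    def fitness(pattern_edges, target_adj, mapping):
        """Fraction of pattern edges preserved when pattern node i maps to mapping[i]."""
        hits = sum(1 for u, v in pattern_edges if mapping[v] in target_adj[mapping[u]])
        return hits / len(pattern_edges)

    def crossover(p1, p2):
        """One-point crossover over two permutations, repaired to stay injective."""
        cut = random.randrange(1, len(p1))
        child = p1[:cut]
        child += [g for g in p2 if g not in child]   # fill from p2, skipping duplicates
        return child
    ```

    A fitness of 1.0 means the mapping is a valid subgraph isomorphism, so the GA can terminate as soon as any individual reaches it.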

  3. A rank-based Prediction Algorithm of Learning User's Intention

    NASA Astrophysics Data System (ADS)

    Shen, Jie; Gao, Ying; Chen, Cang; Gong, HaiPing

    Internet search has become an important part of people's daily life. Through search engines, people can find many types of information to meet different needs. Current search engines have two issues: first, users must predetermine the type of information they want and then switch to the corresponding search engine interface. Second, although most search engines support multiple kinds of search functions, each function has its own separate search interface, so users who need different types of information must switch between interfaces. In practice, most queries correspond to several types of information results and can retrieve relevant results across search engines; for example, the query "Palace" covers websites introducing the National Palace Museum, blogs, Wikipedia entries, pictures, and videos. This paper presents a new aggregation algorithm for all kinds of search results. It filters and sorts the search results by learning from three sources (the query words, the search results, and the search history logs) to detect the user's intention. Experiments demonstrate that this rank-based method for multiple types of search results is effective: it meets users' search needs well, enhances user satisfaction, provides an effective and rational model for optimizing search engines, and improves the user's search experience.

  4. nodeGame: Real-time, synchronous, online experiments in the browser.

    PubMed

    Balietti, Stefano

    2017-10-01

    nodeGame is a free, open-source JavaScript/HTML5 framework for conducting synchronous experiments online and in the lab directly in the browser window. It is specifically designed to support behavioral research along three dimensions: (i) larger group sizes, (ii) real-time (but also discrete-time) experiments, and (iii) batches of simultaneous experiments. nodeGame has a modular source code, and defines an API (application programming interface) through which experimenters can create new strategic environments and configure the platform. With zero install, nodeGame can run on a great variety of devices, from desktop computers to laptops, smartphones, and tablets. The current version of the software is 3.0, and extensive documentation is available on the wiki pages at http://nodegame.org.

  5. ORA User’s Guide 2013

    DTIC Science & Technology

    2013-06-03

    ORA uses a Java interface for ease of use and a C++ computational backend. The most current version of ORA (3.0.8.5) is available on the CASOS website: http://casos.cs.cmu.edu ... optimizing a network's design structure. ... Eigenvector Centrality: node most connected to other highly connected nodes; assists in identifying those who can mobilize others.

  6. Spying on Search Strategies

    ERIC Educational Resources Information Center

    Tenopir, Carol

    2004-01-01

    Only the most dedicated super-searchers are motivated to learn and control command systems, like DialogClassic, that rely on the user to input complex search strategies. Infrequent searchers and most end users choose interfaces that do some of the work for them and make the search process appear easy. However, the easier a good interface seems to…

  7. Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough.

    PubMed

    Boeker, Martin; Vach, Werner; Motschall, Edith

    2013-10-26

    Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions derived from the structured search procedures conventional in scientific literature retrieval, and to provide an overview of the current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible under consideration of the original search semantics. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches. Relative recall and precision were calculated. We investigated Cochrane reviews with between 11 and 70 included references each, 396 references in total. The Google Scholar searches produced result sets of between 4,320 and 67,800 hits, 291,190 hits in total. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%; the overall precision was 0.13%. The reported relative recall must be interpreted with care.
It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide elements necessary for systematic scientific literature retrieval, such as tools for incremental query optimization, export of a large number of references, a visual search builder, or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary.
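Using the standard definitions (relative recall = gold references retrieved / gold references in total; precision = gold references retrieved / total hits), the study's aggregate figures are mutually consistent: 92.9% overall recall over 396 gold references corresponds to roughly 368 retrieved references, and 368 out of 291,190 hits gives about 0.13% precision. A minimal check:

```python
def relative_recall(gold_found, gold_total):
    # Share of the gold-standard references that appeared in the result sets.
    return gold_found / gold_total

def precision(gold_found, hits_total):
    # Share of all returned hits that were gold-standard references.
    return gold_found / hits_total
```

The 368 figure is inferred from the reported percentages, not stated in the abstract.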

  8. STScI Archive Manual, Version 7.0

    NASA Astrophysics Data System (ADS)

    Padovani, Paolo

    1999-06-01

    The STScI Archive Manual provides the information a user needs to access the HST archive via its two user interfaces: StarView and a World Wide Web (WWW) interface. It provides descriptions of the StarView screens used to access information in the database and the format of that information, and introduces the user to the WWW interface. Using the two interfaces, users can search for observations, preview public data, and retrieve data from the archive. Using StarView one can also find calibration reference files and perform detailed association searches. With the WWW interface archive users can access, and obtain information on, all Multimission Archive at Space Telescope (MAST) data, a collection of mainly optical and ultraviolet datasets which include, amongst others, the International Ultraviolet Explorer (IUE) Final Archive. Both interfaces feature a name resolver which simplifies searches based on target name.

  9. Searching ClinicalTrials.gov and the International Clinical Trials Registry Platform to inform systematic reviews: what are the optimal search approaches?

    PubMed

    Glanville, Julie M; Duffy, Steven; McCool, Rachael; Varley, Danielle

    2014-07-01

    Since 2005, International Committee of Medical Journal Editors (ICMJE) member journals have required that clinical trials be registered in publicly available trials registers before they are considered for publication. The research explores whether it is adequate, when searching to inform systematic reviews, to search for relevant clinical trials using only public trials registers and to identify the optimal search approaches in trials registers. A search was conducted in ClinicalTrials.gov and the International Clinical Trials Registry Platform (ICTRP) for research studies that had been included in eight systematic reviews. Four search approaches (highly sensitive, sensitive, precise, and highly precise) were performed using the basic and advanced interfaces in both resources. On average, 84% of studies were not listed in either resource. The largest number of included studies was retrieved in ClinicalTrials.gov and ICTRP when a sensitive search approach was used in the basic interface. The use of the advanced interface maintained or improved sensitivity in 16 of 19 strategies for Clinicaltrials.gov and 8 of 18 for ICTRP. No single search approach was sensitive enough to identify all studies included in the 6 reviews. Trials registers cannot yet be relied upon as the sole means to locate trials for systematic reviews. Trials registers lag behind the major bibliographic databases in terms of their search interfaces. For systematic reviews, trials registers and major bibliographic databases should be searched. Trials registers should be searched using sensitive approaches, and both the registers consulted in this study should be searched.

  10. SearchGUI: An open-source graphical user interface for simultaneous OMSSA and X!Tandem searches.

    PubMed

    Vaudel, Marc; Barsnes, Harald; Berven, Frode S; Sickmann, Albert; Martens, Lennart

    2011-03-01

    The identification of proteins by mass spectrometry is a standard technique in the field of proteomics, relying on search engines to perform the identifications of the acquired spectra. Here, we present a user-friendly, lightweight and open-source graphical user interface called SearchGUI (http://searchgui.googlecode.com), for configuring and running the freely available OMSSA (open mass spectrometry search algorithm) and X!Tandem search engines simultaneously. Freely available under the permissive Apache2 license, SearchGUI is supported on Windows, Linux and OSX. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Issues in the design of a pilot concept-based query interface for the neuroinformatics information framework.

    PubMed

    Marenco, Luis; Li, Yuli; Martone, Maryann E; Sternberg, Paul W; Shepherd, Gordon M; Miller, Perry L

    2008-09-01

    This paper describes a pilot query interface that has been constructed to help us explore a "concept-based" approach for searching the Neuroscience Information Framework (NIF). The query interface is concept-based in the sense that the search terms submitted through the interface are selected from a standardized vocabulary of terms (concepts) that are structured in the form of an ontology. The NIF contains three primary resources: the NIF Resource Registry, the NIF Document Archive, and the NIF Database Mediator. These NIF resources are very different in their nature and therefore pose challenges when designing a single interface from which searches can be automatically launched against all three resources simultaneously. The paper first discusses briefly several background issues involving the use of standardized biomedical vocabularies in biomedical information retrieval, and then presents a detailed example that illustrates how the pilot concept-based query interface operates. The paper concludes by discussing certain lessons learned in the development of the current version of the interface.

  12. Issues in the Design of a Pilot Concept-Based Query Interface for the Neuroinformatics Information Framework

    PubMed Central

    Li, Yuli; Martone, Maryann E.; Sternberg, Paul W.; Shepherd, Gordon M.; Miller, Perry L.

    2009-01-01

    This paper describes a pilot query interface that has been constructed to help us explore a “concept-based” approach for searching the Neuroscience Information Framework (NIF). The query interface is concept-based in the sense that the search terms submitted through the interface are selected from a standardized vocabulary of terms (concepts) that are structured in the form of an ontology. The NIF contains three primary resources: the NIF Resource Registry, the NIF Document Archive, and the NIF Database Mediator. These NIF resources are very different in their nature and therefore pose challenges when designing a single interface from which searches can be automatically launched against all three resources simultaneously. The paper first discusses briefly several background issues involving the use of standardized biomedical vocabularies in biomedical information retrieval, and then presents a detailed example that illustrates how the pilot concept-based query interface operates. The paper concludes by discussing certain lessons learned in the development of the current version of the interface. PMID:18953674

  13. A finite elements method to solve the Bloch–Torrey equation applied to diffusion magnetic resonance imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Dang Van; NeuroSpin, Bat145, Point Courrier 156, CEA Saclay Center, 91191 Gif-sur-Yvette Cedex; Li, Jing-Rebecca, E-mail: jingrebecca.li@inria.fr

    2014-04-15

    The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch–Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite elements method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch–Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge–Kutta–Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.

  14. Empirical Determination of Pattern Match Confidence in Labeled Graphs

    DTIC Science & Technology

    2014-02-07

    were explored; Erdős–Rényi [6] random graphs, Barabási–Albert preferential attachment graphs [2], and Watts–Strogatz [18] small-world graphs. ... [figure residue: graph type (Erdős–Rényi, Barabási–Albert, Watts–Strogatz) plotted against search limit (direct, within 2 nodes, within 4 nodes), counts 1 to 100,000] ... Barabási–Albert (BA, crosses) and Watts–Strogatz (WS, triangles) graphs were generated with sizes ranging from 50 to 2500 nodes, and labeled

  15. Malignant melanoma (non-metastatic): sentinel lymph node biopsy.

    PubMed

    Pay, Andy

    2016-01-19

    The incidence of malignant melanoma has increased over the past 25 years in the UK, but death rates have remained fairly constant. The 5-year survival rate ranges from 20% to 95%, depending on disease stage. Risks are greater in white populations and in people with higher numbers of skin naevi. We conducted a systematic overview, aiming to answer the following clinical question: What is the evidence for performing a sentinel lymph node biopsy in people with malignant melanoma with clinically uninvolved lymph nodes? We searched: Medline, Embase, The Cochrane Library and other important databases up to October 2014 (Clinical Evidence overviews are updated periodically; please check our website for the most up-to-date version of this overview). At this update, searching of electronic databases retrieved 221 studies. After deduplication and removal of conference abstracts, 99 records were screened for inclusion in the overview. Appraisal of titles and abstracts led to the exclusion of 58 studies and the further review of 41 full publications. Of the 41 full articles evaluated, one systematic review and three RCTs were added at this update. We performed a GRADE evaluation for two PICO combinations. In this systematic overview, we evaluated the evidence for performing sentinel lymph node biopsy in people with malignant melanoma with clinically uninvolved lymph nodes.

  16. Redefining early gastric cancer.

    PubMed

    Barreto, Savio G; Windsor, John A

    2016-01-01

    The problem is that current definitions of early gastric cancer allow the inclusion of regional lymph node metastases. The increasing use of endoscopic submucosal dissection to treat early gastric cancer is a concern because regional lymph nodes are not addressed. The aim of the study was thus to critically evaluate current evidence with regard to tumour-specific factors associated with lymph node metastases in "early gastric cancer" to develop a more precise definition and improve clinical management. A systematic and comprehensive search of major reference databases (MEDLINE, EMBASE, PubMed and the Cochrane Library) was undertaken using a combination of text words "early gastric cancer", "lymph node metastasis", "factors", "endoscopy", "surgery", "lymphadenectomy" "mucosa", "submucosa", "lymphovascular invasion", "differentiated", "undifferentiated" and "ulcer". All available publications that described tumour-related factors associated with lymph node metastases in early gastric cancer were included. The initial search yielded 1494 studies, of which 42 studies were included in the final analysis. Over time, the definition of early gastric cancer has broadened and the indications for endoscopic treatment have widened. The mean frequency of lymph node metastases increased on the basis of depth of infiltration (mucosa 6% vs. submucosa 28%), presence of lymphovascular invasion (absence 9% vs. presence 53%), tumour differentiation (differentiated 13% vs. undifferentiated 34%) and macroscopic type (elevated 13% vs. flat 26%) and tumour diameter (≤2 cm 8% vs. >2 cm 25%). There is a need to re-examine the diagnosis and staging of early gastric cancer to ensure that patients with one or more identifiable risk factor for lymph node metastases are not denied appropriate chemotherapy and surgical resection.

  17. Optimizing real-time Web-based user interfaces for observatories

    NASA Astrophysics Data System (ADS)

    Gibson, J. Duane; Pickering, Timothy E.; Porter, Dallan; Schaller, Skip

    2008-08-01

    In using common HTML/Ajax approaches for web-based data presentation and telescope control user interfaces at the MMT Observatory (MMTO), we rapidly were confronted with web browser performance issues. Much of the operational data at the MMTO is highly dynamic and is constantly changing during normal operations. Status of telescope subsystems must be displayed with minimal latency to telescope operators and other users. A major motivation of migrating toward web-based applications at the MMTO is to provide easy access to current and past observatory subsystem data for a wide variety of users on their favorite operating system through a familiar interface, their web browser. Performance issues, especially for user interfaces that control telescope subsystems, led to investigations of more efficient use of HTML/Ajax and web server technologies as well as other web-based technologies, such as Java and Flash/Flex. The results presented here focus on techniques for optimizing HTML/Ajax web applications with near real-time data display. This study indicates that direct modification of the contents or "nodeValue" attribute of text nodes is the most efficient method of updating data values displayed on a web page. Other optimization techniques are discussed for web-based applications that display highly dynamic data.

  18. Design and implementation of a hybrid MPI-CUDA model for the Smith-Waterman algorithm.

    PubMed

    Khaled, Heba; Faheem, Hossam El Deen Mostafa; El Gohary, Rania

    2015-01-01

    This paper provides a novel hybrid model for solving the multiple pair-wise sequence alignment problem combining message passing interface and CUDA, the parallel computing platform and programming model invented by NVIDIA. The proposed model targets homogeneous cluster nodes equipped with similar Graphical Processing Unit (GPU) cards. The model consists of the Master Node Dispatcher (MND) and the Worker GPU Nodes (WGN). The MND distributes the workload among the cluster working nodes and then aggregates the results. The WGN performs the multiple pair-wise sequence alignments using the Smith-Waterman algorithm. We also propose a modified implementation to the Smith-Waterman algorithm based on computing the alignment matrices row-wise. The experimental results demonstrate a considerable reduction in the running time by increasing the number of the working GPU nodes. The proposed model achieved a performance of about 12 Giga cell updates per second when we tested against the SWISS-PROT protein knowledge base running on four nodes.
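    The row-wise scheme can be illustrated in a simplified single-threaded form (a sketch only; the paper distributes this work across GPU nodes via MPI and CUDA, and the scoring parameters here are arbitrary). Computing the Smith-Waterman matrix row by row means only the previous row needs to be kept, or exchanged between workers:

    ```python
    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
        """Best local alignment score, filling the DP matrix row by row.
        Only the previous row is retained, mirroring the row-wise scheme."""
        prev = [0] * (len(b) + 1)
        best = 0
        for ca in a:
            curr = [0] * (len(b) + 1)
            for j, cb in enumerate(b, 1):
                score = match if ca == cb else mismatch
                # Local alignment: scores are clamped at zero.
                curr[j] = max(0, prev[j - 1] + score, prev[j] + gap, curr[j - 1] + gap)
                best = max(best, curr[j])
            prev = curr
        return best
    ```

    In the paper's setting, the master node would dispatch sequence pairs to worker GPU nodes, each of which runs this recurrence on its card and returns the score.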

  19. Patscanui: an intuitive web interface for searching patterns in DNA and protein data.

    PubMed

    Blin, Kai; Wohlleben, Wolfgang; Weber, Tilmann

    2018-05-02

    Patterns in biological sequences frequently signify interesting features in the underlying molecule. Many tools exist to search for well-known patterns. Less support is available for exploratory analysis, where no well-defined patterns are known yet. PatScanUI (https://patscan.secondarymetabolites.org/) provides a highly interactive web interface to the powerful generic pattern search tool PatScan. The complex PatScan patterns are created in a drag-and-drop aware interface, allowing researchers to rapidly prototype the often complicated patterns useful for identifying features of interest.

  20. Browsing schematics: Query-filtered graphs with context nodes

    NASA Technical Reports Server (NTRS)

    Ciccarelli, Eugene C.; Nardi, Bonnie A.

    1988-01-01

    The early results of a research project to create tools for building interfaces to intelligent systems on the NASA Space Station are reported. One such tool is the Schematic Browser which helps users engaged in engineering problem solving find and select schematics from among a large set. Users query for schematics with certain components, and the Schematic Browser presents a graph whose nodes represent the schematics with those components. The query greatly reduces the number of choices presented to the user, filtering the graph to a manageable size. Users can reformulate and refine the query serially until they locate the schematics of interest. To help users maintain orientation as they navigate a large body of data, the graph also includes nodes that are not matches but provide global and local context for the matching nodes. Context nodes include landmarks, ancestors, siblings, children and previous matches.
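    The context-node idea can be sketched minimally (names invented here; the actual Schematic Browser also adds landmarks, siblings, children, and previous matches): after query filtering, each matching node's ancestors are retained so the user keeps global orientation in the graph.

    ```python
    def filtered_view(parent, matches):
        """Return the matching nodes plus their ancestors as context nodes.
        `parent` maps each non-root node to its parent in the hierarchy."""
        keep = set(matches)
        for node in matches:
            while node in parent:        # climb until we fall off at the root
                node = parent[node]
                keep.add(node)           # ancestors provide global context
        return keep
    ```

    The filtered graph shown to the user would then distinguish match nodes from context nodes visually, so context never masquerades as a query hit.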

  1. Parallel file system with metadata distributed across partitioned key-value store c

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).

  2. Correctness Proof of a Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2011-01-01

    This report presents a deductive proof of a self-stabilizing distributed clock synchronization protocol. It is focused on the distributed clock synchronization of an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. This protocol does not rely on assumptions about the initial state of the system, and no central clock or a centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. We present a deductive proof of the correctness of the protocol as it applies to the networks with unidirectional and bidirectional links. We also confirm the claims of determinism and linear convergence.

  3. Porting Gravitational Wave Signal Extraction to Parallel Virtual Machine (PVM)

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Thompson, David E.; Redmon, Jeffery

    2009-01-01

    Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA mission to be launched around 2012. Gravitational Wave detection is fundamentally the determination of frequency, source parameters, and waveform amplitude derived in a specific order from the interferometric time-series of the rotating LISA spacecraft. The LISA Science Team has developed a Mock LISA Data Challenge intended to promote the testing of complicated nested search algorithms to detect the 100-1 millihertz frequency signals at amplitudes of 10E-21. However, it has become clear that sequential search of the parameters is very time-consuming and ultra-sensitive; hence, a new strategy has been developed. Parallelization of existing sequential search algorithms for Gravitational Wave signal identification consists of decomposing sequential search loops, beginning with the outermost loops and working inward. In this process, the main challenge is to detect interdependencies among loops and to partition the loops so as to preserve concurrency. Existing parallel programs are based upon either shared memory or distributed memory paradigms. In PVM, master and node programs are used to execute parallelization and process spawning. The PVM can handle process management and process addressing schemes using a virtual machine configuration. The task scheduling and the messaging and signaling can be implemented efficiently for the LISA Gravitational Wave search process using a master and 6 nodes. This approach is accomplished using a server that is available at NASA Ames Research Center and has been dedicated to the LISA Data Challenge Competition. Historically, extraction of gravitational wave and source identification parameters has taken around 7 days on this dedicated single-thread Linux-based server. Using the PVM approach, the parameter extraction problem can be reduced to within a day.
The low frequency computation and a proxy signal-to-noise ratio are calculated in separate nodes that are controlled by the master using message and vector of data passing. The message passing among nodes follows a pattern of synchronous and asynchronous send-and-receive protocols. The communication model and the message buffers are allocated dynamically to address rapid search of gravitational wave source information in the Mock LISA data sets.

  4. MEMS tracking mirror system for a bidirectional free-space optical link.

    PubMed

    Jeon, Sungho; Toshiyoshi, Hiroshi

    2017-08-20

    We report on a bidirectional free-space optical system that is capable of automatic connection and tracking of an optical link between two nodes. A piezoelectric micro-electro-mechanical systems (MEMS) optical scanner is used to steer a laser beam of two wavelengths superposed to visually present a communication zone, to search for the position of the remote node by means of the retro-reflector optics, and to transmit the data between the nodes. A feedback system is developed to control the MEMS scanner to dynamically establish the optical link within a 10-ms transition time and to keep track of the moving node.

  5. Usability evaluation of an experimental text summarization system and three search engines: implications for the reengineering of health care interfaces.

    PubMed

    Kushniruk, Andre W; Kan, Min-Yen; McKeown, Kathleen; Klavans, Judith; Jordan, Desmond; LaFlamme, Mark; Patel, Vimla L

    2002-01-01

    This paper describes the comparative evaluation of an experimental automated text summarization system, Centrifuser, and three conventional search engines: Google, Yahoo, and About.com. Centrifuser provides information to patients and families relevant to their questions about specific health conditions. It produces a multidocument summary of articles retrieved by a standard search engine, tailored to the user's question. Subjects, consisting of friends or family of hospitalized patients, were asked to "think aloud" as they interacted with the four systems. The evaluation involved audio and video recording of subject interactions with the interfaces in situ at a hospital. Results of the evaluation show that subjects found Centrifuser's summarization capability useful and easy to understand. In comparing Centrifuser to the three search engines, subjects' ratings varied; however, specific interface features were deemed useful across interfaces. We conclude with a discussion of the implications for engineering Web-based retrieval systems.

  6. Decentralized Control of Scheduling in Distributed Systems.

    DTIC Science & Technology

    1983-03-18

    the job scheduling algorithm adapts to the changing busyness of the various hosts in the system. The environment in which the job scheduling entities...resources and processes that constitute the node and a set of interfaces for accessing these processes and resources. The structure of a node could change ...parallel. Chang [CHNG82] has also described some algorithms for detecting properties of general graphs by traversing paths in a graph in parallel. One of

  7. Standardized Fault-Tolerant Sensing Nodes for an Intelligent Turbine Engine Control System (Preprint)

    DTIC Science & Technology

    2013-05-01

    representation of a centralized control system on a turbine engine. All actuators and sensors are point-to-point cabled to the controller ( FADEC ) which...electronics themselves. Figure 1: Centralized Control System Each function resides within the FADEC and uses Unique Point-to-Point Analog...distributed control system on the same turbine engine. The actuators and sensors interface to Smart Nodes which, in turn communicate to the FADEC via

  8. Key-value store with internal key-value storage interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Ting, Dennis P. J.

    A key-value store is provided having one or more key-value storage interfaces. A key-value store on at least one compute node comprises a memory for storing a plurality of key-value pairs; and an abstract storage interface comprising a software interface module that communicates with at least one persistent storage device providing a key-value interface for persistent storage of one or more of the plurality of key-value pairs, wherein the software interface module provides the one or more key-value pairs to the at least one persistent storage device in a key-value format. The abstract storage interface optionally processes one or more batch operations on the plurality of key-value pairs. A distributed embodiment for a partitioned key-value store is also provided.
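
    The layered pattern described above, an in-memory store forwarding pairs in key-value format through an abstract interface to a pluggable persistent backend, can be sketched in Python. All class and method names here (`KeyValueStore`, `PersistentKVBackend`, `DictBackend`, `batch_put`) are hypothetical illustrations, not the patented implementation:

```python
from abc import ABC, abstractmethod

class PersistentKVBackend(ABC):
    """Persistent storage device exposing a key-value interface (hypothetical)."""
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

class DictBackend(PersistentKVBackend):
    """In-process stand-in for a real persistent device, for illustration only."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

class KeyValueStore:
    """Compute-node store: in-memory pairs plus an abstract storage interface."""
    def __init__(self, backend):
        self._memory = {}        # key-value pairs held in memory
        self._backend = backend  # software interface module to persistent device
    def put(self, key, value):
        self._memory[key] = value
        self._backend.put(key, value)  # forwarded in key-value format
    def get(self, key):
        if key in self._memory:
            return self._memory[key]
        return self._backend.get(key)
    def batch_put(self, pairs):
        """Optional batch operation over many key-value pairs."""
        for key, value in pairs:
            self.put(key, value)
```

    Because the backend is reached only through the abstract interface, the same compute-node store can sit in front of any device that speaks key-value.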

  9. A lock-free priority queue design based on multi-dimensional linked lists

    DOE PAGES

    Dechev, Damian; Zhang, Deli

    2015-04-03

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees worst-case search time of O(log N) for a key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes, and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over state-of-the-art approaches under high concurrency.
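
    The key-to-vector mapping at the heart of MDList insertion can be illustrated with a short sketch: a scalar key from a universe of size N = base**dims is written out as its base-`base` digits, giving `dims` coordinates whose prefixes preserve key order. The function name and parameters are assumptions for illustration, not the authors' code:

```python
def key_to_vector(key, dims, base):
    """Injectively map a scalar key in [0, base**dims) to `dims` coordinates.
    Digits are emitted most-significant first, so coordinate prefixes preserve
    key order, which is what keeps the MDList ordered without rebalancing."""
    coords = []
    for _ in range(dims):
        coords.append(key % base)  # extract least-significant digit
        key //= base
    return coords[::-1]            # reverse: most-significant digit first
```

    With dims proportional to log N, locating a target by walking one dimension per coordinate yields the O(log N) worst-case search the paper describes.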

  10. Path Network Recovery Using Remote Sensing Data and Geospatial-Temporal Semantic Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    William C. McLendon III; Brost, Randy C.

    Remote sensing systems produce large volumes of high-resolution images that are difficult to search. The GeoGraphy (pronounced Geo-Graph-y) framework [2, 20] encodes remote sensing imagery into a geospatial-temporal semantic graph representation to enable high-level semantic searches to be performed. Typically scene objects such as buildings and trees tend to be shaped like blocks with few holes, but other shapes generated from path networks tend to have a large number of holes and can span a large geographic region due to their connectedness. For example, we have a dataset covering the city of Philadelphia in which there is a single road network node spanning a 6 mile x 8 mile region. Even a simple question such as "find two houses near the same street" might give unexpected results. More generally, nodes arising from networks of paths (roads, sidewalks, trails, etc.) require additional processing to make them useful for searches in GeoGraphy. We have assigned the term Path Network Recovery to this process. Path Network Recovery is a three-step process involving (1) partitioning the network node into segments, (2) repairing broken path segments interrupted by occlusions or sensor noise, and (3) adding path-aware search semantics into GeoQuestions. This report covers the path network recovery process, how it is used, and some example use cases of the current capabilities.
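
    Step (2), repairing path segments broken by occlusions or sensor noise, can be sketched as a union-find merge of segments whose endpoints fall within a gap tolerance. This is a simplified stand-in for GeoGraphy's actual processing; the names and the endpoint-distance criterion are assumptions for illustration:

```python
import math

class UnionFind:
    """Minimal union-find for grouping path segments into components."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def repair_segments(endpoints, gap_tol):
    """Merge segments whose endpoints lie within gap_tol of each other,
    bridging small gaps left by occlusions or sensor noise.
    endpoints: list of ((x1, y1), (x2, y2)) per segment.
    Returns a component label for each segment."""
    n = len(endpoints)
    uf = UnionFind(n)
    for i in range(n):
        for j in range(i + 1, n):
            if any(math.dist(p, q) <= gap_tol
                   for p in endpoints[i] for q in endpoints[j]):
                uf.union(i, j)
    return [uf.find(i) for i in range(n)]
```

    Segments separated by an occlusion shorter than `gap_tol` end up with the same label, while genuinely disconnected paths remain distinct components.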

  11. A lock-free priority queue design based on multi-dimensional linked lists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechev, Damian; Zhang, Deli

    The throughput of concurrent priority queues is pivotal to multiprocessor applications such as discrete event simulation, best-first search and task scheduling. Existing lock-free priority queues are mostly based on skiplists, which probabilistically create shortcuts in an ordered list for fast insertion of elements. The use of skiplists eliminates the need for global rebalancing in balanced search trees and ensures logarithmic sequential search time on average, but the worst-case performance is linear with respect to the input size. In this paper, we propose a quiescently consistent lock-free priority queue based on a multi-dimensional list that guarantees worst-case search time of O(log N) for a key universe of size N. The novel multi-dimensional list (MDList) is composed of nodes that contain multiple links to child nodes arranged by their dimensionality. The insertion operation works by first injectively mapping the scalar key to a high-dimensional vector, then uniquely locating the target position by using the vector as coordinates. Nodes in MDList are ordered by their coordinate prefixes, and the ordering property of the data structure is readily maintained during insertion without rebalancing or randomization. Furthermore, in our experimental evaluation using a micro-benchmark, our priority queue achieves an average of 50% speedup over state-of-the-art approaches under high concurrency.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McKisson, John

    The source code for the Java Data Acquisition suite provides interfaces to the JLab-built USB FPGA ADC across a LAN network. Each jDaq node provides ListMode data from JLab-built detector systems and readouts.

  13. Dynamic Extension of a Virtualized Cluster by using Cloud Resources

    NASA Astrophysics Data System (ADS)

    Oberst, Oliver; Hauth, Thomas; Kernert, David; Riedel, Stephan; Quast, Günter

    2012-12-01

    The specific requirements concerning the software environment within the HEP community constrain the choice of resource providers for the outsourcing of computing infrastructure. The use of virtualization in HPC clusters and in the context of cloud resources is therefore a subject of recent developments in scientific computing. The dynamic virtualization of worker nodes in common batch systems provided by ViBatch serves each user with a dynamically virtualized subset of worker nodes on a local cluster. Now it can be transparently extended by the use of common open source cloud interfaces like OpenNebula or Eucalyptus, launching a subset of the virtual worker nodes within the cloud. This paper demonstrates how a dynamically virtualized computing cluster is combined with cloud resources by attaching remotely started virtual worker nodes to the local batch system.

  14. A correction function method for the wave equation with interface jump conditions

    NASA Astrophysics Data System (ADS)

    Abraham, David S.; Marques, Alexandre Noll; Nave, Jean-Christophe

    2018-01-01

    In this paper a novel method to solve the constant coefficient wave equation, subject to interface jump conditions, is presented. In general, such problems pose issues for standard finite difference solvers, as the inherent discontinuity in the solution results in erroneous derivative information wherever the stencils straddle the given interface. Here, however, the recently proposed Correction Function Method (CFM) is used, in which correction terms are computed from the interface conditions, and added to affected nodes to compensate for the discontinuity. In contrast to existing methods, these corrections are not simply defined at affected nodes, but rather generalized to a continuous function within a small region surrounding the interface. As a result, the correction function may be defined in terms of its own governing partial differential equation (PDE) which may be solved, in principle, to arbitrary order of accuracy. The resulting scheme is not only arbitrarily high order, but also robust, having already seen application to Poisson problems and the heat equation. By extending the CFM to this new class of PDEs, the treatment of wave interface discontinuities in homogeneous media becomes possible. This allows, for example, for the straightforward treatment of infinitesimal source terms and sharp boundaries, free of staircasing errors. Additionally, new modifications to the CFM are derived, allowing compatibility with explicit multi-step methods, such as Runge-Kutta (RK4), without a reduction in accuracy. These results are then verified through numerous numerical experiments in one and two spatial dimensions.

  15. Implementing an SIG based platform of application and service for city spatial information in Shanghai

    NASA Astrophysics Data System (ADS)

    Yu, Bailang; Wu, Jianping

    2006-10-01

    Spatial Information Grid (SIG) is an infrastructure that provides services for spatial information according to users' needs by collecting, sharing, organizing and processing massive distributed spatial information resources. This paper presents the architecture, technologies and implementation of the Shanghai City Spatial Information Application and Service System, an SIG-based platform that serves the administration, planning, construction and development of the city. In the System, there are ten categories of spatial information resources, including city planning, land-use, real estate, river system, transportation, municipal facility construction, environment protection, sanitation, urban afforestation and basic geographic information data. In addition, spatial information processing services are offered as GIS Web Services. The resources and services are all distributed across different web-based nodes. A single database is created to store the metadata of all the spatial information. A portal site is published as the main user interface of the System. The portal site has three main functions. First, users can search the metadata and then acquire the distributed data using the search results. Second, spatial processing web applications developed with GIS Web Services, such as file format conversion, spatial coordinate transfer, cartographic generalization and spatial analysis, are offered. Third, GIS Web Services currently available in the System can be searched and new ones can be registered. The System has been working efficiently in the Shanghai Government Network since 2005.

  16. Proposal for optimal placement platform of bikes using queueing networks.

    PubMed

    Mizuno, Shinya; Iwamoto, Shogo; Seki, Mutsumi; Yamaki, Naokazu

    2016-01-01

    In recent social experiments, rental motorbikes and rental bicycles have been arranged at nodes, and environments where users can ride these bikes have been improved. When people borrow bikes, they return them to nearby nodes. Some experiments have been conducted using the models of Hamachari of Yokohama, the Niigata Rental Cycle, and Bicing. However, from these experiments, the effectiveness of distributing bikes was unclear, and many models were discontinued midway. Thus, we need to consider whether these models are effectively designed to represent the distribution system. Therefore, we construct a model to arrange the nodes for distributing bikes using a queueing network. To adopt realistic values in our model, we use the Google Maps application programming interface, which lets us easily obtain distances and transit times between nodes in various places in the world. Moreover, we apply a population distribution to a gravity model and compute the effective transition probabilities for this queueing network. If the arrangement of the nodes and the number of bikes at each node are known, we can design the system precisely. We illustrate our system using convenience stores as nodes and optimize the node configuration. As a result, we can simultaneously optimize the number of nodes, node locations, and number of bikes at each node, and we can construct a base for a rental cycle business using our system.
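
    The gravity-model step can be sketched as follows: attraction between nodes is taken proportional to the destination population and inversely proportional to a power of the travel distance (or transit time), then row-normalized into transition probabilities for the queueing network. The inverse-power form and the `beta` exponent are assumptions for illustration, not the paper's exact formulation:

```python
def gravity_transition_matrix(populations, distances, beta=2.0):
    """Row-normalized gravity-model transition probabilities between nodes.
    populations: population attracted to each node.
    distances[i][j]: travel distance or transit time from node i to node j.
    Weight pop_j / d_ij**beta is a common gravity-model form (assumed here)."""
    n = len(populations)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # attraction of each destination j, excluding self-transitions
        weights = [populations[j] / distances[i][j] ** beta if j != i else 0.0
                   for j in range(n)]
        total = sum(weights)
        for j in range(n):
            P[i][j] = weights[j] / total if total else 0.0
    return P
```

    Each row of the resulting matrix sums to one, so it can be used directly as the routing matrix of the queueing network.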

  17. Updates to the Virtual Atomic and Molecular Data Centre

    NASA Astrophysics Data System (ADS)

    Hill, Christian; Tennyson, Jonathan; Gordon, Iouli E.; Rothman, Laurence S.; Dubernet, Marie-Lise

    2014-06-01

    The Virtual Atomic and Molecular Data Centre (VAMDC) has established a set of standards for the storage and transmission of atomic and molecular data and an SQL-based query language (VSS2) for searching online databases, known as nodes. The project has also created an online service, the VAMDC Portal, through which all of these databases may be searched and their results compared and aggregated. Since its inception four years ago, the VAMDC e-infrastructure has grown to encompass over 40 databases, including HITRAN, in more than 20 countries and engages actively with scientists on six continents. Associated with the portal is a growing suite of software tools for the transformation of data from its native XML-based XSAMS format to a range of more convenient human-readable (such as HTML) and machine-readable (such as CSV) formats. The relational database for HITRAN, created as part of the VAMDC project, is a flexible and extensible data model which is able to represent a wider range of parameters than the current fixed-format text-based one. Over the next year, a new online interface to this database will be tested, released and fully documented; this web application, HITRANonline, will fully replace the ageing and incomplete JavaHAWKS software suite.

  18. Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough

    PubMed Central

    2013-01-01

    Background Recent research indicates a high recall in Google Scholar searches for systematic reviews. These reports raised high expectations of Google Scholar as a unified and easy-to-use search interface. However, studies on the coverage of Google Scholar rarely used the search interface in a realistic approach but instead merely checked for the existence of gold standard references. In addition, the severe limitations of the Google Scholar search interface must be taken into consideration when comparing it with professional literature retrieval tools. The objectives of this work are to measure the relative recall and precision of searches with Google Scholar under conditions derived from the structured search procedures conventional in scientific literature retrieval, and to provide an overview of the current advantages and disadvantages of the Google Scholar search interface in scientific literature retrieval. Methods General and MEDLINE-specific search strategies were retrieved from 14 Cochrane systematic reviews. The Cochrane systematic review search strategies were translated into Google Scholar search expressions as faithfully as possible under consideration of the original search semantics. The references of the included studies from the Cochrane reviews were checked for their inclusion in the result sets of the Google Scholar searches. Relative recall and precision were calculated. Results We investigated Cochrane reviews with between 11 and 70 included references, 396 references in total. The Google Scholar searches produced result sets of between 4,320 and 67,800 hits, 291,190 in total. The relative recall of the Google Scholar searches had a minimum of 76.2% and a maximum of 100% (7 searches). The precision of the Google Scholar searches had a minimum of 0.05% and a maximum of 0.92%. The overall relative recall for all searches was 92.9%; the overall precision was 0.13%. Conclusion The reported relative recall must be interpreted with care.
It is a quality indicator of Google Scholar confined to an experimental setting which is unavailable in systematic retrieval due to the severe limitations of the Google Scholar search interface. Currently, Google Scholar does not provide necessary elements for systematic scientific literature retrieval such as tools for incremental query optimization, export of a large number of references, a visual search builder or a history function. Google Scholar is not ready as a professional searching tool for tasks where structured retrieval methodology is necessary. PMID:24160679
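
    The two reported metrics are simple ratios: relative recall is the fraction of the gold-standard references retrieved, and precision is that same count divided by the total number of hits. A quick check against the study's overall figures; the count of 368 retrieved gold references is inferred here from 92.9% of 396, an assumption for illustration:

```python
def relative_recall_precision(gold_found, gold_total, hits):
    """Relative recall and precision of a search against a gold standard."""
    return gold_found / gold_total, gold_found / hits

# Overall figures from the study: 396 gold references, 291,190 total hits.
# gold_found = 368 is inferred from the reported 92.9% overall recall.
recall, precision = relative_recall_precision(368, 396, 291190)
# recall ~ 0.929 (92.9%), precision ~ 0.0013 (0.13%)
```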

  19. A novel PON based UMTS broadband wireless access network architecture with an algorithm to guarantee end to end QoS

    NASA Astrophysics Data System (ADS)

    Sana, Ajaz; Hussain, Shahab; Ali, Mohammed A.; Ahmed, Samir

    2007-09-01

    In this paper we propose a novel Passive Optical Network (PON) based broadband wireless access network architecture to provide multimedia services (video telephony, video streaming, mobile TV, mobile e-mail, etc.) to mobile users. In conventional wireless access networks, the base stations (Node B) and Radio Network Controllers (RNC) are connected by point-to-point T1/E1 lines (the Iub interface). The T1/E1 lines are expensive and add to operating costs. Moreover, the resources (transceivers and T1/E1 lines) are dimensioned for peak-hour traffic, so most of the time the dedicated resources sit idle and are wasted. Furthermore, the T1/E1 lines cannot support the bandwidth (BW) required by next-generation wireless multimedia services proposed by High Speed Packet Access (HSPA, Rel. 5) for the Universal Mobile Telecommunications System (UMTS) and Evolution-Data Only (EV-DO) for Code Division Multiple Access 2000 (CDMA2000). The proposed PON-based backhaul can provide gigabit data rates, and the Iub interface can be dynamically shared by Node Bs: BW is dynamically allocated, and unused BW from lightly loaded Node Bs is assigned to heavily loaded Node Bs. We also propose a novel algorithm to provide end-to-end Quality of Service (QoS) between the RNC and the user equipment. The algorithm provides QoS bounds in the wired domain as well as in the wireless domain, with compensation for wireless link errors. Because of the air interface, there are times when the user equipment (UE) is unable to communicate with the Node B (usually referred to as a link error). Since link errors are bursty and location-dependent, the scheduler at the Node B maps QoS priorities and weights into the wireless MAC. Compensation for errored links is provided by swapping services between active users; user data is divided into flows, with flows allowed to lag or lead. The algorithm guarantees (1) delay and throughput for error-free flows, (2) short-term fairness among error-free flows, (3) long-term fairness among errored and error-free flows, and (4) graceful degradation for leading flows and graceful compensation for lagging flows.

  20. SLIM: an alternative Web interface for MEDLINE/PubMed searches – a preliminary study

    PubMed Central

    Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael

    2005-01-01

    Background With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Results Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. Conclusion SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine. PMID:16321145
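
    A minimal sketch of the kind of E-Utilities call SLIM makes: building an `esearch` request URL for PubMed. The endpoint and the `db`/`term`/`retmax` parameters are standard parts of the public E-Utilities API; the helper function itself is a hypothetical illustration in Python, not SLIM's PHP code:

```python
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term, retmax=20):
    """Build a PubMed esearch request URL.
    db, term and retmax are documented E-Utilities parameters; a slider-driven
    interface like SLIM would adjust values such as retmax from its controls."""
    params = {"db": "pubmed", "term": term, "retmax": retmax}
    return EUTILS_ESEARCH + "?" + urlencode(params)
```

    Fetching this URL returns an XML result list; SLIM layers its slider-controlled limits, filters and MeSH options on top of calls of this shape.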

  1. SLIM: an alternative Web interface for MEDLINE/PubMed searches - a preliminary study.

    PubMed

    Muin, Michael; Fontelo, Paul; Liu, Fang; Ackerman, Michael

    2005-12-01

    With the rapid growth of medical information and the pervasiveness of the Internet, online search and retrieval systems have become indispensable tools in medicine. The progress of Web technologies can provide expert searching capabilities to non-expert information seekers. The objective of the project is to create an alternative search interface for MEDLINE/PubMed searches using JavaScript slider bars. SLIM, or Slider Interface for MEDLINE/PubMed searches, was developed with PHP and JavaScript. Interactive slider bars in the search form controlled search parameters such as limits, filters and MeSH terminologies. Connections to PubMed were done using the Entrez Programming Utilities (E-Utilities). Custom scripts were created to mimic the automatic term mapping process of Entrez. Page generation times for both local and remote connections were recorded. Alpha testing by developers showed SLIM to be functionally stable. Page generation times to simulate loading times were recorded the first week of alpha and beta testing. Average page generation times for the index page, previews and searches were 2.94 milliseconds, 0.63 seconds and 3.84 seconds, respectively. Eighteen physicians from the US, Australia and the Philippines participated in the beta testing and provided feedback through an online survey. Most users found the search interface user-friendly and easy to use. Information on MeSH terms and the ability to instantly hide and display abstracts were identified as distinctive features. SLIM can be an interactive time-saving tool for online medical literature research that improves user control and capability to instantly refine and refocus search strategies. With continued development and by integrating search limits, methodology filters, MeSH terms and levels of evidence, SLIM may be useful in the practice of evidence-based medicine.

  2. IEEE-802.15.4-based low-power body sensor node with RF energy harvester.

    PubMed

    Tran, Thang Viet; Chung, Wan-Young

    2014-01-01

    This paper proposes the design and implementation of a low-voltage and low-power body sensor node based on the IEEE 802.15.4 standard to collect electrocardiography (ECG) and photoplethysmography (PPG) signals. To achieve compact size, low supply voltage, and low power consumption, the proposed platform is integrated into a ZigBee mote, which contains a DC-DC booster, a PPG sensor interface module, and an ECG front-end circuit that has ultra-low current consumption. The input voltage of the proposed node is very low and has a wide range, from 0.65 V to 3.3 V. An RF energy harvester is also designed to charge the battery during the working mode or standby mode of the node. The power consumption of the proposed node is 14 mW in working mode, prolonging the battery lifetime. The software is supported by the nesC language under the TinyOS environment, which enables the proposed node to be easily configured to function as an individual health monitoring node or as a node in a wireless body sensor network (BSN). The proposed node is used to set up a wireless BSN that can simultaneously collect ECG and PPG signals and monitor the results on a personal computer.

  3. Scientific Impact Recognition Award. Sentinel node staging for breast cancer: intraoperative molecular pathology overcomes conventional histologic sampling errors.

    PubMed

    Blumencranz, Peter; Whitworth, Pat W; Deck, Kenneth; Rosenberg, Anne; Reintgen, Douglas; Beitsch, Peter; Chagpar, Anees; Julian, Thomas; Saha, Sukamal; Mamounas, Eleftherios; Giuliano, Armando; Simmons, Rache

    2007-10-01

    When sentinel node dissection reveals breast cancer metastasis, completion axillary lymph node dissection is ideally performed during the same operation. Intraoperative histologic techniques have low and variable sensitivity. A new intraoperative molecular assay (GeneSearch BLN Assay; Veridex, LLC, Warren, NJ) was evaluated to determine its efficiency in identifying significant sentinel lymph node metastases (>0.2 mm). Positive or negative BLN Assay results generated from fresh 2-mm node slabs were compared with results from conventional histologic evaluation of adjacent fixed tissue slabs. In a prospective study of 416 patients at 11 clinical sites, the assay detected 98% of metastases >2 mm and 88% of metastases >0.2 mm, results superior to frozen section. Micrometastases were less frequently detected (57%), and assay-positive results in nodes found negative by histology were rare (4%). The BLN Assay is properly calibrated for use as a stand-alone intraoperative molecular test.

  4. SearchGUI: A Highly Adaptable Common Interface for Proteomics Search and de Novo Engines.

    PubMed

    Barsnes, Harald; Vaudel, Marc

    2018-05-25

    Mass-spectrometry-based proteomics has become the standard approach for identifying and quantifying proteins. A vital step consists of analyzing experimentally generated mass spectra to identify the underlying peptide sequences for later mapping to the originating proteins. We here present the latest developments in SearchGUI, a common open-source interface for the most frequently used freely available proteomics search and de novo engines that has evolved into a central component in numerous bioinformatics workflows.

  5. Exact analytical solution of irreversible binary dynamics on networks.

    PubMed

    Laurence, Edward; Young, Jean-Gabriel; Melnik, Sergey; Dubé, Louis J

    2018-03-01

    In binary cascade dynamics, the nodes of a graph are in one of two possible states (inactive, active), and nodes in the inactive state make an irreversible transition to the active state as soon as their precursors satisfy a predetermined condition. We introduce a set of recursive equations to compute the probability of reaching any final state, given an initial state, and a specification of the transition probability function of each node. Because the naive recursive approach for solving these equations takes factorial time in the number of nodes, we also introduce an accelerated algorithm, built around a breadth-first search procedure. This algorithm solves the equations as efficiently as possible in exponential time.
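
    A deterministic special case of these dynamics can be sketched with a BFS-style propagation: each inactive node irreversibly activates once enough of its precursors are active. This simplification replaces the paper's probabilistic transition functions with fixed integer thresholds, for illustration only:

```python
from collections import deque

def cascade_final_state(adj, thresholds, seeds):
    """Deterministic threshold cascade: a node activates irreversibly once the
    number of its active precursors reaches its threshold.
    adj: {node: [successor, ...]}; thresholds: {node: int}; seeds: initial set.
    Returns the set of nodes active in the final state."""
    active = set(seeds)
    active_preds = {v: 0 for v in adj}  # count of active precursors per node
    queue = deque(seeds)                # BFS frontier of newly active nodes
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in active:
                continue  # transitions are irreversible; never re-activate
            active_preds[v] += 1
            if active_preds[v] >= thresholds[v]:
                active.add(v)
                queue.append(v)
    return active
```

    Each node is enqueued at most once, so this deterministic case runs in linear time; the exponential cost in the paper comes from tracking probabilities over the full space of configurations.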

  6. Exact analytical solution of irreversible binary dynamics on networks

    NASA Astrophysics Data System (ADS)

    Laurence, Edward; Young, Jean-Gabriel; Melnik, Sergey; Dubé, Louis J.

    2018-03-01

    In binary cascade dynamics, the nodes of a graph are in one of two possible states (inactive, active), and nodes in the inactive state make an irreversible transition to the active state as soon as their precursors satisfy a predetermined condition. We introduce a set of recursive equations to compute the probability of reaching any final state, given an initial state, and a specification of the transition probability function of each node. Because the naive recursive approach for solving these equations takes factorial time in the number of nodes, we also introduce an accelerated algorithm, built around a breadth-first search procedure. This algorithm solves the equations as efficiently as possible in exponential time.

  7. Prevention of Malicious Nodes Communication in MANETs by Using Authorized Tokens

    NASA Astrophysics Data System (ADS)

    Chandrakant, N.; Shenoy, P. Deepa; Venugopal, K. R.; Patnaik, L. M.

    A rapid increase of wireless networks and mobile computing applications has changed the landscape of network security. A MANET is more susceptible to attacks than a wired network. As a result, attacks with malicious intent have been and will be devised to take advantage of these vulnerabilities and to cripple MANET operation. Hence we need to search for new architectures and mechanisms to protect wireless networks and mobile computing applications. In this paper, we examine the nodes that come within the vicinity of the base node and members of the network, and communication is provided to genuine nodes only. The proposed algorithm is found to be an effective algorithm for security in MANETs.

  8. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    PubMed

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
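
    The second procedure of Dual-PSO, treating mobile nodes as particles that converge on the source position, can be sketched with a textbook particle swarm update. The inertia and acceleration coefficients and the `signal` function are assumptions for illustration, not the paper's Dual-PSO:

```python
import random

def pso_locate_source(signal, bounds, n_particles=20, iters=200, seed=1):
    """Minimal PSO sketch: mobile nodes act as particles climbing a pollutant
    signal. signal(x, y) returns measured intensity at a position; the swarm
    seeks its maximum. Coefficients below are textbook values (assumed)."""
    rng = random.Random(seed)
    (xlo, xhi), (ylo, yhi) = bounds
    pos = [(rng.uniform(xlo, xhi), rng.uniform(ylo, yhi))
           for _ in range(n_particles)]
    vel = [(0.0, 0.0)] * n_particles
    pbest = list(pos)                                  # per-particle best
    gbest = max(pos, key=lambda p: signal(*p))         # swarm-wide best
    w, c1, c2 = 0.6, 1.5, 1.5                          # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vx = (w * vel[i][0] + c1 * r1 * (pbest[i][0] - pos[i][0])
                  + c2 * r2 * (gbest[0] - pos[i][0]))
            vy = (w * vel[i][1] + c1 * r1 * (pbest[i][1] - pos[i][1])
                  + c2 * r2 * (gbest[1] - pos[i][1]))
            vel[i] = (vx, vy)
            pos[i] = (pos[i][0] + vx, pos[i][1] + vy)
            if signal(*pos[i]) > signal(*pbest[i]):
                pbest[i] = pos[i]
            if signal(*pos[i]) > signal(*gbest):
                gbest = pos[i]
    return gbest
```

    In the paper's setting each particle's "fitness" would come from the other PSO procedure, which inverts UV-visible absorption spectra into water quality parameters on each node.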

  9. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    PubMed Central

    Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-01-01

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. In addition, this algorithm uses entropy to dynamically recognize the most sensitive parameter during the search. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths. PMID:29462929

  10. Malignant melanoma (non-metastatic): sentinel lymph node biopsy

    PubMed Central

    2016-01-01

    Introduction The incidence of malignant melanoma has increased over the past 25 years in the UK, but death rates have remained fairly constant. The 5-year survival rate ranges from 20% to 95%, depending on disease stage. Risks are greater in white populations and in people with higher numbers of skin naevi. Methods and outcomes We conducted a systematic overview, aiming to answer the following clinical question: What is the evidence for performing a sentinel lymph node biopsy in people with malignant melanoma with clinically uninvolved lymph nodes? We searched: Medline, Embase, The Cochrane Library and other important databases up to October 2014 (BMJ Clinical Evidence overviews are updated periodically; please check our website for the most up-to-date version of this overview). Results At this update, searching of electronic databases retrieved 221 studies. After deduplication and removal of conference abstracts, 99 records were screened for inclusion in the overview. Appraisal of titles and abstracts led to the exclusion of 58 studies and the further review of 41 full publications. Of the 41 full articles evaluated, one systematic review and three RCTs were added at this update. We performed a GRADE evaluation for two PICO combinations. Conclusions In this systematic overview, we evaluated the evidence for performing sentinel lymph node biopsy in people with malignant melanoma with clinically uninvolved lymph nodes. PMID:26788739

  11. Epsilon-Q: An Automated Analyzer Interface for Mass Spectral Library Search and Label-Free Protein Quantification.

    PubMed

    Cho, Jin-Young; Lee, Hyoung-Joo; Jeong, Seul-Ki; Paik, Young-Ki

    2017-12-01

    Mass spectrometry (MS) is a widely used proteome analysis tool in biomedical science. In an MS-based bottom-up proteomic approach to protein identification, sequence database (DB) searching has been routinely used because of its simplicity and convenience. However, searching a sequence DB with multiple variable modification options can increase processing time and false-positive errors in large, complicated MS data sets. Spectral library searching is an alternative solution that avoids the limitations of sequence DB searching and allows the detection of more peptides with high sensitivity. Unfortunately, this technique offers less proteome coverage, limiting the detection of novel and whole peptide sequences in biological samples. To solve these problems, we previously developed the "Combo-Spec Search" method, which manually uses multiple reference and simulated spectral libraries to analyze whole proteomes in a biological sample. In this study, we have developed a new analytical interface tool called "Epsilon-Q" to enhance the functions of both the Combo-Spec Search method and label-free protein quantification. Epsilon-Q automatically performs multiple spectral library searches, class-specific false-discovery rate control, and result integration. It has a user-friendly graphical interface and demonstrates good performance in identifying and quantifying proteins by supporting standard MS data formats and spectrum-to-spectrum matching powered by SpectraST. Furthermore, when the Epsilon-Q interface is combined with the Combo-Spec Search method, called the Epsilon-Q system, it shows a synergistic function, outperforming other sequence DB search engines in identifying and quantifying low-abundance proteins in biological samples. The Epsilon-Q system can be a versatile tool for comparative proteome analysis based on multiple spectral libraries and label-free quantification.
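
    The abstract does not spell out how Epsilon-Q implements class-specific false-discovery rate control. A common approach in spectral library searching is target-decoy FDR estimation applied separately within each class; a minimal sketch under that assumption (the data layout and function names are illustrative, not Epsilon-Q's actual API):

```python
def fdr_filter(matches, threshold=0.01):
    """Target-decoy FDR: walk matches in descending score order, estimating
    FDR = #decoys / #targets at each cutoff, and keep the largest set of
    target matches whose estimated FDR stays under the threshold."""
    ordered = sorted(matches, key=lambda m: m["score"], reverse=True)
    accepted, targets, decoys, best = [], 0, 0, []
    for m in ordered:
        if m["is_decoy"]:
            decoys += 1
        else:
            targets += 1
            accepted.append(m)
        if targets and decoys / targets <= threshold:
            best = accepted[:]
    return best

def class_specific_fdr(matches, threshold=0.01):
    """Control FDR separately within each class (e.g. per spectral library),
    mirroring class-specific control."""
    classes = {}
    for m in matches:
        classes.setdefault(m["cls"], []).append(m)
    return {c: fdr_filter(ms, threshold) for c, ms in classes.items()}

matches = [
    {"cls": "A", "score": 10, "is_decoy": False},
    {"cls": "A", "score": 9,  "is_decoy": False},
    {"cls": "B", "score": 10, "is_decoy": False},
    {"cls": "B", "score": 9,  "is_decoy": True},   # decoy hit cuts off class B
    {"cls": "B", "score": 8,  "is_decoy": False},
]
result = class_specific_fdr(matches)
```

    Controlling each class separately prevents a well-behaved library from masking a high error rate in another.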

  12. Federated Space-Time Query for Earth Science Data Using OpenSearch Conventions

    NASA Astrophysics Data System (ADS)

    Lynnes, C.; Beaumont, B.; Duerr, R. E.; Hua, H.

    2009-12-01

    The past decade has seen a burgeoning of remote sensing and Earth science data providers, as evidenced in the growth of the Earth Science Information Partner (ESIP) federation. At the same time, the need to combine diverse data sets to enable understanding of the Earth as a system has also grown. While the expansion of data providers is in general a boon to such studies, the diversity presents a challenge to finding useful data for a given study. Locating all the data files with aerosol information for a particular volcanic eruption, for example, may involve learning and using several different search tools to execute the requisite space-time queries. To address this issue, the ESIP federation is developing a federated space-time query framework, based on the OpenSearch convention (www.opensearch.org), with Geo and Time extensions. In this framework, data providers publish OpenSearch Description Documents that describe in a machine-readable form how to execute queries against the provider. The novelty of OpenSearch is that the space-time query interface becomes both machine callable and easy enough to integrate into the web browser's search box. This flexibility, together with a simple REST (HTTP GET) interface, should allow a variety of data providers to participate in the federated search framework, from large institutional data centers to individual scientists. The simple interface enables trivial querying of multiple data sources and participation in recursive-like federated searches, all using the same common OpenSearch interface. This simplicity also makes the construction of clients easy, as do existing OpenSearch client libraries in a variety of languages. Moreover, a number of clients and aggregation services already exist, and OpenSearch is already supported by a number of web browsers such as Firefox and Internet Explorer.
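
    A client of such a framework works by filling in the URL template advertised in a provider's OpenSearch Description Document. A minimal sketch of that substitution step, with an illustrative template using the Geo and Time extension parameter names (the endpoint URL is hypothetical):

```python
import re
import urllib.parse

def fill_template(template, params):
    """Substitute OpenSearch {name} and {ns:name?} placeholders with
    URL-encoded values; optional parameters absent from `params` become
    empty strings, per the OpenSearch 1.1 template syntax."""
    def repl(match):
        name = match.group(1).rstrip("?")  # strip the optional-parameter marker
        return urllib.parse.quote(str(params.get(name, "")), safe="")
    return re.sub(r"\{([^{}]+)\}", repl, template)

# Illustrative template a provider might publish in its Description Document.
template = ("https://example.org/search?q={searchTerms}"
            "&bbox={geo:box?}&start={time:start?}&end={time:end?}")
url = fill_template(template, {
    "searchTerms": "aerosol",
    "geo:box": "-180,-90,180,90",
    "time:start": "2009-03-01",
})
```

    Because the template is machine-readable, the same client code can query any provider in the federation without provider-specific logic.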

  13. Use of small stand-alone Internet nodes as a distributed control system

    NASA Astrophysics Data System (ADS)

    Goodwin, Robert W.; Kucera, Michael J.; Shea, Michael F.

    1994-12-01

    For several years, the standard model for accelerator control systems has been workstation consoles connected to VME local stations by a Local Area Network with analog and digital data being accessed via a field bus to custom I/O interface electronics. Commercially available hardware has now made it possible to implement a small stand-alone data acquisition station that combines the LAN connection, the computer, and the analog and digital I/O interface on a single board. This eliminates the complexity of a field bus and the associated proprietary I/O hardware. A minimum control system is one data acquisition station and a Macintosh or workstation console, both connected to the network; larger systems have more consoles and nodes. An implementation of this architecture is described along with performance and operational experience.

  14. A FPGA embedded web server for remote monitoring and control of smart sensors networks.

    PubMed

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2013-12-27

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose, configurable RISC processor embedded in a Cyclone FPGA. The processor runs the μCLinux operating system to support a Boa web server serving dynamic pages via the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other to control the network. This protocol is widely used for connecting smart sensors, actuators, and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can easily be replaced by any other because of the inherent characteristics of FPGA-based technology.
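
    The dynamic pages served through CGI are regenerated per request from the current sensor values. A minimal sketch of that rendering step (the sensor names and the stand-in read function are hypothetical, not from the article; the actual CGI programs run on the NIOS II under μCLinux):

```python
def read_sensors():
    """Hypothetical stand-in for the master node polling its TTP/A network."""
    return {"temperature_C": 21.4, "humidity_pct": 48.0}

def render_page(readings):
    """Build the response a CGI handler would emit: an HTTP header block
    followed by HTML regenerated from the latest readings."""
    rows = "".join(
        f"<tr><td>{name}</td><td>{value}</td></tr>"
        for name, value in sorted(readings.items())
    )
    return ("Content-Type: text/html\r\n\r\n"
            "<html><body><table>" + rows + "</table></body></html>")

page = render_page(read_sensors())
```

    This per-request regeneration is what makes the page "dynamic": the browser always sees the network's current state without any client-side logic.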

  15. A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks

    PubMed Central

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2014-01-01

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose, configurable RISC processor embedded in a Cyclone FPGA. The processor runs the μCLinux operating system to support a Boa web server serving dynamic pages via the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other to control the network. This protocol is widely used for connecting smart sensors, actuators, and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive, domotic, etc., although it can easily be replaced by any other because of the inherent characteristics of FPGA-based technology. PMID:24379047

  16. Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)

    PubMed Central

    Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R

    2005-01-01

    Background BLAST is one of the most common and useful tools for genetic research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy-to-use, fault-tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LAN). W.ND-BLAST provides intuitive Graphic User Interfaces (GUI) for BLAST database creation, BLAST execution, BLAST output evaluation, and BLAST result exportation. This software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job that took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower-performance-class machines. Finally, there are comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components, which provide comprehensive exportation of BLAST hits to text files, annotated fasta files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses.
The install package for W.ND-BLAST is freely downloadable. With registration the software is free; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992
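
    The timings quoted above imply the overall speedup and per-node efficiency directly (computed here from the abstract's figures; the efficiency falls below the ~100% reported for same-class machines because the 17 nodes included slower hardware):

```python
serial_min = 662.68    # one average machine, from the abstract
parallel_min = 44.97   # 17 nodes of mixed performance classes
nodes = 17

speedup = serial_min / parallel_min   # how many times faster the cluster ran
efficiency = speedup / nodes          # average fraction of a node's ideal contribution
```

    With homogeneous, same-class machines the paper reports efficiency close to 100%; heterogeneity is the natural explanation for the gap.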

  17. A Search Strategy of Level-Based Flooding for the Internet of Things

    PubMed Central

    Qiu, Tie; Ding, Yanhong; Xia, Feng; Ma, Honglian

    2012-01-01

    This paper deals with the query problem in the Internet of Things (IoT). Flooding is an important query strategy. However, original flooding is prone to cause heavy network loads. To address this problem, we propose a variant of flooding, called Level-Based Flooding (LBF). With LBF, the whole network is divided into several levels according to the distances (i.e., hops) between the sensor nodes and the sink node. The sink node knows the level information of each node. Query packets are broadcast in the network according to the levels of nodes. Upon receiving a query packet, sensor nodes decide how to process it according to the percentage of neighbors that have processed it. When the target node receives the query packet, it sends its data back to the sink node via random walk. We show by extensive simulations that the performance of LBF in terms of cost and latency is much better than that of original flooding, and LBF can be used in IoT of different scales. PMID:23112594
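
    The level assignment at the heart of LBF is a hop-count labeling from the sink, which can be sketched as a breadth-first traversal (the graph encoding below is illustrative; in the IoT setting the adjacency comes from radio connectivity):

```python
from collections import deque

def assign_levels(adjacency, sink):
    """Label each node with its hop distance (level) from the sink via BFS,
    as LBF does before level-guided query broadcasting."""
    levels = {sink: 0}
    queue = deque([sink])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency[node]:
            if neighbor not in levels:
                levels[neighbor] = levels[node] + 1
                queue.append(neighbor)
    return levels

# Sink S with two rings of sensor nodes around it.
adj = {"S": ["a", "b"], "a": ["S", "c"], "b": ["S", "c"], "c": ["a", "b"]}
levels = assign_levels(adj, "S")
```

    Once every node knows its level, a query packet can be forwarded outward level by level instead of being flooded blindly, which is where LBF's cost savings come from.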

  18. ADS Bumblebee comes of age

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Kurtz, Michael J.; Henneken, Edwin; Grant, Carolyn S.; Thompson, Donna M.; Chyla, Roman; McDonald, Steven; Shaulis, Taylor J.; Blanco-Cuaresma, Sergi; Shapurian, Golnaz; Hostetler, Timothy W.; Templeton, Matthew R.; Lockhart, Kelly E.

    2018-01-01

    The ADS Team has been working on a new system architecture and user interface named “ADS Bumblebee” since 2015. The new system presents many advantages over the traditional ADS interface and search engine (“ADS Classic”). A new, state-of-the-art search engine features a number of new capabilities such as full-text search, advanced citation queries, filtering of results, and scalable analytics for any set of search results. Its services are built on a cloud computing platform which can be easily scaled to match user demand. The Bumblebee user interface is a rich JavaScript application which leverages the features of the search engine and integrates a number of additional visualizations, such as co-author and co-citation networks, which provide a hierarchical view of research groups and research topics, respectively. Displays of paper analytics provide views of the basic article metrics (citations, reads, and age). All visualizations are interactive and provide ways to further refine search results. This new search system, which has been in beta for the past three years, has now matured to the point that it provides feature and content parity with ADS Classic, and has become the recommended way to access ADS content and services. Following a successful transition to Bumblebee, the use of ADS Classic will be discouraged starting in 2018 and phased out in 2019. You can access our new interface at https://ui.adsabs.harvard.edu

  19. A regenerative microchannel neural interface for recording from and stimulating peripheral axons in vivo

    NASA Astrophysics Data System (ADS)

    FitzGerald, James J.; Lago, Natalia; Benmerah, Samia; Serra, Jordi; Watling, Christopher P.; Cameron, Ruth E.; Tarte, Edward; Lacour, Stéphanie P.; McMahon, Stephen B.; Fawcett, James W.

    2012-02-01

    Neural interfaces are implanted devices that couple the nervous system to electronic circuitry. They are intended for long term use to control assistive technologies such as muscle stimulators or prosthetics that compensate for loss of function due to injury. Here we present a novel design of interface for peripheral nerves. Recording from axons is complicated by the small size of extracellular potentials and the concentration of current flow at nodes of Ranvier. Confining axons to microchannels of ˜100 µm diameter produces amplified potentials that are independent of node position. After implantation of microchannel arrays into rat sciatic nerve, axons regenerated through the channels forming ‘mini-fascicles’, each typically containing ˜100 myelinated fibres and one or more blood vessels. Regenerated motor axons reconnected to distal muscles, as demonstrated by the recovery of an electromyogram and partial prevention of muscle atrophy. Efferent motor potentials and afferent signals evoked by muscle stretch or cutaneous stimulation were easily recorded from the mini-fascicles and were in the range of 35-170 µV. Individual motor units in distal musculature were activated from channels using stimulus currents in the microampere range. Microchannel interfaces are a potential solution for applications such as prosthetic limb control or enhancing recovery after nerve injury.

  20. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.

    2014-12-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and helps identify discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via shell or web system. Therefore, plugged-in tools automatically gain transparency and reproducibility. 
Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced by other users, saving CPU time, I/O, and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: guest password: miklip
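
    The configuration-matching reuse described above amounts to caching analysis results under a canonical hash of the tool configuration, so that two submissions with the same settings (regardless of key ordering) map to the same stored result. A minimal sketch (names are illustrative, not the system's actual API):

```python
import hashlib
import json

_result_cache = {}

def config_key(tool, config):
    """Canonical hash of a tool configuration: serializing with sorted keys
    makes identical configurations hash identically."""
    blob = json.dumps({"tool": tool, "config": config}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def run_cached(tool, config, run):
    """Reuse a stored result when the same tool was already run with a
    matching configuration; otherwise run the tool and store the result."""
    key = config_key(tool, config)
    if key not in _result_cache:
        _result_cache[key] = run(tool, config)
    return _result_cache[key]

calls = []
def expensive_run(tool, config):
    calls.append(tool)           # track how often the real tool actually runs
    return {"status": "ok"}

first = run_cached("bias", {"model": "MPI-ESM", "period": [1960, 2000]}, expensive_run)
second = run_cached("bias", {"period": [1960, 2000], "model": "MPI-ESM"}, expensive_run)
```

    The second call hits the cache even though the dictionary keys arrive in a different order, which is exactly the CPU-, I/O-, and disk-saving behaviour the abstract describes.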

  1. A Hybrid Evaluation System Framework (Shell & Web) with Standardized Access to Climate Model Data and Verification Tools for a Clear Climate Science Infrastructure on Big Data High Performance Computers

    NASA Astrophysics Data System (ADS)

    Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Ulbrich, Uwe; Cubasch, Ulrich

    2015-04-01

    The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, develops a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized by the international CMOR standard using the meta information of the self-describing model, reanalysis, and observational data sets. Apache Solr is used for indexing the different data projects into one common search environment. This meta data system, with its advanced but easy-to-handle search tool, supports users, developers, and their tools in retrieving the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools with the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without any need to understand the different scripting languages. Facilitating the provision and usage of tools and climate data automatically increases the number of scientists working with the data sets and helps identify discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via shell or web system. Therefore, plugged-in tools automatically gain transparency and reproducibility. 
Furthermore, when configurations match while starting an evaluation tool, the system suggests using results already produced by other users, saving CPU time, I/O, and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of a Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: click on "Guest"

  2. Implementation of Web Processing Services (WPS) over IPSL Earth System Grid Federation (ESGF) node

    NASA Astrophysics Data System (ADS)

    Kadygrov, Nikolay; Denvil, Sebastien; Carenton, Nicolas; Levavasseur, Guillaume; Hempelmann, Nils; Ehbrecht, Carsten

    2016-04-01

    The Earth System Grid Federation (ESGF) aims to provide access to climate data for the international climate community. ESGF is a system of distributed and federated nodes that dynamically interact with each other. An ESGF user may search and download climate data, geographically distributed over the world, from one common web interface and through a standardized API. With the continuous development of climate models and the beginning of the sixth phase of the Coupled Model Intercomparison Project (CMIP6), the amount of data available from ESGF will continuously increase during the next 5 years. IPSL holds a replica of the different global and regional climate model outputs, observations, and reanalysis data (CMIP5, CORDEX, obs4MIPs, etc.) that are available on the IPSL ESGF node. In order to let scientists perform analysis of the models without downloading vast amounts of data, Web Processing Services (WPS) were installed at the IPSL compute node. The work is part of the CONVERGENCE project funded by the French National Research Agency (ANR). The PyWPS implementation of the Web Processing Service standard from the Open Geospatial Consortium (OGC), in the framework of the birdhouse software, is used. The processes can be run by a user remotely through a web-based WPS client or by using a command-line tool. All the calculations are performed on the server side, close to the data. If the models/observations are not available at IPSL, they will be downloaded and cached by a WPS process from the ESGF network using the synda tool. The outputs of the WPS processes are available for download as plots, tar archives, or NetCDF files.
We present the architecture of WPS at IPSL along with the processes for evaluation of model performance, on-site diagnostics, and post-analysis processing of the models' output, e.g.: regridding/interpolation/aggregation; ocgis (OpenClimateGIS)-based polygon subsetting of the data; average seasonal cycle, multimodel mean, and multimodel mean bias; calculation of climate indices with the icclim library (CERFACS); and atmospheric modes of variability. In order to evaluate the performance of any new model once it becomes available in ESGF, we implement a WPS with several model diagnostics and performance metrics calculated using ESMValTool (Eyring et al., GMDD 2015). As a further step, we are developing new WPS processes and core functions to be implemented at the IPSL ESGF compute node following the scientific community's needs.

  3. A wireless modular multi-modal multi-node patch platform for robust biosignal monitoring.

    PubMed

    Pantelopoulos, Alexandros; Saldivar, Enrique; Roham, Masoud

    2011-01-01

    In this paper a wireless, modular, multi-modal, multi-node patch platform is described. The platform comprises a low-cost, semi-disposable patch design aimed at unobtrusive ambulatory monitoring of multiple physiological parameters. Owing to its modular design, it can be interfaced with various low-power RF communication and data storage technologies, while the fusion of multi-modal and multi-node data facilitates measurement of several biosignals from multiple on-body locations for robust feature extraction. Preliminary results of the patch platform are presented, illustrating the capability to extract respiration rate from three different independent metrics, which, combined together, can give a more robust estimate of the actual respiratory rate.
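
    The paper does not state the fusion rule used to combine the three respiration-rate metrics; one simple, robust choice for three independent estimates is the median, which tolerates a single outlying metric (the numbers below are made up for illustration):

```python
def fuse_respiration_rate(estimates):
    """Fuse independent respiration-rate estimates with the median: with three
    metrics, one outlier cannot pull the fused value away from the other two."""
    ordered = sorted(estimates)
    return ordered[len(ordered) // 2]

# Hypothetical per-metric estimates in breaths/min; the third metric misfires.
rate = fuse_respiration_rate([15.8, 16.2, 24.0])
```

    A mean would be dragged toward the outlier here; the median simply ignores it, which is the robustness the abstract is after.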

  4. PcapDB: Search Optimized Packet Capture, Version 0.1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrell, Paul; Steinfadt, Shannon

    PcapDB is a packet capture system designed to optimize the captured data for fast search in the typical (network incident response) use case. The technology involved in this software has been submitted via the IDEAS system and has been filed as a provisional patent. It includes the following primary components: capture: The capture component utilizes existing capture libraries to retrieve packets from network interfaces. Once retrieved, the packets are passed to additional threads for sorting into flows and indexing. The sorted flows and indexes are passed to other threads so that they can be written to disk. These components are written in the C programming language. search: The search components provide a means to find relevant flows and the associated packets. A search query is parsed and represented as a search tree. Various search commands, written in C, are then used to resolve this tree into a set of search results. The tree generation and search execution management components are written in Python. interface: The PcapDB web interface is written in Python on the Django framework. It provides a series of pages, APIs, and asynchronous tasks that allow the user to manage the capture system, perform searches, and retrieve results. Web page components are written in HTML, CSS, and JavaScript.

  5. Graph Unification and Tangram Hypothesis Explanation Representation (GATHER) and System and Component Modeling Framework (SCMF)

    DTIC Science & Technology

    2008-08-01

    services, DIDS and DMS, are deployable on the TanGrid system and are accessible via two APIs, a Java client and a servlet based interface. Additionally...but required the user to instantiate an IGraph object with several Java Maps containing the nodes, node attributes, edge types, and the connections...restrictions imposed by the bulk ingest process. Finally, once the bulk ingest process was available in the GraphUnification Java Archives (JAR), DC was

  6. Methods of visualizing graphs

    DOEpatents

    Wong, Pak C.; Mackey, Patrick S.; Perrine, Kenneth A.; Foote, Harlan P.; Thomas, James J.

    2008-12-23

    Methods for visualizing a graph by automatically drawing elements of the graph as labels are disclosed. In one embodiment, the method comprises receiving node information and edge information from an input device and/or communication interface, constructing a graph layout based at least in part on that information, wherein the edges are automatically drawn as labels, and displaying the graph on a display device according to the graph layout. In some embodiments, the nodes are automatically drawn as labels instead of, or in addition to, the label-edges.

  7. Starry sky sign: A prevalent sonographic finding in mediastinal tuberculous lymph nodes.

    PubMed

    Alici, Ibrahim Onur; Demirci, Nilgün Yilmaz; Yilmaz, Aydin; Karakaya, Jale; Erdogan, Yurdanur

    2015-01-01

    We report a prevalent finding in tuberculous lymphadenitis (TL): the starry sky sign, hyperechoic foci without acoustic shadows over a hypoechoic background. We retrospectively searched the database for a possible relationship of the starry sky sign with a specific diagnosis, and also for the prevalence and accuracy of the finding. The starry sky sign was found in 16 of 31 tuberculous lymph nodes, while none of the other 1,015 lymph nodes exhibited this finding, giving a sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of 51.6%, 100%, 100%, 98.5%, and 98.5%, respectively. Bacteriologic and histologic findings are the gold standard in the diagnosis of tuberculosis, but this finding may guide the bronchoscopist in choosing the more pathologic node within a station and increase the diagnostic yield, as it may relate to actively dividing mycobacteria.
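
    The reported diagnostic metrics follow directly from the counts in the abstract (16 of 31 tuberculous nodes showed the sign; none of the 1,015 other nodes did):

```python
tp, fn = 16, 31 - 16   # tuberculous nodes with / without the starry sky sign
fp, tn = 0, 1015       # non-tuberculous nodes never showed the sign

sensitivity = tp / (tp + fn)                 # 16/31      ≈ 0.516
specificity = tn / (tn + fp)                 # 1015/1015  = 1.0
ppv = tp / (tp + fp)                         # 16/16      = 1.0
npv = tn / (tn + fn)                         # 1015/1030  ≈ 0.985
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 1031/1046  ≈ 0.986
```

    With zero false positives, specificity and PPV are exactly 1; the modest sensitivity reflects that only about half of the tuberculous nodes exhibit the sign.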

  8. Going End to End to Deliver High-Speed Data

    NASA Technical Reports Server (NTRS)

    2005-01-01

    By the end of the 1990s, the optical fiber "backbone" of the telecommunication and data-communication networks had evolved from megabits-per-second transmission rates to gigabits-per-second transmission rates. Despite this boom in bandwidth, however, users at the end nodes were still not being reached on a consistent basis. (An end node is any device that does not behave like a router or a managed hub or switch. Examples of end node objects are computers, printers, serial interface processor phones, and unmanaged hubs and switches.) The primary factor that prevents bandwidth from reaching the end nodes is the complex local network topology that exists between the optical backbone and the end nodes. This topology consists of several layers of routing and switch equipment, which introduce potential congestion points and network latency. By breaking down the complex network topology, a true optical connection can be achieved. Access Optical Networks, Inc., is making this connection a reality with guidance from NASA's nondestructive evaluation experts.

  9. FLIP for FLAG model visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wooten, Hasani Omar

    A graphical user interface has been developed for FLAG users. FLIP (FLAG Input deck Parser) provides users with an organized view of FLAG models and a means for efficiently and easily navigating and editing nodes, parameters, and variables.

  10. Spatial Search by Quantum Walk is Optimal for Almost all Graphs.

    PubMed

    Chakraborty, Shantanav; Novo, Leonardo; Ambainis, Andris; Omar, Yasser

    2016-03-11

    The problem of finding a marked node in a graph can be solved by the spatial search algorithm based on continuous-time quantum walks (CTQW). However, this algorithm is known to run in optimal time only for a handful of graphs. In this work, we prove that for Erdős-Rényi random graphs, i.e., graphs of n vertices where each edge exists with probability p, search by CTQW is almost surely optimal as long as p≥log^{3/2}(n)/n. Consequently, we show that quantum spatial search is in fact optimal for almost all graphs, meaning that the fraction of graphs of n vertices for which this optimality holds tends to one in the asymptotic limit. We obtain this result by proving that search is optimal on graphs where the ratio between the second largest and the largest eigenvalue is bounded by a constant smaller than 1. Finally, we show that we can extend our results on search to establish high-fidelity quantum communication between two arbitrary nodes of a random network of interacting qubits, namely, to perform quantum state transfer, as well as entanglement generation. Our work shows that quantum information tasks typically designed for structured systems retain performance in very disordered structures.
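
    The optimality statement can be illustrated numerically in the densest case, the complete graph K_n (an Erdős-Rényi graph with p = 1), where optimal-time search is a textbook result. The sketch below assumes the standard search Hamiltonian H = γA + |w⟩⟨w| with hopping rate γ = 1/n, and evolves the uniform superposition for time (π/2)√n:

```python
import numpy as np

# Numerical sketch of CTQW search on the complete graph K_n: with the search
# Hamiltonian H = gamma*A + |w><w| and hopping rate gamma = 1/n, the walk
# rotates the uniform superposition onto the marked node in time ~(pi/2)*sqrt(n).
n, marked = 64, 0
A = np.ones((n, n)) - np.eye(n)           # adjacency matrix of K_n
oracle = np.zeros((n, n))
oracle[marked, marked] = 1.0
H = A / n + oracle                        # search Hamiltonian, gamma = 1/n

evals, evecs = np.linalg.eigh(H)          # H is real symmetric
psi0 = np.ones(n) / np.sqrt(n)            # start in the uniform superposition
t = (np.pi / 2) * np.sqrt(n)
psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
p_success = abs(psi_t[marked]) ** 2
print(round(float(p_success), 3))         # → 1.0
```

    On K_n the dynamics stay in the two-dimensional span of the uniform state and the marked state, so the success probability reaches essentially 1 at exactly this time; on sparser random graphs satisfying the paper's condition the behaviour is approximately the same.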

  11. Transfer Learning to Accelerate Interface Structure Searches

    NASA Astrophysics Data System (ADS)

    Oda, Hiromi; Kiyohara, Shin; Tsuda, Koji; Mizoguchi, Teruyasu

    2017-12-01

    Interfaces have atomic structures that are significantly different from those in the bulk, and play crucial roles in material properties. The interface structures that give rise to these properties have been investigated extensively. However, determining even one interface structure requires searching for the stable configuration among many thousands of candidates. Here, a powerful combination of machine learning techniques based on kriging and transfer learning (TL) is proposed as a method for unveiling interface structures. Using the kriging+TL method, thirty-three grain boundaries were systematically determined from 1,650,660 candidates in only 462 calculations, an increase in efficiency over conventional all-candidate calculation methods by a factor of approximately 3,600.
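
    The kriging component can be sketched in a few lines: a Gaussian-process surrogate is fit to the candidates evaluated so far, and a lower-confidence-bound rule picks the next candidate, so only a small fraction of the candidate pool is ever computed. Everything below (the one-dimensional "configuration" grid, the toy energy function, the kernel length scale) is invented for illustration and is not the authors' code:

```python
import numpy as np

# Toy kriging-driven structure search: a GP surrogate over a 1-D grid of
# candidate "configurations" proposes the next candidate via a lower
# confidence bound, instead of exhaustively scoring all 200 candidates.
rng = np.random.default_rng(0)
xs = np.linspace(0.0, 10.0, 200)                 # candidate configurations
energy = lambda x: np.sin(x) + 0.1 * (x - 5.0) ** 2   # invented energy model

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

idx = list(rng.choice(len(xs), size=3, replace=False))   # seed evaluations
for _ in range(12):                                      # search budget
    X, y = xs[idx], energy(xs[idx])
    K = rbf(X, X) + 1e-8 * np.eye(len(X))
    Ks = rbf(xs, X)
    mu = Ks @ np.linalg.solve(K, y)                      # posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    lcb = mu - 2.0 * np.sqrt(np.clip(var, 0.0, None))    # optimism bonus
    lcb[np.array(idx)] = np.inf                          # no repeat picks
    idx.append(int(np.argmin(lcb)))

evaluated = np.array(idx)
best = xs[evaluated[np.argmin(energy(xs[evaluated]))]]
```

    With a budget of 15 evaluations the surrogate typically lands in the low-energy basin of the 200-point grid; the paper's kriging+TL scheme additionally warm-starts the surrogate from calculations on related interfaces.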

  12. Traffic Generator (TrafficGen) Version 1.4.2: Users Guide

    DTIC Science & Technology

    2016-06-01

    events, the user has to enter them manually. We will research and implement a way to better define and organize the multicast addresses so they can be...the network with Transmission Control Protocol and User Datagram Protocol Internet Protocol traffic. Each node generating network traffic in an...TrafficGen Graphical User Interface (GUI) 3 3.1 Anatomy of the User Interface 3 3.2 Scenario Configuration and MGEN Files 4 4. Working with

  13. A Revised Interface for the ARL Topodef Mobility Design Tool

    DTIC Science & Technology

    2012-04-01

    designed paths as though moving down a conveyor belt. Giving paths an existence independent of the nodes that travel along them not only makes their...A Revised Interface for the ARL Topodef Mobility Design Tool by Andrew J. Toth and Michael Christensen ARL-TR-5980 April 2012...Disclaimers The findings in this report are not to be construed as an official Department of the Army position unless so designated by other

  14. Collaborative funding to facilitate airport ground access [research brief].

    DOT National Transportation Integrated Search

    2012-06-01

    Airports are major interchange nodes in the passenger and freight transportation system. Here, local and regional transportation systems interface with those for national and international air travel and air freight. However, funding projects to impr...

  15. Expandable pallet for space station interface attachments

    NASA Technical Reports Server (NTRS)

    Wesselski, Clarence J. (Inventor)

    1988-01-01

    Described is a foldable expandable pallet for Space Station interface attachments with a basic square configuration. Each pallet consists of a series of struts joined together by node point fittings to make a rigid structure. The struts have hinge fittings which are spring loaded to permit collapse of the module for stowage, transport to a Space Station in the payload bay of the Space Shuttle, and deployment on orbit. Dimensions of the pallet are selected to provide convenient, closely spaced attachment points between the node points of the relatively widely spaced trusses of a Space Station platform. A pallet is attached to a strut at four points: one close-fitting hole, two oversize holes, and a slot to allow for thermal expansion/contraction and for manufacturing tolerances. Applications of the pallet include its use in rotary or angular joints; servicing of splints; with gridded plates; as instrument mounting bases; and as a roadbed for a Mobile Service Center (MSC).

  16. CAPRI (Computational Analysis PRogramming Interface): A Solid Modeling Based Infra-Structure for Engineering Analysis and Design Simulations

    NASA Technical Reports Server (NTRS)

    Haimes, Robert; Follen, Gregory J.

    1998-01-01

    CAPRI is a CAD-vendor neutral application programming interface designed for the construction of analysis and design systems. By allowing access to the geometry from within all modules (grid generators, solvers and post-processors) such tasks as meshing on the actual surfaces, node enrichment by solvers and defining which mesh faces are boundaries (for the solver and visualization system) become simpler. The overall reliance on file 'standards' is minimized. This 'Geometry Centric' approach makes multi-physics (multi-disciplinary) analysis codes much easier to build. By using the shared (coupled) surface as the foundation, CAPRI provides a single call to interpolate grid-node based data from the surface discretization in one volume to another. Finally, design systems are possible where the results can be brought back into the CAD system (and therefore manufactured) because all geometry construction and modification are performed using the CAD system's geometry kernel.

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  18. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  19. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-02

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.
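
    The key idea, completing a FENCE "with no FENCE accounting" by relying on deterministic ordering, can be sketched outside PAMI: if SENDs between two endpoints complete in initiation order, a fence marker enqueued on the same channel proves, when it surfaces, that every earlier SEND is done. The Endpoint class below is an invented stand-in, not the patented implementation:

```python
import queue
import threading

# Invented stand-in for ordering-based fencing: SENDs travel through one
# FIFO channel and therefore complete in initiation order. A FENCE needs no
# per-send accounting; it is enqueued as a marker after the sends, and the
# marker surfacing proves every earlier SEND has completed.
class Endpoint:
    def __init__(self):
        self.chan = queue.Queue()    # stand-in for the shared-memory segment
        self.delivered = []
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            kind, payload = self.chan.get()
            if kind == "SEND":
                self.delivered.append(payload)
            else:                    # "FENCE": payload is a threading.Event
                payload.set()

    def send(self, data):
        self.chan.put(("SEND", data))

    def fence(self):
        done = threading.Event()
        self.chan.put(("FENCE", done))
        done.wait()                  # returns only after all prior SENDs

ep = Endpoint()
for i in range(5):
    ep.send(i)
ep.fence()
print(ep.delivered)                  # → [0, 1, 2, 3, 4]
```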

  20. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-09

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  1. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-08-11

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  2. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-30

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  3. Introducing ADS Labs

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Henneken, E.; Grant, C. S.; Kurtz, M. J.; Di Milia, G.; Luker, J.; Thompson, D. M.; Bohlen, E.; Murray, S. S.

    2011-05-01

    ADS Labs is a platform that ADS is introducing in order to test and receive feedback from the community on new technologies and prototype services. Currently, ADS Labs features a new interface for abstract searches, faceted filtering of results, visualization of co-authorship networks, article-level recommendations, and a full-text search service. The streamlined abstract search interface provides a simple, one-box search with options for ranking results based on paper relevancy, freshness, number of citations, and downloads. In addition, it provides advanced rankings based on collaborative filtering techniques. The faceted filtering interface allows users to narrow search results based on a particular property or set of properties ("facets"), allowing users to manage large lists and explore the relationship between them. For any set or sub-set of records, the co-authorship network can be visualized in an interactive way, offering a view of the distribution of contributors and their inter-relationships. This provides an immediate way to detect groups and collaborations involved in a particular research field. For a majority of papers in Astronomy, our new interface will provide a list of related articles of potential interest. The recommendations are based on a number of factors, including text similarity, citations, and co-readership information. The new full-text search interface allows users to find all instances of particular words or phrases in the body of the articles in our full-text archive. This includes all of the scanned literature in ADS as well as a select portion of the current astronomical literature, including ApJ, ApJS, AJ, MNRAS, PASP, A&A, and soon additional content from Springer journals. Full-text search results include a list of the matching papers as well as a list of "snippets" of text highlighting the context in which the search terms were found. ADS Labs is available at http://adslabs.org

  4. Method and apparatus for eliminating unsuccessful tries in a search tree

    NASA Technical Reports Server (NTRS)

    Peterson, John C. (Inventor); Chow, Edward (Inventor); Madan, Herb S. (Inventor)

    1991-01-01

    A circuit switching system in an M-ary, n-cube connected network completes a best-first path from an originating node to a destination node by latching valid legs of the path as the path is being sought out. Each network node is provided with a routing hyperswitch sub-network (HSN) connected between that node and bidirectional high-capacity communication channels of the n-cube network. The sub-networks are all controlled by routing algorithms which respond to message identification headings (headers) on messages to be routed along one or more routing legs. The header includes information embedded therein which is interpreted by each sub-network to route and historically update the header. A logic circuit, available at every node, implements the algorithm and automatically forwards or back-tracks the header in the network legs of various paths until a completed path is latched.
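
    The forward/back-track behaviour described above can be sketched as a best-first depth-first search on a binary n-cube (the M-ary case generalizes). Node labels are integers whose bits are coordinates, so neighbours differ in exactly one bit; the `busy` set stands in for legs that fail to latch. Names and details are illustrative, not the patented logic:

```python
# Sketch of header-driven best-first routing with back-tracking on a binary
# n-cube. The path list plays the role of the latched legs recorded in the
# header; `busy` models links that cannot be latched.
def route(n, src, dst, busy=frozenset()):
    """Return a latched path src -> dst avoiding busy links, else None."""
    def legs(node):
        # Best-first: try legs that most reduce the Hamming distance to dst.
        return sorted((node ^ (1 << d) for d in range(n)),
                      key=lambda nxt: bin(nxt ^ dst).count("1"))

    path, seen = [src], {src}

    def dfs(node):
        if node == dst:
            return True
        for nxt in legs(node):
            if nxt in seen or frozenset((node, nxt)) in busy:
                continue
            path.append(nxt)         # latch this leg
            seen.add(nxt)
            if dfs(nxt):
                return True
            path.pop()               # back-track: release the leg
        return False

    return path if dfs(src) else None

print(route(3, 0b000, 0b111))        # → [0, 1, 3, 7]
```

    When every leg out of the source is busy, the search backs all the way out and reports failure, mirroring the header being back-tracked to the originating node.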

  5. Development of user-centered interfaces to search the knowledge resources of the Virginia Henderson International Nursing Library.

    PubMed

    Jones, Josette; Harris, Marcelline; Bagley-Thompson, Cheryl; Root, Jane

    2003-01-01

    This poster describes the development of user-centered interfaces to extend the functionality of the Virginia Henderson International Nursing Library (VHINL) from a library to a web-based portal to nursing knowledge resources. The existing knowledge structure and computational models are revised and made complementary. Nurses' search behavior is captured and analyzed, and the resulting search models are mapped to the revised knowledge structure and computational model.

  6. Self-powered wireless sensor networks for telemedicine applications

    NASA Astrophysics Data System (ADS)

    Polk, Todd William

    Technology advances in wireless sensor networks have made it possible for these tiny systems to enter the realm of ubiquitous or pervasive computing which has been forecast for several years. These nodes, or motes as they are known, typically run off of battery power and when used sparingly can operate in excess of one year. When requirements necessitate higher usage, battery monitoring and replacement becomes a major issue. Large systems can quickly become cost prohibitive. To combat this issue, researchers have looked to energy harvesting to power these motes. However, this research has mainly centered on outdoor solar harvesting to take advantage of higher energy levels provided by the sun. Indoor harvesting has been presented in the past as not feasible. In this dissertation, we present a system that utilizes energy harvested from overhead fluorescent lights to power the infrastructure (routing) nodes of an indoor telemedicine based wireless network. The limitations of indoor harvesting are exploited and leveraged through creative hardware design. A unique message routing protocol has been developed to control these routing nodes and allow continual operation. Standard medical devices have been interfaced to the system to allow wireless transmission of patient data to a central collection point where the data is organized, stored and presented to the user via a graphical user interface (GUI). The range of the system has been extended by interfacing a cellular modem to the system to allow two-way communication between the GUI and a remote healthcare provider. Extensive physical testing has been done to determine the robustness of the system, and the boundary conditions for extremely large networks were tested via simulation.

  7. Semantic Visualization of Wireless Sensor Networks for Elderly Monitoring

    NASA Astrophysics Data System (ADS)

    Stocklöw, Carsten; Kamieth, Felix

    In the area of Ambient Intelligence, Wireless Sensor Networks are commonly used for user monitoring purposes like health monitoring and user localization. Existing work on visualization of wireless sensor networks focuses mainly on displaying individual nodes and logical, graph-based topologies; this way, the relation to the real-world deployment is lost. This paper presents a novel approach for visualization of wireless sensor networks and interaction with complex services on the nodes. The environment is realized as a 3D model, and multiple nodes worn by a single individual are grouped together to provide an intuitive interface for end users. We describe application examples and show that our approach allows easier access to network information and functionality by comparing it with existing solutions.

  8. KSC-04PD-0148

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. Astronaut Tim Kopra (second from right) talks with workers in the Space Station Processing Facility about the Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and an operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

  9. Designing Search: Effective Search Interfaces for Academic Library Web Sites

    ERIC Educational Resources Information Center

    Teague-Rector, Susan; Ghaphery, Jimmy

    2008-01-01

    Academic libraries customize, support, and provide access to myriad information systems, each with complex graphical user interfaces. The number of possible information entry points on an academic library Web site is both daunting to the end-user and consistently challenging to library Web site designers. Faced with the challenges inherent in…

  10. E-MSD: an integrated data resource for bioinformatics.

    PubMed

    Golovin, A; Oldfield, T J; Tate, J G; Velankar, S; Barton, G J; Boutselakis, H; Dimitropoulos, D; Fillon, J; Hussain, A; Ionides, J M C; John, M; Keller, P A; Krissinel, E; McNeil, P; Naim, A; Newman, R; Pajon, A; Pineda, J; Rachedi, A; Copeland, J; Sitnov, A; Sobhany, S; Suarez-Uruena, A; Swaminathan, G J; Tagari, M; Tromm, S; Vranken, W; Henrick, K

    2004-01-01

    The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the Protein Data Bank (PDB) and to work towards the integration of various bioinformatics data resources. We have implemented a simple form-based interface that allows users to query the MSD directly. The MSD 'atlas pages' show all of the information in the MSD for a particular PDB entry. The group has designed new search interfaces aimed at specific areas of interest, such as the environment of ligands and the secondary structures of proteins. We have also implemented a novel search interface that begins to integrate separate MSD search services in a single graphical tool. We have worked closely with collaborators to build a new visualization tool that can present both structure and sequence data in a unified interface, and this data viewer is now used throughout the MSD services for the visualization and presentation of search results. Examples showcasing the functionality and power of these tools are available from tutorial webpages (http://www.ebi.ac.uk/msd-srv/docs/roadshow_tutorial/).

  11. E-MSD: an integrated data resource for bioinformatics

    PubMed Central

    Golovin, A.; Oldfield, T. J.; Tate, J. G.; Velankar, S.; Barton, G. J.; Boutselakis, H.; Dimitropoulos, D.; Fillon, J.; Hussain, A.; Ionides, J. M. C.; John, M.; Keller, P. A.; Krissinel, E.; McNeil, P.; Naim, A.; Newman, R.; Pajon, A.; Pineda, J.; Rachedi, A.; Copeland, J.; Sitnov, A.; Sobhany, S.; Suarez-Uruena, A.; Swaminathan, G. J.; Tagari, M.; Tromm, S.; Vranken, W.; Henrick, K.

    2004-01-01

    The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the Protein Data Bank (PDB) and to work towards the integration of various bioinformatics data resources. We have implemented a simple form-based interface that allows users to query the MSD directly. The MSD ‘atlas pages’ show all of the information in the MSD for a particular PDB entry. The group has designed new search interfaces aimed at specific areas of interest, such as the environment of ligands and the secondary structures of proteins. We have also implemented a novel search interface that begins to integrate separate MSD search services in a single graphical tool. We have worked closely with collaborators to build a new visualization tool that can present both structure and sequence data in a unified interface, and this data viewer is now used throughout the MSD services for the visualization and presentation of search results. Examples showcasing the functionality and power of these tools are available from tutorial webpages (http://www.ebi.ac.uk/msd-srv/docs/roadshow_tutorial/). PMID:14681397

  12. The Analysis of Alpha Beta Pruning and MTD(f) Algorithm to Determine the Best Algorithm to be Implemented at Connect Four Prototype

    NASA Astrophysics Data System (ADS)

    Tommy, Lukas; Hardjianto, Mardi; Agani, Nazori

    2017-04-01

    Connect Four is a two-player game in which the players take turns dropping discs into a grid to connect four of one’s own discs next to each other vertically, horizontally, or diagonally. At Connect Four, the computer requires artificial intelligence (AI) in order to play properly like a human. There are many AI algorithms that can be implemented for Connect Four, but the suitable algorithms are unknown. A suitable algorithm is one that is optimal in choosing moves and whose execution time is not slow at a search depth which is deep enough. In this research, analysis and comparison between standard alpha beta (AB) Pruning and MTD(f) are carried out on the prototype of Connect Four in terms of optimality (win percentage) and speed (execution time and the number of leaf nodes). Experiments are carried out by running computer versus computer mode with 12 different conditions, i.e. varied search depth (5 through 10) and who moves first. The percentage achieved by MTD(f) based on the experiments is win 45.83%, lose 37.5% and draw 16.67%. In the experiments with search depth 8, MTD(f) execution time is 35.19% faster and evaluates 56.27% fewer leaf nodes than AB Pruning. The results of this research are that MTD(f) is as optimal as AB Pruning at the Connect Four prototype, but MTD(f) on average is faster and evaluates fewer leaf nodes than AB Pruning. The execution time of MTD(f) is not slow and is much faster than AB Pruning at a search depth which is deep enough.
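
    Both algorithms compared above can be sketched on a toy game tree (the tree shape and leaf values are invented; a practical MTD(f) would also keep a transposition table so its repeated probes reuse work, omitted here for brevity):

```python
import math

# Fail-soft alpha-beta and an MTD(f) driver on a tiny hand-built game tree.
TREE = {
    "A": ["B", "C"],                   # root (maximizing)
    "B": ["D", "E"], "C": ["F", "G"],  # minimizing nodes
    "D": 3, "E": 5, "F": 2, "G": 9,    # leaf values
}

def alphabeta(node, alpha, beta, maximizing):
    """Fail-soft alpha-beta search over TREE."""
    v = TREE[node]
    if isinstance(v, int):
        return v
    best = -math.inf if maximizing else math.inf
    for child in v:
        score = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:
            break                      # prune the remaining siblings
    return best

def mtdf(root, guess=0):
    """Home in on the minimax value with zero-window alpha-beta probes."""
    g, lower, upper = guess, -math.inf, math.inf
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alphabeta(root, beta - 1, beta, True)
        if g < beta:
            upper = g                  # probe failed low: value < beta
        else:
            lower = g                  # probe failed high: value >= beta
    return g

print(alphabeta("A", -math.inf, math.inf, True), mtdf("A"))  # → 3 3
```

    MTD(f) converges by repeated zero-window probes; with a transposition table those probes revisit few positions, which is the leaf-node saving measured in the paper.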

  13. An Energy Efficient Simultaneous-Node Repositioning Algorithm for Mobile Sensor Networks

    PubMed Central

    Hasbullah, Halabi; Nazir, Babar; Khan, Imran Ali

    2014-01-01

    Recently, wireless sensor network (WSN) applications have seen an increase in interest. In applications such as search and rescue and battlefield reconnaissance, a set of mobile nodes is deployed so that a survey of the area of interest can be made collectively. Keeping the network nodes connected is vital for WSNs to be effective. Connectivity can be provided at startup and maintained by carefully coordinating the nodes as they move. However, if a node suddenly fails, the network could be partitioned, causing communication problems. Recently, several methods that use the relocation of nodes for connectivity restoration have been proposed. However, these methods tend not to consider the potential coverage loss at some locations. This paper addresses the concerns of both connectivity and coverage in an integrated way to fill this gap. A novel algorithm for simultaneous-node repositioning is introduced, in which each neighbour of the failed node, one by one, moves in for a certain amount of time to take the place of the failed node, after which it returns to its original location in the network. The effectiveness of the algorithm has been verified by simulation. PMID:25152924
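
    The time-shared repositioning idea, each neighbour of the failed node taking a turn at its position and then returning home, can be sketched as a round-robin schedule. The node names, coordinates, and number of time slots below are invented:

```python
import itertools

# Round-robin sketch of simultaneous-node repositioning: in each time slot
# exactly one neighbour of the failed node occupies its position while the
# others stay home, so no neighbour's own coverage area is vacated for long.
def repositioning_schedule(failed_pos, neighbours, rounds=6):
    """Yield (time_slot, node, position) triples."""
    turn = itertools.cycle(sorted(neighbours))
    for slot in range(rounds):
        mover = next(turn)
        for node, home in neighbours.items():
            yield slot, node, failed_pos if node == mover else home

neigh = {"n1": (0, 1), "n2": (1, 0), "n3": (0, -1)}
sched = list(repositioning_schedule((0, 0), neigh))
```

    Every slot leaves exactly one node at the failed position, and the burden is shared evenly across the neighbours.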

  14. OPUS: A Comprehensive Search Tool for Remote Sensing Observations of the Outer Planets. Now with Enhanced Geometric Metadata for Cassini and New Horizons Optical Remote Sensing Instruments.

    NASA Astrophysics Data System (ADS)

    Gordon, M. K.; Showalter, M. R.; Ballard, L.; Tiscareno, M.; French, R. S.; Olson, D.

    2017-06-01

    The PDS RMS Node hosts OPUS - an accurate, comprehensive search tool for spacecraft remote sensing observations. OPUS supports Cassini: CIRS, ISS, UVIS, VIMS; New Horizons: LORRI, MVIC; Galileo SSI; Voyager ISS; and Hubble: ACS, STIS, WFC3, WFPC2.

  15. Internet Medline providers.

    PubMed

    Vine, D L; Coady, T R

    1998-01-01

    Each database in this review has features that will appeal to some users. Each provides a credible interface to information available within the Medline database. The major differences are pricing and interface design. In this context, features that cost more and might seem trivial to the occasional searcher may actually save time and money when used by the professional. Internet Grateful Med is free, but Ms. Coady and I agree the availability of only three ANDable search fields is a major functional limitation. PubMed is also free but much more powerful. The command-line interface that permits very sophisticated searches requires a commitment that casual users will find intimidating. Ms. Coady did not believe the feedback currently provided during a search was sufficient for sustained professional use. Paper Chase and Knowledge Finder are mature, modestly priced Medline search services. Paper Chase provides a menu-driven interface that is very easy to use, yet permits the user to search virtually all of Medline's data fields. Knowledge Finder emphasizes the use of natural language queries but fully supports more traditional search strategies. The impact of the tradeoff between fuzzy and Boolean strategies offered by Knowledge Finder is unclear and beyond the scope of this review. Additional software must be downloaded to use all of Knowledge Finder's features. Other providers required no software beyond the basic Internet browser, and this requirement prevented Ms. Coady from evaluating Knowledge Finder. Ovid and Silver Platter offer well-designed interfaces that simplify the construction of complex queries. These are clearly services designed for professional users. While pricing eliminates these for casual use, it should be emphasized that Medline citation access is only a portion of the service provided by these high-end vendors. Finally, we should comment that each of the vendors and government-sponsored services provided prompt and useful feedback to e-mail questions about usage. In conclusion, we would suggest you try the various services, determine which interface suits your style and budget, then perform simple searches until you learn the strengths and weaknesses of the service you select.

  16. Unified User Interface to Support Effective and Intuitive Data Discovery, Dissemination, and Analysis at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Petrenko, M.; Hegde, M.; Bryant, K.; Johnson, J. E.; Ritrivi, A.; Shen, S.; Volmer, B.; Pham, L. B.

    2015-01-01

    Goddard Earth Sciences Data and Information Services Center (GES DISC) has been providing access to scientific data sets since the 1990s. Beginning as one of the first Earth Observing System Data and Information System (EOSDIS) archive centers, GES DISC has evolved to offer a wide range of science-enabling services. With a growing understanding of the needs and goals of its science users, GES DISC continues to improve and expand its broad set of data discovery and access tools, sub-setting services, and visualization tools. Nonetheless, the multitude of available tools, a partial overlap of functionality, and the independent and uncoupled interfaces employed by these tools often leave end users confused as to which tools or services are most appropriate for the task at hand. As a result, some of the services remain underutilized or largely unknown to users, significantly reducing the availability of the data and leading to a great loss of scientific productivity. In order to improve the accessibility of GES DISC tools and services, we have designed and implemented UUI, the Unified User Interface. UUI seeks to provide a simple, unified, and intuitive one-stop-shop experience for the key services available at GES DISC, including sub-setting (Simple Subset Wizard), granule file search (Mirador), plotting (Giovanni), and other services. In this poster, we will discuss the main lessons, obstacles, and insights encountered while designing the UUI experience. We will also present the architecture and technology behind UUI, including NodeJS, Angular, and MongoDB, and speculate on the future of the tool at GES DISC as well as in the broader context of Space Science Informatics.

  17. Lyceum: A Multi-Protocol Digital Library Gateway

    NASA Technical Reports Server (NTRS)

    Maa, Ming-Hokng; Nelson, Michael L.; Esler, Sandra L.

    1997-01-01

    Lyceum is a prototype scalable query gateway that provides a logically central interface to multi-protocol and physically distributed digital libraries of scientific and technical information. Lyceum processes queries to multiple syntactically distinct search engines used by various distributed information servers from a single logically central interface, without modification of the remote search engines. A working prototype (http://www.larc.nasa.gov/lyceum/) demonstrates the capabilities, potential, and advantages of this type of meta-search engine by providing access to over 50 servers covering over 20 disciplines.

  18. How to improve your PubMed/MEDLINE searches: 3. advanced searching, MeSH and My NCBI.

    PubMed

    Fatehi, Farhad; Gray, Leonard C; Wootton, Richard

    2014-03-01

    Although the basic PubMed search is often helpful, the results may sometimes be non-specific. For more control over the search process you can use the Advanced Search Builder interface. This allows a targeted search in specific fields, with the convenience of being able to select the intended search field from a list. It also provides a history of your previous searches. The search history is useful to develop a complex search query by combining several previous searches using Boolean operators. For indexing the articles in MEDLINE, the NLM uses a controlled vocabulary system called MeSH. This standardised vocabulary solves the problem of authors, researchers and librarians who may use different terms for the same concept. To be efficient in a PubMed search, you should start by identifying the most appropriate MeSH terms and use them in your search where possible. My NCBI is a personal workspace facility available through PubMed and makes it possible to customise the PubMed interface. It provides various capabilities that can enhance your search performance.
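    The workflow described above (identify MeSH terms, target specific fields, combine previous searches with Boolean operators) can be sketched as a small query builder. The helper function and example terms below are hypothetical, but the [MeSH Terms] and [Title/Abstract] field tags and the AND/OR operators are standard PubMed search syntax:

```python
def pubmed_query(mesh_terms, free_terms=(), operator="AND"):
    """Build a PubMed query string.

    MeSH terms get the [MeSH Terms] field tag; free-text terms are
    restricted to [Title/Abstract], mirroring a targeted field search
    in the Advanced Search Builder.
    """
    parts = ['"{}"[MeSH Terms]'.format(t) for t in mesh_terms]
    parts += ['{}[Title/Abstract]'.format(t) for t in free_terms]
    return " {} ".format(operator).join(parts)

# Combining two earlier searches with AND, as the search history allows:
q1 = pubmed_query(["Telemedicine"])
q2 = pubmed_query(["Stroke"], free_terms=["rehabilitation"])
combined = "({}) AND ({})".format(q1, q2)
print(combined)
```

    Pasting such a string into the PubMed search box reproduces what the History feature builds interactively from numbered searches.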

  19. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated that fits the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.
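    The walking triangle search used in the data access step can be sketched as follows. This is a generic textbook version, not the libKBI implementation: starting from an arbitrary triangle, it computes signed barycentric coordinates of the query point and steps across the edge opposite the most negative coordinate until the point is contained. The two-triangle tessellation of the unit square in the example is hypothetical:

```python
def barycentric(p, a, b, c):
    # Signed barycentric coordinates of p with respect to triangle (a, b, c).
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    l1 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    l2 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return l1, l2, 1.0 - l1 - l2

def walking_triangle_search(pts, tris, neighbors, p, start=0):
    """Walk across the tessellation toward the triangle containing p.
    tris[t] is a triple of vertex indices; neighbors[t][k] is the
    triangle across the edge opposite vertex k (None on the hull)."""
    t = start
    while True:
        a, b, c = (pts[v] for v in tris[t])
        coords = barycentric(p, a, b, c)
        k = min(range(3), key=lambda i: coords[i])
        if coords[k] >= 0:          # all coordinates non-negative: found it
            return t
        t = neighbors[t][k]         # step across the most-violated edge

# Hypothetical two-triangle tessellation of the unit square:
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
tris = [(0, 1, 2), (0, 2, 3)]
neighbors = [(None, 1, None), (None, None, 0)]
print(walking_triangle_search(pts, tris, neighbors, (0.2, 0.8)))  # triangle 1
```

    On a well-shaped Delaunay tessellation this walk visits a number of triangles roughly proportional to the straight-line distance from the start, which is what makes it attractive for repeated interpolation requests.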

  20. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks

    PubMed Central

    Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao

    2016-01-01

    A sound target-searching robot system is described which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has an embedded sound signal enhancement, recognition and location method, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN together with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. The improved spectral subtraction method is used for noise reduction. The Mel-frequency cepstral coefficient (MFCC) is extracted as the feature of the audio signal. Based on the K-nearest neighbor classification method, we match the trained feature template to recognize the sound signal type. This paper utilizes the improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometrical position of the microphone array. A new mapping has been proposed to direct the motor to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound target searching function without collisions and performs well. PMID:27657088
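    The TDOA step can be illustrated with a PHAT-weighted generalized cross correlation, a common GCC variant (the paper's "improved" formulation may differ in its weighting). The test signal and sampling rate below are hypothetical:

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """TDOA estimate for one microphone pair via generalized cross
    correlation with PHAT weighting (phase transform)."""
    n = len(sig) + len(ref)
    cross = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cross /= np.maximum(np.abs(cross), 1e-12)      # PHAT: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Hypothetical test signal: white noise delayed by 25 samples at 8 kHz.
fs = 8000
rng = np.random.default_rng(0)
ref = rng.standard_normal(2048)
sig = np.concatenate((np.zeros(25), ref))[:2048]   # pure 25-sample delay
print(gcc_phat_tdoa(sig, ref, fs) * fs)            # delay in samples, about 25
```

    Given TDOAs for several microphone pairs and the array geometry, a spherical-interpolation solver then triangulates the source position, as the abstract describes.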

  1. Design and Implementation of Sound Searching Robots in Wireless Sensor Networks.

    PubMed

    Han, Lianfu; Shen, Zhengguang; Fu, Changfeng; Liu, Chao

    2016-09-21

    A sound target-searching robot system is described which includes a 4-channel microphone array for sound collection, a magneto-resistive sensor for declination measurement, and a wireless sensor network (WSN) for exchanging information. It has an embedded sound signal enhancement, recognition and location method, and a sound searching strategy based on a digital signal processor (DSP). As the wireless network nodes, three robots form the WSN together with a personal computer (PC) in order to search for three different sound targets in task-oriented collaboration. The improved spectral subtraction method is used for noise reduction. The Mel-frequency cepstral coefficient (MFCC) is extracted as the feature of the audio signal. Based on the K-nearest neighbor classification method, we match the trained feature template to recognize the sound signal type. This paper utilizes the improved generalized cross correlation method to estimate the time delay of arrival (TDOA), and then employs spherical interpolation for sound location according to the TDOA and the geometrical position of the microphone array. A new mapping has been proposed to direct the motor to search for sound targets flexibly. As the sink node, the PC receives and displays the results processed in the WSN, and it also has the ultimate power to make decisions on the received results in order to improve their accuracy. The experimental results show that the designed three-robot system implements the sound target searching function without collisions and performs well.

  2. Visual search and the aging brain: discerning the effects of age-related brain volume shrinkage on alertness, feature binding, and attentional control.

    PubMed

    Müller-Oehring, Eva M; Schulte, Tilman; Rohlfing, Torsten; Pfefferbaum, Adolf; Sullivan, Edith V

    2013-01-01

    Decline in visuospatial abilities with advancing age has been attributed to a demise of bottom-up and top-down functions involving sensory processing, selective attention, and executive control. These functions may be differentially affected by age-related volume shrinkage of subcortical and cortical nodes subserving the dorsal and ventral processing streams and the corpus callosum mediating interhemispheric information exchange. Fifty-five healthy adults (25-84 years) underwent structural MRI and performed a visual search task to test perceptual and attentional demands by combining feature-conjunction searches with "gestalt" grouping and attentional cueing paradigms. Poorer conjunction, but not feature, search performance was related to older age and volume shrinkage of nodes in the dorsolateral processing stream. When displays allowed perceptual grouping through distractor homogeneity, poorer conjunction-search performance correlated with smaller ventrolateral prefrontal cortical and callosal volumes. An alerting cue attenuated age effects on conjunction search, and the alertness benefit was associated with thalamic, callosal, and temporal cortex volumes. Our results indicate that older adults can capitalize on early parallel stages of visual information processing, whereas age-related limitations arise at later serial processing stages requiring self-guided selective attention and executive control. These limitations are explained in part by age-related brain volume shrinkage and can be mitigated by external cues.

  3. A Hybrid Symbiotic Organisms Search Algorithm with Variable Neighbourhood Search for Solving Symmetric and Asymmetric Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Umam, M. I. H.; Santosa, B.

    2018-04-01

    Combinatorial optimization has frequently been used to solve problems in science, engineering, and commercial applications. One combinatorial problem in the field of transportation is to find the shortest travel route from an initial point of departure to a point of destination while minimizing travel cost and travel time. When the distance from one (initial) node to another (destination) node is the same as the distance of the return trip, the problem is known as the Traveling Salesman Problem (TSP); otherwise it is called an Asymmetric Traveling Salesman Problem (ATSP). One of the most recent optimization techniques is Symbiotic Organisms Search (SOS). This paper discusses how to hybridize the SOS algorithm with variable neighborhood search (SOS-VNS) so that it can be applied to solve the ATSP. The proposed mechanism adds variable neighborhood search as a local search to generate a better initial solution, and then modifies the parasitism phase with an adapted mutation mechanism. After this modification, the performance of the SOS-VNS algorithm is evaluated on several data sets, and the results are compared with the best known solutions and with other algorithms such as the PSO algorithm and the original SOS algorithm. The SOS-VNS algorithm shows better results in terms of convergence, divergence and computing time.
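    The shake-and-improve pattern behind such hybrids can be sketched generically. The code below embeds a plain 2-opt local search in a variable neighborhood search skeleton; it is not the authors' SOS-VNS (which couples VNS with the SOS parasitism phase), and the six-city instance is hypothetical:

```python
import math, random

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    # Local search: keep reversing segments while that shortens the tour.
    best, improved = tour[:], True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best) + 1):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(cand, pts) < tour_length(best, pts) - 1e-12:
                    best, improved = cand, True
    return best

def vns(pts, k_max=3, iters=30, seed=0):
    # Shake with a random segment reversal of growing size k, refine
    # with 2-opt, and reset k whenever the incumbent improves.
    rng = random.Random(seed)
    start = list(range(len(pts)))
    rng.shuffle(start)
    best, k = two_opt(start, pts), 1
    for _ in range(iters):
        i = rng.randrange(1, len(best) - k)
        shaken = best[:i] + best[i:i + k + 1][::-1] + best[i + k + 1:]
        cand = two_opt(shaken, pts)
        if tour_length(cand, pts) < tour_length(best, pts) - 1e-12:
            best, k = cand, 1
        else:
            k = 1 + k % k_max
    return best

# Six cities on a unit circle: the optimal symmetric tour has length 6.
pts = [(math.cos(2 * math.pi * i / 6), math.sin(2 * math.pi * i / 6))
       for i in range(6)]
print(round(tour_length(vns(pts), pts), 6))
```

    For the asymmetric case the same skeleton applies, but the distance lookup becomes a directed matrix and segment reversal must re-cost the reversed arcs.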

  4. Best-First Heuristic Search for Multicore Machines

    DTIC Science & Technology

    2010-01-01

    Otto, 1998) to implement an asynchronous version of PRA* that they call Hash Distributed A* (HDA*). HDA* distributes nodes using a hash function in... nodes which are being communicated between peers are in transit. In contact with the authors of HDA*, we have created an implementation of HDA* for... Also, our implementation of HDA* allows us to make a fair comparison between algorithms by sharing common data structures such as priority queues and
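    The hashing idea behind HDA* can be sketched in a few lines: a global hash of each search node determines its owning process, so duplicate detection for that node stays local to one worker. The states and worker count below are hypothetical:

```python
# Hypothetical sketch of the HDA* partitioning idea only; the real
# algorithm also carries g/f values and runs asynchronous expansions.

def owner(state, n_workers):
    # A state's hash decides which worker expands it, so the closed
    # list for that state never needs cross-worker synchronization.
    return hash(state) % n_workers

# Route successor nodes of an expansion to per-worker inboxes:
n_workers = 3
inboxes = [[] for _ in range(n_workers)]
successors = [(1, 2, 0), (2, 0, 1), (0, 1, 2), (1, 0, 2)]
for s in successors:
    inboxes[owner(s, n_workers)].append(s)

# Every node has exactly one owner, and the same node always maps to
# the same worker within a run.
print([len(box) for box in inboxes])
```

    The "nodes in transit" issue the snippet alludes to arises because a worker's queues can be empty while messages carrying work for it are still in flight, which complicates termination detection.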

  5. A comprehensive literatures update of clinical researches of superparamagnetic resonance iron oxide nanoparticles for magnetic resonance imaging

    PubMed Central

    Idée, Jean-Marc

    2017-01-01

    This paper aims to update the clinical research using superparamagnetic iron oxide (SPIO) nanoparticles as magnetic resonance imaging (MRI) contrast agents published during the past five years. The PubMed database was used for the literature search, and the search terms were (SPIO OR superparamagnetic iron oxide OR Resovist OR Ferumoxytol OR Ferumoxtran-10) AND (MRI OR magnetic resonance imaging). The literature search results show that clinical research on SPIO remains robust, particularly fuelled by the approval of ferumoxytol for intravenous administration. SPIOs have been tested for MR angiography, sentinel lymph node detection, lymph node metastasis evaluation, inflammation evaluation, blood volume measurement, as well as liver imaging. Two experimental SPIOs with unique potential are also discussed in this review. A curcumin-conjugated SPIO can penetrate the blood-brain barrier (BBB) and bind to amyloid plaques in the brains of Alzheimer's disease transgenic mice, and is thereafter detectable by MRI. Another SPIO was fabricated with a core of Fe3O4 nanoparticle and a shell coating of concentrated hydrophilic polymer brushes, and is scarcely taken up by peripheral macrophages or by the mononuclear phagocyte and reticuloendothelial system (RES) due to the suppression of non-specific protein binding by its stealthy "brush-afforded" structure. This SPIO may offer potential for applications such as drug targeting and imaging of tissues or organs other than the liver and lymph nodes. PMID:28275562

  6. A Self-Stabilizing Byzantine-Fault-Tolerant Clock Synchronization Protocol

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2009-01-01

    This report presents a rapid Byzantine-fault-tolerant self-stabilizing clock synchronization protocol that is independent of application-specific requirements. It is focused on clock synchronization of a system in the presence of Byzantine faults after the cause of any transient faults has dissipated. A model of this protocol is mechanically verified using the Symbolic Model Verifier (SMV), where the entire state space is examined and proven to self-stabilize in the presence of one arbitrary faulty node. Instances of the protocol are proven to tolerate bursts of transient failures and deterministically converge with a linear convergence time with respect to the synchronization period. This protocol does not rely on assumptions about the initial state of the system other than the presence of a sufficient number of good nodes. All timing measures of variables are based on the node's local clock, and no central clock or externally generated pulse is used. The Byzantine faulty behavior modeled here is a node with arbitrarily malicious behavior that is allowed to influence other nodes at every clock tick. The only constraint is that the interactions are restricted to defined interfaces.

  7. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2017-03-07

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  8. Search systems and computer-implemented search methods

    DOEpatents

    Payne, Deborah A.; Burtner, Edwin R.; Bohn, Shawn J.; Hampton, Shawn D.; Gillen, David S.; Henry, Michael J.

    2015-12-22

    Search systems and computer-implemented search methods are described. In one aspect, a search system includes a communications interface configured to access a plurality of data items of a collection, wherein the data items include a plurality of image objects individually comprising image data utilized to generate an image of the respective data item. The search system may include processing circuitry coupled with the communications interface and configured to process the image data of the data items of the collection to identify a plurality of image content facets which are indicative of image content contained within the images and to associate the image objects with the image content facets and a display coupled with the processing circuitry and configured to depict the image objects associated with the image content facets.

  9. An efficient grid layout algorithm for biological networks utilizing various biological attributes

    PubMed Central

    Kojima, Kaname; Nagasaki, Masao; Jeong, Euna; Kato, Mitsuru; Miyano, Satoru

    2007-01-01

    Background Clearly visualized biopathways provide great help in understanding biological systems. However, manual drawing of large-scale biopathways is time consuming. We previously proposed a grid layout algorithm that can handle gene-regulatory networks and signal transduction pathways by considering edge-edge crossing, node-edge crossing, the distance measure between nodes, and subcellular localization information from Gene Ontology. Consequently, the layout algorithm succeeded in drastically reducing these crossings in the apoptosis model. However, for larger-scale networks, we encountered three problems: (i) the initial layout is often very far from any local optimum because nodes are initially placed at random; (ii) from a biological viewpoint, human layouts are still easier to understand than automatic layouts because, apart from subcellular localization, the algorithm does not fully utilize the biological information of pathways; and (iii) it employs a local search strategy in which the neighborhood is obtained by moving one node at each step, and automatic layouts suggest that simultaneous movements of multiple nodes are necessary for better layouts, although such an extension risks worsening the time complexity. Results We propose a new grid layout algorithm. To address problem (i), we devised a new force-directed algorithm whose output is suitable as the initial layout. For (ii), we considered that an appropriate alignment of nodes sharing the same biological attribute is one of the most important factors in comprehension, and we defined a new score function that gives an advantage to such configurations. To solve problem (iii), we developed a search strategy that considers swapping nodes as well as moving a node, while keeping the order of the time complexity. Though a naïve implementation would increase the time complexity by one order, we overcame this difficulty by devising a method that caches the differences between the scores of a layout and its possible updates. 
Conclusion Layouts produced by the new grid layout algorithm are compared with those of the previous algorithm and a human layout in an endothelial cell model three times as large as the apoptosis model. The total cost of the result from the new grid layout algorithm is similar to that of the human layout. In addition, its convergence time is drastically reduced (a 40% reduction). PMID:17338825
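    The move-and-swap search strategy in the Results can be sketched as a greedy descent over the two neighbourhoods. This omits the paper's score-difference caching and its richer score function; the cost here is plain Manhattan edge length on the grid, and the three-node example graph is hypothetical:

```python
import itertools

def layout_cost(pos, edges):
    # Total Manhattan length of all edges in the current grid layout.
    return sum(abs(pos[u][0] - pos[v][0]) + abs(pos[u][1] - pos[v][1])
               for u, v in edges)

def local_search(pos, edges, free_cells):
    """Greedy descent over two neighbourhoods: move one node to a free
    grid cell, or swap the cells of two nodes."""
    pos, free = dict(pos), set(free_cells)
    improved = True
    while improved:
        improved = False
        for n in list(pos):                         # move-one-node moves
            for cell in list(free):
                old, before = pos[n], layout_cost(pos, edges)
                pos[n] = cell
                if layout_cost(pos, edges) < before:
                    free.remove(cell)
                    free.add(old)
                    improved = True
                else:
                    pos[n] = old
        for a, b in itertools.combinations(list(pos), 2):   # swap moves
            before = layout_cost(pos, edges)
            pos[a], pos[b] = pos[b], pos[a]
            if layout_cost(pos, edges) < before:
                improved = True
            else:
                pos[a], pos[b] = pos[b], pos[a]
    return pos

# Hypothetical 3-node pathway a-b-c placed badly on a grid (cost 7):
pos = {"a": (0, 0), "b": (2, 2), "c": (0, 1)}
edges = [("a", "b"), ("b", "c")]
result = local_search(pos, edges, free_cells=[(1, 0), (1, 1), (0, 2)])
print(layout_cost(result, edges))   # strictly below the initial cost
```

    Recomputing the full cost after every tentative move is what makes the naïve version one order slower; caching per-move score differences, as the paper does, removes that factor.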

  10. Integration of multi-interface conversion channel using FPGA for modular photonic network

    NASA Astrophysics Data System (ADS)

    Janicki, Tomasz; Pozniak, Krzysztof T.; Romaniuk, Ryszard S.

    2010-09-01

    The article discusses the integration of different types of interfaces with FPGA circuits using a reconfigurable communication platform. The solution has been implemented in practice in a single node of a distributed measurement system. Construction of communication platform has been presented with its selected hardware modules, described in VHDL and implemented in FPGA circuits. The graphical user interface (GUI) has been described that allows a user to control the operation of the system. In the final part of the article selected practical solutions have been introduced. The whole measurement system resides on multi-gigabit optical network. The optical network construction is highly modular, reconfigurable and scalable.

  11. Automatic detection of pelvic lymph nodes using multiple MR sequences

    NASA Astrophysics Data System (ADS)

    Yan, Michelle; Lu, Yue; Lu, Renzhi; Requardt, Martin; Moeller, Thomas; Takahashi, Satoru; Barentsz, Jelle

    2007-03-01

    A system for automatic detection of pelvic lymph nodes is developed by incorporating complementary information extracted from multiple MR sequences. A single MR sequence lacks sufficient diagnostic information for lymph node localization and staging. Correct diagnosis often requires input from multiple complementary sequences, which makes manual detection of lymph nodes very labor intensive. Small lymph nodes are often missed even by highly-trained radiologists. The proposed system is aimed at assisting radiologists in finding lymph nodes faster and more accurately. To the best of our knowledge, this is the first such system reported in the literature. A 3-dimensional (3D) MR angiography (MRA) image is employed for extracting blood vessels that serve as a guide in searching for pelvic lymph nodes. Segmentation, shape and location analysis of potential lymph nodes are then performed using a high-resolution 3D T1-weighted VIBE (T1-vibe) MR sequence acquired by a Siemens 3T scanner. An optional contrast-agent-enhanced MR image, such as a post-ferumoxtran-10 T2*-weighted MEDIC sequence, can also be incorporated to further improve the detection accuracy for malignant nodes. The system outputs a list of potential lymph node locations that are overlaid onto the corresponding MR sequences and presented to users with associated confidence levels as well as their sizes and lengths along each axis. Preliminary studies demonstrate the feasibility of automatic lymph node detection and scenarios in which this system may be used to assist radiologists in diagnosis and reporting.

  12. A comparative study of the A* heuristic search algorithm used to solve efficiently a puzzle game

    NASA Astrophysics Data System (ADS)

    Iordan, A. E.

    2018-01-01

    The puzzle game presented in this paper consists of polyhedra (prisms, pyramids or pyramidal frustums) which can be moved using the free available spaces. The problem requires finding the minimum number of moves needed for the game to reach a goal configuration starting from an initial configuration. Because the problem is quite complex, the principal difficulty in solving it is the dimension of the search space, which leads to the necessity of a heuristic search. Improving the search method consists in determining a strong estimate through the heuristic function, which guides the search process toward the most promising side of the search tree. The comparative study is carried out between the Manhattan heuristic and the Hamming heuristic using the A* search algorithm implemented in Java. This paper also presents the necessary stages in the object-oriented development of the software used to solve this puzzle game efficiently. The modelling of the software is achieved through specific UML diagrams representing the phases of analysis, design and implementation, the system thus being described in a clear and practical manner. To confirm the theoretical result that the Manhattan heuristic is more efficient, the space complexity criterion was used. Space complexity was measured by the number of nodes generated in the search tree, the number of expanded nodes, and the effective branching factor. The experimental results obtained using the Manhattan heuristic show improvements in the space complexity of the A* algorithm compared with the Hamming heuristic.
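    The Manhattan/Hamming comparison can be reproduced in miniature with A* on the classic 8-puzzle, a close relative of the polyhedra game (a generic A* sketch, not the paper's Java implementation):

```python
import heapq

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)          # 0 is the blank, 3x3 board

def manhattan(state):
    # Sum over tiles of horizontal + vertical distance to the goal cell.
    return sum(abs(i // 3 - (t - 1) // 3) + abs(i % 3 - (t - 1) % 3)
               for i, t in enumerate(state) if t)

def hamming(state):
    # Number of misplaced tiles: the weaker of the two heuristics.
    return sum(1 for i, t in enumerate(state) if t and t != GOAL[i])

def neighbours(state):
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            j = (r + dr) * 3 + (c + dc)
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def astar(start, h=manhattan):
    # Best-first search ordered by f = g + h; returns the move count.
    frontier, best_g = [(h(start), 0, start)], {start: 0}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g
        if g > best_g.get(state, 0):
            continue                          # stale queue entry
        for nxt in neighbours(state):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))

start = (1, 2, 3, 4, 5, 6, 0, 7, 8)          # two slides from the goal
print(astar(start, manhattan), astar(start, hamming))
```

    Both heuristics are admissible, so both find the optimal move count; counting entries in best_g for harder instances shows Manhattan generating far fewer nodes, which is the space-complexity effect the paper measures.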

  13. Tabu Search enhances network robustness under targeted attacks

    NASA Astrophysics Data System (ADS)

    Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi

    2016-03-01

    We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, a BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as provide some insights into the design of robust networks.
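    The degree-preserving link rewiring used as the move operator can be sketched as a double-edge swap. This shows the generic move only; the paper embeds it in Tabu Search with a robustness objective, which is omitted here, and the example graph is hypothetical:

```python
import random
from collections import Counter

def degree_preserving_swap(edges, rng):
    """Rewire two edges (a, b), (c, d) into (a, d), (c, b); every node
    keeps its degree. Returns None if the result would contain a
    self-loop or a duplicate edge."""
    (a, b), (c, d) = rng.sample(edges, 2)
    present = {frozenset(e) for e in edges}
    for u, v in ((a, d), (c, b)):
        if u == v or frozenset((u, v)) in present:
            return None
    rest = [e for e in edges if e not in ((a, b), (c, d))]
    return rest + [(a, d), (c, b)]

def degrees(edges):
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sorted(deg.values())

rng = random.Random(1)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
before = degrees(edges)
for _ in range(200):                 # here every legal swap is accepted;
    cand = degree_preserving_swap(edges, rng)   # Tabu Search would accept
    if cand is not None:                        # moves guided by robustness
        edges = cand                            # and a tabu list
print(before == degrees(edges))      # degree sequence is invariant
```

    In the paper's setting, each candidate swap is scored by the robustness of the rewired network under high-degree attack, and the tabu list prevents immediately undoing recent swaps.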

  14. The navigation system of the JPL robot

    NASA Technical Reports Server (NTRS)

    Thompson, A. M.

    1977-01-01

    The control structure of the JPL research robot and the operations of the navigation subsystem are discussed. The robot functions as a network of interacting concurrent processes distributed among several computers and coordinated by a central executive. The results of scene analysis are used to create a segmented terrain model in which surface regions are classified by traversability. The model is used by a path planning algorithm, PATH, which uses tree search methods to find the optimal path to a goal. In PATH, the search space is defined dynamically as a consequence of node testing. Maze-solving and the use of an associative data base for context-dependent node generation are also discussed. Execution of a planned path is accomplished by a feedback guidance process with automatic error recovery.

  15. Development of public science archive system of Subaru Telescope

    NASA Astrophysics Data System (ADS)

    Baba, Hajime; Yasuda, Naoki; Ichikawa, Shin-Ichi; Yagi, Masafumi; Iwamoto, Nobuyuki; Takata, Tadafumi; Horaguchi, Toshihiro; Taga, Masatochi; Watanabe, Masaru; Okumura, Shin-Ichiro; Ozawa, Tomohiko; Yamamoto, Naotaka; Hamabe, Masaru

    2002-09-01

    We have developed a public science archive system, the Subaru-Mitaka-Okayama-Kiso Archive system (SMOKA), as a successor to the Mitaka-Okayama-Kiso Archive (MOKA) system. SMOKA provides access to the public data of the Subaru Telescope, the 188 cm telescope at Okayama Astrophysical Observatory, and the 105 cm Schmidt telescope at Kiso Observatory of the University of Tokyo. Since 1997, we have worked to compile a dictionary of FITS header keywords. The completion of the dictionary enabled us to construct a unified public archive of the data obtained with various instruments at the telescopes. SMOKA has two kinds of user interfaces: Simple Search and Advanced Search. Novices can search data by simply selecting the name of the target with the Simple Search interface, while experts may prefer to set detailed constraints on the query using the Advanced Search interface. In order to improve the efficiency of searching, several new features are implemented, such as archive status plots, calibration data search, an annotation system, and an improved Quick Look Image browsing system. We can efficiently develop and operate SMOKA by adopting a three-tier model for the system. Java servlets and JavaServer Pages (JSP) are useful for separating the front-end presentation from the middle and back-end tiers.

  16. PIA: An Intuitive Protein Inference Engine with a Web-Based User Interface.

    PubMed

    Uszkoreit, Julian; Maerkens, Alexandra; Perez-Riverol, Yasset; Meyer, Helmut E; Marcus, Katrin; Stephan, Christian; Kohlbacher, Oliver; Eisenacher, Martin

    2015-07-02

    Protein inference connects the peptide spectrum matches (PSMs) obtained from database search engines back to proteins, which are typically at the heart of most proteomics studies. Different search engines yield different PSMs and thus different protein lists. Analysis of results from one or multiple search engines is often hampered by different data exchange formats and lack of convenient and intuitive user interfaces. We present PIA, a flexible software suite for combining PSMs from different search engine runs and turning these into consistent results. PIA can be integrated into proteomics data analysis workflows in several ways. A user-friendly graphical user interface can be run either locally or (e.g., for larger core facilities) from a central server. For automated data processing, stand-alone tools are available. PIA implements several established protein inference algorithms and can combine results from different search engines seamlessly. On several benchmark data sets, we show that PIA can identify a larger number of proteins at the same protein FDR when compared to that using inference based on a single search engine. PIA supports the majority of established search engines and data in the mzIdentML standard format. It is implemented in Java and freely available at https://github.com/mpc-bioinformatics/pia.

  17. Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins

    NASA Astrophysics Data System (ADS)

    Lambert, F.; Odier, J.; Fulachier, J.; ATLAS Collaboration

    2017-10-01

    The ATLAS Metadata Interface (AMI) is a mature application of more than 15 years of existence. Mainly used by the ATLAS experiment at CERN, it consists of a very generic tool ecosystem for metadata aggregation and cataloguing. AMI is used by the ATLAS production system, therefore the service must guarantee a high level of availability. We describe our monitoring and administration systems, and the Jenkins-based strategy used to dynamically test and deploy cloud OpenStack nodes on demand.

  18. Interface Psychology: Touchscreens Change Attribute Importance, Decision Criteria, and Behavior in Online Choice

    PubMed Central

    Gips, James

    2015-01-01

    As the rise of tablets and smartphones moves the dominant interface for digital content from the mouse or trackpad to direct touchscreen interaction, work is needed to explore the role of interfaces in shaping psychological reactions to online content. This research explores the role of direct-touch interfaces in product search and choice, and isolates the touch element from other form factor changes such as screen size. Results from an experimental study using a travel recommendation Web site show that a direct-touch interface (vs. a more traditional mouse interface) increases the number of alternatives searched, and biases evaluations toward tangible attributes such as décor and furniture over intangible attributes such as WiFi and employee demeanor. Direct-touch interfaces also elevate the importance of internal and subjective satisfaction metrics such as instinct over external and objective metrics such as reviews, which in turn increases anticipated satisfaction metrics. Findings suggest that interfaces can strongly affect how online content is explored, perceived, remembered, and acted on, and further work in interface psychology could be as fruitful as research exploring the content itself. PMID:26348814

  19. Interface Psychology: Touchscreens Change Attribute Importance, Decision Criteria, and Behavior in Online Choice.

    PubMed

    Brasel, S Adam; Gips, James

    2015-09-01

    As the rise of tablets and smartphones moves the dominant interface for digital content from the mouse or trackpad to direct touchscreen interaction, work is needed to explore the role of interfaces in shaping psychological reactions to online content. This research explores the role of direct-touch interfaces in product search and choice, and isolates the touch element from other form factor changes such as screen size. Results from an experimental study using a travel recommendation Web site show that a direct-touch interface (vs. a more traditional mouse interface) increases the number of alternatives searched, and biases evaluations toward tangible attributes such as décor and furniture over intangible attributes such as WiFi and employee demeanor. Direct-touch interfaces also elevate the importance of internal and subjective satisfaction metrics such as instinct over external and objective metrics such as reviews, which in turn increases anticipated satisfaction metrics. Findings suggest that interfaces can strongly affect how online content is explored, perceived, remembered, and acted on, and further work in interface psychology could be as fruitful as research exploring the content itself.

  20. The EPOS e-Infrastructure

    NASA Astrophysics Data System (ADS)

    Jeffery, Keith; Bailo, Daniele

    2014-05-01

    The European Plate Observing System (EPOS) is integrating geoscientific information concerning earth movements in Europe. We are approaching the end of the PP (Preparatory Project) phase and in October 2014 expect to continue with the full project within ESFRI (European Strategic Framework for Research Infrastructures). The key aspects of EPOS concern providing services to allow homogeneous access by end-users over heterogeneous data, software, facilities, equipment and services. The e-infrastructure of EPOS is the heart of the project since it integrates the work on organisational, legal, economic and scientific aspects. Following the creation of an inventory of relevant organisations, persons, facilities, equipment, services, datasets and software (RIDE) the scale of integration required became apparent. The EPOS e-infrastructure architecture has been developed systematically based on recorded primary (user) requirements and secondary (interoperation with other systems) requirements through Strawman, Woodman and Ironman phases with the specification - and developed confirmatory prototypes - becoming more precise and progressively moving from paper to implemented system. The EPOS architecture is based on global core services (Integrated Core Services - ICS) which access thematic nodes (domain-specific European-wide collections, called thematic Core Services - TCS), national nodes and specific institutional nodes. The key aspect is the metadata catalog. 
In one dimension this is described in 3 levels: (1) discovery metadata using well-known and commonly used standards such as DC (Dublin Core) to enable users (via an intelligent user interface) to search for objects within the EPOS environment relevant to their needs; (2) contextual metadata providing the context of the object described in the catalog to enable a user or the system to determine the relevance of the discovered object(s) to their requirement - the context includes projects, funding, organisations involved, persons involved, related publications, facilities, equipment and others, and utilises the CERIF (Common European Research Information Format) standard (see www.eurocris.org); (3) detailed metadata which is specific to a domain or to a particular object and includes the schema describing the object to processing software. The other dimension of the metadata concerns the objects described. These are classified into users, services (including software), data and resources (computing, data storage, instruments and scientific equipment). An alternative architecture has been considered: using brokering. This technique has been used especially in North American geoscience projects to interoperate datasets. The technique involves writing software to interconvert between any two node datasets. Given n nodes this implies writing n*(n-1) convertors. EPOS Working Group 7 (e-infrastructures and virtual community), which deals with the design and implementation of a prototype of the EPOS services, chose to use an approach which endows the system with extreme flexibility and sustainability. It is called the Metadata Catalogue approach. With the use of the catalogue the EPOS system can: 1. interoperate with software, services, users, organisations, facilities, equipment etc. as well as datasets; 2. avoid writing n*(n-1) software convertors, instead generating, through the information contained in the catalogue, only n convertors.
This is a huge saving - especially in maintenance as the datasets (or other node resources) evolve. We are working on (semi-) automation of convertor generation by metadata mapping - this is leading-edge computer science research; 3. make extensive use of contextual metadata, which enables a user or a machine to: (i) improve discovery of resources at nodes; (ii) improve precision and recall in search; (iii) drive the systems for identification, authentication, authorisation, security and privacy, recording the relevant attributes of the node resources and of the user; (iv) manage provenance and long-term digital preservation. The linkage between the Integrated Services, which provide the integration of data and services, with the diverse Thematic Services Nodes is provided by means of a compatibility layer, which includes the aforementioned metadata catalogue. This layer provides 'connectors' to make local data, software and services available through the EPOS Integrated Services layer. In conclusion, we believe the EPOS e-infrastructure architecture is fit for purpose, including long-term sustainability and pan-European access to data and services.
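The convertor-count argument above is easy to make concrete. A minimal sketch (illustrative only; the counts follow directly from the abstract's n*(n-1) figure) compares pairwise brokering against the catalogue-mediated approach:

```python
# Convertors needed to interoperate n heterogeneous nodes.

def pairwise_convertors(n):
    # Brokering: one convertor per ordered pair of distinct node formats.
    return n * (n - 1)

def catalogue_convertors(n):
    # Metadata-catalogue approach: one convertor per node, to and from
    # the common representation described in the catalogue.
    return n

for n in (5, 20, 100):
    print(n, pairwise_convertors(n), catalogue_convertors(n))
```

At 100 nodes the difference is 9900 versus 100 convertors, which is why the maintenance saving grows so quickly as nodes are added.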

  1. The Scalable Checkpoint/Restart Library

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, A.

    The Scalable Checkpoint/Restart (SCR) library provides an interface that codes may use to write out and read in application-level checkpoints in a scalable fashion. In the current implementation, checkpoint files are cached in local storage (hard disk or RAM disk) on the compute nodes. This technique provides scalable aggregate bandwidth and uses storage resources that are fully dedicated to the job. This approach addresses the two common drawbacks of checkpointing a large-scale application to a shared parallel file system, namely, limited bandwidth and file system contention. In fact, on current platforms, SCR scales linearly with the number of compute nodes. It has been benchmarked as high as 720 GB/s on 1094 nodes of Atlas, which is nearly two orders of magnitude faster than the parallel file system.
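The SCR library itself is a C API; as a language-neutral sketch of the caching idea it describes (write application-level checkpoints to node-local storage, atomically, so a crash never leaves a truncated "valid" file), here is a minimal Python illustration. The cache path and file naming are hypothetical, not part of SCR:

```python
import os
import pickle
import tempfile

# Hypothetical node-local cache (stands in for a RAM disk or local SSD).
CACHE_DIR = os.path.join(tempfile.gettempdir(), "scr_cache_demo")

def write_checkpoint(step, state):
    """Write an application-level checkpoint to node-local storage.

    Writing to a temporary file and renaming keeps the checkpoint atomic:
    a partially written file is never visible under a checkpoint name.
    """
    os.makedirs(CACHE_DIR, exist_ok=True)
    final = os.path.join(CACHE_DIR, f"ckpt_{step:06d}.pkl")
    fd, tmp = tempfile.mkstemp(dir=CACHE_DIR)
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, final)  # atomic rename on POSIX
    return final

def read_latest_checkpoint():
    """Return the state from the most recent checkpoint, or None."""
    if not os.path.isdir(CACHE_DIR):
        return None
    files = sorted(f for f in os.listdir(CACHE_DIR) if f.startswith("ckpt_"))
    if not files:
        return None
    with open(os.path.join(CACHE_DIR, files[-1]), "rb") as f:
        return pickle.load(f)
```

Because each node writes only to its own local storage, aggregate bandwidth grows with node count instead of contending for a shared parallel file system.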

  2. Software/hardware distributed processing network supporting the Ada environment

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC for processing, VHSIC ASICs for high speed, reliable, inter-node communications and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network supports all ground processing applications, has merit as a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  3. Efficient Deployment of Key Nodes for Optimal Coverage of Industrial Mobile Wireless Networks

    PubMed Central

    Li, Xiaomin; Li, Di; Dong, Zhijie; Hu, Yage; Liu, Chengliang

    2018-01-01

    In recent years, industrial wireless networks (IWNs) have been transformed by the introduction of mobile nodes, and they now offer increased extensibility, mobility, and flexibility. Nevertheless, mobile nodes pose efficiency and reliability challenges. Efficient node deployment and management of channel interference directly affect network system performance, particularly for key node placement in clustered wireless networks. This study analyzes this system model, considering both industrial properties of wireless networks and their mobility. Then, static and mobile node coverage problems are unified and simplified to target coverage problems. We propose a novel strategy for the deployment of clustered heads in grouped industrial mobile wireless networks (IMWNs) based on the improved maximal clique model and the iterative computation of new candidate cluster head positions. The maximal cliques are obtained via a double-layer Tabu search. Each cluster head updates its new position via an improved virtual force while moving with full coverage to find the minimal inter-cluster interference. Finally, we develop a simulation environment. The simulation results, based on a performance comparison, show the efficacy of the proposed strategies and their superiority over current approaches. PMID:29439439

  4. Subgrouping Automata: automatic sequence subgrouping using phylogenetic tree-based optimum subgrouping algorithm.

    PubMed

    Seo, Joo-Hyun; Park, Jihyang; Kim, Eun-Mi; Kim, Juhan; Joo, Keehyoung; Lee, Jooyoung; Kim, Byung-Gee

    2014-02-01

    Sequence subgrouping for a given sequence set can enable various informative tasks such as the functional discrimination of sequence subsets and the functional inference of unknown sequences. Because an identity threshold for sequence subgrouping may vary according to the given sequence set, it is highly desirable to construct a robust subgrouping algorithm which automatically identifies an optimal identity threshold and generates subgroups for a given sequence set. To this end, an automatic sequence subgrouping method, named 'Subgrouping Automata', was constructed. First, the tree analysis module analyzes the structure of the tree and calculates all possible subgroups in each node. The sequence similarity analysis module calculates the average sequence similarity for all subgroups in each node. The representative sequence generation module finds a representative sequence using profile analysis and self-scoring for each subgroup. For all nodes, average sequence similarities are calculated, and 'Subgrouping Automata' searches for the node showing the statistically maximal increase in sequence similarity using Student's t-value. The node showing the maximum t-value, which gives the most significant difference in average sequence similarity between two adjacent nodes, is determined as the optimum subgrouping node in the phylogenetic tree. Further analysis showed that the optimum subgrouping node from SA prevents under-subgrouping and over-subgrouping. Copyright © 2013. Published by Elsevier Ltd.
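The node-selection step can be sketched compactly (an illustrative toy, not the SA tool itself, which also builds profiles and representative sequences): given samples of pairwise similarities at successive tree levels, pick the level whose jump in average similarity over the previous level has the largest t statistic.

```python
import statistics
from math import sqrt

def t_value(a, b):
    """Welch's t statistic for the increase in mean similarity from a to b."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(b) - statistics.mean(a)) / sqrt(va / len(a) + vb / len(b))

def optimum_node(levels):
    """levels: samples of pairwise sequence similarities per tree level,
    ordered root -> leaves. Return (index, t) of the level whose jump in
    average similarity over the previous level is most significant;
    subgrouping stops at that node."""
    best_i, best_t = None, float("-inf")
    for i in range(1, len(levels)):
        t = t_value(levels[i - 1], levels[i])
        if t > best_t:
            best_i, best_t = i, t
    return best_i, best_t
```

Stopping at the largest t rather than a fixed identity threshold is what lets the method adapt to each sequence set and avoid over- or under-subgrouping.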

  5. Interactive Planning for Capability Driven Air & Space Operations

    DTIC Science & Technology

    2008-04-30

    Time, Routledge and Kegan, London, UK, 1980. [5] A. Bochman, Concerted instant–interval temporal semantics I: Temporal ontologies, Notre Dame Journal...then return true else deleteStatement (X, rj , Y ) end if end for return false Figure 8 shows the search space for instance in Table 2. The green ...nodes are those for which the set of relations corresponding to the path from the root form a consistent set. A path from root to a green leaf node

  6. An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Saether, E.; Glaessgen, E.H.; Yamakov, V.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
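The averaging step at the heart of ESCM can be sketched as follows (a toy illustration of the idea, not the authors' implementation; function and parameter names are hypothetical): for each finite-element node on the interface, the displacements of all atoms within a local volume are averaged to form that node's boundary condition, instead of tying each node to a single atom.

```python
from math import dist

def interface_node_displacements(atom_pos, atom_disp, fe_nodes, radius):
    """Average atomistic displacements in a local volume around each FE node.

    atom_pos, atom_disp: parallel lists of 3-tuples (positions, displacements).
    fe_nodes: interface finite-element node positions.
    radius: size of the local averaging volume around each node.
    """
    result = []
    for node in fe_nodes:
        # Collect displacements of atoms inside this node's local volume.
        disp = [d for p, d in zip(atom_pos, atom_disp) if dist(p, node) <= radius]
        if disp:
            n = len(disp)
            result.append(tuple(sum(c) / n for c in zip(*disp)))
        else:
            result.append((0.0,) * len(node))
    return result
```

The statistical average smooths thermal fluctuations of individual atoms, which is what allows finite-temperature states and a mesh coarser than atomic resolution.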

  7. A New Concurrent Multiscale Methodology for Coupling Molecular Dynamics and Finite Element Analyses

    NASA Technical Reports Server (NTRS)

    Yamakov, Vesselin; Saether, Erik; Glaessgen, Edward H.

    2008-01-01

    The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.

  8. Characterization of topological structure on complex networks.

    PubMed

    Nakamura, Ikuo

    2003-10-01

    Characterizing the topological structure of complex networks is a significant problem, especially from the viewpoint of data mining on the World Wide Web. "Page rank", used in the commercial search engine Google, is such a measure of authority to rank all the nodes matching a given query. We have investigated the page-rank distribution of the real Web and a growing network model, both of which have directed links and exhibit power-law distributions of in-degree (the number of incoming links to a node) and out-degree (the number of outgoing links from a node). We find a concentration of page rank on a small number of nodes and low page rank in high-degree regimes in the real Web, which can be explained by topological properties of the network, e.g., network motifs, and connectivities of nearest neighbors.
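Page rank as referenced here is the standard random-surfer measure; a compact power-iteration sketch (not the paper's code) shows how rank concentrates on heavily linked-to nodes:

```python
def page_rank(links, d=0.85, iters=50):
    """Power-iteration PageRank. links: dict node -> list of linked-to nodes."""
    nodes = set(links) | {v for outs in links.values() for v in outs}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            outs = links.get(u, [])
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

# A node pointed to by everyone accumulates most of the rank:
r = page_rank({"a": ["hub"], "b": ["hub"], "c": ["hub"], "hub": ["a"]})
```

Even in this four-node toy, the hub ends up with the largest share of rank, mirroring the concentration the abstract reports for the real Web.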

  9. Interface Between CDS/ISIS and the Web at the Library of the Cagliari Observatory

    NASA Astrophysics Data System (ADS)

    Mureddu, Leonardo; Denotti, Franca; Alvito, Gianni

    The library catalog of the Cagliari Observatory was digitized some years ago using CDS/ISIS with a practical format named ``ASTCA'' derived from the well-known ``BIBLO''. Recently the observatory has put some effort into the creation and maintenance of a Web site; on that occasion the library database has been interfaced to the Web server by means of the software WWWISIS and a locally created search form. Both books and journals can be searched by remote users. Book searches can be made by authors, titles or keywords.

  10. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    PubMed

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

    Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. 
©Jonathan Demelo, Paul Parsons, Kamran Sedig. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.02.2017.

  11. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE

    PubMed Central

    2017-01-01

    Background Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Objective Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. Methods We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Results Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Conclusions Our strategies appear to be valuable in addressing the two major problems in exploratory search. 
Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. PMID:28153818

  12. A Shared Infrastructure for Federated Search Across Distributed Scientific Metadata Catalogs

    NASA Astrophysics Data System (ADS)

    Reed, S. A.; Truslove, I.; Billingsley, B. W.; Grauch, A.; Harper, D.; Kovarik, J.; Lopez, L.; Liu, M.; Brandt, M.

    2013-12-01

    The vast amount of science metadata can be overwhelming and highly complex. Comprehensive analysis and sharing of metadata is difficult since institutions often publish to their own repositories. There are many disjoint standards used for publishing scientific data, making it difficult to discover and share information from different sources. Services that publish metadata catalogs often have different protocols, formats, and semantics. The research community is limited by the exclusivity of separate metadata catalogs, and thus it is desirable to have federated search interfaces capable of unified search queries across multiple sources. Aggregation of metadata catalogs also enables users to critique metadata more rigorously. With these motivations in mind, the National Snow and Ice Data Center (NSIDC) and Advanced Cooperative Arctic Data and Information Service (ACADIS) implemented two search interfaces for the community. Both the NSIDC Search and ACADIS Arctic Data Explorer (ADE) use a common infrastructure, which keeps maintenance costs low. The search clients are designed to make OpenSearch requests against Solr, an Open Source search platform. Solr applies indexes to specific fields of the metadata, which in this instance optimizes queries containing keywords, spatial bounds and temporal ranges. NSIDC metadata is reused by both search interfaces, but the ADE also brokers additional sources. Users can quickly find relevant metadata with minimal effort, which ultimately lowers costs for research. This presentation will highlight the reuse of data and code between NSIDC and ACADIS, discuss challenges and milestones for each project, and identify the creation and use of Open Source libraries.
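The keyword-plus-spatial-plus-temporal query pattern described above can be sketched as a Solr request builder. This is an illustrative sketch only: the field names (`text`, `spatial_bounds`, `temporal_range`) and the endpoint are hypothetical, not the actual NSIDC/ADE schema.

```python
from urllib.parse import urlencode

def build_solr_query(keyword, bbox=None, start=None, end=None,
                     base="http://localhost:8983/solr/metadata/select"):
    """Compose a Solr query URL with keyword, spatial, and temporal filters.

    bbox is (min_lon, min_lat, max_lon, max_lat); dates are ISO strings.
    Filters go in fq (filter queries) so Solr can cache them separately
    from the scored keyword query in q.
    """
    params = {"q": f"text:({keyword})", "wt": "json"}
    fq = []
    if bbox:
        w, s, e, n = bbox
        fq.append(f"spatial_bounds:[{s},{w} TO {n},{e}]")
    if start and end:
        fq.append(f"temporal_range:[{start} TO {end}]")
    if fq:
        params["fq"] = " AND ".join(fq)
    return base + "?" + urlencode(params)

url = build_solr_query("sea ice", bbox=(-180, 60, 180, 90),
                       start="2012-01-01", end="2012-12-31")
```

Because both clients speak OpenSearch to the same Solr index, a builder like this is the only per-client piece; the indexing infrastructure is shared.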

  13. A Cluster-Based Architecture to Structure the Topology of Parallel Wireless Sensor Networks

    PubMed Central

    Lloret, Jaime; Garcia, Miguel; Bri, Diana; Diaz, Juan R.

    2009-01-01

    A wireless sensor network is a self-configuring network of mobile nodes connected by wireless links where the nodes have limited capacity and energy. In many cases, the application environment requires the design of an exclusive network topology for a particular case. Cluster-based network developments and proposals in existence have been designed to build a network for just one type of node, where all nodes can communicate with any other nodes in their coverage area. Let us suppose a set of clusters of sensor nodes where each cluster is formed by different types of nodes (e.g., they could be classified by the sensed parameter using different transmitting interfaces, by the node profile or by the type of device: laptops, PDAs, sensors, etc.) and exclusive networks, as virtual networks, are needed with the same type of sensed data, or the same type of devices, or even the same type of profiles. In this paper, we propose an algorithm that is able to structure the topology of different wireless sensor networks to coexist in the same environment. It allows control and management of the topology of each network. The architecture operation and the protocol messages will be described. Measurements from a real test-bench show that the designed protocol has low bandwidth consumption and also demonstrate the viability and the scalability of the proposed architecture. Our cluster-based algorithm is compared with other algorithms reported in the literature in terms of architecture and protocol measurements. PMID:22303185

  14. A Game Theoretic Optimization Method for Energy Efficient Global Connectivity in Hybrid Wireless Sensor Networks

    PubMed Central

    Lee, JongHyup; Pak, Dohyun

    2016-01-01

    For practical deployment of wireless sensor networks (WSNs), WSNs construct clusters, where a sensor node communicates with other nodes in its cluster, and a cluster head supports connectivity between the sensor nodes and a sink node. In hybrid WSNs, cluster heads have cellular network interfaces for global connectivity. However, when WSNs are active and the load of cellular networks is high, the optimal assignment of cluster heads to base stations becomes critical. Therefore, in this paper, we propose a game theoretic model to find the optimal assignment of base stations for hybrid WSNs. Since the communication and energy cost differs across cellular systems, we devise two game models, for TDMA/FDMA and CDMA systems, employing power prices to adapt to the varying efficiency of recent wireless technologies. The proposed model is defined under the assumption of an ideal sensing field, but our evaluation shows that the proposed model is more adaptive and energy efficient than local selections. PMID:27589743

  15. Jali - Unstructured Mesh Infrastructure for Multi-Physics Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garimella, Rao V; Berndt, Markus; Coon, Ethan

    2017-04-13

    Jali is a parallel unstructured mesh infrastructure library designed for use by multi-physics simulations. It supports 2D and 3D arbitrary polyhedral meshes distributed over hundreds to thousands of nodes. Jali can read and write Exodus II meshes along with fields and sets on the mesh; support for other formats is partially implemented or planned. Jali is built on MSTK (https://github.com/MeshToolkit/MSTK), an open source general purpose unstructured mesh infrastructure library from Los Alamos National Laboratory. While it has been made to work with other mesh frameworks such as MOAB and STKmesh in the past, support for maintaining the interface to these frameworks has been suspended for now. Jali supports distributed as well as on-node parallelism. Support of on-node parallelism is through direct use of the mesh in multi-threaded constructs or through the use of "tiles" which are submeshes or sub-partitions of a partition destined for a compute node.

  16. Software-defined Quantum Networking Ecosystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S.; Sadlier, Ronald

    The software enables a user to perform modeling and simulation of software-defined quantum networks. The software addresses the problem of how to synchronize transmission of quantum and classical signals through multi-node networks and to demonstrate quantum information protocols such as quantum teleportation. The software approaches this problem by generating a graphical model of the underlying network and attributing properties to each node and link in the graph. The graphical model is then simulated using a combination of discrete-event simulators to calculate the expected state of each node and link in the graph at a future time. A user interacts with the software by providing an initial network model and instantiating methods for the nodes to transmit information with each other. This includes writing application scripts in python that make use of the software library interfaces. A user then initiates the application scripts, which invokes the software simulation. The user then uses the built-in diagnostic tools to query the state of the simulation and to collect statistics on synchronization.

  17. A monitoring system for vegetable greenhouses based on a wireless sensor network.

    PubMed

    Li, Xiu-hong; Cheng, Xiao; Yan, Ke; Gong, Peng

    2010-01-01

    A wireless sensor network-based automatic monitoring system is designed for monitoring the life conditions of greenhouse vegetables. The complete system architecture includes a group of sensor nodes, a base station, and an internet data center. For the design of the wireless sensor nodes, the JN5139 micro-processor is adopted as the core component and the Zigbee protocol is used for wireless communication between nodes. With an ARM7 microprocessor and the embedded ZKOS operating system, a proprietary gateway node is developed to achieve data influx, screen display, system configuration and GPRS-based remote data forwarding. Through a Client/Server mode, the management software for the remote data center achieves real-time data distribution and time-series analysis. In addition, a GSM short-message-based interface is developed for sending real-time environmental measurements, and for alarming when a measurement is beyond some pre-defined threshold. The whole system has been tested for over one year and satisfactory results have been observed, which indicate that this system is very useful for greenhouse environment monitoring.
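The threshold-alarm logic described above amounts to a range check per measurement. A minimal sketch (parameter names and limits are illustrative, not the paper's values):

```python
# Hypothetical per-measurement limits: name -> (low, high).
THRESHOLDS = {"air_temp_c": (10.0, 35.0), "humidity_pct": (40.0, 85.0)}

def check_alarms(reading):
    """Return alarm messages for measurements outside their thresholds.

    In the described system, each message would be forwarded as a GSM
    short message; here we just build the text.
    """
    alarms = []
    for name, value in reading.items():
        low, high = THRESHOLDS.get(name, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alarms.append(f"ALARM {name}={value} outside [{low}, {high}]")
    return alarms
```

Keeping the check on the gateway (rather than on battery-powered sensor nodes) fits the architecture above, since the gateway already aggregates all readings.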

  18. Nearest neighbor imputation using spatial–temporal correlations in wireless sensor networks

    PubMed Central

    Li, YuanYuan; Parker, Lynne E.

    2016-01-01

    Missing data is common in Wireless Sensor Networks (WSNs), especially with multi-hop communications. There are many reasons for this phenomenon, such as unstable wireless communications, synchronization issues, and unreliable sensors. Unfortunately, missing data creates a number of problems for WSNs. First, since most sensor nodes in the network are battery-powered, it is too expensive to have the nodes retransmit missing data across the network. Data re-transmission may also cause time delays when detecting abnormal changes in an environment. Furthermore, localized reasoning techniques on sensor nodes (such as machine learning algorithms to classify states of the environment) are generally not robust enough to handle missing data. Since sensor data collected by a WSN is generally correlated in time and space, we illustrate how replacing missing sensor values with spatially and temporally correlated sensor values can significantly improve the network’s performance. However, our studies show that it is important to determine which nodes are spatially and temporally correlated with each other. Simple techniques based on Euclidean distance are not sufficient for complex environmental deployments. Thus, we have developed a novel Nearest Neighbor (NN) imputation method that estimates missing data in WSNs by learning spatial and temporal correlations between sensor nodes. To improve the search time, we utilize a kd-tree data structure, which is a non-parametric, data-driven binary search tree. Instead of using traditional mean and variance of each dimension for kd-tree construction, and Euclidean distance for kd-tree search, we use weighted variances and weighted Euclidean distances based on measured percentages of missing data. We have evaluated this approach through experiments on sensor data from a volcano dataset collected by a network of Crossbow motes, as well as experiments using sensor data from a highway traffic monitoring application. 
Our experimental results show that our proposed k-NN imputation method has competitive accuracy with state-of-the-art Expectation–Maximization (EM) techniques, while using much simpler computational techniques, thus making it suitable for use in resource-constrained WSNs. PMID:28435414
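A toy version of the weighted-distance imputation step can illustrate the idea (everything here is an illustrative sketch; the actual method searches a kd-tree rather than scanning linearly, and derives weights from measured missing-data percentages):

```python
from math import sqrt, inf

def weighted_dist(a, b, weights):
    """Weighted Euclidean distance over dimensions present in both readings.

    Weights would be derived from each sensor's fraction of missing data,
    down-weighting unreliable dimensions.
    """
    pairs = [(x, y, w) for x, y, w in zip(a, b, weights)
             if x is not None and y is not None]
    if not pairs:
        return inf
    return sqrt(sum(w * (x - y) ** 2 for x, y, w in pairs))

def impute(target, history, weights):
    """Fill None entries of `target` from its nearest historical reading.

    A kd-tree built with weighted variances would replace this linear
    scan in a real deployment.
    """
    nearest = min(history, key=lambda row: weighted_dist(target, row, weights))
    return [n if t is None else t for t, n in zip(target, nearest)]
```

Because the distance ignores missing dimensions and weights the rest, spatially and temporally correlated readings still match well even when several sensors drop values at once.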

  19. High Performance Data Transfer for Distributed Data Intensive Sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin; Cottrell, R 'Les' A.; Hanushevsky, Andrew B.

We report on the development of ZX software providing high performance data transfer and encryption. The design scales in computation power, network interfaces, and IOPS while carefully balancing the available resources. Two U.S. patent-pending algorithms help tackle data sets containing many small files and very large files, and provide insensitivity to network latency. It has a cluster-oriented architecture, using peer-to-peer technologies to ease deployment, operation, usage, and resource discovery. Its unique optimizations enable effective use of flash memory. Using a pair of existing data transfer nodes at SLAC and NERSC, we compared its performance to that of bbcp and GridFTP and determined that they were comparable. With a proof of concept created using two four-node clusters with multiple distributed multi-core CPUs, network interfaces and flash memory, we achieved 155 Gbps memory-to-memory over a 2x100 Gbps link-aggregated channel and 70 Gbps file-to-file with encryption over a 5000-mile 100 Gbps link.

  20. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition cases. The parallel performance of the proposed method depends strongly on separately load-balancing the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher-order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G(sub 0)-based parallelization will be investigated.

  1. Finding Minimum-Power Broadcast Trees for Wireless Networks

    NASA Technical Reports Server (NTRS)

    Arabshahi, Payman; Gray, Andrew; Das, Arindam; El-Sharkawi, Mohamed; Marks, Robert, II

    2004-01-01

    Some algorithms have been devised for use in a method of constructing tree graphs that represent connections among the nodes of a wireless communication network. These algorithms provide for determining the viability of any given candidate connection tree and for generating an initial set of viable trees that can be used in any of a variety of search algorithms (e.g., a genetic algorithm) to find a tree that enables the network to broadcast from a source node to all other nodes while consuming the minimum amount of total power. The method yields solutions better than those of a prior algorithm known as the broadcast incremental power algorithm, albeit at a slightly greater computational cost.
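
The broadcast incremental power (BIP) baseline that the method improves upon can be sketched greedily as follows (a hedged illustration with assumed names and a distance-power exponent `alpha`; this is the prior algorithm, not the authors' tree-search method):

```python
import math

def broadcast_incremental_power(coords, source=0, alpha=2):
    """Greedy BIP heuristic sketch: grow a broadcast tree from `source`,
    at each step adding the uncovered node that costs the least extra
    transmit power.  Reaching node j from node i costs dist(i, j)**alpha.
    Returns the transmit power assigned to each node."""
    n = len(coords)

    def cost(i, j):
        (x1, y1), (x2, y2) = coords[i], coords[j]
        return math.hypot(x1 - x2, y1 - y2) ** alpha

    power = [0.0] * n          # power currently assigned to each node
    covered = {source}
    while len(covered) < n:
        best = None            # (incremental power, transmitter, new node)
        for i in covered:
            for j in range(n):
                if j in covered:
                    continue
                # extra power needed for i to also reach j (omnidirectional
                # antennas: one transmission covers all closer nodes)
                inc = max(cost(i, j), power[i]) - power[i]
                if best is None or inc < best[0]:
                    best = (inc, i, j)
        inc, i, j = best
        power[i] += inc
        covered.add(j)
    return power
```

For three collinear nodes at unit spacing, relaying through the middle node (total power 2) beats one long transmission from the source (power 4), which is the effect BIP exploits.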

  2. A critical evaluation of lymph node ratio in head and neck cancer.

    PubMed

    de Ridder, M; Marres, C C M; Smeele, L E; van den Brekel, M W M; Hauptmann, M; Balm, A J M; van Velthuysen, M L F

    2016-12-01

In head and neck squamous cell carcinoma (HNSCC), the search for better prognostic factors beyond TNM stage is ongoing. Lymph node ratio (LNR) (positive lymph nodes/total lymph nodes) is gaining interest in view of its potential prognostic significance. All HNSCC patients at the Netherlands Cancer Institute undergoing neck dissection for lymph node metastases in the neck region between 2002 and 2012 (n = 176) were included. Based on a protocol change in specimen processing, the cohort was subdivided into two distinct consecutive periods (pre and post 2007). The prognostic value of LNR, N-stage, and number of positive lymph nodes for overall survival was assessed. The mean number of examined lymph nodes after 2007 was significantly higher (42.3) than before (35.8) (p = 0.024). The higher number concerned mostly lymph nodes in level V. The mean number of positive lymph nodes before 2007 was 3.3 vs. 3.6 after 2007 (p = 0.745). By multivariate analysis of both pre- and post-2007 cohort data, two factors remained associated with an increased hazard of dying: N2 [HR 2.1 (1.1-4.1) and 2.4 (1.0-5.8)] and >3 positive lymph nodes [HR 2.0 (1.1-3.5) and 3.1 (1.4-6.9)]. The hazard ratio for LNR >7% was not significantly different: pre 2007 at 2.2 (1.3-3.8) and post 2007 at 2.1 (1.0-4.8, p = 0.053). In this study, changes in specimen processing influenced LNR values, but not the total number of tumor-positive nodes found. Therefore, in HNSCC, the number of positive nodes seems a more reliable parameter than LNR, provided a minimum number of lymph nodes are examined.
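
The two dichotomized prognostic factors compared in the study reduce to a small computation; a minimal sketch (the function name and cut-off encoding are ours, the 7% LNR and >3-node thresholds are the ones reported above):

```python
def lnr_and_count_risk(positive_nodes, total_nodes,
                       lnr_cutoff=0.07, count_cutoff=3):
    """Compute the lymph node ratio and the two dichotomized risk
    factors assessed in the study (LNR > 7%, more than 3 positive
    nodes)."""
    lnr = positive_nodes / total_nodes
    return lnr, lnr > lnr_cutoff, positive_nodes > count_cutoff
```

Note how LNR shifts when more nodes are examined: 4 positive nodes out of 40 examined gives LNR 10%, but the same 4 positives out of 60 examined gives 6.7%, which is the sensitivity to specimen processing the study describes.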

  3. Model Checking A Self-Stabilizing Synchronization Protocol for Arbitrary Digraphs

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2012-01-01

    This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV) for a subset of digraphs. Modeling challenges of the protocol and the system are addressed. The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period.

  4. Dual-Stack Single-Radio Communication Architecture for UAV Acting As a Mobile Node to Collect Data in WSNs

    PubMed Central

    Sayyed, Ali; Medeiros de Araújo, Gustavo; Bodanese, João Paulo; Buss Becker, Leandro

    2015-01-01

The use of mobile nodes to collect data in a Wireless Sensor Network (WSN) has gained special attention over the last years. Some researchers explore the use of Unmanned Aerial Vehicles (UAVs) as mobile nodes for such data-collection purposes. Analyzing these works, it is apparent that mobile nodes used in such scenarios are typically equipped with at least two different radio interfaces. The present work presents a Dual-Stack Single-Radio Communication Architecture (DSSRCA), which allows a UAV to communicate bidirectionally with a WSN and a Sink node. The proposed architecture was specifically designed to support different network QoS requirements, such as best-effort and more reliable communications, serving both UAV-to-WSN and UAV-to-Sink communication needs. DSSRCA was implemented and tested on a real UAV, as detailed in this paper. This paper also includes a simulation analysis that addresses bandwidth consumption in an environmental monitoring application scenario. It includes an analysis of the data gathering rate that can be achieved considering different UAV flight speeds. Obtained results show the viability of using a single radio transmitter for collecting data from the WSN and forwarding such data to the Sink node. PMID:26389911

  5. Dual-Stack Single-Radio Communication Architecture for UAV Acting As a Mobile Node to Collect Data in WSNs.

    PubMed

    Sayyed, Ali; de Araújo, Gustavo Medeiros; Bodanese, João Paulo; Becker, Leandro Buss

    2015-09-16

The use of mobile nodes to collect data in a Wireless Sensor Network (WSN) has gained special attention over the last years. Some researchers explore the use of Unmanned Aerial Vehicles (UAVs) as mobile nodes for such data-collection purposes. Analyzing these works, it is apparent that mobile nodes used in such scenarios are typically equipped with at least two different radio interfaces. The present work presents a Dual-Stack Single-Radio Communication Architecture (DSSRCA), which allows a UAV to communicate bidirectionally with a WSN and a Sink node. The proposed architecture was specifically designed to support different network QoS requirements, such as best-effort and more reliable communications, serving both UAV-to-WSN and UAV-to-Sink communication needs. DSSRCA was implemented and tested on a real UAV, as detailed in this paper. This paper also includes a simulation analysis that addresses bandwidth consumption in an environmental monitoring application scenario. It includes an analysis of the data gathering rate that can be achieved considering different UAV flight speeds. Obtained results show the viability of using a single radio transmitter for collecting data from the WSN and forwarding such data to the Sink node.

  6. Mercury Toolset for Spatiotemporal Metadata

    NASA Technical Reports Server (NTRS)

    Wilson, Bruce E.; Palanisamy, Giri; Devarakonda, Ranjeet; Rhyne, B. Timothy; Lindsley, Chris; Green, James

    2010-01-01

    Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  7. Mercury Toolset for Spatiotemporal Metadata

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce; Rhyne, B. Timothy; Lindsley, Chris

    2010-06-01

Mercury (http://mercury.ornl.gov) is a set of tools for federated harvesting, searching, and retrieving metadata, particularly spatiotemporal metadata. Version 3.0 of the Mercury toolset provides orders of magnitude improvements in search speed, support for additional metadata formats, integration with Google Maps for spatial queries, facetted type search, support for RSS (Really Simple Syndication) delivery of search results, and enhanced customization to meet the needs of the multiple projects that use Mercury. It provides a single portal to very quickly search for data and information contained in disparate data management systems, each of which may use different metadata formats. Mercury harvests metadata and key data from contributing project servers distributed around the world and builds a centralized index. The search interfaces then allow the users to perform a variety of fielded, spatial, and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury periodically (typically daily) harvests metadata sources through a collection of interfaces and re-indexes these metadata to provide extremely rapid search capabilities, even over collections with tens of millions of metadata records. A number of both graphical and application interfaces have been constructed within Mercury, to enable both human users and other computer programs to perform queries. Mercury was also designed to support multiple different projects, so that the particular fields that can be queried and used with search filters are easy to configure for each different project.

  8. [Study on Information Extraction of Clinic Expert Information from Hospital Portals].

    PubMed

    Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li

    2015-12-01

Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to make a judgment on forms. This paper proposes a novel method based on a domain model, which is a tree structure constructed from the attributes of search interfaces. With this model, search interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the returned web pages indexed by search interfaces. To filter the noise information on a web page, a block importance model is proposed. The experimental results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.
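
A rough sketch of assigning a search form to a domain by matching its fields against per-domain attribute sets (flattened to simple overlap scoring; the paper's actual tree-structured domain model is richer, and every name below is an assumption):

```python
def classify_form(form_fields, domain_models):
    """Assign a search form to the domain whose attribute set it
    matches best; return None when no domain attribute appears in
    the form at all."""
    def score(attrs):
        attrs = set(attrs)
        # fraction of the domain's attributes present in the form
        return len(attrs & set(form_fields)) / len(attrs)

    best = max(domain_models, key=lambda d: score(domain_models[d]))
    return best if score(domain_models[best]) > 0 else None
```

Once a form is classified, the matching domain also supplies the keywords used to fill it in, as the abstract describes.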

  9. PubMedReco: A Real-Time Recommender System for PubMed Citations.

    PubMed

    Samuel, Hamman W; Zaïane, Osmar R

    2017-01-01

    We present a recommender system, PubMedReco, for real-time suggestions of medical articles from PubMed, a database of over 23 million medical citations. PubMedReco can recommend medical article citations while users are conversing in a synchronous communication environment such as a chat room. Normally, users would have to leave their chat interface to open a new web browser window, and formulate an appropriate search query to retrieve relevant results. PubMedReco automatically generates the search query and shows relevant citations within the same integrated user interface. PubMedReco analyzes relevant keywords associated with the conversation and uses them to search for relevant citations using the PubMed E-utilities programming interface. Our contributions include improvements to the user experience for searching PubMed from within health forums and chat rooms, and a machine learning model for identifying relevant keywords. We demonstrate the feasibility of PubMedReco using BMJ's Doc2Doc forum discussions.
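
A minimal sketch of building a PubMed ESearch request of the kind PubMedReco issues; `db`, `term`, and `retmax` are documented E-utilities parameters, but the function name and query construction here are assumptions, not PubMedReco's actual code:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_pubmed_query(conversation_keywords, retmax=5):
    """Turn keywords extracted from a chat conversation into a PubMed
    ESearch URL returning up to `retmax` citation IDs."""
    term = " AND ".join(conversation_keywords)
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term,
                                     "retmax": retmax})
```

The returned URL can then be fetched and its ID list resolved to citations, so the user never has to leave the chat interface to formulate the query by hand.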

  10. Peeling the Onion: Okapi System Architecture and Software Design Issues.

    ERIC Educational Resources Information Center

    Jones, S.; And Others

    1997-01-01

    Discusses software design issues for Okapi, an information retrieval system that incorporates both search engine and user interface and supports weighted searching, relevance feedback, and query expansion. The basic search system, adjacency searching, and moving toward a distributed system are discussed. (Author/LRW)

  11. New control system for the 1.5m and 0.9m telescopes at Sierra Nevada Observatory

    NASA Astrophysics Data System (ADS)

    Costillo, Luis P.; Ramos, J. Luis; Ibáñez, J. Miguel; Aparicio, Beatriz; Herránz, Miguel; García, Antonio J.

    2006-06-01

The Sierra Nevada Observatory (Granada, Spain) has a number of telescopes. Our study focuses on two Nasmyth telescopes with apertures of 1.5m and 0.9m and an equatorial mount. The system currently installed to control these telescopes is a centralized VME module from 1995. However, given the problems that have arisen due to the number of wires and other complications, we have decided to replace this control module. We will control each telescope with a distributed control philosophy, using a serial linear communication bus between independent nodes, although all system capabilities are accessible from a central unit anywhere and at any time via the Internet. We have divided the tasks into one node for alpha control, another for delta control, one for the dome, one for the focus, and a central unit to interface with a PC. The nodes for alpha, delta, and the dome will use FPGAs in order to efficiently sample the encoders, run the control algorithms, and generate the outputs for the motors and the servo. The focus node will have a microcontroller, and the system is easy to expand should more nodes be included. After studying several fieldbus systems, we have opted for the CAN bus because of its reliability and broadcasting capabilities. In this way, all the important information will be on the bus, and every node will be able to access it at any time. This document describes the new control-console design developed at the IAA, whose basic characteristics are distributed control, simplified hardware, reduced cabling, improved safety and maintenance, an improved user interface that facilitates observation, and readiness for remote observation.

  12. Flexibility and Performance of Parallel File Systems

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1996-01-01

As we gain experience with parallel file systems, it becomes increasingly clear that a single solution does not suit all applications. For example, it appears to be impossible to find a single appropriate interface, caching policy, file structure, or disk-management strategy. Furthermore, the proliferation of file-system interfaces and abstractions makes applications difficult to port. We propose that the traditional functionality of parallel file systems be separated into two components: a fixed core that is standard on all platforms, encapsulating only primitive abstractions and interfaces, and a set of high-level libraries to provide a variety of abstractions and application-programmer interfaces (APIs). We present our current and next-generation file systems as examples of this structure. Their features, such as a three-dimensional file structure, strided read and write interfaces, and I/O-node programs, are specifically designed with the flexibility and performance necessary to support a wide range of applications.

  13. A Detailed Analysis of End-User Search Behaviors.

    ERIC Educational Resources Information Center

    Wildemuth, Barbara M.; And Others

    1991-01-01

    Discussion of search strategy formulation focuses on a study at the University of North Carolina at Chapel Hill that analyzed how medical students developed and revised search strategies for microbiology database searches. Implications for future research on search behavior, for system interface design, and for end user training are suggested. (16…

  14. Two Upper Bounds for the Weighted Path Length of Binary Trees. Report No. UIUCDCS-R-73-565.

    ERIC Educational Resources Information Center

    Pradels, Jean Louis

    Rooted binary trees with weighted nodes are structures encountered in many areas, such as coding theory, searching and sorting, information storage and retrieval. The path length is a meaningful quantity which gives indications about the expected time of a search or the length of a code, for example. In this paper, two sharp bounds for the total…

  15. Study on user interface of pathology picture archiving and communication system.

    PubMed

    Kim, Dasueran; Kang, Peter; Yun, Jungmin; Park, Sung-Hye; Seo, Jeong-Wook; Park, Peom

    2014-01-01

    It is necessary to improve the pathology workflow. A workflow task analysis was performed using a pathology picture archiving and communication system (pathology PACS) in order to propose a user interface for the Pathology PACS considering user experience. An interface analysis of the Pathology PACS in Seoul National University Hospital and a task analysis of the pathology workflow were performed by observing recorded video. Based on obtained results, a user interface for the Pathology PACS was proposed. Hierarchical task analysis of Pathology PACS was classified into 17 tasks including 1) pre-operation, 2) text, 3) images, 4) medical record viewer, 5) screen transition, 6) pathology identification number input, 7) admission date input, 8) diagnosis doctor, 9) diagnosis code, 10) diagnosis, 11) pathology identification number check box, 12) presence or absence of images, 13) search, 14) clear, 15) Excel save, 16) search results, and 17) re-search. And frequently used menu items were identified and schematized. A user interface for the Pathology PACS considering user experience could be proposed as a preliminary step, and this study may contribute to the development of medical information systems based on user experience and usability.

  16. Penile Cancer: Contemporary Lymph Node Management.

    PubMed

    O'Brien, Jonathan S; Perera, Marlon; Manning, Todd; Bozin, Mike; Cabarkapa, Sonja; Chen, Emily; Lawrentschuk, Nathan

    2017-06-01

    In penile cancer, the optimal diagnostics and management of metastatic lymph nodes are not clear. Advances in minimally invasive staging, including dynamic sentinel lymph node biopsy, have widened the diagnostic repertoire of the urologist. We aimed to provide an objective update of the recent trends in the management of penile squamous cell carcinoma, and inguinal and pelvic lymph node metastases. We systematically reviewed several medical databases, including the Web of Science® (with MEDLINE®), Embase® and Cochrane databases, according to PRISMA (Preferred Reporting Items for Systematic Review and Meta-Analyses) guidelines. The search terms used were penile cancer, lymph node, sentinel node, minimally invasive, surgery and outcomes, alone and in combination. Articles pertaining to the management of lymph nodes in penile cancer were reviewed, including original research, reviews and clinical guidelines published between 1980 and 2016. Accurate and minimally invasive lymph node staging is of the utmost importance in the surgical management of penile squamous cell carcinoma. In patients with clinically node negative disease, a growing body of evidence supports the use of sentinel lymph node biopsies. Dynamic sentinel lymph node biopsy exposes the patient to minimal risk, and results in superior sensitivity and specificity profiles compared to alternate nodal staging techniques. In the presence of locoregional disease, improvements in inguinal or pelvic lymphadenectomy have reduced morbidity and improved oncologic outcomes. A multimodal approach of chemotherapy and surgery has demonstrated a survival benefit for patients with advanced disease. Recent developments in lymph node management have occurred in penile cancer, such as minimally invasive lymph node diagnosis and intervention strategies. These advances have been met with a degree of controversy in the contemporary literature. 
Current data suggest that dynamic sentinel lymph node biopsy provides excellent sensitivity and specificity for detecting lymph node metastases. More robust long-term data on multicenter patient cohorts are required to determine the optimal management of lymph nodes in penile cancer. Copyright © 2017 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  17. Fuzzy Logic based Handoff Latency Reduction Mechanism in Layer 2 of Heterogeneous Mobile IPv6 Networks

    NASA Astrophysics Data System (ADS)

    Anwar, Farhat; Masud, Mosharrof H.; Latif, Suhaimi A.

    2013-12-01

Mobile IPv6 (MIPv6) is one of the pioneer standards that support mobility in an IPv6 environment. It has been designed to support different types of technologies for providing seamless communications in next-generation networks. However, MIPv6 and subsequent standards have some limitations due to handoff latency. In this paper, a fuzzy logic based mechanism is proposed to reduce the Layer 2 (L2) handoff latency of MIPv6 by scanning the Access Points (APs) while the Mobile Node (MN) is moving among different APs. Handoff latency occurs when the MN switches from one AP to another in L2. A heterogeneous network is considered in this research in order to reduce the delays in L2. Received Signal Strength Indicator (RSSI) and velocity of the MN are considered as the inputs of the fuzzy logic technique. This technique helps a fast-moving MN measure the signal quality of APs based on fuzzy logic input rules and build a list of interfaces, from which a suitable interface such as WiFi, WiMAX or GSM can be selected. Simulation results show a 55% handoff latency reduction and a 50% packet loss improvement in L2 compared to standard MIPv6.
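
A toy sketch of ranking candidate interfaces from RSSI and MN velocity; the membership breakpoints and the single rule below are illustrative assumptions, not the paper's rule base:

```python
def ramp(x, lo, hi):
    """Linear membership rising from 0 at `lo` to 1 at `hi`."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def rank_access_points(candidates, velocity):
    """Rank candidate APs/interfaces by signal quality, discounting weak
    APs more heavily for a fast-moving node (which will leave their
    coverage sooner).

    candidates: list of (name, rssi_dbm) pairs
    velocity:   MN speed in m/s
    """
    fast = ramp(velocity, 5.0, 20.0)        # assumed speed breakpoints
    scored = []
    for name, rssi in candidates:
        good = ramp(rssi, -90.0, -40.0)     # assumed RSSI breakpoints
        score = good - 0.5 * fast * (1.0 - good)
        scored.append((score, name))
    return [name for score, name in sorted(scored, reverse=True)]
```

The top-ranked entry in the returned list plays the role of the "suitable interface" (WiFi, WiMAX or GSM) selected from the list of available interfaces.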

  18. Modelling Safe Interface Interactions in Web Applications

    NASA Astrophysics Data System (ADS)

    Brambilla, Marco; Cabot, Jordi; Grossniklaus, Michael

    Current Web applications embed sophisticated user interfaces and business logic. The original interaction paradigm of the Web based on static content pages that are browsed by hyperlinks is, therefore, not valid anymore. In this paper, we advocate a paradigm shift for browsers and Web applications, that improves the management of user interaction and browsing history. Pages are replaced by States as basic navigation nodes, and Back/Forward navigation along the browsing history is replaced by a full-fledged interactive application paradigm, supporting transactions at the interface level and featuring Undo/Redo capabilities. This new paradigm offers a safer and more precise interaction model, protecting the user from unexpected behaviours of the applications and the browser.

  19. The PennBMBI: Design of a General Purpose Wireless Brain-Machine-Brain Interface System.

    PubMed

    Liu, Xilin; Zhang, Milin; Subei, Basheer; Richardson, Andrew G; Lucas, Timothy H; Van der Spiegel, Jan

    2015-04-01

    In this paper, a general purpose wireless Brain-Machine-Brain Interface (BMBI) system is presented. The system integrates four battery-powered wireless devices for the implementation of a closed-loop sensorimotor neural interface, including a neural signal analyzer, a neural stimulator, a body-area sensor node and a graphic user interface implemented on the PC end. The neural signal analyzer features a four channel analog front-end with configurable bandpass filter, gain stage, digitization resolution, and sampling rate. The target frequency band is configurable from EEG to single unit activity. A noise floor of 4.69 μVrms is achieved over a bandwidth from 0.05 Hz to 6 kHz. Digital filtering, neural feature extraction, spike detection, sensing-stimulating modulation, and compressed sensing measurement are realized in a central processing unit integrated in the analyzer. A flash memory card is also integrated in the analyzer. A 2-channel neural stimulator with a compliance voltage up to ± 12 V is included. The stimulator is capable of delivering unipolar or bipolar, charge-balanced current pulses with programmable pulse shape, amplitude, width, pulse train frequency and latency. A multi-functional sensor node, including an accelerometer, a temperature sensor, a flexiforce sensor and a general sensor extension port has been designed. A computer interface is designed to monitor, control and configure all aforementioned devices via a wireless link, according to a custom designed communication protocol. Wireless closed-loop operation between the sensory devices, neural stimulator, and neural signal analyzer can be configured. The proposed system was designed to link two sites in the brain, bridging the brain and external hardware, as well as creating new sensory and motor pathways for clinical practice. Bench test and in vivo experiments are performed to verify the functions and performances of the system.

  20. Supercomputer networking for space science applications

    NASA Technical Reports Server (NTRS)

    Edelson, B. I.

    1992-01-01

    The initial design of a supercomputer network topology including the design of the communications nodes along with the communications interface hardware and software is covered. Several space science applications that are proposed experiments by GSFC and JPL for a supercomputer network using the NASA ACTS satellite are also reported.

  1. SkyQuery - A Prototype Distributed Query and Cross-Matching Web Service for the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Thakar, A. R.; Budavari, T.; Malik, T.; Szalay, A. S.; Fekete, G.; Nieto-Santisteban, M.; Haridas, V.; Gray, J.

    2002-12-01

We have developed a prototype distributed query and cross-matching service for the VO community, called SkyQuery, which is implemented with hierarchical Web Services. SkyQuery enables astronomers to run combined queries on existing distributed heterogeneous astronomy archives. SkyQuery provides a simple, user-friendly interface to run distributed queries over the federation of registered astronomical archives in the VO. The SkyQuery client connects to the portal Web Service, which farms the query out to the individual archives, which are also Web Services called SkyNodes. The cross-matching algorithm is run recursively on each SkyNode. Each archive is a relational DBMS with an HTM index for fast spatial lookups. The results of the distributed query are returned as an XML DataSet that is automatically rendered by the client. SkyQuery also returns the image cutout corresponding to the query result. SkyQuery finds not only matches between the various catalogs, but also dropouts - objects that exist in some of the catalogs but not in others. This is often as important as finding matches. We demonstrate the utility of SkyQuery with a brown-dwarf search between SDSS and 2MASS, and a search for radio-quiet quasars in SDSS, 2MASS and FIRST. The importance of a service like SkyQuery for the worldwide astronomical community cannot be overstated: data on the same objects in various archives is mapped in different wavelength ranges and looks very different due to different errors, instrument sensitivities and other peculiarities of each archive. Our cross-matching algorithm performs a fuzzy spatial join across multiple catalogs. This type of cross-matching is currently often done by eye, one object at a time. A static cross-identification table for a set of archives would become obsolete by the time it was built - the exponential growth of astronomical data means that a dynamic cross-identification mechanism like SkyQuery is the only viable option.
SkyQuery was funded by a grant from the NASA AISR program.
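
    The fuzzy spatial join at the heart of the cross-matching can be sketched in miniature: objects from two catalogs are matched by angular separation within a tolerance, and unmatched objects are reported as dropouts. This is an illustrative sketch only; the catalog contents and tolerance below are invented, and the real service runs the match recursively inside relational databases with HTM indexes.

```python
from math import radians, degrees, sin, cos, asin, sqrt

def angular_sep(ra1, dec1, ra2, dec2):
    """Haversine angular separation in degrees between two sky positions."""
    ra1, dec1, ra2, dec2 = map(radians, (ra1, dec1, ra2, dec2))
    a = sin((dec2 - dec1) / 2) ** 2 + cos(dec1) * cos(dec2) * sin((ra2 - ra1) / 2) ** 2
    return degrees(2 * asin(sqrt(a)))

def fuzzy_cross_match(cat_a, cat_b, tol_deg):
    """Match each object in cat_a to its nearest neighbor in cat_b within
    tol_deg; objects with no neighbor inside the tolerance are dropouts."""
    matches, dropouts = [], []
    for oid_a, ra_a, dec_a in cat_a:
        best = min(cat_b, key=lambda o: angular_sep(ra_a, dec_a, o[1], o[2]),
                   default=None)
        if best and angular_sep(ra_a, dec_a, best[1], best[2]) <= tol_deg:
            matches.append((oid_a, best[0]))
        else:
            dropouts.append(oid_a)
    return matches, dropouts

# Hypothetical (id, RA, Dec) tuples standing in for SDSS and 2MASS rows.
sdss = [("s1", 10.0, 20.0), ("s2", 50.0, -5.0)]
tmass = [("t1", 10.0001, 20.0001), ("t9", 200.0, 1.0)]
m, d = fuzzy_cross_match(sdss, tmass, tol_deg=1.0)  # generous demo tolerance
```

    Here "s1" matches "t1" while "s2" is a dropout, mirroring SkyQuery's ability to report objects present in one archive but absent from another.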

  2. Field Model: An Object-Oriented Data Model for Fields

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.

    2001-01-01

We present an extensible, object-oriented data model designed for field data entitled Field Model (FM). FM objects can represent a wide variety of fields, including fields of arbitrary dimension and node type. FM can also handle time-series data. FM achieves generality through carefully selected topological primitives and through an implementation that leverages the potential of templated C++. FM supports fields where the node values are paired with any cell type. Thus FM can represent data where the field nodes are paired with the vertices ("vertex-centered" data), fields where the nodes are paired with the D-dimensional cells in R(sup D) (often called "cell-centered" data), as well as fields where nodes are paired with edges or other cell types. FM is designed to effectively handle very large data sets; in particular FM employs a demand-driven evaluation strategy that works especially well with large field data. Finally, the interfaces developed for FM have the potential to effectively abstract field data based on adaptive meshes. We present initial results with a triangular adaptive grid in R(sup 2) and discuss how the same design abstractions would work equally well with other adaptive-grid variations, including meshes in R(sup 3).
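
    The pairing of node values with cells of a chosen type can be illustrated with a minimal sketch. FM itself is templated C++; the class, mesh, and cell names below are invented for illustration only.

```python
class Field:
    """Minimal sketch of a field pairing one value with each cell of a
    chosen dimension: 0 = vertices ("vertex-centered"), D = D-cells
    ("cell-centered"), 1 = edges, and so on."""

    def __init__(self, mesh_cells, cell_dim, values):
        cells = mesh_cells[cell_dim]
        if len(cells) != len(values):
            raise ValueError("one value per cell required")
        self.cell_dim = cell_dim
        self.data = dict(zip(cells, values))

    def __getitem__(self, cell):
        return self.data[cell]

# A single triangle in R^2: three vertices, three edges, one two-cell.
mesh = {0: ["v0", "v1", "v2"], 1: ["e01", "e12", "e20"], 2: ["t0"]}
vertex_field = Field(mesh, 0, [1.0, 2.0, 3.0])  # vertex-centered data
cell_field = Field(mesh, 2, [9.9])              # cell-centered data
```

    The same constructor serves vertex-, edge-, and cell-centered data, which is the generality the abstract describes.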

  3. The stopping rules for winsorized tree

    NASA Astrophysics Data System (ADS)

    Ch'ng, Chee Keong; Mahat, Nor Idayu

    2017-11-01

The winsorized tree is a modified tree-based classifier that investigates and handles outliers in every node during construction of the tree. It avoids the tedious process of constructing a classical tree, in which branch splitting and pruning proceed concurrently so that the tree does not grow bushy. This mechanism is controlled by the proposed algorithm. In the winsorized tree, data are screened to identify outliers; when an outlier is detected, its value is neutralized using the winsorize approach. Both outlier identification and value neutralization are executed recursively in every node until a predetermined stopping criterion is met. The aim of this paper is to find a significant stopping criterion that stops the tree from splitting further before it overfits. The results of the experiment conducted on the Pima Indians dataset showed that a node can produce its final successor nodes (leaves) once it reaches the range of 70% in information gain.
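
    The two ingredients described above, winsorization of outliers and an information-gain stopping rule, can be sketched as follows. The percentile bounds and the reading of "70% information gain" as a fraction of parent entropy are assumptions for illustration, not the authors' exact algorithm.

```python
from math import log2

def winsorize(values, lower_pct=0.05, upper_pct=0.95):
    """Neutralize outliers by clamping values to the chosen percentiles."""
    s = sorted(values)
    lo = s[int(lower_pct * (len(s) - 1))]
    hi = s[int(upper_pct * (len(s) - 1))]
    return [min(max(v, lo), hi) for v in values]

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total)
                for c in (labels.count(x) for x in set(labels)))

def stop_splitting(parent_labels, child_label_sets, threshold=0.70):
    """Stopping-rule sketch: declare the node's children final leaves once
    the split's information gain reaches `threshold` of parent entropy."""
    parent_h = entropy(parent_labels)
    if parent_h == 0:
        return True  # pure node: nothing left to gain
    n = len(parent_labels)
    children_h = sum(len(c) / n * entropy(c) for c in child_label_sets)
    gain = parent_h - children_h
    return gain / parent_h >= threshold

clamped = winsorize([1, 2, 2, 3, 3, 3, 4, 100])  # 100 is clamped down
```

    A perfectly separating split (gain equals parent entropy) passes the rule, while an uninformative split does not, so the tree stops growing where splitting no longer pays.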

  4. A framework and methodology for navigating disaster and global health in crisis literature.

    PubMed

    Chan, Jennifer L; Burkle, Frederick M

    2013-04-04

Both 'disasters' and 'global health in crisis' research have grown dramatically due to the ever-increasing frequency and magnitude of crises around the world. Large volumes of peer-reviewed literature are not only a testament to the field's value and evolution, but also present an unprecedented outpouring of seemingly unmanageable information across a wide array of crises and disciplines. Disaster medicine, health and humanitarian assistance, global health and public health disaster literature all lie within the disaster and global health in crisis literature spectrum and are increasingly accepted as multidisciplinary and transdisciplinary fields. Researchers, policy makers, and practitioners now face a new challenge: accessing this expansive literature for decision-making and exploring new areas of research. Individuals are also reaching beyond the peer-reviewed environment to grey literature, using search engines like Google Scholar to access policy documents, consensus reports and conference proceedings. What is needed is a method and mechanism with which to search and retrieve relevant articles from this expansive body of literature. This manuscript presents both a framework and a workable process for a diverse group of users to navigate the growing peer-reviewed and grey disaster and global health in crisis literature. Disaster terms from textbooks, peer-reviewed and grey literature were used to design a framework of thematic clusters and subject matter 'nodes'. A set of 84 terms, selected from 143 curated terms, was organized within the nodes, reflecting topics within the disaster and global health in crisis literature. Terms were crossed with one another and with the term 'disaster'. The results were formatted into tables and matrices. This process created a roadmap of search terms that could be applied to the PubMed database. Each search in the matrix or table results in a listed number of articles. 
This process was applied to literature from PubMed from 2005-2011. A complementary process was also applied to Google Scholar using the same framework of clusters, nodes, and terms, expanding the search to include the broader grey literature assets. A framework of four thematic clusters and twelve subject matter nodes was designed to capture diverse disaster and global health in crisis-related content. From 2005-2011 there were 18,660 articles referring to the term [disaster]. Restricting the search to human research, MeSH terms, and the English language left 7,736 articles, still an unmanageable number to adequately process for research, policy or best practices. However, the crossed search and matrix process revealed further examples of robust realms of research in disasters, emergency medicine, EMS, public health and global health. Examples of potential gaps in the current peer-reviewed disaster and global health in crisis literature were identified, including mental health, elderly care, and alternate sites of care. The same framework and process was then applied to Google Scholar, specifically for topics that returned few PubMed results; these example searches retrieved unique peer-reviewed articles not identified in PubMed, as well as documents including books, governmental documents and consensus papers. The proposed framework, methodology and process, using four clusters, twelve nodes and a matrix and table process applied to PubMed and Google Scholar, unlocks otherwise inaccessible opportunities to better navigate the massively growing body of peer-reviewed disaster and global health in crisis literature. 
This approach will assist researchers, policy makers, and practitioners to generate future research questions, report on the overall evolution of the disaster and global health in crisis field and further guide disaster planning, prevention, preparedness, mitigation response and recovery.
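
    The crossed-term matrix described above can be sketched as a small query generator. The term names and the boolean query syntax below are illustrative, not the authors' exact implementation.

```python
def build_query_matrix(node_terms, anchor="disaster"):
    """Cross each subject-matter node term with every other term and with
    the anchor term, yielding boolean query strings for a literature
    database search."""
    queries = {}
    for i, a in enumerate(node_terms):
        for b in node_terms[i + 1:]:
            # Each cell of the matrix is one runnable database query.
            queries[(a, b)] = f'("{a}") AND ("{b}") AND ("{anchor}")'
    return queries

# Hypothetical node terms standing in for the paper's 84 curated terms.
qm = build_query_matrix(["mental health", "elderly care", "triage"])
```

    Each matrix cell maps to one crossed search whose hit count can then be tabulated, which is how the roadmap exposes both robust research realms and gaps.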

  5. Locating overlapping dense subgraphs in gene (protein) association networks and predicting novel protein functional groups among these subgraphs

    NASA Astrophysics Data System (ADS)

    Palla, Gergely; Derenyi, Imre; Farkas, Illes J.; Vicsek, Tamas

    2006-03-01

Most tasks in a cell are performed not by individual proteins, but by functional groups of proteins (either physically interacting with each other or associated in other ways). In gene (protein) association networks these groups show up as sets of densely connected nodes. In the yeast Saccharomyces cerevisiae, known physically interacting groups of proteins (called protein complexes) strongly overlap: the total number of proteins contained in these complexes by far underestimates the sum of their sizes (2750 vs. 8932). Thus, most functional groups of proteins, both physically interacting and other, are likely to share many of their members with other groups. However, current algorithms searching for dense groups of nodes in networks usually exclude overlaps. With the aim of discovering both novel functions of individual proteins and novel protein functional groups, we combine in protein association networks (i) a search for overlapping dense subgraphs based on the Clique Percolation Method (CPM) (Palla, G., et al., Nature 435, 814-818 (2005), http://angel.elte.hu/clustering), which explicitly allows for overlaps among the groups, and (ii) a verification and characterization of the identified groups of nodes (proteins) with the help of standard annotation databases listing known functions.
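
    The core idea of the Clique Percolation Method, merging k-cliques that share k-1 nodes into communities that may overlap in nodes, can be sketched in a few lines. This is a toy stdlib-only implementation for illustration; the cited tool at the URL above is the reference implementation.

```python
from itertools import combinations

def cliques_of_size(adj, k):
    """All k-cliques in an undirected graph given as an adjacency dict."""
    nodes = sorted(adj)
    return [frozenset(c) for c in combinations(nodes, k)
            if all(v in adj[u] for u, v in combinations(c, 2))]

def k_clique_communities(adj, k):
    """CPM sketch: two k-cliques are adjacent if they share k-1 nodes;
    a community is the union of all cliques in one connected chain."""
    cliques = cliques_of_size(adj, k)
    parent = list(range(len(cliques)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(cliques)), 2):
        if len(cliques[i] & cliques[j]) >= k - 1:
            parent[find(i)] = find(j)
    groups = {}
    for i, c in enumerate(cliques):
        groups.setdefault(find(i), set()).update(c)
    return [frozenset(g) for g in groups.values()]

# Two triangles sharing only node "c": sharing fewer than k-1 nodes, they
# form two separate communities that overlap in "c".
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d", "e"},
       "d": {"c", "e"}, "e": {"c", "d"}}
comms = k_clique_communities(adj, 3)
```

    Node "c" belongs to both communities, which is exactly the overlap that clique percolation allows and partition-based clustering forbids.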

  6. Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies

    DTIC Science & Technology

    2006-07-01

    and the use of lightweight portable robotic sensor platforms. 5 robotics has reached a point where some generalities of HRI transcend specific...displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor ...tasks that do not involve mobility and usually involve camera control or data fusion from sensors Active search: Search tasks that involve mobility

  7. Generating a fault-tolerant global clock using high-speed control signals for the MetaNet architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ofek, Y.

    1994-05-01

This work describes a new technique, based on exchanging control signals between neighboring nodes, for constructing a stable and fault-tolerant global clock in a distributed system with an arbitrary topology. It is shown that it is possible to construct a global clock reference with a time step that is much smaller than the propagation delay over the network's links. The synchronization algorithm ensures that the global clock 'tick' has a stable periodicity, and therefore, it is possible to tolerate failures of links and clocks that operate faster and/or slower than nominally specified, as well as hard failures. The approach taken in this work is to generate a global clock from the ensemble of the local transmission clocks and not to directly synchronize these high-speed clocks. The steady-state algorithm, which generates the global clock, is executed in hardware by the network interface of each node. At the network interface, it is possible to measure accurately the propagation delay between neighboring nodes with a small error or uncertainty and thereby to achieve global synchronization that is proportional to these error measurements. It is shown that the local clock drift (or rate uncertainty) has only a secondary effect on the maximum global clock rate. The synchronization algorithm can tolerate any physical failure. 18 refs.
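
    The flavor of the scheme, a node advancing its tick only in step with its neighbors so no local clock can run away from the ensemble, can be sketched as a toy lockstep simulation. This is purely illustrative; the real mechanism runs in hardware at the network interface and accounts for propagation delays and failures.

```python
def run_global_clock(neighbors, steps):
    """Toy global-tick generation: each round, a node advances its tick
    only if no neighbor lags behind it, bounding the tick skew across
    the network regardless of which nodes get scheduled."""
    tick = {n: 0 for n in neighbors}
    for _ in range(steps):
        advance = [n for n in neighbors
                   if all(tick[m] >= tick[n] for m in neighbors[n])]
        for n in advance:
            tick[n] += 1
    return tick

# A three-node line topology: A - B - C.
net = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
final = run_global_clock(net, 10)
```

    Because every advance is gated on the neighbors' current ticks, the nodes stay in lockstep, which is the coupling that makes the ensemble clock stable.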

  8. Full Text and Figure Display Improves Bioscience Literature Search

    PubMed Central

    Divoli, Anna; Wooldridge, Michael A.; Hearst, Marti A.

    2010-01-01

    When reading bioscience journal articles, many researchers focus attention on the figures and their captions. This observation led to the development of the BioText literature search engine [1], a freely available Web-based application that allows biologists to search over the contents of Open Access Journals, and see figures from the articles displayed directly in the search results. This article presents a qualitative assessment of this system in the form of a usability study with 20 biologist participants using and commenting on the system. 19 out of 20 participants expressed a desire to use a bioscience literature search engine that displays articles' figures alongside the full text search results. 15 out of 20 participants said they would use a caption search and figure display interface either frequently or sometimes, while 4 said rarely and 1 said undecided. 10 out of 20 participants said they would use a tool for searching the text of tables and their captions either frequently or sometimes, while 7 said they would use it rarely if at all, 2 said they would never use it, and 1 was undecided. This study found evidence, supporting results of an earlier study, that bioscience literature search systems such as PubMed should show figures from articles alongside search results. It also found evidence that full text and captions should be searched along with the article title, metadata, and abstract. Finally, for a subset of users and information needs, allowing for explicit search within captions for figures and tables is a useful function, but it is not entirely clear how to cleanly integrate this within a more general literature search interface. Such a facility supports Open Access publishing efforts, as it requires access to full text of documents and the lifting of restrictions in order to show figures in the search interface. PMID:20418942

  9. Searching Databases without Query-Building Aids: Implications for Dyslexic Users

    ERIC Educational Resources Information Center

    Berget, Gerd; Sandnes, Frode Eika

    2015-01-01

    Introduction: Few studies document the information searching behaviour of users with cognitive impairments. This paper therefore addresses the effect of dyslexia on information searching in a database with no tolerance for spelling errors and no query-building aids. The purpose was to identify effective search interface design guidelines that…

  10. Geologic Materials Center - Inventory | Alaska Division of Geological &

    Science.gov Websites

Inventory Search Find GMC Inventory Samples The search interface functionality is dependent on the

  11. (Meta)Search like Google

    ERIC Educational Resources Information Center

    Rochkind, Jonathan

    2007-01-01

    The ability to search and receive results in more than one database through a single interface--or metasearch--is something many users want. Google Scholar--the search engine of specifically scholarly content--and library metasearch products like Ex Libris's MetaLib, Serials Solution's Central Search, WebFeat, and products based on MuseGlobal used…

  12. Evolvable Neural Software System

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A.

    2009-01-01

The Evolvable Neural Software System (ENSS) is composed of sets of Neural Basis Functions (NBFs), which can be totally autonomously created and removed according to the changing needs and requirements of the software system. The resulting structure is both hierarchical and self-similar in that a given set of NBFs may have a ruler NBF, which in turn communicates with other sets of NBFs. These sets of NBFs may function as nodes to a ruler node, which are also NBF constructs. In this manner, the synthetic neural system can exhibit the complexity, three-dimensional connectivity, and adaptability of biological neural systems. An added advantage of ENSS over a natural neural system is its ability to modify its core genetic code in response to environmental changes as reflected in needs and requirements. The neural system is fully adaptive and evolvable and is trainable before release. It continues to rewire itself while on the job. The NBF is a unique, bilevel intelligence neural system composed of a higher-level heuristic neural system (HNS) and a lower-level, autonomic neural system (ANS). Taken together, the HNS and the ANS give each NBF the complete capabilities of a biological neural system to match sensory inputs to actions. Another feature of the NBF is the Evolvable Neural Interface (ENI), which links the HNS and ANS. The ENI solves the interface problem between these two systems by actively adapting and evolving from a primitive initial state (a Neural Thread) to a complicated, operational ENI and successfully adapting to a training sequence of sensory input. This simulates the adaptation of a biological neural system in a developmental phase. Within the greater multi-NBF and multi-node ENSS, self-similar ENIs provide the basis for inter-NBF and inter-node connectivity.

  13. Hybrid networking sensing system for structural health monitoring of a concrete cable-stayed bridge

    NASA Astrophysics Data System (ADS)

    Torbol, Marco; Kim, Sehwan; Chien, Ting-Chou; Shinozuka, Masanobu

    2013-04-01

The purpose of this study is remote structural health monitoring to identify the torsional natural frequencies and mode shapes of a concrete cable-stayed bridge using a hybrid networking sensing system. The system consists of one data aggregation unit, which is daisy-chained to one or more sensing nodes. A wireless interface is used between the data aggregation units, whereas a wired interface is used between a data aggregation unit and the sensing nodes. Each sensing node is equipped with high-precision MEMS accelerometers with a sampling frequency adjustable from 0.2 Hz to 1.2 kHz. The entire system was installed inside the reinforced concrete box-girder deck of Hwamyung Bridge, a cable-stayed bridge in Busan, South Korea, to protect the system from the harsh environmental conditions. This deployment makes wireless communication a challenge due to signal losses and high levels of attenuation. To address these issues, the concept of a hybrid networking system is introduced together with an efficient local power distribution technique. The theoretical communication range of Wi-Fi is 100 m; inside the concrete girder, however, peer-to-peer wireless communication cannot exceed about 20 m, and the distance is further reduced by the line of sight between the antennas. The wired daisy-chained connection between sensing nodes is nevertheless useful because the data aggregation unit can be placed in the optimal location for transmission. To overcome the limitation of the wireless communication range, we adopt a high-gain antenna that extends the wireless communication distance to 50 m, helped further by a multi-hopping data communication protocol. The 4G modem, which allows remote access to the system, is the only component exposed to the external environment.

  14. Roles of Formin Nodes and Myosin Motor Activity in Mid1p-dependent Contractile-Ring Assembly during Fission Yeast Cytokinesis

    PubMed Central

    Coffman, Valerie C.; Nile, Aaron H.; Lee, I-Ju; Liu, Huayang

    2009-01-01

    Two prevailing models have emerged to explain the mechanism of contractile-ring assembly during cytokinesis in the fission yeast Schizosaccharomyces pombe: the spot/leading cable model and the search, capture, pull, and release (SCPR) model. We tested some of the basic assumptions of the two models. Monte Carlo simulations of the SCPR model require that the formin Cdc12p is present in >30 nodes from which actin filaments are nucleated and captured by myosin-II in neighboring nodes. The force produced by myosin motors pulls the nodes together to form a compact contractile ring. Live microscopy of cells expressing Cdc12p fluorescent fusion proteins shows for the first time that Cdc12p localizes to a broad band of 30–50 dynamic nodes, where actin filaments are nucleated in random directions. The proposed progenitor spot, essential for the spot/leading cable model, usually disappears without nucleating actin filaments. α-Actinin ain1 deletion cells form a normal contractile ring through nodes in the absence of the spot. Myosin motor activity is required to condense the nodes into a contractile ring, based on slower or absent node condensation in myo2-E1 and UCS rng3-65 mutants. Taken together, these data provide strong support for the SCPR model of contractile-ring formation in cytokinesis. PMID:19864459

  15. Tendon Healing in Bone Tunnel after Human Anterior Cruciate Ligament Reconstruction: A Systematic Review of Histological Results.

    PubMed

    Lu, Hongbin; Chen, Can; Xie, Shanshan; Tang, Yifu; Qu, Jin

    2018-05-21

Most studies concerning tendon healing and incorporation into bone are based mainly on animal studies due to the invasive nature of the biopsy procedure. The evidence concerning tendon graft healing to bone in humans is limited to several case series and case reports, and the healing process is therefore difficult to understand. A computerized search using relevant search terms was performed in the PubMed, EMBASE, Scopus, and Cochrane Library databases, as well as a manual search of reference lists. Searches were limited to studies that investigated tendon graft healing to bone by histologic examination after anterior cruciate ligament (ACL) reconstruction with hamstring grafts. Ten studies were determined to be eligible for this systematic review. Thirty-seven cases were extracted from the included studies. Most studies showed that a fibrovascular interface forms at the tendon-bone interface at the early stage, and a fibrous indirect interface with Sharpey-like fibers can be expected at the later stage. Cartilage-like tissue at the tendon graft-bone interface was reported in three studies. The tendon graft failed to integrate with the surrounding bone in 10 of the 37 cases; unexpectedly, a suspensory type of fixation was used in these failure cases. An indirect type of insertion with Sharpey-like fibers at the tendon-bone interface can be expected after ACL reconstruction with hamstring grafts, and regional cartilage-like tissue may occasionally form at the tendon-bone interface. The underlying tendon-to-bone healing process is far from understood in human hamstring ACL reconstruction, and further human studies are needed.

  16. A Question of Interface Design: How Do Online Service GUIs Measure Up?

    ERIC Educational Resources Information Center

    Head, Alison J.

    1997-01-01

    Describes recent improvements in graphical user interfaces (GUIs) offered by online services. Highlights include design considerations, including computer engineering capabilities and users' abilities; fundamental GUI design principles; user empowerment; visual communication and interaction; and an evaluation of online search interfaces. (LRW)

  17. Decohesion Elements using Two and Three-Parameter Mixed-Mode Criteria

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.

    2001-01-01

    An eight-node decohesion element implementing different criteria to predict delamination growth under mixed-mode loading is proposed. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a softening law to track the damage state of the interface. The power law criterion and a three-parameter mixed-mode criterion are used to predict delamination growth. The accuracy of the predictions is evaluated in single mode delamination and in the mixed-mode bending tests.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damiani, D.; Dubrovin, M.; Gaponenko, I.

Psana (Photon Science Analysis) is a software package used to analyze data produced by the Linac Coherent Light Source X-ray free-electron laser at the SLAC National Accelerator Laboratory. The project began in 2011, is written primarily in C++ with some Python, and provides user interfaces in both C++ and Python; most users use the Python interface. The same code can be run in real time while data are being taken as well as offline, executing on many nodes/cores using MPI for parallelization. It is publicly available and installable on the RHEL5/6/7 operating systems.
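
    The usual MPI pattern for this kind of analysis, each rank processing a disjoint round-robin slice of the events, can be sketched without psana itself. This is illustrative only; psana's own MPI support handles the partitioning internally.

```python
def events_for_rank(n_events, rank, size):
    """Round-robin partition of event indices across MPI ranks, a common
    pattern for parallel analysis of a shot-indexed dataset."""
    return list(range(rank, n_events, size))

# With 10 events on 4 ranks, every event lands on exactly one rank.
parts = [events_for_rank(10, r, 4) for r in range(4)]
```

    In a real job, `rank` and `size` would come from the MPI communicator (e.g. `comm.Get_rank()` / `comm.Get_size()` with mpi4py), and each rank would reduce its partial results at the end.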

  19. A Registry for Planetary Data Tools and Services

    NASA Astrophysics Data System (ADS)

    Hardman, S.; Cayanan, M.; Hughes, J. S.; Joyner, R.; Crichton, D.; Law, E.

    2018-04-01

    The PDS Engineering Node has upgraded a prototype Tool Registry developed by the International Planetary Data Alliance to increase the visibility and enhance functionality along with incorporating the registered tools into PDS data search results.

  20. The diagnostic performance of shear wave elastography for malignant cervical lymph nodes: A systematic review and meta-analysis.

    PubMed

    Suh, Chong Hyun; Choi, Young Jun; Baek, Jung Hwan; Lee, Jeong Hyun

    2017-01-01

To evaluate the diagnostic performance of shear wave elastography for malignant cervical lymph nodes, we searched the Ovid-MEDLINE and EMBASE databases for published studies regarding the use of shear wave elastography for diagnosing malignant cervical lymph nodes. The diagnostic performance of shear wave elastography was assessed using bivariate modelling and hierarchical summary receiver operating characteristic modelling. Meta-regression analysis and subgroup analysis according to acoustic radiation force impulse imaging (ARFI) and supersonic shear imaging (SSI) were also performed. Eight eligible studies, with a total sample size of 481 patients and 647 cervical lymph nodes, were included. Shear wave elastography showed a summary sensitivity of 81 % (95 % CI: 72-88 %) and specificity of 85 % (95 % CI: 70-93 %). Meta-regression analysis revealed that the prevalence of malignant lymph nodes was a significant factor affecting study heterogeneity (p < .01). According to the subgroup analysis, the summary estimates of sensitivity and specificity did not differ between ARFI and SSI (p = .93). Shear wave elastography is an acceptable imaging modality for diagnosing malignant cervical lymph nodes, and we believe that ARFI and SSI may have complementary roles. • Shear wave elastography is an acceptable modality for diagnosing malignant cervical lymph nodes. • Shear wave elastography demonstrated a summary sensitivity of 81 % and specificity of 85 %. • ARFI and SSI have complementary roles for diagnosing malignant cervical lymph nodes.

  1. Tags Extraction from Spatial Documents in Search Engines

    NASA Astrophysics Data System (ADS)

    Borhaninejad, S.; Hakimpour, F.; Hamzei, E.

    2015-12-01

Nowadays, selective access to information on the Web is provided by search engines, but when the data includes spatial information the search task becomes more complex and search engines require special capabilities. The purpose of this study is to extract the information that lies in spatial documents. To that end, we implement and evaluate information extraction from GML documents and a retrieval method in an integrated approach. Our proposed system consists of three components: crawler, database and user interface. In the crawler component, GML documents are discovered and their text is parsed for information extraction and storage. The database component is responsible for indexing the information collected by the crawlers. Finally, the user interface component provides the interaction between system and user. We have implemented this system as a pilot on an application server as a simulation of the Web. As a spatial search engine, our system provides searching capability throughout GML documents, and thus an important step toward improving the efficiency of search engines has been taken.
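
    The crawler's extraction step can be sketched for the simplest case, pulling coordinate pairs from gml:pos elements so the indexer can record a document's spatial footprint. This is a minimal sketch; real GML documents also use gml:posList, gml:coordinates, and coordinate reference system attributes.

```python
import xml.etree.ElementTree as ET

GML_NS = "{http://www.opengis.net/gml}"

def extract_positions(gml_text):
    """Parse a GML fragment and return its (x, y) coordinate pairs from
    all gml:pos elements."""
    root = ET.fromstring(gml_text)
    positions = []
    for pos in root.iter(GML_NS + "pos"):
        x, y = map(float, pos.text.split())
        positions.append((x, y))
    return positions

doc = """<gml:Point xmlns:gml="http://www.opengis.net/gml">
           <gml:pos>51.5 -0.12</gml:pos>
         </gml:Point>"""
pts = extract_positions(doc)
```

    The extracted coordinates would then go into the database component's spatial index alongside the document's text terms.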

  2. HTTP-based Search and Ordering Using ECHO's REST-based and OpenSearch APIs

    NASA Astrophysics Data System (ADS)

    Baynes, K.; Newman, D. J.; Pilone, D.

    2012-12-01

    Metadata is an important entity in the process of cataloging, discovering, and describing Earth science data. NASA's Earth Observing System (EOS) ClearingHOuse (ECHO) acts as the core metadata repository for EOSDIS data centers, providing a centralized mechanism for metadata and data discovery and retrieval. By supporting both the ESIP's Federated Search API and its own search and ordering interfaces, ECHO provides multiple capabilities that facilitate ease of discovery and access to its ever-increasing holdings. Users are able to search and export metadata in a variety of formats including ISO 19115, json, and ECHO10. This presentation aims to inform technically savvy clients interested in automating search and ordering of ECHO's metadata catalog. The audience will be introduced to practical and applicable examples of end-to-end workflows that demonstrate finding, sub-setting and ordering data that is bound by keyword, temporal and spatial constraints. Interaction with the ESIP OpenSearch Interface will be highlighted, as will ECHO's own REST-based API.
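
    A workflow like the one described, a granule search bound by keyword, spatial and temporal constraints, can be sketched as URL construction. The endpoint and parameter names below follow common OpenSearch conventions but are illustrative assumptions, not ECHO's exact API.

```python
from urllib.parse import urlencode

def build_search_url(base, keyword, bbox, start, end, fmt="atom"):
    """Assemble an OpenSearch-style query URL with keyword, bounding-box
    (W,S,E,N), and temporal-range constraints."""
    params = {
        "keyword": keyword,
        "boundingBox": ",".join(str(v) for v in bbox),
        "temporal": f"{start},{end}",
        "clientId": "demo-client",  # hypothetical client identifier
    }
    return f"{base}/granules.{fmt}?" + urlencode(params)

url = build_search_url("https://api.example.gov/opensearch",
                       "MODIS", (-10, 35, 5, 45),
                       "2012-01-01T00:00:00Z", "2012-12-31T23:59:59Z")
```

    A client would issue this request, page through the Atom results, and follow each entry's order or download links, which is the end-to-end find/subset/order flow the presentation demonstrates.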

  3. Environmental Data Store: A Web-Based System Providing Management and Exploitation for Multi-Data-Type Environmental Data

    NASA Astrophysics Data System (ADS)

    Ji, P.; Piasecki, M.

    2012-12-01

With the rapid growth in data volumes, data diversity and data demands from multi-disciplinary research efforts, data management and exploitation increasingly face significant challenges for the environmental scientific community. We describe the Environmental Data Store (EDS), a web-based system we are developing, following an open-source implementation, to manage and exploit multi-data-type environmental data. EDS provides repository services for six fundamental data types, which meet the demands of multi-disciplinary environmental research: a) Time Series Data, b) GeoSpatial Data, c) Digital Data, d) Ex-Situ Sampling Data, e) Modeling Data, and f) Raster Data. Through its data portal, EDS allows for efficient consumption of these six types of data placed in the data pool, which is made up of different data nodes corresponding to the different data types, including iRODS, ODM, THREDDS, ESSDB, and GeoServer. The EDS data portal offers a unified submission interface for the above data types; provides fully integrated, scalable search across content from the different data systems; and also features mapping, analysis, exporting and visualization through integration with other software. EDS uses a number of developed systems, follows widely used data standards, and highlights thematic, semantic, and syntactic support for submission and search, in order to advance multi-disciplinary environmental research. This system will be installed and developed at the CrossRoads initiative at the City College of New York.

  4. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. 
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  5. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Bockelman, B.; Blomer, J.

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called 'alien cache' to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. 
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  6. Lymph node segmentation on CT images by a shape model guided deformable surface method

    NASA Astrophysics Data System (ADS)

    Maleike, Daniel; Fabel, Michael; Tetzlaff, Ralf; von Tengg-Kobligk, Hendrik; Heimann, Tobias; Meinzer, Hans-Peter; Wolf, Ivo

    2008-03-01

    With many tumor entities, quantitative assessment of lymph node growth over time is important to make therapy choices or to evaluate new therapies. The clinical standard is to document diameters on transversal slices, which is not the best measure for a volume. We present a new algorithm to segment (metastatic) lymph nodes and evaluate the algorithm with 29 lymph nodes in clinical CT images. The algorithm is based on a deformable surface search, which uses statistical shape models to restrict free deformation. To model lymph nodes, we construct an ellipsoid shape model, which strives for a surface with strong gradients and user-defined gray values. The algorithm is integrated into an application, which also allows interactive correction of the segmentation results. The evaluation shows that the algorithm gives good results in the majority of cases and is comparable to time-consuming manual segmentation. The median volume error was 10.1% of the reference volume before and 6.1% after manual correction. Integrated into an application, it is possible to perform lymph node volumetry for a whole patient within the 10- to 15-minute time limit imposed by clinical routine.

  7. JAIL: a structure-based interface library for macromolecules.

    PubMed

    Günther, Stefan; von Eichborn, Joachim; May, Patrick; Preissner, Robert

    2009-01-01

    The increasing number of solved macromolecules provides a substantial number of 3D interfaces, if all types of molecular contacts are being considered. JAIL annotates three different kinds of macromolecular interfaces: those between interacting protein domains, interfaces of different protein chains and interfaces between proteins and nucleic acids. This results in a total number of about 184,000 database entries. All the interfaces can easily be identified by a detailed search form or by a hierarchical tree that describes the protein domain architectures classified by the SCOP database. Visual inspection of the interfaces is possible via an interactive protein viewer. Furthermore, large-scale analyses are supported by both sequential and structural clustering. Similar interfaces as well as non-redundant interfaces can be easily picked out. Additionally, the sequential conservation of binding sites was also included in the database and is retrievable via Jmol. A comprehensive download section allows the composition of representative data sets with user-defined parameters. The huge data set in combination with various search options allows a comprehensive view on all interfaces between macromolecules included in the Protein Data Bank (PDB). The download of the data sets supports numerous further investigations in macromolecular recognition. JAIL is publicly available at http://bioinformatics.charite.de/jail.

  8. An Application of the A* Search to Trajectory Optimization

    DTIC Science & Technology

    1990-05-11

    linearized model of orbital motion called the Clohessy-Wiltshire Equations and a node search technique called A*. The planner discussed in this thesis starts...states while transfer time is left unspecified. HILL'S (CLOHESSY-WILTSHIRE) EQUATIONS The Euler-Hill equations describe... Clohessy-Wiltshire equations. The coordinate system used in this thesis is commonly referred to as Local Vertical, Local Horizontal or LVLH reference frame
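
    The A* node search named in the snippet above can be sketched generically; the grid problem below is only a toy stand-in (not the thesis's maneuver-node graph), and all names are illustrative:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A*: neighbors(n) yields (next_node, step_cost);
    heuristic(n) must never overestimate the true remaining cost."""
    open_set = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_set, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None

# Toy unit-cost grid with a Manhattan-distance heuristic (admissible here):
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        yield (x + dx, y + dy), 1.0

manhattan = lambda p: abs(p[0] - 3) + abs(p[1] - 2)
cost, path = a_star((0, 0), (3, 2), grid_neighbors, manhattan)  # cost -> 5.0
```

In the trajectory-planning setting, the neighbor function would instead generate reachable maneuver nodes and the step cost would reflect delta-v.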

  9. Monte Carlo tree search-based non-coplanar trajectory design for station parameter optimized radiation therapy (SPORT).

    PubMed

    Dong, Peng; Liu, Hongcheng; Xing, Lei

    2018-06-04

    An important yet challenging problem in LINAC-based rotational arc radiation therapy is the design of beam trajectory, which requires simultaneous consideration of delivery efficiency and final dose distribution. In this work, we propose a novel trajectory selection strategy by developing a Monte Carlo tree search (MCTS) algorithm during the beam trajectory selection process. Methods: To search through the vast number of possible trajectories, the MCTS algorithm was implemented. In this approach, a candidate trajectory is explored by starting from a leaf node and sequentially examining the next level of linked nodes with consideration of geometric and physical constraints. The maximum Upper Confidence Bound for Trees, which is a function of average objective function value and the number of times the node under testing has been visited, was employed to intelligently select the trajectory. For each candidate trajectory, we run an inverse fluence map optimization with an infinity norm regularization. The ranking of the plan as measured by the corresponding objective function value was then fed back to update the statistics of the nodes on the trajectory. The method was evaluated with a chest wall and a brain case, and the results were compared with the coplanar and noncoplanar 4π beam configurations. Results: For both clinical cases, the MCTS method found effective and easy-to-deliver trajectories within an hour. As compared with the coplanar plans, it offers much better sparing of the OARs while maintaining the PTV coverage. The quality of the MCTS-generated plan is found to be comparable to the 4π plans. Conclusion: AI based on MCTS is valuable to facilitate the design of beam trajectory and paves the way for future clinical use of non-coplanar treatment delivery. © 2018 Institute of Physics and Engineering in Medicine.
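
    The selection rule described above (average objective value plus a bonus that shrinks with visit count) is the standard UCT formula; a minimal sketch, with hypothetical beam-node names and statistics:

```python
import math

def uct_score(value_sum, visits, parent_visits, c=1.4):
    """Upper Confidence Bound for Trees: exploitation term (average value)
    plus an exploration bonus that decays as the node is visited more."""
    if visits == 0:
        return float("inf")  # unvisited children are always tried first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Choose which child node of the trajectory tree to expand next.
# Each child carries (accumulated objective score, visit count); names
# and numbers here are purely illustrative.
children = {"gantry_30": (2.4, 3), "gantry_60": (1.1, 1), "gantry_90": (0.0, 0)}
parent_visits = sum(v for _, v in children.values())
best = max(children, key=lambda k: uct_score(*children[k], parent_visits))
# best -> "gantry_90", since unvisited nodes score infinity
```

After the inverse fluence optimization scores a completed trajectory, that score would be added back into `value_sum` and `visits` along the path, as the abstract describes.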

  10. Target Control in Logical Models Using the Domain of Influence of Nodes.

    PubMed

    Yang, Gang; Gómez Tejeda Zañudo, Jorge; Albert, Réka

    2018-01-01

    Dynamical models of biomolecular networks are successfully used to understand the mechanisms underlying complex diseases and to design therapeutic strategies. Network control and its special case of target control, is a promising avenue toward developing disease therapies. In target control it is assumed that a small subset of nodes is most relevant to the system's state and the goal is to drive the target nodes into their desired states. An example of target control would be driving a cell to commit to apoptosis (programmed cell death). From the experimental perspective, gene knockout, pharmacological inhibition of proteins, and providing sustained external signals are among practical intervention techniques. We identify methodologies to use the stabilizing effect of sustained interventions for target control in Boolean network models of biomolecular networks. Specifically, we define the domain of influence (DOI) of a node (in a certain state) to be the nodes (and their corresponding states) that will be ultimately stabilized by the sustained state of this node regardless of the initial state of the system. We also define the related concept of the logical domain of influence (LDOI) of a node, and develop an algorithm for its identification using an auxiliary network that incorporates the regulatory logic. This way a solution to the target control problem is a set of nodes whose DOI can cover the desired target node states. We perform greedy randomized adaptive search in node state space to find such solutions. We apply our strategy to in silico biological network models of real systems to demonstrate its effectiveness.
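
    A minimal sketch of the LDOI idea from the abstract (hold one node in a fixed state and record every node state that becomes logically forced, regardless of the rest of the system), using a hypothetical three-rule Boolean network rather than the authors' auxiliary-network algorithm:

```python
def logical_domain_of_influence(rules, fixed_node, fixed_state):
    """Each rule maps a dict of currently known node states to True/False,
    or None when the known inputs do not yet determine the output.
    Propagate until no further node state is forced."""
    known = {fixed_node: fixed_state}
    changed = True
    while changed:
        changed = False
        for node, rule in rules.items():
            if node in known:
                continue
            forced = rule(known)
            if forced is not None:
                known[node] = forced
                changed = True
    return {n: s for n, s in known.items() if n != fixed_node}

# Toy network (illustrative): B = A, C = A and B, D = not C
rules = {
    "B": lambda k: k.get("A"),
    "C": lambda k: (k["A"] and k["B"]) if ("A" in k and "B" in k)
         else (False if (k.get("A") is False or k.get("B") is False) else None),
    "D": lambda k: (not k["C"]) if "C" in k else None,
}
doi_of_A_on = logical_domain_of_influence(rules, "A", True)
# -> {"B": True, "C": True, "D": False}
```

A target-control solution in this framing is a set of fixed node states whose combined domain of influence covers the desired target states.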

  11. The Feasibility and Oncological Safety of Axillary Reverse Mapping in Patients with Breast Cancer: A Systematic Review and Meta-Analysis of Prospective Studies

    PubMed Central

    Han, Chao; Yang, Ben; Zuo, Wen-Shu; Zheng, Gang; Yang, Li; Zheng, Mei-Zhu

    2016-01-01

    Objective The axillary reverse mapping (ARM) technique has recently been developed to prevent lymphedema by preserving the arm lymphatic drainage during sentinel lymph node biopsy (SLNB) or axillary lymph node dissection (ALND) procedures. The objective of this systematic review and meta-analysis was to evaluate the feasibility and oncological safety of ARM. Methods We searched Medline, Embase, Web of science, Scopus, and the Cochrane Library for relevant prospective studies. The identification rate of ARM nodes, the crossover rate of SLN-ARM nodes, the proportion of metastatic ARM nodes, and the incidence of complications were pooled into meta-analyses by the random-effects model. Results A total of 24 prospective studies were included into meta-analyses, of which 11 studies reported ARM during SLNB, and 18 studies reported ARM during ALND. The overall identification rate of ARM nodes was 38.2% (95% CI 32.9%-43.8%) during SLNB and 82.8% (78.0%-86.6%) during ALND, respectively. The crossover rate of SLN-ARM nodes was 19.6% (95% CI 14.4%-26.1%). The metastatic rate of ARM nodes was 16.9% (95% CI 14.2%-20.1%). The pooled incidence of lymphedema was 4.1% (95% CI 2.9%-5.9%) for patients undergoing ARM procedure. Conclusions The ARM procedure was feasible during ALND. Nevertheless, it was restricted by low identification rate of ARM nodes during SLNB. ARM was beneficial for preventing lymphedema. However, this technique should be performed with caution given the possibility of crossover SLN-ARM nodes and metastatic ARM nodes. ARM appeared to be unsuitable for patients with clinically node-positive breast cancer due to oncological safety concerns. PMID:26919589
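
    The pooled rates above come from a random-effects model; a minimal DerSimonian-Laird pooling of proportions (raw scale, whereas published meta-analyses often work on a logit scale), with purely hypothetical study counts:

```python
def pooled_proportion_dl(events, totals):
    """DerSimonian-Laird random-effects pooled proportion (sketch).
    Per-study variance uses p(1-p)/n; tau^2 is the between-study variance."""
    p = [e / n for e, n in zip(events, totals)]
    v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]
    w = [1 / vi for vi in v]                       # fixed-effect weights
    sw = sum(w)
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sw
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))  # Cochran's Q
    k = len(p)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)             # DL estimator, floored at 0
    w_re = [1 / (vi + tau2) for vi in v]           # random-effects weights
    return sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)

# Hypothetical data: three studies reporting ARM-node events out of totals
pooled = pooled_proportion_dl([12, 30, 8], [40, 90, 25])
```

The random-effects weights down-weight small studies less aggressively than fixed-effect weights when between-study heterogeneity (tau^2) is present.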

  12. Visualization of usability and functionality of a professional website through web-mining.

    PubMed

    Jones, Josette F; Mahoui, Malika; Gopa, Venkata Devi Pragna

    2007-10-11

    Functional interface design requires understanding of the information system structure and the user. Web logs record user interactions with the interface, and thus provide some insight into user search behavior and efficiency of the search process. The present study uses a data-mining approach with techniques such as association rules, clustering and classification, to visualize the usability and functionality of a digital library through in-depth analyses of web logs.
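
    One of the techniques named above, association rules, reduces to support and confidence counts over user sessions; a minimal sketch with hypothetical page names (not the study's data):

```python
def support_confidence(sessions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent,
    where each session is the set of pages visited by one user."""
    n = len(sessions)
    both = sum(1 for s in sessions if antecedent <= s and consequent <= s)
    ante = sum(1 for s in sessions if antecedent <= s)
    support = both / n
    confidence = both / ante if ante else 0.0
    return support, confidence

# Toy web-log sessions (illustrative):
sessions = [
    {"home", "search", "results"},
    {"home", "search", "results", "detail"},
    {"home", "help"},
    {"search", "results", "detail"},
]
sup, conf = support_confidence(sessions, {"search"}, {"detail"})
# sup -> 0.5 (2 of 4 sessions), conf -> 2/3 (2 of the 3 "search" sessions)
```

Rules with high confidence but low support often point at niche navigation paths, which is the kind of pattern such usability analyses look for.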

  13. CodeSlinger: a case study in domain-driven interactive tool design for biomedical coding scheme exploration and use.

    PubMed

    Flowers, Natalie L

    2010-01-01

    CodeSlinger is a desktop application that was developed to aid medical professionals in the intertranslation, exploration, and use of biomedical coding schemes. The application was designed to provide a highly intuitive, easy-to-use interface that simplifies a complex business problem: a set of time-consuming, laborious tasks that were regularly performed by a group of medical professionals involving manually searching coding books, searching the Internet, and checking documentation references. A workplace observation session with a target user revealed the details of the current process and a clear understanding of the business goals of the target user group. These goals drove the design of the application's interface, which centers on searches for medical conditions and displays the codes found in the application's database that represent those conditions. The interface also allows the exploration of complex conceptual relationships across multiple coding schemes.

  14. Patient-Centered Tools for Medication Information Search

    PubMed Central

    Wilcox, Lauren; Feiner, Steven; Elhadad, Noémie; Vawdrey, David; Tran, Tran H.

    2016-01-01

    Recent research focused on online health information seeking highlights a heavy reliance on general-purpose search engines. However, current general-purpose search interfaces do not necessarily provide adequate support for non-experts in identifying suitable sources of health information. Popular search engines have recently introduced search tools in their user interfaces for a range of topics. In this work, we explore how such tools can support non-expert, patient-centered health information search. Scoping the current work to medication-related search, we report on findings from a formative study focused on the design of patient-centered, medication-information search tools. Our study included qualitative interviews with patients, family members, and domain experts, as well as observations of their use of Remedy, a technology probe embodying a set of search tools. Post-operative cardiothoracic surgery patients and their visiting family members used the tools to find information about their hospital medications and were interviewed before and after their use. Domain experts conducted similar search tasks and provided qualitative feedback on their preferences and recommendations for designing these tools. Findings from our study suggest the importance of four valuation principles underlying our tools: credibility, readability, consumer perspective, and topical relevance. PMID:28163972

  15. Patient-Centered Tools for Medication Information Search.

    PubMed

    Wilcox, Lauren; Feiner, Steven; Elhadad, Noémie; Vawdrey, David; Tran, Tran H

    2014-05-20

    Recent research focused on online health information seeking highlights a heavy reliance on general-purpose search engines. However, current general-purpose search interfaces do not necessarily provide adequate support for non-experts in identifying suitable sources of health information. Popular search engines have recently introduced search tools in their user interfaces for a range of topics. In this work, we explore how such tools can support non-expert, patient-centered health information search. Scoping the current work to medication-related search, we report on findings from a formative study focused on the design of patient-centered, medication-information search tools. Our study included qualitative interviews with patients, family members, and domain experts, as well as observations of their use of Remedy, a technology probe embodying a set of search tools. Post-operative cardiothoracic surgery patients and their visiting family members used the tools to find information about their hospital medications and were interviewed before and after their use. Domain experts conducted similar search tasks and provided qualitative feedback on their preferences and recommendations for designing these tools. Findings from our study suggest the importance of four valuation principles underlying our tools: credibility, readability, consumer perspective, and topical relevance.

  16. BioSearch: a semantic search engine for Bio2RDF

    PubMed Central

    Qiu, Honglei; Huang, Jiacheng

    2017-01-01

    Abstract Biomedical data are growing at an incredible pace and require substantial expertise to organize data in a manner that makes them easily findable, accessible, interoperable and reusable. Massive effort has been devoted to using Semantic Web standards and technologies to create a network of Linked Data for the life sciences, among others. However, while these data are accessible through programmatic means, effective user interfaces for non-experts to SPARQL endpoints are few and far between. Contributing to user frustrations is that data are not necessarily described using common vocabularies, thereby making it difficult to aggregate results, especially when distributed across multiple SPARQL endpoints. We propose BioSearch, a semantic search engine that uses ontologies to enhance federated query construction and organize search results. BioSearch also features a simplified query interface that allows users to optionally filter their keywords according to classes, properties and datasets. User evaluation demonstrated that BioSearch is more effective and usable than two state-of-the-art search and browsing solutions. Database URL: http://ws.nju.edu.cn/biosearch/ PMID:29220451

  17. The NCAR Digital Asset Services Hub (DASH): Implementing Unified Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Stott, D.; Worley, S. J.; Hou, C. Y.; Nienhouse, E.

    2017-12-01

    The National Center for Atmospheric Research (NCAR) Directorate created the Data Stewardship Engineering Team (DSET) to plan and implement an integrated single entry point for uniform digital asset discovery and access across the organization in order to improve the efficiency of access, reduce the costs, and establish the foundation for interoperability with other federated systems. This effort supports new policies included in federal funding mandates, NSF data management requirements, and journal citation recommendations. An inventory during the early planning stage identified diverse asset types across the organization that included publications, datasets, metadata, models, images, and software tools and code. The NCAR Digital Asset Services Hub (DASH) is being developed and phased in this year to improve the quality of users' experiences in finding and using these assets. DASH serves to provide engagement, training, search, and support through the following four nodes (see figure). DASH Metadata: DASH provides resources for creating and cataloging metadata to the NCAR Dialect, a subset of ISO 19115. NMDEdit, an editor based on a European open source application, has been configured for manual entry of NCAR metadata. CKAN, an open source data portal platform, harvests these XML records (along with records output directly from databases) from a Web Accessible Folder (WAF) on GitHub for validation. DASH Search: The NCAR Dialect metadata drives cross-organization search and discovery through CKAN, which provides the display interface of search results. DASH search will establish interoperability by facilitating metadata sharing with other federated systems. DASH Consulting: The DASH Data Curation & Stewardship Coordinator assists with Data Management (DM) Plan preparation and advises on Digital Object Identifiers. The coordinator arranges training sessions on the DASH metadata tools and DM planning, and provides one-on-one assistance as requested. 
DASH Repository: A repository is under development for NCAR datasets currently not in existing lab-managed archives. The DASH repository will be under NCAR governance and meet Trustworthy Repositories Audit & Certification (TRAC) requirements. This poster will highlight the processes, lessons learned, and current status of the DASH effort at NCAR.

  18. Cervical lymph node metastasis in adenoid cystic carcinoma of the major salivary glands.

    PubMed

    2017-02-01

    To verify the prevalence of cervical lymph node metastasis in adenoid cystic carcinoma of major salivary glands, and to establish recommendations for elective neck treatment. A search was conducted of the US National Library of Medicine database. Appropriate articles were selected from the abstracts, and the original publications were obtained to extract data. Among 483 cases of major salivary gland adenoid cystic carcinoma, a total of 90 (18.6 per cent) had cervical metastasis. The prevalence of positive nodes from adenoid cystic carcinoma was 14.5 per cent for parotid gland, 22.5 per cent for submandibular gland and 24.7 per cent for sublingual gland. Cervical lymph node metastasis occurred more frequently in patients with primary tumour stage T3-4 adenoid cystic carcinoma, and was usually located in levels II and III in the neck. Adenoid cystic carcinoma of the major salivary glands is associated with a significant prevalence of cervical node metastasis, and elective neck treatment is indicated for T3 and T4 primary tumours, as well as tumours with other histological risk factors.

  19. Linear magnetoconductivity in an intrinsic topological Weyl semimetal

    NASA Astrophysics Data System (ADS)

    Zhang, Song-Bo; Lu, Hai-Zhou; Shen, Shun-Qing

    2016-05-01

    Searching for the signature of the violation of chiral charge conservation in solids has inspired a growing passion for the magneto-transport in topological semimetals. One of the open questions is how the conductivity depends on magnetic fields in a semimetal phase when the Fermi energy crosses the Weyl nodes. Here, we study both the longitudinal and transverse magnetoconductivity of a topological Weyl semimetal near the Weyl nodes with the help of a two-node model that includes all the topological semimetal properties. In the semimetal phase, the Fermi energy crosses only the 0th Landau bands in magnetic fields. For a finite potential range of impurities, it is found that both the longitudinal and transverse magnetoconductivity are positive and linear at the Weyl nodes, leading to an anisotropic and negative magnetoresistivity. The longitudinal magnetoconductivity depends on the potential range of impurities. The longitudinal conductivity remains finite at zero field, even though the density of states vanishes at the Weyl nodes. This work establishes a relation between the linear magnetoconductivity and the intrinsic topological Weyl semimetal phase.

  20. Stanford Hardware Development Program

    NASA Technical Reports Server (NTRS)

    Peterson, A.; Linscott, I.; Burr, J.

    1986-01-01

    Architectures for high performance, digital signal processing, particularly for high resolution, wide band spectrum analysis were developed. These developments are intended to provide instrumentation for NASA's Search for Extraterrestrial Intelligence (SETI) program. The real-time signal-processing work is both formal and experimental. The efficient organization and optimal scheduling of signal processing algorithms were investigated. The work is complemented by efforts in processor architecture design and implementation. A high resolution, multichannel spectrometer that incorporates special purpose microcoded signal processors is being tested. A general purpose signal processor for the data from the multichannel spectrometer was designed to function as the processing element in a highly concurrent machine. The processor performance required for the spectrometer is in the range of 1000 to 10,000 million instructions per second (MIPS). Multiple node processor configurations, where each node performs at 100 MIPS, are sought. The nodes are microprogrammable and are interconnected through a network with high bandwidth for neighboring nodes, and medium bandwidth for nodes at larger distance. The implementation of both the current multichannel spectrometer and the signal processor as Very Large Scale Integration CMOS chip sets has commenced.

  1. Scanner focus metrology and control system for advanced 10nm logic node

    NASA Astrophysics Data System (ADS)

    Oh, Junghun; Maeng, Kwang-Seok; Shin, Jae-Hyung; Choi, Won-Woong; Won, Sung-Keun; Grouwstra, Cedric; El Kodadi, Mohamed; Heil, Stephan; van der Meijden, Vidar; Hong, Jong Kyun; Kim, Sang-Jin; Kwon, Oh-Sung

    2018-03-01

    Immersion lithography is being extended beyond the 10-nm node and the lithography performance requirement needs to be tightened further to ensure good yield. Amongst others, good on-product focus control with accurate and dense metrology measurements is essential to enable this. In this paper, we will present new solutions that enable on-product focus monitoring and control (mean and uniformity) suitable for a high volume manufacturing environment. We will introduce the concept of pure focus and its role in focus control through the imaging optimizer scanner correction interface. The results will show that the focus uniformity can be improved by up to 25%.

  2. Variations in Medical Subject Headings (MeSH) mapping: from the natural language of patron terms to the controlled vocabulary of mapped lists*

    PubMed Central

    Gault, Lora V.; Shultz, Mary; Davies, Kathy J.

    2002-01-01

    Objectives: This study compared the mapping of natural language patron terms to the Medical Subject Headings (MeSH) across six MeSH interfaces for the MEDLINE database. Methods: Test data were obtained from search requests submitted by patrons to the Library of the Health Sciences, University of Illinois at Chicago, over a nine-month period. Search request statements were parsed into separate terms or phrases. Using print sources from the National Library of Medicine, each parsed patron term was assigned corresponding MeSH terms. Each patron term was entered into each of the selected interfaces to determine how effectively they mapped to MeSH. Data were collected for mapping success, accessibility of MeSH term within mapped list, and total number of MeSH choices within each list. Results: The selected MEDLINE interfaces do not map the same patron term in the same way, nor do they consistently lead to what is considered the appropriate MeSH term. Conclusions: If searchers utilize the MEDLINE database to its fullest potential by mapping to MeSH, the results of the mapping will vary between interfaces. This variance may ultimately impact the search results. These differences should be considered when choosing a MEDLINE interface and when instructing end users. PMID:11999175

  3. Variations in Medical Subject Headings (MeSH) mapping: from the natural language of patron terms to the controlled vocabulary of mapped lists.

    PubMed

    Gault, Lora V; Shultz, Mary; Davies, Kathy J

    2002-04-01

    This study compared the mapping of natural language patron terms to the Medical Subject Headings (MeSH) across six MeSH interfaces for the MEDLINE database. Test data were obtained from search requests submitted by patrons to the Library of the Health Sciences, University of Illinois at Chicago, over a nine-month period. Search request statements were parsed into separate terms or phrases. Using print sources from the National Library of Medicine, each parsed patron term was assigned corresponding MeSH terms. Each patron term was entered into each of the selected interfaces to determine how effectively they mapped to MeSH. Data were collected for mapping success, accessibility of MeSH term within mapped list, and total number of MeSH choices within each list. The selected MEDLINE interfaces do not map the same patron term in the same way, nor do they consistently lead to what is considered the appropriate MeSH term. If searchers utilize the MEDLINE database to its fullest potential by mapping to MeSH, the results of the mapping will vary between interfaces. This variance may ultimately impact the search results. These differences should be considered when choosing a MEDLINE interface and when instructing end users.

  4. Interfaces and Expert Systems for Online Retrieval.

    ERIC Educational Resources Information Center

    Kehoe, Cynthia A.

    1985-01-01

    This paper reviews the history of separate online system interfaces which led to efforts to develop expert systems for searching databases, particularly for end users, and introduces the research on such expert systems. Appended is a bibliography of sources on interfaces and expert systems for online retrieval. (Author/EJS)

  5. Technical development of PubMed interact: an improved interface for MEDLINE/PubMed searches.

    PubMed

    Muin, Michael; Fontelo, Paul

    2006-11-03

    The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications.
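
    The PHP backend described above talks to NCBI's E-Utilities; a minimal sketch of constructing an ESearch request URL for MEDLINE/PubMed (the query term is illustrative, and no network call is made here):

```python
from urllib.parse import urlencode

# Base endpoint of the NCBI Entrez E-utilities service.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(term, retmax=20):
    """Build an ESearch URL; db, term, retmax and retmode are
    standard ESearch parameters."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "xml"}
    return f"{EUTILS}/esearch.fcgi?{urlencode(params)}"

url = esearch_url("asthma AND therapy", retmax=5)
```

A backend like the one described would fetch this URL, parse the returned XML for PMIDs, and then retrieve citation records with a follow-up EFetch call.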

  6. MusiCat System Makes Library Searches More Fruitful.

    ERIC Educational Resources Information Center

    Hansen, Per Hofman

    2002-01-01

    Describes MusiCat, an experimental user interface designed in Denmark that reflects the different ways that patrons think when searching for musical material on the Web. Discusses technology versus content; search trees that use a hierarchy of subject terms; and usability testing. (LRW)

  7. The Perfectly Organized Search Service.

    ERIC Educational Resources Information Center

    Leach, Sandra Sinsel; Spencer, Mary Ellen

    1993-01-01

    Describes the evolution and operation of the successful Database Search Service (DSS) at the John C. Hodges Library, University of Tennessee, with detailed information about equipment, policies, software, training, and physical layout. Success is attributed to careful administration, standardization of search equipment and interfaces, staff…

  8. Carbon nanotube circuit integration up to sub-20 nm channel lengths.

    PubMed

    Shulaker, Max Marcel; Van Rethy, Jelle; Wu, Tony F; Liyanage, Luckshitha Suriyasena; Wei, Hai; Li, Zuanyi; Pop, Eric; Gielen, Georges; Wong, H-S Philip; Mitra, Subhasish

    2014-04-22

    Carbon nanotube (CNT) field-effect transistors (CNFETs) are a promising emerging technology projected to achieve over an order of magnitude improvement in energy-delay product, a metric of performance and energy efficiency, compared to silicon-based circuits. However, due to substantial imperfections inherent with CNTs, the promise of CNFETs has yet to be fully realized. Techniques to overcome these imperfections have yielded promising results, but thus far only at large technology nodes (1 μm device size). Here we demonstrate the first very large scale integration (VLSI)-compatible approach to realizing CNFET digital circuits at highly scaled technology nodes, with devices ranging from 90 nm to sub-20 nm channel lengths. We demonstrate inverters functioning at 1 MHz and a fully integrated CNFET infrared light sensor and interface circuit at 32 nm channel length. This demonstrates the feasibility of realizing more complex CNFET circuits at highly scaled technology nodes.

  9. The raw disk i/o performance of compaq storage works RAID arrays under tru64 unix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, A C

    2000-10-19

    We report on the raw disk i/o performance of a set of Compaq StorageWorks RAID arrays connected to our cluster of Compaq ES40 computers via Fibre Channel. The best cumulative peak sustained data rate is 117 MB/s per node for reads and 77 MB/s per node for writes. This value occurs for a configuration in which a node has two Fibre Channel interfaces to a switch, which in turn has two connections to each of two Compaq StorageWorks RAID arrays. Each RAID array has two HSG80 RAID controllers controlling (together) two 5+P RAID chains. A 10% more space-efficient arrangement using a single 11+P RAID chain in place of the two 5+P chains is 25% slower for reads and 40% slower for writes.

  10. The Deep Impact Network Experiment Operations Center Monitor and Control System

    NASA Technical Reports Server (NTRS)

    Wang, Shin-Ywan (Cindy); Torgerson, J. Leigh; Schoolcraft, Joshua; Brenman, Yan

    2009-01-01

    The Interplanetary Overlay Network (ION) software at JPL is an implementation of Delay/Disruption Tolerant Networking (DTN) which has been proposed as an interplanetary protocol to support space communication. The JPL Deep Impact Network (DINET) is a technology development experiment intended to increase the technical readiness of the JPL implemented ION suite. The DINET Experiment Operations Center (EOC) developed by JPL's Protocol Technology Lab (PTL) was critical in accomplishing the experiment. EOC, containing all end nodes of simulated spaces and one administrative node, exercised publish and subscribe functions for payload data among all end nodes to verify the effectiveness of data exchange over ION protocol stacks. A Monitor and Control System was created and installed on the administrative node as a multi-tiered internet-based Web application to support the Deep Impact Network Experiment by allowing monitoring and analysis of the data delivery and statistics from ION. This Monitor and Control System includes the capability of receiving protocol status messages, classifying and storing status messages into a database from the ION simulation network, and providing web interfaces for viewing the live results in addition to interactive database queries.

  11. Massively parallel algorithm and implementation of RI-MP2 energy calculation for peta-scale many-core supercomputers.

    PubMed

    Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito

    2016-11-15

    A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing peta-flop-class many-core supercomputers are presented. Some improvements from the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been performed: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. The peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
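
    The dual-level hierarchical scheme in item (1) can be pictured as splitting a flat rank space into an inter-node level and an intra-node level. The sketch below is illustrative bookkeeping only (the actual implementation would use MPI communicator splits, and the sizes here are invented):

```python
# Sketch of a dual-level (inter-node / intra-node) rank decomposition,
# analogous to splitting a flat MPI rank space into hierarchical
# communicators. Layout and sizes are illustrative, not the authors' scheme.

def split_ranks(world_size: int, ranks_per_node: int):
    """Group flat ranks into node groups (inter-level) and local ranks (intra-level)."""
    assert world_size % ranks_per_node == 0
    groups = {}
    for rank in range(world_size):
        node = rank // ranks_per_node      # inter-node group index ("color")
        local = rank % ranks_per_node      # intra-node rank ("key")
        groups.setdefault(node, []).append(local)
    return groups

# 8 ranks on 2 nodes -> two intra-node groups of 4 local ranks each
print(split_ranks(8, 4))  # {0: [0, 1, 2, 3], 1: [0, 1, 2, 3]}
```

    With MPI, the same decomposition would typically be realized with `MPI_Comm_split`, using the node index as the color.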

  12. A Monitoring System for Vegetable Greenhouses based on a Wireless Sensor Network

    PubMed Central

    Li, Xiu-hong; Cheng, Xiao; Yan, Ke; Gong, Peng

    2010-01-01

    A wireless sensor network-based automatic monitoring system is designed for monitoring the life conditions of greenhouse vegetables. The complete system architecture includes a group of sensor nodes, a base station, and an internet data center. For the design of the wireless sensor nodes, the JN5139 micro-processor is adopted as the core component and the Zigbee protocol is used for wireless communication between nodes. With an ARM7 microprocessor and the embedded ZKOS operating system, a proprietary gateway node is developed to achieve data influx, screen display, system configuration and GPRS-based remote data forwarding. Through a Client/Server mode, the management software for the remote data center achieves real-time data distribution and time-series analysis. In addition, a GSM-short-message-based interface is developed for sending real-time environmental measurements, and for alarming when a measurement is beyond some pre-defined threshold. The whole system has been tested for over one year and satisfactory results have been observed, which indicate that this system is very useful for greenhouse environment monitoring. PMID:22163391
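
    The threshold-based alarming described above amounts to a range check per measurement. A minimal sketch, with field names and limits invented for illustration:

```python
# Minimal sketch of the alarm rule: flag a measurement when it falls
# outside a pre-defined threshold band. Field names and thresholds are
# invented for illustration, not taken from the described system.

THRESHOLDS = {"air_temp_c": (10.0, 35.0), "humidity_pct": (40.0, 90.0)}

def check_alarms(sample: dict) -> list:
    """Return (field, value) pairs that violate their threshold band."""
    alarms = []
    for key, value in sample.items():
        lo, hi = THRESHOLDS[key]
        if not (lo <= value <= hi):
            alarms.append((key, value))
    return alarms

print(check_alarms({"air_temp_c": 38.2, "humidity_pct": 55.0}))
# [('air_temp_c', 38.2)]
```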

  13. Global trends in infectious diseases at the wildlife-livestock interface.

    PubMed

    Wiethoelter, Anke K; Beltrán-Alcrudo, Daniel; Kock, Richard; Mor, Siobhan M

    2015-08-04

    The role and significance of wildlife-livestock interfaces in disease ecology has largely been neglected, despite recent interest in animals as origins of emerging diseases in humans. Scoping review methods were applied to objectively assess the relative interest by the scientific community in infectious diseases at interfaces between wildlife and livestock, to characterize animal species and regions involved, as well as to identify trends over time. An extensive literature search combining wildlife, livestock, disease, and geographical search terms yielded 78,861 publications, of which 15,998 were included in the analysis. Publications dated from 1912 to 2013 and showed a continuous increasing trend, including a shift from parasitic to viral diseases over time. In particular there was a significant increase in publications on the artiodactyls-cattle and bird-poultry interface after 2002 and 2003, respectively. These trends could be traced to key disease events that stimulated public interest and research funding. Among the top 10 diseases identified by this review, the majority were zoonoses. Prominent wildlife-livestock interfaces resulted largely from interaction between phylogenetically closely related and/or sympatric species. The bird-poultry interface was the most frequently cited wildlife-livestock interface worldwide with other interfaces reflecting regional circumstances. This review provides the most comprehensive overview of research on infectious diseases at the wildlife-livestock interface to date.

  14. Global trends in infectious diseases at the wildlife–livestock interface

    PubMed Central

    Wiethoelter, Anke K.; Beltrán-Alcrudo, Daniel; Kock, Richard; Mor, Siobhan M.

    2015-01-01

    The role and significance of wildlife–livestock interfaces in disease ecology has largely been neglected, despite recent interest in animals as origins of emerging diseases in humans. Scoping review methods were applied to objectively assess the relative interest by the scientific community in infectious diseases at interfaces between wildlife and livestock, to characterize animal species and regions involved, as well as to identify trends over time. An extensive literature search combining wildlife, livestock, disease, and geographical search terms yielded 78,861 publications, of which 15,998 were included in the analysis. Publications dated from 1912 to 2013 and showed a continuous increasing trend, including a shift from parasitic to viral diseases over time. In particular there was a significant increase in publications on the artiodactyls–cattle and bird–poultry interface after 2002 and 2003, respectively. These trends could be traced to key disease events that stimulated public interest and research funding. Among the top 10 diseases identified by this review, the majority were zoonoses. Prominent wildlife–livestock interfaces resulted largely from interaction between phylogenetically closely related and/or sympatric species. The bird–poultry interface was the most frequently cited wildlife–livestock interface worldwide with other interfaces reflecting regional circumstances. This review provides the most comprehensive overview of research on infectious diseases at the wildlife–livestock interface to date. PMID:26195733

  15. Examples Manual for Program MAVART

    DTIC Science & Technology

    1990-08-01

    [Search excerpt consists of table-of-contents fragments from the manual: deformation at 375 Hz, tangential stress contours, admittance components vs. frequency, transmitting voltage response, directivity at 375 Hz, and near-field pressure contours.] ...be used in the resonance search [1.1a]. If NSEL is zero, the default node used is the first in the node list. The output data indicate that 375 Hz is...

  16. Node synchronization schemes for the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Swanson, L.; Arnold, S.

    1992-01-01

    The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, along with a performance comparison.
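
    A toy version of the symbol-correlation idea slides a known frame marker along the received stream and takes the offset of best agreement. The marker and stream below are invented examples, not the DSN's actual frame marker:

```python
# Toy "symbol correlation": slide a known marker over a received symbol
# stream and return the offset with the highest bitwise agreement.

def correlate(stream, marker):
    best_offset, best_score = -1, -1
    for off in range(len(stream) - len(marker) + 1):
        score = sum(s == m for s, m in zip(stream[off:], marker))
        if score > best_score:
            best_offset, best_score = off, score
    return best_offset, best_score

marker = [1, 1, 0, 1]
stream = [0, 0, 1, 1, 0, 1, 0, 0]
print(correlate(stream, marker))  # (2, 4): marker aligns best at offset 2
```

    A real synchronizer would correlate against soft symbols and compare the peak against a detection threshold rather than simply taking the maximum.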

  17. Rate-Compatible Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods resulting in rate-compatible low density parity-check (LDPC) codes built from protographs. Described digital coding methods start with a desired code rate and a selection of the numbers of variable nodes and check nodes to be used in the protograph. Constraints are set to satisfy a linear minimum distance growth property for the protograph. All possible edges in the graph are searched for the minimum iterative decoding threshold and the protograph with the lowest iterative decoding threshold is selected. Protographs designed in this manner are used in decode and forward relay channels.
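
    The edge search can be sketched as brute-force enumeration of candidate edge assignments, keeping the one with the lowest decoding threshold. The threshold function below is a stand-in; in the actual method it would come from iterative decoding (density evolution) analysis, not this toy formula:

```python
# Schematic of the protograph search loop: enumerate candidate edge
# multiplicities and keep the candidate with the lowest "threshold".
# toy_threshold is an invented proxy, NOT a real density-evolution result.

from itertools import product

def toy_threshold(edges):
    # Invented proxy score: prefer moderate, balanced connectivity.
    return abs(sum(edges) - 6) + max(edges) - min(edges)

def best_protograph(num_edges=3, max_mult=3):
    candidates = product(range(1, max_mult + 1), repeat=num_edges)
    return min(candidates, key=toy_threshold)

print(best_protograph())  # (2, 2, 2)
```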

  18. Distributed-Memory Breadth-First Search on Massive Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buluc, Aydin; Beamer, Scott; Madduri, Kamesh

    This chapter studies the problem of traversing large graphs using the breadth-first search order on distributed-memory supercomputers. We consider both the traditional level-synchronous top-down algorithm as well as the recently discovered direction optimizing algorithm. We analyze the performance and scalability trade-offs in using different local data structures such as CSR and DCSC, enabling in-node multithreading, and graph decompositions such as 1D and 2D decomposition.
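
    A serial sketch of the direction-optimizing idea on an undirected graph: take top-down steps while the frontier is small and switch to bottom-up once it grows. The switching heuristic here is simplified from the published ones, and the distributed 1D/2D decomposition is omitted:

```python
# Direction-optimizing BFS sketch (serial, undirected adjacency lists).
# Top-down expands the frontier; bottom-up scans unvisited vertices for
# a parent in the frontier. The alpha switch rule is a simplification.

def bfs(adj, source, alpha=0.25):
    n = len(adj)
    dist = {source: 0}
    frontier = {source}
    while frontier:
        next_frontier = set()
        if len(frontier) < alpha * n:          # top-down step
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        next_frontier.add(v)
        else:                                   # bottom-up step
            for v in range(n):
                if v not in dist and any(u in frontier for u in adj[v]):
                    dist[v] = min(dist[u] for u in adj[v] if u in frontier) + 1
                    next_frontier.add(v)
        frontier = next_frontier
    return dist

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```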

  19. Ranking Information in Networks

    NASA Astrophysics Data System (ADS)

    Eliassi-Rad, Tina; Henderson, Keith

    Given a network, we are interested in ranking sets of nodes that score highest on user-specified criteria. For instance in graphs from bibliographic data (e.g. PubMed), we would like to discover sets of authors with expertise in a wide range of disciplines. We present this ranking task as a Top-K problem; utilize fixed-memory heuristic search; and present performance of both the serial and distributed search algorithms on synthetic and real-world data sets.
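
    The Top-K formulation can be illustrated with a bounded min-heap that retains the K best-scoring candidates seen so far. The scoring function below is a placeholder for the user-specified criteria; it is not the paper's fixed-memory heuristic search:

```python
# Bounded-heap Top-K sketch: keep only the K best-scoring node sets.
# Candidate sets and scores are invented for illustration.

import heapq

def top_k(candidates, score, k=2):
    heap = []                                # min-heap of (score, item)
    for item in candidates:
        heapq.heappush(heap, (score(item), item))
        if len(heap) > k:
            heapq.heappop(heap)              # evict the current worst
    return sorted(heap, reverse=True)        # best first

authors = {("ann", "bo"): 5, ("bo", "cy"): 9, ("ann", "cy"): 7}
print(top_k(authors, lambda s: authors[s]))
# [(9, ('bo', 'cy')), (7, ('ann', 'cy'))]
```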

  20. Human Interface to Netcentricity

    DTIC Science & Technology

    2006-06-01

    experiencing. This is a radically different approach than using a federated search engine to bring back all relevant documents. The search engine...not be any closer to answering their question. More importantly, if they only have access to a 22 federated search , the program does not have the

  1. Content-based Music Search and Recommendation System

    NASA Astrophysics Data System (ADS)

    Takegawa, Kazuki; Hijikata, Yoshinori; Nishida, Shogo

    Recently, the volume of music data on the Internet has increased rapidly. This has increased the user's cost to find music data suiting their preference from such a large data set. We propose a content-based music search and recommendation system. This system has an interface for searching and finding music data and an interface for editing a user profile, which is necessary for music recommendation. By exploiting the visualization of the feature space of music and the visualization of the user profile, the user can search music data and edit the user profile. Furthermore, by exploiting the information that can be acquired from each visualized object in a mutually complementary manner, we make it easier for the user to search music data and edit the user profile. Concretely, the system presents information obtained from the user profile when the user is searching music data, and information obtained from the feature space of music when the user is editing the user profile.
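
    Content-based matching of this kind is commonly built on similarity in a feature space. A minimal sketch, assuming made-up two-dimensional track features and a cosine-similarity ranking (not necessarily the authors' actual method):

```python
# Hypothetical content-based match: rank tracks by cosine similarity
# between invented feature vectors (e.g. energy, calmness) and a
# user-profile vector in the same space.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

profile = [0.8, 0.2]                       # user prefers high-energy tracks
tracks = {"calm": [0.1, 0.9], "upbeat": [0.9, 0.3]}
best = max(tracks, key=lambda t: cosine(tracks[t], profile))
print(best)  # upbeat
```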

  2. Laryngospasm: What Causes It?

    MedlinePlus

    ... contents/search. Accessed Jan. 12, 2018. AskMayoExpert. Paradoxical vocal fold motion. Rochester, Minn.: Mayo Foundation for Medical Education and Research; 2017. What is LPR? American Academy of Otolaryngology-Head and Neck Surgery. http://www.entnet.org/?q=node/1449. Accessed ...

  3. Continuous Security and Configuration Monitoring of HPC Clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Lomeli, H. D.; Bertsch, A. D.; Fox, D. M.

    Continuous security and configuration monitoring of information systems has been a time consuming and laborious task for system administrators at the High Performance Computing (HPC) center. Prior to this project, system administrators had to manually check the settings of thousands of nodes, which required a significant number of hours rendering the old process ineffective and inefficient. This paper explains the application of Splunk Enterprise, a software agent, and a reporting tool in the development of a user application interface to track and report on critical system updates and security compliance status of HPC Clusters. In conjunction with other configuration management systems, the reporting tool is to provide continuous situational awareness to system administrators of the compliance state of information systems. Our approach consisted of the development, testing, and deployment of an agent to collect any arbitrary information across a massively distributed computing center, and organize that information into a human-readable format. Using Splunk Enterprise, this raw data was then gathered into a central repository and indexed for search, analysis, and correlation. Following acquisition and accumulation, the reporting tool generated and presented actionable information by filtering the data according to command line parameters passed at run time. Preliminary data showed results for over six thousand nodes. Further research and expansion of this tool could lead to the development of a series of agents to gather and report critical system parameters. However, in order to make use of the flexibility and resourcefulness of the reporting tool the agent must conform to specifications set forth in this paper. This project has simplified the way system administrators gather, analyze, and report on the configuration and security state of HPC clusters, maintaining ongoing situational awareness. Rather than querying each cluster independently, compliance checking can be managed from one central location.

  4. Deterministic Integration of Quantum Dots into on-Chip Multimode Interference Beamsplitters Using in Situ Electron Beam Lithography

    NASA Astrophysics Data System (ADS)

    Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan

    2018-04-01

    The development of multi-node quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of pre-selected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multi-mode interference beamsplitter via in-situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with g^(2)(0) = 0.13 ± 0.02. Due to its high patterning resolution as well as spectral and spatial control, in-situ electron beam lithography allows for integration of pre-selected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way towards multi-node, fully integrated quantum photonic chips.

  5. Toward a human-centered hyperlipidemia management system: the interaction between internal and external information on relational data search.

    PubMed

    Gong, Yang; Zhang, Jiajie

    2011-04-01

    In a distributed information search task, data representation and cognitive distribution jointly affect user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered framework, we proposed a search model and task taxonomy. The model defines its application in the context of healthcare setting. The taxonomy clarifies the legitimate operations for each type of search task of relational data. We then developed experimental prototypes of hyperlipidemia data displays. Based on the displays, we tested the search tasks performance through two experiments. The experiments are of a within-subject design with a random sample of 24 participants. The results support our hypotheses and validate the prediction of the model and task taxonomy. In this study, representation dimensions, data scales, and search task types are the main factors in determining search efficiency and effectiveness. Specifically, the more external representations provided on the interface the better search task performance of users. The results also suggest the ideal search performance occurs when the question type and its corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interface for relational data, especially laboratory results, which could be more effectively designed in electronic medical records.

  6. Seismic modeling with radial basis function-generated finite differences (RBF-FD) – a simplified treatment of interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Bradley, E-mail: brma7253@colorado.edu; Fornberg, Bengt, E-mail: Fornberg@colorado.edu

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  7. Low-power, transparent optical network interface for high bandwidth off-chip interconnects.

    PubMed

    Liboiron-Ladouceur, Odile; Wang, Howard; Garg, Ajay S; Bergman, Keren

    2009-04-13

    The recent emergence of multicore architectures and chip multiprocessors (CMPs) has accelerated the bandwidth requirements in high-performance processors for both on-chip and off-chip interconnects. For next generation computing clusters, the delivery of scalable power efficient off-chip communications to each compute node has emerged as a key bottleneck to realizing the full computational performance of these systems. The power dissipation is dominated by the off-chip interface and the necessity to drive high-speed signals over long distances. We present a scalable photonic network interface approach that fully exploits the bandwidth capacity offered by optical interconnects while offering significant power savings over traditional E/O and O/E approaches. The power-efficient interface optically aggregates electronic serial data streams into a multiple WDM channel packet structure at time-of-flight latencies. We demonstrate a scalable optical network interface with 70% improvement in power efficiency for a complete end-to-end PCI Express data transfer.

  8. Seismic modeling with radial basis function-generated finite differences (RBF-FD) - a simplified treatment of interfaces

    NASA Astrophysics Data System (ADS)

    Martin, Bradley; Fornberg, Bengt

    2017-04-01

    In a previous study of seismic modeling with radial basis function-generated finite differences (RBF-FD), we outlined a numerical method for solving 2-D wave equations in domains with material interfaces between different regions. The method was applicable on a mesh-free set of data nodes. It included all information about interfaces within the weights of the stencils (allowing the use of traditional time integrators), and was shown to solve problems of the 2-D elastic wave equation to 3rd-order accuracy. In the present paper, we discuss a refinement of that method that makes it simpler to implement. It can also improve accuracy for the case of smoothly-variable model parameter values near interfaces. We give several test cases that demonstrate the method solving 2-D elastic wave equation problems to 4th-order accuracy, even in the presence of smoothly-curved interfaces with jump discontinuities in the model parameters.

  9. Penalty-Based Finite Element Interface Technology for Analysis of Homogeneous and Composite Structures

    NASA Technical Reports Server (NTRS)

    Averill, Ronald C.

    2002-01-01

    An effective and robust interface element technology able to connect independently modeled finite element subdomains has been developed. This method is based on the use of penalty constraints and allows coupling of finite element models whose nodes do not coincide along their common interface. Additionally, the present formulation leads to a computational approach that is very efficient and completely compatible with existing commercial software. A significant effort has been directed toward identifying those model characteristics (element geometric properties, material properties, and loads) that most strongly affect the required penalty parameter, and subsequently to developing simple 'formulae' for automatically calculating the proper penalty parameter for each interface constraint. This task is especially critical in composite materials and structures, where adjacent sub-regions may be composed of significantly different materials or laminates. This approach has been validated by investigating a variety of two-dimensional problems, including composite laminates.
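
    The penalty idea can be illustrated on a toy system: two springs with non-coincident interface degrees of freedom tied together by a penalty term, so that compatibility u1 = u2 is approached as the penalty parameter grows. Stiffnesses, load, and penalty value below are invented:

```python
# Toy penalty coupling of two 1-DOF springs. The penalty term p*(u1-u2)^2
# yields the 2x2 system [[k1+p, -p], [-p, k2+p]] @ [u1, u2] = [f, 0],
# solved here in closed form. As p grows, u1 -> u2 -> f/(k1+k2),
# mimicking how the interface element enforces compatibility.

def solve_coupled(k1, k2, f, p):
    det = (k1 + p) * (k2 + p) - p * p
    u1 = f * (k2 + p) / det
    u2 = f * p / det
    return u1, u2

u1, u2 = solve_coupled(k1=100.0, k2=300.0, f=40.0, p=1e8)
print(round(u1, 6), round(u2, 6))  # both approach f/(k1+k2) = 0.1
```

    As the abstract notes, the proper magnitude of the penalty parameter depends on element and material properties; too small a value leaves the constraint slack, while too large a value causes ill-conditioning.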

  10. Rodent wearable ultrasound system for wireless neural recording.

    PubMed

    Piech, David K; Kay, Joshua E; Boser, Bernhard E; Maharbiz, Michel M

    2017-07-01

    Advances in minimally-invasive, distributed biological interface nodes enable possibilities for networks of sensors and actuators to connect the brain with external devices. The recent development of the neural dust sensor mote has shown that utilizing ultrasound backscatter communication enables untethered sub-mm neural recording devices. These implanted sensor motes require a wearable external ultrasound interrogation device to enable in-vivo, freely-behaving neural interface experiments. However, minimizing the complexity and size of the implanted sensors shifts the power and processing burden to the external interrogator. In this paper, we present an ultrasound backscatter interrogator that supports real-time backscatter processing in a rodent-wearable, completely wireless device. We demonstrate a generic digital encoding scheme which is intended for transmitting neural information. The system integrates a front-end ultrasonic interface ASIC with off-the-shelf components to enable a highly compact ultrasound interrogation device intended for rodent neural interface experiments but applicable to other model systems.

  11. Sensor node for remote monitoring of waterborne disease-causing bacteria.

    PubMed

    Kim, Kyukwang; Myung, Hyun

    2015-05-05

    A sensor node for sampling water and checking for the presence of harmful bacteria such as E. coli in water sources was developed in this research. A chromogenic enzyme substrate assay method was used to easily detect coliform bacteria by monitoring the color change of the sampled water mixed with a reagent. Live webcam image streaming to the web browser of the end user with a Wi-Fi connected sensor node shows the water color changes in real time. The liquid can be manipulated on the web-based user interface, and also can be observed by webcam feeds. Image streaming and web console servers run on an embedded processor with an expansion board. The UART channel of the expansion board is connected to an external Arduino board and a motor driver to control self-priming water pumps to sample the water, mix the reagent, and remove the water sample after the test is completed. The sensor node can repeat water testing until the test reagent is depleted. The authors anticipate that the use of the sensor node developed in this research can decrease the cost and required labor for testing samples in a factory environment and checking the water quality of local water sources in developing countries.
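
    The color-change detection could be sketched as comparing mean channel values of sampled pixels against a baseline frame; the pixel values and tolerance below are invented, not the system's actual rule:

```python
# Illustrative detection rule for a chromogenic assay: flag a color
# change once any mean RGB channel shifts past a tolerance relative to
# a baseline frame. Pixel samples and tolerance are invented.

def mean_color(pixels):
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def color_changed(baseline, current, tol=30.0):
    return any(abs(b - c) > tol
               for b, c in zip(mean_color(baseline), mean_color(current)))

clear  = [(200, 200, 190), (210, 205, 195)]
yellow = [(220, 210, 90), (225, 215, 100)]   # chromogenic reaction occurred
print(color_changed(clear, yellow))  # True
```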

  12. Design of Deformation Monitoring System for Volcano Mitigation

    NASA Astrophysics Data System (ADS)

    Islamy, M. R. F.; Salam, R. A.; Munir, M. M.; Irsyam, M.; Khairurrijal

    2016-08-01

    Indonesia has many active volcanoes that are potentially disastrous, so good mitigation systems are needed to reduce casualties from eruptions. Therefore, a system to monitor volcano deformation was built. This system employed telemetry combining Radio Frequency (RF) communication via XBEE with General Packet Radio Service (GPRS) communication via SIM900. There are two types of modules in this system: the coordinator, acting as a parent, and the nodes, acting as children. Each node was connected to the coordinator, forming a Wireless Sensor Network (WSN) with a star topology, and each has an inclinometer-based sensor, a Global Positioning System (GPS) receiver, and an XBEE module. The coordinator collects data from each node, one at a time, to prevent data collisions between nodes, saves the data to an SD card, and transmits the data to a web server via GPRS. The inclinometer was calibrated with a self-built calibrator and tested in a high-temperature environment to check its durability. The GPS was tested by displaying its position on the web server via the Google Maps Application Protocol Interface (API v.3). The coordinator can receive data from every node and transmit it to the web server reliably, and the system works well in a high-temperature environment.

  13. National Geothermal Data System: Open Access to Geoscience Data, Maps, and Documents

    NASA Astrophysics Data System (ADS)

    Caudill, C. M.; Richard, S. M.; Musil, L.; Sonnenschein, A.; Good, J.

    2014-12-01

    The U.S. National Geothermal Data System (NGDS) provides free open access to millions of geoscience data records, publications, maps, and reports via distributed web services to propel geothermal research, development, and production. NGDS is built on the US Geoscience Information Network (USGIN) data integration framework, which is a joint undertaking of the USGS and the Association of American State Geologists (AASG), and is compliant with international standards and protocols. NGDS currently serves geoscience information from 60+ data providers in all 50 states. Free and open source software is used in this federated system where data owners maintain control of their data. This interactive online system makes geoscience data easily discoverable, accessible, and interoperable at no cost to users. The dynamic project site http://geothermaldata.org serves as the information source and gateway to the system, allowing data and applications discovery and availability of the system's data feed. It also provides access to NGDS specifications and the free and open source code base (on GitHub), a map-centric and library style search interface, other software applications utilizing NGDS services, NGDS tutorials (via YouTube and USGIN site), and user-created tools and scripts. The user-friendly map-centric web-based application has been created to support finding, visualizing, mapping, and acquisition of data based on topic, location, time, provider, or key words. Geographic datasets visualized through the map interface also allow users to inspect the details of individual GIS data points (e.g. wells, geologic units, etc.). In addition, the interface provides the information necessary for users to access the GIS data from third party software applications such as GoogleEarth, UDig, and ArcGIS. 
A redistributable, free and open source software package called GINstack (USGIN software stack) was also created to give data providers a simple way to release data using interoperable and shareable standards, upload data and documents, and expose those data as a node in the NGDS or any larger data system through a CSW endpoint. The easy-to-use interface is supported by back-end software including Postgres, GeoServer, and custom CKAN extensions among others.

  14. gWEGA: GPU-accelerated WEGA for molecular superposition and shape comparison.

    PubMed

    Yan, Xin; Li, Jiabo; Gu, Qiong; Xu, Jun

    2014-06-05

    Virtual screening of a large chemical library for drug lead identification requires searching/superimposing a large number of three-dimensional (3D) chemical structures. This article reports a graphics processing unit (GPU)-accelerated weighted Gaussian algorithm (gWEGA) that expedites shape or shape-feature similarity score-based virtual screening. With 86 GPU nodes (each node has one GPU card), gWEGA can screen 110 million conformations derived from an entire ZINC drug-like database with diverse antidiabetic agents as query structures within 2 s (i.e., screening more than 55 million conformations per second). The rapid screening speed was accomplished through the massive parallelization on multiple GPU nodes and rapid prescreening of 3D structures (based on their shape descriptors and pharmacophore feature compositions). Copyright © 2014 Wiley Periodicals, Inc.
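    The shape-similarity score being accelerated can be illustrated with a minimal sketch: atoms as spherical Gaussians, pairwise analytic overlap integrals, and a shape-Tanimoto score. The height and width constants below are illustrative textbook-style values, not gWEGA's calibrated parameters, and no GPU parallelization or pharmacophore features are shown.

    ```python
    import math

    # Illustrative constants (assumptions, not gWEGA's calibrated values).
    P = 2.7        # Gaussian height
    ALPHA = 0.836  # Gaussian width parameter (identical for all atoms here)

    def pair_overlap(a, b):
        """Analytic overlap integral of two identical spherical Gaussians
        centered at 3D points a and b."""
        d2 = sum((x - y) ** 2 for x, y in zip(a, b))
        k = ALPHA * ALPHA / (ALPHA + ALPHA)
        return P * P * math.exp(-k * d2) * (math.pi / (ALPHA + ALPHA)) ** 1.5

    def shape_overlap(mol_a, mol_b):
        """Total overlap volume: sum of pairwise atomic Gaussian overlaps."""
        return sum(pair_overlap(a, b) for a in mol_a for b in mol_b)

    def shape_tanimoto(mol_a, mol_b):
        """Shape-Tanimoto similarity in [0, 1]; 1.0 for identical poses."""
        ab = shape_overlap(mol_a, mol_b)
        aa = shape_overlap(mol_a, mol_a)
        bb = shape_overlap(mol_b, mol_b)
        return ab / (aa + bb - ab)
    ```

    A screening loop would evaluate such a score against millions of precomputed conformers, which is where the GPU parallelism and shape-descriptor prescreening in the abstract pay off.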

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.

    Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.
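    The greedy associate-and-remove loop described above can be sketched in a few lines; `station_fitness` is a hypothetical stand-in for the station-specific conditional fitness values, and the real system's travel-time grid, phase handling, and time-window logic are omitted.

    ```python
    def associate_events(grid_nodes, arrivals, station_fitness, min_fitness=0.5):
        """Greedy association sketch: pick the grid node with the highest summed
        fitness, associate the arrivals consistent with it, remove them, repeat."""
        events = []
        remaining = list(arrivals)
        while remaining:
            # Fitness of a node = sum of station-specific conditional values.
            fitness = [sum(station_fitness(node, a) for a in remaining)
                       for node in grid_nodes]
            best = max(range(len(grid_nodes)), key=lambda i: fitness[i])
            if fitness[best] < min_fitness:
                break  # no hypothetical event location passes the threshold
            node = grid_nodes[best]
            # Arrivals consistent with an event at this node are associated
            # with it and removed from further consideration.
            assoc = [a for a in remaining if station_fitness(node, a) > 0]
            events.append((node, assoc))
            remaining = [a for a in remaining if a not in assoc]
        return events
    ```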

  16. A coherent Ising machine for 2000-node optimization problems

    NASA Astrophysics Data System (ADS)

    Inagaki, Takahiro; Haribara, Yoshitaka; Igarashi, Koji; Sonobe, Tomohiro; Tamate, Shuhei; Honjo, Toshimori; Marandi, Alireza; McMahon, Peter L.; Umeki, Takeshi; Enbutsu, Koji; Tadanaga, Osamu; Takenouchi, Hirokazu; Aihara, Kazuyuki; Kawarabayashi, Ken-ichi; Inoue, Kyo; Utsunomiya, Shoko; Takesue, Hiroki

    2016-11-01

    The analysis and optimization of complex systems can be reduced to mathematical problems collectively known as combinatorial optimization. Many such problems can be mapped onto ground-state search problems of the Ising model, and various artificial spin systems are now emerging as promising approaches. However, physical Ising machines have suffered from limited numbers of spin-spin couplings because of implementations based on localized spins, resulting in severe scalability problems. We report a 2000-spin network with all-to-all spin-spin couplings. Using a measurement and feedback scheme, we coupled time-multiplexed degenerate optical parametric oscillators to implement maximum cut problems on arbitrary graph topologies with up to 2000 nodes. Our coherent Ising machine outperformed simulated annealing in terms of accuracy and computation time for a 2000-node complete graph.
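    For context, the simulated-annealing baseline the machine is compared against can be written compactly. This is a generic MAX-CUT annealer under the standard Ising mapping (spins ±1; an edge is cut when its endpoints take opposite spins), not the authors' benchmark code, and the cooling schedule is an arbitrary choice.

    ```python
    import math
    import random

    def cut_size(edges, spins):
        """Number of edges whose endpoints carry opposite spins."""
        return sum(1 for u, v in edges if spins[u] != spins[v])

    def anneal_max_cut(n, edges, steps=5000, t0=2.0, t1=0.01, seed=0):
        """Single-spin-flip simulated annealing with geometric cooling."""
        rng = random.Random(seed)
        adj = [[] for _ in range(n)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        spins = [rng.choice([-1, 1]) for _ in range(n)]
        best = cut_size(edges, spins)
        for k in range(steps):
            t = t0 * (t1 / t0) ** (k / max(1, steps - 1))
            i = rng.randrange(n)
            # Change in cut size if spin i is flipped: uncut edges become
            # cut (+1) and cut edges become uncut (-1).
            delta = sum(1 if spins[i] == spins[j] else -1 for j in adj[i])
            if delta > 0 or rng.random() < math.exp(delta / t):
                spins[i] = -spins[i]
                best = max(best, cut_size(edges, spins))
        return best, spins
    ```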

  17. Discovery of a missing disease spreader

    NASA Astrophysics Data System (ADS)

    Maeno, Yoshiharu

    2011-10-01

    This study presents a method to discover an outbreak of an infectious disease in a region for which data are missing, but which is at work as a disease spreader. Node discovery for the spread of an infectious disease is defined as discriminating between the nodes which are neighboring to a missing disease spreader node, and the rest, given a dataset on the number of cases. The spread is described by stochastic differential equations. A perturbation theory quantifies the impact of the missing spreader on the moments of the number of cases. Statistical discriminators examine the mid-body or tail-ends of the probability density function, and search for the disturbance from the missing spreader. They are tested with computationally synthesized datasets, and applied to the SARS outbreak and flu pandemic.

  18. Navigation interface for recommending home medical products.

    PubMed

    Luo, Gang

    2012-04-01

    Based on users' health issues, an intelligent personal health record (iPHR) system can automatically recommend home medical products (HMPs) and display them in a sequential order. However, the sequential output interface does not categorize search results and is not easy for users to quickly navigate to their desired HMPs. To address this problem, we developed a navigation interface for retrieved HMPs. Our idea is to use medical knowledge and nursing knowledge to construct a navigation hierarchy based on product categories. This hierarchy is added to the left side of each search result Web page to help users move through retrieved HMPs. We demonstrate the effectiveness of our techniques using USMLE medical exam cases.

  19. Chandra Source Catalog: User Interfaces

    NASA Astrophysics Data System (ADS)

    Bonaventura, Nina; Evans, I. N.; Harbo, P. N.; Rots, A. H.; Tibbetts, M. S.; Van Stone, D. W.; Zografou, P.; Anderson, C. S.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Glotfelty, K. J.; Grier, J. D.; Hain, R.; Hall, D. M.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Winkelman, S. L.

    2010-03-01

    The CSCview data mining interface is available for browsing the Chandra Source Catalog (CSC) and downloading tables of quality-assured source properties and data products. Once the desired source properties and search criteria are entered into the CSCview query form, the resulting source matches are returned in a table along with the values of the requested source properties for each source. (The catalog can be searched on any source property, not just position.) At this point, the table of search results may be saved to a text file, and the available data products for each source may be downloaded. CSCview save files are output in RDB-like and VOTable format. The available CSC data products include event files, spectra, lightcurves, and images, all of which are processed with the CIAO software. CSC data may also be accessed non-interactively with Unix command-line tools such as cURL and Wget, using ADQL 2.0 query syntax. In fact, CSCview features a separate ADQL query form for those who wish to specify this type of query within the GUI. Several interfaces are available for learning if a source is included in the catalog (in addition to CSCview): 1) the CSC interface to Sky in Google Earth shows the footprint of each Chandra observation on the sky, along with the CSC footprint for comparison (CSC source properties are also accessible when a source within a Chandra field-of-view is clicked); 2) the CSC Limiting Sensitivity online tool indicates if a source at an input celestial location was too faint for detection; 3) an IVOA Simple Cone Search interface locates all CSC sources within a specified radius of an R.A. and Dec.; and 4) the CSC-SDSS cross-match service returns the list of sources common to the CSC and SDSS, either all such sources or a subset based on search criteria.

  20. Migration to Earth Observation Satellite Product Dissemination System at JAXA

    NASA Astrophysics Data System (ADS)

    Ikehata, Y.; Matsunaga, M.

    2017-12-01

    JAXA released "G-Portal", a portal web site for searching and delivering Earth observation satellite data, in February 2013. G-Portal handles data from ten satellites (GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1 and JERS-1) and archives 5.17 million products and 14 million catalogues in total. Users can search those products/catalogues through a GUI web search and a catalogue interface (CSW/OpenSearch). In this fiscal year we will replace this system with "Next G-Portal" and have been carrying out integration, testing, and migration. Next G-Portal will handle data from satellites planned for future launches in addition to those handled by G-Portal. From a system architecture perspective, G-Portal adopted a "cluster system" for redundancy, so improving performance meant replacing the servers with higher-specification machines (a "scale-up" approach), which incurs substantial cost with every improvement. To avoid this, Next G-Portal adopts a "scale-out" architecture: load-balancing interfaces, a distributed file system, and distributed databases. (We reported on this at the AGU Fall Meeting 2015 (IN23D-1748).) From a usability perspective, G-Portal provided a complicated interface: a "step by step" web design, randomly generated URLs, and sftp (which requires a non-standard TCP port). Customers complained about these interfaces, and the support team grew weary of answering them. To solve this problem, Next G-Portal adopts simple interfaces: a one-page web design, RESTful URLs, and normal FTP. (We reported on this at the AGU Fall Meeting 2016 (IN23B-1778).) Furthermore, Next G-Portal must absorb the GCOM-W data dissemination system, to be terminated next March, as well as the current G-Portal. This may raise some difficulties, since the current G-Portal and GCOM-W data dissemination systems are quite different from Next G-Portal. The presentation reports the knowledge obtained from the process of merging these systems.

  1. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Vales, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. 
The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset. The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations were compared with those obtained in the ideal case of perfect timing in the receiver.
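    For hard decisions, the constraint-node metric at the heart of the method reduces to counting satisfied parity checks, and the first-stage search looks for the candidate delay that maximizes that count. The sketch below uses a toy parity-check representation and a hypothetical `demod` callback; the real method works with soft decoder metrics, not hard bits.

    ```python
    def satisfied_checks(H, bits):
        """Count satisfied parity checks. H lists, for each constraint node,
        the indices of its participating variable nodes (toy format)."""
        return sum(1 for row in H if sum(bits[i] for i in row) % 2 == 0)

    def coarse_timing_search(H, sample, candidate_delays, demod):
        """First-stage sketch: try each candidate delay, demodulate to hard
        decisions, and keep the delay satisfying the most constraint nodes."""
        return max(candidate_delays,
                   key=lambda d: satisfied_checks(H, demod(sample, d)))
    ```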

  2. Distributed parallel messaging for multiprocessor systems

    DOEpatents

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  3. Graphical user interface for wireless sensor networks simulator

    NASA Astrophysics Data System (ADS)

    Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy

    2008-01-01

    Wireless Sensor Networks (WSNs) are currently a very popular area of development. They are suited to many applications, from military through environment monitoring, healthcare, home automation, and others. When working in a dynamic, ad-hoc model, those networks need effective protocols, which must differ from common computer network algorithms. Research on those protocols would be difficult without a simulation tool, because real applications often use many nodes, and tests on such big networks take much effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator dedicated to WSN studies, especially the evaluation of routing and data link protocols.

  4. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    PubMed

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web. Also, there are many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in recent times and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and has additional features like ranking of facet values based on several criteria, visually indicating the relevance of a facet value and presenting the most important facet values when a large number of choices are available. For the advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. 
We used the search engine to do an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com . We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
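    The tabular-to-RDF conversion and the value counting that drives a facet widget can be illustrated schematically. The namespace and triple layout below are invented for the example and are not BioCarian's actual schema.

    ```python
    BASE = "http://example.org/"  # placeholder namespace, not BioCarian's

    def rows_to_ntriples(header, rows):
        """Turn a table into N-Triples: one subject per row, one predicate per
        column (schematic; real conversions choose typed literals and URIs)."""
        triples = []
        for i, row in enumerate(rows):
            subj = f"<{BASE}row/{i}>"
            for col, value in zip(header, row):
                triples.append(f'{subj} <{BASE}col/{col}> "{value}" .')
        return triples

    def facet_counts(header, rows, column):
        """Distinct-value counts for one column -- the raw data behind a facet."""
        idx = header.index(column)
        counts = {}
        for row in rows:
            counts[row[idx]] = counts.get(row[idx], 0) + 1
        return counts
    ```

    In the deployed system these counts would come back from SPARQL aggregate queries over the converted store rather than from an in-memory table.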

  5. Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks

    PubMed Central

    Akram, Vahid Khalilpour; Dagdeviren, Orhan

    2013-01-01

    Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume low energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, which is designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide less resource consumption, and the energy savings of our algorithms are up to 5.5-fold. PMID:23845930
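    For reference, the classical centralized way to find bridges is a single depth-first search with Tarjan's low-link rule: a tree edge (u, v) is a bridge iff low[v] > disc[u]. The sketch below is that baseline only; the paper's contribution is doing the detection distributedly over a BFS tree, which this code does not attempt.

    ```python
    def find_bridges(n, edges):
        """Centralized DFS (Tarjan) bridge finder for an undirected graph
        with nodes 0..n-1 and edges given as (u, v) pairs."""
        adj = [[] for _ in range(n)]
        for idx, (u, v) in enumerate(edges):
            adj[u].append((v, idx))
            adj[v].append((u, idx))
        disc = [-1] * n   # DFS discovery times
        low = [0] * n     # lowest discovery time reachable via one back edge
        bridges = []
        timer = [0]

        def dfs(u, parent_edge):
            disc[u] = low[u] = timer[0]
            timer[0] += 1
            for v, idx in adj[u]:
                if idx == parent_edge:
                    continue  # don't reuse the edge we arrived on
                if disc[v] != -1:
                    low[u] = min(low[u], disc[v])
                else:
                    dfs(v, idx)
                    low[u] = min(low[u], low[v])
                    if low[v] > disc[u]:
                        bridges.append((u, v))

        for s in range(n):
            if disc[s] == -1:
                dfs(s, -1)
        return bridges
    ```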

  6. Cat scratch disease and lymph node tuberculosis in a patient with colon cancer.

    PubMed

    Matias, M; Marques, T; Ferreira, M A; Ribeiro, L

    2013-12-12

    A 71-year-old man operated on for a sigmoid tumour remained under surveillance after adjuvant chemotherapy. After 3 years, a left axillary lymph node was visible on CT scan. The biopsy revealed a necrotising and abscessed granulomatous lymphadenitis, suggestive of cat scratch disease. The patient confirmed having been scratched by a cat and the serology for Bartonella henselae was IgM+/IgG-. Direct and culture examinations for tuberculosis were negative. The patient was treated for cat scratch disease. One year later, the CT scan showed increased left axillary lymph nodes and a left pleural effusion. Direct and cultural examinations to exclude tuberculosis were again negative. Interferon-γ release assay testing for tuberculosis was undetermined and then positive. Lymph node and pleural tuberculosis were diagnosed and treated with a good radiological response. This article provides evidence of the importance of a continued search for the correct diagnosis, and shows that two diagnoses can occur in the same patient.

  7. A novel ternary content addressable memory design based on resistive random access memory with high density and low search energy

    NASA Astrophysics Data System (ADS)

    Han, Runze; Shen, Wensheng; Huang, Peng; Zhou, Zheng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng

    2018-04-01

    A novel ternary content addressable memory (TCAM) design based on resistive random access memory (RRAM) is presented. Each TCAM cell consists of two parallel RRAM devices to both store and search ternary data. The cell size of the proposed design is 8F², enabling a ∼60× cell area reduction compared with the conventional static random access memory (SRAM) based implementation. Simulation results also show that the search delay and energy consumption of the proposed design at the 64-bit word search are 2 ps and 0.18 fJ/bit/search respectively at the 22 nm technology node, where significant improvements are achieved compared to previous works. The desired characteristics of RRAM for implementation of the high performance TCAM search chip are also discussed.
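    Functionally, a TCAM lookup behaves like the sketch below, where 'X' is the don't-care state each two-RRAM cell stores. This models only the search semantics; the paper's contribution is the circuit cell and its delay/energy characteristics, which no software model captures.

    ```python
    # Behavioral model of ternary matching: each stored word is a string over
    # {'0', '1', 'X'}; 'X' matches either key bit.

    def tcam_match(word, key):
        """True if every stored trit matches the corresponding key bit."""
        return all(w in ('X', k) for w, k in zip(word, key))

    def tcam_search(table, key):
        """Return indices of all stored words matching the search key
        (hardware evaluates every row in parallel; this loop does not)."""
        return [i for i, word in enumerate(table) if tcam_match(word, key)]
    ```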

  8. Penalty-Based Interface Technology for Prediction of Delamination Growth in Laminated Structures

    NASA Technical Reports Server (NTRS)

    Averill, Ronald C.

    2004-01-01

    An effective interface element technology has been developed for connecting and simulating crack growth between independently modeled finite element subdomains (e.g., composite plies). This method has been developed using penalty constraints and allows coupling of finite element models whose nodes do not necessarily coincide along their common interface. Additionally, the present formulation leads to a computational approach that is very efficient and completely compatible with existing commercial software. The present interface element has been implemented in the commercial finite element code ABAQUS as a User Element Subroutine (UEL), making it easy to test the approach for a wide range of problems. The interface element technology has been formulated to simulate delamination growth in composite laminates. Thanks to its special features, the interface element approach makes it possible to release portions of the interface surface whose length is smaller than that of the finite elements. In addition, the penalty parameter can vary within the interface element, allowing the damage model to be applied to a desired fraction of the interface between the two meshes. Results for double cantilever beam (DCB), end-loaded split (ELS) and fixed-ratio mixed mode (FRMM) specimens are presented. These results are compared to measured data to assess the ability of the present damage model to simulate crack growth.
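    The penalty-coupling idea can be seen on the smallest possible case: two 1-D spring models whose interface displacements u1 and u2 are tied by a penalty stiffness kp, adding kp(u1 − u2)²/2 to the energy so the assembled 2×2 system enforces u1 ≈ u2 as kp grows. This is a toy sketch for intuition, not the element formulation in the report.

    ```python
    def penalty_coupled_disp(k1, k2, kp, f1, f2):
        """Solve [[k1+kp, -kp], [-kp, k2+kp]] @ [u1, u2] = [f1, f2] by
        Cramer's rule; kp is the penalty stiffness tying the two interface
        displacements of the independently modeled subdomains."""
        a, b, c, d = k1 + kp, -kp, -kp, k2 + kp
        det = a * d - b * c
        u1 = (d * f1 - b * f2) / det
        u2 = (a * f2 - c * f1) / det
        return u1, u2
    ```

    Releasing a portion of the interface (to model delamination growth) corresponds to dropping or scaling the penalty term there, which is why a penalty that varies within the element allows sub-element crack advance.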

  9. A Multimodal Adaptive Wireless Control Interface for People With Upper-Body Disabilities.

    PubMed

    Fall, Cheikh Latyr; Quevillon, Francis; Blouin, Martine; Latour, Simon; Campeau-Lecours, Alexandre; Gosselin, Clement; Gosselin, Benoit

    2018-06-01

    This paper describes a multimodal body-machine interface (BoMI) to help individuals with upper-limb disabilities using advanced assistive technologies, such as robotic arms. The proposed system uses a wearable and wireless body sensor network (WBSN) supporting up to six sensor nodes to measure the natural upper-body gesture of the users and translate it into control commands. Natural gesture of the head and upper-body parts, as well as muscular activity, are measured using inertial measurement units (IMUs) and surface electromyography (sEMG) using custom-designed multimodal wireless sensor nodes. An IMU sensing node is attached to a headset worn by the user. It has a size of 2.9 cm × 2.9 cm, a maximum power consumption of 31 mW, and provides an angular precision of 1°. Multimodal patch sensor nodes, including both IMU and sEMG sensing modalities, are placed over the user's able body parts to measure the motion and muscular activity. These nodes have a size of 2.5 cm × 4.0 cm and a maximum power consumption of 11 mW. The proposed BoMI runs on a Raspberry Pi. It can adapt to several types of users through different control scenarios using the head and shoulder motion, as well as muscular activity, and provides a power autonomy of up to 24 h. JACO, a 6-DoF assistive robotic arm, is used as a testbed to evaluate the performance of the proposed BoMI. Ten able-bodied subjects performed activities of daily living (ADLs) while operating the AT device, using the Test d'Évaluation des Membres Supérieurs de Personnes Âgées to evaluate and compare the proposed BoMI with the conventional joystick controller. It is shown that the users can perform all tasks with the proposed BoMI, almost as fast as with the joystick controller, with only 30% time overhead on average, while being potentially more accessible to the upper-body disabled who cannot use the conventional joystick controller. Tests show that control performance with the proposed BoMI improved by up to 17% on average, after three trials.

  10. DEEP--a tool for differential expression effector prediction.

    PubMed

    Degenhardt, Jost; Haubrock, Martin; Dönitz, Jürgen; Wingender, Edgar; Crass, Torsten

    2007-07-01

    High-throughput methods for measuring transcript abundance, like SAGE or microarrays, are widely used for determining differences in gene expression between different tissue types, dignities (normal/malignant) or time points. Further analysis of such data frequently aims at the identification of gene interaction networks that form the causal basis for the observed properties of the systems under examination. To this end, it is usually not sufficient to rely on the measured gene expression levels alone; rather, additional biological knowledge has to be taken into account in order to generate useful hypotheses about the molecular mechanism leading to the realization of a certain phenotype. We present a method that combines gene expression data with biological expert knowledge on molecular interaction networks, as described by the TRANSPATH database on signal transduction, to predict additional--and not necessarily differentially expressed--genes or gene products which might participate in processes specific for either of the examined tissues or conditions. In a first step, significance values for over-expression in tissue/condition A or B are assigned to all genes in the expression data set. Genes with a significance value exceeding a certain threshold are used as starting points for the reconstruction of a graph with signaling components as nodes and signaling events as edges. In a subsequent graph traversal process, again starting from the previously identified differentially expressed genes, all encountered nodes 'inherit' all their starting nodes' significance values. In a final step, the graph is visualized, the nodes being colored according to a weighted average of their inherited significance values. 
Each node's, or sub-network's, predominant color, ranging from green (significant for tissue/condition A) over yellow (not significant for either tissue/condition) to red (significant for tissue/condition B), thus gives an immediate visual clue on which molecules--differentially expressed or not--may play pivotal roles in the tissues or conditions under examination. The described method has been implemented in Java as a client/server application and a web interface called DEEP (Differential Expression Effector Prediction). The client, which features an easy-to-use graphical interface, can freely be downloaded from the following URL: http://deep.bioinf.med.uni-goettingen.de.
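    The inheritance step can be pictured with a small sketch: every node reachable from a significant start node collects that node's significance pair, and the node's color is an average of what it collected. The equal-weight averaging and plain BFS reachability here are simplifications of DEEP's weighted scheme over the TRANSPATH network.

    ```python
    from collections import deque

    def propagate(adj, start_sig):
        """adj: {node: [neighbors]} signaling graph; start_sig maps each
        differentially expressed start node to its (sig_A, sig_B) pair.
        Returns each node's averaged inherited significance pair."""
        inherited = {node: [] for node in adj}
        for start, sig in start_sig.items():
            seen = {start}
            queue = deque([start])
            while queue:             # BFS: every reached node inherits sig
                u = queue.popleft()
                inherited[u].append(sig)
                for v in adj.get(u, []):
                    if v not in seen:
                        seen.add(v)
                        queue.append(v)
        return {node: (sum(a for a, _ in sigs) / len(sigs),
                       sum(b for _, b in sigs) / len(sigs)) if sigs else (0.0, 0.0)
                for node, sigs in inherited.items()}
    ```

    A node pulled toward (1, 0) would render green, toward (0, 1) red, and a node inheriting from both conditions would sit near yellow, matching the coloring described above.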

  11. Technical development of PubMed Interact: an improved interface for MEDLINE/PubMed searches

    PubMed Central

    Muin, Michael; Fontelo, Paul

    2006-01-01

    Background The project aims to create an alternative search interface for MEDLINE/PubMed that may provide assistance to the novice user and added convenience to the advanced user. An earlier version of the project was the 'Slider Interface for MEDLINE/PubMed searches' (SLIM) which provided JavaScript slider bars to control search parameters. In this new version, recent developments in Web-based technologies were implemented. These changes may prove to be even more valuable in enhancing user interactivity through client-side manipulation and management of results. Results PubMed Interact is a Web-based MEDLINE/PubMed search application built with HTML, JavaScript and PHP. It is implemented on a Windows Server 2003 with Apache 2.0.52, PHP 4.4.1 and MySQL 4.1.18. PHP scripts provide the backend engine that connects with E-Utilities and parses XML files. JavaScript manages client-side functionalities and converts Web pages into interactive platforms using dynamic HTML (DHTML), Document Object Model (DOM) tree manipulation and Ajax methods. With PubMed Interact, users can limit searches with JavaScript slider bars, preview result counts, delete citations from the list, display and add related articles and create relevance lists. Many interactive features occur at client-side, which allow instant feedback without reloading or refreshing the page resulting in a more efficient user experience. Conclusion PubMed Interact is a highly interactive Web-based search application for MEDLINE/PubMed that explores recent trends in Web technologies like DOM tree manipulation and Ajax. It may become a valuable technical development for online medical search applications. PMID:17083729
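    The backend's E-Utilities call can be illustrated by constructing an ESearch URL. The endpoint and parameter names below are NCBI's documented public API, but the function is a sketch and does not reproduce PubMed Interact's PHP backend; no request is sent and the XML parsing step is omitted.

    ```python
    from urllib.parse import urlencode

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def esearch_url(term, retmax=20):
        """Build an ESearch query URL for db=pubmed using documented
        E-Utilities parameters (db, term, retmax, retmode)."""
        return EUTILS + "?" + urlencode(
            {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "xml"})
    ```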

  12. PageRank without hyperlinks: Reranking with PubMed related article networks for biomedical text retrieval

    PubMed Central

    Lin, Jimmy

    2008-01-01

    Background Graph analysis algorithms such as PageRank and HITS have been successful in Web environments because they are able to extract important inter-document relationships from manually-created hyperlinks. We consider the application of these techniques to biomedical text retrieval. In the current PubMed® search interface, a MEDLINE® citation is connected to a number of related citations, which are in turn connected to other citations. Thus, a MEDLINE record represents a node in a vast content-similarity network. This article explores the hypothesis that these networks can be exploited for text retrieval, in the same manner as hyperlink graphs on the Web. Results We conducted a number of reranking experiments using the TREC 2005 genomics track test collection in which scores extracted from PageRank and HITS analysis were combined with scores returned by an off-the-shelf retrieval engine. Experiments demonstrate that incorporating PageRank scores yields significant improvements in terms of standard ranked-retrieval metrics. Conclusion The link structure of content-similarity networks can be exploited to improve the effectiveness of information retrieval systems. These results generalize the applicability of graph analysis algorithms to text retrieval in the biomedical domain. PMID:18538027
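    A minimal version of the reranking pipeline: power-iteration PageRank over the related-article graph, then a combination with the retrieval engine's scores. The linear interpolation and its weight are assumptions for illustration; the paper evaluates its own combination schemes.

    ```python
    def pagerank(links, n, damping=0.85, iters=50):
        """Power iteration over a directed graph {node: [out-neighbors]}
        with uniform teleportation and dangling-mass redistribution."""
        pr = [1.0 / n] * n
        for _ in range(iters):
            nxt = [(1.0 - damping) / n] * n
            for u, outs in links.items():
                if outs:
                    share = damping * pr[u] / len(outs)
                    for v in outs:
                        nxt[v] += share
                else:  # dangling node: spread its mass uniformly
                    for v in range(n):
                        nxt[v] += damping * pr[u] / n
            pr = nxt
        return pr

    def rerank(retrieval_scores, pr, alpha=0.7):
        """Rank documents by a weighted mix of retrieval score and PageRank
        (alpha is an illustrative interpolation weight)."""
        combined = [alpha * r + (1 - alpha) * p
                    for r, p in zip(retrieval_scores, pr)]
        return sorted(range(len(combined)), key=lambda i: -combined[i])
    ```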

  13. PLACNETw: a web-based tool for plasmid reconstruction from bacterial genomes.

    PubMed

    Vielva, Luis; de Toro, María; Lanza, Val F; de la Cruz, Fernando

    2017-12-01

    PLACNET is a graph-based tool for reconstruction of plasmids from next generation sequence pair-end datasets. PLACNET graphs contain two types of nodes (assembled contigs and reference genomes) and two types of edges (scaffold links and homology to references). Manual pruning of the graphs is a necessary requirement in PLACNET, but this is difficult for users without solid bioinformatic background. PLACNETw, a webtool based on PLACNET, provides an interactive graphic interface, automates BLAST searches, and extracts the relevant information for decision making. It allows a user with domain expertise to visualize the scaffold graphs and related information of contigs as well as reference sequences, so that the pruning operations can be done interactively from a personal computer without the need for additional tools. After successful pruning, each plasmid becomes a separate connected component subgraph. The resulting data are automatically downloaded by the user. PLACNETw is freely available at https://castillo.dicom.unican.es/upload/. delacruz@unican.es. A tutorial video and several solved examples are available at https://castillo.dicom.unican.es/placnetw_video/ and https://castillo.dicom.unican.es/examples/. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
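The pruning step can be illustrated with a small sketch: drop a set of edges from a scaffold graph and report the surviving connected components, each standing in for one reconstructed replicon (the graph representation and names are ours, not PLACNETw's internals):

```python
from collections import defaultdict, deque

def components(edges, pruned=frozenset()):
    """Connected components of a scaffold graph after pruning a set of
    edges; each surviving component corresponds to one candidate plasmid."""
    adj = defaultdict(set)
    for u, v in edges:
        if (u, v) not in pruned and (v, u) not in pruned:
            adj[u].add(v)
            adj[v].add(u)
    seen, out = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:            # breadth-first traversal of one component
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        out.append(comp)
    return out
```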

  14. The Design of a Polymorphous Cognitive Agent Architecture (PCAA)

    DTIC Science & Technology

    2008-05-01

    tree, and search agents may search the tree for documents or clusters, depositing pheromones on the way down the tree. … The quality of SODAS … location in the lattice is a node, connected to its neighbors by links, and agents roam across the lattice, depositing pheromones. … A possible FPGA … provided by swarming, and also figure out a way for learning in ACT-R to trickle down to swarming computations, e.g., through the pheromones. Integration

  15. Constructing Effective Search Strategies for Electronic Searching.

    ERIC Educational Resources Information Center

    Flanagan, Lynn; Parente, Sharon Campbell

    Electronic databases have grown tremendously in both number and popularity since their development during the 1960s. Access to electronic databases in academic libraries was originally offered primarily through mediated search services by trained librarians; however, the advent of CD-ROM and end-user interfaces for online databases has shifted the…

  16. Mercury: Reusable software application for Metadata Management, Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Devarakonda, Ranjeet; Palanisamy, Giri; Green, James; Wilson, Bruce E.

    2009-12-01

    Mercury is a federated metadata harvesting, data discovery and access tool based on both open source packages and custom developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury is itself a reusable toolset for metadata, with current use in 12 different projects. Mercury also supports the reuse of metadata by enabling searching across a range of metadata specifications and standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115. Mercury provides a single portal to information contained in distributed data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow users to perform simple, fielded, spatial and temporal searches across these metadata sources. One of the major goals of the recent redesign of Mercury was to improve software reusability across the projects which currently fund the continuing development of Mercury. These projects span a range of land, atmosphere, and ocean ecological communities and have a number of common needs for metadata searches, but they also have a number of needs specific to one or a few projects. To balance these common and project-specific needs, Mercury’s architecture includes three major reusable components: a harvester engine, an indexing system, and a user interface component. The harvester engine is responsible for harvesting metadata records from various distributed servers around the USA and around the world. The harvester software was packaged in such a way that all the Mercury projects use the same harvester scripts, but each project is driven by its own set of configuration files. The harvested files are then passed to the indexing system, where each of the fields in these structured metadata records is indexed properly, so that the query engine can perform simple, keyword, spatial and temporal searches across these metadata sources. The search user interface software has two API categories: a common core API, which is used by all the Mercury user interfaces for querying the index, and a customized API for project-specific user interfaces. For our work in producing a reusable, portable, robust, feature-rich application, Mercury received a 2008 NASA Earth Science Data Systems Software Reuse Working Group Peer-Recognition Software Reuse Award. The new Mercury system is based on a Service Oriented Architecture and effectively reuses components for various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. The software also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets, an integrated shopping cart to order datasets from various data centers (ORNL DAAC, NSIDC), and integrated visualization tools. Other features include: filtering and dynamic sorting of search results, book-markable search results, and the ability to save, retrieve, and modify search criteria.
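The harvest-index-query pipeline described above can be sketched with a toy fielded inverted index (this is an illustration of the idea only, not Mercury's actual indexing system; all names are ours):

```python
from collections import defaultdict

class MetadataIndex:
    """Toy centralized index: harvested records are dicts of
    field -> text; queries can be keyword-wide or restricted to a field."""

    def __init__(self):
        self.postings = defaultdict(set)   # (field, token) -> record ids
        self.records = {}

    def harvest(self, rid, record):
        """Ingest one harvested metadata record into the index."""
        self.records[rid] = record
        for field, text in record.items():
            for token in text.lower().split():
                self.postings[(field, token)].add(rid)

    def search(self, token, field=None):
        """Return ids matching the token, optionally in a single field."""
        token = token.lower()
        fields = [field] if field else {f for f, _ in self.postings}
        hits = set()
        for f in fields:
            hits |= self.postings.get((f, token), set())
        return hits
```

Spatial and temporal search would add range-indexed fields on top of the same structure; the sketch covers only the keyword and fielded cases.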

  17. Research on Agriculture Domain Meta-Search Engine System

    NASA Astrophysics Data System (ADS)

    Xie, Nengfu; Wang, Wensheng

    The rapid growth of agricultural web information means that general-purpose search engines often cannot return satisfactory results for users’ queries. In this paper, we propose an agriculture domain meta-search engine system, called ADSE, that obtains results through an advanced interface to several search engines and aggregates them. We also discuss two key technologies: agriculture information determination and the engine itself.
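A meta-search aggregator of this kind can be sketched as rank fusion over the per-engine result lists (reciprocal-rank scoring is a common fusion heuristic used here purely for illustration; ADSE's own aggregation rule is not described in the abstract):

```python
def aggregate(result_lists):
    """Merge ranked lists from several engines: deduplicate by URL and
    score each hit by the sum of its reciprocal ranks across engines."""
    scores = {}
    for results in result_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / rank
    return sorted(scores, key=scores.get, reverse=True)
```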

  18. US Search and Rescue Mission Control Center functions

    NASA Technical Reports Server (NTRS)

    1977-01-01

    A satellite-aided Search and Rescue (SAR) mission concept consisting of a local-coverage bent-pipe system and a global-coverage system is described. The SAR instrument is to consist of a Canadian repeater and a French processor, for which Canada and France, respectively, are to evaluate health and trends. Performance evaluations of each system were provided. The United States and Canada will each have a Search and Rescue Mission Control Center (MCC), and their functions were also examined. A summary of the interface requirements necessary to perform each function was included, as well as the information requirements between the USMCC and each of its interfaces. Physical requirements of the USMCC, such as location and manning, were discussed.

  19. Adaptive interface for personalizing information seeking.

    PubMed

    Narayanan, S; Koppaka, Lavanya; Edala, Narasimha; Loritz, Don; Daley, Raymond

    2004-12-01

    An adaptive interface autonomously adjusts its display and available actions to current goals and abilities of the user by assessing user status, system task, and the context. Knowledge content adaptability is needed for knowledge acquisition and refinement tasks. In the case of knowledge content adaptability, the requirements of interface design focus on the elicitation of information from the user and the refinement of information based on patterns of interaction. In such cases, the emphasis on adaptability is on facilitating information search and knowledge discovery. In this article, we present research on adaptive interfaces that facilitates personalized information seeking from a large data warehouse. The resulting proof-of-concept system, called source recommendation system (SRS), assists users in locating and navigating data sources in the repository. Based on the initial user query and an analysis of the content of the search results, the SRS system generates a profile of the user tailored to the individual's context during information seeking. The user profiles are refined successively and are used in progressively guiding the user to the appropriate set of sources within the knowledge base. The SRS system is implemented as an Internet browser plug-in to provide a seamless and unobtrusive, personalized experience to the users during the information search process. The rationale behind our approach, system design, empirical evaluation, and implications for research on adaptive interfaces are described in this paper.

  20. 78 FR 35761 - Final Priority; National Institute on Disability and Rehabilitation Research-Disability and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-14

    ... include cloud computing applications that allow for personalized accessible interfaces. The RERC must... the article search feature at: www.federalregister.gov . Specifically, through the advanced search...

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abe Lederman

    This report contains the comprehensive summary of the work performed on the SBIR Phase II project (“Distributed Relevance Ranking in Heterogeneous Document Collections”) at Deep Web Technologies (http://www.deepwebtech.com). We have successfully completed all of the tasks defined in our SBIR Proposal work plan (see Table 1 - Phase II Tasks Status). The project was completed on schedule and we have successfully deployed an initial production release of the software architecture at DOE-OSTI for the Science.gov Alliance's search portal (http://www.science.gov). We have implemented a set of grid services that supports the extraction, filtering, aggregation, and presentation of search results from numerous heterogeneous document collections. Illustration 3 depicts the services required to perform QuickRank™ filtering of content as defined in our architecture documentation. Functionality that has been implemented is indicated by the services highlighted in green. We have successfully tested our implementation in a multi-node grid deployment both within the Deep Web Technologies offices and in a heterogeneous, geographically distributed grid environment. We have performed a series of load tests in which we successfully simulated 100 concurrent users submitting search requests to the system. This testing was performed on deployments of one-, two-, and three-node grids with services distributed in a number of different configurations. The preliminary results from these tests indicate that our architecture will scale well across multi-node grid deployments, but more work will be needed, beyond the scope of this project, to perform testing and experimentation to determine scalability and resiliency requirements. We are pleased to report that a production-quality version (1.4) of the Science.gov Alliance's search portal based on our grid architecture was released in June of 2006. This demonstration portal is currently available at http://science.gov/search30.
The portal allows the user to select from a number of collections grouped by category and enter a query expression (see Illustration 1 - Science.gov 3.0 Search Page). After the user clicks “search,” a results page is displayed that provides a list of results from the selected collections, ordered by relevance based on the query expression the user provided. Our grid-based solution to deep web search and document ranking has already gained attention within DOE, other government agencies, and a Fortune 50 company. We are committed to the continued development of grid-based solutions to large-scale data access, filtering, and presentation problems within the domain of Information Retrieval and the more general categories of content management, data mining and data analysis.

  2. The strategies WDK: a graphical search interface and web development kit for functional genomics databases

    PubMed Central

    Fischer, Steve; Aurrecoechea, Cristina; Brunk, Brian P.; Gao, Xin; Harb, Omar S.; Kraemer, Eileen T.; Pennington, Cary; Treatman, Charles; Kissinger, Jessica C.; Roos, David S.; Stoeckert, Christian J.

    2011-01-01

    Web sites associated with the Eukaryotic Pathogen Bioinformatics Resource Center (EuPathDB.org) have recently introduced a graphical user interface, the Strategies WDK, intended to make advanced searching and set and interval operations easy and accessible to all users. With a design guided by usability studies, the system helps motivate researchers to perform dynamic computational experiments and explore relationships across data sets. For example, PlasmoDB users seeking novel therapeutic targets may wish to locate putative enzymes that distinguish pathogens from their hosts, and that are expressed during appropriate developmental stages. When a researcher runs one of the approximately 100 searches available on the site, the search is presented as a first step in a strategy. The strategy is extended by running additional searches, which are combined with set operators (union, intersect or minus), or genomic interval operators (overlap, contains). A graphical display uses Venn diagrams to make the strategy’s flow obvious. The interface facilitates interactive adjustment of the component searches with changes propagating forward through the strategy. Users may save their strategies, creating protocols that can be shared with colleagues. The strategy system has now been deployed on all EuPathDB databases, and successfully deployed by other projects. The Strategies WDK uses a configurable MVC architecture that is compatible with most genomics and biological warehouse databases, and is available for download at code.google.com/p/strategies-wdk. Database URL: www.eupathdb.org PMID:21705364
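The strategy mechanism described above can be sketched with plain set operations over per-search result sets (the gene IDs and search results below are hypothetical, chosen to mirror the PlasmoDB example in the abstract):

```python
# Each search step returns a set of record IDs; strategy steps combine
# them with the set operators named in the Strategies WDK description.
def combine(a, b, op):
    return {"union": a | b, "intersect": a & b, "minus": a - b}[op]

enzymes   = {"g1", "g2", "g3"}        # hypothetical: putative enzymes
host_like = {"g2"}                    # hypothetical: genes shared with host
expressed = {"g1", "g2", "g4"}        # hypothetical: expressed at right stage

step1 = combine(enzymes, host_like, "minus")      # pathogen-specific enzymes
step2 = combine(step1, expressed, "intersect")    # ...that are also expressed
```

Genomic interval operators (overlap, contains) would work the same way at the strategy level, but compare coordinate ranges rather than ID membership.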

  3. A Real-Time Monitoring System of Industry Carbon Monoxide Based on Wireless Sensor Networks.

    PubMed

    Yang, Jiachen; Zhou, Jianxiong; Lv, Zhihan; Wei, Wei; Song, Houbing

    2015-11-20

    Carbon monoxide (CO) can burn or explode when its concentration exceeds safe limits. Hence, in this paper, a Wi-Fi-based real-time CO monitoring system is proposed for application in the construction industry, in which a sensor measuring node is designed with a low-frequency modulation method to acquire CO concentration reliably, and a digital filtering method is adopted for noise filtering. Using triangulation, the Wi-Fi network is constructed to transmit information and determine the position of nodes. The measured data are displayed on a computer or smartphone by a graphical interface. Experiments show that the monitoring system achieves excellent accuracy and stability in long-term continuous monitoring.
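The positioning step can be sketched as standard trilateration: subtracting the circle equations for three anchor nodes yields a 2x2 linear system for the unknown position (a generic illustration of the technique; the paper's exact positioning method is not detailed in the abstract):

```python
def trilaterate(p1, d1, p2, d2, p3, d3):
    """Position (x, y) from three anchor nodes p_i and measured
    distances d_i, via the linearized circle-intersection equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting circle 2 (and 3) from circle 1 cancels the x^2, y^2 terms.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1   # zero if the anchors are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

In practice the distances would be estimated from received signal strength, so a least-squares fit over more than three anchors is typically used to absorb measurement noise.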

  4. A synchronization method for wireless acquisition systems, application to brain computer interfaces.

    PubMed

    Foerster, M; Bonnet, S; van Langhenhove, A; Porcherot, J; Charvet, G

    2013-01-01

    A synchronization method for wireless acquisition systems has been developed and implemented on a wireless ECoG recording implant and on a wireless EEG recording helmet. The presented algorithm and hardware implementation allow the precise synchronization of several data streams from several sensor nodes for applications where timing is critical like in event-related potential (ERP) studies. The proposed method has been successfully applied to obtain visual evoked potentials and compared with a reference biosignal amplifier. The control over the exact sampling frequency allows reducing synchronization errors that will otherwise accumulate during a recording. The method is scalable to several sensor nodes communicating with a shared base station.

  5. A Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2011-01-01

    This report presents a self-stabilizing distributed clock synchronization protocol in the absence of faults in the system. It is focused on the distributed clock synchronization of an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. We present an outline of a deductive proof of the correctness of the protocol. A model of the protocol was mechanically verified using the Symbolic Model Verifier (SMV) for a variety of topologies. Results of the mechanical proof of the correctness of the protocol are provided. The model checking results have verified the correctness of the protocol as they apply to the networks with unidirectional and bidirectional links. In addition, the results confirm the claims of determinism and linear convergence. As a result, we conjecture that the protocol solves the general case of this problem. We also present several variations of the protocol and discuss that this synchronization protocol is indeed an emergent system.

  6. Perceptual grouping effects on cursor movement expectations.

    PubMed

    Dorneich, Michael C; Hamblin, Christopher J; Lancaster, Jeff A; Olofinboba, Olu

    2014-05-01

    Two studies were conducted to develop an understanding of factors that drive user expectations when navigating between discrete elements on a display via a limited degree-of-freedom cursor control device. For the Orion Crew Exploration Vehicle spacecraft, a free-floating cursor with a graphical user interface (GUI) would require an unachievable level of accuracy due to expected acceleration and vibration conditions during dynamic phases of flight. Therefore, Orion program proposed using a "caged" cursor to "jump" from one controllable element (node) on the GUI to another. However, nodes are not likely to be arranged on a rectilinear grid, and so movements between nodes are not obvious. Proximity between nodes, direction of nodes relative to each other, and context features may all contribute to user cursor movement expectations. In an initial study, we examined user expectations based on the nodes themselves. In a second study, we examined the effect of context features on user expectations. The studies established that perceptual grouping effects influence expectations to varying degrees. Based on these results, a simple rule set was developed to support users in building a straightforward mental model that closely matches their natural expectations for cursor movement. The results will help designers of display formats take advantage of the natural context-driven cursor movement expectations of users to reduce navigation errors, increase usability, and decrease access time. The rules set and guidelines tie theory to practice and can be applied in environments where vibration or acceleration are significant, including spacecraft, aircraft, and automobiles.

  7. WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.

    PubMed

    Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno

    2012-09-01

    The WORDGRAPH helps writers in visually choosing phrases while writing a text. It checks for the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, that both interfaces support context-sensitive word choice, and that the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.

  8. A sensor network to iPhone interface separating continuous and sporadic processes in mobile telemedicine.

    PubMed

    D'Angelo, Lorenzo T; Schneider, Michael; Neugebauer, Paul; Lueth, Tim C

    2011-01-01

    In this contribution, a new concept for interfacing sensor network nodes (motes) and smartphones is presented for the first time. In recent years, a variety of telemedicine applications on smartphones for data reception, display and transmission have been developed. However, it is not always practical or possible to have a smartphone application running continuously to accomplish these tasks. The presented system allows receiving and storing data continuously using a mote, and visualizing or sending it on the go using the smartphone as a user interface only when desired. Thus, the processes of data reception and storage run on a dedicated low-power system, and the smartphone's resources, including its battery, are not demanded continuously. Both the system concept and its realization with an Apple iPhone are presented.

  9. Using COMSOL Software on the Peregrine System | High-Performance Computing

    Science.gov Websites

    License status can be checked with the following command: lmstat.comsol. COMSOL can be used by starting the COMSOL GUI. On a compute node, the following will bring up the COMSOL interface: module purge; module load comsol/5.3; comsol. A batch job can be run with the following command: comsol batch -inputfile myinputfile.mph -outputfile out.mph. Parallel COMSOL jobs are also supported.

  10. Cervical lymphadenopathy in the dental patient: a review of clinical approach.

    PubMed

    Parisi, Ernesta; Glick, Michael

    2005-06-01

    Lymph node enlargement may be an incidental finding on examination, or may be associated with a patient complaint. It is likely that over half of all patients examined each day may have enlarged lymph nodes in the head and neck region. There are no written guidelines specifying when further evaluation of lymphadenopathy is necessary. With such a high frequency of occurrence, oral health care providers need to be able to determine when lymphadenopathy should be investigated further. Although most cervical lymphadenopathy is the result of a benign infectious etiology, clinicians should search for a precipitating cause and examine other nodal locations to exclude generalized lymphadenopathy. Lymph nodes larger than 1 cm in diameter are generally considered abnormal. Malignancy should be considered when palpable lymph nodes are identified in the supraclavicular region, or when nodes are rock hard, rubbery, or fixed in consistency. Patients with unexplained localized cervical lymphadenopathy presenting with a benign clinical picture should be observed for a 2- to 4-week period. Generalized lymphadenopathy should prompt further clinical investigation. This article reviews common causes of lymphadenopathy, and presents a methodical clinical approach to a patient with cervical lymphadenopathy.

  11. Interface dermatitis in skin lesions of Kikuchi-Fujimoto's disease: a histopathological marker of evolution into systemic lupus erythematosus?

    PubMed

    Paradela, S; Lorenzo, J; Martínez-Gómez, W; Yebra-Pimentel, T; Valbuena, L; Fonseca, E

    2008-12-01

    Kikuchi's disease (KD) is a self-limiting histiocytic necrotizing lymphadenitis (HNL). Cutaneous manifestations are frequent and usually show histopathological findings similar to those observed in the involved lymph nodes. HNL with histological features superimposable on those of KD has been described in patients with lupus erythematosus (LE), and a group of healthy patients previously reported as having HNL may evolve into LE after several months. To date, features that predict which HNL patients will have a self-limiting disease and which could develop LE have not been identified. In order to clarify the characteristics of skin lesions associated with KD, we report a case of HNL with evolution into systemic lupus erythematosus (SLE) and review previous reports of KD with cutaneous manifestations. A 17-year-old woman presented with a 4-month history of fever and generalised lymphadenopathy. A diagnosis of HNL was established based on a lymph node biopsy. One month later, she developed an erythematoedematous rash on her upper body, with histopathological findings of interface dermatitis. After 8 months, anti-nuclear antibodies (ANA) at a titre of 1/320, anti-dsDNA antibodies and a marked decrease in complement levels were detected. During the following 2 years, she developed diagnostic criteria for SLE, with arthralgias, pleuritis, aseptic meningitis, haemolytic anaemia and lupus nephritis. To our knowledge, 27 cases of nodal and cutaneous KD have been reported, 9 of which later developed LE. In all of these patients, the skin biopsy revealed interface dermatitis. Thus, skin biopsy revealed a pattern of interface dermatitis in all reviewed KD cases that evolved into LE. Although this histopathological finding was not previously considered significant, it might be a marker of evolution into LE.

  12. CBM First-level Event Selector Input Interface Demonstrator

    NASA Astrophysics Data System (ADS)

    Hutter, Dirk; de Cuveland, Jan; Lindenstruth, Volker

    2017-10-01

    CBM is a heavy-ion experiment at the future FAIR facility in Darmstadt, Germany. Featuring self-triggered front-end electronics and free-streaming read-out, event selection will exclusively be done by the First Level Event Selector (FLES). Designed as an HPC cluster with several hundred nodes its task is an online analysis and selection of the physics data at a total input data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links to self-contained, potentially overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows performing this task very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. A custom FPGA PCIe board, the FLES Interface Board (FLIB), is used to receive data via optical links and transfer them via DMA to the host’s memory. The current prototype of the FLIB features a Kintex-7 FPGA and provides up to eight 10 GBit/s optical links. A custom FPGA design has been developed for this board. DMA transfers and data structures are optimized for subsequent timeslice building. Index tables generated by the FPGA enable fast random access to the written data containers. In addition the DMA target buffers can directly serve as InfiniBand RDMA source buffers without copying the data. The usage of POSIX shared memory for these buffers allows data access from multiple processes. An accompanying HDL module has been developed to integrate the FLES link into the front-end FPGA designs. It implements the front-end logic interface as well as the link protocol. Prototypes of all Input Interface components have been implemented and integrated into the FLES test framework. This allows the implementation and evaluation of the foreseen CBM read-out chain.

  13. BBPH: Using progressive hedging within branch and bound to solve multi-stage stochastic mixed integer programs

    DOE PAGES

    Barnett, Jason; Watson, Jean -Paul; Woodruff, David L.

    2016-11-27

    Progressive hedging (PH), though an effective heuristic for solving stochastic mixed integer programs (SMIPs), is not guaranteed to converge in this case. Here, we describe BBPH, a branch and bound algorithm that uses PH at each node in the search tree such that, given sufficient time, it will always converge to a globally optimal solution. In addition to providing a theoretically convergent “wrapper” for PH applied to SMIPs, computational results demonstrate that for some difficult problem instances branch and bound can find improved solutions after exploring only a few nodes.

  14. Sensitivity and specificity of indocyanine green near-infrared fluorescence imaging in detection of metastatic lymph nodes in colorectal cancer: Systematic review and meta-analysis.

    PubMed

    Emile, Sameh H; Elfeki, Hossam; Shalaby, Mostafa; Sakr, Ahmad; Sileri, Pierpaolo; Laurberg, Søren; Wexner, Steven D

    2017-11-01

    This review aimed to determine the overall sensitivity and specificity of indocyanine green (ICG) near-infrared (NIR) fluorescence in sentinel lymph node (SLN) detection in colorectal cancer (CRC). A systematic search of electronic databases was conducted. Twelve studies including 248 patients were reviewed. The median sensitivity, specificity, and accuracy rates were 73.7%, 100%, and 75.7%, respectively. The pooled sensitivity and specificity rates were 71% and 84.6%. In conclusion, ICG-NIR fluorescence is a promising technique for detecting SLNs in CRC. © 2017 Wiley Periodicals, Inc.

  15. Visual Exploratory Search of Relationship Graphs on Smartphones

    PubMed Central

    Ouyang, Jianquan; Zheng, Hao; Kong, Fanbin; Liu, Tianming

    2013-01-01

    This paper presents a novel framework for Visual Exploratory Search of Relationship Graphs on Smartphones (VESRGS) that is composed of three major components: inference and representation of semantic relationship graphs on the Web via meta-search, visual exploratory search of relationship graphs through both querying and browsing strategies, and human-computer interactions via the multi-touch interface and mobile Internet on smartphones. In comparison with traditional lookup search methodologies, the proposed VESRGS system is characterized by the following perceived advantages. 1) It infers rich semantic relationships between the querying keywords and other related concepts from large-scale meta-search results from Google, Yahoo! and Bing search engines, and represents semantic relationships via graphs. 2) The exploratory search approach empowers users to naturally and effectively explore, adventure and discover knowledge in a rich information world of interlinked relationship graphs in a personalized fashion. 3) It effectively takes advantage of smartphones’ user-friendly interfaces, ubiquitous Internet connection, and portability. Our extensive experimental results have demonstrated that the VESRGS framework can significantly improve the users’ capability of seeking the most relevant relationship information to their own specific needs. We envision that the VESRGS framework can be a starting point for future exploration of novel, effective search strategies in the mobile Internet era. PMID:24223936

  16. Linking the EarthScope Data Virtual Catalog to the GEON Portal

    NASA Astrophysics Data System (ADS)

    Lin, K.; Memon, A.; Baru, C.

    2008-12-01

    The EarthScope Data Portal provides a unified, single point of access to EarthScope data and products from USArray, Plate Boundary Observatory (PBO), and San Andreas Fault Observatory at Depth (SAFOD) experiments. The portal features basic search and data access capabilities to allow users to discover and access EarthScope data using spatial, temporal, and other metadata-based (data type, station specific) search conditions. The portal search module is the user interface implementation of the EarthScope Data Search Web Service. This Web Service acts as a virtual catalog that in turn invokes Web services developed by IRIS (Incorporated Research Institutions for Seismology), UNAVCO (University NAVSTAR Consortium), and GFZ (German Research Center for Geosciences) to search for EarthScope data in the archives at each of these locations. These Web Services provide information about all resources (data) that match the specified search conditions. In this presentation we will describe how the EarthScope Data Search Web Service can be integrated into the GEON search application in the GEON Portal (see http://portal.geongrid.org). Thus, a search request issued at the GEON Portal will also search the EarthScope virtual catalog, thereby providing users seamless access to data in GEON as well as EarthScope via a common user interface.

  17. Database Search Engines: Paradigms, Challenges and Solutions.

    PubMed

    Verheggen, Kenneth; Martens, Lennart; Berven, Frode S; Barsnes, Harald; Vaudel, Marc

    2016-01-01

    The first step in identifying proteins from mass spectrometry based shotgun proteomics data is to infer peptides from tandem mass spectra, a task generally achieved using database search engines. In this chapter, the basic principles of database search engines are introduced with a focus on open source software, and the use of database search engines is demonstrated using the freely available SearchGUI interface. This chapter also discusses how to tackle general issues related to sequence database searching and shows how to minimize their impact.

  18. A New Interface for the Magnetics Information Consortium (MagIC) Paleo and Rock Magnetic Database

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Koppers, A. A. P.; Tauxe, L.; Constable, C.; Shaar, R.; Jonestrask, L.

    2014-12-01

    The Magnetics Information Consortium (MagIC) database (http://earthref.org/MagIC/) continues to improve the ease of uploading data, the creation of complex searches, data visualization, and data downloads for the paleomagnetic, geomagnetic, and rock magnetic communities. Data uploading has been simplified and no longer requires the use of the Excel SmartBook interface. Instead, properly formatted MagIC text files can be dragged-and-dropped onto an HTML 5 web interface. Data can be uploaded one table at a time to facilitate ease of uploading, and data error checking is done online on the whole dataset at once instead of incrementally in an Excel Console. Searching the database has improved with the addition of more sophisticated search parameters and with the ability to use them in complex combinations. Searches may also be saved as permanent URLs for easy reference or for use as a citation in a publication. Data visualization plots (ARAI, equal area, demagnetization, Zijderveld, etc.) are presented with the data when appropriate to aid the user in understanding the dataset. Data from the MagIC database may be downloaded from individual contributions or from online searches for offline use and analysis in the tab delimited MagIC text file format. With input from the paleomagnetic, geomagnetic, and rock magnetic communities, the MagIC database will continue to improve as a data warehouse and resource.

  19. Endometrial cancer - reduce to the minimum. A new paradigm for adjuvant treatments?

    PubMed Central

    2011-01-01

    Background Up to now, the role of adjuvant radiation therapy and the extent of lymph node dissection for early stage endometrial cancer are controversial. In order to clarify the current position of the given adjuvant treatment options, a systematic review was performed. Materials and methods Both the PubMed and ISI Web of Knowledge databases were searched using the following keywords and MESH headings: "Endometrial cancer", "Endometrial Neoplasms", "Endometrial Neoplasms/radiotherapy", "External beam radiation therapy", "Brachytherapy" and adequate combinations. Conclusion Recent data from randomized trials indicate that external beam radiation therapy - particularly in combination with extended lymph node dissection - or radical lymph node dissection increases toxicity without any improvement of overall survival rates. Thus, reduced surgical aggressiveness and limitation of radiotherapy to vaginal-vault-brachytherapy only is sufficient for most cases of early stage endometrial cancer. PMID:22118369

  20. A new method for producing automated seismic bulletins: Probabilistic event detection, association, and location

    DOE PAGES

    Draelos, Timothy J.; Ballard, Sanford; Young, Christopher J.; ...

    2015-10-01

    Given a set of observations within a specified time window, a fitness value is calculated at each grid node by summing station-specific conditional fitness values. Assuming each observation was generated by a refracted P wave, these values are proportional to the conditional probabilities that each observation was generated by a seismic event at the grid node. The node with highest fitness value is accepted as a hypothetical event location, subject to some minimal fitness value, and all arrivals within a longer time window consistent with that event are associated with it. During the association step, a variety of different phases are considered. In addition, once associated with an event, an arrival is removed from further consideration. While unassociated arrivals remain, the search for other events is repeated until none are identified.
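    The greedy grid-search loop described in this abstract can be sketched as follows. The Gaussian form of the station-conditional fitness, the threshold and window values, and the single-phase travel-time model are illustrative assumptions, not the authors' actual implementation.

```python
import math

def fitness(node, arrivals, travel_time, sigma=1.0):
    """Sum of station-conditional fitness values for a hypothetical origin at `node`.

    `arrivals` is a list of (station, arrival_time) pairs; the Gaussian shape
    is a stand-in for the paper's conditional probabilities.
    """
    total = 0.0
    for station, t_arr in arrivals:
        # residual between observed arrival and predicted (refracted P) arrival
        r = t_arr - travel_time(node, station)
        total += math.exp(-0.5 * (r / sigma) ** 2)
    return total

def build_events(grid, arrivals, travel_time, min_fitness=1.5, assoc_window=2.0):
    """Greedy detection loop: pick the best grid node, associate arrivals, repeat."""
    events = []
    remaining = list(arrivals)
    while remaining:
        best = max(grid, key=lambda n: fitness(n, remaining, travel_time))
        if fitness(best, remaining, travel_time) < min_fitness:
            break  # no node reaches the minimal fitness value
        # associate all arrivals consistent with an event at `best`
        assoc = [a for a in remaining
                 if abs(a[1] - travel_time(best, a[0])) <= assoc_window]
        if not assoc:
            break
        events.append((best, assoc))
        # once associated, an arrival is removed from further consideration
        remaining = [a for a in remaining if a not in assoc]
    return events
```

    With a toy 1-D grid and a constant-velocity travel-time function, two arrivals generated by an event at the middle node are located and associated in a single pass.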

  1. EMERSE: The Electronic Medical Record Search Engine

    PubMed Central

    Hanauer, David A.

    2006-01-01

    EMERSE (The Electronic Medical Record Search Engine) is an intuitive, powerful search engine for free-text documents in the electronic medical record. It offers multiple options for creating complex search queries yet has an interface that is easy enough to be used by those with minimal computer experience. EMERSE is ideal for retrospective chart reviews and data abstraction and may have potential for clinical care as well.

  2. Design of an On-Line Query Language for Full Text Patent Search.

    ERIC Educational Resources Information Center

    Glantz, Richard S.

    The design of an English-like query language and an interactive computer environment for searching the full text of the U.S. patent collection are discussed. Special attention is paid to achieving a transparent user interface, to providing extremely broad search capabilities (including nested substitution classes, Kleene star events, and domain…

  3. Lymph node harvest in colon and rectal cancer: Current considerations

    PubMed Central

    McDonald, James R; Renehan, Andrew G; O’Dwyer, Sarah T; Haboubi, Najib Y

    2012-01-01

    The prognostic significance of identifying lymph node (LN) metastases following surgical resection for colon and rectal cancer is well recognized and is reflected in accurate staging of the disease. An established body of evidence exists, demonstrating an association between a higher total LN count and improved survival, particularly for node negative colon cancer. In node positive disease, however, the lymph node ratio may represent a better prognostic indicator, although the impact of this on clinical treatment has yet to be universally established. By extension, strategies to increase surgical node harvest and/or laboratory methods to increase LN yield seem logical and might improve cancer staging. However, debate prevails as to whether or not these extrapolations are clinically relevant, particularly when very high LN counts are sought. Current guidelines recommend a minimum of 12 nodes harvested as the standard of care, yet the evidence for such is questionable as it is unclear whether increasing the LN count results in improved survival. Findings from modern treatments, including down-staging in rectal cancer using pre-operative chemoradiotherapy, paradoxically suggest that a lower LN count, or indeed complete absence of LNs, is associated with improved survival; implying that using a specific number of LNs harvested as a measure of surgical quality is not always appropriate. The pursuit of a sufficient LN harvest represents good clinical practice; however, recent evidence shows that exhaustive searching for very high LN yields may be unnecessary and has little influence on modern approaches to treatment. PMID:22347537

  4. Modeling the Webgraph: How Far We Are

    NASA Astrophysics Data System (ADS)

    Donato, Debora; Laura, Luigi; Leonardi, Stefano; Millozzi, Stefano

    The following sections are included: * Introduction * Preliminaries * WebBase * In-degree and out-degree * PageRank * Bipartite cliques * Strongly connected components * Stochastic models of the webgraph * Models of the webgraph * A multi-layer model * Large scale simulation * Algorithmic techniques for generating and measuring webgraphs * Data representation and multifiles * Generating webgraphs * Traversal with two bits for each node * Semi-external breadth first search * Semi-external depth first search * Computation of the SCCs * Computation of the bow-tie regions * Disjoint bipartite cliques * PageRank * Summary and outlook

  5. Developing a feasible neighbourhood search for solving hub location problem in a communication network

    NASA Astrophysics Data System (ADS)

    Rakhmawati, Fibri; Mawengkang, Herman; Buulolo, F.; Mardiningsih

    2018-01-01

    The hub location problem with single assignment is the problem of locating hubs and assigning the terminal nodes to hubs in order to minimize the cost of hub installation and the cost of routing the traffic in the network. There may also be capacity restrictions on the amount of traffic that can transit through hubs. This paper discusses how to model the polyhedral properties of the problem and develops a feasible neighbourhood search method to solve the model.

  6. Sandia Compact Sensor Node (SCSN) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HARRINGTON, JOHN

    2009-01-07

    The SCSN communication protocol is implemented in software and incorporates elements of Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Carrier Sense Multiple Access (CSMA) to reduce radio message collisions, latency, and power consumption. Alarm messages are expeditiously routed to a central node as a 'star' network with minimum overhead. Other messages can be routed along network links between any two nodes so that peer-to-peer communication is possible. Broadcast messages can be composed that flood the entire network or just specific portions with minimal radio traffic and latency. Two-way communication with sensor nodes, which sleep most of the time to conserve battery life, can occur at seven second intervals. SCSN software also incorporates special algorithms to minimize superfluous radio traffic that can result from excessive intrusion alarm messages. A built-in seismic detector is implemented with a geophone and software that distinguishes between pedestrian and vehicular targets. Other external sensors can be attached to a SCSN using supervised interface lines that are controlled by software. All software is written in the ANSI C language for ease of development, maintenance, and portability.

  7. Spacecraft On-Board Information Extraction Computer (SOBIEC)

    NASA Technical Reports Server (NTRS)

    Eisenman, David; Decaro, Robert E.; Jurasek, David W.

    1994-01-01

    The Jet Propulsion Laboratory is the Technical Monitor on an SBIR Program issued for Irvine Sensors Corporation to develop a highly compact, dual use massively parallel processing node known as SOBIEC. SOBIEC couples 3D memory stacking technology provided by nCUBE. The node contains sufficient network Input/Output to implement up to an order-13 binary hypercube. The benefit of this network is that it scales linearly as more processors are added, and it is a superset of other commonly used interconnect topologies such as: meshes, rings, toroids, and trees. In this manner, a distributed processing network can be easily devised and supported. The SOBIEC node has sufficient memory for most multi-computer applications, and also supports external memory expansion and DMA interfaces. The SOBIEC node is supported by a mature set of software development tools from nCUBE. The nCUBE operating system (OS) provides configuration and operational support for up to 8000 SOBIEC processors in an order-13 binary hypercube or any subset or partition(s) thereof. The OS is UNIX (USL SVR4) compatible, with C, C++, and FORTRAN compilers readily available. A stand-alone development system is also available to support SOBIEC test and integration.

  8. Geostatistical applications in ground-water modeling in south-central Kansas

    USGS Publications Warehouse

    Ma, T.-S.; Sophocleous, M.; Yu, Y.-S.

    1999-01-01

    This paper emphasizes the supportive role of geostatistics in applying ground-water models. Field data of 1994 ground-water level, bedrock, and saltwater-freshwater interface elevations in south-central Kansas were collected and analyzed using the geostatistical approach. Ordinary kriging was adopted to estimate initial conditions for ground-water levels and topography of the Permian bedrock at the nodes of a finite difference grid used in a three-dimensional numerical model. Cokriging was used to estimate initial conditions for the saltwater-freshwater interface. An assessment of uncertainties in the estimated data is presented. The kriged and cokriged estimation variances were analyzed to evaluate the adequacy of data employed in the modeling. Although water levels and bedrock elevations are well described by spherical semivariogram models, additional data are required for better cokriging estimation of the interface data. The geostatistically analyzed data were employed in a numerical model of the Siefkes site in the project area. Results indicate that the computed chloride concentrations and ground-water drawdowns reproduced the observed data satisfactorily.
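    The ordinary kriging estimator and the spherical semivariogram mentioned in this abstract can be stated in their standard textbook forms; the paper's specific parameter values (nugget, sill, range) are not given here, so the symbols below are generic.

```latex
% Ordinary kriging estimator with unbiasedness constraint
\[
  \hat{Z}(x_0) = \sum_{i=1}^{n} \lambda_i\, Z(x_i), \qquad \sum_{i=1}^{n} \lambda_i = 1,
\]
% Weights from the ordinary kriging system (\mu is a Lagrange multiplier,
% \gamma the semivariogram)
\[
  \sum_{j=1}^{n} \lambda_j\, \gamma(x_i, x_j) + \mu = \gamma(x_i, x_0),
  \quad i = 1, \dots, n.
\]
% Standard spherical semivariogram model with nugget c_0, sill c_0 + c, range a
\[
  \gamma(h) =
  \begin{cases}
    c_0 + c\left(\dfrac{3h}{2a} - \dfrac{h^3}{2a^3}\right), & 0 < h \le a,\\[4pt]
    c_0 + c, & h > a.
  \end{cases}
\]
```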

  9. Concurrent evolution of feature extractors and modular artificial neural networks

    NASA Astrophysics Data System (ADS)

    Hannak, Victor; Savakis, Andreas; Yang, Shanchieh Jay; Anderson, Peter

    2009-05-01

    This paper presents a new approach for the design of feature-extracting recognition networks that do not require expert knowledge in the application domain. Feature-Extracting Recognition Networks (FERNs) are composed of interconnected functional nodes (feurons), which serve as feature extractors, and are followed by a subnetwork of traditional neural nodes (neurons) that act as classifiers. A concurrent evolutionary process (CEP) is used to search the space of feature extractors and neural networks in order to obtain an optimal recognition network that simultaneously performs feature extraction and recognition. By constraining the hill-climbing search functionality of the CEP on specific parts of the solution space, i.e., individually limiting the evolution of feature extractors and neural networks, it was demonstrated that concurrent evolution is a necessary component of the system. Application of this approach to a handwritten digit recognition task illustrates that the proposed methodology is capable of producing recognition networks that perform in line with other methods without the need for expert knowledge in image processing.

  10. The Influence of Gender Difference on the Information-Seeking Behaviors for the Graphical Interface of Children's Digital Library

    ERIC Educational Resources Information Center

    Hsieh, Tsia-ying; Wu, Ko-chiu

    2015-01-01

    Children conducting searches using the interfaces of library websites often encounter obstacles due to typographical errors, digital divides, or a failure to grasp keywords. Satisfaction with a given interface may also vary according to the gender of the user, making it a variable in information seeking behavior. Children benefit more from…

  11. KONAGAbase: a genomic and transcriptomic database for the diamondback moth, Plutella xylostella.

    PubMed

    Jouraku, Akiya; Yamamoto, Kimiko; Kuwazaki, Seigo; Urio, Masahiro; Suetsugu, Yoshitaka; Narukawa, Junko; Miyamoto, Kazuhisa; Kurita, Kanako; Kanamori, Hiroyuki; Katayose, Yuichi; Matsumoto, Takashi; Noda, Hiroaki

    2013-07-09

    The diamondback moth (DBM), Plutella xylostella, is one of the most harmful insect pests for crucifer crops worldwide. DBM has rapidly evolved high resistance to most conventional insecticides such as pyrethroids, organophosphates, fipronil, spinosad, Bacillus thuringiensis, and diamides. Therefore, it is important to develop genomic and transcriptomic DBM resources for analysis of genes related to insecticide resistance, both to clarify the mechanism of resistance of DBM and to facilitate the development of insecticides with a novel mode of action for more effective and environmentally less harmful insecticide rotation. To contribute to this goal, we developed KONAGAbase, a genomic and transcriptomic database for DBM (KONAGA is the Japanese word for DBM). KONAGAbase provides (1) transcriptomic sequences of 37,340 ESTs/mRNAs and 147,370 RNA-seq contigs which were clustered and assembled into 84,570 unigenes (30,695 contigs, 50,548 pseudo singletons, and 3,327 singletons); and (2) genomic sequences of 88,530 WGS contigs with 246,244 degenerate contigs and 106,455 singletons from which 6,310 de novo identified repeat sequences and 34,890 predicted gene-coding sequences were extracted. The unigenes and predicted gene-coding sequences were clustered and 32,800 representative sequences were extracted as a comprehensive putative gene set. These sequences were annotated with BLAST descriptions, Gene Ontology (GO) terms, and Pfam descriptions, respectively. KONAGAbase contains rich graphical user interface (GUI)-based web interfaces for easy and efficient searching, browsing, and downloading sequences and annotation data. Five useful search interfaces consisting of BLAST search, keyword search, BLAST result-based search, GO tree-based search, and genome browser are provided. KONAGAbase is publicly available from our website (http://dbm.dna.affrc.go.jp/px/) through standard web browsers. 
KONAGAbase provides comprehensive DBM transcriptomic and draft genomic sequences with useful annotation information and easy-to-use web interfaces, which helps researchers to efficiently search for target sequences such as insecticide resistance-related genes. KONAGAbase will be continuously updated, and additional genomic/transcriptomic resources and analysis tools will be provided for further efficient analysis of the mechanism of insecticide resistance and the development of effective insecticides with a novel mode of action for DBM.

  12. OpenSearch (ECHO-ESIP) & REST API for Earth Science Data Access

    NASA Astrophysics Data System (ADS)

    Mitchell, A.; Cechini, M.; Pilone, D.

    2010-12-01

    This presentation will provide a brief technical overview of OpenSearch, the Earth Science Information Partners (ESIP) Federated Search framework, and the REST architecture; discuss NASA’s Earth Observing System (EOS) ClearingHOuse’s (ECHO) implementation lessons learned; and demonstrate the simplified usage of these technologies. SOAP, as a framework for web service communication has numerous advantages for Enterprise applications and Java/C# type programming languages. As a technical solution, SOAP has been a reliable framework on top of which many applications have been successfully developed and deployed. However, as interest grows for quick development cycles and more intriguing “mashups,” the SOAP API loses its appeal. Lightweight and simple are the vogue characteristics that are sought after. Enter the REST API architecture and OpenSearch format. Both of these items provide a new path for application development addressing some of the issues unresolved by SOAP. ECHO has made available all of its discovery, order submission, and data management services through a publicly accessible SOAP API. This interface is utilized by a variety of ECHO client and data partners to provide valuable capabilities to end users. As ECHO interacted with current and potential partners looking to develop Earth Science tools utilizing ECHO, it became apparent that the development overhead required to interact with the SOAP API was a growing barrier to entry. ECHO acknowledged the technical issues that were being uncovered by its partner community and chose to provide two new interfaces for interacting with the ECHO metadata catalog. The first interface is built upon the OpenSearch format and ESIP Federated Search framework. Leveraging these two items, a client (ECHO-ESIP) was developed with a focus on simplified searching and results presentation. The second interface is built upon the Representational State Transfer (REST) architecture. 
Leveraging the REST architecture, a new API has been made available that will provide access to the entire SOAP API suite of services. The results of these development activities have not only positioned ECHO to engage in the thriving world of mashup applications, but have also provided an excellent real-world case study of how to successfully leverage these emerging technologies.

  13. A Fast, Minimalist Search Tool for Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Lynnes, C. S.; Macharrie, P. G.; Elkins, M.; Joshi, T.; Fenichel, L. H.

    2005-12-01

    We present a tool that emphasizes speed and simplicity in searching remotely sensed Earth Science data. The tool, nicknamed "Mirador" (Spanish for a scenic overlook), provides only four free-text search form fields, for Keywords, Location, Data Start and Data Stop. This contrasts with many current Earth Science search tools that offer highly structured interfaces in order to ensure precise, non-zero results. The disadvantages of the structured approach lie in its complexity and resultant learning curve, as well as the time it takes to formulate and execute the search, thus discouraging iterative discovery. On the other hand, the success of the basic Google search interface shows that many users are willing to forgo high search precision if the search process is fast enough to enable rapid iteration. Therefore, we employ several methods to increase the speed of search formulation and execution. Search formulation is expedited by the minimalist search form, with only one required field. Also, a gazetteer enables the use of geographic terms as shorthand for latitude/longitude coordinates. The search execution is accelerated by initially presenting dataset results (returned from a Google Mini appliance) with an estimated number of "hits" for each dataset based on the user's space-time constraints. The more costly file-level search is executed against a PostgreSQL database only when the user "drills down", and then covering only the fraction of the time period needed to return the next page of results. The simplicity of the search form makes the tool easy to learn and use, and the speed of the searches enables an iterative form of data discovery.
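    The two-tier strategy described here (a cheap dataset-level pass with hit estimates first, and the costly paged file-level search only on drill-down) can be sketched as follows. The in-memory catalog, dataset names, and granule IDs are invented stand-ins for Mirador's Google Mini index and file-level database.

```python
# Toy two-tier search: dataset-level results first, file-level on drill-down.
# The catalog contents below are illustrative, not real Mirador holdings.
CATALOG = {
    "MOD08_D3": {"keywords": {"aerosol", "modis"}, "granules": ["g1", "g2", "g3"]},
    "AIRS_L2":  {"keywords": {"temperature", "airs"}, "granules": ["g4", "g5"]},
}

def dataset_search(keyword):
    """Fast first pass: matching datasets with an estimated number of hits."""
    return [(name, len(meta["granules"]))
            for name, meta in CATALOG.items()
            if keyword in meta["keywords"]]

def drill_down(dataset, page=0, page_size=2):
    """Costly second pass, paged so only the slice needed for display is fetched."""
    granules = CATALOG[dataset]["granules"]
    return granules[page * page_size:(page + 1) * page_size]
```

    A user would first see `dataset_search("aerosol")` style results with hit counts, and only fetching page after page of `drill_down(...)` results incurs the file-level cost.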

  14. LEGION: Lightweight Expandable Group of Independently Operating Nodes

    NASA Technical Reports Server (NTRS)

    Burl, Michael C.

    2012-01-01

    LEGION is a lightweight C-language software library that enables distributed asynchronous data processing with a loosely coupled set of compute nodes. Loosely coupled means that a node can offer itself in service to a larger task at any time and can withdraw itself from service at any time, provided it is not actively engaged in an assignment. The main program, i.e., the one attempting to solve the larger task, does not need to know up front which nodes will be available, how many nodes will be available, or at what times the nodes will be available, which is normally the case in a "volunteer computing" framework. The LEGION software accomplishes its goals by providing message-based, inter-process communication similar to MPI (message passing interface), but without the tight coupling requirements. The software is lightweight and easy to install as it is written in standard C with no exotic library dependencies. LEGION has been demonstrated in a challenging planetary science application in which a machine learning system is used in closed-loop fashion to efficiently explore the input parameter space of a complex numerical simulation. The machine learning system decides which jobs to run through the simulator; then, through LEGION calls, the system farms those jobs out to a collection of compute nodes, retrieves the job results as they become available, and updates a predictive model of how the simulator maps inputs to outputs. The machine learning system decides which new set of jobs would be most informative to run given the results so far; this basic loop is repeated until sufficient insight into the physical system modeled by the simulator is obtained.
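    The closed loop described above (farm jobs out to available nodes, retrieve results as they complete, update a model, choose the next batch) can be illustrated with a toy stand-in. The thread pool, the quadratic "simulator", and the probe-around-the-best policy are all assumptions for illustration; LEGION itself is a C library using message-based communication, not this Python API.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def simulator(x):
    # stand-in for an expensive numerical simulation run on a compute node
    return (x - 3) ** 2

def choose_next_jobs(results):
    # stand-in "machine learning" policy: probe around the best input seen so far
    best_x = min(results, key=results.get)
    return [best_x + d for d in (-0.5, -0.25, 0.25, 0.5)]

results = {}
jobs = [0.0, 2.0, 4.0, 6.0]              # initial design over the input space
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(3):                   # a few refinement rounds
        futures = {pool.submit(simulator, x): x for x in jobs}
        for fut in as_completed(futures):  # retrieve results as they become available
            results[futures[fut]] = fut.result()
        jobs = choose_next_jobs(results)

best = min(results, key=results.get)     # best input found so far
```

    The pattern matters more than the pool implementation: the main program never needs to know how many workers exist, only how to submit jobs and collect whatever results come back.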

  15. Building a Smart Portal for Astronomy

    NASA Astrophysics Data System (ADS)

    Derriere, S.; Boch, T.

    2011-07-01

    The development of a portal for accessing astronomical resources is not an easy task. The ever-increasing complexity of the data products can result in very complex user interfaces, requiring a lot of effort and learning from the user in order to perform searches. This is often a design choice, where the user must explicitly set many constraints, while the portal search logic remains simple. We investigated a different approach, where the query interface is kept as simple as possible (ideally, a simple text field, like for Google search), and the search logic is made much more complex to interpret the query in a relevant manner. We will present the implications of this approach in terms of interpretation and categorization of the query parameters (related to astronomical vocabularies), translation (mapping) of these concepts into the portal components metadata, identification of query schemes and use cases matching the input parameters, and delivery of query results to the user.

  16. User interface issues in supporting human-computer integrated scheduling

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Biefeld, Eric W.

    1991-01-01

    Explored here are the user interface problems encountered with the Operations Missions Planner (OMP) project at the Jet Propulsion Laboratory (JPL). OMP uses a unique iterative approach to planning that places additional requirements on the user interface, particularly to support system development and maintenance. These requirements are necessary to support the concepts of heuristically controlled search, in-progress assessment, and iterative refinement of the schedule. The techniques used to address the OMP interface needs are given.

  17. Implementing the Army NetCentric Data Strategy in a ServiceOriented Environment

    DTIC Science & Technology

    2009-04-23

    Data Discovery, Data Retrieval, Data Subscription, Data Access, Artifact Discovery, Federated Search, and Data Search… define common interfaces to search and retrieve data across the enterprise. Patterns: Search, Status, Receive. Services: Federated Search, Artifact…

  18. Data management routines for reproducible research using the G-Node Python Client library

    PubMed Central

    Sobolev, Andrey; Stoewer, Adrian; Pereira, Michael; Kellner, Christian J.; Garbers, Christian; Rautenberg, Philipp L.; Wachtler, Thomas

    2014-01-01

    Structured, efficient, and secure storage of experimental data and associated meta-information constitutes one of the most pressing technical challenges in modern neuroscience, and does so particularly in electrophysiology. The German INCF Node aims to provide open-source solutions for this domain that support the scientific data management and analysis workflow, and thus facilitate future data access and reproducible research. G-Node provides a data management system, accessible through an application interface, that is based on a combination of standardized data representation and flexible data annotation to account for the variety of experimental paradigms in electrophysiology. The G-Node Python Library exposes these services to the Python environment, enabling researchers to organize and access their experimental data using their familiar tools while gaining the advantages that a centralized storage entails. The library provides powerful query features, including data slicing and selection by metadata, as well as fine-grained permission control for collaboration and data sharing. Here we demonstrate key actions in working with experimental neuroscience data, such as building a metadata structure, organizing recorded data in datasets, annotating data, or selecting data regions of interest, that can be automated to a large degree using the library. Compliant with existing de-facto standards, the G-Node Python Library is compatible with many Python tools in the field of neurophysiology and thus enables seamless integration of data organization into the scientific data workflow. PMID:24634654

  19. Data management routines for reproducible research using the G-Node Python Client library.

    PubMed

    Sobolev, Andrey; Stoewer, Adrian; Pereira, Michael; Kellner, Christian J; Garbers, Christian; Rautenberg, Philipp L; Wachtler, Thomas

    2014-01-01

    Structured, efficient, and secure storage of experimental data and associated meta-information constitutes one of the most pressing technical challenges in modern neuroscience, and does so particularly in electrophysiology. The German INCF Node aims to provide open-source solutions for this domain that support the scientific data management and analysis workflow, and thus facilitate future data access and reproducible research. G-Node provides a data management system, accessible through an application interface, that is based on a combination of standardized data representation and flexible data annotation to account for the variety of experimental paradigms in electrophysiology. The G-Node Python Library exposes these services to the Python environment, enabling researchers to organize and access their experimental data using their familiar tools while gaining the advantages that a centralized storage entails. The library provides powerful query features, including data slicing and selection by metadata, as well as fine-grained permission control for collaboration and data sharing. Here we demonstrate key actions in working with experimental neuroscience data, such as building a metadata structure, organizing recorded data in datasets, annotating data, or selecting data regions of interest, that can be automated to a large degree using the library. Compliant with existing de-facto standards, the G-Node Python Library is compatible with many Python tools in the field of neurophysiology and thus enables seamless integration of data organization into the scientific data workflow.

  20. The Cold Dark Matter Search test stand warm electronics card

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, Bruce; /Colorado U., Denver; Hansen, Sten

    A card which does the signal processing for four SQUID amplifiers and two charge sensitive channels is described. The card performs the same functions as is presently done with two custom 9U x 280mm Eurocard modules, a commercial multi-channel VME digitizer, a PCI to GPIB interface, a PCI to VME interface and a custom built linear power supply. By integrating these functions onto a single card and using the power over Ethernet standard, the infrastructure requirements for instrumenting a Cold Dark Matter Search (CDMS) detector test stand are significantly reduced.

  1. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to and from the working memory of remote nodes. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C(sup 3)I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge base partitions indicate that significant speed increases, including superlinear in some cases, are possible.
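    The remote assert/retract mechanism described above can be illustrated with a minimal sketch; the Node class and fact tuples below are assumptions for illustration, not actual PCLIPS code.

```python
# Minimal sketch (assumed design, not actual PCLIPS syntax) of asserting and
# retracting facts in the working memory of remote nodes.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.working_memory = set()

    def assert_fact(self, fact):
        self.working_memory.add(fact)

    def retract_fact(self, fact):
        self.working_memory.discard(fact)

nodes = {i: Node(i) for i in range(4)}   # a toy 4-node "hypercube"

# Remote assert: node 0 places facts in node 3's working memory, then
# retracts one of them.
nodes[3].assert_fact(("tray", "on", "counter"))
nodes[3].assert_fact(("food", "hot"))
nodes[3].retract_fact(("food", "hot"))

print(sorted(nodes[3].working_memory))  # [('tray', 'on', 'counter')]
```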

  2. The CEOS International Directory Network: Progress and Plans, Spring, 1999

    NASA Technical Reports Server (NTRS)

    Olsen, Lola M.

    1999-01-01

    The Global Change Master Directory (GCMD) serves as the software development hub for the Committee on Earth Observation Satellites' (CEOS) International Directory Network (IDN). The GCMD has upgraded the software for the IDN nodes as Version 7 of the GCMD: MD7-Oracle and MD7-Isite, as well as three other MD7 experimental interfaces. The contribution by DLR representatives (Germany) of the DLR Thesaurus will be demonstrated as an educational tool for use with MD7-Isite. The software will be installed at twelve nodes around the world: Brazil, Argentina, the Netherlands, Canada, France, Germany, Italy, Japan, Australia, New Zealand, Switzerland, and several sites in the United States. Representing NASA for the International Directory Network and the CEOS Data Access Subgroup, NASA's contribution to this international interoperability effort will be updated. Discussion will include interoperability with the CEOS Interoperability Protocol (CIP), features of the latest version of the software, including upgraded capabilities for distributed input by the IDN nodes, installation logistics, "mirroring", population objectives, and future plans.

  4. Digital system for structural dynamics simulation

    NASA Technical Reports Server (NTRS)

    Krauter, A. I.; Lagace, L. J.; Wojnar, M. K.; Glor, C.

    1982-01-01

    State-of-the-art digital hardware and software were developed for the simulation of complex structural dynamic interactions, such as those which occur in rotating structures (engine systems). The system was designed to use an array of processors in which the computation for each physical subelement or functional subsystem would be assigned to a single specific processor in the simulator. These node processors are microprogrammed bit-slice microcomputers which function autonomously and can communicate with each other and with a central control minicomputer over parallel digital lines. Inter-processor nearest-neighbor communication busses pass the constants which represent physical constraints and boundary conditions. Each node processor is connected to its six nearest-neighbor node processors to simulate the actual physical interfaces of real substructures. Computer-generated finite element mesh and force models can be developed with the aid of the central control minicomputer. The control computer also oversees the animation of a graphics display system and disk-based mass storage, along with the individual processing elements.

  5. Development of Implantable Wireless Sensor Nodes for Animal Husbandry and MedTech Innovation.

    PubMed

    Lu, Jian; Zhang, Lan; Zhang, Dapeng; Matsumoto, Sohei; Hiroshima, Hiroshi; Maeda, Ryutaro; Sato, Mizuho; Toyoda, Atsushi; Gotoh, Takafumi; Ohkohchi, Nobuhiro

    2018-03-26

    In this paper, we report the development, evaluation, and application of ultra-small low-power wireless sensor nodes for advancing animal husbandry, as well as for innovation of medical technologies. A radio frequency identification (RFID) chip with hybrid interface and neglectable power consumption was introduced to enable switching of ON/OFF and measurement mode after implantation. A wireless power transmission system with a maximum efficiency of 70% and an access distance of up to 5 cm was developed to allow the sensor node to survive for a duration of several weeks from a few minutes' remote charge. The results of field tests using laboratory mice and a cow indicated the high accuracy of the collected biological data and bio-compatibility of the package. As a result of extensive application of the above technologies, a fully solid wireless pH sensor and a surgical navigation system using artificial magnetic field and a 3D MEMS magnetic sensor are introduced in this paper, and the preliminary experimental results are presented and discussed.

  6. A Loader for Executing Multi-Binary Applications on the Thinking Machines CM-5: It's Not Just for SPMD Anymore

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.

    1995-01-01

    The Thinking Machines CM-5 platform was designed to run single program, multiple data (SPMD) applications, i.e., to run a single binary across all nodes of a partition, with each node possibly operating on different data. Certain classes of applications, such as multi-disciplinary computational fluid dynamics codes, are facilitated by the ability to have subsets of the partition nodes running different binaries. In order to extend the CM-5 system software to permit such applications, a multi-program loader was developed. This system is based on the dld loader which was originally developed for workstations. This paper provides a high level description of dld, and describes how it was ported to the CM-5 to provide support for multi-binary applications. Finally, it elaborates how the loader has been used to implement the CM-5 version of MPIRUN, a portable facility for running multi-disciplinary/multi-zonal MPI (Message-Passing Interface Standard) codes.

  7. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work are performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of two eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.
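    The time-integration scheme named above, the method of lines with the classical explicit 4th-order Runge-Kutta step, can be sketched on a scalar model problem; the ODE below is purely illustrative, not the Navier-Stokes discretization itself.

```python
# Sketch of the classical 4th-order explicit Runge-Kutta step used in a
# method-of-lines setting (model problem: du/dt = -u, exact solution exp(-t)).
import math

def rk4_step(f, u, t, dt):
    """One classical RK4 step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):                      # integrate to t = 1
    u = rk4_step(lambda t, u: -u, u, t, dt)
    t += dt
print(abs(u - math.exp(-1.0)) < 1e-8)    # True: 4th-order accuracy
```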

  8. Engineering and Probing Topological Properties of Dirac Semimetal Films by Asymmetric Charge Transfer.

    PubMed

    Villanova, John W; Barnes, Edwin; Park, Kyungwha

    2017-02-08

    Dirac semimetals (DSMs) have topologically robust three-dimensional Dirac (doubled Weyl) nodes with Fermi-arc states. In heterostructures involving DSMs, charge transfer occurs at the interfaces, which can be used to probe and control their bulk and surface topological properties through surface-bulk connectivity. Here we demonstrate that despite a band gap in DSM films, asymmetric charge transfer at the surface enables one to accurately identify locations of the Dirac-node projections from gapless band crossings and to examine and engineer properties of the topological Fermi-arc surface states connecting the projections, by simulating adatom-adsorbed DSM films using a first-principles method with an effective model. The positions of the Dirac-node projections are insensitive to charge transfer amount or slab thickness except for extremely thin films. By varying the amount of charge transfer, unique spin textures near the projections and a separation between the Fermi-arc states change, which can be observed by gating without adatoms.

  9. A New Concept of Controller for Accelerators' Magnet Power Supplies

    NASA Astrophysics Data System (ADS)

    Visintini, Roberto; Cleva, Stefano; Cautero, Marco; Ciesla, Tomasz

    2016-04-01

    The complexity of a particle accelerator implies the remote control of a very large number of devices of many different typologies, either distributed along the accelerator or concentrated in locations often far away from each other. Local and global control systems handle the devices through dedicated communication channels and interfaces. Each controlled device is in practice a “smart node” performing a specific task, and very often those tasks are managed in real-time mode. The performance required of the control interface influences the cost of the distributed nodes as well as their hardware and software implementation. In large facilities (e.g. CERN) the “smart nodes” derive from specific in-house developments. Alternatively, it is possible to find commercial devices on the market whose performances (and prices) span a broad range, from proprietary designs (customizable to the user's needs) to open source/design. In this paper, we describe some applications of smart nodes in the particle accelerator field, with special focus on power supplies for magnets. In modern accelerators, in fact, magnets and their associated power supplies constitute systems distributed along the accelerator itself, strongly interfaced with the remote control system as well as with more specific (and often more demanding) orbit/trajectory feedback systems. We give examples of actual systems, installed and operational on two light sources, Elettra and FERMI, located in the Elettra Research Center in Trieste, Italy.

  10. Environmental Dataset Gateway (EDG) Search Widget

    EPA Pesticide Factsheets

    Use the Environmental Dataset Gateway (EDG) to find and access EPA's environmental resources. Many options are available for easily reusing EDG content in other applications. This allows individuals to provide direct access to EPA's metadata outside the EDG interface. The EDG Search Widget makes it possible to search the EDG from another web page or application. The search widget can be included on your website by simply inserting one or two lines of code. Users can type a search term or Lucene search query in the search field and retrieve a pop-up list of records that match that search.

  11. Combining Thermal And Structural Analyses

    NASA Technical Reports Server (NTRS)

    Winegar, Steven R.

    1990-01-01

    Computer code makes thermal and structural programs compatible so stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
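    The kind of node-to-element temperature transfer SNIP automates can be sketched with a simple nearest-node rule; the function and data below are illustrative assumptions, and the actual SNIP mapping algorithm may differ.

```python
# Hypothetical sketch of mapping finite-difference thermal-node temperatures
# onto structural elements with no one-to-one node-to-element correlation.
# Nearest-node lookup is used here purely for illustration.

def nearest_node_temperature(element_centroid, thermal_nodes):
    """thermal_nodes: list of ((x, y, z), temperature) pairs."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, temp = min(thermal_nodes,
                  key=lambda nt: dist2(nt[0], element_centroid))
    return temp

thermal_nodes = [((0.0, 0.0, 0.0), 300.0),
                 ((1.0, 0.0, 0.0), 350.0),
                 ((0.0, 1.0, 0.0), 325.0)]

# The element centroid sits closest to the node at (1, 0, 0).
print(nearest_node_temperature((0.9, 0.1, 0.0), thermal_nodes))  # 350.0
```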

  12. Mixed-Mode Decohesion Elements for Analyses of Progressive Delamination

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; deMoura, Marcelo F.

    2001-01-01

    A new 8-node decohesion element with mixed mode capability is proposed and demonstrated. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a strain softening law to track the damage state of the interface. The method can be used in conjunction with conventional material degradation procedures to account for inplane and intra-laminar damage modes. The accuracy of the predictions is evaluated in single mode delamination tests, in the mixed-mode bending test, and in a structural configuration consisting of the debonding of a stiffener flange from its skin.
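    A single-parameter bilinear strain-softening law of the kind described above can be sketched as follows; the constants and function are illustrative assumptions, not the element's actual constitutive model.

```python
# Hedged sketch of a bilinear cohesive (strain-softening) law driven by one
# displacement-based damage parameter d. Constants are illustrative only.

def traction(delta, delta0=0.01, deltaF=0.1, K=1000.0):
    """Linear up to delta0, softening to zero traction at deltaF."""
    if delta <= delta0:
        return K * delta                      # undamaged, linear branch
    if delta >= deltaF:
        return 0.0                            # fully debonded
    # Damage parameter grows from 0 at delta0 to 1 at deltaF.
    d = (deltaF * (delta - delta0)) / (delta * (deltaF - delta0))
    return (1.0 - d) * K * delta              # softening branch

print(traction(0.005))   # 5.0  (elastic branch)
print(traction(0.1))     # 0.0  (fully damaged)
```

    On the softening branch this reduces to a straight line from the peak traction K*delta0 down to zero at deltaF, which is the defining property of a bilinear law.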

  13. A 2-D Interface Element for Coupled Analysis of Independently Modeled 3-D Finite Element Subdomains

    NASA Technical Reports Server (NTRS)

    Kandil, Osama A.

    1998-01-01

    Over the past few years, the development of the interface technology has provided an analysis framework for embedding detailed finite element models within finite element models which are less refined. This development has enabled the use of cascading substructure domains without the constraint of coincident nodes along substructure boundaries. The approach used for the interface element is based on an alternate variational principle often used in deriving hybrid finite elements. The resulting system of equations exhibits a high degree of sparsity but gives rise to a non-positive definite system which causes difficulties with many of the equation solvers in general-purpose finite element codes. Hence the global system of equations is generally solved using a decomposition procedure with pivoting. The research reported to date for the interface element includes the one-dimensional line interface element and the two-dimensional surface interface element. Several large-scale simulations, including geometrically nonlinear problems, have been reported using the one-dimensional interface element technology; however, only limited applications are available for the surface interface element. In the applications reported to date, the geometries of the interfaced domains exactly match each other even though the spatial discretization within each domain may be different. As such, the spatial modeling of each domain, the interface elements and the assembled system is still laborious. The present research is focused on developing a rapid modeling procedure based on a parametric interface representation of independently defined subdomains which are also independently discretized.
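    The need for pivoting with non-positive-definite systems, noted above, can be seen in a minimal sketch: Gaussian elimination with partial row pivoting succeeds on a small indefinite system whose leading pivot is zero (the matrix is illustrative, not an assembled interface-element system).

```python
# Sketch: Gaussian elimination with partial row pivoting. Without the row
# interchange, the zero leading entry of this indefinite matrix would cause
# a division by zero.

def solve_pivoted(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    A = [row[:] for row in A]                 # work on copies
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                              # back substitution
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

# Saddle-point-like system with a zero leading diagonal entry.
A = [[0.0, 1.0], [1.0, 2.0]]
b = [3.0, 8.0]
print(solve_pivoted(A, b))  # [2.0, 3.0]
```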

  14. The role of MRI in axillary lymph node imaging in breast cancer patients: a systematic review.

    PubMed

    Kuijs, V J L; Moossdorff, M; Schipper, R J; Beets-Tan, R G H; Heuts, E M; Keymeulen, K B M I; Smidt, M L; Lobbes, M B I

    2015-04-01

    To assess whether MRI can exclude axillary lymph node metastasis, potentially replacing sentinel lymph node biopsy (SLNB), and consequently eliminating the risk of SLNB-associated morbidity. PubMed, Cochrane, Medline and Embase databases were searched for relevant publications up to July 2014. Studies were selected based on predefined inclusion and exclusion criteria and independently assessed by two reviewers using a standardised extraction form. Sixteen eligible studies were selected from 1,372 publications identified by the search. A dedicated axillary protocol [sensitivity 84.7 %, negative predictive value (NPV) 95.0 %] was superior to a standard protocol covering both the breast and axilla simultaneously (sensitivity 82.0 %, NPV 82.6 %). Dynamic, contrast-enhanced MRI had a lower median sensitivity (60.0 %) and NPV (80.0 %) compared to non-enhanced T1w/T2w sequences (88.4, 94.7 %), diffusion-weighted imaging (84.2, 90.6 %) and ultrasmall superparamagnetic iron oxide (USPIO)-enhanced T2*w sequences (83.0, 95.9 %). The most promising results seem to be achievable when using non-enhanced T1w/T2w and USPIO-enhanced T2*w sequences in combination with a dedicated axillary protocol (sensitivity 84.7 % and NPV 95.0 %). The diagnostic performance of some MRI protocols for excluding axillary lymph node metastases approaches the NPV needed to replace SLNB. However, current observations are based on studies with heterogeneous study designs and limited populations. • Some axillary MRI protocols approach the NPV of an SLNB procedure. • Dedicated axillary MRI is more accurate than protocols also covering the breast. • T1w/T2w protocols combined with USPIO-enhanced sequences are the most promising sequences.
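    For orientation, the sensitivity and NPV figures quoted above follow directly from confusion-matrix counts; the counts below are invented for illustration, not taken from the reviewed studies.

```python
# Illustrative computation of sensitivity and negative predictive value (NPV)
# from raw true-positive, false-negative and true-negative counts.

def sensitivity_npv(tp, fn, tn):
    sensitivity = tp / (tp + fn)   # fraction of node-positive cases detected
    npv = tn / (tn + fn)           # confidence in a negative MRI read
    return sensitivity, npv

# Made-up counts chosen to reproduce the 84.7 %-class figures above.
sens, npv = sensitivity_npv(tp=84, fn=16, tn=304)
print(round(sens, 3), round(npv, 3))  # 0.84 0.95
```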

  15. Open Clients for Distributed Databases

    NASA Astrophysics Data System (ADS)

    Chayes, D. N.; Arko, R. A.

    2001-12-01

    We are actively developing a collection of open source example clients that demonstrate use of our "back end" data management infrastructure. The data management system is reported elsewhere at this meeting (Arko and Chayes: A Scaleable Database Infrastructure). In addition to their primary goal of being examples for others to build upon, some of these clients may have limited utility in themselves. More information about the clients and the data infrastructure is available on line at http://data.ldeo.columbia.edu. The examples to be demonstrated include several web-based clients: those developed for the Community Review System of the Digital Library for Earth System Education, a real-time watch standers log book, an offline interface to log book entries, a simple client to search multibeam metadata, and others. These are Internet-enabled, generally web-based front ends that support searches against one or more relational databases using industry-standard SQL queries. In addition to the web-based clients, simple SQL searches from within Excel and similar applications will be demonstrated. By defining, documenting and publishing a clear interface to the fully searchable databases, it becomes relatively easy to construct client interfaces that are optimized for specific applications, in comparison to building a monolithic data and user interface system.
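    The kind of industry-standard SQL search such clients issue can be sketched against a toy in-memory database; the table name and columns below are illustrative assumptions, not the actual LDEO schema.

```python
# Minimal sketch of an SQL search of the sort the clients above perform,
# run against a toy in-memory SQLite database of multibeam metadata.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE multibeam
                (cruise TEXT, lat REAL, lon REAL, year INTEGER)""")
conn.executemany("INSERT INTO multibeam VALUES (?, ?, ?, ?)", [
    ("EW9602", -9.1, -104.2, 1996),
    ("KN162",  31.5,  -60.0, 2000),
    ("EW0004", -0.5,  -86.0, 2000),
])

# Standard SQL: cruises within a latitude band, newest first.
rows = conn.execute("""SELECT cruise, year FROM multibeam
                       WHERE lat BETWEEN -10 AND 5 AND year >= 1996
                       ORDER BY year DESC""").fetchall()
print(rows)  # [('EW0004', 2000), ('EW9602', 1996)]
```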

  16. Search of the Deep and Dark Web via DARPA Memex

    NASA Astrophysics Data System (ADS)

    Mattmann, C. A.

    2015-12-01

    Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then on link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media to solve this challenging problem. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, URL selection and filtering approaches, and to the link graph generated from search, and it has spawned entire sub-communities including Apache Hadoop and Apache Tika. It addresses many current needs with the capability to support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and allow us to address complex search problems including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin, combined with work by Harvard, to provide image and video understanding support allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding and object/person detection and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain-specific language for crawls. With the Bokeh visualization library, the interface delivers simple interactive crawl visualization and plotting techniques for exploring crawled information. The platform will help classify, identify, and thwart predators, find victims, and identify buyers in human trafficking, and will deliver technological superiority in search engines for DARPA. We are already transitioning the technologies into Geo and Planetary Science, and Bioinformatics.

  17. EMERSE: The Electronic Medical Record Search Engine

    PubMed Central

    Hanauer, David A.

    2006-01-01

    EMERSE (The Electronic Medical Record Search Engine) is an intuitive, powerful search engine for free-text documents in the electronic medical record. It offers multiple options for creating complex search queries yet has an interface that is easy enough to be used by those with minimal computer experience. EMERSE is ideal for retrospective chart reviews and data abstraction and may have potential for clinical care as well. PMID:17238560

  18. Standardization of search methods for guideline development: an international survey of evidence-based guideline development groups.

    PubMed

    Deurenberg, Rikie; Vlayen, Joan; Guillo, Sylvie; Oliver, Thomas K; Fervers, Beatrice; Burgers, Jako

    2008-03-01

    Effective literature searching is particularly important for clinical practice guideline development. Sophisticated searching and filtering mechanisms are needed to help ensure that all relevant research is reviewed. To assess the methods used for the selection of evidence for guideline development by evidence-based guideline development organizations. A semistructured questionnaire assessing the databases, search filters and evaluation methods used for literature retrieval was distributed to eight major organizations involved in evidence-based guideline development. All of the organizations used search filters as part of guideline development. The MEDLINE database was the primary source accessed for literature retrieval. The OVID or SilverPlatter interfaces were used in preference to the freely accessed PubMed interface. The Cochrane Library, EMBASE, CINAHL and PsycINFO databases were also frequently used by the organizations. All organizations reported the intention to improve and validate their filters for finding literature specifically relevant for guidelines. In the first international survey of its kind, eight major guideline development organizations indicated a strong interest in identifying, improving and standardizing search filters to improve guideline development. It is to be hoped that this will result in the standardization of, and open access to, search filters, an improvement in literature searching outcomes and greater collaboration among guideline development organizations.

  19. [A survey of the best bibliographic searching system in occupational medicine and discussion of its implementation].

    PubMed

    Inoue, J

    1991-12-01

    When occupational health personnel, especially occupational physicians, search bibliographies, they usually have to do so by themselves. Also, if a library is not available because of the location of their workplace, they may have to rely on online databases. Although there are many commercial databases in the world, people who seldom use them will have problems with online searching, such as the user-computer interface, keywords, and so on. The present study surveyed the best bibliographic searching system in the field of occupational medicine by questionnaire, using DIALOG OnDisc MEDLINE as a commercial database. In order to ascertain the problems involved in determining the best bibliographic searching system, a prototype bibliographic searching system was constructed and then evaluated. Finally, solutions for the problems were discussed. These led to the following conclusions: to construct the best bibliographic searching system at the present time, 1) a concept of micro-to-mainframe links (MML) is needed for the computer hardware network; 2) multi-lingual font standards and an excellent common user-computer interface are needed for the computer software; 3) a short course and education in database management systems, and support for personal information processing of retrieved data, are necessary for the practical use of the system.

  20. BCM Search Launcher--an integrated interface to molecular biology data base search and analysis services available on the World Wide Web.

    PubMed

    Smith, R F; Wiese, B A; Wojzynski, M K; Davison, D B; Worley, K C

    1996-05-01

    The BCM Search Launcher is an integrated set of World Wide Web (WWW) pages that organize molecular biology-related search and analysis services available on the WWW by function, and provide a single point of entry for related searches. The Protein Sequence Search Page, for example, provides a single sequence entry form for submitting sequences to WWW servers that offer remote access to a variety of different protein sequence search tools, including BLAST, FASTA, Smith-Waterman, BEAUTY, PROSITE, and BLOCKS searches. Other Launch pages provide access to (1) nucleic acid sequence searches, (2) multiple and pair-wise sequence alignments, (3) gene feature searches, (4) protein secondary structure prediction, and (5) miscellaneous sequence utilities (e.g., six-frame translation). The BCM Search Launcher also provides a mechanism to extend the utility of other WWW services by adding supplementary hypertext links to results returned by remote servers. For example, links to the NCBI's Entrez data base and to the Sequence Retrieval System (SRS) are added to search results returned by the NCBI's WWW BLAST server. These links provide easy access to auxiliary information, such as Medline abstracts, that can be extremely helpful when analyzing BLAST data base hits. For new or infrequent users of sequence data base search tools, we have preset the default search parameters to provide the most informative first-pass sequence analysis possible. We have also developed a batch client interface for Unix and Macintosh computers that allows multiple input sequences to be searched automatically as a background task, with the results returned as individual HTML documents directly to the user's system. The BCM Search Launcher and batch client are available on the WWW at URL http://gc.bcm.tmc.edu:8088/search-launcher.html.
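    A batch client of the sort described above can be sketched with background tasks; the stubbed search function and names below are assumptions for illustration, not the actual BCM batch client.

```python
# Hypothetical sketch of a batch search client: several input sequences are
# searched as background tasks, each result kept as its own document. The
# search itself is stubbed out; the real client submits to remote WWW servers.
from concurrent.futures import ThreadPoolExecutor

def run_search(name, sequence):
    # Stub standing in for a remote BLAST/FASTA submission.
    return f"<html>hits for {name} (length {len(sequence)})</html>"

queries = {"seq1": "MKTAYIAKQR", "seq2": "MSDNE"}

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {n: pool.submit(run_search, n, s) for n, s in queries.items()}
    results = {n: f.result() for n, f in futures.items()}

print(sorted(results))  # ['seq1', 'seq2']
```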
