Sample records for urls sites users

  1. A Concept for Continuous Monitoring that Reduces Redundancy in Information Assurance Processes

    DTIC Science & Technology

    2011-09-01

    System.out.println("Driver loaded");
    String url = "jdbc:postgresql://localhost/IAcontrols";
    String user = "postgres";
    String pwd = "postgres";
    Connection DB_mobile_conn = DriverManager.getConnection(url, user, pwd);
    System.out.println("Database Connect ok...");

  2. MedlinePlus Connect in Use

    MedlinePlus

    ... MedlinePlus Connect in Use URL of this page: https://medlineplus.gov/connect/users.html MedlinePlus Connect in ... will change.) Old URLs New URLs Web Application https://apps.nlm.nih.gov/medlineplus/services/mpconnect.cfm? ...

  3. A cross-hazard analysis of terse message retransmission on Twitter.

    PubMed

    Sutton, Jeannette; Gibson, C Ben; Phillips, Nolan Edward; Spiro, Emma S; League, Cedar; Johnson, Britta; Fitzhugh, Sean M; Butts, Carter T

    2015-12-01

    For decades, public warning messages have been relayed via broadcast information channels, including radio and television; more recently, risk communication channels have expanded to include social media sites, where messages can be easily amplified by user retransmission. This research examines the factors that predict the extent of retransmission for official hazard communications disseminated via Twitter. Using data from events involving five different hazards, we identify three types of attributes--local network properties, message content, and message style--that jointly amplify and/or attenuate the retransmission of official communications under imminent threat. We find that the use of an agreed-upon hashtag and the number of users following an official account positively influence message retransmission, as does message content describing hazard impacts or emphasizing cohesion among users. By contrast, messages directed at individuals, expressing gratitude, or including a URL were less widely disseminated than similar messages without these features. Our findings suggest that some measures commonly taken to convey additional information to the public (e.g., URL inclusion) may come at a cost in terms of message amplification; on the other hand, some types of content not traditionally emphasized in guidance on hazard communication may enhance retransmission rates.

  4. Finding Those Missing Links

    ERIC Educational Resources Information Center

    Gunn, Holly

    2004-01-01

    In this article, the author stresses that one should not give up on a site when a URL returns an error message. Many web sites can be found by using strategies such as URL trimming, searching cached sites, site searching, and searching the WayBack Machine. Methods and tips for finding web sites are contained within this article.
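    The URL-trimming strategy mentioned above can be sketched in a few lines of Python: remove one trailing path segment at a time and try each parent page. The helper name and example URL below are illustrative, not from the article:

```python
from urllib.parse import urlparse, urlunparse

def trim_candidates(url):
    """Yield progressively shorter URLs by dropping trailing path segments."""
    parts = urlparse(url)
    segments = [s for s in parts.path.split("/") if s]
    # Drop one trailing segment at a time until only the site root remains.
    for i in range(len(segments) - 1, -1, -1):
        path = "/" + "/".join(segments[:i])
        yield urlunparse((parts.scheme, parts.netloc, path, "", "", ""))

candidates = list(trim_candidates("http://example.org/reports/2004/spring/index.html"))
# Each candidate can then be fetched to see whether a parent page still exists.
```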

  5. Van Allen Probes Science Gateway and Space Weather Data Processing

    NASA Astrophysics Data System (ADS)

    Romeo, G.; Barnes, R. J.; Weiss, M.; Fox, N. J.; Mauk, B.; Potter, M.; Kessel, R.

    2014-12-01

    The Van Allen Probes Science Gateway acts as a centralized interface to the instrument Science Operation Centers (SOCs), provides mission planning tools, and hosts a number of science-related activities such as the mission bibliography. Most importantly, the Gateway acts as the primary site for processing and delivering the VAP Space Weather data to users. Over the past year, the website has been completely redesigned with a focus on easier navigation and improvements to the existing tools such as the orbit plotter, position calculator and magnetic footprint tool. In addition, a new data plotting facility based on HTML5 has been added, which allows users to interactively plot Van Allen Probes summary and space weather data. The user can tailor the tool to display exactly the plot they wish to see and then share it with other users via either a URL or a QR code. Various types of plots can be created, including simple time series, data plotted as a function of orbital location, and time versus L-shell. We discuss the new Van Allen Probes Science Gateway and the Space Weather Data Pipeline.

  6. 78 FR 33807 - Privacy Act New System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-05

    .... For National Institute of Standards and Technology, Chief Information Officer, 100 Bureau Drive..., address, email, and telephone number; credit card information; Web site URL; organization category and...; title; address; email address; telephone number; Web site URL; organization category and description...

  7. New Searching Capability and OpenURL Linking in the ADS

    NASA Astrophysics Data System (ADS)

    Eichhorn, Guenther; Accomazzi, A.; Grant, C. S.; Henneken, E.; Kurtz, M. J.; Thompson, D. M.; Murray, S. S.

    2006-12-01

    The ADS is the search system of choice for the astronomical community. It also covers a large part of the physics and physics/astronomy education literature. In order to make access to this system as easy as possible, we developed a Google-like interface version of our search form. This one-field search parses the user input and automatically detects author names and year ranges. Firefox users can set up their browser to have this search field installed in the top right corner of the browser for even easier access to the ADS search capability. The basic search is available from the ADS homepage at http://adsabs.harvard.edu. To aid with access to subscription journals, the ADS now supports OpenURL linking. If your library supports an OpenURL server, you can specify this server in the ADS preference settings. All links to journal articles will then automatically be directed to the OpenURL server with the appropriate link information. We provide a selection of known OpenURL servers to choose from. If your server is not in this list, please send the necessary information to ads@cfa.harvard.edu and we will include it in our list. The ADS is funded by NASA grant NNG06GG68G.
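    To illustrate what an OpenURL link carries, the sketch below builds a KEV-style (key/value) OpenURL 1.0 query for a journal article. The resolver base address is a placeholder, and the exact keys a particular link server expects may vary:

```python
from urllib.parse import urlencode

def build_openurl(base, **metadata):
    """Build a KEV-format OpenURL by appending citation metadata to a resolver base URL."""
    params = {"url_ver": "Z39.88-2004", "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal"}
    # Citation fields use the "rft." (referent) prefix in the KEV format.
    params.update({f"rft.{k}": v for k, v in metadata.items()})
    return base + "?" + urlencode(params)

link = build_openurl("https://resolver.example.edu/openurl",
                     jtitle="Astrophysical Journal", volume="600", spage="1", date="2004")
```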

  8. A method for the automated detection of phishing websites through both site characteristics and image analysis

    NASA Astrophysics Data System (ADS)

    White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.

    2012-06-01

    Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying if suspicious sites truly are malicious spoofs and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as number of images and links. We also capture a screen-shot of the rendered page image, compute a hash of the image and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results demonstrating the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
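    The image-comparison step can be illustrated with an average-hash scheme: each rendered screenshot is reduced to a small grid of grayscale values, thresholded against its mean to form a bit string, and pages are compared by the Hamming distance between bit strings. This is a minimal sketch of the general technique, not the paper's exact hash; the tiny grids below stand in for real screenshots:

```python
def average_hash(gray):
    """Turn a 2D grid of grayscale values into a bit string: 1 where pixel >= mean."""
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p >= mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of positions where two equal-length hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

page_a = [[10, 200], [220, 30]]   # stand-in for a downsampled screenshot
page_b = [[12, 198], [225, 25]]   # near-identical page (e.g. a visual spoof)
page_c = [[200, 10], [30, 220]]   # visually different page

# Small distance -> likely visual clones; large distance -> different pages.
d_clone = hamming(average_hash(page_a), average_hash(page_b))
d_other = hamming(average_hash(page_a), average_hash(page_c))
```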

  9. R4FRS_RCRAINFO

    EPA Pesticide Factsheets

    To improve public health and the environment, the United States Environmental Protection Agency (USEPA) collects information about facilities, sites, or places subject to environmental regulation or of environmental interest. Through the Geospatial Data Download Service, the public is now able to download the EPA Geodata shapefile containing facility and site information from EPA's national program systems. The file is Internet accessible from the Envirofacts Web site (http://www.epa.gov/enviro). The data may be used with geospatial mapping applications. (Note: The shapefile omits facilities without latitude/longitude coordinates.) The EPA Geospatial Data contains the name, location (latitude/longitude), and EPA program information about specific facilities and sites. In addition, the file contains a Uniform Resource Locator (URL), which allows mapping applications to present an option to users to access additional EPA data resources on a specific facility or site.

  10. US EPA Region 4 RMP Facilities

    EPA Pesticide Factsheets

    To improve public health and the environment, the United States Environmental Protection Agency (USEPA) collects information about facilities, sites, or places subject to environmental regulation or of environmental interest. Through the Geospatial Data Download Service, the public is now able to download the EPA Geodata shapefile containing facility and site information from EPA's national program systems. The file is Internet accessible from the Envirofacts Web site (http://www.epa.gov/enviro). The data may be used with geospatial mapping applications. (Note: The shapefile omits facilities without latitude/longitude coordinates.) The EPA Geospatial Data contains the name, location (latitude/longitude), and EPA program information about specific facilities and sites. In addition, the file contains a Uniform Resource Locator (URL), which allows mapping applications to present an option to users to access additional EPA data resources on a specific facility or site.

  11. A Bookmarking Service for Organizing and Sharing URLs

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Wolfe, Shawn R.; Chen, James R.; Mathe, Nathalie; Rabinowitz, Joshua L.

    1997-01-01

    Web browser bookmarking facilities predominate as the method of choice for managing URLs. In this paper, we describe some deficiencies of current bookmarking schemes, and examine an alternative to current approaches. We present WebTagger(TM), an implemented prototype of a personal bookmarking service that provides both individuals and groups with a customizable means of organizing and accessing Web-based information resources. In addition, the service enables users to supply feedback on the utility of these resources relative to their information needs, and provides dynamically-updated ranking of resources based on incremental user feedback. Individuals may access the service from anywhere on the Internet, and require no special software. This service greatly simplifies the process of sharing URLs within groups, in comparison with manual methods involving email. The underlying bookmark organization scheme is more natural and flexible than current hierarchical schemes supported by the major Web browsers, and enables rapid access to stored bookmarks.
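    The feedback-driven ranking described above can be sketched as incremental score updates: each positive or negative judgment nudges a bookmark's score, and the shared list is reordered accordingly. This is one plausible scheme for illustration, not WebTagger's actual algorithm:

```python
class BookmarkRanker:
    """Rank shared bookmarks by accumulated user feedback (illustrative only)."""

    def __init__(self):
        self.scores = {}   # URL -> cumulative feedback score

    def add(self, url):
        self.scores.setdefault(url, 0.0)

    def feedback(self, url, useful, weight=1.0):
        # Positive feedback raises the score, negative feedback lowers it.
        self.scores[url] += weight if useful else -weight

    def ranked(self):
        # Highest-scoring bookmarks first.
        return sorted(self.scores, key=self.scores.get, reverse=True)

ranker = BookmarkRanker()
for url in ("http://a.example/", "http://b.example/"):
    ranker.add(url)
ranker.feedback("http://b.example/", useful=True)
ranker.feedback("http://b.example/", useful=True)
ranker.feedback("http://a.example/", useful=False)
```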

  12. US EPA Region 4 Brownfields

    EPA Pesticide Factsheets

    To improve public health and the environment, the United States Environmental Protection Agency (USEPA) collects information about facilities, sites, or places subject to environmental regulation or of environmental interest. Through the Geospatial Data Download Service, the public is now able to download the EPA Geodata shapefile containing facility and site information from EPA's national program systems. The file is Internet accessible from the Envirofacts Web site (https://www3.epa.gov/enviro/). The data may be used with geospatial mapping applications. (Note: The shapefile omits facilities without latitude/longitude coordinates.) The EPA Geospatial Data contains the name, location (latitude/longitude), and EPA program information about specific facilities and sites. In addition, the file contains a Uniform Resource Locator (URL), which allows mapping applications to present an option to users to access additional EPA data resources on a specific facility or site. This dataset shows Brownfields listed in the 2012 Facility Registry System.

  13. U.S. EPAs Geospatial Data Access Project

    EPA Pesticide Factsheets

    To improve public health and the environment, the United States Environmental Protection Agency (EPA) collects information about facilities, sites, or places subject to environmental regulation or of environmental interest. Through the Geospatial Data Download Service, the public is now able to download the EPA Geodata Shapefile, Feature Class or extensible markup language (XML) file containing facility and site information from EPA's national program systems. The files are Internet accessible from the Envirofacts Web site (https://www3.epa.gov/enviro/). The data may be used with geospatial mapping applications. (Note: The files omit facilities without latitude/longitude coordinates.) The EPA Geospatial Data contains the name, location (latitude/longitude), and EPA program information about specific facilities and sites. In addition, the files contain a Uniform Resource Locator (URL), which allows mapping applications to present an option to users to access additional EPA data resources on a specific facility or site.

  14. Use Them ... or Lose Them? The Case for and against Using QR Codes

    ERIC Educational Resources Information Center

    Cunningham, Chuck; Dull, Cassie

    2011-01-01

    A quick-response (QR) code is a two-dimensional, black-and-white square barcode that links directly to a URL of one's choice. When the code is scanned with a smartphone, it will automatically redirect the user to the designated URL. QR codes are popping up everywhere--billboards, magazines, posters, shop windows, TVs, computer screens, and more…

  15. Measuring Link-Resolver Success: Comparing 360 Link with a Local Implementation of WebBridge

    ERIC Educational Resources Information Center

    Herrera, Gail

    2011-01-01

    This study reviewed link resolver success comparing 360 Link and a local implementation of WebBridge. Two methods were used: (1) comparing article-level access and (2) examining technical issues for 384 randomly sampled OpenURLs. Google Analytics was used to collect user-generated OpenURLs. For both methods, 360 Link out-performed the local…

  16. Knowing Where They Went: Six Years of Online Access Statistics via the Online Catalog for Federal Government Information

    ERIC Educational Resources Information Center

    Brown, Christopher C.

    2011-01-01

    As federal government information is increasingly migrating to online formats, libraries are providing links to this content via URLs or persistent URLs (PURLs) in their online public access catalogs (OPACs). Clickthrough statistics that accumulated as users visited links to online content in the University of Denver's library OPAC were gathered…

  17. MODster: Namespaces and Redirection for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Frew, J.; Metzger, D.; Slaughter, P.

    2005-12-01

    MODster is a distributed, decentralized inventory server for Earth science data granules (standard units of data content and structure.) MODster connects data granule users (people who know which specific granule they want, but who don't know who has it or how to get it) with data granule providers (people or institutions that keep granules accessible online.) * If you're a provider, you can tell MODster which granules you have and where they live (i.e., their URLs.) * If you're a user, you can ask MODster for a granule, and it will transparently redirect your request to whoever has it. The key to making this work is a standard granule namespace. A granule namespace is a naming convention that associates particular names with particular granules, regardless of where those granules live. Different Earth science data products have their own granule namespaces. For example, in the MODIS granule namespace, the granule name "MOD43A2.A1998365.h5.v8.001.1999001090020.hdf" always refers to version 1 of the 5th horizontal and 8th vertical tile of the Level 3 16-day Bi-directional Reflectance Distribution Function product, acquired by the MODIS Terra sensor on 31 December 1998 and generated on 01 January 1999 at 9:00:20 AM. A MODster URL is simply a standard way of referring to a data product namespace and one of its granules. MODster URLs have the general form "http://server/namespace/granule" where "granule" is a granule name that conforms to a granule namespace, "namespace" is a MODster namespace, which is the name of a granule namespace whose conventions are known to MODster, and "server" is a MODster server, which is an HTTP server that can redirect namespace/granule requests to granule providers.
A MODster URL with no granule component gets a description of the MODster namespace, its authority (the persons or institutions responsible for documenting and maintaining the naming convention), and also any services for that MODster namespace that the MODster server supports. Our current MODster implementation allows granule providers to explicitly register their granules, and can also crawl provider sites looking for granules whose names match specific rules or regular expressions.
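    Because a granule name in this namespace encodes its metadata positionally, a MODster-style server can parse it with a regular expression. A sketch against the example name above (the field labels are my own names for the conventional components):

```python
import re

# Pattern for the naming convention in the example:
# <product>.A<year><day-of-year>.h<tile>.v<tile>.<version>.<production-timestamp>.hdf
GRANULE_RE = re.compile(
    r"^(?P<product>[A-Z0-9]+)\.A(?P<year>\d{4})(?P<doy>\d{3})"
    r"\.h(?P<htile>\d+)\.v(?P<vtile>\d+)"
    r"\.(?P<version>\d+)\.(?P<produced>\d+)\.hdf$"
)

def parse_granule(name):
    """Return the metadata fields encoded in a granule name, or None if it doesn't match."""
    m = GRANULE_RE.match(name)
    return m.groupdict() if m else None

fields = parse_granule("MOD43A2.A1998365.h5.v8.001.1999001090020.hdf")
```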

  18. World Wide Web Metaphors for Search Mission Data

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Wallick, Michael N.; Joswig, Joseph C.; Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Abramyan, Lucy; Crockett, Thomas M.; Shams, Khawaja S.; Fox, Jason M.

    2010-01-01

    A software program that searches and browses mission data emulates a Web browser, containing standard metaphors for Web browsing. By taking advantage of back-end URLs, users may save and share search states. Also, since a Web interface is familiar to users, training time is reduced. Familiar back and forward buttons move through a local search history. A refresh/reload button regenerates a query, and loads in any new data. URLs can be constructed to save search results. Adding context to the current search is also handled through a familiar Web metaphor. The query is constructed by clicking on hyperlinks that represent new components to the search query. The selection of a link appears to the user as a page change; the choice of links changes to represent the updated search and the results are filtered by the new criteria. Selecting a navigation link changes the current query and also the URL that is associated with it. The back button can be used to return to the previous search state. This software is part of the MSLICE release, which was written in Java. It will run on any current Windows, Macintosh, or Linux system.
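    The idea of back-end URLs that capture search state amounts to a round trip between a query object and a URL query string, so a saved or shared URL reproduces the search exactly. A minimal sketch; the parameter names are illustrative, not MSLICE's actual scheme:

```python
from urllib.parse import urlencode, urlparse, parse_qsl

def state_to_url(base, state):
    """Encode the current search state in a shareable, bookmarkable URL."""
    return base + "?" + urlencode(sorted(state.items()))

def url_to_state(url):
    """Recover the search state from a saved or shared URL."""
    return dict(parse_qsl(urlparse(url).query))

state = {"target": "sol-102", "instrument": "navcam", "sort": "time"}
url = state_to_url("http://mission.example/search", state)
restored = url_to_state(url)   # round-trips back to the original state
```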

  19. Understanding Consistency Maintenance in Service Discovery Architectures during Communication Failure

    DTIC Science & Technology

    2002-07-01

    our general model include: (1) service user (SU), (2) service manager (SM), and (3) service cache manager (SCM), where the SCM is an optional...maintained by SMs that satisfy specific requirements. Where employed, the SCM operates as an intermediary, matching advertised SDs of SMs to...Directory Service Agent (optional) not applicable Lookup Service Service Cache Manager (SCM) Service URL Service Type Service Attributes Template URL

  20. Efficient Automated Inventories and Aggregations for Satellite Data Using OPeNDAP and THREDDS

    NASA Astrophysics Data System (ADS)

    Gallagher, J.; Cornillon, P. C.; Potter, N.; Jones, M.

    2011-12-01

    Organizing online data presents a number of challenges, among which is keeping their inventories current. It is preferable to have these descriptions built and maintained by automated systems because many online data sets are dynamic, changing as new data are added or moved and as computer resources are reallocated within an organization. Automated systems can make periodic checks and update records accordingly, tracking these conditions and providing up-to-date inventories and aggregations. In addition, automated systems can enforce a high degree of uniformity across a number of remote sites, something that is hard to achieve with inventories written by people. While building inventories for online data can be done using a brute-force algorithm to read information from each granule in the data set, that ignores some important aspects of these data sets, and discards some key opportunities for optimization. First, many data sets that consist of a large number of granules exhibit a high degree of similarity between granules, and second, the URLs that reference the individual granules typically contain metadata themselves. We present software that crawls servers for online data and builds inventories and aggregations automatically, using simple rules to organize the discrete URLs into logical groups that correspond to the data sets as a typical user would perceive them. Special attention is paid to recognizing patterns in the collections of URLs and using these patterns to limit reading from the data granules themselves. To date the software has crawled over 4 million URLs that reference online data from approximately 10 data servers and has built approximately 400 inventories. When compared to brute-force techniques, the combination of targeted direct-reads from selected granules and analysis of the URLs results in improvements of several to many orders of magnitude, depending on the data set organization.
We conclude the presentation with observations about the crawler and ways that the metadata sources it uses can be changed to improve its operation, including improved catalog organization at data sites and ways that the crawler can be bundled with data servers to improve efficiency. The crawler, written in Java, reads THREDDS catalogs and other metadata from OPeNDAP servers and is available from opendap.org as open-source software.
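    The grouping step, organizing discrete granule URLs into logical data sets by the patterns in their names, can be sketched by collapsing the variable parts (dates, times) into a wildcard and keying on the result. The URLs and the digits-only rule below are illustrative, simpler than the crawler's actual rules:

```python
import re
from collections import defaultdict

def url_pattern(url):
    """Collapse runs of digits so URLs differing only in dates/times share a key."""
    return re.sub(r"\d+", "#", url)

def group_urls(urls):
    """Group granule URLs into logical data sets by their shared name pattern."""
    groups = defaultdict(list)
    for url in urls:
        groups[url_pattern(url)].append(url)
    return dict(groups)

urls = [
    "http://data.example/sst/sst.2011001.nc",
    "http://data.example/sst/sst.2011002.nc",
    "http://data.example/chl/chl.2011001.nc",
]
groups = group_urls(urls)   # two groups: one per data set
```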

  1. Efficient Web Vulnerability Detection Tool for Sleeping Giant-Cross Site Request Forgery

    NASA Astrophysics Data System (ADS)

    Parimala, G.; Sangeetha, M.; AndalPriyadharsini, R.

    2018-04-01

    Web applications are heavily used today because of their user-friendly environment and the easy access they give to information via the Internet, but they are also exposed to many threats. The CSRF attack is one of the most serious threats to web applications; it exploits vulnerabilities present in the normal HTTP request and response cycle. It is hard to detect, and hence it is still present in most existing web applications. In a CSRF attack, unwanted actions on a trusted website are forced to happen without the user's knowledge, which is why it is placed in OWASP's list of the top 10 web application attacks. The proposed work performs a real-time scan for the CSRF vulnerability in a given URL of a web application, as well as on a localhost address for any organization, using the Python language. Client-side detection of CSRF depends on the count of forms present in the given website.
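    The form-count heuristic can be sketched with Python's standard HTML parser: count the forms on a page and flag those carrying no hidden anti-CSRF token field. This is a simplification of the paper's scanner, and the token field names are assumed conventions, not part of the paper:

```python
from html.parser import HTMLParser

class FormScanner(HTMLParser):
    """Count <form> elements and how many contain a hidden token field (heuristic)."""

    TOKEN_NAMES = {"csrf_token", "csrfmiddlewaretoken", "_token"}  # assumed conventions

    def __init__(self):
        super().__init__()
        self.form_count = 0
        self.forms_with_token = 0
        self._in_form = False
        self._current_has_token = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.form_count += 1
            self._in_form = True
            self._current_has_token = False
        elif tag == "input" and self._in_form:
            if (attrs.get("name") or "").lower() in self.TOKEN_NAMES:
                self._current_has_token = True

    def handle_endtag(self, tag):
        if tag == "form":
            if self._current_has_token:
                self.forms_with_token += 1
            self._in_form = False

scanner = FormScanner()
scanner.feed("""
<form action="/transfer"><input name="amount"></form>
<form action="/login"><input type="hidden" name="csrf_token" value="x"></form>
""")
suspect_forms = scanner.form_count - scanner.forms_with_token
```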

  2. The Electron Microscopy Outreach Program: A Web-based resource for research and education.

    PubMed

    Sosinsky, G E; Baker, T S; Hand, G; Ellisman, M H

    1999-01-01

    We have developed a centralized World Wide Web (WWW)-based environment that serves as a resource of software tools and expertise for biological electron microscopy. A major focus is molecular electron microscopy, but the site also includes information and links on structural biology at all levels of resolution. This site serves to help integrate or link structural biology techniques in accordance with user needs. The WWW site, called the Electron Microscopy (EM) Outreach Program (URL: http://emoutreach.sdsc.edu), provides scientists with computational and educational tools for their research and edification. In particular, we have set up a centralized resource containing course notes, references, and links to image analysis and three-dimensional reconstruction software for investigators wanting to learn about EM techniques either within or outside of their fields of expertise. Copyright 1999 Academic Press.

  3. Van Allen Probes Science Gateway: A Centralized Data Access Point

    NASA Astrophysics Data System (ADS)

    Romeo, G.; Barnes, R. J.; Ukhorskiy, A. Y.; Sotirelis, T.; Stephens, G. K.; Kessel, R.; Potter, M.

    2015-12-01

    The Van Allen Probes Science Gateway acts as a centralized interface to the instrument Science Operation Centers (SOCs), provides mission planning tools, and hosts a number of science-related activities such as the mission bibliography. Most importantly, the Gateway acts as the primary site for processing and delivering the Van Allen Probes Space Weather data to users. Over the past years, the website has been completely redesigned with a focus on easier navigation and improvements to the existing tools such as the orbit plotter, position calculator and magnetic footprint tool. In addition, a new data plotting facility based on HTML5 has been added, which allows users to interactively plot Van Allen Probes science and space weather data. The user can tailor the tool to display exactly the plot they wish to see and then share it with other users via either a URL or a QR code. Various types of plots can be created, including simple time series, data plotted as a function of orbital location, and time versus L-shell, with the capability of visualizing data from both probes (A & B) on the same plot. In cooperation with all Van Allen Probes Instrument SOCs, the Science Gateway will soon be able to serve higher-level data products (Level 3) and to visualize them via the above-mentioned HTML5 interface. Users will also be able to create customized CDF files on the fly.

  4. CCProf: exploring conformational change profile of proteins

    PubMed Central

    Chang, Che-Wei; Chou, Chai-Wei; Chang, Darby Tien-Hao

    2016-01-01

    In many biological processes, proteins have important interactions with various molecules such as proteins, ions or ligands. Many proteins undergo conformational changes upon these interactions, where regions with large conformational changes are critical to the interactions. This work presents the CCProf platform, which provides conformational changes of entire proteins, named the conformational change profile (CCP) in this context. CCProf aims to be a platform where users can study potential causes of novel conformational changes. It provides 10 biological features, including conformational change, potential binding target site, secondary structure, conservation, disorder propensity, hydropathy propensity, sequence domain, structural domain, phosphorylation site and catalytic site. All this information is integrated into a well-aligned view, so that researchers can visually capture important relationships between different biological features. CCProf contains 986,187 protein structure pairs for 3123 proteins. In addition, CCProf provides a 3D view in which users can see the protein structures before and after conformational changes as well as the binding targets that induce them. All information (e.g. CCP, binding targets and protein structures) shown in CCProf, including intermediate data, is available for download to expedite further analyses. Database URL: http://zoro.ee.ncku.edu.tw/ccprof/ PMID:27016699

  5. Host Immunity via Mutable Virtualized Large-Scale Network Containers

    DTIC Science & Technology

    2016-07-25

    and constrain the distributed persistent inside crawlers that have valid credentials to access the web services. The main idea is to add a marker...to each web page URL and use the URL path and user information contained in the marker to help accurately detect crawlers at its earliest stage...more than half of all website traffic, and malicious bots contribute almost one third of the traffic. As one type of bots, web crawlers have been

  6. Citations to Web pages in scientific articles: the permanence of archived references.

    PubMed

    Thorp, Andrea W; Schriger, David L

    2011-02-01

    We validate the use of archiving Internet references by comparing the accessibility of published uniform resource locators (URLs) with corresponding archived URLs over time. We scanned the "Articles in Press" section in Annals of Emergency Medicine from March 2009 through June 2010 for Internet references in research articles. If an Internet reference produced the authors' expected content, the Web page was archived with WebCite (http://www.webcitation.org). Because the archived Web page does not change, we compared it with the original URL to determine whether the original Web page had changed. We attempted to access each original URL and archived Web site URL at 3-month intervals from the time of online publication during an 18-month study period. Once a URL no longer existed or failed to contain the original authors' expected content, it was excluded from further study. The number of original URLs and archived URLs that remained accessible over time was totaled and compared. A total of 121 articles were reviewed and 144 Internet references were found within 55 articles. Of the original URLs, 15% (21/144; 95% confidence interval [CI] 9% to 21%) were inaccessible at publication. During the 18-month observation period, there was no loss of archived URLs (apart from the 4% [5/123; 95% CI 2% to 9%] that could not be archived), whereas 35% (49/139) of the original URLs were lost (46% loss; 95% CI 33% to 61% by the Kaplan-Meier method; difference between curves P<.0001, log rank test). Archiving a referenced Web page at publication can help preserve the authors' expected information. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
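    The reported intervals can be approximated with a normal-approximation (Wald) confidence interval for a binomial proportion; for example, 21 of 144 original URLs inaccessible at publication gives an interval close to the 9% to 21% quoted above (the authors may have used a slightly different interval method):

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a binomial proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

low, high = proportion_ci(21, 144)   # ~ (0.088, 0.203), i.e. roughly 9% to 20%
```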

  7. The strategies WDK: a graphical search interface and web development kit for functional genomics databases

    PubMed Central

    Fischer, Steve; Aurrecoechea, Cristina; Brunk, Brian P.; Gao, Xin; Harb, Omar S.; Kraemer, Eileen T.; Pennington, Cary; Treatman, Charles; Kissinger, Jessica C.; Roos, David S.; Stoeckert, Christian J.

    2011-01-01

    Web sites associated with the Eukaryotic Pathogen Bioinformatics Resource Center (EuPathDB.org) have recently introduced a graphical user interface, the Strategies WDK, intended to make advanced searching and set and interval operations easy and accessible to all users. With a design guided by usability studies, the system helps motivate researchers to perform dynamic computational experiments and explore relationships across data sets. For example, PlasmoDB users seeking novel therapeutic targets may wish to locate putative enzymes that distinguish pathogens from their hosts, and that are expressed during appropriate developmental stages. When a researcher runs one of the approximately 100 searches available on the site, the search is presented as a first step in a strategy. The strategy is extended by running additional searches, which are combined with set operators (union, intersect or minus), or genomic interval operators (overlap, contains). A graphical display uses Venn diagrams to make the strategy’s flow obvious. The interface facilitates interactive adjustment of the component searches with changes propagating forward through the strategy. Users may save their strategies, creating protocols that can be shared with colleagues. The strategy system has now been deployed on all EuPathDB databases, and successfully deployed by other projects. The Strategies WDK uses a configurable MVC architecture that is compatible with most genomics and biological warehouse databases, and is available for download at code.google.com/p/strategies-wdk. Database URL: www.eupathdb.org PMID:21705364
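    The set operators at the heart of the Strategies WDK map directly onto set algebra over search results. A minimal sketch with hypothetical gene identifiers (the IDs below are made up for illustration):

```python
# Results of two hypothetical searches, as sets of gene identifiers.
enzymes = {"PF3D7_0100100", "PF3D7_0207600", "PF3D7_0417200"}
expressed_in_stage = {"PF3D7_0207600", "PF3D7_0417200", "PF3D7_1133400"}

# The three set operators offered when combining search steps:
union = enzymes | expressed_in_stage          # genes in either result
intersect = enzymes & expressed_in_stage      # genes in both results
minus = enzymes - expressed_in_stage          # enzymes not expressed in the stage
```

Genomic interval operators (overlap, contains) work analogously but compare coordinate ranges rather than identifier sets.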

  8. How Intrusion Detection Can Improve Software Decoy Applications

    DTIC Science & Technology

    2003-03-01

    V. DISCUSSION Military history suggests it is best to employ a layered, defense-in...database: alert, postgresql, user=snort dbname=snort # output database: log, unixodbc, user=snort dbname=snort # output database: log, mssql, dbname...Threat Monitoring and Surveillance, James P. Anderson Co., Fort Washington, PA, April 1980. URL http://csrc.nist.gov/publications/history/ande80

  9. A web server for analysis, comparison and prediction of protein ligand binding sites.

    PubMed

    Singh, Harinder; Srivastava, Hemant Kumar; Raghava, Gajendra P S

    2016-03-25

    One of the major challenges in the field of systems biology is to understand the interaction between a wide range of proteins and ligands. In the past, methods have been developed for predicting binding sites in a protein for a limited number of ligands. In order to address this problem, we developed a web server named 'LPIcom' to facilitate users in understanding protein-ligand interaction. Analysis, comparison and prediction modules are available in the 'LPIcom' server to predict protein-ligand interacting residues for 824 ligands; each ligand has at least 30 protein binding sites in the PDB. The analysis module of the server can identify residues preferred in interaction and the binding motif for a given ligand; for example, the residues glycine, lysine and arginine are preferred in ATP binding sites. The comparison module allows comparing the protein-binding sites of multiple ligands to understand the similarity between ligands based on their binding sites. This module indicates that the ATP, ADP and GTP ligands fall in the same cluster and thus their binding sites or interacting residues exhibit a high level of similarity. A propensity-based prediction module has been developed for predicting ligand-interacting residues in a protein for more than 800 ligands. In addition, a number of web-based tools have been integrated to facilitate users in creating web logos and two-sample logos comparing ligand-interacting and non-interacting residues. In summary, this manuscript presents a web server for the analysis of ligand-interacting residues. This server is available for public use at the URL http://crdd.osdd.net/raghava/lpicom .
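A propensity score of the kind the prediction module presumably uses can be sketched as the frequency of a residue in binding sites divided by its background frequency; values above 1 indicate enrichment (e.g. the G/K/R preference in ATP sites the abstract mentions). The counts below are toy values, not LPIcom data:

```python
# Hedged sketch of a residue propensity calculation; toy counts only.
from collections import Counter

def propensities(binding_residues, all_residues):
    bind = Counter(binding_residues)
    background = Counter(all_residues)
    n_bind, n_all = len(binding_residues), len(all_residues)
    # enrichment ratio: observed frequency in sites / background frequency
    return {r: (bind[r] / n_bind) / (background[r] / n_all)
            for r in bind}

# Toy example: G/K/R enriched in hypothetical ATP binding sites.
binding = list("GGKKRRA")
overall = list("GGKKRRA" + "ALLVVIIPPFF")
scores = propensities(binding, overall)
print({r: round(s, 2) for r, s in sorted(scores.items())})
```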

  10. Web usage mining at an academic health sciences library: an exploratory study.

    PubMed

    Bracke, Paul J

    2004-10-01

    This paper explores the potential of multinomial logistic regression analysis to perform Web usage mining for an academic health sciences library Website. Usage of database-driven resource gateway pages was logged for a six-month period, including information about users' network addresses, referring uniform resource locators (URLs), and types of resource accessed. It was found that referring URL did vary significantly by two factors: whether a user was on-campus and what type of resource was accessed. Although the data available for analysis are limited by the nature of the Web and concerns for privacy, this method demonstrates the potential for gaining insight into Web usage that supplements Web log analysis. It can be used to improve the design of static and dynamic Websites today and could be used in the design of more advanced Web systems in the future.
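Before any regression can be fitted, the log records have to be cross-tabulated by the two factors the study found significant: campus location and resource type. A stdlib-only sketch of that preprocessing step; the campus network prefix, field layout, and log rows are all hypothetical:

```python
# Hedged sketch of web-log cross-tabulation; the IP prefix and records
# below are invented for illustration.
from collections import Counter

CAMPUS_PREFIX = "128.196."          # assumed campus network block

def crosstab(records):
    """records: (client_ip, referrer_category) pairs -> Counter of cells."""
    table = Counter()
    for ip, ref in records:
        location = "on-campus" if ip.startswith(CAMPUS_PREFIX) else "off-campus"
        table[(location, ref)] += 1
    return table

log = [("128.196.1.5", "catalog"), ("128.196.2.9", "search-engine"),
       ("66.249.70.1", "search-engine"), ("128.196.1.5", "catalog")]
print(crosstab(log))
```

The resulting contingency counts are what a multinomial logistic regression (or a simpler chi-square test) would then operate on.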

  11. Phylo-mLogo: an interactive and hierarchical multiple-logo visualization tool for alignment of many sequences

    PubMed Central

    Shih, Arthur Chun-Chieh; Lee, DT; Peng, Chin-Lin; Wu, Yu-Wei

    2007-01-01

    Background When aligning several hundreds or thousands of sequences, such as epidemic virus sequences or homologous/orthologous sequences of large gene families, to reconstruct their epidemiological history or phylogenies, how to analyze and visualize the alignment results of many sequences has become a new challenge for computational biologists. Although there are several tools available for the visualization of very long sequence alignments, few of them are applicable to alignments of many sequences. Results A multiple-logo alignment visualization tool, called Phylo-mLogo, is presented in this paper. Phylo-mLogo calculates the variabilities and homogeneities of alignment sequences by base frequencies or entropies. Unlike traditional representations of sequence logos, Phylo-mLogo not only displays the global logo patterns of the whole alignment of multiple sequences, but also demonstrates their local homologous logos for each clade hierarchically. In addition, Phylo-mLogo allows the user to focus only on the analysis of some important, structurally or functionally constrained sites in the alignment, selected by the user or by built-in automatic calculation. Conclusion With Phylo-mLogo, the user can symbolically and hierarchically visualize hundreds of aligned sequences simultaneously and easily check the changes of their amino acid sites when analyzing many homologous/orthologous or influenza virus sequences. More information about Phylo-mLogo can be found at the project URL. PMID:17319966
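The per-column variability measure behind a sequence logo is typically Shannon entropy: a conserved column scores 0, a maximally variable one scores high. A minimal sketch over a toy alignment (the sequences are invented):

```python
# Hedged sketch: per-column Shannon entropy of an alignment, as used
# when building sequence logos. Toy alignment only.
import math
from collections import Counter

def column_entropies(alignment):
    cols = zip(*alignment)              # transpose rows -> columns
    out = []
    for col in cols:
        n = len(col)
        freqs = Counter(col)
        out.append(-sum((c / n) * math.log2(c / n) for c in freqs.values()))
    return out

aln = ["ACGT",
       "ACGA",
       "ACTT"]
print([round(h, 2) for h in column_entropies(aln)])
```

Phylo-mLogo's hierarchical display amounts to recomputing these column statistics separately for each clade's subset of rows.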

  12. Curating Virtual Data Collections

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris; Leon, Amanda; Ramapriyan, Hampapuram; Tsontos, Vardis; Shie, Chung-Lin; Liu, Zhong

    2015-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) contains a rich set of datasets and related services across its many elements. As a result, locating all the EOSDIS data and related resources relevant to a particular science theme can be daunting, largely because EOSDIS data are organized more around the way they are produced than around their expected end use. Virtual collections oriented around science themes can overcome this by presenting collections of data and related resources that are organized around the user's interest, not around the way the data were produced. A virtual collection consists of annotated web addresses (URLs) that point to data and related resources, avoiding the need to copy all of the relevant data to a single place. These URLs can be consumed by a variety of clients, ranging from basic URL downloaders (wget, curl) and web browsers to sophisticated data analysis programs such as the Integrated Data Viewer.

  13. Internet-Based Laboratory Activities Designed for Studying the Sun with Satellites

    NASA Astrophysics Data System (ADS)

    Slater, T. F.

    1998-12-01

    Yohkoh Public Outreach Project (YPOP) is a collaborative industry, university, and K-16 project bringing fascinating and dynamic images of the Sun to the public in real-time. Partners have developed an extensive public access and educational WWW site containing more than 100 pages of vibrant images with current information that focuses on movies of the X-ray output of our Sun taken by the Yohkoh Satellite. More than 5 Gb of images and movies are available on the WWW site from the Yohkoh satellite, a joint project of the Institute for Space and Astronautical Sciences (ISAS) and NASA. Using a movie theater motif, the site was created by teams working at Lockheed Martin Advanced Technology Center, Palo Alto, CA in the Solar and Astrophysics Research Group, the Montana State University Solar Physics Research Group, and the Montana State University Conceptual Astronomy and Physics Education Research Group with funding from the NASA Learning Technology Project (LTP) program (NASA LTP SK30G4410R). The Yohkoh Movie Theater Internet Site is found at URL: http://www.lmsal.com/YPOP/ and mirrored at URL: http://solar.physics.montana.edu/YPOP/. In addition to being able to request automated movies for any dates in a 5 Gb on-line database, the user can view automatically updated daily images and movies of our Sun over the last 72 hours. Master science teachers working with the NASA funded Yohkoh Public Outreach Project have developed nine technology-based on-line lessons for K-16 classrooms. These interdisciplinary science, mathematics, and technology lessons integrate Internet resources, real-time images of the Sun, and extensive NASA image databases. Instructors are able to freely access each of the classroom-ready activities. The activities require students to use scientific inquiry skills and manage electronic information to solve problems consistent with the emphasis of the NRC National Science Education Standards.

  14. Sentiment Analysis of Web Sites Related to Vaginal Mesh Use in Pelvic Reconstructive Surgery.

    PubMed

    Hobson, Deslyn T G; Meriwether, Kate V; Francis, Sean L; Kinman, Casey L; Stewart, J Ryan

    2018-05-02

    The purpose of this study was to use sentiment analysis to describe online opinions toward vaginal mesh. We hypothesized that sentiment in legal Web sites would be more negative than that in medical and reference Web sites. We generated a list of relevant key words related to vaginal mesh and searched Web sites using the Google search engine. Each unique uniform resource locator (URL) was sorted into 1 of 6 categories: "medical", "legal", "news/media", "patient generated", "reference", or "unrelated". Sentiment of relevant Web sites, the primary outcome, was scored on a scale of -1 to +1, and mean sentiment was compared across all categories using 1-way analysis of variance, with the Tukey test evaluating differences between category pairs. Google searches of 464 unique key words resulted in 11,405 URLs. Sentiment analysis was performed on 8029 relevant URLs (3472 "legal", 1625 "medical", 1774 "reference", 666 "news/media", 492 "patient generated"). The mean sentiment for all relevant Web sites was +0.01 ± 0.16; analysis of variance revealed significant differences between categories (P < 0.001). Web sites categorized as "legal" and "news/media" had a slightly negative mean sentiment, whereas those categorized as "medical," "reference," and "patient generated" had slightly positive mean sentiments. The Tukey test showed differences between all category pairs except "medical" versus "reference", with the largest mean difference (-0.13) seen in the "legal" versus "reference" comparison. Web sites related to vaginal mesh have an overall neutral mean sentiment, and Web sites categorized as "medical," "reference," and "patient generated" have significantly higher sentiment scores than related Web sites in the "legal" and "news/media" categories.
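The aggregation step, grouping per-URL sentiment scores by category and taking the mean on the study's -1..+1 scale, can be sketched in a few lines. The scores below are invented, not the study's data:

```python
# Hedged sketch: mean sentiment per URL category; invented scores.
from collections import defaultdict
from statistics import mean

def mean_by_category(scored_urls):
    groups = defaultdict(list)
    for category, score in scored_urls:
        groups[category].append(score)
    return {cat: round(mean(vals), 2) for cat, vals in groups.items()}

sample = [("legal", -0.2), ("legal", -0.1), ("medical", 0.1),
          ("medical", 0.2), ("reference", 0.15), ("news/media", -0.05)]
print(mean_by_category(sample))
```

The ANOVA and Tukey comparisons reported in the abstract would then test whether these group means differ beyond chance.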

  15. How To Get Your Web Page Noticed.

    ERIC Educational Resources Information Center

    Schrock, Kathleen

    1997-01-01

    Presents guidelines for making a Web site noticeable. Discusses submitting the URL to directories, links, and announcement lists, and sending the site over the server via FTP to search engines. Describes how to index the site with "Title,""Heading," and "Meta" tags. (AEF)
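The "Title", "Heading", and "Meta" tags the article recommends are exactly what a crawler reads back from the page head. A stdlib-only sketch that extracts the title and description meta tag a search engine would index; the page content is invented:

```python
# Hedged sketch: pull the <title> and description <meta> tag from a
# page head using only the stdlib. The sample page is fabricated.
from html.parser import HTMLParser

class HeadScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "meta":
            a = dict(attrs)
            if a.get("name") == "description":
                self.description = a.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

page = ("<html><head><title>Kathy's Site</title>"
        '<meta name="description" content="K-12 resources"></head></html>')
scanner = HeadScanner()
scanner.feed(page)
print(scanner.title, "|", scanner.description)
```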

  16. Standardized description of scientific evidence using the Evidence Ontology (ECO)

    PubMed Central

    Chibucos, Marcus C.; Mungall, Christopher J.; Balakrishnan, Rama; Christie, Karen R.; Huntley, Rachael P.; White, Owen; Blake, Judith A.; Lewis, Suzanna E.; Giglio, Michelle

    2014-01-01

    The Evidence Ontology (ECO) is a structured, controlled vocabulary for capturing evidence in biological research. ECO includes diverse terms for categorizing evidence that supports annotation assertions including experimental types, computational methods, author statements and curator inferences. Using ECO, annotation assertions can be distinguished according to the evidence they are based on such as those made by curators versus those automatically computed or those made via high-throughput data review versus single test experiments. Originally created for capturing evidence associated with Gene Ontology annotations, ECO is now used in other capacities by many additional annotation resources including UniProt, Mouse Genome Informatics, Saccharomyces Genome Database, PomBase, the Protein Information Resource and others. Information on the development and use of ECO can be found at http://evidenceontology.org. The ontology is freely available under Creative Commons license (CC BY-SA 3.0), and can be downloaded in both Open Biological Ontologies and Web Ontology Language formats at http://code.google.com/p/evidenceontology. Also at this site is a tracker for user submission of term requests and questions. ECO remains under active development in response to user-requested terms and in collaborations with other ontologies and database resources. Database URL: Evidence Ontology Web site: http://evidenceontology.org PMID:25052702

  17. The National Solar Observatory Digital Library - a resource for space weather studies

    NASA Astrophysics Data System (ADS)

    Hill, F.; Erdwurm, W.; Branston, D.; McGraw, R.

    2000-09-01

    We describe the National Solar Observatory Digital Library (NSODL), consisting of 200GB of on-line archived solar data, a RDBMS search engine, and an Internet HTML-form user interface. The NSODL is open to all users and provides simple access to solar physics data of basic importance for space weather research and forecasting, heliospheric research, and education. The NSODL can be accessed at the URL www.nso.noao.edu/diglib.

  18. Protein Frustratometer 2: a tool to localize energetic frustration in protein molecules, now with electrostatics.

    PubMed

    Parra, R Gonzalo; Schafer, Nicholas P; Radusky, Leandro G; Tsai, Min-Yeh; Guzovsky, A Brenda; Wolynes, Peter G; Ferreiro, Diego U

    2016-07-08

    The protein frustratometer is an energy landscape theory-inspired algorithm that aims at localizing and quantifying the energetic frustration present in protein molecules. Frustration is a useful concept for analyzing proteins' biological behavior. It compares the energy distributions of the native state with respect to structural decoys. The network of minimally frustrated interactions encompasses the folding core of the molecule. Sites of high local frustration often correlate with functional regions such as binding sites and regions involved in allosteric transitions. We present here an upgraded version of a webserver that measures local frustration. The new implementation that allows the inclusion of electrostatic energy terms, important to the interactions with nucleic acids, is significantly faster than the previous version enabling the analysis of large macromolecular complexes within a user-friendly interface. The webserver is freely available at URL: http://frustratometer.qb.fcen.uba.ar. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. Managing Personal and Group Collections of Information

    NASA Technical Reports Server (NTRS)

    Wolfe, Shawn R.; Wragg, Stephen D.; Chen, James R.; Koga, Dennis (Technical Monitor)

    1999-01-01

    The Internet revolution has dramatically increased the amount of information available to users. Various tools, such as search engines, have been developed to help users find the information they need in this vast repository. Users often also need tools to help manipulate the growing amount of useful information they have discovered. Current tools available for this purpose are typically local components of web browsers designed to manage URL bookmarks, and they provide limited functionality for handling high information complexity. To tackle this, we have created DIAMS, an agent-based tool to help users or groups manage their information collections and share their collections with others. The main features of DIAMS are described here.

  20. Inter-disciplinary Interactions in Underground Laboratories

    NASA Astrophysics Data System (ADS)

    Wang, J. S.; Bettini, A.

    2010-12-01

    Many underground facilities, ranging from simple cavities to fully equipped laboratories, have been established worldwide (1) to evaluate the impacts of emplacing nuclear wastes in underground research laboratories (URLs) and (2) to measure rare physics events in deep underground laboratories (DULs). In this presentation, we compare similarities and differences between URLs and DULs, focusing on site characterization, quantification of quietness, and improvement of signal-to-noise ratios. The nuclear waste URLs are located primarily in geological media with potential for slow flow/transport and long isolation. The URL media include plastic salt, hard rock, soft clay, volcanic tuff, basalt and shale, at depths over ~500 m where waste repositories are envisioned to be excavated. The majority of URLs are dedicated facilities excavated after extensive site characterization. The focus is on fracture distributions, heterogeneity, scaling, coupled processes, and other fundamental issues of the earth sciences. For the physics DULs, the depth/overburden thickness is the main parameter determining the damping of cosmic rays, and consequently it should typically be larger than about 800 m. Radioactivity from rocks, neutron flux, and radon gas, which depend on local rock and ventilation conditions (largely independent of depth), are also characterized at different sites to quantify the background level for physics experiments. DULs have been constructed by excavating dedicated experimental halls and service cavities near a road tunnel (horizontal access) or in a mine (vertical access). Cavities at shallower depths are suitable for experiments on neutrinos from artificial sources, power reactors or accelerators. Rock stability (depth dependent), safe access, and utility supply are among the factors of main concern for DULs.
While the focuses and missions of URLs and DULs are very different, common experience and lessons learned may be useful for the ongoing development of new facilities needed for the next generation of underground assessments and experiments. There is growing interest in developing multi-disciplinary programs in DULs, and some URLs have rooms set aside for physics experiments. Examples of DULs and URLs with interactions between the earth sciences and physics include Gran Sasso in Italy, Kamioka in Japan, Canfranc in Spain, LSBB in France, WIPP in New Mexico, DUSEL in South Dakota, and the proposed Jinping deep-tunnel underground laboratory in China. Instruments of common interest include interferometers, laser strain meters, seismic networks, tiltmeters, gravimeters, magnetometers, and other sensors detecting signals over different frequencies, together with water chemistry analyses, including radon concentrations. Radon emissions are of concern for physics experiments and are studied as possible precursors of earthquakes. Measuring the geoneutrino flux and energy spectrum in different locations is of interest to both physics and the earth sciences: the contributions of U and Th in the crust and the mantle to the energy production in the Earth can be studied. One final note is that our ongoing reviews are aimed at contributing to the technological innovations anticipated through inter-disciplinary interactions.

  1. Improving the quality and impact of public health social media activity in Scotland during 2016: #ScotPublicHealth.

    PubMed

    Mackenzie, Douglas Graham

    2017-06-07

    Social media, including Twitter, potentially provides a route to communicate public health messages to a large audience. Simple measures can boost onward broadcast to other users ('retweeting'). This study compares the impact of a structured programme of social media activity in Scotland during 2016 (using #ScotPublicHealth hashtag) with previous years. The Twitter search function was used to identify tweets between 2014 and 2016 inclusive. The first three tweets from each Twitter user were selected for each period. The number of retweets was used as a measure of impact. The quality of tweets was assessed by recording use of image, weblink (uniform resource locator or URL), mention of another Twitter user and/or hashtag, each of which have been shown to boost number of retweets. The percentage of tweets with an image, URL and/or mention of another Twitter user increased during the period of study. The percentage of tweets retweeted during Scottish Public Health conferences increased from 43% in 2014 to 70% in 2016. The volume of tweeting also increased. The quality and impact of tweets sent by the Scottish Public Health community was higher during 2016 than previous years. Conference tweeting remains an area for improvement. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
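The quality features the study counted, weblink, mention of another user, and hashtag, are all detectable from the tweet text itself (image use is not, since it lives in tweet metadata). A sketch of that feature check with simplified, approximate regexes:

```python
# Hedged sketch: flag the textual quality features counted in the study.
# The regexes are rough approximations, not Twitter's own tokenizer.
import re

def tweet_features(text):
    return {
        "url":     bool(re.search(r"https?://\S+", text)),
        "mention": bool(re.search(r"@\w+", text)),
        "hashtag": bool(re.search(r"#\w+", text)),
    }

t = "New report out @ScotPublicHealth #ScotPublicHealth https://example.org/r"
print(tweet_features(t))
```

Tabulating these flags per year is enough to reproduce the trend analysis the abstract describes.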

  2. JBioWH: an open-source Java framework for bioinformatics data integration

    PubMed Central

    Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor

    2013-01-01

    The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh PMID:23846595

  3. JBioWH: an open-source Java framework for bioinformatics data integration.

    PubMed

    Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor

    2013-01-01

    The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh.

  4. Attrition of Canadian Internet pharmacy websites: what are the implications?

    PubMed

    Veronin, Michael A; Clancy, Kristen M

    2013-01-01

    The unavailability of Internet pharmacy websites may impact a consumer's drug purchases and health care. To address the issue of attrition, a defined set of Canadian Internet pharmacy websites was examined at three separate time intervals. In February to March 2006, 117 distinct, fully functional "Canadian Internet pharmacy" websites were located using the advanced search options of Google and the uniform resource locator (URL) for each website was recorded. To determine website attrition, each of the 117 websites obtained and recorded from the previous study was revisited at two later periods of time within a 4-year period. After approximately 4 years and 5 months, only 59 (50.4%) sites were found in the original state. Thirty-four sites (29.1%) had moved to a new URL address and were not functioning as the original Internet pharmacy. For 24 sites (20.5%) the viewer was redirected to another Canadian Internet pharmacy site. Of concern for patients if Internet pharmacy sites were suddenly inaccessible would be the disruption of continuity of care.
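The revisit bookkeeping behind those percentages can be sketched as a mapping from each follow-up observation onto the study's outcome categories. The decision rules below are one reading of the abstract, and the actual network fetch that would produce the boolean inputs is omitted so the sketch runs offline:

```python
# Hedged sketch: classify a revisited pharmacy URL into the study's
# outcome categories. Rules are an interpretation of the abstract.

def classify(moved, is_original_pharmacy, is_other_pharmacy):
    """moved: the final URL differs from the recorded one after redirects."""
    if not moved and is_original_pharmacy:
        return "original state"
    if moved and is_other_pharmacy:
        return "redirected to another pharmacy"
    return "moved / no longer the original pharmacy"

observations = [(False, True, False), (True, False, True), (True, False, False)]
for obs in observations:
    print(classify(*obs))
```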

  5. Can I get a retweet please? Health research recruitment and the Twittersphere.

    PubMed

    O'Connor, Anita; Jackson, Leigh; Goldsmith, Lesley; Skirton, Heather

    2014-03-01

    To evaluate the social networking site Twitter™ as a vehicle for recruitment in online health research and to examine how the Twitter community would share information: the focus of our study was the antenatal experience of mothers of advanced maternal age. One result of growth in worldwide Internet and mobile phone usage is the increased ability to source health information online and to use social media sites including Facebook and Twitter. Although social media have been used in previous health research, there is a lack of literature on the use of Twitter in health research. A cross-sectional survey. We report a novel recruitment method via a social networking site between May and August 2012. Through a Twitter account, we tweeted and requested other Twitter users to retweet our invitation to be involved in the study. Tweets contained a unique URL directing participants to an online survey hosted on the Survey Monkey™ website. Over 11 weeks, 749 original tweets were posted by the researcher. A total of 529 mothers accessed the survey as a result of 359 researcher tweets and subsequent retweets that were seen by Twitter users. The survey was fully completed by 299 (56·5%) participants. Twitter is a cost-effective means of recruitment, enabling engagement with potentially difficult-to-reach populations, providing participants with transparency, anonymity and a more accessible method by which to participate in health research. © 2013 John Wiley & Sons Ltd.

  6. Data Pool Description

    Atmospheric Science Data Center

    2016-04-29

    ASDC Data Pool   Notices   • DataPool will transition from ...  • Use IE7 for FTP sessions: a) Select "View", "Open FTP site in Windows Explorer" or b) Open Windows Explorer and enter the URL for the FTP site in the address bar ...

  7. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web service for distributing NASA technical publications, is modified for performance enhancement, greater protocol support, and human-interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access for clients behind firewalls and/or proxies that truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.
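The parallelisation described above, querying several report databases concurrently so that total latency is roughly that of the slowest backend rather than the sum, can be sketched with a thread pool. The backend names and results are stand-ins, not real NTRS databases:

```python
# Hedged sketch: concurrent fan-out queries to several report databases.
# query_backend is a stub; a real client would speak WAIS/Z39.50 here.
from concurrent.futures import ThreadPoolExecutor

def query_backend(name):
    # Stand-in for a search against one centre's report database.
    return (name, [f"{name}-report-{i}" for i in range(2)])

backends = ["LaRC", "GSFC", "JPL"]
with ThreadPoolExecutor(max_workers=len(backends)) as pool:
    results = dict(pool.map(query_backend, backends))
print(results["JPL"])
```

Merging the per-backend result lists into one ranked page is then a purely local step, which is why the fan-out alone can cut access time by the reported factor.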

  8. Moving Controlled Vocabularies into the Semantic Web

    NASA Astrophysics Data System (ADS)

    Thomas, R.; Lowry, R. K.; Kokkinaki, A.

    2015-12-01

    One of the issues with legacy oceanographic data formats is that the only tool available for describing what a measurement is and how it was made is a single metadata tag known as the parameter code. The British Oceanographic Data Centre (BODC) has been helping the international oceanographic community gain maximum benefit from this through a controlled vocabulary known as the BODC Parameter Usage Vocabulary (PUV). Over time this has grown to over 34,000 entries, some of which have preferred labels with over 400 bytes of descriptive information detailing what was measured and how. A decade ago BODC pioneered making this information available in a more useful form with the implementation of a prototype vocabulary server (NVS) that referenced each 'parameter code' as a URL. This developed into the current server (NVS V2), in which the parameter URL resolves into an RDF document based on the SKOS data model that includes a list of resource URLs mapped to the 'parameter'. For example, the parameter code for a contaminant in biota, such as 'cadmium in Mytilus edulis', carries RDF triples leading to the entry for Mytilus edulis in the WoRMS ontology and for cadmium in the ChEBI ontology. By providing links into these external ontologies, the information captured in a 1980s parameter code now conforms to the Linked Data paradigm of the Semantic Web, vastly increasing the descriptive information accessible to a user. This presentation will describe the next steps along the road to the Semantic Web with the development of a SPARQL end point1 to expose the PUV plus the 190 other controlled vocabularies held in NVS. Whilst this is ideal for those fluent in SPARQL, most users require something a little more user-friendly, and so the NVS browser2 was developed over the end point to allow less technical users to query the vocabularies and navigate the NVS ontology. 
This tool integrates into an editor that allows vocabulary content to be manipulated by authorised users outside BODC. Having placed Linked Data tooling over a single SPARQL end point the obvious future development for this system is to support semantic interoperability outside NVS by the incorporation of federated SPARQL end points in the USA and Australia during the ODIP II project. 1https://vocab.nerc.ac.uk/sparql 2 https://www.bodc.ac.uk/data/codes_and_formats/vocabulary_search/
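A SKOS query of the kind such an end point could answer can be built and URL-encoded with the stdlib alone. The end point address is the one cited in the abstract's footnote; the query itself is illustrative, and actually running it requires network access, so the sketch stops at constructing the request URL:

```python
# Hedged sketch: build a GET request URL for a SPARQL end point.
# The query is illustrative SKOS; no network call is made here.
from urllib.parse import urlencode

ENDPOINT = "https://vocab.nerc.ac.uk/sparql"   # end point cited above

query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?label ?match WHERE {
  ?concept skos:prefLabel ?label ;
           skos:broadMatch ?match .
} LIMIT 5
"""

request_url = ENDPOINT + "?" + urlencode({"query": query})
print(request_url[:60] + "...")
```

This follows the SPARQL 1.1 Protocol convention of passing the query as a URL-encoded `query` parameter on a GET request.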

  9. Analyzing user behavior of the micro-blogging website Sina Weibo during hot social events

    NASA Astrophysics Data System (ADS)

    Guan, Wanqiu; Gao, Haoyu; Yang, Mingmin; Li, Yuan; Ma, Haixin; Qian, Weining; Cao, Zhigang; Yang, Xiaoguang

    2014-02-01

    The spread and resonance of users’ opinions on Sina Weibo, the most popular micro-blogging website in China, are tremendously influential, having significantly affected the course of many real-world hot social events. We select 21 hot events that were widely discussed on Sina Weibo in 2011 and perform statistical analyses. Our main findings are that (i) male users are more likely to be involved, (ii) messages that contain pictures and those posted by verified users are more likely to be reposted, while those with URLs are less likely, and (iii) for most events, the gender factor presents no significant difference in reposting likelihood.

  10. Attrition of Canadian Internet pharmacy websites: what are the implications?

    PubMed Central

    Veronin, Michael A; Clancy, Kristen M

    2013-01-01

    Background The unavailability of Internet pharmacy websites may impact a consumer’s drug purchases and health care. Objective To address the issue of attrition, a defined set of Canadian Internet pharmacy websites was examined at three separate time intervals. Methods In February to March 2006, 117 distinct, fully functional “Canadian Internet pharmacy” websites were located using the advanced search options of Google and the uniform resource locator (URL) for each website was recorded. To determine website attrition, each of the 117 websites obtained and recorded from the previous study was revisited at two later periods of time within a 4-year period. Results After approximately 4 years and 5 months, only 59 (50.4%) sites were found in the original state. Thirty-four sites (29.1%) had moved to a new URL address and were not functioning as the original Internet pharmacy. For 24 sites (20.5%) the viewer was redirected to another Canadian Internet pharmacy site. Conclusion Of concern for patients if Internet pharmacy sites were suddenly inaccessible would be the disruption of continuity of care. PMID:23983491

  11. WARCProcessor: An Integrative Tool for Building and Management of Web Spam Corpora.

    PubMed

    Callón, Miguel; Fdez-Glez, Jorge; Ruano-Ordás, David; Laza, Rosalía; Pavón, Reyes; Fdez-Riverola, Florentino; Méndez, Jose Ramón

    2017-12-22

    In this work we present the design and implementation of WARCProcessor, a novel multiplatform integrative tool aimed at building scientific datasets to facilitate experimentation in web spam research. The developed application allows the user to specify multiple criteria that change the way in which new corpora are generated, whilst reducing the number of repetitive and error-prone tasks related to existing corpus maintenance. To this end, WARCProcessor supports up to six commonly used data sources for web spam research and can store the output corpus in standard WARC format together with complementary metadata files. Additionally, the application facilitates the automatic and concurrent download of web sites from the Internet, allowing the user to configure the depth of links to be followed as well as the behaviour when redirected URLs appear. WARCProcessor provides both an interactive GUI and a command line utility for execution in the background.

  12. WARCProcessor: An Integrative Tool for Building and Management of Web Spam Corpora

    PubMed Central

    Callón, Miguel; Fdez-Glez, Jorge; Ruano-Ordás, David; Laza, Rosalía; Pavón, Reyes; Méndez, Jose Ramón

    2017-01-01

    In this work we present the design and implementation of WARCProcessor, a novel multiplatform integrative tool aimed at building scientific datasets to facilitate experimentation in web spam research. The developed application allows the user to specify multiple criteria that change the way in which new corpora are generated, whilst reducing the number of repetitive and error-prone tasks related to existing corpus maintenance. To this end, WARCProcessor supports up to six commonly used data sources for web spam research and can store the output corpus in standard WARC format together with complementary metadata files. Additionally, the application facilitates the automatic and concurrent download of web sites from the Internet, allowing the user to configure the depth of links to be followed as well as the behaviour when redirected URLs appear. WARCProcessor provides both an interactive GUI and a command line utility for execution in the background. PMID:29271913
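
The WARC container format the tool writes is a simple record-oriented layout. A minimal sketch of serializing one record follows; the field values are illustrative, and real records carry additional headers (e.g. WARC-Target-URI) not shown here.

```python
# Sketch: serialize one minimal WARC record, the archival format WARCProcessor
# stores its output corpora in. Values below are invented for illustration.
def warc_record(record_type, record_id, date, payload: bytes) -> bytes:
    headers = [
        b"WARC/1.0",
        b"WARC-Type: " + record_type.encode(),
        b"WARC-Record-ID: " + record_id.encode(),
        b"WARC-Date: " + date.encode(),
        b"Content-Length: " + str(len(payload)).encode(),
    ]
    # A WARC record is CRLF-delimited headers, a blank line, the payload,
    # and two trailing CRLFs separating it from the next record.
    return b"\r\n".join(headers) + b"\r\n\r\n" + payload + b"\r\n\r\n"

rec = warc_record("response", "<urn:uuid:0001>", "2017-12-22T00:00:00Z",
                  b"<html>spam</html>")
```

Concatenating such records (optionally gzip-compressed per record) produces a standard .warc file.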

  13. Web-Based Lessons from Frontliners.

    ERIC Educational Resources Information Center

    Joseph, Linda C.

    1998-01-01

    Describes Web-site lessons and resources on the role of women in history, games, circulatory system, the study of color for emergent readers, ePals classroom exchange for French students, nutrition and the food pyramid for elementary and secondary students, and classroom management for teachers. Provides URLs for related Web sites. (PEN)

  14. Incorporating the Internet into Traditional Library Instruction.

    ERIC Educational Resources Information Center

    Fonseca, Tony; King, Monica

    2000-01-01

    Presents a template for teaching traditional library research and one for incorporating the Web. Highlights include the differences between directories and search engines; devising search strategies; creating search terms; how to choose search engines; evaluating online resources; helpful Web sites; and how to read URLs to evaluate a Web site's…

  15. Brady Geothermal Field InSAR Raw Data

    DOE Data Explorer

    Ali, Tabrez

    2015-03-31

    List of TerraSAR-X/TanDEM-X images acquired between 2015-01-01 and 2015-03-31, and archived at https://winsar.unavco.org. See file "BHS InSAR Data with URLs.csv" for individual links. NOTE: The user must create an account in order to access the data (See "Instructions for Creating an Account" below).

  16. Open Astronomy Catalogs API

    NASA Astrophysics Data System (ADS)

    Guillochon, James; Cowperthwaite, Philip S.

    2018-05-01

    We announce the public release of the application program interface (API) for the Open Astronomy Catalogs (OACs), the OACAPI. The OACs serve near-complete collections of supernova, tidal disruption, kilonova, and fast stars data (including photometry, spectra, radio, and X-ray observations) via a user-friendly web interface that displays the data interactively and offers full data downloads. The OACAPI, by contrast, enables users to specifically download particular pieces of the OAC dataset via a flexible programmatic syntax, either via URL GET requests, or via a module within the astroquery Python package.
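
The URL GET style of access the abstract mentions can be sketched as below. The host, path scheme, and parameter name are assumptions for illustration; consult the OACAPI documentation for the actual endpoint layout.

```python
# Sketch: build an OACAPI-style GET request URL for one event's photometry.
# The base URL and path structure here are assumed, not the documented API.
from urllib.parse import urlencode

def oac_query_url(event, quantity, attributes, fmt="csv"):
    base = f"https://api.astrocats.space/{event}/{quantity}/{'+'.join(attributes)}"
    return base + "?" + urlencode({"format": fmt})

url = oac_query_url("SN2014J", "photometry", ["time", "magnitude", "band"])
```

Fetching such a URL with any HTTP client would return the requested slice of the catalog.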

  17. Nencki Genomics Database—Ensembl funcgen enhanced with intersections, user data and genome-wide TFBS motifs

    PubMed Central

    Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal

    2013-01-01

    We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface. Database URL: http://www.nencki-genomics.org. PMID:24089456

  18. Trajectory Browser Website

    NASA Technical Reports Server (NTRS)

    Foster, Cyrus; Jaroux, Belgacem A.

    2012-01-01

    The Trajectory Browser is a web-based tool developed at the NASA Ames Research Center to be used for the preliminary assessment of trajectories to small bodies and planets and for providing relevant launch date, time-of-flight and delta-V requirements. The site hosts a database of transfer trajectories from Earth to asteroids and planets for various types of missions such as rendezvous, sample return or flybys. A search engine allows the user to find trajectories meeting desired constraints on the launch window, mission duration and delta-V capability, while a trajectory viewer tool allows the visualization of the heliocentric trajectory and the detailed mission itinerary. The anticipated user base of this tool consists primarily of scientists and engineers designing interplanetary missions in the context of pre-phase A studies, particularly for performing accessibility surveys of large populations of small bodies. The educational potential of the website is also recognized for academia and the public with regard to trajectory design, a field that has generally been poorly understood by the public. The website is currently hosted at the NASA-internal URL http://trajbrowser.arc.nasa.gov/ with plans for a public release as soon as development is complete.
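
The constraint search the abstract describes amounts to filtering a trajectory catalog. A minimal sketch follows; the trajectory entries and field names are invented placeholders, not mission data or the tool's schema.

```python
# Sketch: filter candidate trajectories by launch window, time of flight,
# and delta-V budget, in the spirit of the Trajectory Browser's search engine.
# All numbers and field names below are fabricated for illustration.
def find_trajectories(trajectories, launch_window, max_tof_days, max_dv_kms):
    lo, hi = launch_window
    return [t for t in trajectories
            if lo <= t["launch"] <= hi
            and t["tof_days"] <= max_tof_days
            and t["dv_kms"] <= max_dv_kms]

catalog = [
    {"target": "asteroid A", "launch": 2024.5, "tof_days": 300, "dv_kms": 5.1},
    {"target": "asteroid B", "launch": 2025.1, "tof_days": 500, "dv_kms": 7.9},
    {"target": "planet C",   "launch": 2024.8, "tof_days": 900, "dv_kms": 4.2},
]
hits = find_trajectories(catalog, (2024.0, 2025.0),
                         max_tof_days=1000, max_dv_kms=6.0)
```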

  19. MedlinePlus FAQ: Framing

    MedlinePlus

    ... URL of this page: https://medlineplus.gov/faq/framing.html I'd like to link to MedlinePlus, ... M. encyclopedia. Our license agreements do not permit framing of their content from our site. For more ...

  20. Web-based Collaboration and Visualization in the ANDRILL Program

    NASA Astrophysics Data System (ADS)

    Reed, J.; Rack, F. R.; Huffman, L. T.; Cattadori, M.

    2009-12-01

    ANDRILL has embraced the web as a platform for facilitating collaboration and communicating science with educators, students and researchers alike. Two recent ANDRILL education and outreach projects, Project Circle 2008 and the Climate Change Student Summit, brought together classrooms from around the world to participate in cutting edge science. A large component of each project was the online collaboration achieved through project websites, blogs, and the GroupHub--a secure online environment where students could meet to send messages, exchange presentations and pictures, and even chat live. These technologies enabled students from different countries and time zones to connect and participate in a shared 'conversation' about climate change research. ANDRILL has also developed several interactive, web-based visualizations to make scientific drilling data more engaging and accessible to the science community and the public. Each visualization is designed around three core concepts that enable the Web 2.0 platform, namely, that they are: (1) customizable - a user can customize the visualization to display the exact data she is interested in; (2) linkable - each view in the visualization has a distinct URL that the user can share with her friends via sites like Facebook and Twitter; and (3) mashable - the user can take the visualization, mash it up with data from other sites or her own research, and embed it in her blog or website. The web offers an ideal environment for visualization and collaboration because it requires no special software and works across all computer platforms, which allows organizations and research projects to engage much larger audiences. In this presentation we will describe past challenges and successes, as well as future plans.

  1. Development of 3D browsing and interactive web system

    NASA Astrophysics Data System (ADS)

    Shi, Xiaonan; Fu, Jian; Jin, Chaolin

    2017-09-01

    In the current market, users must download dedicated software or plug-ins to browse 3D models, the browsing systems can be unstable, and interaction with the 3D model is not supported. To solve these problems, this paper presents a solution in which the 3D model is parsed on the server side and browsed interactively: the user only needs to enter the system URL and upload a 3D model file to browse it. The server parses the 3D model in real time and the interaction responds quickly. This follows a minimalist approach for the user and addresses the obstacles currently hindering 3D content development.

  2. FilterGate, or Knowing What We're Walling In or Walling Out.

    ERIC Educational Resources Information Center

    Wolinsky, Art

    2001-01-01

    Discusses problems with Internet filtering when it results in erroneously blocked Web sites. Topics include the Children's Internet Protection Act (CIPA); blocking all sites on an Internet Service Provider (ISP) through Round Robin DNS; blocking by URL or by IP number; and questioning the need for filters. (LRW)
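
The two blocking granularities mentioned above differ in how bluntly they cut. A minimal sketch with hypothetical blocklist entries:

```python
# Sketch contrasting the article's two filtering approaches: matching a URL
# prefix versus matching the resolved IP address. Blocklist values are
# invented; IP blocking is the blunter instrument, since one IP (e.g. a
# shared host or Round Robin DNS pool) may serve many unrelated sites.
from urllib.parse import urlparse

BLOCKED_URL_PREFIXES = {"badsite.example/ads"}
BLOCKED_IPS = {"203.0.113.7"}

def is_blocked(url, resolved_ip):
    parsed = urlparse(url)
    path_key = parsed.netloc + parsed.path
    by_url = any(path_key.startswith(p) for p in BLOCKED_URL_PREFIXES)
    by_ip = resolved_ip in BLOCKED_IPS   # blocks every site hosted on that IP
    return by_url or by_ip
```

The second assertion below shows the over-blocking problem: an innocent site is filtered merely for sharing an IP with a blocked one.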

  3. DAVID-WS: a stateful web service to facilitate gene/protein list analysis

    PubMed Central

    Jiao, Xiaoli; Sherman, Brad T.; Huang, Da Wei; Stephens, Robert; Baseler, Michael W.; Lane, H. Clifford; Lempicki, Richard A.

    2012-01-01

    Summary: The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. Availability: The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html. Contact: xiaoli.jiao@nih.gov; rlempicki@nih.gov PMID:22543366

  4. DAVID-WS: a stateful web service to facilitate gene/protein list analysis.

    PubMed

    Jiao, Xiaoli; Sherman, Brad T; Huang, Da Wei; Stephens, Robert; Baseler, Michael W; Lane, H Clifford; Lempicki, Richard A

    2012-07-01

    The database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions. The web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html.
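
The URL-size limitation described above can be made concrete: encoding even a modest gene list into a GET request quickly exceeds common server URL caps. The endpoint and parameter names below are illustrative, not DAVID's actual API.

```python
# Sketch: why a URL-based, stateless API struggles with large gene lists.
# Packing a few thousand identifiers into one GET URL far exceeds the
# 2-8 KB limits many servers and proxies impose. Names are placeholders.
from urllib.parse import urlencode

gene_ids = [f"GENE{i:05d}" for i in range(3000)]
query = urlencode({"ids": ",".join(gene_ids), "tool": "chartReport"})
url = "https://example.org/api?" + query

too_long_for_many_servers = len(url) > 8192
```

A stateful web service such as DAVID-WS sidesteps this by sending the list in the request body and keeping session state on the server.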

  5. MetExploreViz: web component for interactive metabolic network visualization.

    PubMed

    Chazalviel, Maxime; Frainay, Clément; Poupin, Nathalie; Vinson, Florence; Merlet, Benjamin; Gloaguen, Yoann; Cottret, Ludovic; Jourdan, Fabien

    2017-09-15

    MetExploreViz is an open source web component that can be easily embedded in any web site. It provides features dedicated to the visualization of metabolic networks and pathways and thus offers a flexible solution to analyze omics data in a biochemical context. Documentation and link to GIT code repository (GPL 3.0 license)are available at this URL: http://metexplore.toulouse.inra.fr/metexploreViz/doc /. Tutorial is available at this URL. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  6. ToxRefDB - Release user-friendly web-based tool for mining ToxRefDB

    EPA Science Inventory

    The updated URL link is for a table of NCCT ToxCast public datasets. The next to last row of the table has the link for the US EPA ToxCast ToxRefDB Data Release October 2014. ToxRefDB provides detailed chemical toxicity data in a publically accessible searchable format. ToxRefD...

  7. Method of recommending items to a user based on user interest

    DOEpatents

    Bollen, John; Van De Sompel, Herbert

    2013-11-05

    Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. A technical, standards-based architecture for sharing usage information is presented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service.
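
An OpenURL-compliant linking server receives requests shaped like the sketch below. The key names follow the Z39.88-2004 ContextObject convention as best recalled here; the resolver host and field values are invented, so treat this as an illustration rather than a normative example.

```python
# Sketch: an OpenURL query string of the kind a linking server aggregates
# usage data from. Resolver base URL and metadata values are hypothetical.
from urllib.parse import urlencode

def openurl(resolver_base, **rft_fields):
    params = {"url_ver": "Z39.88-2004"}
    params.update({f"rft.{k}": v for k, v in rft_fields.items()})
    return resolver_base + "?" + urlencode(params)

link = openurl("https://resolver.example/openurl",
               genre="article", jtitle="Journal of Examples", volume="7")
```

Each such resolved request is a usage event the architecture can expose for OAI-PMH harvesting.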

  8. Global reach of direct-to-consumer advertising using social media for illicit online drug sales.

    PubMed

    Mackey, Tim Ken; Liang, Bryan A

    2013-05-29

    Illicit or rogue Internet pharmacies are a recognized global public health threat that have been identified as utilizing various forms of online marketing and promotion, including social media. To assess the accessibility of creating illicit no-prescription direct-to-consumer advertising (DTCA) online pharmacy social media marketing (eDTCA2.0) and evaluate its potential global reach. We identified the top 4 social media platforms allowing eDTCA2.0. After determining applicable platforms (i.e., Facebook, Twitter, Google+, and MySpace), we created a fictitious advertisement promoting no-prescription drugs online and posted it to the identified social media platforms. Each advertisement linked to a unique website URL that consisted of a site error page. Employing Web analytics, we tracked the number of users visiting these sites and their locations. We used commercially available Internet tools and services, including website hosting, domain registration, and website analytic services. Illicit online pharmacy social media content for Facebook, Twitter, and MySpace remained accessible despite highly questionable and potentially illegal content. Fictitious advertisements promoting the illicit sale of drugs generated aggregate unique user traffic of 2795 visits over a 10-month period. Further, traffic to our websites originated from a number of countries, including high-income and middle-income countries and emerging markets. Our results indicate there are few barriers to entry for social media-based illicit online drug marketing. Further, illicit eDTCA2.0 has globalized beyond US borders to other countries through unregulated Internet marketing.
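
The measurement side of this study, counting unique visitors and their countries of origin from analytics records, can be sketched as follows. The visit records are fabricated placeholders, not the study's data.

```python
# Sketch: aggregate site-visit records the way web analytics services do,
# counting unique visitors and the set of originating countries.
# All records below are invented for illustration.
visits = [
    {"visitor": "v1", "country": "US"},
    {"visitor": "v2", "country": "IN"},
    {"visitor": "v1", "country": "US"},   # repeat visit by the same visitor
    {"visitor": "v3", "country": "BR"},
]
unique_visitors = {v["visitor"] for v in visits}
countries = sorted({v["country"] for v in visits})
```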

  9. Precipitation links (PrecipLinks) - a prototype directory for precipitation information

    NASA Technical Reports Server (NTRS)

    Velanthapillia, Balendran; Stocker, Erich Franz

    2006-01-01

    This poster describes a web directory of research-oriented precipitation links. In this era of sophisticated search engines and web agents, it might seem counterproductive to establish such a directory of links. However, entering "precipitation" into a search engine like Google will yield over one million hits. To further exacerbate this situation, many of the returned links are dead, duplicates of other links, incomplete, or only marginally related to precipitation research or even the broader precipitation area. Sometimes following the linked URL causes the browser to lose context and not be able to get back to the original page. Even using more sophisticated search engine query parameters or agents reduces the overall return but does not eliminate the other issues listed. As part of the development of the measurement-based Precipitation Processing System (PPS) that will support Tropical Rainfall Measuring Mission (TRMM) version 7 reprocessing and the Global Precipitation Measurement (GPM) mission, a precipitation links (PrecipLinks) facility is being developed. PrecipLinks is intended to share the locations of other sites that contain information or data pertaining to precipitation research. Potential contributors can log on to the PrecipLinks website and register their site for inclusion in the directory. The price of inclusion is the requirement to place a link back to PrecipLinks on the webpage that is registered. This ensures that users will be able to easily get back to PrecipLinks regardless of any context issues that browsers might have. Perhaps more importantly, users visiting one site that they know can be referred to a location listing many other sites with which they might not be familiar. PrecipLinks is designed to have a very flat structure. This poster summarizes its categories (information, data, services) and the reasons for their selection. Providers may register multiple pages to which they wish to direct users. However, each page may be attached to only one of these categories. Each page to which they refer users will also have a return link to PrecipLinks. The poster describes the operation of the system, both the automated and the human processes, and provides images of the various steps in registration and use.

  10. 78 FR 55083 - Submission for OMB Review; 30-day Comment Request; Genomics and Society Public Surveys in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ... site. The URL for this survey site will also be advertised separately through media and social media... information on the proposed project, contact: Laura M. Koehly, Ph.D., Senior Investigator, Social and... in health conditions and associated risk factors; The role of friends, family, media, and health...

  11. Fusion Genes Predict Prostate Cancer Recurrence

    DTIC Science & Technology

    2017-10-01

    URL for any Internet site(s) that disseminates the results of the research activities. A short description of each site should be provided. It is not...University of Wisconsin System Madison, WI 53715 REPORT DATE: October 2017 TYPE OF REPORT: Annual PREPARED FOR: U.S. Army Medical Research and...policy or decision unless so designated by other documentation. REPORT DOCUMENTATION PAGE Form Approved OMB No. 0704-0188 Public reporting burden

  12. Using USNO's API to Obtain Data

    NASA Astrophysics Data System (ADS)

    Lesniak, Michael V.; Pozniak, Daniel; Punnoose, Tarun

    2015-01-01

    The U.S. Naval Observatory (USNO) is in the process of modernizing its publicly available web services into APIs (Application Programming Interfaces). Services configured as APIs offer greater flexibility to the user and allow greater usage. Depending on the particular service, users who implement our APIs will receive either a PNG (Portable Network Graphics) image or data in JSON (JavaScript Object Notation) format. This raw data can then be embedded in third-party web sites or in apps. Part of the USNO's mission is to provide astronomical and timing data to government agencies and the general public. To this end, the USNO provides accurate computations of astronomical phenomena such as dates of lunar phases, rise and set times of the Moon and Sun, and lunar and solar eclipse times. Users who navigate to our web site and select one of our 18 services are prompted to complete a web form, specifying parameters such as date, time, location, and object. Many of our services work for years between 1700 and 2100, meaning that past, present, and future events can be computed. Upon form submission, our web server processes the request, computes the data, and outputs it to the user. Over recent years, the use of the web by the general public has vastly changed. In response to this, the USNO is modernizing its web-based data services. This includes making our computed data easier to embed within third-party web sites and easier to query from apps running on tablets and smartphones. To facilitate this, the USNO has begun converting its services into APIs. In addition to the existing web forms for the various services, users are able to make direct URL requests that return either an image or numerical data. To date, four of our web services have been configured to run with APIs. Two are image-producing services: "Apparent Disk of a Solar System Object" and "Day and Night Across the Earth." 
Two API data services are "Complete Sun and Moon Data for One Day" and "Dates of Primary Phases of the Moon." Instructions for how to use our API services as well as examples of their use can be found on one of our explanatory web pages and will be discussed here.
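
A direct URL request of the kind described above might be formed as in the sketch below. The endpoint path and parameter names are assumptions for illustration; the USNO's explanatory pages document the actual interface.

```python
# Sketch: build a direct URL request for a one-day Sun/Moon data service.
# The path "api/rstt/oneday" and the parameter names are assumed, not taken
# from USNO documentation; values are example inputs.
from urllib.parse import urlencode

def usno_oneday_url(date, coords):
    params = {"date": date, "coords": coords}
    return "https://aa.usno.navy.mil/api/rstt/oneday?" + urlencode(params)

url = usno_oneday_url("2015-01-01", "38.92,-77.07")
```

Fetching such a URL would return JSON that an app or third-party site can embed directly.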

  13. Combination of heterogeneous criteria for the automatic detection of ethical principles on health web sites.

    PubMed

    Gaudinat, Arnaud; Grabar, Natalia; Boyer, Célia

    2007-10-11

    The detection of ethical issues on web sites aims to select information helpful to the reader and is an important concern in medical informatics. Indeed, with the ever-increasing volume of online health information, coupled with its uneven reliability and quality, the public should be made aware of the quality of information available online. To address this issue, we propose methods for the automatic detection of statements related to ethical principles such as those of the HONcode. For the detection of these statements, we combine two kinds of heterogeneous information, content-based categorizations and URL-based categorizations, through the application of machine learning algorithms. Our objective is to assess the quality of URL-based categorization for web pages where content-based categorization has proven insufficiently precise. The results obtained indicate that only some of the principles were processed more accurately.
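
Combining the two heterogeneous sources amounts to merging content features with URL features into one representation a classifier can consume. A minimal sketch, with simplified tokenization rules that stand in for the paper's actual feature engineering:

```python
# Sketch: merge URL-derived and content-derived features into one feature set.
# Tokenization here is deliberately naive; it only illustrates the combination.
import re

def url_features(url):
    # Split the URL on non-alphanumeric characters: path words often signal
    # a page's role (e.g. "privacy", "about", "contact").
    return {f"url:{tok}" for tok in re.split(r"\W+", url.lower()) if tok}

def content_features(text):
    return {f"txt:{tok}" for tok in re.findall(r"[a-z]+", text.lower())}

def combined_features(url, text):
    return url_features(url) | content_features(text)

feats = combined_features("https://clinic.example/privacy",
                          "We respect confidentiality")
```

Prefixing each token with its source ("url:" vs "txt:") lets a downstream learner weight the two channels independently.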

  14. Is Domain Highlighting Actually Helpful in Identifying Phishing Web Pages?

    PubMed

    Xiong, Aiping; Proctor, Robert W; Yang, Weining; Li, Ninghui

    2017-06-01

    To evaluate the effectiveness of domain highlighting in helping users identify whether Web pages are legitimate or spurious. As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which Web site they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of Web pages in two phases. In Phase 1, participants were to judge the legitimacy based on any information on the Web page, whereas in Phase 2, they were to focus on the address bar. Whether the domain was highlighted was also varied. Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations. Participants differentiated the legitimate and fraudulent Web pages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants' visual attention was attracted by the highlighted domains. Failure to detect many fraudulent Web pages even when the domain was highlighted implies that users lacked knowledge of Web page security cues or how to use those cues. Potential applications include development of phishing prevention training incorporating domain highlighting with other methods to help users identify phishing Web pages.
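
What domain highlighting draws attention to is the registrable part of the host name, which is exactly what distinguishes a look-alike page. A minimal sketch with a naive heuristic (real browsers consult the Public Suffix List, which this does not):

```python
# Sketch: isolate the portion of a URL that domain highlighting emphasizes.
# Keeping the last two host labels is a naive stand-in for Public Suffix
# List handling; the URLs below are fabricated examples.
from urllib.parse import urlparse

def highlighted_domain(url):
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

legit = highlighted_domain("https://www.bank.example/login")
spoof = highlighted_domain("https://bank.example.attacker.example/login")
```

The spoof URL embeds the legitimate name as a subdomain; only the highlighted registrable domain reveals the difference, which is why overlooking it defeats the cue.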

  15. Genetic testing and your cancer risk

    MedlinePlus

    ... GO About MedlinePlus Site Map FAQs Customer Support Health Topics Drugs & Supplements Videos & Tools Español You Are Here: Home → Medical Encyclopedia → Genetic testing and your cancer risk URL of this page: //medlineplus.gov/ency/patientinstructions/ ...

  16. 76 FR 800 - Policy and Procedural Change Regarding the Publication of Notices of Funding Opportunities in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-06

    ... Web site, http://www.grants.gov , in accordance with the policy directive issued by the Office of... posted at http://www.grants.gov by following the universal resource locator (URL) link included in the synopsis, or by visiting ETA's Web site at http://www.doleta.gov . DATES: Effective Date: January 6, 2011...

  17. The GEON Integrated Data Viewer (IDV) and IRIS DMC Services Illustrate CyberInfrastructure Support for Seismic Data Visualization and Interpretation

    NASA Astrophysics Data System (ADS)

    Meertens, C.; Wier, S.; Ahern, T.; Casey, R.; Weertman, B.; Laughbon, C.

    2008-12-01

    UNAVCO and the IRIS DMC are data service partners for seismic visualization, particularly for hypocentral data and tomography. UNAVCO provides the GEON Integrated Data Viewer (IDV), an extension of the Unidata IDV, a free, interactive, research-level, software display and analysis tool for data in 3D (latitude, longitude, depth) and 4D (with time), located on or inside the Earth. The GEON IDV is designed to meet the challenge of investigating complex, multi-variate, time-varying, three- dimensional geoscience data in the context of new remote and shared data sources. The GEON IDV supports data access from data sources using HTTP and FTP servers, OPeNDAP servers, THREDDS catalogs, RSS feeds, and WMS (web map) servers. The IRIS DMC (Data Management System) has developed web services providing data for earthquake hypocentral data and seismic tomography model grids. These services can be called by the GEON IDV to access data at IRIS without copying files. The IRIS Earthquake Browser (IEB) is a web-based query tool for hypocentral data. The IEB combines the DMC's large database of more than 1,900,000 earthquakes with the Google Maps web interface. With the IEB you can quickly find earthquakes in any region of the globe and then import this information into the GEON Integrated Data Viewer where the hypocenters may be visualized. You can select earthquakes by location region, time, depth, and magnitude. The IEB gives the IDV a URL to the selected data. The IDV then shows the data as maps or 3D displays, with interactive control of vertical scale, area, map projection, with symbol size and color control by magnitude or depth. The IDV can show progressive time animation of, for example, aftershocks filling a source region. The IRIS Tomoserver converts seismic tomography model output grids to NetCDF for use in the IDV. The Tomoserver accepts a tomographic model file as input from a user and provides an equivalent NetCDF file as output. 
The service supports NA04, S3D, A1D and CUB input file formats, contributed by their respective creators. The NetCDF file is saved to a location that can be referenced with a URL on an IRIS server. The URL for the NetCDF file is provided to the user. The user can download the data from IRIS, or copy the URL into IDV directly for interpretation, and the IDV will access the data at IRIS. The Tomoserver conversion software was developed by Instrumental Software Technologies, Inc. Use cases with the GEON IDV and IRIS DMC data services will be shown.

  18. blend4php: a PHP API for galaxy

    PubMed Central

    Wytko, Connor; Soto, Brian; Ficklin, Stephen P.

    2017-01-01

    Galaxy is a popular framework for the execution of complex analytical pipelines, typically over large data sets, and is commonly used for (but not limited to) genomic, genetic and related biological analysis. It provides a web front-end and integrates with high performance computing resources. Here we report the development of the blend4php library, which wraps Galaxy’s RESTful API in a PHP-based library. PHP-based web applications can use blend4php to automate the execution, monitoring and management of a remote Galaxy server, including its users, workflows, jobs and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0) at https://github.com/galaxyproject/blend4php. Database URL: https://github.com/galaxyproject/blend4php PMID:28077564
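
The calls blend4php wraps are ordinary REST requests against Galaxy's /api/ endpoints. A sketch (in Python rather than PHP, for illustration) of forming such a request URL; the server host and API key are placeholders:

```python
# Sketch: the kind of REST request a Galaxy API wrapper issues, here listing
# workflows with an API key. Host and key are placeholders; the /api/
# endpoint convention follows Galaxy's REST interface.
from urllib.parse import urlencode

def galaxy_api_url(server, endpoint, api_key):
    return f"{server}/api/{endpoint}?" + urlencode({"key": api_key})

url = galaxy_api_url("https://galaxy.example", "workflows", "SECRET-KEY")
```

A wrapper library's job is then to issue such requests, parse the JSON responses, and expose them as native objects to the host language.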

  19. GREAT: a web portal for Genome Regulatory Architecture Tools

    PubMed Central

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-01-01

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analyses of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system that runs a modern browser. GREAT is based on the analysis of genome layout (defined as the respective positioning of co-functional genes) and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine-readable tables for downstream analysis. The GREAT portal can be reached at the following URL: https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. PMID:27151196
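    The abstract does not disclose GREAT's periodicity algorithm; as a loose illustration of scoring positional regularity, here is a circular-statistics sketch (mean resultant length) in Python, with invented gene positions:

    ```python
    # Sketch: score how regularly positions recur with a trial period by
    # mapping them onto a circle; 1.0 means perfectly periodic placement,
    # values near 0 mean no regularity. Not GREAT's published method.
    import math

    def periodicity_score(positions, period):
        """Mean resultant length of positions folded by the given period."""
        angles = [2 * math.pi * (p % period) / period for p in positions]
        c = sum(math.cos(a) for a in angles) / len(angles)
        s = sum(math.sin(a) for a in angles) / len(angles)
        return math.hypot(c, s)

    regular = periodicity_score([0, 100, 200, 300, 400], 100)  # exactly periodic
    irregular = periodicity_score([0, 33, 61, 97, 140], 100)
    ```

    Scanning trial periods and keeping those with high scores is one simple way to detect regular spacing along a sequence of genomic positions.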

  20. Recent improvements in the NASA technical report server

    NASA Technical Reports Server (NTRS)

    Maa, Ming-Hokng; Nelson, Michael L.

    1995-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web (WWW) report distribution service, has been modified to allow parallel database queries, which decrease user access time by an average factor of 2.3; access from clients behind firewalls and/or proxies that truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases; and compatibility with the Z39.50 protocol.
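    The parallelism described above is, in essence, a fan-out/merge pattern: query every backend concurrently so total latency tracks the slowest backend rather than the sum. A minimal sketch with stand-in backends (not the real NTRS databases):

    ```python
    # Sketch: fan out one query to several report databases concurrently
    # and merge the hits. The backends here are placeholders; a real
    # implementation would issue network requests inside search_backend.
    from concurrent.futures import ThreadPoolExecutor

    def search_backend(name, query):
        # Placeholder result; stands in for a remote database search.
        return [f"{name}: report matching '{query}'"]

    def parallel_search(backends, query):
        with ThreadPoolExecutor(max_workers=len(backends)) as pool:
            futures = [pool.submit(search_backend, b, query) for b in backends]
            results = []
            for f in futures:
                results.extend(f.result())
        return results

    hits = parallel_search(["LaRC", "ARC", "GSFC"], "wind tunnel")
    ```

    Because the futures complete independently, the wall-clock time of the merged search approaches the single slowest backend's response time.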

  1. Integrating Webtop Components with Thin-Client Web Applications using WDK Tickets

    NASA Technical Reports Server (NTRS)

    Duley, Jason

    2004-01-01

    Contents include the following: issues surrounding encryption/decryption of password strings when deploying on different machines and platforms; security concerns when exposing docbases to internet users; docbase session management in Java Servlets; customization of Webtop components; WDK tickets as a silent login alternative; encoding tickets and ticket syntax; invoking Webtop components via an Action URL; and issues with accessing Webtop components on Mac OS X through SSL.

  2. Global Reach of Direct-to-Consumer Advertising Using Social Media for Illicit Online Drug Sales

    PubMed Central

    Liang, Bryan A

    2013-01-01

    Background Illicit or rogue Internet pharmacies are a recognized global public health threat that have been identified as utilizing various forms of online marketing and promotion, including social media. Objective To assess the accessibility of creating illicit no prescription direct-to-consumer advertising (DTCA) online pharmacy social media marketing (eDTCA2.0) and evaluate its potential global reach. Methods We identified the top 4 social media platforms allowing eDTCA2.0. After determining applicable platforms (ie, Facebook, Twitter, Google+, and MySpace), we created a fictitious advertisement promoting no-prescription drugs online and posted it to the identified social media platforms. Each advertisement linked to a unique website URL that consisted of a site error page. Employing Web search analytics, we tracked the number of users visiting these sites and their location. We used commercially available Internet tools and services, including website hosting, domain registration, and website analytic services. Results Illicit online pharmacy social media content for Facebook, Twitter, and MySpace remained accessible despite highly questionable and potentially illegal content. Fictitious advertisements promoting illicit sale of drugs generated aggregate unique user traffic of 2795 visits over a 10-month period. Further, traffic to our websites originated from a number of countries, including high-income and middle-income countries, and emerging markets. Conclusions Our results indicate there are few barriers to entry for social media–based illicit online drug marketing. Further, illicit eDTCA2.0 has globalized outside US borders to other countries through unregulated Internet marketing. PMID:23718965

  3. Dive and Explore: An Interactive Web Visualization that Simulates Making an ROV Dive to an Active Submarine Volcano

    NASA Astrophysics Data System (ADS)

    Weiland, C.; Chadwick, W. W.

    2004-12-01

    Several years ago we created an exciting and engaging multimedia exhibit for the Hatfield Marine Science Center that lets visitors simulate making a dive to the seafloor with the remotely operated vehicle (ROV) named ROPOS. The exhibit immerses the user in an interactive experience that is naturally fun but also educational. The public display is located at the Hatfield Marine Science Visitor Center in Newport, Oregon. We are now completing a revision to the project that will make this engaging virtual exploration accessible to a much larger audience. With minor modifications we will be able to put the exhibit onto the world wide web so that any person with internet access can view and learn about exciting volcanic and hydrothermal activity at Axial Seamount on the Juan de Fuca Ridge. The modifications address some cosmetic and logistic issues confronted in the museum environment, but will mainly involve compressing video clips so they can be delivered more efficiently over the internet. The web version, like the museum version, will allow users to choose 1 of 3 different dive sites in the caldera of Axial Volcano. The dives are based on real seafloor settings at Axial Seamount, an active submarine volcano on the Juan de Fuca Ridge (NE Pacific) that is also the location of a seafloor observatory called NeMO. Once a dive is chosen, the user watches ROPOS being deployed and then arrives in a 3-D computer-generated seafloor environment that is based on the real world but is easier to visualize and navigate. Once on the bottom, the user is placed within a 360 degree panorama and can look in all directions by manipulating the computer mouse. By clicking on markers embedded in the scene, the user can then either move to other panorama locations via movies that travel through the 3-D virtual environment, or play video clips from actual ROPOS dives specifically related to that scene.
Audio accompanying the video clips informs the user where they are going or what they are looking at. After the user is finished exploring the dive site they end the dive by leaving the bottom and watching the ROV being recovered onto the ship at the surface. Within the three simulated dives there are a total of 6 arrival and departure movies, 7 seafloor panoramas, 12 travel movies, and 23 ROPOS video clips. This virtual exploration is part of the NeMO web site and will be at this URL http://www.pmel.noaa.gov/vents/dive.html

  4. Global Precipitation Measurement (GPM) Mission Products and Services at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC)

    NASA Technical Reports Server (NTRS)

    Liu, Z.; Ostrenga, D.; Vollmer, B.; Kempler, S.; Deshong, B.; Greene, M.

    2015-01-01

    The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observing System Data and Information System (EOSDIS). The GES DISC is also home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 17 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently or soon to be available:
    - Level-1 GPM Microwave Imager (GMI) and partner radiometer products; DPR products
    - Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products; DPR products
    - Level-3 daily and monthly products; DPR products
    - Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final)
    A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services that are currently or soon to be available include Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; and monitoring services (e.g., Current Conditions) for applications. The Unified User Interface (UUI) is the next step in the evolution of the GES DISC web site.
It attempts to provide seamless access to data, information and services through a single interface without sending the user to different applications or URLs (e.g., search, access, subset, Giovanni, documents).
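    Many GES DISC holdings are reachable via OPeNDAP, where a subset is requested by appending a constraint expression to the dataset URL. A sketch of building such a URL follows; the dataset path and variable name are hypothetical, while the `.ascii` suffix and `[start:stride:stop]` hyperslab syntax follow OPeNDAP convention:

    ```python
    # Sketch: construct an OPeNDAP constraint-expression URL selecting a
    # hyperslab of one variable, e.g. var[0:1:9][100:1:150].
    def opendap_subset_url(base, variable, *ranges):
        """ranges are (start, stride, stop) index triples per dimension."""
        ce = variable + "".join(f"[{a}:{step}:{b}]" for a, step, b in ranges)
        return f"{base}.ascii?{ce}"

    # Hypothetical dataset path and variable name, for illustration only.
    url = opendap_subset_url(
        "https://disc.example.nasa.gov/opendap/GPM/IMERG/3B-HHR.nc4",
        "precipitationCal", (0, 1, 0), (1000, 1, 1100), (400, 1, 500))
    ```

    Requesting the `.ascii` (or `.dods`) form of such a URL returns only the selected hyperslab, which is what makes server-side subsetting cheap for the client.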

  5. Diversity: On-Line Resources.

    ERIC Educational Resources Information Center

    Helms, Ronald G.

    1997-01-01

    Argues that the Internet and the World Wide Web are excellent resources for multicultural education. Reviews 25 Internet sites (provides URLs) that are of interest for social educators and students on topics from indigenous peoples of Mexico to Africa to U.S. immigrant groups to teaching diversity. (DSK)

  6. Patent urachus repair - slideshow

    MedlinePlus

    Patent urachus repair - series (Normal anatomy). URL of this ...

  7. OReFiL: an online resource finder for life sciences.

    PubMed

    Yamamoto, Yasunori; Takagi, Toshihisa

    2007-08-06

    Many online resources for the life sciences have been developed and introduced in peer-reviewed papers recently, ranging from databases and web applications to data-analysis software. Some have been introduced in special journal issues or websites with a search function, but others remain scattered throughout the Internet and in the published literature. The searchable resources on these sites are collected and maintained manually and are therefore of higher quality than automatically updated sites, but also require more time and effort. We developed an online resource search system called OReFiL to address these issues. We developed a crawler to gather all of the web pages whose URLs appear in MEDLINE abstracts and full-text papers on the BioMed Central open-access journals. The URLs were extracted using regular expressions and rules based on our heuristic knowledge. We then indexed the online resources to facilitate their retrieval and comparison by researchers. Because every online resource has at least one PubMed ID, we can easily acquire its summary with Medical Subject Headings (MeSH) terms and confirm its credibility through reference to the corresponding PubMed entry. In addition, because OReFiL automatically extracts URLs and updates the index, minimal time and effort is needed to maintain the system. We developed OReFiL, a search system for online life science resources, which is freely available. The system's distinctive features include the ability to return up-to-date query-relevant online resources introduced in peer-reviewed papers; the ability to search using free words, MeSH terms, or author names; easy verification of each hit following links to the corresponding PubMed entry or to papers citing the URL through the search systems of BioMed Central, Scirus, HighWire Press, or Google Scholar; and quick confirmation of the existence of an online resource web page.

  8. OReFiL: an online resource finder for life sciences

    PubMed Central

    Yamamoto, Yasunori; Takagi, Toshihisa

    2007-01-01

    Background Many online resources for the life sciences have been developed and introduced in peer-reviewed papers recently, ranging from databases and web applications to data-analysis software. Some have been introduced in special journal issues or websites with a search function, but others remain scattered throughout the Internet and in the published literature. The searchable resources on these sites are collected and maintained manually and are therefore of higher quality than automatically updated sites, but also require more time and effort. Description We developed an online resource search system called OReFiL to address these issues. We developed a crawler to gather all of the web pages whose URLs appear in MEDLINE abstracts and full-text papers on the BioMed Central open-access journals. The URLs were extracted using regular expressions and rules based on our heuristic knowledge. We then indexed the online resources to facilitate their retrieval and comparison by researchers. Because every online resource has at least one PubMed ID, we can easily acquire its summary with Medical Subject Headings (MeSH) terms and confirm its credibility through reference to the corresponding PubMed entry. In addition, because OReFiL automatically extracts URLs and updates the index, minimal time and effort is needed to maintain the system. Conclusion We developed OReFiL, a search system for online life science resources, which is freely available. The system's distinctive features include the ability to return up-to-date query-relevant online resources introduced in peer-reviewed papers; the ability to search using free words, MeSH terms, or author names; easy verification of each hit following links to the corresponding PubMed entry or to papers citing the URL through the search systems of BioMed Central, Scirus, HighWire Press, or Google Scholar; and quick confirmation of the existence of an online resource web page. PMID:17683589
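    OReFiL's actual extraction rules are not given in the abstract; as a minimal illustration of regex-based URL extraction from abstract text, with a simple trim of trailing sentence punctuation:

    ```python
    # Sketch: pull candidate resource URLs out of free text. The pattern
    # and trimming rule are simplified stand-ins for OReFiL's heuristics.
    import re

    URL_RE = re.compile(r'''https?://[^\s<>"')\]]+''')

    def extract_urls(text):
        """Return URLs found in text, stripped of trailing punctuation."""
        return [m.group(0).rstrip(".,;:") for m in URL_RE.finditer(text)]

    sample = ("The tool is available at http://example.org/orefil. "
              "A mirror is at https://mirror.example.net/db (updated daily).")
    urls = extract_urls(sample)
    ```

    Real-world extraction needs more care (URLs split across line breaks, balanced parentheses, DOIs), which is why the authors describe rules based on heuristic knowledge rather than a single pattern.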

  9. The Live Access Server Scientific Product Generation Through Workflow Orchestration

    NASA Astrophysics Data System (ADS)

    Hankin, S.; Calahan, J.; Li, J.; Manke, A.; O'Brien, K.; Schweitzer, R.

    2006-12-01

    The Live Access Server (LAS) is a well-established Web-application for display and analysis of geo-science data sets. The software, which can be downloaded and installed by anyone, gives data providers an easy way to establish services for their on-line data holdings, so their users can make plots; create and download data sub-sets; compare (difference) fields; and perform simple analyses. Now at version 7.0, LAS has been in operation since 1994. The current "Armstrong" release of LAS V7 consists of three components in a tiered architecture: user interface, workflow orchestration and Web Services. The LAS user interface (UI) communicates with the LAS Product Server via an XML protocol embedded in an HTTP "get" URL. Libraries (APIs) have been developed in Java, JavaScript and perl that can readily generate this URL. As a result of this flexibility it is common to find LAS user interfaces of radically different character, tailored to the nature of specific datasets or the mindset of specific users. When a request is received by the LAS Product Server (LPS -- the workflow orchestration component), business logic converts this request into a series of Web Service requests invoked via SOAP. These "back-end" Web services perform data access and generate products (visualizations, data subsets, analyses, etc.). LPS then packages these outputs into final products (typically HTML pages) via Jakarta Velocity templates for delivery to the end user. "Fine grained" data access is performed by back-end services that may utilize JDBC for data base access; the OPeNDAP "DAPPER" protocol; or (in principle) the OGC WFS protocol. Back-end visualization services are commonly legacy science applications wrapped in Java or Python (or perl) classes and deployed as Web Services accessible via SOAP. Ferret is the default visualization application used by LAS, though other applications such as Matlab, CDAT, and GrADS can also be used.
Other back-end services may include generation of Google Earth layers using KML; generation of maps via WMS or ArcIMS protocols; and data manipulation with Unix utilities.
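    The UI-to-LPS handoff described above embeds an XML request in an HTTP GET URL. A sketch of that encoding step; the `<lasRequest>` element below is a made-up stand-in, as the real LAS request schema is defined by the LAS distribution:

    ```python
    # Sketch: URL-encode an XML request document into a GET query
    # parameter, the pattern the LAS UI uses to talk to the Product Server.
    from urllib.parse import quote

    def las_request_url(server, xml_request):
        """Embed an XML request in a GET URL as the 'xml' parameter."""
        return f"{server}?xml={quote(xml_request)}"

    # Hypothetical request body; the real schema is LAS-specific.
    xml = ('<lasRequest href="file:las.xml">'
           '<link match="/lasdata/operations/shade"/></lasRequest>')
    url = las_request_url("http://las.example.org/ProductServer.do", xml)
    ```

    Because the whole request is a plain URL, any client able to build a string (Java, JavaScript, or perl, as the abstract notes) can drive the Product Server.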

  10. DPS Planetary Science Graduate Programs Listing: A Resource for Students and Advisors

    NASA Astrophysics Data System (ADS)

    Klassen, David R.; Roman, Anthony; Meinke, Bonnie

    2015-11-01

    We began a web page on the DPS Education site in 2013 listing all the graduate programs we could find that can lead to a PhD with a planetary science focus. Since then the static page has evolved into a database-driven, filtered-search site. It is intended to be a useful resource for both undergraduate students and undergraduate advisers, allowing them to find and compare programs across a basic set of search criteria. From the filtered list users can click on links to get a "quick look" at the database information and follow links to the program main site. The reason for such a list is that planetary science is a heading that covers an extremely diverse set of disciplines. The usual case is that planetary scientists are housed in a discipline-placed department, so finding them is typically not easy: undergraduates cannot look for a Planetary Science department, but must (somehow) know to search for them in all their possible places. This can overwhelm even a determined undergraduate student, and many advisers! We present here the updated site and a walk-through of the basic features. In addition, we ask for community feedback on additional features to make the system more usable. Finally, we call upon those mentoring and advising undergraduates to use this resource, and program admission chairs to continue to review their entry and provide us with the most up-to-date information. The URL for our site is http://dps.aas.org/education/graduate-schools.

  11. What’s in a URL? Genre Classification from URLs

    DTIC Science & Technology

    2012-01-01

    webpages with access to the content of a document and feature extraction from URLs alone. Feature Extraction from Webpages Stylistic and structural...2010). Character n-grams (sequence of n characters) are attractive because of their simplicity and because they encapsulate both lexical and stylistic ...report might be stylistic . Feature Extraction from URLs The syntactic characteristics of URLs have been fairly sta- ble over the years. URL terms are
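    The snippet above highlights character n-grams as URL-only features; a minimal sketch of extracting bag-of-n-gram counts from a URL string, the kind of lexical/stylistic feature a genre classifier can use without fetching the page:

    ```python
    # Sketch: overlapping character n-grams and their counts for a URL.
    def char_ngrams(url, n):
        """All overlapping character n-grams of a string."""
        return [url[i:i + n] for i in range(len(url) - n + 1)]

    def ngram_features(url, n=3):
        """Bag-of-n-grams counts over the lowercased URL."""
        counts = {}
        for g in char_ngrams(url.lower(), n):
            counts[g] = counts.get(g, 0) + 1
        return counts

    feats = ngram_features("http://example.org/report.pdf")
    ```

    Such count vectors, fed to a standard classifier, capture both lexical cues (e.g. "pdf", "report") and stylistic ones (delimiters, digits) from the URL alone.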

  12. RefPrimeCouch—a reference gene primer CouchApp

    PubMed Central

    Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus

    2013-01-01

    To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html PMID:24368831

  13. RefPrimeCouch--a reference gene primer CouchApp.

    PubMed

    Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus

    2013-01-01

    To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html.
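    A CouchApp like RefPrimeCouch is read through CouchDB's HTTP view API. A sketch of constructing such a view query URL; the design-document and view names below are hypothetical, while the `/_design/{ddoc}/_view/{view}` path and JSON-encoded `key` parameter follow CouchDB convention:

    ```python
    # Sketch: build a CouchDB map/reduce view query URL. Keys are JSON
    # values, so they are JSON-encoded before URL-escaping.
    import json
    from urllib.parse import quote

    def couch_view_url(server, db, ddoc, view, key=None):
        url = f"{server}/{db}/_design/{ddoc}/_view/{view}"
        if key is not None:
            url += "?key=" + quote(json.dumps(key))
        return url

    # Hypothetical view name; the database layout is the app's own.
    url = couch_view_url("http://localhost:5984", "rpc", "rpc",
                         "by_gene", key="ACTB")
    ```

    A GET on such a URL returns JSON rows, which is what lets a one-page client-side application like RefPrimeCouch query the database directly from the browser.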

  14. Easing the Discovery of NASA and International Near-Real-Time Data Using the Global Change Master Directory

    NASA Technical Reports Server (NTRS)

    Olsen, Lola; Morahan, Michael; Aleman, Alicia; Cepero, Laurel; Stevens, Tyler; Ritz, Scott; Holland, Monica

    2011-01-01

    The Global Change Master Directory (GCMD) provides an extensive directory of descriptive and spatial information about data sets and data-related services, which are relevant to Earth science research. The directory's data discovery components include controlled keywords, free-text searches, and map/date searches. The GCMD portal for NASA's Land Atmosphere Near-real-time Capability for EOS (LANCE) data products leverages these discovery features by providing users a direct route to NASA's Near-Real-Time (NRT) collections. This portal offers direct access to collection entries by instrument name, informing users of the availability of data. After a relevant collection entry is found through the GCMD's search components, the "Get Data" URL within the entry directs the user to the desired data. http://gcmd.nasa.gov/r/p/gcmd_lance_nrt.

  15. Development of an advanced support system for site investigations

    NASA Astrophysics Data System (ADS)

    Mizuno, T.; Hama, K.; Iwatsuki, T.; Semba, T.

    2009-12-01

    JAEA has the responsibility for R&D to enhance the reliability of High Level Waste (HLW) disposal technology and to develop safety assessment methodology with associated databases; these should support both the implementer (NUMO) and the relevant regulatory organizations. With this responsibility, JAEA has initiated development of advanced technology in the field of Knowledge Engineering. Known as the Information Synthesis and Interpretation System (ISIS), it incorporates knowledge currently being obtained in the Underground Research Laboratory (URL) projects in Expert System (ES) modules for the Japanese HLW disposal program. This knowledge includes fundamental understanding of relevant geological environments, technical know-how for the application of complex investigation techniques, experience gained in earlier site work, etc. However, much of this knowledge is not documented, because it is treated as tacit knowledge, and without focused action soon it may be permanently lost. Therefore, a new approach is necessary to transfer the knowledge obtained in these URL projects to support the site characterization and subsequent safety assessment of potential repository sites by NUMO and the formulation of guidelines by regulatory organizations. In this paper, we introduce the ES for selecting tracers for borehole drilling. An ES is a system built by applying information technology to support the planning and conduct of investigations and the assessment of their results. Tracers are generally used during borehole drilling to monitor and quantitatively assess the degree of contamination of groundwater by drilling fluid. JAEA uses fluorescent dye as the tracer in drilling fluid. When a fluorescent dye is used for drilling, a suitable type and concentration must be selected.
The technical points to be considered are: 1) linearity of fluorescent spectrum intensity with variations in concentration; 2) pH dependence of fluorescent spectrum intensity; 3) stability of the fluorescent dye; 4) sorption/adsorption properties for the rock being investigated; 5) detection limit of the analyzer; and 6) comparison of the fluorescent spectrum with dissolved organics and with tracers used in other boreholes. In addition, costs and environmental impact are important factors to be considered. Thus, significant knowledge is needed in selecting the tracer for actual investigations. Fortunately, the ES for tracer selection already contains much of the knowledge needed. For example, the chemical data set for a suite of fluorescent dyes is in the ES, along with guidelines for their use. Therefore, this ES can support the use of fluorescent dye as a tracer in actual investigations, even if the investigating scientists have little or no experience with it. In conclusion, ES modules are being and will continue to be built as a support system enabling future researchers to perform optimized site investigations in a user-friendly manner. Eventually, ES modules covering the full range of site investigation methods will be developed.
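    As a toy illustration of how such an ES might encode the selection rules above, here is a rule-based screen in Python; the dye records and thresholds are invented for illustration, not JAEA's measured data:

    ```python
    # Toy rule-based screen in the spirit of the tracer-selection ES: each
    # candidate dye is checked against a few of the technical points listed
    # above (linearity, pH range, detection limit). Hypothetical data.
    def acceptable_tracers(dyes, sample_ph, required_detection_ug_per_l):
        ok = []
        for dye in dyes:
            if not dye["linear_response"]:
                continue                   # point 1: concentration linearity
            if not (dye["ph_min"] <= sample_ph <= dye["ph_max"]):
                continue                   # point 2: pH dependence
            if dye["detection_limit_ug_per_l"] > required_detection_ug_per_l:
                continue                   # point 5: analyzer detection limit
            ok.append(dye["name"])
        return ok

    dyes = [
        {"name": "dye-A", "linear_response": True, "ph_min": 6.0,
         "ph_max": 9.0, "detection_limit_ug_per_l": 0.1},
        {"name": "dye-B", "linear_response": True, "ph_min": 4.0,
         "ph_max": 6.5, "detection_limit_ug_per_l": 0.5},
        {"name": "dye-C", "linear_response": False, "ph_min": 5.0,
         "ph_max": 10.0, "detection_limit_ug_per_l": 0.1},
    ]
    choices = acceptable_tracers(dyes, sample_ph=8.2,
                                 required_detection_ug_per_l=1.0)
    ```

    A production ES would of course weigh all six points plus cost and environmental impact, but the value is the same: encoding tacit selection expertise as explicit, reusable rules.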

  16. novPTMenzy: a database for enzymes involved in novel post-translational modifications

    PubMed Central

    Khater, Shradha; Mohanty, Debasisa

    2015-01-01

    With the recent discoveries of novel post-translational modifications (PTMs) that play important roles in signaling and biosynthetic pathways, identification of the enzymes catalyzing such PTMs by genome mining has become an area of major interest. Unlike for well-known PTMs such as phosphorylation, glycosylation and SUMOylation, no bioinformatics resources are available for enzymes associated with novel and unusual PTMs. Therefore, we have developed the novPTMenzy database, which catalogs information on the sequence, structure, active site and genomic neighborhood of experimentally characterized enzymes involved in five novel PTMs, namely AMPylation, Eliminylation, Sulfation, Hydroxylation and Deamidation. Based on a comprehensive analysis of the sequence and structural features of these known PTM catalyzing enzymes, we have created Hidden Markov Model profiles for the identification of similar PTM catalyzing enzymatic domains in genomic sequences. We have also created predictive rules for grouping them into functional subfamilies and deciphering their mechanistic details by structure-based analysis of their active site pockets. These analytical modules have been made available as user-friendly search interfaces of the novPTMenzy database. It also has a specialized analysis interface for some PTMs, such as AMPylation and Eliminylation. The novPTMenzy database is a unique resource that can aid in the discovery of unusual PTM catalyzing enzymes in newly sequenced genomes. Database URL: http://www.nii.ac.in/novptmenzy.html PMID:25931459

  18. Hosting an `Ask the Astronomer' Site on the Internet

    NASA Astrophysics Data System (ADS)

    Odenwald, S. F.

    1996-12-01

    Since 1995, the World Wide Web has explosively evolved into a significant medium for dispensing astronomical information to the general public. In addition to the numerous image archives that have proliferated, an increasing number of sites invite visitors to pose questions about astronomy and receive answers provided by professional astronomers. In this paper, I describe the operation of an Ask the Astronomer site that was opened on the WWW in August 1995 as part of an astronomy education resource area called the "Astronomy Cafe" (URL=http://www2.ari.net/home/odenwald/cafe.html). The Astronomy Cafe includes a number of documents describing: a career in astronomy; how research papers are written; essays about cosmology, hyperspace and infrared astronomy; and the results from a 100-question, just for fun, personality test which distinguishes astronomers from non-astronomers. The Ask the Astronomer site is operated by a single astronomer through private donations and is now approaching its 500th day of operation. It contains over 2000 questions and answers with a growth rate of 5 - 10 questions per day. It has attracted 70,000 visitors who are responsible for nearly 1 million 'hits' during the site's lifetime. The monthly statistics provide a unique survey of the kinds of individuals and organizations who visit Ask the Astronomer-type web sites; moreover, the accumulated questions provide a diagnostic X-ray into the public mind in the area of astronomy. I will present an analysis of the user demographics and the types of questions that appear to be the most frequently asked. A paper copy of the complete index of these questions will be available for inspection.

  19. Job Opportunities Glitter for Librarians Who Surf the Net.

    ERIC Educational Resources Information Center

    Azar, A. Paula

    1996-01-01

    The Internet gives library professionals access to job opportunities that are not readily accessible in print. Employers can advertise at minimal cost and reach a broad, technically adept audience. This article lists Internet job resource sites and listservs for library and information professionals, providing Uniform Resource Locators (URLs),…

  20. PROVIDING SOLUTIONS FOR A BETTER TOMORROW: REDUCING THE RISKS ASSOCIATED WITH LEAD IN SOIL; URL:

    EPA Science Inventory

    This brief publication describes, in general language, the health risks associated with exposure to soil and dust contaminated with lead as well as an innovative method to immobilize lead contaminants in the soil (and thereby reduce the risk of exposure) at Superfund sites. Also ...

  1. GREAT: a web portal for Genome Regulatory Architecture Tools.

    PubMed

    Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François

    2016-07-08

    GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analyses of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system that runs a modern browser. GREAT is based on the analysis of genome layout (defined as the respective positioning of co-functional genes) and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine-readable tables for downstream analysis. The GREAT portal can be reached at the following URL: https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. IFIS Model-Plus: A Web-Based GUI for Visualization, Comparison and Evaluation of Distributed Flood Forecasts and Hindcasts

    NASA Astrophysics Data System (ADS)

    Krajewski, W. F.; Della Libera Zanchetta, A.; Mantilla, R.; Demir, I.

    2017-12-01

This work explores the use of hydroinformatics tools to provide a user-friendly and accessible interface for executing and assessing the output of real-time flood forecasts using distributed hydrological models. The main result is the implementation of a web system that uses an Iowa Flood Information System (IFIS)-based environment for graphical displays of rainfall-runoff simulation results for both real-time and past storm events. It communicates with the ASYNCH ODE solver to perform large-scale distributed hydrological modeling based on segmentation of the terrain into hillslope-link hydrologic units. The cyber-platform also allows hindcasting of model performance by testing multiple model configurations and assumptions about vertical flows in the soils. The scope of the currently implemented system is the entire set of contributing watersheds for the territory of the state of Iowa. The interface provides resources for visualization of animated maps of different water-related modeled states of the environment, including flood-wave propagation with classification of flood magnitude, runoff generation, surface soil moisture and total water column in the soil. Additional tools are available for comparing different model configurations and for evaluating the model against observed variables at monitored sites. The user-friendly interface is published on the web at the URL http://ifis.iowafloodcenter.org/ifis/sc/modelplus/.

  3. Autoplot: a Browser for Science Data on the Web

    NASA Astrophysics Data System (ADS)

    Faden, J.; Weigel, R. S.; West, E. E.; Merka, J.

    2008-12-01

Autoplot (www.autoplot.org) is software for plotting data from many different sources and in many different file formats. Data from CDF, CEF, FITS, NetCDF, and OpenDAP can be plotted, along with many other sources such as ASCII tables and Excel spreadsheets. This is done by adapting these various data formats and APIs into a common data model that borrows from the netCDF and CDF data models. Autoplot uses a web browser metaphor to simplify use. The user specifies a parameter URL, for example a CDF file accessible via http with a parameter name appended, and the file resource is downloaded and the parameter is rendered in a scientifically meaningful way. When data span multiple files, the user can use a file name template in the URL to aggregate (combine) a set of remote files. So the problem of aggregating data across file boundaries is handled on the client side, allowing simple web servers to be used. The das2 graphics library provides rich controls for exploring the data. Scripting is supported through Python, providing not only programmatic control but also the ability to calculate new parameters in a language that will look familiar to IDL and Matlab users. Autoplot is Java-based software and will run on most computers without a burdensome installation process. It can also be used as an applet or as a servlet that serves static images. Autoplot was developed as part of the Virtual Radiation Belt Observatory (ViRBO) project, and is also being used for the Virtual Magnetospheric Observatory (VMO). It is expected that this flexible, general-purpose plotting tool will be useful for allowing a data provider to add instant visualization capabilities to a directory of files or for general use in the Virtual Observatory environment.
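The client-side aggregation idea can be sketched in Python. The `$Y$m$d` placeholder syntax and the example URL below are illustrative assumptions, not Autoplot's documented template grammar:

```python
from datetime import date, timedelta

def expand_template(template, start, end):
    """Expand a file-name template over a date range, one name per day.

    The $Y/$m/$d placeholders are illustrative; the actual aggregation
    syntax used by Autoplot may differ.
    """
    names, day = [], start
    while day <= end:
        names.append(template.replace("$Y", f"{day.year:04d}")
                             .replace("$m", f"{day.month:02d}")
                             .replace("$d", f"{day.day:02d}"))
        day += timedelta(days=1)
    return names

# Hypothetical daily CDF files on a plain web server:
files = expand_template("http://example.org/data_$Y$m$d.cdf",
                        date(2008, 1, 30), date(2008, 2, 1))
```

A client that expands templates this way can stitch multi-day plots together from a plain directory of files, which is why no special server-side support is needed.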

  4. IYA2009 in Second Life

    NASA Astrophysics Data System (ADS)

    Gauthier, Adrienne J.

    2009-05-01

    Highlights from the first 6 months of the IYA2009 island in the multi-user 3D virtual world called Second Life ® will be shown. Future plans for exhibits and events will be discussed. You can find the 'Astronomy 2009' island by visiting this URL: http://secondastronomy.org/Astronomy2009/ which will trigger a teleport to our space. Keep up with our project at http://secondastronomy.org. Special thanks go to our primary sponsors: 400 Years of the Telescope/Interstellar Studios and The University of Arizona Department of Astronomy.

  5. SoyFN: a knowledge database of soybean functional networks.

    PubMed

    Xu, Yungang; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang

    2014-01-01

    Many databases for soybean genomic analysis have been built and made publicly available, but few of them contain knowledge specifically targeting the omics-level gene-gene, gene-microRNA (miRNA) and miRNA-miRNA interactions. Here, we present SoyFN, a knowledge database of soybean functional gene networks and miRNA functional networks. SoyFN provides user-friendly interfaces to retrieve, visualize, analyze and download the functional networks of soybean genes and miRNAs. In addition, it incorporates much information about KEGG pathways, gene ontology annotations and 3'-UTR sequences as well as many useful tools including SoySearch, ID mapping, Genome Browser, eFP Browser and promoter motif scan. SoyFN is a schema-free database that can be accessed as a Web service from any modern programming language using a simple Hypertext Transfer Protocol call. The Web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. We anticipate that this database will be useful for members of research communities both in soybean experimental science and bioinformatics. Database URL: http://nclab.hit.edu.cn/SoyFN.
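Since SoyFN is described as answering plain HTTP calls, a client in any language only needs to compose a query URL and fetch it. The resource path and parameter names below are illustrative assumptions; only the base URL comes from the abstract:

```python
from urllib.parse import urlencode

BASE = "http://nclab.hit.edu.cn/SoyFN"  # database URL given in the abstract

def build_query_url(resource, **params):
    """Compose a SoyFN-style HTTP query URL.

    The resource path and parameter names are hypothetical; the abstract
    only states that SoyFN answers simple HTTP calls.
    """
    query = urlencode(sorted(params.items()))
    return f"{BASE}/{resource}?{query}" if query else f"{BASE}/{resource}"

url = build_query_url("genes", id="Glyma01g01000", format="json")
# The URL could then be fetched with urllib.request.urlopen(url).
```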

  6. Document Clustering Approach for Meta Search Engine

    NASA Astrophysics Data System (ADS)

    Kumar, Naresh, Dr.

    2017-08-01

The size of the WWW is growing exponentially with every change in technology. This results in a huge amount of information with long lists of URLs. It is not possible to visit each page manually. If page-ranking algorithms are used properly, the user's search space can be restricted to some pages of the searched results. But the available literature shows that no single search system can provide quality results across all domains. This paper addresses the problem by introducing a new meta search engine that determines the relevancy of a query to each web page and clusters the results accordingly. The proposed approach reduces user effort and improves both the quality of the results and the performance of the meta search engine.

  7. Data Center Energy Practitioner (DCEP) Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Traber, Kim; Salim, Munther; Sartor, Dale A.

    2016-02-02

The main objective of the DCEP program is to raise the standards of those involved in energy assessments of data centers to accelerate energy savings. The program is driven by the fact that significant knowledge, training, and skills are required to perform accurate energy assessments. The program will raise the confidence level in energy assessments of data centers. Those who pass the exam are recognized as Data Center Energy Practitioners (DCEPs) and issued a certificate. Hardware req.: PC, MAC; Software req.: Windows; Related/auxiliary software: MS Office; Type of files: executable modules, user guide; Documentation: e-user manual; Documentation: http://www.1.eere.energy.gov/industry/datacenters/ 12/10/15-New Documentation URL: https://datacenters.lbl.gov/dcep

  8. 78 FR 61443 - Small Business Size Standards: Waiver of the Nonmanufacturer Rule

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-03

    ... to rescind the Class Waiver of the Nonmanufacturer Rule (NMR) for Aerospace Ball and Roller Bearings... accessed by accessing the following URL: http://www.sba.gov/sites/default/files/files/NMR_WAIVED_3110... roller bearings manufactured by small businesses, unless an Individual Waiver of the NMR is granted by...

  9. Modeling Computer Communication Networks in a Realistic 3D Environment

    DTIC Science & Technology

    2010-03-01

(No abstract available; the indexed excerpt contains only front-matter fragments: a list of abbreviations, a table-of-contents entry for "Comparison of visualization tools," and a reference to National Defense and the Canadian Forces, "Joint Fires Support," URL http://www.cfd-cdf.forces.gc.ca/sites/...)

  10. GeoSearcher: Location-Based Ranking of Search Engine Results.

    ERIC Educational Resources Information Center

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  11. The life and death of URLs in five biomedical informatics journals.

    PubMed

    Carnevale, Randy J; Aronsky, Dominik

    2007-04-01

To determine the decay rate of Uniform Resource Locators (URLs) in the reference sections of biomedical informatics journals, URL references were collected from printed journal articles of the first and middle issues of 1999-2004 and from electronically available in-press articles in January 2005. We limited this set to five biomedical informatics journals: Artificial Intelligence in Medicine, International Journal of Medical Informatics, Journal of the American Medical Informatics Association: JAMIA, Methods of Information in Medicine, and Journal of Biomedical Informatics. During a 1-month period, URL access attempts were performed eight times a day at regular intervals. Of the 19,108 references extracted from 606 printed and 86 in-press articles, 1112 (5.8%) contained a URL. Of the 1049 unique URLs, 726 (69.2%) were alive, 230 (21.9%) were dead, and 93 (8.9%) were comatose. The in-press articles included 212 URLs, of which 169 (79.7%) were alive, 21 (9.9%) were dead, and 22 (10.4%) were comatose. The average annual decay, or link rot, rate was 5.4%. The URL decay rate in biomedical informatics journals is high. A commonly accepted strategy for the permanent archival of digital information referenced in scholarly publications is urgently needed.
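The alive/dead/comatose scheme can be sketched from the repeated-probe design described above. The all-or-none thresholds are an assumption, since the abstract defines "comatose" only implicitly as intermittent reachability:

```python
def classify(probe_results):
    """Classify a URL from repeated access attempts.

    Following the study's scheme: 'alive' if every probe succeeded,
    'dead' if none did, and 'comatose' if reachability was intermittent.
    The exact thresholds the authors used are not given in the abstract;
    all-or-none is an assumption for illustration.
    """
    hits = sum(bool(r) for r in probe_results)
    if hits == len(probe_results) and hits > 0:
        return "alive"
    if hits == 0:
        return "dead"
    return "comatose"
```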

  12. Statistical Models for Predicting Threat Detection From Human Behavior.

    PubMed

    Kelley, Timothy; Amon, Mary J; Bertenthal, Bennett I

    2018-01-01

    Users must regularly distinguish between secure and insecure cyber platforms in order to preserve their privacy and safety. Mouse tracking is an accessible, high-resolution measure that can be leveraged to understand the dynamics of perception, categorization, and decision-making in threat detection. Researchers have begun to utilize measures like mouse tracking in cyber security research, including in the study of risky online behavior. However, it remains an empirical question to what extent real-time information about user behavior is predictive of user outcomes and demonstrates added value compared to traditional self-report questionnaires. Participants navigated through six simulated websites, which resembled either secure "non-spoof" or insecure "spoof" versions of popular websites. Websites also varied in terms of authentication level (i.e., extended validation, standard validation, or partial encryption). Spoof websites had modified Uniform Resource Locator (URL) and authentication level. Participants chose to "login" to or "back" out of each website based on perceived website security. Mouse tracking information was recorded throughout the task, along with task performance. After completing the website identification task, participants completed a questionnaire assessing their security knowledge and degree of familiarity with the websites simulated during the experiment. Despite being primed to the possibility of website phishing attacks, participants generally showed a bias for logging in to websites versus backing out of potentially dangerous sites. Along these lines, participant ability to identify spoof websites was around the level of chance. Hierarchical Bayesian logistic models were used to compare the accuracy of two-factor (i.e., website security and encryption level), survey-based (i.e., security knowledge and website familiarity), and real-time measures (i.e., mouse tracking) in predicting risky online behavior during phishing attacks. 
Participant accuracy in identifying spoof and non-spoof websites was best captured using a model that included real-time indicators of decision-making behavior, as compared to two-factor and survey-based models. Findings validate three widely applicable measures of user behavior derived from mouse tracking recordings, which can be utilized in cyber security and user intervention research. Survey data alone are not as strong at predicting risky Internet behavior as models that incorporate real-time measures of user behavior, such as mouse tracking.
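A minimal sketch of the logistic-prediction setup follows; the feature names and weights are hand-picked for illustration, not the study's hierarchical Bayesian model or its fitted coefficients:

```python
import math

def sigmoid(z):
    """Logistic link: maps a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def p_correct(features, weights, intercept=0.0):
    """Probability of correctly classifying a site as spoof/non-spoof
    from a linear combination of predictors. Feature names and weight
    values here are illustrative assumptions."""
    z = intercept + sum(weights[k] * v for k, v in features.items())
    return sigmoid(z)

# Hypothetical mix of a survey-based and a real-time (mouse) predictor:
p = p_correct({"security_knowledge": 0.2, "mouse_max_deviation": 1.5},
              {"security_knowledge": 0.4, "mouse_max_deviation": 0.9})
```

Comparing models then amounts to asking which predictor set (two-factor, survey-based, or real-time) yields the best out-of-sample fit, which is what the abstract reports in favor of the real-time measures.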

  13. Bundle Data Approach at GES DISC Targeting Natural Hazards

    NASA Technical Reports Server (NTRS)

    Shie, Chung-Lin; Shen, Suhung; Kempler, Steven J.

    2015-01-01

    Severe natural phenomena such as hurricane, volcano, blizzard, flood and drought have the potential to cause immeasurable property damages, great socioeconomic impact, and tragic loss of human life. From searching to assessing the Big, i.e., massive and heterogeneous scientific data (particularly, satellite and model products) in order to investigate those natural hazards, it has, however, become a daunting task for Earth scientists and applications researchers, especially during recent decades. The NASA Goddard Earth Sciences Data and Information Service Center (GES DISC) has served Big Earth science data, and the pertinent valuable information and services to the aforementioned users of diverse communities for years. In order to help and guide our users to online readily (i.e., with a minimum effort) acquire their requested data from our enormous resource at GES DISC for studying their targeted hazard event, we have thus initiated a Bundle Data approach in 2014, first targeting the hurricane event topic. We have recently worked on new topics such as volcano and blizzard. The bundle data of a specific hazard event is basically a sophisticated integrated data package consisting of a series of proper datasets containing a group of relevant (knowledge--based) data variables readily accessible to users via a system-prearranged table linking those data variables to the proper datasets (URLs). This online approach has been developed by utilizing a few existing data services such as Mirador as search engine; Giovanni for visualization; and OPeNDAP for data access, etc. The online Data Cookbook site at GES DISC is the current host for the bundle data. We are now also planning on developing an Automated Virtual Collection Framework that shall eventually accommodate the bundle data, as well as further improve our management in Big Data.

  14. "Bundle Data" Approach at GES DISC Targeting Natural Hazards

    NASA Astrophysics Data System (ADS)

    Shie, C. L.; Shen, S.; Kempler, S. J.

    2015-12-01

    Severe natural phenomena such as hurricane, volcano, blizzard, flood and drought have the potential to cause immeasurable property damages, great socioeconomic impact, and tragic loss of human life. From searching to assessing the "Big", i.e., massive and heterogeneous scientific data (particularly, satellite and model products) in order to investigate those natural hazards, it has, however, become a daunting task for Earth scientists and applications researchers, especially during recent decades. The NASA Goddard Earth Sciences Data and Information Service Center (GES DISC) has served "Big" Earth science data, and the pertinent valuable information and services to the aforementioned users of diverse communities for years. In order to help and guide our users to online readily (i.e., with a minimum effort) acquire their requested data from our enormous resource at GES DISC for studying their targeted hazard/event, we have thus initiated a "Bundle Data" approach in 2014, first targeting the hurricane event/topic. We have recently worked on new topics such as volcano and blizzard. The "bundle data" of a specific hazard/event is basically a sophisticated integrated data package consisting of a series of proper datasets containing a group of relevant ("knowledge-based") data variables readily accessible to users via a system-prearranged table linking those data variables to the proper datasets (URLs). This online approach has been developed by utilizing a few existing data services such as Mirador as search engine; Giovanni for visualization; and OPeNDAP for data access, etc. The online "Data Cookbook" site at GES DISC is the current host for the "bundle data". We are now also planning on developing an "Automated Virtual Collection Framework" that shall eventually accommodate the "bundle data", as well as further improve our management in "Big Data".

  15. Dissemination of radiological information using enhanced podcasts.

    PubMed

    Thapa, Mahesh M; Richardson, Michael L

    2010-03-01

Podcasts and vodcasts (video podcasts) have become popular means of sharing educational information via the Internet. In this article, we introduce another method, the enhanced podcast, which allows images to be displayed along with the audio. Bookmarks and URLs may also be embedded within the presentation. This article describes a step-by-step tutorial for recording and distributing an enhanced podcast using the Macintosh operating system. Enhanced podcasts can also be created on the Windows platform using other software. An example of an enhanced podcast and a demonstration video of all the steps described in this article are available online at web.mac.com/mthapa. An enhanced podcast is an effective method of delivering radiological information via the Internet. Viewing images while simultaneously listening to audio content gives the user a richer experience than a simple podcast. Incorporation of bookmarks and URLs within the presentation makes learning more efficient and interactive. The use of still images rather than video clips yields a much smaller file size for an enhanced podcast compared to a vodcast, allowing quicker upload and download times.

  16. Usage based indicators to assess the impact of scholarly works: architecture and method

    DOEpatents

    Bollen, Johan [Santa Fe, NM; Van De Sompel, Herbert [Santa Fe, NM

    2012-03-13

    Although recording of usage data is common in scholarly information services, its exploitation for the creation of value-added services remains limited due to concerns regarding, among others, user privacy, data validity, and the lack of accepted standards for the representation, sharing and aggregation of usage data. A technical, standards-based architecture for sharing usage information is presented. In this architecture, OpenURL-compliant linking servers aggregate usage information of a specific user community as it navigates the distributed information environment that it has access to. This usage information is made OAI-PMH harvestable so that usage information exposed by many linking servers can be aggregated to facilitate the creation of value-added services with a reach beyond that of a single community or a single information service.
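The harvesting interface described is OAI-PMH, whose requests are a base URL plus a `verb` and keyword arguments. The base URL below is a placeholder, not a real linking-server endpoint:

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **args):
    """Build an OAI-PMH request URL.

    OAI-PMH is the real protocol named in the abstract; the endpoint
    used in the example is hypothetical.
    """
    return base_url + "?" + urlencode({"verb": verb, **args})

# An aggregator would issue ListRecords (following resumption tokens for
# large result sets) against each linking server exposing usage data:
url = oai_request("http://linkserver.example.org/oai",
                  "ListRecords", metadataPrefix="oai_dc")
```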

  17. BioSearch: a semantic search engine for Bio2RDF

    PubMed Central

    Qiu, Honglei; Huang, Jiacheng

    2017-01-01

Biomedical data are growing at an incredible pace and require substantial expertise to organize in a manner that makes them easily findable, accessible, interoperable and reusable. Massive effort has been devoted to using Semantic Web standards and technologies to create a network of Linked Data for the life sciences, among others. However, while these data are accessible through programmatic means, effective user interfaces to SPARQL endpoints for non-experts are few and far between. Contributing to user frustration is that data are not necessarily described using common vocabularies, making it difficult to aggregate results, especially when they are distributed across multiple SPARQL endpoints. We propose BioSearch, a semantic search engine that uses ontologies to enhance federated query construction and organize search results. BioSearch also features a simplified query interface that allows users to optionally filter their keywords according to classes, properties and datasets. User evaluation demonstrated that BioSearch is more effective and usable than two state-of-the-art search and browsing solutions. Database URL: http://ws.nju.edu.cn/biosearch/ PMID:29220451

  18. Observations from the GOES Space Environment Monitor and Solar X-ray Imager are now available in a whole new way!

    NASA Astrophysics Data System (ADS)

    Wilkinson, D. C.

    2012-12-01

    NOAA's Geosynchronous Operational Environmental Satellites (GOES) have been observing the environment in near-earth-space for over 37 years. Those data are down-linked and processed by the Space Weather Prediction Center (SWPC) and form the cornerstone of their alert and forecast services. At the close of each UT day these data are ingested by the National Geophysical Data Center (NGDC) where they are merged into the national archive and made available to the user community in a uniform manner. In 2012 NGDC unveiled a RESTful web service for accessing these data. What does this mean? Users can now build a web-like URL using simple predefined constructs that allows their browser or custom software to directly access the relational archives and bundle the requested data into a variety of popular formats. The user can select precisely the data they need and the results are delivered immediately. NGDC understands that many users are perfectly happy retrieving data via pre-generated files and will continue to provide internally documented NetCDF and CSV files far into the future.
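The RESTful pattern described, predefined path constructs plus query parameters selecting the data and output format, can be sketched as follows. The endpoint, path layout, and parameter names are assumptions for illustration, not NGDC's actual API:

```python
from urllib.parse import urlencode

BASE = "https://ngdc.example.gov/goes"  # placeholder, not the real endpoint

def goes_data_url(dataset, start, end, fmt="csv"):
    """Compose a REST-style data request in the spirit the abstract
    describes: the path selects the dataset, query parameters select
    the time range and output format. All names here are hypothetical.
    """
    return f"{BASE}/{dataset}/data?" + urlencode(
        [("start", start), ("end", end), ("format", fmt)])

url = goes_data_url("xrs", "2012-01-01", "2012-01-02", fmt="json")
# A browser or script can fetch such a URL directly and receive the
# requested slice in the chosen format.
```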

  19. Observations from the GOES Space Environment Monitor and Solar X-ray Imager are now available in a whole new way!

    NASA Astrophysics Data System (ADS)

    Wilkinson, D. C.

    2013-12-01

    NOAA's Geosynchronous Operational Environmental Satellites (GOES) have been observing the environment in near-earth-space for over 37 years. Those data are down-linked and processed by the Space Weather Prediction Center (SWPC) and form the cornerstone of their alert and forecast services. At the close of each UT day these data are ingested by the National Geophysical Data Center (NGDC) where they are merged into the national archive and made available to the user community in a uniform manner. In 2012 NGDC unveiled a RESTful web service for accessing these data. What does this mean? Users can now build a web-like URL using simple predefined constructs that allows their browser or custom software to directly access the relational archives and bundle the requested data into a variety of popular formats. The user can select precisely the data they need and the results are delivered immediately. NGDC understands that many users are perfectly happy retrieving data via pre-generated files and will continue to provide internally documented NetCDF and CSV files far into the future.

  20. International Collaboration Activities on Engineered Barrier Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jove-Colon, Carlos F.

The Used Fuel Disposition Campaign (UFDC) within the DOE Fuel Cycle Technologies (FCT) program has been engaging in international collaborations between repository R&D programs for high-level waste (HLW) disposal to leverage gathered knowledge and laboratory/field data on near- and far-field processes from experiments at underground research laboratories (URLs). Heater test experiments at URLs provide a unique opportunity to study the thermal effects of heat-generating nuclear waste in subsurface repository environments. Various configurations of these experiments have been carried out at URLs according to the disposal design concepts of each hosting country's repository program. The FEBEX (Full-scale Engineered Barrier Experiment in Crystalline Host Rock) project is a large-scale heater test experiment originated by the Spanish radioactive waste management agency (Empresa Nacional de Residuos Radiactivos S.A., ENRESA) at the Grimsel Test Site (GTS) URL in Switzerland. The project was subsequently managed by CIEMAT. FEBEX-DP is a concerted effort of various international partners working on the evaluation of sensor data and the characterization of samples obtained during the course of this field test and its subsequent dismantling. The main purpose of these field-scale experiments is to evaluate the feasibility of creating an engineered barrier system (EBS) with a horizontal configuration according to the Spanish concept of deep geological disposal of high-level radioactive waste in crystalline rock. Another key aspect of this project is to improve knowledge of coupled processes, such as the thermal-hydro-mechanical (THM) and thermal-hydro-chemical (THC) processes operating in the near-field environment. The focus is on model development and the validation of predictions through implementation in computational tools that simulate coupled THM and THC processes.

  1. Quality of Internet information in pediatric otolaryngology: a comparison of three most referenced websites.

    PubMed

    Volsky, Peter G; Baldassari, Cristina M; Mushti, Sirisha; Derkay, Craig S

    2012-09-01

Patients commonly refer to Internet health-related information. To date, no quantitative comparison of the accuracy and readability of common diagnoses in pediatric otolaryngology exists. Our objectives were to (1) identify the three most frequently referenced Internet sources; (2) compare their content accuracy; (3) ascertain the user-friendliness of each site; and (4) inform practitioners and patients of the quality of available information. Twenty-four diagnoses in pediatric otolaryngology were entered in Google and the top five URLs for each were ranked. Articles were accessed for each topic in the three most frequently referenced sites. Standard rubrics were developed to include proprietary scores for content, errors and navigability, as well as validated metrics of readability. Wikipedia, eMedicine, and NLM/NIH MedlinePlus were the most referenced sources. For content accuracy, eMedicine scored highest (84%; p<0.05) over MedlinePlus (49%) and Wikipedia (46%). The highest incidence of errors and omissions per article was found in Wikipedia (0.98±0.19), twice that of eMedicine (0.42±0.19; p<0.05). Error rates were similar between MedlinePlus and both eMedicine and Wikipedia. On ratings of user interface, which incorporated Flesch-Kincaid Reading Level and Flesch Reading Ease, MedlinePlus was the most user-friendly (4.3±0.29). This was nearly twice the score of eMedicine (2.4±0.26) and slightly greater than that of Wikipedia (3.7±0.3). All differences were significant (p<0.05). There were 7 topics for which articles were not available on MedlinePlus. Knowledge of the quality of available information on the Internet improves pediatric otolaryngologists' ability to counsel parents. The top web search results for pediatric otolaryngology diagnoses are Wikipedia, MedlinePlus, and eMedicine. Online information varies in quality, with 46-84% concordance with current textbooks. eMedicine has the most accurate, comprehensive content and the fewest errors, but is more challenging to read and navigate.
Both Wikipedia and MedlinePlus have lower content accuracy and more errors; however, MedlinePlus is the simplest of all to read, at a 9th-grade level. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Scientists as Communicators: Inclusion of a Science/Education Liaison on Research Expeditions

    NASA Astrophysics Data System (ADS)

    Sautter, L. R.

    2004-12-01

Communication of research and scientific results to an audience outside one's field poses a challenge to many scientists. Many research scientists have a natural ability to address the challenge, while others may choose to seek assistance. Research cruise PIs may wish to consider including a Science/Education Liaison (SEL) on future grants. The SEL is a marine scientist whose job before, during and after the cruise is to work with the shipboard scientists to document the science conducted. The SEL's role is three-fold: (1) to communicate shipboard science activities near-real-time to the public via the web; (2) to develop a variety of web-based resources based on the scientific operations; and (3) to assist educators with the integration of these resources into classroom curricula. The first role involves writing at sea and relaying from ship to shore (via email) a series of Daily Logs. NOAA Ocean Exploration (OE) has mastered the use of web-posted Daily Logs for their major expeditions (see their OceanExplorer website), introducing millions of users to deep sea exploration. Project Oceanica uses the OE daily log model to document research expeditions. In addition to writing daily logs and participating on OE expeditions, Oceanica's SEL also documents each cruise's scientific operations and preliminary findings using video and photos, so that web-based resources (photo galleries, video galleries, and PhotoDocumentaries) can be developed during and following the cruise and posted on the expedition's home page within the Oceanica web site (see URL). We have created templates for constructing these science resources which allow the shipboard scientists to assist with web resource development. Bringing users to the site is achieved through email communications to a growing list of educators, scientists, and students, and through collaboration with the COSEE network.
With a large research expedition-based inventory of web resources now available, Oceanica is training teachers and college faculty on the use and incorporation of these resources into middle school, high school and introductory college classrooms. Support for a SEL on shipboard expeditions serves to catalyze the dissemination of the scientific operations to a broad audience of users.

  3. DIRT: The Dust InfraRed Toolbox

    NASA Astrophysics Data System (ADS)

    Pound, M. W.; Wolfire, M. G.; Mundy, L. G.; Teuben, P. J.; Lord, S.

    We present DIRT, a Java applet geared toward modeling a variety of processes in envelopes of young and evolved stars. Users can automatically and efficiently search grids of pre-calculated models to fit their data. A large set of physical parameters and dust types are included in the model database, which contains over 500,000 models. The computing cluster for the database is described in the accompanying paper by Teuben et al. (2000). A typical user query will return about 50-100 models, which the user can then interactively filter as a function of 8 model parameters (e.g., extinction, size, flux, luminosity). A flexible, multi-dimensional plotter (Figure 1) allows users to view the models, rotate them, tag specific parameters with color or symbol size, and probe individual model points. For any given model, auxiliary plots such as dust grain properties, radial intensity profiles, and the flux as a function of wavelength and beamsize can be viewed. The user can fit observed data to several models simultaneously and see the results of the fit; the best fit is automatically selected for plotting. The URL for this project is http://dustem.astro.umd.edu.

  4. Leveraging Globus to Support Access and Delivery of Scientific Data

    NASA Astrophysics Data System (ADS)

    Cram, T.; Schuster, D.; Ji, Z.; Worley, S. J.

    2015-12-01

The NCAR Research Data Archive (RDA; http://rda.ucar.edu) contains a large and diverse collection of meteorological and oceanographic observations, operational and reanalysis outputs, and remote sensing datasets to support atmospheric and geoscience research. The RDA contains more than 600 dataset collections which support the varying needs of a diverse user community. The number of RDA users is increasing annually, and the most popular method used to access the RDA data holdings is through web-based protocols, such as wget- and cURL-based scripts. In 2014, 11,000 unique users downloaded more than 1.1 petabytes of data from the RDA, and customized data products were prepared for more than 45,000 user-driven requests. To further support this increase in web download usage, the RDA has implemented the Globus data transfer service (www.globus.org) to provide a GridFTP data transfer option for the user community. The Globus service is broadly scalable, has an easy-to-install client, is sustainably supported, and provides a robust, efficient, and reliable data transfer option for the research community. This presentation will highlight the technical functionality, challenges, and usefulness of the Globus data transfer service for accessing the RDA data holdings.

  5. Migration to Earth Observation Satellite Product Dissemination System at JAXA

    NASA Astrophysics Data System (ADS)

    Ikehata, Y.; Matsunaga, M.

    2017-12-01

    JAXA released "G-Portal," a portal web site for searching and delivering Earth observation satellite data, in February 2013. G-Portal handles data from ten satellites (GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1, and JERS-1) and archives 5.17 million products and 14 million catalogues in total. Users can search these products and catalogues through a GUI web search and a catalogue interface (CSW/OpenSearch). In this fiscal year, we will replace the system with "Next G-Portal"; integration, testing, and migration have been under way. Next G-Portal will handle data from satellites planned for future launch in addition to those handled by G-Portal. From a system-architecture perspective, G-Portal adopted a "cluster system" for redundancy, so improving its performance required replacing the servers with higher-specification machines (a "scale-up" approach), which incurs substantial cost at every upgrade. To avoid this, Next G-Portal adopts a "scale-out" system: load-balancing interfaces, a distributed file system, and distributed databases. (We reported on this at the AGU Fall Meeting 2015 (IN23D-1748).) From a usability perspective, G-Portal provides a complicated interface: a "step by step" web design, randomly generated URLs, and SFTP (which requires a non-standard TCP port). Customers complained about these interfaces, and the support team spent considerable effort answering them. To solve this problem, Next G-Portal adopts simple interfaces: a "1 page" web design, RESTful URLs, and normal FTP. (We reported on this at the AGU Fall Meeting 2016 (IN23B-1778).) Furthermore, Next G-Portal must absorb the GCOM-W data dissemination system, which is to be terminated next March, as well as the current G-Portal. This may raise some difficulties, since the current G-Portal and the GCOM-W data dissemination system differ considerably from Next G-Portal. This presentation reports the knowledge obtained from the process of merging those systems.

  6. Building an OpenURL Resolver in Your Own Workshop

    ERIC Educational Resources Information Center

    Dahl, Mark

    2004-01-01

    The OpenURL resolver is the next big thing for libraries. An OpenURL resolver is simply a piece of software that takes in the metadata attached to an OpenURL and serves up a Web page that tells one where he or she can get the book or article represented by it. In this article, the author describes how he designed an OpenURL resolver for his library, the Lewis & Clark…

  7. The FTS atomic spectrum tool (FAST) for rapid analysis of line spectra

    NASA Astrophysics Data System (ADS)

    Ruffoni, M. P.

    2013-07-01

    The FTS Atomic Spectrum Tool (FAST) is an interactive graphical program designed to simplify the analysis of atomic emission line spectra obtained from Fourier transform spectrometers. Calculated, predicted, and/or known experimental line parameters are loaded alongside experimentally observed spectral line profiles for easy comparison between new experimental data and existing results. Many such line profiles, which could span numerous spectra, may be viewed simultaneously to help the user detect problems from line blending or self-absorption. Once the user has determined that their experimental line profile fits are good, a key feature of FAST is the ability to calculate atomic branching fractions, transition probabilities, and oscillator strengths (and their uncertainties), which is not provided by existing analysis packages.
    Program summary. Program title: FAST: The FTS Atomic Spectrum Tool. Catalogue identifier: AEOW_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOW_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 293058. No. of bytes in distributed program, including test data, etc.: 13809509. Distribution format: tar.gz. Programming language: C++. Computer: Intel x86-based systems. Operating system: Linux/Unix/Windows. RAM: 8 MB minimum; about 50-200 MB for a typical analysis. Classification: 2.2, 2.3, 21.2. Nature of problem: Visualisation of atomic line spectra, including the comparison of theoretical line parameters with experimental atomic line profiles; accurate intensity calibration of experimental spectra; and determination of the observed relative line intensities needed for calculating atomic branching fractions and oscillator strengths. Solution method: FAST is centred around a graphical interface, where a user may view sets of experimental line profiles and compare them to calculated data (such as from the Kurucz database [1]), predicted line parameters, and/or previously known experimental results. With additional information on the spectral response of the spectrometer, obtained from a calibrated standard light source, FT spectra may be intensity calibrated. In turn, this permits the user to calculate atomic branching fractions and oscillator strengths, and their respective uncertainties. Running time: Open ended; defined by the user. References: [1] R.L. Kurucz (2007). URL http://kurucz.harvard.edu/atoms/.
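    The branching-fraction arithmetic referred to above is standard: for lines sharing an upper level, each line's branching fraction is its calibrated intensity divided by the sum over all lines from that level, and dividing a branching fraction by the upper-level lifetime gives the transition probability (A-value). A toy calculation, with invented intensities and lifetime:

    ```python
    # Toy branching-fraction calculation. Intensities and lifetime are
    # invented numbers, not FAST output.
    intensities = {"line_a": 120.0, "line_b": 60.0, "line_c": 20.0}  # calibrated
    lifetime = 8.0e-9  # upper-level lifetime in seconds (hypothetical)

    total = sum(intensities.values())
    branching = {k: v / total for k, v in intensities.items()}   # BF = I / sum(I)
    a_values = {k: b / lifetime for k, b in branching.items()}   # A = BF / tau

    print(branching["line_a"])  # 0.6
    ```

    By construction the branching fractions sum to 1, so the A-values sum to 1/tau, which is the consistency check FAST's uncertainty analysis builds on.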

  8. Understanding PubMed user search behavior through log analysis.

    PubMed

    Islamaj Dogan, Rezarta; Murray, G Craig; Névéol, Aurélie; Lu, Zhiyong

    2009-01-01

    This article reports on a detailed investigation of PubMed users' needs and behavior as a step toward improving biomedical information retrieval. PubMed provides researchers free access to more than 19 million citations for biomedical articles from MEDLINE and life science journals, and is accessed by millions of users each day. Efficient search tools are crucial for biomedical researchers to keep abreast of the literature relating to their own research. This study provides insight into PubMed users' needs and their behavior through the analysis of one month of log data, consisting of more than 23 million user sessions and more than 58 million user queries. Multiple aspects of users' interactions with PubMed are characterized in detail with evidence from these logs. Despite having many features in common with general Web searches, biomedical information searches have unique characteristics that are made evident in this study. PubMed users are more persistent in seeking information, and they reformulate queries often. The three most frequent types of search are search by author name, search by gene/protein, and search by disease, and abbreviations are used very frequently in queries. Factors such as result-set size influence users' decisions. Analysis of such characteristics plays a critical role in identifying users' information needs and search habits, and in turn provides useful insight for improving biomedical information retrieval. Database URL: http://www.ncbi.nlm.nih.gov/PubMed.
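    The session-level aggregation underlying findings like "users reformulate queries often" can be sketched as follows. The log records here are invented two-field tuples; a real PubMed log has far richer fields:

    ```python
    # Sketch of session-level log aggregation. The (session_id, query) log
    # below is invented for illustration.
    from collections import defaultdict

    log = [
        ("s1", "brca1"), ("s1", "brca1 breast cancer"),
        ("s2", "smith j"), ("s3", "p53"), ("s3", "p53"), ("s3", "tp53 human"),
    ]

    queries_per_session = defaultdict(list)
    for sid, q in log:
        queries_per_session[sid].append(q)

    # Call a session "reformulating" if it issues more than one distinct query.
    reformulating = sum(
        1 for qs in queries_per_session.values() if len(set(qs)) > 1
    )
    print(reformulating, "of", len(queries_per_session), "sessions reformulated")
    ```

    The same grouping step is the starting point for the other per-session statistics the study reports, such as queries per session and query-type frequencies.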

  9. PhyloBot: A Web Portal for Automated Phylogenetics, Ancestral Sequence Reconstruction, and Exploration of Mutational Trajectories.

    PubMed

    Hanson-Smith, Victor; Johnson, Alexander

    2016-07-01

    The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and "resurrect" (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server.
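    PhyloBot's reconstructions are likelihood-based; purely as a toy stand-in for the idea of inferring ancestral states on a tree, here is Fitch parsimony for a single site on a fixed four-leaf tree ((A,B),(C,D)), with invented amino-acid states:

    ```python
    # Toy ancestral-state inference by Fitch parsimony (NOT PhyloBot's
    # maximum-likelihood method). Tree shape and leaf states are invented.
    def fitch(left, right):
        """Candidate state set for a parent from its two children's sets."""
        inter = left & right
        return inter if inter else left | right

    leaves = {"A": {"L"}, "B": {"L"}, "C": {"V"}, "D": {"L"}}
    ab = fitch(leaves["A"], leaves["B"])  # {'L'}
    cd = fitch(leaves["C"], leaves["D"])  # no overlap -> {'L', 'V'}
    root = fitch(ab, cd)                  # intersection -> {'L'}
    print(root)
    ```

    A likelihood method replaces the set operations with per-state probabilities under a substitution model, but the bottom-up pass over the tree has the same shape.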

  10. PhyloBot: A Web Portal for Automated Phylogenetics, Ancestral Sequence Reconstruction, and Exploration of Mutational Trajectories

    PubMed Central

    Hanson-Smith, Victor; Johnson, Alexander

    2016-01-01

    The method of phylogenetic ancestral sequence reconstruction is a powerful approach for studying evolutionary relationships among protein sequence, structure, and function. In particular, this approach allows investigators to (1) reconstruct and “resurrect” (that is, synthesize in vivo or in vitro) extinct proteins to study how they differ from modern proteins, (2) identify key amino acid changes that, over evolutionary timescales, have altered the function of the protein, and (3) order historical events in the evolution of protein function. Widespread use of this approach has been slow among molecular biologists, in part because the methods require significant computational expertise. Here we present PhyloBot, a web-based software tool that makes ancestral sequence reconstruction easy. Designed for non-experts, it integrates all the necessary software into a single user interface. Additionally, PhyloBot provides interactive tools to explore evolutionary trajectories between ancestors, enabling the rapid generation of hypotheses that can be tested using genetic or biochemical approaches. Early versions of this software were used in previous studies to discover genetic mechanisms underlying the functions of diverse protein families, including V-ATPase ion pumps, DNA-binding transcription regulators, and serine/threonine protein kinases. PhyloBot runs in a web browser, and is available at the following URL: http://www.phylobot.com. The software is implemented in Python using the Django web framework, and runs on elastic cloud computing resources from Amazon Web Services. Users can create and submit jobs on our free server (at the URL listed above), or use our open-source code to launch their own PhyloBot server. PMID:27472806

  11. Progress developing the JAXA next generation satellite data repository (G-Portal).

    NASA Astrophysics Data System (ADS)

    Ikehata, Y.

    2016-12-01

    JAXA has been operating "G-Portal" as a repository for searching and accessing data from JAXA-related Earth observation satellites since February 2013. G-Portal handles data from ten satellites: GPM, TRMM, Aqua, ADEOS-II, ALOS (search only), ALOS-2 (search only), MOS-1, MOS-1b, ERS-1, and JERS-1, and plans to import data from the future satellites GCOM-C and EarthCARE. Except for ALOS and ALOS-2, all of these data are open and free. G-Portal supports web search, catalogue search (CSW and OpenSearch), and direct download by SFTP for data access. However, G-Portal has some problems with performance and usability. Regarding performance, for example, G-Portal is based on a 10-Gbps network and uses a scale-out architecture. (The conceptual design was reported at the AGU Fall Meeting 2015 (IN23D-1748).) To address these problems, JAXA has been developing the next-generation repository since February 2016. This paper describes the usability improvements and the challenges toward the next-generation system, which include the following points. The current web interface uses a "step by step" design, and URLs are generated randomly; users must therefore view the Web pages and click many times to reach the desired satellite data. The Web design will therefore be changed completely from "step by step" to "1 page," and URLs will be based on REST (REpresentational State Transfer). Regarding direct download, the current method (SFTP) is hard to use because of its non-standard port assignment and key authentication, so the FTP protocol will also be supported. Additionally, the next G-Portal improves the catalogue service. Currently, catalogue search is available only to limited users, including NASA, ESA, and CEOS, due to performance and reliability issues, but this limitation will be removed. Furthermore, a catalogue-search client function will be implemented to take in other agencies' satellite catalogues, so that users will be able to search satellite data across agencies.
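    The practical difference between the randomly generated URLs criticized above and a RESTful scheme is that a RESTful URL is a deterministic function of the resource's identifiers, so it can be bookmarked and scripted. A minimal sketch, with an invented path layout (not G-Portal's actual scheme):

    ```python
    # Sketch of a deterministic, RESTful URL built from resource identifiers.
    # The host and path layout are invented for illustration.
    def rest_url(satellite, sensor, date, granule):
        """Same inputs always yield the same, human-readable URL."""
        return f"https://gportal.example/{satellite}/{sensor}/{date}/{granule}"

    url = rest_url("GCOM-C", "SGLI", "2017-12-01", "G0001")
    print(url)
    ```

    Because the mapping is stable, users can construct download URLs in batch scripts without first navigating a multi-step web interface.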

  12. The Chandra Source Catalog: User Interface

    NASA Astrophysics Data System (ADS)

    Bonaventura, Nina; Evans, I. N.; Harbo, P. N.; Rots, A. H.; Tibbetts, M. S.; Van Stone, D. W.; Zografou, P.; Anderson, C. S.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Glotfelty, K. J.; Grier, J. D.; Hain, R.; Hall, D. M.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Winkelman, S. L.

    2009-01-01

    The Chandra Source Catalog (CSC) is the definitive catalog of all X-ray sources detected by Chandra. The CSC is presented to the user in two tables: the Master Chandra Source Table and the Table of Individual Source Observations. Each distinct X-ray source identified in the CSC is represented by a single master source entry and one or more individual source entries. If a source is unaffected by confusion and pile-up in multiple observations, the individual source observations are merged to produce a master source. In each table, a row represents a source, and each column a quantity that is officially part of the catalog. The CSC contains positions and multi-band fluxes for the sources, as well as derived spatial, spectral, and temporal source properties. The CSC also includes associated source region and full-field data products for each source, including images, photon event lists, light curves, and spectra. The master source properties represent the best estimates of the properties of a source, and are presented in the following categories: Position and Position Errors, Source Flags, Source Extent and Errors, Source Fluxes, Source Significance, Spectral Properties, and Source Variability. The CSC Data Access GUI provides direct access to the source properties and data products contained in the catalog. The user may query the catalog database via a web-style search or an SQL command-line query. Each query returns a table of source properties, along with the option to browse and download associated data products. The GUI is designed to run in a web browser with Java version 1.5 or higher, and may be accessed via a link on the CSC website homepage (http://cxc.harvard.edu/csc/). As an alternative to the GUI, the contents of the CSC may be accessed directly through a URL, using the command-line tool, cURL. Support: NASA contract NAS8-03060 (CXC).
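    Accessing a catalog "directly through a URL," as the abstract describes for cURL users, amounts to encoding the query parameters into the URL's query string. A sketch with a placeholder endpoint and parameter names (not the CSC's actual interface):

    ```python
    # Sketch of building a catalog-query URL for use with a tool like cURL.
    # Endpoint and parameter names are placeholders, not the real CSC API.
    from urllib.parse import urlencode

    base = "https://cxc.example/csc/query"  # placeholder endpoint
    params = {"ra": 83.633, "dec": 22.014, "radius": 0.1, "format": "csv"}
    url = base + "?" + urlencode(params)
    print(url)
    # A shell user would then fetch it with:  curl -o result.csv "<url>"
    ```

    `urlencode` handles the escaping, so the same construction works for any parameter set the service accepts.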

  13. MedlinePlus Connect: Email List

    MedlinePlus

    ... MedlinePlus Connect → Email List URL of this page: https://medlineplus.gov/connect/emaillist.html MedlinePlus Connect: Email ... will change.) Old URLs New URLs Web Application https://apps.nlm.nih.gov/medlineplus/services/mpconnect.cfm? ...

  14. Automated ground-water monitoring with Robowell: case studies and potential applications

    NASA Astrophysics Data System (ADS)

    Granato, Gregory E.; Smith, Kirk P.

    2002-02-01

    Robowell is an automated system and method for monitoring ground-water quality. Robowell meets accepted manual-sampling protocols without high labor and laboratory costs. Robowell periodically monitors and records water-quality properties and constituents in ground water by pumping a well or multilevel sampler until one or more purge criteria have been met. A record of frequent water-quality measurements from a monitoring site can indicate changes in ground-water quality and can provide a context for the interpretation of laboratory data from discrete samples. Robowell also can communicate data and system performance through a remote communication link. Remote access to ground-water data enables the user to monitor conditions and optimize manual sampling efforts. Six Robowell prototypes have successfully monitored ground-water quality during all four seasons of the year under different hydrogeologic conditions, well designs, and geochemical environments. The U.S. Geological Survey is seeking partners for research with robust and economical water-quality monitoring instruments designed to measure contaminants of concern in conjunction with the application and commercialization of the Robowell technology. Project publications and information about technology transfer opportunities are available on the Internet at URL http://ma.water.usgs.gov/automon/
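    The "pump until one or more purge criteria have been met" logic above is, in essence, a stabilization loop: keep sampling until consecutive readings of a monitored property agree within a tolerance. A minimal sketch with invented readings and tolerance (Robowell's actual criteria are defined by the sampling protocol):

    ```python
    # Sketch of a purge-criterion check: readings are "stable" once the last
    # n values span less than a tolerance. Numbers are invented.
    def stabilized(history, tol, n=3):
        """True when the last n readings span less than tol."""
        if len(history) < n:
            return False
        window = history[-n:]
        return max(window) - min(window) < tol

    readings = [7.9, 7.4, 7.15, 7.12, 7.11]  # e.g. pH measured while purging
    for i in range(1, len(readings) + 1):
        if stabilized(readings[:i], tol=0.1):
            print("purge criterion met after", i, "readings")
            break
    ```

    A multi-parameter system would run one such check per property (pH, specific conductance, dissolved oxygen, ...) and purge until all of them pass.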

  15. Drug-Path: a database for drug-induced pathways

    PubMed Central

    Zeng, Hui; Cui, Qinghua

    2015-01-01

    Some databases of drug-associated pathways have been built and are publicly available; however, the pathways curated in most of them are drug-action or drug-metabolism pathways. In recent years, high-throughput technologies such as microarrays and RNA-sequencing have produced large numbers of drug-induced gene expression profiles. Interestingly, drug-induced gene expression profiles frequently show distinct patterns, indicating that drugs normally induce the activation or repression of distinct pathways. These pathways therefore contribute to the study of drug mechanisms and to drug repurposing. Here, we present Drug-Path, a database of drug-induced pathways generated by KEGG pathway enrichment analysis of drug-induced upregulated and downregulated genes, based on the drug-induced gene expression datasets in Connectivity Map. Drug-Path provides user-friendly interfaces to retrieve, visualize and download the drug-induced pathway data in the database. In addition, the genes deregulated by a given drug are highlighted in the pathways. All data were organized using SQLite, and the web site was implemented using Django, a Python web framework. We believe this database will be useful for related research. Database URL: http://www.cuilab.cn/drugpath PMID:26130661
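    Pathway enrichment analysis of the kind described above is commonly a hypergeometric (one-sided Fisher) test: does a drug's set of deregulated genes overlap a pathway's gene set more than chance predicts? A self-contained sketch with invented gene counts (the abstract does not specify Drug-Path's exact statistic):

    ```python
    # Hypergeometric enrichment test sketch. Gene counts are invented.
    from math import comb

    def hypergeom_pvalue(N, K, n, k):
        """P(overlap >= k) when drawing n genes from N, of which K are in
        the pathway."""
        return sum(
            comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
        ) / comb(N, n)

    # 20000 genes total, 100 in the pathway, 200 deregulated, 10 overlapping
    p = hypergeom_pvalue(20000, 100, 200, 10)
    print(f"{p:.3g}")
    ```

    The expected overlap here is n*K/N = 1 gene, so an observed overlap of 10 yields a very small p-value; in a full pipeline the p-values would then be corrected for testing many pathways.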

  16. CoryneRegNet 3.0--an interactive systems biology platform for the analysis of gene regulatory networks in corynebacteria and Escherichia coli.

    PubMed

    Baumbach, Jan; Wittkop, Tobias; Rademacher, Katrin; Rahmann, Sven; Brinkrolf, Karina; Tauch, Andreas

    2007-04-30

    CoryneRegNet is an ontology-based data warehouse for the reconstruction and visualization of transcriptional regulatory interactions in prokaryotes. To extend the biological content of CoryneRegNet, we added comprehensive data on transcriptional regulations in the model organism Escherichia coli K-12, originally deposited in the international reference database RegulonDB. The enhanced web interface of CoryneRegNet offers several types of search options. The results of a search are displayed in a table-based style and include a visualization of the genetic organization of the respective gene region. Information on DNA binding sites of transcriptional regulators is depicted by sequence logos. The results can also be displayed by several layouters implemented in the graphical user interface GraphVis, allowing, for instance, the visualization of genome-wide network reconstructions and the homology-based inter-species comparison of reconstructed gene regulatory networks. In an application example, we compare the composition of the gene regulatory networks involved in the SOS response of E. coli and Corynebacterium glutamicum. CoryneRegNet is available at the following URL: http://www.cebitec.uni-bielefeld.de/groups/gi/software/coryneregnet/.
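    The sequence logos mentioned above scale each position of a binding-site alignment by its information content, which for DNA is 2 bits minus the Shannon entropy of the base frequencies at that position. A minimal sketch with invented frequencies:

    ```python
    # Per-position information content of a DNA sequence logo.
    # The base frequencies below are invented.
    from math import log2

    def information_content(freqs):
        """Bits at one logo position: 2 - Shannon entropy of base frequencies."""
        h = -sum(p * log2(p) for p in freqs.values() if p > 0)
        return 2.0 - h

    site = {"A": 0.97, "C": 0.01, "G": 0.01, "T": 0.01}
    print(round(information_content(site), 2))  # close to the 2-bit maximum
    ```

    A fully conserved position scores the maximum 2 bits and a uniform position scores 0, which is why conserved binding-site bases tower over the rest of the logo.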

  17. EMAGE mouse embryo spatial gene expression database: 2010 update

    PubMed Central

    Richardson, Lorna; Venkataraman, Shanmugasundaram; Stevenson, Peter; Yang, Yiya; Burton, Nicholas; Rao, Jianguo; Fisher, Malcolm; Baldock, Richard A.; Davidson, Duncan R.; Christiansen, Jeffrey H.

    2010-01-01

    EMAGE (http://www.emouseatlas.org/emage) is a freely available online database of in situ gene expression patterns in the developing mouse embryo. Gene expression domains from raw images are extracted and integrated spatially into a set of standard 3D virtual mouse embryos at different stages of development, which allows data interrogation by spatial methods. An anatomy ontology is also used to describe sites of expression, which allows data to be queried using text-based methods. Here, we describe recent enhancements to EMAGE including: the release of a completely re-designed website, which offers integration of many different search functions in HTML web pages, improved user feedback and the ability to find similar expression patterns at the click of a button; back-end refactoring from an object oriented to relational architecture, allowing associated SQL access; and the provision of further access by standard formatted URLs and a Java API. We have also increased data coverage by sourcing from a greater selection of journals and developed automated methods for spatial data annotation that are being applied to spatially incorporate the genome-wide (∼19 000 gene) ‘EURExpress’ dataset into EMAGE. PMID:19767607

  18. Automated ground-water monitoring with robowell-Case studies and potential applications

    USGS Publications Warehouse

    Granato, G.E.; Smith, K.P.; ,

    2001-01-01

    Robowell is an automated system and method for monitoring ground-water quality. Robowell meets accepted manual-sampling protocols without high labor and laboratory costs. Robowell periodically monitors and records water-quality properties and constituents in ground water by pumping a well or multilevel sampler until one or more purge criteria have been met. A record of frequent water-quality measurements from a monitoring site can indicate changes in ground-water quality and can provide a context for the interpretation of laboratory data from discrete samples. Robowell also can communicate data and system performance through a remote communication link. Remote access to ground-water data enables the user to monitor conditions and optimize manual sampling efforts. Six Robowell prototypes have successfully monitored ground-water quality during all four seasons of the year under different hydrogeologic conditions, well designs, and geochemical environments. The U.S. Geological Survey is seeking partners for research with robust and economical water-quality monitoring instruments designed to measure contaminants of concern in conjunction with the application and commercialization of the Robowell technology. Project publications and information about technology transfer opportunities are available on the Internet at URL http://ma.water.usgs.gov/automon/.

  19. GeneStoryTeller: a mobile app for quick and comprehensive information retrieval of human genes

    PubMed Central

    Eleftheriou, Stergiani V.; Bourdakou, Marilena M.; Athanasiadis, Emmanouil I.; Spyrou, George M.

    2015-01-01

    In the last few years, mobile devices such as smartphones and tablets have become an integral part of everyday life, due to their rapid software/hardware development, as well as the increased portability they offer. Nevertheless, up to now, only a few apps capable of fast and robust access to services have been developed in the field of bioinformatics. We have developed GeneStoryTeller, a mobile application for Android platforms, with which users can instantly retrieve information regarding any recorded human gene, derived from eight publicly available databases, as a summary story. Complementary information regarding gene-drug interactions, functional annotation, and disease associations for each selected gene is also provided in the gene story. The most challenging part of developing GeneStoryTeller was to keep a balance between storing data locally within the app and obtaining updated content dynamically over a network connection. This was accomplished by implementing an administrative site where data are curated and synchronized with the application, requiring minimal human intervention. Database URL: http://bioserver-3.bioacademy.gr/Bioserver/GeneStoryTeller/. PMID:26055097

  20. blend4php: a PHP API for galaxy.

    PubMed

    Wytko, Connor; Soto, Brian; Ficklin, Stephen P

    2017-01-01

    Galaxy is a popular framework for the execution of complex analytical pipelines, typically on large data sets, and is commonly used for (but not limited to) genomic, genetic, and related biological analyses. It provides a web front-end and integrates with high-performance computing resources. Here we report the development of the blend4php library, which wraps Galaxy's RESTful API in a PHP-based library. PHP-based web applications can use blend4php to automate the execution, monitoring, and management of a remote Galaxy server, including its users, workflows, jobs, and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0) at https://github.com/galaxyproject/blend4php. Database URL: https://github.com/galaxyproject/blend4php.

  1. A New and Improved MPB Web Site

    NASA Astrophysics Data System (ADS)

    Warner, Brian D.

    2018-01-01

    The Minor Planet Bulletin home page has a new URL: http://www.MinorPlanet.info/MPB/mpb.php. The new home page features free access (data rates may apply) to almost all papers from Volume 1 (1973) to present. Also included are a basic search feature that allows finding papers by title/abstract and/or authors and links to download the MPB authors guide and cumulative indices.

  2. Making Dynamic Digital Maps Cross-Platform and WWW Capable

    NASA Astrophysics Data System (ADS)

    Condit, C. D.

    2001-05-01

    High-quality color geologic maps are an invaluable information resource for educators, students and researchers. However, maps with large datasets that include images, or various types of movies, in addition to site locations where analytical data has been collected, are difficult to publish in a format that facilitates their easy access, distribution and use. The development of capable desktop computers and object oriented graphical programming environments has facilitated publication of such data sets in an encapsulated form. The original Dynamic Digital Map (DDM) programs, developed using the Macintosh based SuperCard programming environment, exemplified this approach, in which all data are included in a single package designed so that display and access to the data did not depend on proprietary programs. These DDMs were aimed for ease of use, and allowed data to be displayed by several methods, including point-and-click at icons pin-pointing sample (or image) locations on maps, and from clicklists of sample or site numbers. Each of these DDMs included an overview and automated tour explaining the content organization and program use. This SuperCard development culminated in a "DDM Template", which is a SuperCard shell into which SuperCard users could insert their own content and thus create their own DDMs, following instructions in an accompanying "DDM Cookbook" (URL http://www.geo.umass.edu/faculty/condit/condit2.html). These original SuperCard-based DDMs suffered two critical limitations: a single user platform (Macintosh) and, although they can be downloaded from the web, their use lacked an integration into the WWW. Over the last eight months I have been porting the DDM technology to MetaCard, which is aggressively cross-platform (11 UNIX dialects, WIN32 and Macintosh). The new MetaCard DDM is redesigned to make the maps and images accessible either from CD or the web, using the "LoadNGo" concept. 
    LoadNGo allows the user to download the stand-alone DDM program using a standard browser, and then use the program independently to access images, maps and data over fast web connections. DDMs are intended to be a fast and inexpensive way to publish and make accessible, as an integrated product, high-quality color maps and data sets. They are not a substitute for the analytical capability of GIS; however, maps produced using GIS and CAD programs can be easily integrated into DDMs. The preparation of any map product is a time-consuming effort. To complement that effort, the DDM Templates have built into them the capability to contain explanatory text at three different user levels (or perhaps in three different languages); thus one DDM may be used as both a research publication medium and an educational outreach product, with the user choosing the mode in which to access the data.

  3. Manning's roughness coefficient for Illinois streams

    USGS Publications Warehouse

    Soong, David T.; Prater, Crystal D.; Halfar, Teresa M.; Wobig, Loren A.

    2012-01-01

    Manning's roughness coefficients for 43 natural and constructed streams in Illinois are reported and displayed on a U.S. Geological Survey Web site. At a majority of the sites, discharge and stage were measured, and corresponding Manning's coefficients—the n-values—were determined at more than one river discharge. The n-values discussed in this report are computed from data representing the stream reach studied and, therefore, are reachwise values. Presentation of the resulting n-values takes a visual-comparison approach similar to the previously published Barnes report (1967), in which photographs of channel conditions, description of the site, and the resulting n-values are organized for each site. The Web site where the data can be accessed and are displayed is at URL http://il.water.usgs.gov/proj/nvalues/.
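    The reachwise n-values above are back-calculated from measured discharge and channel geometry via Manning's equation, Q = (k/n) A R^(2/3) S^(1/2), solved for n. A minimal sketch in SI units (k = 1.0), with invented channel numbers:

    ```python
    # Back-calculating Manning's n from measured discharge and geometry.
    # SI units (k = 1.0); the channel numbers below are invented.
    def manning_n(discharge, area, hydraulic_radius, slope, k=1.0):
        """n = k * A * R^(2/3) * S^(1/2) / Q."""
        return k * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / discharge

    n = manning_n(discharge=12.0, area=10.0, hydraulic_radius=0.8, slope=0.001)
    print(round(n, 3))
    ```

    With k = 1.486 the same formula works in U.S. customary units; measuring at several discharges, as the report describes, shows how n varies with stage at a site.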

  4. An introduction to QR Codes: linking libraries and mobile patrons.

    PubMed

    Hoy, Matthew B

    2011-01-01

    QR codes, or "Quick Response" codes, are two-dimensional barcodes that can be scanned by mobile smartphone cameras. These codes can be used to provide fast access to URLs, telephone numbers, and short passages of text. With the rapid adoption of smartphones, librarians are able to use QR codes to promote services and help library users find materials quickly and independently. This article will explain what QR codes are, discuss how they can be used in the library, and describe issues surrounding their use. A list of resources for generating and scanning QR codes is also provided.

  5. Biogeochemical and hydrologic processes controlling mercury cycling in Great Salt Lake, Utah

    NASA Astrophysics Data System (ADS)

    Naftz, D.; Kenney, T.; Angeroth, C.; Waddell, B.; Darnall, N.; Perschon, C.; Johnson, W. P.

    2006-12-01

    Great Salt Lake (GSL), in the Western United States, is a terminal lake with a highly variable surface area that can exceed 5,100 km2. The open water and adjacent wetlands of the GSL ecosystem support millions of migratory waterfowl and shorebirds from throughout the Western Hemisphere, as well as a brine shrimp industry with annual revenues exceeding 70 million dollars. Despite the ecologic and economic significance of GSL, little is known about the biogeochemical cycling of mercury (Hg) and no water-quality standards currently exist for this system. Whole water samples collected since 2000 were determined to contain elevated concentrations of total Hg (100 ng/L) and methyl Hg (33 ng/L). The elevated levels of methyl Hg are likely the result of high rates of SO4 reduction and associated Hg methylation in persistently anoxic areas of the lake at depths greater than 6.5 m below the water surface. Hydroacoustic equipment deployed in this anoxic layer indicates a "conveyor belt" flow system that can distribute methyl Hg in a predominantly southerly direction throughout the southern half of GSL (fig. 1, URL: http://users.o2wire.com/dnaftz/Dave/AGU-abs-figs-AUG06.pdf). Periodic and sustained wind events on GSL may result in transport of the methyl Hg-rich anoxic water and bottom sediments into the oxic and biologically active regions. Sediment traps positioned above the anoxic brine interface have captured up to 6 mm of bottom sediment during cumulative wind-driven resuspension events (fig. 2, URL: http://users.o2wire.com/dnaftz/Dave/AGU-abs-figs-AUG06.pdf). Vertical velocity data collected with hydroacoustic equipment indicate upward flow > 1.5 cm/sec during transient wind events (fig. 3, URL: http://users.o2wire.com/dnaftz/Dave/AGU-abs-figs-AUG06.pdf). Transport of methyl Hg into the oxic regions of GSL is supported by biota samples.
The median Hg concentration (wet weight) in brine shrimp increased seasonally from the spring to fall time period and is likely a function of the seasonal aging and resulting Hg bioaccumulation in the adult brine shrimp population. Brine shrimp are the primary food source for eared grebes during the fall molt (August through December); the Hg concentration in eared grebe livers more than doubled during this time period. In 2005, Hg concentration in breast muscle tissue from two duck species was observed to consistently exceed the U.S. Environmental Protection Agency screening level of 0.3 mg/kg (wet weight), resulting in a health advisory issued by the State of Utah to duck hunters regarding consumption of these duck species from the GSL ecosystem.

  6. WholeCellSimDB: a hybrid relational/HDF database for whole-cell model predictions

    PubMed Central

    Karr, Jonathan R.; Phillips, Nolan C.; Covert, Markus W.

    2014-01-01

    Mechanistic ‘whole-cell’ models are needed to develop a complete understanding of cell physiology. However, extracting biological insights from whole-cell models requires running and analyzing large numbers of simulations. We developed WholeCellSimDB, a database for organizing whole-cell simulations. WholeCellSimDB was designed to enable researchers to search simulation metadata to identify simulations for further analysis, and quickly slice and aggregate simulation results data. In addition, WholeCellSimDB enables users to share simulations with the broader research community. The database uses a hybrid relational/hierarchical data format architecture to efficiently store and retrieve both simulation setup metadata and results data. WholeCellSimDB provides a graphical Web-based interface to search, browse, plot and export simulations; a JavaScript Object Notation (JSON) Web service to retrieve data for Web-based visualizations; a command-line interface to deposit simulations; and a Python API to retrieve data for advanced analysis. Overall, we believe WholeCellSimDB will help researchers use whole-cell models to advance basic biological science and bioengineering. Database URL: http://www.wholecellsimdb.org Source code repository URL: http://github.com/CovertLab/WholeCellSimDB PMID:25231498
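The hybrid relational/hierarchical design described above can be illustrated with a minimal, dependency-free sketch: a relational table (SQLite) holds searchable simulation metadata, while bulky results are serialized hierarchically (JSON stands in for HDF5 here). The schema, field names, and data are invented for illustration, not WholeCellSimDB's actual schema.

```python
import json
import os
import sqlite3
import tempfile

# Hybrid store sketch: relational metadata for fast search, hierarchical
# files for bulky results. JSON stands in for HDF5 to avoid dependencies.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE simulation (
    id INTEGER PRIMARY KEY, model TEXT, batch TEXT, length_s REAL,
    results_path TEXT)""")

def deposit(sim_id, model, batch, length_s, results):
    """Store metadata relationally; write results to a hierarchical file."""
    path = os.path.join(tempfile.gettempdir(), f"sim_{sim_id}.json")
    with open(path, "w") as fh:
        json.dump(results, fh)  # nested dicts mirror an HDF5 group tree
    conn.execute("INSERT INTO simulation VALUES (?, ?, ?, ?, ?)",
                 (sim_id, model, batch, length_s, path))

def find(model, min_length_s):
    """Search metadata without opening any results file."""
    return conn.execute(
        "SELECT id, results_path FROM simulation "
        "WHERE model = ? AND length_s >= ?", (model, min_length_s)).fetchall()

deposit(1, "wholecell-mg", "batch-a", 30000.0,
        {"states": {"Mass": {"total": [1.0, 1.1, 1.3]}}})
deposit(2, "wholecell-mg", "batch-a", 900.0,
        {"states": {"Mass": {"total": [1.0]}}})

hits = find("wholecell-mg", 10000.0)
print([h[0] for h in hits])  # only simulation 1 is long enough: [1]
```

The point of the split is that queries like `find` touch only the small relational side; results files are opened only for the simulations a query actually selects.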

  7. Polar Domain Discovery with Sparkler

    NASA Astrophysics Data System (ADS)

    Duerr, R.; Khalsa, S. J. S.; Mattmann, C. A.; Ottilingam, N. K.; Singh, K.; Lopez, L. A.

    2017-12-01

The scientific web is vast and ever growing. It encompasses millions of textual, scientific and multimedia documents describing research in a multitude of scientific streams. Most of these documents are hidden behind forms that require user action to retrieve them, and thus cannot be directly accessed by content crawlers. These documents are hosted on web servers across the world, often on outdated hardware and network infrastructure. It is therefore difficult and time-consuming to aggregate documents from the scientific web, especially those relevant to a specific domain, which makes generating meaningful domain-specific insights difficult. We present an automated discovery system (Figure 1) using Sparkler, an open-source, extensible, horizontally scalable crawler which facilitates high-throughput, focused crawling of documents pertinent to a particular domain, such as information about polar regions. With this set of highly domain-relevant documents, we show that it is possible to answer analytical questions about that domain. Our domain discovery algorithm leverages prior domain knowledge to query commercial and scientific search engines for seed URLs. Subject matter experts then manually annotate these seed URLs on a scale from highly relevant to irrelevant. We use this annotated dataset to train a machine learning model that predicts the 'domain relevance' of a given document, and we extend Sparkler with this model to focus the crawl on documents relevant to that domain. Sparkler avoids disruption of service by (1) partitioning URLs by hostname, so that every node gets a different host to crawl, and (2) inserting delays between subsequent requests. Using Wrangler, an NSF-funded supercomputer, we scaled our domain discovery pipeline to crawl about 200k polar-specific documents from the scientific web within a day.
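The hostname-partitioning part of the politeness scheme can be sketched in a few lines: every URL of a given host is routed to exactly one crawler node, so per-host request delays stay local to that node. The hash function and URLs below are illustrative, not Sparkler's actual implementation.

```python
from urllib.parse import urlparse

def stable_hash(host):
    # Python's builtin hash() is randomized per process for strings,
    # so use a deterministic stand-in for reproducible partitioning.
    return sum(ord(c) for c in host)

def partition_by_host(urls, n_nodes):
    """Assign every URL of a given host to exactly one node."""
    parts = [[] for _ in range(n_nodes)]
    for url in urls:
        host = urlparse(url).netloc
        parts[stable_hash(host) % n_nodes].append(url)
    return parts

urls = [
    "http://nsidc.org/data/a", "http://nsidc.org/data/b",
    "http://example.org/page", "http://polar.example.net/x",
]
parts = partition_by_host(urls, 2)
node_of = {u: i for i, p in enumerate(parts) for u in p}
# Both nsidc.org URLs land on the same node, whichever node that is.
print(node_of["http://nsidc.org/data/a"] == node_of["http://nsidc.org/data/b"])
```

Because a host never spans two nodes, a single node can enforce the delay between successive requests to that host without any cross-node coordination.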

  8. AtmiRNET: a web-based resource for reconstructing regulatory networks of Arabidopsis microRNAs.

    PubMed

    Chien, Chia-Hung; Chiang-Hsieh, Yi-Fan; Chen, Yi-An; Chow, Chi-Nga; Wu, Nai-Yun; Hou, Ping-Fu; Chang, Wen-Chi

    2015-01-01

Compared with animal microRNAs (miRNAs), how miRNAs are involved in significant biological processes in plants remains poorly understood. AtmiRNET is a novel resource geared toward plant scientists for reconstructing regulatory networks of Arabidopsis miRNAs. Drawing on prominent miRNA studies in target recognition, functional enrichment of target genes, promoter identification, and detection of cis- and trans-elements, AtmiRNET allows users to explore mechanisms of transcriptional regulation and miRNA functions in Arabidopsis thaliana that have rarely been investigated so far. High-throughput next-generation sequencing datasets from experiments relevant to transcription start sites (TSSs), together with five core promoter elements, were collected to establish a support vector machine-based prediction model for Arabidopsis miRNA TSSs. High-confidence transcription factors that participate in the transcriptional regulation of Arabidopsis miRNAs are then identified using a statistical approach. Furthermore, both experimentally verified and putative miRNA-target interactions, whose validity is supported by correlations between the expression levels of miRNAs and their targets, are elucidated for functional enrichment analysis. The inferred regulatory networks give users an intuitive insight into the pivotal roles of Arabidopsis miRNAs through the crosstalk between miRNA transcriptional regulation (upstream) and miRNA-mediated (downstream) gene circuits. The visually oriented information in AtmiRNET augments the scant understanding of plant miRNAs and will be useful for further research (e.g. the ABA-miR167c-auxin signaling pathway). Database URL: http://AtmiRNET.itps.ncku.edu.tw/ © The Author(s) 2015. Published by Oxford University Press.

  9. Modeling Computer Communication Networks in a Realistic 3D Environment

    DTIC Science & Technology

    2010-03-01


  10. Easing the Discovery of NASA and International Near-Real-Time Data Using the Global Change Master Directory

    NASA Astrophysics Data System (ADS)

    Ritz, S.; Olsen, L. M.; Morahan, M.; Stevens, T.; Aleman, A.; Grebas, S. K.

    2011-12-01

The Global Change Master Directory (GCMD) provides an extensive directory of descriptive and spatial information about data sets and data-related services, which are relevant to Earth science research. The directory's data discovery components include controlled keywords, free-text searches, and map/date searches. The GCMD portal for NASA's Land Atmosphere Near-real-time Capability for EOS (LANCE) data products leverages these discovery features by providing users a direct route to NASA's Near-Real-Time (NRT) collections. This portal offers direct access to collection entries by instrument name, informing users of the availability of data. After a relevant collection entry is found through the GCMD's search components, the "Get Data" URL within the entry directs the user to the desired data. Building on the importance of Near-Real-Time (NRT) data, the Committee on Earth Observation Satellites (CEOS) International Directory Network (IDN) is targeting an effort to identify NRT data set collections from the CEOS international members. The international collections will be advertised as the "CEOS IDN NRT" portal to assist users in rapidly discovering these products, which are potentially useful for their research or public response. This portal is expected to be released in 2012.

  11. DyNAVacS: an integrative tool for optimized DNA vaccine design.

    PubMed

    Harish, Nagarajan; Gupta, Rekha; Agarwal, Parul; Scaria, Vinod; Pillai, Beena

    2006-07-01

DNA vaccines have slowly emerged as keystones in preventive immunology due to their versatility in inducing both cell-mediated and humoral immune responses. The design of an efficient DNA vaccine involves choice of a suitable expression vector, ensuring optimal expression by codon optimization, engineering CpG motifs to enhance immune responses, and providing additional sequence signals for efficient translation. DyNAVacS is a web-based tool created for rapid and easy design of DNA vaccines. It follows a step-wise design flow, which guides the user through the various sequential steps in the design of the vaccine. Further, it allows restriction enzyme mapping, design of primers spanning user-specified sequences, and provides information regarding the vectors currently used for generation of DNA vaccines. The web version uses the Apache HTTP server. The interface was written in HTML and utilizes Common Gateway Interface (CGI) scripts written in Perl for functionality. DyNAVacS is an integrated tool consisting of user-friendly programs, which require minimal information from the user. The software is available free of cost as a web-based application at URL: http://miracle.igib.res.in/dynavac/.

  12. Statistical Models for Predicting Threat Detection From Human Behavior

    PubMed Central

    Kelley, Timothy; Amon, Mary J.; Bertenthal, Bennett I.

    2018-01-01

    Users must regularly distinguish between secure and insecure cyber platforms in order to preserve their privacy and safety. Mouse tracking is an accessible, high-resolution measure that can be leveraged to understand the dynamics of perception, categorization, and decision-making in threat detection. Researchers have begun to utilize measures like mouse tracking in cyber security research, including in the study of risky online behavior. However, it remains an empirical question to what extent real-time information about user behavior is predictive of user outcomes and demonstrates added value compared to traditional self-report questionnaires. Participants navigated through six simulated websites, which resembled either secure “non-spoof” or insecure “spoof” versions of popular websites. Websites also varied in terms of authentication level (i.e., extended validation, standard validation, or partial encryption). Spoof websites had modified Uniform Resource Locator (URL) and authentication level. Participants chose to “login” to or “back” out of each website based on perceived website security. Mouse tracking information was recorded throughout the task, along with task performance. After completing the website identification task, participants completed a questionnaire assessing their security knowledge and degree of familiarity with the websites simulated during the experiment. Despite being primed to the possibility of website phishing attacks, participants generally showed a bias for logging in to websites versus backing out of potentially dangerous sites. Along these lines, participant ability to identify spoof websites was around the level of chance. Hierarchical Bayesian logistic models were used to compare the accuracy of two-factor (i.e., website security and encryption level), survey-based (i.e., security knowledge and website familiarity), and real-time measures (i.e., mouse tracking) in predicting risky online behavior during phishing attacks. 
Participant accuracy in identifying spoof and non-spoof websites was best captured using a model that included real-time indicators of decision-making behavior, as compared to two-factor and survey-based models. Findings validate three widely applicable measures of user behavior derived from mouse tracking recordings, which can be utilized in cyber security and user intervention research. Survey data alone are not as strong at predicting risky Internet behavior as models that incorporate real-time measures of user behavior, such as mouse tracking. PMID:29713296

  13. Renal Gene Expression Database (RGED): a relational database of gene expression profiles in kidney disease

    PubMed Central

    Zhang, Qingzhou; Yang, Bo; Chen, Xujiao; Xu, Jing; Mei, Changlin; Mao, Zhiguo

    2014-01-01

    We present a bioinformatics database named Renal Gene Expression Database (RGED), which contains comprehensive gene expression data sets from renal disease research. The web-based interface of RGED allows users to query the gene expression profiles in various kidney-related samples, including renal cell lines, human kidney tissues and murine model kidneys. Researchers can explore certain gene profiles, the relationships between genes of interests and identify biomarkers or even drug targets in kidney diseases. The aim of this work is to provide a user-friendly utility for the renal disease research community to query expression profiles of genes of their own interest without the requirement of advanced computational skills. Availability and implementation: Website is implemented in PHP, R, MySQL and Nginx and freely available from http://rged.wall-eva.net. Database URL: http://rged.wall-eva.net PMID:25252782

  14. Diamond Eye: a distributed architecture for image data mining

    NASA Astrophysics Data System (ADS)

    Burl, Michael C.; Fowlkes, Charless; Roden, Joe; Stechert, Andre; Mukhtar, Saleem

    1999-02-01

    Diamond Eye is a distributed software architecture, which enables users (scientists) to analyze large image collections by interacting with one or more custom data mining servers via a Java applet interface. Each server is coupled with an object-oriented database and a computational engine, such as a network of high-performance workstations. The database provides persistent storage and supports querying of the 'mined' information. The computational engine provides parallel execution of expensive image processing, object recognition, and query-by-content operations. Key benefits of the Diamond Eye architecture are: (1) the design promotes trial evaluation of advanced data mining and machine learning techniques by potential new users (all that is required is to point a web browser to the appropriate URL), (2) software infrastructure that is common across a range of science mining applications is factored out and reused, and (3) the system facilitates closer collaborations between algorithm developers and domain experts.

  15. SalmonDB: a bioinformatics resource for Salmo salar and Oncorhynchus mykiss

    PubMed Central

    Di Génova, Alex; Aravena, Andrés; Zapata, Luis; González, Mauricio; Maass, Alejandro; Iturra, Patricia

    2011-01-01

SalmonDB is a new multiorganism database containing EST sequences from Salmo salar, Oncorhynchus mykiss and the whole genome sequence of Danio rerio, Gasterosteus aculeatus, Tetraodon nigroviridis, Oryzias latipes and Takifugu rubripes, built with core components from GMOD project, GOPArc system and the BioMart project. The information provided by this resource includes Gene Ontology terms, metabolic pathways, SNP prediction, CDS prediction, orthologs prediction, several precalculated BLAST searches and domains. It also provides a BLAST server for matching user-provided sequences to any of the databases and an advanced query tool (BioMart) that allows easy browsing of EST databases with user-defined criteria. These tools make the SalmonDB database a valuable resource for researchers searching for transcripts and genomic information regarding S. salar and other salmonid species. The database is expected to grow in the near future, particularly with the S. salar genome sequencing project. Database URL: http://genomicasalmones.dim.uchile.cl/ PMID:22120661

  16. SalmonDB: a bioinformatics resource for Salmo salar and Oncorhynchus mykiss.

    PubMed

    Di Génova, Alex; Aravena, Andrés; Zapata, Luis; González, Mauricio; Maass, Alejandro; Iturra, Patricia

    2011-01-01

SalmonDB is a new multiorganism database containing EST sequences from Salmo salar, Oncorhynchus mykiss and the whole genome sequence of Danio rerio, Gasterosteus aculeatus, Tetraodon nigroviridis, Oryzias latipes and Takifugu rubripes, built with core components from GMOD project, GOPArc system and the BioMart project. The information provided by this resource includes Gene Ontology terms, metabolic pathways, SNP prediction, CDS prediction, orthologs prediction, several precalculated BLAST searches and domains. It also provides a BLAST server for matching user-provided sequences to any of the databases and an advanced query tool (BioMart) that allows easy browsing of EST databases with user-defined criteria. These tools make the SalmonDB database a valuable resource for researchers searching for transcripts and genomic information regarding S. salar and other salmonid species. The database is expected to grow in the near future, particularly with the S. salar genome sequencing project. Database URL: http://genomicasalmones.dim.uchile.cl/

  17. Visualizing multiattribute Web transactions using a freeze technique

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Cotting, Daniel; Dayal, Umeshwar; Machiraju, Vijay; Garg, Pankaj

    2003-05-01

    Web transactions are multidimensional and have a number of attributes: client, URL, response times, and numbers of messages. One of the key questions is how to simultaneously lay out in a graph the multiple relationships, such as the relationships between the web client response times and URLs in a web access application. In this paper, we describe a freeze technique to enhance a physics-based visualization system for web transactions. The idea is to freeze one set of objects before laying out the next set of objects during the construction of the graph. As a result, we substantially reduce the force computation time. This technique consists of three steps: automated classification, a freeze operation, and a graph layout. These three steps are iterated until the final graph is generated. This iterated-freeze technique has been prototyped in several e-service applications at Hewlett Packard Laboratories. It has been used to visually analyze large volumes of service and sales transactions at online web sites.
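The freeze idea can be illustrated with a toy spring (force-directed) layout in which frozen nodes exert forces but are never moved, so each iteration only updates the newly added set of objects. The node names, rest length, and step constant below are invented for illustration, not the paper's actual system.

```python
import math

def layout(pos, frozen, edges, steps=200, k=0.1):
    """2D spring layout; nodes listed in `frozen` keep their coordinates."""
    for _ in range(steps):
        for a, b in edges:
            ax, ay = pos[a]
            bx, by = pos[b]
            dx, dy = bx - ax, by - ay
            d = math.hypot(dx, dy) or 1e-9
            f = k * (d - 1.0)            # spring toward unit rest length
            fx, fy = f * dx / d, f * dy / d
            if a not in frozen:          # frozen nodes never move,
                pos[a] = (ax + fx, ay + fy)
            if b not in frozen:          # so no force update is spent on them
                pos[b] = (bx - fx, by - fy)
    return pos

pos = {"urlA": (0.0, 0.0), "urlB": (3.0, 0.0), "client1": (1.0, 1.0)}
frozen = {"urlA", "urlB"}               # first set already laid out and frozen
layout(pos, frozen, [("urlA", "client1"), ("urlB", "client1")])
print(pos["urlA"])                      # frozen node untouched: (0.0, 0.0)
```

Freezing the already-placed set keeps earlier screen layouts stable between iterations and cuts the force computation to the unfrozen nodes only.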

  18. Extracting scientific articles from a large digital archive: BioStor and the Biodiversity Heritage Library.

    PubMed

    Page, Roderic D M

    2011-05-23

    The Biodiversity Heritage Library (BHL) is a large digital archive of legacy biological literature, comprising over 31 million pages scanned from books, monographs, and journals. During the digitisation process basic metadata about the scanned items is recorded, but not article-level metadata. Given that the article is the standard unit of citation, this makes it difficult to locate cited literature in BHL. Adding the ability to easily find articles in BHL would greatly enhance the value of the archive. A service was developed to locate articles in BHL based on matching article metadata to BHL metadata using approximate string matching, regular expressions, and string alignment. This article locating service is exposed as a standard OpenURL resolver on the BioStor web site http://biostor.org/openurl/. This resolver can be used on the web, or called by bibliographic tools that support OpenURL. BioStor provides tools for extracting, annotating, and visualising articles from the Biodiversity Heritage Library. BioStor is available from http://biostor.org/.
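Approximate title matching of the kind BioStor performs can be sketched with Python's standard difflib; the titles and threshold below are invented, and the real service additionally uses regular expressions and string alignment.

```python
from difflib import SequenceMatcher

# Hypothetical scanned-item titles standing in for BHL metadata.
bhl_titles = [
    "On the genus Carabus in the Malay archipelago",
    "Notes on some new species of African birds",
    "A revision of the genus Carabus of the Malay Archipelago",
]

def best_match(cited_title, candidates, threshold=0.8):
    """Return the candidate most similar to the cited title, if close enough."""
    scored = [(SequenceMatcher(None, cited_title.lower(), c.lower()).ratio(), c)
              for c in candidates]
    score, title = max(scored)
    return title if score >= threshold else None

# A citation with minor differences (capitalization) still resolves.
print(best_match("On the genus Carabus in the Malay Archipelago", bhl_titles))
```

An OpenURL resolver built on this idea would take the article metadata from the OpenURL request, run a match like this against the archive's metadata, and redirect to the located pages.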

  19. Uniform Resource Locators (URLs): Powerful Reference Tools for Librarians and Information Professionals.

    ERIC Educational Resources Information Center

    Smith, Teresa S.

    The Internet is a network of networks which continually accumulates and amasses information, much of which is without organization and evaluation. This study addresses the need for establishing a database of Uniform Resource Locators (URLs), and for collecting, organizing, indexing, and publishing catalogs of URLs. Librarians and information…

  20. Deciding to Change OpenURL Link Resolvers

    ERIC Educational Resources Information Center

    Johnson, Megan; Leonard, Andrea; Wiswell, John

    2015-01-01

    This article will be of interest to librarians, particularly those in consortia that are evaluating OpenURL link resolvers. This case study contrasts WebBridge (an Innovative Interface product) and LinkSource (EBSCO's product). This study assisted us in the decision-making process of choosing an OpenURL link resolver that was sustainable to…

  1. Groups: knowledge spreadsheets for symbolic biocomputing.

    PubMed

    Travers, Michael; Paley, Suzanne M; Shrager, Jeff; Holland, Timothy A; Karp, Peter D

    2013-01-01

    Knowledge spreadsheets (KSs) are a visual tool for interactive data analysis and exploration. They differ from traditional spreadsheets in that rather than being oriented toward numeric data, they work with symbolic knowledge representation structures and provide operations that take into account the semantics of the application domain. 'Groups' is an implementation of KSs within the Pathway Tools system. Groups allows Pathway Tools users to define a group of objects (e.g. groups of genes or metabolites) from a Pathway/Genome Database. Groups can be transformed (e.g. by transforming a metabolite group to the group of pathways in which those metabolites are substrates); combined through set operations; analysed (e.g. through enrichment analysis); and visualized (e.g. by painting onto a metabolic map diagram). Users of the Pathway Tools-based BioCyc.org website have made extensive use of Groups, and an informal survey of Groups users suggests that Groups has achieved the goal of allowing biologists themselves to perform some data manipulations that previously would have required the assistance of a programmer. Database URL: BioCyc.org.
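The group operations described, set algebra plus semantics-aware transforms, can be sketched with ordinary Python sets; the metabolite-to-pathway data below are invented for illustration, not drawn from BioCyc.

```python
# Hypothetical Pathway/Genome relation: metabolite -> pathways using it.
substrate_of = {
    "pyruvate": {"glycolysis", "fermentation"},
    "citrate":  {"TCA cycle"},
    "glucose":  {"glycolysis"},
}

def transform(metabolite_group, relation):
    """Map a metabolite group to the group of pathways using its members."""
    pathways = set()
    for m in metabolite_group:
        pathways |= relation.get(m, set())
    return pathways

group_a = {"pyruvate", "glucose"}
group_b = {"citrate", "glucose"}

shared = group_a & group_b                       # set operation on groups
print(sorted(shared))                            # ['glucose']
print(sorted(transform(group_a, substrate_of)))  # ['fermentation', 'glycolysis']
```

The semantic part is the transform: unlike a numeric spreadsheet formula, it follows a domain relation (metabolites to the pathways they participate in) rather than operating on cell values.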

  2. Integration Telegram Bot on E-Complaint Applications in College

    NASA Astrophysics Data System (ADS)

    Rosid, M. A.; Rachmadany, A.; Multazam, M. T.; Nandiyanto, A. B. D.; Abdullah, A. G.; Widiaty, I.

    2018-01-01

The Internet of Things (IoT) has influenced human life by extending internet connectivity from human-to-human to human-to-machine and machine-to-machine interactions. In this research field, technologies and concepts are created that allow humans to communicate with machines for specific purposes. This research aimed to integrate the Telegram message-sending service with an e-complaint application at a college. With this integration, users do not need to visit the URL of the e-complaint application; they can submit a complaint simply via Telegram, and the complaint is then forwarded to the e-complaint application. Test results show that the e-complaint integration with the Telegram bot runs in accordance with the design. The Telegram bot makes it easy for academicians to submit a complaint, and it offers interaction through the familiar interface people already use every day on their smartphones. Thus, with this system, the work unit that is the subject of a complaint can immediately make improvements, since the whole complaint process is delivered rapidly.

  3. OLSVis: an animated, interactive visual browser for bio-ontologies

    PubMed Central

    2012-01-01

    Background More than one million terms from biomedical ontologies and controlled vocabularies are available through the Ontology Lookup Service (OLS). Although OLS provides ample possibility for querying and browsing terms, the visualization of parts of the ontology graphs is rather limited and inflexible. Results We created the OLSVis web application, a visualiser for browsing all ontologies available in the OLS database. OLSVis shows customisable subgraphs of the OLS ontologies. Subgraphs are animated via a real-time force-based layout algorithm which is fully interactive: each time the user makes a change, e.g. browsing to a new term, hiding, adding, or dragging terms, the algorithm performs smooth and only essential reorganisations of the graph. This assures an optimal viewing experience, because subsequent screen layouts are not grossly altered, and users can easily navigate through the graph. URL: http://ols.wordvis.com Conclusions The OLSVis web application provides a user-friendly tool to visualise ontologies from the OLS repository. It broadens the possibilities to investigate and select ontology subgraphs through a smooth visualisation method. PMID:22646023

  4. Yaughan and Curriboo Plantations: Studies in Afro-American Archaeology.

    DTIC Science & Technology

    1983-04-01


  5. 78 FR 16857 - Office of the Assistant Secretary for Financial Resources, Office of Grants and Acquisition...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-19

    ... Inventory.'' The following should be changed: The notice provided an incorrect URL address: http://www.hhs.gov/grants/servicecontractsfy11.html . The correct URL address is as follows: http://www.hhs.gov... following URL address: http://www.hhs.gov/grants/servicecontractsfy11.html . Change the fiscal year to FY...

  6. Using an Integrated Naive Bayes Classifier for Crawling Relevant Data on the Web

    NASA Astrophysics Data System (ADS)

    Mihsra, A.

    2015-12-01

In our experiments at JPL (NASA) for the DARPA Memex project, we wanted to crawl a large amount of data for various domains. A big challenge was data relevancy: more than 50% of the crawled data was irrelevant to the domain at hand. One immediate solution was to use good seeds (the initial URLs from which the crawler starts) and to keep the crawl within the original hosts. Although very efficient, this technique fails under two conditions: first, when you aim to reach deeper into the web, into new hosts not in the seed list; and second, when a website hosts myriad content types, e.g. a news website. Relevancy calculation used to be a post-processing step, i.e. once we had finished crawling, we trained a Naive Bayes classifier and used it to estimate the relevancy of the pages we had. Integrating relevancy into the crawl itself, rather than computing it afterwards, was important because crawling takes resources and time; to save both, we needed an estimate of the relevancy of the whole crawl at run time so that we could steer its course accordingly. We use Apache Nutch as the crawler, which incorporates new functionality through a plugin system, so we built a Nutch plugin. The Naive Bayes Parse Plugin works as follows. It parses every page and decides whether it is relevant, using a trained model built in situ only once from positive and negative examples supplied by the user in a simple format. If the page is relevant, all of its outlinks pass to the next round of crawling; if not, the URLs get a second chance by being checked for words commonly expected in URLs of that domain. This two-tier system is intuitive and efficient at focusing the crawl. In our initial test experiments over 100 seed URLs, the results were astonishingly good, with a recall of 98%. The same technique can be applied to geo-informatics.
This will help scientists gather data relevant to their specific domain. As a proof of concept, we also crawled nsidc.org and similar websites and were able to keep the crawler away from hub websites such as Yahoo, commercial/advertising portals, and irrelevant content pages. It is a strong start toward focused crawling with Nutch, one of the most scalable and actively evolving crawlers available today.
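The two-tier check described above can be sketched with a tiny add-one-smoothed Naive Bayes classifier in pure Python; the training examples, domain words, and URLs are invented, and the real plugin runs inside Nutch's parse pipeline rather than standalone.

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label). Returns per-label word log-probabilities."""
    counts = {"rel": Counter(), "irr": Counter()}
    for text, label in docs:
        counts[label].update(text.lower().split())
    vocab = set(counts["rel"]) | set(counts["irr"])
    model = {}
    for label, c in counts.items():
        total = sum(c.values()) + len(vocab)        # add-one smoothing
        model[label] = {w: math.log((c[w] + 1) / total) for w in vocab}
        model[label + "_unk"] = math.log(1 / total)  # unseen-word probability
    return model

def relevant(model, text):
    scores = {}
    for label in ("rel", "irr"):
        lp, unk = model[label], model[label + "_unk"]
        scores[label] = sum(lp.get(w, unk) for w in text.lower().split())
    return scores["rel"] >= scores["irr"]

def keep_outlinks(model, page_text, url, domain_words=("polar", "ice", "arctic")):
    if relevant(model, page_text):
        return True                                   # tier 1: page content
    return any(w in url.lower() for w in domain_words)  # tier 2: URL words

docs = [("polar ice sheet melt data", "rel"),
        ("sea ice extent arctic station", "rel"),
        ("celebrity gossip fashion news", "irr"),
        ("sports scores and betting odds", "irr")]
model = train(docs)
# Passes tier 1 on content:
print(keep_outlinks(model, "arctic sea ice data archive", "http://nsidc.org/x"))
# Fails tier 1 but gets its second chance from "ice" in the URL:
print(keep_outlinks(model, "betting odds tonight", "http://example.com/ice-report"))
```

Both calls return True, but by different tiers: the first on page content, the second only because the URL contains a domain word.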

  7. Energetic Neutral Atom (ENA) Movies and Other Cool Data from Cassini's Magnetosphere Imaging Instrument (MIMI)

    NASA Astrophysics Data System (ADS)

    Kusterer, M. B.; Mitchell, D. G.; Krimigis, S. M.; Vandegriff, J. D.

    2014-12-01

    Having been at Saturn for over a decade, the MIMI instrument on Cassini has created a rich dataset containing many details about Saturn's magnetosphere. In particular, the images of energetic neutral atoms (ENAs) taken by the Ion and Neutral Camera (INCA) offer a global perspective on Saturn's plasma environment. The MIMI team is now regularly making movies (in MP4 format) consisting of consecutive ENA images. The movies correct for spacecraft attitude changes by projecting the images (whose viewing angles can substantially vary from one image to the next) into a fixed inertial frame that makes it easy to view spatial features evolving in time. These movies are now being delivered to the PDS and are also available at the MIMI team web site. Several other higher order products are now also available, including 20-day energy-time spectrograms for the Charge-Energy-Mass Spectrometer (CHEMS) sensor, and daily energy-time spectrograms for the Low Energy Magnetospheric Measurements system (LEMMS) sensor. All spectrograms are available as plots or digital data in ASCII format. For all MIMI sensors, a Data User Guide is also available. This paper presents details and examples covering the specifics of MIMI higher order data products. URL: http://cassini-mimi.jhuapl.edu/

  8. Using Twitter to Understand Public Perceptions Regarding the #HPV Vaccine: Opportunities for Public Health Nurses to Engage in Social Marketing.

    PubMed

    Keim-Malpass, Jessica; Mitchell, Emma M; Sun, Emily; Kennedy, Christine

    2017-07-01

    Given the degree of public mistrust and provider hesitation regarding the human papillomavirus (HPV) vaccine, it is important to explore how information regarding the vaccine is shared online via social media outlets. The purpose of this study was to evaluate the content of messaging regarding the HPV vaccine on the social media and microblogging site Twitter, and describe the sentiment of those messages. This study utilized a cross-sectional descriptive approach. Over a 2-week period, Twitter content was searched hourly using key terms "#HPV and #Gardasil," which yielded 1,794 Twitter posts for analysis. Each post was then analyzed individually using an a priori coding strategy and directed content analysis. The majority of Twitter posts were written by lay consumers and were sharing commentary about a media source. However, when actual URLs were shared, the most common form of share was linking back to a blog post written by lay users. The vast majority of content was presented as polarizing (either as a positive or negative tweet), with 51% of the Tweets representing a positive viewpoint. Using Twitter to understand public sentiment offers a novel perspective to explore the context of health communication surrounding certain controversial issues. © 2017 Wiley Periodicals, Inc.

  9. Disappearing act: decay of uniform resource locators in health care management journals

    PubMed Central

    Wagner, Cassie; Gebremichael, Meseret D.; Soltys, Michael J.

    2009-01-01

Objectives: This study examines the problem of decay of uniform resource locators (URLs) in health care management journals and seeks to determine whether continued availability at a given URL relates to the date of publication, the type of resource, or the top-level URL domain. Methods: The authors determined the availability of web-based resources cited in articles published in five source journals from 2002 to 2004. The data were analyzed using correlation, chi-square, and descriptive statistics. Attempts were made to locate the unavailable resources. Results: After checking twice, 49.3% of the original 2,011 cited resources could not be located at the cited URL. The older the article, the more likely that URLs in the reference list of that article were inactive (r = −0.62, P<0.001, n = 1,968). There was no difference in availability across resource types (χ2 = 5.28, df = 2, P = 0.07, n = 1,786). Whether a URL was active varied by top-level domain (χ2 = 14.92, df = 4, P = 0.00, n = 1,786). Conclusions: URL decay is a serious problem in health care management journals. In addition to using website archiving tools like WebCite, publishers should require authors to both keep copies of Internet-based information they used and deposit copies of data with the publishers. PMID:19404503
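The study's grouping of cited links by top-level domain can be sketched in a few lines; the URLs and availability flags below are invented for illustration, not data from the study:

```python
from urllib.parse import urlparse

def top_level_domain(url):
    """Return the last label of the URL's hostname, e.g. 'gov' or 'org'."""
    host = urlparse(url).hostname or ""
    return host.rsplit(".", 1)[-1] if host else ""

def availability_by_domain(results):
    """Tally (active, total) counts per top-level domain.

    `results` is an iterable of (url, is_active) pairs, as produced by
    re-checking each cited URL (e.g. with an HTTP HEAD request).
    """
    tally = {}
    for url, active in results:
        tld = top_level_domain(url)
        live, total = tally.get(tld, (0, 0))
        tally[tld] = (live + int(active), total + 1)
    return tally

# Hypothetical re-check results for a handful of cited URLs.
checked = [
    ("http://www.cdc.gov/report.html", True),
    ("http://example.org/old-page", False),
    ("http://example.com/moved", False),
    ("http://stats.gov/data", True),
]
print(availability_by_domain(checked))
```

A chi-square test over these per-domain counts would then ask, as the study did, whether activity rates differ across domains.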

  10. eF-seek: prediction of the functional sites of proteins by searching for similar electrostatic potential and molecular surface shape.

    PubMed

    Kinoshita, Kengo; Murakami, Yoichi; Nakamura, Haruki

    2007-07-01

    We have developed a method to predict ligand-binding sites in a new protein structure by searching for similar binding sites in the Protein Data Bank (PDB). The similarities are measured according to the shapes of the molecular surfaces and their electrostatic potentials. A new web server, eF-seek, provides an interface to our search method. It simply requires a coordinate file in the PDB format, and generates a prediction result as a virtual complex structure, with the putative ligands in a PDB format file as the output. In addition, the predicted interacting interface is displayed to facilitate the examination of the virtual complex structure on our own applet viewer with the web browser (URL: http://eF-site.hgc.jp/eF-seek).

  11. An HTML5-Based Pure Website Solution for Rapidly Viewing and Processing Large-Scale 3D Medical Volume Reconstruction on Mobile Internet

    PubMed Central

    Chen, Xin; Zhang, Ye; Zhang, Jingna; Li, Ying; Mo, Xuemei; Chen, Wei

    2017-01-01

This study proposes a pure web-based solution that lets users access large-scale 3D medical volumes anywhere, with a good user experience and complete detail. A novel Master-Slave interaction mode was proposed, combining the advantages of remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism to listen for interactive requests from clients (the Slave model) and guide Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors in the Slave model and to improve the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high-quality image remained at approximately 1 s, while the peak bandwidth was <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution lets users access the application rapidly via a single URL hyperlink, without special software or hardware requirements, in a diversified network environment, and can be integrated seamlessly into other telemedical systems. PMID:28638406
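A minimal sketch of the Master-Slave message-responding idea described above, with an in-process queue standing in for the network channel; all names and the echo "rendering" are illustrative assumptions, not the paper's implementation:

```python
import queue

# Slave clients enqueue normalized interaction requests; the Master side
# dequeues them and produces a response (a real server would re-render the
# 3D volume here -- we just echo the request).
requests = queue.Queue()

def slave_request(action, **params):
    """Normalize a client interaction (rotate, zoom, ...) into a message."""
    requests.put({"action": action, "params": params})

def master_render_loop():
    """Consume pending requests and return a log of 'rendered' responses."""
    rendered = []
    while not requests.empty():
        msg = requests.get()
        rendered.append(f"{msg['action']}({msg['params']})")
    return rendered

slave_request("rotate", axis="y", degrees=15)
slave_request("zoom", factor=2.0)
print(master_render_loop())
```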

  12. An HTML5-Based Pure Website Solution for Rapidly Viewing and Processing Large-Scale 3D Medical Volume Reconstruction on Mobile Internet.

    PubMed

    Qiao, Liang; Chen, Xin; Zhang, Ye; Zhang, Jingna; Wu, Yi; Li, Ying; Mo, Xuemei; Chen, Wei; Xie, Bing; Qiu, Mingguo

    2017-01-01

This study proposes a pure web-based solution that lets users access large-scale 3D medical volumes anywhere, with a good user experience and complete detail. A novel Master-Slave interaction mode was proposed, combining the advantages of remote volume rendering and surface rendering. On the server side, we designed a message-responding mechanism to listen for interactive requests from clients (the Slave model) and guide Master volume rendering. On the client side, we used HTML5 to normalize user-interactive behaviors in the Slave model and to improve the accuracy of behavior requests and the user experience. The results showed that more than four independent tasks (each with a data size of 249.4 MB) could be carried out simultaneously with a 100-KBps client bandwidth (extreme test); the first loading time was <12 s, and the response time of each behavior request for the final high-quality image remained at approximately 1 s, while the peak bandwidth was <50 KBps. Meanwhile, the FPS value for each client was ≥40. This solution lets users access the application rapidly via a single URL hyperlink, without special software or hardware requirements, in a diversified network environment, and can be integrated seamlessly into other telemedical systems.

  13. Knowledge Discovery from Growing Social Networks

    DTIC Science & Technology

    2009-12-24

a trackback. We exploited the blog “Theme salon of blogs” in the site “goo”, where a blogger can recruit trackbacks of other bloggers by registering...using trackbacks. Thus, a piece of information can propagate from one blogger to another blogger through a trackback. ...interesting propagation properties. The circle is a URL that corresponds to the musical baton, which is a kind of telephone game on the Internet. It has the
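Treating each trackback as a directed edge between blogs, the propagation described in this record reduces to graph reachability; the blog names and edges below are made up for illustration:

```python
from collections import deque

# Each key links to the blogs its trackbacks point to (invented data).
trackbacks = {
    "blogA": ["blogB", "blogC"],
    "blogB": ["blogD"],
    "blogC": [],
    "blogD": ["blogA"],  # cycles are possible
}

def reachable(start):
    """Blogs that information posted at `start` can propagate to (BFS)."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        for nxt in trackbacks.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen - {start}

print(sorted(reachable("blogA")))
```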

  14. NASA's Global Change Master Directory: Discover and Access Earth Science Data Sets, Related Data Services, and Climate Diagnostics

    NASA Astrophysics Data System (ADS)

    Aleman, A.; Olsen, L. M.; Ritz, S.; Stevens, T.; Morahan, M.; Grebas, S. K.

    2011-12-01

NASA's Global Change Master Directory provides the scientific community with the ability to discover, access, and use Earth science data, data-related services, and climate diagnostics worldwide. The GCMD offers descriptions of Earth science data sets using the Directory Interchange Format (DIF) metadata standard; Earth science related data services are described using the Service Entry Resource Format (SERF); and climate visualizations are described using the Climate Diagnostic (CD) standard. The DIF, SERF and CD standards each capture data attributes used to determine whether a data set, service, or climate visualization is relevant to a user's needs. Metadata fields include: title, summary, science keywords, service keywords, data center, data set citation, personnel, instrument, platform, quality, related URL, temporal and spatial coverage, data resolution and distribution information. In addition, nine valuable sets of controlled vocabularies have been developed to assist users in normalizing the search for data descriptions. An update to the GCMD's search functionality is planned to further capitalize on the controlled vocabularies during database queries. By implementing a dynamic keyword "tree", users will have the ability to search for data sets by combining keywords in new ways. This will allow users to conduct more relevant and efficient database searches to support the free exchange and re-use of Earth science data.
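The keyword-combination search described above can be sketched as a conjunctive filter over controlled-vocabulary keywords; the records and keyword values are invented, not actual GCMD entries:

```python
# Toy metadata records tagged with controlled-vocabulary keywords.
records = [
    {"title": "Sea Surface Temperature", "keywords": {"OCEANS", "TEMPERATURE"}},
    {"title": "Arctic Sea Ice Extent", "keywords": {"CRYOSPHERE", "SEA ICE"}},
    {"title": "Ocean Heat Content", "keywords": {"OCEANS", "TEMPERATURE", "HEAT"}},
]

def search(*keywords):
    """Return titles of records matching ALL of the given keywords."""
    wanted = set(keywords)
    return [r["title"] for r in records if wanted <= r["keywords"]]

print(search("OCEANS", "TEMPERATURE"))
```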

  15. NPInter v3.0: an upgraded database of noncoding RNA-associated interactions

    PubMed Central

    Hao, Yajing; Wu, Wei; Li, Hui; Yuan, Jiao; Luo, Jianjun; Zhao, Yi; Chen, Runsheng

    2016-01-01

    Despite the fact that a large quantity of noncoding RNAs (ncRNAs) have been identified, their functions remain unclear. To enable researchers to have a better understanding of ncRNAs’ functions, we updated the NPInter database to version 3.0, which contains experimentally verified interactions between ncRNAs (excluding tRNAs and rRNAs), especially long noncoding RNAs (lncRNAs) and other biomolecules (proteins, mRNAs, miRNAs and genomic DNAs). In NPInter v3.0, interactions pertaining to ncRNAs are not only manually curated from scientific literature but also curated from high-throughput technologies. In addition, we also curated lncRNA–miRNA interactions from in silico predictions supported by AGO CLIP-seq data. When compared with NPInter v2.0, the interactions are more informative (with additional information on tissues or cell lines, binding sites, conservation, co-expression values and other features) and more organized (with divisions on data sets by data sources, tissues or cell lines, experiments and other criteria). NPInter v3.0 expands the data set to 491,416 interactions in 188 tissues (or cell lines) from 68 kinds of experimental technologies. NPInter v3.0 also improves the user interface and adds new web services, including a local UCSC Genome Browser to visualize binding sites. Additionally, NPInter v3.0 defined a high-confidence set of interactions and predicted the functions of lncRNAs in human and mouse based on the interactions curated in the database. NPInter v3.0 is available at http://www.bioinfo.org/NPInter/. Database URL: http://www.bioinfo.org/NPInter/ PMID:27087310

  16. Effectiveness of off-line and web-based promotion of health information web sites.

    PubMed

    Jones, Craig E; Pinnock, Carole B

    2002-01-01

The relative effectiveness of off-line and web-based promotional activities in increasing the use of health information web sites by target audiences was compared. Visitor sessions were classified according to their method of arrival at the site (referral) as external web site, search engine, or "no referrer" (i.e., a visitor arriving at the site by typing the URL or using a bookmark). The number of Australian visitor sessions correlated with no-referrer referrals but not with web site or search-engine referrals. Results showed that the targeted consumer group is more likely to access the web site as a result of off-line promotional activities. The properties of target audiences likely to influence the effectiveness of off-line versus on-line promotional strategies include the size of the Internet-using population within the target audience, their proficiency in the use of the Internet, and the increased effectiveness of off-line promotional activities when applied to locally defined target audiences.
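The referral classification used in this study can be sketched as follows; the list of search-engine hosts is a placeholder assumption, since the study does not enumerate one:

```python
from urllib.parse import urlparse

# Hypothetical search-engine hosts; a real analysis would use a fuller list.
SEARCH_ENGINES = {"www.google.com", "search.yahoo.com", "www.bing.com"}

def classify_referral(referrer):
    """Bucket a visitor session by how it arrived at the site."""
    if not referrer:
        return "no referrer"      # typed URL or bookmark
    host = urlparse(referrer).hostname or ""
    if host in SEARCH_ENGINES:
        return "search engine"
    return "external web site"

print(classify_referral(None))
print(classify_referral("https://www.google.com/search?q=prostate+health"))
print(classify_referral("https://some-blog.example/links"))
```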

  17. OGDD (Olive Genetic Diversity Database): a microsatellite markers' genotypes database of worldwide olive trees for cultivar identification and virgin olive oil traceability

    PubMed Central

    Ben Ayed, Rayda; Ben Hassen, Hanen; Ennouri, Karim; Ben Marzoug, Riadh; Rebai, Ahmed

    2016-01-01

Olive (Olea europaea), whose importance is mainly due to its nutritional and health features, is one of the most economically significant oil-producing trees in the Mediterranean region. Unfortunately, the increasing market demand for virgin olive oil can result in its adulteration with less expensive oils, a serious problem for the public and for quality control evaluators of virgin olive oil. Therefore, to avoid fraud, olive cultivar identification and virgin olive oil authentication have become major issues for producers, consumers and quality control evaluators in the olive chain. Presently, genetic traceability using SSR (simple sequence repeat) markers is a cost-effective and powerful technique that can be employed to resolve such problems. However, to identify the cultivar of an unknown monovarietal virgin olive oil, a reference system is necessary. Thus, the Olive Genetic Diversity Database (OGDD) (http://www.bioinfo-cbs.org/ogdd/) is presented in this work. It is a genetic, morphologic and chemical database of worldwide olive trees and oils with a double function: besides being a reference system for the identification of unknown olive or virgin olive oil cultivars based on their microsatellite allele size(s), it provides users with additional morphological and chemical information for each identified cultivar. Currently, OGDD is designed to enable users to easily retrieve and visualize biologically important information (SSR markers, and olive tree and oil characteristics of about 200 cultivars worldwide) using a set of efficient query interfaces and analysis tools. It can be accessed through a web service from any modern programming language using a simple hypertext transfer protocol call. The web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. Database URL: http://www.bioinfo-cbs.org/ogdd/ PMID:26827236
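The reference-system idea (matching an unknown sample's microsatellite allele sizes against cultivar profiles) can be sketched as below; the marker names, allele sizes and cultivars are invented, not OGDD data:

```python
# Invented reference table: cultivar -> {marker: (allele size 1, allele size 2)}.
reference = {
    "Chemlali":  {"DCA3": (237, 243), "DCA9": (172, 182)},
    "Picholine": {"DCA3": (233, 243), "DCA9": (172, 194)},
}

def identify(sample):
    """Return cultivars whose allele sizes match the sample at every marker."""
    return [name for name, profile in reference.items()
            if all(profile.get(m) == alleles for m, alleles in sample.items())]

unknown = {"DCA3": (237, 243), "DCA9": (172, 182)}
print(identify(unknown))
```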

  18. Design and implementation of a database for Brucella melitensis genome annotation.

    PubMed

    De Hertogh, Benoît; Lahlimi, Leïla; Lambert, Christophe; Letesson, Jean-Jacques; Depiereux, Eric

    2008-03-18

The genome sequences of three Brucella biovars and of some species close to Brucella sp. have become available, leading to new relationship analyses. Moreover, the automatic genome annotation of the pathogenic bacterium Brucella melitensis has been manually corrected by a consortium of experts, leading to 899 modifications of start site predictions among the 3198 open reading frames (ORFs) examined. This new annotation, coupled with the results of automatic annotation tools applied to the complete genome sequence of B. melitensis (including BLASTs to 9 genomes close to Brucella), provides numerous data sets related to predicted functions, biochemical properties and phylogenetic comparisons. To make these results available, alphaPAGe, a functional auto-updatable database of the corrected genome sequence of B. melitensis, has been built using the entity-relationship (ER) approach and a multi-purpose database structure. A friendly graphical user interface has been designed, and users can retrieve different kinds of information through three levels of queries: (1) the basic search uses classical keywords or sequence identifiers; (2) the advanced search engine allows users to combine (using logical operators) numerous criteria: (a) keywords (textual comparison) related to the pCDS's function, family domains and cellular localization; (b) physico-chemical characteristics (numerical comparison) such as isoelectric point or molecular weight, and structural criteria such as the nucleic length or the number of transmembrane helices (TMH); (c) similarity scores with Escherichia coli and 10 species phylogenetically close to B. melitensis; (3) complex queries can be performed through an SQL field, which allows any query respecting the database's structure. The database is publicly available through a web server at the following URL: http://www.fundp.ac.be/urbm/bioinfo/aPAGe.
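Query level (2), combining keyword, numerical and structural criteria with logical operators, can be illustrated with a small SQL example; the table layout, column names and rows are assumptions, not the actual alphaPAGe schema:

```python
import sqlite3

# In-memory stand-in for a pCDS table (all values invented).
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE pcds (
    orf TEXT, function TEXT, pI REAL, mol_weight REAL, tmh INTEGER)""")
con.executemany("INSERT INTO pcds VALUES (?,?,?,?,?)", [
    ("BMEI0001", "ABC transporter permease", 9.2, 31450.0, 6),
    ("BMEI0002", "DNA polymerase III subunit", 5.1, 129800.0, 0),
    ("BMEI0003", "outer membrane protein", 4.8, 25300.0, 1),
])

# An 'advanced search' joining a keyword criterion with numerical and
# structural criteria via AND -- the kind of combination level (2) describes.
rows = con.execute(
    "SELECT orf FROM pcds WHERE function LIKE ? AND pI > ? AND tmh >= ?",
    ("%transporter%", 7.0, 2),
).fetchall()
print(rows)
```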

  19. Extracting scientific articles from a large digital archive: BioStor and the Biodiversity Heritage Library

    PubMed Central

    2011-01-01

    Background The Biodiversity Heritage Library (BHL) is a large digital archive of legacy biological literature, comprising over 31 million pages scanned from books, monographs, and journals. During the digitisation process basic metadata about the scanned items is recorded, but not article-level metadata. Given that the article is the standard unit of citation, this makes it difficult to locate cited literature in BHL. Adding the ability to easily find articles in BHL would greatly enhance the value of the archive. Description A service was developed to locate articles in BHL based on matching article metadata to BHL metadata using approximate string matching, regular expressions, and string alignment. This article locating service is exposed as a standard OpenURL resolver on the BioStor web site http://biostor.org/openurl/. This resolver can be used on the web, or called by bibliographic tools that support OpenURL. Conclusions BioStor provides tools for extracting, annotating, and visualising articles from the Biodiversity Heritage Library. BioStor is available from http://biostor.org/. PMID:21605356
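The approximate string matching used to locate articles can be illustrated with a similarity score between a citation title and OCR'd candidate strings from a scanned volume; the titles below are invented:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Crude approximate-string-match score between two titles (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

citation = "A revision of the genus Apis"
candidates = [
    "A REVISI0N OF THE GENUS APIS",   # OCR confused O and 0
    "Index to volume XII",
]
best = max(candidates, key=lambda c: similarity(citation, c))
print(best, round(similarity(citation, best), 2))
```

A production matcher, as the abstract notes, would combine such scores with regular expressions and string alignment over page metadata.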

  20. Guide to the Internet. The world wide web.

    PubMed Central

    Pallen, M.

    1995-01-01

The world wide web provides a uniform, user friendly interface to the Internet. Web pages can contain text and pictures and are interconnected by hypertext links. The addresses of web pages are recorded as uniform resource locators (URLs), transmitted by hypertext transfer protocol (HTTP), and written in hypertext markup language (HTML). Programs that allow you to use the web are available for most operating systems. Powerful on line search engines make it relatively easy to find information on the web. Browsing through the web--"net surfing"--is both easy and enjoyable. Contributing to the web is not difficult, and the web opens up new possibilities for electronic publishing and electronic journals. PMID:8520402

  1. Integrating diverse databases into an unified analysis framework: a Galaxy approach

    PubMed Central

    Blankenberg, Daniel; Coraor, Nathan; Von Kuster, Gregory; Taylor, James; Nekrutenko, Anton

    2011-01-01

Recent technological advances have led to the ability to generate large amounts of data for model and non-model organisms. Whereas in the past a relatively small number of central repositories served genomic data, an increasing number of distinct specialized data repositories and resources have been established. Here, we describe a generic approach that provides for the integration of a diverse spectrum of data resources into a unified analysis framework, Galaxy (http://usegalaxy.org). This approach allows the simplified coupling of external data resources with the data analysis tools available to Galaxy users, while leveraging the native data mining facilities of the external data resources. Database URL: http://usegalaxy.org PMID:21531983

  2. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users can not only share static HTML documents and lecture notes but also run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or to other areas such as employee training and charged software consumption.

  3. Where to find nutritional science journals on the World Wide Web.

    PubMed

    Brown, C M

    1997-08-01

The World Wide Web (WWW) is a burgeoning information resource that can be utilized for current awareness and for assistance in manuscript preparation and submission. The ever changing and expanding nature of the WWW allows it to provide up-to-the-minute information, but this inherent changeability often makes information access difficult. To assist nutrition scientists in locating useful information about nutritional science journals on the WWW, this article critically reviews and describes the WWW sites for seventeen highly ranked nutrition and dietetics journals. Included in each annotation are the site's title, web address or Uniform Resource Locator (URL), journal ranking and site authorship. Also listed is whether or not the site makes available the guidelines for authors, tables of contents, abstracts, online ordering, as well as information about the editorial board. This critical survey illustrates that the information on the web, regardless of its authority, is not of equal quality.

  4. Web-based, virtual course units as a didactic concept for medical teaching.

    PubMed

    Schultze-Mosgau, Stefan; Zielinski, Thomas; Lochner, Jürgen

    2004-06-01

    The objective was to develop a web-based, virtual series of lectures for evidence-based, standardized knowledge transfer independent of location and time with possibilities for interactive participation and a concluding web-based online examination. Within the framework of a research project, specific Intranet and Internet capable course modules were developed together with a concluding examination. The concept of integrating digital and analogue course units supported by sound was based on FlashCam (Nexus Concepts), Flash MX (Macromedia), HTML and JavaScript. A Web server/SGI Indigo Unix server was used as a platform by the course provider. A variety of independent formats (swf, avi, mpeg, DivX, etc.) were integrated in the individual swf modules. An online examination was developed to monitor the learning effect. The examination papers are automatically forwarded by email after completion. The results are also returned to the user automatically after they have been processed by a key program and an evaluation program. The system requirements for the user PC have deliberately been kept low (Internet Explorer 5.0, Flash-Player 6, 56 kbit/s modem, 200 MHz PC). Navigation is intuitive. Users were provided with a technical online introduction and a FAQ list. Eighty-two students of dentistry in their 3rd to 5th years of study completed a questionnaire to assess the course content and the user friendliness (SPSS V11) with grades 1 to 6 (1 = 'excellent' and 6 = 'unsatisfactory'). The course units can be viewed under the URL: http://giga.rrze.uni-erlangen.de/movies/MKG/trailer and URL: http://giga.rrze.uni-erlangen.de/movies/MKG/demo/index. Some 89% of the students gave grades 1 (excellent) and 2 (good) for accessibility independent of time and 83% for access independent of location. 
Grades 1 and 2 were allocated for an objectivization of the knowledge transfer by 67% of the students and for the use of video sequences for demonstrating surgical techniques by 91% of the students. The course units were used as an optional method of studying by 87% of the students; 76% of the students made use of this facility from home; 83% of the students used Internet Explorer as a browser; 60% used online streaming and 35% downloading as the preferred method for data transfer. The course units contribute to an evidence-based objectivization of multimedia knowledge transfer independent of time and location. Online examinations permit automatic monitoring and evaluation of the learning effect. The modular structure permits easy updating of course contents. Hyperlinks with literature sources facilitate study.

  5. Attitudes of Male Unrestricted Line (URL) Officers Towards Integration of Women into Their Designators and Towards Women in Combat.

    DTIC Science & Technology

    1983-12-01

Master's Thesis, December 1983. Using Rand Survey data, this thesis examines the attitudes of male Unrestricted Line (URL) officers towards integration of women into their designators and towards women in combat.

  6. Constructing Uniform Resource Locators (URLs) for Searching the Marine Realms Information Bank

    USGS Publications Warehouse

    Linck, Guthrie A.; Allwardt, Alan O.; Lightsom, Frances L.

    2009-01-01

    The Marine Realms Information Bank (MRIB) is a digital library that provides access to free online scientific information about the oceans and coastal regions. To search its collection, MRIB uses a Common Gateway Interface (CGI) program, which allows automated search requests using Uniform Resource Locators (URLs). This document provides an overview of how to construct URLs to execute MRIB queries. The parameters listed allow detailed control of which records are retrieved, how they are returned, and how their display is formatted.
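Constructing such a search URL amounts to appending encoded parameters to the resolver's base address; the parameter names below are placeholders, not the ones documented in the report:

```python
from urllib.parse import urlencode

# Base address and parameter names are illustrative assumptions.
BASE = "http://mrib.usgs.gov/cgi-bin/search"

def build_query_url(**params):
    """Assemble an automated search request as a single GET URL."""
    return BASE + "?" + urlencode(sorted(params.items()))

url = build_query_url(keyword="coastal erosion", format="html", max_records=20)
print(url)
```

Sorting the parameters just makes the output deterministic; `urlencode` handles the percent/plus escaping that a hand-built query string would otherwise get wrong.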

  7. Sex-specific 99th percentiles derived from the AACC Universal Sample Bank for the Roche Gen 5 cTnT assay: Comorbidities and statistical methods influence derivation of reference limits.

    PubMed

    Gunsolus, Ian L; Jaffe, Allan S; Sexter, Anne; Schulz, Karen; Ler, Ranka; Lindgren, Brittany; Saenger, Amy K; Love, Sara A; Apple, Fred S

    2017-12-01

Our purpose was to determine a) overall and sex-specific 99th percentile upper reference limits (URLs) and b) the influence of statistical methods and comorbidities on the URLs. Heparin plasma samples from 838 normal subjects (423 men, 415 women) were obtained from the AACC Universal Sample Bank. The cobas e602 measured cTnT (Roche Gen 5 assay); limit of detection (LoD), 3 ng/L. Hemoglobin A1c (URL 6.5%), NT-proBNP (URL 125 ng/L) and eGFR (60 mL/min/1.73 m2) were measured, along with identification of statin use, to better define normality. 99th percentile URLs were determined by the non-parametric (NP), Harrell-Davis Estimator (HDE) and Robust (R) methods. 355 men and 339 women remained after exclusions. Overall, <50% of subjects had measurable concentrations ≥ LoD: 45.6% with no exclusion, 43.5% after exclusion; compared to men: 68.1% no exclusion, 65.1% post exclusion; women: 22.7% no exclusion, 20.9% post exclusion. The statistical method used influenced the URLs as follows: pre/post exclusion overall, NP 16/16 ng/L, HDE 17/17 ng/L, R not available; men NP 18/16 ng/L, HDE 21/19 ng/L, R 16/11 ng/L; women NP 13/10 ng/L, HDE 14/14 ng/L, R not available. We demonstrated that a) the Gen 5 cTnT assay does not meet the IFCC guideline for high-sensitivity assays, b) surrogate biomarkers significantly lower the URLs and c) the statistical method used impacts the URLs. Our data suggest lower sex-specific cTnT 99th percentiles than reported in the FDA-approved package insert. We emphasize the importance of detailing the criteria used to include and exclude subjects when defining a healthy population, as well as the statistical method used to calculate 99th percentiles and identify outliers. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
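One common non-parametric convention for a 99th percentile URL (the rank-based method) can be sketched as follows; the values are synthetic, and the Harrell-Davis and Robust methods compared in the paper are not reproduced here:

```python
import math

def nonparametric_percentile(values, p=99):
    """Rank-based percentile: the ceil(p/100 * n)-th ordered value.

    One common non-parametric convention; other conventions (interpolation,
    Harrell-Davis weighting) can give different limits on the same data,
    which is the paper's point about method choice.
    """
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))      # 1-based rank
    return ordered[rank - 1]

# 200 synthetic cTnT-like values (ng/L); purely illustrative, not study data.
values = list(range(1, 201))
print(nonparametric_percentile(values))
```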

  8. The design and construction of an interactive website concerning biomedical photography.

    PubMed

    Williams, Robin; Williams, Gigi

    2003-06-01

The purpose of this communication is to make readers aware of what the authors believe is an important online resource about medical and scientific photography for doctors, scientists and students. It is a freely accessible website; its URL is http://msp.rmit.edu.au. The site is designed as a resource base: it is not meant to be a 'course', but the reader will find much practical information about techniques and applications of scientific imaging methods. The site is currently a comprehensive collection of resources relating to invisible radiation photography, but there are plans to expand the site to a range of clinical recording topics, and other potential contributors are asked to join the project. It contains a vast collection of photographs from many photographers as well as graphs, diagrams, tables and references. This paper also discusses some of the important issues surrounding the 'publication' of such a site, such as currency and access versus credibility, technological obsolescence, site design and usage.

  9. Towards a personalized Internet: a case for a full decentralization.

    PubMed

    Kermarrec, Anne-Marie

    2013-03-28

The Web has become a user-centric platform where users post, share, annotate, comment and forward content, be it text, videos, pictures, URLs, etc. This social dimension creates tremendous new opportunities for information exchange over the Internet, as exemplified by the surprising and exponential growth of social networks and collaborative platforms. Yet, niche content is sometimes difficult to retrieve using traditional search engines because they target the mass rather than the individual. Likewise, relieving users of useless notifications is tricky in a world where there is so much information and so little of interest to each and every one of us. We argue that ultra-specific content could be retrieved and disseminated should search and notification be personalized to fit this new setting. We also argue that users' interests should be implicitly captured by the system rather than relying on explicit classifications, simply because the world is by nature unstructured and dynamic, and users do not want to be hampered in their actions by a tight and static framework. In this paper, we review some existing personalization approaches, most of which are centralized. We then advocate the need for fully decentralized systems because personalization raises two main issues. Firstly, personalization requires information to be stored and maintained at a user granularity, which can significantly hurt the scalability of a centralized solution. Secondly, at a time when the 'big brother is watching you' attitude is prominent, users may be more and more reluctant to give away their personal data to the few large companies that can afford such personalization. We start by showing how to achieve personalization in decentralized systems and conclude with the research agenda ahead.

  10. NASA's Global Change Master Directory: Discover and Access Earth Science Data Sets, Related Data Services, and Climate Diagnostics

    NASA Technical Reports Server (NTRS)

    Aleman, Alicia; Olsen, Lola; Ritz, Scott; Morahan, Michael; Cepero, Laurel; Stevens, Tyler

    2011-01-01

    NASA's Global Change Master Directory provides the scientific community with the ability to discover, access, and use Earth science data, data-related services, and climate diagnostics worldwide. The GCMD offers descriptions of Earth science data sets using the Directory Interchange Format (DIF) metadata standard; Earth science related data services are described using the Service Entry Resource Format (SERF); and climate visualizations are described using the Climate Diagnostic (CD) standard. The DIF, SERF and CD standards each capture data attributes used to determine whether a data set, service, or climate visualization is relevant to a user's needs. Metadata fields include: title, summary, science keywords, service keywords, data center, data set citation, personnel, instrument, platform, quality, related URL, temporal and spatial coverage, data resolution and distribution information. In addition, nine valuable sets of controlled vocabularies have been developed to assist users in normalizing the search for data descriptions. An update to the GCMD's search functionality is planned to further capitalize on the controlled vocabularies during database queries. By implementing a dynamic keyword "tree", users will have the ability to search for data sets by combining keywords in new ways. This will allow users to conduct more relevant and efficient database searches to support the free exchange and re-use of Earth science data. http://gcmd.nasa.gov/

  11. 75 FR 75170 - APHIS User Fee Web Site

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-02

    ...] APHIS User Fee Web Site AGENCY: Animal and Plant Health Inspection Service, USDA. ACTION: Notice... recover the costs of providing certain services. This notice announces the availability of a Web site that contains information about the Agency's user fees. ADDRESSES: The Agency's user fee Web site is located at...

  12. A Forensic Examination of Online Search Facility URL Record Structures.

    PubMed

    Horsman, Graeme

    2018-05-29

    The use of search engines and associated search functions to locate content online is now common practice. As a result, a forensic examination of a suspect's online search activity can be a critical aspect in establishing whether an offense has been committed in many investigations. This article offers an analysis of online search URL structures to support law enforcement and associated digital forensics practitioners interpret acts of online searching during an investigation. Google, Bing, Yahoo!, and DuckDuckGo searching functions are examined, and key URL attribute structures and metadata have been documented. In addition, an overview of social media searching covering Twitter, Facebook, Instagram, and YouTube is offered. Results show the ability to extract embedded metadata from search engine URLs which can establish online searching behaviors and the timing of searches. © 2018 American Academy of Forensic Sciences.
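The kind of URL attribute structure examined above can be illustrated with a short sketch that recovers search terms from an engine's URL. The host-to-parameter mapping below is an assumption based on commonly observed query strings (`q` for Google, Bing, and DuckDuckGo; `p` for Yahoo!), not a reproduction of the article's documented structures:

```python
from urllib.parse import urlparse, parse_qs

# Commonly observed query-term parameters per engine (an assumption for
# illustration, not the article's documented attribute tables).
QUERY_PARAMS = {
    "www.google.com": "q",
    "www.bing.com": "q",
    "search.yahoo.com": "p",
    "duckduckgo.com": "q",
}

def extract_search_terms(url):
    """Return the search terms embedded in a search-engine URL, or None."""
    parsed = urlparse(url)
    param = QUERY_PARAMS.get(parsed.netloc)
    if param is None:
        return None
    values = parse_qs(parsed.query).get(param)
    return values[0] if values else None

print(extract_search_terms("https://www.bing.com/search?q=forensic+examination"))
# → forensic examination
```

In practice, a forensic parser would also inspect the additional metadata parameters (timestamps, result offsets) that the article reports alongside the query term.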

  13. Application Examples for Handle System Usage

    NASA Astrophysics Data System (ADS)

    Toussaint, F.; Weigel, T.; Thiemann, H.; Höck, H.; Stockhause, M.; Lautenschlager, M.

    2012-12-01

    Besides the well-known DOIs (Digital Object Identifiers), a special form of Handles that resolve to scientific publications, there are various other applications in use, and others are perhaps yet to come. We present some examples of the existing ones and some ideas for the future. The national German project C3-Grid provides a framework to implement a first solution for provenance tracing and explore unforeseen implications. Though project-specific, the high-level architecture is generic and represents well a common notion of data derivation. Users select one or many input datasets and a workflow software module (an agent in this context) to execute on the data. The output data is deposited in a repository to be delivered to the user. All data is accompanied by an XML metadata document. All input and output data, metadata and the workflow module receive Handles and are linked together to establish a directed acyclic graph of derived data objects and involved agents. Data that has been modified by a workflow module is linked to its predecessor data and the workflow module involved. Version control systems such as svn or git provide Internet access to software repositories using URLs. To refer to a specific state of the source code of, for instance, a C3 workflow module, it is sufficient to reference the URL to the svn revision or git hash. In consequence, individual revisions and the repository as a whole receive PIDs. Moreover, the revision-specific PIDs are linked to their respective predecessors and become part of the provenance graph. Another example of PID usage in a current major project is given by EUDAT (European Data Infrastructure), which will link scientific data of several research communities together. In many fields it is necessary to provide data objects at multiple locations for a variety of applications. To ensure consistency, not only the master of a data object but also its copies shall be provided with a PID.
To verify transaction safety and to keep all copies consistent, the chain from master to copy, and vice versa, has to be resolvable, preferably directly through PIDs. As part of EUDAT, the necessary services are created on the basis of iRODS. These form the core structure of the data infrastructure developed within EUDAT. Though many implementations of PID systems already exist, many valuable web-accessible data sources come with unresolvable identifiers like UUIDs, with unstable recognition patterns like URLs, or even with proprietary implementations. However, other data collections would like to link to them in the data descriptions of their metadata. In addition, by using PIDs one can decouple the responsibilities for data and metadata in projects where necessary. For some metadata entities, like persons or even institutes, it makes sense to give them single PIDs that point to contact and/or location information. ORCID (Open Researcher & Contributor ID), e.g., keeps track of persons working in scholarly fields, independent of name changes and linguistic variances. The ISO 27729-based International Standard Name Identifier (ISNI) also identifies legal entities and fictional characters besides natural persons. Other systems exist that, e.g., reference geographic localities. IDs of this kind may resolve to a URL where detailed information is given.
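The provenance linking described above can be sketched as a directed acyclic graph keyed by PIDs, where each output records links to its input PIDs and to the agent (workflow module) that produced it. This is a minimal illustration; the PID strings and record layout below are hypothetical, not the C3-Grid implementation:

```python
# Registry mapping each PID to its predecessor PIDs and the agent PID.
# All PID strings here are hypothetical examples.
provenance = {}

def register(pid, inputs=(), agent=None):
    """Record a PID with links to its predecessors and the producing agent."""
    provenance[pid] = {"inputs": list(inputs), "agent": agent}

def ancestry(pid):
    """Walk the directed acyclic graph back to the original input objects."""
    seen = []
    stack = [pid]
    while stack:
        current = stack.pop()
        for parent in provenance.get(current, {}).get("inputs", []):
            if parent not in seen:
                seen.append(parent)
                stack.append(parent)
    return seen

register("hdl:21.TEST/input-a")
register("hdl:21.TEST/module-v42")  # e.g. a PID for a specific svn revision
register("hdl:21.TEST/output-1",
         inputs=["hdl:21.TEST/input-a"],
         agent="hdl:21.TEST/module-v42")
print(ancestry("hdl:21.TEST/output-1"))
```

A real Handle-based system would store these links in the Handle records themselves so that resolution, rather than a local dictionary, yields the graph.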

  14. miRSponge: a manually curated database for experimentally supported miRNA sponges and ceRNAs.

    PubMed

    Wang, Peng; Zhi, Hui; Zhang, Yunpeng; Liu, Yue; Zhang, Jizhou; Gao, Yue; Guo, Maoni; Ning, Shangwei; Li, Xia

    2015-01-01

    In this study, we describe miRSponge, a manually curated database, which aims to provide an experimentally supported resource for microRNA (miRNA) sponges. Recent evidence suggests that miRNAs are themselves regulated by competing endogenous RNAs (ceRNAs) or 'miRNA sponges' that contain miRNA binding sites. These competitive molecules can sequester miRNAs to prevent them from interacting with their natural targets, and they play critical roles in various biological and pathological processes. It has become increasingly important to develop a high-quality database to record and store ceRNA data to support future studies. To this end, we have established the experimentally supported miRSponge database that contains data on 599 miRNA-sponge interactions and 463 ceRNA relationships from 11 species following manual curation of nearly 1200 published articles. Database classes include endogenously generated molecules including coding genes, pseudogenes, long non-coding RNAs and circular RNAs, along with exogenously introduced molecules including viral RNAs and artificially engineered sponges. Approximately 70% of the interactions were identified experimentally in disease states. miRSponge provides a user-friendly interface for convenient browsing, retrieval and downloading of datasets. A submission page is also included to allow researchers to submit newly validated miRNA sponge data. Database URL: http://www.bio-bigdata.net/miRSponge. © The Author(s) 2015. Published by Oxford University Press.

  15. miRSponge: a manually curated database for experimentally supported miRNA sponges and ceRNAs

    PubMed Central

    Wang, Peng; Zhi, Hui; Zhang, Yunpeng; Liu, Yue; Zhang, Jizhou; Gao, Yue; Guo, Maoni; Ning, Shangwei; Li, Xia

    2015-01-01

    In this study, we describe miRSponge, a manually curated database, which aims to provide an experimentally supported resource for microRNA (miRNA) sponges. Recent evidence suggests that miRNAs are themselves regulated by competing endogenous RNAs (ceRNAs) or ‘miRNA sponges’ that contain miRNA binding sites. These competitive molecules can sequester miRNAs to prevent them from interacting with their natural targets, and they play critical roles in various biological and pathological processes. It has become increasingly important to develop a high-quality database to record and store ceRNA data to support future studies. To this end, we have established the experimentally supported miRSponge database that contains data on 599 miRNA-sponge interactions and 463 ceRNA relationships from 11 species following manual curation of nearly 1200 published articles. Database classes include endogenously generated molecules including coding genes, pseudogenes, long non-coding RNAs and circular RNAs, along with exogenously introduced molecules including viral RNAs and artificially engineered sponges. Approximately 70% of the interactions were identified experimentally in disease states. miRSponge provides a user-friendly interface for convenient browsing, retrieval and downloading of datasets. A submission page is also included to allow researchers to submit newly validated miRNA sponge data. Database URL: http://www.bio-bigdata.net/miRSponge. PMID:26424084

  16. The Human Oral Microbiome Database: a web accessible resource for investigating oral microbe taxonomic and genomic information

    PubMed Central

    Chen, Tsute; Yu, Wen-Han; Izard, Jacques; Baranova, Oxana V.; Lakshmanan, Abirami; Dewhirst, Floyd E.

    2010-01-01

    The human oral microbiome is the most studied human microflora, but 53% of the species have not yet been validly named and 35% remain uncultivated. The uncultivated taxa are known primarily from 16S rRNA sequence information. Sequence information tied solely to obscure isolate or clone numbers, and usually lacking accurate phylogenetic placement, is a major impediment to working with human oral microbiome data. The goal of creating the Human Oral Microbiome Database (HOMD) is to provide the scientific community with a body site-specific comprehensive database for the more than 600 prokaryote species that are present in the human oral cavity based on a curated 16S rRNA gene-based provisional naming scheme. Currently, two primary types of information are provided in HOMD—taxonomic and genomic. Named oral species and taxa identified from 16S rRNA gene sequence analysis of oral isolates and cloning studies were placed into defined 16S rRNA phylotypes and each given a unique Human Oral Taxon (HOT) number. The HOT interlinks phenotypic, phylogenetic, genomic, clinical and bibliographic information for each taxon. A BLAST search tool is provided to match user 16S rRNA gene sequences to a curated, full-length, 16S rRNA gene reference data set. For genomic analysis, HOMD provides a comprehensive set of analysis tools and maintains frequently updated annotations for all the human oral microbial genomes that have been sequenced and publicly released. Oral bacterial genome sequences, determined as part of the Human Microbiome Project, are being added to the HOMD as they become available. We provide HOMD as a conceptual model for the presentation of microbiome data for other human body sites. Database URL: http://www.homd.org PMID:20624719

  17. Web-Based Family Life Education: Spotlight on User Experience

    ERIC Educational Resources Information Center

    Doty, Jennifer; Doty, Matthew; Dworkin, Jodi

    2011-01-01

    Family Life Education (FLE) websites can benefit from the field of user experience, which makes technology easy to use. A heuristic evaluation of five FLE sites was performed using Nielsen's heuristics, guidelines for making sites user friendly. Greater site complexity resulted in more potential user problems. Sites most frequently had problems…

  18. NASA Langley Highlights, 1998

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Langley's mission is accomplished by performing innovative research relevant to national needs and Agency goals, transferring technology to users in a timely manner, and providing development support to other United States Government Agencies, industry, other NASA Centers, the educational community, and the local community. This report contains highlights of some of the major accomplishments and applications that have been made by Langley researchers and by our university and industry colleagues during the past year. The highlights illustrate the broad range of research and technology activities carried out by NASA Langley Research Center and the contributions of this work toward maintaining United States' leadership in aeronautics and space research. A color electronic version of this report is available at URL http://larcpubs.larc.nasa.gov/randt/1998/.

  19. The upper reference limit for thyroid peroxidase autoantibodies is method-dependent: A collaborative study with biomedical industries.

    PubMed

    Tozzoli, Renato; D'Aurizio, Federica; Ferrari, Anna; Castello, Roberto; Metus, Paolo; Caruso, Beatrice; Perosa, Anna Rosa; Sirianni, Francesca; Stenner, Elisabetta; Steffan, Agostino; Villalta, Danilo

    2016-01-15

    The determination of the upper reference limit (URL) for thyroid peroxidase autoantibodies (TPOAbs) is a contentious issue, because of the difficulty in defining the reference population. The aim of this study was to establish the URL (eURL) for TPOAbs, according to the National Academy of Clinical Biochemistry (NACB) guidelines and to compare them with those obtained in a female counterpart, by the use of six commercial automated platforms. 120 healthy males and 120 healthy females with NACB-required characteristics (<30 years, TSH between 0.5 and 2.0 mIU/L, normal thyroid ultrasound, without personal/family history of thyroid and non-thyroid autoimmune diseases) were studied. Sera were analyzed for TPOAbs concentration using six immunoassay methods applied in automated analyzers: Advia Centaur XP (CEN), Siemens Healthcare Diagnostics; Maglumi 2000 Plus, Shenzen New Industries Biomedical Engineering; Architect ci4100, Abbott; Cobas e411 (COB) Roche Diagnostics; Unicel DxI (UNI) and Lumipulse G1200, Fujirebio. Within each method, TPOAbs values had a high degree of dispersion and the eURLs were lower than those stated by the manufacturer. A statistically significant difference (p<0.05) between medians of males and females was observed only for COB and for UNI. However, the comparison of the male and female proportions positive for TPOAbs using the eURL of the counterpart showed the lack of clinical significance of the above differences (Chi-square test, p>0.05). Despite the analytical harmonization, the wide dispersion of the results and the differences of the eURLs between methods suggest the need for further studies focusing on TPO antigen preparations as the possible source of variability between different assays. In addition, the lack of a clinically significant difference between males and females, in terms of TPOAb eURLs, confirms the suitability of the NACB recommendations. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Creating and Searching a Local Inventory for Data Granules in a Remote Archive

    NASA Astrophysics Data System (ADS)

    Cornillon, P. C.

    2016-12-01

    More often than not, search capabilities for network accessible data do not exist or do not meet the requirements of the user. For large archives this can make finding data of interest tedious at best. This summer, the author encountered such a problem with regard to the two existing archives of VIIRS L2 sea surface temperature (SST) fields obtained with the new ACSPO retrieval algorithm; one at the Jet Propulsion Laboratory's PO-DAAC and the other at NOAA's National Centers for Environmental Information (NCEI). In both cases the data were available via ftp and OPeNDAP but there was no search capability at the PO-DAAC and the NCEI archive was incomplete. Furthermore, in order to meet the needs of a broad range of datasets and users, the beta version of the search engine at NCEI was cumbersome for the searches of interest. Although some of these problems have been resolved since (and may be described in other posters/presentations at this meeting), the solution described in this presentation offers the user the ability to develop a search capability for archives lacking a search capability and/or to configure searches more to his or her preferences than the generic searches offered by the data provider. The solution, a Matlab script, used HTML access to the PO-DAAC web site to locate all VIIRS 10 minute granules and OPeNDAP access to acquire the bounding box for each granule from the metadata bound to the file. This task required several hours of wall time to acquire the data and to write the bounding boxes to a local file with the associated ftp and OPeNDAP URLs for the 110,000+ granule archive. A second Matlab script searched the local archive, in seconds, for granules falling in a user-defined space-time window, and an ASCII file of wget commands associated with these was generated. This file was then executed to acquire the data of interest. The wget commands can be configured to acquire the entire files via ftp or a subset of each file via OPeNDAP. 
Furthermore, the search capability, based on bounding boxes and rectangular regions, could easily be modified to further refine the search. Finally, the script that builds the inventory has been designed to update the local inventory in minutes per month rather than hours.
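The two-step approach described above (a slow one-time inventory build, then fast local searches) can be sketched as follows. The inventory records, field names, and URLs below are hypothetical, and the bounding-box test is a simple rectangle intersection that, unlike a production version, ignores granules crossing the antimeridian:

```python
from datetime import datetime

# Hypothetical local inventory: one record per granule, holding the access
# URL, the granule start time, and the bounding box read from its metadata.
inventory = [
    {"url": "https://example.org/granule1.nc",
     "start": datetime(2016, 7, 1, 0, 0),
     "lon_min": -72.0, "lon_max": -65.0, "lat_min": 38.0, "lat_max": 43.0},
    {"url": "https://example.org/granule2.nc",
     "start": datetime(2016, 7, 1, 0, 10),
     "lon_min": 10.0, "lon_max": 20.0, "lat_min": -5.0, "lat_max": 5.0},
]

def overlaps(g, lon_min, lon_max, lat_min, lat_max):
    """True if the granule's bounding box intersects the search rectangle."""
    return not (g["lon_max"] < lon_min or g["lon_min"] > lon_max or
                g["lat_max"] < lat_min or g["lat_min"] > lat_max)

def search(t0, t1, lon_min, lon_max, lat_min, lat_max):
    """Return granule URLs falling inside the space-time window."""
    return [g["url"] for g in inventory
            if t0 <= g["start"] <= t1
            and overlaps(g, lon_min, lon_max, lat_min, lat_max)]

hits = search(datetime(2016, 7, 1), datetime(2016, 7, 2), -75, -60, 35, 45)
```

The returned URLs would then be written out as wget commands, as in the presentation's workflow.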

  1. Linking to NSCEP's Online Publications

    EPA Pesticide Factsheets

    Each online document has a permanent URL that can be linked to for future reference. To find the short URL for a document you need to be in the document and able to view the icon bar above the document.

  2. 29 CFR 1614.703 - Manner and format of data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... vertical columns. The oldest fiscal year data shall be listed first, reading left to right, with the other... Resource Locator (URL) for the data it posts under this subpart. Thereafter, new or changed URLs shall be...

  3. 75 FR 75962 - Proposed Information Collection; Comment Request; Commerce.Gov Web Site User Survey

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-07

    ...; Commerce.Gov Web Site User Survey AGENCY: Office of the Secretary, Office of Public Affairs. ACTION: Notice... serve users of Commerce.gov and the Department of Commerce bureaus' Web sites, the Offices of Public Affairs will collect information from users about their experience on the Web sites. A random number of...

  4. Demonstrating S-NPP VIIRS Products with the Naval Research Laboratory R&D Websites

    NASA Astrophysics Data System (ADS)

    Kuciauskas, A. P.; Hawkins, J.; Solbrig, J.; Bankert, R.; Richardson, K.; Surratt, M.; Miller, S. D.; Kent, J.

    2014-12-01

    The Naval Research Laboratory, Marine Meteorology Division in Monterey, CA (NRL-MRY) has been developing and providing the global community with VIIRS-derived state-of-the-art image products on three operational websites: · NexSat: www.nrlmry.navy.mil/NEXSAT.html · VIIRS Page: www.nrlmry.navy.mil/VIIRS.html · Tropical Cyclone Page: www.nrlmry.navy.mil/TC.html These user-friendly websites are accessed by the global public with a daily average of 250,000 and 310,000 web hits for the NexSat and Tropical Cyclone websites, respectively. Users consist of operational, research, scientific field campaigns, academia, and weather enthusiasts. The websites also contain ancillary products from 5 geostationary and 27 low earth orbiting sensors, ranging from visible through microwave channels. NRL-MRY also leverages the NRL global and regional numerical weather prediction (NWP) models for assessing cloud top measurements and synoptic overlays. With collaborations at CIMSS' Direct Readout site along with the AFWA IDPS-FNMOC and NOAA IDPS portals, a robust component of our websites is product latency that typically satisfies operational time constraints necessary for planning purposes. Given these resources, NRL-MRY acquires ~2TBytes of data and produces 100,000 image products on a daily basis. In partnership with the COMET program, our product tutorials contain simple and graphically enhanced descriptions that accommodate users ranging from basic to advanced understanding of satellite meteorology. This presentation will provide an overview of our website functionality: animations, co-registered formats, and Google Earth viewing. Through imagery, we will also demonstrate the superiority of VIIRS over its heritage sensor counterparts. A focal aspect will be the demonstration of the VIIRS Day Night Band (DNB) in detecting nighttime features such as wildfires, volcanic ash, Arctic sea ice, and tropical cyclones. 
We also plan to illustrate how NexSat and VIIRS websites demonstrate CAL/VAL ocean color activity. We will also discuss outreach and training efforts designed for research and operational applications. Our goal is to encourage the audience to add our URLs into their suite of web-based satellite resources.

  5. Photographs of the Sea floor Offshore of New York and New Jersey

    USGS Publications Warehouse

    Butman, Bradford; Gutierrez, Benjamin T.; Buchholtz ten Brink, Marilyn R.; Schwab, William S.; Blackwood, Dann S.; Mecray, Ellen L.; Middleton, Tammie J.

    2003-01-01

    This DVD-ROM contains photographs of the sea floor and sediment texture data collected as part of studies carried out by the U.S. Geological Survey (USGS) in the New York Bight (Figure 1a (PDF format)). The studies were designed to map the sea floor (Butman, 1998, URL: http://pubs.usgs.gov/fs/fs133-98/) and to develop an understanding of the transport and long-term fate of sediments and associated contaminants in the region (Mecray and others, 1999, URL: http://pubs.usgs.gov/fs/fs114-99/). The data were collected on four research cruises carried out between 1996 and 2000 (Appendix I). The images and texture data were collected to provide direct observations of the sea floor geology and to aid in the interpretation of backscatter intensity data obtained from sidescan sonar and multibeam surveys of the sea floor. Preliminary descriptions of the sea floor geology in this region may be found in Schwab and others (2000, URL: http://pubs.usgs.gov/of/of00-295/; 2003), Butman and others (1998, URL: http://pubs.usgs.gov/of/of98-616/.), and Butman and others (2002, URL: http://pubs.usgs.gov/of/of00-503/). Schwab and others (2000 URL: http://pubs.usgs.gov/of/of00-295/; 2003) have identified 11 geologic units in New York Bight (Figure 2 (PDF format)). These units identify areas of active sediment transport, extensive anthropogenic influence on the sea floor, and various geologic units. Butman and others (2003) and Harris and others (in press) present the results of a moored array experiment carried out in the Hudson Shelf Valley to investigate the transport of sediments during winter. Summaries of these and other studies may be found at USGS studies in the New York Bight (URL: http://woodshole.er.usgs.gov/project-pages/newyork/). This DVD-ROM contains digital images of bottom still photographs, images digitized from videos, sediment grain-size analysis results, and short QuickTime movies from video transects. 
The data are presented in tabular form and in an ESRI (Environmental Systems Research Institute, URL: http://www.esri.com) ArcView project where the image and sample locations may be viewed superimposed on maps showing side-scan sonar and/or multibeam backscatter intensity and bottom topography.

  6. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan

    PubMed Central

    Kinjo, Akira R.; Yamashita, Reiko; Nakamura, Haruki

    2010-01-01

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served to the users of PDBj via the World Wide Web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in a way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named after the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform a simple keyword search or an 'Advanced Search', which can specify various conditions on the entries. More experienced users can query the database using SQL statements, which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/ PMID:20798081

  7. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    PubMed

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served to the users of PDBj via the World Wide Web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in a way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named after the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform a simple keyword search or an 'Advanced Search', which can specify various conditions on the entries. More experienced users can query the database using SQL statements, which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/
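The XPath-based flattening described above can be sketched as follows. This is a simplified illustration of the idea of indexing each datum by its path in the XML hierarchy, not PDBj Mine's actual loader; real PDBML paths would also need to distinguish repeated sibling elements and attributes:

```python
import xml.etree.ElementTree as ET

def flatten(xml_text):
    """Break an XML document into (xpath, text) rows, one per element with
    text content, mirroring the path-keyed indexing described above."""
    root = ET.fromstring(xml_text)
    rows = []

    def walk(elem, path):
        if elem.text and elem.text.strip():
            rows.append((path, elem.text.strip()))
        for child in elem:
            walk(child, path + "/" + child.tag)

    walk(root, "/" + root.tag)
    return rows

doc = "<entry><id>1ABC</id><title>example</title></entry>"
print(flatten(doc))
# → [('/entry/id', '1ABC'), ('/entry/title', 'example')]
```

In the scheme described, each distinct path would become a relational table named after the path, with one row per occurrence.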

  8. AGORA: Organellar genome annotation from the amino acid and nucleotide references.

    PubMed

    Jung, Jaehee; Kim, Jong Im; Jeong, Young-Sik; Yi, Gangman

    2018-03-29

    Next-generation sequencing (NGS) technologies have led to the accumulation of high-throughput sequence data from various organisms in biology. To apply gene annotation of organellar genomes for various organisms, more optimized tools for functional gene annotation are required. Almost all gene annotation tools are focused mainly on the chloroplast genome of land plants or the mitochondrial genome of animals. We have developed a web application, AGORA, for the fast, user-friendly, and improved annotation of organellar genomes. AGORA annotates genes based on a BLAST-based homology search and clustering with selected reference sequences from the NCBI database or user-defined uploaded data. AGORA can annotate the functional genes in almost all mitochondrial and plastid genomes of eukaryotes. The gene annotation of a genome with an exon-intron structure within a gene or inverted repeat region is also available. It provides information on the start and end positions of each gene, BLAST results compared with the reference sequence, and a visualization of the gene map by OGDRAW. Users can freely use the software, and the accessible URL is https://bigdata.dongguk.edu/gene_project/AGORA/. The main module of the tool is implemented in Python and PHP, and the web page is built with HTML and CSS to support all browsers. gangman@dongguk.edu.

  9. Data Mining of Network Logs

    NASA Technical Reports Server (NTRS)

    Collazo, Carlimar

    2011-01-01

    The statement of purpose is to analyze network monitoring logs to support the computer incident response team. Specifically, gain a clear understanding of the Uniform Resource Locator (URL) and its structure, and provide a way to break down a URL based on protocol, host name, domain name, path, and other attributes. Finally, provide a method to perform data reduction by identifying the different types of advertisements shown on a webpage for incident data analysis. The procedure used for analysis and data reduction will be a computer program which would analyze the URL and distinguish advertisement links from the actual content links.
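A URL breakdown of the kind described can be sketched with Python's standard library. This is illustrative only, not the internship's actual program; in particular, the domain heuristic below (last two host labels) is a deliberate simplification that ignores multi-part suffixes such as .co.uk:

```python
from urllib.parse import urlparse

def breakdown(url):
    """Split a URL into the attributes named above: protocol, host name,
    domain name, path, and query string."""
    p = urlparse(url)
    host = p.netloc
    # Treat the last two labels as the registered domain -- a simplification
    # that a real tool would replace with a public-suffix lookup.
    domain = ".".join(host.split(".")[-2:]) if "." in host else host
    return {"protocol": p.scheme, "host": host,
            "domain": domain, "path": p.path, "query": p.query}

print(breakdown("https://ads.example.com/banner/img.gif?id=42"))
# → {'protocol': 'https', 'host': 'ads.example.com',
#    'domain': 'example.com', 'path': '/banner/img.gif', 'query': 'id=42'}
```

Data reduction could then proceed by matching the extracted domains against a list of known advertising domains.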

  10. Information about liver transplantation on the World Wide Web.

    PubMed

    Hanif, F; Sivaprakasam, R; Butler, A; Huguet, E; Pettigrew, G J; Michael, E D A; Praseedom, R K; Jamieson, N V; Bradley, J A; Gibbs, P

    2006-09-01

    Orthotopic liver transplant (OLTx) has evolved into a successful surgical management for end-stage liver diseases. Awareness and information about OLTx are an important tool in assisting OLTx recipients and people supporting them, including non-transplant clinicians. The study aimed to investigate the nature and quality of liver transplant-related patient information on the World Wide Web. Four common search engines were used to explore the Internet by using the key words 'Liver transplant'. The URLs (uniform resource locators) of the top 50 returns were chosen, as it was judged unlikely that the average user would search beyond the first 50 sites returned by a given search. Each Web site was assessed on the following categories: origin, language, accessibility and extent of the information. A weighted Information Score (IS) was created to assess the quality of clinical and educational value of each Web site and was scored independently by three transplant clinicians. The Internet search performed with the aid of the four search engines yielded a total of 2,255,244 Web sites. Of the 200 possible sites, only 58 Web sites were assessed because of repetition of the same Web sites and non-accessible links. The overall median weighted IS was 22 (IQR 1 - 42). Of the 58 Web sites analysed, 45 (77%) belonged to the USA, six (10%) were European, and seven (12%) were from the rest of the world. The median weighted IS of publications originating from Europe and the USA was 40 (IQR = 22 - 60) and 23 (IQR = 6 - 38), respectively. Although European Web sites produced a higher weighted IS [40 (IQR = 22 - 60)] as compared with the USA publications [23 (IQR = 6 - 38)], this was not statistically significant (p = 0.07). Web sites belonging to the academic institutions and the professional organizations scored significantly higher with a median weighted IS of 28 (IQR = 16 - 44) and 24 (12 - 35), respectively, as compared with the commercial Web sites (median = 6 with IQR of 0 - 14, p = .001). 
There was an Intraclass Correlation Coefficient (ICC) of 0.89 and an associated 95% CI (0.83, 0.93) for the three observers on the 58 Web sites. The study highlights the need for a significant improvement in the information available on the World Wide Web about OLTx. It concludes that the educational material currently available on the World Wide Web about liver transplant is of poor quality and requires rigorous input from health care professionals. The authors suggest that clinicians should take the necessary steps to improve the standard of information available on their relevant Web sites and must take an active role in helping their patients find Web sites that provide the best and most accurate information specifically applicable to local and regional circumstances.

  11. User Vulnerability and its Reduction on a Social Networking Site

    DTIC Science & Technology

    2014-01-01

    social networking sites bring about new...and explore other users’ profiles and friend networks. Social networking sites have reshaped business models [Vaynerchuk 2009], provided platform... social networking sites is to enable users to be more social, user privacy and security issues cannot be ignored. On one hand, most social networking sites

  12. Snow Tweets: Emergency Information Dissemination in a US County During 2014 Winter Storms

    PubMed Central

    Bonnan-White, Jess; Shulman, Jason; Bielecke, Abigail

    2014-01-01

    Introduction: This paper describes how American federal, state, and local organizations created, sourced, and disseminated emergency information via social media in preparation for several winter storms in one county in the state of New Jersey (USA). Methods: Postings submitted to Twitter for three winter storm periods were collected from selected organizations, along with a purposeful sample of select private local users. Storm-related posts were analyzed for stylistic features (hashtags, retweet mentions, embedded URLs). Sharing and re-tweeting patterns were also mapped using NodeXL. Results: Results indicate emergency management entities were active in providing preparedness and response information during the selected winter weather events. A large number of posts, however, did not include unique Twitter features that maximize dissemination and discovery by users. Visual representations of interactions illustrate opportunities for developing stronger relationships among agencies. Discussion: Whereas previous research predominantly focuses on large-scale national or international disaster contexts, the current study instead provides needed analysis in a small-scale context. With practice during localized events like extreme weather, effective information dissemination in large events can be enhanced. PMID:25685629
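The stylistic-feature coding described above (hashtags, retweet mentions, embedded URLs) can be sketched with simple regular expressions. These patterns are illustrative simplifications, not the study's coding instrument, and the example post is invented:

```python
import re

def code_features(tweet):
    """Flag the stylistic features analyzed above in a single post."""
    return {
        "hashtags": re.findall(r"#\w+", tweet),
        "mentions": re.findall(r"@\w+", tweet),
        "urls": re.findall(r"https?://\S+", tweet),
        "is_retweet": tweet.startswith("RT @"),
    }

post = "RT @NJOEM: Snow expected tonight #njwx https://example.org/alert"
print(code_features(post))
```

Applied across a corpus of agency posts, counts of these flags support the kind of dissemination analysis the study reports.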

  13. Products available from NREL`s Renewable Resource Data Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, T.Q.; Rymes, M.

    1995-10-01

The Renewable Resource Data Center (RReDC) has been developed at the National Renewable Energy Laboratory (NREL) under the Resource Assessment Program. Initial offerings are broadband solar irradiance data bases such as the Daily Statistics Files and Typical Meteorological Years from the 1961--1990 National Solar Radiation Data Base, the West Associates data gathered in the Southwest US from 1976 through 1980, the New NOAA Network that replaced SOLMET from 1977 through 1980, and the one-minute data from four universities under the SEMRTS program. Unique data sets are the thousands of measured solar spectra and measurements of the solar intensity in the circumsolar region. All these data are provided with their accompanying documentation and online help. Other products such as Shining On and Solar Radiation Data Manual for Flat-Plate and Concentrating Collectors are available in their entirety, as well as glossaries, bibliographies, maps, and other user helps. The Uniform Resource Locator (URL) address of the RReDC is http://rredc.nrel.gov. Users should have World Wide Web (WWW) browsing software (such as Mosaic), which supports Forms and the necessary browsing viewers.

  14. Snow Tweets: Emergency Information Dissemination in a US County During 2014 Winter Storms.

    PubMed

    Bonnan-White, Jess; Shulman, Jason; Bielecke, Abigail

    2014-12-22

This paper describes how American federal, state, and local organizations created, sourced, and disseminated emergency information via social media in preparation for several winter storms in one county in the state of New Jersey (USA). Postings submitted to Twitter for three winter storm periods were collected from selected organizations, along with a purposeful sample of select private local users. Storm-related posts were analyzed for stylistic features (hashtags, retweet mentions, embedded URLs). Sharing and re-tweeting patterns were also mapped using NodeXL. Results indicate emergency management entities were active in providing preparedness and response information during the selected winter weather events. A large number of posts, however, did not include unique Twitter features that maximize dissemination and discovery by users. Visual representations of interactions illustrate opportunities for developing stronger relationships among agencies. Whereas previous research predominantly focuses on large-scale national or international disaster contexts, the current study instead provides needed analysis in a small-scale context. With practice during localized events like extreme weather, effective information dissemination in large events can be enhanced.

  15. EviNet: a web platform for network enrichment analysis with flexible definition of gene sets.

    PubMed

    Jeggari, Ashwini; Alekseenko, Zhanna; Petrov, Iurii; Dias, José M; Ericson, Johan; Alexeyenko, Andrey

    2018-06-09

The new web resource EviNet provides an easily run interface to network enrichment analysis for exploration of novel, experimentally defined gene sets. The major advantages of this analysis are (i) applicability to any genes found in the global network rather than only to those with pathway/ontology term annotations, (ii) ability to connect genes via different molecular mechanisms rather than within one high-throughput platform, and (iii) statistical power sufficient to detect enrichment of very small sets, down to individual genes. The users' gene sets are either defined prior to upload or derived interactively from an uploaded file by differential expression criteria. The pathways and networks used in the analysis can be chosen from the collection menu. The calculation is typically done within seconds or minutes and a stable URL is provided immediately. The results are presented in both visual (network graphs) and tabular formats using jQuery libraries. Uploaded data and analysis results are kept in separate project directories not accessible to other users. EviNet is available at https://www.evinet.org/.

  16. Sustainable funding for biocuration: The Arabidopsis Information Resource (TAIR) as a case study of a subscription-based funding model.

    PubMed

    Reiser, Leonore; Berardini, Tanya Z; Li, Donghui; Muller, Robert; Strait, Emily M; Li, Qian; Mezheritsky, Yarik; Vetushko, Andrey; Huala, Eva

    2016-01-01

Databases and data repositories provide essential functions for the research community by integrating, curating, archiving and otherwise packaging data to facilitate discovery and reuse. Despite their importance, funding for maintenance of these resources is increasingly hard to obtain. Fueled by a desire to find long-term, sustainable solutions to database funding, staff from the Arabidopsis Information Resource (TAIR) founded the nonprofit organization Phoenix Bioinformatics, using TAIR as a test case for user-based funding. Subscription-based funding has been proposed as an alternative to grant funding but its application has been very limited within the nonprofit sector. Our testing of this model indicates that it is a viable option, at least for some databases, and that it is possible to strike a balance that maximizes access while still incentivizing subscriptions. One year after transitioning to subscription support, TAIR is self-sustaining and Phoenix is poised to expand and support additional resources that wish to incorporate user-based funding strategies. Database URL: www.arabidopsis.org. © The Author(s) 2016. Published by Oxford University Press.

  17. Sustainable funding for biocuration: The Arabidopsis Information Resource (TAIR) as a case study of a subscription-based funding model

    PubMed Central

    Berardini, Tanya Z.; Li, Donghui; Muller, Robert; Strait, Emily M.; Li, Qian; Mezheritsky, Yarik; Vetushko, Andrey; Huala, Eva

    2016-01-01

Databases and data repositories provide essential functions for the research community by integrating, curating, archiving and otherwise packaging data to facilitate discovery and reuse. Despite their importance, funding for maintenance of these resources is increasingly hard to obtain. Fueled by a desire to find long-term, sustainable solutions to database funding, staff from the Arabidopsis Information Resource (TAIR) founded the nonprofit organization Phoenix Bioinformatics, using TAIR as a test case for user-based funding. Subscription-based funding has been proposed as an alternative to grant funding but its application has been very limited within the nonprofit sector. Our testing of this model indicates that it is a viable option, at least for some databases, and that it is possible to strike a balance that maximizes access while still incentivizing subscriptions. One year after transitioning to subscription support, TAIR is self-sustaining and Phoenix is poised to expand and support additional resources that wish to incorporate user-based funding strategies. Database URL: www.arabidopsis.org PMID:26989150

  18. Microarray R-based analysis of complex lysate experiments with MIRACLE

    PubMed Central

    List, Markus; Block, Ines; Pedersen, Marlene Lemvig; Christiansen, Helle; Schmidt, Steffen; Thomassen, Mads; Tan, Qihua; Baumbach, Jan; Mollenhauer, Jan

    2014-01-01

    Motivation: Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. Typical challenges involved in this technology are antibody selection, sample preparation and optimization of staining conditions. The issue of combining effective sample management and data analysis, however, has been widely neglected. Results: This motivated us to develop MIRACLE, a comprehensive and user-friendly web application bridging the gap between spotting and array analysis by conveniently keeping track of sample information. Data processing includes correction of staining bias, estimation of protein concentration from response curves, normalization for total protein amount per sample and statistical evaluation. Established analysis methods have been integrated with MIRACLE, offering experimental scientists an end-to-end solution for sample management and for carrying out data analysis. In addition, experienced users have the possibility to export data to R for more complex analyses. MIRACLE thus has the potential to further spread utilization of RPPAs as an emerging technology for high-throughput protein analysis. Availability: Project URL: http://www.nanocan.org/miracle/ Contact: mlist@health.sdu.dk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25161257

  19. Microarray R-based analysis of complex lysate experiments with MIRACLE.

    PubMed

    List, Markus; Block, Ines; Pedersen, Marlene Lemvig; Christiansen, Helle; Schmidt, Steffen; Thomassen, Mads; Tan, Qihua; Baumbach, Jan; Mollenhauer, Jan

    2014-09-01

    Reverse-phase protein arrays (RPPAs) allow sensitive quantification of relative protein abundance in thousands of samples in parallel. Typical challenges involved in this technology are antibody selection, sample preparation and optimization of staining conditions. The issue of combining effective sample management and data analysis, however, has been widely neglected. This motivated us to develop MIRACLE, a comprehensive and user-friendly web application bridging the gap between spotting and array analysis by conveniently keeping track of sample information. Data processing includes correction of staining bias, estimation of protein concentration from response curves, normalization for total protein amount per sample and statistical evaluation. Established analysis methods have been integrated with MIRACLE, offering experimental scientists an end-to-end solution for sample management and for carrying out data analysis. In addition, experienced users have the possibility to export data to R for more complex analyses. MIRACLE thus has the potential to further spread utilization of RPPAs as an emerging technology for high-throughput protein analysis. Project URL: http://www.nanocan.org/miracle/. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
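
    The normalization step described in the MIRACLE abstracts above (scaling each sample's signal by its total protein amount) can be sketched as follows. This is a minimal illustration, not the tool's actual implementation, and the numbers are invented:

```python
# Hypothetical sketch of per-sample total-protein normalization, as
# described for MIRACLE; not the tool's actual code. Values are invented.

def normalize_by_total_protein(signals, total_protein):
    """Divide each sample's antibody signal by its total protein amount."""
    if len(signals) != len(total_protein):
        raise ValueError("one total-protein value is required per sample")
    return [s / t for s, t in zip(signals, total_protein)]

raw = [120.0, 300.0, 90.0]   # antibody staining intensities per sample
totals = [2.0, 5.0, 1.5]     # total protein amount per sample
print(normalize_by_total_protein(raw, totals))  # -> [60.0, 60.0, 60.0]
```

    After this correction, differences between samples reflect relative target-protein abundance rather than differences in the amount of lysate spotted.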

  20. Establishing a Link Between Prescription Drug Abuse and Illicit Online Pharmacies: Analysis of Twitter Data.

    PubMed

    Katsuki, Takeo; Mackey, Tim Ken; Cuomo, Raphael

    2015-12-16

Youth and adolescent non-medical use of prescription medications (NUPM) has become a national epidemic. However, little is known about the association between promotion of NUPM behavior and access via the popular social media microblogging site, Twitter, which is currently used by a third of all teens. In order to better assess NUPM behavior online, this study conducts surveillance and analysis of Twitter data to characterize the frequency of NUPM-related tweets and also identifies illegal access to drugs of abuse via online pharmacies. Tweets were collected over a 2-week period from April 1-14, 2015, by applying NUPM keyword filters for both generic/chemical and street names associated with drugs of abuse using the Twitter public streaming application programming interface. Tweets were then analyzed for relevance to NUPM and whether they promoted illegal online access to prescription drugs using a protocol of content coding and supervised machine learning. A total of 2,417,662 tweets were collected and analyzed for this study. Tweets filtered for generic drug names comprised 232,108 tweets, including 22,174 unique associated uniform resource locators (URLs), and 2,185,554 tweets (376,304 unique URLs) filtered for street names. Applying an iterative process of manual content coding and supervised machine learning, 81.72% of the generic and 12.28% of the street NUPM datasets were predicted as having content relevant to NUPM, respectively. By examining hyperlinks associated with NUPM relevant content for the generic Twitter dataset, we discovered that 75.72% of the tweets with URLs included a hyperlink to an online marketing affiliate that directly linked to an illicit online pharmacy advertising the sale of Valium without a prescription. This study examined the association between Twitter content, NUPM behavior promotion, and online access to drugs using a broad set of prescription drug keywords.
Initial results are concerning, as our study found over 45,000 tweets that directly promoted NUPM by providing a URL that actively marketed the illegal online sale of prescription drugs of abuse. Additional research is needed to further establish the link between Twitter content and NUPM, as well as to help inform future technology-based tools, online health promotion activities, and public policy to combat NUPM online.
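
    The keyword-filtering and URL-extraction steps described above can be illustrated with a toy sketch. The study used the Twitter public streaming API, which is not reproduced here; the tweets, keyword list, and URL below are invented for illustration:

```python
# Toy sketch of keyword filtering over tweet records; not the study's
# actual pipeline. Keywords, tweets, and URLs are invented.

GENERIC_NAMES = {"diazepam", "oxycodone"}  # illustrative keyword list

def filter_tweets(tweets, keywords):
    """Return tweets whose text mentions any keyword (case-insensitive)."""
    return [t for t in tweets
            if any(k in t["text"].lower() for k in keywords)]

def unique_urls(tweets):
    """Collect the distinct URLs attached to a set of tweets."""
    return {u for t in tweets for u in t.get("urls", [])}

sample = [
    {"text": "Buy Diazepam online no RX", "urls": ["http://example.org/a"]},
    {"text": "Nice weather today", "urls": []},
]
matched = filter_tweets(sample, GENERIC_NAMES)
print(len(matched), unique_urls(matched))  # -> 1 {'http://example.org/a'}
```

    In the study itself, the filtered tweets were then hand-coded and used to train a supervised classifier; that step is not sketched here.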

  1. Establishing a Link Between Prescription Drug Abuse and Illicit Online Pharmacies: Analysis of Twitter Data

    PubMed Central

    Cuomo, Raphael

    2015-01-01

Background Youth and adolescent non-medical use of prescription medications (NUPM) has become a national epidemic. However, little is known about the association between promotion of NUPM behavior and access via the popular social media microblogging site, Twitter, which is currently used by a third of all teens. Objective In order to better assess NUPM behavior online, this study conducts surveillance and analysis of Twitter data to characterize the frequency of NUPM-related tweets and also identifies illegal access to drugs of abuse via online pharmacies. Methods Tweets were collected over a 2-week period from April 1-14, 2015, by applying NUPM keyword filters for both generic/chemical and street names associated with drugs of abuse using the Twitter public streaming application programming interface. Tweets were then analyzed for relevance to NUPM and whether they promoted illegal online access to prescription drugs using a protocol of content coding and supervised machine learning. Results A total of 2,417,662 tweets were collected and analyzed for this study. Tweets filtered for generic drug names comprised 232,108 tweets, including 22,174 unique associated uniform resource locators (URLs), and 2,185,554 tweets (376,304 unique URLs) filtered for street names. Applying an iterative process of manual content coding and supervised machine learning, 81.72% of the generic and 12.28% of the street NUPM datasets were predicted as having content relevant to NUPM, respectively. By examining hyperlinks associated with NUPM relevant content for the generic Twitter dataset, we discovered that 75.72% of the tweets with URLs included a hyperlink to an online marketing affiliate that directly linked to an illicit online pharmacy advertising the sale of Valium without a prescription. Conclusions This study examined the association between Twitter content, NUPM behavior promotion, and online access to drugs using a broad set of prescription drug keywords.
Initial results are concerning, as our study found over 45,000 tweets that directly promoted NUPM by providing a URL that actively marketed the illegal online sale of prescription drugs of abuse. Additional research is needed to further establish the link between Twitter content and NUPM, as well as to help inform future technology-based tools, online health promotion activities, and public policy to combat NUPM online. PMID:26677966

  2. Author Correction: Genome-wide analysis of multi- and extensively drug-resistant Mycobacterium tuberculosis.

    PubMed

    Coll, Francesc; Phelan, Jody; Hill-Cawthorne, Grant A; Nair, Mridul B; Mallard, Kim; Ali, Shahjahan; Abdallah, Abdallah M; Alghamdi, Saad; Alsomali, Mona; Ahmed, Abdallah O; Portelli, Stephanie; Oppong, Yaa; Alves, Adriana; Bessa, Theolis Barbosa; Campino, Susana; Caws, Maxine; Chatterjee, Anirvan; Crampin, Amelia C; Dheda, Keertan; Furnham, Nicholas; Glynn, Judith R; Grandjean, Louis; Ha, Dang Minh; Hasan, Rumina; Hasan, Zahra; Hibberd, Martin L; Joloba, Moses; Jones-López, Edward C; Matsumoto, Tomoshige; Miranda, Anabela; Moore, David J; Mocillo, Nora; Panaiotov, Stefan; Parkhill, Julian; Penha, Carlos; Perdigão, João; Portugal, Isabel; Rchiad, Zineb; Robledo, Jaime; Sheen, Patricia; Shesha, Nashwa Talaat; Sirgel, Frik A; Sola, Christophe; Sousa, Erivelton Oliveira; Streicher, Elizabeth M; Van Helden, Paul; Viveiros, Miguel; Warren, Robert M; McNerney, Ruth; Pain, Arnab; Clark, Taane G

    2018-05-01

    In the version of this article initially published, the URL listed for TubercuList was incorrect. The correct URL is https://mycobrowser.epfl.ch/. The error has been corrected in the HTML and PDF versions of the article.

  3. The DOI Is Coming.

    ERIC Educational Resources Information Center

    Scharf, Davida

    2002-01-01

    Discussion of improving accessibility to copyrighted electronic content focuses on the Digital Object Identifier (DOI) and the Open URL standard and linking software. Highlights include work of the World Wide Web consortium; URI (Uniform Resource Identifier); URL (Uniform Resource Locator); URN (Uniform Resource Name); OCLC's (Online Computer…

  4. Maser: one-stop platform for NGS big data from analysis to visualization

    PubMed Central

    Kinjo, Sonoko; Monma, Norikazu; Misu, Sadahiko; Kitamura, Norikazu; Imoto, Junichi; Yoshitake, Kazutoshi; Gojobori, Takashi; Ikeo, Kazuho

    2018-01-01

A major challenge in analyzing the data from high-throughput next-generation sequencing (NGS) is how to handle the huge amounts of data and variety of NGS tools and visualize the resultant outputs. To address these issues, we developed a cloud-based data analysis platform, Maser (Management and Analysis System for Enormous Reads), and an original genome browser, Genome Explorer (GE). Maser enables users to manage up to 2 terabytes of data to conduct analyses with easy graphical user interface operations and offers analysis pipelines in which several individual tools are combined as a single pipeline for very common and standard analyses. GE automatically visualizes genome assembly and mapping results output from Maser pipelines, without requiring additional data upload. With this function, the Maser pipelines can graphically display the results output from all the embedded tools and mapping results in a web browser. Maser therefore provides a more user-friendly analysis platform, especially for beginners, by improving the graphical display and providing selected standard pipelines that work with the built-in genome browser. In addition, all the analyses executed on Maser are recorded in the analysis history, helping users to trace and repeat the analyses. The entire process of analysis and its histories can be shared with collaborators or opened to the public. In conclusion, our system is useful for managing, analyzing, and visualizing NGS data and achieves traceability, reproducibility, and transparency of NGS analysis. Database URL: http://cell-innovation.nig.ac.jp/maser/ PMID:29688385

  5. Functional Requirements for Information Resource Provenance on the Web

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCusker, James P.; Lebo, Timothy; Graves, Alvaro

We provide a means to formally explain the relationship between HTTP URLs and the representations returned when they are requested. According to existing World Wide Web architecture, the URL serves as an identifier for a semiotic referent while the document returned via HTTP serves as a representation of the same referent. This begins with two sides of a semiotic triangle; the third side is the relationship between the URL and the representation received. We complete this description by extending the library science resource model Functional Requirements for Bibliographic Resources (FRBR) with cryptographic message and content digests to create a Functional Requirements for Information Resources (FRIR). We show how applying the FRIR model to HTTP GET and POST transactions disambiguates the many relationships between a given URL and all representations received from its request, provides fine-grained explanations that are complementary to existing explanations of web resources, and integrates easily into the emerging W3C provenance standard.
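
    The core FRIR idea of disambiguating a URL from the representations it returns can be illustrated with cryptographic content digests. This is a hedged sketch: the HTTP fetches are stubbed out with hard-coded bytes, and the record layout is invented, not taken from the FRIR model itself:

```python
# Sketch: naming each received representation by its SHA-256 content
# digest, so that two GETs of the same URL can be distinguished.
# The HTTP transfer is stubbed out; the record format is illustrative.
import hashlib

def content_digest(representation: bytes) -> str:
    """Return a SHA-256 hex digest identifying the exact bytes received."""
    return hashlib.sha256(representation).hexdigest()

# Two requests to the same URL may return different representations;
# digests make that one-to-many relationship explicit.
rep_monday = b"<html>v1</html>"
rep_tuesday = b"<html>v2</html>"
record = {
    "url": "http://example.org/resource",
    "digests": [content_digest(rep_monday), content_digest(rep_tuesday)],
}
print(len(set(record["digests"])))  # -> 2: two distinct representations
```

    A provenance record keyed this way can assert, for each transaction, exactly which representation the URL resolved to at that moment.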

  6. Publisher Correction: N6-methyladenosine RNA modification regulates embryonic neural stem cell self-renewal through histone modifications.

    PubMed

    Wang, Yang; Li, Yue; Yue, Minghui; Wang, Jun; Kumar, Sandeep; Wechsler-Reya, Robert J; Zhang, Zhaolei; Ogawa, Yuya; Kellis, Manolis; Duester, Gregg; Zhao, Jing Crystal

    2018-06-07

    In the version of this article initially published online, there were errors in URLs for www.southernbiotech.com, appearing in Methods sections "m6A dot-blot" and "Western blot analysis." The first two URLs should be https://www.southernbiotech.com/?catno=4030-05&type=Polyclonal#&panel1-1 and the third should be https://www.southernbiotech.com/?catno=6170-05&type=Polyclonal. In addition, some Methods URLs for bioz.com, www.abcam.com and www.sysy.com were printed correctly but not properly linked. The errors have been corrected in the PDF and HTML versions of this article.

  7. GigaDB: promoting data dissemination and reproducibility

    PubMed Central

    Sneddon, Tam P.; Si Zhe, Xiao; Edmunds, Scott C.; Li, Peter; Goodman, Laurie; Hunter, Christopher I.

    2014-01-01

    Often papers are published where the underlying data supporting the research are not made available because of the limitations of making such large data sets publicly and permanently accessible. Even if the raw data are deposited in public archives, the essential analysis intermediaries, scripts or software are frequently not made available, meaning the science is not reproducible. The GigaScience journal is attempting to address this issue with the associated data storage and dissemination portal, the GigaScience database (GigaDB). Here we present the current version of GigaDB and reveal plans for the next generation of improvements. However, most importantly, we are soliciting responses from you, the users, to ensure that future developments are focused on the data storage and dissemination issues that still need resolving. Database URL: http://www.gigadb.org PMID:24622612

  8. OGDD (Olive Genetic Diversity Database): a microsatellite markers' genotypes database of worldwide olive trees for cultivar identification and virgin olive oil traceability.

    PubMed

    Ben Ayed, Rayda; Ben Hassen, Hanen; Ennouri, Karim; Ben Marzoug, Riadh; Rebai, Ahmed

    2016-01-01

Olive (Olea europaea), whose importance is mainly due to nutritional and health features, is one of the most economically significant oil-producing trees in the Mediterranean region. Unfortunately, the increasing market demand towards virgin olive oil could often result in its adulteration with less expensive oils, which is a serious problem for the public and quality control evaluators of virgin olive oil. Therefore, to avoid frauds, olive cultivar identification and virgin olive oil authentication have become a major issue for the producers and consumers of quality control in the olive chain. Presently, genetic traceability using SSR is a cost-effective and powerful marker technique that can be employed to resolve such problems. However, to identify an unknown monovarietal virgin olive oil cultivar, a reference system has become necessary. Thus, an Olive Genetic Diversity Database (OGDD) (http://www.bioinfo-cbs.org/ogdd/) is presented in this work. It is a genetic, morphologic and chemical database of worldwide olive tree and oil having a double function. In fact, besides being a reference system generated for the identification of unknown olive or virgin olive oil cultivars based on their microsatellite allele size(s), it provides users with additional morphological and chemical information for each identified cultivar. Currently, OGDD is designed to enable users to easily retrieve and visualize biologically important information (SSR markers, and olive tree and oil characteristics of about 200 cultivars worldwide) using a set of efficient query interfaces and analysis tools. It can be accessed through a web service from any modern programming language using a simple hypertext transfer protocol call. The web site is implemented in Java, JavaScript, PHP, HTML and Apache with all major browsers supported. Database URL: http://www.bioinfo-cbs.org/ogdd/. © The Author(s) 2016. Published by Oxford University Press.
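
    The identification function OGDD provides (matching an unknown sample's microsatellite allele sizes against reference profiles) can be sketched roughly as below. The marker names, cultivar names, and allele sizes are made up for illustration and do not come from the database:

```python
# Hypothetical sketch of SSR allele-size matching for cultivar
# identification, in the spirit of OGDD; all data below are invented.

REFERENCE = {
    "CultivarA": {"SSR1": (237, 245), "SSR2": (162, 172)},
    "CultivarB": {"SSR1": (229, 243), "SSR2": (162, 182)},
}

def identify(profile, reference):
    """Return cultivars whose allele sizes match the profile at every marker."""
    return [
        name for name, alleles in reference.items()
        if all(alleles.get(marker) == sizes for marker, sizes in profile.items())
    ]

unknown = {"SSR1": (237, 245), "SSR2": (162, 172)}
print(identify(unknown, REFERENCE))  # -> ['CultivarA']
```

    A real query would compare many more markers and tolerate measurement error in allele sizing; exact tuple equality is used here only to keep the sketch short.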

  9. Addition of a breeding database in the Genome Database for Rosaceae

    PubMed Central

    Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie

    2013-01-01

Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. Development of new infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program and subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage list individuals with parents in common, and results from Individual Variety pages link to all data available on each chosen individual including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner and retrieve and compare performance data from multiple selections, years and sites, and to output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management. 
Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox PMID:24247530

  10. Addition of a breeding database in the Genome Database for Rosaceae.

    PubMed

    Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie

    2013-01-01

Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. Development of new infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program and subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage list individuals with parents in common, and results from Individual Variety pages link to all data available on each chosen individual including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner and retrieve and compare performance data from multiple selections, years and sites, and to output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management. 
Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox.

  11. Phisherman v 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Phisherman is an online software tool that was created to help experimenters study phishing. It can potentially be re-purposed to run other human studies. Phisherman enables studies to be run online, so that users can participate from their own computers. This means that experimenters can get data from subjects in their natural settings. Alternatively, an experimenter can also run the app online in a lab-based setting, if that is desired. The software enables the online deployment of a study that comprises three main parts: (1) a consent page, (2) a survey, and (3) an identification task, with instruction/transition screens between each part, allowing the experimenter to provide the user with instructions and messages. Upon logging in, the subject is taken to the consent page, where they agree to or do not agree to take part in the study. If the subject agrees to participate, then the software randomly chooses between doing the survey first (and identification task second) or the identification task first (and survey second). This is to balance possible order effects in the data. Procedurally, in the identification task, the software shows the stimuli to the subject, and asks if she thinks it is a phish (yes/no) and how confident she is about her answer. The subject is given 5 levels of certainty to select from, labeled "low" (1), to "medium" (3), to "high" (5), with the option of picking a level between low and medium (2), and between medium and high (4). After selecting his/her confidence level, the "Next" button activates, allowing a user to move to the next email. The software saves a given subject's progress in the identification task, so that she may log in and out of the site. The consent page is a space for the experimenter to provide the subject with human studies board/institutional review board information, and to formally consent to participate in the study. 
The survey is a space for the experimenter to provide questions and spaces for the users to input answers (allowing both multiple-choice and free-answer options). Phisherman includes administrative pages for managing the stimuli and users. This includes a tool for the experimenter to create, preview, edit, delete (if desired), and manage stimuli (emails). The stimuli may include pictures (uploaded to an appropriate folder) and links, for realism. The software includes a safety feature that prevents the user from going to any link location or opening a file/image. Instead of re-directing the subject's browser, the software provides a pop-up box with the URL location of where the user would have gone. Another administrative page may be used to create fake subject accounts for testing the software prior to deployment, as well as to delete subject accounts when necessary. Data from the experiment can be downloaded from another administrative page.
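
    The order counterbalancing and 5-point confidence scale described for Phisherman can be sketched as follows. This is an illustrative Python sketch, not the tool's actual code:

```python
# Sketch of the order randomization described above: each subject is
# randomly assigned survey-first or task-first to balance order effects.
# This illustrates the design, not Phisherman's implementation.
import random

def assign_order(rng=random):
    """Randomly choose which part the subject completes first."""
    return rng.choice([("survey", "identification"),
                       ("identification", "survey")])

def valid_confidence(level: int) -> bool:
    """Confidence is an integer on the 5-point low(1)-medium(3)-high(5) scale."""
    return level in {1, 2, 3, 4, 5}

random.seed(0)            # seeded only to make this demo repeatable
print(assign_order())     # one of the two possible orders
print(valid_confidence(3))  # -> True
```

    Randomizing the order per subject means that, over many subjects, any effect of doing the survey before the identification task (or vice versa) averages out of the comparison.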

  12. Storing, Browsing, Querying, and Sharing Data: the THREDDS Data Repository (TDR)

    NASA Astrophysics Data System (ADS)

    Wilson, A.; Lindholm, D.; Baltzer, T.

    2005-12-01

    The Unidata Internet Data Distribution (IDD) network delivers gigabytes of data per day in near real time to sites across the U.S. and beyond. The THREDDS Data Server (TDS) supports public browsing of metadata and data access via OPeNDAP enabled URLs for datasets such as these. With such large quantities of data, sites generally employ a simple data management policy, keeping the data for a relatively short term on the order of hours to perhaps a week or two. In order to save interesting data in longer term storage and make it available for sharing, a user must move the data herself. In this case the user is responsible for determining where space is available, executing the data movement, generating any desired metadata, and setting access control to enable sharing. This task sequence is generally based on execution of a sequence of low level operating system specific commands with significant user involvement. The LEAD (Linked Environments for Atmospheric Discovery) project is building a cyberinfrastructure to support research and education in mesoscale meteorology. LEAD orchestrations require large, robust, and reliable storage with speedy access to stage data and store both intermediate and final results. These requirements suggest storage solutions that involve distributed storage, replication, and interfacing to archival storage systems such as mass storage systems and tape or removable disks. LEAD requirements also include metadata generation and access in order to support querying. In support of both THREDDS and LEAD requirements, Unidata is designing and prototyping the THREDDS Data Repository (TDR), a framework for a modular data repository to support distributed data storage and retrieval using a variety of back end storage media and interchangeable software components. 
The TDR interface will provide high level abstractions for long term storage, controlled, fast and reliable access, and data movement capabilities via a variety of technologies such as OPeNDAP and gridftp. The modular structure will allow substitution of software components so that both simple and complex storage media can be integrated into the repository. It will also allow integration of different varieties of supporting software. For example, if replication is desired, replica management could be handled via a simple hash table or a complex solution such as the Replica Location Service (RLS). In order to ensure that metadata is available for all the data in the repository, the TDR will also generate THREDDS metadata when necessary. Users will be able to establish levels of access control to their metadata and data. Coupled with a THREDDS Data Server, both browsing via THREDDS catalogs and querying capabilities will be supported. This presentation will describe the motivating factors, current status, and future plans of the TDR. References: IDD: http://www.unidata.ucar.edu/content/software/idd/index.html THREDDS: http://www.unidata.ucar.edu/content/projects/THREDDS/tech/server/ServerStatus.html LEAD: http://lead.ou.edu/ RLS: http://www.isi.edu/~annc/papers/chervenakRLSjournal05.pdf
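
    The "simple hash table" option for replica management mentioned above can be illustrated with a minimal sketch. The class and method names here are hypothetical, not part of the TDR design:

```python
# Minimal sketch of hash-table replica management: a mapping from a
# logical dataset name to the set of physical locations holding a copy.
class ReplicaRegistry:
    def __init__(self):
        self._replicas = {}  # logical name -> set of physical location strings

    def register(self, logical_name: str, location: str) -> None:
        """Record that a copy of the dataset exists at the given location."""
        self._replicas.setdefault(logical_name, set()).add(location)

    def locate(self, logical_name: str):
        """Return all known physical locations for a dataset (empty list if none)."""
        return sorted(self._replicas.get(logical_name, set()))

    def unregister(self, logical_name: str, location: str) -> None:
        """Forget one copy, e.g. after it is purged from short-term storage."""
        self._replicas.get(logical_name, set()).discard(location)
```

    A production deployment could swap this component for a service such as RLS behind the same interface, which is exactly the kind of substitution the modular TDR framework is meant to allow.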

  13. Distribution and prediction of catalytic domains in 2-oxoglutarate dependent dioxygenases

    PubMed Central

    2012-01-01

    Background The 2-oxoglutarate dependent superfamily is a diverse group of non-haem dioxygenases present in prokaryotes, eukaryotes, and archaea. The enzymes differ in substrate preference and reaction chemistry, a factor that precludes their classification by homology studies and electronic annotation schemes alone. In this work, I propose and explore the rationale of using substrates to classify structurally similar alpha-ketoglutarate dependent enzymes. Findings Differential catalysis in phylogenetic clades of 2-OG dependent enzymes is determined by the interactions of a subset of active-site amino acids. Identifying these with existing computational methods is challenging and not feasible for all proteins. A clustering protocol based on validated mechanisms of catalysis of known molecules, in tandem with group-specific hidden Markov model profiles, is able to differentiate and sequester these enzymes. Access to this repository is by a web server that compares user-defined unknown sequences to these pre-defined profiles and outputs a list of predicted catalytic domains. The server is free and is accessible at the following URL: http://comp-biol.theacms.in/H2OGpred.html. Conclusions The proposed stratification is a novel attempt at classifying and predicting 2-oxoglutarate dependent function. In addition, the server will provide researchers with a tool to compare their data to a comprehensive list of HMM profiles of catalytic domains. This work will aid efforts by investigators to screen and characterize putative 2-OG dependent sequences. The profile database will be updated at regular intervals. PMID:22862831
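
    Classification by best-scoring group-specific profile, as the server above performs, can be sketched conceptually. This is a toy illustration only: real group-specific profiles are HMMs scored with a tool such as HMMER, and the "signatures" and group names below are invented:

```python
# Toy sketch of profile-based classification: each substrate group is
# represented by a set of signature active-site residues, and an unknown
# sequence is assigned to the group whose signature it matches best.
# Real classification would score full profile HMMs, not residue sets.
TOY_PROFILES = {
    "hydroxylase-like": {"H", "D", "R"},
    "halogenase-like": {"H", "A", "R"},
}

def classify(sequence: str) -> str:
    """Return the group whose signature shares the most residues with the sequence."""
    residues = set(sequence)
    scores = {group: len(signature & residues)
              for group, signature in TOY_PROFILES.items()}
    return max(scores, key=scores.get)
```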

  14. Overview of the interactive task in BioCreative V

    PubMed Central

    Wang, Qinghua; S. Abdul, Shabbir; Almeida, Lara; Ananiadou, Sophia; Balderas-Martínez, Yalbi I.; Batista-Navarro, Riza; Campos, David; Chilton, Lucy; Chou, Hui-Jou; Contreras, Gabriela; Cooper, Laurel; Dai, Hong-Jie; Ferrell, Barbra; Fluck, Juliane; Gama-Castro, Socorro; George, Nancy; Gkoutos, Georgios; Irin, Afroza K.; Jensen, Lars J.; Jimenez, Silvia; Jue, Toni R.; Keseler, Ingrid; Madan, Sumit; Matos, Sérgio; McQuilton, Peter; Milacic, Marija; Mort, Matthew; Natarajan, Jeyakumar; Pafilis, Evangelos; Pereira, Emiliano; Rao, Shruti; Rinaldi, Fabio; Rothfels, Karen; Salgado, David; Silva, Raquel M.; Singh, Onkar; Stefancsik, Raymund; Su, Chu-Hsien; Subramani, Suresh; Tadepally, Hamsa D.; Tsaprouni, Loukia; Vasilevsky, Nicole; Wang, Xiaodong; Chatr-Aryamontri, Andrew; Laulederkind, Stanley J. F.; Matis-Mitchell, Sherri; McEntyre, Johanna; Orchard, Sandra; Pundir, Sangya; Rodriguez-Esteban, Raul; Van Auken, Kimberly; Lu, Zhiyong; Schaeffer, Mary; Wu, Cathy H.; Hirschman, Lynette; Arighi, Cecilia N.

    2016-01-01

    Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial level participation was designed to focus on usability aspects of the interface and not the performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and then were asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution and discuss major findings for the systems tested. Database URL: http://www.biocreative.org PMID:27589961

  15. Visualization of historical data for the ATLAS detector controls - DDV

    NASA Astrophysics Data System (ADS)

    Maciejewski, J.; Schlenker, S.

    2017-10-01

    The ATLAS experiment is one of four detectors located on the Large Hadron Collider (LHC) at CERN. Its detector control system (DCS) stores the slow control data acquired within the back-end of distributed WinCC OA applications in an Oracle relational database, which enables the data to be retrieved for future analysis, debugging and detector development. The ATLAS DCS Data Viewer (DDV) is a client-server application providing access to the historical data outside of the experiment network. The server builds optimized SQL queries, retrieves the data from the database and serves it to the clients via HTTP connections. The server also implements protection methods to prevent malicious use of the database. The client is an AJAX-type web application based on the Vaadin framework (built around the Google Web Toolkit (GWT)), which gives users the possibility to access the data with ease. The DCS metadata can be selected using a column-tree navigation or a search engine supporting regular expressions. The data is visualized by a selection of output modules, such as JavaScript value-over-time plots or a lazy-loading table widget. Additional plugins give users the possibility to retrieve the data in ROOT format or as an ASCII file. Control system alarms can also be visualized in a dedicated table if necessary. Python mock-up scripts can be generated by the client, allowing the user to query the DDV server directly from Python and to embed the scripts into more complex analysis programs. Users are also able to store searches and output configurations as XML on the server, to share with others via URL or to embed in HTML.
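
    Building SQL server-side while guarding against malicious input is conventionally done with parameterized queries. A generic sketch follows, using sqlite3 for self-containment (DDV itself targets Oracle) and an invented table layout:

```python
import sqlite3

# Generic sketch of safe server-side query building: user-supplied values
# are bound as parameters, never interpolated into the SQL string, so a
# hostile "element" value cannot alter the query. Table layout is invented.
def fetch_history(conn, element: str, start: float, end: float):
    cur = conn.execute(
        "SELECT ts, value FROM dcs_history "
        "WHERE element = ? AND ts BETWEEN ? AND ? ORDER BY ts",
        (element, start, end),
    )
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dcs_history (element TEXT, ts REAL, value REAL)")
conn.executemany("INSERT INTO dcs_history VALUES (?, ?, ?)",
                 [("temp1", 1.0, 20.5), ("temp1", 2.0, 21.0), ("temp2", 1.5, 3.3)])
rows = fetch_history(conn, "temp1", 0.0, 1.5)  # only readings for temp1 in [0.0, 1.5]
```

    An HTTP layer would then serialize such rows (e.g. as JSON) to the browser client; the protection comes from the parameter binding, not from the transport.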

  16. Development of a site analysis tool for distributed wind projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Shawn

    The Cadmus Group, Inc., in collaboration with the National Renewable Energy Laboratory (NREL) and Encraft, was awarded a grant from the Department of Energy (DOE) to develop a site analysis tool for distributed wind technologies. As the principal investigator for this project, Mr. Shawn Shaw was responsible for overall project management, direction, and technical approach. The product resulting from this project is the Distributed Wind Site Analysis Tool (DSAT), a software tool for analyzing proposed sites for distributed wind technology (DWT) systems. This user-friendly tool supports the long-term growth and stability of the DWT market by providing reliable, realistic estimates of site and system energy output and feasibility. DSAT, which is accessible online and requires no purchase or download of software, is available in two account types. Standard: this free account allows the user to analyze a limited number of sites and to produce a system performance report for each. Professional: for a small annual fee, users can analyze an unlimited number of sites, produce system performance reports, and generate other customizable reports containing key information such as visual influence and wind resources. The tool's interactive maps allow users to create site models that incorporate the obstructions and terrain types present. Users can generate site reports immediately after entering the requisite site information. Ideally, this tool also educates users regarding good site selection and effective evaluation practices.

  17. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    NASA Astrophysics Data System (ADS)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users, the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number, but typically access more data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups, and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From here, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file collection metadata. Multiple levels of metadata have proven to be invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. 
Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is ready. External users are provided with RDA server generated scripts to download the resulting request output. Similarly they can download native dataset collection files or partial files using Wget or cURL based scripts supplied by the RDA server. Internal users can access the resulting request output or native dataset collection files directly from centralized file systems.
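
    Server-generated download scripts of the kind described are typically just loops over file URLs. A hypothetical sketch follows; the host, dataset path, and cookie-file name are placeholders, not real RDA endpoints:

```python
# Sketch of generating a batch-download shell script for a file listing,
# in the spirit of the RDA's server-generated Wget/cURL scripts.
# The base URL and cookie file below are invented placeholders.
def curl_script(base_url: str, filenames, auth_cookie: str = "auth.rda") -> str:
    """Return a shell script that downloads each file with curl (-b reads cookies, -O keeps the filename)."""
    lines = ["#!/bin/sh"]
    for name in filenames:
        lines.append(f"curl -b {auth_cookie} -O {base_url}/{name}")
    return "\n".join(lines)

script = curl_script("https://example.org/data/ds999.9",
                     ["file1.grib2", "file2.grib2"])
```

    Internal users on centralized file systems skip this step entirely and read the request output in place, which is the asymmetry the abstract describes.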

  18. Raising the Degree of Service-Orientation of a SOA-based Software System: A Case Study

    DTIC Science & Technology

    2009-12-01

    protocols, as well as executable processes that can be compiled into runtime scripts” [2] The Business Process Modeling Notation ( BPMN ) provides a...Notation ( BPMN ) 1.2. Jan. 2009. URL: http://www.omg.org/spec/ BPMN /1.2/ [25] .NET Framework Developer Center. .NET Remoting Overview. 2003. URL: http

  19. Effectiveness of Prophylactic Antibiotics against Post-Ureteroscopic Lithotripsy Infections: Systematic Review and Meta-Analysis.

    PubMed

    Lo, Chi-Wen; Yang, Stephen Shei-Dei; Hsieh, Cheng-Hsing; Chang, Shang-Jen

    2015-08-01

    To evaluate the effectiveness of prophylactic antibiotic therapy in reducing the incidence of post-ureteroscopic lithotripsy (URL) infections. A systematic search of PubMed was performed to identify all randomized trials that compared the incidence of post-operative infections in patients without pre-operative urinary tract infections who underwent URL with and without a single dose of prophylactic antibiotics. The data were analyzed using Cochrane Collaboration Review Manager (RevMan, version 5.2). The endpoints of the analysis were pyuria (>10 white blood cells/high-power field), bacteriuria (urine culture with bacteria >10^5 colony-forming units/mL), and febrile urinary tract infections (fUTIs), defined as a body temperature of >38°C with pyuria or meaningful bacteriuria within 1 wk after the operation. In total, four trials enrolling 500 patients met the inclusion criteria and were subjected to meta-analysis. Prophylactic antibiotics significantly reduced post-URL pyuria (risk ratio [RR] 0.65; 95% confidence interval [CI] 0.51-0.82) and bacteriuria (RR 0.26; 95% CI 0.12-0.60; p=0.001). Patients who received prophylactic antibiotics tended to have lower rates of fUTI, although the difference was not statistically significant. Prophylactic antibiotic therapy can reduce the incidence of pyuria and bacteriuria after URL. However, because of the low incidence of post-URL fUTIs, we failed to show that a single dose of prophylactic antibiotics significantly reduces the rate of such infections.
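
    The effect measures reported above (e.g. RR 0.65; 95% CI 0.51-0.82) follow standard risk-ratio arithmetic on a 2x2 table. A sketch with made-up illustrative counts, not data from the cited trials:

```python
import math

# Risk ratio and 95% CI from a 2x2 table: events/total in the treated
# (antibiotics) and control arms. The CI is computed on the log scale,
# the usual approach. Counts in the example call are illustrative only.
def risk_ratio(events_t, total_t, events_c, total_c, z=1.96):
    rr = (events_t / total_t) / (events_c / total_c)
    se = math.sqrt(1/events_t - 1/total_t + 1/events_c - 1/total_c)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical example: 10/100 events with prophylaxis vs 20/100 without.
rr, lo, hi = risk_ratio(10, 100, 20, 100)
```

    Note how a CI that crosses 1.0, as in this hypothetical example, corresponds to the "not statistically significant" fUTI finding in the abstract, while the pyuria and bacteriuria CIs lie entirely below 1.0.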

  20. Pele Plume Deposit on Io

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The varied effects of Ionian volcanism can be seen in this false color infrared composite image of Io's trailing hemisphere. Low resolution color data from Galileo's first orbit (June, 1996) have been combined with a higher resolution clear filter picture taken on the third orbit (November, 1996) of the spacecraft around Jupiter.

    A diffuse ring of bright red material encircles Pele, the site of an ongoing, high-velocity volcanic eruption. Pele's plume is nearly invisible, except in back-lit photographs, but its deposits indicate energetic ejection of sulfurous materials out to distances of more than 600 kilometers from the central vent. Another bright red deposit lies adjacent to Marduk, also a currently active edifice. High-temperature hot spots have been detected at both of these locations, due to the eruption of molten material in lava flows or lava lakes. Bright red deposits on Io darken and disappear within years or decades of deposition, so the presence of bright red materials marks the sites of recent volcanism.

    This composite was created from data obtained by the Solid State Imaging (CCD) system aboard NASA's Galileo spacecraft. The region imaged is centered on 15 degrees South, 224 degrees West, and is almost 2400 kilometers across. The finest details that can be discerned in this picture are about 3 kilometers across. North is towards the top of the picture and the sun illuminates the surface from the west.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  1. Comparison Analysis among Large Amount of SNS Sites

    NASA Astrophysics Data System (ADS)

    Toriumi, Fujio; Yamamoto, Hitoshi; Suwa, Hirohiko; Okada, Isamu; Izumi, Kiyoshi; Hashimoto, Yasuhiro

    In recent years, Social Networking Services (SNS) and blogs have grown as new communication tools on the Internet. Several large-scale SNS sites are prospering; meanwhile, many sites of relatively small scale are offering services. Such small-scale SNSs realize a small-group, isolated type of communication that neither mixi nor MySpace can provide. However, most studies of SNS concern particular large-scale SNSs and cannot determine whether their results reflect general features or characteristics specific to those SNSs. From the point of view of comparative analysis of SNS, comparing just a handful of sites cannot reach a statistically significant level. We analyze many SNS sites with the aim of classifying them by several approaches. Our paper classifies 50,000 small-scale SNS sites and characterizes them in terms of network structure, patterns of communication, and growth rate. The analysis of network structure shows that many SNS sites have the small-world attribute, with short path lengths and high clustering coefficients. The degree distributions of the SNS sites are close to a power law. This result indicates that the small-scale SNS sites have a higher percentage of users with many friends than mixi does. According to the analysis of their assortativity coefficients, these SNS sites have negative assortativity, meaning that users with high degree tend to connect to users with small degree. Next, we analyze the patterns of user communication. A friend network of an SNS is explicit, while users' communication behaviors define an implicit network. What kind of relationship do these networks have? To address this question, we obtain some characteristics of users' communication structure and activation patterns of users on the SNS sites. 
Using two new indexes, the friend aggregation rate and the friend coverage rate, we show that SNS sites with a high friend coverage rate have active diary postings and comments. Moreover, on sites with both a high friend aggregation rate and a high friend coverage rate, activity emerges when hub users with high degree are not behaving actively; on the other hand, on sites with a low friend aggregation rate and a high friend coverage rate, activity emerges when hub users behave actively. Finally, we observe SNS sites that are increasing their number of users considerably, from the viewpoint of network structure, and extract characteristics of high-growth SNS sites. As a result of discrimination based on decision tree analysis, we can recognize high-growth SNS sites with a high degree of accuracy. This approach also suggests that mixi and the small-scale SNS sites have different character traits.
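
    The negative assortativity discussed above is the Pearson correlation of the degrees found at the two ends of each edge. A minimal self-contained sketch (a star network, the extreme hub-to-leaf case, yields -1):

```python
import math

# Degree assortativity: Pearson correlation of degrees across edge
# endpoints, counting each edge in both directions. Negative values mean
# high-degree users tend to connect to low-degree users.
def degree_assortativity(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

# A star network: one hub connected to three leaves (maximally disassortative).
r = degree_assortativity([(0, 1), (0, 2), (0, 3)])
```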

  2. Developing a Taxonomy of Characteristics and Features of Collaboration Tools for Teams in Distributed Environments

    DTIC Science & Technology

    2007-09-01

    Motion URL: http://www.blackberry.com/products/blackberry/index.shtml Software Name: Bricolage Company: Bricolage URL: http://www.bricolage.cc...Workflow Customizable control over editorial content. Bricolage Bricolage Feature Description Software Company Workflow Allows development...content for Nuxeo Collaborative Portal projects. Nuxeo Workspace Add, edit, delete, content through web interface. Bricolage Bricolage

  3. 40 CFR 53.23 - Test procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... up and stabilize. Determine measurement noise at each of two fixed concentrations, first using zero.... Note to § 53.23(b)(2): Use of a chart recorder in addition to the DM is optional. (iii) Measure zero... atmosphere concentration of 80 ±5 percent of the URL instead of zero air, and let S at 80 percent of the URL...
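
    The measurement-noise statistic in test procedures of this kind is conventionally the sample standard deviation of repeated readings at a fixed test concentration (here, zero air and a test atmosphere at 80% of the URL, the upper range limit). A generic sketch, not the regulation's exact worked form, with illustrative readings:

```python
import math

# Generic sketch: instrument noise as the sample standard deviation of
# repeated readings at a fixed concentration. The readings below are
# illustrative values in concentration units, not test data.
def noise(readings):
    n = len(readings)
    mean = sum(readings) / n
    return math.sqrt(sum((r - mean) ** 2 for r in readings) / (n - 1))

s_zero = noise([0.1, -0.2, 0.0, 0.2, -0.1])  # noise about zero air
```

    The same computation would be repeated on readings taken at 80% of the URL to obtain the second noise figure the procedure calls for.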

  4. 40 CFR 53.23 - Test procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... up and stabilize. Determine measurement noise at each of two fixed concentrations, first using zero.... Note to § 53.23(b)(2): Use of a chart recorder in addition to the DM is optional. (iii) Measure zero... atmosphere concentration of 80 ±5 percent of the URL instead of zero air, and let S at 80 percent of the URL...

  5. 40 CFR 53.23 - Test procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... up and stabilize. Determine measurement noise at each of two fixed concentrations, first using zero.... Note to § 53.23(b)(2): Use of a chart recorder in addition to the DM is optional. (iii) Measure zero... atmosphere concentration of 80 ±5 percent of the URL instead of zero air, and let S at 80 percent of the URL...

  6. Divide and Recombine for Large Complex Data

    DTIC Science & Technology

    2017-12-01

    Empirical Methods in Natural Language Processing , October 2014 Keywords Enter keywords for the publication. URL Enter the URL...low-latency data processing systems. Declarative Languages for Interactive Visualization: The Reactive Vega Stack Another thread of XDATA research...for array processing operations embedded in the R programming language . Vector virtual machines work well for long vectors. One of the most

  7. TriatoKey: a web and mobile tool for biodiversity identification of Brazilian triatomine species

    PubMed Central

    Márcia de Oliveira, Luciana; Nogueira de Brito, Raissa; Anderson Souza Guimarães, Paul; Vitor Mastrângelo Amaro dos Santos, Rômulo; Gonçalves Diotaiuti, Liléia; de Cássia Moreira de Souza, Rita

    2017-01-01

    Abstract Triatomines are blood-sucking insects that transmit the causative agent of Chagas disease, Trypanosoma cruzi. Despite being recognized as a difficult task, the correct taxonomic identification of triatomine species is crucial for vector control in Latin America, where the disease is endemic. In this context, we have developed a web and mobile tool based on a PostgreSQL database to help healthcare technicians overcome the difficulty of identifying triatomine vectors when technical expertise is missing. The web and mobile versions make use of real pictures of triatomine species and the dichotomous key method to support the identification of potential vectors that occur in Brazil. The tool provides an example-driven user interface with simple language. TriatoKey can also be useful for educational purposes. Database URL: http://triatokey.cpqrr.fiocruz.br PMID:28605769

  8. PlantCAZyme: a database for plant carbohydrate-active enzymes

    PubMed Central

    Ekstrom, Alexander; Taujale, Rahil; McGinn, Nathan; Yin, Yanbin

    2014-01-01

    PlantCAZyme is a database built upon dbCAN (database for automated carbohydrate active enzyme annotation), aiming to provide pre-computed sequence and annotation data of carbohydrate active enzymes (CAZymes) to plant carbohydrate and bioenergy research communities. The current version contains data of 43 790 CAZymes of 159 protein families from 35 plants (including angiosperms, gymnosperms, lycophyte and bryophyte mosses) and chlorophyte algae with fully sequenced genomes. Useful features of the database include: (i) a BLAST server and an HMMER server that allow users to search against our pre-computed sequence data for annotation purposes, (ii) a download page to allow batch downloading of data for a specific CAZyme family or species and (iii) protein browse pages to provide easy access to the most comprehensive sequence and annotation data. Database URL: http://cys.bios.niu.edu/plantcazyme/ PMID:25125445

  9. CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.

    PubMed

    Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J

    2015-01-01

    CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.

  10. Testing the Effectiveness of Interactive Multimedia for Library-User Education

    ERIC Educational Resources Information Center

    Markey, Karen; Armstrong, Annie; De Groote, Sandy; Fosmire, Michael; Fuderer, Laura; Garrett, Kelly; Georgas, Helen; Sharp, Linda; Smith, Cheri; Spaly, Michael; Warner, Joni E.

    2005-01-01

    A test of the effectiveness of interactive multimedia Web sites demonstrates that library users' topic knowledge was significantly greater after visiting the sites than before. Library users want more such sites about library services, their majors, and campus life generally. Librarians describe the roles they want to play on multimedia production…

  11. URS DataBase: universe of RNA structures and their motifs.

    PubMed

    Baulin, Eugene; Yacovlev, Victor; Khachko, Denis; Spirin, Sergei; Roytberg, Mikhail

    2016-01-01

    The Universe of RNA Structures DataBase (URSDB) stores information obtained from all RNA-containing PDB entries (2935 entries in October 2015). The content of the database is updated regularly. The database consists of 51 tables containing indexed data on various elements of the RNA structures. The database provides a web interface allowing users to select a subset of structures with desired features and to obtain various statistical data for a selected subset of structures or for all structures. In particular, one can easily obtain statistics on geometric parameters of base pairs, on structural motifs (stems, loops, etc.) or on different types of pseudoknots. The user can also view and get information on an individual structure or its selected parts, e.g. RNA-protein hydrogen bonds. URSDB employs a new original definition of loops in RNA structures. That definition fits both pseudoknot-free and pseudoknotted secondary structures and coincides with the classical definition in the case of pseudoknot-free structures. To our knowledge, URSDB is the first database supporting searches based on topological classification of pseudoknots and on extended loop classification. Database URL: http://server3.lpm.org.ru/urs/. © The Author(s) 2016. Published by Oxford University Press.

  12. MODFLOW-2000, the U.S. Geological Survey modular ground-water model -- Documentation of MOD-PREDICT for predictions, prediction sensitivity analysis, and evaluation of uncertainty

    USGS Publications Warehouse

    Tonkin, M.J.; Hill, Mary C.; Doherty, John

    2003-01-01

    This document describes the MOD-PREDICT program, which helps evaluate user-defined sets of observations, prior information, and predictions, using the ground-water model MODFLOW-2000. MOD-PREDICT takes advantage of the existing Observation and Sensitivity Processes (Hill and others, 2000) by initiating runs of MODFLOW-2000 and using the output files produced. The names and formats of the MODFLOW-2000 input files are unchanged, such that full backward compatibility is maintained. A new name file and input files are required for MOD-PREDICT. The performance of MOD-PREDICT has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program using the email address available at the web address below. Updates might occasionally be made to this document, to the MOD-PREDICT program, and to MODFLOW-2000. Users can check for updates on the Internet at URL http://water.usgs.gov/software/ground water.html/.

  13. An integrative data analysis platform for gene set analysis and knowledge discovery in a data warehouse framework.

    PubMed

    Chen, Yi-An; Tripathi, Lokesh P; Mizuguchi, Kenji

    2016-01-01

    Data analysis is one of the most critical and challenging steps in drug discovery and disease biology. A user-friendly resource to visualize and analyse high-throughput data provides a powerful medium for both experimental and computational biologists to understand vastly different biological data types and obtain a concise, simplified and meaningful output for better knowledge discovery. We have previously developed TargetMine, an integrated data warehouse optimized for target prioritization. Here we describe how upgraded and newly modelled data types in TargetMine can now survey the wider biological and chemical data space, relevant to drug discovery and development. To enhance the scope of TargetMine from target prioritization to broad-based knowledge discovery, we have also developed a new auxiliary toolkit to assist with data analysis and visualization in TargetMine. This toolkit features interactive data analysis tools to query and analyse the biological data compiled within the TargetMine data warehouse. The enhanced system enables users to discover new hypotheses interactively by performing complicated searches with no programming and obtaining the results in an easy to comprehend output format. Database URL: http://targetmine.mizuguchilab.org. © The Author(s) 2016. Published by Oxford University Press.

  14. An integrative data analysis platform for gene set analysis and knowledge discovery in a data warehouse framework

    PubMed Central

    Chen, Yi-An; Tripathi, Lokesh P.; Mizuguchi, Kenji

    2016-01-01

    Data analysis is one of the most critical and challenging steps in drug discovery and disease biology. A user-friendly resource to visualize and analyse high-throughput data provides a powerful medium for both experimental and computational biologists to understand vastly different biological data types and obtain a concise, simplified and meaningful output for better knowledge discovery. We have previously developed TargetMine, an integrated data warehouse optimized for target prioritization. Here we describe how upgraded and newly modelled data types in TargetMine can now survey the wider biological and chemical data space, relevant to drug discovery and development. To enhance the scope of TargetMine from target prioritization to broad-based knowledge discovery, we have also developed a new auxiliary toolkit to assist with data analysis and visualization in TargetMine. This toolkit features interactive data analysis tools to query and analyse the biological data compiled within the TargetMine data warehouse. The enhanced system enables users to discover new hypotheses interactively by performing complicated searches with no programming and obtaining the results in an easy to comprehend output format. Database URL: http://targetmine.mizuguchilab.org PMID:26989145

  15. HerDing: herb recommendation system to treat diseases using genes and chemicals

    PubMed Central

    Choi, Wonjun; Choi, Chan-Hun; Kim, Young Ran; Kim, Seon-Jong; Na, Chang-Su; Lee, Hyunju

    2016-01-01

    In recent years, herbs have been researched for new drug candidates because they have a long empirical history of treating diseases and are relatively free from side effects. Studies to scientifically prove the medical efficacy of herbs for target diseases often spend a considerable amount of time and effort in choosing candidate herbs and in performing experiments to measure changes of marker genes when treating herbs. A computational approach to recommend herbs for treating diseases might be helpful to promote efficiency in the early stage of such studies. Although several databases related to traditional Chinese medicine have been already developed, there is no specialized Web tool yet recommending herbs to treat diseases based on disease-related genes. Therefore, we developed a novel search engine, HerDing, focused on retrieving candidate herb-related information with user search terms (a list of genes, a disease name, a chemical name or an herb name). HerDing was built by integrating public databases and by applying a text-mining method. The HerDing website is free and open to all users, and there is no login requirement. Database URL: http://combio.gist.ac.kr/herding PMID:26980517

  16. URS DataBase: universe of RNA structures and their motifs

    PubMed Central

    Baulin, Eugene; Yacovlev, Victor; Khachko, Denis; Spirin, Sergei; Roytberg, Mikhail

    2016-01-01

    The Universe of RNA Structures DataBase (URSDB) stores information obtained from all RNA-containing PDB entries (2935 entries in October 2015). The content of the database is updated regularly. The database consists of 51 tables containing indexed data on various elements of the RNA structures. The database provides a web interface allowing users to select a subset of structures with desired features and to obtain various statistical data for a selected subset of structures or for all structures. In particular, one can easily obtain statistics on geometric parameters of base pairs, on structural motifs (stems, loops, etc.) or on different types of pseudoknots. The user can also view and get information on an individual structure or its selected parts, e.g. RNA–protein hydrogen bonds. URSDB employs a new original definition of loops in RNA structures. That definition fits both pseudoknot-free and pseudoknotted secondary structures and coincides with the classical definition in case of pseudoknot-free structures. To our knowledge, URSDB is the first database supporting searches based on topological classification of pseudoknots and on extended loop classification. Database URL: http://server3.lpm.org.ru/urs/ PMID:27242032

  17. MPD: a pathogen genome and metagenome database

    PubMed Central

    Zhang, Tingting; Miao, Jiaojiao; Han, Na; Qiang, Yujun; Zhang, Wen

    2018-01-01

    Abstract Advances in high-throughput sequencing have led to unprecedented growth in the amount of available genome sequencing data, especially for bacterial genomes, which has been accompanied by a challenge for the storage and management of such huge datasets. To facilitate bacterial research and related studies, we have developed the Mypathogen database (MPD), which provides access to users for searching, downloading, storing and sharing bacterial genomics data. The MPD represents the first pathogenic database for microbial genomes and metagenomes, and currently covers pathogenic microbial genomes (6604 genera, 11 071 species, 41 906 strains) and metagenomic data from host, air, water and other sources (28 816 samples). The MPD also functions as a management system for statistical and storage data that can be used by different organizations, thereby facilitating data sharing among different organizations and research groups. A user-friendly local client tool is provided to maintain the steady transmission of big sequencing data. The MPD is a useful tool for analysis and management in genomic research, especially for clinical Centers for Disease Control and epidemiological studies, and is expected to contribute to advancing knowledge on pathogenic bacteria genomes and metagenomes. Database URL: http://data.mypathogen.org PMID:29917040

  18. HerDing: herb recommendation system to treat diseases using genes and chemicals.

    PubMed

    Choi, Wonjun; Choi, Chan-Hun; Kim, Young Ran; Kim, Seon-Jong; Na, Chang-Su; Lee, Hyunju

    2016-01-01

    In recent years, herbs have been researched for new drug candidates because they have a long empirical history of treating diseases and are relatively free from side effects. Studies to scientifically prove the medical efficacy of herbs for target diseases often spend a considerable amount of time and effort in choosing candidate herbs and in performing experiments to measure changes of marker genes when treating herbs. A computational approach to recommend herbs for treating diseases might be helpful to promote efficiency in the early stage of such studies. Although several databases related to traditional Chinese medicine have been already developed, there is no specialized Web tool yet recommending herbs to treat diseases based on disease-related genes. Therefore, we developed a novel search engine, HerDing, focused on retrieving candidate herb-related information with user search terms (a list of genes, a disease name, a chemical name or an herb name). HerDing was built by integrating public databases and by applying a text-mining method. The HerDing website is free and open to all users, and there is no login requirement. Database URL: http://combio.gist.ac.kr/herding. © The Author(s) 2016. Published by Oxford University Press.

  19. Web Analytics: A Picture of the Academic Library Web Site User

    ERIC Educational Resources Information Center

    Black, Elizabeth L.

    2009-01-01

    This article describes the usefulness of Web analytics for understanding the users of an academic library Web site. Using a case study, the analysis describes how Web analytics can answer questions about Web site user behavior, including when visitors come, the duration of the visit, how they get there, the technology they use, and the most…

  20. The Use of Underground Research Laboratories to Support Repository Development Programs. A Roadmap for the Underground Research Facilities Network.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacKinnon, Robert J.

    2015-10-26

    Under the auspices of the International Atomic Energy Agency (IAEA), nationally developed underground research laboratories (URLs) and associated research institutions are being offered for use by other nations. These facilities form an Underground Research Facilities (URF) Network for training in and demonstration of waste disposal technologies and the sharing of knowledge and experience related to geologic repository development, research, and engineering. In order to achieve its objectives, the URF Network regularly sponsors workshops and training events related to the knowledge base that is transferable between existing URL programs and to nations with an interest in developing a new URL. This report describes the role of URLs in the context of a general timeline for repository development. This description includes identification of key phases and activities that contribute to repository development as a repository program evolves from an early research and development phase to later phases such as construction, operations, and closure. This information is cast in the form of a matrix with the entries in this matrix forming the basis of the URF Network roadmap that will be used to identify and plan future workshops and training events.

  1. Argo: an integrative, interactive, text mining-based workbench supporting curation

    PubMed Central

    Rak, Rafal; Rowley, Andrew; Black, William; Ananiadou, Sophia

    2012-01-01

    Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphic user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser that saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving the users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. 
As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo PMID:22434844

  2. APASdb: a database describing alternative poly(A) sites and selection of heterogeneous cleavage sites downstream of poly(A) signals

    PubMed Central

    You, Leiming; Wu, Jiexin; Feng, Yuchao; Fu, Yonggui; Guo, Yanan; Long, Liyuan; Zhang, Hui; Luan, Yijie; Tian, Peng; Chen, Liangfu; Huang, Guangrui; Huang, Shengfeng; Li, Yuxin; Li, Jie; Chen, Chengyong; Zhang, Yaqing; Chen, Shangwu; Xu, Anlong

    2015-01-01

    Increasing numbers of genes have been shown to utilize alternative polyadenylation (APA) 3′-processing sites depending on the cell and tissue type and/or physiological and pathological conditions at the time of processing, and the construction of a genome-wide database regarding APA is urgently needed for a better understanding of poly(A) site selection and APA-directed gene expression regulation in a given biological context. Here we present a web-accessible database, named APASdb (http://mosas.sysu.edu.cn/utr), which can visualize the precise map and usage quantification of different APA isoforms for all genes. The datasets are deeply profiled by the sequencing alternative polyadenylation sites (SAPAS) method capable of high-throughput sequencing 3′-ends of polyadenylated transcripts. Thus, APASdb details all the heterogeneous cleavage sites downstream of poly(A) signals, and maintains near complete coverage for APA sites, much better than the previous databases using conventional methods. Furthermore, APASdb provides the quantification of a given APA variant among transcripts with different APA sites by computing their corresponding normalized reads, making the database more useful. In addition, APASdb supports URL-based retrieval, browsing and display of exon-intron structure, poly(A) signals, poly(A) sites location and usage reads, and 3′-untranslated regions (3′-UTRs). Currently, APASdb covers APA in various biological processes and diseases in human, mouse and zebrafish. PMID:25378337
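    The usage quantification described above amounts to normalizing, within each gene, the reads assigned to each alternative poly(A) site. A minimal sketch with invented site names and counts; APASdb's actual normalization pipeline is not reproduced here:

```python
# Hypothetical 3'-end read counts at three alternative poly(A) sites of one gene.
site_reads = {"proximal": 120, "middle": 30, "distal": 50}

# Usage of each APA isoform as the fraction of the gene's total reads.
total = sum(site_reads.values())
usage = {site: reads / total for site, reads in site_reads.items()}

for site, frac in usage.items():
    print(f"{site}: {frac:.2f}")
```

With these invented counts, the proximal site accounts for 60% of the gene's poly(A)-site usage.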

  3. SITEX 2.0: Projections of protein functional sites on eukaryotic genes. Extension with orthologous genes.

    PubMed

    Medvedeva, Irina V; Demenkov, Pavel S; Ivanisenko, Vladimir A

    2017-04-01

    Functional sites define the diversity of protein functions and are the central object of research of the structural and functional organization of proteins. The mechanisms underlying protein functional sites emergence and their variability during evolution are distinguished by duplication, shuffling, insertion and deletion of the exons in genes. The study of the correlation between a site structure and exon structure serves as the basis for the in-depth understanding of sites organization. In this regard, the development of programming resources that allow the mutual projection of the exon structure of genes and the primary and tertiary structures of encoded proteins remains an open problem. Previously, we developed the SitEx system that provides information about protein and gene sequences with mapped exon borders and protein functional sites amino acid positions. The database included information on proteins with known 3D structure. However, data with respect to orthologs were not available. Therefore, we added the projection of site positions to the exon structures of orthologs in SitEx 2.0. We implemented a search through the database using site conservation variability and site discontinuity through exon structure. Inclusion of the information on orthologs allowed us to expand the possibilities of SitEx usage for solving problems regarding the analysis of the structural and functional organization of proteins. Database URL: http://www-bionet.sscc.ru/sitex/ .

  4. Secure Web-Site Access with Tickets and Message-Dependent Digests

    USGS Publications Warehouse

    Donato, David I.

    2008-01-01

    Although there are various methods for restricting access to documents stored on a World Wide Web (WWW) site (a Web site), none of the widely used methods is completely suitable for restricting access to Web applications hosted on an otherwise publicly accessible Web site. A new technique, however, provides a mix of features well suited for restricting Web-site or Web-application access to authorized users, including the following: secure user authentication, tamper-resistant sessions, simple access to user state variables by server-side applications, and clean session terminations. This technique, called message-dependent digests with tickets, or MDDT, maintains secure user sessions by passing single-use nonces (tickets) and message-dependent digests of user credentials back and forth between client and server. Appendix 2 provides a working implementation of MDDT with PHP server-side code and JavaScript client-side code.
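    The report's Appendix 2 gives a PHP/JavaScript implementation; the Python sketch below only illustrates the general idea of MDDT as summarized here, and the ticket store, digest construction, and function names are all assumptions for illustration rather than the report's exact protocol. The server hands out a single-use nonce (ticket); the client returns a digest that depends on its credentials, the ticket, and the message; the server verifies the digest and consumes the ticket so that replays fail.

```python
import hashlib
import hmac
import secrets

# Hypothetical server-side store of unused tickets (single-use nonces).
issued_tickets = set()

def issue_ticket() -> str:
    """Issue a fresh single-use ticket (nonce) to the client."""
    ticket = secrets.token_hex(16)
    issued_tickets.add(ticket)
    return ticket

def credential_digest(username: str, password: str) -> bytes:
    """Digest of the user's credentials, known to both client and server."""
    return hashlib.sha256(f"{username}:{password}".encode()).digest()

def sign_message(cred_digest: bytes, ticket: str, message: str) -> str:
    """Message-dependent digest binding credentials, ticket, and message."""
    return hmac.new(cred_digest, f"{ticket}:{message}".encode(),
                    hashlib.sha256).hexdigest()

def verify(cred_digest: bytes, ticket: str, message: str, digest: str) -> bool:
    """Server side: accept only if the ticket is unused and the digest matches."""
    if ticket not in issued_tickets:
        return False
    issued_tickets.discard(ticket)  # single use: replaying the ticket fails
    expected = sign_message(cred_digest, ticket, message)
    return hmac.compare_digest(expected, digest)
```

Because every digest covers both a fresh ticket and the message itself, a captured request can be neither replayed nor altered, which is what makes the sessions tamper-resistant.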

  5. Periodic Email Prompts to Re-Use an Internet-Delivered Computer-Tailored Lifestyle Program: Influence of Prompt Content and Timing

    PubMed Central

    Schneider, Francine; de Vries, Hein; Candel, Math; van de Kar, Angelique; van Osch, Liesbeth

    2013-01-01

    Background Adherence to Internet-delivered lifestyle interventions using multiple tailoring is suboptimal. Therefore, it is essential to invest in proactive strategies, such as periodic email prompts, to boost re-use of the intervention. Objective This study investigated the influence of content and timing of a single email prompt on re-use of an Internet-delivered computer-tailored (CT) lifestyle program. Methods A sample of municipality employees was invited to participate in the program. All participants who decided to use the program received an email prompting them to revisit the program. A 2×3 (content × timing) design was used to test manipulations of prompt content and timing. Depending on the study group participants were randomly assigned to, they received either a prompt containing standard content (an invitation to revisit the program), or standard content plus a preview of new content placed on the program website. Participants received this prompt after 2, 4, or 6 weeks. In addition to these 6 experimental conditions, a control condition was included consisting of participants who did not receive an additional email prompt. Clicks on the uniform resource locator (URL) provided in the prompt and log-ins to the CT program were objectively monitored. Logistic regression analyses were conducted to determine whether prompt content and/or prompt timing predicted clicking on the URL and logging in to the CT program. Results Of all program users (N=240), 206 participants received a subsequent email prompting them to revisit the program. A total of 53 participants (25.7%) who received a prompt reacted to this prompt by clicking on the URL, and 25 participants (12.1%) actually logged in to the program. 
There was a main effect of prompt timing; participants receiving an email prompt 2 weeks after their first visit clicked on the URL significantly more often compared with participants that received the prompt after 4 weeks (odds ratio [OR] 3.069, 95% CI 1.392-6.765, P=.005) and after 6 weeks (OR 4.471, 95% CI 1.909-10.471, P=.001). Furthermore, participants who received an email prompt 2 weeks after their first visit logged in to the program significantly more often compared to participants receiving the prompt after 6 weeks (OR 16.356, 95% CI 2.071-129.196, P=.008). A trend was observed with regard to prompt content. Participants receiving a prompt with additional content were more likely to log in to the program compared to participants who received a standard prompt. However, this result was not statistically significant (OR 2.286, 95% CI 0.892-5.856, P=.09). Conclusions The key findings suggest that boosting revisits to a CT program benefits most from relatively short prompt timing. Furthermore, a preview of new website content may be added to a standard prompt to further increase its effectiveness in persuading people to log in to the program. PMID:23363466

  6. The Reliability of Tweets as a Supplementary Method of Seasonal Influenza Surveillance

    PubMed Central

    Aslam, Anoshé A; Spitzberg, Brian H; An, Li; Gawron, J Mark; Gupta, Dipak K; Peddecord, K Michael; Nagel, Anna C; Allen, Christopher; Yang, Jiue-An; Lindsay, Suzanne

    2014-01-01

    Background Existing influenza surveillance in the United States is focused on the collection of data from sentinel physicians and hospitals; however, the compilation and distribution of reports are usually delayed by up to 2 weeks. With the popularity of social media growing, the Internet is a source for syndromic surveillance due to the availability of large amounts of data. In this study, tweets, or posts of 140 characters or less, from the website Twitter were collected and analyzed for their potential as surveillance for seasonal influenza. Objective There were three aims: (1) to improve the correlation of tweets to sentinel-provided influenza-like illness (ILI) rates by city through filtering and a machine-learning classifier, (2) to observe correlations of tweets for emergency department ILI rates by city, and (3) to explore correlations for tweets to laboratory-confirmed influenza cases in San Diego. Methods Tweets containing the keyword “flu” were collected within a 17-mile radius from 11 US cities selected for population and availability of ILI data. At the end of the collection period, 159,802 tweets were used for correlation analyses with sentinel-provided ILI and emergency department ILI rates as reported by the corresponding city or county health department. Two separate methods were used to observe correlations between tweets and ILI rates: filtering the tweets by type (non-retweets, retweets, tweets with a URL, tweets without a URL), and the use of a machine-learning classifier that determined whether a tweet was “valid”, or from a user who was likely ill with the flu. Results Correlations varied by city but general trends were observed. Non-retweets and tweets without a URL had higher and more significant (P<.05) correlations than retweets and tweets with a URL. Correlations of tweets to emergency department ILI rates were higher than the correlations observed for sentinel-provided ILI for most of the cities. 
The machine-learning classifier yielded the highest correlations for many of the cities when using the sentinel-provided or emergency department ILI as well as the number of laboratory-confirmed influenza cases in San Diego. High correlation values (r=.93) with significance at P<.001 were observed for laboratory-confirmed influenza cases for most categories and tweets determined to be valid by the classifier. Conclusions Compared to tweet analyses in the previous influenza season, this study demonstrated increased accuracy in using Twitter as a supplementary surveillance tool for influenza as better filtering and classification methods yielded higher correlations for the 2013-2014 influenza season than those found for tweets in the previous influenza season, where emergency department ILI rates were better correlated to tweets than sentinel-provided ILI rates. Further investigations in the field would require expansion with regard to the location that the tweets are collected from, as well as the availability of more ILI data. PMID:25406040
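    The tweet-type filters described above (non-retweets, tweets without a URL) are straightforward to express in code. A minimal sketch with hypothetical tweets and invented helper names; the study's machine-learning classifier for "valid" tweets is not reproduced here:

```python
import re

def is_retweet(text: str) -> bool:
    """Crude retweet check: conventional 'RT ' prefix."""
    return text.startswith("RT ")

def has_url(text: str) -> bool:
    """Detect an embedded link."""
    return bool(re.search(r"https?://\S+", text))

def keep(text: str) -> bool:
    """Filter that correlated best with ILI: non-retweets without a URL."""
    return not is_retweet(text) and not has_url(text)

# Hypothetical tweets containing the keyword "flu".
tweets = [
    "ugh, down with the flu again",
    "RT @news: flu season peaks this week",
    "get your flu shot today http://example.com/clinic",
]
filtered = [t for t in tweets if keep(t)]
```

Only the first tweet survives the filter, consistent with the intuition that first-person, link-free posts are more likely to come from someone who is actually ill.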

  7. Periodic email prompts to re-use an internet-delivered computer-tailored lifestyle program: influence of prompt content and timing.

    PubMed

    Schneider, Francine; de Vries, Hein; Candel, Math; van de Kar, Angelique; van Osch, Liesbeth

    2013-01-31

    Adherence to Internet-delivered lifestyle interventions using multiple tailoring is suboptimal. Therefore, it is essential to invest in proactive strategies, such as periodic email prompts, to boost re-use of the intervention. This study investigated the influence of content and timing of a single email prompt on re-use of an Internet-delivered computer-tailored (CT) lifestyle program. A sample of municipality employees was invited to participate in the program. All participants who decided to use the program received an email prompting them to revisit the program. A 2×3 (content × timing) design was used to test manipulations of prompt content and timing. Depending on the study group participants were randomly assigned to, they received either a prompt containing standard content (an invitation to revisit the program), or standard content plus a preview of new content placed on the program website. Participants received this prompt after 2, 4, or 6 weeks. In addition to these 6 experimental conditions, a control condition was included consisting of participants who did not receive an additional email prompt. Clicks on the uniform resource locator (URL) provided in the prompt and log-ins to the CT program were objectively monitored. Logistic regression analyses were conducted to determine whether prompt content and/or prompt timing predicted clicking on the URL and logging in to the CT program. Of all program users (N=240), 206 participants received a subsequent email prompting them to revisit the program. A total of 53 participants (25.7%) who received a prompt reacted to this prompt by clicking on the URL, and 25 participants (12.1%) actually logged in to the program. 
There was a main effect of prompt timing; participants receiving an email prompt 2 weeks after their first visit clicked on the URL significantly more often compared with participants that received the prompt after 4 weeks (odds ratio [OR] 3.069, 95% CI 1.392-6.765, P=.005) and after 6 weeks (OR 4.471, 95% CI 1.909-10.471, P=.001). Furthermore, participants who received an email prompt 2 weeks after their first visit logged in to the program significantly more often compared to participants receiving the prompt after 6 weeks (OR 16.356, 95% CI 2.071-129.196, P=.008). A trend was observed with regard to prompt content. Participants receiving a prompt with additional content were more likely to log in to the program compared to participants who received a standard prompt. However, this result was not statistically significant (OR 2.286, 95% CI 0.892-5.856, P=.09). The key findings suggest that boosting revisits to a CT program benefits most from relatively short prompt timing. Furthermore, a preview of new website content may be added to a standard prompt to further increase its effectiveness in persuading people to log in to the program.
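    The odds ratios above come from logistic regression; for a single binary predictor, the OR reduces to the cross-product ratio of a 2×2 table, with a Wald confidence interval on the log scale. A sketch with hypothetical counts, since the study reports only the ORs, not the underlying tables:

```python
import math

# Hypothetical 2x2 counts: clicked vs. did not click, by prompt timing.
# (Illustrative numbers only; not taken from the study.)
a, b = 30, 40   # 2-week group: clicked, did not click
c, d = 12, 48   # 4-week group: clicked, did not click

odds_ratio = (a * d) / (b * c)            # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")
```

An interval that excludes 1 corresponds to a statistically significant timing effect, as reported for the 2-week versus 4- and 6-week comparisons.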

  8. An Investigation into Web Content Accessibility Guideline Conformance for an Aging Population

    ERIC Educational Resources Information Center

    Curran, Kevin; Robinson, David

    2007-01-01

    Poor web site design can cause difficulties for specific groups of users. By applying the Web Content Accessibility Guidelines to a web site, the amount of possible users who can successfully view the content of that site will increase, especially for those who are in the disabled and older adult categories of online users. Older adults are coming…

  9. 9 CFR 130.18 - User fees for veterinary diagnostic reagents produced at NVSL or other authorized site (excluding...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false User fees for veterinary diagnostic reagents produced at NVSL or other authorized site (excluding FADDL). 130.18 Section 130.18 Animals and... § 130.18 User fees for veterinary diagnostic reagents produced at NVSL or other authorized site...

  10. 9 CFR 130.18 - User fees for veterinary diagnostic reagents produced at NVSL or other authorized site (excluding...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false User fees for veterinary diagnostic reagents produced at NVSL or other authorized site (excluding FADDL). 130.18 Section 130.18 Animals and... § 130.18 User fees for veterinary diagnostic reagents produced at NVSL or other authorized site...

  11. Definition of the upper reference limit for thyroglobulin antibodies according to the National Academy of Clinical Biochemistry guidelines: comparison of eleven different automated methods.

    PubMed

    D'Aurizio, F; Metus, P; Ferrari, A; Caruso, B; Castello, R; Villalta, D; Steffan, A; Gaspardo, K; Pesente, F; Bizzaro, N; Tonutti, E; Valverde, S; Cosma, C; Plebani, M; Tozzoli, R

    2017-12-01

    In the last two decades, thyroglobulin autoantibody (TgAb) measurement has progressively switched from a marker of thyroid autoimmunity to a test run alongside thyroglobulin (Tg) to verify the presence or absence of TgAb interference in the follow-up of patients with differentiated thyroid cancer. Of note, TgAb measurement is cumbersome: despite standardization against the International Reference Preparation MRC 65/93, several studies demonstrated high inter-method variability and wide variation in limits of detection and in reference intervals. Taking into account the above considerations, the main aim of the present study was the determination of the TgAb upper reference limit (URL), according to the National Academy of Clinical Biochemistry guidelines, through the comparison of eleven commercial automated immunoassay platforms. The sera of 120 healthy males, selected from a population survey in the province of Verona, Italy, were tested for TgAb concentration using eleven immunoassays run on as many automated analyzers: AIA-2000 (AIA) and AIA-CL2400 (CL2), Tosoh Bioscience; Architect (ARC), Abbott Diagnostics; Advia Centaur XP (CEN) and Immulite 2000 XPi (IMM), Siemens Healthineers; Cobas 6000 (COB), Roche Diagnostics; Kryptor (KRY), Thermo Fisher Scientific BRAHMS; Liaison XL (LIA), Diasorin; Lumipulse G (LUM), Fujirebio; Maglumi 2000 Plus (MAG), Snibe; and Phadia 250 (PHA), Phadia AB, Thermo Fisher Scientific. All assays were performed according to manufacturers' instructions in six different laboratories in the Friuli-Venezia Giulia and Veneto regions of Italy [Lab 1 (AIA), Lab 2 (CL2), Lab 3 (ARC, COB and LUM), Lab 4 (CEN, IMM, KRY and MAG), Lab 5 (LIA) and Lab 6 (PHA)]. Since TgAb values were not normally distributed, the experimental URL (e-URL) was established at the 97.5th percentile according to the non-parametric method. TgAb e-URLs showed a significant inter-method variability. 
    For a given method, the e-URL was much lower than the manufacturer-suggested URL (m-URL), except for ARC and MAG. Correlation and linear regression were unsatisfactory. Consequently, the agreement between methods was poor, with significant bias in the Bland-Altman plots. Despite efforts toward harmonization, TgAb methods cannot be used interchangeably. Therefore, additional effort, guided by approved protocols and guidelines, is required to improve analytical performance. Moreover, the TgAb URL should be used with caution in the management of differentiated thyroid carcinoma patients since the presence and/or the degree of TgAb interference in Tg measurement has not yet been well defined.
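    The experimental URL above is the 97.5th percentile of the 120 reference values, estimated non-parametrically. A sketch of the rank-based estimator r = p·(n+1) with linear interpolation follows; several interpolation conventions exist, so this is one plausible reading of the method, not necessarily the exact rule used in the study:

```python
def nonparametric_percentile(values, p):
    """Non-parametric percentile via the rank formula r = p*(n+1),
    interpolating linearly between the two nearest order statistics."""
    xs = sorted(values)
    n = len(xs)
    r = p * (n + 1)
    lo = int(r)
    if lo < 1:
        return xs[0]
    if lo >= n:
        return xs[-1]
    frac = r - lo
    return xs[lo - 1] + frac * (xs[lo] - xs[lo - 1])
```

With n = 120 and p = 0.975, the rank is 0.975 × 121 = 117.975, so the URL falls between the 117th and 118th ordered values.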

  12. Exploring Global Exposure Factors Resources URLs

    EPA Pesticide Factsheets

    The dataset is a compilation of hyperlinks (URLs) for resources (databases, compendia, published articles, etc.) useful for exposure assessment specific to consumer product use. This dataset is associated with the following publication: Zaleski, R., P. Egeghy, and P. Hakkinen. Exploring Global Exposure Factors Resources for Use in Consumer Exposure Assessments. International Journal of Environmental Research and Public Health. Molecular Diversity Preservation International, Basel, SWITZERLAND, 13(7): 744, (2016).

  13. What We've Learned From Doing Usability Testing on OpenURL Resolvers and Federated Search Engines

    ERIC Educational Resources Information Center

    Cervone, Frank

    2005-01-01

    OpenURL resolvers and federated search engines are important new services in the library field. For some librarians, these services may seem "old hat" by now, but for the majority these services are still in the early stages of implementation or planning. In many cases, these two services are offered as a seamlessly integrated whole.…

  14. Disappearing Act: Persistence and Attrition of Uniform Resource Locators (URLs) in an Open Access Medical Journal

    ERIC Educational Resources Information Center

    Nagaraja, Aragudige; Joseph, Shine A.; Polen, Hyla H.; Clauson, Kevin A.

    2011-01-01

    Purpose: The aim of this paper is to assess and catalogue the magnitude of URL attrition in a high-impact, open access (OA) general medical journal. Design/methodology/approach: All "Public Library of Science Medicine (PLoS Medicine)" articles for 2005-2007 were evaluated and the following items were assessed: number of entries per issue; type of…

  15. Trends in the production of scientific data analysis resources.

    PubMed

    Hennessey, Jason; Georgescu, Constantin; Wren, Jonathan D

    2014-01-01

    As the amount of scientific data grows, peer-reviewed Scientific Data Analysis Resources (SDARs) such as published software programs, databases and web servers have had a strong impact on the productivity of scientific research. SDARs are typically linked to via an Internet URL, and such links have been shown to decay in a time-dependent fashion. What is less clear is whether SDAR-producing group size or prior experience in SDAR production correlates with SDAR persistence, or whether certain institutions or regions account for a disproportionate number of peer-reviewed resources. We first quantified the current availability of over 26,000 unique URLs published in MEDLINE abstracts/titles over the past 20 years, then extracted authorship, institutional and ZIP code data. We estimated which URLs were SDARs by using keyword proximity analysis. We identified 23,820 non-archival URLs produced between 1996 and 2013, of which 11,977 were classified as SDARs. Production of SDARs, as measured with the Gini coefficient, is more widely distributed among institutions (0.62) and ZIP codes (0.65) than scientific research in general, which tends to be disproportionately clustered within elite institutions (0.91) and ZIPs (0.96). An estimated one percent of institutions produced 68% of published research, whereas the top 1% accounted for only 16% of SDARs. Some labs produced many SDARs (maximum detected = 64), but 74% of SDAR-producing authors have published only one SDAR. Interestingly, decayed SDARs have significantly fewer authors on average (4.33 ± 3.06) than available SDARs (4.88 ± 3.59) (p < 8.32 × 10⁻⁴). Approximately 3.4% of URLs, as published, contain errors in their entry/format, including DOIs and links to clinical trials registry numbers. SDAR production is less dependent upon institutional location and resources, and SDAR online persistence does not seem to be a function of infrastructure or expertise. Yet SDAR team size correlates positively with SDAR accessibility, suggesting a possible sociological factor. While a detectable URL entry error rate of 3.4% is relatively low, it raises the question of whether this is a general error rate that extends to other published entities.
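
    The concentration statistic at the heart of this comparison can be reproduced with a standard Gini coefficient over per-institution (or per-ZIP) production counts. The sketch below is a generic implementation of that formula, not the authors' code, and the example counts are invented.

```python
def gini(counts):
    """Gini coefficient of a list of non-negative production counts.

    Returns 0.0 when production is spread evenly and approaches 1.0
    when it is concentrated in a few institutions or ZIP codes.
    """
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard mean-difference formulation over the sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Invented example: one prolific lab (64 SDARs) among many one-off producers
print(gini([64] + [1] * 74))
```

    A perfectly even distribution such as `gini([2, 2, 2])` yields 0.0, matching the reading that SDAR production (0.62-0.65) is less concentrated than research output in general (0.91-0.96).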

  16. Working without a Crystal Ball: Predicting Web Trends for Web Services Librarians

    ERIC Educational Resources Information Center

    Ovadia, Steven

    2008-01-01

    User-centered design is a principle stating that electronic resources, like library Web sites, should be built around the needs of the users. This article interviews Web developers of library and non-library-related Web sites, determining how they assess user needs and how they decide to adapt certain technologies for users. According to the…

  17. Characteristics and Effectiveness of the U.S. State E-Government-to-Business Services

    ERIC Educational Resources Information Center

    Zhao, Jensen J.; Truell, Allen; Alexander, Melody W.

    2008-01-01

    This study examined the user-interface characteristics and effectiveness of the e-government-to-business (G2B) sites of the 50 U.S. states and Washington, D.C. A group of 306 online users were trained to assess the sites. The findings indicate that the majority of the state G2B sites included the user-interface characteristics that provided online…

  18. Stated choice models for predicting the impact of user fees at public recreation sites

    Treesearch

    Herbert W. Schroeder; Jordan Louviere

    1999-01-01

    A crucial question in the implementation of fee programs is how the users of recreation sites will respond to various levels and types of fees. Stated choice models can help managers anticipate the impact of user fees on people's choices among the alternative recreation sites available to them. Models developed for both day and overnight trips to several areas and...

  19. Gender's equality in evaluation of urine particles: Results of a multicenter study of the Italian Urinalysis Group.

    PubMed

    Manoni, Fabio; Gessoni, Gianluca; Alessio, Maria Grazia; Caleffi, Alberta; Saccani, Graziella; Epifani, Maria Grazia; Tinello, Agostino; Zorzan, Tatiana; Valverde, Sara; Caputo, Marco; Lippi, Giuseppe

    2014-01-01

    We performed a multicenter study to calculate the upper reference limits (URL) for urine particle quantification in mid-stream samples by using automated urine analyzers. Two laboratories tested 283 subjects using a Sysmex UF-100, two other laboratories tested 313 subjects using Sysmex UF-1000i, whereas two other laboratories tested 267 subjects using Iris IQ®200. The URLs of UF-100 in females and males were 7.8/μL and 6.7/μL for epithelial cells (EC), 11.1/μL and 9.9/μL for red blood cells (RBC), 10.2/μL and 9.7/μL for white blood cells (WBC), and 0.85/μL and 0.87/μL for cylinders (CAST). The URLs of UF-1000i in females and males were 7.6/μL and 7.1/μL for EC, 12.2/μL and 11.1/μL for RBC, 11.9/μL and 11.7/μL for WBC, and 0.88/μL and 0.86/μL for CAST. The URLs of Iris IQ®200 in females and males were 7.8/μL and 6.6/μL for EC, 12.4/μL and 10.1/μL for RBC, 10.9/μL and 9.9/μL for WBC, and 1.1/μL and 1.0/μL for CAST. The URLs obtained in this study were comparable to the lowest values previously reported in the literature. Moreover, no gender-related difference was observed, and analyzer-specific upper reference limits were very similar. © 2013.
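
    Upper reference limits of this kind are conventionally estimated nonparametrically, as a high percentile (often the 95th or 97.5th) of values measured in a reference population. The abstract does not state which percentile or estimator the study used, so the following is only a generic sketch with invented data.

```python
def upper_reference_limit(values, percentile=97.5):
    """Nonparametric upper reference limit: the value below which
    `percentile` percent of the reference population falls.

    Uses simple linear interpolation between order statistics.
    """
    xs = sorted(values)
    if not xs:
        raise ValueError("empty reference sample")
    # Fractional rank position for the requested percentile (0-based).
    k = (len(xs) - 1) * percentile / 100.0
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    frac = k - lo
    return xs[lo] * (1 - frac) + xs[hi] * frac

# Invented WBC counts (cells/uL) from a small reference group
wbc = [0.5, 1.2, 2.0, 3.3, 4.1, 5.0, 6.2, 7.5, 8.8, 9.9]
print(upper_reference_limit(wbc))
```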

  20. Improved Functionality and Curation Support in the ADS

    NASA Astrophysics Data System (ADS)

    Accomazzi, Alberto; Kurtz, Michael J.; Henneken, Edwin A.; Grant, Carolyn S.; Thompson, Donna; Chyla, Roman; Holachek, Alexandra; Sudilovsky, Vladimir; Murray, Stephen S.

    2015-01-01

    In this poster we describe the developments of the new ADS platform over the past year, focusing on the functionality which improves its discovery and curation capabilities. The ADS Application Programming Interface (API) is being updated to support authenticated access to the entire suite of ADS services, in addition to the search functionality itself. This allows programmatic access to resources which are specific to a user or class of users. A new interface, built directly on top of the API, now provides a more intuitive search experience and takes into account the best practices in web usability and responsive design. The interface now incorporates in-line views of graphics from the AAS Astroexplorer and the ADS All-Sky Survey image collections. The ADS Private Libraries, first introduced over 10 years ago, are now being enhanced to allow the bookmarking, tagging and annotation of records of interest. In addition, libraries can be shared with one or more ADS users, providing an easy way to collaborate in the curation of lists of papers. A library can also be explicitly made public and shared at large via the publishing of its URL. In collaboration with the AAS, the ADS plans to support the adoption of ORCID identifiers by implementing a plugin which will simplify the import of papers in ORCID via a query to the ADS API. Deeper integration between the two systems will depend on available resources and feedback from the community.
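
    Authenticated, programmatic access of the kind described here follows the usual token-in-header pattern. The sketch below only builds such a request (it does not send it); the endpoint and parameter names follow the publicly documented ADS API conventions, but they should be checked against the current API documentation before use.

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_ads_query(token, query, fields=("bibcode", "title"), rows=5):
    """Construct (without sending) an authenticated ADS search request."""
    params = urlencode({"q": query, "fl": ",".join(fields), "rows": rows})
    url = "https://api.adsabs.harvard.edu/v1/search/query?" + params
    # The personal API token is passed as a standard Bearer credential.
    return Request(url, headers={"Authorization": "Bearer " + token})

req = build_ads_query("MY_TOKEN", 'author:"Accomazzi, A."')
print(req.full_url)
```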

  1. Bookshelf: a simple curation system for the storage of biomolecular simulation data.

    PubMed

    Vohra, Shabana; Hall, Benjamin A; Holdbrook, Daniel A; Khalid, Syma; Biggin, Philip C

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about metadata is difficult, and thus the success of any storage system will ultimately depend on how well the system is used by end-users. In this respect we suggest that even a minimal amount of metadata, if stored in a sensible fashion, is useful, if only at the level of individual research groups. We discuss here a simple database system, which we call 'Bookshelf', that uses Python in conjunction with a MySQL database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to a common problem among biomolecular simulation laboratories: the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/
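
    The design described (a thin Python layer over a SQL database holding minimal per-simulation metadata) can be sketched in a few lines. Here the standard-library sqlite3 module stands in for MySQL, and the table layout and field names are illustrative assumptions, not Bookshelf's actual schema.

```python
import sqlite3

def open_shelf(path=":memory:"):
    """Create (if needed) and open a minimal simulation-metadata store."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS simulations (
            id INTEGER PRIMARY KEY,
            name TEXT NOT NULL,
            system TEXT,          -- e.g. protein/membrane description
            length_ns REAL,       -- simulated time in nanoseconds
            data_path TEXT        -- where the trajectory files live
        )""")
    return conn

def log_simulation(conn, name, system, length_ns, data_path):
    """Record one simulation; returns its database id."""
    cur = conn.execute(
        "INSERT INTO simulations (name, system, length_ns, data_path) "
        "VALUES (?, ?, ?, ?)",
        (name, system, length_ns, data_path))
    conn.commit()
    return cur.lastrowid

def find_by_system(conn, keyword):
    """Retrieve simulations whose system description mentions `keyword`."""
    rows = conn.execute(
        "SELECT name, data_path FROM simulations WHERE system LIKE ?",
        (f"%{keyword}%",))
    return rows.fetchall()
```

    Even this minimal amount of structured metadata makes "which runs used this membrane?"-style retrieval a one-line query.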

  2. Bookshelf: a simple curation system for the storage of biomolecular simulation data

    PubMed Central

    Vohra, Shabana; Hall, Benjamin A.; Holdbrook, Daniel A.; Khalid, Syma; Biggin, Philip C.

    2010-01-01

    Molecular dynamics simulations can now routinely generate data sets of several hundreds of gigabytes in size. The ability to generate this data has become easier over recent years and the rate of data production is likely to increase rapidly in the near future. One major problem associated with this vast amount of data is how to store it in a way that it can be easily retrieved at a later date. The obvious answer to this problem is a database. However, a key issue in the development and maintenance of such a database is its sustainability, which in turn depends on the ease of the deposition and retrieval process. Encouraging users to care about metadata is difficult, and thus the success of any storage system will ultimately depend on how well the system is used by end-users. In this respect we suggest that even a minimal amount of metadata, if stored in a sensible fashion, is useful, if only at the level of individual research groups. We discuss here a simple database system, which we call ‘Bookshelf’, that uses Python in conjunction with a MySQL database to provide an extremely simple system for curating and keeping track of molecular simulation data. It provides a user-friendly, scriptable solution to a common problem among biomolecular simulation laboratories: the storage, logging and subsequent retrieval of large numbers of simulations. Download URL: http://sbcb.bioch.ox.ac.uk/bookshelf/ PMID:21169341

  3. MGIS: managing banana (Musa spp.) genetic resources information and high-throughput genotyping data

    PubMed Central

    Guignon, V.; Sempere, G.; Sardos, J.; Hueber, Y.; Duvergey, H.; Andrieu, A.; Chase, R.; Jenny, C.; Hazekamp, T.; Irish, B.; Jelali, K.; Adeka, J.; Ayala-Silva, T.; Chao, C.P.; Daniells, J.; Dowiya, B.; Effa effa, B.; Gueco, L.; Herradura, L.; Ibobondji, L.; Kempenaers, E.; Kilangi, J.; Muhangi, S.; Ngo Xuan, P.; Paofa, J.; Pavis, C.; Thiemele, D.; Tossou, C.; Sandoval, J.; Sutanto, A.; Vangu Paka, G.; Yi, G.; Van den houwe, I.; Roux, N.

    2017-01-01

    Abstract Unraveling the genetic diversity held in genebanks on a large scale is underway, due to advances in next-generation sequencing (NGS)-based technologies that produce high-density genetic markers for a large number of samples at low cost. Genebank users should be in a position to identify and select germplasm from the global genepool based on a combination of passport, genotypic and phenotypic data. To facilitate this, a new generation of information systems is being designed to efficiently handle data and link it with other external resources such as genome or breeding databases. The Musa Germplasm Information System (MGIS), the database for global ex situ-held banana genetic resources, has been developed to address those needs in a user-friendly way. In developing MGIS, we selected a generic database schema (Chado), the robust content management system Drupal for the user interface, and Tripal, a set of Drupal modules which links the Chado schema to Drupal. MGIS allows germplasm collection examination, accession browsing, advanced search functions, and germplasm orders. Additionally, we developed unique graphical interfaces to compare accessions and to explore them based on their taxonomic information. Accession-based data has been enriched with publications, genotyping studies and associated genotyping datasets reporting on germplasm use. Finally, an interoperability layer has been implemented to facilitate the link with complementary databases like the Banana Genome Hub and the MusaBase breeding database. Database URL: https://www.crop-diversity.org/mgis/ PMID:29220435

  4. A web Accessible Framework for Discovery, Visualization and Dissemination of Polar Data

    NASA Astrophysics Data System (ADS)

    Kirsch, P. J.; Breen, P.; Barnes, T. D.

    2007-12-01

    A web accessible information framework, currently under development within the Physical Sciences Division of the British Antarctic Survey, is described. The datasets accessed are generally heterogeneous in nature, from fields including space physics, meteorology, atmospheric chemistry, ice physics, and oceanography. Many of these are returned in near real time over a 24/7 limited-bandwidth link from remote Antarctic stations and ships. The requirement is to provide various user groups, each with disparate interests and demands, a system incorporating a browsable and searchable catalogue; bespoke data summary visualization; metadata access facilities; and download utilities. The system allows timely access to raw and processed datasets through an easily navigable discovery interface. Once discovered, a summary of the dataset can be visualized in a manner prescribed by the particular projects and user communities, or the dataset may be downloaded, subject to any accessibility restrictions that may exist. In addition, access to related ancillary information, including software, documentation, related URLs and information concerning non-electronic media (of particular relevance to some legacy datasets), is made directly available, having automatically been associated with a dataset during the discovery phase. Major components of the framework include the relational database containing the catalogue; the organizational structure of the systems holding the data (enabling automatic updates of the system catalogue and real-time access to data); the user interface design; and administrative and data management scripts allowing straightforward incorporation of utilities, datasets and system maintenance.

  5. docBUILDER - Building Your Useful Metadata for Earth Science Data and Services.

    NASA Astrophysics Data System (ADS)

    Weir, H. M.; Pollack, J.; Olsen, L. M.; Major, G. R.

    2005-12-01

    The docBUILDER tool, created by NASA's Global Change Master Directory (GCMD), assists the scientific community in efficiently creating quality data and services metadata. Metadata authors are asked to complete five required fields to ensure enough information is provided for users to discover the data and related services they seek. After the metadata record is submitted to the GCMD, it is reviewed for semantic and syntactic consistency. Currently, two versions are available - a Web-based tool accessible with most browsers (docBUILDERweb) and a stand-alone desktop application (docBUILDERsolo). The Web version is available through the GCMD website, at http://gcmd.nasa.gov/User/authoring.html. This version has been updated and now offers: personalized templates to ease entering similar information for multiple data sets/services; automatic population of Data Center/Service Provider URLs based on the selected center/provider; three-color support to indicate required, recommended, and optional fields; an editable text window containing the XML record, to allow for quick editing; and improved overall performance and presentation. The docBUILDERsolo version offers the ability to create metadata records on a computer wherever you are. Except for installation and the occasional update of keywords, data/service providers are not required to have an Internet connection. This freedom will allow users with portable computers (Windows, Mac, and Linux) to create records in field campaigns, whether in Antarctica or the Australian Outback. This version also offers a spell-checker, in addition to all of the features found in the Web version.

  6. Data List - Specifying and Acquiring Earth Science Data Measurements All at Once

    NASA Astrophysics Data System (ADS)

    Shie, C. L.; Teng, W. L.; Liu, Z.; Hearty, T. J., III; Shen, S.; Li, A.; Hegde, M.; Bryant, K.; Seiler, E.; Kempler, S. J.

    2016-12-01

    Natural phenomena, such as tropical storms (e.g., hurricanes/typhoons), winter storms (e.g., blizzards), volcanic eruptions, floods, and drought, have the potential to cause immense property damage, great socioeconomic impact, and tragic losses of human life. In order to investigate and assess these natural hazards in a timely manner, there needs to be efficient searching and accessing of massive amounts of heterogeneous scientific data from, particularly, satellite and model products. This is a daunting task for most application users, decision makers, and science researchers. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has, for many years, archived and served massive amounts of Earth science data, along with value-added information and services. In order to facilitate the GES DISC users in acquiring their data of interest "all at once," with minimum effort, the GES DISC has started developing a value-added and knowledge-based data service framework. This framework allows the preparation and presentation to users of collections of data and their related resources for natural disaster events or other scientific themes. These collections of data, initially termed "Data Bundles," then "Virtual Collections," and finally "Data Lists," contain suites of annotated Web addresses (URLs) that point to their respective data and resource addresses, "all at once" and "virtually." Because these collections of data are virtual, there is no need to duplicate the data. Currently available "Data Lists" for several natural disaster phenomena and the architecture of the data service framework will be presented.

  7. Web-based Electronic Sharing and RE-allocation of Assets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverett, Dave; Miller, Robert A.; Berlin, Gary J.

    2002-09-09

    The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, under-utilized, or un-utilized. Through a web-based front-end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare

  8. De retibus socialibus et legibus momenti

    NASA Astrophysics Data System (ADS)

    Gayo-Avello, D.; Brenes, D. J.; Fernández-Fernández, D.; Fernández-Menéndez, M. E.; García-Suárez, R.

    2011-05-01

    Online Social Networks (OSNs) are a cutting-edge topic. Almost everybody (users, marketers, brands, companies, and researchers) is approaching OSNs to better understand them and take advantage of their benefits. Maybe one of the key concepts underlying OSNs is that of influence, which is highly related, although not entirely identical, to those of popularity and centrality. Influence is, according to Merriam-Webster, "the capacity of causing an effect in indirect or intangible ways". Hence, in the context of OSNs, it has been proposed to analyze the clicks received by promoted URLs in order to check for any positive correlation between the number of visits and different "influence" scores. That evaluation methodology is used in this letter to compare a number of those techniques with a new method first described here. That new method is a simple and rather elegant solution which tackles influence in OSNs by applying a physical metaphor. The Latin title translates as "On social networks and the laws of influence."
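
    The evaluation methodology mentioned (testing for a positive correlation between clicks on promoted URLs and an influence score) reduces to a correlation coefficient over paired observations. A minimal Pearson implementation, with invented numbers, might look like this.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    if n != len(ys) or n < 2:
        raise ValueError("need two samples of equal length >= 2")
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:
        raise ValueError("zero variance in one sample")
    return cov / (sx * sy)

# Invented data: influence score of each account vs. clicks its links received
influence = [0.1, 0.4, 0.5, 0.9, 1.2]
clicks = [3, 10, 9, 25, 31]
print(pearson(influence, clicks))
```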

  9. Mfold web server for nucleic acid folding and hybridization prediction.

    PubMed

    Zuker, Michael

    2003-07-01

    The abbreviated name, 'mfold web server', describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single strand frequency plots and 'energy dot plots', are available for the folding of single sequences. A variety of 'bulk' servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as 'MFOLDROOT'.

  10. Google's Geo Education Outreach: Results and Discussion of Outreach Trip to Alaskan High Schools.

    NASA Astrophysics Data System (ADS)

    Kolb, E. J.; Bailey, J.; Bishop, A.; Cain, J.; Goddard, M.; Hurowitz, K.; Kennedy, K.; Ornduff, T.; Sfraga, M.; Wernecke, J.

    2008-12-01

    The focus of Google's Geo Education outreach efforts (http://www.google.com/educators/geo.html) is on helping primary, secondary, and post-secondary educators incorporate Google Earth and Sky, Google Maps, and SketchUp into their classroom lessons. In partnership with the University of Alaska, our Geo Education team members visited several remote Alaskan high schools during a one-week period in September. At each school, we led several 40-minute hands-on learning sessions in which Google products were used by the students to investigate local geologic and environmental processes. For the teachers, we provided several resources including follow-on lesson plans, example KML-based lessons, useful URLs, and website resources that multiple users can contribute to. This talk will highlight results of the trip and discuss how educators can access and use Google's Geo Education resources.

  11. CADB: Conformation Angles DataBase of proteins

    PubMed Central

    Sheik, S. S.; Ananthalakshmi, P.; Bhargavi, G. Ramya; Sekar, K.

    2003-01-01

    Conformation Angles DataBase (CADB) provides an online resource to access data on conformation angles (both main-chain and side-chain) of protein structures in two data sets corresponding to 25% and 90% sequence identity between any two proteins, available in the Protein Data Bank. In addition, the database contains the necessary crystallographic parameters. The package has several flexible options and display facilities to visualize the main-chain and side-chain conformation angles for a particular amino acid residue. The package can also be used to study the interrelationship between the main-chain and side-chain conformation angles. A web-based Java graphics interface has been deployed to display the information requested by the user on the client machine. The database is being updated at regular intervals and can be accessed over the World Wide Web at the following URL: http://144.16.71.148/cadb/. PMID:12520049

  12. CFD Data Sets on the WWW for Education and Testing

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center has begun the development of a Computational Fluid Dynamics (CFD) data set archive on the World Wide Web (WWW) at URL http://www.nas.nasa.gov/NAS/DataSets/. Data sets are integrated with related information such as research papers, metadata, visualizations, etc. In this paper, four classes of users are identified and discussed: students, visualization developers, CFD practitioners, and management. Bandwidth and security issues are briefly reviewed and the status of the archive as of May 1995 is examined. Routine network distribution of data sets is likely to have profound implications for the conduct of science. The exact nature of these changes is subject to speculation, but the ability for anyone to examine the data, in addition to the investigator's analysis, may well play an important role in the future.

  13. An Atlas of annotations of Hydra vulgaris transcriptome.

    PubMed

    Evangelista, Daniela; Tripathi, Kumar Parijat; Guarracino, Mario Rosario

    2016-09-22

    RNA sequencing takes advantage of Next Generation Sequencing (NGS) technologies for analyzing RNA transcript counts with excellent accuracy. Interpreting this huge amount of data as biological information is still a key issue, which is why the creation of web resources useful for its analysis is highly desirable. Starting from a previous work, Transcriptator, we present the Atlas of Hydra vulgaris, an extensible web tool in which its complete transcriptome is annotated. In order to provide users with an advantageous resource that includes the whole functionally annotated transcriptome of the Hydra vulgaris water polyp, we implemented the Atlas web tool, which contains 31,988 accessible and downloadable transcripts of this non-reference model organism. Atlas, as a freely available resource, can be considered a valuable tool to rapidly retrieve functional annotation for transcripts differentially expressed in Hydra vulgaris exposed to distinct experimental treatments. WEB RESOURCE URL: http://www-labgtp.na.icar.cnr.it/Atlas .

  14. The Victor C++ library for protein representation and advanced manipulation.

    PubMed

    Hirsh, Layla; Piovesan, Damiano; Giollo, Manuel; Ferrari, Carlo; Tosatto, Silvio C E

    2015-04-01

    Protein sequence and structure representation and manipulation require dedicated software libraries to support methods of increasing complexity. Here, we describe the VIrtual Construction TOol for pRoteins (Victor) C++ library, an open source platform dedicated to enabling inexperienced users to develop advanced tools and gathering contributions from the community. The provided application examples cover statistical energy potentials, profile-profile sequence alignments and ab initio loop modeling. Victor was used over the last 15 years in several publications and optimized for efficiency. It is provided as a GitHub repository with source files and unit tests, plus extensive online documentation, including a Wiki with help files and tutorials, examples and Doxygen documentation. The C++ library and online documentation, distributed under a GPL license, are available from URL: http://protein.bio.unipd.it/victor/. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. User-Centric Secure Cross-Site Interaction Framework for Online Social Networking Services

    ERIC Educational Resources Information Center

    Ko, Moo Nam

    2011-01-01

    Social networking service is one of major technological phenomena on Web 2.0. Hundreds of millions of users are posting message, photos, and videos on their profiles and interacting with other users, but the sharing and interaction are limited within the same social networking site. Although users can share some content on a social networking site…

  16. Marketing E-Commerce by Social media using Product Recommendations and user Embedding

    NASA Astrophysics Data System (ADS)

    Ramalingam, V. V.; Pandian, A.; Masilamani, Kirthiga

    2018-04-01

    Marketing e-commerce by social media is the best way to improve marketing and widen business reach. The major issue faced with e-commerce and social media interfacing is the cold-start cross-site problem. The cold-start problem occurs when a user has no history of purchase records. For such a user, we have introduced a method of finding the user's products of interest without knowing any of the user's demographic information. Products are recommended on the basis of visits, i.e., the items most likely to be visited by users appear in the hit list, and the most-visited product is placed at the top position for users to purchase. The e-commerce with social media sites uses the strategy of user embedding and product recommendations. The product recommendations are achieved by incorporating Latent Dirichlet Allocation (LDA), re-ranking and collaborative filtering algorithms. The proposed framework can enhance the recommendation system by embedding products and users. This shows the potential of solving the cold-start cross-site problem across e-commerce and social media sites and enhances the marketing strategy.
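
    The visit-based fallback described for cold-start users (rank products by how often they are visited and place the most-visited item at the top of the hit list) is straightforward to sketch. The click log and function below are illustrative inventions, not the authors' framework.

```python
from collections import Counter

def recommend_for_cold_start(visit_log, top_n=3):
    """Rank products for a user with no purchase history.

    With no demographic or purchase data available, fall back on
    overall visit popularity: the items most likely to be visited next.
    """
    counts = Counter(product for _user, product in visit_log)
    return [product for product, _ in counts.most_common(top_n)]

# Invented (user, product-visited) click log
log = [("u1", "phone"), ("u2", "phone"), ("u3", "case"),
       ("u1", "charger"), ("u4", "phone"), ("u2", "case")]
print(recommend_for_cold_start(log))  # → ['phone', 'case', 'charger']
```

    In a full system this popularity ranking would only seed the hit list; LDA topics and collaborative filtering refine it once user signals accumulate.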

  17. toxoMine: an integrated omics data warehouse for Toxoplasma gondii systems biology research

    PubMed Central

    Rhee, David B.; Croken, Matthew McKnight; Shieh, Kevin R.; Sullivan, Julie; Micklem, Gos; Kim, Kami; Golden, Aaron

    2015-01-01

    Toxoplasma gondii (T. gondii) is an obligate intracellular parasite that must monitor for changes in the host environment and respond accordingly; however, it is still not fully known which genetic or epigenetic factors are involved in regulating virulence traits of T. gondii. There are on-going efforts to elucidate the mechanisms regulating the stage transition process via the application of high-throughput epigenomics, genomics and proteomics techniques. Given the range of experimental conditions and the typical yield from such high-throughput techniques, a new challenge arises: how to effectively collect, organize and disseminate the generated data for subsequent data analysis. Here, we describe toxoMine, which provides a powerful interface to support sophisticated integrative exploration of high-throughput experimental data and metadata, providing researchers with a more tractable means toward understanding how genetic and/or epigenetic factors play a coordinated role in determining pathogenicity of T. gondii. As a data warehouse, toxoMine allows integration of high-throughput data sets with public T. gondii data. toxoMine is also able to execute complex queries involving multiple data sets with straightforward user interaction. Furthermore, toxoMine allows users to define their own parameters during the search process that gives users near-limitless search and query capabilities. The interoperability feature also allows users to query and examine data available in other InterMine systems, which would effectively augment the search scope beyond what is available to toxoMine. toxoMine complements the major community database ToxoDB by providing a data warehouse that enables more extensive integrative studies for T. gondii. Given all these factors, we believe it will become an indispensable resource to the greater infectious disease research community. Database URL: http://toxomine.org PMID:26130662

  18. Methods for Coding Tobacco-Related Twitter Data: A Systematic Review

    PubMed Central

    Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai

    2017-01-01

Background As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. Objective The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Methods Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. Results E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter’s Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Conclusions Standards for data collection and coding should be developed so that tobacco-related Twitter results can be compared and replicated more easily. Additional recommendations include the following: sample Twitter’s databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Because Twitter is relatively novel and widely used among adolescents and black and Hispanic individuals, it could provide a rich source of tobacco surveillance data among vulnerable populations. PMID:28363883
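The categorical coding the review describes (relevance, theme, etc.) can be sketched as a minimal keyword-based coder. The keyword lists and category names below are illustrative assumptions, not drawn from any of the reviewed studies, which used hand-coding or trained machine-learning classifiers.

```python
# Minimal sketch of categorical coding of tweets for relevance and theme.
# Keyword lists and category names are illustrative assumptions only.

RELEVANCE_TERMS = {"e-cig", "e-cigarette", "vape", "tobacco", "cigarette"}
THEME_TERMS = {
    "marketing": {"sale", "discount", "buy"},
    "cessation": {"quit", "quitting", "stop smoking"},
}

def code_tweet(text: str) -> dict:
    """Assign a relevance flag and a set of themes to one tweet."""
    lower = text.lower()
    relevant = any(term in lower for term in RELEVANCE_TERMS)
    themes = {name for name, terms in THEME_TERMS.items()
              if any(term in lower for term in terms)}
    return {"relevant": relevant, "themes": themes}
```

A real study would replace the keyword matching with a trained classifier, but the output shape (one categorical code set per tweet) is the same.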

  19. ROME (Request Object Management Environment)

    NASA Astrophysics Data System (ADS)

    Kong, M.; Good, J. C.; Berriman, G. B.

    2005-12-01

Most current astronomical archive services are based on an HTML/CGI architecture where users submit HTML forms via a browser and CGI programs operating under a web server process the requests. Most services return an HTML result page with URL links to the result files or, for longer jobs, return a message indicating that email will be sent when the job is done. This paradigm has a few serious shortcomings. First, it is all too common for something to go wrong and for the user to never hear about the job again. Second, for long and complicated jobs there is often important intermediate information that would allow the user to adjust the processing. Finally, unless some sort of custom queueing mechanism is used, background jobs are started immediately upon receiving the CGI request. When there are many such requests, the server machine can easily be overloaded and either slow to a crawl or crash. Request Object Management Environment (ROME) is a collection of middleware components being developed under the National Virtual Observatory Project to provide mechanisms for managing long jobs such as computationally intensive statistical analysis requests or the generation of large-scale mosaic images. Written as EJB objects within the open-source JBoss application server, ROME receives processing requests via a servlet interface, stores them in a DBMS using JDBC, distributes the processing (via queueing mechanisms) across multiple machines and environments (including Grid resources), manages real-time messages from the processing modules, and ensures proper user notification. The request processing modules are identical in structure to standard CGI programs -- though they can optionally implement status messaging -- and can be written in any language. ROME will persist these jobs across failures of processing modules, network outages, and even downtime of ROME and the DBMS, restarting them as necessary.
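The request-persistence idea behind ROME can be modeled compactly: each job is recorded in a database before processing begins, status messages update the record, and any job not yet finished can be identified for restart after an outage. This is an illustrative sketch only, not ROME's actual EJB/JDBC implementation; all table and status names are assumptions.

```python
# Sketch of persistent job management: jobs survive process failures
# because their state lives in a DBMS, not in server memory.
import sqlite3

def make_store() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")  # a real system would use a server DBMS
    conn.execute("CREATE TABLE jobs (id TEXT PRIMARY KEY, status TEXT)")
    return conn

def submit(conn: sqlite3.Connection, job_id: str) -> None:
    """Record the job before any processing starts."""
    conn.execute("INSERT INTO jobs VALUES (?, 'queued')", (job_id,))

def update(conn: sqlite3.Connection, job_id: str, status: str) -> None:
    """Status messages from processing modules update the record."""
    conn.execute("UPDATE jobs SET status = ? WHERE id = ?", (status, job_id))

def restartable(conn: sqlite3.Connection) -> list:
    """Jobs that were queued or running when the system went down."""
    rows = conn.execute(
        "SELECT id FROM jobs WHERE status IN ('queued', 'running')")
    return [r[0] for r in rows]
```

Because job state is externalized, a restarted server can call `restartable()` and resume work, which is the property the CGI model lacks.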

  20. On-Site Social Surveys and the Determination of Social Carrying Capacity in Wildland Recreation Management

    Treesearch

    Patrick C. West

    1981-01-01

It has been suggested that on-site surveys of users fail to measure crowding accurately because long-time users who knew the area before the "crowds" came tend to feel the most crowded, and thus do not return. Such "displaced" users would not be included in current on-site survey samples. Results from a limited test at the Sylvania Recreation Area...

  1. "Less Clicking, More Watching": Results from the User-Centered Design of a Multi-Institutional Web Site for Art and Culture.

    ERIC Educational Resources Information Center

    Vergo, John; Karat, Clare-Marie; Karat, John; Pinhanez, Claudio; Arora, Renee; Cofino, Thomas; Riecken, Doug; Podlaseck, Mark

    This paper summarizes a 10-month long research project conducted at the IBM T.J. Watson Research Center aimed at developing the design concept of a multi-institutional art and culture web site. The work followed a user-centered design (UCD) approach, where interaction with prototypes and feedback from potential users of the web site were sought…

  2. 37 CFR 261.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... through a Web Site. Web Site is a site located on the World Wide Web that can be located by an end user... transmitted over the Internet during the relevant period to all end users within the United States from all...

  3. 37 CFR 261.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... through a Web Site. Web Site is a site located on the World Wide Web that can be located by an end user... transmitted over the Internet during the relevant period to all end users within the United States from all...

  4. Introducing a New Interface for the Online MagIC Database by Integrating Data Uploading, Searching, and Visualization

    NASA Astrophysics Data System (ADS)

    Jarboe, N.; Minnett, R.; Constable, C.; Koppers, A. A.; Tauxe, L.

    2013-12-01

The Magnetics Information Consortium (MagIC) is dedicated to supporting the paleomagnetic, geomagnetic, and rock magnetic communities through the development and maintenance of an online database (http://earthref.org/MAGIC/), data upload and quality control, searches, data downloads, and visualization tools. While MagIC has completed importing some of the IAGA paleomagnetic databases (TRANS, PINT, PSVRL, GPMDB) and continues to import others (ARCHEO, MAGST and SECVR), further individual data uploading from the community contributes a wealth of easily accessible, rich datasets. Previously, uploading data to the MagIC database required the use of an Excel spreadsheet on either a Mac or PC. The new method of uploading data utilizes an HTML5 web interface where the only computer requirement is a modern browser. This web interface highlights all errors discovered in the dataset at once, instead of the iterative error-checking process found in the previous Excel spreadsheet data checker. As a web service, the community will always have easy access to the most up-to-date and bug-free version of the data upload software. The filtering search mechanism of the MagIC database has been changed to a more intuitive system where the data from each contribution are displayed in tables similar to how the data are uploaded (http://earthref.org/MAGIC/search/). Searches themselves can be saved as a permanent URL, if desired. The saved search URL can then be used as a citation in a publication. When appropriate, plots (equal area, Zijderveld, ARAI, demagnetization, etc.) are associated with the data to give the user a quicker understanding of the underlying dataset. The MagIC database will continue to evolve to meet the needs of the paleomagnetic, geomagnetic, and rock magnetic communities.
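The citable saved-search feature amounts to a round-trip between filter settings and a query string. A minimal sketch, assuming illustrative parameter names and URL layout (the actual MagIC permalink format is not specified in the abstract):

```python
# Sketch of a saved-search permalink: filters are serialized into a URL
# that can be cited, and the filters can be recovered from that URL.
# Parameter names and URL layout are assumptions for illustration.
from urllib.parse import urlencode, urlparse, parse_qs

BASE = "http://earthref.org/MAGIC/search/"

def permalink(filters: dict) -> str:
    """Serialize search filters into a shareable, citable query URL."""
    return BASE + "?" + urlencode(sorted(filters.items()))

def filters_from(url: str) -> dict:
    """Recover the filter settings from a saved-search URL."""
    qs = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in qs.items()}
```

Sorting the items makes the permalink stable: the same filters always produce the same URL, which matters when the URL serves as a citation.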

  5. A Serviced-based Approach to Connect Seismological Infrastructures: Current Efforts at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Ahern, Tim; Trabant, Chad

    2014-05-01

As part of the COOPEUS initiative to build infrastructure that connects European and US research infrastructures, IRIS has advocated for the development of federated services based upon internationally recognized standards using web services. By deploying International Federation of Digital Seismograph Networks (FDSN)-endorsed web services at multiple data centers in the US and Europe, we have shown that integration within the seismological domain can be realized. By deploying identical methods to invoke the web services at multiple centers, this approach significantly simplifies how a scientist accesses seismic data (time series, metadata, and earthquake catalogs) from distributed federated centers. IRIS has developed a federator that helps a user identify where seismic data from global seismic networks can be accessed. The web services based federator builds the appropriate URLs and returns them to client software running on the scientist's own computer. These URLs are then used to pull data directly from the distributed centers in a peer-based fashion. IRIS is also involved in deploying web services across horizontal domains. As part of the US National Science Foundation's (NSF) EarthCube effort, an IRIS-led EarthCube Building Blocks project is underway. When completed, this project will aid in the discovery, access, and usability of data across multiple geoscience domains. This presentation will summarize current IRIS efforts in building vertical integration infrastructure within seismology, working closely with 5 centers in Europe and 2 centers in the US, as well as how we are taking first steps toward horizontal integration of data from 14 different domains in the US, in Europe, and around the world.
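Because the FDSN-endorsed services expose identical interfaces at every center, a federator only needs to vary the host when building query URLs. The sketch below follows the FDSN web service URL convention (`/fdsnws/dataselect/1/query` with parameters such as `net` and `sta`); the host list and the federator logic itself are illustrative assumptions, not the actual IRIS federator implementation.

```python
# Sketch of federated URL building: the same FDSN-style query is
# addressed to any number of participating data centers.
from urllib.parse import urlencode

def fdsn_query_url(host: str, **params) -> str:
    """Build an FDSN dataselect query URL for one federated center."""
    query = urlencode(sorted(params.items()))
    return f"http://{host}/fdsnws/dataselect/1/query?{query}"

def federate(hosts, **params):
    """Return one ready-to-fetch URL per participating center; the
    client then pulls data directly from each center."""
    return [fdsn_query_url(h, **params) for h in hosts]
```

This is the peer-based pattern the abstract describes: the federator hands back URLs, and the client fetches from each center directly rather than through a central proxy.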

  6. Making of the underground scientific experimental programme at the Meuse/Haute-Marne underground research laboratory, North Eastern France

    NASA Astrophysics Data System (ADS)

    Delay, Jacques; Vinsot, Agnès; Krieguer, Jean-Marie; Rebours, Hervé; Armand, Gilles

    In November 1999 Andra began building an Underground Research Laboratory (URL) on the border of the Meuse and Haute-Marne departments in eastern France. The research activities of the URL are dedicated to study the feasibility of reversible, deep geological disposal of high-activity, long-lived radioactive wastes in an argillaceous host rock. The Laboratory consists of two shafts, an experimental drift at 445 m depth and a set of technical and experimental drifts at the main level at 490 m depth. The main objective of the research is to characterize the confining properties of the argillaceous rock through in situ hydrogeological tests, chemical measurements and diffusion experiments. In order to achieve this goal, a fundamental understanding of the geoscientific properties and processes that govern geological isolation in clay-rich rocks has been acquired. This understanding includes both the host rocks at the laboratory site and the regional geological context. After establishing the geological conditions, the underground research programme had to demonstrate that the construction and operation of a geological disposal will not introduce pathways for waste migration. Thus, the construction of the laboratory itself serves a research purpose through the monitoring of excavation effects and the optimization of construction technology. These studies are primarily geomechanical in nature, though chemical and hydrogeological coupling also have important roles. In order to achieve the scientific objectives of this project in the underground drifts, a specific methodology has been applied for carrying out the experimental programme conducted concurrently with the construction of the shafts and drifts. This methodology includes technological as well as organizational aspects and a systematic use of feedback from other laboratories abroad and every scientific zone of the URL already installed. 
This methodology was first applied to set up a multi-purpose experimental area at 445 m depth. The setting up of the experimental programme at the 490 m level was then improved using the knowledge acquired during installation of the drift at 445 m. The several steps of the underground scientific programme are illustrated by presenting three experiments carried out in the underground drifts. The first experiment was carried out from the drift at 445 m depth, from the end of 2004 to mid-2005. It aimed at setting up an array of about 16 boreholes to monitor the geomechanical changes during and after construction of the shaft between 445 and 490 m. The second experiment was set up in the drift at 445 m depth, and also at the main level at 490 m depth. It consisted of determining the composition of the interstitial water by circulating gas in one borehole and water of a known composition in the other. The evolution of the composition of both water and gases enabled us to test the thermodynamic model of the water/rock interactions. The third example relates to the testing of a concept for interrupting the excavation damaged zone (EDZ) using cross-cut slot technology. The concept, which was tested successfully at Mont Terri (Switzerland), has been transposed and adapted to the URL site conditions. The results will be used for developing a concept for drift sealing.

  7. QUEST Hanford Site Computer Users - What do they do?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    WITHERSPOON, T.T.

    2000-03-02

The Fluor Hanford Chief Information Office requested that a computer-user survey be conducted to determine users' dependence on the computer and its importance to their ability to accomplish their work. Daily use trends and future needs of Hanford Site personal computer (PC) users were also to be defined. A primary objective was to use the data to determine how budgets should be focused toward providing those services that are truly needed by the users.

  8. Open Source GIS Connectors to the NASA GES DISC Satellite Data

    NASA Astrophysics Data System (ADS)

    Pham, L.; Kempler, S. J.; Yang, W.

    2014-12-01

The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) houses a suite of satellite-derived GIS data including high spatiotemporal resolution precipitation, air quality, and modeled land surface parameter data. The data are extremely useful to various GIS research and applications at regional, continental, and global scales, as evidenced by the growing number of GIS user requests for the data. On the other hand, we have also found that some GIS users, especially those from the ArcGIS community, have difficulty obtaining, importing, and using our data, primarily due to the users' unfamiliarity with our products and GIS software's limited capability to deal with the predominantly raster-form data in various, sometimes very complicated, formats. In this presentation, we introduce a set of open source ArcGIS data connectors that significantly simplify the access and use of our data in ArcGIS. With the connectors, users do not need to know the data access URLs, the access protocols or syntaxes, or the data formats. Nor do they need to browse through a long list of variables that are often embedded in one single science data file and whose names may sometimes be confusing to those not familiar with the file (such as variable CH4_VMR_D for "CH4 Volume mixing ratio from the descending orbit" and variable EVPsfc for "Total Evapotranspiration"). The connectors expose most GIS-related variables to the users with easy-to-understand names. Users can simply define the spatiotemporal range of their study, select the parameter(s) of interest, and have the needed data downloaded, imported, and displayed in ArcGIS. The connectors are Python text files and there is no installation process. They can be placed in any user directory and started by simply clicking on them.
In the presentation, we'll also demonstrate how to use the tools to load GES DISC time series air quality data with a few clicks and how such data depict the spatial and temporal patterns of air quality in different parts of the world during the past decade.
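The core of such a connector is a mapping from readable labels to the cryptic file-variable names, plus construction of the data-access URL for the chosen variable and spatiotemporal range. The sketch below uses the two variable names quoted in the abstract; the mapping labels, URL pattern, and parameter names are otherwise illustrative assumptions, not the connectors' actual code.

```python
# Sketch of the name-mapping idea behind the ArcGIS connectors:
# readable labels are translated to file-variable names, and the
# connector builds the data-request URL. URL pattern is hypothetical.
from urllib.parse import urlencode

FRIENDLY_NAMES = {
    "CH4 volume mixing ratio (descending orbit)": "CH4_VMR_D",
    "Total evapotranspiration": "EVPsfc",
}

def build_request(label: str, bbox, start: str, end: str) -> str:
    """Turn a user's readable selection into a subset-request URL."""
    var = FRIENDLY_NAMES[label]  # translate label to the file variable name
    params = {"variable": var, "bbox": ",".join(map(str, bbox)),
              "start": start, "end": end}
    return "http://example.gesdisc.url/subset?" + urlencode(sorted(params.items()))
```

The user-facing benefit is exactly what the abstract claims: the user picks "Total evapotranspiration" and a study region, never seeing `EVPsfc` or the underlying URL syntax.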

  9. PolyA_DB 3 catalogs cleavage and polyadenylation sites identified by deep sequencing in multiple genomes

    PubMed Central

    Wang, Ruijia; Nambiar, Ram; Zheng, Dinghai

    2018-01-01

Abstract PolyA_DB is a database cataloging cleavage and polyadenylation sites (PASs) in several genomes. Previous versions were based mainly on expressed sequence tags (ESTs), which were limited in quantity and could lead to inaccurate PAS identification due to the presence of internal A-rich sequences in transcripts. Here, we present an updated version of the database based solely on deep sequencing data. First, PASs are mapped by the 3′ region extraction and deep sequencing (3′READS) method, ensuring unequivocal PAS identification. Second, a large volume of data based on diverse biological samples increases PAS coverage by 3.5-fold over the EST-based version and provides PAS usage information. Third, strand-specific RNA-seq data are used to extend annotated 3′ ends of genes to obtain more thorough annotations of alternative polyadenylation (APA) sites. Fourth, conservation information on PASs across mammals sheds light on the significance of APA sites. The database (URL: http://www.polya-db.org/v3) currently holds PASs in human, mouse, rat and chicken, and has links to the UCSC genome browser for further visualization and for integration with other genomic data. PMID:29069441

  10. LymPHOS 2.0: an update of a phosphosite database of primary human T cells

    PubMed Central

    Nguyen, Tien Dung; Vidal-Cortes, Oriol; Gallardo, Oscar; Abian, Joaquin; Carrascal, Montserrat

    2015-01-01

    LymPHOS is a web-oriented database containing peptide and protein sequences and spectrometric information on the phosphoproteome of primary human T-Lymphocytes. Current release 2.0 contains 15 566 phosphorylation sites from 8273 unique phosphopeptides and 4937 proteins, which correspond to a 45-fold increase over the original database description. It now includes quantitative data on phosphorylation changes after time-dependent treatment with activators of the TCR-mediated signal transduction pathway. Sequence data quality has also been improved with the use of multiple search engines for database searching. LymPHOS can be publicly accessed at http://www.lymphos.org. Database URL: http://www.lymphos.org. PMID:26708986

  11. Internet resources for the anaesthesiologist.

    PubMed

    Johnson, Edward

    2012-05-01

There is considerable useful information about anaesthesia available on the World Wide Web. However, at present, it is very incomplete and scattered across many sites. Many anaesthetists find it difficult to obtain the information they need because of the sheer volume of information available on the Internet. This article starts with the basics of the Internet and how to utilize search engines to best effect, and presents a comprehensive list of important websites. These websites, which are felt to offer high educational value for anaesthesiologists, have been selected from an extensive search of the Internet. Top-rated anaesthesia websites, blogs, forums, societies, e-books, e-journals and educational resources are discussed in detail, with relevant URLs.

  12. Internet resources for the anaesthesiologist

    PubMed Central

    Johnson, Edward

    2012-01-01

There is considerable useful information about anaesthesia available on the World Wide Web. However, at present, it is very incomplete and scattered across many sites. Many anaesthetists find it difficult to obtain the information they need because of the sheer volume of information available on the Internet. This article starts with the basics of the Internet and how to utilize search engines to best effect, and presents a comprehensive list of important websites. These websites, which are felt to offer high educational value for anaesthesiologists, have been selected from an extensive search of the Internet. Top-rated anaesthesia websites, blogs, forums, societies, e-books, e-journals and educational resources are discussed in detail, with relevant URLs. PMID:22923818

  13. The Effectiveness of Commercial Internet Web Sites: A User's Perspective.

    ERIC Educational Resources Information Center

    Bell, Hudson; Tang, Nelson K. H.

    1998-01-01

    A user survey of 60 company Web sites (electronic commerce, entertainment and leisure, financial and banking services, information services, retailing and travel, and tourism) determined that 30% had facilities for conducting online transactions and only 7% charged for site access. Overall, Web sites were rated high in ease of access, content, and…

  14. Appendix A. Borderlands Site Database

    Treesearch

    A.C. MacWilliams

    2006-01-01

The database includes modified components of the Arizona State Museum Site Recording System (Arizona State Museum 1993) and the New Mexico NMCRIS User's Guide (State of New Mexico 1993). When sites contain more than one recorded component, these instances were entered separately with the result that many sites have multiple entries. Information for this database...

  15. Improving ATLAS grid site reliability with functional tests using HammerCloud

    NASA Astrophysics Data System (ADS)

    Elmsheuser, Johannes; Legger, Federica; Medrano Llamas, Ramon; Sciacca, Gianfranco; van der Ster, Dan

    2012-12-01

With the exponential growth of LHC (Large Hadron Collider) data in 2011, and more coming in 2012, distributed computing has become the established way to analyse collider data. The ATLAS grid infrastructure includes almost 100 sites worldwide, ranging from large national computing centers to smaller university clusters. These facilities are used for data reconstruction and simulation, which are centrally managed by the ATLAS production system, and for distributed user analysis. To ensure the smooth operation of such a complex system, regular tests of all sites are necessary to validate each site's capability to successfully execute user and production jobs. We report on the development, optimization and results of an automated functional testing suite using the HammerCloud framework. Functional tests are short, lightweight applications covering typical user analysis and production schemes, which are periodically submitted to all ATLAS grid sites. Results from those tests are collected and used to evaluate site performance. Sites that fail or are unable to run the tests are automatically excluded from the PanDA brokerage system, thus preventing user or production jobs from being sent to problematic sites.

  16. Enabling Interoperable and Selective Data Sharing among Social Networking Sites

    NASA Astrophysics Data System (ADS)

    Shin, Dongwan; Lopes, Rodrigo

With the widespread use of social networking (SN) sites and even the introduction of a social component in non-social oriented services, there is a growing concern over user privacy in general, and over how to handle and share user profiles across SN sites in particular. Although there have been several proprietary or open source-based approaches to unifying the creation of third-party applications, the availability and retrieval of user profile information are still limited to the site where the third-party application is run, mostly devoid of support for data interoperability. In this paper we propose an approach to enabling interoperable and selective data sharing among SN sites. To support selective data sharing, we discuss an authenticated dictionary (ADT)-based credential which enables a user to share only a subset of her information, certified by external SN sites, with applications running on an SN site. For interoperable data sharing, we propose an extension to the OpenSocial API so that it can provide an open source-based framework for allowing the ADT-based credential to be used seamlessly among different SN sites.

  17. Access to the NCAR Research Data Archive via the Globus Data Transfer Service

    NASA Astrophysics Data System (ADS)

    Cram, T.; Schuster, D.; Ji, Z.; Worley, S. J.

    2014-12-01

The NCAR Research Data Archive (RDA; http://rda.ucar.edu) contains a large and diverse collection of meteorological and oceanographic observations, operational and reanalysis outputs, and remote sensing datasets to support atmospheric and geoscience research. The RDA contains more than 600 dataset collections which support the varying needs of a diverse user community. The number of RDA users is increasing annually, and the most popular method of accessing the RDA data holdings is through web-based protocols, such as wget- and cURL-based scripts. In the year 2013, 10,000 unique users downloaded more than 820 terabytes of data from the RDA, and customized data products were prepared for more than 29,000 user-driven requests. To further support this increase in web download usage, the RDA is implementing the Globus data transfer service (www.globus.org) to provide a GridFTP data transfer option for the user community. The Globus service is broadly scalable, has an easy-to-install client, is sustainably supported, and provides a robust, efficient, and reliable data transfer option for RDA users. This paper highlights the main functionality and usefulness of the Globus data transfer service for accessing the RDA holdings. The Globus data transfer service, developed and supported by the Computation Institute at The University of Chicago and Argonne National Laboratory, uses GridFTP as a fast, secure, and reliable method for transferring data between two endpoints. A Globus user account is required to use this service, and data transfer endpoints are defined on the Globus web interface. In the RDA use cases, the access endpoint is created on the RDA data server at NCAR. The data user defines the receiving endpoint for the transfer, which can be the main file system at a host institution, a personal workstation, or a laptop.
Once initiated, the data transfer runs as an unattended background process by Globus, and Globus ensures that the transfer is accurately fulfilled. Users can monitor the data transfer progress on the Globus web interface and optionally receive an email notification once it is complete. Globus also provides a command-line interface to support scripted transfers, which can be useful when embedded in data processing workflows.
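The unattended-transfer pattern described above reduces, from the client's point of view, to polling a transfer's status until it reaches a terminal state. This is a generic illustration of that pattern, not the Globus API or CLI; the status names are assumptions.

```python
# Sketch of monitoring an unattended background transfer: the client
# polls a status source until the transfer reports a terminal state.
# Status names ("ACTIVE", "SUCCEEDED", "FAILED") are assumptions.

def poll_until_done(get_status, max_polls: int = 100) -> str:
    """Poll a status callable until the transfer finishes or we give up."""
    for _ in range(max_polls):
        status = get_status()
        if status in ("SUCCEEDED", "FAILED"):
            return status
    return "TIMEOUT"
```

In a scripted workflow, `get_status` would query the transfer service (with a sleep between polls); the key point is that the transfer itself proceeds server-side whether or not the client is watching.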

  18. Robotic Telepresence: Perception, Performance, and User Experience

    DTIC Science & Technology

    2012-02-01

    defined as “a human-computer-machine condition in which a user receives sufficient information about a remote, real-world site through a machine so...that the user feels physically present at the remote, real-world site ” (Aliberti and Bruen, 2006). Telepresence often includes capabilities for a more...outdoor route reconnaissance course (figures 4 and 5) was located at the Molnar MOUT (Military Operations in Urban Terrain) site in Fort Benning, GA. It

  19. Making YOHKOH SXT Images Available to the Public: The YOHKOH Public Outreach Project

    NASA Astrophysics Data System (ADS)

    Larson, M. B.; McKenzie, D.; Slater, T.; Acton, L.; Alexander, D.; Freeland, S.; Lemen, J.; Metcalf, T.

    1999-05-01

The NASA-funded Yohkoh Public Outreach Project (YPOP) provides public access to high quality Yohkoh SXT data via the World Wide Web. The products of this effort are available to the scientific research community, K-12 schools, and informal education centers including planetaria, museums, and libraries. The project utilizes the intrinsic excitement of the SXT data, and in particular the SXT movies, to develop science learning tools and classroom activities. The WWW site at URL: http://solar.physics.montana.edu/YPOP/ uses a movie theater theme to highlight available Yohkoh movies in a format that is entertaining and inviting to non-scientists. The site features informational tours of the Sun as a star, the solar magnetic field, the internal structure and the Sun's general features. The on-line Solar Classroom has proven very popular, showcasing hands-on activities about image filtering, the solar cycle, satellite orbits, image processing, construction of a model Yohkoh satellite, solar rotation, measuring sunspots and building a portable sundial. The YPOP Guestbook has been helpful in evaluating the usefulness of the site, with over 300 detailed comments to date.

  20. Would you tell everyone this? Facebook conversations as health promotion interventions.

    PubMed

    Syred, Jonathan; Naidoo, Carla; Woodhall, Sarah C; Baraitser, Paula

    2014-04-11

Health promotion interventions on social networking sites can communicate individually tailored content to a large audience. User-generated content helps to maximize engagement, but health promotion websites have had variable success in supporting user engagement. The aim of our study was to examine which elements of moderator and participant behavior stimulated and maintained interaction with a sexual health promotion site on Facebook. We examined the pattern and content of posts on a Facebook page. Google Analytics was used to describe the number of people using the page and their viewing patterns. A qualitative, thematic approach was used to analyze content. During the study period (January 18, 2010, to June 27, 2010), 576 users interacted 888 times with the site through 508 posts and 380 comments, with 93% of content generated by users. The user-generated conversation continued while new participants were driven to the site by advertising, but interaction with the site ceased rapidly after the advertising stopped. Conversations covered key issues on chlamydia and chlamydia testing. Users endorsed testing, celebrated their negative results, and modified and questioned key messages. There was variation in users' approaches to the site, from sharing personal experience and requesting help to joking about sexually transmitted infection. The moderator voice was reactive, unengaged, tolerant, simplistic, and professional in tone. There was no change in the moderator approach throughout the period studied. Our findings suggest this health promotion site provided a space for single user posts but not a self-sustaining conversation. Possible explanations for this include little new content from the moderator, a definition of content too narrow to hold the interest of participants, and limited responsiveness to user needs.
Implications for health promotion practice include the need to consider a life cycle approach to online community development for health promotion and the need for a developing moderator strategy to reflect this. This strategy should reflect two facets of moderation for online health promotion interventions: (1) unengaged and professional oversight to provide a safe space for discussion and to maintain information quality, and (2) a more engaged and interactive presence designed to maintain interest that generates new material for discussion and is responsive to user requests.

  1. Publishing NASA Metadata as Linked Open Data for Semantic Mashups

    NASA Astrophysics Data System (ADS)

    Wilson, Brian; Manipon, Gerald; Hua, Hook

    2014-05-01

Data providers are now publishing more metadata in more interoperable forms, e.g. Atom or RSS 'casts', as Linked Open Data (LOD), or as ISO metadata records. A major effort on the part of NASA's Earth Science Data and Information System (ESDIS) project is the aggregation of metadata that enables greater data interoperability among scientific data sets regardless of source or application. Both the Earth Observing System (EOS) ClearingHOuse (ECHO) and the Global Change Master Directory (GCMD) repositories contain metadata records for NASA (and other) datasets and provided services. These records contain typical fields for each dataset (or software service) such as the source, creation date, cognizant institution, related access URLs, and domain and variable keywords to enable discovery. Under a NASA ACCESS grant, we demonstrated how to publish the ECHO and GCMD dataset and services metadata as LOD in the RDF format. Both sets of metadata are now queryable at SPARQL endpoints and available for integration into "semantic mashups" in the browser. It is straightforward to reformat sets of XML metadata, including ISO, into simple RDF and then later refine and improve the RDF predicates by reusing known namespaces such as Dublin Core, GeoRSS, etc. All scientific metadata should be part of the LOD world. In addition, we developed an "instant" drill-down and browse interface that provides faceted navigation so that the user can discover and explore the 25,000 datasets and 3000 services. The available facets and the free-text search box appear in the left panel, and the instantly updated results for the dataset search appear in the right panel. The user can constrain the value of a metadata facet simply by clicking on a word (or phrase) in the "word cloud" of values for each facet.
The display section for each dataset includes the important metadata fields, a full description of the dataset, potentially some related URLs, and a "search" button that points to an OpenSearch GUI that is pre-configured to search for granules within the dataset. We will present our experiences with converting NASA metadata into LOD, discuss the challenges, illustrate some of the enabled mashups, and demonstrate the latest version of the "instant browse" interface for navigating multiple metadata collections.
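
As a rough illustration of the reformatting step the abstract describes (not the authors' actual pipeline; the record fields, identifiers, and predicate mapping below are hypothetical), a simple XML metadata record can be lowered into RDF N-Triples by reusing Dublin Core and DCAT predicates:

```python
import xml.etree.ElementTree as ET

# Hypothetical dataset record, standing in for an ECHO/GCMD metadata entry.
record_xml = """
<dataset id="C123-TEST">
  <title>Sample Sea Surface Temperature L3</title>
  <institution>NASA/JPL</institution>
  <created>2014-05-01</created>
  <accessURL>http://example.org/data/sst_l3</accessURL>
</dataset>
"""

# Map simple XML fields onto well-known RDF predicates (Dublin Core, DCAT).
PREDICATES = {
    "title": "http://purl.org/dc/terms/title",
    "institution": "http://purl.org/dc/terms/publisher",
    "created": "http://purl.org/dc/terms/created",
    "accessURL": "http://www.w3.org/ns/dcat#accessURL",
}

def xml_to_ntriples(xml_text, base="http://example.org/dataset/"):
    root = ET.fromstring(xml_text)
    subject = f"<{base}{root.attrib['id']}>"
    triples = []
    for child in root:
        pred = PREDICATES.get(child.tag)
        if pred is None:
            continue  # skip fields with no mapped predicate
        obj = child.text.strip()
        if child.tag == "accessURL":
            triples.append(f"{subject} <{pred}> <{obj}> .")  # URL object
        else:
            triples.append(f'{subject} <{pred}> "{obj}" .')  # literal object
    return "\n".join(triples)

print(xml_to_ntriples(record_xml))
```

The resulting triples could then be loaded into any SPARQL-capable store; refining the predicate vocabulary later is a matter of editing the mapping table, which is the incremental path the abstract suggests.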

  2. An Analysis of SE and MBSE Concepts to Support Defence Capability Acquisition

    DTIC Science & Technology

    2014-09-01

    Government Department of Finance and Deregulation, Canberra, ACT, August 2011. [online] URL: http://agimo.gov.au/files/2012/04/AGA_RM_v3_0.pdf ANSI...First Time, White Paper, Aberdeen Group, August 2011. [online] URL: http://www.aberdeen.com/Aberdeen-Library/7121/RA-system-design...Edge e-zine, IBM Software Group, August 2003. Cantor 2003b Cantor, Murray, Rational Unified Process for Systems Engineering Part II: System

  3. MONITORING OF PORE WATER PRESSURE AND WATER CONTENT AROUND A HORIZONTAL DRIFT THROUGH EXCAVATION - MEASUREMENT AT THE 140m GALLERY IN THE HORONOBE URL -

    NASA Astrophysics Data System (ADS)

    Yabuuchi, Satoshi; Kunimaru, Takanori; Kishi, Atsuyasu; Komatsu, Mitsuru

    Japan Atomic Energy Agency has been conducting the Horonobe Underground Research Laboratory (URL) project in Horonobe, Hokkaido, as a part of the research and development program on geological disposal of high-level radioactive waste. Pore water pressure and water content around a horizontal drift in the URL have been monitored for over 18 months, beginning before the drift excavation started. During the drift excavation, both pore water pressure and water content decreased. After excavation, pore water pressure remained positive, though it continued to decrease with a gradually diminishing gradient, while water content began to increase about 6 months after the excavation was completed and then fell again about 5 months later. An unsaturated zone containing gases that had been dissolved in groundwater may have formed around the horizontal drift.

  4. Schroedinger’s code: Source code availability and transparency in astrophysics

    NASA Astrophysics Data System (ADS)

    Ryan, PW; Allen, Alice; Teuben, Peter

    2018-01-01

    Astronomers use software for their research, but how many of the codes they use are available as source code? We examined a sample of 166 papers from 2015 for clearly identified software use, then searched for source code for the software packages mentioned in these research papers. We categorized the software to indicate whether source code is available for download and whether there are restrictions to accessing it, and if source code was not available, whether some other form of the software, such as a binary, was. Over 40% of the source code for the software used in our sample was not available for download. As URLs have often been used as proxy citations for software, we also extracted URLs from one journal’s 2015 research articles, removed those from certain long-term, reliable domains, and tested the remainder to determine what percentage of these URLs were still accessible in September and October 2017.
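
The URL-extraction step described above can be sketched as follows; the regular expression, sample text, and "reliable domain" list are illustrative assumptions, not the study's actual materials:

```python
import re
from urllib.parse import urlparse

# Hypothetical article text; the domain list is illustrative, not the
# set of long-term domains the study excluded.
text = """Code at http://ascl.net/1234.567 and https://example-lab.edu/pipeline;
data mirrored at http://archive.org/details/run42."""

LONG_TERM_DOMAINS = {"ascl.net", "archive.org"}  # assumed stable hosts

def extract_candidate_urls(text):
    # Grab http(s) URLs, stopping at whitespace and common delimiters.
    urls = re.findall(r"https?://[^\s;,)\"']+", text)
    # Strip trailing punctuation left over from sentence context.
    urls = [u.rstrip(".,;") for u in urls]
    # Keep only URLs outside the long-term, reliable domains.
    return [u for u in urls if urlparse(u).netloc not in LONG_TERM_DOMAINS]

print(extract_candidate_urls(text))  # → ['https://example-lab.edu/pipeline']
```

The surviving candidates would then be probed (e.g. with HTTP requests) to measure how many remain accessible; that network step is omitted here.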

  5. Book Out! An Inventory Story

    NASA Technical Reports Server (NTRS)

    Panait, Claudia M.

    2004-01-01

    The NASA Glenn Library is a science and engineering research library providing the most current books, journals, CD-ROMs and documents to support the study of aeronautics, space propulsion and power, communications technology, materials and structures, and microgravity science. The GRC technical library also supports the research and development efforts of all scientists and engineers on site via full-text electronic files, literature searching, technical reports, etc. As an intern in the NASA Glenn Library, I attempt to support these objectives by efficiently and effectively fulfilling the assignment that was given to me. The assignment delegated to me was to catalog National Advisory Committee for Aeronautics (NACA) and NASA technical documents into NASA Galaxie. This process consists of holdings being added to existing Galaxie records, upgrades and editing of the bibliographic records when needed, and adding URLs into Galaxie when they were missing from the record. NASA ASAP and Digidoc were used to locate URLs of PDFs that were not in Galaxie. A spreadsheet of documents with no URLs was maintained. Also, a subject channel of web, full-text, paid and free, journal and other subject-specific pages was developed and expanded from current content of intranet pages. To expand upon the second half of my assignment, I was given the project of taking inventory of the library's book collection. I kept record of the books that were not accounted for on a master list I was given to work from and submitted them for correction and addition. I also made sure the books were placed in the appropriate order and made corrections to any discrepancies that existed between the master list and what was on the shelf. Upon completion of this assignment, I will have verified that 21,113 books are in the correct location and order and have the correct corresponding serial number and barcode. 
In conclusion, as of this date I have input around 750 documents into NASA Galaxie, about half of the NASA technical documents in the system. The rest of my tenure in this program will consist of finishing the other half of the reports. In regard to the second assignment, I still have about three-quarters of the collection to record and correct.

  6. Overview of the interactive task in BioCreative V.

    PubMed

    Wang, Qinghua; S Abdul, Shabbir; Almeida, Lara; Ananiadou, Sophia; Balderas-Martínez, Yalbi I; Batista-Navarro, Riza; Campos, David; Chilton, Lucy; Chou, Hui-Jou; Contreras, Gabriela; Cooper, Laurel; Dai, Hong-Jie; Ferrell, Barbra; Fluck, Juliane; Gama-Castro, Socorro; George, Nancy; Gkoutos, Georgios; Irin, Afroza K; Jensen, Lars J; Jimenez, Silvia; Jue, Toni R; Keseler, Ingrid; Madan, Sumit; Matos, Sérgio; McQuilton, Peter; Milacic, Marija; Mort, Matthew; Natarajan, Jeyakumar; Pafilis, Evangelos; Pereira, Emiliano; Rao, Shruti; Rinaldi, Fabio; Rothfels, Karen; Salgado, David; Silva, Raquel M; Singh, Onkar; Stefancsik, Raymund; Su, Chu-Hsien; Subramani, Suresh; Tadepally, Hamsa D; Tsaprouni, Loukia; Vasilevsky, Nicole; Wang, Xiaodong; Chatr-Aryamontri, Andrew; Laulederkind, Stanley J F; Matis-Mitchell, Sherri; McEntyre, Johanna; Orchard, Sandra; Pundir, Sangya; Rodriguez-Esteban, Raul; Van Auken, Kimberly; Lu, Zhiyong; Schaeffer, Mary; Wu, Cathy H; Hirschman, Lynette; Arighi, Cecilia N

    2016-01-01

    Fully automated text mining (TM) systems promote efficient literature searching, retrieval, and review but are not sufficient to produce ready-to-consume curated documents. These systems are not meant to replace biocurators, but instead to assist them in one or more literature curation steps. To do so, the user interface is an important aspect that needs to be considered for tool adoption. The BioCreative Interactive task (IAT) is a track designed for exploring user-system interactions, promoting development of useful TM tools, and providing a communication channel between the biocuration and the TM communities. In BioCreative V, the IAT track followed a format similar to previous interactive tracks, where the utility and usability of TM tools, as well as the generation of use cases, have been the focal points. The proposed curation tasks are user-centric and formally evaluated by biocurators. In BioCreative V IAT, seven TM systems and 43 biocurators participated. Two levels of user participation were offered to broaden curator involvement and obtain more feedback on usability aspects. The full-level participation involved training on the system, curation of a set of documents with and without TM assistance, tracking of time-on-task, and completion of a user survey. The partial-level participation was designed to focus on usability aspects of the interface and not on performance per se. In this case, biocurators navigated the system by performing pre-designed tasks and were then asked whether they were able to achieve the task and the level of difficulty in completing the task. In this manuscript, we describe the development of the interactive task, from planning to execution, and discuss major findings for the systems tested. Database URL: http://www.biocreative.org. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.

  7. yStreX: yeast stress expression database

    PubMed Central

    Wanichthanarak, Kwanjeera; Nookaew, Intawat; Petranovic, Dina

    2014-01-01

    Over the past decade, genome-wide expression analyses have often been used to study how the expression of genes changes in response to various environmental stresses. Many of these studies (such as effects of oxygen concentration, temperature stress, low pH stress, osmotic stress, depletion or limitation of nutrients, addition of different chemical compounds, etc.) have been conducted in the unicellular eukaryal model, the yeast Saccharomyces cerevisiae. However, the lack of a unifying, integrated bioinformatics platform that would permit efficient and rapid use of all these existing data remains an important issue. To facilitate research by exploiting existing transcription data in the field of yeast physiology, we have developed the yStreX database. It is an online repository of analyzed gene expression data from curated data sets from different studies that capture genome-wide transcriptional changes in response to diverse environmental transitions. The first aim of this online database is to facilitate comparison of cross-platform and cross-laboratory gene expression data. Additionally, we performed different expression analyses, meta-analyses and gene set enrichment analyses, and the results are also deposited in this database. Lastly, we constructed a user-friendly Web interface with interactive visualization to provide intuitive access and to display the queried data for users with no background in bioinformatics. Database URL: http://www.ystrexdb.com PMID:25024351
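
The gene set enrichment analyses mentioned above commonly reduce to a hypergeometric tail test: how surprising is the overlap between a gene set and a list of differentially expressed genes? A minimal sketch, with illustrative numbers not drawn from yStreX:

```python
from math import comb

def enrichment_p(N, K, n, k):
    """Hypergeometric tail P(X >= k): probability of seeing at least k genes
    from a K-member gene set in a sample of n genes drawn from a universe
    of N genes, under random sampling without replacement."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Toy numbers: universe of 20 genes, 5 annotated to a stress pathway,
# 5 differentially expressed, 3 of which are in the pathway.
p = enrichment_p(N=20, K=5, n=5, k=3)
print(p)  # ≈ 0.073
```

Real analyses also correct such p-values for multiple testing across many gene sets; that step is omitted here.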

  8. LCR-eXXXplorer: a web platform to search, visualize and share data for low complexity regions in protein sequences.

    PubMed

    Kirmitzoglou, Ioannis; Promponas, Vasilis J

    2015-07-01

    Local compositionally biased and low complexity regions (LCRs) in amino acid sequences initially attracted the interest of researchers due to their implication in generating artifacts in sequence database searches. There is accumulating evidence of the biological significance of LCRs in both physiological and pathological situations. Nonetheless, LCR-related algorithms and tools have not gained wide appreciation across the research community, partly due to the fact that only a handful of user-friendly software tools are currently freely available. We developed LCR-eXXXplorer, an extensible online platform attempting to fill this gap. LCR-eXXXplorer offers tools for displaying LCRs from the UniProt/SwissProt knowledgebase, in combination with other relevant protein features, predicted or experimentally verified. Moreover, users may perform powerful queries against a custom-designed sequence/LCR-centric database. We anticipate that LCR-eXXXplorer will be a useful starting point in research efforts for the elucidation of the structure, function and evolution of proteins with LCRs. LCR-eXXXplorer is freely available at the URL http://repeat.biol.ucy.ac.cy/lcr-exxxplorer. Contact: vprobon@ucy.ac.cy. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  9. A User-centered Model for Web Site Design

    PubMed Central

    Kinzie, Mable B.; Cohn, Wendy F.; Julian, Marti F.; Knaus, William A.

    2002-01-01

    As the Internet continues to grow as a delivery medium for health information, the design of effective Web sites becomes increasingly important. In this paper, the authors provide an overview of one effective model for Web site design, a user-centered process that includes techniques for needs assessment, goal/task analysis, user interface design, and rapid prototyping. They detail how this approach was employed to design a family health history Web site, Health Heritage. This Web site helps patients record and maintain their family health histories in a secure, confidential manner. It also supports primary care physicians through analysis of health histories, identification of potential risks, and provision of health care recommendations. Visual examples of the design process are provided to show how the use of this model resulted in an easy-to-use Web site that is likely to meet user needs. The model is effective across diverse content arenas and is appropriate for applications in varied media. PMID:12087113

  10. Electronic Cigarette Marketing Online: a Multi-Site, Multi-Product Comparison.

    PubMed

    Chu, Kar-Hai; Sidhu, Anupreet K; Valente, Thomas W

    2015-01-01

    Electronic cigarette awareness and use have been increasing rapidly. E-cigarette brands have utilized social networking sites to promote their products, as the growth of the e-cigarette industry has paralleled that of Web 2.0. These online platforms are cost-effective and have unique technological features and user demographics that can be attractive for selective marketing. The popularity of multiple sites also poses a risk of exposure to social networks where e-cigarette brands might not have a presence. To examine the marketing strategies of leading e-cigarette brands on multiple social networking sites, and to identify how affordances of the digital media are used to their advantage. Secondary analyses include determining if any brands are benefitting from site demographics, and exploring cross-site diffusion of marketing content through multi-site users. We collected data from two e-cigarette brands from four social networking sites over approximately 2.5 years. Content analysis was used to search for themes, population targeting, marketing strategies, and cross-site spread of messages. Twitter appeared to be the most frequently used social networking site for interacting directly with product users. Facebook supported informational broadcasts, such as announcements regarding political legislation. E-cigarette brands also differed in their approaches to their users, from informal conversations to direct product marketing. E-cigarette makers use different strategies to market their product and engage their users. There was no evidence of direct targeting of vulnerable populations, but the affordances of the different sites are exploited to best broadcast context-specific messages. We developed a viable method to study cross-site diffusion, although additional refinement is needed to account for how different types of digital media are used.

  11. Electronic Cigarette Marketing Online: a Multi-Site, Multi-Product Comparison

    PubMed Central

    Chu, Kar-Hai; Sidhu, Anupreet K; Valente, Thomas W

    2015-01-01

    Background: Electronic cigarette awareness and use have been increasing rapidly. E-cigarette brands have utilized social networking sites to promote their products, as the growth of the e-cigarette industry has paralleled that of Web 2.0. These online platforms are cost-effective and have unique technological features and user demographics that can be attractive for selective marketing. The popularity of multiple sites also poses a risk of exposure to social networks where e-cigarette brands might not have a presence. Objective: To examine the marketing strategies of leading e-cigarette brands on multiple social networking sites, and to identify how affordances of the digital media are used to their advantage. Secondary analyses include determining if any brands are benefitting from site demographics, and exploring cross-site diffusion of marketing content through multi-site users. Methods: We collected data from two e-cigarette brands from four social networking sites over approximately 2.5 years. Content analysis was used to search for themes, population targeting, marketing strategies, and cross-site spread of messages. Results: Twitter appeared to be the most frequently used social networking site for interacting directly with product users. Facebook supported informational broadcasts, such as announcements regarding political legislation. E-cigarette brands also differed in their approaches to their users, from informal conversations to direct product marketing. Conclusions: E-cigarette makers use different strategies to market their product and engage their users. There was no evidence of direct targeting of vulnerable populations, but the affordances of the different sites are exploited to best broadcast context-specific messages. We developed a viable method to study cross-site diffusion, although additional refinement is needed to account for how different types of digital media are used. PMID:27227129

  12. End User Evaluations

    NASA Astrophysics Data System (ADS)

    Jay, Caroline; Lunn, Darren; Michailidou, Eleni

    As new technologies emerge, and Web sites become increasingly sophisticated, ensuring they remain accessible to disabled and small-screen users is a major challenge. While guidelines and automated evaluation tools are useful for informing some aspects of Web site design, numerous studies have demonstrated that they provide no guarantee that the site is genuinely accessible. The only reliable way to evaluate the accessibility of a site is to study the intended users interacting with it. This chapter outlines the processes that can be used throughout the design life cycle to ensure Web accessibility, describing their strengths and weaknesses, and discussing the practical and ethical considerations that they entail. The chapter also considers an important emerging trend in user evaluations: combining data from studies of “standard” Web use with data describing existing accessibility issues, to drive accessibility solutions forward.

  13. Expansion of the On-line Archive "Statistically Downscaled WCRP CMIP3 Climate Projections"

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Pruitt, T.; Maurer, E. P.; Das, T.; Duffy, P.; White, K.

    2009-12-01

    Presentation highlights status and plans for a public-access archive of downscaled CMIP3 climate projections. Incorporating climate projection information into long-term evaluations of water and energy resources requires analysts to have access to projections at "basin-relevant" resolution. Such projections would ideally be bias-corrected to account for climate model tendencies to systematically simulate historical conditions different than observed. In 2007, the U.S. Bureau of Reclamation, Santa Clara University and Lawrence Livermore National Laboratory (LLNL) collaborated to develop an archive of 112 bias-corrected and spatially disaggregated (BCSD) CMIP3 temperature and precipitation projections. These projections were generated using 16 CMIP3 models to simulate three emissions pathways (A2, A1b, and B1) from one or more initializations (runs). Projections are specified on a monthly time step from 1950-2099 and at 0.125 degree spatial resolution within the North American Land Data Assimilation System domain (i.e. contiguous U.S., southern Canada and northern Mexico). Archive data are freely accessible at LLNL Green Data Oasis (url). Since being launched, the archive has served over 3500 data requests by nearly 500 users in support of a range of planning, research and educational activities. Archive developers continue to look for ways to improve the archive and respond to user needs. One request has been to serve the intermediate datasets generated during the BCSD procedure, helping users to interpret the relative influences of the bias-correction and spatial disaggregation on the transformed CMIP3 output. This request has been addressed with intermediate datasets now posted at the archive web-site. Another request relates closely to studying hydrologic and ecological impacts under climate change, where users are asking for projected diurnal temperature information (e.g., projected daily minimum and maximum temperature) and daily time step resolution. 
In response, archive developers are adding content in 2010, teaming with Scripps Institution of Oceanography (through their NOAA-RISA California-Nevada Applications Program and the California Climate Change Center) to apply a new daily downscaling technique to a sub-ensemble of the archive’s CMIP3 projections. The new technique, Bias-Corrected Constructed Analogs, combines the BC part of BCSD with a recently developed technique that preserves the daily sequencing structure of CMIP3 projections (Constructed Analogs, or CA). Such data will more easily serve hydrologic and ecological impacts assessments, and offer an opportunity to evaluate projection uncertainty associated with downscaling technique. Looking ahead to the arrival of CMIP5 projections, archive collaborators plan to apply both BCSD and BCCA over the contiguous U.S., consistent with the CMIP3 applications above, and also to apply BCSD globally at a 0.5 degree spatial resolution. The latter effort involves collaboration with the U.S. Army Corps of Engineers (USACE) and Climate Central.
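
The "BC" step shared by BCSD and BCCA is a quantile-mapping bias correction: a model value is replaced by the observed value at the same empirical quantile of the respective historical distributions. A minimal nearest-rank sketch (toy climatologies, not archive data):

```python
import bisect

def quantile_map(value, model_clim, obs_clim):
    """Map a model value to the observed value at the same empirical
    quantile (a minimal sketch of the 'BC' step; real implementations
    interpolate between quantiles and handle out-of-range values)."""
    m = sorted(model_clim)
    o = sorted(obs_clim)
    # Empirical quantile of the value within the model climatology.
    rank = bisect.bisect_left(m, value)
    q = rank / max(len(m) - 1, 1)
    # Read off the observed value at that quantile (nearest rank).
    idx = min(round(q * (len(o) - 1)), len(o) - 1)
    return o[idx]

# Model runs ~2 degrees warm relative to observations; mapping a model
# value of 14 through the quantiles removes the bias.
print(quantile_map(14, [10, 12, 14, 16, 18], [8, 10, 12, 14, 16]))  # → 12
```

In BCSD this correction is applied per month and per grid cell before spatial disaggregation; in BCCA it precedes the constructed-analogs step.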

  14. What a User Wants: Redesigning a Library's Web Site Based on a Card-Sort Analysis

    ERIC Educational Resources Information Center

    Robbins, Laura Pope; Esposito, Lisa; Kretz, Chris; Aloi, Michael

    2007-01-01

    Web site usability concerns anyone with a Web site to maintain. Libraries, however, are often the biggest offenders in terms of usability. In our efforts to provide users with everything they need for research, we often overwhelm them with sites that are confusing in structure, difficult to navigate, and weighed down with jargon. Dowling College…

  15. Decision Analysis for Remediation Technologies (DART) user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sebo, D.

    1997-09-01

    This user's manual is an introduction to the use of the Decision Analysis for Remediation Technology (DART) Report Generator. DART provides a user interface to a database containing site data (e.g., contaminants, waste depth, area) for sites within the Subsurface Contaminant Focus Area (SCFA). The database also contains SCFA requirements, needs, and technology information. The manual is arranged in two major sections. The first section describes loading DART onto a user system. The second section describes DART operation. DART operation is organized into sections by the user interface forms. For each form, user input (both optional and required), DART capabilities, and the results of user selections are covered in sufficient detail to enable the user to understand DART capabilities and determine how to use DART to meet specific needs.

  16. Newly Released TRMM Version 7 Products, Other Precipitation Datasets and Data Services at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Ostrenga, D.; Teng, W. L.; Trivedi, Bhagirath; Kempler, S.

    2012-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is home of global precipitation product archives, in particular, the Tropical Rainfall Measuring Mission (TRMM) products. TRMM is a joint U.S.-Japan satellite mission to monitor tropical and subtropical (40°S-40°N) precipitation and to estimate its associated latent heating. The TRMM satellite provides the first detailed and comprehensive dataset on the four-dimensional distribution of rainfall and latent heating over vastly undersampled tropical and subtropical oceans and continents. The TRMM satellite was launched on November 27, 1997. TRMM data products are archived at and distributed by GES DISC. The newly released TRMM Version 7 consists of several changes, including new parameters, new products, metadata, data structures, etc. For example, hydrometeor profiles in 2A12 now have 28 layers (14 in V6). New parameters have been added to several popular Level-3 products, such as 3B42 and 3B43. Version 2.2 of the Global Precipitation Climatology Project (GPCP) dataset has been added to the TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/), allowing online analysis and visualization without downloading data and software. The GPCP dataset extends back to 1979. Version 3 of the Global Precipitation Climatology Centre (GPCC) monitoring product has been updated in TOVAS as well. The product provides global gauge-based monthly rainfall along with the number of gauges per grid. The dataset begins in January 1986. To facilitate data and information access and support precipitation research and applications, we have developed a Precipitation Data and Information Services Center (PDISC; URL: http://disc.gsfc.nasa.gov/precipitation). In addition to TRMM, PDISC provides current and past observational precipitation data. Users can access precipitation data archives consisting of both remote sensing and in-situ observations. 
Users can use these data products to conduct a wide variety of activities, including case studies, model evaluation, uncertainty investigation, etc. To support Earth science applications, PDISC provides users near-real-time precipitation products over the Internet. At PDISC, users can access tools and software. Documentation, FAQ and assistance are also available. Other capabilities include: 1) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Mirador is designed to be fast and easy to learn; 2) TOVAS; 3) NetCDF data download for the GIS community; 4) data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/). OPeNDAP provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret and GrADS; 5) the Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml). The WMS is an interface that allows the use of data and enables clients to build customized maps with data coming from different networks.
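
As a sketch of the OPeNDAP access pattern mentioned above, a client subsets a single variable by appending a constraint expression to the dataset URL; the endpoint and index ranges below are hypothetical, not a real GES DISC path:

```python
def opendap_subset_url(base_url, var, ranges):
    """Build an OPeNDAP constraint-expression URL requesting an ASCII
    subset of one variable. Each (lo, hi) pair is an inclusive index
    range along one dimension (e.g. time, lat, lon)."""
    ce = var + "".join(f"[{lo}:{hi}]" for lo, hi in ranges)
    return f"{base_url}.ascii?{ce}"

url = opendap_subset_url(
    "http://example.gov/opendap/trmm_3b43.nc",   # hypothetical endpoint
    "precipitation",
    [(0, 0), (100, 140), (200, 240)],            # time, lat, lon indices
)
print(url)
```

Tools such as GrADS or Panoply issue equivalent requests internally (typically for the binary `.dods` response rather than `.ascii`), which is what lets them read individual variables without downloading whole files.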

  17. Newly Released TRMM Version 7 Products, GPCP Version 2.2 Precipitation Dataset and Data Services at NASA GES DISC

    NASA Astrophysics Data System (ADS)

    Ostrenga, D.; Liu, Z.; Teng, W. L.; Trivedi, B.; Kempler, S.

    2011-12-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is home of global precipitation product archives, in particular, the Tropical Rainfall Measuring Mission (TRMM) products. TRMM is a joint U.S.-Japan satellite mission to monitor tropical and subtropical (40°S-40°N) precipitation and to estimate its associated latent heating. The TRMM satellite provides the first detailed and comprehensive dataset on the four-dimensional distribution of rainfall and latent heating over vastly undersampled tropical and subtropical oceans and continents. The TRMM satellite was launched on November 27, 1997. TRMM data products are archived at and distributed by GES DISC. The newly released TRMM Version 7 consists of several changes, including new parameters, new products, metadata, data structures, etc. For example, hydrometeor profiles in 2A12 now have 28 layers (14 in V6). New parameters have been added to several popular Level-3 products, such as 3B42 and 3B43. Version 2.2 of the Global Precipitation Climatology Project (GPCP) dataset has been added to the TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/), allowing online analysis and visualization without downloading data and software. The GPCP dataset extends back to 1979. Results of basic intercomparison between the new and the previous versions of both TRMM and GPCP will be presented to help understand changes in data product characteristics. To facilitate data and information access and support precipitation research and applications, we have developed a Precipitation Data and Information Services Center (PDISC; URL: http://disc.gsfc.nasa.gov/precipitation). In addition to TRMM, PDISC provides current and past observational precipitation data. Users can access precipitation data archives consisting of both remote sensing and in-situ observations. 
Users can use these data products to conduct a wide variety of activities, including case studies, model evaluation, uncertainty investigation, etc. To support Earth science applications, PDISC provides users near-real-time precipitation products over the Internet. At PDISC, users can access tools and software. Documentation, FAQ and assistance are also available. Other capabilities include: 1) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC). Mirador is designed to be fast and easy to learn; 2) TOVAS; 3) NetCDF data download for the GIS community; 4) data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/). OPeNDAP provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret and GrADS; 5) the Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml). The WMS is an interface that allows the use of data and enables clients to build customized maps with data coming from different networks. More details along with examples will be presented.

  18. A GIS-Interface Web Site: Exploratory Learning for Geography Curriculum

    ERIC Educational Resources Information Center

    Huang, Kuo Hung

    2011-01-01

    Although Web-based instruction provides learners with sufficient resources for self-paced learning, previous studies have confirmed that browsing navigation-oriented Web sites possibly hampers users' comprehension of information. Web sites designed as "categories of materials" for navigation demand more cognitive effort from users to orient their…

  19. Commissions as information organizations: Meeting the information needs of an electronic society

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevel, F.

    1997-11-01

    This paper describes how commission-sponsored web sites can effectively meet electronic information needs. Demographics of internet users are presented and analyzed. Online activities and user access data are also described. The implications of the characteristics of internet users for commission-sponsored web sites are discussed, and guidelines for determining marketing objectives are presented.

  20. A cross disciplinary study of link decay and the effectiveness of mitigation techniques

    PubMed Central

    2013-01-01

    Background: The dynamic, decentralized World Wide Web has become an essential part of scientific research and communication. Researchers create thousands of web sites every year to share software, data and services. These valuable resources tend to disappear over time. The problem has been documented in many subject areas. Our goal is to conduct a cross-disciplinary investigation of the problem and test the effectiveness of existing remedies. Results: We accessed 14,489 unique web pages found in the abstracts within Thomson Reuters' Web of Science citation index that were published between 1996 and 2010 and found that the median lifespan of these web pages was 9.3 years, with 62% of them being archived. Survival analysis and logistic regression were used to find significant predictors of URL lifespan. The availability of a web page is most dependent on the time it is published and the top-level domain names. Similar statistical analysis revealed biases in current solutions: the Internet Archive favors web pages with fewer layers in the Uniform Resource Locator (URL) while WebCite is significantly influenced by the source of publication. We also created a prototype for a process to submit web pages to the archives and increased coverage of our list of scientific webpages in the Internet Archive and WebCite by 22% and 255%, respectively. Conclusion: Our results show that link decay continues to be a problem across different disciplines and that current solutions for static web pages are helping and can be improved. PMID:24266891

  1. A cross disciplinary study of link decay and the effectiveness of mitigation techniques.

    PubMed

    Hennessey, Jason; Ge, Steven

    2013-01-01

    The dynamic, decentralized World Wide Web has become an essential part of scientific research and communication. Researchers create thousands of web sites every year to share software, data and services. These valuable resources tend to disappear over time. The problem has been documented in many subject areas. Our goal is to conduct a cross-disciplinary investigation of the problem and test the effectiveness of existing remedies. We accessed 14,489 unique web pages found in the abstracts within Thomson Reuters' Web of Science citation index that were published between 1996 and 2010 and found that the median lifespan of these web pages was 9.3 years with 62% of them being archived. Survival analysis and logistic regression were used to find significant predictors of URL lifespan. The availability of a web page is most dependent on the time it is published and the top-level domain names. Similar statistical analysis revealed biases in current solutions: the Internet Archive favors web pages with fewer layers in the Uniform Resource Locator (URL) while WebCite is significantly influenced by the source of publication. We also created a prototype for a process to submit web pages to the archives and increased coverage of our list of scientific webpages in the Internet Archive and WebCite by 22% and 255%, respectively. Our results show that link decay continues to be a problem across different disciplines and that current solutions for static web pages are helping and can be improved.
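    The predictors highlighted in this abstract (top-level domain and the number of "layers" in the URL) are straightforward to compute. Below is a minimal sketch, not the authors' code, assuming "layers" simply means non-empty path segments; the function name and example URL are hypothetical:

    ```python
    from urllib.parse import urlparse

    def url_features(url):
        """Extract the two availability predictors named in the study:
        the top-level domain and the number of path layers in the URL."""
        parsed = urlparse(url)
        tld = parsed.netloc.rsplit(".", 1)[-1]  # e.g. "edu", "org"
        # Count non-empty path segments ("layers"); pages with fewer
        # layers were reportedly favored by the Internet Archive.
        layers = len([seg for seg in parsed.path.split("/") if seg])
        return {"tld": tld, "layers": layers}

    print(url_features("http://www.example.edu/lab/tools/index.html"))
    # → {'tld': 'edu', 'layers': 3}
    ```

    Features like these could then feed the kind of survival analysis or logistic regression the study describes.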

  2. Electrostatic design of protein-protein association rates.

    PubMed

    Schreiber, Gideon; Shaul, Yossi; Gottschalk, Kay E

    2006-01-01

    De novo design and redesign of proteins and protein complexes have made promising progress in recent years. Here, we give an overview of how to use available computer-based tools to design proteins to bind faster and tighter to their protein-complex partner by electrostatic optimization between the two proteins. Electrostatic optimization is possible because of the simple relation between the Debye-Hückel energy of interaction between a pair of proteins and their rate of association. This can be used for rapid, structure-based calculations of the electrostatic attraction between the two proteins in the complex. Using these principles, we developed two computer programs that predict the change in k_on, and thus the affinity, upon introducing charged mutations. The two programs have a web interface that is available at www.weizmann.ac.il/home/bcges/PARE.html and http://bip.weizmann.ac.il/hypare. When mutations leading to charge optimization are introduced outside the physical binding site, the rate of dissociation is unchanged and therefore the change in k_on parallels that of the affinity. This design method was evaluated on a number of different protein complexes, yielding binding rates and affinities hundreds of fold faster and tighter than those of the wild type. In this chapter, we demonstrate the procedure and go step by step over the methodology of using these programs for protein-association design. Finally, the way to easily implement the principle of electrostatic design for any protein complex of choice is shown.
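    The "simple relation" between electrostatic interaction energy and association rate can be illustrated with a back-of-the-envelope calculation. The sketch below assumes k_on scales as exp(-U/RT), which is only the qualitative form implied by the abstract; the actual PARE/HyPare calculations are structure-based and more involved:

    ```python
    import math

    R = 8.314e-3  # gas constant, kJ/(mol·K)

    def kon_fold_change(delta_U_kJ_per_mol, T=298.0):
        """Predicted fold change in the association rate k_on for a change
        delta_U in electrostatic interaction energy (kJ/mol), under the
        assumed relation ln(k_on) ∝ -U/RT."""
        return math.exp(-delta_U_kJ_per_mol / (R * T))

    # A hypothetical mutation making the interaction 5 kJ/mol more
    # favorable (delta_U = -5) at room temperature:
    print(round(kon_fold_change(-5.0), 1))  # → 7.5 (fold faster)
    ```

    This also illustrates why, as the abstract notes, mutations outside the binding site that leave k_off unchanged translate a gain in k_on directly into a gain in affinity.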

  3. A systematic identification and analysis of scientists on Twitter.

    PubMed

    Ke, Qing; Ahn, Yong-Yeol; Sugimoto, Cassidy R

    2017-01-01

    Metrics derived from Twitter and other social media-often referred to as altmetrics-are increasingly used to estimate the broader social impacts of scholarship. Such efforts, however, may produce highly misleading results, as the entities that participate in conversations about science on these platforms are largely unknown. For instance, if altmetric activities are generated mainly by scientists, does it really capture broader social impacts of science? Here we present a systematic approach to identifying and analyzing scientists on Twitter. Our method can identify scientists across many disciplines, without relying on external bibliographic data, and be easily adapted to identify other stakeholder groups in science. We investigate the demographics, sharing behaviors, and interconnectivity of the identified scientists. We find that Twitter has been employed by scholars across the disciplinary spectrum, with an over-representation of social and computer and information scientists; under-representation of mathematical, physical, and life scientists; and a better representation of women compared to scholarly publishing. Analysis of the sharing of URLs reveals a distinct imprint of scholarly sites, yet only a small fraction of shared URLs are science-related. We find an assortative mixing with respect to disciplines in the networks between scientists, suggesting the maintenance of disciplinary walls in social media. Our work contributes to the literature both methodologically and conceptually-we provide new methods for disambiguating and identifying particular actors on social media and describing the behaviors of scientists, thus providing foundational information for the construction and use of indicators on the basis of social media metrics.

  4. Relationship Between Faults Oriented Parallel and Oblique to Bedding in Neogene Massive Siliceous Mudstones at The Horonobe Underground Research Laboratory, Japan

    NASA Astrophysics Data System (ADS)

    Hayano, Akira; Ishii, Eiichi

    2016-10-01

    This study investigates the mechanical relationship between bedding-parallel and bedding-oblique faults in a Neogene massive siliceous mudstone at the site of the Horonobe Underground Research Laboratory (URL) in Hokkaido, Japan, on the basis of observations of drill-core recovered from pilot boreholes and fracture mapping on shaft and gallery walls. Four bedding-parallel faults with visible fault gouge, named respectively the MM Fault, the Last MM Fault, the S1 Fault, and the S2 Fault (stratigraphically, from the highest to the lowest), were observed in two pilot boreholes (PB-V01 and SAB-1). The distribution of the bedding-parallel faults at 350 m depth in the Horonobe URL indicates that these faults are spread over at least several tens of meters in parallel along a bedding plane. The observation that the bedding-oblique fault displaces the Last MM fault is consistent with the previous interpretation that the bedding-oblique faults formed after the bedding-parallel faults. In addition, the bedding-parallel faults terminate near the MM and S1 faults, indicating that the bedding-parallel faults with visible fault gouge act to terminate the propagation of younger bedding-oblique faults. In particular, the MM and S1 faults, which have a relatively thick fault gouge, appear to have had a stronger control on the propagation of bedding-oblique faults than did the Last MM fault, which has a relatively thin fault gouge.

  5. A systematic identification and analysis of scientists on Twitter

    PubMed Central

    Ke, Qing; Ahn, Yong-Yeol; Sugimoto, Cassidy R.

    2017-01-01

    Metrics derived from Twitter and other social media—often referred to as altmetrics—are increasingly used to estimate the broader social impacts of scholarship. Such efforts, however, may produce highly misleading results, as the entities that participate in conversations about science on these platforms are largely unknown. For instance, if altmetric activities are generated mainly by scientists, does it really capture broader social impacts of science? Here we present a systematic approach to identifying and analyzing scientists on Twitter. Our method can identify scientists across many disciplines, without relying on external bibliographic data, and be easily adapted to identify other stakeholder groups in science. We investigate the demographics, sharing behaviors, and interconnectivity of the identified scientists. We find that Twitter has been employed by scholars across the disciplinary spectrum, with an over-representation of social and computer and information scientists; under-representation of mathematical, physical, and life scientists; and a better representation of women compared to scholarly publishing. Analysis of the sharing of URLs reveals a distinct imprint of scholarly sites, yet only a small fraction of shared URLs are science-related. We find an assortative mixing with respect to disciplines in the networks between scientists, suggesting the maintenance of disciplinary walls in social media. Our work contributes to the literature both methodologically and conceptually—we provide new methods for disambiguating and identifying particular actors on social media and describing the behaviors of scientists, thus providing foundational information for the construction and use of indicators on the basis of social media metrics. PMID:28399145

  6. Mackay campus of environmental education and digital cultural construction: the application of 3D virtual reality

    NASA Astrophysics Data System (ADS)

    Chien, Shao-Chi; Chung, Yu-Wei; Lin, Yi-Hsuan; Huang, Jun-Yi; Chang, Jhih-Ting; He, Cai-Ying; Cheng, Yi-Wen

    2012-04-01

    This study uses 3D virtual reality technology to create the "Mackay campus of the environmental education and digital cultural 3D navigation system" for local historical sites in the Tamsui (Hoba) area, in hopes of providing tourism information and navigation through historical sites using a 3D navigation system. We used AutoCAD, Sketch Up, and SpaceEyes 3D software to construct the virtual reality scenes and create the school's historical sites, such as the House of Reverends, the House of Maidens, the Residence of Mackay, and the Education Hall. We used this technology to complete the environmental education and digital cultural construction of the Mackay campus. The platform we established can indeed achieve the desired function of providing tourism information and historical site navigation. The interactive multimedia style and the presentation of the information will allow users to obtain a direct information response. In addition to showing the external appearances of buildings, the navigation platform can also allow users to enter the buildings to view lifelike scenes and textual information related to the historical sites. The historical sites are designed according to their actual size, which gives users a more realistic feel. In terms of the navigation route, the navigation system does not force users along a fixed route, but instead allows users to freely control the route they would like to take to view the historical sites on the platform.

  7. Would You Tell Everyone This? Facebook Conversations as Health Promotion Interventions

    PubMed Central

    Syred, Jonathan; Naidoo, Carla; Woodhall, Sarah C

    2014-01-01

    Background: Health promotion interventions on social networking sites can communicate individually tailored content to a large audience. User-generated content helps to maximize engagement, but health promotion websites have had variable success in supporting user engagement. Objective: The aim of our study was to examine which elements of moderator and participant behavior stimulated and maintained interaction with a sexual health promotion site on Facebook. Methods: We examined the pattern and content of posts on a Facebook page. Google Analytics was used to describe the number of people using the page and viewing patterns. A qualitative, thematic approach was used to analyze content. Results: During the study period (January 18, 2010, to June 27, 2010), 576 users interacted 888 times with the site through 508 posts and 380 comments with 93% of content generated by users. The user-generated conversation continued while new participants were driven to the site by advertising, but interaction with the site ceased rapidly after the advertising stopped. Conversations covered key issues on chlamydia and chlamydia testing. Users endorsed testing, celebrated their negative results, and modified and questioned key messages. There was variation in user approach to the site from sharing of personal experience and requesting help to joking about sexually transmitted infection. The moderator voice was reactive, unengaged, tolerant, simplistic, and was professional in tone. There was no change in the moderator approach throughout the period studied. Conclusions: Our findings suggest this health promotion site provided a space for single user posts but not a self-sustaining conversation. Possible explanations for this include little new content from the moderator, a definition of content too narrow to hold the interest of participants, and limited responsiveness to user needs.
Implications for health promotion practice include the need to consider a life cycle approach to online community development for health promotion and the need for a developing moderator strategy to reflect this. This strategy should reflect two facets of moderation for online health promotion interventions: (1) unengaged and professional oversight to provide a safe space for discussion and to maintain information quality, and (2) a more engaged and interactive presence designed to maintain interest that generates new material for discussion and is responsive to user requests. PMID:24727742

  8. myPhyloDB: a local web server for the storage and analysis of metagenomic data.

    PubMed

    Manter, Daniel K; Korsa, Matthew; Tebbe, Caleb; Delgado, Jorge A

    2016-01-01

    myPhyloDB v.1.1.2 is a user-friendly personal database with a browser interface designed to facilitate the storage, processing, analysis, and distribution of microbial community populations (e.g. 16S metagenomics data). MyPhyloDB archives raw sequencing files, and allows for easy selection of project(s)/sample(s) of any combination from all available data in the database. The data processing capabilities of myPhyloDB are also flexible enough to allow the upload and storage of pre-processed data, or the use of the built-in Mothur pipeline to automate the processing of raw sequencing data. myPhyloDB provides several analytical (e.g. analysis of covariance, t-tests, linear regression, differential abundance (DESeq2), and principal coordinates analysis (PCoA)) and normalization (rarefaction, DESeq2, and proportion) tools for the comparative analysis of taxonomic abundance, species richness and species diversity for projects of various types (e.g. human-associated, human gut microbiome, air, soil, and water) for any taxonomic level(s) desired. Finally, since myPhyloDB is a local web server, users can quickly distribute data between colleagues and end-users by simply granting others access to their personal myPhyloDB database. myPhyloDB is available at http://www.ars.usda.gov/services/software/download.htm?softwareid=472 and more information along with tutorials can be found on our website http://www.myphylodb.org. Database URL: http://www.myphylodb.org. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the United States.
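    Of the normalization options listed, proportion normalization is the simplest to illustrate: each taxon count is divided by its sample's total so that every sample sums to 1. A sketch of the general technique, not myPhyloDB's own code:

    ```python
    def proportion_normalize(counts):
        """Proportion-normalize a count table whose rows are samples
        and whose columns are taxa: divide each count by the row total
        so each sample's abundances sum to 1."""
        normalized = []
        for sample in counts:
            total = sum(sample)
            normalized.append([c / total for c in sample])
        return normalized

    table = [[10, 30, 60], [5, 5, 10]]  # two hypothetical samples
    print(proportion_normalize(table))
    # → [[0.1, 0.3, 0.6], [0.25, 0.25, 0.5]]
    ```

    Unlike rarefaction, this keeps every read; unlike DESeq2's size-factor normalization, it makes no distributional assumptions.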

  9. Mendel,MD: A user-friendly open-source web tool for analyzing WES and WGS in the diagnosis of patients with Mendelian disorders

    PubMed Central

    D. Linhares, Natália; Pena, Sérgio D. J.

    2017-01-01

    Whole exome and whole genome sequencing have both become widely adopted methods for investigating and diagnosing human Mendelian disorders. As pangenomic agnostic tests, they are capable of more accurate and agile diagnosis compared to traditional sequencing methods. This article describes new software called Mendel,MD, which combines multiple types of filter options and makes use of regularly updated databases to facilitate exome and genome annotation, the filtering process and the selection of candidate genes and variants for experimental validation and possible diagnosis. This tool offers a user-friendly interface, and leads clinicians through simple steps by limiting the number of candidates to achieve a final diagnosis of a medical genetics case. A useful innovation is the “1-click” method, which enables listing all the relevant variants in genes present at OMIM for perusal by clinicians. Mendel,MD was experimentally validated using clinical cases from the literature and was tested by students at the Universidade Federal de Minas Gerais, at GENE–Núcleo de Genética Médica in Brazil and at the Children’s University Hospital in Dublin, Ireland. We show in this article how it can simplify and increase the speed of identifying the culprit mutation in each of the clinical cases that were received for further investigation. Mendel,MD proved to be a reliable web-based tool, being open-source and time efficient for identifying the culprit mutation in different clinical cases of patients with Mendelian Disorders. It is also freely accessible for academic users on the following URL: https://mendelmd.org. PMID:28594829

  10. Global War on Terrorism: Analyzing the Strategic Threat

    DTIC Science & Technology

    2004-11-01

    lous Muslim country, the jihadists have developed an anti-Semitic streak. Abu Bakar Ba’asyir, a leading Indonesian jihadist, was arrested following...The Public Teachings of Abu Bakar Ba’asyir,” Ambon PosKo Zwolle, online ed., in English, 26 May 2003, URL: <https://datawarehouse10.dia.ic.gov/fcgi-bin...Teachings of Abu Bakar Ba’asyir.” Ambon PosKo Zwolle, on- line ed., in English, 26 May 2003. URL: <https://datawarehouse10.dia.ic.gov/fcgi- bin

  11. Growing a National Learning Environments and Resources Network for Science, Mathematics, Engineering, and Technology Education: Current Issues and Opportunities for the NSDL Program; Open Linking in the Scholarly Information Environment Using the OpenURL Framework; The HeadLine Personal Information Environment: Evaluation Phase One.

    ERIC Educational Resources Information Center

    Zia, Lee L.; Van de Sompel, Herbert; Beit-Arie, Oren; Gambles, Anne

    2001-01-01

    Includes three articles that discuss the National Science Foundation's National Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL) program; the OpenURL framework for open reference linking in the Web-based scholarly information environment; and HeadLine (Hybrid Electronic Access and Delivery in the Library Networked…

  12. 78 FR 66746 - Medical Device User Fee and Modernization Act; Notice to Public of Web Site Location of Fiscal...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-06

    ...] Medical Device User Fee and Modernization Act; Notice to Public of Web Site Location of Fiscal Year 2014... and Drug Administration (FDA or the Agency) is announcing the Web site location where the Agency will... documents, FDA has committed to updating its Web site in a timely manner to reflect the Agency's review of...

  13. Managing a User’s Vulnerability on a Social Networking Site

    DTIC Science & Technology

    2015-05-01

    aid not only the cyberbullying of teenagers but also the cyberstalking and cyberharassment of adults3. On a social networking site, an individual user...news/2011-07-23-facebook-stalker- sentenced_n.htm 3en.wikipedia.org/wiki/ Cyberbullying 1 posts and subsequent interactions. The owner of the site

  14. Designing Search: Effective Search Interfaces for Academic Library Web Sites

    ERIC Educational Resources Information Center

    Teague-Rector, Susan; Ghaphery, Jimmy

    2008-01-01

    Academic libraries customize, support, and provide access to myriad information systems, each with complex graphical user interfaces. The number of possible information entry points on an academic library Web site is both daunting to the end-user and consistently challenging to library Web site designers. Faced with the challenges inherent in…

  15. A web-based platform to support an evidence-based mental health intervention: lessons from the CBITS web site.

    PubMed

    Vona, Pamela; Wilmoth, Pete; Jaycox, Lisa H; McMillen, Janey S; Kataoka, Sheryl H; Wong, Marleen; DeRosier, Melissa E; Langley, Audra K; Kaufman, Joshua; Tang, Lingqi; Stein, Bradley D

    2014-11-01

    To explore the role of Web-based platforms in behavioral health, the study examined usage of a Web site for supporting training and implementation of an evidence-based intervention. Using data from an online registration survey and Google Analytics, the investigators examined user characteristics and Web site utilization. Site engagement was substantial across user groups. Visit duration differed by registrants' characteristics. Less experienced clinicians spent more time on the Web site. The training section accounted for most page views across user groups. Individuals previously trained in the Cognitive-Behavioral Intervention for Trauma in Schools intervention viewed more implementation assistance and online community pages than did other user groups. Web-based platforms have the potential to support training and implementation of evidence-based interventions for clinicians of varying levels of experience and may facilitate more rapid dissemination. Web-based platforms may be promising for trauma-related interventions, because training and implementation support should be readily available after a traumatic event.

  16. Effects of organizational scheme and labeling on task performance in product-centered and user-centered retail Web sites.

    PubMed

    Resnick, Marc L; Sanchez, Julian

    2004-01-01

    As companies increase the quantity of information they provide through their Web sites, it is critical that content is structured with an appropriate architecture. However, resource constraints often limit the ability of companies to apply all Web design principles completely. This study quantifies the effects of two major information architecture principles in a controlled study that isolates the incremental effects of organizational scheme and labeling on user performance and satisfaction. Sixty participants with a wide range of Internet and on-line shopping experience were recruited to complete a series of shopping tasks on a prototype retail shopping Web site. User-centered labels provided a significant benefit in performance and satisfaction over labels obtained through company-centered methods. User-centered organization did not result in improved performance except when the label quality was poor. Significant interactions suggest specific guidelines for allocating resources in Web site design. Applications of this research include the design of Web sites for any commercial application, particularly E-commerce.

  17. Software-supported USER cloning strategies for site-directed mutagenesis and DNA assembly.

    PubMed

    Genee, Hans Jasper; Bonde, Mads Tvillinggaard; Bagger, Frederik Otzen; Jespersen, Jakob Berg; Sommer, Morten O A; Wernersson, Rasmus; Olsen, Lars Rønn

    2015-03-20

    USER cloning is a fast and versatile method for engineering of plasmid DNA. We have developed a user-friendly Web server tool that automates the design of optimal PCR primers for several distinct USER cloning-based applications. Our Web server, named AMUSER (Automated DNA Modifications with USER cloning), facilitates DNA assembly and introduction of virtually any type of site-directed mutagenesis by designing optimal PCR primers for the desired genetic changes. To demonstrate the utility, we designed primers for a simultaneous two-position site-directed mutagenesis of green fluorescent protein (GFP) to yellow fluorescent protein (YFP), which in a single step reaction resulted in a 94% cloning efficiency. AMUSER also supports degenerate nucleotide primers, single insert combinatorial assembly, and flexible parameters for PCR amplification. AMUSER is freely available online at http://www.cbs.dtu.dk/services/AMUSER/.

  18. A Comparison of Users' Personal Information Sharing Awareness, Habits, and Practices in Social Networking Sites and E-Learning Systems

    ERIC Educational Resources Information Center

    Ball, Albert L.

    2012-01-01

    Although reports of identity theft continue to be widely published, users continue to post an increasing amount of personal information online, especially within social networking sites (SNS) and e-learning systems (ELS). Research has suggested that many users lack awareness of the threats that risky online personal information sharing poses to…

  19. Services for Graduate Students: A Review of Academic Library Web Sites

    ERIC Educational Resources Information Center

    Rempel, Hannah Gascho

    2010-01-01

    A library's Web site is well recognized as the gateway to the library for the vast majority of users. Choosing the most user-friendly Web architecture to reflect the many services libraries offer is a complex process, and librarians are still experimenting to find what works best for their users. As part of a redesign of the Oregon State…

  20. Conceptual Web Users' Actions Prediction for Ontology-Based Browsing Recommendations

    NASA Astrophysics Data System (ADS)

    Robal, Tarmo; Kalja, Ahto

    The Internet consists of thousands of web sites with different kinds of structures. However, users are browsing the web according to their informational expectations towards the web site searched, having an implicit conceptual model of the domain in their minds. Nevertheless, people tend to repeat themselves and have partially shared conceptual views while surfing the web, finding some areas of web sites more interesting than others. Herein, we take advantage of the latter and provide a model and a study on predicting users' actions based on the web ontology concepts and their relations.

  1. New extension software modules to enhance searching and display of transcriptome data in Tripal databases

    PubMed Central

    Chen, Ming; Henry, Nathan; Almsaeed, Abdullah; Zhou, Xiao; Wegrzyn, Jill; Ficklin, Stephen

    2017-01-01

    Tripal is an open source software package for developing biological databases with a focus on genetic and genomic data. It consists of a set of core modules that deliver essential functions for loading and displaying data records and associated attributes including organisms, sequence features and genetic markers. Beyond the core modules, community members are encouraged to contribute extension modules to build on the Tripal core and to customize Tripal for individual community needs. To expand the utility of the Tripal software system, particularly for RNASeq data, we developed two new extension modules. Tripal Elasticsearch enables fast, scalable searching of the entire content of a Tripal site as well as the construction of customized advanced searches of specific data types. We demonstrate the use of this module for searching assembled transcripts by functional annotation. A second module, Tripal Analysis Expression, houses and displays records from gene expression assays such as RNA sequencing. This includes biological source materials (biomaterials), gene expression values and protocols used to generate the data. In the case of an RNASeq experiment, this would reflect the individual organisms and tissues used to produce sequencing libraries, the normalized gene expression values derived from the RNASeq data analysis and a description of the software or code used to generate the expression values. The module will load data from common flat file formats including standard NCBI Biosample XML. Data loading, display options and other configurations can be controlled by authorized users in the Drupal administrative backend. Both modules are open source, include usage documentation, and can be found in the Tripal organization’s GitHub repository. Database URL: Tripal Elasticsearch module: https://github.com/tripal/tripal_elasticsearch Tripal Analysis Expression module: https://github.com/tripal/tripal_analysis_expression PMID:29220446

  2. Atmospheric Radiation Measurement program climate research facility operations quarterly report October 1 - December 31, 2006.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisterson, D. L.

    2007-03-14

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year dating back to 1998. Table 1 shows the accumulated maximum operation time (planned uptime), the actual hours of operation, and the variance (unplanned downtime) for the period October 1 through December 31, 2006, for the fixed and mobile sites. Although the AMF is currently up and running in Niamey, Niger, Africa, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The first quarter comprises a total of 2,208 hours. For all fixed sites, the actual data availability (and therefore actual hours of operation) exceeded the individual operational goal (as well as the aggregate average of the fixed sites) for the first quarter of fiscal year (FY) 2007. The Site Access Request System is a web-based database used to track visitors to the fixed sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP site has a Central Facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. The TWP locale has the Manus, Nauru, and Darwin sites. NIM represents the AMF statistics for the current deployment in Niamey, Niger, Africa. PYE represents the AMF statistics for the Point Reyes, California, past deployment in 2005.
In addition, users who do not want to wait for data to be provided through the ACRF Archive can request an account on the local site data system. The eight research computers are located at the Barrow and Atqasuk sites; the SGP Central Facility; the TWP Manus, Nauru, and Darwin sites; the DMF at PNNL; and the AMF in Niger. This report provides the cumulative numbers of visitors and user accounts by site for the period January 1, 2006 - December 31, 2006. The U.S. Department of Energy requires national user facilities to report facility use by total visitor days-broken down by institution type, gender, race, citizenship, visitor role, visit purpose, and facility-for actual visitors and for active user research computer accounts. During this reporting period, the ACRF Archive did not collect data on user characteristics in this way. Work is under way to collect and report these data. Table 2 shows the summary of cumulative users for the period January 1, 2006 - December 31, 2006. For the first quarter of FY 2007, the overall number of users is up from the last reporting period. The historical data show that there is an apparent relationship between the total number of users and the 'size' of field campaigns, called Intensive Operation Periods (IOPs): larger IOPs draw more of the site facility resources, which are reflected by the number of site visits and site visit days, research accounts, and device accounts. These types of users typically collect and analyze data in near-real time for a site-specific IOP that is in progress. However, the Archive accounts represent persistent (year-to-year) ACRF data users that often mine from the entire collection of ACRF data, which mostly includes routine data from the fixed and mobile sites, as well as cumulative IOP data sets. Archive data users continue to show a steady growth, which is independent of the size of IOPs. For this quarter, the number of Archive data user accounts was 961, the highest since record-keeping began. 
For reporting purposes, the three ACRF sites and the AMF operate 24 hours per day, 7 days per week, and 52 weeks per year. Although the AMF is not officially collecting data this quarter, personnel are regularly involved with teardown, packing, shipping, unpacking, setup, and maintenance activities, so they are included in the safety statistics. Time is reported in days instead of hours. If any lost work time is incurred by any employee, it is counted as a workday loss. Table 3 reports the consecutive days since the last recordable or reportable injury or incident causing damage to property, equipment, or vehicle for the period October 1 - December 31, 2006. There were no recordable or lost workdays or incidents for the first quarter of FY 2007.

  3. A Pilot Study of the Interface Design of Cross-Cultural Web Sites through Usability Testing of Multilanguage Web Sites and Determining the Preferences of Taiwanese and American Users

    ERIC Educational Resources Information Center

    Ku, David Tawei; Chang, Chia-Chi

    2014-01-01

    By conducting usability testing on a multilanguage Web site, this study analyzed the cultural differences between Taiwanese and American users in the performance of assigned tasks. To provide feasible insight into cross-cultural Web site design, Microsoft Office Online (MOO) that supports both traditional Chinese and English and contains an almost…

  4. Leveraging Site Search and Analytics to Maintain a User-Centered Focus

    ERIC Educational Resources Information Center

    Mitchell, Erik

    2011-01-01

    Web design is a necessarily iterative process. During the process, it can be difficult to balance the interests and focus of the library site experts and their novice users. It can also be easy to lose focus on the main goals of site use and become wrapped up in the process of design or coding or in the internal politics of site design. Just as…

  5. Introducing videoconferencing into educational oncopathology seminars: technical aspects, user satisfaction and open issues.

    PubMed

    Della Mea, Vincenzo; Carbone, Antonino; Greatti, Ermes; Beltrami, Carlo A

    2003-01-01

    We used set-top videoconferencing equipment connected by ISDN at 384 kbit/s for six educational seminars held between the University of Udine (the local site) and the National Cancer Institute in Aviano (the remote site), 60 km away. User satisfaction was evaluated by questionnaire. The median length of seminars was 58 min (range 48-61 min), followed by a 20 min (15-26 min) discussion. Eighty-two users answered the questionnaire (a 43% response rate): 56 in Udine (a median of 11 per seminar) and 26 in Aviano (a median of 5 per seminar). Answers to the questions were similar at the two sites. Videoconferencing did not affect the users' experience of attending the seminars, as both interest and clarity were similar at the local and remote site. The results suggested that videoconferencing is a viable method for delivering seminars in oncopathology, where image quality is important.

  6. microPIR2: a comprehensive database for human–mouse comparative study of microRNA–promoter interactions

    PubMed Central

    Piriyapongsa, Jittima; Bootchai, Chaiwat; Ngamphiw, Chumpol; Tongsima, Sissades

    2014-01-01

    microRNA (miRNA)–promoter interaction resource (microPIR) is a public database containing over 15 million predicted miRNA target sites located within human promoter sequences. These predicted targets are presented along with their related genomic and experimental data, making the microPIR database the most comprehensive repository of miRNA promoter target sites. Here, we describe major updates of the microPIR database including new target predictions in the mouse genome and revised human target predictions. The updated database (microPIR2) now provides ∼80 million human and 40 million mouse predicted target sites. In addition to being a reference database, microPIR2 is a tool for comparative analysis of target sites on the promoters of human–mouse orthologous genes. In particular, this new feature was designed to identify potential miRNA–promoter interactions conserved between species that could be stronger candidates for further experimental validation. We also incorporated additional supporting information to microPIR2 such as nuclear and cytoplasmic localization of miRNAs and miRNA–disease association. Extra search features were also implemented to enable various investigations of targets of interest. Database URL: http://www4a.biotec.or.th/micropir2 PMID:25425035

  7. New "persona" concept helps site designers cater to target user segments' needs.

    PubMed

    2004-09-01

    Using the relatively new "persona" design concept, Web strategists create a set of archetypical user characters, each one representing one of their site's primary audiences. Then, as their site is constructed or upgraded, they champion the personas, arguing on their behalf and forcing the design team to take each audience's needs and wants into account.

  8. User-Centered Design and Usability Testing of a Web Site: An Illustrative Case Study.

    ERIC Educational Resources Information Center

    Corry, Michael D.; Frick, Theodore W.; Hansen, Lisa

    1997-01-01

    Presents an overview of user-centered design and usability testing. Describes a Web site evaluation project at a university, the iterative process of rapid prototyping and usability testing, and how the findings helped to improve the design. Discusses recommendations for university Web site design and reflects on problems faced in usability…

  9. Why Do You Adopt Social Networking Sites? Investigating the Driving Factors through Structural Equation Modelling

    ERIC Educational Resources Information Center

    Jan, Muhammad Tahir

    2017-01-01

    Purpose: The purpose of this paper is to investigate those factors that are associated with the adoption of social networking sites from the perspective of Muslim users residing in Malaysia. Design/methodology/approach: A complete self-administered questionnaire was collected from 223 Muslim users of social networking sites in Malaysia. Both…

  10. Cost consideration as a factor affecting recreation site decisions

    Treesearch

    Allan Marsinko; John Dwyer; Herb Schroeder

    2001-01-01

    Because they are charged with providing opportunities for all potential site users, it is important that managers at public sites understand the characteristics and behaviors of different user groups. Recreationists who are sensitive to cost may be more sensitive to certain changes in policies, such as fees and other charges, than those who are not sensitive to costs....

  11. A survey of health-related activities on second life.

    PubMed

    Beard, Leslie; Wilson, Kumanan; Morra, Dante; Keelan, Jennifer

    2009-05-22

Increasingly, governments, health care agencies, companies, and private groups have chosen Second Life as part of their Web 2.0 communication strategies. Second Life offers unique design features for disseminating health information, training health professionals, and enabling patient education for both academic and commercial health behavior research. This study aimed to survey and categorize the range of health-related activities on Second Life; to examine the design attributes of the most innovative and popular sites; and to assess the potential utility of Second Life for the dissemination of health information and for health behavior change. We used three separate search strategies to identify health-related sites on Second Life. The first used the application's search engine, entering both generic and select illness-specific keywords, to seek out sites. The second identified sites through a comprehensive review of print, blog, and media sources discussing health activities on Second Life. We then visited each site and used a snowball method to identify other health sites until we reached saturation (no new health sites were identified). The content, user experience, and chief purpose of each site were tabulated as well as basic site information, including user traffic data and site size. We found a wide range of health-related activities on Second Life, and a diverse group of users, including organizations, groups, and individuals. For many users, Second Life activities are a part of their Web 2.0 communication strategy. The most common type of health-related site in our sample (n = 68) was those whose principal aim was patient education or increasing awareness about health issues. The second most common type was support sites, followed by training sites and marketing sites. Finally, a few sites were purpose-built to conduct research in SL or to recruit participants for real-life research. 
Studies show that behaviors from virtual worlds can translate to the real world. Our survey suggests that users are engaged in a range of health-related activities in Second Life which are potentially impacting real-life behaviors. Further research evaluating the impact of health-related activities on Second Life is warranted.

  12. A Survey of Health-Related Activities on Second Life

    PubMed Central

    Beard, Leslie; Wilson, Kumanan; Morra, Dante

    2009-01-01

Background Increasingly, governments, health care agencies, companies, and private groups have chosen Second Life as part of their Web 2.0 communication strategies. Second Life offers unique design features for disseminating health information, training health professionals, and enabling patient education for both academic and commercial health behavior research. Objectives This study aimed to survey and categorize the range of health-related activities on Second Life; to examine the design attributes of the most innovative and popular sites; and to assess the potential utility of Second Life for the dissemination of health information and for health behavior change. Methods We used three separate search strategies to identify health-related sites on Second Life. The first used the application’s search engine, entering both generic and select illness-specific keywords, to seek out sites. The second identified sites through a comprehensive review of print, blog, and media sources discussing health activities on Second Life. We then visited each site and used a snowball method to identify other health sites until we reached saturation (no new health sites were identified). The content, user experience, and chief purpose of each site were tabulated as well as basic site information, including user traffic data and site size. Results We found a wide range of health-related activities on Second Life, and a diverse group of users, including organizations, groups, and individuals. For many users, Second Life activities are a part of their Web 2.0 communication strategy. The most common type of health-related site in our sample (n = 68) was those whose principal aim was patient education or increasing awareness about health issues. The second most common type was support sites, followed by training sites and marketing sites. Finally, a few sites were purpose-built to conduct research in SL or to recruit participants for real-life research. 
Conclusions Studies show that behaviors from virtual worlds can translate to the real world. Our survey suggests that users are engaged in a range of health-related activities in Second Life which are potentially impacting real-life behaviors. Further research evaluating the impact of health-related activities on Second Life is warranted. PMID:19632971

  13. e-Ana and e-Mia: A Content Analysis of Pro–Eating Disorder Web Sites

    PubMed Central

    Schenk, Summer; Wilson, Jenny L.; Peebles, Rebecka

    2010-01-01

    Objectives. The Internet offers Web sites that describe, endorse, and support eating disorders. We examined the features of pro–eating disorder Web sites and the messages to which users may be exposed. Methods. We conducted a systematic content analysis of 180 active Web sites, noting site logistics, site accessories, “thinspiration” material (images and prose intended to inspire weight loss), tips and tricks, recovery, themes, and perceived harm. Results. Practically all (91%) of the Web sites were open to the public, and most (79%) had interactive features. A large majority (84%) offered pro-anorexia content, and 64% provided pro-bulimia content. Few sites focused on eating disorders as a lifestyle choice. Thinspiration material appeared on 85% of the sites, and 83% provided overt suggestions on how to engage in eating-disordered behaviors. Thirty-eight percent of the sites included recovery-oriented information or links. Common themes were success, control, perfection, and solidarity. Conclusions. Pro–eating disorder Web sites present graphic material to encourage, support, and motivate site users to continue their efforts with anorexia and bulimia. Continued monitoring will offer a valuable foundation to build a better understanding of the effects of these sites on their users. PMID:20558807

  14. Rating knowledge sharing in cross-domain collaborative filtering.

    PubMed

    Li, Bin; Zhu, Xingquan; Li, Ruijiang; Zhang, Chengqi

    2015-05-01

Cross-domain collaborative filtering (CF) aims to share common rating knowledge across multiple related CF domains to boost the CF performance. In this paper, we view CF domains as a 2-D site-time coordinate system, on which multiple related domains, such as similar recommender sites or successive time-slices, can share group-level rating patterns. We propose a unified framework for cross-domain CF over the site-time coordinate system by sharing group-level rating patterns and imposing user/item dependence across domains. A generative model, termed ratings over site-time (ROST), which can generate and predict ratings for multiple related CF domains, is developed as the basic model for the framework. We further introduce cross-domain user/item dependence into ROST and extend it to two real-world cross-domain CF scenarios: 1) ROST (sites) for alleviating rating sparsity in the target domain, where multiple similar sites are viewed as related CF domains and some items in the target domain depend on their correspondences in the related ones; and 2) ROST (time) for modeling user-interest drift over time, where a series of time-slices are viewed as related CF domains and a user at the current time-slice depends on herself in the previous time-slice. All these ROST models are instances of the proposed unified framework. The experimental results show that ROST (sites) can effectively alleviate the sparsity problem to improve rating prediction performance and ROST (time) can clearly track and visualize user-interest drift over time.

  15. Trust in health Websites: a survey among Norwegian Internet users.

    PubMed

    Rosenvinge, Jan H; Laugerud, Stein; Hjortdahl, Per

    2003-01-01

    Whether consumers feel able to trust the information presented on a health-related Website is as important a quality criterion as more objective criteria. We investigated whether trust was related to five aspects of health Websites: the involvement of health professionals, a facility for interactive communication, information about those responsible for the site, a picture of those responsible for the site, and the impression of site update frequency. A polling agency invited, by email, a sample of 600 Norwegian users of e-health information to participate in the study and 476 subjects did so (a 79% response rate), by completing a questionnaire online. Their mean age was 41 years and 53% were female. All five aspects of health Websites were related to the trust placed in the site but they were not consistently related to gender or age. Trust in Websites that were frequently updated was related to being a frequent e-health user, while those who trusted interactive e-health sites were low-frequency users who tended to order drugs and health products from the sites. The probability of taking action as a result of e-health information was related to the frequency of visits to health Websites but not to the five aspects of them investigated in relation to trust. However, respondents who trusted sites that were perceived as being frequently updated and to have health professionals involved were more likely to be frequent users of e-health information.

  16. Reading level of privacy policies on Internet health Web sites.

    PubMed

    Graber, Mark A; D'Alessandro, Donna M; Johnson-West, Jill

    2002-07-01

    Most individuals would like to maintain the privacy of their medical information on the World Wide Web (WWW). In response, commercial interests and other sites post privacy policies that are designed to inform users of how their information will be used. However, it is not known if these statements are comprehensible to most WWW users. The purpose of this study was to determine the reading level of privacy statements on Internet health Web sites and to determine whether these statements can inform users of their rights. This was a descriptive study. Eighty Internet health sites were examined and the readability of their privacy policies was determined. The selected sample included the top 25 Internet health sites as well as other sites that a user might encounter while researching a common problem such as high blood pressure. Sixty percent of the sites were commercial (.com), 17.5% were organizations (.org), 8.8% were from the United Kingdom (.uk), 3.8% were United States governmental (.gov), and 2.5% were educational (.edu). The readability level of the privacy policies was calculated using the Flesch, the Fry, and the SMOG readability levels. Of the 80 Internet health Web sites studied, 30% (including 23% of the commercial Web sites) had no privacy policy posted. The average readability level of the remaining sites required 2 years of college level education to comprehend, and no Web site had a privacy policy that was comprehensible by most English-speaking individuals in the United States. The privacy policies of health Web sites are not easily understood by most individuals in the United States and do not serve to inform users of their rights. Possible remedies include rewriting policies to make them comprehensible and protecting online health information by using legal statutes or standardized insignias indicating compliance with a set of privacy standards (eg, "Health on the Net" [HON] http://www.hon.ch).
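The Flesch Reading Ease score used in the study above is a standard published formula. As a rough illustration only (not the authors' tooling, and using a crude vowel-group heuristic for syllable counting), a readability check might look like:

```python
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups, discount a silent trailing 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    # Flesch Reading Ease:
    #   206.835 - 1.015 * (words/sentence) - 84.6 * (syllables/word)
    # Higher scores mean easier text; below ~30 is college-graduate level.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

A short, plain sentence scores high (easy), while dense legal prose scores very low; a study like the one above would typically also apply validated Fry and SMOG instruments rather than this heuristic.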

  17. Finding Influential Users in Social Media Using Association Rule Learning

    NASA Astrophysics Data System (ADS)

    Erlandsson, Fredrik; Bródka, Piotr; Borg, Anton; Johnson, Henric

    2016-04-01

Influential users play an important role in online social networks since users tend to have an impact on one another. Therefore, the proposed work analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site is dependent on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out if a topic will be interesting or not. In this article, we propose association rule learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified from association rule learning were compared to the results from Degree Centrality and Page Rank Centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods.
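PageRank, one of the centrality baselines named in the abstract above, can be sketched in a few lines. This is a generic textbook implementation, not the authors' code, and the interaction graph in the example is hypothetical:

```python
def pagerank(edges, damping=0.85, iters=50):
    """Iterative PageRank over a directed graph given as (source, target) edges."""
    nodes = {n for edge in edges for n in edge}
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Every node keeps a (1 - damping) baseline share.
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            if out[n]:
                share = damping * rank[n] / len(out[n])
                for dst in out[n]:
                    nxt[dst] += share
            else:
                # Dangling node: redistribute its rank uniformly.
                for m in nodes:
                    nxt[m] += damping * rank[n] / len(nodes)
        rank = nxt
    return rank

# Hypothetical graph: b, c, and d all interact with a; a interacts with b.
ranks = pagerank([("b", "a"), ("c", "a"), ("d", "a"), ("a", "b")])
```

Degree centrality, the other baseline, reduces to counting each user's in-edges; PageRank additionally weights an interaction by how influential the interacting user is.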

  18. Implementing a user-driven online quality improvement toolkit for cancer care.

    PubMed

    Luck, Jeff; York, Laura S; Bowman, Candice; Gale, Randall C; Smith, Nina; Asch, Steven M

    2015-05-01

    Peer-to-peer collaboration within integrated health systems requires a mechanism for sharing quality improvement lessons. The Veterans Health Administration (VA) developed online compendia of tools linked to specific cancer quality indicators. We evaluated awareness and use of the toolkits, variation across facilities, impact of social marketing, and factors influencing toolkit use. A diffusion of innovations conceptual framework guided the collection of user activity data from the Toolkit Series SharePoint site and an online survey of potential Lung Cancer Care Toolkit users. The VA Toolkit Series site had 5,088 unique visitors in its first 22 months; 5% of users accounted for 40% of page views. Social marketing communications were correlated with site usage. Of survey respondents (n = 355), 54% had visited the site, of whom 24% downloaded at least one tool. Respondents' awareness of the lung cancer quality performance of their facility, and facility participation in quality improvement collaboratives, were positively associated with Toolkit Series site use. Facility-level lung cancer tool implementation varied widely across tool types. The VA Toolkit Series achieved widespread use and a high degree of user engagement, although use varied widely across facilities. The most active users were aware of and active in cancer care quality improvement. Toolkit use seemed to be reinforced by other quality improvement activities. A combination of user-driven tool creation and centralized toolkit development seemed to be effective for leveraging health information technology to spread disease-specific quality improvement tools within an integrated health care system. Copyright © 2015 by American Society of Clinical Oncology.

  19. Classifying and profiling Social Networking Site users: a latent segmentation approach.

    PubMed

    Alarcón-del-Amo, María-del-Carmen; Lorenzo-Romero, Carlota; Gómez-Borja, Miguel-Ángel

    2011-09-01

Social Networking Sites (SNSs) have shown exponential growth in recent years. The first step for an efficient use of SNSs stems from an understanding of the individuals' behaviors within these sites. In this research, we have obtained a typology of SNS users through a latent segmentation approach, based on the frequency with which users perform different activities within the SNSs, sociodemographic variables, experience in SNSs, and dimensions related to their interaction patterns. Four different segments have been obtained. The "introvert" and "novel" users are the most occasional. They utilize SNSs mainly to communicate with friends, although "introverts" are more passive users. The "versatile" user performs different activities, although occasionally. Finally, the "expert-communicator" performs a greater variety of activities with a higher frequency. They tend to perform some marketing-related activities, such as commenting on ads or gathering information about products and brands. Companies can take advantage of these segmentation schemes in different ways: first, by tracking and monitoring information interchange between users regarding their products and brands. Second, they should match the SNS users' profiles with their market targets to use SNSs as marketing tools. Finally, for most businesses, the expert users could be interesting opinion leaders and potential brand influencers.

  20. Study of Citizen Scientist Motivations and Effectiveness of Social Media Campaigns

    NASA Astrophysics Data System (ADS)

    Gugliucci, Nicole E.; Gay, P. L.; Bracey, G.; Lehan, C.; Lewis, S.; Moore, J.; Rhea, J.

    2013-01-01

    CosmoQuest is an online citizen science and astronomy education portal that invites users to explore the universe. Since its launch in January 2012, several thousand citizen scientists have participated in mapping and discovery projects involving the Moon, the Kuiper Belt, and asteroid Vesta. Since our goal is to support community building as well as involving users with citizen science tasks, we are interested in what motivates users to join the site, participate in the science, participate in the forums, and come back to the site over a period of time. We would also like to efficiently target our social media interactions towards activities that are more likely to bring new and existing users to the site. With those goals in mind, we analyze site usage statistics and correlate them with specific, targeted social media campaigns to highlight events or projects that CosmoQuest has hosted in its first year. We also survey our users to get a more detailed look at citizen scientist motivations and the efficacy of our community building activities.

  1. Are the users of social networking sites homogeneous? A cross-cultural study.

    PubMed

    Alarcón-Del-Amo, María-Del-Carmen; Gómez-Borja, Miguel-Ángel; Lorenzo-Romero, Carlota

    2015-01-01

    The growing use of Social Networking Sites (SNS) around the world has made it necessary to understand individuals' behaviors within these sites according to different cultures. Based on a comparative study between two different European countries (The Netherlands versus Spain), a comparison of typologies of networked Internet users has been obtained through a latent segmentation approach. These typologies are based on the frequency with which users perform different activities, their socio-demographic variables, and experience in social networking and interaction patterns. The findings show new insights regarding international segmentation in order to analyse SNS user behaviors in both countries. These results are relevant for marketing strategists eager to use the communication potential of networked individuals and for marketers willing to explore the potential of online networking as a low cost and a highly efficient alternative to traditional networking approaches. For most businesses, expert users could be valuable opinion leaders and potential brand influencers.

  2. Are the users of social networking sites homogeneous? A cross-cultural study

    PubMed Central

    Alarcón-del-Amo, María-del-Carmen; Gómez-Borja, Miguel-Ángel; Lorenzo-Romero, Carlota

    2015-01-01

    The growing use of Social Networking Sites (SNS) around the world has made it necessary to understand individuals' behaviors within these sites according to different cultures. Based on a comparative study between two different European countries (The Netherlands versus Spain), a comparison of typologies of networked Internet users has been obtained through a latent segmentation approach. These typologies are based on the frequency with which users perform different activities, their socio-demographic variables, and experience in social networking and interaction patterns. The findings show new insights regarding international segmentation in order to analyse SNS user behaviors in both countries. These results are relevant for marketing strategists eager to use the communication potential of networked individuals and for marketers willing to explore the potential of online networking as a low cost and a highly efficient alternative to traditional networking approaches. For most businesses, expert users could be valuable opinion leaders and potential brand influencers. PMID:26321971

  3. Improving menu categories.

    PubMed

    2004-09-01

    No matter how good a site's navigational tools, site visitors will not use them if the menu categories are ambiguous. Users have to know what to expect when they click on a particular menu item. If the categories are not intuitive, users will have to resort to the site's search engine, ignoring the entire structure. The Pennsylvania Medical Society site (http://www.pamedsoc.org) had been plagued with poor menu labels until it took a step back and improved them.

  4. Atmospheric Radiation Measurement program climate research facility operations quarterly report July 1 - September 30, 2008.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisterson, D. L.

    2008-10-08

Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period July 1 - September 30, 2008, for the fixed sites. The AMF has been deployed to China, but the data have not yet been released. The fourth quarter comprises a total of 2,208 hours. The average exceeded our goal this quarter. The Site Access Request System is a web-based database used to track visitors to the fixed and mobile sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP site has a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. The TWP locale has the Manus, Nauru, and Darwin sites. HFE represents the AMF statistics for the Shouxian, China, deployment in 2008. FKB represents the AMF statistics for the Haselbach, Germany, past deployment in 2007. NIM represents the AMF statistics for the Niamey, Niger, Africa, past deployment in 2006. PYE represents just the AMF Archive statistics for the Point Reyes, California, past deployment in 2005. In addition, users who do not want to wait for data to be provided through the ACRF Archive can request a research account on the local site data system. 
The seven computers for the research accounts are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; and the DMF at PNNL. In addition, the ACRF serves as a data repository for a long-term Arctic atmospheric observatory in Eureka, Canada (80 degrees 05 minutes N, 86 degrees 43 minutes W) as part of the multiagency Study of Environmental Arctic Change (SEARCH) Program. NOAA began providing instruments for the site in 2005, and currently cloud radar data are available. The intent of the site is to monitor the important components of the Arctic atmosphere, including clouds, aerosols, atmospheric radiation, and local-scale atmospheric dynamics. Because of the similarity of ACRF NSA data streams and the important synergy that can be formed between a network of Arctic atmospheric observations, much of the SEARCH observatory data are archived in the ARM archive. Instruments will be added to the site over time. For more information, please visit http://www.db.arm.gov/data. The designation for the archived Eureka data is YEU and is now included in the ACRF user metrics. This quarterly report provides the cumulative numbers of visitors and user accounts by site for the period October 1, 2007 - September 30, 2008. Table 2 shows the summary of cumulative users for the period October 1, 2007 - September 30, 2008. For the fourth quarter of FY 2008, the overall number of users is down substantially (about 30%) from last quarter. Most of this decrease resulted from a reduction in the ACRF Infrastructure users (e.g., site visits, research accounts, on-site device accounts, etc.) associated with the AMF China deployment. While users had easy access to the previous AMF deployment in Germany that resulted in all-time high user statistics, physical and remote access to on-site accounts are extremely limited for the AMF deployment in China. 
Furthermore, AMF data have not yet been released from China to the Data Management Facility for processing, which affects Archive user statistics. However, Archive users are only down about 10% from last quarter. Another reason for the apparent reduction in Archive users is that data from the Indirect and Semi-Direct Aerosol Campaign (ISDAC), a major field campaign conducted on the North Slope of Alaska, are not yet available to users. For reporting purposes, the three ACRF sites and the AMF operate 24 hours per day, 7 days per week, and 52 weeks per year. Time is reported in days instead of hours. If any lost work time is incurred by any employee, it is counted as a workday loss. Table 3 reports the consecutive days since the last recordable or reportable injury or incident causing damage to property, equipment, or vehicle for the period July 1 - September 30, 2008. There were no incidents this reporting period.« less
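
    The completeness metric described above (the ratio of records actually received at the Archive to the number expected) can be sketched in a few lines. The site names, data-stream names, and counts below are invented for illustration and are not actual ACRF figures.

    ```python
    # Hypothetical sketch of the ARM Archive data-completeness metric:
    # fraction of expected daily records that actually arrived, tabulated
    # per site, data stream, and month. All values below are invented.

    def completeness(received: int, expected: int) -> float:
        """Ratio of records received to records expected (0.0 if none expected)."""
        if expected == 0:
            return 0.0
        return received / expected

    daily_counts = {
        ("sgp", "mfrsr", "2008-09"): (2610, 2880),   # (received, expected)
        ("nsa", "mmcr",  "2008-09"): (2880, 2880),
    }

    for (site, stream, month), (got, want) in sorted(daily_counts.items()):
        print(f"{site}/{stream} {month}: {completeness(got, want):.1%}")
    ```

    Tabulating the same ratio over a fiscal year is just a matter of summing received and expected counts before dividing.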

  5. Assessing public health job portals over the internet.

    PubMed

    Joshi, Ashish; Mirza, Attiqa; McFarlane, Kim; Amadi, Chioma

    2016-09-01

    The objective of our study was to search existing public health job websites over the internet and describe the challenges related to finding these job websites. An internet search was conducted using different search engines, including Google, Yahoo and Bing, with several keywords including: Public Health Jobs, Epidemiology Jobs, Biostatistics Jobs, Health Policy and Management Jobs, Community Health Jobs, Health Administration Jobs, Nutrition Jobs, Environmental and Occupational Health Science Jobs, GIS Jobs, and Public Health Informatics Jobs. We recorded the first 20 websites that appeared in the results of each keyword search, thus generating 600 URLs. Duplicate sites and non-functional sites were excluded from this search, allowing analysis of unique sites only. The initial search resulted in 600 websites of which there were 470 duplicates. More than half of the website categories were ".com" (54%; n = 323) followed by ".gov" (19%; n = 111) and ".edu" 15% (n = 90). Results of our findings showed 194 unique websites resulting from a search of 600 website links. More than half of them had actual public health or its related jobs (56%; n = 108). There is a need to establish standard occupational classification categories for the public health workforce. © Royal Society for Public Health 2016.
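
    The cleanup step described above, collapsing the 600 retrieved links to unique sites and tallying domain categories such as ".com", ".gov" and ".edu", might look roughly like this. The example URLs are invented.

    ```python
    from collections import Counter
    from urllib.parse import urlparse

    # Sketch of the study's URL cleanup: drop duplicate sites, then
    # tally top-level-domain categories. Example URLs are invented.

    def unique_sites(urls):
        """Keep only the first URL seen for each host."""
        seen, unique = set(), []
        for url in urls:
            host = urlparse(url).netloc.lower()
            if host and host not in seen:
                seen.add(host)
                unique.append(url)
        return unique

    def tld_counts(urls):
        """Count URLs by the last label of the host (.com, .gov, ...)."""
        return Counter(urlparse(u).netloc.rsplit(".", 1)[-1] for u in urls)

    results = [
        "https://www.publichealthjobs.com/epi",
        "https://www.publichealthjobs.com/biostats",   # duplicate site
        "https://www.cdc.gov/careers",
        "https://sph.example.edu/jobs",
    ]
    sites = unique_sites(results)
    print(len(sites), dict(tld_counts(sites)))
    ```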

  6. Improved data retrieval from TreeBASE via taxonomic and linguistic data enrichment

    PubMed Central

    Anwar, Nadia; Hunt, Ela

    2009-01-01

    Background TreeBASE, the only data repository for phylogenetic studies, is not being used effectively, since it does not meet the taxonomic data retrieval requirements of the systematics community. We show, through an examination of the queries performed on TreeBASE, that data retrieval using taxon names is unsatisfactory. Results We report on a new wrapper supporting taxon queries on TreeBASE by utilising a Taxonomy and Classification Database (TCl-Db) we created. TCl-Db holds merged and consolidated taxonomic names from multiple data sources and can be used to translate hierarchical, vernacular and synonym queries into specific query terms in TreeBASE. The query expansion supported by TCl-Db yields a very significant improvement in information retrieval quality. The wrapper can be accessed at the URL. The methodology we developed is scalable and can be applied to new data as they become available. Conclusion Significantly improved data retrieval quality is shown for all queries, and additional flexibility is achieved via user-driven taxonomy selection. PMID:19426482
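
    As a rough illustration of the query translation TCl-Db performs, a vernacular or synonym term can be mapped to the specific taxon names stored in TreeBASE. The toy mapping below stands in for the consolidated database and is not the actual TCl-Db schema.

    ```python
    # Toy stand-in for the TCl-Db lookup: translate vernacular or
    # synonym query terms into TreeBASE-specific taxon names.

    TCL_DB = {
        "fruit fly": ["Drosophila melanogaster"],
        "mosquitoes": ["Aedes aegypti", "Anopheles gambiae"],
        "aedes aegypti": ["Aedes aegypti"],
    }

    def expand_query(term: str) -> list[str]:
        """Return the specific taxon names for a query term,
        falling back to the term itself when it is unknown."""
        return TCL_DB.get(term.lower(), [term])

    print(expand_query("Fruit fly"))   # vernacular name -> scientific name
    ```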

  7. BioMart Central Portal: an open database network for the biological community

    PubMed Central

    Guberman, Jonathan M.; Ai, J.; Arnaiz, O.; Baran, Joachim; Blake, Andrew; Baldock, Richard; Chelala, Claude; Croft, David; Cros, Anthony; Cutts, Rosalind J.; Di Génova, A.; Forbes, Simon; Fujisawa, T.; Gadaleta, E.; Goodstein, D. M.; Gundem, Gunes; Haggarty, Bernard; Haider, Syed; Hall, Matthew; Harris, Todd; Haw, Robin; Hu, S.; Hubbard, Simon; Hsu, Jack; Iyer, Vivek; Jones, Philip; Katayama, Toshiaki; Kinsella, R.; Kong, Lei; Lawson, Daniel; Liang, Yong; Lopez-Bigas, Nuria; Luo, J.; Lush, Michael; Mason, Jeremy; Moreews, Francois; Ndegwa, Nelson; Oakley, Darren; Perez-Llamas, Christian; Primig, Michael; Rivkin, Elena; Rosanoff, S.; Shepherd, Rebecca; Simon, Reinhard; Skarnes, B.; Smedley, Damian; Sperling, Linda; Spooner, William; Stevenson, Peter; Stone, Kevin; Teague, J.; Wang, Jun; Wang, Jianxin; Whitty, Brett; Wong, D. T.; Wong-Erasmus, Marie; Yao, L.; Youens-Clark, Ken; Yung, Christina; Zhang, Junjun; Kasprzyk, Arek

    2011-01-01

    BioMart Central Portal is a first-of-its-kind, community-driven effort to provide unified access to dozens of biological databases spanning genomics, proteomics, model organisms, cancer data, ontology information and more. Anybody can contribute an independently maintained resource to the Central Portal, allowing it to be exposed to and shared with the research community, and linking it with the other resources in the portal. Users can take advantage of the common interface to quickly utilize different sources without learning a new system for each. The system also simplifies cross-database searches that might otherwise require several complicated steps. Several integrated tools streamline common tasks, such as converting between ID formats and retrieving sequences. The combination of a wide variety of databases, an easy-to-use interface, robust programmatic access and the array of tools make Central Portal a one-stop shop for biological data querying. Here, we describe the structure of Central Portal and show example queries to demonstrate its capabilities. Database URL: http://central.biomart.org. PMID:21930507

  8. CHRONIS: an animal chromosome image database.

    PubMed

    Toyabe, Shin-Ichi; Akazawa, Kouhei; Fukushi, Daisuke; Fukui, Kiichi; Ushiki, Tatsuo

    2005-01-01

    We have constructed a database system named CHRONIS (CHROmosome and Nano-Information System) to collect images of animal chromosomes and related nanotechnological information. CHRONIS enables rapid sharing of information on chromosome research among cell biologists and researchers in other fields via the Internet. CHRONIS is also intended to serve as a liaison tool for researchers who work in different centers. The image database contains more than 3,000 color microscopic images, including karyotypic images obtained from more than 1,000 species of animals. Researchers can browse the contents of the database through a standard World Wide Web interface at the following URL: http://chromosome.med.niigata-u.ac.jp/chronis/servlet/chronisservlet. The system enables users to input new images into the database, to locate images of interest by keyword searches, and to display the images with detailed information. CHRONIS has a wide range of applications, such as searching for appropriate probes for fluorescent in situ hybridization, comparing various kinds of microscopic images of a single species, and finding researchers working in the same field of interest.

  9. EXTRACT: interactive extraction of environment metadata and term suggestion for metagenomic sample annotation.

    PubMed

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra; Pereira, Emiliano; Schnetzer, Julia; Arvanitidis, Christos; Jensen, Lars Juhl

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual annotation of samples is a highly labor-intensive process and requires familiarity with the terminologies used. We have therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/. © The Author(s) 2016. Published by Oxford University Press.

  10. dbHiMo: a web-based epigenomics platform for histone-modifying enzymes

    PubMed Central

    Choi, Jaeyoung; Kim, Ki-Tae; Huh, Aram; Kwon, Seomun; Hong, Changyoung; Asiegbu, Fred O.; Jeon, Junhyun; Lee, Yong-Hwan

    2015-01-01

    Over the past two decades, epigenetics has evolved into a key concept for understanding regulation of gene expression. Among many epigenetic mechanisms, covalent modifications such as acetylation and methylation of lysine residues on core histones emerged as a major mechanism in epigenetic regulation. Here, we present the database for histone-modifying enzymes (dbHiMo; http://hme.riceblast.snu.ac.kr/) aimed at facilitating functional and comparative analysis of histone-modifying enzymes (HMEs). HMEs were identified by applying a search pipeline built upon profile hidden Markov model (HMM) to proteomes. The database incorporates 11 576 HMEs identified from 603 proteomes including 483 fungal, 32 plant and 51 metazoan species. The dbHiMo provides users with web-based personalized data browsing and analysis tools, supporting comparative and evolutionary genomics. With comprehensive data entries and associated web-based tools, our database will be a valuable resource for future epigenetics/epigenomics studies. Database URL: http://hme.riceblast.snu.ac.kr/ PMID:26055100

  11. ERAIZDA: a model for holistic annotation of animal infectious and zoonotic diseases

    PubMed Central

    Buza, Teresia M.; Jack, Sherman W.; Kirunda, Halid; Khaitsa, Margaret L.; Lawrence, Mark L.; Pruett, Stephen; Peterson, Daniel G.

    2015-01-01

    There is an urgent need for a unified resource that integrates trans-disciplinary annotations of emerging and reemerging animal infectious and zoonotic diseases. Such data integration will provide a valuable opportunity for epidemiologists, researchers and health policy makers to make data-driven decisions designed to improve animal health. Integrating emerging and reemerging animal infectious and zoonotic disease data from a large variety of sources into a unified open-access resource provides a stronger basis for achieving a better understanding of infectious and zoonotic diseases. We have developed a model for interlinking annotations of these diseases. These diseases are of particular interest because of the threats they pose to animal health, human health and global health security. We demonstrated the application of this model using brucellosis, an infectious and zoonotic disease. Preliminary annotations were deposited into the VetBioBase database (http://vetbiobase.igbb.msstate.edu). This database is associated with user-friendly tools to facilitate searching, retrieving and downloading of disease-related information. Database URL: http://vetbiobase.igbb.msstate.edu PMID:26581408

  12. Finding relevant biomedical datasets: the UC San Diego solution for the bioCADDIE Retrieval Challenge

    PubMed Central

    Wei, Wei; Ji, Zhanglong; He, Yupeng; Zhang, Kai; Ha, Yuanchi; Li, Qi; Ohno-Machado, Lucila

    2018-01-01

    Abstract The number and diversity of biomedical datasets grew rapidly in the last decade. A large number of datasets are stored in various repositories, with different formats. Existing dataset retrieval systems lack the capability of cross-repository search. As a result, users spend time searching datasets in known repositories, and they typically do not find new repositories. The biomedical and healthcare data discovery index ecosystem (bioCADDIE) team organized a challenge to solicit new indexing and searching strategies for retrieving biomedical datasets across repositories. We describe the work of one team that built a retrieval pipeline and examined its performance. The pipeline used online resources to supplement dataset metadata, automatically generated queries from users’ free-text questions, produced high-quality retrieval results and achieved the highest inferred Normalized Discounted Cumulative Gain among competitors. The results showed that it is a promising solution for cross-database, cross-domain and cross-repository biomedical dataset retrieval. Database URL: https://github.com/w2wei/dataset_retrieval_pipeline PMID:29688374
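
    The competition metric, inferred Normalized Discounted Cumulative Gain, extends standard NDCG to incomplete relevance judgments. The plain NDCG it builds on can be sketched as follows, with invented relevance grades.

    ```python
    import math

    # Plain NDCG sketch (the challenge used an "inferred" variant suited
    # to incomplete judgments). Relevance grades are invented.

    def dcg(rels):
        """Discounted cumulative gain of a ranked list of relevance grades."""
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(rels))

    def ndcg(rels, k=None):
        """NDCG@k: DCG normalized by the ideal (sorted) ordering."""
        rels = rels[:k] if k is not None else rels
        ideal = dcg(sorted(rels, reverse=True))
        return dcg(rels) / ideal if ideal > 0 else 0.0

    print(round(ndcg([3, 2, 0, 1]), 3))   # one relevant item out of place
    ```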

  13. Mfold web server for nucleic acid folding and hybridization prediction

    PubMed Central

    Zuker, Michael

    2003-01-01

    The abbreviated name, ‘mfold web server’, describes a number of closely related software applications available on the World Wide Web (WWW) for the prediction of the secondary structure of single stranded nucleic acids. The objective of this web server is to provide easy access to RNA and DNA folding and hybridization software to the scientific community at large. By making use of universally available web GUIs (Graphical User Interfaces), the server circumvents the problem of portability of this software. Detailed output, in the form of structure plots with or without reliability information, single-strand frequency plots and ‘energy dot plots’, is available for the folding of single sequences. A variety of ‘bulk’ servers give less information, but in a shorter time and for up to hundreds of sequences at once. The portal for the mfold web server is http://www.bioinfo.rpi.edu/applications/mfold. This URL will be referred to as ‘MFOLDROOT’. PMID:12824337

  14. Juicebox.js Provides a Cloud-Based Visualization System for Hi-C Data.

    PubMed

    Robinson, James T; Turner, Douglass; Durand, Neva C; Thorvaldsdóttir, Helga; Mesirov, Jill P; Aiden, Erez Lieberman

    2018-02-28

    Contact mapping experiments such as Hi-C explore how genomes fold in 3D. Here, we introduce Juicebox.js, a cloud-based web application for exploring the resulting datasets. Like the original Juicebox application, Juicebox.js allows users to zoom in and out of such datasets using an interface similar to Google Earth. Juicebox.js also has many features designed to facilitate data reproducibility and sharing. Furthermore, Juicebox.js encodes the exact state of the browser in a shareable URL. Creating a public browser for a new Hi-C dataset does not require coding and can be accomplished in under a minute. The web app also makes it possible to create interactive figures online that can complement or replace ordinary journal figures. When combined with Juicer, this makes the entire process of data analysis transparent, insofar as every step from raw reads to published figure is publicly available as open source code. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
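
    Encoding viewer state in a shareable URL, as Juicebox.js does, amounts to serializing the state into query parameters and parsing them back on load. The parameter names below are hypothetical, not the actual Juicebox.js URL scheme.

    ```python
    from urllib.parse import urlencode, parse_qs, urlparse

    # Sketch of a shareable URL that encodes viewer state, in the spirit
    # of Juicebox.js. Parameter names and values are invented.

    state = {"map": "GM12878.hic", "chrom": "chr8:0-5000000", "norm": "KR"}
    url = "https://example.org/juicebox?" + urlencode(state)

    # On load, the viewer parses the query string and restores the state.
    restored = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    assert restored == state
    print(url)
    ```

    Because the entire state lives in the URL, sharing a view requires no server-side session storage.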

  15. The plant phenological online database (PPODB): an online database for long-term phenological data

    NASA Astrophysics Data System (ADS)

    Dierenbach, Jonas; Badeck, Franz-W.; Schaber, Jörg

    2013-09-01

    We present an online database that provides unrestricted and free access to over 16 million plant phenological observations from over 8,000 stations in Central Europe between the years 1880 and 2009. Unique features are (1) a flexible and unrestricted access to a full-fledged database, allowing for a wide range of individual queries and data retrieval, (2) historical data for Germany before 1951 ranging back to 1880, and (3) more than 480 curated long-term time series covering more than 100 years for individual phenological phases and plants combined over Natural Regions in Germany. Time series for single stations or Natural Regions can be accessed through a user-friendly graphical geo-referenced interface. The joint databases made available with the plant phenological database PPODB render accessible an important data source for further analyses of long-term changes in phenology. The database can be accessed via www.ppodb.de.

  16. On-the-fly selection of cell-specific enhancers, genes, miRNAs and proteins across the human body using SlideBase

    PubMed Central

    Ienasescu, Hans; Li, Kang; Andersson, Robin; Vitezic, Morana; Rennie, Sarah; Chen, Yun; Vitting-Seerup, Kristoffer; Lagoni, Emil; Boyd, Mette; Bornholdt, Jette; de Hoon, Michiel J. L.; Kawaji, Hideya; Lassmann, Timo; Hayashizaki, Yoshihide; Forrest, Alistair R. R.; Carninci, Piero; Sandelin, Albin

    2016-01-01

    Genomics consortia have produced large datasets profiling the expression of genes, micro-RNAs, enhancers and more across human tissues or cells. There is a need for intuitive tools to select subsets of such data that is the most relevant for specific studies. To this end, we present SlideBase, a web tool which offers a new way of selecting genes, promoters, enhancers and microRNAs that are preferentially expressed/used in a specified set of cells/tissues, based on the use of interactive sliders. With the help of sliders, SlideBase enables users to define custom expression thresholds for individual cell types/tissues, producing sets of genes, enhancers etc. which satisfy these constraints. Changes in slider settings result in simultaneous changes in the selected sets, updated in real time. SlideBase is linked to major databases from genomics consortia, including FANTOM, GTEx, The Human Protein Atlas and BioGPS. Database URL: http://slidebase.binf.ku.dk PMID:28025337
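
    The slider-based selection SlideBase offers boils down to filtering an expression matrix by per-tissue minimum and maximum thresholds. The genes and expression values below are invented for illustration.

    ```python
    # Sketch of SlideBase-style selection: keep genes whose expression
    # satisfies user-set per-tissue thresholds. Values are invented.

    expression = {
        "GENE_A": {"liver": 90, "brain": 5},
        "GENE_B": {"liver": 10, "brain": 80},
        "GENE_C": {"liver": 85, "brain": 70},
    }

    def select(expr, min_thresholds, max_thresholds=None):
        """Return genes meeting all per-tissue min/max constraints."""
        max_thresholds = max_thresholds or {}
        hits = []
        for gene, levels in expr.items():
            ok = all(levels.get(t, 0) >= v for t, v in min_thresholds.items())
            ok = ok and all(levels.get(t, 0) <= v for t, v in max_thresholds.items())
            if ok:
                hits.append(gene)
        return hits

    # Genes preferentially expressed in liver: high in liver, low in brain.
    print(select(expression, {"liver": 50}, {"brain": 20}))
    ```

    Moving a slider simply changes one threshold and reruns the filter, which is why the selected sets can update in real time.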

  17. Implementing Recommendations From Web Accessibility Guidelines: A Comparative Study of Nondisabled Users and Users With Visual Impairments.

    PubMed

    Schmutz, Sven; Sonderegger, Andreas; Sauer, Juergen

    2017-09-01

    The present study examined whether implementing recommendations of Web accessibility guidelines would have different effects on nondisabled users than on users with visual impairments. The predominant approach for making Web sites accessible for users with disabilities is to apply accessibility guidelines. However, it has hardly been examined whether this approach has side effects for nondisabled users. A comparison of the effects on both user groups would contribute to a better understanding of possible advantages and drawbacks of applying accessibility guidelines. Participants from two matched samples, comprising 55 participants with visual impairments and 55 without impairments, took part in a synchronous remote testing of a Web site. Each participant was randomly assigned to one of three Web sites, which differed in the level of accessibility (very low, low, and high) according to recommendations of the well-established Web Content Accessibility Guidelines 2.0 (WCAG 2.0). Performance (i.e., task completion rate and task completion time) and a range of subjective variables (i.e., perceived usability, positive affect, negative affect, perceived aesthetics, perceived workload, and user experience) were measured. Higher conformance to Web accessibility guidelines resulted in increased performance and more positive user ratings (e.g., perceived usability or aesthetics) for both user groups. There was no interaction between user group and accessibility level. Higher conformance to WCAG 2.0 may result in benefits for nondisabled users and users with visual impairments alike. Practitioners may use the present findings as a basis for deciding whether and how best to implement accessibility.

  18. MRFM (Magnetic Resonance Force Microscopy) MURI ARO Final Report (Grant W911NF-05-1-0403, University of Washington)

    DTIC Science & Technology

    2012-10-14

    of high-gradient cobalt-tipped cantilevers, NanoMRI Conference 2012; Ascona, Switzerland; July 22–27, 2012, [url]. 4. R. Picone, J. Garbini, and J…[url]. 5. J. G. Longenecker, H. J. Mamin, A. W. Senko, L. Chen, C. T. Rettner, D. Rugar, and J. A. Marohn, High gradient cobalt nanomagnets…Longenecker, H. J. Mamin, A. W. Senko, L. Chen, C. T. Rettner, D. Rugar, and J. A. Marohn, Development and characterization of high-gradient cobalt-tipped

  19. Identifying Service Delivery Strategies for Ethnically Diverse Users of a Wildland-Urban Recreation Site

    Treesearch

    John M. Baas

    1992-01-01

    Service delivery has become an increasingly important part of managing public lands for recreation. The range of preferences held by ethnically diverse users of recreation sites may warrant the development of more than one service delivery strategy. Two questions were examined: (1) Are there differences in site perceptions that can be identified on the basis of...

  20. Cluster-Randomized Trial of Personalized Site Performance Feedback in Get With The Guidelines-Heart Failure.

    PubMed

    DeVore, Adam D; Cox, Margueritte; Heidenreich, Paul A; Fonarow, Gregg C; Yancy, Clyde W; Eapen, Zubin J; Peterson, Eric D; Hernandez, Adrian F

    2015-07-01

    There is significant variation in the delivery of evidence-based care for patients with heart failure (HF), but there is limited evidence defining the best methods to improve the quality of care. We performed a cluster-randomized trial of personalized site performance feedback at 147 hospitals participating in the Get With The Guidelines-Heart Failure quality improvement program from October 2009 to March 2011. The intervention provided sites with specific data on their heart failure achievement and quality measures in addition to the usual Get With The Guidelines-Heart Failure tools. The primary outcome for our trial was improvement in site composite quality of care score. Overall, 73 hospitals (n=33 886 patients) received the intervention, whereas 74 hospitals (n=37 943 patients) did not. One year after the intervention, both the intervention and control arms had a similar mean change in percentage points in their composite quality score (absolute change, +0.31 [SE, 1.51] versus +3.18 [SE, 1.68] in control; P=0.21). Similarly, none of the individual achievement measures or quality measures improved more at intervention versus control hospitals. Our site-based intervention, which included personalized site feedback on adherence to quality metrics, was not able to elicit more quality improvement beyond that already associated with participation in the Get With The Guidelines-Heart Failure program. URL: http://www.clinicaltrials.gov. Unique identifier: NCT00979264. © 2015 American Heart Association, Inc.

  1. IsoPlot: a database for comparison of mRNA isoforms in fruit fly and mosquitoes

    PubMed Central

    Ng, I-Man; Tsai, Shang-Chi

    2017-01-01

    Abstract Alternative splicing (AS), a mechanism by which different forms of mature messenger RNAs (mRNAs) are generated from the same gene, widely occurs in the metazoan genomes. Knowledge about isoform variants and abundance is crucial for understanding the functional context in the molecular diversity of the species. With increasing transcriptome data of model and non-model species, a database for visualization and comparison of AS events with up-to-date information is needed for further research. IsoPlot is a publicly available database with visualization tools for exploration of AS events, including three major species of mosquitoes, Aedes aegypti, Anopheles gambiae, and Culex quinquefasciatus, and fruit fly Drosophila melanogaster, the model insect species. IsoPlot includes not only 88,663 annotated transcripts but also 17,037 newly predicted transcripts from massive transcriptome data at different developmental stages of mosquitoes. The web interface enables users to explore the patterns and abundance of isoforms in different experimental conditions as well as cross-species sequence comparison of orthologous transcripts. IsoPlot provides a platform for researchers to access comprehensive information about AS events in mosquitoes and fruit fly. Our database is available on the web via an interactive user interface with an intuitive graphical design, which is applicable for the comparison of complex isoforms within or between species. Database URL: http://isoplot.iis.sinica.edu.tw/ PMID:29220459

  2. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies

    PubMed Central

    Cserhati, Matyas F.; Pandey, Sanjit; Beaudoin, James J.; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S.

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33 017 407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. Database URL: http://nntc-dcc.unmc.edu PMID:26228431

  3. AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.

    PubMed

    Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A

    2017-07-03

    AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
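
    The final AMMOS2 step described above, ranking ligands by minimized binding energy (lower is better), can be sketched as follows. The ligand names and energies are invented, not actual AMMOS2 output.

    ```python
    # Sketch of ranking ligands by minimized binding energy, as in the
    # last stage of an AMMOS2-style refinement. Values are invented.

    minimized = [
        ("ligand_07", -42.3),
        ("ligand_12", -57.9),
        ("ligand_03", -31.0),
    ]

    # Lower (more negative) binding energy ranks first.
    ranked = sorted(minimized, key=lambda pair: pair[1])
    for rank, (name, energy) in enumerate(ranked, start=1):
        print(f"{rank}. {name}  {energy:.1f}")
    ```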

  4. Wasabi: An Integrated Platform for Evolutionary Sequence Analysis and Data Visualization.

    PubMed

    Veidenberg, Andres; Medlar, Alan; Löytynoja, Ari

    2016-04-01

    Wasabi is an open source, web-based environment for evolutionary sequence analysis. Wasabi visualizes sequence data together with a phylogenetic tree within a modern, user-friendly interface: The interface hides extraneous options, supports context sensitive menus, drag-and-drop editing, and displays additional information, such as ancestral sequences, associated with specific tree nodes. The Wasabi environment supports reproducibility by automatically storing intermediate analysis steps and includes built-in functions to share data between users and publish analysis results. For computational analysis, Wasabi supports PRANK and PAGAN for phylogeny-aware alignment and alignment extension, and it can be easily extended with other tools. Along with drag-and-drop import of local files, Wasabi can access remote data through URL and import sequence data, GeneTrees and EPO alignments directly from Ensembl. To demonstrate a typical workflow using Wasabi, we reproduce key findings from recent comparative genomics studies, including a reanalysis of the EGLN1 gene from the tiger genome study: These case studies can be browsed within Wasabi at http://wasabiapp.org:8000?id=usecases. Wasabi runs inside a web browser and does not require any installation. One can start using it at http://wasabiapp.org. All source code is licensed under the AGPLv3. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. TimeTree2: species divergence times on the iPhone.

    PubMed

    Kumar, Sudhir; Hedges, S Blair

    2011-07-15

    Scientists, educators and the general public often need to know times of divergence between species. But they rarely can locate that information because it is buried in the scientific literature, usually in a format that is inaccessible to text search engines. We have developed a public knowledgebase that enables data-driven access to the collection of peer-reviewed publications in molecular evolution and phylogenetics that have reported estimates of time of divergence between species. Users can query the TimeTree resource by providing two names of organisms (common or scientific) that can correspond to species or groups of species. The current TimeTree web resource (TimeTree2) contains timetrees reported from molecular clock analyses in 910 published studies and 17 341 species that span the diversity of life. TimeTree2 interprets complex and hierarchical data from these studies for each user query, which can be launched using an iPhone application, in addition to the website. Published time estimates are now readily accessible to the scientific community, K-12 and college educators, and the general public, without requiring knowledge of evolutionary nomenclature. TimeTree2 is accessible from the URL http://www.timetree.org, with an iPhone app available from iTunes (http://itunes.apple.com/us/app/timetree/id372842500?mt=8) and a YouTube tutorial (http://www.youtube.com/watch?v=CxmshZQciwo).

  6. AMMOS2: a web server for protein–ligand–water complexes refinement via molecular mechanics

    PubMed Central

    Labbé, Céline M.; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O.; Pajeva, Ilza

    2017-01-01

    Abstract AMMOS2 is an interactive web server for efficient computational refinement of protein–small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein–ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein–ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein–ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein–ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein–ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein–ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. PMID:28486703

  7. Social networking sites and older users - a systematic review.

    PubMed

    Nef, Tobias; Ganea, Raluca L; Müri, René M; Mosimann, Urs P

    2013-07-01

    Social networking sites can be beneficial for senior citizens to promote social participation and to enhance intergenerational communication. Particularly for older adults with impaired mobility, social networking sites can help them to connect with family members and other active social networking users. The aim of this systematic review is to give an overview of existing scientific literature on social networking in older users. Computerized databases were searched and 105 articles were identified and screened using exclusion criteria. After exclusion of 87 articles, 18 articles were included, reviewed, classified, and the key findings were extracted. Common findings are identified and critically discussed and possible future research directions are outlined. The main benefit of using social networking sites for older adults is to engage in intergenerational communication with younger family members (children and grandchildren), which is appreciated by both sides. Identified barriers are privacy concerns, technical difficulties, and the fact that current Web design does not take the needs of older users into account. Under the condition that these problems are carefully addressed, social networking sites have the potential to support today's and tomorrow's communication between older and younger family members.

  8. A multilingual assessment of melanoma information quality on the Internet.

    PubMed

    Bari, Lilla; Kemeny, Lajos; Bari, Ferenc

    2014-06-01

    This study aims to assess and compare the quality of melanoma information available on the Internet in Hungarian, Czech, and German. We used country-specific Google search engines to retrieve the first 25 uniform resource locators (URLs) returned by searching the word "melanoma" in the given language. Using the automated toolbar of the Health On the Net Foundation (HON), we assessed each Web site for HON certification based on the Health On the Net Foundation Code of Conduct (HONcode). Information quality was determined using a 35-point checklist created by Bichakjian et al. (J Clin Oncol 20:134-141, 2002), with the NCCN melanoma guideline as control. After excluding duplicate and link-only pages, a total of 24 Hungarian, 18 Czech, and 21 German melanoma Web sites were evaluated and rated. The proportion of HON-certified Web sites was highest among the German Web pages (19%). One of the retrieved Hungarian Web sites was HON certified, and none of the Czech sites were. We found the highest number of Web sites containing comprehensive, correct melanoma information in German, followed by Czech and Hungarian pages. Although the majority of the Web sites lacked data about incidence, risk factors, prevention, treatment, work-up, and follow-up, at least one comprehensive, high-quality Web site was found in each language. Several Web sites in each language contained incorrect information. While a small number of comprehensive, high-quality melanoma-related Web sites were found, most of the retrieved Web content lacked basic disease information, such as risk factors, prevention, and treatment, and a significant number of Web sites contained misinformation. In the case of melanoma, primary and secondary prevention are of especially high importance; therefore, improvement of the quality of disease information available on the Internet is necessary.

  9. World-Wide Web Tools for Locating Planetary Images

    NASA Technical Reports Server (NTRS)

    Kanefsky, Bob; Deiss, Ron (Technical Monitor)

    1995-01-01

    The explosive growth of the World-Wide Web (WWW) in the past year has made it feasible to provide interactive graphical tools to assist scientists in locating planetary images. The highest available resolution images of any site of interest can be quickly found on a map or plot, and, if online, displayed immediately on nearly any computer equipped with a color screen, an Internet connection, and any of the free WWW browsers. The same tools may also be of interest to educators, students, and the general public. Image finding tools have been implemented covering most of the solar system: Earth, Mars, and the moons and planets imaged by Voyager. The Mars image-finder, which plots the footprints of all the high-resolution Viking Orbiter images and can be used to display any that are available online, also contains a complete scrollable atlas and hypertext gazetteer to help locate areas. The Earth image-finder is linked to thousands of Shuttle images stored at NASA/JSC, and displays them as red dots on a globe. The Voyager image-finder plots images as dots, by longitude and apparent target size, linked to online images. The locator (URL) for the top-level page is http://ic-www.arc.nasa.gov/ic/projects/bayes-group/Atlas/. Through the efforts of the Planetary Data System and other organizations, hundreds of thousands of planetary images are now available on CD-ROM, and many of these have been made available on the WWW. However, locating images of a desired site is still problematic in practice. For example, many scientists studying Mars use digital image maps, which are one third the resolution of Viking Orbiter survey images. When they do use Viking Orbiter images, they often work with photographically printed hardcopies, which lack the flexibility of digital images: magnification, contrast stretching, and other basic image-processing techniques offered by off-the-shelf software.
From the perspective of someone working on an experimental image processing technique for super-resolution, the discovery that potential users are often not using the highest resolution already available, nor using conventional image processing techniques, was surprising. This motivated the present work.

  10. Selective Self-Presentation and Social Comparison Through Photographs on Social Networking Sites.

    PubMed

    Fox, Jesse; Vendemia, Megan A

    2016-10-01

    Through social media and camera phones, users enact selective self-presentation as they choose, edit, and post photographs of themselves (such as selfies) to social networking sites for an imagined audience. Photos typically focus on users' physical appearance, which may compound existing sociocultural pressures about body image. We identified users of social networking sites among a nationally representative U.S. sample (N = 1,686) and examined women's and men's photo-related behavior, including posting photos, editing photos, and feelings after engaging in upward and downward social comparison with others' photos on social networking sites. We identified some sex differences: women edited photos more frequently and felt worse after upward social comparison than men. Body image and body comparison tendency mediated these effects.

  11. Porting Social Media Contributions with SIOC

    NASA Astrophysics Data System (ADS)

    Bojars, Uldis; Breslin, John G.; Decker, Stefan

    Social media sites, including social networking sites, have captured the attention of millions of users as well as billions of dollars in investment and acquisition. To better enable a user's access to multiple sites, portability between social media sites is required in terms of both (1) the personal profiles and friend networks and (2) a user's content objects expressed on each site. This requires representation mechanisms to interconnect both people and objects on the Web in an interoperable, extensible way. The Semantic Web provides the required representation mechanisms for portability between social media sites: it links people and objects to record and represent the heterogeneous ties that bind each to the other. The FOAF (Friend-of-a-Friend) initiative provides a solution to the first requirement, and this paper discusses how the SIOC (Semantically-Interlinked Online Communities) project can address the latter. By using agreed-upon Semantic Web formats like FOAF and SIOC to describe people, content objects, and the connections that bind them together, social media sites can interoperate and provide portable data by appealing to some common semantics. In this paper, we will discuss the application of Semantic Web technology to enhance current social media sites with semantics and to address issues with portability between social media sites. It has been shown that social media sites can serve as rich data sources for SIOC-based applications such as the SIOC Browser, but in the other direction, we will now show how SIOC data can be used to represent and port the diverse social media contributions (SMCs) made by users on heterogeneous sites.

  12. "When 'Bad' is 'Good'": Identifying Personal Communication and Sentiment in Drug-Related Tweets.

    PubMed

    Daniulaityte, Raminta; Chen, Lu; Lamy, Francois R; Carlson, Robert G; Thirunarayan, Krishnaprasad; Sheth, Amit

    2016-10-24

    To harness the full potential of social media for epidemiological surveillance of drug abuse trends, the field needs a greater level of automation in processing and analyzing social media content. The objective of the study is to describe the development of supervised machine-learning techniques for the eDrugTrends platform to automatically classify tweets by type/source of communication (personal, official/media, retail) and sentiment (positive, negative, neutral) expressed in cannabis- and synthetic cannabinoid-related tweets. Tweets were collected using Twitter streaming Application Programming Interface and filtered through the eDrugTrends platform using keywords related to cannabis, marijuana edibles, marijuana concentrates, and synthetic cannabinoids. After creating coding rules and assessing intercoder reliability, a manually labeled data set (N=4000) was developed by coding several batches of randomly selected subsets of tweets extracted from the pool of 15,623,869 collected by eDrugTrends (May-November 2015). Out of 4000 tweets, 25% (1000/4000) were used to build source classifiers and 75% (3000/4000) were used for sentiment classifiers. Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machines (SVM) were used to train the classifiers. Source classification (n=1000) tested Approach 1 that used short URLs, and Approach 2 where URLs were expanded and included into the bag-of-words analysis. For sentiment classification, Approach 1 used all tweets, regardless of their source/type (n=3000), while Approach 2 applied sentiment classification to personal communication tweets only (2633/3000, 88%). Multiclass and binary classification tasks were examined, and machine-learning sentiment classifier performance was compared with Valence Aware Dictionary for sEntiment Reasoning (VADER), a lexicon and rule-based method. The performance of each classifier was assessed using 5-fold cross validation that calculated average F-scores. 
A one-tailed t test was used to determine whether differences in F-scores were statistically significant. In multiclass source classification, the use of expanded URLs did not contribute to significant improvement in classifier performance (0.7972 vs 0.8102 for SVM, P=.19). In binary classification, the identification of all source categories improved significantly when unshortened URLs were used, with personal communication tweets benefiting the most (0.8736 vs 0.8200, P<.001). In multiclass sentiment classification Approach 1, SVM (0.6723) performed similarly to NB (0.6683) and LR (0.6703). In Approach 2, SVM (0.7062) did not differ from NB (0.6980, P=.13) or LR (0.6931, P=.05), but it was over 40% more accurate than VADER (F=0.5030, P<.001). In the multiclass task, improvements in sentiment classification (Approach 2 vs Approach 1) did not reach statistical significance (eg, SVM: 0.7062 vs 0.6723, P=.052). In binary sentiment classification (positive vs negative), Approach 2 (focus on personal communication tweets only) improved classification results, compared with Approach 1, for LR (0.8752 vs 0.8516, P=.04) and SVM (0.8800 vs 0.8557, P=.045). The study provides an example of the use of supervised machine learning methods to categorize cannabis- and synthetic cannabinoid-related tweets with fairly high accuracy. Use of these content analysis tools, along with the geographic identification capabilities developed by the eDrugTrends platform, will provide powerful methods for tracking regional changes in user opinions related to cannabis and synthetic cannabinoid use over time and across different regions.
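The evaluation procedure described above, average F-scores over 5-fold cross-validation, can be sketched in pure Python. The fold splitting, F1 computation, and the majority-class baseline below are illustrative stand-ins (the study trained SVM, NB, and LR classifiers), and every name here is hypothetical:

```python
import random

def f_score(y_true, y_pred, positive):
    """Compute the F1 score for one class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def cross_validate(data, train_and_predict, k=5, seed=0):
    """Average F1 ("positive" class) over k folds.
    `data` is a list of (features, label) pairs."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]   # k roughly equal folds
    scores = []
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        y_true = [label for _, label in test]
        y_pred = train_and_predict(train, [feat for feat, _ in test])
        scores.append(f_score(y_true, y_pred, positive="positive"))
    return sum(scores) / k

# A trivial majority-class baseline stands in for the study's SVM/NB/LR models.
def majority_baseline(train, test_features):
    labels = [label for _, label in train]
    majority = max(set(labels), key=labels.count)
    return [majority] * len(test_features)
```

Swapping `majority_baseline` for a real learner reproduces the shape of the study's comparison: each model is handed the same folds and summarized by one averaged F-score.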

  13. The good, the bad and the dubious: VHELIBS, a validation helper for ligands and binding sites

    PubMed Central

    2013-01-01

    Background Many Protein Data Bank (PDB) users assume that the deposited structural models are of high quality but forget that these models are derived from the interpretation of experimental data. The accuracy of atom coordinates is not homogeneous between models or throughout the same model. To avoid basing a research project on a flawed model, we present a tool for assessing the quality of ligands and binding sites in crystallographic models from the PDB. Results The Validation HElper for LIgands and Binding Sites (VHELIBS) is software that aims to ease the validation of binding site and ligand coordinates for non-crystallographers (i.e., users with little or no crystallography knowledge). Using a convenient graphical user interface, it allows one to check how ligand and binding site coordinates fit to the electron density map. VHELIBS can use models from either the PDB or the PDB_REDO databank of re-refined and re-built crystallographic models. The user can specify threshold values for a series of properties related to the fit of coordinates to electron density (Real Space R, Real Space Correlation Coefficient and average occupancy are used by default). VHELIBS will automatically classify residues and ligands as Good, Dubious or Bad based on the specified limits. The user is also able to visually check the quality of the fit of residues and ligands to the electron density map and reclassify them if needed. Conclusions VHELIBS allows inexperienced users to examine the binding site and the ligand coordinates in relation to the experimental data. This is an important step to evaluate models for their fitness for drug discovery purposes such as structure-based pharmacophore development and protein-ligand docking experiments. PMID:23895374
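The three-way Good/Dubious/Bad scheme driven by user-specified thresholds can be sketched as follows. The cutoff values and names are illustrative assumptions, not VHELIBS's actual defaults or API; the real tool reads Real Space R (RSR), Real Space Correlation Coefficient (RSCC), and average occupancy from PDB or PDB_REDO models:

```python
# Illustrative thresholds only; VHELIBS lets the user set these.
GOOD = {"rsr_max": 0.24, "rscc_min": 0.90, "occupancy_min": 1.0}
BAD = {"rsr_max": 0.40, "rscc_min": 0.80, "occupancy_min": 0.5}

def classify(rsr, rscc, occupancy):
    """Classify a residue or ligand as Good, Dubious, or Bad from its
    fit-to-density statistics, mimicking the three-way scheme."""
    if (rsr <= GOOD["rsr_max"] and rscc >= GOOD["rscc_min"]
            and occupancy >= GOOD["occupancy_min"]):
        return "Good"
    if (rsr > BAD["rsr_max"] or rscc < BAD["rscc_min"]
            or occupancy < BAD["occupancy_min"]):
        return "Bad"
    return "Dubious"
```

Anything that clears every "Good" cutoff passes; anything that violates any "Bad" cutoff fails; the remainder is Dubious and, as in VHELIBS, would be flagged for visual inspection against the electron density map.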

  14. Analysis of pathology department Web sites and practical recommendations.

    PubMed

    Nero, Christopher; Dighe, Anand S

    2008-09-01

    There are numerous customers for pathology departmental Web sites, including pathology department staff, clinical staff, residency applicants, job seekers, and other individuals outside the department seeking department information. Despite the increasing importance of departmental Web sites as a means of distributing information, no analysis has been done to date of the content and usage of pathology department Web sites. In this study, we analyzed pathology department Web sites to examine the elements present on each site and to evaluate the use of search technology on these sites. Further, we examined the usage patterns of our own departmental Internet and intranet Web sites to better understand the users of pathology Web sites. We reviewed selected departmental pathology Web sites and analyzed their content and functionality. Our institution's departmental pathology Web sites were modified to enable detailed information to be stored regarding users and usage patterns, and that information was analyzed. We demonstrate considerable heterogeneity in departmental Web sites with many sites lacking basic content and search features. In addition, we demonstrate that increasing the traffic of a department's informational Web sites may result in reduced phone inquiries to the laboratory. We propose recommendations for pathology department Web sites to maximize promotion of a department's mission. A departmental pathology Web site is an essential communication tool for all pathology departments, and attention to the users and content of the site can have operational impact.

  15. Methods for Coding Tobacco-Related Twitter Data: A Systematic Review.

    PubMed

    Lienemann, Brianna A; Unger, Jennifer B; Cruz, Tess Boley; Chu, Kar-Hai

    2017-03-31

    As Twitter has grown in popularity to 313 million monthly active users, researchers have increasingly been using it as a data source for tobacco-related research. The objective of this systematic review was to assess the methodological approaches of categorically coded tobacco Twitter data and make recommendations for future studies. Data sources included PsycINFO, Web of Science, PubMed, ABI/INFORM, Communication Source, and Tobacco Regulatory Science. Searches were limited to peer-reviewed journals and conference proceedings in English from January 2006 to July 2016. The initial search identified 274 articles using a Twitter keyword and a tobacco keyword. One coder reviewed all abstracts and identified 27 articles that met the following inclusion criteria: (1) original research, (2) focused on tobacco or a tobacco product, (3) analyzed Twitter data, and (4) coded Twitter data categorically. One coder extracted data collection and coding methods. E-cigarettes were the most common type of Twitter data analyzed, followed by specific tobacco campaigns. The most prevalent data sources were Gnip and Twitter's Streaming application programming interface (API). The primary methods of coding were hand-coding and machine learning. The studies predominantly coded for relevance, sentiment, theme, user or account, and location of user. Standards for data collection and coding should be developed to be able to more easily compare and replicate tobacco-related Twitter results. Additional recommendations include the following: sample Twitter's databases multiple times, make a distinction between message attitude and emotional tone for sentiment, code images and URLs, and analyze user profiles. Being relatively novel and widely used among adolescents and black and Hispanic individuals, Twitter could provide a rich source of tobacco surveillance data among vulnerable populations. ©Brianna A Lienemann, Jennifer B Unger, Tess Boley Cruz, Kar-Hai Chu. 
Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 31.03.2017.
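One of the review's recommendations, coding the URLs that appear in tweets, presupposes extracting them first. A minimal sketch using Python's `re` module follows; the pattern and function name are illustrative, and production tweet parsing would also need to expand shortened t.co links and handle more edge cases:

```python
import re

# Simplified pattern: any http(s) run up to the next whitespace.
URL_RE = re.compile(r"https?://\S+")

def extract_urls(tweet_text):
    """Return all URLs found in a tweet, stripped of trailing punctuation."""
    return [u.rstrip(".,;:!?)") for u in URL_RE.findall(tweet_text)]
```

Extracted URLs could then be fed to a separate coding step (e.g., categorizing the linked domain) alongside the hand-coding or machine-learning approaches the review describes.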

  16. Text-mining-assisted biocuration workflows in Argo

    PubMed Central

    Rak, Rafal; Batista-Navarro, Riza Theresa; Rowley, Andrew; Carter, Jacob; Ananiadou, Sophia

    2014-01-01

    Biocuration activities have been broadly categorized into the selection of relevant documents, the annotation of biological concepts of interest and the identification of interactions between the concepts. Text mining has been shown to have the potential to significantly reduce the effort of biocurators in all three activities, and various semi-automatic methodologies have been integrated into curation pipelines to support them. We investigate the suitability of Argo, a workbench for building text-mining solutions with the use of a rich graphical user interface, for the process of biocuration. Central to Argo are customizable workflows that users compose by arranging available elementary analytics to form task-specific processing units. A built-in manual annotation editor is the single most used biocuration tool of the workbench, as it allows users to create annotations directly in text, as well as modify or delete annotations created by automatic processing components. Apart from syntactic and semantic analytics, the ever-growing library of components includes several data readers and consumers that support well-established as well as emerging data interchange formats such as XMI, RDF and BioC, which facilitate the interoperability of Argo with other platforms or resources. To validate the suitability of Argo for curation activities, we participated in the BioCreative IV challenge whose purpose was to evaluate Web-based systems addressing user-defined biocuration tasks. Argo proved to have the edge over other systems in terms of flexibility of defining biocuration tasks. As expected, the versatility of the workbench inevitably lengthened the time the curators spent on learning the system before taking on the task, which may have affected the usability of Argo. The participation in the challenge gave us an opportunity to gather valuable feedback and identify areas of improvement, some of which have already been introduced.
Database URL: http://argo.nactem.ac.uk PMID:25037308

  17. A Collaboration in Support of LBA Science and Data Exchange: Beija-flor and EOS-WEBSTER

    NASA Astrophysics Data System (ADS)

    Schloss, A. L.; Gentry, M. J.; Keller, M.; Rhyne, T.; Moore, B.

    2001-12-01

    The University of New Hampshire (UNH) has developed a Web-based tool that makes data, information, products, and services concerning terrestrial ecological and hydrological processes available to the Earth Science community. Our WEB-based System for Terrestrial Ecosystem Research (EOS-WEBSTER) provides a GIS-oriented interface to select, subset, reformat and download three main types of data: selected NASA Earth Observing System (EOS) remotely sensed data products, results from a suite of ecosystem and hydrological models, and geographic reference data. The Large Scale Biosphere-Atmosphere Experiment in Amazonia Project (LBA) has implemented a search engine, Beija-flor, that provides a centralized access point to data sets acquired for and produced by LBA researchers. The metadata in the Beija-flor index describe the content of the data sets and contain links to data distributed around the world. The query system returns a list of data sets that meet the search criteria of the user. A common problem when a user of a system like Beija-flor wants data products located within another system is that users are required to re-specify information, such as spatial coordinates, in the other system. This poster describes methodology by which Beija-flor generates a unique URL containing the requested search parameters and passes the information to EOS-WEBSTER, thus making the interactive services and large diverse data holdings in EOS-WEBSTER directly available to Beija-flor users. This "Calling Card" is used by EOS-WEBSTER to generate on-demand custom products tailored to each Beija-flor request. Through a collaborative effort, we have demonstrated the ability to integrate project-specific search engines such as Beija-flor with the products and services of large data systems such as EOS-WEBSTER, to provide very specific information products with a minimal amount of additional programming. 
This methodology has the potential to greatly facilitate research data exchange by enhancing the interoperability of diverse data systems beyond the two described here.
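The "Calling Card" mechanism, encoding a user's search parameters into a URL that one system hands to another, can be sketched with Python's standard library. The endpoint and parameter names below are hypothetical, since the abstract does not specify the actual EOS-WEBSTER URL scheme:

```python
from urllib.parse import urlencode

def calling_card(base_url, dataset_id, bbox, start, end):
    """Encode a Beija-flor-style search (dataset plus spatial and temporal
    criteria) as query parameters on a receiving system's URL, so the user
    need not re-enter the same information in the second system."""
    west, south, east, north = bbox
    params = {
        "dataset": dataset_id,
        "west": west, "south": south, "east": east, "north": north,
        "start": start, "end": end,
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical host, dataset name, and Amazon-region bounding box.
url = calling_card("https://eos-webster.example.edu/request",
                   "LBA-NPP-v1", (-75.0, -15.0, -45.0, 5.0),
                   "1999-01-01", "2000-12-31")
```

The receiving system then parses the query string and generates its custom product on demand, which is the interoperability pattern the poster describes.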

  18. NETL's Energy Data Exchange (EDX) - a coordination, collaboration, and data resource discovery platform for energy science

    NASA Astrophysics Data System (ADS)

    Rose, K.; Rowan, C.; Rager, D.; Dehlin, M.; Baker, D. V.; McIntyre, D.

    2015-12-01

    Multi-organizational research teams working jointly on projects often encounter problems with discovery, access to relevant existing resources, and data sharing due to large file sizes, inappropriate file formats, or other inefficient options that make collaboration difficult. The Energy Data eXchange (EDX) from the Department of Energy's (DOE) National Energy Technology Laboratory (NETL) is an evolving online research environment designed to overcome these challenges in support of DOE's fossil energy goals while offering improved access to data-driven products of fossil energy R&D such as datasets, tools, and web applications. Development of EDX began in 2011; the platform offers (i) a means of better preserving NETL's research and development products for future access and re-use, (ii) efficient, discoverable access to authoritative, relevant, external resources, and (iii) an improved approach and tools to support secure, private collaboration and coordination between multi-organizational teams to meet DOE mission and goals. EDX presently supports fossil energy and SubTER Crosscut research activities, with an ever-growing user base. EDX is built on a heavily customized instance of the open source platform Comprehensive Knowledge Archive Network (CKAN). EDX connects users to externally relevant data and tools by linking to external data repositories built on different platforms as well as to other CKAN platforms (e.g., Data.gov). EDX does not download and repost data or tools that already have an online presence, since doing so leads to redundancy and even error; if a relevant resource is already hosted by another online entity, EDX points users to that external host using web services, inventoried URLs, and other methods. EDX also offers users custom-built capabilities for private, secure collaboration.
The team is presently working on version 3 of EDX which will incorporate big data analytical capabilities amongst other advanced features.

  19. Age Factor in Business Education Students' Use of Social Networking Sites in Tertiary Institutions in Anambra State, Nigeria

    ERIC Educational Resources Information Center

    Ementa, Christiana Ngozi; Ile, Chika Madu

    2015-01-01

    There are diverse social networking sites which range from those that provide social sharing and interaction to those that provide networks for professionals within same and other fields. Social networking sites require a user to sign up, create a profile and begin sending short messages about what the user is doing or thinking. The study sought…

  20. Primer on the Implementation of a Pharmacy Intranet Site to Improve Department Communication

    PubMed Central

    Hale, Holly J.

    2013-01-01

    Purpose: The purpose of the article is to describe the experience of selecting, developing, and implementing a pharmacy department intranet site with commentary regarding application to other institutions. Clinical practitioners and supporting staff need an effective, efficient, organized, and user-friendly communication tool to utilize and relay information required to optimize patient care. Summary: To create a functional and user-friendly department intranet site, department leadership and staff should be involved in the process from selection of product through implementation. A product that supports both document storage management and communication delivery and has the capability to be customized to provide varied levels of site access is desirable. The designation of an intranet site owner/developer within the department will facilitate purposeful site design and site maintenance execution. A well-designed and up-to-date site along with formal end-user training are essential for staff adoption and continued utilization. Conclusion: Development of a department intranet site requires a considerable time investment by several members of the department. The implementation of an intranet site can be an important step toward achieving improved communications. Staff utilization of this resource is key to its success. PMID:24421523

  1. Primer on the implementation of a pharmacy intranet site to improve department communication.

    PubMed

    Hale, Holly J

    2013-07-01

    The purpose of the article is to describe the experience of selecting, developing, and implementing a pharmacy department intranet site with commentary regarding application to other institutions. Clinical practitioners and supporting staff need an effective, efficient, organized, and user-friendly communication tool to utilize and relay information required to optimize patient care. To create a functional and user-friendly department intranet site, department leadership and staff should be involved in the process from selection of product through implementation. A product that supports both document storage management and communication delivery and has the capability to be customized to provide varied levels of site access is desirable. The designation of an intranet site owner/developer within the department will facilitate purposeful site design and site maintenance execution. A well-designed and up-to-date site along with formal end-user training are essential for staff adoption and continued utilization. Development of a department intranet site requires a considerable time investment by several members of the department. The implementation of an intranet site can be an important step toward achieving improved communications. Staff utilization of this resource is key to its success.

  2. Obtaining antibiotics online from within the UK: a cross-sectional study

    PubMed Central

    Boyd, Sara Elizabeth; Moore, Luke Stephen Prockter; Gilchrist, Mark; Costelloe, Ceire; Castro-Sánchez, Enrique; Franklin, Bryony Dean; Holmes, Alison Helen

    2017-01-01

    Background: Improved antibiotic stewardship (AS) and reduced prescribing in primary care, with a parallel increase in personal internet use, could lead citizens to obtain antibiotics from alternative sources online. Objectives: A cross-sectional analysis was performed to: (i) determine the quality and legality of online pharmacies selling antibiotics to the UK public; (ii) describe processes for obtaining antibiotics online from within the UK; and (iii) identify resulting AS and patient safety issues. Methods: Searches were conducted for ‘buy antibiotics online’ using Google and Yahoo. For each search engine, data from the first 10 web sites with unique URL addresses were reviewed. Analysis was conducted on evidence of appropriate pharmacy registration, prescription requirement, whether antibiotic choice was ‘prescriber-driven’ or ‘consumer-driven’, and whether specific information was required (allergies, comorbidities, pregnancy) or given (adverse effects) prior to purchase. Results: Twenty unique URL addresses were analysed in detail. Online pharmacies evidencing their location in the UK (n = 5; 25%) required a prescription before antibiotic purchase, and were appropriately registered. Online pharmacies unclear about the location they were operating from (n = 10; 50%) had variable prescription requirements, and no evidence of appropriate registration. Nine (45%) online pharmacies did not require a prescription prior to purchase. For 16 (80%) online pharmacies, decisions were initially consumer-driven for antibiotic choice, dose and quantity. Conclusions: Wide variation exists among online pharmacies in relation to antibiotic practices, highlighting considerable patient safety and AS issues. Improved education, legislation, regulation and new best practice stewardship guidelines are urgently needed for online antibiotic suppliers. PMID:28333179

  3. Do Smartphone Power Users Protect Mobile Privacy Better than Nonpower Users? Exploring Power Usage as a Factor in Mobile Privacy Protection and Disclosure.

    PubMed

    Kang, Hyunjin; Shin, Wonsun

    2016-03-01

    This study examines how consumers' competence at using smartphone technology (i.e., power usage) affects their privacy protection behaviors. A survey conducted with smartphone users shows that power usage influences privacy protection behavior not only directly but also indirectly through privacy concerns and trust placed in mobile service providers. A follow-up experiment indicates that the effects of power usage on smartphone users' information management can be a function of content personalization. Users high on power usage are less likely to share personal information on personalized mobile sites, but they become more revealing when they interact with nonpersonalized mobile sites.

  4. PlantAPA: A Portal for Visualization and Analysis of Alternative Polyadenylation in Plants

    PubMed Central

    Wu, Xiaohui; Zhang, Yumin; Li, Qingshun Q.

    2016-01-01

    Alternative polyadenylation (APA) is an important layer of gene regulation that produces mRNAs that have different 3′ ends and/or encode diverse protein isoforms. Up to 70% of annotated genes in plants undergo APA. Increasing numbers of poly(A) sites collected in various plant species demand new methods and tools to access and mine these data. We have created an open-access web service called PlantAPA (http://bmi.xmu.edu.cn/plantapa) to visualize and analyze genome-wide poly(A) sites in plants. PlantAPA provides various interactive and dynamic graphics and seamlessly integrates a genome browser that can profile heterogeneous cleavage sites and quantify expression patterns of poly(A) sites across different conditions. In particular, through PlantAPA, users can analyze poly(A) sites in extended 3′ UTR regions, intergenic regions, and ambiguous regions owing to alternative transcription or RNA processing. In addition, it provides tools for analyzing poly(A) site selections, 3′ UTR lengthening or shortening, non-canonical APA site switching, and differential gene expression between conditions, making it more powerful for the study of APA-mediated gene expression regulation. More importantly, PlantAPA offers a bioinformatics pipeline that allows users to upload their own short reads or ESTs for poly(A) site extraction, enabling users to further explore poly(A) site selection using stored PlantAPA poly(A) sites together with their own poly(A) site datasets. To date, PlantAPA hosts the largest database of APA sites in plants, including Oryza sativa, Arabidopsis thaliana, Medicago truncatula, and Chlamydomonas reinhardtii. As a user-friendly web service, PlantAPA will be a valuable addition to the community of biologists studying APA mechanisms and gene expression regulation in plants. PMID:27446120
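    The 3′ UTR lengthening/shortening analysis described above can be illustrated with a toy computation. This is a minimal sketch, not PlantAPA's actual algorithm; the gene, site names, and read counts below are hypothetical:

```python
def relative_usage(counts):
    """Fraction of reads supporting each poly(A) site in one condition."""
    total = sum(counts.values())
    return {site: n / total for site, n in counts.items()}

def distal_usage_shift(distal_site, cond_a, cond_b):
    """Change in distal poly(A)-site usage between two conditions:
    positive -> 3' UTR lengthening in cond_b, negative -> shortening."""
    return relative_usage(cond_b)[distal_site] - relative_usage(cond_a)[distal_site]

# Hypothetical read counts for two poly(A) sites of one gene
leaf = {"PA1_proximal": 80, "PA2_distal": 20}
root = {"PA1_proximal": 30, "PA2_distal": 70}
shift = distal_usage_shift("PA2_distal", leaf, root)   # 0.70 - 0.20 = 0.50
```

    A positive shift in distal-site usage indicates 3′ UTR lengthening in the second condition; a negative shift indicates shortening.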

  5. TRMM Precipitation Application Examples Using Data Services at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Liu, Zhong; Ostrenga, D.; Teng, W.; Kempler, S.; Greene, M.

    2012-01-01

    Data services to support precipitation applications are important for maximizing the societal benefits of the NASA TRMM (Tropical Rainfall Measuring Mission) and the future GPM (Global Precipitation Measurement) missions. TRMM application examples using data services at the NASA GES DISC, including samples from users around the world, will be presented in this poster. Precipitation applications often require near-real-time support. The GES DISC provides such support through: 1) Providing near-real-time precipitation products through TOVAS; 2) Maps of current conditions for monitoring precipitation and its anomaly around the world; 3) A user-friendly tool (TOVAS) to analyze and visualize near-real-time and historical precipitation products; and 4) The GES DISC Hurricane Portal, which provides near-real-time monitoring services for the Atlantic basin. Since the launch of TRMM, the GES DISC has developed data services to support precipitation applications around the world. In addition to the near-real-time services, other services include: 1) The user-friendly TRMM Online Visualization and Analysis System (TOVAS; URL: http://disc2.nascom.nasa.gov/Giovanni/tovas/); 2) Mirador (http://mirador.gsfc.nasa.gov/), a simplified interface for searching, browsing, and ordering Earth science data at the GES DISC. Mirador is designed to be fast and easy to learn; 3) Data via OPeNDAP (http://disc.sci.gsfc.nasa.gov/services/opendap/). OPeNDAP provides remote access to individual variables within datasets in a form usable by many tools, such as IDV, McIDAS-V, Panoply, Ferret, and GrADS; and 4) The Open Geospatial Consortium (OGC) Web Map Service (WMS) (http://disc.sci.gsfc.nasa.gov/services/wxs_ogc.shtml). The WMS is an interface that allows the use of data and enables clients to build customized maps with data coming from different networks.
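    The OPeNDAP access mentioned above works by appending a constraint expression to a dataset URL. The helper below sketches how such a subset-request URL is assembled; the server path, dataset name, and variable name are hypothetical, but the `.ascii?var[start:stop]` syntax is the standard DAP2 convention:

```python
def opendap_subset_url(base, variable, ranges):
    """Build an OPeNDAP constraint-expression URL requesting an ASCII
    subset of one variable. `ranges` is a list of (start, stop) index
    pairs, one per dimension (inclusive, per OPeNDAP convention)."""
    ce = variable + "".join(f"[{a}:{b}]" for a, b in ranges)
    return f"{base}.ascii?{ce}"

# Hypothetical TRMM 3B42 granule path on an OPeNDAP server
url = opendap_subset_url(
    "https://example.gov/opendap/TRMM_3B42/3B42.20120101.7.HDF",
    "precipitation",
    [(0, 0), (100, 110), (200, 210)],
)
```

    Opening such a URL returns only the requested subset, which is why OPeNDAP-aware tools like Panoply or GrADS can work with remote datasets without downloading whole files.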

  6. Students paperwork tracking system (SPATRASE)

    NASA Astrophysics Data System (ADS)

    Ishak, I. Y.; Othman, M. B.; Talib, Rahmat; Ilyas, M. A.

    2017-09-01

    This paper focuses on a system for tracking the status of paperwork using Near Field Communication (NFC) technology and mobile apps. The student paperwork tracking system, known as SPATRASE, was developed to let users track the location status of their paperwork. The current problem is that the paperwork approval process takes around a month or more, because of the many procedures that must be completed before a department grants full approval. Moreover, with the inefficient manual system, users cannot immediately find out where their paperwork is; the submitter must call the student affairs department to obtain its location status. This project was therefore proposed as an alternative that reduces the time spent waiting for paperwork status information. The prototype involves both hardware and software: NFC tags, an RFID reader, and mobile apps. At each checkpoint, an RFID reader is placed on the secretary's desk, and the system records scans in a database built with Google Docs that is linked to a web server. The submitter receives a URL link, is directed to the web server and mobile apps, and can then check the paperwork's location status through the mobile apps and Google Docs. This system makes the tracking process efficient and reliable, showing exactly where the paperwork is and sparing the submitter repeated calls to the department. Generally, the project is fully functional, and we hope it can help Universiti Tun Hussein Onn Malaysia (UTHM) overcome the problems of missing paperwork and unknown paperwork location.
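    The checkpoint-scan logic described above can be sketched as a minimal in-memory model. The class, tag IDs, and checkpoint names are hypothetical; the real system records scans via Google Docs and a web server:

```python
from datetime import datetime

class PaperworkTracker:
    """Minimal model of NFC checkpoint scans: each scan of a tagged
    document at a checkpoint is appended to that tag's history."""
    def __init__(self):
        self.scans = {}          # tag_id -> list of (checkpoint, timestamp)

    def record_scan(self, tag_id, checkpoint, when=None):
        when = when or datetime.now()
        self.scans.setdefault(tag_id, []).append((checkpoint, when))

    def current_location(self, tag_id):
        """Latest checkpoint seen for this paperwork, or None."""
        history = self.scans.get(tag_id)
        return history[-1][0] if history else None

tracker = PaperworkTracker()
tracker.record_scan("TAG-042", "Department secretary")
tracker.record_scan("TAG-042", "Dean's office")
loc = tracker.current_location("TAG-042")   # "Dean's office"
```

    The submitter's mobile app would then only need to query the latest scan for a tag, rather than phoning each office in turn.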

  7. Alcohol Marketing on Twitter and Instagram: Evidence of Directly Advertising to Youth/Adolescents.

    PubMed

    Barry, Adam E; Bates, Austin M; Olusanya, Olufunto; Vinal, Cystal E; Martin, Emily; Peoples, Janiene E; Jackson, Zachary A; Billinger, Shanaisa A; Yusuf, Aishatu; Cauley, Daunte A; Montano, Javier R

    2016-07-01

    Assess whether alcohol companies restrict youth/adolescent access, interaction, and exposure to their marketing on Twitter and Instagram. We employed five fictitious male and female Twitter (n = 10) and Instagram (n = 10) user profiles aged 13, 15, 17, 19, and/or 21. Using cellular smartphones, we determined whether profiles could (a) interact with advertising content (e.g., retweet, view video or picture content, comment, share a URL) and/or (b) follow and directly receive advertising material updates from the official Instagram and Twitter pages of 22 alcohol brands for 30 days. All user profiles could fully access, view, and interact with alcohol industry content posted on Instagram and Twitter. Twitter's age-gate, which restricts access for those under 21, successfully prevented underage profiles from following and subsequently receiving promotional material/updates. The two 21+ profiles collectively received 1836 alcohol-related tweets within 30 days. All Instagram profiles, however, were able to follow all alcohol brand pages and received an average of 362 advertisements within 30 days. The quantity of promotional updates increased throughout the week, peaking on Thursday and Friday. Representatives/controllers of alcohol brand Instagram pages responded directly to our underage users' comments. The alcohol industry is in violation of its proposed self-regulation guidelines for digital marketing communications on Instagram. While Twitter's age-gate effectively blocked direct-to-phone updates, unhindered access to posts was possible. Every day, our fictitious profiles, even those as young as 13, were bombarded with alcohol industry messages and promotional material sent directly to their smartphones. © The Author 2015. Medical Council on Alcohol and Oxford University Press. All rights reserved.

  8. A service for the application of data quality information to NASA earth science satellite records

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Xing, Z.; Fry, C.; Khalsa, S. J. S.; Huang, T.; Chen, G.; Chin, T. M.; Alarcon, C.

    2016-12-01

    A recurring demand in working with satellite-based earth science data records is the need to apply data quality information. Such quality information is often contained within the data files as an array of "flags", but can also be represented by more complex quality descriptions, such as combinations of bit flags, or even other ancillary variables that can be applied as thresholds to the geophysical variable of interest. For example, with Level 2 granules from the Group for High Resolution Sea Surface Temperature (GHRSST) project, up to 6 independent variables could be used to screen the sea surface temperature measurements on a pixel-by-pixel basis. Quality screening of Level 3 data from the Soil Moisture Active Passive (SMAP) instrument can become even more complex, involving 161 unique bit states or conditions a user can screen for. The application of quality information is often a laborious process for users until they understand the implications of all the flags and bit conditions, and requires iterative approaches using custom software. The Virtual Quality Screening Service, a NASA ACCESS project, is addressing these issues and concerns. The project has developed an infrastructure to expose, apply, and extract quality screening information, building off known and proven NASA components for data extraction and subset-by-value, data discovery, and exposure to the user of granule-based quality information. Further sharing of results through well-defined URLs and web service specifications has also been implemented. The presentation will focus on an overall description of the technologies and informatics principles employed by the project. Examples of implementations of the end-to-end web service for quality screening with GHRSST and SMAP granules will be demonstrated.
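    The bit-flag screening described above boils down to masking each pixel's quality word against a set of rejected conditions. The bit assignments below are hypothetical (the actual GHRSST and SMAP bit layouts are defined in their product specifications):

```python
# Hypothetical bit assignments for a per-pixel quality word
RETRIEVAL_FAILED    = 1 << 0
RETRIEVAL_UNCERTAIN = 1 << 1
FROZEN_GROUND       = 1 << 2
DENSE_VEGETATION    = 1 << 3

def passes_screening(quality_word, reject_mask):
    """Keep a pixel only if none of the rejected bits are set."""
    return (quality_word & reject_mask) == 0

reject = RETRIEVAL_FAILED | FROZEN_GROUND
# (geophysical value, quality word) pairs for three pixels
pixels = [(31.2, 0b0000), (30.8, 0b0100), (29.9, 0b0010)]
screened = [v for v, q in pixels if passes_screening(q, reject)]
# keeps 31.2 (no flags set) and 29.9 (only RETRIEVAL_UNCERTAIN, not rejected)
```

    A service like the one described lets the user choose the reject mask interactively instead of writing this screening code by hand for each product.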

  9. Co-LncRNA: investigating the lncRNA combinatorial effects in GO annotations and KEGG pathways based on human RNA-Seq data

    PubMed Central

    Zhao, Zheng; Bai, Jing; Wu, Aiwei; Wang, Yuan; Zhang, Jinwen; Wang, Zishan; Li, Yongsheng; Xu, Juan; Li, Xia

    2015-01-01

    Long non-coding RNAs (lncRNAs) are emerging as key regulators of diverse biological processes and diseases. However, the combinatorial effects of these molecules in a specific biological function are poorly understood. Identifying co-expressed protein-coding genes of lncRNAs would provide ample insight into lncRNA functions. To facilitate such an effort, we have developed Co-LncRNA, which is a web-based computational tool that allows users to identify GO annotations and KEGG pathways that may be affected by co-expressed protein-coding genes of a single or multiple lncRNAs. LncRNA co-expressed protein-coding genes were first identified in publicly available human RNA-Seq datasets, including 241 datasets across 6560 total individuals representing 28 tissue types/cell lines. Then, the lncRNA combinatorial effects in a given GO annotation or KEGG pathway are taken into account by the simultaneous analysis of multiple lncRNAs in user-selected individual or multiple datasets, which is realized by enrichment analysis. In addition, this software provides a graphical overview of pathways that are modulated by lncRNAs, as well as a specific tool to display the relevant networks between lncRNAs and their co-expressed protein-coding genes. Co-LncRNA also supports users in uploading their own lncRNA and protein-coding gene expression profiles to investigate the lncRNA combinatorial effects. It will be continuously updated with more human RNA-Seq datasets on an annual basis. Taken together, Co-LncRNA provides a web-based application for investigating lncRNA combinatorial effects, which could shed light on their biological roles and could be a valuable resource for this community. Database URL: http://www.bio-bigdata.com/Co-LncRNA/ PMID:26363020
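    Enrichment analysis of the kind described above is commonly based on the hypergeometric test. The sketch below shows that computation in general form; the gene counts are toy numbers and Co-LncRNA's exact statistics may differ:

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """One-sided hypergeometric p-value: probability of observing k or
    more annotated genes among n co-expressed genes, when K of the N
    background genes carry the annotation."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Toy numbers: 20,000 background genes, 200 in a pathway, and 100
# co-expressed genes of a lncRNA, of which 8 fall in the pathway.
p = hypergeom_enrichment_p(20000, 200, 100, 8)
```

    Here only about 1 of the 100 co-expressed genes would be expected in the pathway by chance, so observing 8 yields a very small p-value, flagging the pathway as enriched.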

  10. Wintering Waterbirds and Recreationists in Natural Areas: A Sociological Approach to the Awareness of Bird Disturbance

    NASA Astrophysics Data System (ADS)

    Le Corre, Nicolas; Peuziat, Ingrid; Brigand, Louis; Gélinaud, Guillaume; Meur-Férec, Catherine

    2013-10-01

    Disturbance to wintering birds by human recreational activities has become a major concern for managers of many natural areas. Few studies have examined how recreationists perceive their effects on birds, although this impacts their behavior on natural areas. We surveyed 312 users on two coastal ornithological sites in Brittany, France, to investigate their perception of the effects of human activities on wintering birds. The results show that the awareness of environmental issues and knowledge of bird disturbance depends on the socioeconomic characteristics of each user group, both between the two sites and within each site. Results also indicate that, whatever the site and the user group, the vast majority of the respondents (77.6 %) believed that their own presence had no adverse effects on the local bird population. Various arguments were put forward to justify the users' own harmlessness. Objective information on recreationists' awareness of environmental issues, and particularly on their own impact on birds, is important to guide managers in their choice of the most appropriate visitor educational programs. We recommend developing global but also specific educational information for each type of user to raise awareness of their own impact on birds.

  11. Supporting the planning for the evolution of the EOSDIS through an in-depth understanding of user requirements for NASA's world-class Earth science data system

    NASA Astrophysics Data System (ADS)

    Griffin, V. L.; Behnke, J.; Maiden, M.; Fontaine, K.

    2004-12-01

    NASA is planning for the evolution of the Earth Observation System Data and Information System (EOSDIS), a large, complex data system currently supporting over 18 operational NASA satellite missions including the flagship EOS missions: Terra, Aqua, and Aura. A critical underpinning for the evolution planning is developing thorough knowledge of the EOSDIS users and how they use the EOSDIS products in their research and/or applications endeavors. This paper provides charts and tables of results from NASA studies that characterized our users, data, and techniques. Using these metrics, other projects can apply NASA's 'lessons learned' to the development and operations of their data systems. In 2004, NASA undertook an intensive study of the users and usage of EOSDIS data. The study considered trends in the types and levels of EOS data products being ordered, the expanding number of users requesting products, and the "domains" of those users. The study showed that increasing numbers of users are using the validated, geophysical products produced from the radiance measurements recorded by the EOS instruments, while there remains a steady demand for the radiance products themselves. In 2003, over 2.1 million individuals contacted EOSDIS (as identified by unique email and/or URL), with just over 10% requesting a product or service. The users came from all sectors, including 40% from more than 125 countries outside the U.S. University researchers and students (.edu) received over 40% of the some 29 million data and information products disseminated by EOSDIS. The trend in method of delivery for EOSDIS data has been away from receiving data on hard media (tapes, CD-ROM, etc.) toward receiving the data over the network. Over 75% of the EOSDIS data products were disseminated via electronic means in 2003, contrasted with just under 30% in 2000. To plan for system-wide evolution, one must know whether the system is meeting users' needs and expectations.
Thus, in 2004 NASA commissioned a comprehensive survey to determine user satisfaction using the American Customer Satisfaction Index (ACSI) approach. The results show that, overall, the users are highly satisfied with the EOSDIS systems and services as the EOSDIS ACSI score outperformed both the averages for U.S. companies and for Federal Agencies. Noteworthy was the fact that there was no statistical difference in the quality scores received by the various EOSDIS data centers. The response indicated that customer support provided by the EOSDIS Distributed Active Archive Centers (DAACs) is "world class" and that a very high number of users intend to use EOSDIS in the future (90%) and to recommend it to their colleagues (86%). The survey highlighted areas that, if improved, could lead to increased user satisfaction, including overall product quality, product documentation, and product selection and ordering processes. These results will be factored into NASA's evolution planning.

  12. APADB: a database for alternative polyadenylation and microRNA regulation events

    PubMed Central

    Müller, Sören; Rycak, Lukas; Afonso-Grunz, Fabian; Winter, Peter; Zawada, Adam M.; Damrath, Ewa; Scheider, Jessica; Schmäh, Juliane; Koch, Ina; Kahl, Günter; Rotter, Björn

    2014-01-01

    Alternative polyadenylation (APA) is a widespread mechanism that contributes to the sophisticated dynamics of gene regulation. Approximately 50% of all protein-coding human genes harbor multiple polyadenylation (PA) sites; their selective and combinatorial use gives rise to transcript variants with differing length of their 3′ untranslated region (3′UTR). Shortened variants escape UTR-mediated regulation by microRNAs (miRNAs), especially in cancer, where global 3′UTR shortening accelerates disease progression, dedifferentiation and proliferation. Here we present APADB, a database of vertebrate PA sites determined by 3′ end sequencing, using massive analysis of complementary DNA ends. APADB provides (A)PA sites for coding and non-coding transcripts of human, mouse and chicken genes. For human and mouse, several tissue types, including different cancer specimens, are available. APADB records the loss of predicted miRNA binding sites and visualizes next-generation sequencing reads that support each PA site in a genome browser. The database tables can either be browsed according to organism and tissue or alternatively searched for a gene of interest. APADB is the largest database of APA in human, chicken and mouse. The stored information provides experimental evidence for thousands of PA sites and APA events. APADB combines 3′ end sequencing data with prediction algorithms of miRNA binding sites, allowing prediction algorithms to be further improved. Current databases lack correct information about 3′UTR lengths, especially for chicken, and APADB provides the necessary information to close this gap. Database URL: http://tools.genxpro.net/apadb/ PMID:25052703
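    The loss of predicted miRNA binding sites under 3′UTR shortening, which the database above records, reduces to an interval check between the proximal and distal poly(A) sites. A minimal sketch with hypothetical coordinates and miRNA names:

```python
def lost_mirna_sites(proximal_pa, distal_pa, mirna_sites):
    """miRNA binding sites present in the long 3'UTR isoform (ending at
    the distal poly(A) site) but absent from the short isoform (ending
    at the proximal site). Positions are hypothetical UTR offsets."""
    return [
        name for name, pos in mirna_sites
        if proximal_pa <= pos < distal_pa
    ]

# Hypothetical predicted sites within one gene's longest 3'UTR
sites = [("miR-21", 120), ("miR-155", 480), ("let-7", 730)]
lost = lost_mirna_sites(300, 800, sites)   # sites escaped by shortening
```

    A transcript that switches from the distal to the proximal site would thus escape regulation by the two downstream miRNAs while keeping the one upstream of the proximal site.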

  13. CaMELS: In silico prediction of calmodulin binding proteins and their binding sites.

    PubMed

    Abbasi, Wajid Arshad; Asif, Amina; Andleeb, Saiqa; Minhas, Fayyaz Ul Amir Afsar

    2017-09-01

    Due to Ca2+-dependent binding and the sequence diversity of Calmodulin (CaM) binding proteins, identifying CaM interactions and binding sites in the wet-lab is tedious and costly. Therefore, computational methods for this purpose are crucial to the design of such wet-lab experiments. We present an algorithm suite called CaMELS (CalModulin intEraction Learning System) for predicting proteins that interact with CaM as well as their binding sites using sequence information alone. CaMELS offers state-of-the-art accuracy for both CaM interaction and binding site prediction and can aid biologists in studying CaM binding proteins. For CaM interaction prediction, CaMELS uses protein sequence features coupled with a large-margin classifier. CaMELS models the binding site prediction problem using multiple instance machine learning with a custom optimization algorithm which allows more effective learning over imprecisely annotated CaM-binding sites during training. CaMELS has been extensively benchmarked using a variety of data sets, mutagenic studies, proteome-wide Gene Ontology enrichment analyses and protein structures. Our experiments indicate that CaMELS outperforms simple motif-based search and other existing methods for interaction and binding site prediction. We have also found that the whole sequence of a protein, rather than just its binding site, is important for predicting its interaction with CaM. Using the machine learning model in CaMELS, we have identified important features of protein sequences for CaM interaction prediction as well as characteristic amino acid sub-sequences and their relative position for identifying CaM binding sites. Python code for training and evaluating CaMELS together with a webserver implementation is available at the URL: http://faculty.pieas.edu.pk/fayyaz/software.html#camels. © 2017 Wiley Periodicals, Inc.
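    Turning a protein sequence into a fixed-length input for a large-margin classifier is often done with k-mer (e.g., dipeptide) composition features. The sketch below illustrates that general idea only; it is not CaMELS's actual feature set, and the example sequence is arbitrary:

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def dipeptide_features(seq):
    """Normalized dipeptide-composition vector: 400 entries, one per
    ordered amino-acid pair, each the fraction of adjacent pairs in
    `seq` equal to that dipeptide."""
    pairs = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
    counts = dict.fromkeys(pairs, 0)
    for i in range(len(seq) - 1):
        counts[seq[i:i + 2]] += 1
    total = max(len(seq) - 1, 1)
    return [counts[p] / total for p in pairs]

vec = dipeptide_features("MKWVTFISLLLLFSSAYS")   # 400-dimensional vector
```

    A vector like this can be fed directly to any large-margin classifier (e.g., a linear SVM); the abstract's finding that the whole sequence matters is consistent with such whole-sequence composition features.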

  14. The ANSS Station Information System: A Centralized Station Metadata Repository for Populating, Managing and Distributing Seismic Station Metadata

    NASA Astrophysics Data System (ADS)

    Thomas, V. I.; Yu, E.; Acharya, P.; Jaramillo, J.; Chowdhury, F.

    2015-12-01

    Maintaining and archiving accurate site metadata is critical for seismic network operations. The Advanced National Seismic System (ANSS) Station Information System (SIS) is a repository of seismic network field equipment, equipment response, and other site information. Currently, there are 187 different sensor models and 114 data-logger models in SIS. SIS has a web-based user interface that allows network operators to enter information about seismic equipment and assign response parameters to it. It allows users to log entries for sites, equipment, and data streams. Users can also track when equipment is installed, updated, and/or removed from sites. When seismic equipment configurations change for a site, SIS computes the overall gain of a data channel by combining the response parameters of the underlying hardware components. Users can then distribute this metadata in standardized formats such as FDSN StationXML or dataless SEED. One powerful advantage of SIS is that existing data in the repository can be leveraged: e.g., new instruments can be assigned response parameters from the Incorporated Research Institutions for Seismology (IRIS) Nominal Response Library (NRL), or from a similar instrument already in the inventory, thereby reducing the amount of time needed to determine parameters when new equipment (or models) are introduced into a network. SIS is also useful for managing field equipment that does not produce seismic data (e.g., power systems, telemetry devices, or GPS receivers) and gives the network operator a comprehensive view of site field work. SIS allows users to generate field logs to document activities and inventory at sites. Thus, operators can also use SIS reporting capabilities to improve planning and maintenance of the network. Queries such as how many sensors of a certain model are installed or what pieces of equipment have active problem reports are just a few examples of the type of information that is available to SIS users.
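    The overall channel gain computation mentioned above is, at its simplest, the product of the gains of the components in the channel's signal path. A minimal sketch with illustrative numbers (not any particular instrument's response parameters):

```python
def overall_gain(component_gains):
    """Overall channel sensitivity as the product of the gains of the
    hardware components in the signal path (sensor, preamp, digitizer)."""
    gain = 1.0
    for g in component_gains:
        gain *= g
    return gain

# Hypothetical chain: sensor 1500 V/(m/s), digitizer 419430 counts/V
counts_per_ms = overall_gain([1500.0, 419430.0])   # counts per (m/s)
```

    Recomputing this product automatically whenever a component is swapped is what keeps the distributed StationXML/dataless SEED metadata consistent with the hardware actually installed.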

  15. Improving safety on rural local and tribal roads site safety analysis - user guide #1.

    DOT National Transportation Integrated Search

    2014-08-01

    This User Guide presents an example of how rural local and Tribal practitioners can study conditions at a preselected site. It demonstrates the step-by-step safety analysis process presented in Improving Safety on Rural Local and Tribal Roads Saf...

  16. Jack'd, a Mobile Social Networking Application: A Site of Exclusion Within a Site of Inclusion.

    PubMed

    Bartone, Michael D

    2018-01-01

    User-generated smartphone applications have created a new level of virtual connectivity for gay males, one in which users can create profiles and meet other users as nearby or as far away as possible. For those within close proximity, the other users can be considered their "virtual neighbors." Although the applications are theoretically designed to be places of inclusion and not exclusion, where any gay male with economic means can download an application, many profiles have been created that exclude other users. Through an examination of profiles on one such application, Jack'd, exclusion is found in the way users celebrate and reinforce ideas of traditional masculinity and denigrate and reinforce stereotypic ideas of femininity embodied by some gay men. Jack'd, and other user-generated smartphone applications, can be read as virtual neighborhoods where one is excluded based on their gender performance.

  17. Use of StreamStats in the Upper French Broad River Basin, North Carolina: A Pilot Water-Resources Web Application

    USGS Publications Warehouse

    Wagner, Chad R.; Tighe, Kirsten C.; Terziotti, Silvia

    2009-01-01

    StreamStats is a Web-based Geographic Information System (GIS) application that was developed by the U.S. Geological Survey (USGS) in cooperation with Environmental Systems Research Institute, Inc. (ESRI) to provide access to an assortment of analytical tools that are useful for water-resources planning and management. StreamStats allows users to easily obtain streamflow statistics, basin characteristics, and descriptive information for USGS data-collection sites and selected ungaged sites. StreamStats also allows users to identify stream reaches upstream and downstream from user-selected sites and obtain information for locations along streams where activities occur that can affect streamflow conditions. This functionality can be accessed through a map-based interface with the user's Web browser or through individual functions requested remotely through other Web applications.

  18. Notions of reliability: considering the importance of difference in guiding patients to health care Web sites.

    PubMed

    Adams, S A; De Bont, A A

    2003-01-01

    This article analyzes the efforts of three organizations to provide a standard that guides Internet users to reliable health care sites. Comparison of health Internet sites, interviews and document studies. In comparing these approaches, three different constructions of reliability are identified. The resulting possibilities and restrictions of these constructions for users that are searching for health information on the Internet are revealed.

  19. Personality and Social Influence Characteristic Affects on Ease of Use and Peer Influence of New Media Users Over Time

    DTIC Science & Technology

    2011-03-01

    Online social networking sites (SNSs) have emerged in today's society, as seen in the SNS Facebook and its over 500 million users. Millions of people...and locations through the social networking site (SNS) Facebook (Ghonim, 2011). Through the Facebook page created by Ghonim, “We are all Khalid...

  20. Factors Associated With Time to Site Activation, Randomization, and Enrollment Performance in a Stroke Prevention Trial.

    PubMed

    Demaerschalk, Bart M; Brown, Robert D; Roubin, Gary S; Howard, Virginia J; Cesko, Eldina; Barrett, Kevin M; Longbottom, Mary E; Voeks, Jenifer H; Chaturvedi, Seemant; Brott, Thomas G; Lal, Brajesh K; Meschia, James F; Howard, George

    2017-09-01

    Multicenter clinical trials attempt to select sites that can move rapidly to randomization and enroll sufficient numbers of patients. However, there are few assessments of the success of site selection. In the CREST-2 (Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trials), we assess factors associated with the time between site selection and authorization to randomize, the time between authorization to randomize and the first randomization, and the average number of randomizations per site per month. Potential factors included characteristics of the site, specialty of the principal investigator, and site type. For 147 sites, the median time between site selection to authorization to randomize was 9.9 months (interquartile range, 7.7, 12.4), and factors associated with early site activation were not identified. The median time between authorization to randomize and a randomization was 4.6 months (interquartile range, 2.6, 10.5). Sites with authorization to randomize in only the carotid endarterectomy study were slower to randomize, and other factors examined were not significantly associated with time-to-randomization. The recruitment rate was 0.26 (95% confidence interval, 0.23-0.28) patients per site per month. By univariate analysis, factors associated with faster recruitment were authorization to randomize in both trials, principal investigator specialties of interventional radiology and cardiology, pre-trial reported performance >50 carotid angioplasty and stenting procedures per year, status in the top half of recruitment in the CREST trial, and classification as a private health facility. Participation in StrokeNet was associated with slower recruitment as compared with the non-StrokeNet sites. Overall, selection of sites with high enrollment rates will likely require customization to align the sites selected to the factor under study in the trial. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02089217. 
© 2017 American Heart Association, Inc.

  1. It's better to give than to receive: the role of social support, trust, and participation on health-related social networking sites.

    PubMed

    Hether, Heather J; Murphy, Sheila T; Valente, Thomas W

    2014-12-01

    Nearly 60% of American adults and 80% of Internet users have sought health information online. Moreover, Internet users are no longer solely passive consumers of online health content; they are active producers as well. Social media, such as social networking sites, are increasingly being used as online venues for the exchange of health-related information and advice. However, little is known about how participation on health-related social networking sites affects users. Research has shown that women participate more on social networking sites and social networks are more influential among same-sex members. Therefore, this study examined how participation on a social networking site about pregnancy influenced members' health-related attitudes and behaviors. The authors surveyed 114 pregnant members of 8 popular pregnancy-related sites. Analyses revealed that time spent on the sites was less predictive of health-related outcomes than more qualitative assessments such as trust in the sites. Furthermore, providing support was associated with the most outcomes, including seeking more information from additional sources and following recommendations posted on the sites. The implications of these findings, as well as directions for future research, are discussed.

  2. Smokeless Tobacco Use and Periodontal Health in a Rural Male Population

    PubMed Central

    Chu, Yong H.; Tatakis, Dimitris N.

    2010-01-01

    Background Despite the reported effects of smokeless tobacco (ST) on the periodontium and high prevalence of ST use in rural populations and in men, studies on this specific topic are limited. The purpose of this cross-sectional investigation is to evaluate the periodontal health status of male ST users from a rural population. Methods Adult male residents of two rural Appalachian Ohio counties and daily ST users, with a unilateral mandibular oral ST keratosis lesion, were recruited. Subjects completed a questionnaire and received oral examination. Teeth present, ST keratosis lesion, plaque and gingival index, probing depth (PD), recession depth (RD), and attachment level were recorded. Statistical analysis compared ST-site mandibular teeth (teeth adjacent to the subject’s unilateral ST keratosis lesion) to NST-site teeth (contralateral corresponding teeth). Results This study includes 73 ST users. Recession prevalence is much greater in ST-site quadrants (36%) compared to NST-site quadrants (18%; P <0.001). Twice as many teeth had recession on ST-site (approximately 20%) than NST-site (approximately 10%; P = 0.0001). Average buccal RD on ST-site teeth did not differ from that on the NST-site teeth (P = 0.0875). Although average buccal attachment loss is greater on ST-site teeth (P = 0.016), the mean difference is <0.5 mm. When stratified by years of ST use, subjects using ST for 10 to 18 years exhibit the most differences between ST and NST sites, whereas subjects using ST for <10 years show no differences. Conclusion The results indicate that greater gingival recession prevalence and extent are associated with ST placement site in rural male ST users. PMID:20350155

  3. User's Manual for the New England Water-Use Data System (NEWUDS)

    USGS Publications Warehouse

    Horn, Marilee A.

    2003-01-01

Water is used in a variety of ways that need to be understood for effective management of water resources. Water-use activities need to be categorized and included in a database management system to understand current water uses and to provide information to water-resource management policy decisionmakers. The New England Water-Use Data System (NEWUDS) is a complex database developed to store water-use information that allows water to be tracked from a point of water-use activity (called a 'Site'), such as withdrawal from a resource (reservoir or aquifer), to a second Site, such as distribution to a user (business or irrigator). The NEWUDS conceptual model consists of 10 core entities: system, owner, address, location, site, data source, resource, conveyance, transaction/rate, and alias, with tables available to store user-defined details. Three components--the sites (a From Site and a To Site), a conveyance that connects them, and a transaction/rate associated with the movement of water over a specific time interval--form the core of the basic NEWUDS network model. The most important step in correctly translating real-world water-use activities into a storable format in NEWUDS is choosing the appropriate sites and linking them correctly in a network to model the flow of water from the initial From Site to the final To Site. Ten water-use networks representing real-world activities are described--three withdrawal networks, three return networks, two user networks, and two complex community-system networks. Ten case studies of water use, one for each network, also are included in this manual to illustrate how to compile, store, and retrieve the appropriate data. The sequence of data entry into tables is critical because there are many foreign keys. 
The recommended core entity sequence is (1) system, (2) owner, (3) address, (4) location, (5) site, (6) data source, (7) resource, (8) conveyance, (9) transaction, and (10) rate; with (11) alias and (12) user-defined detail subject areas populated as needed. After each step in data entry, quality-assurance queries should be run to ensure the data are correctly entered so that they can be retrieved accurately. The point of data storage is retrieval. Several retrieval queries that focus on retrieving only the data relevant to specific questions are presented in this manual as examples for the NEWUDS user.
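The foreign-key ordering above can be illustrated with a minimal sketch. The two-table schema below is hypothetical and greatly simplified, not the actual NEWUDS schema: a conveyance row references its From Site and To Site, so it cannot be entered until those site rows exist.

```python
import sqlite3

# Hypothetical, greatly simplified NEWUDS-like fragment: a conveyance
# references two site rows, so sites must be entered before conveyances.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE site (site_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE conveyance (
    conveyance_id INTEGER PRIMARY KEY,
    from_site INTEGER NOT NULL REFERENCES site(site_id),
    to_site   INTEGER NOT NULL REFERENCES site(site_id))""")

# Entering the conveyance first violates the foreign keys...
try:
    conn.execute("INSERT INTO conveyance VALUES (1, 10, 20)")
    ordered_entry_required = False
except sqlite3.IntegrityError:
    ordered_entry_required = True

# ...so the From Site and To Site are entered first, then the conveyance.
conn.execute("INSERT INTO site VALUES (10, 'Withdrawal from reservoir')")
conn.execute("INSERT INTO site VALUES (20, 'Distribution to user')")
conn.execute("INSERT INTO conveyance VALUES (1, 10, 20)")
conn.commit()
```

The same discipline motivates the manual's fixed entry sequence: parent entities before the rows that reference them.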

  4. Vulnerability Assessment of Open Source Wireshark and Chrome Browser

    DTIC Science & Technology

    2013-08-01

We spent much of the initial time learning about the logical model that modern HTML5 web browsers support, including how users interact with...are supposed to protect users of that site against cross-site scripting) and the new, powerful, and all-encompassing HTML5 standard. This vulnerability

  5. Immersive telepresence system using high-resolution omnidirectional movies and a locomotion interface

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu

    2004-05-01

Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real environment images is expected to be used in the fields of entertainment, medicine, education, and so on. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actual walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present users with the virtualized environment, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.

  6. EarthScope's Education, Outreach, and Communications: Using Social Media from Continental to Global Scales

    NASA Astrophysics Data System (ADS)

    Bohon, W.; Frus, R.; Arrowsmith, R.; Fouch, M. J.; Garnero, E. J.; Semken, S. C.; Taylor, W. L.

    2011-12-01

Social media has emerged as a popular and effective form of communication among all age groups, with nearly half of Internet users belonging to a social network or using another form of social media on a regular basis. This phenomenon creates an excellent opportunity for earth science organizations to use the wide reach, functionality and informal environment of social media platforms to disseminate important scientific information, create brand recognition, and establish trust with users. Further, social media systems can be utilized for missions of education, outreach, and communicating important timely information (e.g., news agencies are common users). They are eminently scalable (thus serving from a few to millions of users with no cost and no performance problem), searchable (people are turning to them more frequently as conduits for information), and user friendly (thanks to the massive resources poured into the underlying technology and design, these systems are easy to use and have been widely adopted). They can be used, therefore, to engage the public interactively with the EarthScope facilities, experiments, and discoveries, and continue the cycle of discussions, experiments, analysis and conclusions that typify scientific advancement. The EarthScope National Office (ESNO) is launching an effort to utilize social media to broaden its impact as a conduit between scientists, facilities, educators, and the public. The ESNO will use the opportunities that social media affords to offer high quality science content in a variety of formats that appeal to social media users of various age groups, including blogs (popular with users 18-29), Facebook and Twitter updates (popular with users ages 18-50), email updates (popular with older adults), and video clips (popular with all age groups). 
We will monitor the number of "fans" and "friends" on social media and networking pages in order to gauge the increase in the percentage of the user population visiting the site. We will also use existing tools available on social media sites to track the relationships between users who visit or "friend" the site to determine how knowledge of the site is transferred amongst various social, educational or geographic groups. Finally, we will use this information to iteratively improve the variety of content and media on the site to increase our user pool, improve EarthScope recognition, and provide appropriate and user-specific Earth science information, especially for time sensitive events of wide interest such as natural disasters.

  7. Association Among Periodontitis and the Use of Crack Cocaine and Other Illicit Drugs.

    PubMed

    Antoniazzi, Raquel P; Zanatta, Fabricio B; Rösing, Cassiano K; Feldens, Carlos Alberto

    2016-12-01

    Crack cocaine can alter functions related to the immune system and exert a negative influence on progression and severity of periodontitis. The aim of this study is to compare periodontal status between crack cocaine users and crack cocaine non-users and investigate the association between crack cocaine and periodontitis after adjustments for confounding variables. This cross-sectional study evaluated 106 individuals exposed to crack cocaine and 106 never exposed, matched for age, sex, and tobacco use. An examiner determined visible plaque index (VPI), marginal bleeding index, supragingival dental calculus, probing depth (PD), clinical attachment level (CAL), and bleeding on probing (BOP). Logistic regression was used to model associations between crack cocaine and periodontitis (at least three sites with CAL >4 mm and at least two sites with PD >3 mm, not in the same site or tooth). Prevalence of periodontitis among crack non-users and crack users was 20.8% and 43.4%, respectively. Crack users had greater VPI, BOP, PD ≥3 mm, and CAL ≥4 mm than crack non-users. Periodontitis was associated with age >24 years, schooling ≤8 years, smoking, moderate/heavy alcohol use, and plaque rate ≥41%. Crack users had an approximately three-fold greater chance (odds ratio: 3.44; 95% confidence interval: 1.51 to 7.86) of periodontitis than non-users. Occurrence of periodontitis, visible plaque, and gingival bleeding was significantly higher among crack users, and crack use was associated with occurrence of periodontitis.
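The reported odds ratio of 3.44 is adjusted for confounders; as a quick illustration of the arithmetic, a crude (unadjusted) odds ratio can be recovered approximately from the prevalences above. The counts below are reconstructed from the reported percentages, so the figures are approximate.

```python
# Counts reconstructed from the reported prevalences of periodontitis:
# 43.4% of 106 crack users and 20.8% of 106 non-users.
users_with = round(0.434 * 106)          # 46 users with periodontitis
users_without = 106 - users_with         # 60 without
nonusers_with = round(0.208 * 106)       # 22 non-users with periodontitis
nonusers_without = 106 - nonusers_with   # 84 without

# Crude odds ratio: odds of periodontitis in users / odds in non-users.
crude_or = (users_with / users_without) / (nonusers_with / nonusers_without)
# crude_or comes out near 2.9; the study's 3.44 is the odds ratio after
# adjustment for age, schooling, smoking, alcohol use, and plaque.
```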

  8. Overview of Privacy in Social Networking Sites (SNS)

    NASA Astrophysics Data System (ADS)

    Powale, Pallavi I.; Bhutkar, Ganesh D.

    2013-07-01

Social Networking Sites (SNS) have become an integral part of communication and the lifestyle of people in today's world. Because of the wide range of services offered by SNSs, mostly free of cost, these sites are attracting the attention of all possible Internet users. Most importantly, users from all age groups have become members of SNSs. Since many of the users are not aware of the data thefts associated with information sharing, they freely share their personal information with SNSs. Therefore, SNSs may be used for investigating users' character and social habits by familiar or even unknown persons and agencies. This commercial and social scenario has led to a number of privacy and security threats. Though all major issues in SNSs need to be addressed by SNS providers, the privacy of SNS users is the most crucial. Therefore, in this paper, we have focused our discussion on "privacy in SNSs". We discuss different ways of Personally Identifiable Information (PII) leakage from SNSs, information revelation to third-party domains without user consent, and privacy-related threats associated with such information sharing. We expect that this comprehensive overview of privacy in SNSs will help raise user awareness about sharing data and managing privacy with SNSs. It will also help SNS providers to rethink their privacy policies.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Witherspoon Editor, P.A.; Bodvarsson Editor, G.S.

The broad range of activities on radioactive waste isolation that are summarized in Table 1.1 provides a comprehensive picture of the operations that must be carried out in working with this problem. A comparison of these activities with those published in the two previous reviews shows the important progress that is being made in developing and applying the various technologies that have evolved over the past 20 years. There are two basic challenges in perfecting a system of radioactive waste isolation: choosing an appropriate geologic barrier and designing an effective engineered barrier. One of the most important developments that is evident in a large number of the reports in this review is the recognition that a URL provides an excellent facility for investigating and characterizing a rock mass. Moreover, a URL, once developed, provides a convenient facility for two or more countries to conduct joint investigations. This review describes a number of cooperative projects that have been organized in Europe to take advantage of this kind of a facility in conducting research underground. Another critical development is the design of the waste canister (and its accessory equipment) for the engineered barrier. This design problem has been given considerable attention in a number of countries for several years, and some impressive results are described and illustrated in this review. The role of the public as a stakeholder in radioactive waste isolation has not always been fully appreciated. Solutions to the technical problems in characterizing a specific site have generally been obtained without difficulty, but procedures in the past in some countries did not always keep the public and local officials informed of the results. It will be noted in the following chapters that this procedure has caused some problems, especially when approval for a major component in a project was needed. 
It has been learned that a better way to handle this problem is to keep all stakeholders fully informed of project plans and hold periodic meetings to brief the public, especially in the vicinity of the selected site. This procedure has now been widely adopted and represents one of the most important developments in the Third Worldwide Review.

  10. Interdisciplinary Matchmaking: Choosing Collaborators by Skill, Acquaintance and Trust

    NASA Astrophysics Data System (ADS)

    Hupa, Albert; Rzadca, Krzysztof; Wierzbicki, Adam; Datta, Anwitaman

    Social networks are commonly used to enhance recommender systems. Most of such systems recommend a single resource or a person. However, complex problems or projects usually require a team of experts that must work together on a solution. Team recommendation is much more challenging, mostly because of the complex interpersonal relations between members. This chapter presents fundamental concepts on how to score a team based on members' social context and their suitability for a particular project. We represent the social context of an individual as a three-dimensional social network (3DSN) composed of a knowledge dimension expressing skills, a trust dimension and an acquaintance dimension. Dimensions of a 3DSN are used to mathematically formalize the criteria for prediction of the team's performance. We use these criteria to formulate the team recommendation problem as a multi-criteria optimization problem. We demonstrate our approach on empirical data crawled from two web2.0 sites: onephoto.net and a social networking site. We construct 3DSNs and analyze properties of team's performance criteria.

  11. Using the WWW to Make YOHKOH SXT Images Available to the Public: The YOHKOH Public Outreach Project

    NASA Astrophysics Data System (ADS)

    Larson, M.; McKenzie, D.; Slater, T.; Acton, L.; Alexander, D.; Freeland, S.; Lemen, J.; Metcalf, T.

    1997-05-01

    The Yohkoh Public Outreach Project (YPOP) is funded by NASA as one of the Information Infrastructure Technology and Applications Cooperative Agreement Teams to create public access to high quality Yohkoh SXT data via the World Wide Web. These products are being made available to the scientific research community, K-12 schools, and informal education centers including planetaria, museums, and libraries. The project aims to utilize the intrinsic excitement of the SXT data, and in particular the SXT movies, to develop science learning tools and classroom activities. The WWW site at URL: http://www.space.lockheed.com/YPOP/ uses a movie theater theme to highlight available Yohkoh movies in a non-intimidating and entertaining format for non-scientists. The site features lesson plans, 'solar' activities, slide shows and, of course, a variety of movies about the Sun. Classroom activities are currently undergoing development with a team of scientists and K-12 teachers for distribution in late 1997. We will display the products currently online, which include a solar classroom with activities for teachers, background resources, and a virtual tour of our Sun.

  12. Launching a palliative care homepage: the Edmonton experience.

    PubMed

    Pereira, J; Macmillan, A; Bruera, E

    1997-11-01

The Internet, with its graphical subdivision, the World Wide Web (WWW), has become a powerful tool for the dissemination of information and for communication. This paper discusses the authors' experiences with creating, launching and maintaining an official publication on the Internet by the Edmonton Regional Palliative Care Program and the Division of Palliative Medicine, University of Alberta, Canada. It describes the content and format of the homepage and the process of publication. Over a six-month period there were 892 visits to the site and 84 separate items of correspondence to the site's editors. Of these correspondence items, 36 were requesting further information regarding clinical and other programme information. Sixty-nine of the 84 communications came from North America and Europe. The pattern of readership is briefly discussed as are some of the potential advantages and challenges when utilizing this electronic medium. To promote the dissemination of reliable information on the Internet, the authors encourage other palliative care groups and organizations to publish on the WWW. The URL is http://www.palliative.org (previously http://www.caritas.ab.ca/~palliate).

  13. Onco-Regulon: an integrated database and software suite for site specific targeting of transcription factors of cancer genes

    PubMed Central

    Tomar, Navneet; Mishra, Akhilesh; Mrinal, Nirotpal; Jayaram, B.

    2016-01-01

Transcription factors (TFs) bind at multiple sites in the genome and regulate expression of many genes. Regulating TF binding in a gene-specific manner remains a formidable challenge in drug discovery because the same binding motif may be present at multiple locations in the genome. Here, we present Onco-Regulon (http://www.scfbio-iitd.res.in/software/onco/NavSite/index.htm), an integrated database of regulatory motifs of cancer genes coupled with Unique Sequence-Predictor (USP), a software suite that identifies unique sequences for each of these regulatory DNA motifs at the specified position in the genome. USP works by extending a given DNA motif in the 5′→3′ direction, the 3′→5′ direction, or both, adding one nucleotide at each step, and calculating the frequency of each extended motif in the genome with the Frequency Counter program. This step is iterated until the frequency of the extended motif becomes unity in the genome. Thus, for each given motif, we get three possible unique sequences. The Closest Sequence Finder program predicts off-target drug binding in the genome. Inclusion of DNA-protein structural information further makes Onco-Regulon a highly informative repository for gene-specific drug development. We believe that Onco-Regulon will help researchers design drugs that bind to an exclusive site in the genome with, theoretically, no off-target effects. Database URL: http://www.scfbio-iitd.res.in/software/onco/NavSite/index.htm PMID:27515825
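The USP iteration can be sketched as follows. This is a toy genome string with naive counting, purely illustrative: the real tool works genome-wide with indexed search, and extends in the 5′→3′ direction, the 3′→5′ direction, or both.

```python
def count_occurrences(genome: str, motif: str) -> int:
    # Naive overlapping count of motif occurrences in the genome string.
    return sum(1 for i in range(len(genome) - len(motif) + 1)
               if genome.startswith(motif, i))

def extend_to_unique(genome: str, motif: str) -> str:
    """Extend a motif one downstream nucleotide at a time (5'->3' only,
    for brevity) until it occurs exactly once in the genome."""
    pos = genome.find(motif)  # anchor at the motif's first occurrence
    while count_occurrences(genome, motif) > 1:
        end = pos + len(motif)
        if end >= len(genome):
            break  # ran off the genome; cannot be made unique this way
        motif += genome[end]  # append the next downstream nucleotide
    return motif

# Toy example: "AT" occurs three times in this 12-nt "genome";
# two extension steps make it unique.
genome = "ATGCATTAATGG"
unique_motif = extend_to_unique(genome, "AT")  # -> "ATGC"
```

Running the same loop in the opposite direction (prepending upstream nucleotides) and in both directions at once yields the three candidate unique sequences the abstract describes.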

  14. GIS Services, Visualization Products, and Interoperability at the National Oceanic and Atmospheric Administration (NOAA) National Climatic Data Center (NCDC)

    NASA Astrophysics Data System (ADS)

    Baldwin, R.; Ansari, S.; Reid, G.; Lott, N.; Del Greco, S.

    2007-12-01

The main goal in developing and deploying Geographic Information System (GIS) services at NOAA's National Climatic Data Center (NCDC) is to provide users with simple access to data archives while integrating new and informative climate products. Several systems at NCDC provide a variety of climatic data in GIS formats and/or map viewers. The Online GIS Map Services provide users with data discovery options which flow into detailed product selection maps, which may be queried using standard "region finder" tools or gazetteer (geographical dictionary search) functions. Each tabbed selection offers steps to help users progress through the systems. A series of additional base map layers or data types have been added to provide companion information. New map services include: Severe Weather Data Inventory, Local Climatological Data, Divisional Data, Global Summary of the Day, and Normals/Extremes products. THREDDS Data Server technology is utilized to provide access to gridded multidimensional datasets such as Model, Satellite and Radar. This access allows users to download data as a gridded NetCDF file, which is readable by ArcGIS. In addition, users may subset the data for a specific geographic region, time period, height range or variable prior to download. The NCDC Weather Radar Toolkit (WRT) is a client tool which accesses Weather Surveillance Radar 1988 Doppler (WSR-88D) data locally or remotely from the NCDC archive, NOAA FTP server or any URL or THREDDS Data Server. The WRT Viewer provides tools for custom data overlays, Web Map Service backgrounds, animations and basic filtering. The export of images and movies is provided in multiple formats. The WRT Data Exporter allows for data export in both vector polygon (Shapefile, Well-Known Text) and raster (GeoTIFF, ESRI Grid, VTK, NetCDF, GrADS) formats. As more users become accustomed to GIS, questions of better, cheaper, faster access soon follow. 
Expanding use and availability can best be accomplished through standards which promote interoperability. Our GIS-related products provide Open Geospatial Consortium (OGC) compliant Web Map Services (WMS), Web Feature Services (WFS), Web Coverage Services (WCS), and Federal Geographic Data Committee (FGDC) metadata as a complement to the map viewers. KML/KMZ data files (soon to be compliant OGC specifications) also provide access.
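Subsetting by region, time, and variable, as described above, is typically expressed as query parameters on a subset-service request. The sketch below only assembles such a URL; the host, path, and parameter names are illustrative, modeled on THREDDS-style subset services rather than copied from NCDC's actual endpoints.

```python
from urllib.parse import urlencode

# Hypothetical THREDDS-style subset request: one variable, a lat/lon
# bounding box, and a time window, with the result returned as NetCDF.
base_url = "https://example.gov/thredds/ncss/grid/sample-model-dataset"
params = {
    "var": "temperature",
    "north": 40.0, "south": 35.0, "east": -75.0, "west": -85.0,
    "time_start": "2007-12-01T00:00:00Z",
    "time_end": "2007-12-02T00:00:00Z",
    "accept": "netcdf",
}
subset_url = base_url + "?" + urlencode(params)
```

Issuing such a request against a real subset service would return only the requested slice of the gridded dataset, which is what makes the downloaded NetCDF small enough to load directly into a GIS client.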

  15. 9 CFR 130.15 - User fees for veterinary diagnostic isolation and identification tests performed at NVSL...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false User fees for veterinary diagnostic isolation and identification tests performed at NVSL (excluding FADDL) or other authorized site. 130.15... AGRICULTURE USER FEES USER FEES § 130.15 User fees for veterinary diagnostic isolation and identification...

  16. 9 CFR 130.15 - User fees for veterinary diagnostic isolation and identification tests performed at NVSL...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false User fees for veterinary diagnostic isolation and identification tests performed at NVSL (excluding FADDL) or other authorized site. 130.15... AGRICULTURE USER FEES USER FEES § 130.15 User fees for veterinary diagnostic isolation and identification...

  17. A Privacy Preservation Model for Health-Related Social Networking Sites.

    PubMed

    Li, Jingquan

    2015-07-08

    The increasing use of social networking sites (SNS) in health care has resulted in a growing number of individuals posting personal health information online. These sites may disclose users' health information to many different individuals and organizations and mine it for a variety of commercial and research purposes, yet the revelation of personal health information to unauthorized individuals or entities brings a concomitant concern of greater risk for loss of privacy among users. Many users join multiple social networks for different purposes and enter personal and other specific information covering social, professional, and health domains into other websites. Integration of multiple online and real social networks makes the users vulnerable to unintentional and intentional security threats and misuse. This paper analyzes the privacy and security characteristics of leading health-related SNS. It presents a threat model and identifies the most important threats to users and SNS providers. Building on threat analysis and modeling, this paper presents a privacy preservation model that incorporates individual self-protection and privacy-by-design approaches and uses the model to develop principles and countermeasures to protect user privacy. This study paves the way for analysis and design of privacy-preserving mechanisms on health-related SNS.

  18. A Privacy Preservation Model for Health-Related Social Networking Sites

    PubMed Central

    2015-01-01

    The increasing use of social networking sites (SNS) in health care has resulted in a growing number of individuals posting personal health information online. These sites may disclose users' health information to many different individuals and organizations and mine it for a variety of commercial and research purposes, yet the revelation of personal health information to unauthorized individuals or entities brings a concomitant concern of greater risk for loss of privacy among users. Many users join multiple social networks for different purposes and enter personal and other specific information covering social, professional, and health domains into other websites. Integration of multiple online and real social networks makes the users vulnerable to unintentional and intentional security threats and misuse. This paper analyzes the privacy and security characteristics of leading health-related SNS. It presents a threat model and identifies the most important threats to users and SNS providers. Building on threat analysis and modeling, this paper presents a privacy preservation model that incorporates individual self-protection and privacy-by-design approaches and uses the model to develop principles and countermeasures to protect user privacy. This study paves the way for analysis and design of privacy-preserving mechanisms on health-related SNS. PMID:26155953

  19. Intelligent web image retrieval system

    NASA Astrophysics Data System (ADS)

    Hong, Sungyong; Lee, Chungwoo; Nah, Yunmook

    2001-07-01

Recently, web sites such as e-business sites and shopping mall sites deal with lots of image information. To find a specific image from these image sources, we usually use web search engines or image database engines, which rely on keyword-only retrievals or color-based retrievals with limited search capabilities. This paper presents an intelligent web image retrieval system. We propose the system architecture, the texture- and color-based image classification and indexing techniques, and representation schemes of user usage patterns. The query can be given by providing keywords, by selecting one or more sample texture patterns, by assigning color values within positional color blocks, or by combining some or all of these factors. The system keeps track of the user's preferences by generating user query logs and automatically adds more search information to subsequent user queries. To show the usefulness of the proposed system, some experimental results showing recall and precision are also presented.
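Color-based retrieval of the kind described can be sketched with quantized RGB histograms. This is a toy implementation under the assumption that images arrive as lists of RGB tuples; the paper's actual system also uses texture features and learned usage patterns.

```python
from collections import Counter

def color_histogram(pixels, bins=4):
    """Quantize each RGB channel into `bins` levels and return a
    normalized color histogram, so differently sized images compare."""
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    n = len(pixels)
    return {color: c / n for color, c in counts.items()}

def similarity(h1, h2):
    # Histogram intersection: 1.0 for identical color distributions.
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

# Toy query: a mostly red image matches another red image far better
# than a blue one, which is the basis for ranking retrieval results.
query = [(250, 10, 10)] * 90 + [(10, 10, 250)] * 10
red_candidate = [(240, 20, 5)] * 100
blue_candidate = [(10, 10, 250)] * 100
q = color_histogram(query)
```

A retrieval engine would precompute such histograms as the index and rank candidates by descending similarity to the query histogram.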

  20. Waste treatability guidance program. User's guide. Revision 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toth, C.

    1995-12-21

DOE sites across the country generate and manage radioactive, hazardous, mixed, and sanitary wastes. It is necessary for each site to find the technologies and associated capacities required to manage its waste. One role of the DOE HQ Office of Environmental Restoration and Waste Management is to facilitate the integration of the site-specific plans into coherent national plans. DOE has developed a standard methodology for defining and categorizing waste streams into treatability groups based on characteristic parameters that influence waste management technology needs. This Waste Treatability Guidance Program automates the Guidance Document for the categorization of waste information into treatability groups; this application provides a consistent implementation of the methodology across the National TRU Program. This User's Guide provides instructions on how to use the program, including installation instructions and program operation. This document satisfies the requirements of the Software Quality Assurance Plan.

  1. Controlling mechanisms over the internet

    NASA Astrophysics Data System (ADS)

    Lumia, Ronald

    1997-01-01

The internet, widely available throughout the world, can be used to control robots, machine tools, and other mechanisms. This paper will describe a low-cost virtual collaborative environment (VCE) which will connect users with distant equipment. The system is based on PC technology and incorporates off-line programming with on-line execution. A remote user programs the system graphically and simulates the motions and actions of the mechanism until satisfied with the functionality of the program. The program is then transferred from the remote site to the local site where the real equipment exists. At the local site, the simulation is run again to check the program from a safety standpoint. Then, the local user runs the program on the real equipment. During execution, a camera in the real workspace provides an image back to the remote user through a teleconferencing system. The system costs approximately 12,500 dollars and represents a low-cost alternative to the Sandia National Laboratories VCE.

  2. Atmospheric Radiation Measurement program climate research facility operations quarterly report January 1 - March 31, 2008.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisterson, D. L.

    2008-05-22

Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period January 1 - March 31, 2008, for the fixed sites. The AMF is being deployed to China and is not in operation this quarter. The second quarter comprises a total of 2,184 hours. The average as well as the individual site values exceeded our goal this quarter. The Site Access Request System is a web-based database used to track visitors to the fixed and mobile sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP site has a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. The TWP locale has the Manus, Nauru, and Darwin sites. FKB represents the AMF statistics for the Haselbach, Germany, past deployment in 2007. NIM represents the AMF statistics for the Niamey, Niger, Africa, past deployment in 2006. PYE represents just the AMF Archive statistics for the Point Reyes, California, past deployment in 2005. In addition, users who do not want to wait for data to be provided through the ACRF Archive can request a research account on the local site data system. 
The seven computers for the research accounts are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; and the DMF at PNNL. In addition, the ACRF serves as a data repository for a long-term Arctic atmospheric observatory in Eureka, Canada (80 degrees 05 minutes N, 86 degrees 43 minutes W) as part of the multiagency Study of Environmental Arctic Change (SEARCH) Program. NOAA began providing instruments for the site in 2005, and currently cloud radar data are available. The intent of the site is to monitor the important components of the Arctic atmosphere, including clouds, aerosols, atmospheric radiation, and local-scale atmospheric dynamics. Because of the similarity of ACRF NSA data streams and the important synergy that can be formed between a network of Arctic atmospheric observations, much of the SEARCH observatory data are archived in the ARM archive. Instruments will be added to the site over time. For more information, please visit http://www.db.arm.gov/data. The designation for the archived Eureka data is YEU and is now included in the ACRF user metrics. This quarterly report provides the cumulative numbers of visitors and user accounts by site for the period April 1, 2007 - March 31, 2008. Table 2 shows the summary of cumulative users for the period April 1, 2007 - March 31, 2008. For the second quarter of FY 2008, the overall number of users was nearly as high as the last reporting period, in which a new record high for number of users was established. This quarter, a new record high was established for the number of user days, particularly due to the large number of field campaign activities in conjunction with the AMF deployment in Germany, as well as major field campaigns at the NSA and SGP sites. 
This quarter, 37% of the Archive users are ARM science-funded principal investigators and 23% of all other facility users are either ARM science-funded principal investigators or ACRF infrastructure personnel. For reporting purposes, the three ACRF sites and the AMF operate 24 hours per day, 7 days per week, and 52 weeks per year. Time is reported in days instead of hours. If any lost work time is incurred by any employee, it is counted as a workday loss. Table 3 reports the consecutive days since the last recordable or reportable injury or incident causing damage to property, equipment, or vehicle for the period January 1 - March 31, 2008. There were no incidents this reporting period.
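
    The per-instrument data-availability metric described above (the ratio of data records received at the Archive to the number expected, averaged per site) can be sketched in a few lines. This is an illustrative sketch only; the instrument names and record counts below are invented, not taken from the report.

```python
# Hedged sketch of the ACRF data-availability metric: for each instrument,
# the ratio of data records received daily at the Archive to the number
# expected. Instrument names and record counts are invented for illustration.

def availability(received, expected):
    """Fraction of expected data records actually archived."""
    return received / expected if expected else 0.0

# invented daily (received, expected) record counts for one site
instruments = {
    "mfrsr": (1380, 1440),
    "skyrad": (1440, 1440),
    "ceil": (1296, 1440),
}

ratios = {name: availability(r, e) for name, (r, e) in instruments.items()}
site_average = sum(ratios.values()) / len(ratios)

print({k: round(v, 3) for k, v in ratios.items()})  # {'mfrsr': 0.958, 'skyrad': 1.0, 'ceil': 0.9}
print(round(site_average, 3))                       # 0.953
```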

  3. Manually Classifying User Search Queries on an Academic Library Web Site

    ERIC Educational Resources Information Center

    Chapman, Suzanne; Desai, Shevon; Hagedorn, Kat; Varnum, Ken; Mishra, Sonali; Piacentine, Julie

    2013-01-01

    The University of Michigan Library wanted to learn more about the kinds of searches its users were conducting through the "one search" search box on the Library Web site. Library staff conducted two investigations. A preliminary investigation in 2011 involved the manual review of the 100 most frequently occurring queries conducted…

  4. Examining Digital Literacy Practices on Social Network Sites

    ERIC Educational Resources Information Center

    Buck, Amber

    2012-01-01

    Young adults represent the most avid users of social network sites, and they are also the most concerned with their online identity management, according to the Pew Internet and American Life Project. These practices represent important literate activity today, as individuals who are writing online learn to negotiate interfaces, user agreements,…

  5. The Readability of Information Literacy Content on Academic Library Web Sites

    ERIC Educational Resources Information Center

    Lim, Adriene

    2010-01-01

    This article reports on a study addressing the readability of content on academic libraries' Web sites, specifically content intended to improve users' information literacy skills. Results call for recognition of readability as an evaluative component of text in order to better meet the needs of diverse user populations. (Contains 8 tables.)

  6. 75 FR 39950 - Proposed Collection; Comment Request; Cancer Trials Support Unit (CTSU) Public Use Forms and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... Request; Cancer Trials Support Unit (CTSU) Public Use Forms and Customer Satisfaction Surveys (NCI... of customer satisfaction for clinical site staff using the CTSU Help Desk and the CTSU web site. An ongoing user satisfaction survey is in place for the Oncology Patient Enrollment Network (OPEN). User...

  7. Catalytic site identification—a web server to identify catalytic site structural matches throughout PDB

    PubMed Central

    Kirshner, Daniel A.; Nilmeier, Jerome P.; Lightstone, Felice C.

    2013-01-01

    The catalytic site identification web server provides the innovative capability to find structural matches to a user-specified catalytic site among all Protein Data Bank proteins rapidly (in less than a minute). The server also can examine a user-specified protein structure or model to identify structural matches to a library of catalytic sites. Finally, the server provides a database of pre-calculated matches between all Protein Data Bank proteins and the library of catalytic sites. The database has been used to derive a set of hypothesized novel enzymatic function annotations. In all cases, matches and putative binding sites (protein structure and surfaces) can be visualized interactively online. The website can be accessed at http://catsid.llnl.gov. PMID:23680785

  8. Catalytic site identification--a web server to identify catalytic site structural matches throughout PDB.

    PubMed

    Kirshner, Daniel A; Nilmeier, Jerome P; Lightstone, Felice C

    2013-07-01

    The catalytic site identification web server provides the innovative capability to find structural matches to a user-specified catalytic site among all Protein Data Bank proteins rapidly (in less than a minute). The server also can examine a user-specified protein structure or model to identify structural matches to a library of catalytic sites. Finally, the server provides a database of pre-calculated matches between all Protein Data Bank proteins and the library of catalytic sites. The database has been used to derive a set of hypothesized novel enzymatic function annotations. In all cases, matches and putative binding sites (protein structure and surfaces) can be visualized interactively online. The website can be accessed at http://catsid.llnl.gov.

  9. Galaxy Zoo: An Experiment in Public Science Participation

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Lintott, C. J.; Schawinski, K.; Thomas, D.; Nichol, R. C.; Andreescu, D.; Bamford, S.; Land, K. R.; Murray, P.; Slosar, A.; Szalay, A. S.; Vandenberg, J.; Galaxy Zoo Team

    2007-12-01

    An interesting question in modern astrophysics research is the relationship between a galaxy's morphology (appearance) and its formation and evolutionary history. Research into this question is complicated by the fact that to get a study sample, researchers must first assign a shape to a large number of galaxies. Classifying a galaxy by shape is nearly impossible for a computer, but easy for a human - however, looking at one million galaxies, one at a time, would take an enormous amount of time. To create such a research sample, we turned to citizen science. We created a web site called Galaxy Zoo (www.galaxyzoo.org) that invites the public to classify the galaxies. New members see a short tutorial and take a short skill test where they classify galaxies of known types. Once they pass the test, they begin to work with the entire sample. The site's interface shows the user an image of a single galaxy from the Sloan Digital Sky Survey. The user clicks a button to classify it. Each classification is stored in a database, associated with the galaxy that it describes. The site has become enormously popular with amateur astronomers, teachers, and others interested in astronomy. So far, more than 110,000 users have joined. We have started a forum where users share images of their favorite galaxies, ask science questions of each other and the "zookeepers," and share classification advice. In a separate poster, we will share science results from the site's first six months of operation. In this poster, we will describe the site as an experiment in public science outreach. We will share user feedback, discuss our plans to study the user community more systematically, and share advice on how to work with citizen science projects to the mutual benefit of both professional and citizen scientists.

  10. American Memory User Evaluation, 1991-1993.

    ERIC Educational Resources Information Center

    Veccia, Susan; And Others

    This report summarizes the American Memory User Evaluation conducted during 1991-1993 in over 40 locations around the United States. The findings are based on 1800 user questionnaires, 120 user interviews, and more than 40 site visits by Library staff. American Memory describes the concept of providing electronic versions of selected Library of…

  11. Atmospheric Radiation Measurement Program Climate Research Facility Operations Quarterly Report. October 1 - December 31, 2009.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. L. Sisterson

    2010-01-12

    Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY 2010 for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208); for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208); and for the Tropical Western Pacific (TWP) locale is 1,876.8 hours (0.85 x 2,208). The ARM Mobile Facility (AMF) deployment in Graciosa Island, the Azores, Portugal, continues; its OPSMAX time this quarter is 2,097.60 hours (0.95 x 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are the result of downtime (scheduled or unplanned) of the individual instruments.
Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. The Site Access Request System is a web-based database used to track visitors to the fixed and mobile sites, all of which have facilities that can be visited. The NSA locale has the Barrow and Atqasuk sites. The SGP locale has historically had a central facility, 23 extended facilities, 4 boundary facilities, and 3 intermediate facilities. Beginning this quarter, the SGP began a transition to a smaller footprint (150 km x 150 km) by rearranging the original and new instrumentation made available through the American Recovery and Reinvestment Act (ARRA). The central facility and 4 extended facilities will remain, but there will be up to 16 new surface characterization facilities, 4 radar facilities, and 3 profiler facilities sited in the smaller domain. This new configuration will provide observations at scales more appropriate to current and future climate models. The TWP locale has the Manus, Nauru, and Darwin sites. These sites will also have expanded measurement capabilities with the addition of new instrumentation made available through ARRA funds. It is anticipated that the new instrumentation at all the fixed sites will be in place within the next 12 months. The AMF continues its 20-month deployment in Graciosa Island, Azores, Portugal, that started May 1, 2009. The AMF will also have additional observational capabilities within the next 12 months. Users can participate in field experiments at the sites and mobile facility, or they can participate remotely. Therefore, a variety of mechanisms are provided to users to access site information. Users who have immediate (real-time) needs for data access can request a research account on the local site data systems.
This access is particularly useful to users for quick decisions in executing time-dependent activities associated with field campaigns at the fixed sites and mobile facility locations. The eight computers for the research accounts are located at the Barrow and Atqasuk sites; the SGP central facility; the TWP Manus, Nauru, and Darwin sites; the AMF; and the DMF at PNNL. However, users are warned that the data provided at the time of collection have not been fully screened for quality and therefore are not considered to be official ACRF data. Hence, these accounts are considered to be part of the facility activities associated with field campaigns, and users are tracked. In addition, users who visit sites can connect their computer or instrument to an ACRF site data system network, which requires an on-site device account. Remote (off-site) users can also have remote access to any ACRF instrument or computer system at any ACRF site, which requires an off-site device account. These accounts are also managed and tracked.
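
    The DOE time-based operating metrics defined in this record (OPSMAX as the uptime-goal fraction of total quarter hours, and VARIANCE = 1 - ACTUAL/OPSMAX) are easy to verify directly. In the sketch below, the goal fractions and quarter length match the report, but the ACTUAL operating hours are invented for illustration.

```python
# Sketch of the DOE time-based operating metrics:
#   OPSMAX   = uptime-goal fraction x hours in the quarter
#   VARIANCE = 1 - (ACTUAL / OPSMAX), i.e. unplanned downtime fraction.
# Goal fractions and quarter length match the report; ACTUAL hours are invented.

QUARTER_HOURS = 2208  # 92 days x 24 hours

def opsmax(goal_fraction, total=QUARTER_HOURS):
    return goal_fraction * total

def variance(actual, opsmax_hours):
    """Unplanned downtime as a fraction of the uptime goal."""
    return 1 - (actual / opsmax_hours)

nsa_opsmax = opsmax(0.90)  # 1,987.20 hours, as reported for NSA
sgp_opsmax = opsmax(0.95)  # 2,097.60 hours, as reported for SGP and the AMF

print(round(nsa_opsmax, 2))                    # 1987.2
print(round(variance(1900.0, nsa_opsmax), 4))  # 0.0439 (with invented ACTUAL hours)
```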

  12. When and why do people post questions about health and illness on Web 2.0-based Q&A sites in Japan.

    PubMed

    Nakayama, Kazuhiro; Nishio, Arisa; Yokoyama, Yukari; Setoyama, Yoko; Togari, Taisuke; Yonekura, Yuki

    2009-01-01

    Web 2.0-based Q&A sites such as Yahoo! Answers and OKWave are the fastest-growing sites in Japan. Such sites exploit user-generated content and information-sharing methods and have established point systems and user ratings to reward participation. We analyzed the questions and answers concerning health and illness posted on these sites. We found that the people who posted questions desired to obtain information related to their health problems from various sources, and to seek validation of this information not only by experts but also by people who have undergone similar experiences.

  13. User Data Package for Compressed Natural Gas (CNG) Vehicles for Navy Applications

    DTIC Science & Technology

    1991-04-01

    already available). GENERAL CONSIDERATIONS The advantages and disadvantages for implementing a CNG-fueled vehicle fleet at a specific site vary. However...at the user's site, if a guaranteed minimum quantity of CNG will be purchased annually by the fleet operator. Utilities are also establishing special...at low pressure and compressed on-site, several additional charges must be added to the cost charged by the natural gas supplier (see Table 1). The

  14. Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS), Version 5.0, Revision 2.0 (User’s Guide)

    DTIC Science & Technology

    2012-05-03

    output (I/O) system. The framework provides tools for common modeling functions, as well as regridding, data decomposition, and communication on...Within this script, the user must specify both the site (DSRC or local) and the platform (DAVINCI, EINSTEIN, or local machine) on which COAMPS is...being run. For example: site=navy_dsrc (for DSRC usage) site=nrlssc (for local NRL-SSC usage) platform=davinci or einstein (for DSRC usage

  15. Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) Version 5.0, Rev. 2.0 (User’s Guide)

    DTIC Science & Technology

    2012-05-03

    output (I/O) system. The framework provides tools for common modeling functions, as well as regridding, data decomposition, and communication on...Within this script, the user must specify both the site (DSRC or local) and the platform (DAVINCI, EINSTEIN, or local machine) on which COAMPS is...being run. For example: site=navy_dsrc (for DSRC usage) site=nrlssc (for local NRL-SSC usage) platform=davinci or einstein (for DSRC usage

  16. Earthquake-induced ground failures in Italy from a reviewed database

    NASA Astrophysics Data System (ADS)

    Martino, S.; Prestininzi, A.; Romeo, R. W.

    2013-05-01

    A database (Italian acronym CEDIT) of earthquake-induced ground failures in Italy is presented, and the related content is analysed. The catalogue collects data regarding landslides, liquefaction, ground cracks, surface faulting and ground-level changes triggered by earthquakes of Mercalli intensity 8 or greater that occurred in the last millennium in Italy. As of January 2013, the CEDIT database has been available online for public use (URL: http://www.ceri.uniroma1.it/cn/index.do?id=230&page=55) and is presently hosted by the website of the Research Centre for Geological Risks (CERI) of the "Sapienza" University of Rome. Summary statistics of the database content indicate that 14% of the Italian municipalities have experienced at least one earthquake-induced ground failure and that landslides are the most common ground effects (approximately 45%), followed by ground cracks (32%) and liquefaction (18%). The relationships between ground effects and earthquake parameters such as seismic source energy (earthquake magnitude and epicentral intensity), local conditions (site intensity) and source-to-site distances are also analysed. The analysis indicates that liquefaction, surface faulting and ground-level changes are much more dependent on the earthquake source energy (i.e. magnitude) than landslides and ground cracks. In contrast, the latter effects are triggered at lower site intensities and greater epicentral distances than the other environmental effects.

  17. Hanford Environmental Information System (HEIS) Operator's Manual. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreck, R.I.

    1991-10-01

    The Hanford Environmental Information System (HEIS) is a consolidated set of automated resources that effectively manage the data gathered during environmental monitoring and restoration of the Hanford Site. The HEIS includes an integrated database that provides consistent and current data to all users and promotes sharing of data by the entire user community. This manual describes the facilities available to the operational user who is responsible for data entry, processing, scheduling, reporting, and quality assurance. A companion manual, the HEIS User's Manual, describes the facilities available to the scientist, engineer, or manager who uses the system for environmental monitoring, assessment, and restoration planning; and to the regulator who is responsible for reviewing Hanford Site operations against regulatory requirements and guidelines.

  18. 9 CFR 130.17 - User fees for other veterinary diagnostic laboratory tests performed at NVSL (excluding FADDL) or...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false User fees for other veterinary... FEES USER FEES § 130.17 User fees for other veterinary diagnostic laboratory tests performed at NVSL (excluding FADDL) or at authorized sites. (a) User fees for veterinary diagnostics tests performed at the...

  19. 9 CFR 130.17 - User fees for other veterinary diagnostic laboratory tests performed at NVSL (excluding FADDL) or...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false User fees for other veterinary... FEES USER FEES § 130.17 User fees for other veterinary diagnostic laboratory tests performed at NVSL (excluding FADDL) or at authorized sites. (a) User fees for veterinary diagnostics tests performed at the...

  20. One-Time URL: A Proximity Security Mechanism between Internet of Things and Mobile Devices.

    PubMed

    Solano, Antonio; Dormido, Raquel; Duro, Natividad; González, Víctor

    2016-10-13

    The aim of this paper is to determine the physical proximity of connected things when they are accessed from a smartphone. Links between connected things and mobile communication devices are temporarily created by means of dynamic URLs (uniform resource locators) which may be easily discovered with pervasive short-range radio frequency technologies available on smartphones. In addition, a multi cross domain silent logging mechanism to allow people to interact with their surrounding connected things from their mobile communication devices is presented. The proposed mechanisms are based in web standards technologies, evolving our social network of Internet of Things towards the so-called Web of Things.
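
    The abstract does not spell out the paper's implementation, but the core idea of a dynamic, single-use URL (a random token bound to one thing, valid briefly and invalidated on first access) can be sketched as follows. Everything here is hypothetical: the class, the base URL, the 30-second lifetime, and the thing identifiers are invented for illustration.

```python
# Illustrative sketch (not the paper's implementation) of a dynamic, one-time
# URL: a random token bound to a connected thing and an expiry time, and
# invalidated the first time it is redeemed. All names are hypothetical.
import secrets
import time

class OneTimeUrlIssuer:
    def __init__(self, base="https://things.example.org", ttl=30):
        self.base = base
        self.ttl = ttl      # seconds the link stays valid (invented value)
        self.tokens = {}    # token -> (thing_id, expiry timestamp)

    def issue(self, thing_id):
        """Mint a fresh unguessable URL for one connected thing."""
        token = secrets.token_urlsafe(16)
        self.tokens[token] = (thing_id, time.time() + self.ttl)
        return f"{self.base}/{thing_id}/{token}"

    def redeem(self, token):
        """Valid exactly once, and only before expiry."""
        thing_id, expiry = self.tokens.pop(token, (None, 0))
        return thing_id if time.time() < expiry else None

issuer = OneTimeUrlIssuer()
url = issuer.issue("lamp-42")
token = url.rsplit("/", 1)[1]
print(issuer.redeem(token))   # lamp-42 on first use
print(issuer.redeem(token))   # None: the URL is already spent
```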

  1. USGS Abandoned Mine Lands Research Presented at the NAAMLP Meeting in Billings, Mont., Sept. 25, 2006

    USGS Publications Warehouse

    Johnson, Kate; Church, Stan

    2006-01-01

    The following talk was an invited presentation given at the National Association of Abandoned Mine Lands Programs meeting in Billings, Montana on Sept. 25, 2006. The objective of the talk was to outline the scope of the U.S. Geological Survey research, past, present and future, in the area of abandoned mine research. Two large Professional Papers have come out of our AML studies: Nimick, D.A., Church, S.E., and Finger, S.E., eds., 2004, Integrated investigations of environmental effects of historical mining in the Basin and Boulder mining districts, Boulder River watershed, Jefferson County, Montana: U.S. Geological Survey Professional Paper 1652, 524 p., 2 plates, 1 DVD, URL: http://pubs.er.usgs.gov/usgspubs/pp/pp1652 Church, S.E., von Guerard, Paul, and Finger, S.E., eds., 2006, Integrated Investigations of Environmental Effects of Historical Mining in the Animas River Watershed, San Juan County, Colorado: U.S. Geological Survey Professional Paper 1651, 1,096 p., 6 plates, 1 DVD (in press). Additional publications and links can be found on the USGS AML website at URL: http://amli.usgs.gov/ or are accessible from the USGS Mineral Resource Program website at URL: http://minerals.usgs.gov/.

  2. What users want in e-commerce design: effects of age, education and income.

    PubMed

    Lightner, Nancy J

    2003-01-15

    Preferences for certain characteristics of an online shopping experience may be related to demographic data. This paper discusses the characteristics of that experience, demographic data and preferences by demographic group. The results of an online survey of 488 individuals in the United States indicate that respondents are generally satisfied with their online shopping experiences, with security, information quality and information quantity ranking first in importance overall. The sensory impact of a site ranked last overall of the seven characteristics measured. Preferences for these characteristics in e-commerce sites were differentiated by age, education and income. The sensory impact of sites became less important as respondents increased in age, income or education. As the income of respondents increased, the importance of the reputation of the vendor rose. Web site designers may incorporate these findings into the design of e-commerce sites in an attempt to increase the shopping satisfaction of their users. Results from the customer relationship management portion of the survey suggest that current push technologies and site personalization are not an effective means of achieving user satisfaction.

  3. PPCPS IN THE ENVIRONMENT: FUTURE RESEARCH ...

    EPA Pesticide Factsheets

    Pharmaceuticals and personal care products (PPCPs) are an extraordinarily diverse group of chemicals used in veterinary medicine, agricultural practice, and human health and cosmetic care. The various sources and origins of PPCPs as pollutants in the environment are depicted in an illustration (available: http://www.epa.gov/nerlesd1/chemistry/pharma/images/drawing.pdf; note: all the URLs cited in the text are from the web site Daughton/EPA 2003a). PPCPs are ubiquitous pollutants, owing their origins in the environment to their worldwide, universal, frequent, and highly dispersed but cumulative usage by multitudes of individuals (and domestic animals) and from other uses such as pest control (e.g., see: http://www.epa.gov/nerlesd1/chemistry/pharma/images/double-drugs.pdf). Therapeutic drugs in current use comprise over 3,000 distinct bioactive chemical entities formulated (using a wide array of so-called inert

  4. The Giardia genome project database.

    PubMed

    McArthur, A G; Morrison, H G; Nixon, J E; Passamaneck, N Q; Kim, U; Hinkle, G; Crocker, M K; Holder, M E; Farr, R; Reich, C I; Olsen, G E; Aley, S B; Adam, R D; Gillin, F D; Sogin, M L

    2000-08-15

    The Giardia genome project database provides an online resource for Giardia lamblia (WB strain, clone C6) genome sequence information. The database includes edited single-pass reads, the results of BLASTX searches, and details of progress towards sequencing the entire 12 million-bp Giardia genome. Pre-sorted BLASTX results can be retrieved based on keyword searches and BLAST searches of the high throughput Giardia data can be initiated from the web site or through NCBI. Descriptions of the genomic DNA libraries, project protocols and summary statistics are also available. Although the Giardia genome project is ongoing, new sequences are made available on a bi-monthly basis to ensure that researchers have access to information that may assist them in the search for genes and their biological function. The current URL of the Giardia genome project database is www.mbl.edu/Giardia.

  5. Orbitofrontal and caudate volumes in cannabis users: a multi-site mega-analysis comparing dependent versus non-dependent users.

    PubMed

    Chye, Yann; Solowij, Nadia; Suo, Chao; Batalla, Albert; Cousijn, Janna; Goudriaan, Anna E; Martin-Santos, Rocio; Whittle, Sarah; Lorenzetti, Valentina; Yücel, Murat

    2017-07-01

    Cannabis (CB) use and dependence are associated with regionally specific alterations to brain circuitry and substantial psychosocial impairment. The objective of this study was to investigate the association between CB use and dependence, and the volumes of brain regions critically involved in goal-directed learning and behaviour-the orbitofrontal cortex (OFC) and caudate. In the largest multi-site structural imaging study of CB users vs healthy controls (HC), 140 CB users and 121 HC were recruited from four research sites. Group differences in OFC and caudate volumes were investigated between HC and CB users and between 70 dependent (CB-dep) and 50 non-dependent (CB-nondep) users. The relationship between quantity of CB use and age of onset of use and caudate and OFC volumes was explored. CB users (consisting of CB-dep and CB-nondep) did not significantly differ from HC in OFC or caudate volume. CB-dep compared to CB-nondep users exhibited significantly smaller volume in the medial and the lateral OFC. Lateral OFC volume was particularly smaller in CB-dep females, and reduced volume in the CB-dep group was associated with higher monthly cannabis dosage. Smaller medial OFC volume may be driven by CB dependence-related mechanisms, while smaller lateral OFC volume may be due to ongoing exposure to cannabinoid compounds. The results highlight a distinction between cannabis use and dependence and warrant examination of gender-specific effects in studies of CB dependence.

  6. Public storage for the Open Science Grid

    NASA Astrophysics Data System (ADS)

    Levshina, T.; Guru, A.

    2014-06-01

    The Open Science Grid infrastructure doesn't provide efficient means to manage public storage offered by participating sites. A Virtual Organization that relies on opportunistic storage has difficulties finding appropriate storage, verifying its availability, and monitoring its utilization. The involvement of the production manager, site administrators and VO support personnel is required to allocate or rescind storage space. One of the main requirements for Public Storage implementation is that it should use SRM or GridFTP protocols to access the Storage Elements provided by the OSG Sites and not put any additional burden on sites. By policy, no new services related to Public Storage can be installed and run on OSG sites. Opportunistic users also have difficulties in accessing the OSG Storage Elements during the execution of jobs. A typical user's data management workflow includes pre-staging common data on sites before a job's execution, then storing the output data produced by a job on a worker node for subsequent download to the user's local institution. When the amount of data is significant, the only means to temporarily store the data is to upload it to one of the Storage Elements. In order to do that, a user's job should be aware of the storage location, availability, and free space. After a successful data upload, users must somehow keep track of the data's location for future access. In this presentation we propose solutions for storage management and data handling issues in the OSG. We are investigating the feasibility of using the integrated Rule-Oriented Data System developed at RENCI as a front-end service to the OSG SEs. The current architecture, state of deployment and performance test results will be discussed. We will also provide examples of current usage of the system by beta-users.
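
    The bookkeeping step described above (after a job uploads its output to a Storage Element, the user must record where the data lives for later retrieval) can be sketched as a minimal client-side catalog. This is a toy illustration, not the OSG's or iRODS's mechanism; the logical names and the SE URL are invented.

```python
# Hedged sketch of the data-tracking step from the workflow above: record
# which Storage Element holds each job output so it can be located later.
# Logical file names and SE URLs are invented for illustration.

class StorageCatalog:
    """Minimal client-side record of which SE holds each output file."""

    def __init__(self):
        self.entries = {}   # logical name -> list of SE URLs (replicas)

    def register(self, logical_name, se_url):
        """Record one replica location after a successful upload."""
        self.entries.setdefault(logical_name, []).append(se_url)

    def locate(self, logical_name):
        """Return all known replica URLs (empty list if untracked)."""
        return self.entries.get(logical_name, [])

catalog = StorageCatalog()
catalog.register("job123/output.root",
                 "gsiftp://se.example.edu/osg/vo/job123/output.root")
print(catalog.locate("job123/output.root")[0])
```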

  7. CicerTransDB 1.0: a resource for expression and functional study of chickpea transcription factors.

    PubMed

    Gayali, Saurabh; Acharya, Shankar; Lande, Nilesh Vikram; Pandey, Aarti; Chakraborty, Subhra; Chakraborty, Niranjan

    2016-07-29

    Transcription factor (TF) databases are a major resource for systematic studies of TFs in specific species as well as related family members. Even though there are several publicly available multi-species databases, the information on the amount and diversity of TFs within individual species is fragmented, especially for newly sequenced genomes of non-model species of agricultural significance. We constructed CicerTransDB (Cicer Transcription Factor Database), the first database of its kind, which provides a centralized, putatively complete list of TFs in a food legume, chickpea. CicerTransDB, available at www.cicertransdb.esy.es, is based on chickpea (Cicer arietinum L.) annotation v 1.0. The database is an outcome of a genome-wide domain study and manual classification of TF families. The database provides not only gene information but also gene ontology, domain and motif architecture. CicerTransDB v 1.0 comprises information on 1124 chickpea genes and enables the user not only to search, browse and download sequences but also to retrieve sequence features. CicerTransDB also provides several single-click interfaces connecting to various other databases to ease further analysis. Several web APIs integrated in the database allow end users direct access to the data. A critical comparison of CicerTransDB with PlantTFDB (Plant Transcription Factor Database) revealed 68 novel TFs in the chickpea genome, hitherto unexplored. Database URL: http://www.cicertransdb.esy.es.

  8. DisGeNET: a discovery platform for the dynamical exploration of human diseases and their genes.

    PubMed

    Piñero, Janet; Queralt-Rosinach, Núria; Bravo, Àlex; Deu-Pons, Jordi; Bauer-Mehren, Anna; Baron, Martin; Sanz, Ferran; Furlong, Laura I

    2015-01-01

    DisGeNET is a comprehensive discovery platform designed to address a variety of questions concerning the genetic underpinning of human diseases. DisGeNET contains over 380,000 associations between >16,000 genes and 13,000 diseases, making it one of the largest repositories of its kind currently available. DisGeNET integrates expert-curated databases with text-mined data, covers information on Mendelian and complex diseases, and includes data from animal disease models. It features a score based on the supporting evidence to prioritize gene-disease associations. It is an open access resource available through a web interface, a Cytoscape plugin and as a Semantic Web resource. The web interface supports user-friendly data exploration and navigation. DisGeNET data can also be analysed via the DisGeNET Cytoscape plugin, and enriched with the annotations of other plugins of this popular network analysis software suite. Finally, the information contained in DisGeNET can be expanded and complemented using Semantic Web technologies and linked to a variety of resources already present in the Linked Data cloud. Hence, DisGeNET offers one of the most comprehensive collections of human gene-disease associations and a valuable set of tools for investigating the molecular mechanisms underlying diseases of genetic origin, designed to fulfill the needs of different user profiles, including bioinformaticians, biologists and health-care practitioners. Database URL: http://www.disgenet.org/ © The Author(s) 2015. Published by Oxford University Press.

  9. ToxReporter: viewing the genome through the eyes of a toxicologist.

    PubMed

    Gosink, Mark

    2016-01-01

    One of the many roles of a toxicologist is to determine whether an observed adverse event (AE) is related to a previously unrecognized function of a given gene/protein. Towards that end, he or she will search a variety of public and proprietary databases for information linking that protein to the observed AE. However, these databases tend to present all available information about a protein, which can be overwhelming and limits the ability to find information about the specific toxicity being investigated. ToxReporter compiles information from a broad selection of resources and limits display of the information to user-selected areas of interest. ToxReporter is a Perl-based web application backed by a MySQL database that streamlines this process by categorizing information derived from the public and proprietary domains into predefined safety categories according to a customizable lexicon. Users can view gene information that is 'red-flagged' according to the safety issue under investigation. ToxReporter also uses a scoring system based on relative counts of the red flags to rank all genes by the amount of information pertaining to each safety issue and to display their scored ranking as an easily interpretable 'Tox-At-A-Glance' chart. Although ToxReporter was originally developed to display safety information, its flexible design could easily be adapted to display disease information as well. Database URL: ToxReporter is freely available at https://github.com/mgosink/ToxReporter. © The Author(s) 2016. Published by Oxford University Press.
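    The red-flag scoring idea can be sketched as counting category-tagged flags per gene and ranking by count. The gene names and safety categories below are hypothetical, not ToxReporter's actual lexicon.

```python
from collections import Counter

# Invented (gene, safety-category) red-flag records.
red_flags = [
    ("GENE_X", "cardiotoxicity"),
    ("GENE_X", "cardiotoxicity"),
    ("GENE_Y", "cardiotoxicity"),
    ("GENE_X", "hepatotoxicity"),
]

def rank_genes(flags, category):
    """Rank genes by the number of red flags within one safety category."""
    counts = Counter(gene for gene, cat in flags if cat == category)
    return counts.most_common()

print(rank_genes(red_flags, "cardiotoxicity"))
```

    A chart like 'Tox-At-A-Glance' is then just this ranking computed once per safety category.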

  10. R3D-2-MSA: the RNA 3D structure-to-multiple sequence alignment server

    PubMed Central

    Cannone, Jamie J.; Sweeney, Blake A.; Petrov, Anton I.; Gutell, Robin R.; Zirbel, Craig L.; Leontis, Neocles

    2015-01-01

    The RNA 3D Structure-to-Multiple Sequence Alignment Server (R3D-2-MSA) is a new web service that seamlessly links RNA three-dimensional (3D) structures to high-quality RNA multiple sequence alignments (MSAs) from diverse biological sources. In this first release, R3D-2-MSA provides manual and programmatic access to curated, representative ribosomal RNA sequence alignments from bacterial, archaeal, eukaryal and organellar ribosomes, using nucleotide numbers from representative atomic-resolution 3D structures. A web-based front end is available for manual entry and an Application Program Interface for programmatic access. Users can specify up to five ranges of nucleotides and 50 nucleotide positions per range. The R3D-2-MSA server maps these ranges to the appropriate columns of the corresponding MSA and returns the contents of the columns, either for display in a web browser or in JSON format for subsequent programmatic use. The browser output page provides a 3D interactive display of the query, a full list of sequence variants with taxonomic information and a statistical summary of distinct sequence variants found. The output can be filtered and sorted in the browser. Previous user queries can be viewed at any time by resubmitting the output URL, which encodes the search and re-generates the results. The service is freely available with no login requirement at http://rna.bgsu.edu/r3d-2-msa. PMID:26048960
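    The server's core operation, mapping queried positions to alignment columns and summarizing the distinct variants found there, can be sketched on a toy alignment. The sequences below are invented; the real service queries curated rRNA MSAs.

```python
from collections import Counter

# Toy multiple sequence alignment (sequence ID -> aligned RNA string).
alignment = {
    "seq1": "AUGC",
    "seq2": "AUGU",
    "seq3": "AAGC",
}

def column_variants(msa, columns):
    """Count distinct sequence variants over the selected 0-based columns."""
    variants = ("".join(seq[c] for c in columns) for seq in msa.values())
    return Counter(variants)

print(column_variants(alignment, [0, 1]))  # 'AU' occurs twice, 'AA' once
```

    The statistical summary in the browser output corresponds to these per-variant counts, optionally annotated with taxonomy.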

  11. iSyTE 2.0: a database for expression-based gene discovery in the eye

    PubMed Central

    Kakrana, Atul; Yang, Andrian; Anand, Deepti; Djordjevic, Djordje; Ramachandruni, Deepti; Singh, Abhyudai; Huang, Hongzhan

    2018-01-01

    Abstract Although successful in identifying new cataract-linked genes, the previous version of the database iSyTE (integrated Systems Tool for Eye gene discovery) was based on expression information on just three mouse lens stages and was functionally limited to visualization by only UCSC-Genome Browser tracks. To increase its efficacy, here we provide an enhanced iSyTE version 2.0 (URL: http://research.bioinformatics.udel.edu/iSyTE) based on well-curated, comprehensive genome-level lens expression data as a one-stop portal for the effective visualization and analysis of candidate genes in lens development and disease. iSyTE 2.0 includes all publicly available lens Affymetrix and Illumina microarray datasets representing a broad range of embryonic and postnatal stages from wild-type and specific gene-perturbation mouse mutants with eye defects. Further, we developed a new user-friendly web interface for direct access and cogent visualization of the curated expression data, which supports convenient searches and a range of downstream analyses. The utility of these new iSyTE 2.0 features is illustrated through examples of established genes associated with lens development and pathobiology, which serve as tutorials for its application by the end-user. iSyTE 2.0 will facilitate the prioritization of eye development and disease-linked candidate genes in studies involving transcriptomics or next-generation sequencing data, linkage analysis and GWAS approaches. PMID:29036527
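    iSyTE-style candidate prioritization rests on comparing lens expression against a whole-embryo-body reference. A minimal sketch of that enrichment ranking, with invented expression values:

```python
def lens_enrichment(lens_expr, body_expr):
    """Fold enrichment of lens expression over the reference expression."""
    return {g: lens_expr[g] / body_expr[g] for g in lens_expr}

# Invented values: a lens-enriched gene (Pax6) vs. a housekeeping gene (Actb).
lens = {"Pax6": 90.0, "Actb": 120.0}
body = {"Pax6": 10.0, "Actb": 110.0}

ranked = sorted(lens_enrichment(lens, body).items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # Pax6 ranks above the highly but uniformly expressed Actb
```

    Ranking by enrichment rather than raw expression is what pushes tissue-specific candidates above ubiquitously expressed genes.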

  12. Allie: a database and a search service of abbreviations and long forms.

    PubMed

    Yamamoto, Yasunori; Yamaguchi, Atsuko; Bono, Hidemasa; Takagi, Toshihisa

    2011-01-01

    Many abbreviations are used in the literature, especially in the life sciences, and polysemous abbreviations appear frequently, making it difficult to read and understand scientific papers that are outside of a reader's expertise. Thus, we have developed Allie, a database and a search service of abbreviations and their long forms (a.k.a. full forms or definitions). Allie searches for abbreviations and their corresponding long forms in a database that we have generated based on all titles and abstracts in MEDLINE. When a user query matches an abbreviation, Allie returns all potential long forms of the query along with their bibliographic data (i.e. title and publication year). In addition, for each candidate, co-occurring abbreviations and a research field in which it frequently appears in the MEDLINE data are displayed. This function helps users learn about the context in which an abbreviation appears. To deal with synonymous long forms, we use a dictionary called GENA that contains domain-specific terms such as gene, protein or disease names along with their synonymic information. Conceptually identical domain-specific terms are regarded as one term, and then conceptually identical abbreviation-long form pairs are grouped taking into account their appearance in MEDLINE. To keep up with new abbreviations that are continuously introduced, Allie has an automatic update system. In addition, the database of abbreviations and their long forms with their corresponding PubMed IDs is constructed and updated weekly. Database URL: The Allie service is available at http://allie.dbcls.jp/.
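    The lookup behaviour for a polysemous abbreviation can be sketched as returning every candidate long form with its context. The entries below are illustrative, not Allie's actual records.

```python
# Toy abbreviation database: abbreviation -> candidate long forms with context.
abbrev_db = {
    "TF": [
        {"long_form": "transcription factor", "field": "Genetics", "year": 1990},
        {"long_form": "tissue factor", "field": "Hematology", "year": 1988},
    ],
}

def lookup(abbrev):
    """Return all candidate long forms for a (possibly polysemous) abbreviation."""
    return [entry["long_form"] for entry in abbrev_db.get(abbrev, [])]

print(lookup("TF"))  # both senses are returned for the reader to disambiguate
```

    The research-field annotation is what lets a reader pick the sense appropriate to the paper at hand.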

  13. DPTEdb, an integrative database of transposable elements in dioecious plants.

    PubMed

    Li, Shu-Fen; Zhang, Guo-Jun; Zhang, Xue-Jin; Yuan, Jin-Hong; Deng, Chuan-Liang; Gu, Lian-Feng; Gao, Wu-Jun

    2016-01-01

    Dioecious plants usually harbor 'young' sex chromosomes, providing an opportunity to study the early stages of sex chromosome evolution. Transposable elements (TEs) are mobile DNA elements frequently found in plants and are suggested to play important roles in plant sex chromosome evolution. The genomes of several dioecious plants have been sequenced, offering an opportunity to annotate and mine the TE data. However, comprehensive and unified annotation of TEs in these dioecious plants is still lacking. In this study, we constructed a dioecious plant transposable element database (DPTEdb). DPTEdb is a specific, comprehensive and unified relational database and web interface. We used a combination of de novo, structure-based and homology-based approaches to identify TEs from the genome assemblies of previously published data, as well as our own. The database currently integrates eight dioecious plant species and a total of 31 340 TEs along with classification information. DPTEdb provides user-friendly web interfaces to browse, search and download the TE sequences in the database. Users can also use tools, including BLAST, GetORF, HMMER, Cut sequence and JBrowse, to analyze TE data. Given the role of TEs in plant sex chromosome evolution, the database will contribute to the investigation of TEs in structural, functional and evolutionary dynamics of the genome of dioecious plants. In addition, the database will supplement the research of sex diversification and sex chromosome evolution of dioecious plants. Database URL: http://genedenovoweb.ticp.net:81/DPTEdb/index.php. © The Author(s) 2016. Published by Oxford University Press.
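    A classification-aware TE database supports summaries such as element counts per TE class for a species. The records below are invented for illustration (papaya is a dioecious plant, but these are not DPTEdb's counts):

```python
from collections import Counter

# Invented TE annotation records: (species, TE class).
te_records = [
    {"species": "Carica papaya", "te_class": "LTR/Gypsy"},
    {"species": "Carica papaya", "te_class": "LTR/Copia"},
    {"species": "Carica papaya", "te_class": "LTR/Gypsy"},
]

def te_class_summary(records, species):
    """Count annotated elements per TE class for one species."""
    return Counter(r["te_class"] for r in records if r["species"] == species)

print(te_class_summary(te_records, "Carica papaya"))
```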

  14. TimeTree2: species divergence times on the iPhone

    PubMed Central

    Kumar, Sudhir; Hedges, S. Blair

    2011-01-01

    Summary: Scientists, educators and the general public often need to know times of divergence between species. But they rarely can locate that information because it is buried in the scientific literature, usually in a format that is inaccessible to text search engines. We have developed a public knowledgebase that enables data-driven access to the collection of peer-reviewed publications in molecular evolution and phylogenetics that have reported estimates of time of divergence between species. Users can query the TimeTree resource by providing two names of organisms (common or scientific) that can correspond to species or groups of species. The current TimeTree web resource (TimeTree2) contains timetrees reported from molecular clock analyses in 910 published studies and 17 341 species that span the diversity of life. TimeTree2 interprets complex and hierarchical data from these studies for each user query, which can be launched using an iPhone application, in addition to the website. Published time estimates are now readily accessible to the scientific community, K–12 and college educators, and the general public, without requiring knowledge of evolutionary nomenclature. Availability: TimeTree2 is accessible from the URL http://www.timetree.org, with an iPhone app available from iTunes (http://itunes.apple.com/us/app/timetree/id372842500?mt=8) and a YouTube tutorial (http://www.youtube.com/watch?v=CxmshZQciwo). Contact: sbh1@psu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21622662
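    The resource's central operation, pooling published estimates for a species pair and reporting a summary time, can be sketched as below. The estimates are invented placeholders, not TimeTree's curated values.

```python
from statistics import median

# Invented published divergence-time estimates (million years ago) per pair;
# frozenset keys make the query order-independent.
estimates_mya = {
    frozenset({"Homo sapiens", "Pan troglodytes"}): [5.4, 6.6, 7.0],
}

def divergence_time(sp1, sp2):
    """Median of the pooled published estimates for a species pair, or None."""
    times = estimates_mya.get(frozenset({sp1, sp2}), [])
    return median(times) if times else None

print(divergence_time("Pan troglodytes", "Homo sapiens"))  # same result either order
```

    The real resource additionally resolves common names and expands group names to sets of species before pooling.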

  15. An online analytical processing multi-dimensional data warehouse for malaria data

    PubMed Central

    Madey, Gregory R; Vyushkov, Alexander; Raybaud, Benoit; Burkot, Thomas R; Collins, Frank H

    2017-01-01

    Abstract Malaria is a vector-borne disease that contributes substantially to the global burden of morbidity and mortality. The management of malaria-related data from heterogeneous, autonomous, and distributed data sources poses unique challenges and requirements. Although online data storage systems exist that address specific malaria-related issues, a globally integrated online resource to address different aspects of the disease does not exist. In this article, we describe the design, implementation, and applications of a multi-dimensional, online analytical processing data warehouse, named the VecNet Data Warehouse (VecNet-DW). It is the first online, globally integrated platform that provides efficient search, retrieval and visualization of historical, predictive, and static malaria-related data, organized in data marts. Historical and static data are modelled using star schemas, while predictive data are modelled using a snowflake schema. The major goals, characteristics, and components of the DW are described along with its data taxonomy and ontology, the external data storage systems and the logical modelling and physical design phases. Results are presented as screenshots of a Dimensional Data browser, a Lookup Tables browser, and a Results Viewer interface. The power of the DW emerges from integrated querying of the different data marts and structuring those queries to the desired dimensions, enabling users to search, view, analyse, and store large volumes of aggregated data, and responding better to the increasing demands of users. Database URL: https://dw.vecnet.org/datawarehouse/ PMID:29220463
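    The kind of dimensional query a star-schema data mart supports is a roll-up: aggregating a fact-table measure over a chosen dimension. The fact rows below are invented, not VecNet data.

```python
from collections import defaultdict

# Invented fact table: one measure ('cases') with two dimension attributes.
fact_cases = [
    {"country": "A", "year": 2010, "cases": 120},
    {"country": "A", "year": 2011, "cases": 90},
    {"country": "B", "year": 2010, "cases": 300},
]

def rollup(facts, dimension):
    """Sum the 'cases' measure grouped by one dimension attribute."""
    totals = defaultdict(int)
    for row in facts:
        totals[row[dimension]] += row["cases"]
    return dict(totals)

print(rollup(fact_cases, "country"))
```

    In a real warehouse the dimension attributes live in dimension tables keyed from the fact table, which is what makes such queries efficient at scale.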

  16. Analysis of usability factors affecting purchase intention in online e-commerce sites

    NASA Astrophysics Data System (ADS)

    Perdana, R. A.; Suzianti, A.

    2017-03-01

    The growing number of internet users plays a significant role in the emergence of a variety of online e-commerce sites to meet the needs of Indonesians. However, users still face several problems when using e-commerce sites. Therefore, research relating user experience to purchase intention is required to foster e-commerce sites. This study was conducted to find out the relationship between usability factors and e-commerce users' purchase intention through a case study, using structural equation modeling (SEM) to analyse the usability of the website. The results show that credibility, readability and telepresence are usability factors that directly affect purchase intention, while simplicity, consistency and interactivity are usability factors that affect purchase intention indirectly. Therefore, we can conclude that Indonesian consumers are in the Early Majority phase of adopting Company A.
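    In a path (SEM) model, an indirect effect is the product of the coefficients along its route, and the total effect is the direct effect plus the sum of indirect effects. A sketch with hypothetical coefficients (not the study's estimates):

```python
def eval_chain(chain):
    """Multiply the coefficients along one indirect route."""
    product = 1.0
    for coef in chain:
        product *= coef
    return product

def total_effect(direct, indirect_paths):
    """Total effect = direct path coefficient + sum of indirect path products."""
    return direct + sum(eval_chain(chain) for chain in indirect_paths)

# Hypothetical route: simplicity -> credibility -> purchase intention.
print(total_effect(0.0, [[0.5, 0.4]]))
```

    A factor like simplicity can thus influence purchase intention even with a zero direct path, which is what "indirectly affect" means above.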

  17. Internet Uses and Gratifications: Understanding Motivations for Using the Internet.

    ERIC Educational Resources Information Center

    Ko, Hanjun

    In this study, the uses and gratifications theory was applied to investigate the Internet users' motivations and their relationship with attitudes toward the Internet as well as types of Web site visited by users. Subjects were 185 college students who completed a self-report questionnaire. Four motivations and five types of Web sites were…

  18. Towards social inclusion through lifelong learning in mental health: analysis of change in the lives of the EMILIA project service users.

    PubMed

    Ramon, Shulamit; Griffiths, Christopher A; Nieminen, Irja; Pedersen, Marialouise; Dawson, Ian

    2011-05-01

    The application of formal lifelong learning to enhance social inclusion in mental health is rarely investigated in terms of change in the lives of service users on a cross-country comparative scale. This study was aimed at examining changes in key areas of the lives of mental health service users across eight European mental health sites. A before and after case study design was applied. Users of mental health services who participated in the lifelong learning interventions reviewed the changes in key areas of their lives at baseline and 10 months later, through the thematic analysis of qualitative data collected in semi-structured interviews (27 and 21, respectively) and self-reports (138 and 99, respectively). In-depth examples from one site are provided. Most users reported positive changes in the areas of training and social networks, with a sizeable minority moving onto unpaid and paid employment. In addition most users reported active planning for job search and other goals. Obstacles that were highlighted included the negative effects of having a mental illness, difficulties in close relationships and economic disadvantages. The lifelong learning intervention offered within an EU Framework 6 project to mental health service users in eight demonstration sites had a largely positive impact on key areas of their lives at 10 months, though obstacles remained which may be less amenable to change by social interventions.

  19. Modeling of damage, permeability changes and pressure responses during excavation of the TSX tunnel in granitic rock at URL, Canada

    NASA Astrophysics Data System (ADS)

    Rutqvist, Jonny; Börgesson, Lennart; Chijimatsu, Masakazu; Hernelind, Jan; Jing, Lanru; Kobayashi, Akira; Nguyen, Son

    2009-05-01

    This paper presents numerical modeling of excavation-induced damage, permeability changes, and fluid-pressure responses during excavation of a test tunnel associated with the tunnel sealing experiment (TSX) at the Underground Research Laboratory (URL) in Canada. Four different numerical models were applied using a wide range of approaches to model damage and permeability changes in the excavation disturbed zone (EDZ) around the tunnel. Using in situ calibration of model parameters, the modeling could reproduce observed spatial distribution of damage and permeability changes around the tunnel as a combination of disturbance induced by stress redistribution around the tunnel and by the drill-and-blast operation. The modeling showed that stress-induced permeability increase above the tunnel is a result of micro- and macrofracturing under high deviatoric (shear) stress, whereas permeability increase alongside the tunnel is a result of opening of existing microfractures under decreased mean stress. The remaining observed fracturing and permeability changes around the periphery of the tunnel were attributed to damage from the drill-and-blast operation. Moreover, a reasonably good agreement was achieved between simulated and observed excavation-induced pressure responses around the TSX tunnel for 1 year following its excavation. The simulations showed that these pressure responses are caused by poroelastic effects as a result of increasing or decreasing mean stress, with corresponding contraction or expansion of the pore volume. The simulation results for pressure evolution were consistent with previous studies, indicating that the observed pressure responses could be captured in a Biot model using a relatively low Biot-Willis’ coefficient, α ≈ 0.2, a porosity of n ≈ 0.007, and a relatively low permeability of k ≈ 2 × 10⁻²² m², which is consistent with the very tight, unfractured granite at the site.
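    The poroelastic mechanism invoked above can be summarized by two standard Biot relations; this is a hedged sketch of the general theory, not the paper's exact formulation:

```latex
% Effective stress with Biot-Willis coefficient \alpha:
\sigma'_{ij} \;=\; \sigma_{ij} \;-\; \alpha\, p\, \delta_{ij}
% Near-undrained pore-pressure response to a change in mean stress \sigma_m,
% with a Skempton-type coefficient B that depends on \alpha, porosity n,
% and the fluid and solid compressibilities:
\Delta p \;\approx\; B\, \Delta\sigma_m
```

    With the low α and very low porosity and permeability quoted for this tight granite, pore pressure responds measurably to mean-stress changes but dissipates only slowly, consistent with the year-long pressure transients described.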

  20. GeneBuilder: interactive in silico prediction of gene structure.

    PubMed

    Milanesi, L; D'Angelo, D; Rogozin, I B

    1999-01-01

    Prediction of gene structure in newly sequenced DNA is becoming very important in large genome sequencing projects. The problem is complicated by the exon-intron structure of eukaryotic genes and because gene expression is regulated by many different short nucleotide domains. To analyse the full gene structure in different organisms, it is necessary to combine information about potential functional signals (promoter region, splice sites, start and stop codons, 3' untranslated region) with the statistical properties of coding sequences (coding potential) and with information about homologous proteins, ESTs and repeated elements. We have developed the GeneBuilder system, which is based on prediction of functional signals and coding regions by different approaches in combination with similarity searches in protein and EST databases. The potential gene structure models are obtained by using a dynamic programming method. The program permits the use of several parameters for gene structure prediction and refinement. During gene model construction, the accuracy of the gene structure prediction can be improved by selecting different exon homology levels against a protein sequence chosen from a list of homologous proteins. In the case of low homology, GeneBuilder is still able to predict the gene structure. The GeneBuilder system has been tested using the standard set (Burset and Guigo, Genomics, 34, 353-367, 1996), and its performance is 0.89 sensitivity and 0.91 specificity at the nucleotide level; the total correlation coefficient is 0.88. The GeneBuilder system is implemented as a part of WebGene at the URL http://www.itba.mi.cnr.it/webgene and the TRADAT (TRAnscription Database and Analysis Tools) launcher at the URL http://www.itba.mi.cnr.it/tradat.
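    The nucleotide-level accuracy measures quoted above follow the standard Burset and Guigó definitions, computable from per-nucleotide confusion counts. Note that in gene prediction, "specificity" conventionally means TP/(TP+FP) (what other fields call precision). The counts below are invented for illustration.

```python
import math

def gene_prediction_accuracy(tp, fp, tn, fn):
    """Burset & Guigó nucleotide-level measures from confusion counts."""
    sn = tp / (tp + fn)               # sensitivity
    sp = tp / (tp + fp)               # "specificity" as used in gene prediction
    denom = math.sqrt((tp + fn) * (tn + fp) * (tp + fp) * (tn + fn))
    cc = (tp * tn - fn * fp) / denom  # correlation coefficient
    return sn, sp, cc

# Invented counts over a hypothetical test sequence.
print(gene_prediction_accuracy(90, 10, 880, 20))
```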

Top