Sample records for museum web database

  1. 77 FR 39269 - Submission for OMB Review, Comment Request, Proposed Collection: IMLS Museum Web Database...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-02

    ..., Proposed Collection: IMLS Museum Web Database: MuseumsCount.gov AGENCY: Institute of Museum and Library... general public. Information such as name, address, phone, email, Web site, staff size, program details... Museum Web Database: MuseumsCount.gov collection. The 60-day notice for the IMLS Museum Web Database...

  2. 76 FR 54807 - Notice of Proposed Information Collection: IMLS Museum Web Database: MuseumsCount.gov

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ...: IMLS Museum Web Database: MuseumsCount.gov AGENCY: Institute of Museum and Library Services, National..., and the general public. Information such as name, address, phone, e-mail, Web site, congressional...: IMLS Museum Web Database, MuseumsCount.gov . OMB Number: To be determined. Agency Number: 3137...

  3. CHIP Demonstrator: Semantics-Driven Recommendations and Museum Tour Generation

    NASA Astrophysics Data System (ADS)

    Aroyo, Lora; Stash, Natalia; Wang, Yiwen; Gorgels, Peter; Rutledge, Lloyd

    The main objective of the CHIP project is to demonstrate how Semantic Web technologies can be deployed to provide personalized access to digital museum collections. We illustrate our approach with the digital database ARIA of the Rijksmuseum Amsterdam. For the semantic enrichment of the Rijksmuseum ARIA database we collaborated with the CATCH STITCH project to produce mappings to Iconclass, and with the MultimediaN E-culture project to produce the RDF/OWL of the ARIA and Adlib databases. The main focus of CHIP is on exploring the potential of applying adaptation techniques to provide personalized experience for the museum visitors both on the Web site and in the museum.

  4. THE GB/3D Fossil Types Online Database

    NASA Astrophysics Data System (ADS)

    Howe, M. P.; McCormick, T.

    2012-12-01

    The ICZN and the International Code of Nomenclature for algae, fungi and plants require that every species or subspecies of organism (living & fossil) should have a type or reference specimen to define its characteristic features. These specimens are held in collections around the world and must be available for study. Over time, type specimens can deteriorate or become lost. The British Geological Survey, the National Museum of Wales, the Sedgwick Museum Cambridge and the Oxford Museum of Natural History are working together to create an online database of the type fossils they hold. The web portal provides data about each specimen, searchable by taxonomic, stratigraphic and spatial criteria. For each specimen it is possible to view and download high resolution photographs, and for many of them, 'anaglyph' stereo pairs and 3D scans are available. The portal also provides open educational resources (OERs). The rise to prominence of the Web has transformed expectations in accessing information, and the Web is now usually the first port of call. However, while many geological museums provide web-searchable text catalogues, few have undertaken a large-scale program of providing images and 3D models. This project has tackled the issues of merging four distinct data holdings and setting up workflows to image and scan large numbers of disparate fossils, ranging from small invertebrate macrofossils to large vertebrate skeletal elements. There are three advantages in providing such resources: (1) All users can exploit the collections more efficiently. End-users can view specimens remotely and assess their nature, preservation quality and completeness - in some cases this may be sufficient. It will reduce the need for institutions to send specimens (which are often fragile and always irreplaceable) to researchers by post, or for researchers to make possibly long, expensive and environmentally damaging journeys.
(2) A public outreach and education dividend - the ability to view specimens greatly enriches the experience and information content of an institution's website. (3) The ability to digitally image specimens enables museums to have an archive record in case the physical specimens are lost or destroyed by accident or warfare. Digital model of the type of Kreterostephanus kreter Buckmann (GSM49334), an ammonite from the Jurassic of Dorset, UK - displayed as an anaglyph.

  5. Pioneering a web-Based Museum in Taiwan: Design and Implementation of Lifelong Distance Learning of Science Education.

    ERIC Educational Resources Information Center

    Young, Shelley Shwu-Ching; Huang, Yi-Long; Jang, Jyh-Shing Roger

    2000-01-01

    Describes the development and implementation process of a Web-based science museum in Taiwan. Topics include use of the Internet; lifelong distance learning; museums and the Internet; objectives of the science museum; funding; categories of exhibitions; analysis of Web users; homepage characteristics; graphics and the effect on speed; and future…

  6. Edaphostat: interactive ecological analysis of soil organism occurrences and preferences from the Edaphobase data warehouse

    PubMed Central

    Scholz-Starke, Björn; Burkhardt, Ulrich; Lesch, Stephan; Rick, Sebastian; Russell, David; Roß-Nickoll, Martina; Ottermanns, Richard

    2017-01-01

    Abstract The Edaphostat web application allows interactive and dynamic analyses of soil organism data stored in the Edaphobase data warehouse. It is part of the Edaphobase web application and can be accessed by any modern browser. The tool combines data from different sources (publications, field studies and museum collections) and allows species preferences along various environmental gradients (i.e. C/N ratio and pH) and classification systems (habitat type and soil type) to be analyzed. Database URL: Edaphostat is part of the Edaphobase Web Application available at https://portal.edaphobase.org PMID:29220469
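The gradient-preference analysis described above can be sketched as follows. This is an illustrative toy, not Edaphobase's actual schema or code: the records, field names, species and pH values are all invented for the example.

```python
# Hypothetical sketch of a species-preference analysis along an
# environmental gradient (here pH), in the spirit of Edaphostat.
# All data and field names below are invented for illustration.
from statistics import mean

records = [
    {"species": "Lumbricus terrestris", "ph": 6.8},
    {"species": "Lumbricus terrestris", "ph": 7.1},
    {"species": "Dendrobaena octaedra", "ph": 4.2},
    {"species": "Dendrobaena octaedra", "ph": 4.9},
]

def ph_preference(records, species):
    """Mean pH of the sites where the species was recorded."""
    values = [r["ph"] for r in records if r["species"] == species]
    return round(mean(values), 2)

print(ph_preference(records, "Lumbricus terrestris"))  # 6.95
print(ph_preference(records, "Dendrobaena octaedra"))  # 4.55
```

A real analysis would of course draw on far more occurrence records and compare the distribution of occurrences against the background distribution of the gradient, but the core idea of aggregating occurrences per species per gradient value is the same.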

  7. Information Design for Visualizing History Museum Artifacts

    ERIC Educational Resources Information Center

    Chen, Yulin; Lai, Tingsheng; Yasuda, Takami; Yokoi, Shigeki

    2011-01-01

    In the past few years, museum visualization systems have become a hot topic that attracts many researchers' interests. Several systems provide Web services for browsing museum collections through the Web. In this paper, we proposed an intelligent museum system for history museum artifacts, and described a study in which we enable access to China…

  8. New taxonomy and old collections: integrating DNA barcoding into the collection curation process.

    PubMed

    Puillandre, N; Bouchet, P; Boisselier-Dubayle, M-C; Brisset, J; Buge, B; Castelin, M; Chagnoux, S; Christophe, T; Corbari, L; Lambourdière, J; Lozouet, P; Marani, G; Rivasseau, A; Silva, N; Terryn, Y; Tillier, S; Utge, J; Samadi, S

    2012-05-01

    Because they house large biodiversity collections and are also research centres with sequencing facilities, natural history museums are well placed to develop DNA barcoding best practices. The main difficulty is generally the vouchering system: it must ensure that all data produced remain attached to the corresponding specimen, from the field to publication in articles and online databases. The Muséum National d'Histoire Naturelle in Paris is one of the leading laboratories in the Marine Barcode of Life (MarBOL) project, which was used as a pilot programme to include barcode collections for marine molluscs and crustaceans. The system is based on two relational databases. The first one classically records the data (locality and identification) attached to the specimens. In the second one, tissue-clippings, DNA extractions (both preserved in 2D barcode tubes) and PCR data (including primers) are linked to the corresponding specimen. All the steps of the process [sampling event, specimen identification, molecular processing, data submission to Barcode Of Life Database (BOLD) and GenBank] are thus linked together. Furthermore, we have developed several web-based tools to automatically upload data into the system, control the quality of the sequences produced and facilitate the submission to online databases. This work is the result of a joint effort from several teams in the Muséum National d'Histoire Naturelle (MNHN), but also from a collaborative network of taxonomists and molecular systematists outside the museum, resulting in the vouchering so far of ∼41,000 sequences and the production of ∼11,000 COI sequences. © 2012 Blackwell Publishing Ltd.
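A minimal sketch of the specimen-to-sequence vouchering link the abstract describes, using a simple relational layout with `sqlite3` from the Python standard library. The table and column names, and the specimen data, are invented for illustration and are not the MNHN's actual schema.

```python
# Sketch of a two-table vouchering link: specimen data in one table,
# molecular processing in another, joined by specimen id so that every
# sequence stays attached to its voucher specimen.
# Schema and values are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE specimen (
    id INTEGER PRIMARY KEY,
    taxon TEXT,
    locality TEXT
);
CREATE TABLE molecular (
    id INTEGER PRIMARY KEY,
    specimen_id INTEGER REFERENCES specimen(id),
    tube_barcode TEXT,   -- 2D barcode tube holding the DNA extraction
    coi_sequence TEXT    -- COI barcode sequence submitted to BOLD/GenBank
);
""")
con.execute("INSERT INTO specimen VALUES (1, 'Conus sp.', 'Philippines')")
con.execute("INSERT INTO molecular VALUES (1, 1, 'TB-0001', 'ACGT...')")

# The join recovers the voucher for any molecular record.
row = con.execute("""
    SELECT s.taxon, m.tube_barcode
    FROM specimen s JOIN molecular m ON m.specimen_id = s.id
""").fetchone()
print(row)  # ('Conus sp.', 'TB-0001')
```

The point of the design, as in the abstract, is that field data and molecular data live in separate databases but are never orphaned from each other: the specimen id is the thread running from sampling event to GenBank submission.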

  9. Design Insights and Inspiration from the Tate: What Museum Web Sites Can Offer Us

    ERIC Educational Resources Information Center

    Riley-Huff, Debra A.

    2009-01-01

    There are many similarities between museums and academic libraries as public service institutions. This article is an examination of museum Web site practices and concepts that might also be transferable to academic library Web sites. It explores the digital manifestations of design and information presentation, user engagement, interactivity, and…

  10. Museums and the Web 2002: Selected Papers from an International Conference (6th, Boston, Massachusetts, April 17-20, 2002).

    ERIC Educational Resources Information Center

    Bearman, David, Ed.; Trant, Jennifer, Ed.

    This proceedings contains the following selected papers from the Museums and the Web 2002 international conference: "The Electronic Guidebook: Using Portable Devices and a Wireless Web-Based Network To Extend the Museum Experience" (Robert Semper, Mirjana Spasojevic); "Eavesdropping on Electronic Guidebooks: Observing Learning…

  11. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    NASA Astrophysics Data System (ADS)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like Hubei Provincial Museum because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and context information about these objects. In addition, according to the survey, the 3D model is the navigation environment preferred by users. Therefore a 3D model based indoor navigation system was developed for Hubei Provincial Museum to guide the museum's visitors. The system consists of three layers: application, web service and navigation, which together support the localization, navigation and visualization functions of the system. The system has three main strengths: (1) it stores all the data needed in one database and processes most calculations on the web server, which makes the mobile client very lightweight; (2) the network used for navigation is extracted semi-automatically and is renewable; (3) the graphical user interface (GUI), which is based on a game engine, visualizes the 3D model on a mobile display with high performance.
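The server-side route computation over an extracted navigation network could, for example, be a standard shortest-path search over a weighted graph. The sketch below uses Dijkstra's algorithm on a toy stand-in graph; the node names and distances are invented and do not describe the museum's actual network.

```python
# Hedged sketch of route computation over a navigation network:
# Dijkstra's algorithm on a toy indoor graph (invented nodes/distances).
import heapq

graph = {  # node -> [(neighbour, distance in metres)]
    "entrance": [("hall_1", 20)],
    "hall_1": [("entrance", 20), ("hall_2", 15), ("stairs", 10)],
    "hall_2": [("hall_1", 15), ("exhibit_A", 5)],
    "stairs": [("hall_1", 10), ("exhibit_A", 30)],
    "exhibit_A": [("hall_2", 5), ("stairs", 30)],
}

def shortest_path(graph, start, goal):
    """Return (total distance, node list) for the shortest route."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node]:
            if nxt not in seen:
                heapq.heappush(queue, (dist + w, nxt, path + [nxt]))
    return None

print(shortest_path(graph, "entrance", "exhibit_A"))
# (40, ['entrance', 'hall_1', 'hall_2', 'exhibit_A'])
```

Keeping this computation on the web server, as the abstract describes, is what lets the mobile client stay lightweight: it only sends a start/goal pair and renders the returned path.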

  12. A New Way of Making Cultural Information Resources Visible on the Web: Museums and the Open Archive Initiative.

    ERIC Educational Resources Information Center

    Perkins, John

    Museums hold enormous amounts of information in collections management systems and publish academic and scholarly research in print journals, exhibition catalogs, virtual museum presentations, and community publications. Much of this rich content is unavailable to web search engines or otherwise gets lost in the vastness of the World Wide Web. The…

  13. Designing Virtual Museum Using Web3D Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghai

    VRT has the potential to construct an effective learning environment due to its 3I characteristics: Interaction, Immersion and Imagination. Along with the development of VRT, it is now applied in education in a more profound way, and the Virtual Museum is one such application. The Virtual Museum is based on Web3D technology, and extensibility is the most important factor. Considering the advantages and disadvantages of each Web3D technology, the VRML, Cult3D and Viewpoint technologies were chosen. A web chatroom based on Flash and ASP technology has also been created in order to make the Virtual Museum an interactive learning environment.

  14. Web-Based Museum Trails on PDAs for University-Level Design Students: Design and Evaluation

    ERIC Educational Resources Information Center

    Reynolds, R.; Walker, K.; Speight, C.

    2010-01-01

    This paper describes the development and evaluation of web-based museum trails for university-level design students to access on handheld devices in the Victoria and Albert Museum (V&A) in London. The trails offered students a range of ways of exploring the museum environment and collections, some encouraging students to interpret objects and…

  15. Make Your Museum Talk: Natural Language Interfaces for Cultural Institutions.

    ERIC Educational Resources Information Center

    Boiano, Stefania; Gaia, Giuliano; Caldarini, Morgana

    A museum can talk to its audience through a variety of channels, such as Web sites, help desks, human guides, brochures. A considerable effort is being made by museums to integrate these different means. The Web site can be designed to be reachable or even updateable from visitors inside the museum via touchscreen and wireless devices. But these…

  16. The World Wide Web Virtual Library of Museums.

    ERIC Educational Resources Information Center

    Bowen, Jonathan P.

    1995-01-01

    Provides an introduction to and overview of the World Wide Web Virtual Library of Museums, an interactive directory of online museums, including organization of the hyperlinks, visitor statistics, possible future directions, and information on some of the sites linked to the library. (JKP)

  17. Museums and the Web 2003: Selected Papers from an International Conference (7th, Charlotte, North Carolina, March 19-22, 2003).

    ERIC Educational Resources Information Center

    Bearman, David, Ed.; Trant, Jennifer, Ed.

    This is the proceedings of the seventh annual Museums and the Web conference which took place March 19-22, 2003. MW2003 was the premier international venue to review the state of the Web in arts, culture, and heritage. The formal program consisted of two plenary sessions, eighteen parallel sessions, 35 museum project demonstrations, dozens of…

  18. A Journey through Public History on the Web.

    ERIC Educational Resources Information Center

    Borg, Brent

    2002-01-01

    Provides an annotated list of Web sites that include, but are not limited to, the American Family Immigration History Center, Harry S. Truman Presidential Museum and Library, National Council for Public History, National Park Service, U.S. Holocaust Memorial Museum, and the Women of the West Virtual Museum. (CMK)

  19. The HyperMuseum Theme Generator System: Ontology-Based Internet Support for the Active Use of Digital Museum Data for Teaching and Presentation.

    ERIC Educational Resources Information Center

    Stuer, Peter; Meersman, Robert; De Bruyne, Steven

    Museums have always been, sometimes directly and often indirectly, a key resource of arts and cultural heritage information for the classroom educator. The Web now offers an ideal way of taking this resource beyond the traditional textbook or school visit. While museums around the globe are embracing the web and putting virtual exhibitions,…

  20. Storytelling and the Web in South African Museums.

    ERIC Educational Resources Information Center

    Goodnow, Katherine J.; Natland, Yngvar

    The Iziko museums in Cape Town, South Africa in collaboration with the International Museums Studies Programme at the University of Bergen, Norway, have jointly developed a Web-based concept that combines oral storytelling with new technology to connect schools in the South and North. Awaiting funding at the time of publication, this project was…

  21. Museums and the Web 1999: Selected Papers from an International Conference.

    ERIC Educational Resources Information Center

    Bearman, David, Ed.; Trant, Jennifer, Ed.

    Following an introduction by the editors entitled "Interactivity Comes of Age: Museums and World Wide Web at Five," this proceedings contains the following selected papers and case studies: "From the Mountains of the Moon to the Neon Paintbrush" (Peter Walsh); "Visiting a Museum Together: How To Share a Visit to a Virtual…

  22. Evaluating the Usability of a Museum Web Site.

    ERIC Educational Resources Information Center

    Harms, Ilse; Schweibenz, Werner

    This paper presents a research project conducted by the Department of Information Science in cooperation with the Saarland Museum, the art museum of the Federal State of Saarland, Germany. The study had two aims. The first was to evaluate some methods of usability engineering for the Web, and the second was to evaluate the usability of the…

  23. Einstein Online: A Web-based Course for K-12 Teachers from the American Museum of Natural History

    NASA Astrophysics Data System (ADS)

    Steiner, Robert

    2004-05-01

    The American Museum of Natural History, in collaboration with Hebrew University and the Skirball Cultural Center, has created a major exhibit on Albert Einstein, including extensive coverage of his contributions to relativity, quantum mechanics and unified field theories as well as the social and political dimensions of his life. Leveraging the assets of this exhibit as well as the expertise of the Museum's Department of Astrophysics and its Education Department, a six-week online professional development course for K-12 teachers has been created, providing inquiries into some of the frontiers of physics through rich media resources, facilitated discussion forums and assignments. The course, which requires only minimal Web access, offers a unique opportunity for teachers across the United States to explore modern physics guided by a working scientist and a skilled online facilitator. The course includes original essays by Museum scientists, images, video, simulations, web links and digital resources for classroom use. The course design, development, implementation and evaluation are reviewed.

  24. Ice Stories: Engaging Polar Scientists as Field Correspondents for IPY

    NASA Astrophysics Data System (ADS)

    Miller, M. K.

    2006-12-01

    The International Polar Year (IPY 2007-09) gives the public, teachers, and students an extraordinary opportunity to experience the process of scientific discovery in action. The Exploratorium, working in partnership with international scientists at both poles, will create educational resources for museum and online visitors that celebrate life, legacy and science in the world's polar regions. In this session, Senior Science Producer Mary Miller will discuss the Exploratorium's proposed IPY project, Ice Stories. This unique educational project will provide a public face for IPY by using the power of contemporary media to bring current research to mass audiences with unprecedented intimacy and immediacy. Ice Stories includes a media-rich, dynamic and continuously updated public Web site; a media-assets database for journalists, media producers, educators, and museum partners; and a training program in media production and storytelling for polar scientists. Ice Stories provides the public with access to IPY research through the development of a network of Exploratorium-trained polar field correspondents. It makes use of the design, education and production capacity of an informal science center to create a bridge between scientific discovery and interested members of the public. Ice Stories employs sophisticated media production and communication technology as well as strong partnerships with allied research groups and with scientists and international organizations at the poles. The Exploratorium has pioneered the translation of current science research into exhibits and presentations accessible to museum and Web audiences. It also has long experience creating award-winning Web sites, professional-development workshops, community outreach, and institutional alliances.

  25. An Integrated Korean Biodiversity and Genetic Information Retrieval System

    PubMed Central

    Lim, Jeongheui; Bhak, Jong; Oh, Hee-Mock; Kim, Chang-Bae; Park, Yong-Ha; Paek, Woon Kee

    2008-01-01

    Background: On-line biodiversity information databases are growing quickly and being integrated into general bioinformatics systems due to advances in fast gene-sequencing technologies and the Internet. These can reduce the cost and effort of performing biodiversity surveys and genetic searches, which allows scientists to spend more time researching and less time collecting and maintaining data. This will increase the rate of knowledge build-up and improve conservation efforts. The biodiversity databases in Korea have been scattered among several institutes and local natural history museums with incompatible data types. Therefore, a comprehensive database and a nationwide web portal for biodiversity information are necessary in order to integrate diverse information resources, including molecular and genomic databases. Results: The Korean Natural History Research Information System (NARIS) was built and is operated as the central biodiversity information system to collect and integrate the biodiversity data of various institutes and natural history museums in Korea. This database aims to be an integrated resource that contains additional biological information, such as genome sequences and molecular-level diversity. Currently, twelve institutes and museums in Korea are integrated via the DiGIR (Distributed Generic Information Retrieval) protocol, with the Darwin Core 2.0 format as its metadata standard for data exchange. Data quality control and statistical analysis functions have been implemented. In particular, integration of molecular and genetic information from the National Center for Biotechnology Information (NCBI) databases with NARIS was recently accomplished. NARIS can also be extended to accommodate other institutes abroad, and the whole system can be exported to establish local biodiversity management servers. Conclusion: A Korean data portal, NARIS, has been developed to efficiently manage and utilize biodiversity data, which includes genetic resources.
NARIS aims to be integral in maximizing bio-resource utilization for conservation, management, research, education, industrial applications, and integration with other bioinformation data resources. It can be found at . PMID:19091024
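A record exchanged in Darwin Core format, as NARIS does over the DiGIR protocol, can be sketched roughly as below using the standard-library XML tools. The specimen values and the provider code are invented for illustration, and a real DiGIR response carries XML namespaces and protocol wrappers omitted here; the element names, however, are genuine Darwin Core terms.

```python
# Illustrative Darwin Core occurrence record serialized as XML.
# Values are hypothetical; namespaces/DiGIR envelope omitted for brevity.
import xml.etree.ElementTree as ET

record = {
    "scientificName": "Hynobius leechii",
    "country": "South Korea",
    "institutionCode": "NSMK",      # hypothetical provider code
    "catalogNumber": "AMPH-0042",   # hypothetical catalog number
}

root = ET.Element("record")
for term, value in record.items():
    ET.SubElement(root, term).text = value

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Agreeing on a flat, shared vocabulary like this is what lets twelve independently curated museum databases answer the same query without rewriting their internal schemas.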

  26. Factors Influencing Error Recovery in Collections Databases: A Museum Case Study

    ERIC Educational Resources Information Center

    Marty, Paul F.

    2005-01-01

    This article offers an analysis of the process of error recovery as observed in the development and use of collections databases in a university museum. It presents results from a longitudinal case study of the development of collaborative systems and practices designed to reduce the number of errors found in the museum's databases as museum…

  27. Bradbury Science Museum Collections Inventory Photos Disc #5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmeyer, Wendy J.

    The photos on Bradbury Science Museum Collections Inventory Photos Disc #5 are part of an ongoing effort to catalog all artifacts held by the Museum. The photos will be used as part of the condition report for each artifact and will become part of that artifact's record in the collections database. The collections database will be publicly searchable on the Museum website.

  28. Bradbury Science Museum Collections Inventory Photos Disc #4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmeyer, Wendy J.

    The photos on Bradbury Science Museum Collections Inventory Photos Disc #4 are part of an ongoing effort to catalog all artifacts held by the Museum. The photos will be used as part of the condition report for each artifact and will become part of that artifact's record in the collections database. The collections database will be publicly searchable on the Museum website.

  29. Effective Levels of Adaptation to Different Types of Users in Interactive Museum Systems.

    ERIC Educational Resources Information Center

    Paterno, F.; Mancini, C.

    2000-01-01

    Discusses user interaction with museum application interfaces and emphasizes the importance of adaptable and adaptive interfaces to meet differing user needs. Considers levels of support that can be given to different users during navigation of museum hypermedia information, using examples from the Web site for the Marble Museum (Italy).…

  30. Hobby-Related Information-Seeking Behaviour of Highly Dedicated Online Museum Visitors

    ERIC Educational Resources Information Center

    Skov, Mette

    2013-01-01

    Introduction: This paper explores the characteristics of online museum visitors in an everyday life, information-seeking context. Method: A triangulation of research methods was applied. A Web questionnaire survey gave initial, quantitative information about online museum visitors to a military museum. Follow-up interviews (n = 24) obtained rich,…

  31. Museums and Twitter: An Exploratory Qualitative Study of How Museums Use Twitter for Audience Development and Engagement

    ERIC Educational Resources Information Center

    Osterman, Mark; Thirunarayanan, M.; Ferris, Elizabeth C.; Pabon, Lizette C.; Paul, Natalie; Berger, Rhonda

    2012-01-01

    Museums are competing with a vast variety of Internet-based information delivery sites to keep the public interested in their institutions. To keep pace, museums are increasingly turning to Web 2.0 tools to draw in the public and maintain their standing as cultural and educational leaders. Several museums have started using Twitter. This…

  32. The Paleontology Portal: An online resource meeting the needs of a spectrum of learners

    NASA Astrophysics Data System (ADS)

    Lindberg, D.; Scotchmoor, J. G.

    2005-12-01

    The Paleontology Portal website provides a central, interactive entry point to high-quality North American paleontology resources on the Internet for multiple audiences: the research community, government and industry, K-16 students, and the general public. The Portal successfully blends research and education, pulling together information with reviewed and annotated website links for a wide variety of informal learners. Using web-based technology and relational databases, users can explore an interactive map and associated stratigraphic column to access information about particular geographic regions, geologic time periods, depositional environments, and representative taxa. Users are also able to search multiple museum collection databases using a single query form of their own design. Other features include highlights of famous fossil sites and assemblages and a fossil image gallery. Throughout the site, users find images and links to information specific to each time period or geographic region, including current research projects and publications, websites, on-line exhibits and educational materials, and information on collecting fossils. The next phase of development will target the development of resource modules on topics such as collection management and fossil preparation, appropriate for users ranging from the general public to professional paleontologists. Another initiative includes developing methods of personalizing the Portal to support exhibits at museums and other venues on geological history and paleontology. The Paleontology Portal, built by the UC Museum of Paleontology, is a joint project of the Society of Vertebrate Paleontology, the Paleontological Society, and the US Geological Survey, in collaboration with the Paleontological Research Institution, the Fort Worth Museum of Science and History, and the Denver Museum of Nature and Science, which serve as hubs for the project. 
Paleoportal serves as an effective model in two aspects: (1) providing access to a spectrum of reviewed resources from a single starting interface that enables users from novice to professional to access background-appropriate information, and (2) involvement of a wide range of stakeholders (professional societies, universities and museums, and individuals) in both concept development and management. The project is funded by the National Science Foundation under award no. 0234594
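The "single query across multiple museum collection databases" idea above can be sketched as a simple fan-out-and-merge: the same query goes to every provider and the hits are pooled, tagged with their origin. The provider names and records below are invented placeholders, not the Portal's actual data sources.

```python
# Hedged sketch of federated search over several (simulated) collection
# databases. Providers and records are invented for illustration.
def make_provider(name, records):
    """Build a toy search function standing in for one remote database."""
    def search(taxon):
        return [dict(r, provider=name) for r in records
                if r["taxon"] == taxon]
    return search

providers = [
    make_provider("UCMP", [{"taxon": "Triceratops", "period": "Cretaceous"}]),
    make_provider("DMNS", [{"taxon": "Triceratops", "period": "Cretaceous"},
                           {"taxon": "Eurypterus", "period": "Silurian"}]),
]

def federated_search(taxon):
    """Fan the query out to every provider and merge the hits."""
    hits = []
    for search in providers:
        hits.extend(search(taxon))
    return hits

results = federated_search("Triceratops")
print(len(results))  # 2
```

In a real deployment each `search` call would be an HTTP request to a remote collection database, so the merge step would also need to deduplicate records and tolerate providers that time out.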

  33. Evaluation of an Educational Website for the Bayou Bend Collection and Gardens, Museum of Fine Arts, Houston.

    ERIC Educational Resources Information Center

    Jenkins, Ann G.; Robin, Bernard R.

    As educators increasingly integrate Web-based resources into their curriculum, there is a growing need for high quality, educationally relevant materials. This study evaluated the Bayou Bend Web site, the result of a collaboration between staff at the Museum of Fine Arts, Houston, Texas, and faculty and graduate students at the University of…

  34. The I-Cleen Project (inquiring on Climate & Energy). Enhancing AN Enquiry-Based Approach to Earth System Sciences in Italian Classrooms

    NASA Astrophysics Data System (ADS)

    Cattadori, M.

    2010-12-01

    In recent years, the world of Italian schools has undergone slow but deep transformation processes. One negative consequence, documented by specific studies, has been the further weakening of the use of inquiry-based educational practices by science teachers, in a scholastic framework already traditionally little inclined toward them. The I-CLEEN project (Inquiring on CLimate & Energy, www.icleen.museum ) was started in 2008 to counter this trend, on the initiative of a staff of science teachers from different regions, all with many years' experience, coordinated and supported by the local museum, the Natural Science Museum of Trento (Trento, Italy). I-CLEEN is a free cooperation tool for Italian teachers, aimed at supporting and enhancing inquiry-based teaching of themes related to climate and energy and, more generally, the Earth system sciences. The project grew out of its creators' experience in Italy within the Education and Outreach program of ANDRILL (ANtarctic geological DRILLing). The core of the project is a database of resources potentially useful to a teacher preparing an inquiry-based lesson, selected by the staff following a specific selection policy. There are also lessons ready to be used in the classroom, prepared according to a specific editorial standard and composed of a paper for the teacher and a paper for the student. The database is technically an information gateway and is constantly enriched through critical research into teachers' practices and the worthiest international educational web projects. Resources are published in Italian or in bilingual (Italian-English) format, always with explicit authorization from the authors and under a Creative Commons license when possible.
This contribution illustrates details of this service, which has been online since December 2009 and is characterized by a distinctive use of information technologies. Indeed, both the parts composing the project (site, resource database, publishers, and users) and their respective activities (editing, publishing, cataloguing, and administration of web content and users) are fully handled by a single open-source web platform, Liferay, implemented specifically for this project. The study of international projects and reference standards was also accurate and broad, both in designing and developing the service (the DESIRE project - Development of a European Service for Information on Research and Education) and in creating the metadata (the DCMI standard - Dublin Core Metadata Initiative - and the LOM standard - Learning Object Metadata, IEEE 1484.12.1-2002). Thanks to this, a request was recently made (June 2010) to let the I-CLEEN database interact with that of the LRE project, the major information gateway of educational resources in the European Union.

  15. The Virtual Ramp to the Equivalent Experience in the Virtual Museum: Accessibility to Museums on the Web.

    ERIC Educational Resources Information Center

    Nevile, Liddy; McCathieNevile, Charles

    This paper argues that a range of forms and modalities of resources should be provided to ensure accessibility and richness on the World Wide Web for all users. Based on experiences in developing virtual exhibitions of Quinkan Aboriginal Rock Art, the authors present a brief overview of the technology available for accessibility. Then they explore…

  16. Museums and the Web 2001: Selected Papers from an International Conference (5th, Seattle, Washington, March 15-17, 2001).

    ERIC Educational Resources Information Center

    Bearman, David, Ed.; Trant, Jennifer, Ed.

    In this selection of papers from the conference, authors from 10 of the more than 35 countries and every continent (except Antarctica) provide discussions covering all levels of museum Web design. They brought a wide variety of experiences and backgrounds to the conference, all of which ensured new perspectives and new ideas. The meetings opened…

  17. Appendix A. Borderlands Site Database

    Treesearch

    A.C. MacWilliams

    2006-01-01

    The database includes modified components of the Arizona State Museum Site Recording System (Arizona State Museum 1993) and the New Mexico NMCRIS User's Guide (State of New Mexico 1993). When sites contain more than one recorded component, these instances were entered separately with the result that many sites have multiple entries. Information for this database...

  18. Interactive Character as a Virtual Tour Guide to an Online Museum Exhibition.

    ERIC Educational Resources Information Center

    de Almeida, Pilar; Yokoi, Shigeki

    Online museums could benefit from digital "lifelike" characters that guide users through virtual tours and customize the tour information to users' interests. Digital characters have been explored on online museum web sites with different degrees of interaction and modes of communication. Such research, however, does not explore…

  19. Providing Personal Assistance in the SAGRES Virtual Museum.

    ERIC Educational Resources Information Center

    Bertoletti, Ana Carolina; Moraes, Marcia Cristina; da Rocha Costa, Antonio Carlos

    The SAGRES system is an educational environment built on the Web that facilitates the organization of visits to museums, presenting museum information bases in a way adapted to the user's characteristics (capacities and preferences). The system determines the group of links appropriate to the user(s) and shows them in a resultant HTML page. In…

  20. Developing a Model for Technology-Based Museum School Partnerships

    ERIC Educational Resources Information Center

    Sanger, Erika; Silverman, Stan; Kraybill, Anne

    2015-01-01

    In 2012, The New York Institute of Technology and the Albany Institute of History & Art collaborated to increase the capacity of museum educators and classroom teachers to develop successful partnerships and deliver new programs through the use of web-based technologies. The project aligned the content expertise of museum educators from…

  1. Ice Stories: An Educational Collaboration between the Exploratorium and IPY Scientists.

    NASA Astrophysics Data System (ADS)

    Mary, M. K.

    2007-12-01

    The Exploratorium, a renowned interactive science museum in San Francisco, has launched a major NSF-funded public education project to highlight research in the Arctic and Antarctic during the International Polar Year. "Ice Stories" will partner museum media and web producers with polar scientists working in the field to bring their research to the Internet and museum audiences via live Webcasts, video clips, blogs, podcasts, and other media platforms. To prepare scientists for their role as field correspondents, the Exploratorium will train a cohort of 20-30 young investigators in media collection, production, and narrative storytelling during an intensive one-week workshop in San Francisco. The museum will curate the polar field reports, along with other IPY news and education events, into a continuously updated Web portal on the Exploratorium's award-winning Website and highlight the ongoing research in museum programming, floor demonstrations, and exhibits. These unique collaborations between formal and informal science can serve as a model for other partnerships during major scientific endeavors beyond the International Polar Year.

  2. The Virtual Museum of Minerals and Molecules: Molecular Visualization in a Virtual Hands-On Museum

    ERIC Educational Resources Information Center

    Barak, Phillip; Nater, Edward A.

    2005-01-01

    The Virtual Museum of Minerals and Molecules (VMMM) is a web-based resource presenting interactive, 3-D, research-grade molecular models of more than 150 minerals and molecules of interest to chemical, earth, plant, and environmental sciences. User interactivity with the 3-D display allows models to be rotated, zoomed, and specific regions of…

  3. Earth Science Digital Museum (ESDM): Toward a new paradigm for museums

    NASA Astrophysics Data System (ADS)

    Dong, Shaochun; Xu, Shijin; Wu, Gangshan

    2006-07-01

    New technologies have pushed traditional museums to take their exhibitions beyond the barrier of the museum's walls and to enhance their twin functions of education and entertainment. The Earth Science Digital Museum (ESDM) is such an emerging effort in this field. It serves as a platform for Earth scientists to build a Web community for sharing knowledge about the Earth, and it benefits the general public in their life-long learning. After analyzing the purposes and requirements of ESDM, we present our basic philosophy for ESDM and a four-layer hierarchical architecture for structuring ESDM over the Internet. It is a Web-based application that enables specimens to be exhibited, shared, and preserved in digital form, and that provides interoperability. One of the key components of ESDM, and particularly important to building it, is the development of a metadata set for describing Earth Science specimens and their digital representations. Practical demonstrations show that ESDM is suitable for formal and informal Earth Science education, including classroom education, online education, and life-long learning.

  4. Sentence-Based Metadata: An Approach and Tool for Viewing Database Designs.

    ERIC Educational Resources Information Center

    Boyle, John M.; Gunge, Jakob; Bryden, John; Librowski, Kaz; Hanna, Hsin-Yi

    2002-01-01

    Describes MARS (Museum Archive Retrieval System), a research tool which enables organizations to exchange digital images and documents by means of a common thesaurus structure, and merge the descriptive data and metadata of their collections. Highlights include theoretical basis; searching the MARS database; and examples in European museums.…

  5. Multimedia Database at National Museum of Ethnology

    NASA Astrophysics Data System (ADS)

    Sugita, Shigeharu

    This paper describes the information management system at the National Museum of Ethnology, Osaka, Japan. The museum is a research center for cultural anthropology and operates many computer systems, including an IBM 3090, a VAX11/780, and a Fujitsu M340R. On these computers, distributed multimedia databases have been constructed that store not only bibliographic data but also artifact images, slide images, book page images, and more. The databases now hold about 1.3 million items. These data can be retrieved and displayed on multimedia workstations equipped with several displays.

  6. Virtual Museum Learning

    ERIC Educational Resources Information Center

    Prosser, Dominic; Eddisford, Susan

    2004-01-01

    This paper examines children's and adults' attitudes to virtual representations of museum objects, drawing on empirical research data gained from two web-based digital learning environments. The paper explores the characteristics of on-line learning activities that move children from a sense of wonder into meaningful engagement with objects and…

  7. Opportunities in Education and Public Outreach for Scientists at the School of Ocean and Earth Sciences and Technology

    NASA Astrophysics Data System (ADS)

    Hicks, T.

    2004-12-01

    The School of Ocean and Earth Sciences and Technology (SOEST) at the University of Hawaii at Manoa is home to twelve diverse research institutes, programs and academic departments that focus on a wide range of earth and planetary sciences. SOEST's main outreach goals at the K-12 level are to increase the awareness of Hawaii's schoolchildren regarding earth, ocean, and space science, and to inspire them to consider a career in science. Education and public outreach efforts in SOEST include a variety of programs that engage students and the public in formal as well as informal educational settings, such as our biennial Open House, expedition web sites, Hawaii Ocean Science Bowl, museum exhibits, and programs with local schools. Some of the projects that allow for scientist involvement in E/PO include visiting local classrooms, volunteering in our outreach programs, submitting lessons and media files to our educational database of outreach materials relating to earth and space science research in Hawaii, developing E/PO materials to supplement research grants, and working with local museum staff as science experts.

  8. Kids as Curators: Virtual Art at the Seattle Museum.

    ERIC Educational Resources Information Center

    Scanlan, Laura Wolff

    2000-01-01

    Discusses the use of technology at the Seattle Art Museum (Washington). Includes a Web site that enables students in grades six through ten to act as curators and offers integrations of technology in the exhibition "Leonardo Lives: The Codex Leicester and Leonardo da Vinci's Legacy of Art and Science." (CMK)

  9. Natural history museums and cyberspace

    USGS Publications Warehouse

    Wemmer, C.; Erixon-Stanford, M.; Gardner, A.L.

    1996-01-01

    Natural history museums are entering the electronic age as they increasingly use computers to build accessible and shareable databases that support research and education on a world-wide basis. Museums are exploring the Internet and other shared uses of electronic media to enhance their traditional roles in education, training, identifications, technical assistance, and collections management.

  10. 78 FR 42805 - Nixon Presidential Historical Materials: Opening of Materials

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-17

    ... Library in Yorba Linda. The newly released tapes will also be available on August 21, 2013 on the Web at... Nixon Presidential Historical Materials by the Richard Nixon Presidential Library and Museum, a division... required by law, including the PRMPA. DATES: The Richard Nixon Presidential Library and Museum intends to...

  11. Using Interactive Broadband Multicasting in a Museum Lifelong Learning Program.

    ERIC Educational Resources Information Center

    Steinbach, Leonard

    The Cleveland Museum of Art has embarked on an innovative approach for delivering high quality video-on-demand and live interactive cultural programming, along with Web-based complementary material, to seniors in assisted living residence facilities, community-based centers, and disabled persons in their homes. The project is made possible in part…

  12. A Spectrum of Interoperability: The Site for Science Prototype for the NSDL; Re-Inventing the Wheel? Standards, Interoperability and Digital Cultural Content; Preservation Risk Management for Web Resources: Virtual Remote Control in Cornell's Project Prism; Safekeeping: A Cooperative Approach to Building a Digital Preservation Resource; Object Persistence and Availability in Digital Libraries; Illinois Digital Cultural Heritage Community-Collaborative Interactions among Libraries, Museums and Elementary Schools.

    ERIC Educational Resources Information Center

    Arms, William Y.; Hillmann, Diane; Lagoze, Carl; Krafft, Dean; Marisa, Richard; Saylor, John; Terizzi, Carol; Van de Sompel, Herbert; Gill, Tony; Miller, Paul; Kenney, Anne R.; McGovern, Nancy Y.; Botticelli, Peter; Entlich, Richard; Payette, Sandra; Berthon, Hilary; Thomas, Susan; Webb, Colin; Nelson, Michael L.; Allen, B. Danette; Bennett, Nuala A.; Sandore, Beth; Pianfetti, Evangeline S.

    2002-01-01

    Discusses digital libraries, including interoperability, metadata, and international standards; Web resource preservation efforts at Cornell University; digital preservation at the National Library of Australia; object persistence and availability; collaboration among libraries, museums and elementary schools; Asian digital libraries; and a Web…

  13. Channel Islands National Park vascular plant voucher collections: NPSpecies database

    USGS Publications Warehouse

    Chess, Katherine; McEachern, Kathryn

    2001-01-01

    Collections information for 3898 vascular plant specimens from searches at Santa Barbara Botanic Garden, Rancho Santa Ana Botanic Garden, San Diego Natural History Museum, Los Angeles County Museum of Natural History.

  14. USGS Nonindigenous Aquatic Species database with a focus on the introduced fishes of the lower Tennessee and Cumberland drainages

    USGS Publications Warehouse

    Fuller, Pamela L.; Cannister, Matthew; Johansen, Rebecca; Estes, L. Dwayne; Hamilton, Steven W.; Barrass, Andrew N.

    2013-01-01

    The Nonindigenous Aquatic Species (NAS) database (http://nas.er.usgs.gov) functions as a national repository and clearinghouse for occurrence data for introduced species within the United States. Included is locality information on over 1,100 species of vertebrates, invertebrates, and vascular plants introduced as early as 1850. Taxa include foreign (exotic) species and species native to North America that have been transported outside of their natural range. Locality data are obtained from published and unpublished literature, state, federal and local monitoring programs, museum accessions, on-line databases, websites, professional communications and on-line reporting forms. The NAS web site provides immediate access to new occurrence records through a real-time interface with the NAS database. Visitors to the web site are presented with a set of pre-defined queries that generate lists of species according to state or hydrologic basin of interest. Fact sheets, distribution maps, and information on new occurrences are updated as new records and information become available. The NAS database allows resource managers to learn of new introductions reported in their region or nearby regions, improving response time. Conversely, managers are encouraged to report their observations of new occurrences to the NAS database so information can be disseminated to other managers, researchers, and the public. In May 2004, the NAS database incorporated an Alert System to notify registered users of new introductions as part of a national early detection/rapid response system. Users can register to receive alerts based on geographic or taxonomic criteria. The NAS database was used to identify 23 fish species introduced into the lower Tennessee and Cumberland drainages. 
Most of these are sport fish stocked to support fisheries, but the list also includes accidental and illegal introductions such as Asian Carps, clupeids, various species popular in the aquarium trade, and Atlantic Needlefish (Strongylura marina) that was introduced via the newly-constructed Tennessee-Tombigbee Canal.
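The pre-defined queries described above generate species lists by state or hydrologic basin from occurrence records. As a hedged sketch of that kind of query, using invented records rather than the real NAS database (which is queried through its web site at http://nas.er.usgs.gov):

```python
# Hypothetical sketch of a "species list by state" query over occurrence
# records, in the style the NAS web site describes. Records are invented.
records = [
    {"species": "Cyprinus carpio", "state": "TN"},
    {"species": "Strongylura marina", "state": "TN"},
    {"species": "Cyprinus carpio", "state": "KY"},
]

def species_by_state(records, state):
    """Return the sorted, de-duplicated species list for one state."""
    return sorted({r["species"] for r in records if r["state"] == state})

print(species_by_state(records, "TN"))
# → ['Cyprinus carpio', 'Strongylura marina']
```

De-duplicating before sorting mirrors how a checklist query differs from a raw occurrence query: many records, one line per species.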

  15. The Bibliometric Analysis Of Literature On Museum Studies

    NASA Astrophysics Data System (ADS)

    Kuo, C. W.; Yang, Y. H.

    2015-08-01

    Museum studies is the study of museums, museum curation, and how and why museums developed into their institutional role in education and culture through scientific, social, political, and other related forces. The purpose of this study is to shed light on trends in the international literature on museum studies indexed in the SCIE, SSCI, and AHCI databases between 1995 and 2014, using bibliometric techniques and citation analysis. The results reveal that the influence of the museum-studies literature on other subject areas continues to expand. Considering the publication output of the major countries, subject areas, journals, and institutions, the study also discusses future trends through an analysis of the most-cited articles. Moreover, a list of 12 core journals is identified by Bradford's law.

  16. New Catalog of Resources Enables Paleogeosciences Research

    NASA Astrophysics Data System (ADS)

    Lingo, R. C.; Horlick, K. A.; Anderson, D. M.

    2014-12-01

    The 21st century promises a new era for scientists of all disciplines: an age in which cyberinfrastructure enables research and education and fuels discovery. EarthCube is a working community of over 2,500 scientists and students across many Earth science disciplines who are looking to build bridges between those disciplines. The EarthCube initiative will create a digital infrastructure that connects databases, software, and repositories. A catalog of resources (databases, software, repositories) has been produced by the Research Coordination Network for Paleogeosciences to improve the discoverability of resources. The catalog is currently made available within the larger-scope CINERGI geosciences portal (http://hydro10.sdsc.edu/geoportal/catalog/main/home.page). Other distribution points and web services are planned, using linked data, content services for the web, and XML descriptions that can be harvested using metadata protocols. The databases provide searchable interfaces for finding data sets that would otherwise remain dark data, hidden in drawers and on personal computers. The software is described in catalog entries, so a single click leads users to methods and analytical tools that many geoscientists were unaware of. The repositories listed in the Paleogeosciences Catalog contain physical samples from across the globe, from natural history museums to the basements of university buildings. The catalog now lists over 250 databases, 300 software systems, and 200 repositories, and it will grow in the coming year. When completed, geoscientists across the world will be connected in a productive workflow for managing, sharing, and exploring geoscience data and information, expediting collaboration and innovation within the paleogeosciences and potentially bringing about new interdisciplinary discoveries.

  17. Information Architecture for the Web: The IA Matrix Approach to Designing Children's Portals.

    ERIC Educational Resources Information Center

    Large, Andrew; Beheshti, Jamshid; Cole, Charles

    2002-01-01

    Presents a matrix that can serve as a tool for designing the information architecture of a Web portal in a logical and systematic manner. Highlights include interfaces; metaphors; navigation; interaction; information retrieval; and an example of a children's Web portal to provide access to museum information. (Author/LRW)

  18. WebWise 2.0: The Power of Community. WebWise Conference on Libraries and Museums in the Digital World Proceedings (9th, Miami Beach, Florida, March 5-7, 2008)

    ERIC Educational Resources Information Center

    Green, David

    2009-01-01

    Since it was coined by Tim O'Reilly in formulating the first Web 2.0 Conference in 2004, the term "Web 2.0" has definitely caught on as a designation of a second generation of Web design and experience that emphasizes a high degree of interaction with, and among, users. Rather than simply consulting and reading Web pages, the Web 2.0 generation is…

  19. The Development of a Virtual Dinosaur Museum

    ERIC Educational Resources Information Center

    Tarng, Wernhuar; Liou, Hsin-Hun

    2007-01-01

    The objective of this article is to study the network and virtual reality technologies for developing a virtual dinosaur museum, which provides a Web-learning environment for students of all ages and the general public to know more about dinosaurs. We first investigate the method for building the 3D dynamic models of dinosaurs, and then describe…

  20. Networked Multi-Sensory Experiences: Beyond Browsers on the Web and in the Museum.

    ERIC Educational Resources Information Center

    Wagmister, Fabian; Burke, Jeff

    This paper presents a vision of digital technology for the museum as a dynamic connection-making tool that defines new genres and enables new experiences of existing works. The following media-rich interactive installations and performances developed at the HyperMedia Studio, a digital media research unit in the UCLA (University of California Los…

  1. The Ambiguity of Perception: Virtual Art Museology, Free-Choice Learning, and Children's Art Education

    ERIC Educational Resources Information Center

    Mulligan, Christine Susan

    2010-01-01

    With many art museums uploading web-based art activities for youngsters, an online phenomenon is burgeoning, and a research domain is emerging. In an effort to contribute empirical evidence to an area of educational research that I refer to as "virtual art museology," or the study of art museum's online art activities for young people, this…

  2. The GB/3D Type Fossils Online Web Portal

    NASA Astrophysics Data System (ADS)

    McCormick, T.; Howe, M. P.

    2013-12-01

    Fossils are the remains of once-living organisms that existed and played out their lives in 3-dimensional environments. The information content provided by a 3D representation of a fossil is much greater than that provided by a traditional photograph, and can grab the attention and imagination of the younger and older general public alike. The British Geological Survey has been leading a consortium of UK natural history museums including the Oxford University Museum of Natural History, the Sedgwick Museum Cambridge, the National Museum of Wales Cardiff, and a number of smaller regional British museums to construct a web portal giving access to metadata, high resolution images and interactive 3D models of type fossils from the UK. The web portal at www.3d-fossils.ac.uk was officially launched in August 2013. It can be used to discover metadata describing the provenance, taxonomy, and stratigraphy of the specimens. Zoom-able high resolution digital photographs are available, including, for many specimens, 'anaglyph' stereo images that can be viewed in 3D using red-cyan stereo spectacles. For many of the specimens, interactive 3D models were generated by scanning with portable 'NextEngine 3D HD' 3D scanners. These models can be downloaded in zipped .OBJ and .PLY format from the web portal, or may be viewed and manipulated directly in certain web browsers. The images and scans may be freely downloaded subject to a Creative Commons Attribution ShareAlike Non-Commercial license. There is a simple application programming interface (API) allowing metadata to be downloaded, with links to the images and models, in a standardised format for use in data mash-ups and third party applications. The web portal also hosts 'open educational resources' explaining the process of fossilization and the importance of type specimens in taxonomy, as well as providing introductions to the most important fossil groups. 
We have experimented with using a 3D printer to create replicas of the fossils which can be used in education and public outreach. The audience for the web portal includes both professional paleontologists and the general public. The professional paleontologist can use the portal to discover the whereabouts of the type material for a taxon they are studying, and can use the pictures and 3D models to assess the completeness and preservation quality of the material. This may reduce or negate the need to send specimens (which are often fragile and always irreplaceable) to researchers through the post, or for researchers to make possibly long, expensive and environmentally damaging journeys to visit far-off collections. We hope that the pictures and 3D models will help to stimulate public interest in paleontology and natural history. The ability to digitally image and scan specimens in 3D enables institutions to have an archive record in case specimens are lost or destroyed by accident or warfare. Recent events in Cairo and Baghdad remind us that museum collections are vulnerable to civil and military strife.
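The downloadable models described above are plain Wavefront .OBJ files, a simple line-oriented text format. As a minimal sketch of inspecting such a file once downloaded (the tiny OBJ tetrahedron below is invented, not a real fossil scan from the portal):

```python
# Minimal sketch: reading vertex and face counts from a Wavefront .OBJ
# model, such as those downloadable from a portal like www.3d-fossils.ac.uk.
def obj_stats(obj_text):
    """Return (vertex_count, face_count) for the text of an OBJ file."""
    vertices = faces = 0
    for line in obj_text.splitlines():
        if line.startswith("v "):      # geometric vertex: "v x y z"
            vertices += 1
        elif line.startswith("f "):    # polygonal face: "f i j k ..."
            faces += 1
    return vertices, faces

# Invented sample: a unit tetrahedron (4 vertices, 4 triangular faces).
sample_obj = """\
v 0 0 0
v 1 0 0
v 0 1 0
v 0 0 1
f 1 2 3
f 1 2 4
f 1 3 4
f 2 3 4
"""

print(obj_stats(sample_obj))  # → (4, 4)
```

Counts like these give a quick proxy for scan resolution, which is one way a researcher could triage the "completeness and preservation quality" question before requesting physical material.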

  3. Statistics, Structures & Satisfied Customers: Using Web Log Data to Improve Site Performance.

    ERIC Educational Resources Information Center

    Peacock, Darren

    This paper explores some of the ways in which the National Museum of Australia is using Web analysis tools to shape its future directions in the delivery of online services. In particular, it explores the potential of quantitative analysis, based on Web server log data, to convert these ephemeral traces of user experience into a strategic…

  4. Funnel-web spider bite: a systematic review of recorded clinical cases.

    PubMed

    Isbister, Geoffrey K; Gray, Michael R; Balit, Corrine R; Raven, Robert J; Stokes, Barrie J; Porges, Kate; Tankel, Alan S; Turner, Elizabeth; White, Julian; Fisher, Malcolm McD

    2005-04-18

    To investigate species-specific envenoming rates and spectrum of severity of funnel-web spider bites, and the efficacy and adverse effects of funnel-web spider antivenom. Cases were identified from a prospective study of spider bite presenting to four major hospitals and three state poisons information centres (1999-2003); museum records of spider specimens since 1926; NSW Poisons Information Centre database; MEDLINE and EMBASE search; clinical toxinology textbooks; the media; and the manufacturer's reports of antivenom use. Patient age and sex, geographical location, month, expert identification of the spider, clinical effects and management; envenoming was classified as severe, mild-moderate or minor/local effects. 198 potential funnel-web spider bites were identified: 138 were definite (spider expertly identified to species or genus), and 77 produced severe envenoming. All species-identified severe cases were attributed to one of six species restricted to NSW and southern Queensland. Rates of severe envenoming were: Hadronyche cerberea (75%), H. formidabilis (63%), Atrax robustus (17%), Hadronyche sp. 14 (17%), H. infensa (14%) and H. versuta (11%). Antivenom was used in 75 patients, including 22 children (median dose, 3 ampoules; range, 1-17), with a complete response in 97% of expertly identified cases. Three adverse reactions were reported, all in adults: two early allergic reactions (one mild and one with severe systemic effects requiring adrenaline), and one case of serum sickness. Severe funnel-web spider envenoming is confined to NSW and southern Queensland; tree-dwelling funnel webs (H. cerberea and H. formidabilis) have the highest envenoming rates. Funnel-web spider antivenom appears effective and safe; severe allergic reactions are uncommon.

  5. Layers of Seeing and Seeing through Layers: The Work of Art in the Age of Digital Imagery

    ERIC Educational Resources Information Center

    Ruby, Louisa Wood

    2008-01-01

    In consulting on or creating a Web site designed to use works of art for teaching purposes, it is extremely important to be aware of the differences between seeing an artwork "in the flesh" and in reproduction. Museum educators are highly aware of this disparity and are therefore eager to have students visit museums to experience authentic works…

  6. Experiments in Web Storytelling

    ERIC Educational Resources Information Center

    Levine, Alan

    2011-01-01

    Recognized as one of our oldest yet still vital forms of communication, storytelling offers new opportunity when it takes place on the web. Even our every day activities of writing email, creating presentations, or participating in social media can become more dynamic when considered stories. A digital storyteller from outside the museum field…

  7. Reviews Book: Marie Curie: A Biography Book: Fast Car Physics Book: Beautiful Invisible Equipment: Fun Fly Stick Science Kit Book: Quantum Theory Cannot Hurt You Book: Chaos: The Science of Predictable Random Motion Book: Seven Wonders of the Universe Book: Special Relativity Equipment: LabVIEWTM 2009 Education Edition Places to Visit: Edison and Ford Winter Estates Places to Visit: The Computer History Museum Web Watch

    NASA Astrophysics Data System (ADS)

    2011-07-01

    WE RECOMMEND. Fun Fly Stick Science Kit: the fun fly stick introduces electrostatics to youngsters. Special Relativity: a text that makes a useful addition to the undergraduate study of relativity. LabVIEW(TM) 2009 Education Edition: LabVIEW sets the industry standard for gathering and analysing data, signal processing, instrumentation design and control, and automation and robotics. Edison and Ford Winter Estates: Thomas Edison's home is open to the public. The Computer History Museum: take a walk through technology history at this computer museum. WORTH A LOOK. Fast Car Physics: a book that races through physics. Beautiful Invisible: the main subject of this book is theoretical physics. Quantum Theory Cannot Hurt You: a guide to physics on the large and small scale. Chaos: The Science of Predictable Random Motion: a book that explores the mathematics behind chaotic behaviour. Seven Wonders of the Universe: a textual trip through the wonderful universe. HANDLE WITH CARE. Marie Curie: A Biography: the book fails to capture Curie's science. WEB WATCH: web clips to liven up science lessons.

  8. Traveling Exhibitions: translating current science into effective science exhibitions

    NASA Astrophysics Data System (ADS)

    Dusenbery, P.; Morrow, C.; Harold, J.

    The Space Science Institute (SSI) of Boulder, Colorado, has recently developed two museum exhibits, the Space Weather Center and MarsQuest, and is currently planning two further exhibitions, Cosmic Origins and InterActive Earth. Museum exhibitions give research scientists the opportunity to engage in activities that are vital to the success of Earth and space outreach programs. The Space Weather Center was developed in partnership with various research missions at NASA's Goddard Space Flight Center. The focus of this presentation is the Institute's MarsQuest exhibition, a 5000-square-foot, 2.5M traveling exhibition that is now touring the country. The exhibit's three-year tour is enabling millions of Americans to share in the excitement of the scientific exploration of Mars and to learn more about their own planet in the process. The associated planetarium show and education program will also be described, with particular emphasis on workshops that orient host museum staff (e.g., museum educators and docents). The workshops make innovative connections between the exhibition's interactive experiences and lesson plans aligned with the National Science Education Standards. SSI is also developing an interactive web site called MarsQuest On-line. The linkage between the web site, the education program, and the exhibit will be discussed. MarsQuest and SSI's other exhibitions are good models for actively involving scientists and their discoveries in improving informal science education in the museum community, and for forging a stronger connection between formal and informal education.

  9. Black Holes Traveling Exhibition: This Time, It's Personal.

    NASA Astrophysics Data System (ADS)

    Dussault, Mary E.; Braswell, E. L.; Sunbury, S.; Wasser, M.; Gould, R. R.

    2012-01-01

    How can you make a topic as abstract as black holes seem relevant to the life of the average museum visitor? In 2009, the Harvard-Smithsonian Center for Astrophysics developed a 2500-square-foot interactive museum exhibition, "Black Holes: Space Warps & Time Twists," with funding from the National Science Foundation and NASA. The exhibition has been visited by more than a quarter million museum-goers, and is about to open in its sixth venue at the Reuben H. Fleet Science Center in San Diego, California. We have found that encouraging visitors to adopt a custom black hole explorer's identity can help make the science of black holes more accessible and meaningful. The Black Holes exhibition uses networked exhibit technology that serves to personalize the visitor experience, to support learning over time including beyond the gallery, and to provide a rich quantitative source of embedded evaluation data. Visitors entering the exhibition create their own bar-coded "Black Holes Explorer's Card," which they use throughout the exhibition to collect and record images, movies, their own predictions and conclusions, and other black hole artifacts. This digital database of personal discoveries grows as visitors navigate through the gallery, and an automated web-content authoring system creates a personalized online journal of their experience that they can access once they get home. We report here on intriguing new results gathered from data generated by 112,000 visitors across five different venues. For example, an initial review of the data reveals correlations between visitors' black hole explorer identity choices and their engagement with the exhibition. We will also discuss correlations between learning gains and personalization.

  10. X3DOM as Carrier of the Virtual Heritage

    NASA Astrophysics Data System (ADS)

    Jung, Y.; Behr, J.; Graf, H.

    2011-09-01

    Virtual Museums (VM) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term "VM" is a shortcut that encompasses various types of digital creations. One de-facto standard carrier for communicating virtual heritage on the future internet is the browser front-end that presents the content and assets of museums. A major driving technology for the documentation and presentation of heritage media is real-time 3D content, which imposes new strategies for web inclusion. 3D content must become a first-class web media type that can be created, modified, and shared in the same way as text, images, audio, and video are handled on the web right now. A new integration model based on DOM integration into the web browser's architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for virtual heritage on the future internet. With special regard to the X3DOM project as an enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses its technological requirements for an efficient presentation and manipulation of virtual heritage assets on the web.

  11. Web-Based History Learning Environments: Helping All Students Learn and Like History

    ERIC Educational Resources Information Center

    Okolo, Cynthia M.; Englert, Carol Sue; Bouck, Emily C.; Heutsche, Anne M.

    2007-01-01

    This article explores the benefits of the Internet to enhance history instruction for all learners. The authors describe a Web-based learning environment, the Virtual History Museum (VHM), that helps teachers create motivating, inquiry-based history units. VHM also allows teachers to build supports for learners with disabilities or other learning…

  12. Considerations for Producing Media for Science Museum Exhibits: A Volcano Video Case Study

    NASA Astrophysics Data System (ADS)

    Sable, MFA, J.

    2013-12-01

    While science museums continue to expand their use of videos in exhibits, they are also seeking to add engaging content to their websites in the hope of reaching broader audiences. As a cost-effective way to do both, a project is undertaken to develop a video for a museum website that can easily be adapted for use in an exhibit. To establish goals and constraints for the video, this project explores the needs of museums and their audiences. Past literature is compared with current exhibitions in several U.S. museums. Once identified, the needs of science museums are incorporated into the content, form, and style of the two-part video "Living in Pele's Paradise." Through the story of the spectacular 1959-60 eruption of Kilauea Volcano, Hawai'i, the video shows how research and monitoring contribute to helping communities prepare for volcanic hazards. A 20-minute version of the video is produced for the web, and a 4-minute version is developed for use in a hypothetical science museum exhibit. The two versions of the video provide a cross-platform experience with multiple levels of content depth.

  13. Image Databases.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    Different kinds of pictorial databases are described with respect to aims, user groups, search possibilities, storage, and distribution. Some specific examples are given for databases used for the following purposes: (1) labor markets for artists; (2) document management; (3) telling a story; (4) preservation (archives and museums); (5) research;…

  14. Monitoring the Deterioration of Stone at Mindener MUSEUM'S Lapidarium

    NASA Astrophysics Data System (ADS)

    Pomaska, G.

    2013-07-01

    Mindener Museum's Lapidarium incorporates a collection of stone work, such as reliefs, sculptures, and inscriptions from different epochs, that bears witness to the city's history. These gems must be protected against environmental influences and deterioration. In advance of these measures, a 3D reconstruction and detailed documentation has to be produced, and the hardware and software framework must match the museum's infrastructure. Two major questions will be answered: Are low-cost scanning devices such as depth cameras and off-the-shelf digital cameras suitable for the data acquisition? Does the functionality of open-source and freeware tools cover the demands of investigation and analysis in this application? The working chain described in this contribution covers the structure-from-motion method and reconstruction with RGB-D cameras. Mesh processing such as cleaning, smoothing, Poisson surface reconstruction, and texturing is accomplished with MeshLab. Data acquisition and modelling continue into structural analysis; the focus therefore also lies on the latest software developments related to 3D printing technologies. Repairing and finishing meshes is a task for MeshMixer, while Netfabb, as a tool for positioning, dimensioning, and slicing, enables virtual handling of the items. On the Sketchfab web site one can publish and share 3D objects, with integration into web pages supported by WebGL. Finally, if a prototype is needed, the mesh can be uploaded to a 3D printing device provided by an online service.

  15. CSI Web Adventures: A Forensics Virtual Apprenticeship for Teaching Science and Inspiring STEM Careers

    ERIC Educational Resources Information Center

    Miller, Leslie; Chang, Ching-I; Hoyt, Daniel

    2010-01-01

    CSI: The Experience, a traveling museum exhibit and a companion web adventure, was created through a grant from the National Science Foundation as a potential model for informal learning. The website was designed to enrich and complement the exhibit by modeling the forensic process. Substantive science, real-world lab techniques, and higher-level…

  16. Bringing Terra Science to the People: 10 years of education and public outreach

    NASA Astrophysics Data System (ADS)

    Riebeek, H.; Chambers, L. H.; Yuen, K.; Herring, D.

    2009-12-01

    The default image on Apple's iPhone is a blue, white, green and tan globe: the Blue Marble. The iconic image was produced using Terra data as part of the mission's education and public outreach efforts. As far-reaching and innovative as Terra science has been over the past decade, Terra education and public outreach efforts have been equally successful. This talk will provide an overview of Terra's crosscutting education and public outreach projects, which have reached into educational facilities (classrooms, museums, and science centers), across the Internet, and into everyday life. The Earth Observatory web site was the first web site designed for the public that told the unified story of what we can learn about our planet from all space-based platforms. Initially conceived as part of Terra mission outreach in 1999, the web site has won five Webby awards, the highest recognition a web site can receive. The Visible Earth image gallery is a catalogue of NASA Earth imagery that receives more than one million page views per month. The NEO (NASA Earth Observations) web site and WMS (web mapping service) tool serve global data sets to museums and science centers across the world. Terra educational products, including the My NASA Data web service and the Students' Cloud Observations Online (S'COOL) project, bring Terra data into the classroom. Both projects target multiple grade levels, ranging from elementary school to graduate school. S'COOL uses student observations of clouds to help validate Terra data. Students and their parents have puzzled over weekly "Where on Earth" geography quizzes published online. Perhaps the most difficult group to reach is the large segment of the public that does not seek out science information online or in a science museum or classroom. To reach these people, EarthSky produced a series of podcasts and radio broadcasts that brought Terra science to more than 30 million people in 2009.
Terra imagery, including the Blue Marble, has seen wide distribution in books like Our Changing Planet and films like An Inconvenient Truth. The Blue Marble is courtesy of Reto Stockli and Rob Simmon, NASA's Earth Observatory.

  17. Geo-Caching: Place-Based Discovery of Virginia State Parks and Museums

    ERIC Educational Resources Information Center

    Gray, Howard Richard

    2007-01-01

    The use of Global Positioning Systems (GPS) units has exploded in recent years along with the computer technology to access this data-based information. Geo-caching is an exciting game using GPS that provides place-based information regarding the public lands, facilities and cultural heritage programs within the Virginia Parks and Museum system.…

  18. ARCTOS: a relational database relating specimens, specimen-based science, and archival documentation

    USGS Publications Warehouse

    Jarrell, Gordon H.; Ramotnik, Cindy A.; McDonald, D.L.

    2010-01-01

    Data are preserved when they are perpetually discoverable, but even in the Information Age, discovery of legacy data appropriate to particular investigations is uncertain. Secure Internet storage is necessary but insufficient. Data can be discovered only when they are adequately described, and visibility increases markedly if the data are related to other data that are receiving usage. Such relationships can be built within (1) the framework of a relational database, or (2) among separate resources, within the framework of the Internet. Evolving primarily around biological collections, Arctos is a database that does both of these tasks. It includes data structures for a diversity of specimen attributes, essentially all collection-management tasks, plus literature citations, project descriptions, etc. As a centralized collaboration of several university museums, Arctos is an ideal environment for capitalizing on the many relationships that often exist between items in separate collections. Arctos is related to NIH's DNA-sequence repository (GenBank) with record-to-record reciprocal linkages, and it serves data to several discipline-specific web portals, including the Global Biodiversity Information Facility (GBIF). The University of Alaska Museum's paleontological collection is Arctos's recent extension beyond the constraints of neontology. Arctos now holds about 1.3 million cataloged items, and additional collections are added each year.
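
The record-to-record linkage described in this abstract can be illustrated with a minimal relational sketch. All table names, column names, and sample values below are hypothetical and do not reflect Arctos's actual schema; the point is only how a join resolves a specimen to records in an external repository such as GenBank.

```python
import sqlite3

# Hypothetical, minimal sketch of specimen-to-external-record linkage
# in the style a collections database like Arctos might use.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE specimen (
    guid TEXT PRIMARY KEY,      -- e.g. institution:collection:catalog number
    taxon TEXT,
    locality TEXT
);
CREATE TABLE external_link (
    guid TEXT REFERENCES specimen(guid),
    repository TEXT,            -- e.g. 'GenBank'
    accession TEXT              -- identifier of the record in the remote resource
);
""")
cur.execute("INSERT INTO specimen VALUES (?,?,?)",
            ("UAM:Mamm:12345", "Microtus oeconomus", "Alaska"))
cur.execute("INSERT INTO external_link VALUES (?,?,?)",
            ("UAM:Mamm:12345", "GenBank", "AY123456"))

# A join resolves a specimen to every external record that cites it.
rows = cur.execute("""
SELECT s.taxon, e.repository, e.accession
FROM specimen s JOIN external_link e ON s.guid = e.guid
""").fetchall()
print(rows)
```

Because the link table carries the repository name as data, the same structure supports reciprocal linkages to any number of external resources without schema changes.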

  19. The Ice Stories experience: a researcher's point of view

    NASA Astrophysics Data System (ADS)

    Courville, Z.

    2009-12-01

    Results from four field seasons of participation in the Ice Stories project are presented from the point of view of a correspondent. Ice Stories is an NSF-funded web-based project in which students, researchers, and logistic coordinators contribute media for a web page hosted by the Exploratorium museum in San Francisco, CA. Ice Stories correspondents receive media training from Exploratorium staff as well as from video, photography, writing, and audio experts from outside the museum. The Exploratorium staff helps to edit and post the media provided by the correspondents, who are typically in the field in remote locations. The feedback the correspondent received from on-line blogs and live webcasts is presented as well as the overall experience and impact of participation in the project. Before and after experiences with outreach will be discussed, as well as future plans.

  20. The Qatar National Historic Environment Record: a Platform for the Development of a Fully-Integrated Cultural Heritage Management Application

    NASA Astrophysics Data System (ADS)

    Cuttler, R. T. H.; Tonner, T. W. W.; Al-Naimi, F. A.; Dingwall, L. M.; Al-Hemaidi, N.

    2013-07-01

    The development of the Qatar National Historic Environment Record (QNHER) by the Qatar Museums Authority and the University of Birmingham in 2008 was based on a customised, bilingual Access database and ArcGIS. While both platforms are stable and well supported, neither was designed for the documentation and retrieval of cultural heritage data. As a result it was decided to develop a custom application using Open Source code. The core module of this application is now completed and is orientated towards the storage and retrieval of geospatial heritage data for the curation of heritage assets. Based on MIDAS Heritage data standards and regionally relevant thesauri, it is a truly bilingual system. Significant attention has been paid to the user interface, which is user-friendly and intuitive. Based on a suite of web services and accessed through a web browser, the system makes full use of internet resources such as Google Maps and Bing Maps. The application avoids long-term vendor "tie-ins" and, as a fully integrated data management system, is now an important tool for both cultural resource managers and heritage researchers in Qatar.

  1. Digital Management and Curation of the National Rock and Ore Collections at NMNH, Smithsonian

    NASA Astrophysics Data System (ADS)

    Cottrell, E.; Andrews, B.; Sorensen, S. S.; Hale, L. J.

    2011-12-01

    The National Museum of Natural History, Smithsonian Institution, is home to the world's largest curated rock collection. The collection houses 160,680 physical rock and ore specimen lots ("samples"), all of which already have a digital record that can be accessed by the public through a searchable web interface (http://collections.mnh.si.edu/search/ms/). In addition, there are 66 accessions pending that, when catalogued, will add approximately 60,000 specimen lots. NMNH's collections are digitally managed on the KE EMu platform, which has emerged as the premier system for managing collections in natural history museums worldwide. In 2010 the Smithsonian released an ambitious five-year Digitization Strategic Plan. In Mineral Sciences, new digitization efforts in the next five years will focus on integrating various digital resources for volcanic specimens. EMu sample records will link to the corresponding records for physical eruption information housed within the database of Smithsonian's Global Volcanism Program (GVP). Linkages are also planned between our digital records and geochemical databases (like EarthChem or PetDB) maintained by third parties. We anticipate that these linkages will increase the use of NMNH collections as well as engender new scholarly directions for research. Another large project the museum is currently undertaking involves the integration of the functionality of in-house designed Transaction Management software with the EMu database. This will allow access to the details (borrower, quantity, date, and purpose) of all loans of a given specimen through its catalogue record. We hope this will enable cross-referencing and fertilization of research ideas while avoiding duplicate efforts. While these digitization efforts are critical, we propose that the greatest challenge to sample curation is not posed by digitization and that a global sample registry alone will not ensure that samples are available for reuse.
We suggest instead that the ability of the Earth science community to identify and preserve important collections and make them available for future study is limited by personnel and space resources from the level of the individual PI to the level of national facilities. Moreover, when it comes to specimen "estate planning," the cultural attitudes of scientists, institutions, and funding agencies are often inadequate to provide for long-term specimen curation - even if specimen discovery is enabled by digital registry. Timely access to curated samples requires that adequate resources be devoted to the physical care of specimens (facilities) and to the personnel costs associated with curation - from the conservation, storage, and inventory management of specimens, to the dispersal of samples for research, education, and exhibition.

  2. Digital Debates. WebWise Conference on Libraries and Museums in the Digital World Proceedings (10th, Capitol Hill, Washington, D.C, February 25-27, 2009)

    ERIC Educational Resources Information Center

    Zorich, Diane

    2010-01-01

    Debates typically invoke an image of individuals arguing over the merits of opposing viewpoints. However, the term has a softer, more deliberative sense that connotes reflection, discussion, and consideration. The 2009 WebWise conference, titled "Digital Debates," was conducted in this spirit, with panelists and attendees engaged in…

  3. New model for distributed multimedia databases and its application to networking of museums

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1998-02-01

    This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected through B-ISDNs, and describes an example of networking museums on the basis of the proposed database system. The proposed system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of image databases as one logical database. A user terminal issues a content-retrieval request to the retrieval manager located nearest to it on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment. The generated retrieval parameters are then used to select the most suitable data transfer path on the network, so that the best combination of parameters fits the distributed multimedia database system.
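
The retrieval-manager idea can be sketched in a few lines: the user addresses one logical database, and the manager resolves the request to the physical server holding the content over the cheapest transfer path. Everything below (server names, content identifiers, the cost model) is a hypothetical illustration, not the paper's actual protocol.

```python
# Hedged sketch of a 'retrieval manager' over a set of distributed
# image/video databases presented to the user as one logical database.
# Server names, contents, and path costs are invented for illustration.
SERVERS = {
    "museum-a": {"contents": {"mpeg2:starry-night"}, "path_cost": 3},
    "museum-b": {"contents": {"mpeg2:starry-night", "shd:sunflowers"}, "path_cost": 1},
}

def retrieve(content_id):
    """Return the name of the cheapest server holding content_id, or None.

    In the real system the content would then stream directly from that
    server to the user terminal over the selected B-ISDN path.
    """
    holders = [(info["path_cost"], name)
               for name, info in SERVERS.items()
               if content_id in info["contents"]]
    if not holders:
        return None
    _, best = min(holders)
    return best

print(retrieve("mpeg2:starry-night"))
```

A real retrieval manager would weigh more than a static path cost (link load, server load, media format), but the control flow is the same: resolve, select, then hand off the transfer.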

  4. MRO's High Resolution Imaging Science Experiment (HiRISE) Education And Public Outreach program

    NASA Astrophysics Data System (ADS)

    Gulick, V. C.; Davatzes, A.; McEwen, A.

    2006-12-01

    HiRISE provides an innovative education and public outreach program with a variety of formal and informal educational activities. The centerpiece of HiRISE's E/PO program is its interactive website called HiWeb (http://marsoweb.nasa.nasa.gov/hirise and http://hirise.lpl.arizona.edu). HiWeb provides an image suggestion facility where the public can submit suggestions for HiRISE images and view HiRISE images in context with other available Mars data. HiRISE E/PO has developed K-14 educational materials including activity, coloring and comic books that focus on Mars geology, the image suggestion process, understanding the HiRISE camera, and working with digital image data. In addition, we have developed interactive educational games including Mars crosswords, jigsaws, word searches, and flash cards to provide fun ways for students to learn more about Mars. All educational materials and games are aligned with the National Science Standards. HiRISE Clickworkers will provide online opportunities for the public to assist the team in creating databases of geologic features (gullies, boulders, craters, wind streaks, etc.) present in the HiRISE images, in addition to other innovative opportunities. Web events (including web chats, casts and forums) with HiRISE team members will help inform students and educators about HiRISE capabilities and science goals and provide support for submitting good image suggestions. Educator workshops will be held each year at or near the institution of HiRISE team members. Workshop support materials and instructions for all hands-on activities will be placed on HiWeb to facilitate sharing of information with other educators and the general public. Large-scale displays of HiRISE images will be available at several museums and planetariums.

  5. Online Astronomy Resources from the American Museum of Natural History

    NASA Astrophysics Data System (ADS)

    Steiner, Robert

    2010-02-01

    The American Museum of Natural History, one of the world's largest natural history museums, is the locus of a rich array of scientific research, exhibition and educational resources through its Department of Astrophysics, its Rose Center for Earth and Space and its Hall of Meteorites. For the past decade, the Museum's National Center for Science Literacy, Education and Technology has leveraged these assets to create a panoply of web-based resources for students, teachers and the general public. This session will review several of these resources, including the Digital Universe (a three-dimensional mapping of the Universe); The Solar System (an online graduate course for K-12 teachers); multimedia highlighting searches for exoplanets and ultra-high-energy cosmic rays; Journey to the Stars (a DVD version of the current planetarium show); and the astronomy section of Ology (a website for children ages 7 and up). A copy of the Journey to the Stars DVD will be provided to all attendees.

  6. A Tactical Framework for Cyberspace Situational Awareness

    DTIC Science & Technology

    2010-06-01

    Command & Control: 1. VOIP Telephone; 2. Internet Chat; 3. Web App (TBMCS); 4. Email; 5. Web App (PEX); 6. Database (CAMS); 7. Database (ARMS); 8. Database (LogMod); 9. Resource (WWW); 10. Application (PFPS). Mission Planning: 1. Application (PFPS); 2. Email; 3. Web App (TBMCS); 4. Internet Chat; ... 1. Web App (PEX); 2. Database (ARMS); 3. Web App (TBMCS); 4. Email; 5. Database (CAMS); 6. VOIP Telephone; 7. Application (PFPS); 8. Internet Chat; 9. ...

  7. Reviews Book: The 4% Universe: Dark Matter, Dark Energy and the Race to Discover the Rest of Reality Book: Quantitative Understanding of Biosystems: An Introduction to Biophysics Book: Edison's Electric Light: The Art of Invention Book: The Edge of Physics: Dispatches from the Frontiers of Cosmology Equipment: Voicebox Equipment: Tracker 4 Books: Hands-On Introduction to NI LabVIEW with Vernier, and Engineering Projects with NI LabVIEW and Vernier Places to Visit: Discovery Museum Book: Philosophy of Science: A Very Short Introduction Web Watch

    NASA Astrophysics Data System (ADS)

    2011-11-01

    WE RECOMMEND: Quantitative Understanding of Biosystems: An Introduction to Biophysics (text applies physics to biology concepts); Edison's Electric Light: The Art of Invention (Edison's light still shines brightly); The Edge of Physics: Dispatches from the Frontiers of Cosmology (anecdotes explore cosmology); Voicebox (kit discovers the physics and evolution of speech); Tracker 4 (free software tracks motion analysis); Hands-On Introduction to NI LabVIEW with Vernier, and Engineering Projects with NI LabVIEW and Vernier (books support the LabVIEW software); Discovery Museum (Newcastle museum offers science enjoyment for all); Philosophy of Science: A Very Short Introduction (philosophy opens up science questions). WORTH A LOOK: The 4% Universe: Dark Matter, Dark Energy and the Race to Discover the Rest of Reality (book researches the universe). WEB WATCH: superconductivity websites are popular.

  8. The Virtual Arizona Experience

    NASA Astrophysics Data System (ADS)

    Allison, M. L.; Davis, R.; Conway, F. M.; Bellasai, R.

    2012-12-01

    To commemorate the once-in-a-lifetime event of Arizona's hundredth birthday, the Centennial Commission and the Governor of Arizona envisioned a museum and companion website that would capture the state's history, celebrate its people, and embrace its future. Working with world-renowned museum designers, the state began to seek ideas from across Arizona to create plans for a journey of discovery through science and the humanities. The museum would introduce visitors to some of the people who nurtured the state through its early years and others who are innovating its tomorrows. Showcases would include the resources and experiences that shaped the state's history and are transforming its present day, highlighting the ingenuity that tamed the wild frontier and is envisioning Arizona's next frontiers through science and technology. The Arizona Experience (www.arizonaexperience.org) was initially intended to serve as the web presence for the physical museum, but as delays occurred with the physical museum, the site has quickly developed an identity of its own as an interactive, multimedia experience, reaching a wider audience with functions that would be difficult or expensive to produce in a museum. As leaders in scientific and technological innovation in the state, the Arizona Geological Survey was tasked with designing and creating the Arizona Experience site. The general themes remain the same; however, the site has added content and applications that are better suited to the online environment in order to create a rich, dynamic supplement to a physical museum experience. The website offers the features and displays of the future museum with the interactive nature and learning environment of the web. This provides an encyclopedic overview of the State of Arizona by subject-matter experts in a manner that is free and open to the public and erases socio-economic, political, and physical boundaries.
Over the Centennial Year of 2012, the site will release a series of themes exploring the people, land, and innovations that shape the state. Themes include (in order of release) Celebrates, Mining & Minerals, Biotech & Life Sciences, Sports & Recreation, Energy, Water, Technology & Aerospace, People & Culture, Ranching & Agriculture, Native American Culture, Astronomy, 21st Century Workforce, and a Best of 2012 release. The materials developed for the site come from content-matter experts across the state, including academic institutions, historical societies, museums, and professional associations. Currently there are over 300 content providers contributing resources, data, and videos to the site. AZGS interactions with science and technology organizations, associations, and businesses have been critical as we work to engage visitors and industry with the opportunities in Arizona, and to translate innovative research and scientific application for a more generalized audience. In addition, we are involving K-12 educators in using the site content and cutting-edge technology to develop classroom STEM-related content linked to curriculum subject areas.

  9. "Ask a slave" and interpreting race on public history's front line: interview with Azie Mira Dungey.

    PubMed

    Dungey, Azie Mira; Tyson, Amy M

    2014-02-01

    In this interview, Azie Mira Dungey (creator of the web series, "Ask a Slave") and Amy M. Tyson (Associate Professor of History at DePaul University and author of The Wages of History: Emotional Labor on Public History's Front Lines) discuss Dungey's web series, as well as her experiences as a living history interpreter at both the Smithsonian Museum of American History and at Mount Vernon.

  10. Recent Developments in Cultural Heritage Image Databases: Directions for User-Centered Design.

    ERIC Educational Resources Information Center

    Stephenson, Christie

    1999-01-01

    Examines the Museum Educational Site Licensing (MESL) Project--a cooperative project between seven cultural heritage repositories and seven universities--as well as other developments of cultural heritage image databases for academic use. Reviews recent literature on image indexing and retrieval, interface design, and tool development, urging a…

  11. SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.

    PubMed

    Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan

    2014-08-15

    Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the just recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably facilitates usability as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
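
The core idea behind a graphical query builder of this kind can be sketched briefly: a node-and-edge diagram is serialized into a SPARQL query string. The helper below is a hypothetical illustration of that conversion step, not SPARQLGraph's actual code; the predicate names and prefixes in the example are likewise assumed.

```python
# Hedged sketch of converting a visual query graph into SPARQL, in the
# spirit of a drag-and-drop builder like SPARQLGraph. Names are invented.
def graph_to_sparql(edges, select):
    """edges: (subject, predicate, object) triples, where tokens starting
    with '?' are variables; select: list of variables to project."""
    patterns = " .\n  ".join(f"{s} {p} {o}" for s, p, o in edges)
    return f"SELECT {' '.join(select)}\nWHERE {{\n  {patterns} .\n}}"

query = graph_to_sparql(
    edges=[("?gene", "rdfs:label", "?name"),
           ("?gene", "up:organism", "taxon:9606")],
    select=["?gene", "?name"],
)
print(query)
```

The generated string would then be submitted to a public SPARQL endpoint, which is exactly the step such tools hide from users who do not know the query language or database structure.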

  12. The type material of Mantodea (praying mantises) deposited in the National Museum of Natural History, Smithsonian Institution, USA

    PubMed Central

    Svenson, Gavin J.

    2014-01-01

    Abstract The collection of Mantodea of the National Museum of Natural History, Smithsonian Institution, includes 26 holotypes, 7 allotypes, 4 lectotypes, 23 paratypes, and 1 paralectotype. Four type specimens were designated as lectotypes within this work. Highly accurate measurement data, high resolution images of specimens and labels, verbatim label data, georeferenced coordinates, original and newly assigned database codes, and bibliographic data are presented for all primary types. Label data for all paratype specimens in the collection are provided in tabular form. The location of the USNM collection has been moved to the Cleveland Museum of Natural History as a loan under the Off-site Enhancement Program. PMID:25152673

  13. Computerization of the Arkansas Fishes Database

    Treesearch

    Henry W. Robison; L. Gayle Henderson; Melvin L. Warren; Janet S. Rader

    2004-01-01

    Abstract - Until recently, distributional data for the fishes of Arkansas existed in the form of museum records, field notebooks of various ichthyologists, and published fish survey data; none of which was in a digital format. In 1995, a relational database system was used to design a PC platform data entry module for the capture of information on...

  14. Just-in-time Database-Driven Web Applications

    PubMed Central

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  15. Forging Educational Partnerships Between Science Centers and Ocean, Earth and Atmospheric Scientists

    NASA Astrophysics Data System (ADS)

    Miller, M. K.

    2006-12-01

    When most people think about science education, they usually consider classrooms as ideal venues for communicating and disseminating knowledge. But most learning that we humans engage in happens outside of the classroom and after we finish our formal education. That is where informal science education picks up the ball. The forums for these learning opportunities are diverse: museum exhibits, the Web, documentaries, and after-school settings are becoming increasingly important as venues to keep up with the ever-changing world of science. The Exploratorium and other science centers act as transformers between the world of science and the public. As such they are ideal partners for scientists who would like to reach a large and diverse audience of families, adults, teens, and teachers. In this session, Senior Science Producer Mary Miller will discuss the ways that the Exploratorium engages working scientists in helping the museum-going public and Web audiences understand the process and results of scientific research.

  16. The Virtual Museum for Meteorites

    NASA Astrophysics Data System (ADS)

    Madiedo, J. M.

    2012-09-01

    Meteorites play a fundamental role in education and outreach, as these samples of extraterrestrial material are very valuable tools to promote the public's interest in Astronomy and Planetary Sciences. Meteorite exhibitions, for instance, reveal how strongly students, educators and even researchers are fascinated by these peculiar rocks, and how the rocks can provide information that helps answer many fundamental questions about the origin and evolution of our Solar System. However, despite the efforts of private collectors, museums and other institutions to organize meteorite exhibitions, their reach is usually limited. This issue can be addressed with Internet technologies: HTML and related standards make it possible to overcome local boundaries and offer these exhibitions to a global audience. With this aim, a Virtual Museum for Meteorites has been created; a description of this web-based tool is given here.

  17. 76 FR 1137 - Publicly Available Consumer Product Safety Information Database: Notice of Public Web Conferences

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ...: Notice of Public Web Conferences AGENCY: Consumer Product Safety Commission. ACTION: Notice. SUMMARY: The Consumer Product Safety Commission (``Commission,'' ``CPSC,'' or ``we'') is announcing two Web conferences... database (``Database''). The Web conferences will be webcast live from the Commission's headquarters in...

  18. 75 FR 3210 - Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-20

    ... scheduled for 21 January 2010, at 10 a.m. in the Commission offices at the National Building Museum, Suite... buildings, parks and memorials. Draft agendas and additional information regarding the Commission are available on our Web site: http://www.cfa.gov . Inquiries regarding the agenda and requests to submit...

  19. 78 FR 6813 - Notice of Meeting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-31

    ... scheduled for 21 February 2013, at 10:00 a.m. in the Commission offices at the National Building Museum... include buildings, parks, and memorials. Draft agendas and additional information regarding the Commission are available on our Web site: www.cfa.gov . Inquiries regarding the agenda and requests to submit...

  20. Understanding Evolution: An Evolution Website for Teachers

    ERIC Educational Resources Information Center

    Scotchmoor, Judy; Janulaw, Al

    2005-01-01

    While many states are facing challenges to the teaching of evolution in their science classrooms, the University of California Museum of Paleontology, working with the National Center for Science Education, has developed a useful web-based resource for science teachers of all grade- and experience-levels. Understanding Evolution (UE) was developed…

  1. A Trip to Historic Philadelphia on the Web.

    ERIC Educational Resources Information Center

    Wilson, Elizabeth K.

    1997-01-01

    Describes an electronic field trip to colonial Philadelphia (Pennsylvania). The historic locale has generated enough websites (Philadelphia Historic District, Betsy Ross homepage, and the Franklin Institute Science Museum) for students to take a virtual tour of the colonial capital. Suggests structuring the activity as a know-want-learn (KWL)…

  2. Lewis and Clark as Naturalists.

    ERIC Educational Resources Information Center

    Smithsonian Institution, Washington, DC. National Museum of Natural History.

    Intended for use in elementary and high school education, this Web site includes a teacher's guide and three lesson plans. The site contains images of museum specimens, scientific drawings, and field photos of the plant and animal species observed by Meriwether Lewis and William Clark, along with journal excerpts, historical notes, and references…

  3. MarsQuest: Bringing the Excitement of Mars Exploration to the Public

    NASA Astrophysics Data System (ADS)

    Dusenbery, P. B.; Morrow, C. A.; Harold, J. B.; Klug, S. L.

    2002-12-01

    We are living in an extraordinary era of Mars exploration. NASA's Odyssey spacecraft has recently discovered vast amounts of hydrogen beneath the surface of Mars, suggesting the presence of sub-surface ice. Two Mars Exploration Rovers are scheduled to land in early 2004. To bring the excitement and discoveries of Mars exploration to the public, the Space Science Institute (SSI) of Boulder, CO, has developed a comprehensive Mars Education Program that includes: 1) large and small traveling exhibits, 2) workshops for museum and classroom educators (in partnership with the Mars Education Program at Arizona State University (ASU)), and 3) an interactive Website called MarsQuest Online (in partnership with TERC and JPL). All three components will be presented and offered as a good model for actively involving scientists and their discoveries to improve science education in museums and the classroom. The centerpiece of SSI's Mars Education Program is the 5,000-square-foot traveling exhibition, MarsQuest: Exploring the Red Planet, which was developed with support from the National Science Foundation (NSF), NASA, and several corporate donors. The MarsQuest exhibit is nearing the end of a highly successful, fully-booked three-year tour. The Institute plans to send an enhanced and updated MarsQuest on a second three-year tour and is also developing Destination: Mars, a mini-version of MarsQuest designed for smaller venues. Workshops for museum educators, docents, and local teachers are conducted at host sites. These workshops were developed collaboratively by Dr. Cheri Morrow, SSI's Education and Public Outreach Manager, and Sheri Klug, Director of the Mars K-12 Education Program at ASU. They are designed to inspire and empower participants to extend the excitement and science content of the exhibitions into classrooms and museum-based education programs in an ongoing fashion. 
The MarsQuest Online project is developing a Website that will use the MarsQuest exhibit as a context for online interactives that delve deeper into Mars science. This project, supported by NSF, will explore the potential for in-depth, Web-based studies that extend museum exhibit content onto the Web.

  4. MarsQuest: Bringing the Excitement of Mars Exploration to the Public

    NASA Astrophysics Data System (ADS)

    Dusenbery, P. B.; Morrow, C. A.; Harold, J. B.; Klug, S. L.

    2002-09-01

    We are living in an extraordinary era of Mars exploration. NASA's Odyssey spacecraft has recently discovered vast amounts of hydrogen beneath the surface of Mars, suggesting the presence of sub-surface ice. Two Mars Exploration Rovers are scheduled to land in early 2004. To bring the excitement and discoveries of Mars exploration to the public, the Space Science Institute (SSI) of Boulder, CO, has developed a comprehensive Mars Education Program that includes: 1) large and small traveling exhibits, 2) workshops for museum and classroom educators (in partnership with the Mars Education Program at Arizona State University (ASU)), and 3) an interactive Website called MarsQuest Online (in partnership with TERC and JPL). All three components will be presented and offered as a good model for actively involving scientists and their discoveries to improve science education in museums and the classroom. The centerpiece of SSI's Mars Education Program is the 5,000-square-foot traveling exhibition, MarsQuest: Exploring the Red Planet, which was developed with support from the National Science Foundation (NSF), NASA, and several corporate donors. The MarsQuest exhibit is nearing the end of a highly successful, fully-booked three-year tour. The Institute plans to send an enhanced and updated MarsQuest on a second three-year tour and is also developing Destination: Mars, a mini-version of MarsQuest designed for smaller venues. Workshops for museum educators, docents, and local teachers are conducted at host sites. These workshops were developed collaboratively by Dr. Cheri Morrow, SSI's Education and Public Outreach Manager, and Sheri Klug, Director of the Mars K-12 Education Program at ASU. They are designed to inspire and empower participants to extend the excitement and science content of the exhibitions into classrooms and museum-based education programs in an ongoing fashion. 
The MarsQuest Online project is developing a Website that will use the MarsQuest exhibit as a context for online interactives that delve deeper into Mars science. This project, supported by NSF, will explore the potential for in-depth, Web-based studies that extend museum exhibit content onto the Web.

  5. Insect Pests and Integrated Pest Management in Museums, Libraries and Historic Buildings

    PubMed Central

    Querner, Pascal

    2015-01-01

    Insect pests are responsible for substantial damage to museum objects, historic books and buildings such as palaces or historic houses. Different wood-boring beetles (Anobium punctatum, Hylotrupes bajulus, Lyctus sp. or introduced species), the biscuit beetle (Stegobium paniceum), the cigarette beetle (Lasioderma serricorne), different dermestids (Attagenus sp., Anthrenus sp., Dermestes sp., Trogoderma sp.), moths like the webbing clothes moth (Tineola bisselliella), silverfish (Lepisma saccharina) and booklice (Psocoptera) can damage materials, objects or building parts. They are the most common pests found in collections in central Europe, but most of them are distributed all over the world. In tropical countries, termites, cockroaches and other insect pests are also found; they cause even greater damage to wood and paper, or are a common nuisance in buildings. In this short review, an introduction to Integrated Pest Management (IPM) in museums is given, and the most valuable collections, preventive measures, monitoring in museums, staff responsible for IPM and chemical-free treatment methods are described. In the second part of the paper, the most important insect pests occurring in museums, archives, libraries and historic buildings in central Europe are discussed, with a description of the materials and object types that are most often infested and damaged. Some information on their phenology and biology is highlighted, as it can be used in the IPM concept against them. PMID:26463205

  6. Insect Pests and Integrated Pest Management in Museums, Libraries and Historic Buildings.

    PubMed

    Querner, Pascal

    2015-06-16

    Insect pests are responsible for substantial damage to museum objects, historic books and buildings such as palaces or historic houses. Different wood-boring beetles (Anobium punctatum, Hylotrupes bajulus, Lyctus sp. or introduced species), the biscuit beetle (Stegobium paniceum), the cigarette beetle (Lasioderma serricorne), different dermestids (Attagenus sp., Anthrenus sp., Dermestes sp., Trogoderma sp.), moths like the webbing clothes moth (Tineola bisselliella), silverfish (Lepisma saccharina) and booklice (Psocoptera) can damage materials, objects or building parts. They are the most common pests found in collections in central Europe, but most of them are distributed all over the world. In tropical countries, termites, cockroaches and other insect pests are also found; they cause even greater damage to wood and paper, or are a common nuisance in buildings. In this short review, an introduction to Integrated Pest Management (IPM) in museums is given, and the most valuable collections, preventive measures, monitoring in museums, staff responsible for IPM and chemical-free treatment methods are described. In the second part of the paper, the most important insect pests occurring in museums, archives, libraries and historic buildings in central Europe are discussed, with a description of the materials and object types that are most often infested and damaged. Some information on their phenology and biology is highlighted, as it can be used in the IPM concept against them.

  7. The city model as a tool for participatory urban planning - a case study: The Bilotti open air museum of Cosenza

    NASA Astrophysics Data System (ADS)

    Artese, S.

    2014-05-01

    The paper describes the implementation of the 3D city model of the pedestrian area of Cosenza, which in recent years has become the Bilotti Open Air Museum (MAB). For this purpose, both available data (regional technical map, city maps, orthophotos) and data acquired through several surveys of the buildings and "Corso Mazzini" street (photos, topographic measurements, laser scanner point clouds) were used. In addition to the urban-scale model, the statues of the MAB were surveyed. By processing these data, models of the statues were created that can be used as objects within the city model. The 3D model of the MAB open-air museum has been used to implement a Web-GIS that enables citizens' participation, understanding and suggestions. The 3D city model is intended as a new tool for urban planning; it has therefore been used both to represent the current situation of the MAB and for design purposes, by gathering suggestions regarding a possible different location of the statues and new ways to enjoy the museum.

  8. Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences

    PubMed Central

    Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi

    2006-01-01

    Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pilot project that illustrates how neuroscience databases can be expressed using the Web Ontology Language, which is a semantically-rich ontological language, as a common data representation language to facilitate complex cross-database queries. In this pilot project, an existing tool called “D2RQ” was used to translate two neuroscience databases (NeuronDB and CoCoDat) into OWL, and the resulting OWL ontologies were then merged. An OWL-based reasoner (Racer) was then used to provide a sophisticated query language (nRQL) to perform integrated queries across the two databases based on the merged ontology. This pilot project is one step toward exploring the use of semantic web technologies in the neurosciences. PMID:17238384
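    The merge-then-query idea above (translate each database to a common triple representation, merge, then query across the merged graph) can be illustrated schematically. The real pipeline used D2RQ to emit OWL and the Racer reasoner's nRQL query language; the sketch below stands in for that with plain Python tuples as RDF-style triples and a comprehension as the "query". All triples are invented for illustration and are not actual NeuronDB or CoCoDat content.

```python
# Schematic stand-in for the OWL merge-and-query workflow described
# above: each source database becomes a set of (subject, predicate,
# object) triples, merging is set union, and a cross-database query
# collects everything known about one subject. Data is invented.
neurondb = {
    ("PurkinjeCell", "hasNeurotransmitter", "GABA"),
    ("PurkinjeCell", "locatedIn", "Cerebellum"),
}
cocodat = {
    ("PurkinjeCell", "hasIonChannel", "CaV2.1"),
}

merged = neurondb | cocodat  # union plays the role of ontology merging

def about(subject, graph):
    """All (predicate, object) pairs known for a subject across sources."""
    return sorted((p, o) for s, p, o in graph if s == subject)
```

    The payoff mirrors the paper's point: once both sources share one representation, a single query sees facts that neither database holds alone.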

  9. Reviews

    NASA Astrophysics Data System (ADS)

    2007-05-01

    WE RECOMMEND: The Snowman's Coat and Other Science Questions (a book of problem-solving for the very young); An Inconvenient Truth (Al Gore's climate change slide show, brought to DVD); OCR Physics for GCSE (a thorough but interesting high-level textbook); 21st Century Science GCSE Resources (a full range of resources to suit teacher and pupil). WORTH A LOOK: Thinking Without Objects: The Transformation of Mechanics in the Seventeenth Century (this book covers a fruitful period in the history of science); Paper Rollercoasters (an intriguing teaching resource that uses paper and sticky tape). HANDLE WITH CARE: STEAM, Museum of the Great Western Railway (the Swindon museum proves lacking in education resources). WEB WATCH: Games and simulations demonstrating moments and levers.

  10. iRefWeb: interactive analysis of consolidated protein interaction data and their supporting evidence

    PubMed Central

    Turner, Brian; Razick, Sabry; Turinsky, Andrei L.; Vlasblom, James; Crowdy, Edgard K.; Cho, Emerson; Morrison, Kyle; Wodak, Shoshana J.

    2010-01-01

    We present iRefWeb, a web interface to protein interaction data consolidated from 10 public databases: BIND, BioGRID, CORUM, DIP, IntAct, HPRD, MINT, MPact, MPPI and OPHID. iRefWeb enables users to examine aggregated interactions for a protein of interest, and presents various statistical summaries of the data across databases, such as the number of organism-specific interactions, proteins and cited publications. Through links to source databases and supporting evidence, researchers may gauge the reliability of an interaction using simple criteria, such as the detection methods, the scale of the study (high- or low-throughput) or the number of cited publications. Furthermore, iRefWeb compares the information extracted from the same publication by different databases, and offers means to follow-up possible inconsistencies. We provide an overview of the consolidated protein–protein interaction landscape and show how it can be automatically cropped to aid the generation of meaningful organism-specific interactomes. iRefWeb can be accessed at: http://wodaklab.org/iRefWeb. Database URL: http://wodaklab.org/iRefWeb/ PMID:20940177

  11. On Building a Search Interface Discovery System

    NASA Astrophysics Data System (ADS)

    Shestakov, Denis

    A huge portion of the Web, known as the deep Web, is accessible only via search interfaces to myriad databases on the Web. While relatively good approaches for querying the contents of web databases have recently been proposed, they cannot be fully exploited while most search interfaces remain undiscovered. Thus, the automatic recognition of search interfaces to online databases is crucial for any application accessing the deep Web. This paper describes the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep-web characterization surveys and for constructing directories of deep-web resources.
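    To make "recognizing a search interface" concrete, here is a deliberately simple heuristic of the kind such a crawler might apply as a first pass: flag any page whose HTML contains a form with a free-text input. This rule is an assumption for illustration only; the I-Crawler's actual classifier is richer than this.

```python
# Toy search-interface detector: a page is a candidate deep-web entry
# point if it has a <form> containing a text or search input. This is
# an illustrative heuristic, not the I-Crawler's real classifier.
from html.parser import HTMLParser

class SearchFormDetector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.found = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.in_form = True
        elif tag == "input" and self.in_form:
            # An <input> with no type attribute defaults to a text box.
            if a.get("type", "text") in ("text", "search"):
                self.found = True

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

def looks_like_search_interface(html):
    detector = SearchFormDetector()
    detector.feed(html)
    return detector.found
```

    Even this crude rule separates query forms from, say, hidden-field-only forms, which hints at why form structure is a useful classification signal.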

  12. If You Build It, They Will Scan: Oxford University's Exploration of Community Collections

    ERIC Educational Resources Information Center

    Lee, Stuart D.; Lindsay, Kate

    2009-01-01

    Traditional large digitization projects demand massive resources from the central unit (library, museum, or university) that has acquired funding for them. Another model, enabled by easy access to cameras, scanners, and web tools, calls for public contributions to community collections of artifacts. In 2009, the University of Oxford ran a…

  13. 78 FR 15738 - Notice of Availability of Final General Management Plan/Environmental Impact Statement for Effigy...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-12

    ... Planning, Environment, and Public Comment Web site at http://www.parkplanning.nps.gov/indu.efmo . FOR... resources, and museum collections), to natural resources (soils, wild and scenic rivers, vegetation, fish..., to the socioeconomic environment, and to EFMO operations and facilities. The preferred alternative in...

  14. Digital Media and Technology in Afterschool Programs, Libraries, and Museums

    ERIC Educational Resources Information Center

    Herr-Stephenson, Becky; Rhoten, Diana; Perkel, Dan; Sims, Christo

    2011-01-01

    Digital media and technology have become culturally and economically powerful parts of contemporary middle-class American childhoods. Immersed in various forms of digital media as well as mobile and Web-based technologies, young people today appear to develop knowledge and skills through participation in media. This MacArthur Report examines the…

  15. The Canon, the Web, and the Long Tail

    ERIC Educational Resources Information Center

    Sanderhoff, Merete

    2017-01-01

    This article argues that releasing images of artworks into the public domain creates a new possibility for the public to challenge the canon or create their own, based on access to previously inaccessible images. Through the dissemination of openly licensed artworks across the Internet, museums can support the public in expanding their engagement…

  16. Business Faculty Research: Satisfaction with the Web versus Library Databases

    ERIC Educational Resources Information Center

    Dewald, Nancy H.; Silvius, Matthew A.

    2005-01-01

    Business faculty members teaching at undergraduate campuses of the Pennsylvania State University were surveyed in order to assess their satisfaction with free Web sources and with subscription databases for their professional research. Although satisfaction with the Web's ease of use was higher than that for databases, overall satisfaction for…

  17. A Web-Based Database for Nurse Led Outreach Teams (NLOT) in Toronto.

    PubMed

    Li, Shirley; Kuo, Mu-Hsing; Ryan, David

    2016-01-01

    A web-based system can provide access to real-time data and information. Healthcare is moving towards digitizing patients' medical information and securely exchanging it through web-based systems. In one of Ontario's health regions, Nurse Led Outreach Teams (NLOT) provide emergency mobile nursing services to help reduce unnecessary transfers from long-term care homes to emergency departments. Currently the NLOT team uses a Microsoft Access database to keep track of the health information on the residents that they serve. The Access database lacks scalability, portability, and interoperability. The objective of this study is the development of a web-based database using Oracle Application Express that is easily accessible from mobile devices. The web-based database will allow NLOT nurses to enter and access resident information anytime and from anywhere.

  18. Artist Material BRDF Database for Computer Graphics Rendering

    NASA Astrophysics Data System (ADS)

    Ashbaugh, Justin C.

    The primary goal of this thesis was to create a physical library of artist material samples. This collection provides necessary data for the development of a gonio-imaging system for use in museums to more accurately document their collections. A sample set was produced consisting of 25 panels and containing nearly 600 unique samples. Selected materials are representative of those commonly used by artists both past and present. These take into account the variability in visual appearance resulting from the materials and application techniques used. Five attributes of variability were identified including medium, color, substrate, application technique and overcoat. Combinations of these attributes were selected based on those commonly observed in museum collections and suggested by surveying experts in the field. For each sample material, image data is collected and used to measure an average bi-directional reflectance distribution function (BRDF). The results are available as a public-domain image and optical database of artist materials at art-si.org. Additionally, the database includes specifications for each sample along with other information useful for computer graphics rendering such as the rectified sample images and normal maps.

  19. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases.

    PubMed

    Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel

    2013-04-15

    In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
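    The central mechanical step above, turning an annotated relational schema into a SPARQL query over the RDF view, can be sketched in a few lines. The mapping below (a table name plus column-to-ontology-property annotations) and the example URIs are invented for illustration; BioSemantic's real annotations use SAWSDL and domain ontologies, and its generated queries are richer than this.

```python
# Toy sketch of automatic SPARQL generation: given columns annotated
# with ontology property URIs, emit a SELECT over the RDF view.
# Table, columns and URIs are hypothetical examples.
def sparql_for_columns(table, annotations):
    """annotations maps column name -> ontology property URI."""
    select_vars = " ".join(f"?{col}" for col in annotations)
    patterns = "\n".join(
        f"  ?{table} <{uri}> ?{col} ." for col, uri in annotations.items()
    )
    return f"SELECT {select_vars}\nWHERE {{\n{patterns}\n}}"

query = sparql_for_columns(
    "gene",
    {"name": "http://example.org/onto#label",
     "chromosome": "http://example.org/onto#locatedOn"},
)
```

    The point is that once each column carries a semantic annotation, query construction becomes purely mechanical, which is what lets the framework wrap a database without hand-written queries.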

  20. Clever generation of rich SPARQL queries from annotated relational schema: application to Semantic Web Service creation for biological databases

    PubMed Central

    2013-01-01

    Background In recent years, a large amount of “-omics” data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. Results We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. Conclusions BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic. PMID:23586394

  1. Facilitating Collaboration, Knowledge Construction and Communication with Web-Enabled Databases.

    ERIC Educational Resources Information Center

    McNeil, Sara G.; Robin, Bernard R.

    This paper presents an overview of World Wide Web-enabled databases that dynamically generate Web materials and focuses on the use of this technology to support collaboration, knowledge construction, and communication. Database applications have been used in classrooms to support learning activities for over a decade, but, although business and…

  2. Implementing a Dynamic Database-Driven Course Using LAMP

    ERIC Educational Resources Information Center

    Laverty, Joseph Packy; Wood, David; Turchek, John

    2011-01-01

    This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…

  3. A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2000-01-01

    Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)

  4. Web Database Development: Implications for Academic Publishing.

    ERIC Educational Resources Information Center

    Fernekes, Bob

    This paper discusses the preliminary planning, design, and development of a pilot project to create an Internet accessible database and search tool for locating and distributing company data and scholarly work. Team members established four project objectives: (1) to develop a Web accessible database and decision tool that creates Web pages on the…

  5. Digging Deeper: The Deep Web.

    ERIC Educational Resources Information Center

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  6. Do-It-Yourself: A Special Library's Approach to Creating Dynamic Web Pages Using Commercial Off-The-Shelf Applications

    NASA Technical Reports Server (NTRS)

    Steeman, Gerald; Connell, Christopher

    2000-01-01

    Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from the time needed to populate the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process, including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Structured Query Language (SQL). This paper also offers tips on editing Active Server Pages (ASP) scripts to create desired results. A how-to annotated resource list closes out the paper.

  7. WebEAV: automatic metadata-driven generation of web interfaces to entity-attribute-value databases.

    PubMed

    Nadkarni, P M; Brandt, C M; Marenco, L

    2000-01-01

    The task of creating and maintaining a front end to a large institutional entity-attribute-value (EAV) database can be cumbersome when using traditional client-server technology. Switching to Web technology as a delivery vehicle solves some of these problems but introduces others. In particular, Web development environments tend to be primitive, and many features that client-server developers take for granted are missing. WebEAV is a generic framework for Web development that is intended to streamline the process of Web application development for databases having a significant EAV component. It also addresses some challenging user interface issues that arise when any complex system is created. The authors describe the architecture of WebEAV and provide an overview of its features with suitable examples.
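    The entity-attribute-value layout that a WebEAV-style front end sits on top of is easy to show in miniature: one row per (entity, attribute, value) fact, pivoted back into a record at query time. The schema and data below are invented for illustration and are not WebEAV's actual structures.

```python
# Minimal EAV illustration: facts are stored one per row, and a
# record is reassembled by pivoting an entity's rows into a dict.
# Schema and data are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")
con.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    ("patient:1", "name", "Doe"),
    ("patient:1", "systolic_bp", "128"),
    ("patient:2", "name", "Roe"),
])

def record(entity):
    """Pivot the EAV rows for one entity back into an attribute dict."""
    rows = con.execute(
        "SELECT attribute, value FROM eav WHERE entity = ?", (entity,)
    )
    return dict(rows.fetchall())
```

    The flexibility (new attributes need no schema change) is exactly what makes generic, metadata-driven front ends like the one described above both attractive and hard to build by hand.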

  8. From GUI to Gallery: A Study of Online Virtual Environments.

    ERIC Educational Resources Information Center

    Guynup, Stephen Lawrence

    This paper began as an attempt to clarify and classify the development of Web3D environments from 1995 to the present. In that process, important facts came to light. First, a large proportion of these sites were virtual galleries and museums. Second, these same environments covered a wide array of architectural interpretations and represented some of…

  9. Learning Bridges: A Role for Mobile Technologies in Education

    ERIC Educational Resources Information Center

    Vavoula, Giasemi; Sharples, Mike; Lonsdale, Peter; Rudman, Paul; Meek, Julia

    2007-01-01

    MyArtSpace is a service for children to spread their learning between schools and museums using mobile phones linked to a personal Web space. Using MyArtSpace as an example, the authors discuss the possibilities for mobile technology to form bridges between formal and informal learning. They also offer guidelines for designing such bridges.…

  10. Dig It! The Secrets of Soil

    Science.gov Websites

    Dig It! The Secrets of Soil. Come and explore! Discover the amazing world of soils with images and information from the Dig It! The Secrets of Soil exhibit from the Smithsonian's National Museum of Natural History. New web content will be added over the coming months, including a new soil blog and new interactives.

  11. Web Consulting for Non-Academic Educational Missions: How Instructional Design Offers a Competitive Advantage

    ERIC Educational Resources Information Center

    Cates, Ward Mitchell; Mattke, Paige Hawkins

    2013-01-01

    Based on a recently completed study of education directors at science museums, this article addresses how design-and-development consultants might use those findings to enhance the way in which they propose and deliver Website services to non-academic organizations with either primary or complementary educational missions. After a very brief…

  12. 49 CFR 630.4 - Requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...

  13. 49 CFR 630.4 - Requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...

  14. 49 CFR 630.4 - Requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...

  15. 49 CFR 630.4 - Requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...

  16. 49 CFR 630.4 - Requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... TRANSPORTATION NATIONAL TRANSIT DATABASE § 630.4 Requirements. (a) National Transit Database Reporting System... from the National Transit Database Web site located at http://www.ntdprogram.gov. These reference... Transit Database Web site and a notice of any significant changes to the reporting requirements specified...

  17. Use of a secure Internet Web site for collaborative medical research.

    PubMed

    Marshall, W W; Haley, R W

    2000-10-11

    Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
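One of the security steps listed above, user-name and password authentication, can be sketched in Python. The salted PBKDF2 hashing shown here is a modern stand-in that the article (from 2000) does not prescribe; the iteration count and other parameters are illustrative.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store a salted hash, never the password itself."""
    salt = salt or os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse")
print(verify_password("correct horse", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))    # False
```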

  18. Applying World Wide Web technology to the study of patients with rare diseases.

    PubMed

    de Groen, P C; Barry, J A; Schaller, W J

    1998-07-15

    Randomized, controlled trials of sporadic diseases are rarely conducted. Recent developments in communication technology, particularly the World Wide Web, allow efficient dissemination and exchange of information. However, software for the identification of patients with a rare disease and subsequent data entry and analysis in a secure Web database are currently not available. To study cholangiocarcinoma, a rare cancer of the bile ducts, we developed a computerized disease tracing system coupled with a database accessible on the Web. The tracing system scans computerized information systems on a daily basis and forwards demographic information on patients with bile duct abnormalities to an electronic mailbox. If informed consent is given, the patient's demographic and preexisting medical information available in medical database servers are electronically forwarded to a UNIX research database. Information from further patient-physician interactions and procedures is also entered into this database. The database is equipped with a Web user interface that allows data entry from various platforms (PC-compatible, Macintosh, and UNIX workstations) anywhere inside or outside our institution. To ensure patient confidentiality and data security, the database includes all security measures required for electronic medical records. The combination of a Web-based disease tracing system and a database has broad applications, particularly for the integration of clinical research within clinical practice and for the coordination of multicenter trials.

  19. High Temperature Superconducting Materials Database

    National Institute of Standards and Technology Data Gateway

    SRD 62 NIST High Temperature Superconducting Materials Database (Web, free access)   The NIST High Temperature Superconducting Materials Database (WebHTS) provides evaluated thermal, mechanical, and superconducting property data for oxides and other nonconventional superconductors.

  20. Accessing multimedia content from mobile applications using semantic web technologies

    NASA Astrophysics Data System (ADS)

    Kreutel, Jörn; Gerlach, Andrea; Klekamp, Stefanie; Schulz, Kristin

    2014-02-01

    We describe the ideas and results of an applied research project that aims at leveraging the expressive power of semantic web technologies as a server-side backend for mobile applications that provide access to location and multimedia data and allow for a rich user experience in mobile scenarios, ranging from city and museum guides to multimedia enhancements of any kind of narrative content, including e-book applications. In particular, we will outline a reusable software architecture for both server-side functionality and native mobile platforms that is aimed at significantly decreasing the effort required for developing particular applications of that kind.
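The server-side pattern described, semantic data queried by mobile clients, can be illustrated with a toy in-memory triple store; all URIs and data below are invented and do not come from the project.

```python
# Location and multimedia data held as subject-predicate-object triples,
# the basic data model of semantic web backends. All values are made up.
triples = {
    ("ex:Pergamon", "ex:locatedIn", "ex:Berlin"),
    ("ex:Pergamon", "ex:hasAudioGuide", "ex:audio/pergamon.mp3"),
    ("ex:AltesMuseum", "ex:locatedIn", "ex:Berlin"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the non-None fields -- a SPARQL-like
    basic graph pattern with a single triple."""
    return sorted(
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj))

# All museums located in Berlin:
print(query(predicate="ex:locatedIn", obj="ex:Berlin"))
```

A real deployment would use an RDF store and SPARQL endpoint, but the query shape is the same.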

  1. What Do They Tell Their Students? Business Faculty Acceptance of the Web and Library Databases for Student Research

    ERIC Educational Resources Information Center

    Dewald, Nancy H.

    2005-01-01

    Business faculty were surveyed as to their use of free Web resources and subscription databases for their own and their students' research. A much higher percentage of respondents either require or encourage Web use by their students than require or encourage database use, though most also advise use of multiple sources.

  2. Structural Ceramics Database

    National Institute of Standards and Technology Data Gateway

    SRD 30 NIST Structural Ceramics Database (Web, free access)   The NIST Structural Ceramics Database (WebSCD) provides evaluated materials property data for a wide range of advanced ceramics known variously as structural ceramics, engineering ceramics, and fine ceramics.

  3. Ocean Drilling Program: Janus Web Database

    Science.gov Websites

    Janus Web Database. ODP and IODP data are stored in the Janus database, with additional data added as time permits (see Database Overview for available data types and examples). Data are available to everyone.

  4. The IRIS Education and Outreach Program: Providing access to data and equipment for educational and public use

    NASA Astrophysics Data System (ADS)

    Taber, J.; Toigo, M.; Bravo, T. K.; Hubenthal, M.; McQuillan, P. J.; Welti, R.

    2009-12-01

    The IRIS Education and Outreach Program has been an integral part of IRIS for the past 10 years and during that time has worked to advance awareness and understanding of seismology and earth science while inspiring careers in geophysics. The focus on seismology and the use of seismic data has allowed the IRIS E&O program to develop and disseminate a unique suite of products and services for a wide range of audiences. One result of that effort has been increased access to the IRIS Data Management System by non-specialist audiences and simplified use of location and waveform data. The Seismic Monitor was one of the first Web-based tools for observing near-real-time seismicity. It continues to be the most popular IRIS web page, and thus it presents aspects of seismology to a very wide audience. For individuals interested in more detailed ground motion information, waveforms can be easily viewed using the Rapid Earthquake Viewer, developed by the University of South Carolina in collaboration with IRIS E&O. The Seismographs in Schools program gives schools the opportunity to apply for a low-cost educational seismograph and to receive training for its use in the classroom. To provide better service to the community, a new Seismographs in Schools website was developed in the past year with enhanced functions to help teachers improve their teaching of seismology. The site encourages schools to make use of seismic data and communicate with other educational seismology users throughout the world. Users can view near-real-time displays of other participating schools, upload and download data, and use the “find a teacher” tool to contact nearby schools that also may be operating seismographs. In order to promote and maintain program participation and communication, the site features a discussion forum to encourage and support the growing global community of educational seismograph users. 
Any data that is submitted to the Seismographs in Schools Website is also accessible in real-time via the Seismographs in Schools API. This API allows anyone to extract data, re-purpose it, and display it in various ways. This gives educators the ability to build applications on top of the IRIS Seismographs in Schools Website and also lowers the barrier for teachers who want to create their own regional school seismology networks. Recent seismic data are also provided to general audiences via our museum displays. Millions of people have interacted with IRIS/USGS museum displays, many of them at the American Museum of Natural History in New York and the Smithsonian Institution National Museum of Natural History in Washington, D.C. However, a growing number of people explore seismological concepts and near real-time data through our newest display, the Active Earth Display (AED). The AED is a smaller, more flexible version of the museum display, and is now in use at universities and visitor centers throughout the US. Served via a web browser, the display is customizable and the software is available to anyone who applies via the IRIS E&O web pages. Touch screens provide an interactive experience and new content continues to be developed, including new sets of pages focusing on the Cascadia and Basin and Range regions.

  5. Solution Kinetics Database on the Web

    National Institute of Standards and Technology Data Gateway

    SRD 40 NDRL/NIST Solution Kinetics Database on the Web (Web, free access)   Data for free radical processes involving primary radicals from water, inorganic radicals and carbon-centered radicals in solution, and singlet oxygen and organic peroxyl radicals in various solvents.

  6. 77 FR 27403 - Endangered and Threatened Wildlife and Plants; 90-Day Finding on a Petition To List the Eastern...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-10

    ... level), rising sea levels due to climate change may inundate some habitat occupied by the species and... current population levels, and current and projected trends; and (e) Past and ongoing conservation... Philosophical Society, Vol. 4 (1799), pp. 362-381). The Florida Museum of Natural History Web site 2011 ( http...

  7. Re-Assessing Practice: Visual Art, Visually Impaired People and the Web.

    ERIC Educational Resources Information Center

    Howell, Caro; Porter, Dan

    The latest development to come out of ongoing research at Tate Modern, London's new museum of modern art, is i-Map art resources for blind and partially sighted people that are delivered online. Currently i-Map explores the work of Matisse and Picasso, their innovations, influences and personal motivations, as well as key concepts in modern art.…

  8. Ocean Drilling Program: Privacy Policy

    Science.gov Websites

    Ocean Drilling Program Privacy Policy. The following is the privacy policy for the www-odp.tamu.edu web site. 1. Cookies are used in the Database portion of the web site.

  9. Earth's Earliest Ecosystems in the Classroom: The Use of Microbial Mats to Teach General Principles in Microbial Ecology, and Scientific Inquiry

    NASA Technical Reports Server (NTRS)

    Bebout, Brad M.; Bucaria, Robin

    2004-01-01

    Microbial mats are living examples of the most ancient biological communities on earth, and may also be useful models for the search for life elsewhere. They are centrally important to Astrobiology. In this lecture, we will present an introduction to microbial mats, as well as an introduction to our web-based educational module on the subject of microbial ecology, featuring living mats maintained in a mini "Web Lab" complete with remotely-operable instrumentation. We have partnered with a number of outreach specialists in order to produce an informative and educational web-based presentation, aspects of which will be exported to museum exhibits reaching a wide audience. On our web site, we will conduct regularly scheduled experimental manipulations, linking the experiments to our research activities, and demonstrating fundamental principles of scientific research.

  10. The notes from nature tool for unlocking biodiversity records from museum records through citizen science

    PubMed Central

    Hill, Andrew; Guralnick, Robert; Smith, Arfon; Sallans, Andrew; Gillespie, Rosemary; Denslow, Michael; Gross, Joyce; Murrell, Zack; Conyers, Tim; Oboyski, Peter; Ball, Joan; Thomer, Andrea; Prys-Jones, Robert; de Torre, Javier; Kociolek, Patrick; Fortson, Lucy

    2012-01-01

    Abstract: Legacy data from natural history collections contain invaluable and irreplaceable information about biodiversity in the recent past, providing a baseline for detecting change and forecasting the future of biodiversity on a human-dominated planet. However, these data are often not available in formats that facilitate use and synthesis. New approaches are needed to enhance the rates of digitization and data quality improvement. Notes from Nature provides one such novel approach by asking citizen scientists to help with transcription tasks. The initial web-based prototype of Notes from Nature will soon be widely available; it was developed collaboratively by biodiversity scientists, natural history collections staff, and experts in citizen science project development, programming and visualization. This project brings together digital images representing different types of biodiversity records, including ledgers, herbarium sheets and pinned insects from multiple projects and natural history collections. Experts in developing web-based citizen science applications then designed and built a platform for transcribing textual data and metadata from these images. The end product is a fully open source web transcription tool built using the latest web technologies. The platform keeps volunteers engaged by initially explaining the scientific importance of the work via a short orientation, and then providing transcription “missions” of well-defined scope, along with dynamic feedback, interactivity and rewards. Transcribed records, along with record-level and process metadata, are provided back to the institutions. While the tool is being developed with new users in mind, it can serve a broad range of needs from novice to trained museum specialist. Notes from Nature has the potential to speed the rate of biodiversity data being made available to a broad community of users. PMID:22859890

  11. The notes from nature tool for unlocking biodiversity records from museum records through citizen science.

    PubMed

    Hill, Andrew; Guralnick, Robert; Smith, Arfon; Sallans, Andrew; Gillespie, Rosemary; Denslow, Michael; Gross, Joyce; Murrell, Zack; Conyers, Tim; Oboyski, Peter; Ball, Joan; Thomer, Andrea; Prys-Jones, Robert; de Torre, Javier; Kociolek, Patrick; Fortson, Lucy

    2012-01-01

    Legacy data from natural history collections contain invaluable and irreplaceable information about biodiversity in the recent past, providing a baseline for detecting change and forecasting the future of biodiversity on a human-dominated planet. However, these data are often not available in formats that facilitate use and synthesis. New approaches are needed to enhance the rates of digitization and data quality improvement. Notes from Nature provides one such novel approach by asking citizen scientists to help with transcription tasks. The initial web-based prototype of Notes from Nature will soon be widely available; it was developed collaboratively by biodiversity scientists, natural history collections staff, and experts in citizen science project development, programming and visualization. This project brings together digital images representing different types of biodiversity records, including ledgers, herbarium sheets and pinned insects from multiple projects and natural history collections. Experts in developing web-based citizen science applications then designed and built a platform for transcribing textual data and metadata from these images. The end product is a fully open source web transcription tool built using the latest web technologies. The platform keeps volunteers engaged by initially explaining the scientific importance of the work via a short orientation, and then providing transcription "missions" of well-defined scope, along with dynamic feedback, interactivity and rewards. Transcribed records, along with record-level and process metadata, are provided back to the institutions. While the tool is being developed with new users in mind, it can serve a broad range of needs from novice to trained museum specialist. Notes from Nature has the potential to speed the rate of biodiversity data being made available to a broad community of users.

  12. The Digital Fish Library: Using MRI to Digitize, Database, and Document the Morphological Diversity of Fish

    PubMed Central

    Berquist, Rachel M.; Gledhill, Kristen M.; Peterson, Matthew W.; Doan, Allyson H.; Baxter, Gregory T.; Yopak, Kara E.; Kang, Ning; Walker, H. J.; Hastings, Philip A.; Frank, Lawrence R.

    2012-01-01

    Museum fish collections possess a wealth of anatomical and morphological data that are essential for documenting and understanding biodiversity. Obtaining access to specimens for research, however, is not always practical and frequently conflicts with the need to maintain the physical integrity of specimens and the collection as a whole. Non-invasive three-dimensional (3D) digital imaging therefore serves a critical role in facilitating the digitization of these specimens for anatomical and morphological analysis as well as facilitating an efficient method for online storage and sharing of this imaging data. Here we describe the development of the Digital Fish Library (DFL, http://www.digitalfishlibrary.org), an online digital archive of high-resolution, high-contrast, magnetic resonance imaging (MRI) scans of the soft tissue anatomy of an array of fishes preserved in the Marine Vertebrate Collection of Scripps Institution of Oceanography. We have imaged and uploaded MRI data for over 300 marine and freshwater species, developed a data archival and retrieval system with a web-based image analysis and visualization tool, and integrated these into the public DFL website to disseminate data and associated metadata freely over the web. We show that MRI is a rapid and powerful method for accurately depicting the in-situ soft-tissue anatomy of preserved fishes in sufficient detail for large-scale comparative digital morphology. However these 3D volumetric data require a sophisticated computational and archival infrastructure in order to be broadly accessible to researchers and educators. PMID:22493695

  13. The Digital Fish Library: using MRI to digitize, database, and document the morphological diversity of fish.

    PubMed

    Berquist, Rachel M; Gledhill, Kristen M; Peterson, Matthew W; Doan, Allyson H; Baxter, Gregory T; Yopak, Kara E; Kang, Ning; Walker, H J; Hastings, Philip A; Frank, Lawrence R

    2012-01-01

    Museum fish collections possess a wealth of anatomical and morphological data that are essential for documenting and understanding biodiversity. Obtaining access to specimens for research, however, is not always practical and frequently conflicts with the need to maintain the physical integrity of specimens and the collection as a whole. Non-invasive three-dimensional (3D) digital imaging therefore serves a critical role in facilitating the digitization of these specimens for anatomical and morphological analysis as well as facilitating an efficient method for online storage and sharing of this imaging data. Here we describe the development of the Digital Fish Library (DFL, http://www.digitalfishlibrary.org), an online digital archive of high-resolution, high-contrast, magnetic resonance imaging (MRI) scans of the soft tissue anatomy of an array of fishes preserved in the Marine Vertebrate Collection of Scripps Institution of Oceanography. We have imaged and uploaded MRI data for over 300 marine and freshwater species, developed a data archival and retrieval system with a web-based image analysis and visualization tool, and integrated these into the public DFL website to disseminate data and associated metadata freely over the web. We show that MRI is a rapid and powerful method for accurately depicting the in-situ soft-tissue anatomy of preserved fishes in sufficient detail for large-scale comparative digital morphology. However these 3D volumetric data require a sophisticated computational and archival infrastructure in order to be broadly accessible to researchers and educators.

  14. Discrepancies among Scopus, Web of Science, and PubMed coverage of funding information in medical journal articles.

    PubMed

    Kokol, Peter; Vošner, Helena Blažun

    2018-01-01

    The overall aim of the present study was to compare the coverage of existing research funding information for articles indexed in Scopus, Web of Science, and PubMed databases. The numbers of articles with funding information published in 2015 were identified in the three selected databases and compared using bibliometric analysis of a sample of twenty-eight prestigious medical journals. Frequency analysis of the number of articles with funding information showed statistically significant differences between Scopus, Web of Science, and PubMed databases. The largest proportion of articles with funding information was found in Web of Science (29.0%), followed by PubMed (14.6%) and Scopus (7.7%). The results show that coverage of funding information differs significantly among Scopus, Web of Science, and PubMed databases in a sample of the same medical journals. Moreover, we found that, currently, funding data in PubMed is more difficult to obtain and analyze compared with that in the other two databases.

  15. A web-based genomic sequence database for the Streptomycetaceae: a tool for systematics and genome mining

    USDA-ARS?s Scientific Manuscript database

    The ARS Microbial Genome Sequence Database (http://199.133.98.43), a web-based database server, was established utilizing the BIGSdb (Bacterial Isolate Genomics Sequence Database) software package, developed at Oxford University, as a tool to manage multi-locus sequence data for the family Streptomy...

  16. Collection Fusion Using Bayesian Estimation of a Linear Regression Model in Image Databases on the Web.

    ERIC Educational Resources Information Center

    Kim, Deok-Hwan; Chung, Chin-Wan

    2003-01-01

    Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
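The flavor of the Bayesian estimation the paper applies can be sketched with a one-variable MAP fit (equivalent to ridge regression); the data, the prior precision, and the restriction to a single feature are all illustrative assumptions, not the paper's actual model.

```python
# MAP estimate of y = a*x + b under a zero-mean Gaussian prior on (a, b),
# i.e. ridge regression. The paper's metaserver would use such a fit to
# predict how many relevant images each remote database holds.
def bayesian_linear_fit(xs, ys, prior_precision=1.0):
    # Normal equations (X^T X + lambda*I) w = X^T y for a 2x2 system
    sxx = sum(x * x for x in xs) + prior_precision
    sx = sum(xs)
    n = len(xs) + prior_precision
    sxy = sum(x * y for x, y in zip(xs, ys))
    sy = sum(ys)
    det = sxx * n - sx * sx
    a = (n * sxy - sx * sy) / det
    b = (sxx * sy - sx * sxy) / det
    return a, b

# e.g. relevance scores sampled from one database vs. result ranks
a, b = bayesian_linear_fit([1, 2, 3, 4], [2.1, 3.9, 6.2, 8.0])
print(round(a, 2), round(b, 2))
```

The prior shrinks the fit toward zero when few sample points are available, which is the practical appeal of the Bayesian formulation in a fusion setting.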

  17. The Web-Database Connection Tools for Sharing Information on the Campus Intranet.

    ERIC Educational Resources Information Center

    Thibeault, Nancy E.

    This paper evaluates four tools for creating World Wide Web pages that interface with Microsoft Access databases: DB Gateway, Internet Database Assistant (IDBA), Microsoft Internet Database Connector (IDC), and Cold Fusion. The system requirements and features of each tool are discussed. A sample application, "The Virtual Help Desk"…

  18. An Overview of ARL’s Multimodal Signatures Database and Web Interface

    DTIC Science & Technology

    2007-12-01

    ...ActiveX components, which hindered distribution due to license agreements and run-time license software to use such components... Proprietary... Overview: The database consists of multimodal signature data files in the HDF5 format. Generally, each signature file contains all the ancillary... only contains information in the database, Web interface, and signature files that is releasable to the public. The Web interface consists of static...

  19. Analysis and Development of a Web-Enabled Planning and Scheduling Database Application

    DTIC Science & Technology

    2013-09-01

    establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for...development, develop, design, process, re-engineering, reengineering, MySQL, structured query language, SQL, myPHPadmin...relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of

  20. Vision science literature of Nepal in the database "Web of Science".

    PubMed

    Risal, S; Prasad, H N

    2012-01-01

    Vision science is considered a well-developed discipline in Nepal, with much research currently in progress. Though the results of these endeavors are published in scientific journals, formal citation analyses have not been performed on works contributed by Nepalese vision scientists. This study examines Nepal's contribution to the vision science literature in the database "Web of Science". The primary data source of this paper was Web of Science, a citation database of Thomson Reuters. All bibliometric analyses were performed with the help of the Web of Science analysis service. In the current database of vision science literature, Nepalese authors contributed 112 publications to Web of Science, 95 of which were original articles. Pokharel GP had the highest number of citations among contributing authors of Nepal. Hennig A contributed the highest number of articles as a first author. The Nepal Eye Hospital contributed the highest number of articles as an institution to the field of vision science. Currently, only two journals from Nepal, including the Journal of Nepal Medical Association (JNMA), are indexed in the Web of Science database (Sieving, 2012). To evaluate the total productivity of vision science literature from Nepal, total publication counts from national journals and articles indexed in other databases such as PubMed and Scopus must also be considered. © NEPjOPH.

  1. Physical Samples and Persistent Identifiers: The Implementation of the International Geo Sample Number (IGSN) Registration Service in CSIRO, Australia

    NASA Astrophysics Data System (ADS)

    Devaraju, Anusuriya; Klump, Jens; Tey, Victor; Fraser, Ryan

    2016-04-01

    Physical samples such as minerals, soil, rocks, water, air and plants are important observational units for understanding the complexity of our environment and its resources. They are usually collected and curated by different entities, e.g., individual researchers, laboratories, state agencies, or museums. Persistent identifiers may facilitate access to physical samples that are scattered across various repositories. They are essential to locate samples unambiguously and to share their associated metadata and data systematically across the Web. The International Geo Sample Number (IGSN) is a persistent, globally unique label for identifying physical samples. The IGSNs of physical samples are registered by end-users (e.g., individual researchers, data centers and projects) through allocating agents. Allocating agents are the institutions acting on behalf of the implementing organization (IGSN e.V.). The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is one of the allocating agents in Australia. To implement IGSN in our organisation, we developed a RESTful service and a metadata model. The web service enables a client to register sub-namespaces and multiple samples, and to retrieve samples' metadata programmatically. The metadata model provides a framework in which different types of samples may be represented. It is generic and extensible, and may therefore be applied in the context of multi-disciplinary projects. The metadata model has been implemented as an XML schema and a PostgreSQL database. The schema is used to handle sample registration requests and to disseminate their metadata, whereas the relational database is used to preserve the metadata records. The metadata schema leverages existing controlled vocabularies to minimize the scope for error and incorporates some simplifications to reduce the complexity of the schema implementation. 
The solutions developed have been applied and tested in the context of two sample repositories in CSIRO, the Capricorn Distal Footprints project and the Rock Store.
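The registration workflow described above — a client composes an XML metadata record and submits it to the allocating agent's web service — might look roughly like this sketch. The element names, the IGSN value and the endpoint URL are all invented for illustration; the actual CSIRO schema and service differ.

```python
# Sketch of composing an IGSN-style sample registration record.
# Element names, the IGSN value and the endpoint are illustrative
# assumptions, not the actual CSIRO schema or service.
import xml.etree.ElementTree as ET

def build_sample_record(igsn, name, sample_type, collector):
    """Build a minimal XML metadata record for one physical sample."""
    root = ET.Element("sample")
    ET.SubElement(root, "igsn").text = igsn
    ET.SubElement(root, "name").text = name
    ET.SubElement(root, "sampleType").text = sample_type
    ET.SubElement(root, "collector").text = collector
    return ET.tostring(root, encoding="unicode")

record = build_sample_record("CSRWA0001", "Drill core 7, interval 12-13 m",
                             "core", "Capricorn Distal Footprints project")
print(record)
# A client would then POST this document to the allocating agent's
# registration endpoint (hypothetical URL), e.g. with the requests library:
#   requests.post("https://igsn.example.org/samples", data=record,
#                 headers={"Content-Type": "application/xml"})
```

The same record structure could be persisted into the relational store on the server side, which is the split between schema (exchange) and database (preservation) the abstract describes.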

  2. Point of Entry

    ERIC Educational Resources Information Center

    Manzo, Kathleen Kennedy

    2007-01-01

As part of a professional development program organized by the Save Ellis Island Foundation, the exhibits, databases, photo archives, and recorded interviews at the island's museum help put the nation's current immigration debate into a broader historical context. Teachers at these sessions learn from scholars and park personnel about early…

  3. Analysis and visualization of Arabidopsis thaliana GWAS using web 2.0 technologies.

    PubMed

    Huang, Yu S; Horton, Matthew; Vilhjálmsson, Bjarni J; Seren, Umit; Meng, Dazhe; Meyer, Christopher; Ali Amer, Muhammad; Borevitz, Justin O; Bergelson, Joy; Nordborg, Magnus

    2011-01-01

With large-scale genomic data becoming the norm in biological studies, the storing, integrating, viewing and searching of such data have become a major challenge. In this article, we describe the development of an Arabidopsis thaliana database that hosts the geographic information and genetic polymorphism data for over 6000 accessions and genome-wide association study (GWAS) results for 107 phenotypes, representing the largest collection of Arabidopsis polymorphism data and GWAS results to date. Taking advantage of a series of the latest web 2.0 technologies, such as Ajax (Asynchronous JavaScript and XML), GWT (Google Web Toolkit), an MVC (Model-View-Controller) web framework and an object-relational mapper, we have created a web-based application (web app) for the database that offers an integrated and dynamic view of geographic information, genetic polymorphism and GWAS results. Essential search functionalities are incorporated into the web app to aid reverse genetics research. The database and its web app have proven to be a valuable resource to the Arabidopsis community. The whole framework serves as an example of how biological data, especially GWAS results, can be presented and accessed through the web. Finally, we illustrate the potential to gain new insights through the web app with two examples, showcasing how it can be used to facilitate forward and reverse genetics research. Database URL: http://arabidopsis.usc.edu/
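The kind of Ajax data exchange the abstract describes — the browser asking the server for one phenotype's GWAS results as JSON — can be sketched as below. The phenotype name, positions and scores are invented; this is not the actual arabidopsis.usc.edu API.

```python
# Sketch of the JSON payload behind an Ajax GWAS viewer. The phenotype
# and association peaks below are invented for illustration only.
import json

GWAS_RESULTS = {  # phenotype -> list of (chromosome, position, -log10 p)
    "flowering_time": [(1, 24510234, 7.2), (5, 3178443, 5.9)],
}

def gwas_json(phenotype):
    """Serialize one phenotype's association peaks, as an Ajax endpoint might."""
    hits = GWAS_RESULTS.get(phenotype, [])
    return json.dumps({"phenotype": phenotype,
                       "hits": [{"chr": c, "pos": p, "score": s}
                                for c, p, s in hits]})

print(gwas_json("flowering_time"))
```

In an MVC arrangement, a controller route would call a function like `gwas_json` and the browser-side JavaScript would render the returned hits onto the map and Manhattan-plot views.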

  4. Web data mining

    NASA Astrophysics Data System (ADS)

    Wibonele, Kasanda J.; Zhang, Yanqing

    2002-03-01

A web data mining system using granular computing and ASP programming is proposed. This is a web-based application that allows web users to submit survey data for many different companies. The survey is a collection of questions that will help these companies develop and improve their business and customer service by analyzing the responses. The web application allows users to submit data from anywhere. All the survey data are collected into a database for further analysis. An administrator of the web application can log in to the system and view all the data submitted. The web application resides on a web server, and the database resides on the MS SQL server.
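The collection pattern described — survey responses accumulated in a database table for later analysis — can be sketched as follows, with sqlite3 standing in for the MS SQL server; the table layout, company name and questions are invented.

```python
# Sketch of the survey-collection pattern: each submitted response is
# appended to a table, and the administrator aggregates it later.
# sqlite3 stands in for the MS SQL server; all names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE survey (
    company TEXT, question TEXT, answer INTEGER)""")

def submit(company, responses):
    """Store one user's responses, given as {question: 1-5 rating}."""
    conn.executemany("INSERT INTO survey VALUES (?, ?, ?)",
                     [(company, q, a) for q, a in responses.items()])
    conn.commit()

submit("Acme Corp", {"service quality": 4, "would recommend": 5})
avg = conn.execute("SELECT AVG(answer) FROM survey WHERE company=?",
                   ("Acme Corp",)).fetchone()[0]
print(avg)  # 4.5
```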

  5. STEM Workforce Pipeline

    DTIC Science & Technology

    2013-07-30

    more about STEM. From museums, to gardens, to planetariums and more, Places to Go mobilizes people to explore the STEM resources offered by their...Works website was developed utilizing a phased approach. This approach allowed for informed, periodic updates to the structure, design, and backend ...our web development team, throughout this phase. A significant amount of backend development work on the website, as well as design work was completed

  6. Programs Visualize Earth and Space for Interactive Education

    NASA Technical Reports Server (NTRS)

    2014-01-01

    Kevin Hussey and others at the Jet Propulsion Laboratory produced web applications to visualize all of the spacecraft in orbit around Earth and in the Solar System. Hussey worked with Milwaukee, Wisconsin-based The Elumenati to rewrite the programs, and after licensing them, the company started offering a version that can be viewed on spheres and dome theaters for schools, museums, science centers, and other institutions.

  7. Mining a Web Citation Database for Author Co-Citation Analysis.

    ERIC Educational Resources Information Center

    He, Yulan; Hui, Siu Cheung

    2002-01-01

    Proposes a mining process to automate author co-citation analysis based on the Web Citation Database, a data warehouse for storing citation indices of Web publications. Describes the use of agglomerative hierarchical clustering for author clustering and multidimensional scaling for displaying author cluster maps, and explains PubSearch, a…

  8. Preserving Geological Samples and Metadata from Polar Regions

    NASA Astrophysics Data System (ADS)

    Grunow, A.; Sjunneskog, C. M.

    2011-12-01

The Office of Polar Programs at the National Science Foundation (NSF-OPP) has long recognized the value of preserving earth science collections due to the inherent logistical challenges and financial costs of collecting geological samples from Polar Regions. NSF-OPP established two national facilities to make Antarctic geological samples and drill cores openly and freely available for research. The Antarctic Marine Geology Research Facility (AMGRF) at Florida State University was established in 1963 and archives Antarctic marine sediment cores, dredge samples and smear slides along with ship logs. The United States Polar Rock Repository (USPRR) at Ohio State University was established in 2003 and archives polar rock samples, marine dredges, unconsolidated materials and terrestrial cores, along with associated materials such as field notes, maps, raw analytical data, paleomagnetic cores, thin sections, microfossil mounts, microslides and residues. The existence of the AMGRF and USPRR helps minimize redundant sample collecting, lessens the environmental impact of polar fieldwork, facilitates field logistics planning, and supports compliance with the data-sharing requirement of the Antarctic Treaty. USPRR acquires collections through donations from institutions and scientists and then makes these samples available as no-cost loans for research, education and museum exhibits. The AMGRF acquires sediment cores from US-based and international collaborative drilling projects in Antarctica. Destructive research techniques are allowed on the loaned samples, and loan requests are accepted from any accredited scientific institution in the world. Currently, the USPRR has more than 22,000 cataloged rock samples available to scientists from around the world. All cataloged samples are relabeled with a USPRR number, weighed, photographed and measured for magnetic susceptibility. Many aspects of the sample metadata are included in the database, e.g.
geographical location, sample description, collector, rock age, formation, section location, multimedia images as well as structural data, field observations, logistics, surface features, etc. The metadata are entered into a commercial, museum-based database called EMu. The AMGRF houses more than 25,000 m of deep-sea cores and drill cores as well as nearly 3,000 meters of rotary cored geological material from Antarctica. Detailed information on the sediment cores, including location and sediment composition, is available in cruise reports posted on the AMGRF web site. Researchers may access the sample collections through the online websites (http://www-bprc.mps.ohio-state.edu/emuwebusprr and http://www.arf.fsu.edu). Searches may be done using multiple search terms or by use of the mapping feature. The online databases provide an essential resource for proposal preparation, pilot studies and other sample-based research that should make fieldwork more efficient.

  9. Database of Novel and Emerging Adsorbent Materials

    National Institute of Standards and Technology Data Gateway

    SRD 205 NIST/ARPA-E Database of Novel and Emerging Adsorbent Materials (Web, free access)   The NIST/ARPA-E Database of Novel and Emerging Adsorbent Materials is a free, web-based catalog of adsorbent materials and measured adsorption properties of numerous materials obtained from article entries from the scientific literature. Search fields for the database include adsorbent material, adsorbate gas, experimental conditions (pressure, temperature), and bibliographic information (author, title, journal), and results from queries are provided as a list of articles matching the search parameters. The database also contains adsorption isotherms digitized from the cataloged articles, which can be compared visually online in the web application or exported for offline analysis.

  10. THE ECOTOX DATABASE AND ECOLOGICAL SOIL SCREENING LEVEL (ECO-SSL) WEB SITES

    EPA Science Inventory

    The EPA's ECOTOX database (http://www.epa.gov/ecotox/) provides a web browser search interface for locating aquatic and terrestrial toxic effects information. Data on more than 8100 chemicals and 5700 terrestrial and aquatic species are included in the database. Information is ...

  11. Evaluation of the content and accessibility of web sites for accredited orthopaedic sports medicine fellowships.

    PubMed

    Mulcahey, Mary K; Gosselin, Michelle M; Fadale, Paul D

    2013-06-19

    The Internet is a common source of information for orthopaedic residents applying for sports medicine fellowships, with the web sites of the American Orthopaedic Society for Sports Medicine (AOSSM) and the San Francisco Match serving as central databases. We sought to evaluate the web sites for accredited orthopaedic sports medicine fellowships with regard to content and accessibility. We reviewed the existing web sites of the ninety-five accredited orthopaedic sports medicine fellowships included in the AOSSM and San Francisco Match databases from February to March 2012. A Google search was performed to determine the overall accessibility of program web sites and to supplement information obtained from the AOSSM and San Francisco Match web sites. The study sample consisted of the eighty-seven programs whose web sites connected to information about the fellowship. Each web site was evaluated for its informational value. Of the ninety-five programs, fifty-one (54%) had links listed in the AOSSM database. Three (3%) of all accredited programs had web sites that were linked directly to information about the fellowship. Eighty-eight (93%) had links listed in the San Francisco Match database; however, only five (5%) had links that connected directly to information about the fellowship. Of the eighty-seven programs analyzed in our study, all eighty-seven web sites (100%) provided a description of the program and seventy-six web sites (87%) included information about the application process. Twenty-one web sites (24%) included a list of current fellows. Fifty-six web sites (64%) described the didactic instruction, seventy (80%) described team coverage responsibilities, forty-seven (54%) included a description of cases routinely performed by fellows, forty-one (47%) described the role of the fellow in seeing patients in the office, eleven (13%) included call responsibilities, and seventeen (20%) described a rotation schedule. 
Two Google searches identified direct links for 67% to 71% of all accredited programs. Most accredited orthopaedic sports medicine fellowships lack easily accessible or complete web sites in the AOSSM or San Francisco Match databases. Improvement in the accessibility and quality of information on orthopaedic sports medicine fellowship web sites would facilitate the ability of applicants to obtain useful information.

  12. An occurrence records database of Irregular Echinoids (Echinodermata: Echinoidea) in Mexico.

    PubMed

    Martínez-Melo, Alejandra; Solís-Marín, Francisco Alonso; Buitrón-Sánchez, Blanca Estela; Laguarda-Figueras, Alfredo

    2016-01-01

    Research on echinoderms in Mexico began in the late nineteenth century. We present a dataset that includes the taxonomic and geographic information of irregular echinoids from Mexico, housed in four collections: 1) Colección Nacional de Equinodermos "Ma. Elena Caso Muñoz" from the Instituto de Ciencias del Mar y Limnología (ICML), Universidad Nacional Autónoma de México (UNAM); 2) Invertebrate Zoology Collection, Smithsonian Museum of Natural History, Washington, D.C., United States of America (USA); 3) Invertebrate Collection, Museum of Comparative Zoology, Harvard University, Cambridge, Massachusetts, USA; and 4) Invertebrate Zoology, Peabody Museum, Yale University, New Haven, Connecticut, USA. A total of six orders, 17 families, 35 genera and 68 species are reported: 37 distributed along the Pacific coast and 31 along the Atlantic coast; none was found on both coasts. The most diverse region is the Gulf of California (S=32); the most diverse order is Spatangoida, with 31 species reported in Mexican waters.

  13. [A systematic evaluation of application of the web-based cancer database].

    PubMed

    Huang, Tingting; Liu, Jialin; Li, Yong; Zhang, Rui

    2013-10-01

    In order to support the theory and practice of web-based cancer database development in China, we carried out a systematic evaluation to assess the state of development of web-based cancer databases at home and abroad. We performed computer-based retrieval of the Ovid-MEDLINE, Springerlink, EBSCOhost, Wiley Online Library and CNKI databases for papers published between Jan. 1995 and Dec. 2011, and retrieved the references of these papers by hand. We selected qualified papers according to pre-established inclusion and exclusion criteria, and carried out information extraction and analysis. Searching the online databases yielded 1244 papers, and checking the reference lists identified another 19 articles. Thirty-one articles met the inclusion and exclusion criteria; we extracted and assessed the evidence from these. The analysis showed that the U.S.A. ranked first, accounting for 26% of the databases. Thirty-nine percent of the web-based cancer databases are comprehensive cancer databases. Among single-cancer databases, breast cancer and prostate cancer rank at the top, each accounting for 10%. Thirty-two percent of the cancer databases are associated with cancer gene information. As for the technologies applied, MySQL and PHP are the most widely used, at nearly 23% each.

  14. A simple method for serving Web hypermaps with dynamic database drill-down

    PubMed Central

    Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R

    2002-01-01

    Background HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe the map-serving approach adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real-world health problems. PMID:12437788

  15. [A web-based integrated clinical database for laryngeal cancer].

    PubMed

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

    To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer, while meeting the needs of clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, laryngeal cancer specialist characteristics and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. The database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system uses clinical data standards and exchanges information with the existing electronic medical records system to avoid information silos. Furthermore, the database forms are integrated with laryngeal cancer specialist characteristics and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma offers comprehensive specialist information, strong expandability and high technical feasibility, and conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about patients is highly informative. In addition, users can access and manipulate the database conveniently and swiftly over the Internet.

  16. Ocean Drilling Program: TAMU Staff Directory

    Science.gov Websites


  17. Establishing a Dynamic Database of Blue and Fin Whale Locations from Recordings at the IMS CTBTO hydro-acoustic network. The Baleakanta Project

    NASA Astrophysics Data System (ADS)

    Le Bras, R. J.; Kuzma, H.

    2013-12-01

    Falling as they do into the frequency range of continuously recording hydrophones (15-100Hz), blue and fin whale songs are a significant source of noise on the hydro-acoustic monitoring array of the International Monitoring System (IMS) of the Comprehensive Nuclear Test Ban Treaty Organization (CTBTO). One researcher's noise, however, can be a very interesting signal in another field of study. The aim of the Baleakanta Project (www.baleakanta.org) is to flag and catalogue these songs, using the azimuth and slowness of the signal measured at multiple hydrophones to solve for the approximate location of singing whales. Applying techniques borrowed from human speaker identification, it may even be possible to recognize the songs of particular individuals. The result will be a dynamic database of whale locations and songs with known individuals noted. This database will be of great value to marine biologists studying cetaceans, as there is no existing dataset which spans the globe over many years (more than 15 years of data have been collected by the IMS). Current whale song datasets from other sources are limited to detections made on small, temporary listening devices. The IMS song catalogue will make it possible to study at least some aspects of the global migration patterns of whales, changes in their songs over time, and the habits of individuals. It is believed that about 10 blue whale 'cultures' exist with distinct vocal patterns; the IMS song catalogue will test that number. Results and a subset of the database (delayed in time to mitigate worries over whaling and harassment of the animals) will be released over the web. A traveling museum exhibit is planned which will not only educate the public about whale songs, but will also make the CTBTO and its achievements more widely known. As a testament to the public's enduring fascination with whales, initial funding for this project has been crowd-sourced through an internet campaign.
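The bearing-intersection idea behind locating a singer from azimuths measured at multiple hydrophones can be illustrated with a flat-geometry sketch. The station coordinates and azimuths below are invented, and the real processing also uses slowness measurements and spherical geometry.

```python
# Flat-geometry sketch of fixing a singing whale from bearings at two
# hydrophone stations. Coordinates are km on a local plane; azimuths
# are degrees clockwise from north. All values are invented.
import math

def intersect_bearings(p1, az1, p2, az2):
    """Intersect two bearing rays; return (x, y) or None if parallel."""
    d1 = (math.sin(math.radians(az1)), math.cos(math.radians(az1)))
    d2 = (math.sin(math.radians(az2)), math.cos(math.radians(az2)))
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    if abs(det) < 1e-12:
        return None  # parallel bearings: no unique fix
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (-bx * d2[1] + d2[0] * by) / det  # distance along ray 1
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two stations 10 km apart whose bearings converge at (5, 5):
print(intersect_bearings((0, 0), 45.0, (10, 0), 315.0))
```

With more than two stations, the over-determined set of rays would be combined in a least-squares sense, which also yields an uncertainty estimate for the location.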

  18. SNPversity: a web-based tool for visualizing diversity

    PubMed Central

    Schott, David A; Vinnakota, Abhinav G; Portwood, John L; Andorf, Carson M

    2018-01-01

    Abstract Many stand-alone desktop software suites exist to visualize single nucleotide polymorphism (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualization tool that can be implemented on a Unix-like machine and served through a web browser accessible worldwide. SNPversity consists of an HDF5 database back-end for SNPs, a data exchange layer powered by TASSEL libraries that represents data in JSON format, and an interface layer using PHP to visualize SNP information. SNPversity displays data in real-time through a web browser in grids that are color-coded according to a given SNP’s allelic status and mutational state. SNPversity is currently available at MaizeGDB, the maize community’s database, and will soon be available at GrainGenes, the clade-oriented database for Triticeae and Avena species, including wheat, barley, rye, and oat. The code and documentation are uploaded to GitHub and are freely available to the public. We expect that the tool will be highly useful for other biological databases with a similar need to display SNP diversity through their web interfaces. Database URL: https://www.maizegdb.org/snpversity PMID:29688387
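The three-layer flow the abstract describes — calls pulled from the back end, serialized as JSON, then color-coded in the browser grid — can be sketched as below. The accession names, position and JSON shape are invented; the actual tool reads HDF5 through TASSEL libraries rather than from an in-memory dict.

```python
# Sketch of an exchange layer: SNP calls from the back end are
# serialized as JSON with a ref/alt state the UI can color-code.
# Accessions, positions and the JSON shape are invented.
import json

CALLS = {  # position -> {accession: observed allele}
    "chr1:1042": {"B73": "A", "Mo17": "G", "W22": "A"},
}

def snp_grid_json(reference="A"):
    """Mark each call as reference or alternate for grid color-coding."""
    grid = {pos: {acc: {"allele": allele,
                        "state": "ref" if allele == reference else "alt"}
                  for acc, allele in row.items()}
            for pos, row in CALLS.items()}
    return json.dumps(grid)

print(snp_grid_json())
```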

  19. Construction of a Linux based chemical and biological information system.

    PubMed

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a Web-based easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and Zope application server for the web interface. We chose Linux as the main platform, however, almost every component can be used under various operating systems.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals into readable information. It is capable of encrypting data using the 256-bit advanced encryption standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely.
There are multiple local computers managing different sites or transport vehicles. Control from remote sites, and transmission of information to the central database server, take place over a secured internet connection. The information stored in the central database server is shown on the web page, which users can view over the internet. A dedicated and secured web and database server (https) is used to provide information security.
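The XML-to-database update step mentioned above can be sketched as follows. The element names and table layout are assumptions, and sqlite3 stands in for the SQL server.

```python
# Sketch of the XML-to-database update path: the application software
# sends an XML snapshot of tag reads, and a small script loads it into
# the SQL store. Element names and table layout are assumptions.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reads (tag_id TEXT, reader TEXT, ts TEXT)")

XML_UPDATE = """<update>
  <read tag="E200-3412" reader="portal-1" time="2010-05-01T12:00:00"/>
  <read tag="E200-9981" reader="portal-2" time="2010-05-01T12:00:05"/>
</update>"""

def apply_update(xml_text):
    """Insert each <read> element as a row; return the number inserted."""
    rows = [(r.get("tag"), r.get("reader"), r.get("time"))
            for r in ET.fromstring(xml_text).iter("read")]
    conn.executemany("INSERT INTO reads VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)

print(apply_update(XML_UPDATE))  # 2
```

The web page layer would then query the same table to fill its panels and to evaluate the alarm criteria.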

  1. Ionic Liquids Database- (ILThermo)

    National Institute of Standards and Technology Data Gateway

    SRD 147 NIST Ionic Liquids Database- (ILThermo) (Web, free access)   IUPAC Ionic Liquids Database, ILThermo, is a free web research tool that allows users worldwide to access an up-to-date data collection from publications on experimental investigations of thermodynamic and transport properties of ionic liquids, as well as of binary and ternary mixtures containing ionic liquids.

  2. FirstSearch and NetFirst--Web and Dial-up Access: Plus Ca Change, Plus C'est la Meme Chose?

    ERIC Educational Resources Information Center

    Koehler, Wallace; Mincey, Danielle

    1996-01-01

    Compares and evaluates the differences between OCLC's dial-up and World Wide Web FirstSearch access methods and their interfaces with the underlying databases. Also examines NetFirst, OCLC's new Internet catalog, the only Internet tracking database from a "traditional" database service. (Author/PEN)

  3. Ocean Drilling Program: Web Site Access Statistics

    Science.gov Websites


  4. XCOM: Photon Cross Sections Database

    National Institute of Standards and Technology Data Gateway

    SRD 8 XCOM: Photon Cross Sections Database (Web, free access)   A web database is provided which can be used to calculate photon cross sections for scattering, photoelectric absorption and pair production, as well as total attenuation coefficients, for any element, compound or mixture (Z <= 100) at energies from 1 keV to 100 GeV.

  5. Transition to a Unified System: Using Perl To Drive Library Databases and Enhance Web Site Functionality.

    ERIC Educational Resources Information Center

    Fagan, Judy Condit

    2001-01-01

    Discusses the need for libraries to routinely redesign their Web sites, and presents a case study that describes how a Perl-driven database at Southern Illinois University's library improved Web site organization and patron access, simplified revisions, and allowed staff unfamiliar with HTML to update content. (Contains 56 references.) (Author/LRW)

  6. ITS-90 Thermocouple Database

    National Institute of Standards and Technology Data Gateway

    SRD 60 NIST ITS-90 Thermocouple Database (Web, free access)   Web version of Standard Reference Database 60 and NIST Monograph 175. The database gives temperature -- electromotive force (emf) reference functions and tables for the letter-designated thermocouple types B, E, J, K, N, R, S and T. These reference functions have been adopted as standards by the American Society for Testing and Materials (ASTM) and the International Electrotechnical Commission (IEC).

  7. A comparative study of six European databases of medically oriented Web resources.

    PubMed

    Abad García, Francisca; González Teruel, Aurora; Bayo Calduch, Patricia; de Ramón Frias, Rosa; Castillo Blasco, Lourdes

    2005-10-01

    The paper describes six European medically oriented databases of Web resources, pertaining to five quality-controlled subject gateways, and compares their performance. The characteristics, coverage, procedure for selecting Web resources, record structure, searching possibilities, and existence of user assistance were described for each database. Performance indicators for each database were obtained by means of searches carried out using the key words, "myocardial infarction." Most of the databases originated in the 1990s in an academic or library context and include all types of Web resources of an international nature. Five databases use Medical Subject Headings. The number of fields per record varies between three and nineteen. The language of the search interfaces is mostly English, and some of them allow searches in other languages. In some databases, the search can be extended to Pubmed. Organizing Medical Networked Information, Catalogue et Index des Sites Médicaux Francophones, and Diseases, Disorders and Related Topics produced the best results. The usefulness of these databases as quick reference resources is clear. In addition, their lack of content overlap means that, for the user, they complement each other. Their continued survival faces three challenges: the instability of the Internet, maintenance costs, and lack of use in spite of their potential usefulness.

  8. THGS: a web-based database of Transmembrane Helices in Genome Sequences

    PubMed Central

    Fernando, S. A.; Selvarani, P.; Das, Soma; Kumar, Ch. Kiran; Mondal, Sukanta; Ramakumar, S.; Sekar, K.

    2004-01-01

    Transmembrane Helices in Genome Sequences (THGS) is an interactive web-based database, developed to search for transmembrane helices in gene sequences of interest available in the Genome Database (GDB). The database also provides a facility to search for sequence motifs in transmembrane and globular proteins. In addition, the motif can be searched for in other sequence databases (Swiss-Prot and PIR) or in the macromolecular structure database, the Protein Data Bank (PDB). Further, the 3D structure corresponding to a queried motif, if available among the solved protein structures deposited in the Protein Data Bank, can also be visualized using the widely used graphics package RASMOL. All the sequence databases used in the present work are updated frequently, and hence the results produced are up to date. The THGS database is freely available via the World Wide Web at http://pranag.physics.iisc.ernet.in/thgs/ or http://144.16.71.10/thgs/. PMID:14681375
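The motif search THGS offers can be illustrated with a minimal sketch: scan stored sequences for a user-supplied pattern. The sequence and identifier below are invented, and a plain regular expression stands in for THGS's own motif syntax.

```python
# Minimal sketch of a motif search over stored protein sequences.
# The sequence and its identifier are invented; a regular expression
# stands in for the motif syntax the real service accepts.
import re

SEQUENCES = {
    "protA": "MKTLLVLAVGGAILAWWSTELLKNPQRW",
}

def find_motif(pattern):
    """Return {sequence id: [0-based match start positions]} for a motif."""
    rx = re.compile(pattern)
    return {sid: [m.start() for m in rx.finditer(seq)]
            for sid, seq in SEQUENCES.items()}

print(find_motif("G[GA]"))
```

A hit list like this could then be cross-referenced against structures in the PDB to decide whether a 3D view of the matched region is available.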

  9. Education Outreach at M.I.T. Plasma Science and Fusion Center

    NASA Astrophysics Data System (ADS)

    Rivenberg, P.; Censabella, V.

    2000-10-01

    At the MIT PSFC, student and staff volunteers work together to increase the public's knowledge of fusion and plasma-related experiments. Seeking to generate excitement about science, engineering and mathematics, the PSFC holds a number of outreach activities throughout the year, including Middle and High School Outreach Days and the Mr. Magnet program. During the past year, in collaboration with the MIT Museum, the PSFC reprogrammed its C-Mod Jr. video game to be operated via the keyboard instead of joysticks. The game will eventually be available on the web and on disc. The PSFC maintains a home page on the World Wide Web, which can be reached at http://www.psfc.mit.edu.

  10. Virtual Museums for Landscape Valorization and Communication

    NASA Astrophysics Data System (ADS)

    Pietroni, E.

    2017-08-01

Research in the domain of landscape virtual reconstruction has mainly focused on digitization and recording inside GIS systems, or on real-time visualization, paying less attention to a methodological approach to landscape narration that combines different registers and conceptual and emotional stimuli and is thus able to arouse in the public a feeling of emotional "sensing" and self-identification. The landscape also reflects human activities in the territory and communities' cultural patterns, their sense of "belonging". In a virtual museum of landscapes, a multidisciplinary approach, the multiplication of perspectives and voices, and storytelling acquire primary importance. A virtual museum of landscapes should integrate both holistic and delimited visions. The holistic vision requires a diachronic approach, including both present and past phases of life. On the other side, delimited, or "monographic", representations are useful to go deeper into specific and exemplary stories regarding specific groups of people. Besides, the emergence of new social media enhancing cultural interactions among people encourages the creation of dedicated social platforms for cultural heritage, enabling the active participation of a large number of stakeholders. Co-creation scenarios and tools can be particularly promising. Aton is an example of a web-based front-end VR social platform for the efficient streaming of medium/large landscapes and their exploration and characterization. The Tiber Valley Virtual Museum is an example of a sensorial cultural landscape: starting from the acquisition of topographical data through integrated technologies, several multi-sensory scenarios have been created, inside which visitors can feel embodied and involved.

  11. NNDC Data Services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Tuli, J. K.; Sonzogni, A.

The National Nuclear Data Center (NNDC) has provided remote access to the nuclear physics databases it maintains and to other resources since 1986. With considerable innovation, access is now mostly through the Web, and the NNDC Web pages have been modernized to provide a consistent, state-of-the-art style. The major databases, the improved database services, and the other resources currently available from the NNDC site at www.nndc.bnl.gov are described.

  12. One Click to the Cosmos: The AstroPix Image Archive

    NASA Astrophysics Data System (ADS)

Hurt, Robert L.; Llamas, J.; Squires, G. K.; Brinkworth, C.; Chandra X-ray Center; ESO/ESA; Spitzer Science Center; STScI

    2013-01-01

Imagine a single website that acts as a portal to the entire wealth of public imagery spanning the world's observatories. This is the goal of the AstroPix project (astropix.ipac.caltech.edu), and you can use it today! Although the site is still in beta development, this past year has seen the inclusion of thousands of images from some of the most prominent observatories in the world, including Chandra, ESO, GALEX, Herschel, Hubble, Spitzer, and WISE, with more on the way. The archive is unique in that it is built around the Astronomical Visualization Metadata (AVM) standard, which captures the rich contextual information for each image: from titles and descriptions, to color representations and observation details, to sky coordinates. AVM enables AstroPix imagery to be used in a variety of unique ways that benefit formal and informal education as well as astronomers and the general public. Visitors to AstroPix can search the database using simple free-text queries, or use a structured search (similar to the "Smart Playlists" found in iTunes, for example). We are also developing public application programming interfaces (APIs) to allow third-party software and websites to access the growing content for a variety of uses (planetarium software, museum kiosks, mobile apps, and creative web interfaces, to name a few). Contributing image assets to AstroPix is as easy as tagging the images with the relevant metadata and including the web links to the images in a simple RSS feed. We will cover some of the latest information about tools to contribute images to AstroPix and ways to use the site.
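A contributor's feed of the kind described, web links to images wrapped in a simple RSS feed, can be sketched with the standard library. The channel title, item names, and URLs below are purely illustrative, and in the real workflow the AVM tags live inside the image files themselves, not in the feed.

```python
import xml.etree.ElementTree as ET

def image_feed(channel_title, items):
    # Minimal RSS 2.0 skeleton: one <item> per image, with title and link.
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    for title, link in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

feed = image_feed("Example Observatory Images",
                  [("Crab Nebula", "https://example.org/images/crab.tif")])
```

A harvester would then periodically fetch the feed, follow each link, and read the embedded AVM metadata from the image.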

  13. Comparing Unique Title Coverage of Web of Science and Scopus in Earth and Atmospheric Sciences

    ERIC Educational Resources Information Center

    Barnett, Philip; Lascar, Claudia

    2012-01-01

    The current journal titles in earth and atmospheric sciences, that are unique to each of two databases, Web of Science and Scopus, were identified using different methods. Comparing by subject category shows that Scopus has hundreds of unique titles, and Web of Science just 16. The titles unique to each database have low SCImago Journal Rank…

  14. Generic HTML Form Processor: A versatile PHP script to save web-collected data into a MySQL database.

    PubMed

    Göritz, Anja S; Birnbaum, Michael H

    2005-11-01

The customizable PHP script Generic HTML Form Processor is intended to assist researchers and students in quickly setting up surveys and experiments that can be administered via the Web. This script relieves researchers from the burdens of writing new CGI scripts and building databases for each Web study. Generic HTML Form Processor processes any syntactically correct HTML form input and saves it into a dynamically created open-source database. We describe five modes of usage of the script that allow increasing functionality but require increasing levels of knowledge of PHP and Web servers: the first two modes require no previous knowledge, and the fifth requires PHP programming expertise. Use of Generic HTML Form Processor is free for academic purposes, and its Web address is www.goeritz.net/brmic.
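The central idea, deriving the storage schema from whatever fields a form happens to submit, can be sketched in a few lines. The sketch below is a Python/SQLite analogue of that approach (the original is a PHP/MySQL script); the table and field names are illustrative, not taken from the tool itself.

```python
import sqlite3

def save_form(con, table, fields):
    # Derive the schema from whatever field names the form submitted:
    # every field becomes a TEXT column, created on first use.
    cur = con.cursor()
    cols = ", ".join(f'"{k}" TEXT' for k in fields)
    cur.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
    names = ", ".join(f'"{k}"' for k in fields)
    marks = ", ".join("?" for _ in fields)
    cur.execute(f'INSERT INTO "{table}" ({names}) VALUES ({marks})',
                list(fields.values()))
    con.commit()
    # Return the number of stored submissions for this study.
    return cur.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()[0]

con = sqlite3.connect(":memory:")
n = save_form(con, "survey1", {"name": "A. Smith", "q1_answer": "agree"})
```

Because the table is created from the submitted field names, a new survey needs no new database setup, which is the convenience the abstract describes.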

  15. Turning Access into a web-enabled secure information system for clinical trials.

    PubMed

Chen, Dongquan; Chen, Wei-Bang; Soong, Mayhue; Soong, Seng-Jaw; Orthner, Helmuth F

    2009-08-01

Organizations with limited resources need to conduct clinical studies in a cost-effective but secure way, and clinical data residing in various individual databases need to be easily accessed and secured. Although widely available, digital certificates, encryption, and secure web servers have not been implemented as widely, partly due to a lack of understanding of needs and concerns over issues such as cost and difficulty of implementation. The objective of this study was to test the possibility of centralizing various databases and to demonstrate ways of offering an alternative to a large-scale, comprehensive, and costly commercial product, especially for simple phase I and II trials, with reasonable convenience and security. We report a working procedure to transform a standalone Access database into a secure Web-based information system. For data collection and reporting purposes, we centralized several individual databases and developed and tested a web-based secure server using self-issued digital certificates. The system lacks audit trails, and the cost of development and maintenance may hinder its wide application. In summary, the clinical trial databases scattered in various departments of an institution can be centralized into a web-enabled secure information system. Limitations such as the lack of a calendar and audit trail can be partially addressed with additional programming. The centralized Web system may provide an alternative to a comprehensive clinical trial management system.

  16. A Web-based open-source database for the distribution of hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Ferwerda, J. G.; Jones, S. D.; Du, Pei-Jun

    2006-10-01

With the coming of age of field spectroscopy as a non-destructive means to collect information on the physiology of vegetation, there is a need for storage of signatures and, more importantly, their metadata. Without the proper organisation of metadata, the signatures themselves become of limited use. In order to facilitate re-distribution of data, a database for the storage and distribution of hyperspectral signatures and their metadata was designed. The database was built using open-source software and can be used by the hyperspectral community to share their data. Data are uploaded through a simple web-based interface. The database recognizes major file formats from ASD, GER, and International Spectronics. The database source code is available for download through the hyperspectral.info web domain, and we invite suggestions for additions and modifications to the database, to be submitted through the online forums on the same website.

  17. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    PubMed

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid-body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs that differ from known complexes in terms of 3D structures, because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database that gathers prediction results and their predicted 3D complex structures and makes them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times as many PPI predictions as previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer.
MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on docking calculations with biochemical pathways and enables users to easily and quickly assess PPI feasibilities by archiving PPI predictions. MEGADOCK-Web also promotes the discovery of new PPIs and protein functions and is freely available for use at http://www.bi.cs.titech.ac.jp/megadock-web/ .

  18. Accessibility and quality of online information for pediatric orthopaedic surgery fellowships.

    PubMed

    Davidson, Austin R; Murphy, Robert F; Spence, David D; Kelly, Derek M; Warner, William C; Sawyer, Jeffrey R

    2014-12-01

    Pediatric orthopaedic fellowship applicants commonly use online-based resources for information on potential programs. Two primary sources are the San Francisco Match (SF Match) database and the Pediatric Orthopaedic Society of North America (POSNA) database. We sought to determine the accessibility and quality of information that could be obtained by using these 2 sources. The online databases of the SF Match and POSNA were reviewed to determine the availability of embedded program links or external links for the included programs. If not available in the SF Match or POSNA data, Web sites for listed programs were located with a Google search. All identified Web sites were analyzed for accessibility, content volume, and content quality. At the time of online review, 50 programs, offering 68 positions, were listed in the SF Match database. Although 46 programs had links included with their information, 36 (72%) of them simply listed http://www.sfmatch.org as their unique Web site. Ten programs (20%) had external links listed, but only 2 (4%) linked directly to the fellowship web page. The POSNA database does not list any links to the 47 programs it lists, which offer 70 positions. On the basis of a Google search of the 50 programs listed in the SF Match database, web pages were found for 35. Of programs with independent web pages, all had a description of the program and 26 (74%) described their application process. Twenty-nine (83%) listed research requirements, 22 (63%) described the rotation schedule, and 12 (34%) discussed the on-call expectations. A contact telephone number and/or email address was provided by 97% of programs. Twenty (57%) listed both the coordinator and fellowship director, 9 (26%) listed the coordinator only, 5 (14%) listed the fellowship director only, and 1 (3%) had no contact information given. 
The SF Match and POSNA databases provide few direct links to fellowship Web sites, and individual program Web sites either do not exist or do not effectively convey information about the programs. Improved accessibility and accurate information online would allow potential applicants to obtain information about pediatric fellowships in a more efficient manner.

  19. Type material in the NCBI Taxonomy Database

    PubMed Central

    Federhen, Scott

    2015-01-01

    Type material is the taxonomic device that ties formal names to the physical specimens that serve as exemplars for the species. For the prokaryotes these are strains submitted to the culture collections; for the eukaryotes they are specimens submitted to museums or herbaria. The NCBI Taxonomy Database (http://www.ncbi.nlm.nih.gov/taxonomy) now includes annotation of type material that we use to flag sequences from type in GenBank and in Genomes. This has important implications for many NCBI resources, some of which are outlined below. PMID:25398905

  20. Bronzed cowbird taken in Florida

    USGS Publications Warehouse

    Matteson, R.E.

    1970-01-01

On 8 November 1968 in Gainesville, Florida, I removed a male Bronzed Cowbird (Tangavius a. aeneus) from a blackbird decoy trap containing a large number of Brown-headed Cowbirds (Molothrus ater). Oliver L. Austin, Jr., at the Florida State Museum, verified the species identification by noting the notched inner webs of the outer three primaries, a characteristic of the genus. The subspecific identification was made at the U.S. National Museum, where the bird is now specimen number 531666. The subspecies normally ranges from south-central Texas and the Yucatan Peninsula south through Central America to Panama (Check-list of North American birds, fifth ed., Baltimore, Amer. Ornithol. Union, 1957, p. 542). This Gainesville specimen apparently is the first Bronzed Cowbird taken in Florida. Alexander Sprunt, Jr., (Florida bird life. In Addendum to Florida bird life, New York, Coward-McCann, 1963, p. 18) lists three photographed sightings at Sarasota, Florida, in April 196

  1. WebEAV

    PubMed Central

    Nadkarni, Prakash M.; Brandt, Cynthia M.; Marenco, Luis

    2000-01-01

    The task of creating and maintaining a front end to a large institutional entity-attribute-value (EAV) database can be cumbersome when using traditional client-server technology. Switching to Web technology as a delivery vehicle solves some of these problems but introduces others. In particular, Web development environments tend to be primitive, and many features that client-server developers take for granted are missing. WebEAV is a generic framework for Web development that is intended to streamline the process of Web application development for databases having a significant EAV component. It also addresses some challenging user interface issues that arise when any complex system is created. The authors describe the architecture of WebEAV and provide an overview of its features with suitable examples. PMID:10887163

  2. TryTransDB: A web-based resource for transport proteins in Trypanosomatidae.

    PubMed

    Sonar, Krushna; Kabra, Ritika; Singh, Shailza

    2018-03-12

TryTransDB is a web-based resource that stores transport protein data, which can be retrieved using a standalone BLAST tool. We have attempted to create an integrated database that can be a one-stop shop for researchers working with transport proteins of the Trypanosomatidae family. TryTransDB (Trypanosomatidae Transport Protein Database) is a comprehensive web-based resource that can run a BLAST search against most of the transport protein sequences (protein and nucleotide) from organisms of the Trypanosomatidae family. This web resource further allows users to compute a phylogenetic tree by performing multiple sequence alignment (MSA) with the embedded ClustalW suite. Cross-links to other databases also help in gathering more information on a given transport protein from a single website.

  3. Using Web Database Tools To Facilitate the Construction of Knowledge in Online Courses.

    ERIC Educational Resources Information Center

    McNeil, Sara G.; Robin, Bernard R.

    This paper presents an overview of database tools that dynamically generate World Wide Web materials and focuses on the use of these tools to support research activities, as well as teaching and learning. Database applications have been used in classrooms to support learning activities for over a decade, but, although business and e-commerce have…

  4. Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.

    PubMed

    Falagas, Matthew E; Pitsouni, Eleni I; Malietzis, George A; Pappas, Georgios

    2008-02-01

    The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.

  5. Database Organisation in a Web-Enabled Free and Open-Source Software (foss) Environment for Spatio-Temporal Landslide Modelling

    NASA Astrophysics Data System (ADS)

    Das, I.; Oberai, K.; Sarathi Roy, P.

    2012-07-01

Landslides manifest themselves in different mass-movement processes and are considered among the most complex natural hazards occurring on the Earth's surface. Making a landslide database available online via the WWW (World Wide Web) promotes the spread of landslide information to all stakeholders. The aim of this research is to present a comprehensive database for generating landslide hazard scenarios with the help of available historic records of landslides and geo-environmental factors, and to make them available over the Web using geospatial Free and Open Source Software (FOSS). FOSS reduces the cost of the project drastically, as proprietary software is very costly. Landslide data generated for the period 1982 to 2009 were compiled along the national highway road corridor in the Indian Himalayas. All the geo-environmental datasets, along with the landslide susceptibility map, were served through a WebGIS client interface. The open-source University of Minnesota (UMN) MapServer was used as the GIS server software for developing the web-enabled landslide geospatial database. A PHP/MapScript server-side application serves as the front end, and PostgreSQL with the PostGIS extension serves as the back end for the web-enabled landslide spatio-temporal databases. This dynamic virtual visualization process through a web platform brings the understanding of the landslides, and of the resulting damage, closer to the affected people and the user community. The landslide susceptibility dataset is also made available as an Open Geospatial Consortium (OGC) Web Feature Service (WFS), which can be accessed through any OGC-compliant open-source or proprietary GIS software.
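As an illustration of the OGC WFS access pattern mentioned above, the sketch below composes a WFS 1.1.0 GetFeature request URL of the kind any OGC-compliant client sends to a MapServer endpoint. The endpoint and layer name are hypothetical, not those of the actual landslide service.

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, bbox=None, max_features=None):
    # Key-value pairs of a WFS 1.1.0 GetFeature request.
    params = {
        "SERVICE": "WFS",
        "VERSION": "1.1.0",
        "REQUEST": "GetFeature",
        "TYPENAME": type_name,
    }
    if bbox is not None:
        params["BBOX"] = ",".join(str(v) for v in bbox)  # minx,miny,maxx,maxy
    if max_features is not None:
        params["MAXFEATURES"] = str(max_features)
    return base_url + "?" + urlencode(params)

url = wfs_getfeature_url("http://example.org/cgi-bin/mapserv",
                         "landslide_susceptibility",
                         bbox=(78.0, 30.0, 78.5, 30.5), max_features=50)
```

Fetching this URL from a WFS server would return the matching features as GML, which is what lets both open-source and proprietary GIS clients consume the same dataset.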

  6. Metadata tables to enable dynamic data modeling and web interface design: the SEER example.

    PubMed

    Weiner, Mark; Sherr, Micah; Cohen, Abigail

    2002-04-01

    A wealth of information addressing health status, outcomes and resource utilization is compiled and made available by various government agencies. While exploration of the data is possible using existing tools, in general, would-be users of the resources must acquire CD-ROMs or download data from the web, and upload the data into their own database. Where web interfaces exist, they are highly structured, limiting the kinds of queries that can be executed. This work develops a web-based database interface engine whose content and structure is generated through interaction with a metadata table. The result is a dynamically generated web interface that can easily accommodate changes in the underlying data model by altering the metadata table, rather than requiring changes to the interface code. This paper discusses the background and implementation of the metadata table and web-based front end and provides examples of its use with the NCI's Surveillance, Epidemiology and End-Results (SEER) database.
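The metadata-table idea can be sketched briefly: the field definitions live in data, and the query builder consults them instead of hard-coding columns, so changing the data model means editing the metadata rather than the interface code. The field names below are invented for illustration, not the actual SEER schema.

```python
# Each metadata row (column, display label, type) drives both the generated
# web form and the SQL; adding a field means adding a row here, not new code.
METADATA = [
    ("site", "Cancer site", "text"),
    ("year_dx", "Year of diagnosis", "int"),
    ("survival_months", "Survival (months)", "int"),
]

def build_query(table, filters):
    # Only fields declared in the metadata table are queryable.
    known = {col for col, _label, _type in METADATA}
    clauses, values = [], []
    for col, val in filters.items():
        if col not in known:
            raise ValueError(f"unknown field: {col}")
        clauses.append(f"{col} = ?")
        values.append(val)
    where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
    return f"SELECT * FROM {table}{where}", values

sql, vals = build_query("seer_cases", {"site": "lung", "year_dx": 1999})
```

The same metadata rows can be iterated over to render the form fields themselves, which is what keeps the interface and the data model in sync.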

  7. CUNY+ Web: Usability Study of the Web-Based GUI Version of the Bibliographic Database of the City University of New York (CUNY).

    ERIC Educational Resources Information Center

    Oulanov, Alexei; Pajarillo, Edmund J. Y.

    2002-01-01

    Describes the usability evaluation of the CUNY (City University of New York) information system in Web and Graphical User Interface (GUI) versions. Compares results to an earlier usability study of the basic information database available on CUNY's wide-area network and describes the applicability of the previous usability instrument to this…

  8. The integrated web service and genome database for agricultural plants with biotechnology information.

    PubMed

    Kim, Changkug; Park, Dongsuk; Seol, Youngjoo; Hahn, Jangho

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage.

  9. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    PubMed

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  10. The Worldviews Network: Digital Planetariums for Engaging Public Audiences in Global Change Issues

    NASA Astrophysics Data System (ADS)

    Wyatt, R. J.; Koontz, K.; Yu, K.; Gardiner, N.; Connolly, R.; Mcconville, D.

    2013-12-01

Utilizing the capabilities of digital planetariums, the Denver Museum of Nature & Science, the California Academy of Sciences, NOVA/WGBH, The Elumenati, and affiliates of the National Oceanic & Atmospheric Administration formed the Worldviews Network. The network's mission is to place Earth in its cosmic context to encourage participants to explore connections between social & ecological issues in their backyards. Worldviews launched with informal science institution partners: the American Museum of Natural History, the Perot Museum of Nature & Science, the Journey Museum, the Bell Museum of Natural History, the University of Michigan Natural History Museum, and the National Environmental Modeling & Analysis Center. Worldviews uses immersive visualization technology to engage public audiences on issues of global environmental change at a bioregional level. An immersive planetarium show and dialogue deepen public engagement and awareness of complex human-natural system interactions. People have altered the global climate system. Our communities are increasingly vulnerable to extreme weather events. Land use decisions that people make every day put both human lives and biodiversity at risk through direct and indirect effects. The Worldviews programs demonstrate the complex linkages between Earth's physical and biological systems and their relationship to human health, agriculture, infrastructure, water resources, and energy. We have focused on critical thresholds, such as freshwater use, biodiversity loss, land use change, and anthropogenic changes to the nitrogen and phosphorus cycles. We have been guided by environmental literacy principles to help our audiences understand that humans drive current trends in coupled human-natural systems--and that humans could choose to play an important role in reversing these trends.
Museum and planetarium staff members join the Worldviews Network team and external advisers to produce programs that span cosmic, global, and bioregional scales. Each presentation employs a 'See, Know, Do' transformative learning model. 'Seeing' involves the creation, presentation, and experience of viewing immersive visualizations within the planetarium to engage visitors' visual-spatial intelligence. For 'Knowing,' the narratives are constructed to help visitors understand the web of physical-ecological-social systems that interact on Earth. The 'Doing' component emerges from interaction among participants: for example, researchers and non-governmental organizations help audience members conceive of their own relationship to the highlighted issue and ways they may remain involved in systemically addressing problems the audience identifies.

  11. Digital hand atlas and computer-aided bone age assessment via the Web

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente

    1999-07-01

A frequently used method of bone age assessment is atlas matching, in which a radiological hand image is examined against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflect skeletal maturity, race and sex differences, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database, and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and to present them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect skeletal maturity, will be extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report will then be sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the current, out-of-date one and allow bone age assessment to be computerized and performed conveniently via the Web. In this paper, we present the system design and Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.

  12. The NCBI BioCollections Database

    PubMed Central

    Sharma, Shobha; Ciufo, Stacy; Starchenko, Elena; Darji, Dakshesh; Chlumsky, Larry; Karsch-Mizrachi, Ilene

    2018-01-01

    Abstract The rapidly growing set of GenBank submissions includes sequences that are derived from vouchered specimens. These are associated with culture collections, museums, herbaria and other natural history collections, both living and preserved. Correct identification of the specimens studied, along with a method to associate the sample with its institution, is critical to the outcome of related studies and analyses. The National Center for Biotechnology Information BioCollections Database was established to allow the association of specimen vouchers and related sequence records to their home institutions. This process also allows cross-linking from the home institution for quick identification of all records originating from each collection. Database URL: https://www.ncbi.nlm.nih.gov/biocollections PMID:29688360

  13. Study of Italian Renaissance sculptures using an external beam nuclear microprobe

    NASA Astrophysics Data System (ADS)

    Zucchiatti, A.; Bouquillon, A.; Moignard, B.; Salomon, J.; Gaborit, J. R.

    2000-03-01

    The use of an extracted proton micro-beam for the PIXE analysis of glazes is discussed in the context of the growing interest in the creation of an analytical database on Italian Renaissance glazed terracotta sculptures. Some results concerning the frieze of an altarpiece of the Louvre museum, featuring white angels and cherubs heads, are presented.

  14. The integrated web service and genome database for agricultural plants with biotechnology information

    PubMed Central

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web-based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on the genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage. PMID:21887015

  15. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, automatic information extraction tools, web technology and databases. To appear in the Journal of Database Management.

  16. COMPUTER-AIDED SCIENCE POLICY ANALYSIS AND RESEARCH (WEBCASPAR)

    EPA Science Inventory

    WebCASPAR is a database system containing information about academic science and engineering resources and is available on the World Wide Web. Included in the database is information from several of SRS's academic surveys plus information from a variety of other sources, includin...

  17. EpiCollect: linking smartphones to web applications for epidemiology, ecology and community data collection.

    PubMed

    Aanensen, David M; Huntley, Derek M; Feil, Edward J; al-Own, Fada'a; Spratt, Brian G

    2009-09-16

    Epidemiologists and ecologists often collect data in the field and, on returning to their laboratory, enter them into a database for further analysis. The recent introduction of mobile phones that run the open-source Android operating system, and that include (among other features) both GPS and Google Maps, provides new opportunities for developing mobile phone applications which, in conjunction with web applications, allow two-way communication between field workers and their project databases. Here we describe a generic framework consisting of mobile phone software, EpiCollect, and a web application located within www.spatialepidemiology.net. Data collected by multiple field workers can be submitted by phone, together with GPS data, to a common web database and can be displayed and analysed, along with previously collected data, using Google Maps (or Google Earth). Similarly, data from the web database can be requested and displayed on the mobile phone, again using Google Maps. Data filtering options allow the display of data submitted by individual field workers or, for example, only those data within certain values of a measured variable or within a time period. Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give field workers display and analysis tools on their mobile phones similar to those they would have when viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application to ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting 'citizen scientists' to contribute data easily to central databases through their mobile phones.
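    The server-side filtering described above (by measured value or time period) can be sketched as follows; the record fields, coordinates and thresholds are invented for illustration:

```python
from datetime import date

# Illustrative records as field workers' phones might submit them:
# a measured value plus GPS coordinates and a collection date.
records = [
    {"lat": 51.50, "lon": -0.12, "value": 3.2, "when": date(2009, 6, 1)},
    {"lat": 48.85, "lon": 2.35,  "value": 7.9, "when": date(2009, 7, 15)},
    {"lat": 52.52, "lon": 13.40, "value": 5.1, "when": date(2009, 8, 3)},
]

def filter_records(rows, lo=None, hi=None, start=None, end=None):
    """Keep rows whose measured value and date fall in the given ranges."""
    out = []
    for r in rows:
        if lo is not None and r["value"] < lo:
            continue
        if hi is not None and r["value"] > hi:
            continue
        if start is not None and r["when"] < start:
            continue
        if end is not None and r["when"] > end:
            continue
        out.append(r)
    return out

summer = filter_records(records, lo=4.0, start=date(2009, 7, 1))
print([r["lon"] for r in summer])  # -> [2.35, 13.4]
```

    The filtered rows, still carrying their GPS fields, are what would be handed to a map layer for display.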

  18. Information Retrieval in Telemedicine: a Comparative Study on Bibliographic Databases

    PubMed Central

    Ahmadi, Maryam; Sarabi, Roghayeh Ershad; Orak, Roohangiz Jamshidi; Bahaadinbeigy, Kambiz

    2015-01-01

    Background and Aims: The first step in any systematic review is selecting the most valid database, i.e. the one that provides the highest number of relevant references. This study was carried out to determine the most suitable database for information retrieval in the telemedicine field. Methods: The CINAHL, PubMed, Web of Science and Scopus databases were searched for telemedicine combined with education, cost benefit and patient satisfaction. After analysis of the results, the accuracy coefficient, sensitivity, uniqueness and overlap of the databases were calculated. Results: The databases differed in the number of retrieved articles. PubMed was identified as the most suitable database for retrieving information on the selected topics, with accuracy and sensitivity ratios of 50.7% and 61.4% respectively. The percentage of unique retrieved articles ranged from 38% for PubMed to 3.0% for CINAHL. The highest overlap rate (18.6%) was found between PubMed and Web of Science. Less than 1% of articles were indexed in all the searched databases. Conclusion: PubMed is suggested as the most suitable database for starting a search in telemedicine; after PubMed, Scopus and Web of Science can retrieve about 90% of the relevant articles. PMID:26236086
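    The metrics reported above can be illustrated with toy sets of retrieved article IDs. The definitions below (sensitivity as recall against a relevant set, overlap as a Jaccard-style ratio) are plausible readings of the study's terms, not its exact formulas:

```python
# Toy article-ID sets standing in for each database's retrieval results;
# the IDs and gold-standard "relevant" set are invented.
pubmed = {1, 2, 3, 4, 5, 6}
wos    = {4, 5, 6, 7}
relevant = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

def sensitivity(retrieved, relevant):
    """Share of all relevant articles that a database retrieves."""
    return len(retrieved & relevant) / len(relevant)

def overlap(a, b):
    """Share of the union of two result sets found in both."""
    return len(a & b) / len(a | b)

print(round(sensitivity(pubmed, relevant), 2))  # 0.6
print(round(overlap(pubmed, wos), 2))           # 0.43
```

    On these toy sets, the toy PubMed retrieves 60% of the relevant articles and shares 43% of the combined results with the toy Web of Science.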

  19. Protein Information Resource: a community resource for expert annotation of protein data

    PubMed Central

    Barker, Winona C.; Garavelli, John S.; Hou, Zhenglin; Huang, Hongzhan; Ledley, Robert S.; McGarvey, Peter B.; Mewes, Hans-Werner; Orcutt, Bruce C.; Pfeiffer, Friedhelm; Tsugita, Akira; Vinayaka, C. R.; Xiao, Chunlin; Yeh, Lai-Su L.; Wu, Cathy

    2001-01-01

    The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high-quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP. PMID:11125041

  20. Information Retrieval in Telemedicine: a Comparative Study on Bibliographic Databases.

    PubMed

    Ahmadi, Maryam; Sarabi, Roghayeh Ershad; Orak, Roohangiz Jamshidi; Bahaadinbeigy, Kambiz

    2015-06-01

    The first step in any systematic review is selecting the most valid database, i.e. the one that provides the highest number of relevant references. This study was carried out to determine the most suitable database for information retrieval in the telemedicine field. The CINAHL, PubMed, Web of Science and Scopus databases were searched for telemedicine combined with education, cost benefit and patient satisfaction. After analysis of the results, the accuracy coefficient, sensitivity, uniqueness and overlap of the databases were calculated. The databases differed in the number of retrieved articles. PubMed was identified as the most suitable database for retrieving information on the selected topics, with accuracy and sensitivity ratios of 50.7% and 61.4% respectively. The percentage of unique retrieved articles ranged from 38% for PubMed to 3.0% for CINAHL. The highest overlap rate (18.6%) was found between PubMed and Web of Science. Less than 1% of articles were indexed in all the searched databases. PubMed is suggested as the most suitable database for starting a search in telemedicine; after PubMed, Scopus and Web of Science can retrieve about 90% of the relevant articles.

  1. Issue of data acquisition and processing using short range photogrammetry and terrestrial laser scanning for educational portals and virtual museums based on Wawel cathedral. (Polish Title: Problematyka pozyskiwania i przetwarzania danych fotogrametrycznych i z naziemnego skaningu laserowego na potrzeby tworzenia portali edukacyjnych i wirtualnych muzeów na przykładzie Katedry Wawelskiej)

    NASA Astrophysics Data System (ADS)

    Mitka, B.; Szelest, P.

    2013-12-01

    This paper presents issues related to the acquisition and processing of terrestrial photogrammetry and laser scanning data for building educational portals and virtual museums. It discusses the specific measurement and data-processing requirements for different kinds of objects, ranging from architecture through sculpture and architectural detail to fabric and individual museum exhibits. Educational portals and virtual museums require modern, high-quality visual materials (3D models, virtual tours, animations, etc.) supplemented by descriptive content or audio commentary. The main sources of such materials are terrestrial laser scanning and photogrammetry, technologies that provide complete geometric information about the presented objects. However, the performance requirements of web services impose severe restrictions on the presented content, so the geometry must be optimized to streamline its presentation. An equally important problem concerns the selection of an appropriate measurement technology and data-processing workflow for each type of object. Only a skillful selection of measuring equipment and data-processing tools ensures a satisfactory end result. Both terrestrial laser scanning and digital close-range photogrammetry have strengths that should be exploited, but also limitations that must be taken into account in this kind of work. The key is choosing equipment appropriate to both the measured object and the setting, for example matching the scanner's point density, or the pixel size of the photographs, to the object.

  2. Construction and validation of a web-based epidemiological database for inflammatory bowel diseases in Europe An EpiCom study.

    PubMed

    Burisch, Johan; Cukovic-Cavka, Silvija; Kaimakliotis, Ioannis; Shonová, Olga; Andersen, Vibeke; Dahlerup, Jens F; Elkjaer, Margarita; Langholz, Ebbe; Pedersen, Natalia; Salupere, Riina; Kolho, Kaija-Leena; Manninen, Pia; Lakatos, Peter Laszlo; Shuhaibar, Mary; Odes, Selwyn; Martinato, Matteo; Mihu, Ion; Magro, Fernando; Belousova, Elena; Fernandez, Alberto; Almer, Sven; Halfvarson, Jonas; Hart, Ailsa; Munkholm, Pia

    2011-08-01

    The EpiCom study investigates a possible East-West gradient in the incidence of IBD in Europe and its association with environmental factors. A secured web-based database is used to facilitate and centralize data registration. The aim was to construct and validate a web-based inception cohort database available in both English and Russian. The EpiCom database was constructed in collaboration with all 34 participating centers. The database was translated into Russian using forward translation; patient questionnaires were translated by simplified forward-backward translation. Data insertion implies fulfillment of international diagnostic criteria and covers disease activity, medical therapy, quality of life, work productivity and activity impairment, outcome of pregnancy, surgery, cancer and death. Data are secured by the WinLog3 System, developed in cooperation with the Danish Data Protection Agency. Validation of the database was performed in two consecutive rounds, each followed by corrections in accordance with comments. The EpiCom database fulfills the requirements of the participating countries' local data security agencies by being stored at a single location. The database was rated "good" or "very good" overall by 81% of the participants after the second validation round, and its general applicability was rated "good" or "very good" by 77%. In the inclusion period January 1st to December 31st, 2010, 1336 IBD patients were included in the database. A user-friendly, tailor-made and secure web-based inception cohort database has been successfully constructed, facilitating remote data input. The incidence of IBD in 23 European countries can be found at www.epicom-ecco.eu. Copyright © 2011 European Crohn's and Colitis Organisation. All rights reserved.

  3. Upgrades to the TPSX Material Properties Database

    NASA Technical Reports Server (NTRS)

    Squire, T. H.; Milos, F. S.; Partridge, Harry (Technical Monitor)

    2001-01-01

    The TPSX Material Properties Database is a web-based tool that serves as a database for properties of advanced thermal protection materials. TPSX provides an easy user interface for retrieving material property information in a variety of forms, both graphical and textual. The primary purpose and advantage of TPSX is to maintain a high-quality source of often-used thermal protection material properties in a convenient, easily accessible form for distribution to the government and aerospace industry communities. Last year a major upgrade to the TPSX web site was completed. This year, through the efforts of researchers at several NASA centers, the Office of the Chief Engineer awarded funds to update and expand the databases in TPSX. The FY01 effort focuses on updating and correcting the Ames and Johnson thermal protection materials databases. In this session we will summarize the improvements made to the web site last year, report on the status of the ongoing database updates, describe the planned upgrades for FY02 and FY03, and provide a demonstration of TPSX.

  4. Village Green Project: Web-accessible Database

    EPA Science Inventory

    The purpose of this web-accessible database is for the public to be able to view instantaneous readings from a solar-powered air monitoring station located in a public location (prototype pilot test is outside of a library in Durham County, NC). The data are wirelessly transmitte...

  5. The Brainomics/Localizer database.

    PubMed

    Papadopoulos Orfanos, Dimitri; Michel, Vincent; Schwartz, Yannick; Pinel, Philippe; Moreno, Antonio; Le Bihan, Denis; Frouin, Vincent

    2017-01-01

    The Brainomics/Localizer database exposes part of the data collected by the in-house Localizer project, which planned to acquire four types of data from volunteer research subjects: anatomical MRI scans, functional MRI data, behavioral and demographic data, and DNA sampling. Over the years, this local project has been collecting such data from hundreds of subjects. We had selected 94 of these subjects for their complete datasets, including all four types of data, as the basis for a prior publication; the Brainomics/Localizer database publishes the data associated with these 94 subjects. Since regulatory rules prevent us from making genetic data available for download, the database serves only anatomical MRI scans, functional MRI data, behavioral and demographic data. To publish this set of heterogeneous data, we use dedicated software based on the open-source CubicWeb semantic web framework. Through genericity in the data model and flexibility in the display of data (web pages, CSV, JSON, XML), CubicWeb helps us expose these complex datasets in original and efficient ways. Copyright © 2015 Elsevier Inc. All rights reserved.
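    The flexible multi-format display mentioned above (web pages, CSV, JSON, XML) can be imitated with standard-library serializers; the subject record and field names below are invented, not the actual Brainomics schema:

```python
import csv
import io
import json
from xml.etree import ElementTree as ET

# One hypothetical demographic record; field names are illustrative only.
subject = {"id": "S001", "age": 27, "handedness": "right"}

def to_json(rec):
    """Serialize the record as a JSON object."""
    return json.dumps(rec)

def to_csv(rec):
    """Serialize the record as a one-row CSV with a header line."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rec)
    writer.writeheader()
    writer.writerow(rec)
    return buf.getvalue()

def to_xml(rec):
    """Serialize the record as a flat XML element per field."""
    root = ET.Element("subject")
    for key, value in rec.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

print(to_json(subject))
print(to_csv(subject).strip())
print(to_xml(subject))
```

    Keeping the data model generic and letting each view render the same record differently is the design idea the CubicWeb framework generalizes.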

  6. Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics

    PubMed Central

    Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.

    2012-01-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
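    A minimal sketch of the document-store model the paper contrasts with relational databases: schemaless documents addressed by an _id, with a revision counter bumped on each update. This mimics the CouchDB style in memory and is not the CouchDB HTTP API:

```python
import uuid

class DocStore:
    """In-memory sketch of a CouchDB-style document store: schemaless
    JSON-like documents keyed by _id, each update bumping a revision
    counter. Illustration only, not the real CouchDB interface."""

    def __init__(self):
        self._docs = {}

    def put(self, doc):
        """Store (or update) a document; returns its id and new revision."""
        doc = dict(doc)  # copy so the caller's dict is untouched
        doc.setdefault("_id", uuid.uuid4().hex)
        doc["_rev"] = int(doc.get("_rev", 0)) + 1
        self._docs[doc["_id"]] = doc
        return doc["_id"], doc["_rev"]

    def get(self, doc_id):
        return self._docs[doc_id]

store = DocStore()
# Documents in the same store need not share a schema:
store.put({"_id": "TP53", "symbol": "TP53", "aliases": ["p53"]})
store.put({"_id": "rs123", "chrom": "17", "pos": 7579472})
print(store.get("TP53")["_rev"])  # 1
```

    The absence of a fixed schema is what makes this model convenient for gene-centric annotation records whose attributes vary from source to source.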

  7. Fauna Europaea: Coleoptera 2 (excl. series Elateriformia, Scarabaeiformia, Staphyliniformia and superfamily Curculionoidea)

    PubMed Central

    Alonso Zarazaga, Miguel-Angel; Slipinski, Adam; Nilsson, Anders; Jelínek, Josef; Taglianti, Augusto Vigna; Turco, Federica; Otero, Carlos; Canepari, Claudio; Kral, David; Liberti, Gianfranco; Sama, Gianfranco; Nardi, Gianluca; Löbl, Ivan; Horak, Jan; Kolibac, Jiri; Háva, Jirí; Sapiejewski, Maciej; Jäch, Manfred; Bologna, Marco Alberto; Biondi, Maurizio; Nikitsky, Nikolai B.; Mazzoldi, Paolo; Zahradnik, Petr; Wegrzynowicz, Piotr; Constantin, Robert; Gerstmeier, Roland; Zhantiev, Rustem; Fattorini, Simone; Tomaszewska, Wioletta; Rücker, Wolfgang H.; Vazquez-Albalate, Xavier; Cassola, Fabio; Angelini, Fernando; Johnson, Colin; Schawaller, Wolfgang; Regalin, Renato; Baviera, Cosimo; Rocchi, Saverio; Cianferoni, Fabio; Beenen, Ron; Schmitt, Michael; Sassi, David; Kippenberg, Horst; Zampetti, Marcello Franco; Trizzino, Marco; Chiari, Stefano; Carpaneto, Giuseppe Maria; Sabatelli, Simone

    2015-01-01

    Abstract Fauna Europaea provides a public web service with an index of scientific names (including synonyms) of all living European land and freshwater animals, their geographical distribution at country level (up to the Urals, excluding the Caucasus region), and some additional information. The Fauna Europaea project covers about 230,000 taxonomic names, including 130,000 accepted species and 14,000 accepted subspecies, much more than the originally projected 100,000 species. This represents a huge effort by more than 400 contributing specialists throughout Europe and is a unique (standard) reference suitable for many users in science, government, industry, nature conservation and education. Coleoptera represent a huge assemblage of holometabolous insects, including as a whole more than 200 recognized families and some 400,000 described species worldwide. Basic information is summarized on their biology, ecology, economic relevance, and the estimated number of undescribed species worldwide. A little fewer than 30,000 species are listed from Europe. The Coleoptera 2 section of the Fauna Europaea database (Archostemata, Myxophaga, Adephaga and Polyphaga excl. the series Elateriformia, Scarabaeiformia, Staphyliniformia and the superfamily Curculionoidea) encompasses 80 families (according to the previously accepted family-level systematic framework) and approximately 13,000 species. Tabulations include a complete list of the families dealt with, the number of species in each, the names of all specialists involved and, where possible, an estimate of the gaps in terms of the total number of species at a European level. A list of some recent useful references is appended. Most families included in the Coleoptera 2 section have been updated in the most recent release of the Fauna Europaea index, or are ready to be updated as soon as the FaEu data management environment completes its migration from the Zoological Museum Amsterdam to the Berlin Museum für Naturkunde. PMID:25892924

  8. CREDO: a structural interactomics database for drug discovery

    PubMed Central

    Schreyer, Adrian M.; Blundell, Tom L.

    2013-01-01

    CREDO is a unique relational database storing all pairwise atomic interactions of inter- as well as intra-molecular contacts between small molecules and macromolecules found in experimentally determined structures from the Protein Data Bank. These interactions are integrated with further chemical and biological data. The database implements useful data structures and algorithms such as cheminformatics routines to create a comprehensive analysis platform for drug discovery. The database can be accessed through a web-based interface, downloads of data sets and web services at http://www-cryst.bioc.cam.ac.uk/credo. Database URL: http://www-cryst.bioc.cam.ac.uk/credo PMID:23868908
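    A toy relational schema in the spirit of storing pairwise atomic contacts, queryable by contact type and distance; the table layout, column names and rows are invented, not CREDO's actual schema:

```python
import sqlite3

# Invented mini-schema: one row per atom-atom contact between a ligand
# ("LIG") and protein residues, with distance in angstroms.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE contact (
    atom_a TEXT, atom_b TEXT, distance REAL, contact_type TEXT)""")
con.executemany(
    "INSERT INTO contact VALUES (?, ?, ?, ?)",
    [("LIG:C7", "TYR93:OH",  2.8, "hbond"),
     ("LIG:C1", "PHE110:CZ", 3.9, "hydrophobic"),
     ("LIG:N2", "ASP25:OD1", 2.6, "hbond")])

# All hydrogen bonds shorter than 3.0 A involving the ligand:
rows = con.execute(
    "SELECT atom_a, atom_b FROM contact "
    "WHERE contact_type = 'hbond' AND distance < 3.0").fetchall()
print(rows)
```

    Queries of this shape, filtering interactions by type and geometry, are the kind of analysis a structural interactomics database makes routine.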

  9. Columba: an integrated database of proteins, structures, and annotations.

    PubMed

    Trissl, Silke; Rother, Kristian; Müller, Heiko; Steinke, Thomas; Koch, Ina; Preissner, Robert; Frömmel, Cornelius; Leser, Ulf

    2005-03-31

    Structural and functional research often requires the computation of sets of protein structures based on certain properties of the proteins, such as sequence features, fold classification, or functional annotation. Compiling such sets using current web resources is tedious because the necessary data are spread over many different databases. To facilitate this task, we have created COLUMBA, an integrated database of annotations of protein structures. COLUMBA currently integrates twelve different databases, including PDB, KEGG, Swiss-Prot, CATH, SCOP, the Gene Ontology, and ENZYME. The database can be searched using either keyword search or data-source-specific web forms. Users can thus quickly select and download PDB entries that, for instance, participate in a particular pathway, are classified as containing a certain CATH architecture, are annotated as having a certain molecular function in the Gene Ontology, and whose structures have a resolution under a defined threshold. Query results are provided in both machine-readable extensible markup language and human-readable format. The structures themselves can be viewed interactively on the web. The COLUMBA database facilitates the creation of protein structure data sets for many structure-based studies. It allows users to combine queries on a number of structure-related databases not covered by other projects at present, so that information on both large and small sets of protein structures can be used efficiently. The web interface for COLUMBA is available at http://www.columba-db.de.

  10. Content and Accessibility of Shoulder and Elbow Fellowship Web Sites in the United States.

    PubMed

    Young, Bradley L; Oladeji, Lasun O; Cichos, Kyle; Ponce, Brent

    2016-01-01

    Increasing numbers of training physicians use the Internet to gather information about graduate medical education programs. The content and accessibility of the web sites that provide this information have been shown to influence applicants' decisions. Assessments of orthopedic fellowship web sites, including those for sports medicine, pediatrics, hand, and spine, have found varying degrees of accessibility and material. The purpose of this study was to evaluate the accessibility and content of American Shoulder and Elbow Surgeons (ASES) fellowship web sites (SEFWs). A complete list of ASES programs was obtained from a database on the ASES web site. The accessibility of each SEFW was assessed by the existence of a functioning link in the database and through Google®. The following content areas of each SEFW were then evaluated: fellow education, faculty/previous fellow information, and recruitment. At the time of the study, 17 of the 28 (60.7%) ASES programs had web sites accessible through Google®, and only five (17.9%) had functioning links in the ASES database. Nine programs lacked a web site. Concerning web site content, the majority of SEFWs contained information regarding research opportunities, research requirements, case descriptions, meetings and conferences, teaching responsibilities, attending faculty, the application process, and a program description. Fewer than half of the SEFWs provided information regarding rotation schedules, current fellows, previous fellows, on-call expectations, journal clubs, the medical school and residency of current fellows, employment of previous fellows, current research, and previous research. A large portion of ASES fellowship programs lacked functioning web sites, and even fewer provided functioning links through the ASES database. Valuable information for potential applicants was largely inadequate across current SEFWs.

  11. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have been actively developed, building on advances in high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system that can effectively perform cooperative retrieval among distributed databases. The proposed system introduces the concept of a 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, the concept of a 'domain' is defined as the managing unit of retrieval; retrieval is performed effectively through cooperative processing among multiple domains. A communication language and protocols are also defined and are used for all communications in the system. A language interpreter in each machine translates the communication language into that machine's internal language, so internal modules such as the DBMS and user interface can be freely chosen. The concept of a 'content set' is also introduced: a content set is a package of mutually related contents that the system handles as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among contents in the content set. To verify the functions of the proposed system, a networked electronic museum was built experimentally. The results indicate that the proposed system can effectively retrieve the target contents under the control of a number of distributed domains, and that it works effectively even as the system grows large.
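    The retrieval-manager idea, fanning a query out to several domains and merging the hits into one logical result, can be sketched as follows (domain names and contents are invented):

```python
# Each "domain" wraps its own database; the retrieval manager queries
# all of them and merges matches so the caller sees one logical database.
DOMAINS = {
    "paintings":   [{"title": "Water Lilies", "era": "modern"}],
    "sculpture":   [{"title": "David", "era": "renaissance"}],
    "manuscripts": [{"title": "Book of Hours", "era": "medieval"}],
}

def retrieve(predicate, domains=DOMAINS):
    """Query every domain and merge matching items into one result list,
    tagging each hit with the domain it came from."""
    hits = []
    for name, items in domains.items():
        for item in items:
            if predicate(item):
                hits.append({**item, "domain": name})
    return hits

print(retrieve(lambda item: item["era"] != "modern"))
```

    A production retrieval manager would also consult directory data to decide which domains to query at all, rather than broadcasting to every one.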

  12. Accessing the SEED genome databases via Web services API: tools for programmers.

    PubMed

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

    The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to annotate microbial genomes accurately and efficiently. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole-genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that lets users choose the programming platform most suitable for their application. Example code demonstrates that the Web services can be accessed using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to accessing the SEED database. Using Web services, a robust API for access to genomics data is provided without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
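    Programmatic access of the kind described can be sketched by building an HTTP request from a method name and parameters. The endpoint and method names below are placeholders, not the SEED API's real ones; consult the published API documentation for those:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_request(method, **params):
    """Assemble a GET request for a hypothetical annotation web service.

    The base URL and the 'function' query convention are assumptions
    made for illustration only.
    """
    base = "https://servers.example.org/api"  # placeholder endpoint
    query = urlencode({"function": method, **params})
    return Request(f"{base}?{query}")

req = build_request("genomes_for_taxon", taxon_id="83333")
print(req.full_url)
```

    In a real script the request would be passed to `urllib.request.urlopen` (or an HTTP client of choice) and the response parsed; the point here is only that each exposed method reduces to a parameterized HTTP call.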

  13. The GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database: one enables exploration of the data via interactive maps (a web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using open-source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map; for example, only ASTER scenes or glacier outlines from 2002, or from autumn of any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using the mouse) or on text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language) and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
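    An OGC-compliant map server like the one described accepts standard WMS GetMap requests, which can be assembled from a handful of query parameters. The parameter names below are those the WMS 1.1.1 specification defines; the host and layer name are placeholders:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=800, height=600):
    """Build a WMS 1.1.1 GetMap URL for one layer over a lon/lat bbox."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width, "HEIGHT": height, "FORMAT": "image/png",
    }
    return f"{base}?{urlencode(params)}"

url = wms_getmap_url("https://glims.example.org/wms",  # placeholder host
                     "glacier_outlines", (-180, -90, 180, 90))
print(url)
```

    Because the request format is standardized, any WMS client, from a desktop GIS to another web site, can pull the glacier layer by issuing exactly this kind of URL.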

  14. Web application for detailed real-time database transaction monitoring for CMS condition data

    NASA Astrophysics Data System (ADS)

    de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2012-12-01

    In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases hosted on several servers, both inside and outside the CERN network. In this scenario, monitoring the different databases is a crucial database administration issue, since different information may be required depending on users' tasks, such as data transfer, inspection, planning, and security. We present here a web application based on a Python web framework and Python modules for data mining. To customize the GUI we record traces of user interactions, which are used to build use case models. In addition, the application detects errors in database transactions (for example, user mistakes, application failures, unexpected network shutdowns, or Structured Query Language (SQL) statement errors) and provides warning messages tailored to the different users' perspectives. Finally, in order to fulfill the requirements of the CMS experiment community and to keep pace with developments in Web client tools, the application was further developed and new features were deployed.

  15. WebCSD: the online portal to the Cambridge Structural Database

    PubMed Central

    Thomas, Ian R.; Bruno, Ian J.; Cole, Jason C.; Macrae, Clare F.; Pidcock, Elna; Wood, Peter A.

    2010-01-01

    WebCSD, a new web-based application developed by the Cambridge Crystallographic Data Centre, offers fast searching of the Cambridge Structural Database using only a standard internet browser. Search facilities include two-dimensional substructure, molecular similarity, text/numeric and reduced cell searching. Text, chemical diagrams and three-dimensional structural information can all be studied in the results browser using the efficient entry summaries and embedded three-dimensional viewer. PMID:22477776

  16. Comparisons of citations in Web of Science, Scopus, and Google Scholar for articles published in general medical journals.

    PubMed

    Kulkarni, Abhaya V; Aziz, Brittany; Shams, Iffat; Busse, Jason W

    2009-09-09

    Until recently, Web of Science was the only database available to track citation counts for published articles. Other databases are now available, but their relative performance has not been established. To compare the citation count profiles of articles published in general medical journals among the citation databases of Web of Science, Scopus, and Google Scholar. Cohort study of 328 articles published in JAMA, Lancet, or the New England Journal of Medicine between October 1, 1999, and March 31, 2000. Total citation counts for each article up to June 2008 were retrieved from Web of Science, Scopus, and Google Scholar. Article characteristics were analyzed in linear regression models to determine interaction with the databases. Number of citations received by an article since publication and article characteristics associated with citation in databases. Google Scholar and Scopus retrieved more citations per article with a median of 160 (interquartile range [IQR], 83 to 324) and 149 (IQR, 78 to 289), respectively, than Web of Science (median, 122; IQR, 66 to 241) (P < .001 for both comparisons). Compared with Web of Science, Scopus retrieved more citations from non-English-language sources (median, 10.2% vs 4.1%) and reviews (30.8% vs 18.2%), and fewer citations from articles (57.2% vs 70.5%), editorials (2.1% vs 5.9%), and letters (0.8% vs 2.6%) (all P < .001). On a log(10)-transformed scale, fewer citations were found in Google Scholar to articles with declared industry funding (nonstandardized regression coefficient, -0.09; 95% confidence interval [CI], -0.15 to -0.03), reporting a study of a drug or medical device (-0.05; 95% CI, -0.11 to 0.01), or with group authorship (-0.29; 95% CI, -0.35 to -0.23). In multivariable analysis, group authorship was the only characteristic that differed among the databases; Google Scholar had significantly fewer citations to group-authored articles (-0.30; 95% CI, -0.36 to -0.23) compared with Web of Science. 
Web of Science, Scopus, and Google Scholar produced quantitatively and qualitatively different citation counts for articles published in 3 general medical journals.

  17. A radiology department intranet: development and applications.

    PubMed

    Willing, S J; Berland, L L

    1999-01-01

    An intranet is a "private Internet" that uses the protocols of the World Wide Web to share information resources within a company or with the company's business partners and clients. The hardware requirements for an intranet begin with a dedicated Web server permanently connected to the departmental network. The heart of a Web server is the hypertext transfer protocol (HTTP) service, which receives a page request from a client's browser and transmits the page back to the client. Although knowledge of hypertext markup language (HTML) is not essential for authoring a Web page, a working familiarity with HTML is useful, as is knowledge of programming and database management. Security can be ensured by using scripts to write information in hidden fields or by means of "cookies." Interfacing databases and database management systems with the Web server and conforming the user interface to HTML syntax can be achieved by means of the common gateway interface (CGI), Active Server Pages (ASP), or other methods. An intranet in a radiology department could include the following types of content: on-call schedules, work schedules and a calendar, a personnel directory, resident resources, memorandums and discussion groups, software for a radiology information system, and databases.
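    The database-driven intranet pages described above can be sketched in a few lines. The example below uses an in-memory SQLite table standing in for a departmental database and renders an on-call schedule as HTML, roughly as a CGI script or server-side handler might; the schema and names are invented for illustration.

```python
import sqlite3

# Hypothetical on-call table standing in for the departmental database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE on_call (day TEXT, radiologist TEXT)")
conn.executemany("INSERT INTO on_call VALUES (?, ?)",
                 [("Monday", "Dr. Adams"), ("Tuesday", "Dr. Baker")])

def schedule_page(db):
    """Query the schedule and render it as a simple HTML table."""
    rows = db.execute("SELECT day, radiologist FROM on_call ORDER BY day").fetchall()
    cells = "".join(f"<tr><td>{d}</td><td>{r}</td></tr>" for d, r in rows)
    return f"<html><body><table>{cells}</table></body></html>"

html = schedule_page(conn)
print(html)
```

    In a real deployment the rendering function would be wired to the HTTP service via CGI, ASP, or a comparable gateway, exactly as the article outlines.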

  18. BISQUE: locus- and variant-specific conversion of genomic, transcriptomic and proteomic database identifiers.

    PubMed

    Meyer, Michael J; Geske, Philip; Yu, Haiyuan

    2016-05-15

    Biological sequence databases are integral to efforts to characterize and understand biological molecules and to share biological data. However, when analyzing these data, scientists are often left holding disparate biological currency: molecular identifiers from different databases. For downstream applications that require converting the identifiers themselves, there are many resources available, but analyzing associated loci and variants can be cumbersome if the data are not given in a form amenable to particular analyses. Here we present BISQUE, a web server and customizable command-line tool for converting molecular identifiers and their contained loci and variants between different database conventions. BISQUE uses a graph traversal algorithm to generalize the conversion process for residues in the human genome, genes, transcripts, and proteins, allowing for conversion across classes of molecules and in all directions through an intuitive web interface and a URL-based web service. BISQUE is freely available via the web using any major web browser (http://bisque.yulab.org/). Source code is available in a public GitHub repository (https://github.com/hyulab/BISQUE). haiyuan.yu@cornell.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
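    The graph traversal idea can be illustrated with a toy breadth-first search over a graph of identifier namespaces; this is only a sketch of the general technique, not BISQUE's actual implementation or data model.

```python
from collections import deque

# Toy graph of identifier namespaces with direct converters between them.
# Both the edges and the namespace names are illustrative, not BISQUE's own.
EDGES = {
    "genome": ["transcript"],
    "transcript": ["genome", "protein"],
    "protein": ["transcript", "uniprot"],
    "uniprot": ["protein"],
}

def conversion_path(source, target):
    """Return a namespace path from source to target, found by BFS."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(conversion_path("genome", "uniprot"))  # → ['genome', 'transcript', 'protein', 'uniprot']
```

    Chaining pairwise converters along such a path is what lets a single tool translate "in all directions" without implementing every namespace pair explicitly.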

  19. A community effort to construct a gravity database for the United States and an associated Web portal

    USGS Publications Warehouse

    Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.

    2006-01-01

    Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database, which will be a shared community resource. The system's totally digital nature ensures that it will be flexible, so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.

  20. 7 CFR 3430.55 - Technical reporting.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... the Current Research Information System (CRIS). (b) Initial Documentation in the CRIS Database... identification of equipment purchased with any Federal funds under the award and any subsequent use of such equipment. (e) CRIS Web Site Via Internet. The CRIS database is available to the public on the worldwide web...

  1. Online Islamic Organizations and Measuring Web Effectiveness

    DTIC Science & Technology

    2004-12-01

    Internet Research 13 (2003): 17-26. Retrieved from ProQuest online database on 15 May 2004. Lee, Jae-Kwan. "A model for monitoring public sector...Web site strategy." Internet Research: Electronic Networking Applications and Policy 13 (2003): 259-266. Retrieved from Emerald online database on

  2. CliniWeb: managing clinical information on the World Wide Web.

    PubMed

    Hersh, W R; Brown, K E; Donohoe, L C; Campbell, E M; Horacek, A E

    1996-01-01

    The World Wide Web is a powerful new way to deliver on-line clinical information, but several problems limit its value to health care professionals: content is highly distributed and difficult to find, clinical information is not separated from non-clinical information, and the current Web technology is unable to support some advanced retrieval capabilities. A system called CliniWeb has been developed to address these problems. CliniWeb is an index to clinical information on the World Wide Web, providing a browsing and searching interface to clinical content at the level of the health care student or provider. Its database contains a list of clinical information resources on the Web that are indexed by terms from the Medical Subject Headings disease tree and retrieved with the assistance of SAPHIRE. Limitations of the processes used to build the database are discussed, together with directions for future research.

  3. Information System through ANIS at CeSAM

    NASA Astrophysics Data System (ADS)

    Moreau, C.; Agneray, F.; Gimenez, S.

    2015-09-01

    ANIS (AstroNomical Information System) is a generic web tool developed at CeSAM to facilitate and standardize the implementation of astronomical data of various kinds through private and/or public dedicated Information Systems. The architecture of ANIS is composed of a database server which contains the project data, a web user interface template which provides high-level services (search, extract, and display imaging and spectroscopic data using a combination of criteria, an object list, an SQL query module, or a cone search interface), a framework composed of several packages, and a metadata database managed by a web administration entity. The process to implement a new ANIS instance at CeSAM is easy and fast: the scientific project has to submit data or secure access to the data, the CeSAM team installs the new instance (web interface template and the metadata database), and the project administrator can configure the instance with the web ANIS-administration entity. Currently, CeSAM offers through ANIS a web access to VO-compliant Information Systems for different projects (HeDaM, HST-COSMOS, CFHTLS-ZPhots, ExoDAT,...).

  4. Analysis Tool Web Services from the EMBL-EBI.

    PubMed

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-07-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods.

  5. Analysis Tool Web Services from the EMBL-EBI

    PubMed Central

    McWilliam, Hamish; Li, Weizhong; Uludag, Mahmut; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Cowley, Andrew Peter; Lopez, Rodrigo

    2013-01-01

    Since 2004 the European Bioinformatics Institute (EMBL-EBI) has provided access to a wide range of databases and analysis tools via Web Services interfaces. This comprises services to search across the databases available from the EMBL-EBI and to explore the network of cross-references present in the data (e.g. EB-eye), services to retrieve entry data in various data formats and to access the data in specific fields (e.g. dbfetch), and analysis tool services, for example, sequence similarity search (e.g. FASTA and NCBI BLAST), multiple sequence alignment (e.g. Clustal Omega and MUSCLE), pairwise sequence alignment and protein functional analysis (e.g. InterProScan and Phobius). The REST/SOAP Web Services (http://www.ebi.ac.uk/Tools/webservices/) interfaces to these databases and tools allow their integration into other tools, applications, web sites, pipeline processes and analytical workflows. To get users started using the Web Services, sample clients are provided covering a range of programming languages and popular Web Service tool kits, and a brief guide to Web Services technologies, including a set of tutorials, is available for those wishing to learn more and develop their own clients. Users of the Web Services are informed of improvements and updates via a range of methods. PMID:23671338
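    As a minimal illustration of the dbfetch-style entry retrieval mentioned above, the sketch below composes a REST request URL with the standard library. The parameter names follow the EBI dbfetch documentation; the database and accession values are arbitrary examples, and no network call is made here.

```python
from urllib.parse import urlencode

# Base URL of the EBI dbfetch service; db/id values below are only examples.
DBFETCH = "https://www.ebi.ac.uk/Tools/dbfetch/dbfetch"

def dbfetch_url(db, ids, fmt="fasta", style="raw"):
    """Compose a REST URL to retrieve entries from an EBI database."""
    params = {"db": db, "id": ",".join(ids), "format": fmt, "style": style}
    return DBFETCH + "?" + urlencode(params)

url = dbfetch_url("uniprotkb", ["P12345"])
print(url)
```

    Fetching that URL (e.g. with `urllib.request.urlopen`) would return the entry in the requested format, which is how such services get folded into pipelines and workflows.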

  6. Deep Web video

    ScienceCinema

    None Available

    2018-02-06

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  7. The Effects of Task, Database, and Guidance on Interaction in a Goal-Based Scenario.

    ERIC Educational Resources Information Center

    Bell, Benjamin

    This paper describes the "Sickle Cell Counselor" (SCC), a goal based scenario on permanent display at the Museum of Science and Industry in Chicago. SCC is an exploratory hypermedia simulation program which provides users with a basic understanding of Sickle Cell Anemia. The user of the program plays the role of a genetic counselor, and,…

  8. GALT protein database, a bioinformatics resource for the management and analysis of structural features of a galactosemia-related protein and its mutants.

    PubMed

    d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna

    2009-06-01

    We describe the GALT-Prot database and its related web-based application, which have been developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the analysis results of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes the structures of over 100 single-point mutants simulated by means of a computational procedure, and the analysis of each mutant was performed with several bioinformatics programs in order to investigate the effect of the mutations. The web-based interface allows querying of the database, and several links are also provided in order to guarantee a high degree of integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.

  9. RBscore&NBench: a high-level web server for nucleic acid binding residues prediction with a large-scale benchmarking database.

    PubMed

    Miao, Zhichao; Westhof, Eric

    2016-07-08

    RBscore&NBench combines a web server, RBscore and a database, NBench. RBscore predicts RNA-/DNA-binding residues in proteins and visualizes the prediction scores and features on protein structures. The scoring scheme of RBscore directly links feature values to nucleic acid binding probabilities and illustrates the nucleic acid binding energy funnel on the protein surface. To avoid dataset, binding site definition and assessment metric biases, we compared RBscore with 18 web servers and 3 stand-alone programs on 41 datasets, which demonstrated the high and stable accuracy of RBscore. A comprehensive comparison led us to develop a benchmark database named NBench. The web server is available on: http://ahsoka.u-strasbg.fr/rbscorenbench/. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. S/MARt DB: a database on scaffold/matrix attached regions.

    PubMed

    Liebich, Ines; Bode, Jürgen; Frisch, Matthias; Wingender, Edgar

    2002-01-01

    S/MARt DB, the S/MAR transaction database, is a relational database covering scaffold/matrix attached regions (S/MARs) and nuclear matrix proteins that are involved in the chromosomal attachment to the nuclear scaffold. The data are mainly extracted from original publications, but a World Wide Web interface for direct submissions is also available. S/MARt DB is closely linked to the TRANSFAC database on transcription factors and their binding sites. It is freely accessible through the World Wide Web (http://transfac.gbf.de/SMARtDB/) for non-profit research.

  11. Working with Data: Discovering Knowledge through Mining and Analysis; Systematic Knowledge Management and Knowledge Discovery; Text Mining; Methodological Approach in Discovering User Search Patterns through Web Log Analysis; Knowledge Discovery in Databases Using Formal Concept Analysis; Knowledge Discovery with a Little Perspective.

    ERIC Educational Resources Information Center

    Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.

    2000-01-01

    These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None Available

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  13. Corporate Web Sites in Traditional Print Advertisements.

    ERIC Educational Resources Information Center

    Pardun, Carol J.; Lamb, Larry

    1999-01-01

    Describes the Web presence in print advertisements to determine how marketers are creating bridges between traditional advertising and the Internet. Content analysis showed Web addresses in print ads; categories of advertisers most likely to link print ads with Web sites; and whether the Web site attempts to develop a database of potential…

  14. The Protein Disease Database of human body fluids: II. Computer methods and data issues.

    PubMed

    Lemkin, P F; Orr, G A; Goldstein, M P; Creed, G J; Myrick, J E; Merril, C R

    1995-01-01

    The Protein Disease Database (PDD) is a relational database of proteins and diseases. With this database it is possible to screen for quantitative protein abnormalities associated with disease states. These quantitative relationships use data drawn from the peer-reviewed biomedical literature. Assays may also include those observed in high-resolution electrophoretic gels that offer the potential to quantitate many proteins in a single test as well as data gathered by enzymatic or immunologic assays. We are using the Internet World Wide Web (WWW) and the Web browser paradigm as an access method for wide distribution and querying of the Protein Disease Database. The WWW hypertext transfer protocol and its Common Gateway Interface make it possible to build powerful graphical user interfaces that can support easy-to-use data retrieval using query specification forms or images. The details of these interactions are totally transparent to the users of these forms. Using a client-server SQL relational database, user query access, initial data entry and database maintenance are all performed over the Internet with a Web browser. We discuss the underlying design issues, mapping mechanisms and assumptions that we used in constructing the system, data entry, access to the database server, security, and synthesis of derived two-dimensional gel image maps and hypertext documents resulting from SQL database searches.

  15. Enhanced DIII-D Data Management Through a Relational Database

    NASA Astrophysics Data System (ADS)

    Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.

    2000-10-01

    A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Metadata about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
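    The cross-shot querying described above can be sketched with an in-memory SQLite database; the shot-summary table and its columns are hypothetical stand-ins, not the DIII-D schema.

```python
import sqlite3

# Hypothetical shot-summary table: one row of summary physics
# quantities per discharge, mimicking the cross-shot layout described.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shot_summary (shot INTEGER, ip_max REAL, duration REAL)")
conn.executemany(
    "INSERT INTO shot_summary VALUES (?, ?, ?)",
    [(100001, 1.2, 5.0), (100002, 0.8, 3.5), (100003, 1.5, 6.1)],
)

# A single SQL statement mines across many shots at once.
rows = conn.execute(
    "SELECT shot, ip_max FROM shot_summary WHERE ip_max > 1.0 ORDER BY shot"
).fetchall()
print(rows)  # → [(100001, 1.2), (100003, 1.5)]
```

    The same statement could be issued from C, Java, or IDL bindings, or from any ODBC-compliant application, which is the access pattern the abstract describes.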

  16. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    PubMed

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web. There are also many private databases generated in the course of research projects. These databases come in a wide variety of formats. Web standards have evolved in recent times, and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into the Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and has additional features, such as ranking of facet values based on several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. 
We used the search engine to do an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com . We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
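    The facet-ranking idea can be sketched with a few toy records standing in for RDF query results; this illustrates the general technique only, not BioCarian's code.

```python
from collections import Counter

# Toy records standing in for rows returned by a SPARQL query;
# the "organism" facet and its values are invented for illustration.
records = [
    {"organism": "human", "gene": "TP53"},
    {"organism": "human", "gene": "BRCA1"},
    {"organism": "mouse", "gene": "Trp53"},
]

def ranked_facet_values(rows, facet):
    """Rank a facet's values by how many records carry each value."""
    counts = Counter(row[facet] for row in rows)
    return counts.most_common()

print(ranked_facet_values(records, "organism"))  # → [('human', 2), ('mouse', 1)]
```

    Presenting the top-ranked values first is what keeps a facet panel usable when a facet has a large number of possible choices.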

  17. Inventory of amphibians and reptiles at Death Valley National Park

    USGS Publications Warehouse

    Persons, Trevor B.; Nowak, Erika M.

    2006-01-01

    As part of the National Park Service Inventory and Monitoring Program in the Mojave Network, we conducted an inventory of amphibians and reptiles at Death Valley National Park in 2002-04. Objectives for this inventory were to: 1) Inventory and document the occurrence of reptile and amphibian species occurring at DEVA, primarily within priority sampling areas, with the goal of documenting at least 90% of the species present; 2) document (through collection or museum specimen and literature review) one voucher specimen for each species identified; 3) provide a GIS-referenced list of sensitive species that are federally or state listed, rare, or worthy of special consideration that occur within priority sampling locations; 4) describe park-wide distribution of federally- or state-listed, rare, or special concern species; 5) enter all species data into the National Park Service NPSpecies database; and 6) provide all deliverables as outlined in the Mojave Network Biological Inventory Study Plan. Methods included daytime and nighttime visual encounter surveys, road driving, and pitfall trapping. Survey effort was concentrated in predetermined priority sampling areas, as well as in areas with a high potential for detecting undocumented species. We recorded 37 species during our surveys, including two species new to the park. During literature review and museum specimen database searches, we recorded three additional species from DEVA, elevating the documented species list to 40 (four amphibians and 36 reptiles). Based on our surveys, as well as literature and museum specimen review, we estimate an overall inventory completeness of 92% for Death Valley and an inventory completeness of 73% for amphibians and 95% for reptiles. Key Words: Amphibians, reptiles, Death Valley National Park, Inyo County, San Bernardino County, Esmeralda County, Nye County, California, Nevada, Mojave Desert, Great Basin Desert, inventory, NPSpecies.

  18. myChEMBL: a virtual machine implementation of open data and cheminformatics tools.

    PubMed

    Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P

    2014-01-15

    myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.

  19. USGS cold-water coral geographic database-Gulf of Mexico and western North Atlantic Ocean, version 1.0

    USGS Publications Warehouse

    Scanlon, Kathryn M.; Waller, Rhian G.; Sirotek, Alexander R.; Knisel, Julia M.; O'Malley, John; Alesandrini, Stian

    2010-01-01

    The USGS Cold-Water Coral Geographic Database (CoWCoG) provides a tool for researchers and managers interested in studying, protecting, and/or utilizing cold-water coral habitats in the Gulf of Mexico and western North Atlantic Ocean.  The database makes information about the locations and taxonomy of cold-water corals available to the public in an easy-to-access form while preserving the scientific integrity of the data.  The database includes over 1700 entries, mostly from published scientific literature, museum collections, and other databases.  The CoWCoG database is easy to search in a variety of ways, and data can be quickly displayed in table form and on a map by using only the software included with this publication.  Subsets of the database can be selected on the basis of geographic location, taxonomy, or other criteria and exported to one of several available file formats.  Future versions of the database are being planned to cover a larger geographic area and additional taxa.

  20. Reviews Exhibitions: Collider: Step inside the World's Greatest Experiment Equipment: Hero Steam Turbine Classroom Video: Most of Our Universe is Missing Book: Serving the Reich Book: Breakthrough to CLIL for Physics Book: The Good Research Guide Apps: Popplet Web Watch Apps

    NASA Astrophysics Data System (ADS)

    2014-03-01

    WE RECOMMEND: Collider: step inside the world's greatest experiment (a great exhibition at the Science Museum in London); Hero Steam Turbine (superb engine model gets up to 2500 rpm); Most of Our Universe is Missing (BBC video explores the dark truth); Serving the Reich (science and morality in Nazi Germany); The Good Research Guide (a non-specialist book for teachers starting out in education research). WORTH A LOOK: Breakthrough to CLIL for Physics (a book based on a physics curriculum for non-English students). WEB WATCH: Electric cycles online: patterns of use. APPS: The virtual laboratory advances personal skills.

  1. Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2--a free in-house NMR database with integrated LIMS for academic service laboratories.

    PubMed

    Kuhn, Stefan; Schlörer, Nils E

    2015-08-01

    With its integrated laboratory information management system (LIMS), nmrshiftdb2 supports electronic lab administration and management in academic NMR facilities. It also offers the setup of a local database, while full access to nmrshiftdb2's World Wide Web database is retained. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, enables download of spectra via a web interface, and provides integrated access to the prediction, search, and assignment tools of the NMR database. For staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics function for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database use a web interface as front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Designing a Relational Database for the Basic School; Schools Command Web Enabled Officer and Enlisted Database (Sword)

    DTIC Science & Technology

    2002-06-01

    Excerpts: Migrate data to SQL Server... The Web Server is on the same server as the SWORD database in the current version... still be supported by Access. SQL Server would be a more viable tool for a fully developed application based on the number of potential users and...

  3. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, automatic information-extraction tools, web technology, and databases.

  4. Faculty Views of Open Web Resource Use by College Students

    ERIC Educational Resources Information Center

    Tomaiuolo, Nicholas G.

    2005-01-01

    This article assesses both the extent of students' use of open Web resources and library subscription databases and professors' satisfaction with that use as reported by a survey of 120 community college and university English faculty. It was concluded that although library budgets allocate significant funds to offer subscription databases,…

  5. A Web-Based Tool to Support Data-Based Early Intervention Decision Making

    ERIC Educational Resources Information Center

    Buzhardt, Jay; Greenwood, Charles; Walker, Dale; Carta, Judith; Terry, Barbara; Garrett, Matthew

    2010-01-01

    Progress monitoring and data-based intervention decision making have become key components of providing evidence-based early childhood special education services. Unfortunately, there is a lack of tools to support early childhood service providers' decision-making efforts. The authors describe a Web-based system that guides service providers…

  6. Using Web-based Tutorials To Enhance Library Instruction.

    ERIC Educational Resources Information Center

    Kocour, Bruce G.

    2000-01-01

    Describes the development of a Web site for library instruction at Carson-Newman College (TN) and its integration into English composition courses. Describes the use of a virtual tour, a tutorial on database searching, tutorials on specific databases, and library guides to specific disciplines to create an effective mechanism for active learning.…

  7. Development of a Relational Database for Learning Management Systems

    ERIC Educational Resources Information Center

    Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul

    2011-01-01

    In today's world, Web-Based Distance Education Systems have a great importance. Web-based Distance Education Systems are usually known as Learning Management Systems (LMS). In this article, a database design, which was developed to create an educational institution as a Learning Management System, is described. In this sense, developed Learning…

  8. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    DOT National Transportation Integrated Search

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  9. Relax with CouchDB--into the non-relational DBMS era of bioinformatics.

    PubMed

    Manyam, Ganiraju; Payton, Michelle A; Roth, Jack A; Abruzzo, Lynne V; Coombes, Kevin R

    2012-07-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. Copyright © 2012 Elsevier Inc. All rights reserved.
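    The schemaless, document-oriented model is what makes CouchDB attractive here: records of different shapes live side by side, and "views" are map functions over documents. A toy illustration in plain Python (the document fields below are invented, and real CouchDB evaluates views server-side in JavaScript):

```python
# Schemaless documents of two different "types" coexisting in one store,
# in the style of geneSmash/drugBase records (fields are invented here).
docs = [
    {"_id": "g1", "type": "gene", "symbol": "TP53", "chrom": "17"},
    {"_id": "g2", "type": "gene", "symbol": "EGFR", "chrom": "7"},
    {"_id": "d1", "type": "drug", "name": "erlotinib", "targets": ["EGFR"]},
]

def map_genes_by_chrom(doc):
    """Emit (chromosome, symbol) pairs, like a CouchDB map function."""
    if doc.get("type") == "gene":
        yield doc["chrom"], doc["symbol"]

# A CouchDB view is essentially the sorted key/value output of the map step.
view = sorted(pair for doc in docs for pair in map_genes_by_chrom(doc))
print(view)
```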

  10. Footprint Database and web services for the Herschel space observatory

    NASA Astrophysics Data System (ADS)

    Verebélyi, Erika; Dobos, László; Kiss, Csaba

    2015-08-01

    Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available for everyone but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database, sky coverage is stored in exact geometric form allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions which makes it usable from program clients like Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.
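    Calling REST web services from a program client, as the abstract describes, reduces to building query URLs and parsing the responses. The endpoint and parameter names below are hypothetical, not the service's actual API:

```python
from urllib.parse import urlencode

# Placeholder host and parameter names; the real footprint service defines
# its own endpoint paths and query schema.
BASE = "http://example.org/herschel/footprints"

def footprint_query_url(ra, dec, radius_deg, instrument=None):
    """Build a cone-search style request URL for a footprint service."""
    params = {"ra": ra, "dec": dec, "radius": radius_deg}
    if instrument:
        params["instrument"] = instrument
    return BASE + "?" + urlencode(params)

url = footprint_query_url(83.82, -5.39, 0.5, instrument="PACS")
print(url)
```

    A Python or IDL script would fetch such a URL and parse the returned table, e.g. a Virtual Observatory format.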

  11. Development of Web-based Distributed Cooperative Development Environmentof Sign-Language Animation System and its Evaluation

    NASA Astrophysics Data System (ADS)

    Yuizono, Takaya; Hara, Kousuke; Nakayama, Shigeru

    A web-based distributed cooperative development environment for a sign-language animation system has been developed. We extended a previous animation system constructed as a three-tiered system consisting of a sign-language animation interface layer, a sign-language data processing layer, and a sign-language animation database. Two components, a web client using a VRML plug-in and a web servlet, were added to the previous system. The system supports a humanoid-model avatar for interoperability and can use stored sign-language animation data shared on the database. The evaluation showed that the inverse kinematics function of the web client improves sign-language animation making.

  12. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Summary Objective Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases. Methods Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477

  13. Semantic SenseLab: Implementing the vision of the Semantic Web in neuroscience.

    PubMed

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2010-01-01

    Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several of the SenseLab suite of neuroscience databases. Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/. 2009 Elsevier B.V. All rights reserved.

  14. Multiscale Interactive Communication: Inside and Outside Thun Castle

    NASA Astrophysics Data System (ADS)

    Massari, G. A.; Luce, F.; Pellegatta, C.

    2011-09-01

    For professionals, the application of informatics to architecture has become a great tool for managing analytical phases and project activities; for the general public, it offers new ways of communication that can directly relate present, past and future facts. Museums in historic buildings, their installations, and recent experiences with eco-museums located throughout the territory provide a privileged field of experimentation for technical and digital representation. On the one hand, the safeguarding and functional adaptation of buildings use 3D computer graphics models that are, in effect, spatially related databases: in them, the results of archival, art-historical, diagnostic, and technological-structural studies, together with assessments of the feasibility of interventions, are ordered, viewed and interpreted. On the other hand, the communication of objects and knowledge linked to collective memory relies on interactive maps and hypertext systems that give access to authentic virtual museums; at architectural scale this produces a sort of multimedia extension of the exhibition hall, while at landscape scale the result is a previously unavailable instrument of cultural development: works that are separated in direct perception find in the zenith view of a map a synthetic relation, tied both to spatial parameters and to temporal interpretations.

  15. Genetic diversity is largely unpredictable but scales with museum occurrences in a species-rich clade of Australian lizards

    PubMed Central

    Huang, Huateng; Title, Pascal O.; Donnellan, Stephen C.; Holmes, Iris; Rabosky, Daniel L.

    2017-01-01

    Genetic diversity is a fundamental characteristic of species and is affected by many factors, including mutation rate, population size, life history and demography. To better understand the processes that influence levels of genetic diversity across taxa, we collected genome-wide restriction-associated DNA data from more than 500 individuals spanning 76 nominal species of Australian scincid lizards in the genus Ctenotus. To avoid potential biases associated with variation in taxonomic practice across the group, we used coalescent-based species delimitation to delineate 83 species-level lineages within the genus for downstream analyses. We then used these genetic data to infer levels of within-population genetic diversity. Using a phylogenetically informed approach, we tested whether variation in genetic diversity could be explained by population size, environmental heterogeneity or historical demography. We find that the strongest predictor of genetic diversity is a novel proxy for census population size: the number of vouchered occurrences in museum databases. However, museum occurrences only explain a limited proportion of the variance in genetic diversity, suggesting that genetic diversity might be difficult to predict at shallower phylogenetic scales. PMID:28469025

  16. Genetic diversity is largely unpredictable but scales with museum occurrences in a species-rich clade of Australian lizards.

    PubMed

    Singhal, Sonal; Huang, Huateng; Title, Pascal O; Donnellan, Stephen C; Holmes, Iris; Rabosky, Daniel L

    2017-05-17

    Genetic diversity is a fundamental characteristic of species and is affected by many factors, including mutation rate, population size, life history and demography. To better understand the processes that influence levels of genetic diversity across taxa, we collected genome-wide restriction-associated DNA data from more than 500 individuals spanning 76 nominal species of Australian scincid lizards in the genus Ctenotus. To avoid potential biases associated with variation in taxonomic practice across the group, we used coalescent-based species delimitation to delineate 83 species-level lineages within the genus for downstream analyses. We then used these genetic data to infer levels of within-population genetic diversity. Using a phylogenetically informed approach, we tested whether variation in genetic diversity could be explained by population size, environmental heterogeneity or historical demography. We find that the strongest predictor of genetic diversity is a novel proxy for census population size: the number of vouchered occurrences in museum databases. However, museum occurrences only explain a limited proportion of the variance in genetic diversity, suggesting that genetic diversity might be difficult to predict at shallower phylogenetic scales. © 2017 The Author(s).
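    The reported scaling relationship can be pictured as a regression of diversity on log-transformed occurrence counts. The numbers below are synthetic, purely to illustrate the computation; they are not the study's data:

```python
import math

# Invented illustration: genetic diversity regressed on log10 museum
# occurrence counts, using a hand-rolled least-squares slope.
occurrences = [5, 20, 80, 300, 1200]
diversity   = [0.0021, 0.0034, 0.0042, 0.0055, 0.0068]

x = [math.log10(n) for n in occurrences]
n = len(x)
mx, my = sum(x) / n, sum(diversity) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, diversity)) \
        / sum((xi - mx) ** 2 for xi in x)
print(round(slope, 5))  # positive: diversity scales with occurrences here
```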

  17. Inventory of Amphibians and Reptiles at Mojave National Preserve: Final Report

    USGS Publications Warehouse

    Persons, Trevor B.; Nowak, Erika M.

    2007-01-01

    As part of the National Park Service Inventory and Monitoring Program in the Mojave Network, we conducted an inventory of amphibians and reptiles at Mojave National Preserve in 2004-2005. Objectives for this inventory were to use fieldwork, museum collections, and literature review to document the occurrence of reptile and amphibian species occurring at MOJA. Our goals were to document at least 90% of the species present, provide one voucher specimen for each species identified, provide GIS-referenced distribution information for sensitive species, and provide all deliverables, including NPSpecies entries, as outlined in the Mojave Network Biological Inventory Study Plan. Methods included daytime and nighttime visual encounter surveys and nighttime road driving. Survey effort was concentrated in predetermined priority sampling areas, as well as in areas with a high potential for detecting undocumented species. We recorded 31 species during our surveys. During literature review and museum specimen database searches, we found records for seven additional species from MOJA, elevating the documented species list to 38 (two amphibians and 36 reptiles). Based on our surveys, as well as literature and museum specimen review, we estimate an overall inventory completeness of 95% for Mojave National Preserve herpetofauna; 67% for amphibians and 97% for reptiles.

  18. Development of a web database portfolio system with PACS connectivity for undergraduate health education and continuing professional development.

    PubMed

    Ng, Curtise K C; White, Peter; McKay, Janice C

    2009-04-01

    Increasingly, the use of web database portfolio systems is noted in medical and health education and for continuing professional development (CPD). However, the functions of existing systems are not always aligned with the corresponding pedagogy, and hence reflection is often lost. This paper presents the development of a tailored web database portfolio system with Picture Archiving and Communication System (PACS) connectivity, based on portfolio pedagogy. Following a pre-determined portfolio framework, a system model is proposed with the components of web, database and mail servers, server-side scripts, and a Query/Retrieve (Q/R) broker for conversion between Hypertext Transfer Protocol (HTTP) requests and the Q/R service class of the Digital Imaging and Communications in Medicine (DICOM) standard. The system was piloted with seventy-seven volunteers. A tailored web database portfolio system (http://radep.hti.polyu.edu.hk) was developed. Technological arrangements for reinforcing portfolio pedagogy include popup windows (reminders) with guidelines and probing questions for the 'collect', 'select' and 'reflect' steps on evidence of development/experience, a limit on the number of files (evidence) to be uploaded, an 'Evidence Insertion' function to link individual uploaded artifacts with reflective writing, the capability to accommodate diverse contents, and convenient interfaces for reviewing portfolios and communication. Evidence to date suggests the system supports users in building their portfolios with sound hypertext reflection under a facilitator's guidance, and allows reviewers to monitor students' progress, providing feedback and comments online in a programme-wide setting.
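    The Q/R broker described above converts HTTP requests into DICOM Query/Retrieve service calls. A minimal sketch of the mapping step, with an invented URL scheme and field names (a real broker would then issue a C-FIND over the DICOM protocol):

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical mapping from HTTP query parameters to DICOM attribute names;
# the actual broker's URL scheme and supported keys are not documented here.
DICOM_KEYS = {"patient": "PatientID", "modality": "Modality", "date": "StudyDate"}

def http_to_cfind(url):
    """Translate an HTTP request URL into a DICOM C-FIND query identifier."""
    qs = parse_qs(urlparse(url).query)
    return {DICOM_KEYS[k]: v[0] for k, v in qs.items() if k in DICOM_KEYS}

identifier = http_to_cfind("http://radep.example/qr?patient=P001&modality=CR")
print(identifier)
```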

  19. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*.

    PubMed

    Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa

    2010-08-21

    Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.

  20. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*

    PubMed Central

    2010-01-01

    Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues arising from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies. PMID:20727200

  1. Java Web Simulation (JWS); a web based database of kinetic models.

    PubMed

    Snoep, J L; Olivier, B G

    2002-01-01

    Software to make a database of kinetic models accessible via the internet has been developed, and a core database has been set up at http://jjj.biochem.sun.ac.za/. This repository of models, available to everyone with internet access, opens a whole new way for us to make our models public. Via the database, a user can change enzyme parameters and run time simulations or steady-state analyses. The interface is user friendly and no additional software is necessary. The database currently contains 10 models, but since the generation of the program code to include new models has largely been automated, the addition of new models is straightforward, and people are invited to submit their models for inclusion in the database.
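    Changing an enzyme parameter and rerunning a time simulation, as the database allows, can be sketched with a single Michaelis-Menten reaction integrated by explicit Euler steps. This is a deliberately crude stand-in, not JWS's actual solver:

```python
# Toy time simulation of one Michaelis-Menten reaction S -> P:
#   ds/dt = -Vmax * s / (Km + s), integrated with explicit Euler steps.
def simulate(s0, vmax, km, dt=0.01, steps=1000):
    """Return the substrate concentration after steps*dt time units."""
    s = s0
    for _ in range(steps):
        s += dt * (-vmax * s / (km + s))
    return s

# "Changing an enzyme parameter" is just rerunning with a different Vmax:
s_slow = simulate(10.0, vmax=1.0, km=0.5)
s_fast = simulate(10.0, vmax=2.0, km=0.5)
print(s_fast < s_slow)  # doubling Vmax leaves less substrate
```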

  2. Citations and the h index of soil researchers and journals in the Web of Science, Scopus, and Google Scholar

    PubMed Central

    Hartemink, Alfred E.; McBratney, Alex; Jang, Ho-Jun

    2013-01-01

    Citation metrics and h indices differ using different bibliometric databases. We compiled the number of publications, number of citations, h index and year since the first publication from 340 soil researchers from all over the world. On average, Google Scholar has the highest h index, number of publications and citations per researcher, and the Web of Science the lowest. The number of papers in Google Scholar is on average 2.3 times higher and the number of citations is 1.9 times higher compared to the data in the Web of Science. Scopus metrics are slightly higher than those of the Web of Science. The h index in Google Scholar is on average 1.4 times larger than in Web of Science, and the h index in Scopus is on average 1.1 times larger than in Web of Science. Over time, the metrics increase in all three databases but fastest in Google Scholar. The h index of an individual soil scientist is about 0.7 times the number of years since his/her first publication. There is a large difference between the number of citations, number of publications and the h index using the three databases. From this analysis it can be concluded that the choice of the database affects widely-used citation and evaluation metrics, but that bibliometric transfer functions exist to relate the metrics from these three databases. We also investigated the relationship between a journal’s impact factor and Google Scholar’s h5-index. The h5-index is a better measure of a journal’s citations than the two- or five-year window impact factor. PMID:24167778

  3. Citations and the h index of soil researchers and journals in the Web of Science, Scopus, and Google Scholar.

    PubMed

    Minasny, Budiman; Hartemink, Alfred E; McBratney, Alex; Jang, Ho-Jun

    2013-01-01

    Citation metrics and h indices differ using different bibliometric databases. We compiled the number of publications, number of citations, h index and year since the first publication from 340 soil researchers from all over the world. On average, Google Scholar has the highest h index, number of publications and citations per researcher, and the Web of Science the lowest. The number of papers in Google Scholar is on average 2.3 times higher and the number of citations is 1.9 times higher compared to the data in the Web of Science. Scopus metrics are slightly higher than those of the Web of Science. The h index in Google Scholar is on average 1.4 times larger than in Web of Science, and the h index in Scopus is on average 1.1 times larger than in Web of Science. Over time, the metrics increase in all three databases but fastest in Google Scholar. The h index of an individual soil scientist is about 0.7 times the number of years since his/her first publication. There is a large difference between the number of citations, number of publications and the h index using the three databases. From this analysis it can be concluded that the choice of the database affects widely-used citation and evaluation metrics, but that bibliometric transfer functions exist to relate the metrics from these three databases. We also investigated the relationship between a journal's impact factor and Google Scholar's h5-index. The h5-index is a better measure of a journal's citations than the two- or five-year window impact factor.
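    The h index underlying these comparisons is simple to compute from a list of per-paper citation counts: it is the largest h such that h papers each have at least h citations. A short self-contained sketch with made-up counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with >= 4 citations
print(h_index([25, 8, 5, 3, 3]))   # 3: only three papers with >= 3 citations
```

    Because each database indexes a different set of papers and citations, the same researcher gets a different input list, and hence a different h, in Google Scholar, Scopus and the Web of Science.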

  4. The Next Linear Collider Program

    Science.gov Websites

    Directory and search links related to the NLC project website: SLAC phonebook and web search, FNAL telephone directory, LLNL phone book and web servers, LBNL directory services, and the KEK e-mail database.

  5. 78 FR 42775 - CGI Federal, Inc., and Custom Applications Management; Transfer of Data

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-17

    ... develop applications, Web sites, Web pages, web-based applications and databases, in accordance with EPA policies and related Federal standards and procedures. The Contractor will provide...

  6. CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.

    PubMed

    Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J

    2015-01-01

    CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.

  7. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network

    PubMed Central

    Sward, Katherine A; Newth, Christopher JL; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-01-01

    Objectives To examine the feasibility of deploying a virtual web service for sharing data within a research network, and to evaluate the impact on data consistency and quality. Material and Methods Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Results Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1% and demonstrated that 99% of the data consistently transferred using the data dictionary and 1% needed human curation. Conclusions Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. PMID:25796596

  8. TOPSAN: a dynamic web database for structural genomics.

    PubMed

    Ellrott, Kyle; Zmasek, Christian M; Weekes, Dana; Sri Krishna, S; Bakolitsa, Constantina; Godzik, Adam; Wooley, John

    2011-01-01

    The Open Protein Structure Annotation Network (TOPSAN) is a web-based collaboration platform for exploring and annotating structures determined by structural genomics efforts. Characterization of those structures presents a challenge since the majority of the proteins themselves have not yet been characterized. Responding to this challenge, the TOPSAN platform facilitates collaborative annotation and investigation via a user-friendly web-based interface pre-populated with automatically generated information. Semantic web technologies expand and enrich TOPSAN's content through links to larger sets of related databases, and thus, enable data integration from disparate sources and data mining via conventional query languages. TOPSAN can be found at http://www.topsan.org.

  9. The IVTANTHERMO-Online database for thermodynamic properties of individual substances with web interface

    NASA Astrophysics Data System (ADS)

    Belov, G. V.; Dyachkov, S. A.; Levashov, P. R.; Lomonosov, I. V.; Minakov, D. V.; Morozov, I. V.; Sineva, M. A.; Smirnov, V. N.

    2018-01-01

    The database structure, main features and user interface of an IVTANTHERMO-Online system are reviewed. This system continues the series of the IVTANTHERMO packages developed in JIHT RAS. It includes the database for thermodynamic properties of individual substances and related software for analysis of experimental results, data fitting, calculation and estimation of thermodynamical functions and thermochemistry quantities. In contrast to the previous IVTANTHERMO versions it has a new extensible database design, the client-server architecture, a user-friendly web interface with a number of new features for online and offline data processing.

  10. Model Study for an Economic Data Program on the Conditions of Arts and Cultural Institutions. Final Report.

    ERIC Educational Resources Information Center

    Deane, Robert T.; And Others

    The development of econometric models and a data base to predict the responsiveness of arts institutions to changes in the economy is reported. The study focused on models for museums, theaters (profit and non-profit), symphony, ballet, opera, and dance. The report details four objectives of the project: to identify useful databases and studies on…

  11. Integrating Radar Image Data with Google Maps

    NASA Technical Reports Server (NTRS)

    Chapman, Bruce D.; Gibas, Sarah

    2010-01-01

    A public Web site has been developed as a method for displaying the multitude of radar imagery collected by NASA s Airborne Synthetic Aperture Radar (AIRSAR) instrument during its 16-year mission. Utilizing NASA s internal AIRSAR site, the new Web site features more sophisticated visualization tools that enable the general public to have access to these images. The site was originally maintained at NASA on six computers: one that held the Oracle database, two that took care of the software for the interactive map, and three that were for the Web site itself. Several tasks were involved in moving this complicated setup to just one computer. First, the AIRSAR database was migrated from Oracle to MySQL. Then the back-end of the AIRSAR Web site was updated in order to access the MySQL database. To do this, a few of the scripts needed to be modified; specifically three Perl scripts that query that database. The database connections were then updated from Oracle to MySQL, numerous syntax errors were corrected, and a query was implemented that replaced one of the stored Oracle procedures. Lastly, the interactive map was designed, implemented, and tested so that users could easily browse and access the radar imagery through the Google Maps interface.

  12. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    DOE PAGES

    Zerkin, V. V.; Pritychenko, B.

    2018-02-04

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally,more » it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.« less

  13. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    NASA Astrophysics Data System (ADS)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  14. ADASS Web Database XML Project

    NASA Astrophysics Data System (ADS)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.

  15. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zerkin, V. V.; Pritychenko, B.

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally,more » it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.« less

  16. maxdLoad2 and maxdBrowse: standards-compliant tools for microarray experimental annotation, data management and dissemination.

    PubMed

    Hancock, David; Wilson, Michael; Velarde, Giles; Morrison, Norman; Hayes, Andrew; Hulme, Helen; Wood, A Joseph; Nashar, Karim; Kell, Douglas B; Brass, Andy

    2005-11-03

    maxdLoad2 is a relational database schema and Java application for microarray experimental annotation and storage. It is compliant with all standards for microarray meta-data capture; including the specification of what data should be recorded, extensive use of standard ontologies and support for data exchange formats. The output from maxdLoad2 is of a form acceptable for submission to the ArrayExpress microarray repository at the European Bioinformatics Institute. maxdBrowse is a PHP web-application that makes contents of maxdLoad2 databases accessible via web-browser, the command-line and web-service environments. It thus acts as both a dissemination and data-mining tool. maxdLoad2 presents an easy-to-use interface to an underlying relational database and provides a full complement of facilities for browsing, searching and editing. There is a tree-based visualization of data connectivity and the ability to explore the links between any pair of data elements, irrespective of how many intermediate links lie between them. Its principle novel features are: the flexibility of the meta-data that can be captured, the tools provided for importing data from spreadsheets and other tabular representations, the tools provided for the automatic creation of structured documents, the ability to browse and access the data via web and web-services interfaces. Within maxdLoad2 it is very straightforward to customise the meta-data that is being captured or change the definitions of the meta-data. These meta-data definitions are stored within the database itself allowing client software to connect properly to a modified database without having to be specially configured. The meta-data definitions (configuration file) can also be centralized allowing changes made in response to revisions of standards or terminologies to be propagated to clients without user intervention.maxdBrowse is hosted on a web-server and presents multiple interfaces to the contents of maxd databases. 
maxdBrowse emulates many of the browse and search features available in the maxdLoad2 application via a web-browser. This allows users who are not familiar with maxdLoad2 to browse and export microarray data from the database for their own analysis. The same browse and search features are also available via command-line and SOAP server interfaces. This both enables scripting of data export for use embedded in data repositories and analysis environments, and allows access to the maxd databases via web-service architectures. maxdLoad2 http://www.bioinf.man.ac.uk/microarray/maxd/ and maxdBrowse http://dbk.ch.umist.ac.uk/maxdBrowse are portable and compatible with all common operating systems and major database servers. They provide a powerful, flexible package for annotation of microarray experiments and a convenient dissemination environment. They are available for download and open sourced under the Artistic License.

  17. Nuclear Science References Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pritychenko, B., E-mail: pritychenko@bnl.gov; Běták, E.; Singh, B.

    2014-06-15

    The Nuclear Science References (NSR) database together with its associated Web interface, is the world's only comprehensive source of easily accessible low- and intermediate-energy nuclear physics bibliographic information for more than 210,000 articles since the beginning of nuclear science. The weekly-updated NSR database provides essential support for nuclear data evaluation, compilation and research activities. The principles of the database and Web application development and maintenance are described. Examples of nuclear structure, reaction and decay applications are specifically included. The complete NSR database is freely available at the websites of the National Nuclear Data Center (http://www.nndc.bnl.gov/nsr) and the International Atomic Energymore » Agency (http://www-nds.iaea.org/nsr)« less

  18. A web based relational database management system for filariasis control

    PubMed Central

    Murty, Upadhyayula Suryanarayana; Kumar, Duvvuri Venkata Rama Satya; Sriram, Kumaraswamy; Rao, Kadiri Madhusudhan; Bhattacharyulu, Chakravarthula Hayageeva Narasimha Venakata; Praveen, Bhoopathi; Krishna, Amirapu Radha

    2005-01-01

    The present study describes a RDBMS (relational database management system) for the effective management of Filariasis, a vector borne disease. Filariasis infects 120 million people from 83 countries. The possible re-emergence of the disease and the complexity of existing control programs warrant the development of new strategies. A database containing comprehensive data associated with filariasis finds utility in disease control. We have developed a database containing information on the socio-economic status of patients, mosquito collection procedures, mosquito dissection data, filariasis survey report and mass blood data. The database can be searched using a user friendly web interface. Availability http://www.webfil.org (login and password can be obtained from the authors) PMID:17597846

  19. Digital hand atlas for web-based bone age assessment: system design and implementation

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente

    2000-04-01

    A frequently used assessment method of skeletal age is atlas matching by a radiological examination of a hand image against a small set of Greulich-Pyle patterns of normal standards. The method however can lead to significant deviation in age assessment, due to a variety of observers with different levels of training. The Greulich-Pyle atlas based on middle upper class white populations in the 1950s, is also not fully applicable for children of today, especially regarding the standard development in other racial groups. In this paper, we present our system design and initial implementation of a digital hand atlas and computer-aided diagnostic (CAD) system for Web-based bone age assessment. The digital atlas will remove the disadvantages of the currently out-of-date one and allow the bone age assessment to be computerized and done conveniently via Web. The system consists of a hand atlas database, a CAD module and a Java-based Web user interface. The atlas database is based on a large set of clinically normal hand images of diverse ethnic groups. The Java-based Web user interface allows users to interact with the hand image database form browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect the skeletal maturity, is then extracted and compared with patterns from the atlas database to assess the bone age.

  20. Globe Teachers Guide and Photographic Data on the Web

    NASA Technical Reports Server (NTRS)

    Kowal, Dan

    2004-01-01

    The task of managing the GLOBE Online Teacher s Guide during this time period focused on transforming the technology behind the delivery system of this document. The web application transformed from a flat file retrieval system to a dynamic database access approach. The new methodology utilizes Java Server Pages (JSP) on the front-end and an Oracle relational database on the backend. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also included updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application. Instructions for modifying the JSP templates and managing database content were included in this document. It was delivered to the team by the end of October, 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. 333 study site photo images were added to the GLOBE database and posted on the web during this same time period for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and Earth Systems Online Poster from NGDC servers to GLOBE servers along with documentation for maintaining these applications.

  1. Workflow and web application for annotating NCBI BioProject transcriptome data

    PubMed Central

    Vera Alvarez, Roberto; Medeiros Vidal, Newton; Garzón-Martínez, Gina A.; Barrero, Luz S.; Landsman, David

    2017-01-01

    Abstract The volume of transcriptome data is growing exponentially due to rapid improvement of experimental technologies. In response, large central resources such as those of the National Center for Biotechnology Information (NCBI) are continually adapting their computational infrastructure to accommodate this large influx of data. New and specialized databases, such as Transcriptome Shotgun Assembly Sequence Database (TSA) and Sequence Read Archive (SRA), have been created to aid the development and expansion of centralized repositories. Although the central resource databases are under continual development, they do not include automatic pipelines to increase annotation of newly deposited data. Therefore, third-party applications are required to achieve that aim. Here, we present an automatic workflow and web application for the annotation of transcriptome data. The workflow creates secondary data such as sequencing reads and BLAST alignments, which are available through the web application. They are based on freely available bioinformatics tools and scripts developed in-house. The interactive web application provides a search engine and several browser utilities. Graphical views of transcript alignments are available through SeqViewer, an embedded tool developed by NCBI for viewing biological sequence data. The web application is tightly integrated with other NCBI web applications and tools to extend the functionality of data processing and interconnectivity. We present a case study for the species Physalis peruviana with data generated from BioProject ID 67621. Database URL: http://www.ncbi.nlm.nih.gov/projects/physalis/ PMID:28605765

  2. GMODWeb: a web framework for the generic model organism database

    PubMed Central

    O'Connor, Brian D; Day, Allen; Cain, Scott; Arnaiz, Olivier; Sperling, Linda; Stein, Lincoln D

    2008-01-01

    The Generic Model Organism Database (GMOD) initiative provides species-agnostic data models and software tools for representing curated model organism data. Here we describe GMODWeb, a GMOD project designed to speed the development of model organism database (MOD) websites. Sites created with GMODWeb provide integration with other GMOD tools and allow users to browse and search through a variety of data types. GMODWeb was built using the open source Turnkey web framework and is available from . PMID:18570664

  3. NNDC Stand: Activities and Services of the National Nuclear Data Center

    NASA Astrophysics Data System (ADS)

    Pritychenko, B.; Arcilla, R.; Burrows, T. W.; Dunford, C. L.; Herman, M. W.; McLane, V.; Obložinský, P.; Sonzogni, A. A.; Tuli, J. K.; Winchell, D. F.

    2005-05-01

    The National Nuclear Data Center (NNDC) collects, evaluates, and disseminates nuclear physics data for basic nuclear research, applied nuclear technologies including energy, shielding, medical and homeland security. In 2004, to answer the needs of nuclear data users community, NNDC completed a project to modernize data storage and management of its databases and began offering new nuclear data Web services. The principles of database and Web application development as well as related nuclear reaction and structure database services are briefly described.

  4. Accounting Data to Web Interface Using PERL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hargeaves, C

    2001-08-13

    This document will explain the process to create a web interface for the accounting information generated by the High Performance Storage Systems (HPSS) accounting report feature. The accounting report contains useful data but it is not easily accessed in a meaningful way. The accounting report is the only way to see summarized storage usage information. The first step is to take the accounting data, make it meaningful and store the modified data in persistent databases. The second step is to generate the various user interfaces, HTML pages, that will be used to access the data. The third step is tomore » transfer all required files to the web server. The web pages pass parameters to Common Gateway Interface (CGI) scripts that generate dynamic web pages and graphs. The end result is a web page with specific information presented in text with or without graphs. The accounting report has a specific format that allows the use of regular expressions to verify if a line is storage data. Each storage data line is stored in a detailed database file with a name that includes the run date. The detailed database is used to create a summarized database file that also uses run date in its name. The summarized database is used to create the group.html web page that includes a list of all storage users. Scripts that query the database folder to build a list of available databases generate two additional web pages. A master script that is run monthly as part of a cron job, after the accounting report has completed, manages all of these individual scripts. All scripts are written in the PERL programming language. Whenever possible data manipulation scripts are written as filters. All scripts are written to be single source, which means they will function properly on both the open and closed networks at LLNL. The master script handles the command line inputs for all scripts, file transfers to the web server and records run information in a log file. 
The rest of the scripts manipulate the accounting data or use the files created to generate HTML pages. Each script will be described in detail herein. The following is a brief description of HPSS taken directly from an HPSS web site. ''HPSS is a major development project, which began in 1993 as a Cooperative Research and Development Agreement (CRADA) between government and industry. The primary objective of HPSS is to move very large data objects between high performance computers, workstation clusters, and storage libraries at speeds many times faster than is possible with today's software systems. For example, HPSS can manage parallel data transfers from multiple network-connected disk arrays at rates greater than 1 Gbyte per second, making it possible to access high definition digitized video in real time.'' The HPSS accounting report is a canned report whose format is controlled by the HPSS developers.« less

  5. The Solar System Radio Explorer Kiosk - Leveraging Other E/PO Programs for Greater Impact

    NASA Astrophysics Data System (ADS)

    Garcia, L. N.; Reinisch, B. W.; Taylor, W. W.; Thieman, J. R.; Mendez, F.; Riccobono, M.

    2004-12-01

    The Solar System Radio Explorer Kiosk (SSREK) - a newly won small E/PO follow-on to a NASA/OSS research grant - is designed to leverage existing NASA E/PO projects and other education programs to enable a large return from a small investment. The SSREK project will create an interactive museum kiosk to engage and teach visitors about Jupiter and the Sun by learning what their low frequency radio bursts may be telling us about these worlds. This project will work with the network of radio observers and the archive of data obtained through the NASA-sponsored Radio Jove project. The SSREK project is partnering with the Maryland Science Center (MSC) as a test site for the SSREK. The MSC will enable us to ensure that this project meets the requirements of their museum environment. We are also partnering with the National Federation of the Blind (NFB) to help us enable museum visitors with visual impairments to share in the excitement of science and help these visitors recognize how other senses besides sight can be used to do science. Both the MSC and NFB will assist us in formative and summative evaluation of the project. All of the software and designs for the wheelchair-accessible arcade-style cabinet will be made available on the associated web site hosted at NASA/GSFC - further extending the reach of the project.

  6. Comprehensive mollusk acute toxicity database improves the use of Interspecies Correlation Estimation (ICE) models to predict toxicity of untested freshwater and endangered mussel species

    EPA Science Inventory

    Interspecies correlation estimation (ICE) models extrapolate acute toxicity data from surrogate test species to untested taxa. A suite of ICE models developed from a comprehensive database is available on the US Environmental Protection Agency’s web-based application, Web-I...

  7. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database

    DTIC Science & Technology

    2015-02-01

    Program ( CDIP ); and Part 4 for the Great Lakes Observing System/Coastal Forecasting System (GLOS/GLCFS). Using step-by-step instructions, this Part 5...Demirbilek, Z., L. Lin, and D. Wilson. 2014a. WaveNet: A web-based metocean data access, processing, and analysis tool; part 3– CDIP database

  8. Web-Resources for Astronomical Data in the Ultraviolet

    NASA Astrophysics Data System (ADS)

    Sachkov, M. E.; Malkov, O. Yu.

    2017-12-01

    In this paper we describe databases of space projects that are operating or have operated in the ultraviolet spectral region. We give brief descriptions and links to major sources for UV data on the web: archives, space mission sites, databases, catalogues. We pay special attention to the World Space Observatory—Ultraviolet mission that will be launched in 2021.

  9. The Common Gateway Interface (CGI) for Enhancing Access to Database Servers via the World Wide Web (WWW).

    ERIC Educational Resources Information Center

    Machovec, George S., Ed.

    1995-01-01

    Explains the Common Gateway Interface (CGI) protocol as a set of rules for passing information from a Web server to an external program such as a database search engine. Topics include advantages over traditional client/server solutions, limitations, sample library applications, and sources of information from the Internet. (LRW)

  10. Teradata University Network: A No Cost Web-Portal for Teaching Database, Data Warehousing, and Data-Related Subjects

    ERIC Educational Resources Information Center

    Jukic, Nenad; Gray, Paul

    2008-01-01

    This paper describes the value that information systems faculty and students in classes dealing with database management, data warehousing, decision support systems, and related topics, could derive from the use of the Teradata University Network (TUN), a free comprehensive web-portal. A detailed overview of TUN functionalities and content is…

  11. 40 CFR 60.2235 - In what form can I submit my reports?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... with ERT are subject to this requirement to be submitted electronically into EPA's WebFIRE database... tests required by this subpart to EPA's WebFIRE database by using the Compliance and Emissions Data...FIRE Administrator, MD C404-02, 4930 Old Page Rd., Durham, NC 27703. The same ERT file with the CBI...

  12. PathCase-SB architecture and database design

    PubMed Central

    2011-01-01

    Background Integration of metabolic pathways resources and regulatory metabolic network models, and deploying new tools on the integrated platform can help perform more effective and more efficient systems biology research on understanding the regulation in metabolic networks. Therefore, the tasks of (a) integrating under a single database environment regulatory metabolic networks and existing models, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) is built and released. The PathCase-SB database provides data and API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889

  13. Atlas of Iberian water beetles (ESACIB database).

    PubMed

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A; Ribera, Ignacio

    2015-01-01

    The ESACIB ('EScarabajos ACuáticos IBéricos') database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the "Atlas de los Coleópteros Acuáticos de España Peninsular". In this dataset data of 15 additional species has been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain and an other four with mainly terrestrial habits within the genus Helophorus (for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format.

  14. Atlas of Iberian water beetles (ESACIB database)

    PubMed Central

    Sánchez-Fernández, David; Millán, Andrés; Abellán, Pedro; Picazo, Félix; Carbonell, José A.; Ribera, Ignacio

    2015-01-01

    The ESACIB (‘EScarabajos ACuáticos IBéricos’) database is provided, including all available distributional data of Iberian and Balearic water beetles from the literature up to 2013, as well as from museum and private collections, PhD theses, and other unpublished sources. The database contains 62,015 records with associated geographic data (10×10 km UTM squares) for 488 species and subspecies of water beetles, 120 of them endemic to the Iberian Peninsula and eight to the Balearic Islands. This database was used for the elaboration of the “Atlas de los Coleópteros Acuáticos de España Peninsular”. In this dataset, data for 15 additional species have been added: 11 that occur in the Balearic Islands or mainland Portugal but not in peninsular Spain, and another four with mainly terrestrial habits within the genus Helophorus (included for taxonomic coherence). The complete dataset is provided in Darwin Core Archive format. PMID:26448717

  15. FERN Ethnomedicinal Plant Database: Exploring Fern Ethnomedicinal Plants Knowledge for Computational Drug Discovery.

    PubMed

    Thakar, Sambhaji B; Ghorpade, Pradnya N; Kale, Manisha V; Sonawane, Kailas D

    2015-01-01

    Fern plants are known for their ethnomedicinal applications, but a huge amount of fern medicinal plant information is scattered in the form of text, so database development is an appropriate way to cope with the situation. Given the importance of medicinally useful fern plants, we developed a web-based database that contains information about several groups of ferns, their medicinal uses, chemical constituents, and protein/enzyme sequences isolated from different fern plants. The Fern Ethnomedicinal Plant Database is an all-embracing, content-managed, web-based database system used to retrieve a collection of factual knowledge related to ethnomedicinal fern species. Most of the protein/enzyme sequences have been extracted from the NCBI protein sequence database. Information on fern species, family name, identification, NCBI taxonomy ID, geographical occurrence, trials, plant parts used, ethnomedicinal importance, and morphological characteristics was collected from various scientific literature and journals available in text form. Links to NCBI's BLAST, InterPro, phylogeny, and Clustal W web resources have also been provided for future comparative studies, so users can find information on fern plants and their medicinal applications in one place. The database includes information on 100 medicinal fern species. This web-based database would be advantageous for computational drug discovery and useful to botanists and those interested in botany, pharmacologists, researchers, biochemists, plant biotechnologists, ayurvedic practitioners, doctors/pharmacists, traditional medicine users, farmers, agricultural students and teachers from universities and colleges, and fern plant lovers.
    This effort would provide essential knowledge for users about potential applications for drug discovery, the conservation of fern species around the world, and, finally, the creation of social awareness.

  16. Architecture for biomedical multimedia information delivery on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.

  17. Great Basin paleontological database

    USGS Publications Warehouse

    Zhang, N.; Blodgett, R.B.; Hofstra, A.H.

    2008-01-01

    The U.S. Geological Survey has constructed a paleontological database for the Great Basin physiographic province that can be served over the World Wide Web for data entry, queries, displays, and retrievals. It is similar to the web-database solution that we constructed for Alaskan paleontological data (www.alaskafossil.org). The first phase of this effort, a paleontological bibliography for Nevada and portions of adjacent states in the Great Basin, has recently been completed. In addition, we are also compiling paleontological reports (known as E&R reports) of the U.S. Geological Survey, which are another extensive source of legacy data for this region. Initial population of the database benefited from a recently published conodont data set and is otherwise focused on Devonian and Mississippian localities, because strata of this age host important sedimentary exhalative (sedex) Au, Zn, and barite resources and enormous Carlin-type Au deposits. In addition, these strata are the most important petroleum source rocks in the region, and record the transition from extension to contraction associated with the Antler orogeny, the Alamo meteorite impact, and biotic crises associated with global oceanic anoxic events. The finished product will provide an invaluable tool for future geologic mapping, paleontological research, and mineral resource investigations in the Great Basin, making paleontological data acquired over nearly the past 150 yr readily available over the World Wide Web. A description of the structure of the database and the web interface developed for this effort is provided herein. This database is being used as a model for a National Paleontological Database (which we are currently developing for the U.S. Geological Survey) as well as for other paleontological databases now being developed in other parts of the globe. © 2008 Geological Society of America.

  18. A World Wide Web (WWW) server database engine for an organelle database, MitoDat.

    PubMed

    Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S

    1996-03-01

    We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially, or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the mitochondrion) on the WWW.
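
    The AND/OR term search that dbEngine's first capability describes can be sketched in a few lines. The following stdlib-only Python is an illustration of the idea, not dbEngine's actual CGI interface; the record fields and function names are invented:

```python
# Toy AND/OR term search over tabular records (illustrative only).

def matches(record, terms, mode="AND"):
    """Check whether a record (dict of field -> value) contains the terms."""
    text = " ".join(str(v).lower() for v in record.values())
    hits = [term.lower() in text for term in terms]
    return all(hits) if mode == "AND" else any(hits)

def search(records, terms, mode="AND"):
    """Return every record matching the term combination."""
    return [r for r in records if matches(r, terms, mode)]

records = [
    {"gene": "MT-CO1", "product": "cytochrome c oxidase subunit I"},
    {"gene": "MT-ND1", "product": "NADH dehydrogenase subunit 1"},
]

# AND: both terms must appear; OR: either term suffices.
and_hits = search(records, ["oxidase", "subunit"], mode="AND")
or_hits = search(records, ["oxidase", "dehydrogenase"], mode="OR")
```

    A real deployment would return the hits as hypertext links, as the abstract describes, rather than Python objects.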

  19. Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize

    PubMed Central

    2010-01-01

    Background Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. Results In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. Conclusions CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. 
    The web-based interface gives researchers different query options for mining the database across different types of experiments. The database is publicly available at http://agbase.msstate.edu. PMID:20946609

  20. The Neotoma Paleoecology Database

    NASA Astrophysics Data System (ADS)

    Grimm, E. C.; Ashworth, A. C.; Barnosky, A. D.; Betancourt, J. L.; Bills, B.; Booth, R.; Blois, J.; Charles, D. F.; Graham, R. W.; Goring, S. J.; Hausmann, S.; Smith, A. J.; Williams, J. W.; Buckland, P.

    2015-12-01

    The Neotoma Paleoecology Database (www.neotomadb.org) is a multiproxy, open-access, relational database that includes fossil data for the past 5 million years (the late Neogene and Quaternary Periods). Modern distributional data for various organisms are also being made available for calibration and paleoecological analyses. The project is a collaborative effort among individuals from more than 20 institutions worldwide, including domain scientists representing a spectrum of Pliocene-Quaternary fossil data types, as well as experts in information technology. Working groups are active for diatoms, insects, ostracodes, pollen and plant macroscopic remains, testate amoebae, rodent middens, vertebrates, age models, geochemistry and taphonomy. Groups are also active in developing online tools for data analyses and for developing modules for teaching at different levels. A key design concept of NeotomaDB is that stewards for various data types are able to remotely upload and manage data. Cooperatives for different kinds of paleo data, or from different regions, can appoint their own stewards. Over the past year, much progress has been made on development of the steward software-interface that will enable this capability. The steward interface uses web services that provide access to the database. More generally, these web services enable remote programmatic access to the database, which both desktop and web applications can use and which provide real-time access to the most current data. Use of these services can alleviate the need to download the entire database, which can be out-of-date as soon as new data are entered. In general, the Neotoma web services deliver data either from an entire table or from the results of a view. Upon request, new web services can be quickly generated. Future developments will likely expand the spatial and temporal dimensions of the database. NeotomaDB is open to receiving new datasets and stewards from the global Quaternary community. 
Research is supported by NSF EAR-0622349.

  1. Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize.

    PubMed

    Kelley, Rowena Y; Gresham, Cathy; Harper, Jonathan; Bridges, Susan M; Warburton, Marilyn L; Hawkins, Leigh K; Pechanova, Olga; Peethambaran, Bela; Pechan, Tibor; Luthe, Dawn S; Mylroie, J E; Ankala, Arunkanth; Ozkan, Seval; Henry, W B; Williams, W P

    2010-10-07

    Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. 
    The web-based interface gives researchers different query options for mining the database across different types of experiments. The database is publicly available at http://agbase.msstate.edu.

  2. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth and ground based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system including data, logic and presentation tier. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets to enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. A spatial database extension is employed by the PostgreSQL database. This object-oriented database was chosen over a relational database to tag spatial objects to tabular data, improving the retrieval of census and observational data at regional, provincial, and local areas. While the spatial database hinders processing raster data, a "work-around" was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptions (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO19115 standard. XML structured information of the SLD and metadata are stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, census and document data. 
The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
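
    The retrieval pattern described above, finding all data in a specific administrative unit belonging to a specific theme, can be sketched with plain Python structures. This is an illustration of the reference-object tagging idea only, not the WISDOM schema; all dataset names and tag values are invented:

```python
# Datasets tagged with spatial, thematic and temporal reference objects.
datasets = [
    {"name": "census_a", "spatial": "Province A", "theme": "demographics", "year": 2009},
    {"name": "wq_a",     "spatial": "Province A", "theme": "water management", "year": 2009},
    {"name": "landuse_b","spatial": "Province B", "theme": "environment", "year": 2008},
]

def find(datasets, spatial=None, theme=None, year=None):
    """Return datasets whose tags match every reference object given."""
    out = []
    for d in datasets:
        if spatial is not None and d["spatial"] != spatial:
            continue
        if theme is not None and d["theme"] != theme:
            continue
        if year is not None and d["year"] != year:
            continue
        out.append(d)
    return out

# "All demographics data in Province A" resolves through the tags alone.
hits = find(datasets, spatial="Province A", theme="demographics")
```

    In WISDOM the same lookup runs against a PostgreSQL spatial database rather than in-memory dicts, but the tag-then-filter retrieval logic is the same.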

  3. The unified database for the fixed target experiment BM@N

    NASA Astrophysics Data System (ADS)

    Gertsenberger, K. V.

    2016-09-01

    The article describes a database designed as the comprehensive data storage of the fixed-target experiment BM@N [1] at the Joint Institute for Nuclear Research (JINR) in Dubna. The structure and purposes of the BM@N facility are briefly presented. The scheme of the unified database and its parameters are described in detail. The BM@N database, implemented on the PostgreSQL database management system (DBMS), provides user access to the current information of the experiment. The interfaces developed for access to the database are also presented: one is implemented as a set of C++ classes that access the data without SQL statements; the other is a Web interface available on the Web page of the BM@N experiment.

  4. Katherine Fleming | NREL

    Science.gov Websites

    Katherine Fleming, Database and Web Applications Engineer, works on database and web application development in the Commercial Buildings Research group. Before joining NREL, Katherine was pursuing a Ph.D. with a focus on robotics and working as a Web developer with an emphasis on Web accessibility.

  5. Content and Workflow Management for Library Websites: Case Studies

    ERIC Educational Resources Information Center

    Yu, Holly, Ed.

    2005-01-01

    Using database-driven web pages or web content management (WCM) systems to manage increasingly diverse web content and to streamline workflows is a commonly practiced solution recognized in libraries today. However, limited library web content management models and funding constraints prevent many libraries from purchasing commercially available…

  6. Protecting Database Centric Web Services against SQL/XPath Injection Attacks

    NASA Astrophysics Data System (ADS)

    Laranjeiro, Nuno; Vieira, Marco; Madeira, Henrique

    Web services represent a powerful interface for back-end database systems and are increasingly being used in business critical applications. However, field studies show that a large number of web services are deployed with security flaws (e.g., having SQL Injection vulnerabilities). Although several techniques for the identification of security vulnerabilities have been proposed, developing non-vulnerable web services is still a difficult task. In fact, security-related concerns are hard to apply as they involve adding complexity to already complex code. This paper proposes an approach to secure web services against SQL and XPath Injection attacks, by transparently detecting and aborting service invocations that try to take advantage of potential vulnerabilities. Our mechanism was applied to secure several web services specified by the TPC-App benchmark, showing to be 100% effective in stopping attacks, non-intrusive and very easy to use.
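
    The class of vulnerability targeted here comes from splicing user input directly into a query string. A minimal stdlib sketch (using sqlite3 as a stand-in back end, unrelated to the paper's TPC-App setup) contrasts the vulnerable pattern with parameter binding, the standard defence:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Classic injection payload: closes the string literal, adds a tautology.
malicious = "x' OR '1'='1"

# Vulnerable pattern: the attacker's OR clause matches every row.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % malicious
).fetchall()

# Parameterized pattern: the input is bound as a literal value and matches nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
```

    The paper's approach instead detects and aborts such invocations transparently at runtime, which helps precisely because retrofitting parameterization into already complex service code is hard.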

  7. JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System

    NASA Astrophysics Data System (ADS)

    Soppera, N.; Bossant, M.; Dupont, E.

    2014-06-01

    JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.

  8. JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soppera, N., E-mail: nicolas.soppera@oecd.org; Bossant, M.; Dupont, E.

    JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Mathew; Bowen, Brian; Coles, Dwight

    The Middleware Automated Deployment Utilities consist of these three components: MAD: a utility designed to automate the deployment of Java applications to multiple Java application servers; the product contains a front-end web utility and back-end deployment scripts. MAR: a web front end to maintain and update the components inside the database. MWR-Encrypt: a web utility to convert a text string to an encrypted string used by the Oracle WebLogic application server; the encryption is done using the built-in functions of the Oracle WebLogic product and is mainly used to create an encrypted version of a database password.

  10. Biofuel Database

    National Institute of Standards and Technology Data Gateway

    Biofuel Database (Web, free access)   This database brings together structural, biological, and thermodynamic data for enzymes that are either in current use or are being considered for use in the production of biofuels.

  11. Automated grading of homework assignments and tests in introductory and intermediate statistics courses using active server pages.

    PubMed

    Stockburger, D W

    1999-05-01

    Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
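
    The generate-and-grade workflow described above can be sketched in modern terms. The original system used Active Server Pages and a database-backed grade book; this hedged Python sketch (all function names invented) captures the key trick of seeding a random generator with the student's identity so each assignment is individualized yet reproducible at grading time:

```python
import random

def make_assignment(student_id):
    """Generate a simple statistics problem deterministically per student."""
    rng = random.Random(student_id)  # same student -> same data, every time
    data = [rng.randint(1, 9) for _ in range(5)]
    return {"data": data, "question": "Compute the mean of the sample."}

def grade(student_id, submitted_mean, tol=0.01):
    """Regenerate the student's data server-side and compare to the key."""
    data = make_assignment(student_id)["data"]
    key = sum(data) / len(data)
    return abs(submitted_mean - key) <= tol

a = make_assignment("student42")
correct = grade("student42", sum(a["data"]) / len(a["data"]))
wrong = grade("student42", -1.0)
```

    Because the data are regenerated from the seed, nothing but the seed and the submitted answer needs to be stored to grade and to update the grade book.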

  12. From field data collection to earth sciences dissemination: mobile examples in the digital era

    NASA Astrophysics Data System (ADS)

    Giardino, Marco; Ghiraldi, Luca; Palomba, Mauro; Perotti, Luigi

    2015-04-01

    In the framework of the technological and cultural revolution related to the massive diffusion of mobile devices such as smartphones and tablets, information management and accessibility are changing, and many software houses and developer communities have produced applications that meet a wide range of users' needs. Modern collection, storage and sharing of data have radically changed, and advances in ICT increasingly involve field-based activities. Progress in this research and its applications depends on three main components: hardware, software and the web system. Since 2008 the geoSITLab multidisciplinary group (Earth Sciences Department and NatRisk Centre of the University of Torino and the Natural Sciences Museum of the Piemonte Region) has been active in defining and testing methods for collecting, managing and sharing field information using mobile devices. Key issues include geomorphological digital mapping, natural hazards monitoring, geoheritage assessment and applications for the teaching of Earth Sciences. An overview of the application studies is offered here, including the use of mobile tools for data collection, the construction of relational databases for inventory activities and the testing of web-mapping tools and mobile apps for data dissemination. The common thread is a standardized digital approach allowing the use of mobile devices in each step of the process, analysed within the different projects set up by the research group (Geonathaz, EgeoFieldwork, Progeo Piemonte, GeomediaWeb). The hardware component mainly consists of handheld mobile devices (e.g. smartphones, PDAs and tablets). The software component corresponds to applications for spatial data visualization on mobile devices, such as composite mobile GIS or simple location-based apps.
    The web component allows the integration of collected data into a geodatabase based on a client-server architecture, where information can be easily loaded, uploaded and shared between field staff and the data management team, in order to disseminate collected information to the media or to inform decision makers. Results demonstrated the possibility of recording field observations in a fast and reliable way, using standardized formats that can improve the precision of collected information and lower the possibility of errors and data omission. Dedicated forms have been set up for gathering different thematic data (geologic/geomorphologic, faunal and floristic, path system…etc.). Field data made it possible to produce maps and SDI useful for many applications: from country planning to disaster risk management, from geoheritage management to the dissemination of Earth Science concepts.

  13. Construction of an ortholog database using the semantic web technology for integrative analysis of genomic data.

    PubMed

    Chiba, Hirokazu; Nishide, Hiroyo; Uchiyama, Ikuo

    2015-01-01

    Recently, various types of biological data, including genomic sequences, have been rapidly accumulating. To discover biological knowledge from such growing heterogeneous data, a flexible framework for data integration is necessary. Ortholog information is a central resource for interlinking corresponding genes among different organisms, and the Semantic Web provides a key technology for the flexible integration of heterogeneous data. We have constructed an ortholog database using Semantic Web technology, aiming at the integration of numerous genomic data and various types of biological information. To formalize the structure of the ortholog information in the Semantic Web, we have constructed the Ortholog Ontology (OrthO). While OrthO is a compact ontology for general use, it is designed to be extended to the description of database-specific concepts. On the basis of OrthO, we described the ortholog information from our Microbial Genome Database for Comparative Analysis (MBGD) in the form of the Resource Description Framework (RDF) and made it available through a SPARQL endpoint, which accepts arbitrary queries specified by users. In this framework based on OrthO, the biological data of different organisms can be integrated using the ortholog information as a hub. In addition, ortholog information from different data sources can be compared using OrthO as a shared ontology. Here we show some examples demonstrating that ortholog information described in RDF can be used to link various biological data such as taxonomy information and Gene Ontology. Thus, an ortholog database built on Semantic Web technology can contribute to biological knowledge discovery through integrative data analysis.
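
    The "ortholog information as a hub" idea can be illustrated with a toy triple store in stdlib Python. The triples mimic the flavour of RDF statements, but the group and gene names are invented and the predicates are not the OrthO vocabulary; a real client would issue SPARQL queries against the endpoint instead:

```python
# Toy subject-predicate-object triples: two genes from different organisms
# share an ortholog group, and one carries a GO annotation (GO:0006281, DNA repair).
triples = [
    ("geneA_ecoli", "memberOf", "og:recA"),
    ("geneB_bsub",  "memberOf", "og:recA"),
    ("geneA_ecoli", "hasGO",    "GO:0006281"),
]

def orthologs(gene):
    """Genes linked to the given gene through a shared ortholog group (the hub)."""
    groups = {o for s, p, o in triples if s == gene and p == "memberOf"}
    return {s for s, p, o in triples
            if p == "memberOf" and o in groups and s != gene}

def inferred_go(gene):
    """GO terms attached to any ortholog of the gene, carried over via the hub."""
    return {o for s, p, o in triples
            if p == "hasGO" and s in orthologs(gene)}
```

    The point of the hub structure is visible even at this scale: the B. subtilis gene has no direct annotation, yet inherits one through the ortholog group.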

  14. Design Considerations for a Web-based Database System of ELISpot Assay in Immunological Research

    PubMed Central

    Ma, Jingming; Mosmann, Tim; Wu, Hulin

    2005-01-01

    The enzyme-linked immunospot (ELISpot) assay has been a primary tool in immunological research (such as studies of HIV-specific T cell responses). Due to the huge amount of data involved in ELISpot assay testing, a database system is needed for efficient data entry, easy retrieval, secure storage, and convenient data processing. In addition, the NIH has recently issued a policy to promote the sharing of research data (see http://grants.nih.gov/grants/policy/data_sharing). A Web-based database system would definitely benefit data sharing among broad research communities. Here are some considerations for a database system for the ELISpot assay (DBSEA). PMID:16779326

  15. TogoTable: cross-database annotation system using the Resource Description Framework (RDF) data model.

    PubMed

    Kawano, Shin; Watanabe, Tsutomu; Mizuguchi, Sohei; Araki, Norie; Katayama, Toshiaki; Yamaguchi, Atsuko

    2014-07-01

    TogoTable (http://togotable.dbcls.jp/) is a web tool that adds user-specified annotations to a table that a user uploads. Annotations are drawn from several biological databases that use the Resource Description Framework (RDF) data model. TogoTable uses database identifiers (IDs) in the table as a query key for searching. RDF data, which form a network called Linked Open Data (LOD), can be searched from SPARQL endpoints using a SPARQL query language. Because TogoTable uses RDF, it can integrate annotations from not only the reference database to which the IDs originally belong, but also externally linked databases via the LOD network. For example, annotations in the Protein Data Bank can be retrieved using GeneID through links provided by the UniProt RDF. Because RDF has been standardized by the World Wide Web Consortium, any database with annotations based on the RDF data model can be easily incorporated into this tool. We believe that TogoTable is a valuable Web tool, particularly for experimental biologists who need to process huge amounts of data such as high-throughput experimental output. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
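
    TogoTable's core operation, appending annotations to an uploaded table keyed on a database-identifier column, can be sketched with plain dicts. The IDs, fields, and store contents below are illustrative only, not drawn from an actual RDF endpoint; the real tool resolves them via SPARQL over Linked Open Data:

```python
# Stand-in for annotations fetched from reference databases (values invented).
annotation_store = {
    "P12345": {"organism": "H. sapiens", "pdb": "1ABC"},
    "P67890": {"organism": "M. musculus", "pdb": None},
}

def annotate(table, id_column, fields):
    """Append the requested annotation fields to each row, keyed by its ID."""
    out = []
    for row in table:
        ann = annotation_store.get(row[id_column], {})
        enriched = dict(row)
        for f in fields:
            enriched[f] = ann.get(f)  # None when the ID or field is unknown
        out.append(enriched)
    return out

table = [{"id": "P12345", "score": 0.9}, {"id": "P99999", "score": 0.1}]
result = annotate(table, "id", ["organism"])
```

    What makes the real tool more powerful than this local join is the LOD network: an ID can pull in annotations from externally linked databases, not just the database the ID belongs to.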

  16. HepSEQ: International Public Health Repository for Hepatitis B

    PubMed Central

    Gnaneshan, Saravanamuttu; Ijaz, Samreen; Moran, Joanne; Ramsay, Mary; Green, Jonathan

    2007-01-01

    HepSEQ is a repository for an extensive library of public health and molecular data relating to hepatitis B virus (HBV) infection collected from international sources. It is hosted by the Centre for Infections, Health Protection Agency (HPA), England, United Kingdom. This repository has been developed as a web-enabled, quality-controlled database to act as a tool for surveillance, HBV case management and research. The web front-end for the database system can be accessed from . The format of the database system allows comprehensive molecular, clinical and epidemiological data to be deposited into a functional database, and lets users search and manipulate the stored data and extract and visualize information on epidemiological, virological, clinical, nucleotide sequence and mutational aspects of HBV infection through the web front-end. Specific tools built into the database can be used to analyse deposited data and provide information on HBV genotype, identify mutations with known clinical significance (e.g. vaccine escape, precore and antiviral-resistant mutations) and carry out sequence homology searches against other deposited strains. Further mechanisms are also in place to allow specific tailored searches of the database. PMID:17130143

  17. The Physiology Constant Database of Teen-Agers in Beijing

    PubMed Central

    Wei-Qi, Wei; Guang-Jin, Zhu; Cheng-Li, Xu; Shao-Mei, Han; Bao-Shen, Qi; Li, Chen; Shu-Yu, Zu; Xiao-Mei, Zhou; Wen-Feng, Hu; Zheng-Guo, Zhang

    2004-01-01

    Physiology constants of adolescents are important for understanding growing living systems and are a useful reference in clinical and epidemiological research. Until recently, physiology constants were not available in China, and therefore most physiologists, physicians, and nutritionists had to rely on data from abroad for reference. However, differences between Eastern and Western populations cast doubt on the applicability of overseas data. We have therefore created a database system to provide a repository for the storage of physiology constants of teen-agers in Beijing. The several thousand data records are divided into hematological biochemistry, lung function, and cardiac function, with all data manually checked before being transferred into the database. The system was built from a web interface, scripts, and a relational database. The physiology data were integrated into the relational database system to provide flexible query facilities using combinations of various terms and parameters. A web browser interface was designed to facilitate searching. The database is available on the web. Statistical tables, scatter diagrams, and histograms of the data are available to both anonymous and registered users according to their queries, while only registered users can access details, including data download and advanced search. PMID:15258669

  18. PATIKAweb: a Web interface for analyzing biological pathways through advanced querying and visualization.

    PubMed

    Dogrusoz, U; Erson, E Z; Giral, E; Demir, E; Babur, O; Cetintas, A; Colak, R

    2006-02-01

    PATIKAweb provides a Web interface for retrieving and analyzing biological pathways in the PATIKA database, which contains data integrated from various prominent public pathway databases. It features a user-friendly interface, dynamic visualization and automated layout, advanced graph-theoretic queries for extracting biologically important phenomena, local persistence capability and export facilities for various pathway exchange formats.

  19. A Web-Based Multi-Database System Supporting Distributed Collaborative Management and Sharing of Microarray Experiment Information

    PubMed Central

    Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco

    2006-01-01

    We developed MicroGen, a multi-database Web based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storing according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all multidisciplinary actors involved in spotted microarray experiments. PMID:17238488

  20. A Revitalized USAF Culture of Innovation

    DTIC Science & Technology

    2013-11-01

    Stephen B., “The United States Air Force and the culture of innovation 1945-1965” Air Force History and Museums Program Washington D.C. Published 2002...developing ideas that have potential. These ideas would be captured in a database similar to the Joint Lessons Learned Information System (JLISS...develop their solutions. A sabbatical program would address this need. While there is little history of sabbaticals in the military, and none

  1. Resources | Division of Cancer Prevention

    Cancer.gov

    Manual of Operations Version 3, 12/13/2012 (PDF, 162KB) Database Sources Consortium for Functional Glycomics databases Design Studies Related to the Development of Distributed, Web-based European Carbohydrate Databases (EUROCarbDB) |

  2. NABIC marker database: A molecular markers information network of agricultural crops.

    PubMed

    Kim, Chang-Kug; Seol, Young-Joo; Lee, Dong-Jun; Jeong, In-Seon; Yoon, Ung-Han; Lee, Gang-Seob; Hahn, Jang-Ho; Park, Dong-Suk

    2013-01-01

    In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed its molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker and gene annotation. It provides 7250 marker locations, 3301 RSN marker properties and annotation information for 3280 molecular markers in agricultural plants. Each molecular marker entry provides information such as marker name, expressed sequence tag number, gene definition and general marker information. This updated marker database provides useful information through a user-friendly web interface that assists in tracing new chromosome structures and positional gene functions using specific molecular markers. The database is available for free at http://nabic.rda.go.kr/gere/rice/molecularMarkers/

  3. Coverage and quality: A comparison of Web of Science and Scopus databases for reporting faculty nursing publication metrics.

    PubMed

    Powell, Kimberly R; Peterson, Shenita R

    Web of Science and Scopus are the leading databases of scholarly impact. Recent studies outside the field of nursing report differences in their journal coverage and quality. We therefore conducted a comparative analysis of nursing publications and their reported impact. Journal coverage of the field of nursing was compared between the two databases. Additionally, publications by 2014 nursing faculty were collected from both databases and compared for overall coverage and reported quality, as modeled by the SCImago Journal Rank, peer review status, and MEDLINE inclusion. Individual author impact, modeled by the h-index, was calculated in each database for comparison. Scopus offered significantly higher journal coverage. For 2014 faculty publications, 100% of journals were found in Scopus; Web of Science offered 82%. No significant difference was found in the quality of reported journals. Author h-indices were found to be higher in Scopus. When reporting faculty publications and scholarly impact, academic nursing programs may be better represented by Scopus, without compromising journal quality. Programs with strong interdisciplinary work should examine all areas of strength to ensure appropriate coverage. Copyright © 2017 Elsevier Inc. All rights reserved.
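    The h-index mentioned above is simple to compute from a list of per-paper citation counts. The counts below are hypothetical; they just illustrate how the same author can score differently in two databases that index different sets of citing journals.

    ```python
    def h_index(citations):
        """h-index: the largest h such that the author has h papers
        with at least h citations each."""
        cites = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(cites, start=1):
            if c >= i:
                h = i          # the i-th paper still has >= i citations
            else:
                break
        return h

    # Hypothetical citation counts for the same author in two databases;
    # the database with broader coverage records more citations per paper.
    scopus_counts = [24, 18, 12, 9, 7, 5, 2]
    wos_counts    = [20, 15, 10, 6, 4, 3, 1]

    print(h_index(scopus_counts))  # -> 5
    print(h_index(wos_counts))     # -> 4
    ```

    This is why the abstract can report higher h-indices in Scopus without any difference in the underlying publication list.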

  4. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2008-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous and independent from each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the web service dispatcher and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegation mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image is returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
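    The dispatcher's asynchronous, parallel invocation of map services can be sketched with Python's asyncio. The service names, delays and string "layers" below are hypothetical stand-ins for real web map service requests and image compositing.

    ```python
    import asyncio

    # Hypothetical stand-in for a remote web map service: returns a
    # "layer" after a simulated network delay instead of a WMS request.
    async def invoke_map_service(name, delay):
        await asyncio.sleep(delay)
        return f"{name}-layer"

    async def dispatcher(registry):
        # Asynchronous, parallel invocation: all registered services run
        # concurrently, so total time is bounded by the slowest service,
        # not the sum of all response times.
        tasks = [invoke_map_service(name, delay) for name, delay in registry]
        layers = await asyncio.gather(*tasks)  # preserves registry order
        # Overlay step: the paper composites the returned images
        # transparently; here we just merge the layer names.
        return " + ".join(layers)

    # Hypothetical registration database of services and their latencies.
    registry = [("roads", 0.02), ("rivers", 0.01), ("parcels", 0.03)]
    print(asyncio.run(dispatcher(registry)))
    # -> roads-layer + rivers-layer + parcels-layer
    ```

    `asyncio.gather` returns results in task order, which mirrors the dispatcher collecting one image per registered service before overlaying them.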

  5. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2009-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous and independent from each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the web service dispatcher and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher can dynamically invoke each web map service through an asynchronous delegation mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image is returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.

  6. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    PubMed

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. This system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.

  7. EVpedia: a community web portal for extracellular vesicles research.

    PubMed

    Kim, Dae-Kyum; Lee, Jaewook; Kim, Sae Rom; Choi, Dong-Sic; Yoon, Yae Jin; Kim, Ji Hyun; Go, Gyeongyun; Nhung, Dinh; Hong, Kahye; Jang, Su Chul; Kim, Si-Hyun; Park, Kyong-Su; Kim, Oh Youn; Park, Hyun Taek; Seo, Ji Hye; Aikawa, Elena; Baj-Krzyworzeka, Monika; van Balkom, Bas W M; Belting, Mattias; Blanc, Lionel; Bond, Vincent; Bongiovanni, Antonella; Borràs, Francesc E; Buée, Luc; Buzás, Edit I; Cheng, Lesley; Clayton, Aled; Cocucci, Emanuele; Dela Cruz, Charles S; Desiderio, Dominic M; Di Vizio, Dolores; Ekström, Karin; Falcon-Perez, Juan M; Gardiner, Chris; Giebel, Bernd; Greening, David W; Gross, Julia Christina; Gupta, Dwijendra; Hendrix, An; Hill, Andrew F; Hill, Michelle M; Nolte-'t Hoen, Esther; Hwang, Do Won; Inal, Jameel; Jagannadham, Medicharla V; Jayachandran, Muthuvel; Jee, Young-Koo; Jørgensen, Malene; Kim, Kwang Pyo; Kim, Yoon-Keun; Kislinger, Thomas; Lässer, Cecilia; Lee, Dong Soo; Lee, Hakmo; van Leeuwen, Johannes; Lener, Thomas; Liu, Ming-Lin; Lötvall, Jan; Marcilla, Antonio; Mathivanan, Suresh; Möller, Andreas; Morhayim, Jess; Mullier, François; Nazarenko, Irina; Nieuwland, Rienk; Nunes, Diana N; Pang, Ken; Park, Jaesung; Patel, Tushar; Pocsfalvi, Gabriella; Del Portillo, Hernando; Putz, Ulrich; Ramirez, Marcel I; Rodrigues, Marcio L; Roh, Tae-Young; Royo, Felix; Sahoo, Susmita; Schiffelers, Raymond; Sharma, Shivani; Siljander, Pia; Simpson, Richard J; Soekmadji, Carolina; Stahl, Philip; Stensballe, Allan; Stępień, Ewa; Tahara, Hidetoshi; Trummer, Arne; Valadi, Hadi; Vella, Laura J; Wai, Sun Nyunt; Witwer, Kenneth; Yáñez-Mó, María; Youn, Hyewon; Zeidler, Reinhard; Gho, Yong Song

    2015-03-15

    Extracellular vesicles (EVs) are spherical bilayered proteolipids, harboring various bioactive molecules. Due to the complexity of the vesicular nomenclatures and components, online searches for EV-related publications and vesicular components are currently challenging. We present an improved version of EVpedia, a public database for EVs research. This community web portal contains a database of publications and vesicular components, identification of orthologous vesicular components, bioinformatic tools and a personalized function. EVpedia includes 6879 publications, 172 080 vesicular components from 263 high-throughput datasets, and has been accessed more than 65 000 times from more than 750 cities. In addition, about 350 members from 73 international research groups have participated in developing EVpedia. This free web-based database might serve as a useful resource to stimulate the emerging field of EV research. The web site was implemented in PHP, Java, MySQL and Apache, and is freely available at http://evpedia.info. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Integration of multiple DICOM Web servers into an enterprise-wide Web-based electronic medical record

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Langer, Steven G.; Martin, Kelly P.

    1999-07-01

    The purpose of this paper is to integrate multiple DICOM image webservers into the currently existing enterprise-wide web-browsable electronic medical record. Over the last six years the University of Washington has created a clinical data repository combining in a distributed relational database information from multiple departmental databases (MIND). A character cell-based view of this data, called the Mini Medical Record (MMR), has been available for four years. MINDscape, unlike the text-based MMR, provides a platform independent, dynamic, web browser view of the MIND database that can be easily linked with medical knowledge resources on the network, like PubMed and the Federated Drug Reference. There are over 10,000 MINDscape user accounts at the University of Washington Academic Medical Centers. The weekday average number of hits to MINDscape is 35,302 and the weekday average number of individual users is 1252. DICOM images from multiple webservers are now being viewed through the MINDscape electronic medical record.

  9. Capturing citation activity in three health sciences departments: a comparison study of Scopus and Web of Science.

    PubMed

    Sarkozy, Alexandra; Slyman, Alison; Wu, Wendy

    2015-01-01

    Scopus and Web of Science are the two major citation databases that collect and disseminate bibliometric statistics about research articles, journals, institutions, and individual authors. Liaison librarians are now regularly called upon to utilize these databases to assist faculty in finding citation activity on their published works for tenure and promotion, grant applications, and more. But questions about the accuracy, scope, and coverage of these tools deserve closer scrutiny. Discrepancies in citation capture led to a systematic study on how Scopus and Web of Science compared in a real-life situation encountered by liaisons: comparing three different disciplines at a medical school and nursing program. How many articles would each database retrieve for each faculty member using the author-searching tools provided? How many cited references for each faculty member would each tool generate? Results demonstrated troubling differences in publication and citation activity capture between Scopus and Web of Science. Implications for librarians are discussed.

  10. An overview of biomedical literature search on the World Wide Web in the third millennium.

    PubMed

    Kumar, Prince; Goel, Roshni; Jain, Chandni; Kumar, Ashish; Parashar, Abhishek; Gond, Ajay Ratan

    2012-06-01

    Complete access to the existing pool of biomedical literature and the ability to "hit" upon the exact information of the relevant specialty are becoming essential elements of academic and clinical expertise. With the rapid expansion of the literature database, it is almost impossible to keep up to date with every innovation. Using the Internet, however, most people can freely access this literature at any time, from almost anywhere. This paper highlights the use of the Internet in obtaining valuable biomedical research information, which is mostly available from journals, databases, textbooks and e-journals in the form of web pages, text materials, images, and so on. The authors present an overview of web-based resources for biomedical researchers, providing information about Internet search engines (e.g., Google), web-based bibliographic databases (e.g., PubMed, IndMed) and how to use them, and other online biomedical resources that can assist clinicians in reaching well-informed clinical decisions.

  11. The Xeno-glycomics database (XDB): a relational database of qualitative and quantitative pig glycome repertoire.

    PubMed

    Park, Hae-Min; Park, Ju-Hyeong; Kim, Yoon-Woo; Kim, Kyoung-Jin; Jeong, Hee-Jin; Jang, Kyoung-Soon; Kim, Byung-Gee; Kim, Yun-Gon

    2013-11-15

    In recent years, the improvement of mass spectrometry-based glycomics techniques (i.e. highly sensitive, quantitative and high-throughput analytical tools) has enabled us to obtain large datasets of glycans. Here we present the Xeno-glycomics database (XDB), which contains cell- or tissue-specific pig glycomes analyzed with mass spectrometry-based techniques, including comprehensive pig glycan information on chemical structures, mass values, types and relative quantities. It was designed with a user-friendly web-based interface that allows users to query the database according to pig tissue/cell types or glycan masses. This database will contribute qualitative and quantitative information on glycomes characterized from various pig cells/organs in xenotransplantation and might eventually provide new targets in the α1,3-galactosyltransferase gene-knockout pig era. The database can be accessed on the web at http://bioinformatics.snu.ac.kr/xdb.

  12. SWS: accessing SRS sites contents through Web Services.

    PubMed

    Romano, Paolo; Marra, Domenico

    2008-03-26

    Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres and workflow systems have been proposed for biological data analysis. New databanks are often developed with these technologies in mind, but many existing databases do not allow programmatic access; only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools, but its data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of WS that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL-compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and selecting the most up-to-date among available systems. The development and implementation of Web Services that offer programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.

  13. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    NASA Astrophysics Data System (ADS)

    Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal

    2005-11-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.
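    The XML-based data handling that lets stand-alone GEMINI modules exchange records from local and remote databases can be sketched with the standard library's `xml.etree`. The element layout and well IDs below are hypothetical, since the abstract does not show GEMINI's actual schemas.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical well-log payloads, one from a local database and one
    # from a remote database, serialized as XML by different modules.
    local_xml = """<wells source="local">
      <well id="KGS-101"><top fm="Lansing" depth="2875"/></well>
    </wells>"""
    remote_xml = """<wells source="remote">
      <well id="KGS-102"><top fm="Lansing" depth="2910"/></well>
    </wells>"""

    # A consuming module parses each document and merges the records into
    # one working set, regardless of which database produced them.
    merged = ET.Element("wells")
    for doc in (local_xml, remote_xml):
        for well in ET.fromstring(doc):
            merged.append(well)

    print([w.get("id") for w in merged])  # -> ['KGS-101', 'KGS-102']
    ```

    Because the exchange format is plain XML, the same parsing step works whether the source is a user's PC, the server, or a Java Web Start applet, which is what gives the user the flexibility over database location described above.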

  14. Geothopica and the interactive analysis and visualization of the updated Italian National Geothermal Database

    NASA Astrophysics Data System (ADS)

    Trumpy, Eugenio; Manzella, Adele

    2017-02-01

    The Italian National Geothermal Database (BDNG) is the largest collection of Italian geothermal data and was set up in the 1980s. It has since been updated both in terms of content and management tools: information on deep wells and thermal springs (with temperature > 30 °C) is currently organized and stored in a PostgreSQL relational database management system, which guarantees high performance, data security and easy access through different client applications. The BDNG is the core of the Geothopica web site, whose webGIS tool allows different types of user to access geothermal data, to visualize multiple types of datasets, and to perform integrated analyses. The webGIS tool has been recently improved by two specially designed, programmed and implemented visualization tools to display data on well lithology and underground temperatures. This paper describes the contents of the database and its software and data updates, as well as the webGIS tool, including the new tools for lithology and temperature visualization. The geoinformation organized in the database and accessible through Geothopica is of use not only for geothermal purposes, but also for any kind of georesource and CO2 storage project requiring the organization of, and access to, deep underground data. Geothopica also supports project developers, researchers, and decision makers in the assessment, management and sustainable deployment of georesources.

  15. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    USGS Publications Warehouse

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. © 2005 Elsevier Ltd. All rights reserved.

  16. Saada: A Generator of Astronomical Database

    NASA Astrophysics Data System (ADS)

    Michel, L.

    2011-11-01

    Saada transforms a set of heterogeneous FITS files or VOtables of various categories (images, tables, spectra, etc.) into a powerful database deployed on the Web. Databases are located on your host and stay independent of any external server. This job doesn’t require writing code. Saada can mix data of various categories in multiple collections. Data collections can be linked to each other, creating relevant browsing paths and allowing data-mining oriented queries. Saada supports four VO services (spectra, images, sources and TAP). Data collections can be published immediately after the deployment of the Web interface.

  17. WebCN: A web-based computation tool for in situ-produced cosmogenic nuclides

    NASA Astrophysics Data System (ADS)

    Ma, Xiuzeng; Li, Yingkui; Bourgeois, Mike; Caffee, Marc; Elmore, David; Granger, Darryl; Muzikar, Paul; Smith, Preston

    2007-06-01

    Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well-defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on nuclide concentrations measured by accelerator mass spectrometry. WebCN for 10Be and 26Al has been finished and published at http://www.physics.purdue.edu/primelab/for_users/rockage.html. WebCN for 36Cl is under construction. WebCN is designed as a three-tier client/server model and uses the open source PostgreSQL for database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system. Open Database Connectivity is used to link PostgreSQL and Microsoft Access. WebCN accounts for both spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
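    For a surface with no erosion and no inheritance, the exposure age follows from the standard radioactive buildup equation N = (P/λ)(1 − e^(−λt)), solved for t. The sketch below applies that textbook formula with illustrative 10Be numbers; the real WebCN additionally scales production rates for the spatial and temporal variation of the cosmic-ray flux, which this sketch ignores.

    ```python
    import math

    def exposure_age(N, P, half_life):
        """Exposure age in years from nuclide concentration N (atoms/g),
        production rate P (atoms/g/yr) and the nuclide half-life (yr),
        assuming constant production, no erosion and no inheritance:
        N = (P/lam) * (1 - exp(-lam * t))  =>  t = -ln(1 - N*lam/P) / lam.
        """
        lam = math.log(2) / half_life   # decay constant (1/yr)
        return -math.log(1 - N * lam / P) / lam

    # Illustrative inputs only: 10Be half-life ~1.387 Myr and a nominal
    # sea-level high-latitude production rate of ~4 atoms/g/yr.
    t = exposure_age(N=5.0e4, P=4.0, half_life=1.387e6)
    print(round(t))  # about 12,500 years for this young surface
    ```

    For young surfaces the decay correction is small and t is close to N/P; for old surfaces the logarithm term matters, which is why the decay constant appears explicitly.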

  18. Telling the Story of Ridge Flank Research to all Ages and Audiences

    NASA Astrophysics Data System (ADS)

    Cooper, S. K.; Brennon, R.; Hamner, K.; Kane, J.; Ringlein, J.; Strong, L. R.; Orcutt, B. N.; Fisher, A. T.; Edwards, K. J.; Cowen, J. P.; Hulme, S.; Wheat, C. G.; Scientific Team of Expedition AT18-07

    2011-12-01

    A team of six education and communication specialists took part in Expedition AT18-07 onboard the R/V Atlantis during Summer 2011 as part of Hydrogeologic, Geochemical, and Microbiological Experiments in Young Ocean Crust of the Northeastern Pacific Ocean Using Subseafloor Observatories. Fully integrating into the science party of this expedition, educators brought their diverse backgrounds (middle school science, high school physics and biology, informal science institutions, and science media/communication) to bear as they participated in shipboard operations, laboratory analyses and scientific problem-solving. Their primary role, however, was to translate the excitement and significance of these investigations for a variety of non-science audiences on shore - including museum visitors, scout groups, summer camps, summer schools and college students - and provide rich opportunities for interaction surrounding transformative science in real time. Using a satellite-based internet link, educators took advantage of web-based tools, Skype and the social networking sites Facebook, Twitter and YouTube, to bring the real process of science live from the seafloor to classrooms from Washington, D.C. to Taiwan. Activities and products included: 13 live ship-to-shore video broadcasts, development of classroom activities, partnerships among scientists and educators, web-based microbiology investigations, production of videos, development of museum exhibits and programs, and a video game based on the ROV Jason. In addition, several scientists initiated independent education projects, to which the education and communication team contributed their skills, including the Adopt a Microbe from the Seafloor web site, which provides regular art and science activities about microbiology and invites active participation from shore-based groups.
Results of post-expedition work with students and the public will be shared, as will pre- and post-expedition evaluation reports on the impact of this experience directly on the team members. Special thanks to the Center for Dark Energy Biosphere Investigations and Deep Earth Academy for sponsoring this work.

  19. Relational databases: a transparent framework for encouraging biology students to think informatically.

    PubMed

    Rice, Michael; Gladstone, William; Weir, Michael

    2004-01-01

    We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a custom algorithm using Drosophila cDNA transcripts and genomic DNA and supports a set of procedures for analyzing splice-site sequence space. A generic Web interface permits the execution of the procedures with a variety of parameter settings and also supports custom structured query language queries. Moreover, new analytical procedures can be added by updating special metatables in the database without altering the Web interface. The database provides a powerful setting for students to develop informatic thinking skills.
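    The metatable mechanism described above, where new analyses are registered as data rather than code, can be sketched with SQLite. The schema, table and procedure names below are hypothetical, not those of the Wesleyan database.

    ```python
    import sqlite3

    # A toy splice-site table plus a "metatable" of named queries: adding
    # a row to the metatable exposes a new analysis without changing the
    # front-end code (hypothetical schema and data).
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE splice_site (gene TEXT, kind TEXT, seq TEXT)")
    con.executemany("INSERT INTO splice_site VALUES (?,?,?)", [
        ("dpp", "donor",    "GTAAGT"),
        ("wg",  "donor",    "GTGAGT"),
        ("dpp", "acceptor", "TTTCAG"),
    ])
    con.execute("CREATE TABLE meta_procedure (name TEXT, sql TEXT)")
    con.execute("INSERT INTO meta_procedure VALUES (?,?)", (
        "count_by_kind",
        "SELECT kind, COUNT(*) FROM splice_site GROUP BY kind ORDER BY kind",
    ))

    # The web interface looks up the procedure by name and runs it:
    (sql,) = con.execute(
        "SELECT sql FROM meta_procedure WHERE name=?", ("count_by_kind",)
    ).fetchone()
    print(con.execute(sql).fetchall())  # -> [('acceptor', 1), ('donor', 2)]
    ```

    Registering queries as rows is what makes the Web interface "generic": it enumerates the metatable to build its menu of analyses, so instructors can add procedures without touching the interface code.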

  20. Relational Databases: A Transparent Framework for Encouraging Biology Students To Think Informatically

    PubMed Central

    2004-01-01

    We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a custom algorithm using Drosophila cDNA transcripts and genomic DNA and supports a set of procedures for analyzing splice-site sequence space. A generic Web interface permits the execution of the procedures with a variety of parameter settings and also supports custom structured query language queries. Moreover, new analytical procedures can be added by updating special metatables in the database without altering the Web interface. The database provides a powerful setting for students to develop informatic thinking skills. PMID:15592597

  1. A World Wide Web Human Dimensions Framework and Database for Wildlife and Forest Planning

    Treesearch

    Michael A. Tarrant; Alan D. Bright; H. Ken Cordell

    1999-01-01

The paper describes a human dimensions framework (HDF) for application in wildlife and forest planning. The HDF is delivered via the World Wide Web and retrieves data online from the Social, Economic, Environmental, Leisure, and Attitudes (SEELA) database. The proposed HDF is guided by ten fundamental HD principles and is applied to wildlife and forest planning using...

  2. dbHiMo: a web-based epigenomics platform for histone-modifying enzymes

    PubMed Central

    Choi, Jaeyoung; Kim, Ki-Tae; Huh, Aram; Kwon, Seomun; Hong, Changyoung; Asiegbu, Fred O.; Jeon, Junhyun; Lee, Yong-Hwan

    2015-01-01

    Over the past two decades, epigenetics has evolved into a key concept for understanding regulation of gene expression. Among many epigenetic mechanisms, covalent modifications such as acetylation and methylation of lysine residues on core histones emerged as a major mechanism in epigenetic regulation. Here, we present the database for histone-modifying enzymes (dbHiMo; http://hme.riceblast.snu.ac.kr/) aimed at facilitating functional and comparative analysis of histone-modifying enzymes (HMEs). HMEs were identified by applying a search pipeline built upon a profile hidden Markov model (HMM) to proteomes. The database incorporates 11 576 HMEs identified from 603 proteomes including 483 fungal, 32 plant and 51 metazoan species. dbHiMo provides users with web-based personalized data browsing and analysis tools, supporting comparative and evolutionary genomics. With comprehensive data entries and associated web-based tools, our database will be a valuable resource for future epigenetics/epigenomics studies. Database URL: http://hme.riceblast.snu.ac.kr/ PMID:26055100

  3. Cpf1-Database: web-based genome-wide guide RNA library design for gene knockout screens using CRISPR-Cpf1.

    PubMed

    Park, Jeongbin; Bae, Sangsu

    2018-03-15

    Following the type II CRISPR-Cas9 system, type V CRISPR-Cpf1 endonucleases have been found to be applicable for genome editing in various organisms in vivo. However, there are as yet no web-based tools capable of optimally selecting guide RNAs (gRNAs) among all possible genome-wide target sites. Here, we present Cpf1-Database, a genome-wide gRNA library design tool for LbCpf1 and AsCpf1, which have DNA recognition sequences of 5'-TTTN-3' at the 5' ends of target sites. Cpf1-Database provides a sophisticated but simple way to design gRNAs for AsCpf1 nucleases on the genome scale. One can easily access the data through a straightforward web interface and, using the powerful collections feature, design gRNAs for thousands of genes in a short time. Free access at http://www.rgenome.net/cpf1-database/. sangsubae@hanyang.ac.kr.
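The 5'-TTTN-3' recognition rule described above lends itself to a simple enumeration sketch: every TTTN followed by a protospacer is a candidate target. This is only an illustration of the PAM rule; the spacer length used here is an assumption, and the real tool also scans the reverse strand and applies its own scoring:

```python
# Minimal sketch of Cpf1 target enumeration on one strand: LbCpf1/AsCpf1
# recognize a 5'-TTTN-3' PAM at the 5' end of the target, so candidate
# sites are every TTTN followed by a protospacer of a chosen length.
def find_cpf1_sites(seq, spacer_len=23):
    seq = seq.upper()
    sites = []
    for i in range(len(seq) - 4 - spacer_len + 1):
        pam = seq[i:i + 4]
        if pam.startswith("TTT"):  # TTTN: first three bases are T
            sites.append((i, pam, seq[i + 4:i + 4 + spacer_len]))
    return sites

demo = "ACGTTTTAGGCTAGCTAGCTAGCTAGCTAGGATC"
for pos, pam, spacer in find_cpf1_sites(demo, spacer_len=20):
    print(pos, pam, spacer)
```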

  4. Image query and indexing for digital x rays

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1998-12-01

    The Web-based Medical Information Retrieval System (WebMIRS) allows Internet access to databases containing 17,000 digitized x-ray spine images and associated text data from the National Health and Nutrition Examination Surveys (NHANES). WebMIRS allows SQL query of the text and viewing of the returned text records and images using a standard browser. We are now working (1) to determine the utility of data directly derived from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database based on image data, we are manually segmenting a subset of the vertebrae using techniques from vertebral morphometry. From this, we will derive vertebral features and add them to the database. This image-derived data will enhance the user's data access capability by enabling combined SQL/image-content queries.
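A combined SQL/image-content query of the kind described can be sketched with survey text fields stored alongside image-derived measurements. The column names and the 0.8 height-ratio cutoff are illustrative assumptions, not WebMIRS's actual schema:

```python
import sqlite3

# Sketch: survey fields (age, sex) live in the same table as vertebral
# measurements derived from the x-ray images, so a single SQL query can
# mix demographic and image-content criteria.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE records
               (subject_id INTEGER, age INTEGER, sex TEXT,
                anterior_height_mm REAL, posterior_height_mm REAL)""")
con.executemany("INSERT INTO records VALUES (?, ?, ?, ?, ?)", [
    (1, 64, "F", 21.0, 27.5),   # anterior wedging
    (2, 58, "M", 26.0, 26.5),
    (3, 71, "F", 19.5, 26.0),   # anterior wedging
])

# The anterior/posterior height ratio is a standard vertebral-morphometry
# index of wedge deformity; 0.8 is an illustrative cutoff.
rows = con.execute("""SELECT subject_id FROM records
                      WHERE age >= 60
                        AND anterior_height_mm / posterior_height_mm < 0.8
                      ORDER BY subject_id""").fetchall()
print(rows)
```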

  5. PSI/TM-Coffee: a web server for fast and accurate multiple sequence alignments of regular and transmembrane proteins using homology extension on reduced databases.

    PubMed

    Floden, Evan W; Tommaso, Paolo D; Chatzou, Maria; Magis, Cedrik; Notredame, Cedric; Chang, Jia-Ming

    2016-07-08

    The PSI/TM-Coffee web server performs multiple sequence alignment (MSA) of proteins by combining homology extension with a consistency-based alignment approach. Homology extension is performed with Position Specific Iterative (PSI) BLAST searches against a choice of redundant and non-redundant databases. The main novelty of this server is to allow databases of reduced complexity to rapidly perform homology extension. This server also gives the possibility to use transmembrane protein (TMP) reference databases to allow even faster homology extension on this important category of proteins. Aside from an MSA, the server also outputs topological prediction of TMPs using the HMMTOP algorithm. Previous benchmarking of the method has shown this approach outperforms the most accurate alignment methods such as MSAProbs, Kalign, PROMALS, MAFFT, ProbCons and PRALINE™. The web server is available at http://tcoffee.crg.cat/tmcoffee. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. NAWeb 2000: Web-Based Learning - On Track! International Conference on Web-Based Learning. (6th, New Brunswick, Canada, October 14-17, 2000).

    ERIC Educational Resources Information Center

    Hall, Richard, Ed.

    This proceedings of the Sixth International Conference on Web-Based Learning, NAWeb 2000, includes the following papers: "Is a Paradigm Shift Required To Effectively Teach Web-Based Instruction?"; "Issues in Courseware Reuse for a Web-Based Information System"; "The Digital Curriculum Database: Meeting the Needs of Industry and the Challenge of…

  7. Prototype of web-based database of surface wave investigation results for site classification

    NASA Astrophysics Data System (ADS)

    Hayashi, K.; Cakir, R.; Martin, A. J.; Craig, M. S.; Lorenzo, J. M.

    2016-12-01

    As active and passive surface wave methods become popular for evaluating the site response of earthquake ground motion, demand for a database of investigation results is also increasing. Seismic ground motion depends not only on the 1D velocity structure but also on 2D and 3D structures, so spatial information on S-wave velocity must be considered in ground motion prediction. A database can support the construction of 2D and 3D underground models. Inversion in surface wave processing is essentially non-unique, so other information must be incorporated into the processing. A database of existing geophysical, geological and geotechnical investigation results can provide indispensable information to improve the accuracy and reliability of investigations. Most investigations, however, are carried out by individual organizations, and investigation results are rarely stored in a unified and organized database. To study and discuss an appropriate database and digital standard format for surface wave investigations, we developed a prototype of a web-based database to store observed data and processing results of surface wave investigations that we have performed at more than 400 sites in the U.S. and Japan. The database was constructed on a web server using MySQL and PHP so that users can access it through the internet from anywhere with any device. All data are registered in the database with location, and users can search geophysical data through Google Maps. The database stores dispersion curves, horizontal-to-vertical spectral ratios and S-wave velocity profiles at each site, saved in XML files as digital data so that users can review and reuse them. The database also stores a published 3D deep basin and crustal structure that users can refer to during the processing of surface wave data.
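The map-based site search described above reduces to a radius query over georeferenced records. A minimal sketch, with invented site names and coordinates:

```python
import math

# Sketch of a map-driven site search: given records with coordinates,
# return those within a radius of a clicked point on the map.
def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

sites = [("site_A", 47.61, -122.33), ("site_B", 47.25, -122.44),
         ("site_C", 35.68, 139.69)]

def sites_within(lat, lon, radius_km):
    return [name for name, slat, slon in sites
            if haversine_km(lat, lon, slat, slon) <= radius_km]

print(sites_within(47.60, -122.33, 50.0))
```

In a production database the same query would be pushed into SQL with a spatial index rather than scanned in application code.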

  8. Development of a web geoservices platform for School of Environmental Sciences, Mahatma Gandhi University, Kerala, India

    NASA Astrophysics Data System (ADS)

    Satheendran, S.; John, C. M.; Fasalul, F. K.; Aanisa, K. M.

    2014-11-01

    Web geoservices are the natural extension of a Geographic Information System into a distributed environment through a simple browser. They enable organizations to share domain-specific, rich and dynamic spatial information over the web. The present study designed and developed a web-enabled GIS application for the School of Environmental Sciences, Mahatma Gandhi University, Kottayam, Kerala, India to publish various geographical databases to the public through its website. The development of this project is based on open source tools and techniques, and the resulting portal site is platform independent. The premier web GIS framework 'GeoMoose' is utilized, with Apache as the web server and the UMN MapServer as the map server. The portal provides various customised tools to query the geographical database in different ways and to search for facilities in the geographical area such as banks, attractive places, hospitals and hotels. The portal site was tested with the output geographical databases of two projects of the School: 1) the Tourism Information System for the Malabar region of Kerala State, consisting of the five northern districts, and 2) the geoenvironmental appraisal of the Athirappilly Hydroelectric Project, covering the entire Chalakkudy river basin.

  9. An overview of Airborne Data for Assessing Models (ADAM): a web development effort to effectively disseminate airborne data products

    NASA Astrophysics Data System (ADS)

    Mangosing, D. C.; Chen, G.; Kusterer, J.; Rinsland, P.; Perez, J.; Sorlie, S.; Parker, L.

    2011-12-01

    One of the objectives of the NASA Langley Research Center's MEaSUREs project, "Creating a Unified Airborne Database for Model Assessment", is the development of airborne Earth System Data Records (ESDRs) for the regional and global model assessment and validation activities performed by the tropospheric chemistry and climate modeling communities. The ongoing development of ADAM, a web site designed to access a unified, standardized and relational ESDR database, meets this objective. The ESDR database is derived from publicly available data sets, ranging from NASA airborne field studies to airborne and in-situ studies sponsored by NOAA, NSF, and numerous international partners. The ADAM web development activities provide an opportunity to highlight a growing synergy between the Airborne Science Data for Atmospheric Composition (ASD-AC) group at NASA Langley and NASA Langley's Atmospheric Sciences Data Center (ASDC). These teams will collaborate on the ADAM web application by leveraging the state-of-the-art service- and message-oriented data distribution architecture developed and implemented by the ASDC and using a web-based tool provided by the ASD-AC group, whose user interface accommodates the nuanced perspective of science users in the atmospheric chemistry, composition and climate modeling communities.

  10. PharmMapper 2017 update: a web server for potential drug target identification with a comprehensive target pharmacophore database.

    PubMed

    Wang, Xia; Shen, Yihang; Wang, Shiwei; Li, Shiliang; Zhang, Weilin; Liu, Xiaofeng; Lai, Luhua; Pei, Jianfeng; Li, Honglin

    2017-07-03

    The PharmMapper online tool is a web server for potential drug target identification by reversed pharmacophore matching the query compound against an in-house pharmacophore model database. The original version of PharmMapper includes more than 7000 target pharmacophores derived from complex crystal structures with corresponding protein target annotations. In this article, we present a new version of the PharmMapper web server, of which the backend pharmacophore database is six times larger than the earlier one, with a total of 23 236 proteins covering 16 159 druggable pharmacophore models and 51 431 ligandable pharmacophore models. The expanded target data cover 450 indications and 4800 molecular functions compared to 110 indications and 349 molecular functions in our last update. In addition, the new web server is united with the statistically meaningful ranking of the identified drug targets, which is achieved through the use of standard scores. It also features an improved user interface. The proposed web server is freely available at http://lilab.ecust.edu.cn/pharmmapper/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
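The "statistically meaningful ranking ... through the use of standard scores" mentioned above can be illustrated with a toy z-score computation. Target names and fit scores are invented; PharmMapper's actual scoring details are not reproduced here:

```python
import statistics

# Toy example of standard-score ranking: raw pharmacophore fit scores
# are converted to z-scores against the distribution over all candidate
# targets, so targets are ranked by how unusually well they match
# rather than by raw score alone.
fit_scores = {"target_A": 4.1, "target_B": 3.2, "target_C": 5.6,
              "target_D": 3.0, "target_E": 3.5}

mu = statistics.mean(fit_scores.values())
sigma = statistics.pstdev(fit_scores.values())

z = {t: (s - mu) / sigma for t, s in fit_scores.items()}
ranking = sorted(z, key=z.get, reverse=True)
print(ranking[0], round(z[ranking[0]], 2))
```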

  11. Empowering radiologic education on the Internet: a new virtual website technology for hosting interactive educational content on the World Wide Web.

    PubMed

    Frank, M S; Dreyer, K

    2001-06-01

    We describe a virtual web site hosting technology that enables educators in radiology to emblazon and make available for delivery on the world wide web their own interactive educational content, free from dependencies on in-house resources and policies. This suite of technologies includes a graphically oriented software application, designed for the computer novice, to facilitate the input, storage, and management of domain expertise within a database system. The database stores this expertise as choreographed and interlinked multimedia entities including text, imagery, interactive questions, and audio. Case-based presentations or thematic lectures can be authored locally, previewed locally within a web browser, then uploaded at will as packaged knowledge objects to an educator's (or department's) personal web site housed within a virtual server architecture. This architecture can host an unlimited number of unique educational web sites for individuals or departments in need of such service. Each virtual site's content is stored within that site's protected back-end database connected to Internet Information Server (Microsoft Corp, Redmond WA) using a suite of Active Server Page (ASP) modules that incorporate Microsoft's Active Data Objects (ADO) technology. Each person's or department's electronic teaching material appears as an independent web site with different levels of access--controlled by a username-password strategy--for teachers and students. There is essentially no static hypertext markup language (HTML). Rather, all pages displayed for a given site are rendered dynamically from case-based or thematic content that is fetched from that virtual site's database. The dynamically rendered HTML is displayed within a web browser in a Socratic fashion that can assess the recipient's current fund of knowledge while providing instantaneous user-specific feedback. Each site is emblazoned with the logo and identification of the participating institution. 
Individuals with teacher-level access can use a web browser to upload new content as well as manage content already stored on their virtual site. Each virtual site stores, collates, and scores participants' responses to the interactive questions posed on line. This virtual web site strategy empowers the educator with an end-to-end solution for creating interactive educational content and hosting that content within the educator's personalized and protected educational site on the world wide web, thus providing a valuable outlet that can magnify the impact of his or her talents and contributions.

  12. Astronomical databases of Nikolaev Observatory

    NASA Astrophysics Data System (ADS)

    Protsyuk, Y.; Mazhaev, A.

    2008-07-01

    Several astronomical databases were created at Nikolaev Observatory during the last years. The databases are built by using MySQL search engine and PHP scripts. They are available on NAO web-site http://www.mao.nikolaev.ua.

  13. Resolving the problem of multiple accessions of the same transcript deposited across various public databases.

    PubMed

    Weirick, Tyler; John, David; Uchida, Shizuka

    2017-03-01

    Maintaining the consistency of genomic annotations is an increasingly complex task because of the iterative and dynamic nature of assembly and annotation, growing numbers of biological databases and insufficient integration of annotations across databases. As information exchange among databases is poor, a 'novel' sequence from one reference annotation could be annotated in another. Furthermore, relationships to nearby or overlapping annotated transcripts are even more complicated when using different genome assemblies. To better understand these problems, we surveyed current and previous versions of genomic assemblies and annotations across a number of public databases containing long noncoding RNA. We identified numerous discrepancies of transcripts regarding their genomic locations, transcript lengths and identifiers. Further investigation showed that the positional differences between reference annotations of essentially the same transcript could lead to differences in its measured expression at the RNA level. To aid in resolving these problems, we present the algorithm 'Universal Genomic Accession Hash (UGAHash)' and created an open source web tool to encourage the usage of the UGAHash algorithm. The UGAHash web tool (http://ugahash.uni-frankfurt.de) can be accessed freely without registration. The web tool allows researchers to generate Universal Genomic Accessions for genomic features or to explore annotations deposited in the public databases of the past and present versions. We anticipate that the UGAHash web tool will be a valuable tool to check for the existence of transcripts before judging the newly discovered transcripts as novel. © The Author 2016. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
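The idea of a deterministic, coordinate-derived accession can be sketched as follows. This is NOT the published UGAHash algorithm, only an illustration of its premise: the same genomic feature always hashes to the same identifier, regardless of which database it came from:

```python
import hashlib

# Illustrative coordinate-based accession: hash the assembly, chromosome,
# strand and sorted exon coordinates, so the identifier depends only on
# the feature's genomic position, not on any database-specific ID.
def genomic_accession(assembly, chrom, strand, exons):
    key = "{}|{}|{}|{}".format(
        assembly, chrom, strand,
        ";".join("{}-{}".format(s, e) for s, e in sorted(exons)))
    digest = hashlib.sha256(key.encode("ascii")).hexdigest()
    return "UGA-" + digest[:12]

acc1 = genomic_accession("GRCh38", "chr7", "+", [(1000, 1200), (1500, 1700)])
acc2 = genomic_accession("GRCh38", "chr7", "+", [(1500, 1700), (1000, 1200)])
print(acc1, acc1 == acc2)  # exon listing order does not change the accession
```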

  14. Ontology-oriented retrieval of putative microRNAs in Vitis vinifera via GrapeMiRNA: a web database of de novo predicted grape microRNAs.

    PubMed

    Lazzari, Barbara; Caprera, Andrea; Cestaro, Alessandro; Merelli, Ivan; Del Corvo, Marcello; Fontana, Paolo; Milanesi, Luciano; Velasco, Riccardo; Stella, Alessandra

    2009-06-29

    Two complete genome sequences are available for Vitis vinifera Pinot noir. Based on the sequence and gene predictions produced by the IASMA, we performed an in silico detection of putative microRNA genes and of their targets, and collected the most reliable microRNA predictions in a web database. The application is available at http://www.itb.cnr.it/ptp/grapemirna/. The program FindMiRNA was used to detect putative microRNA genes in the grape genome. A very high number of predictions was retrieved, calling for validation. Nine parameters were calculated and, based on the grape microRNA dataset available at miRBase, thresholds were defined and applied to FindMiRNA predictions having targets in gene exons. In the resulting subset, predictions were ranked according to precursor positions and sequence similarity, and to target identity. To further validate FindMiRNA predictions, comparisons to the Arabidopsis genome, to the grape Genoscope genome, and to the grape EST collection were performed. Results were stored in a MySQL database and a web interface was prepared to query the database and retrieve predictions of interest. The GrapeMiRNA database encompasses 5,778 microRNA predictions spanning the whole grape genome. Predictions are integrated with information that can be of use in selection procedures. Tools added to the web interface also allow inspection of predictions according to gene ontology classes and metabolic pathways of targets. The GrapeMiRNA database can be of help in selecting candidate microRNA genes to be validated.

  15. Veterans Administration Databases

    Cancer.gov

    The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.

  16. NPIDB: Nucleic acid-Protein Interaction DataBase.

    PubMed

    Kirsanov, Dmitry D; Zanegina, Olga N; Aksianov, Evgeniy A; Spirin, Sergei A; Karyagina, Anna S; Alexeevski, Andrei V

    2013-01-01

    The Nucleic acid-Protein Interaction DataBase (http://npidb.belozersky.msu.ru/) contains information derived from structures of DNA-protein and RNA-protein complexes extracted from the Protein Data Bank (3846 complexes in October 2012). It provides a web interface and a set of tools for extracting biologically meaningful characteristics of nucleoprotein complexes. The content of the database is updated weekly. The current version of the Nucleic acid-Protein Interaction DataBase is an upgrade of the version published in 2007. The improvements include a new web interface, new tools for calculation of intermolecular interactions, a classification of SCOP families that contains DNA-binding protein domains and data on conserved water molecules on the DNA-protein interface.

  17. Brain Tumor Database, a free relational database for collection and analysis of brain tumor patient information.

    PubMed

    Bergamino, Maurizio; Hamilton, David J; Castelletti, Lara; Barletta, Laura; Castellan, Lucio

    2015-03-01

    In this study, we describe the development and utilization of a relational database designed to manage the clinical and radiological data of patients with brain tumors. The Brain Tumor (BT) Database was implemented using MySQL v.5.0, while the graphical user interface was created using PHP and HTML, thus making it easily accessible through a web browser. This web-based approach allows multiple institutions to potentially access the database. The BT Database can record brain tumor patient information (e.g. clinical features, anatomical attributes, and radiological characteristics) and be used for clinical and research purposes. Analytic tools to automatically generate statistics and different plots are provided. The BT Database is a free, powerful and user-friendly tool with a wide range of possible clinical and research applications in neurology and neurosurgery. The BT Database graphical user interface source code and manual are freely available at http://tumorsdatabase.altervista.org. © The Author(s) 2013.

  18. Web-based Visualization and Query of semantically segmented multiresolution 3D Models in the Field of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.

    2014-05-01

    Many important Cultural Heritage sites have been studied over long periods of time with different technical equipment, methods and intentions by different researchers. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform that brings spatial and non-spatial databases together and provides visualization and analysis tools. The 3D components of the platform in particular use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema to organize not only segmented models but also different Levels-of-Detail and other representations of the same entity. It is implemented in a spatial database which allows the storage of georeferenced 3D data, enabling organization and queries by semantic, geometric and spatial properties. As the service for delivery of the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented which uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).

  19. The Ontological Perspectives of the Semantic Web and the Metadata Harvesting Protocol: Applications of Metadata for Improving Web Search.

    ERIC Educational Resources Information Center

    Fast, Karl V.; Campbell, D. Grant

    2001-01-01

    Compares the implied ontological frameworks of the Open Archives Initiative Protocol for Metadata Harvesting and the World Wide Web Consortium's Semantic Web. Discusses current search engine technology, semantic markup, indexing principles of special libraries and online databases, and componentization and the distinction between data and…

  20. A Framework for Transparently Accessing Deep Web Sources

    ERIC Educational Resources Information Center

    Dragut, Eduard Constantin

    2010-01-01

    An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…

  1. Web Proxy Auto Discovery for the WLCG

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. 
The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
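The heart of a WPAD response is a PAC file listing the proxies for the requesting site, in order of preference. A minimal sketch of rendering one, with invented proxy hostnames and site mapping (the real WLCG service selects and orders proxies from registered IP address ranges):

```python
# Sketch of the core of a WPAD server's job: answer a request for
# wpad.dat with a Proxy Auto Configuration (PAC) file, which is a
# JavaScript function returning an ordered proxy list with a DIRECT
# fallback. Hostnames below are invented for illustration.
def make_pac(proxies):
    """Render a PAC file body for an ordered list of proxy host:port strings."""
    chain = "; ".join("PROXY {}".format(p) for p in proxies) + "; DIRECT"
    return ('function FindProxyForURL(url, host) {\n'
            '  return "%s";\n'
            '}\n' % chain)

site_squids = {"site_X": ["squid1.site-x.example:3128",
                          "squid2.site-x.example:3128"]}

pac = make_pac(site_squids["site_X"])
print(pac)
```

Both the Frontier and CVMFS clients interpret such a file, trying each listed proxy in order before falling back to a direct connection.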

  2. Web Proxy Auto Discovery for the WLCG

    DOE PAGES

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; ...

    2017-11-23

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.

  3. Web Proxy Auto Discovery for the WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.

All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS and CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment's jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD), which is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input to the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by the people from ATLAS and CMS who operate WLCG squid monitoring. WPAD servers at CERN respond to HTTP requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched against a database of the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short-term long-distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home), which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
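The geographic ordering described in this record can be sketched in a few lines: sort candidate squids by great-circle distance from the client's geolocated IP, then emit a PAC-style response. This is an illustrative sketch only; the host names and coordinates are invented, and the production WLCG WPAD servers serve JavaScript PAC files conforming to the WPAD/PAC standards rather than this simplified string.

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine formula: great-circle distance on a sphere of radius ~6371 km.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def order_proxies(client_coord, proxies):
    """Sort candidate web proxies by distance from the client's geolocated IP."""
    lat, lon = client_coord
    return sorted(proxies, key=lambda p: great_circle_km(lat, lon, p["lat"], p["lon"]))

def to_pac(ordered):
    # Abbreviated PAC-style body; a real PAC file is a JavaScript
    # FindProxyForURL function returning a proxy chain like this one.
    chain = "; ".join("PROXY %s:3128" % p["host"] for p in ordered)
    return 'function FindProxyForURL(url, host) { return "%s; DIRECT"; }' % chain

# Invented example: two squids, client geolocated near Chicago.
proxies = [{"host": "squid.cern.example", "lat": 46.23, "lon": 6.05},
           {"host": "squid.fnal.example", "lat": 41.84, "lon": -88.26}]
pac = to_pac(order_proxies((41.88, -87.63), proxies))
```

Appending "DIRECT" as the final alternative mirrors common PAC practice, so a client falls back to a direct connection if every listed proxy is unreachable.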

  4. The Footprint Database and Web Services of the Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba

    2016-10-01

    Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. 
The web service allows downloading footprint data in various formats including Virtual Observatory standards.
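The exact-geometry representation mentioned above permits precise area calculations that pixellation schemes can only approximate. As a minimal illustration (not the Footprint Database's actual code), the solid angle of a spherical triangle with unit-vector vertices can be computed exactly with the Van Oosterom–Strackee formula; a full footprint outline would be decomposed into such triangles.

```python
from math import atan2, pi

def spherical_triangle_area(a, b, c):
    """Solid angle (steradians) of the spherical triangle whose vertices are
    the unit vectors a, b, c (Van Oosterom & Strackee, 1983)."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    num = dot(a, cross(b, c))                      # scalar triple product
    den = 1.0 + dot(a, b) + dot(b, c) + dot(c, a)
    return abs(2.0 * atan2(num, den))
```

As a sanity check, one octant of the sphere (one-eighth of the full 4π steradians) comes out exactly π/2.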

  5. dictyExpress: a Dictyostelium discoideum gene expression database with an explorative data analysis web-based interface.

    PubMed

    Rot, Gregor; Parikh, Anup; Curk, Tomaz; Kuspa, Adam; Shaulsky, Gad; Zupan, Blaz

    2009-08-25

Bioinformatics often leverages recent advancements in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability. We have developed dictyExpress, a web application that features a graphical, highly interactive explorative interface to our database, which consists of more than 1000 Dictyostelium discoideum gene expression experiments. In dictyExpress, the user can select experiments and genes, perform gene clustering, view gene expression profiles across time, view gene co-expression networks, perform analyses of Gene Ontology term enrichment, and simultaneously display expression profiles for a selected gene in various experiments. Most importantly, these tasks are achieved through web applications whose components are seamlessly interlinked and immediately respond to events triggered by the user, thus providing a powerful explorative data analysis environment. dictyExpress is a precursor of a new generation of web-based bioinformatics applications with simple but powerful interactive interfaces that resemble those of the modern desktop. While dictyExpress serves mainly the Dictyostelium research community, it is relatively easy to adapt it to other datasets. We propose that the design ideas behind dictyExpress will influence the development of similar applications for other model organisms.
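Gene Ontology term enrichment, one of the analyses listed above, is conventionally scored with an upper-tail hypergeometric probability. A stdlib-only sketch follows; the gene counts are invented, and dictyExpress's own implementation may differ.

```python
from math import comb

def enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric probability: the chance of seeing at least
    k genes annotated with a GO term in a selection of n genes, when K of
    the N genes in the genome carry that term."""
    # math.comb(a, b) returns 0 when b > a, so out-of-range terms vanish.
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)
```

A small p-value indicates the selected gene set carries the term more often than chance alone would predict, flagging the term as enriched.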

  6. dictyExpress: a Dictyostelium discoideum gene expression database with an explorative data analysis web-based interface

    PubMed Central

    Rot, Gregor; Parikh, Anup; Curk, Tomaz; Kuspa, Adam; Shaulsky, Gad; Zupan, Blaz

    2009-01-01

Background Bioinformatics often leverages recent advancements in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability. Results We have developed dictyExpress, a web application that features a graphical, highly interactive explorative interface to our database, which consists of more than 1000 Dictyostelium discoideum gene expression experiments. In dictyExpress, the user can select experiments and genes, perform gene clustering, view gene expression profiles across time, view gene co-expression networks, perform analyses of Gene Ontology term enrichment, and simultaneously display expression profiles for a selected gene in various experiments. Most importantly, these tasks are achieved through web applications whose components are seamlessly interlinked and immediately respond to events triggered by the user, thus providing a powerful explorative data analysis environment. Conclusion dictyExpress is a precursor of a new generation of web-based bioinformatics applications with simple but powerful interactive interfaces that resemble those of the modern desktop. While dictyExpress serves mainly the Dictyostelium research community, it is relatively easy to adapt it to other datasets. We propose that the design ideas behind dictyExpress will influence the development of similar applications for other model organisms. PMID:19706156

  7. [The Development and Application of the Orthopaedics Implants Failure Database Software Based on WEB].

    PubMed

    Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao

    2015-09-01

This article describes new Web-based failure database software for orthopaedic implants. The software follows the browser/server (B/S) model: ASP dynamic web technology is its main development language for data interactivity, and Microsoft Access is used for the database; these mature technologies make the software easy to extend and upgrade. The design and development approach of the software, its working process and functions, and the relevant technical features are presented. With this software, many different types of failure events of orthopaedic implants can be stored and statistically analyzed; at the macroscopic level, it can be used to evaluate the reliability of orthopaedic implants and operations, and ultimately to guide doctors in improving clinical treatment.

  8. Illinois hospital using Web to build database for relationship marketing.

    PubMed

    Rees, T

    2000-01-01

    Silver Cross Hospital and Medical Centers, Joliet, Ill., is promoting its Web site as a tool for gathering health information about patients and prospective patients in order to build a relationship marketing database. The database will enable the hospital to identify health care needs of consumers in Joliet, Will County and many southwestern suburbs of Chicago. The Web site is promoted in a multimedia advertising campaign that invites residents to participate in a Healthy Living Quiz that rewards respondents with free health screenings. The effort is part of a growing planning and marketing strategy in the health care industry called customer relationship management (CRM). Not only does a total CRM plan offer health care organizations the chance to discover the potential for meeting consumers' needs; it also helps find any marketplace gaps that may exist.

  9. Apollo2Go: a web service adapter for the Apollo genome viewer to enable distributed genome annotation.

    PubMed

    Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus F X

    2007-08-30

    Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from ftp://ftpmips.gsf.de/plants/apollo_webservice.

  10. Apollo2Go: a web service adapter for the Apollo genome viewer to enable distributed genome annotation

    PubMed Central

    Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus FX

    2007-01-01

Background Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. Results To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. Conclusion This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from ftp://ftpmips.gsf.de/plants/apollo_webservice. PMID:17760972

  11. The Human Transcript Database: A Catalogue of Full Length cDNA Inserts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Bouck, John; McLeod, Michael; Worley, Kim

    1999-09-10

The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that returned reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to web-based access and arrangements that made the functions easier to use, the BCM Search Launcher provided unique value-added applications like the BEAUTY sequence database search tool, which combined information about protein domains with sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command-line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display of the BLAST search on the NCBI web site is one such outgrowth, as are the display of protein domain search results within BLAST search results and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.

  12. Database Reports Over the Internet

    NASA Technical Reports Server (NTRS)

    Smith, Dean Lance

    2002-01-01

Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data is stored in a DBMS (Database Management System). The client asks for the information from the database using an HTML (Hypertext Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted since the data are being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (personal computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
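The pipeline described here, an SQL query whose results are merged into a report template, can be sketched with Python's stdlib sqlite3 standing in for Access/DB2 and a plain-text template standing in for the PDF form; the table and field names are invented.

```python
import sqlite3

def build_report(db, test_id):
    """Query one test record and merge it into a plain-text report template
    (standing in for the PDF form template described above)."""
    row = db.execute(
        "SELECT test_id, operator, result FROM test_reports WHERE test_id = ?",
        (test_id,),                      # parameterized query, as with JDBC
    ).fetchone()
    if row is None:
        return "ERROR: no such test"     # would be reported back to the browser
    return "Test %s\nOperator: %s\nResult: %s" % row

# In-memory stand-in for the report database, with one typical record.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE test_reports (test_id TEXT, operator TEXT, result TEXT)")
db.execute("INSERT INTO test_reports VALUES ('T-001', 'Smith', 'PASS')")
```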

  13. Virtualization of open-source secure web services to support data exchange in a pediatric critical care research network.

    PubMed

    Frey, Lewis J; Sward, Katherine A; Newth, Christopher J L; Khemani, Robinder G; Cryer, Martin E; Thelen, Julie L; Enriquez, Rene; Shaoyu, Su; Pollack, Murray M; Harrison, Rick E; Meert, Kathleen L; Berg, Robert A; Wessel, David L; Shanley, Thomas P; Dalton, Heidi; Carcillo, Joseph; Jenkins, Tammara L; Dean, J Michael

    2015-11-01

We examined the feasibility of deploying a virtual web service for sharing data within a research network and evaluated its impact on data consistency and quality. Virtual machines (VMs) encapsulated an open-source, semantically and syntactically interoperable secure web service infrastructure along with a shadow database. The VMs were deployed to 8 Collaborative Pediatric Critical Care Research Network Clinical Centers. Virtual web services could be deployed in hours. The interoperability of the web services reduced format misalignment from 56% to 1%; 99% of the data transferred consistently using the data dictionary and 1% needed human curation. Use of virtualized open-source secure web service technology could enable direct electronic abstraction of data from hospital databases for research purposes. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
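The reported drop in format misalignment rests on every site translating its local field names and units through a shared data dictionary. Below is a toy sketch, with invented field names, of how such a dictionary separates cleanly mapped values from those needing human curation.

```python
# Hypothetical data dictionary mapping site-local field names and units to
# the network-wide standard used by the shared web services.
DATA_DICTIONARY = {
    "hr":        ("heart_rate_bpm", 1.0),
    "pulse":     ("heart_rate_bpm", 1.0),
    "weight_lb": ("weight_kg", 0.453592),
    "weight_kg": ("weight_kg", 1.0),
}

def normalize(record):
    """Translate one site's record into the standard dictionary terms;
    unknown fields are returned separately for human curation."""
    clean, needs_curation = {}, {}
    for field, value in record.items():
        if field in DATA_DICTIONARY:
            std_name, factor = DATA_DICTIONARY[field]
            clean[std_name] = value * factor     # convert units on the way in
        else:
            needs_curation[field] = value
    return clean, needs_curation
```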

  14. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    PubMed

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data covered by this database integration framework is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
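The appeal of a JSON interface to linked data is that scripting languages can traverse semantic relationships with their native data structures instead of a SPARQL endpoint. The fragment below is an invented illustration of the idea, not the actual SciNetS.org Semantic-JSON schema.

```python
import json

# Invented example of a linked-data fragment serialized as JSON; the real
# Semantic-JSON payload format may differ.
payload = json.loads("""
{
  "id": "gene:abc1",
  "label": "ABC transporter 1",
  "links": [
    {"predicate": "expressed_in", "target": "tissue:root"},
    {"predicate": "orthologous_to", "target": "gene:xyz9"}
  ]
}
""")

def targets(node, predicate):
    """Follow every link with the given predicate from one JSON node."""
    return [link["target"] for link in node.get("links", [])
            if link["predicate"] == predicate]
```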

  15. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    PubMed

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. 
This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios.
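At its core, a spatial density map bins geocoded case locations into grid cells and renders the per-cell counts. The stdlib sketch below uses invented coordinates and a naive equirectangular grid; the published services use proper geoprocessing backends rather than this simplification.

```python
from collections import Counter

def density_grid(points, cell=0.01):
    """Bin geocoded (lat, lon) case locations into square grid cells of
    `cell` degrees; the count per cell is the raw spatial density that a
    map client would render as a heat map."""
    return Counter((int(lat // cell), int(lon // cell)) for lat, lon in points)

# Invented case locations: two neighbouring addresses plus one distant case.
cases = [(41.3851, 2.1734), (41.3853, 2.1738), (41.4100, 2.2000)]
grid = density_grid(cases)
```

The cell with count 2 would stand out as the locally most affected area on the rendered map.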

  16. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    PubMed Central

    2011-01-01

Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods Data used in this study are based on tuberculosis (TB) cases registered in Barcelona city during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results are focused on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision-making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. 
This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios. PMID:22126392

  17. CAM on PubMed

    MedlinePlus

    ... citations from the MEDLINE database and additional life science journals. It also includes links to many full-text articles at journal Web sites and other related Web resources. Sample Searches ...

  18. A web-based platform for virtual screening.

    PubMed

    Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J

    2003-09-01

A fully integrated, web-based virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes ATLAS, a large database of historical compounds from high throughput screening (HTS) chemical suppliers, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.
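Subset selection on stored physicochemical properties, as described for the ATLAS database, amounts to a filter over property columns. A sketch with invented compound records and illustrative, Lipinski-like cut-offs follows; in the real system this would be an ORACLE query rather than Python.

```python
def select_subset(compounds, max_mw=500.0, max_clogp=5.0):
    """Filter a compound table on stored physicochemical properties,
    mimicking the kind of subset query run before a virtual screen.
    The cut-off values here are illustrative (Lipinski-like)."""
    return [c for c in compounds if c["mw"] <= max_mw and c["clogp"] <= max_clogp]

# Invented miniature stand-in for the ATLAS compound table.
atlas = [
    {"id": "C1", "mw": 320.4, "clogp": 2.1},
    {"id": "C2", "mw": 612.7, "clogp": 4.8},   # excluded: too heavy
    {"id": "C3", "mw": 450.0, "clogp": 6.3},   # excluded: too lipophilic
]
```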

  19. Web-based UMLS concept retrieval by automatic text scanning: a comparison of two methods.

    PubMed

    Brandt, C; Nadkarni, P

    2001-01-01

The Web is increasingly the medium of choice for multi-user application program delivery, yet selection of an appropriate programming environment for rapid prototyping, code portability, and maintainability remains an issue. We summarize our experience with the conversion of a LISP Web application, Search/SR, to a new, functionally identical application, Search/SR-ASP, using a relational database and Active Server Pages (ASP) technology. Our results indicate that provision of easy access to database engines and external objects is almost essential for a development environment to be considered viable for rapid and robust application delivery. While LISP itself is a robust language, its use in Web applications may be hard to justify given that current vendor implementations do not provide such functionality. Alternative, currently available scripting environments for Web development appear to have most of LISP's advantages and few of its disadvantages.

  20. JetWeb: A WWW interface and database for Monte Carlo tuning and validation

    NASA Astrophysics Data System (ADS)

    Butterworth, J. M.; Butterworth, S.

    2003-06-01

    A World Wide Web interface to a Monte Carlo tuning facility is described. The aim of the package is to allow rapid and reproducible comparisons to be made between detailed measurements at high-energy physics colliders and general physics simulation packages. The package includes a relational database, a Java servlet query and display facility, and clean interfaces to simulation packages and their parameters.
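The comparisons a tuning facility like JetWeb stores are conventionally summarized by a chi-squared between each measured distribution and the simulation's prediction. A minimal sketch with invented histogram contents:

```python
def chi2(data, errors, mc):
    """Chi-squared between a measured histogram (with per-bin uncertainties)
    and a Monte Carlo prediction: the usual tuning figure of merit."""
    return sum((d - m) ** 2 / e ** 2 for d, m, e in zip(data, mc, errors))

# Invented three-bin measurement and two candidate generator tunes.
data   = [10.0, 20.0, 15.0]
errors = [1.0, 2.0, 1.5]
mc_a   = [10.0, 20.0, 15.0]   # perfect agreement
mc_b   = [12.0, 18.0, 15.0]   # off by 2 sigma in bin 1, 1 sigma in bin 2
```

A tuning run would scan generator parameters and keep the point minimizing this sum over all stored measurements.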

  1. Image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marsh, Amber; Harsch, Tim; Pitt, Julie

    2007-08-31

The computer side of the IMAGE project consists of a collection of Perl scripts that perform a variety of tasks; scripts are available to insert, update and delete data from the underlying Oracle database, download data from NCBI's GenBank and other sources, and generate data files for download by interested parties. Web scripts make up the tracking interface, and various tools available on the project web site (image.llnl.gov) provide a search interface to the database.

  2. Shifting Sands: Science Researchers on Google Scholar, Web of Science, and PubMed, with Implications for Library Collections Budgets

    ERIC Educational Resources Information Center

    Hightower, Christy; Caldwell, Christy

    2010-01-01

    Science researchers at the University of California Santa Cruz were surveyed about their article database use and preferences in order to inform collection budget choices. Web of Science was the single most used database, selected by 41.6%. Statistically there was no difference between PubMed (21.5%) and Google Scholar (18.7%) as the second most…

  3. Modernizing the MagIC Paleomagnetic and Rock Magnetic Database Technology Stack to Encourage Code Reuse and Reproducible Science

    NASA Astrophysics Data System (ADS)

    Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Jonestrask, L.; Tauxe, L.; Constable, C.

    2016-12-01

    The Magnetics Information Consortium (https://earthref.org/MagIC/) develops and maintains a database and web application for supporting the paleo-, geo-, and rock magnetic scientific community. Historically, this objective has been met with an Oracle database and a Perl web application at the San Diego Supercomputer Center (SDSC). The Oracle Enterprise Cluster at SDSC, however, was decommissioned in July of 2016 and the cost for MagIC to continue using Oracle became prohibitive. This provided MagIC with a unique opportunity to reexamine the entire technology stack and data model. MagIC has developed an open-source web application using the Meteor (http://meteor.com) framework and a MongoDB database. The simplicity of the open-source full-stack framework that Meteor provides has improved MagIC's development pace and the increased flexibility of the data schema in MongoDB encouraged the reorganization of the MagIC Data Model. As a result of incorporating actively developed open-source projects into the technology stack, MagIC has benefited from their vibrant software development communities. This has translated into a more modern web application that has significantly improved the user experience for the paleo-, geo-, and rock magnetic scientific community.

  4. Development of a web-based video management and application processing system

    NASA Astrophysics Data System (ADS)

    Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting

    2001-07-01

How to facilitate efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia databases and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) Concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) A versatile video retrieval mechanism that employs a hybrid approach, integrating a query-based (database) mechanism with content-based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects and also offers an improved mechanism to describe the visual content of videos by content-based analysis. (3) A query profiling database that records the 'histories' of various clients' query activities; such profiles can be used to provide the default query template when a similar query is encountered by the same kind of user. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.
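The priority levels assigned to the four user types can be modelled as a simple permission check. The role names come from the abstract, while the priority values and the operation table below are invented for illustration.

```python
# Role priorities follow the user types named above; the numeric values and
# the per-operation requirements are an invented illustration of tiered
# edit rights, not VideoMAP's actual scheme.
PRIORITY = {"Video Administrator": 3, "Video Producer": 2,
            "Video Editor": 1, "Video Query Client": 0}
REQUIRED = {"delete_video": 3, "edit_annotation": 1, "run_query": 0}

def may_perform(role, operation):
    """Grant an operation only to roles at or above its required priority."""
    return PRIORITY[role] >= REQUIRED[operation]
```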

  5. Web client and ODBC access to legacy database information: a low cost approach.

    PubMed Central

    Sanders, N. W.; Mann, N. H.; Spengler, D. M.

    1997-01-01

A new method has been developed for the Department of Orthopaedics of Vanderbilt University Medical Center to access departmental clinical data. Previously this data was stored only in the medical center's mainframe DB2 database; it is now additionally stored in a departmental SQL database. Access to this data is available via any ODBC-compliant front end or a web client. With a small budget and no full-time staff, we were able to give our department on-line access to many years' worth of patient data that was previously inaccessible. PMID:9357735

  6. NeisseriaBase: a specialised Neisseria genomic resource and analysis platform.

    PubMed

    Zheng, Wenning; Mutha, Naresh V R; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah; Choo, Siew Woh

    2016-01-01

Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In the present study we present the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI), annotated using the RAST server, and then stored in the MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity and molecular weight (Da) using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house developed bioinformatics tools implemented in NeisseriaBase were developed using the Python, Perl, BioPerl and R languages. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes. 
NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my.
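The per-gene statistics above were computed with unpublished in-house Perl scripts; as a minimal sketch (in Python rather than Perl), the GC-content calculation reduces to counting G and C bases:

```python
def gc_content(seq: str) -> float:
    """Percentage of G and C bases in a nucleotide sequence."""
    seq = seq.upper()
    if not seq:
        return 0.0
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

# Example: a short (made-up) coding sequence
print(round(gc_content("ATGCGCGCTA"), 1))  # 60.0
```

The same per-CDS loop would also accumulate predicted hydrophobicity and molecular weight from the translated protein.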

  7. NeisseriaBase: a specialised Neisseria genomic resource and analysis platform

    PubMed Central

    Zheng, Wenning; Mutha, Naresh V.R.; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S.; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah

    2016-01-01

Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation, mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In the present study we present the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI) and annotated using the RAST server; the results were then stored in a MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity and molecular weight (Da) using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house bioinformatics tools implemented in NeisseriaBase were developed using the Python, Perl, BioPerl and R languages. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes.
NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my. PMID:27017950

  8. G6PDdb, an integrated database of glucose-6-phosphate dehydrogenase (G6PD) mutations.

    PubMed

    Kwok, Colin J; Martin, Andrew C R; Au, Shannon W N; Lam, Veronica M S

    2002-03-01

    G6PDdb (http://www.rubic.rdg.ac.uk/g6pd/ or http://www.bioinf.org.uk/g6pd/) is a newly created web-accessible locus-specific mutation database for the human Glucose-6-phosphate dehydrogenase (G6PD) gene. The relational database integrates up-to-date mutational and structural data from various databanks (GenBank, Protein Data Bank, etc.) with biochemically characterized variants and their associated phenotypes obtained from published literature and the Favism website. An automated analysis of the mutations likely to have a significant impact on the structure of the protein has been performed using a recently developed procedure. The database may be queried online and the full results of the analysis of the structural impact of mutations are available. The web page provides a form for submitting additional mutation data and is linked to resources such as the Favism website, OMIM, HGMD, HGVBASE, and the PDB. This database provides insights into the molecular aspects and clinical significance of G6PD deficiency for researchers and clinicians and the web page functions as a knowledge base relevant to the understanding of G6PD deficiency and its management. Copyright 2002 Wiley-Liss, Inc.

  9. The ChEMBL database as linked open data

    PubMed Central

    2013-01-01

    Background Making data available as Linked Data using Resource Description Framework (RDF) promotes integration with other web resources. RDF documents can natively link to related data, and others can link back using Uniform Resource Identifiers (URIs). RDF makes the data machine-readable and uses extensible vocabularies for additional information, making it easier to scale up inference and data analysis. Results This paper describes recent developments in an ongoing project converting data from the ChEMBL database into RDF triples. Relative to earlier versions, this updated version of ChEMBL-RDF uses recently introduced ontologies, including CHEMINF and CiTO; exposes more information from the database; and is now available as dereferencable, linked data. To demonstrate these new features, we present novel use cases showing further integration with other web resources, including Bio2RDF, Chem2Bio2RDF, and ChemSpider, and showing the use of standard ontologies for querying. Conclusions We have illustrated the advantages of using open standards and ontologies to link the ChEMBL database to other databases. Using those links and the knowledge encoded in standards and ontologies, the ChEMBL-RDF resource creates a foundation for integrated semantic web cheminformatics applications, such as the presented decision support. PMID:23657106
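The linked-data idea described here can be caricatured without a triple store: RDF data is a set of (subject, predicate, object) triples, and queries are pattern matches over them. The identifiers below are illustrative, not actual ChEMBL-RDF URIs:

```python
# Toy RDF-style triples (subject, predicate, object). The URIs/prefixes are
# invented for illustration, not real ChEMBL-RDF identifiers.
triples = [
    ("chembl:CHEMBL25", "rdf:type", "cheminf:SmallMolecule"),
    ("chembl:CHEMBL25", "owl:sameAs", "chemspider:2157"),
    ("chembl:CHEMBL25", "cito:citesAsDataSource", "pubmed:23657106"),
]

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard,
    mimicking a variable in a SPARQL basic graph pattern."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Which external resources is CHEMBL25 linked to via owl:sameAs?
print(match(s="chembl:CHEMBL25", p="owl:sameAs"))
```

In the real resource such patterns are expressed in SPARQL and the `owl:sameAs`-style links are what connect ChEMBL-RDF to Bio2RDF, Chem2Bio2RDF and ChemSpider.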

  10. Korean Ministry of Environment's web-based visual consumer product exposure and risk assessment system (COPER).

    PubMed

    Lee, Hunjoo; Lee, Kiyoung; Park, Ji Young; Min, Sung-Gi

    2017-05-01

With support from the Korean Ministry of the Environment (ME), our interdisciplinary research staff developed the COnsumer Product Exposure and Risk assessment system (COPER). This system includes various databases and features that enable the calculation of exposure and the determination of risk arising from consumer product use. COPER is divided into three tiers: the integrated database layer (IDL), the domain-specific service layer (DSSL), and the exposure and risk assessment layer (ERAL). IDL is organized by the form of the raw data (mostly non-aggregated data) and includes four sub-databases: a toxicity profile, an inventory of Korean consumer products, the weight fractions of chemical substances in the consumer products determined by chemical analysis, and national representative exposure factors. DSSL provides web-based information services corresponding to each database within IDL. Finally, ERAL enables risk assessors to perform various exposure and risk assessments, including exposure scenario design via either inhalation or dermal contact, by using or organizing each database in an intuitive manner. This paper outlines the overall architecture of the system and highlights some of COPER's unique features, which are based on a visual, dynamic rendering engine for web-based exposure assessment models.
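ERAL's exposure algorithms are not detailed in the abstract; as a hedged illustration only, a standard screening-level average-daily-dose formula for inhalation might look as follows (this is the generic textbook formula, not COPER's actual method, and all example values are hypothetical):

```python
def inhalation_add(conc, inh_rate, exp_time, exp_freq, exp_dur, body_wt, avg_time):
    """Average daily dose (mg/kg-day) for inhalation:
    ADD = (C * IR * ET * EF * ED) / (BW * AT)
    conc     : air concentration (mg/m3)
    inh_rate : inhalation rate (m3/h)
    exp_time : exposure time (h/day)
    exp_freq : exposure frequency (day/yr)
    exp_dur  : exposure duration (yr)
    body_wt  : body weight (kg)
    avg_time : averaging time (day)
    """
    return conc * inh_rate * exp_time * exp_freq * exp_dur / (body_wt * avg_time)

# Hypothetical scenario: 0.01 mg/m3, 0.83 m3/h, 2 h/day, 350 day/yr,
# 10 yr, 60 kg body weight, averaged over 3650 days.
print(inhalation_add(0.01, 0.83, 2, 350, 10, 60, 3650))
```

A real system layers exposure-factor distributions and product weight-fraction data from the IDL databases onto formulas of this kind.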

  11. A comprehensive view of the web-resources related to sericulture

    PubMed Central

    Singh, Deepika; Chetia, Hasnahana; Kabiraj, Debajyoti; Sharma, Swagata; Kumar, Anil; Sharma, Pragya; Deka, Manab; Bora, Utpal

    2016-01-01

Recent progress in the field of sequencing and analysis has led to a tremendous spike in data and the development of data science tools. One of the outcomes of this scientific progress is the development of numerous databases, which are gaining popularity in all disciplines of biology, including sericulture. As economically important organisms, silkworms are studied extensively for their numerous applications in the fields of textiles, biomaterials, biomimetics, etc. Similarly, host plants, pests, pathogens, etc. are also being probed to understand the seri-resources more efficiently. These studies have led to the generation of numerous seri-related databases which are extremely helpful for the scientific community. In this article, we have reviewed all the available online resources on silkworm and its related organisms, including databases as well as informative websites. We have studied their basic features and impact on research through citation count analysis, finally discussing the role of emerging sequencing and analysis technologies in the field of seri-data science. As an outcome of this review, a web portal named SeriPort has been created, which will act as an index for the various sericulture-related databases and web resources available in cyberspace. Database URL: http://www.seriport.in/ PMID:27307138

  12. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii.

    PubMed

    May, Patrick; Christian, Jan-Ole; Kempa, Stefan; Walther, Dirk

    2009-05-04

    The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc) was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.

  13. Introducing glycomics data into the Semantic Web

    PubMed Central

    2013-01-01

Background Glycoscience is a research field focusing on complex carbohydrates (otherwise known as glycans), which can, for example, serve as “switches” that toggle between different functions of a glycoprotein or glycolipid. Due to the advancement of glycomics technologies that are used to characterize glycan structures, many glycomics databases are now publicly available and provide useful information for glycoscience research. However, these databases have almost no link to other life science databases. Results In order to implement support for the Semantic Web most efficiently for glycomics research, the developers of major glycomics databases agreed on a minimal standard for representing glycan structure and annotation information using RDF (Resource Description Framework). Moreover, all of the participants implemented this standard prototype and generated preliminary RDF versions of their data. To test the utility of the converted data, all of the data sets were uploaded into a Virtuoso triple store, and several SPARQL queries were tested as “proofs-of-concept” to illustrate the utility of the Semantic Web in querying across databases which were originally difficult to implement. Conclusions We were able to successfully retrieve information by linking UniCarbKB, GlycomeDB and JCGGDB in a single SPARQL query to obtain our target information. We also tested queries linking UniProt with GlycoEpitope as well as lectin data with GlycomeDB through PDB. As a result, we have been able to link proteomics data with glycomics data through the implementation of Semantic Web technologies, allowing for more flexible queries across these domains. PMID:24280648

  14. Introducing glycomics data into the Semantic Web.

    PubMed

    Aoki-Kinoshita, Kiyoko F; Bolleman, Jerven; Campbell, Matthew P; Kawano, Shin; Kim, Jin-Dong; Lütteke, Thomas; Matsubara, Masaaki; Okuda, Shujiro; Ranzinger, Rene; Sawaki, Hiromichi; Shikanai, Toshihide; Shinmachi, Daisuke; Suzuki, Yoshinori; Toukach, Philip; Yamada, Issaku; Packer, Nicolle H; Narimatsu, Hisashi

    2013-11-26

Glycoscience is a research field focusing on complex carbohydrates (otherwise known as glycans), which can, for example, serve as "switches" that toggle between different functions of a glycoprotein or glycolipid. Due to the advancement of glycomics technologies that are used to characterize glycan structures, many glycomics databases are now publicly available and provide useful information for glycoscience research. However, these databases have almost no link to other life science databases. In order to implement support for the Semantic Web most efficiently for glycomics research, the developers of major glycomics databases agreed on a minimal standard for representing glycan structure and annotation information using RDF (Resource Description Framework). Moreover, all of the participants implemented this standard prototype and generated preliminary RDF versions of their data. To test the utility of the converted data, all of the data sets were uploaded into a Virtuoso triple store, and several SPARQL queries were tested as "proofs-of-concept" to illustrate the utility of the Semantic Web in querying across databases which were originally difficult to implement. We were able to successfully retrieve information by linking UniCarbKB, GlycomeDB and JCGGDB in a single SPARQL query to obtain our target information. We also tested queries linking UniProt with GlycoEpitope as well as lectin data with GlycomeDB through PDB. As a result, we have been able to link proteomics data with glycomics data through the implementation of Semantic Web technologies, allowing for more flexible queries across these domains.

  15. The AstroBID: Searching through the Italian Astronomical Heritage

    NASA Astrophysics Data System (ADS)

    Cirella, E. O.; Gargano, M.; Gasperini, A.; Mandrino, A.; Randazzo, D.; Zanini, V.

    2015-04-01

The scientific heritage held in the National Institute for Astrophysics (INAF), made up of rare and modern books, instruments, and archival documents spanning from the 15th to the early 20th century, marks the milestones in the history of astronomy in Italy. To promote this historical collection, the Libraries and Historical Archives Service and the Museums Service of INAF have developed a project aimed at creating a single web portal: Polvere di stelle. I beni culturali dell'astronomia italiana (Stardust. The cultural heritage of Italian astronomy). This portal searches data from the libraries, the instrument collections and the historical archives regarding the heritage of the Italian Observatories. The BID (Books, Instruments, Documents) aim of the project is the creation of a multimedia web facility that allows the public to make simultaneous searches across the three different types of materials.

  16. Nuclear Science References (NSR)

    Science.gov Websites

The NSR database schema and Web applications have undergone some recent changes; this is a revised version of the NSR Web Interface. Manager: Boris Pritychenko, NNDC, Brookhaven National Laboratory. Web Programming: Boris Pritychenko, NNDC.

  17. Getting To Know the "Invisible Web."

    ERIC Educational Resources Information Center

    Smith, C. Brian

    2001-01-01

    Discusses the portions of the World Wide Web that cannot be accessed via directories or search engines, explains why they can't be accessed, and offers suggestions for reference librarians to find these sites. Lists helpful resources and gives examples of invisible Web sites which are often databases. (LRW)

  18. Integration of the NRL Digital Library.

    ERIC Educational Resources Information Center

    King, James

    2001-01-01

    The Naval Research Laboratory (NRL) Library has identified six primary areas that need improvement: infrastructure, InfoWeb, TORPEDO Ultra, journal data management, classified data, and linking software. It is rebuilding InfoWeb and TORPEDO Ultra as database-driven Web applications, upgrading the STILAS library catalog, and creating other support…

  19. The Moroccan Genetic Disease Database (MGDD): a database for DNA variations related to inherited disorders and disease susceptibility.

    PubMed

    Charoute, Hicham; Nahili, Halima; Abidi, Omar; Gabi, Khalid; Rouba, Hassan; Fakiri, Malika; Barakat, Abdelhamid

    2014-03-01

National and ethnic mutation databases provide comprehensive information about genetic variations reported in a population or an ethnic group. In this paper, we present the Moroccan Genetic Disease Database (MGDD), a catalogue of genetic data related to diseases identified in the Moroccan population. We used the PubMed, Web of Science and Google Scholar databases to identify available articles published until April 2013. The database is designed and implemented on a three-tier model using the MySQL relational database and the PHP programming language. To date, the database contains 425 mutations and 208 polymorphisms found in 301 genes and 259 diseases. Most Mendelian diseases in the Moroccan population follow an autosomal recessive mode of inheritance (74.17%) and affect endocrine, nutritional and metabolic physiology. The MGDD database provides reference information for researchers, clinicians and health professionals through a user-friendly Web interface. Its content should be useful for improving research in human molecular genetics, disease diagnosis and the design of association studies. MGDD can be publicly accessed at http://mgdd.pasteur.ma.

  20. Mission and Assets Database

    NASA Technical Reports Server (NTRS)

    Baldwin, John; Zendejas, Silvino; Gutheinz, Sandy; Borden, Chester; Wang, Yeou-Fang

    2009-01-01

    Mission and Assets Database (MADB) Version 1.0 is an SQL database system with a Web user interface to centralize information. The database stores flight project support resource requirements, view periods, antenna information, schedule, and forecast results for use in mid-range and long-term planning of Deep Space Network (DSN) assets.

  1. A web-based approach for electrocardiogram monitoring in the home.

    PubMed

    Magrabi, F; Lovell, N H; Celler, B G

    1999-05-01

A Web-based electrocardiogram (ECG) monitoring service, in which a longitudinal clinical record is used for the management of patients, is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components, including the Web server, Web page, the specialised client-side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve and submit clinical data to and from the server. An intelligent software agent on the server is activated whenever new ECG data is sent from the home. The agent compares historical data with newly acquired data. Using this method, an optimum patient care strategy can be evaluated, and a summarised report along with reminders and suggestions for action is sent to the doctor and patient by email.
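The abstract does not specify how the agent compares historical with new data; a minimal sketch, under the assumption that it flags large deviations of a summary statistic (here mean heart rate, an invented example) from the patient's baseline:

```python
def flag_ecg(history_hr, new_hr, tolerance=0.15):
    """Flag a new mean heart rate that deviates from the historical
    baseline mean by more than the given relative tolerance."""
    baseline = sum(history_hr) / len(history_hr)
    deviation = abs(new_hr - baseline) / baseline
    return deviation > tolerance

# Historical means ~70 bpm; a new reading of 95 bpm deviates by ~36%.
print(flag_ecg([70, 72, 68, 70], 95))  # True
```

A flagged result would then trigger the summarised report and email reminders described above.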

  2. Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals

    PubMed Central

    Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A.; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A.; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R.R.; Kasukawa, Takeya; Kawaji, Hideya

    2017-01-01

Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and the functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we think the scientific community will benefit from this recent update to deepen their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates in the web resource and provide future perspectives. PMID:27794045

  3. Automating Information Discovery Within the Invisible Web

    NASA Astrophysics Data System (ADS)

    Sweeney, Edwina; Curran, Kevin; Xie, Ermai

    A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
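The crawl-and-index loop described above can be sketched with Python's standard library. Network fetching is omitted so the indexer runs on a fixed HTML string; the page content and the index shape (a link list plus a word set) are illustrative, not any engine's actual design:

```python
from html.parser import HTMLParser

class Indexer(HTMLParser):
    """Collect hyperlinks and visible words from an HTML page,
    mimicking the 'identify links and keywords' step of an indexer."""
    def __init__(self):
        super().__init__()
        self.links = []       # hrefs to hand back to the crawler's queue
        self.words = set()    # keywords to store in the index database
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]
    def handle_data(self, data):
        self.words.update(w.lower() for w in data.split() if w.isalpha())

# A made-up page standing in for a fetched document.
page = '<html><body><h1>Deep Web</h1><a href="http://example.org/a">next</a></body></html>'
idx = Indexer()
idx.feed(page)
print(idx.links)           # ['http://example.org/a']
print(sorted(idx.words))   # ['deep', 'next', 'web']
```

Deep Web content is precisely what never reaches this loop: pages behind query forms or in non-crawlable formats yield no `href`s for the queue.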

  4. THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauschlicher, C. W.; Ricca, A.; Boersma, C.

The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.

  5. Correspondence: World Wide Web access to the British Universities Human Embryo Database

    PubMed Central

    AITON, JAMES F.; MCDONOUGH, ARIANA; MCLACHLAN, JOHN C.; SMART, STEVEN D.; WHITEN, SUSAN C.

    1997-01-01

    The British Universities Human Embryo Database has been created by merging information from the Walmsley Collection of Human Embryos at the School of Biological and Medical Sciences, University of St Andrews and from the Boyd Collection of Human Embryos at the Department of Anatomy, University of Cambridge. The database has been made available electronically on the Internet and World Wide Web browsers can be used to implement interactive access to the information stored in the British Universities Human Embryo Database. The database can, therefore, be accessed and searched from remote sites and specific embryos can be identified in terms of their location, age, developmental stage, plane of section, staining technique, and other parameters. It is intended to add information from other similar collections in the UK as it becomes available. PMID:9034891

  6. LigSearch: a knowledge-based web server to identify likely ligands for a protein target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, Tjaart A. P. de; Laskowski, Roman A.; Duban, Mark-Eugene

    LigSearch is a web server for identifying ligands likely to bind to a given protein. Identifying which ligands might bind to a protein before crystallization trials could provide a significant saving in time and resources. LigSearch, a web server aimed at predicting ligands that might bind to and stabilize a given protein, has been developed. Using a protein sequence and/or structure, the system searches against a variety of databases, combining available knowledge, and provides a clustered and ranked output of possible ligands. LigSearch can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch.

  7. Querying XML Data with SPARQL

    NASA Astrophysics Data System (ADS)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

SPARQL is today the standard access language for Semantic Web data. In recent years, XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries into semantically equivalent XQuery queries, which are used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.
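The full SPARQL-to-XQuery translation is beyond an abstract, but the mapping-driven idea can be caricatured: a predicate-to-XPath table stands in for the ontology-to-XML-Schema mappings, and evaluating the mapped path stands in for running the generated XQuery. All names below are invented:

```python
import xml.etree.ElementTree as ET

# Illustrative mapping from ontology predicates to XPath expressions;
# real SPARQL2XQuery mappings relate ontology and XML Schema constructs.
PREDICATE_TO_XPATH = {"dc:title": "./book/title"}

def translate_and_run(xml_doc, predicate):
    """Evaluate the XPath a predicate maps to, mimicking the
    'translate SPARQL triple pattern, then query XML' pipeline."""
    root = ET.fromstring(xml_doc)
    return [el.text for el in root.findall(PREDICATE_TO_XPATH[predicate])]

# A made-up XML document; the query plays the role of
# SPARQL pattern  ?book dc:title ?title .
doc = "<catalog><book><title>Semantic Web</title></book></catalog>"
print(translate_and_run(doc, "dc:title"))  # ['Semantic Web']
```

The actual framework generates full XQuery (joins, filters, optional patterns), not just single paths; this sketch only shows why mappings make the translation mechanical.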

  8. ESTuber db: an online database for Tuber borchii EST sequences.

    PubMed

    Lazzari, Barbara; Caprera, Andrea; Cosentino, Cristian; Stella, Alessandra; Milanesi, Luciano; Viotti, Angelo

    2007-03-08

The ESTuber database (http://www.itb.cnr.it/estuber) includes 3,271 Tuber borchii expressed sequence tags (ESTs). The dataset consists of 2,389 sequences from an in-house prepared cDNA library from truffle vegetative hyphae, and 882 sequences downloaded from GenBank and representing four libraries from white truffle mycelia and ascocarps at different developmental stages. An automated pipeline was prepared to process EST sequences using public software integrated by in-house developed Perl scripts. Data were collected in a MySQL database, which can be queried via a PHP-based web interface. Sequences included in the ESTuber db were clustered and annotated against three databases: the GenBank nr database, the UniProtKB database and a third in-house prepared database of fungal genomic sequences. An algorithm was implemented to infer statistical classification among Gene Ontology categories from the ontology occurrences deduced from the annotation procedure against the UniProtKB database. Ontologies were also deduced from the annotation of more than 130,000 EST sequences from five filamentous fungi, for intra-species comparison purposes. Further analyses were performed on the ESTuber db dataset, including a tandem repeats search and comparison of the putative protein dataset inferred from the EST sequences to the PROSITE database for protein pattern identification. All the analyses were performed both on the complete sequence dataset and on the contig consensus sequences generated by the EST assembly procedure. The resulting web site is a resource of data and links related to truffle expressed genes. The Sequence Report and Contig Report pages are the web interface core structures which, together with the Text search utility and the Blast utility, allow easy access to the data stored in the database.

  9. Hot Topics on the Web: Strategies for Research.

    ERIC Educational Resources Information Center

    Diaz, Karen R.; O'Hanlon, Nancy

    2001-01-01

    Presents strategies for researching topics on the Web that are controversial or current in nature. Discusses topic selection and overviews, including the use of online encyclopedias; search engines; finding laws and pending legislation; advocacy groups; proprietary databases; Web site evaluation; and the continuing usefulness of print materials.…

  10. Market Research: The World Wide Web Meets the Online Services.

    ERIC Educational Resources Information Center

    Bing, Michelle

    1996-01-01

    The World Wide Web can provide direct market research data inexpensively or can target the appropriate professional online database and narrow the search. This article discusses the Web presence of research and investment firms, financial pages, trade associations, and electronic publications containing market research data. It lists Uniform…

  11. The Implications of Well-Formedness on Web-Based Educational Resources.

    ERIC Educational Resources Information Center

    Mohler, James L.

    Within all institutions, Web developers are beginning to utilize technologies that make sites more than static information resources. Databases such as XML (Extensible Markup Language) and XSL (Extensible Stylesheet Language) are key technologies that promise to extend the Web beyond the "information storehouse" paradigm and provide…

  12. DECADE web portal: toward the integration of MaGa, EarthChem and VOTW data systems to further the knowledge on Earth degassing

    NASA Astrophysics Data System (ADS)

    Cardellini, Carlo; Frigeri, Alessandro; Lehnert, Kerstin; Ash, Jason; McCormick, Brendan; Chiodini, Giovanni; Fischer, Tobias; Cottrell, Elizabeth

    2015-04-01

The release of volatiles from the Earth's interior takes place in both volcanic and non-volcanic areas of the planet. Comprehension of such a complex process, and improvement of the current estimates of global carbon emissions, will greatly benefit from the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are not linked and interoperable. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing interoperability between three data systems that will make their data accessible via the DECADE portal: (1) the Smithsonian Institution's Global Volcanism Program database (VOTW) of volcanic activity data, (2) the EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions), which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. The DECADE web portal will create a powerful search engine over these databases from a single entry point and will return comprehensive multi-component datasets. A user will be able, for example, to obtain data relating to the compositions of emitted gases, the compositions and age of the erupted products, and coincident activity for a specific volcano. This level of capability requires complete synergy between the databases, including the availability of standards-based web services (WMS, WFS) at all data systems. Data and metadata can thus be extracted from each system without interfering with each database's local schema or being replicated to achieve integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process, allowing users to explore Earth degassing datasets over previously unexplored spatial and temporal ranges.

  13. Web-based Electronic Sharing and RE-allocation of Assets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverett, Dave; Miller, Robert A.; Berlin, Gary J.

    2002-09-09

    The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, underutilized, or unutilized. Through a web-based front-end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are implemented in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall, and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare

  14. Design and implementation of website information disclosure assessment system.

    PubMed

    Cho, Ying-Chiang; Pan, Jen-Yi

    2015-01-01

    Internet application technologies, such as cloud computing and cloud storage, have increasingly changed people's lives. Websites contain vast amounts of personal privacy information. In order to protect this information, network security technologies, such as database protection and data encryption, attract many researchers. The most serious problems concerning web vulnerability are e-mail address and network database leakages. These leakages have many causes. For example, malicious users can steal database contents, taking advantage of mistakes made by programmers and administrators. In order to mitigate this type of abuse, a website information disclosure assessment system is proposed in this study. This system utilizes a series of technologies, such as web crawler algorithms, SQL injection attack detection, and web vulnerability mining, to assess a website's information disclosure. Thirty websites, randomly sampled from the top 50 world colleges, were used to collect leakage information. This testing showed the importance of increasing the security and privacy of website information for academic websites.
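    One step of the assessment this abstract describes, flagging e-mail address leakage in fetched HTML, can be sketched in a few lines of Python. This is a simplified illustration only, not the authors' system, which also performs crawling, SQL injection attack detection, and web vulnerability mining.

    ```python
    import re

    # Minimal e-mail leakage check: extract the unique addresses exposed in
    # a page's HTML. A real assessment system would crawl many pages and
    # combine this with injection probes and vulnerability signatures.

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def find_leaked_emails(html: str):
        """Return the unique e-mail addresses found in a page's HTML."""
        return sorted(set(EMAIL_RE.findall(html)))

    page = '<p>Contact <a href="mailto:admin@example.edu">admin@example.edu</a> ' \
           'or webmaster@example.edu</p>'
    print(find_leaked_emails(page))
    # ['admin@example.edu', 'webmaster@example.edu']
    ```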

  15. The Geoscape Poster: Maximum Impact in Geoscience Education With Minimal Funding

    NASA Astrophysics Data System (ADS)

    Aubele, J. C.; Newsom, J.; Crumpler, L. S.

    2004-12-01

    A geologist/educator and a research curator at the New Mexico Museum of Natural History and Science, and a geologist/middle school teacher from the Albuquerque Public Schools, have created an educational poster that uses the landscape around Albuquerque to teach fundamental geoscience concepts. "Albuquerque's Geoscape" is based on the innovative "Geoscape Vancouver" produced by the Geological Survey of Canada. The Albuquerque poster required four years of development, including the creation of unique graphics and text, evaluations, and reviews by geologists and classroom educators. The poster content is aligned with state and national science standards at the middle school level and can be adapted by teachers across K-12. All the information a teacher might need to teach a thematic unit on major geological topics is included in the poster and linked to the local landscape. An accompanying web site for teachers includes additional materials. The initial funding for the project was an Intel Innovations in Teaching Grant of $3,000 awarded to Newsom. Museum in-house resources in science, education and graphics were utilized in the poster design and development. Funding for printing required small contributions from many local and regional organizations supporting science education. These contributors included Sandia National Lab, the Rocky Mountain Section AAPG Foundation, the New Mexico Academy of Science, an ExxonMobil Volunteer Involvement Grant, the federal Bureau of Land Management, the Albuquerque Rotary Club and the Albuquerque Geological Society. Printing at cost through a local company produced a poster on high-quality paper at low cost. An initial printing of 5000 copies has enabled the Museum to offer the poster free of charge to all greater Albuquerque area K-12 teachers. In addition, the poster is on sale to the general public at the museum store. The response from classroom educators, local geologists, and the general public has been enthusiastic. The Rocky Mountain Section AAPG Foundation Board recently voted to use the poster as a model and to encourage the creation of other similar posters.

  16. National Vulnerability Database (NVD)

    National Institute of Standards and Technology Data Gateway

    National Vulnerability Database (NVD) (Web, free access)   NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard.

  17. Drinking Water Treatability Database (Database)

    EPA Science Inventory

    The Drinking Water Treatability Database (TDB) will provide data taken from the literature on the control of contaminants in drinking water, and will be housed on an interactive, publicly-available USEPA web site. It can be used for identifying effective treatment processes, rec...

  18. Utilization of National Databases for Investigation of Unexplained Affluence.

    DTIC Science & Technology

    1992-04-10

    Accountants; American Marketing Association; American Museum of Natural History; Consumer Bankers Association; Electronic Funds Transfer Assoc. ...3,211,000 doctors who are heavy investors (home 108,000, office 412,000). Entertainment & sports figures who invest in stocks, bonds, CDs, money market ...AD-A262 754. HOSPITALITY MARKETING, 656 Munras Avenue, Monterey, CA 93940. FINAL REPORT

  19. Giving Students Control over Their Learning; from Self-guided Museum Visits and Field Trips to Using Scanning Technology to Link Content to Earth Samples

    NASA Astrophysics Data System (ADS)

    Kirkby, K. C.; Phipps, M.

    2011-12-01

    While it may seem counterintuitive, sometimes stepping back is one of the more effective pedagogical approaches instructors can make. On museum visits, an instructor's presence fundamentally alters students' experiences and can curtail student learning by limiting questions or discouraging students from exploring their own interests. Students often rely on the instructor and become passive observers, rather than engaged learners. As an alternative to instructor-led visits, self-guided student explorations of museum exhibits proved to be both popular and pedagogically effective. On pre-instruction and post-instruction surveys, these ungraded, self-guided explorations match or exceed the efficacy of traditional graded lab instruction and completely eclipse gains normally achieved by traditional lecture instruction. In addition, these explorations achieve the remarkable goal of integrating undergraduate earth science instruction into students' social lives. Based on the success of the self-guided museum explorations, this fall saw the debut of an attempt to expand this concept to field experiences. A self-guided student field exploration of Saint Anthony Falls focuses on the intersections of geological processes with human history. Students explore the waterfall's evolution, its early interpretation by 18th and 19th century Dakota and Euro-American societies, and its subsequent social and economic impacts on Upper Midwest societies. Self-guided explorations allow students to explore field settings on their own or with friends and family in a more relaxed manner. At the same time, these explorations give students control over, and responsibility for, their own learning - a powerful pedagogical approach. Student control over their learning is also the goal of an initiative to use scanning technologies, such as linear bar codes, 2D barcodes and radio-frequency identification (RFID), to revolutionize sample identification and study. Scanning technology allows students to practice pattern recognition of earth materials even before they begin to check their properties. As importantly, scanning systems allow students to select a physical earth material sample and link that sample with web page content about its origin, geologic setting, economic uses, or its social and historical relevance. With scanning systems, students are not dependent on instructors for clarification or confirmation, so they can explore earth materials at their own pace and in ways that fit their individual learning styles. Despite a greatly reduced emphasis on sample identification in laboratory activities, students who integrated scanning technology and web content with earth material samples did better on unannounced end-of-term identification quizzes than students taught traditional identification methods. Integrating scanning technologies into earth material study represents the first transformative change in how geoscientists have taught introductory sample identification since the 1800s.
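    The scanning workflow described above amounts to resolving a scanned identifier (barcode, 2D barcode, or RFID tag) to web content about that sample. A minimal Python sketch, with all sample IDs, names, and URLs invented for illustration:

    ```python
    # Hypothetical sample catalog: a scanned identifier maps to a content
    # page about that physical earth material sample.

    SAMPLE_CATALOG = {
        "GEO-0042": {"name": "basalt", "url": "https://example.edu/samples/GEO-0042"},
        "GEO-0107": {"name": "galena", "url": "https://example.edu/samples/GEO-0107"},
    }

    def resolve_scan(sample_id: str) -> str:
        """Map a scanned identifier to its content page, or a fallback page."""
        entry = SAMPLE_CATALOG.get(sample_id)
        return entry["url"] if entry else "https://example.edu/samples/unknown"

    print(resolve_scan("GEO-0042"))  # https://example.edu/samples/GEO-0042
    ```

    The pedagogical point carries over directly: the lookup, not an instructor, confirms the sample's identity, so students can self-check at their own pace.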

  20. Evaluation of the Content and Accessibility of Web Sites for Accredited Orthopaedic Trauma Surgery Fellowships.

    PubMed

    Shaath, M Kareem; Yeranosian, Michael G; Ippolito, Joseph A; Adams, Mark R; Sirkin, Michael S; Reilly, Mark C

    2018-05-02

    Orthopaedic trauma fellowship applicants use online-based resources when researching information on potential U.S. fellowship programs. The 2 primary sources for identifying programs are the Orthopaedic Trauma Association (OTA) database and the San Francisco Match (SF Match) database. Previous studies in other orthopaedic subspecialty areas have demonstrated considerable discrepancies among fellowship programs. The purpose of this study was to analyze content and availability of information on orthopaedic trauma surgery fellowship web sites. The online databases of the OTA and SF Match were reviewed to determine the availability of embedded program links or external links for the included programs. Thereafter, a Google search was performed for each program individually by typing the program's name, followed by the term "orthopaedic trauma fellowship." All identified fellowship web sites were analyzed for accessibility and content. Web sites were evaluated for comprehensiveness in mentioning key components of the orthopaedic trauma surgery curriculum. By consensus, we refined the final list of variables utilizing the methodology of previous studies on the topic. We identified 54 OTA-accredited fellowship programs, offering 87 positions. The majority (94%) of programs had web sites accessible through a Google search. Of the 51 web sites found, all (100%) described their program. Most commonly, hospital affiliation (88%), operative experiences (76%), and rotation overview (65%) were listed, and, least commonly, interview dates (6%), selection criteria (16%), on-call requirements (20%), and fellow evaluation criteria (20%) were listed. Programs with ≥2 fellows provided more information with regard to education content (p = 0.0001) and recruitment content (p = 0.013). Programs with Accreditation Council for Graduate Medical Education (ACGME) accreditation status also provided greater information with regard to education content (odds ratio, 4.0; p = 0.0001). 
Otherwise, no differences were seen by region, residency affiliation, medical school affiliation, or hospital affiliation. The SF Match and OTA databases provide few direct links to fellowship web sites. Individual program web sites do not effectively and completely convey information about the programs. The Internet is an underused resource for fellow recruitment. The lack of information on these sites allows for future opportunity to optimize this resource.

  1. ProXL (Protein Cross-Linking Database): A Platform for Analysis, Visualization, and Sharing of Protein Cross-Linking Mass Spectrometry Data

    PubMed Central

    2016-01-01

    ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app. PMID:27302480
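    The import pattern the abstract describes, a simple documented XML interchange format that shields pipeline developers from the relational schema, can be sketched in Python. The element and attribute names below are invented and are not the real ProXL XML format.

    ```python
    import xml.etree.ElementTree as ET

    # Sketch: pipelines emit a flat, documented XML format, and one importer
    # maps it onto database rows. Element names are hypothetical.

    doc = """
    <crosslinks>
      <link protein1="P1" pos1="12" protein2="P2" pos2="88" psm_count="3"/>
      <link protein1="P1" pos1="40" protein2="P3" pos2="7"  psm_count="1"/>
    </crosslinks>
    """

    # Flatten each <link> element into a tuple ready for a bulk INSERT.
    rows = [
        (e.get("protein1"), int(e.get("pos1")),
         e.get("protein2"), int(e.get("pos2")), int(e.get("psm_count")))
        for e in ET.fromstring(doc).iter("link")
    ]
    print(len(rows), rows[0])  # 2 ('P1', 12, 'P2', 88, 3)
    ```

    Because every pipeline targets the same intermediate format, the database schema can evolve without touching any of the exporters.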

  2. An XML-based Generic Tool for Information Retrieval in Solar Databases

    NASA Astrophysics Data System (ADS)

    Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain

    This paper presents the current architecture of the `Solar Web Project' now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
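    The configuration approach described above, where each archive access system is declared in an external XML file that the server reads, might look like the following Python sketch. The element and attribute names are invented, not the project's actual descriptor schema.

    ```python
    import xml.etree.ElementTree as ET

    # Hypothetical archive descriptor: the server learns about a new archive
    # by parsing a file like this, with no code changes required.

    descriptor = """
    <archive name="medoc-ftp" type="ftp">
      <endpoint>ftp://example.org/solar/</endpoint>
      <catalog instrument="EIT" start="1996-01-01"/>
    </archive>
    """

    root = ET.fromstring(descriptor)
    config = {
        "name": root.get("name"),
        "type": root.get("type"),
        "endpoint": root.findtext("endpoint"),
        "instrument": root.find("catalog").get("instrument"),
    }
    print(config["name"], config["type"])  # medoc-ftp ftp
    ```

    This is what makes the tool "generic" in the abstract's sense: adding a new mission or data set means writing a descriptor file, not new server code.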

  3. ProXL (Protein Cross-Linking Database): A Platform for Analysis, Visualization, and Sharing of Protein Cross-Linking Mass Spectrometry Data.

    PubMed

    Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N

    2016-08-05

    ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app .

  4. A revision of the distribution of sea kraits (Reptilia, Laticauda) with an updated occurrence dataset for ecological and conservation research

    PubMed Central

    Gherghel, Iulian; Papeş, Monica; Brischoux, François; Sahlean, Tiberiu; Strugariu, Alexandru

    2016-01-01

    Abstract The genus Laticauda (Reptilia: Elapidae), commonly known as sea kraits, comprises eight species of marine amphibious snakes distributed along the shores of the Western Pacific Ocean and the Eastern Indian Ocean. We review the information available on the geographic range of sea kraits and analyze their distribution patterns. Generally, we found that south and south-west of Japan, Philippines Archipelago, parts of Indonesia, and Vanuatu have the highest diversity of sea krait species. Further, we compiled the information available on sea kraits’ occurrences from a variety of sources, including museum records, field surveys, and the scientific literature. The final database comprises 694 occurrence records, with Laticauda colubrina having the highest number of records and Laticauda schistorhyncha the lowest. The occurrence records were georeferenced and compiled as a database for each sea krait species. This database can be freely used for future studies. PMID:27110155

  5. A revision of the distribution of sea kraits (Reptilia, Laticauda) with an updated occurrence dataset for ecological and conservation research.

    PubMed

    Gherghel, Iulian; Papeş, Monica; Brischoux, François; Sahlean, Tiberiu; Strugariu, Alexandru

    2016-01-01

    The genus Laticauda (Reptilia: Elapidae), commonly known as sea kraits, comprises eight species of marine amphibious snakes distributed along the shores of the Western Pacific Ocean and the Eastern Indian Ocean. We review the information available on the geographic range of sea kraits and analyze their distribution patterns. Generally, we found that south and south-west of Japan, Philippines Archipelago, parts of Indonesia, and Vanuatu have the highest diversity of sea krait species. Further, we compiled the information available on sea kraits' occurrences from a variety of sources, including museum records, field surveys, and the scientific literature. The final database comprises 694 occurrence records, with Laticauda colubrina having the highest number of records and Laticauda schistorhyncha the lowest. The occurrence records were georeferenced and compiled as a database for each sea krait species. This database can be freely used for future studies.

  6. Towards the Interoperability of Web, Database, and Mass Storage Technologies for Petabyte Archives

    NASA Technical Reports Server (NTRS)

    Moore, Reagan; Marciano, Richard; Wan, Michael; Sherwin, Tom; Frost, Richard

    1996-01-01

    At the San Diego Supercomputer Center, a massive data analysis system (MDAS) is being developed to support data-intensive applications that manipulate terabyte-sized data sets. The objective is to support scientific application access to data whether it is located at a Web site, stored as an object in a database, and/or stored in an archival storage system. We are developing a suite of demonstration programs which illustrate how Web, database (DBMS), and archival storage (mass storage) technologies can be integrated. An application presentation interface is being designed that integrates data access to all of these sources. We have developed a data movement interface between the Illustra object-relational database and the NSL UniTree archival storage system running in a production mode at the San Diego Supercomputer Center. With this interface, an Illustra client can transparently access data on UniTree under the control of the Illustra DBMS server. The current implementation is based on the creation of a new DBMS storage manager class and a set of library functions that allow the manipulation and migration of data stored as Illustra 'large objects'. We have extended this interface to allow a Web client application to control data movement between its local disk, the Web server, the Illustra DBMS server, and the UniTree mass storage environment. This paper describes some of the current approaches to successfully integrating these technologies. This framework is measured against a representative sample of environmental data extracted from the San Diego Bay Environmental Data Repository. Practical lessons are drawn and critical research areas are highlighted.

  7. X-ray Photoelectron Spectroscopy Database (Version 4.1)

    National Institute of Standards and Technology Data Gateway

    SRD 20 X-ray Photoelectron Spectroscopy Database (Version 4.1) (Web, free access)   The NIST XPS Database gives access to energies of many photoelectron and Auger-electron spectral lines. The database contains over 22,000 line positions, chemical shifts, doublet splittings, and energy separations of photoelectron and Auger-electron lines.

  8. Sagace: A web-based search engine for biomedical databases in Japan

    PubMed Central

    2012-01-01

    Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large to grasp features and contents of each database. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816
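    The faceted navigation the abstract mentions boils down to counting hits per metadata facet and filtering on a selected facet value. A minimal Python sketch, with all records invented for illustration:

    ```python
    from collections import Counter

    # Hypothetical search hits, each carrying a metadata facet ("category").
    hits = [
        {"title": "Liver expression atlas", "category": "gene expression"},
        {"title": "Knockout mouse line",    "category": "mouse model"},
        {"title": "Proteome of HeLa",       "category": "proteomics"},
        {"title": "Kidney expression set",  "category": "gene expression"},
    ]

    # Facet counts drive the navigation sidebar; selecting a facet value
    # refines the result list.
    facets = Counter(h["category"] for h in hits)
    refined = [h for h in hits if h["category"] == "gene expression"]
    print(facets["gene expression"], len(refined))  # 2 2
    ```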

  9. The EMBL nucleotide sequence database

    PubMed Central

    Stoesser, Guenter; Baker, Wendy; van den Broek, Alexandra; Camon, Evelyn; Garcia-Pastor, Maria; Kanz, Carola; Kulikova, Tamara; Lombard, Vincent; Lopez, Rodrigo; Parkinson, Helen; Redaschi, Nicole; Sterk, Peter; Stoehr, Peter; Tuli, Mary Ann

    2001-01-01

    The EMBL Nucleotide Sequence Database (http://www.ebi.ac.uk/embl/) is maintained at the European Bioinformatics Institute (EBI) in an international collaboration with the DNA Data Bank of Japan (DDBJ) and GenBank at the NCBI (USA). Data is exchanged amongst the collaborating databases on a daily basis. The major contributors to the EMBL database are individual authors and genome project groups. Webin is the preferred web-based submission system for individual submitters, whilst automatic procedures allow incorporation of sequence data from large-scale genome sequencing centres and from the European Patent Office (EPO). Database releases are produced quarterly. Network services allow free access to the most up-to-date data collection via ftp, email and World Wide Web interfaces. EBI’s Sequence Retrieval System (SRS), a network browser for databanks in molecular biology, integrates and links the main nucleotide and protein databases plus many specialized databases. For sequence similarity searching a variety of tools (e.g. Blitz, Fasta, BLAST) are available which allow external users to compare their own sequences against the latest data in the EMBL Nucleotide Sequence Database and SWISS-PROT. PMID:11125039

  10. Organizational Alignment Through Information Technology: A Web-Based Approach to Change

    NASA Technical Reports Server (NTRS)

    Heinrichs, W.; Smith, J.

    1999-01-01

    This paper reports on the effectiveness of web-based internet tools and databases to facilitate integration of technical organizations with interfaces that minimize modification of each technical organization.

  11. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases

    PubMed Central

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-01-01

    Global cloud frameworks for bioinformatics research databases are becoming huge and heterogeneous; solutions face diametrically opposed challenges of cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN had published 192 mammalian, plant and protein life sciences databases holding 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely, under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604

  12. The JANA calibrations and conditions database API

    NASA Astrophysics Data System (ADS)

    Lawrence, David

    2010-04-01

    Calibrations and conditions databases can be accessed from within the JANA Event Processing framework through the API defined in its JCalibration base class. The API is designed to support everything from databases to web services to flat files as the backend. A web-service backend using the gSOAP toolkit has been implemented, which is particularly interesting since it addresses many modern cybersecurity issues, including support for SSL. The API allows constants to be retrieved through a single line of C++ code, with most of the context, including the transport mechanism, implied by the run currently being analyzed and by the environment, relieving developers from implementing such details.
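    The JANA API itself is C++; the following Python analogue (all class and key names invented) sketches the pattern the abstract describes: a single call site for constants, with the backend (flat file, database, or web service) supplied by the analysis context rather than chosen by the caller.

    ```python
    # Hypothetical analogue of a pluggable calibrations API: the caller asks
    # for constants by key; the context knows the run and the backend.

    class FlatFileBackend:
        """One possible backend; a database or web-service backend would
        expose the same get() interface."""
        def __init__(self, table):
            self._table = table          # e.g. parsed from a text file
        def get(self, key):
            return self._table[key]

    class CalibrationContext:
        """Resolves constants for the run being analyzed via its backend."""
        def __init__(self, run_number, backend):
            self.run_number = run_number
            self._backend = backend
        def get_calib(self, key):
            return self._backend.get(key)

    ctx = CalibrationContext(
        run_number=1234,
        backend=FlatFileBackend({"adc/pedestals": [99.5, 101.2]}))

    pedestals = ctx.get_calib("adc/pedestals")   # one line at the call site
    print(pedestals)  # [99.5, 101.2]
    ```

    Swapping the backend changes nothing at the call site, which is the point of hiding the transport mechanism behind the context.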

  13. CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.

    PubMed

    Hallin, Peter F; Ussery, David W

    2004-12-12

    Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and the results of bioinformatic analyses. More than 150 sequenced bacterial genomes are now available, and comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses presented on the CBS Genome Atlas Web page. Complex analyses, changing methods and the frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution is a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4 to present dynamic web content to users outside the center. This solution is tightly fitted to the existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups facing similar database issues. A web-based user interface dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/. This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.

  14. Advancements in web-database applications for rabies surveillance.

    PubMed

    Rees, Erin E; Gendron, Bruno; Lelièvre, Frédérick; Coté, Nathalie; Bélanger, Denise

    2011-08-02

    Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB over earlier rabies surveillance databases include (1) automatic integration of multi-agency data and diagnostic results on a daily basis; (2) a web-based data editing interface that enables authorized users to add, edit and extract data; and (3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception of the risk of raccoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. RageDB provides an example of the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from raccoon rabies. Furthermore, health agencies have real-time access to a wide assortment of data documenting new developments in the raccoon rabies epidemic, enabling a more timely and appropriate response.

  15. Advancements in web-database applications for rabies surveillance

    PubMed Central

    2011-01-01

    Background Protection of public health from rabies is informed by the analysis of surveillance data from human and animal populations. In Canada, public health, agricultural and wildlife agencies at the provincial and federal level are responsible for rabies disease control, and this has led to multiple agency-specific data repositories. Aggregation of agency-specific data into one database application would enable more comprehensive data analyses and effective communication among participating agencies. In Québec, RageDB was developed to house surveillance data for the raccoon rabies variant, representing the next generation in web-based database applications that provide a key resource for the protection of public health. Results RageDB incorporates data from, and grants access to, all agencies responsible for the surveillance of raccoon rabies in Québec. Technological advancements of RageDB to rabies surveillance databases include 1) automatic integration of multi-agency data and diagnostic results on a daily basis; 2) a web-based data editing interface that enables authorized users to add, edit and extract data; and 3) an interactive dashboard to help visualize data simply and efficiently, in table, chart, and cartographic formats. Furthermore, RageDB stores data from citizens who voluntarily report sightings of rabies suspect animals. We also discuss how sightings data can indicate public perception to the risk of racoon rabies and thus aid in directing the allocation of disease control resources for protecting public health. Conclusions RageDB provides an example in the evolution of spatio-temporal database applications for the storage, analysis and communication of disease surveillance data. The database was fast and inexpensive to develop by using open-source technologies, simple and efficient design strategies, and shared web hosting. The database increases communication among agencies collaborating to protect human health from raccoon rabies. 
Furthermore, health agencies have real-time access to a wide assortment of data documenting new developments in the raccoon rabies epidemic and this enables a more timely and appropriate response. PMID:21810215
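
    The daily multi-agency integration described above can be sketched as a last-write-wins merge keyed on a shared sample identifier. This is a hypothetical illustration with invented field names and agency feeds, not RageDB's actual schema:

```python
from datetime import date

def merge_daily_feeds(feeds):
    """Merge agency record lists into one dict keyed by sample_id,
    keeping the record with the latest 'updated' date."""
    merged = {}
    for feed in feeds:
        for rec in feed:
            key = rec["sample_id"]
            if key not in merged or rec["updated"] > merged[key]["updated"]:
                merged[key] = rec
    return merged

# Two illustrative agency exports describing the same animal sample.
wildlife = [{"sample_id": "QC-001", "species": "raccoon",
             "updated": date(2011, 6, 1), "result": "pending"}]
lab = [{"sample_id": "QC-001", "species": "raccoon",
        "updated": date(2011, 6, 2), "result": "positive"}]

db = merge_daily_feeds([wildlife, lab])
print(db["QC-001"]["result"])  # the lab's newer diagnostic result wins
```

    A nightly job re-running such a merge over fresh agency exports is one simple way to realize "automatic integration of multi-agency data and diagnostic results on a daily basis".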

  16. WheatGenome.info: A Resource for Wheat Genomics.

    PubMed

    Lai, Kaitao

    2016-01-01

    An integrated database named WheatGenome.info, hosting wheat genome and genomic data through a variety of Web-based systems, has been developed to support wheat research and crop improvement. The resource includes multiple Web-based applications: a GBrowse2-based wheat genome viewer with a BLAST search portal, TAGdb for searching wheat second-generation genome sequence data, wheat autoSNPdb, links to wheat genetic maps using CMap and CMap3D, and a wheat genome Wiki to allow interaction between diverse wheat genome sequencing activities. The portal also provides links to a variety of wheat genome resources hosted at other research organizations. This integrated database aims to accelerate wheat genome research and is freely accessible via the web interface at http://www.wheatgenome.info/.

  17. UnCover on the Web: search hints and applications in library environments.

    PubMed

    Galpern, N F; Albert, K M

    1997-01-01

    Among the huge maze of resources available on the Internet, UnCoverWeb stands out as a valuable tool for medical libraries. This up-to-date, free-access, multidisciplinary database of periodical references is searched through an easy-to-learn graphical user interface that is a welcome improvement over the telnet version. This article reviews the basic and advanced search techniques for UnCoverWeb, as well as providing information on the document delivery functions and table of contents alerting service called Reveal. UnCover's currency is evaluated and compared with other current awareness resources. System deficiencies are discussed, with the conclusion that although UnCoverWeb lacks the sophisticated features of many commercial database search services, it is nonetheless a useful addition to the repertoire of information sources available in a library.

  18. Public outreach and communications of the Alaska Volcano Observatory during the 2005-2006 eruption of Augustine Volcano: Chapter 27 in The 2006 eruption of Augustine Volcano, Alaska

    USGS Publications Warehouse

    Adleman, Jennifer N.; Cameron, Cheryl E.; Snedigar, Seth F.; Neal, Christina A.; Wallace, Kristi L.; Power, John A.; Coombs, Michelle L.; Freymueller, Jeffrey T.

    2010-01-01

    The AVO Web site, with its accompanying database, is the backbone of AVO's external and internal communications. This was the first Cook Inlet volcanic eruption with a public expectation of real-time access to data, updates, and hazards information over the Internet. In March 2005, AVO improved the Web site from individual static pages to a dynamic, database-driven site. This new system provided quick and straightforward access to the latest information for (1) staff within the observatory, (2) emergency managers from State and local governments and organizations, (3) the media, and (4) the public. From mid-December 2005 through April 2006, the AVO Web site served more than 45 million Web pages and about 5.5 terabytes of data.

  19. Update of the FANTOM web resource: high resolution transcriptome of diverse cell types in mammals.

    PubMed

    Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad; Noguchi, Shuei; Kondo, Atsushi; Severin, Jessica; Mungall, Chris; Arenillas, David; Mathelier, Anthony; Medvedeva, Yulia A; Lennartsson, Andreas; Drabløs, Finn; Ramilowski, Jordan A; Rackham, Owen; Gough, Julian; Andersson, Robin; Sandelin, Albin; Ienasescu, Hans; Ono, Hiromasa; Bono, Hidemasa; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R R; Kasukawa, Takeya; Kawaji, Hideya

    2017-01-04

    Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to help researchers explore transcriptional regulation and cellular states. In the course of the collaboration, primary data and analysis results have been expanded, and functionalities of the database systems enhanced. We believe that our data and web systems are invaluable resources, and we think the scientific community will benefit from this recent update to deepen their understanding of mammalian cellular organization. We introduce the contents of FANTOM5 here, report recent updates in the web resource and provide future perspectives. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. TIGER 2010 Boundaries

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes the State and County boundaries from the TIGER shapefiles compiled into a single national coverage for each layer. The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB).

  1. A Geospatial Database that Supports Derivation of Climatological Features of Severe Weather

    NASA Astrophysics Data System (ADS)

    Phillips, M.; Ansari, S.; Del Greco, S.

    2007-12-01

    The Severe Weather Data Inventory (SWDI) at NOAA's National Climatic Data Center (NCDC) provides user access to archives of several datasets critical to the detection and evaluation of severe weather. These datasets include archives of:
    · NEXRAD Level-III point features describing general storm structure, hail, mesocyclone and tornado signatures
    · National Weather Service Storm Events Database
    · National Weather Service Local Storm Reports collected from storm spotters
    · National Weather Service Warnings
    · Lightning strikes from Vaisala's National Lightning Detection Network (NLDN)
    SWDI archives all of these datasets in a spatial database that allows for convenient searching and subsetting. These data are accessible via the NCDC web site, Web Feature Services (WFS) or automated web services. The results of interactive web page queries may be saved in a variety of formats, including plain text, XML, Google Earth's KMZ, standards-based NetCDF and Shapefile. NCDC's Storm Risk Assessment Project (SRAP) uses data from the SWDI database to derive gridded climatology products that show the spatial distributions of the frequency of various events. SRAP also can relate SWDI events to other spatial data such as roads, population, watersheds, and other geographic, sociological, or economic data to derive products that are useful in municipal planning, emergency management, the insurance industry, and other areas where there is a need to quantify and qualify how severe weather patterns affect people and property.
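
    An automated client of the kind mentioned ("automated web services") might assemble its queries along these lines. The endpoint host, path and parameter names below are assumptions for illustration, not NCDC's documented API:

```python
from urllib.parse import urlencode

def build_swdi_query(dataset, start, end, bbox, fmt="xml"):
    """Assemble a query URL for a SWDI-style web service.
    bbox is (min_lon, min_lat, max_lon, max_lat)."""
    base = "https://example.ncdc.noaa.gov/swdiws"  # hypothetical host
    params = {"startDate": start, "endDate": end,
              "bbox": ",".join(str(v) for v in bbox), "format": fmt}
    return f"{base}/{dataset}?{urlencode(params)}"

# Hypothetical request: tornado vortex signatures for June 2007 as KMZ.
url = build_swdi_query("nx3tvs", "20070601", "20070630",
                       (-90.0, 35.0, -85.0, 40.0), fmt="kmz")
print(url)
```

    The same URL-building pattern works for any of the output formats the abstract lists (plain text, XML, KMZ, NetCDF, Shapefile) by varying the format parameter.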

  2. GrTEdb: the first web-based database of transposable elements in cotton (Gossypium raimondii).

    PubMed

    Xu, Zhenzhen; Liu, Jing; Ni, Wanchao; Peng, Zhen; Guo, Yue; Ye, Wuwei; Huang, Fang; Zhang, Xianggui; Xu, Peng; Guo, Qi; Shen, Xinlian; Du, Jianchang

    2017-01-01

    Although several diploid and tetraploid Gossypium species genomes have been sequenced, a well-annotated web-based transposable element (TE) database has been lacking. To better understand the roles of TEs in the structural, functional and evolutionary dynamics of the cotton genome, a comprehensive, specific, and user-friendly web-based database, the Gossypium raimondii transposable elements database (GrTEdb), was constructed. A total of 14 332 TEs were structurally annotated and clearly categorized in the G. raimondii genome, and these elements have been classified into seven distinct superfamilies based on the order of protein-coding domains, structures and/or sequence similarity: 2929 Copia-like elements, 10 368 Gypsy-like elements, 299 L1s, 12 Mutators, 435 PIF-Harbingers, 275 CACTAs and 14 Helitrons. Meanwhile, web-based sequence browsing, searching, downloading and BLAST tools were implemented to help users easily and effectively annotate TEs or TE fragments in genomic sequences from G. raimondii and other closely related Gossypium species. GrTEdb provides resources and information related to TEs in G. raimondii, and will facilitate gene and genome analyses within or across Gossypium species, evaluating the impact of TEs on their host genomes, and investigating the potential interaction between TEs and protein-coding genes in Gossypium species. http://www.grtedb.org/. © The Author(s) 2017. Published by Oxford University Press.
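
    As a quick sanity check, the superfamily counts reported in the abstract do sum to the stated total of 14 332 annotated TEs:

```python
# Superfamily counts as reported for GrTEdb.
superfamilies = {
    "Copia": 2929, "Gypsy": 10368, "L1": 299, "Mutator": 12,
    "PIF-Harbinger": 435, "CACTA": 275, "Helitron": 14,
}
total = sum(superfamilies.values())
print(total)  # 14332
```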

  3. 45 CFR 1180.2 - Definition of a museum.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) Art museums; (5) Children's museums; (6) General museums; (7) Historic houses and sites; (8) History museums; (9) Nature centers; (10) Natural history and anthropology museums; (11) Planetariums; (12) Science and technology centers; (13) Specialized museums; and (14) Zoological parks. (c) For the purposes...

  4. Longitudinal analysis of meta-analysis literatures in the database of ISI Web of Science.

    PubMed

    Zhu, Changtai; Jiang, Ting; Cao, Hao; Sun, Wenguang; Chen, Zhong; Liu, Jinming

    2015-01-01

    Meta-analysis is regarded as an important form of evidence for scientific decision-making. The ISI Web of Science database collects a great number of high-quality literatures, including meta-analyses, so it is useful to understand the general characteristics of the meta-analysis literature and outline its perspective. In the present study, we summarized and clarified some features of these literatures in the ISI Web of Science database. We retrieved the meta-analysis literatures in the ISI Web of Science database, including SCI-E, SSCI, A&HCI, CPCI-S, CPCI-SSH, CCR-E, and IC. The annual growth rate, literature category, language, funding, index citation, agencies and countries/territories of the meta-analysis literatures were analyzed, respectively. A total of 95,719 records, which account for 0.38% (99% CI: 0.38%-0.39%) of all literatures, were found in the database. From 1997 to 2012, the annual growth rate of meta-analysis literatures was 18.18%. The literatures spanned many categories, languages, fundings, citations, publication agencies, and countries/territories. Interestingly, the index citation frequencies of meta-analyses were significantly higher than those of other literature types such as multi-centre studies, randomized controlled trials, cohort studies, case control studies, and case reports (P<0.0001). The increasing numbers, growing global influence and high citation rates reveal that meta-analysis has become more and more prominent in recent years. In future, to promote the validity of meta-analyses, the CONSORT and PRISMA standards should be continuously popularized in the field of evidence-based medicine.
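
    A reported share such as "0.38% (99% CI: 0.38%-0.39%)" can be reproduced, approximately, with a normal-approximation (Wald) interval for a proportion. A minimal sketch; since the abstract does not give the total record count, n is backed out from the reported percentage and is only approximate:

```python
from math import sqrt

def wald_ci(k, n, z=2.576):
    """Normal-approximation CI for a proportion; z = 2.576 gives 99%."""
    p = k / n
    se = sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

k = 95719              # meta-analysis records found
n = round(k / 0.0038)  # approximate total implied by the 0.38% share
lo, hi = wald_ci(k, n)
print(f"{lo:.4%} - {hi:.4%}")
```

    With n in the tens of millions the interval is extremely narrow, which is consistent with the reported bounds rounding to 0.38%-0.39% at two decimal places.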

  5. Secure, web-accessible call rosters for academic radiology departments.

    PubMed

    Nguyen, A V; Tellis, W M; Avrin, D E

    2000-05-01

    Traditionally, radiology department call rosters have been posted via paper and bulletin boards. Frequently, changes to these lists are made by multiple people independently and without synchronization, resulting in confusion among the house staff and technical staff as to who is on call and when. In addition, multiple and disparate copies exist in different sections of the department, and changes made would not be propagated to all the schedules. To eliminate such difficulties, a paperless call scheduling application was developed. Our call scheduling program allowed Java-enabled web access to a database by designated personnel from each radiology section who have privileges to make the necessary changes. Once a person made a change, everyone accessing the database would see the modification. This eliminates the chaos resulting from people swapping shifts at the last minute and not having the time to record or broadcast the change. Furthermore, all changes to the database were logged. Users are given a log-in name and password and can only edit their own section; however, all personnel have access to all sections' schedules. Our applet was written in Java 2 using the latest technology in database access. We access our Interbase database through the DataExpress and DB Swing (Borland, Scotts Valley, CA) components. The result is secure access to the call rosters via the web. There are many advantages to web-enabled access, mainly the ability for people to make changes and have the changes recorded and propagated in a single virtual location, available to all who need to know.
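
    The two safeguards this design leans on, section-scoped edit rights and a logged audit trail, can be sketched in a few lines. This is an illustrative model with invented names, not the original Java/Interbase implementation:

```python
class CallRoster:
    def __init__(self):
        self.shifts = {}  # (section, date) -> person on call
        self.log = []     # append-only audit trail of every change

    def assign(self, user, user_section, section, day, person):
        """Record who is on call; users may only edit their own section."""
        if user_section != section:
            raise PermissionError("users may only edit their own section")
        old = self.shifts.get((section, day))
        self.shifts[(section, day)] = person
        self.log.append((user, section, day, old, person))  # log the change

roster = CallRoster()
roster.assign("chief", "neuro", "neuro", "2000-05-01", "on_call_resident")
print(roster.shifts[("neuro", "2000-05-01")], len(roster.log))
```

    Because every edit lands in one shared store and one log, last-minute swaps are visible to everyone immediately, which is exactly the failure mode of the paper rosters.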

  6. Federated or cached searches: Providing expected performance from multiple invasive species databases

    NASA Astrophysics Data System (ADS)

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-06-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search "deep" web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.
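
    The performance argument can be made concrete with a toy latency model: a federated search blocks on its slowest remote source, while a cached search is a single local lookup. The latency numbers below are invented for illustration:

```python
def federated_latency(source_latencies_ms):
    # All remote sources are queried; the merged response cannot be
    # returned until the slowest source has answered.
    return max(source_latencies_ms)

def cached_latency(local_lookup_ms=5):
    # A pre-built central cache answers from a single local lookup.
    return local_lookup_ms

sources = [120, 340, 95, 2600]  # one slow or flaky provider dominates
print(federated_latency(sources), cached_latency())
```

    The model also shows why federation degrades ungracefully: adding one slow or unavailable source raises the worst case for every query, whereas a cache isolates users from provider outages at the cost of staleness.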

  7. PhamDB: a web-based application for building Phamerator databases.

    PubMed

    Lamine, James G; DeJong, Randall J; Nelesen, Serita M

    2016-07-01

    PhamDB is a web application which creates databases of bacteriophage genes, grouped by gene similarity. It is backwards compatible with the existing Phamerator desktop software while providing an improved database creation workflow. Key features include a graphical user interface, validation of uploaded GenBank files, and abilities to import phages from existing databases, modify existing databases and queue multiple jobs. Source code and installation instructions for Linux, Windows and Mac OSX are freely available at https://github.com/jglamine/phage PhamDB is also distributed as a docker image which can be managed via Kitematic. This docker image contains the application and all third party software dependencies as a pre-configured system, and is freely available via the installation instructions provided. snelesen@calvin.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Federated or cached searches: providing expected performance from multiple invasive species databases

    USGS Publications Warehouse

    Graham, Jim; Jarnevich, Catherine S.; Simpson, Annie; Newman, Gregory J.; Stohlgren, Thomas J.

    2011-01-01

    Invasive species are a universal global problem, but the information to identify them, manage them, and prevent invasions is stored around the globe in a variety of formats. The Global Invasive Species Information Network is a consortium of organizations working toward providing seamless access to these disparate databases via the Internet. A distributed network of databases can be created using the Internet and a standard web service protocol. There are two options to provide this integration. First, federated searches are being proposed to allow users to search “deep” web documents such as databases for invasive species. A second method is to create a cache of data from the databases for searching. We compare these two methods, and show that federated searches will not provide the performance and flexibility required by users, and that a central cache of the data is required to improve performance.

  9. Ocean Drilling Program: Publication Services: Online Manuscript Submission

    Science.gov Websites

    ODP/TAMU Publication Services page for online manuscript submission. Authors are directed to use the submission and review forms available on the IODP-USIO publications web site; the page also links to the online Janus database, drilling services and tools, the publications policy, author instructions, and ODP's main web site.

  10. Mining Hidden Gems Beneath the Surface: A Look At the Invisible Web.

    ERIC Educational Resources Information Center

    Carlson, Randal D.; Repman, Judi

    2002-01-01

    Describes resources for researchers called the Invisible Web that are hidden from the usual search engines and other tools and contrasts them with those resources available on the surface Web. Identifies specialized search tools, databases, and strategies that can be used to locate credible in-depth information. (Author/LRW)

  11. Web Design Matters

    ERIC Educational Resources Information Center

    Mathews, Brian

    2009-01-01

    The web site is a library's most important feature. Patrons use the web site for numerous functions, such as renewing materials, placing holds, requesting information, and accessing databases. The homepage is the place they turn to look up the hours, branch locations, policies, and events. Whether users are at work, at home, in a building, or on…

  12. Semantic Annotations and Querying of Web Data Sources

    NASA Astrophysics Data System (ADS)

    Hornung, Thomas; May, Wolfgang

    A large part of the Web, holding a significant portion of its useful information, consists of views on hidden databases, provided by numerous heterogeneous interfaces that are partly human-oriented via Web forms ("Deep Web") and partly based on Web Services (only machine accessible). In this paper we present an approach for annotating these sources in a way that makes them citizens of the Semantic Web. We illustrate how queries can be stated in terms of the ontology, and how the annotations are used to select and access appropriate sources and to answer the queries.

  13. Virus taxonomy: the database of the International Committee on Taxonomy of Viruses (ICTV)

    PubMed Central

    Dempsey, Donald M; Hendrickson, Robert Curtis; Orton, Richard J; Siddell, Stuart G; Smith, Donald B

    2018-01-01

    Abstract The International Committee on Taxonomy of Viruses (ICTV) is charged with the task of developing, refining, and maintaining a universal virus taxonomy. This task encompasses the classification of virus species and higher-level taxa according to the genetic and biological properties of their members; naming virus taxa; maintaining a database detailing the currently approved taxonomy; and providing the database, supporting proposals, and other virus-related information from an open-access, public web site. The ICTV web site (http://ictv.global) provides access to the current taxonomy database in online and downloadable formats, and maintains a complete history of virus taxa back to the first release in 1971. The ICTV has also published the ICTV Report on Virus Taxonomy starting in 1971. This Report provides a comprehensive description of all virus taxa covering virus structure, genome structure, biology and phylogenetics. The ninth ICTV report, published in 2012, is available as an open-access online publication from the ICTV web site. The current, 10th report (http://ictv.global/report/), is being published online, and is replacing the previous hard-copy edition with a completely open access, continuously updated publication. No other database or resource exists that provides such a comprehensive, fully annotated compendium of information on virus taxa and taxonomy. PMID:29040670

  14. A Web-based Tool for SDSS and 2MASS Database Searches

    NASA Astrophysics Data System (ADS)

    Hendrickson, M. A.; Uomoto, A.; Golimowski, D. A.

    We have developed a web site using HTML, PHP, Python, and MySQL that extracts, processes, and displays data from the Sloan Digital Sky Survey (SDSS) and the Two-Micron All-Sky Survey (2MASS). The goal is to locate brown dwarf candidates in the SDSS database by looking at color cuts; however, this site could also be useful for targeted searches of other databases as well. MySQL databases are created from broad searches of SDSS and 2MASS data. Broad queries on the SDSS and 2MASS database servers are run weekly so that observers have the most up-to-date information from which to select candidates for observation. Observers can look at detailed information about specific objects including finding charts, images, and available spectra. In addition, updates from previous observations can be added by any collaborators; this format makes observational collaboration simple. Observers can also restrict the database search, just before or during an observing run, to select objects of special interest.
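
    A color-cut selection of the kind described can be sketched as a simple filter over photometric rows. The i-z threshold and magnitudes below are invented for illustration, not the survey's actual selection criteria:

```python
def select_candidates(rows, iz_min=1.7):
    """Keep rows whose i-z colour is at least iz_min (red objects)."""
    return [r for r in rows if r["i"] - r["z"] >= iz_min]

# Hypothetical photometric catalog rows (magnitudes are made up).
catalog = [
    {"objid": 1, "i": 20.1, "z": 18.2},  # i-z = 1.9 -> candidate
    {"objid": 2, "i": 19.5, "z": 19.0},  # i-z = 0.5 -> rejected
]
print([r["objid"] for r in select_candidates(catalog)])
```

    In the web tool this filter would run as a SQL WHERE clause over the locally cached MySQL tables rather than in application code, but the selection logic is the same.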

  15. Database resources of the National Center for Biotechnology Information: 2002 update

    PubMed Central

    Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.

    2002-01-01

    In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI’s web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, Human-Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:11752242

  16. Disseminated Museum Displays and Participation of Students from Underrepresented Populations in Polar Research: Education and Outreach for Joint Projects in GPS and Seismology Solid Earth Science Community

    NASA Astrophysics Data System (ADS)

    Eriksson, S. C.; Wilson, T. J.; Anandakrishnan, S.; Aster, R. C.; Johns, B.; Anderson, K.; Taber, J.

    2006-12-01

    Two Antarctic projects developed by solid earth scientists in the GPS and seismology communities have rich education and outreach activities focused on disseminating information gleaned from this research and on including students from underrepresented groups. Members of the UNAVCO and IRIS research consortia along with international partners from Australia, Canada, Chile, Germany, Italy, New Zealand and the U.K. aim to deploy an ambitious GPS/seismic network to observe the Antarctic glaciological and geologic system using a multidisciplinary and internationally coordinated approach. The second project supports this network. UNAVCO and IRIS are designing and building a reliable power and communication system for autonomous polar station operation which uses the latest power and communication technologies for ease of deployment and reliable multi-year operation in severe polar environments. This project will disseminate research results through an IPY/POLENET web-based museum-style display based on the next-generation "Museum Lite" capability primarily supported by IRIS. "Museum Lite" uses a standard PC, touch-screen monitor, and standard Internet browsers to exploit the scalability and access of the Internet and to provide customizable content in an interactive setting. The unit is suitable for research departments, public schools, and an assortment of public venues, and can provide wide access to real-time geophysical data, ongoing research, and general information. The POLENET group will work with members of the two consortia to provide content about the project and polar science in general. One unit is to be installed at Barrow's Ilisagvit College through the Barrow Arctic Science Consortium, one at McMurdo Station in Antarctica, and two at other sites to be determined (likely in New Zealand/Australia and in the U.S.). In January 2006, a Museum Lite exhibit was installed at the Amundsen-Scott South Pole Station. Evaluation of this prototype is underway.
These projects also have a special focus on engaging underrepresented groups in polar science through coalitions with existing recruitment networks and strong, currently operating programs such as the joint UNAVCO-IRIS-USGS program called Research Experiences in Solid Earth Science for Students (RESESS) that provides multi-year research experiences, ongoing mentorship, and a learning community. Undergraduate students will participate in polar research or in development of the new polar equipment.

  17. [Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].

    PubMed

    Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu

    2015-09-01

    By collecting and analyzing laryngeal cancer related genes and miRNAs, we aimed to build a comprehensive laryngeal cancer related gene database that, unlike current biological information databases with complex and clumsy structures, focuses on the theme of genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the Web server, MySQL for the database design and PHP for the web coding, a comprehensive database for laryngeal cancer related genes was established, providing gene tables, protein tables, miRNA tables and clinical information tables for patients with laryngeal cancer. The established database contains 207 laryngeal cancer related genes, 243 proteins, 26 miRNAs, and their particular information such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer relevant molecules. The database can be accessed and operated via the Internet, through which browsing and retrieval of the information are performed. The database is maintained and updated regularly. The database for laryngeal cancer related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
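
    The four-table layout the abstract describes (genes, proteins, miRNAs, clinical records) can be sketched as follows, substituting SQLite for the MySQL/PHP stack so the example is self-contained; all column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Four tables keyed back to the gene table, mirroring the described layout.
conn.executescript("""
CREATE TABLE gene    (id INTEGER PRIMARY KEY, symbol TEXT, mutation TEXT);
CREATE TABLE protein (id INTEGER PRIMARY KEY,
                      gene_id INTEGER REFERENCES gene(id), name TEXT);
CREATE TABLE mirna   (id INTEGER PRIMARY KEY, name TEXT,
                      target_gene_id INTEGER REFERENCES gene(id));
CREATE TABLE clinical(id INTEGER PRIMARY KEY, patient TEXT,
                      gene_id INTEGER REFERENCES gene(id));
""")
# Illustrative rows, not data from the actual database.
conn.execute("INSERT INTO gene (symbol, mutation) VALUES ('TP53', 'p.R175H')")
conn.execute("INSERT INTO mirna (name, target_gene_id) VALUES ('miR-21', 1)")
row = conn.execute("""SELECT m.name, g.symbol FROM mirna m
                      JOIN gene g ON g.id = m.target_gene_id""").fetchone()
print(row)
```

    In the described system the equivalent joins would be issued by PHP pages against MySQL, with the browser front end handling browsing and retrieval.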

  18. The 2015 Nucleic Acids Research Database Issue and molecular biology database collection.

    PubMed

    Galperin, Michael Y; Rigden, Daniel J; Fernández-Suárez, Xosé M

    2015-01-01

    The 2015 Nucleic Acids Research Database Issue contains 172 papers that include descriptions of 56 new molecular biology databases, and updates on 115 databases whose descriptions have been previously published in NAR or other journals. Following the classification that was introduced last year to simplify navigation of the entire issue, these articles are divided into eight subject categories. This year's highlights include RNAcentral, an international community portal to various databases on noncoding RNA; ValidatorDB, a validation database for protein structures and their ligands; SASBDB, a primary repository for small-angle scattering data of various macromolecular complexes; MoonProt, a database of 'moonlighting' proteins, and two new databases of protein-protein and other macromolecular complexes, ComPPI and the Complex Portal. This issue also includes an unusually high number of cancer-related databases and other databases dedicated to genomic basics of disease and potential drugs and drug targets. The size of the NAR online Molecular Biology Database Collection, http://www.oxfordjournals.org/nar/database/a/, remained approximately the same, following the addition of 74 new resources and removal of 77 obsolete web sites. The entire Database Issue is freely available online on the Nucleic Acids Research web site (http://nar.oxfordjournals.org/). Published by Oxford University Press on behalf of Nucleic Acids Research 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  19. Progressive and self-limiting neurodegenerative disorders in Africa: a new prominent field of research led by South Africa but without strong health policy.

    PubMed

    Poreau, Brice

    2016-01-01

    Neurodegenerative disorders are involved in the mortality and morbidity of every country, and a high prevalence is estimated in Africa. Neurodegenerative disorders are defined by a progressive or self-limiting alteration of neurons implicated in specific functional and anatomical roles, encompassing a wide range of clinical disorders from self-limiting to progressive. Focus on public health policies and scientific research is needed to understand the mechanisms and to reduce this high prevalence. We used bibliometrics and mapping tools to explore the study areas and countries involved in scientific research on neurodegenerative disorders in Africa. We used two databases: Web of Science and Pubmed. We analyzed the journals, most cited articles, authors, publication years, organizations, funding agencies, countries and keywords in the Web of Science Core Collection database, and the publication years and Medical Subject Headings in the Pubmed database. We mapped the data using VOSviewer. We retrieved 44 articles published between 1975 and 2014 from the Web of Science Core Collection database and 669 from the Pubmed database, the majority published after 2006. The main countries involved in research on neurodegenerative disorders in Africa were the USA, the United Kingdom, France and South Africa, representing the main collaboration network. Clinical Neurology and Genetics & Heredity are the main Web of Science categories, whereas Neurosciences and Biochemistry & Molecular Biology are the main Web of Science categories for the general search "neurodegenerative disorders" not restricted to Africa. This is confirmed by the Medical Subject Headings analysis from Pubmed, with one additional study area: treatment. Neurodegenerative disorders research is led by South Africa, with a network involving the USA and the UK as well as African countries such as Zambia. The chief fields that emerged concerned patients and heredity, as well as treatment.
    Public health policy remains a lacking field in research, whereas prevalence is estimated to be high in every country. The new 17 Sustainable Development Goals of the United Nations could help in this regard.

  20. Fermilab Security Site Access Request Database

    Science.gov Websites

    Fermilab Security Site Access Request Database. Use of the online version of the Fermilab Security Site Access Request Database requires logging in to the ESH&Q Web Site. The page is generated from the ESH&Q Section's Oracle database (last generated May 27, 2018, 05:48 AM).

  1. Virtual anastylosis of Greek sculpture as museum policy for public outreach and cognitive accessibility

    NASA Astrophysics Data System (ADS)

    Stanco, Filippo; Tanasi, Davide; Allegra, Dario; Milotta, Filippo Luigi Maria; Lamagna, Gioconda; Monterosso, Giuseppina

    2017-01-01

    This paper deals with a virtual anastylosis of a Greek Archaic statue from ancient Sicily and the development of a public outreach protocol for those with visual impairment or cognitive disabilities through the application of three-dimensional (3-D) printing and haptic technology. The case study consists of the marble head from Leontinoi in southeastern Sicily, acquired in the 18th century and later kept in the collection of the Museum of Castello Ursino in Catania, and a marble torso, retrieved in 1904 and since then displayed in the Archaeological Museum of Siracusa. Due to their similar stylistic features, the two pieces can be dated to the end of the sixth century BC. Their association has been an open problem, largely debated by scholars, who have based their hypotheses on comparisons between pictures; the reassembly of the two artifacts was never attempted. As a result, the importance of such an artifact, which could be the only intact Archaic statue of a kouros ever found in Greek Sicily, has not been fully grasped by the public, and the curatorial dissemination of the knowledge related to these artifacts has been based purely on photographic material. In response to this scenario, the two objects have been 3-D scanned and virtually reassembled. The result has been shared digitally with the public via a web platform and, to increase accessibility for the public with physical or cognitive disabilities, copies of the reassembled statue have been 3-D printed and an interactive test with the 3-D model has been carried out with a haptic device.

  2. Museum Literacies and Adolescents Using Multiple Forms of Texts "On Their Own"

    ERIC Educational Resources Information Center

    Eakle, A. Jonathan

    2009-01-01

    In this article, museum literacies are examined. Data collected during a qualitative study of adolescents in out-of-school and in-school groups in a museum demonstrate how participants used museum literacies. Resources for teachers' uses of museum literacies are described and provided, including museum podcasts, virtual museum Internet sites, and…

  3. Web based tools for data manipulation, visualisation and validation with interactive georeferenced graphs

    NASA Astrophysics Data System (ADS)

    Ivankovic, D.; Dadic, V.

    2009-04-01

    Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are inserted from various files. All of these parameters require visualization, validation, and manipulation from a research vessel or scientific institution, as well as public presentation. For these purposes a web-based system has been developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and the Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod_plsql). Additional components for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. Each graph is realized as a dynamically generated web page containing a Java applet. Both the mapping tool and the graphs are georeferenced: a click on some part of a graph automatically initiates a zoom or places a marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows us to separate the data definitions from the code that manipulates them. Adding a new parameter to the system requires only a data definition and description, without programming an interface for that kind of data.
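    The georeferencing idea described above, where a click on a graph point resolves to the coordinates at which the value was measured, can be sketched as follows (the data values and coordinates are hypothetical, and the real system does this in Java applets and PL/SQL, not Python):

```python
# Every plotted value keeps the coordinates it was measured at, so a click on
# the graph can be translated into a map zoom or marker at that location.

samples = [
    # (timestamp, value, lat, lon) for one parameter, e.g. sea temperature
    ("2008-06-01T10:00", 18.2, 43.51, 16.39),
    ("2008-06-01T11:00", 18.4, 43.53, 16.41),
    ("2008-06-01T12:00", 18.9, 43.55, 16.44),
]

def locate(clicked_index):
    """Return the coordinates to center the map on for a clicked graph point."""
    _, _, lat, lon = samples[clicked_index]
    return lat, lon

print(locate(1))  # (43.53, 16.41)
```

Because each graph point carries its own position, a suspicious value can be checked immediately against where (and when) it was recorded, which is what makes the feature useful for validation.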

  4. Lecturers’ Understanding on Indexing Databases of SINTA, DOAJ, Google Scholar, SCOPUS, and Web of Science: A Study of Indonesians

    NASA Astrophysics Data System (ADS)

    Saleh Ahmar, Ansari; Kurniasih, Nuning; Irawan, Dasapta Erwin; Utami Sutiksno, Dian; Napitupulu, Darmawan; Ikhsan Setiawan, Muhammad; Simarmata, Janner; Hidayat, Rahmat; Busro; Abdullah, Dahlan; Rahim, Robbi; Abraham, Juneman

    2018-01-01

    The Ministry of Research, Technology and Higher Education of Indonesia has introduced several national and international indexers of scientific works. This policy serves as a guideline for lecturers and researchers in choosing reputable publications. This study aimed to describe the level of understanding of Indonesian lecturers regarding indexing databases, i.e. SINTA, DOAJ, Scopus, Web of Science, and Google Scholar. The research used a descriptive design and survey method. The population in this study comprised Indonesian lecturers and researchers. The primary data were obtained from a questionnaire filled in by 316 lecturers and researchers from 33 provinces in Indonesia, recruited with a convenience sampling technique in October-November 2017. The data analysis was performed using frequency distribution tables, cross tabulation, and descriptive analysis. The results of this study showed that, on average, 66.5% of Indonesian lecturers and researchers were familiar with SINTA, DOAJ, Scopus, Web of Science, and Google Scholar. However, based on empirical frequency, 76% of them had never published in journals or proceedings indexed in Scopus.
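    The frequency-distribution and cross-tabulation step named above can be sketched with the standard library alone (the survey responses below are hypothetical, not the study's data):

```python
from collections import Counter

# Hypothetical responses: (database name, has the respondent heard of it).
responses = [
    ("SINTA", True), ("SINTA", False), ("Scopus", True),
    ("Scopus", True), ("DOAJ", True), ("DOAJ", False),
]

def frequency_table(rows):
    """Cross-tabulate awareness per database: one row per database,
    one column per answer, as in a survey frequency-distribution table."""
    table = Counter(rows)
    databases = sorted({db for db, _ in rows})
    return {db: {"aware": table[(db, True)], "unaware": table[(db, False)]}
            for db in databases}

print(frequency_table(responses)["Scopus"])  # {'aware': 2, 'unaware': 0}
```

Dividing each cell by the row total then yields percentages of the kind reported in the abstract (e.g. the 66.5% average awareness figure).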

  5. BAGEL4: a user-friendly web server to thoroughly mine RiPPs and bacteriocins.

    PubMed

    van Heel, Auke J; de Jong, Anne; Song, Chunxu; Viel, Jakob H; Kok, Jan; Kuipers, Oscar P

    2018-05-21

    Interest in secondary metabolites such as RiPPs (ribosomally synthesized and post-translationally modified peptides) is increasing worldwide. To facilitate research in this field we have updated our mining web server. BAGEL4 is faster than its predecessor and is now fully independent of ORF calling. Gene clusters of interest are discovered using the core-peptide database and/or through HMM motifs that are present in associated context genes. The databases used for mining have been updated and extended with literature references and links to UniProt and NCBI. Additionally, we have included automated promoter and terminator prediction and the option to upload RNA expression data, which can be displayed along with the identified clusters. Further improvements include the annotation of the context genes, which is now based on a fast BLAST against the prokaryote part of the UniRef90 database, and the improved web BLAST feature that dynamically loads structural data such as internal cross-linking from UniProt. Overall, BAGEL4 provides the user with more information through a user-friendly web interface, which simplifies data evaluation. BAGEL4 is freely accessible at http://bagel4.molgenrug.nl.

  6. iOS and OS X Apps for Exploring Earthquake Activity

    NASA Astrophysics Data System (ADS)

    Ammon, C. J.

    2015-12-01

    The U.S. Geological Survey and many other agencies rapidly provide information following earthquakes. This timely information garners great public interest and provides a rich opportunity to engage students in discussion and analysis of earthquakes and tectonics. In this presentation I will describe a suite of iOS and Mac OS X apps that I use for teaching and that Penn State employs in outreach efforts in a small museum run by the College of Earth and Mineral Sciences. The iOS apps include a simple, global overview of earthquake activity, epicentral, designed for a quick review or event lookup. A more full-featured iPad app, epicentral-plus, includes a simple global overview along with views that allow a more detailed exploration of geographic regions of interest. In addition, epicentral-plus allows the user to monitor ground motions using seismic channel lists compatible with the IRIS web services. Some limited seismogram processing features are included to allow focus on appropriate signal bandwidths. A companion web site, which includes background material on earthquakes, and a blog that includes sample images and channel lists appropriate for monitoring earthquakes in regions of recent earthquake activity can be accessed through a third panel in the app. I use epicentral-plus at the beginning of each earthquake seismology class to review recent earthquake activity and to stimulate students to formulate and ask questions that lead to discussions of earthquake and tectonic processes. Less interactive OS X versions of the apps are used to display a global map of earthquake activity and seismograms in near real time in a small museum on the ground floor of the building hosting Penn State's Geoscience Department.

  7. Informal science educators network project Association of Science-Technology Centers Incorporated. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-05-09

    Funding from the Department of Energy and the Annenberg/CPB Math and Science Project helped the Association of Science-Technology Centers Incorporated (ASTC) to establish and sustain an on-line community of informal science educators nationwide. The Project, called the Informal Science Educators Network Project (ISEN), is composed primarily of informal science educators and exhibit developers from science centers, museums, zoos, aquariums, botanical gardens, parks, and nature centers. Although museum-based professionals represent the majority of subscribers to ISEN, some classroom teachers and teacher educators from colleges and universities are also involved. Common to all ISEN participants is a commitment to school and science education reform. Specifically, funding from the Department of Energy helped to bootstrap the effort, providing Barrier Reduction Vouchers to 123 educators that enabled them to participate in ISEN. Among the major accomplishments of the Project are these: (1) assistance to 123 informal science educators to attend Internet training sessions held in connection with the Project and/or purchase hardware and software that linked them to the Internet; (2) Internet training for 153 informal science educators; (3) development of a listserv which currently has over 180 subscribers, an all-time high; (4) the opportunity to participate in four web chats involving informal science educators with noted researchers; (5) development of two sites on the World Wide Web linking informal science educators to Internet resources; (6) creation of an on-line collection of over 40 articles related to inquiry-based teaching and science education reform. In order to continue the momentum of the Project, ASTC has requested from the Annenberg/CPB Math and Science Project a no-cost extension through December 1997.

  8. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  9. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  10. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  11. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  12. 48 CFR 404.1103 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... employees shall not enter information into the Central Contractor Registration (CCR) database on behalf of... be advised to submit a written application to CCR for registration into the CCR database. USDA... registered in the CCR database shall be done via the CCR Internet Web site http://www.ccr.gov. This...

  13. Atomic Spectra Database (ASD)

    National Institute of Standards and Technology Data Gateway

    SRD 78 NIST Atomic Spectra Database (ASD) (Web, free access)   This database provides access and search capability for NIST critically evaluated data on atomic energy levels, wavelengths, and transition probabilities that are reasonably up-to-date. The NIST Atomic Spectroscopy Data Center has carried out these critical compilations.

  14. Six Online Periodical Databases: A Librarian's View.

    ERIC Educational Resources Information Center

    Willems, Harry

    1999-01-01

    Compares the following World Wide Web-based periodical databases, focusing on their usefulness in K-12 school libraries: EBSCO, Electric Library, Facts on File, SIRS, Wilson, and UMI. Search interfaces, display options, help screens, printing, home access, copyright restrictions, database administration, and making a decision are discussed. A…

  15. A new relational database structure and online interface for the HITRAN database

    NASA Astrophysics Data System (ADS)

    Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan

    2013-11-01

    A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.
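    The core idea of the new format, line-transition data split across linked tables and queried with SQL, can be sketched in a few lines. The two-table schema below is illustrative only (it is not the actual HITRANonline schema), and the line values are made up:

```python
import sqlite3

# Illustrative relational layout: molecules in one table, line transitions in
# another, linked by a foreign key, instead of one fixed-width record per line.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE molecule (
        id INTEGER PRIMARY KEY,
        formula TEXT NOT NULL
    );
    CREATE TABLE transition (
        id INTEGER PRIMARY KEY,
        molecule_id INTEGER REFERENCES molecule(id),
        wavenumber REAL,      -- cm^-1
        intensity REAL
    );
""")
db.execute("INSERT INTO molecule VALUES (1, 'H2O')")
db.executemany(
    "INSERT INTO transition (molecule_id, wavenumber, intensity) VALUES (?, ?, ?)",
    [(1, 1554.3, 2.1e-22), (1, 1603.8, 1.4e-21)],
)

# A typical query: all lines of one molecule within a wavenumber window.
rows = db.execute("""
    SELECT t.wavenumber, t.intensity
    FROM transition t JOIN molecule m ON t.molecule_id = m.id
    WHERE m.formula = 'H2O' AND t.wavenumber BETWEEN 1500 AND 1600
""").fetchall()
print(rows)  # [(1554.3, 2.1e-22)]
```

The advantage the abstract describes follows directly: new per-line parameters become new columns or linked tables rather than a change to a fixed record format, and the same SQL can be issued by the web GUI, command-line tools, or other programs.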

  16. TIGER 2010 Boundaries

    EPA Pesticide Factsheets

    This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes the State, County, and Census Block Groups boundaries from the TIGER shapefiles compiled into a single national coverage for each layer. The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB).

  17. Version VI of the ESTree db: an improved tool for peach transcriptome analysis

    PubMed Central

    Lazzari, Barbara; Caprera, Andrea; Vecchietti, Alberto; Merelli, Ivan; Barale, Francesca; Milanesi, Luciano; Stella, Alessandra; Pozzi, Carlo

    2008-01-01

    Background The ESTree database (db) is a collection of Prunus persica and Prunus dulcis EST sequences that in its current version encompasses 75,404 sequences from 3 almond and 19 peach libraries. Nine peach genotypes and four peach tissues are represented, from four fruit developmental stages. The aim of this work was to extend the existing ESTree db by adding new sequences and analysis programs. Particular care was given to the implementation of the web interface, which allows querying each of the database features. Results A Perl modular pipeline is the backbone of sequence analysis in the ESTree db project. Outputs obtained during the pipeline steps are automatically arrayed into the fields of a MySQL database. Apart from standard clustering and annotation analyses, version VI of the ESTree db encompasses new tools for tandem repeat identification, annotation against genomic Rosaceae sequences, and positioning on the database of oligomer sequences that were used in a peach microarray study. Furthermore, known protein patterns and motifs were identified by comparison to PROSITE. Based on data retrieved from sequence annotation against the UniProtKB database, a script was prepared to track the positions of homologous hits on the GO tree and build statistics on the distribution of ontologies in GO functional categories. EST mapping data were also integrated in the database. The PHP-based web interface was upgraded and extended, the aim being to enable querying the database according to all the biological aspects that can be investigated from the data available in the ESTree db. This is achieved by allowing multiple searches on logical subsets of sequences that represent different biological situations or features. Conclusions Version VI of the ESTree db offers a broad overview of peach gene expression. Sequence analysis results contained in the database, extensively linked to external related resources, represent a large amount of information that can be queried via the tools offered in the web interface. The flexibility and modularity of the ESTree analysis pipeline and of the web interface allowed the authors to set up similar structures for different datasets with limited manual intervention. PMID:18387211
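    Tracking annotation hits up the GO tree, as the script mentioned in the Results section does, amounts to rolling each hit up through its ancestors and counting. A minimal sketch, using a tiny made-up slice of the GO graph (the term IDs and edges below are illustrative):

```python
from collections import Counter

# child term -> parent terms (a tiny, hypothetical fragment of the GO DAG)
parents = {
    "GO:0006412": ["GO:0008152"],   # translation        -> metabolic process
    "GO:0006096": ["GO:0008152"],   # glycolysis         -> metabolic process
    "GO:0008152": ["GO:0008150"],   # metabolic process  -> biological_process
}

def ancestors(term):
    """All terms on every path from `term` up to the root, including itself."""
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, []))
    return seen

hits = ["GO:0006412", "GO:0006096"]  # annotations from sequence similarity
stats = Counter(a for h in hits for a in ancestors(h))
print(stats["GO:0008152"])  # 2: both hits roll up into 'metabolic process'
```

Counting over ancestors rather than over the raw hits is what makes the per-category statistics meaningful: specific annotations accumulate in the broader functional categories above them.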

  18. DECADE Web Portal: Integrating MaGa, EarthChem and GVP Will Further Our Knowledge on Earth Degassing

    NASA Astrophysics Data System (ADS)

    Cardellini, C.; Frigeri, A.; Lehnert, K. A.; Ash, J.; McCormick, B.; Chiodini, G.; Fischer, T. P.; Cottrell, E.

    2014-12-01

    The release of gases from the Earth's interior to the exosphere takes place in both volcanic and non-volcanic areas of the planet. Fully understanding this complex process requires the integration of geochemical, petrological and volcanological data. At present, major online data repositories relevant to studies of degassing are not linked and interoperable. We are developing interoperability between three of those, which will support more powerful synoptic studies of degassing. The three data systems that will make their data accessible via the DECADE portal are: (1) the Smithsonian Institution's Global Volcanism Program database (GVP) of volcanic activity data, (2) EarthChem databases for geochemical and geochronological data of rocks and melt inclusions, and (3) the MaGa database (Mapping Gas emissions) which contains compositional and flux data of gases released at volcanic and non-volcanic degassing sites. These databases are developed and maintained by institutions or groups of experts in a specific field, and data are archived in formats specific to these databases. In the framework of the Deep Earth Carbon Degassing (DECADE) initiative of the Deep Carbon Observatory (DCO), we are developing a web portal that will create a powerful search engine of these databases from a single entry point. The portal will return comprehensive multi-component datasets, based on the search criteria selected by the user. For example, a single geographic or temporal search will return data relating to compositions of emitted gases and erupted products, the age of the erupted products, and coincident activity at the volcano. The development of this level of capability for the DECADE Portal requires complete synergy between these databases, including availability of standard-based web services (WMS, WFS) at all data systems. 
    Data and metadata can thus be extracted from each system without interfering with each database's local schema and without replicating data, achieving integration at the DECADE web portal. The DECADE portal will enable new synoptic perspectives on the Earth degassing process. Other data systems can be easily plugged in using the existing framework. Our vision is to explore Earth degassing related datasets over previously unexplored spatial or temporal ranges.
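    The standards-based web services mentioned above are what make this kind of federation possible: a portal can pull features from any member database through an OGC WFS GetFeature request. A sketch of building such a request (the endpoint URL and layer name are hypothetical; the query parameters are the standard WFS 1.1.0 ones):

```python
from urllib.parse import urlencode

def wfs_getfeature_url(endpoint, layer, bbox):
    """Build a WFS GetFeature request URL for features inside a bounding box."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": layer,
        "bbox": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "outputFormat": "application/json",
    }
    return endpoint + "?" + urlencode(params)

# Hypothetical MaGa endpoint and layer: gas-emission sites around Campania.
url = wfs_getfeature_url("https://maga.example.org/wfs", "maga:gas_emissions",
                         (14.0, 40.5, 15.5, 41.0))
print(url)
```

Because the request format is standardized, the same client code works against any member database that exposes a WFS endpoint, which is why no local schema changes or data replication are needed.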

  19. Internet Librarian '98: Proceedings of the Internet Librarian Conference (2nd, Monterey, California, November 1-5, 1998).

    ERIC Educational Resources Information Center

    Nixon, Carol, Comp.; Dengler, M. Heide, Comp.; McHenry, Mare L., Comp.

    This proceedings contains 56 papers, presentation summaries, and/or slide presentations pertaining to the Internet, World Wide Web, intranets, and library systems. Topics include: Web databases in medium sized libraries; Dow Jones Intranet Toolkit; the future of online; Web searching and Internet basics; digital archiving; evolution of the online…

  20. A Holistic, Similarity-Based Approach for Personalized Ranking in Web Databases

    ERIC Educational Resources Information Center

    Telang, Aditya

    2011-01-01

    With the advent of the Web, the notion of "information retrieval" has acquired a completely new connotation and currently encompasses several disciplines ranging from traditional forms of text and data retrieval in unstructured and structured repositories to retrieval of static and dynamic information from the contents of the surface and deep Web.…
