Sample records for searchable web-based database

  1. A Web-based searchable system to confirm magnetic resonance compatibility of implantable medical devices in Japan: a preliminary study.

    PubMed

    Fujiwara, Yasuhiro; Fujioka, Hitoshi; Watanabe, Tomoko; Sekiguchi, Maiko; Murakami, Ryuji

    2017-09-01

    Confirmation of the magnetic resonance (MR) compatibility of implanted medical devices (IMDs) is mandatory before conducting magnetic resonance imaging (MRI) examinations. In Japan, few such confirmation methods are in use, and they are time-consuming. This study aimed to develop a Web-based searchable MR safety information system to confirm IMD compatibility and to evaluate the usefulness of the system. First, MR safety information for intravascular stents and stent grafts sold in Japan was gathered by interviewing 20 manufacturers. These IMDs were categorized based on the descriptions available on medical package inserts as: "MR Safe," "MR Conditional," "MR Unsafe," "Unknown," and "No Medical Package Insert Available". An MR safety information database for implants was created based on previously proposed item lists. Finally, a Web-based searchable system was developed using this database. A questionnaire was given to health-care personnel in Japan to evaluate the usefulness of this system. Seventy-nine datasets were collected using information provided by 12 manufacturers and by investigating the medical packaging of the IMDs. Although the datasets must be updated by collecting data from other manufacturers, this system facilitates the easy and rapid acquisition of MR safety information for IMDs, thereby improving the safety of MRI examinations.

  2. Digging Deeper: The Deep Web.

    ERIC Educational Resources Information Center

    Turner, Laura

    2001-01-01

    Focuses on the Deep Web, defined as Web content in searchable databases of the type that can be found only by direct query. Discusses the problems of indexing; inability to find information not indexed in the search engine's database; and metasearch engines. Describes 10 sites created to access online databases or directly search them. Lists ways…

  3. Phytophthora-ID.org: A sequence-based Phytophthora identification tool

    Treesearch

    N.J. Grünwald; F.N. Martin; M.M. Larsen; C.M. Sullivan; C.M. Press; M.D. Coffey; E.M. Hansen; J.L. Parke

    2010-01-01

    Contemporary species identification relies strongly on sequence-based identification, yet resources for identification of many fungal and oomycete pathogens are rare. We developed two web-based, searchable databases for rapid identification of Phytophthora spp. based on sequencing of the internal transcribed spacer (ITS) or the cytochrome oxidase...

  4. Deep Web video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None Available

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  5. Deep Web video

    ScienceCinema

    None Available

    2018-02-06

    To make the web work better for science, OSTI has developed state-of-the-art technologies and services including a deep web search capability. The deep web includes content in searchable databases available to web users but not accessible by popular search engines, such as Google. This video provides an introduction to the deep web search engine.

  6. WEB-BASED DATABASE ON RENEWAL TECHNOLOGIES ...

    EPA Pesticide Factsheets

    As U.S. utilities continue to shore up their aging infrastructure, renewal needs now represent over 43% of annual expenditures compared to new construction for drinking water distribution and wastewater collection systems (Underground Construction [UC], 2016). An increased understanding of renewal options will ultimately assist drinking water utilities in reducing water loss and help wastewater utilities to address infiltration and inflow issues in a cost-effective manner. It will also help to extend the service lives of both drinking water and wastewater mains. This research effort involved collecting case studies on the use of various trenchless pipeline renewal methods and providing the information in an online searchable database. The overall objective was to further support technology transfer and information sharing regarding emerging and innovative renewal technologies for water and wastewater mains. The result of this research is a Web-based, searchable database that utility personnel can use to obtain technology performance and cost data, as well as case study references. The renewal case studies include: technologies used; the conditions under which the technology was implemented; costs; lessons learned; and utility contact information. The online database also features a data mining tool for automated review of the technologies selected and cost data. Based on a review of the case study results and industry data, several findings are presented on tren

  7. Development and Uses of Offline and Web-Searchable Metabolism Databases - The Case of Benzo[a]pyrene.

    PubMed

    Rendic, Slobodan P; Guengerich, Frederick P

    2018-01-01

    The present work describes the development of offline and web-searchable metabolism databases for drugs, other chemicals, and physiological compounds in human and model species, prompted by the large amount of data published after 1990. The intent was to provide a rapid and accurate approach to published data to be applied both in science and to assist therapy. Searches for the data were done using the PubMed database, accessing the Medline database of references and abstracts. In addition, data presented at scientific conferences (e.g., ISSX conferences) are included, covering the publishing period beginning with the year 1976. Application of the data is illustrated by the properties of benzo[a]pyrene (B[a]P) and its metabolites. Analyses show higher activity of P450 1A1 for activation of the (-)-isomer of trans-B[a]P-7,8-diol, whereas P450 1B1 exerts higher activity for the (+)-isomer. P450 1A2 showed equally low activity in the metabolic activation of both isomers. The information collected in the databases is applicable to the prediction of metabolic drug-drug and/or drug-chemical interactions in clinical and environmental studies. The data on the metabolism of a searched compound (exemplified by benzo[a]pyrene and its metabolites) also indicate the toxicological properties of the products of specific reactions. The offline and web-searchable databases have a wide range of applications (e.g., computer-assisted drug design and development, optimization of clinical therapy, toxicological applications) and can inform adjustments to everyday lifestyle.

  8. Automated ocean color product validation for the Southern California Bight

    NASA Astrophysics Data System (ADS)

    Davis, Curtiss O.; Tufillaro, Nicholas; Jones, Burt; Arnone, Robert

    2012-06-01

    Automated match-ups allow us to maintain and improve the products of current satellite ocean color sensors (MODIS, MERIS) and new sensors (VIIRS). As part of the VIIRS mission preparation, we have created a web-based automated match-up tool that provides access to searchable fields for date, site, and product, and creates match-ups between satellite (MODIS, MERIS, VIIRS) and in-situ measurements (HyperPRO and SeaPRISM). The back end of the system is a MySQL database, and the front end is a PHP web portal with pull-down menus for the searchable fields. Based on the selections, graphics showing the match-ups and statistics are generated, and ASCII files of the match-up data are created for download. Examples are shown for matching the satellite data with data from the Platform Eureka SeaPRISM off L.A. Harbor in the Southern California Bight.
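
    A minimal sketch of the "searchable fields" pattern described above, using Python's built-in sqlite3 module in place of the MySQL back end; the table name and columns (obs_date, site, product, satellite_value, insitu_value) are hypothetical, since the real schema is not given in the abstract.

```python
# Hypothetical match-up table queried by date, site, and product,
# mirroring the pull-down-menu search fields described in the abstract.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE matchups (
           obs_date TEXT, site TEXT, product TEXT,
           satellite_value REAL, insitu_value REAL)"""
)
conn.execute(
    "INSERT INTO matchups VALUES "
    "('2012-06-01', 'Platform Eureka', 'chlor_a', 0.42, 0.39)"
)

def find_matchups(site, product, start_date, end_date):
    """Return match-ups for one site/product over a date range."""
    cur = conn.execute(
        """SELECT obs_date, satellite_value, insitu_value
           FROM matchups
           WHERE site = ? AND product = ? AND obs_date BETWEEN ? AND ?""",
        (site, product, start_date, end_date),
    )
    return cur.fetchall()

print(find_matchups("Platform Eureka", "chlor_a", "2012-01-01", "2012-12-31"))
```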

  9. Design and development of a web-based application for diabetes patient data management.

    PubMed

    Deo, S S; Deobagkar, D N; Deobagkar, Deepti D

    2005-01-01

    A web-based database management system developed for collecting, managing and analysing information on diabetes patients is described here. It is a searchable, client-server, relational database application, developed on the Windows platform using Oracle, Active Server Pages (ASP), Visual Basic Script (VB Script) and Java Script. The software is menu-driven and allows authorized healthcare providers to access, enter, update and analyse patient information. Graphical representations of data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system were carried out; the system at present holds records of 500 diabetes patients and has been found useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analyses. It has proved to be of significant advantage to the healthcare provider as compared to the paper-based system.

  10. Footprint Database and web services for the Herschel space observatory

    NASA Astrophysics Data System (ADS)

    Verebélyi, Erika; Dobos, László; Kiss, Csaba

    2015-08-01

    Using all telemetry and observational meta-data, we created a searchable database of Herschel observation footprints. Data from the Herschel space observatory is freely available for everyone but no uniformly processed catalog of all observations has been published yet. As a first step, we unified the data model for all three Herschel instruments in all observation modes and compiled a database of sky coverage information. As opposed to methods using a pixellation of the sphere, in our database, sky coverage is stored in exact geometric form allowing for precise area calculations. Indexing of the footprints allows for very fast search among observations based on pointing, time, sky coverage overlap and meta-data. This enables us, for example, to find moving objects easily in Herschel fields. The database is accessible via a web site and also as a set of REST web service functions which makes it usable from program clients like Python or IDL scripts. Data is available in various formats including Virtual Observatory standards.
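
    A hedged sketch of programmatic access to a REST service of this kind from a Python client; the base URL and parameter names below are placeholders, since the abstract does not document the actual interface.

```python
# Illustrative only: query a footprint-search REST endpoint for footprints
# overlapping a cone on the sky. The URL and parameters are hypothetical.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://example.org/herschel/footprints/search"  # placeholder

def search_footprints(ra_deg, dec_deg, radius_deg):
    """Return the decoded JSON response for a cone-search query."""
    params = urllib.parse.urlencode(
        {"ra": ra_deg, "dec": dec_deg, "radius": radius_deg, "format": "json"}
    )
    with urllib.request.urlopen(f"{BASE_URL}?{params}") as resp:
        return json.load(resp)

# Example call (requires the real service URL):
# observations = search_footprints(83.82, -5.39, 0.5)
```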

  11. Nuclear data made easily accessible through the Notre Dame Nuclear Database

    NASA Astrophysics Data System (ADS)

    Khouw, Timothy; Lee, Kevin; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani

    2014-09-01

    In 1994, the NNDC revolutionized nuclear research by providing a colorful, clickable, searchable database over the internet. Over the last twenty years, web technology has evolved dramatically. Our project, the Notre Dame Nuclear Database, aims to provide a more comprehensive and broadly searchable interactive body of data. The database can be searched by an array of filters, including metadata such as the facility where a measurement was made, the author(s), or the date of publication for the datum of interest. The user interface takes full advantage of HTML, a web markup language; CSS (cascading style sheets), which define the aesthetics of the website; and JavaScript, a language that can process complex data. A command-line interface that interacts with the database directly from a user's local machine is also supported, providing single-command access to data. This is possible through the use of a standardized API (application programming interface) that relies upon well-defined filtering variables to produce customized search results. We offer an innovative chart of nuclides utilizing scalable vector graphics (SVG) to deliver users an unsurpassed level of interactivity supported on all computers and mobile devices. We will present a functional demo of our database at the conference.
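
    As a rough illustration of the single-command, filter-based access described above, the sketch below maps command-line flags onto an API query string; the endpoint and parameter names are assumptions, not the project's published interface.

```python
# Build a filtered query URL from command-line options (hypothetical API).
import argparse
import urllib.parse

API_ROOT = "https://example.edu/ndnd/api/search"  # placeholder endpoint

def build_query(args):
    """Turn the provided command-line filters into an API query URL."""
    filters = {k: v for k, v in vars(args).items() if v is not None}
    return f"{API_ROOT}?{urllib.parse.urlencode(filters)}"

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Query a nuclear data API")
    parser.add_argument("--nuclide", help="nuclide of interest, e.g. 26Al")
    parser.add_argument("--author", help="author of the measurement")
    parser.add_argument("--facility", help="facility where it was measured")
    parser.add_argument("--year", help="publication year")
    args = parser.parse_args()
    # A single-command client would request this URL and print the result.
    print(build_query(args))
```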

  12. COMET Multimedia modules and objects in the digital library system

    NASA Astrophysics Data System (ADS)

    Spangler, T. C.; Lamos, J. P.

    2003-12-01

    Over the past ten years of developing Web- and CD-ROM-based training materials, the Cooperative Program for Operational Meteorology, Education and Training (COMET) has created a unique archive of almost 10,000 multimedia objects and some 50 web-based interactive multimedia modules on various aspects of weather and weather forecasting. These objects and modules, containing illustrations, photographs, animations, video sequences, and audio files, are potentially a valuable resource for university faculty and students, forecasters, emergency managers, public school educators, and other individuals and groups needing such materials for educational use. The COMET modules are available on the COMET educational web site http://www.meted.ucar.edu, and the COMET Multimedia Database (MMDB) makes a collection of the multimedia objects available in a searchable online database for viewing and download over the Internet. Some 3200 objects are already available at the MMDB Website: http://archive.comet.ucar.edu/moria/

  13. An automated, web-enabled and searchable database system for archiving electrogram and related data from implantable cardioverter defibrillators.

    PubMed

    Zong, W; Wang, P; Leung, B; Moody, G B; Mark, R G

    2002-01-01

    The advent of implantable cardioverter defibrillators (ICDs) has resulted in significant reductions in mortality in patients at high risk for sudden cardiac death. Extensive related basic research and clinical investigation continue. ICDs typically record intracardiac electrograms and inter-beat intervals along with device settings during episodes of device delivery of therapy. Researchers wishing to study these data further have until now been limited to viewing paper plots. In support of multi-center clinical studies of patients with ICDs, we have developed a web based searchable ICD data archiving system, which allows users to use a web browser to upload ICD data from diskettes to a server where the data are automatically processed and archived. Users can view and download the archived ICD data directly via the web. The entire system is built from open source software. At present more than 500 patient ICD data sets have been uploaded to and archived in the system. This project will be of value not only to those who wish to conduct research using ICD data, but also to clinicians who need to archive and review ICD data collected from their patients.

  14. NCBI GEO: archive for functional genomics data sets--update.

    PubMed

    Barrett, Tanya; Wilhite, Stephen E; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra

    2013-01-01

    The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data.
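
    For programmatic queries of GEO metadata, one commonly used route is NCBI's E-utilities interface; the sketch below is a minimal example of that approach (it is not GEO2R, the R-based tool named in the abstract), and exact parameters and usage limits should be checked against current NCBI documentation.

```python
# Search GEO DataSets (E-utilities db=gds) for records matching a term.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_geo(term, retmax=5):
    """Return UIDs of GEO DataSets matching a free-text query."""
    params = urllib.parse.urlencode(
        {"db": "gds", "term": term, "retmax": retmax, "retmode": "json"}
    )
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        return json.load(resp)["esearchresult"]["idlist"]

# Example (requires network access):
# print(search_geo("expression profiling by high throughput sequencing"))
```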

  15. DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY ...

    EPA Pesticide Factsheets

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, SAR model development, or building of chemical relational databases (CRD). The Distributed Structure-Searchable Toxicity (DSSTox) Public Database Network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: 1) to adopt and encourage the use of a common standard file format (SDF) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; 2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data s

  16. English semantic word-pair norms and a searchable Web portal for experimental stimulus creation.

    PubMed

    Buchanan, Erin M; Holmes, Jessica L; Teasley, Marilee L; Hutchison, Keith A

    2013-09-01

    As researchers explore the complexity of memory and language hierarchies, the need to expand normed stimulus databases is growing. Therefore, we present 1,808 words, paired with their features and concept-concept information, that were collected using previously established norming methods (McRae, Cree, Seidenberg, & McNorgan Behavior Research Methods 37:547-559, 2005). This database supplements existing stimuli and complements the Semantic Priming Project (Hutchison, Balota, Cortese, Neely, Niemeyer, Bengson, & Cohen-Shikora 2010). The data set includes many types of words (nouns, verbs, adjectives, etc.), expanding on previous collections of nouns and verbs (Vinson & Vigliocco Journal of Neurolinguistics 15:317-351, 2008). We describe the relation between our norms and other semantic norms and give a short review of word-pair norms. The stimuli are provided in conjunction with a searchable Web portal that allows researchers to create a set of experimental stimuli without prior programming knowledge. When researchers use this new database in tandem with previous norming efforts, precise stimulus sets can be created for future research endeavors.

  17. Spectroscopic data for an astronomy database

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Smith, Peter L.

    1995-01-01

    Very few of the atomic and molecular data used in analyses of astronomical spectra are currently available in World Wide Web (WWW) databases that are searchable with hypertext browsers. We have begun to rectify this situation by making extensive atomic data files available with simple search procedures. We have also established links to other on-line atomic and molecular databases. All can be accessed from our database homepage with URL: http://cfa-www.harvard.edu/amp/data/amdata.html.

  18. NCBI GEO: archive for functional genomics data sets—update

    PubMed Central

    Barrett, Tanya; Wilhite, Stephen E.; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F.; Tomashevsky, Maxim; Marshall, Kimberly A.; Phillippy, Katherine H.; Sherman, Patti M.; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L.; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra

    2013-01-01

    The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data. PMID:23193258

  19. ENVIRONMENTAL EFFECTS OF DREDGING AND DISPOSAL (E2-D2)

    EPA Science Inventory

    US Army Corps of Engineers public web site for the "Environmental Effects of Dredging and Disposal" ("E2-D2") searchable database of published reports and studies about environmental impacts associated with dredging and disposal operations. Many of the reports and studies are ava...

  20. Open Clients for Distributed Databases

    NASA Astrophysics Data System (ADS)

    Chayes, D. N.; Arko, R. A.

    2001-12-01

    We are actively developing a collection of open-source example clients that demonstrate use of our "back end" data management infrastructure. The data management system is reported elsewhere at this meeting (Arko and Chayes: A Scaleable Database Infrastructure). In addition to their primary goal of being examples for others to build upon, some of these clients may have limited utility in themselves. More information about the clients and the data infrastructure is available online at http://data.ldeo.columbia.edu. The examples to be demonstrated include several web-based clients: those developed for the Community Review System of the Digital Library for Earth System Education, a real-time watch standers' log book, an offline interface for using log book entries, and a simple client to search multibeam metadata; these and others are Internet-enabled, generally web-based front ends that support searches against one or more relational databases using industry-standard SQL queries. In addition to the web-based clients, simple SQL searches from within Excel and similar applications will be demonstrated. By defining, documenting and publishing a clear interface to the fully searchable databases, it becomes relatively easy to construct client interfaces that are optimized for specific applications, in comparison to building a monolithic data and user interface system.

  1. U.S. Quaternary Fault and Fold Database Released

    NASA Astrophysics Data System (ADS)

    Haller, Kathleen M.; Machette, Michael N.; Dart, Richard L.; Rhea, B. Susan

    2004-06-01

    A comprehensive online compilation of Quaternary-age faults and folds throughout the United States was recently released by the U.S. Geological Survey, with cooperation from state geological surveys, academia, and the private sector. The Web site at http://Qfaults.cr.usgs.gov/ contains searchable databases and related geo-spatial data that characterize earthquake-related structures that could be potential seismic sources for large-magnitude (M > 6) earthquakes.

  2. The BioExtract Server: a web-based bioinformatic workflow platform

    PubMed Central

    Lushbough, Carol M.; Jennewein, Douglas M.; Brendel, Volker P.

    2011-01-01

    The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet. PMID:21546552

  3. A World Wide Web (WWW) server database engine for an organelle database, MitoDat.

    PubMed

    Lemkin, P F; Chipperfield, M; Merril, C; Zullo, S

    1996-03-01

    We describe a simple database search engine, "dbEngine", which may be used to quickly create a searchable database on a World Wide Web (WWW) server. Data may be prepared from spreadsheet programs (such as Excel) or from tables exported from relational database systems. This Common Gateway Interface (CGI-BIN) program is used with a WWW server such as those available commercially or from the National Center for Supercomputing Applications (NCSA) or CERN. Its capabilities include: (i) searching records by combinations of terms connected with ANDs or ORs; (ii) returning search results as hypertext links to other WWW database servers; (iii) mapping lists of literature reference identifiers to the full references; (iv) creating bidirectional hypertext links between pictures and the database. DbEngine has been used to support the MitoDat database (Mendelian and non-Mendelian inheritance associated with the Mitochondrion) on the WWW.
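
    The AND/OR term matching that dbEngine performs can be illustrated with a short, self-contained sketch; the records and field names below are invented for the example and do not come from MitoDat.

```python
# Search records by combinations of terms connected with ANDs or ORs.
records = [
    {"id": "M001", "description": "mitochondrial inner membrane protein"},
    {"id": "M002", "description": "nuclear-encoded mitochondrial enzyme"},
    {"id": "M003", "description": "cytosolic chaperone"},
]

def matches(text, all_terms=(), any_terms=()):
    """True if text contains every AND term and, if given, any OR term."""
    text = text.lower()
    return (all(t.lower() in text for t in all_terms)
            and (not any_terms or any(t.lower() in text for t in any_terms)))

hits = [r for r in records
        if matches(r["description"], all_terms=["mitochondrial"],
                   any_terms=["membrane", "enzyme"])]
print([r["id"] for r in hits])  # -> ['M001', 'M002']
```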

  4. The North Carolina State University Libraries Search Experience: Usability Testing Tabbed Search Interfaces for Academic Libraries

    ERIC Educational Resources Information Center

    Teague-Rector, Susan; Ballard, Angela; Pauley, Susan K.

    2011-01-01

    Creating a learnable, effective, and user-friendly library Web site hinges on providing easy access to search. Designing a search interface for academic libraries can be particularly challenging given the complexity and range of searchable library collections, such as bibliographic databases, electronic journals, and article search silos. Library…

  5. Interactive Multi-Instrument Database of Solar Flares

    NASA Technical Reports Server (NTRS)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases applied to the growing amount of high-spatial and multi-wavelength resolution, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.

  6. Piloting a Searchable Database of Dropout Prevention Programs in Nine Low-Income Urban School Districts in the Northeast and Islands Region. Issues & Answers. REL 2008-No. 046

    ERIC Educational Resources Information Center

    Myint-U, Athi; O'Donnell, Lydia; Osher, David; Petrosino, Anthony; Stueve, Ann

    2008-01-01

    Despite evidence that some dropout prevention programs have positive effects, whether districts in the region are using such evidence-based programs has not been documented. To generate and share knowledge on dropout programs and policies, this report details a project to create a searchable database with information on target audiences,…

  7. Distributed structure-searchable toxicity (DSSTox) public database network: a proposal.

    PubMed

    Richard, Ann M; Williams, ClarLynda R

    2002-01-29

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and use for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, Structure-Activity Relationship (SAR) model development, or building of chemical relational databases (CRD). The distributed structure-searchable toxicity (DSSTox) public database network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has the following major elements: (1) to adopt and encourage the use of a common standard file format (structure data file (SDF)) for public toxicity databases that includes chemical structure, text and property information, and that can easily be imported into available CRD applications; (2) to implement a distributed source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files, and that will effectively link knowledgeable toxicity data sources with potential users of these data from other disciplines (such as chemistry, modeling, and computer science); and (3) to engage public/commercial/academic/industry groups in contributing to and expanding this community-wide, public data sharing and distribution effort. The DSSTox project's overall aims are to effect the closer association of chemical structure information with existing toxicity data, and to promote and facilitate structure-based exploration of these data within a common chemistry-based framework that spans toxicological disciplines.
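
    The SDF format adopted above attaches named data fields to each chemical structure record; the sketch below parses only that data-field portion from a hand-made fragment (the molfile connection table is omitted, and the text is not taken from any real DSSTox file).

```python
# Read the "> <TAG>" data fields of one SDF record into a dictionary.
SAMPLE_SDF_FIELDS = """\
>  <CAS_RN>
50-00-0

>  <ChemicalName>
formaldehyde

$$$$
"""

def parse_sdf_fields(text):
    """Return {tag: value} for the data fields of one SDF record."""
    fields, tag, lines = {}, None, []
    for line in text.splitlines():
        if line.startswith(">") and "<" in line:   # start of a data field
            tag = line.split("<", 1)[1].rstrip(">").strip()
            lines = []
        elif line.strip() == "$$$$":               # end of the record
            break
        elif tag is not None:
            if line.strip() == "":                 # blank line ends the field
                fields[tag] = "\n".join(lines).strip()
                tag = None
            else:
                lines.append(line)
    return fields

print(parse_sdf_fields(SAMPLE_SDF_FIELDS))
# -> {'CAS_RN': '50-00-0', 'ChemicalName': 'formaldehyde'}
```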

  8. ClassLess: A Comprehensive Database of Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Hillenbrand, Lynne; Baliber, Nairn

    2015-01-01

    We have designed and constructed a database housing published measurements of Young Stellar Objects (YSOs) within ~1 kpc of the Sun. ClassLess, so called because it includes YSOs in all stages of evolution, is a relational database in which user interaction is conducted via HTML web browsers, queries are performed in scientific language, and all data are linked to the sources of publication. Each star is associated with a cluster (or clusters), and both spatially resolved and unresolved measurements are stored, allowing proper use of data from multiple star systems. With this fully searchable tool, myriad ground- and space-based instruments and surveys across wavelength regimes can be exploited. In addition to primary measurements, the database self-consistently calculates and serves higher-level data products such as extinction, luminosity, and mass. As a result, searches for young stars with specific physical characteristics can be completed with just a few mouse clicks.

  9. The Cancer Epidemiology Descriptive Cohort Database: A Tool to Support Population-Based Interdisciplinary Research

    PubMed Central

    Kennedy, Amy E.; Khoury, Muin J.; Ioannidis, John P.A.; Brotzman, Michelle; Miller, Amy; Lane, Crystal; Lai, Gabriel Y.; Rogers, Scott D.; Harvey, Chinonye; Elena, Joanne W.; Seminara, Daniela

    2017-01-01

    Background: We report on the establishment of a web-based Cancer Epidemiology Descriptive Cohort Database (CEDCD). The CEDCD’s goals are to enhance awareness of resources, facilitate interdisciplinary research collaborations, and support existing cohorts for the study of cancer-related outcomes. Methods: Comprehensive descriptive data were collected from large cohorts established to study cancer as primary outcome using a newly developed questionnaire. These included an inventory of baseline and follow-up data, biospecimens, genomics, policies, and protocols. Additional descriptive data extracted from publicly available sources were also collected. This information was entered in a searchable and publicly accessible database. We summarized the descriptive data across cohorts and reported the characteristics of this resource. Results: As of December 2015, the CEDCD includes data from 46 cohorts representing more than 6.5 million individuals (29% ethnic/racial minorities). Overall, 78% of the cohorts have collected blood at least once, 57% at multiple time points, and 46% collected tissue samples. Genotyping has been performed by 67% of the cohorts, while 46% have performed whole-genome or exome sequencing in subsets of enrolled individuals. Information on medical conditions other than cancer has been collected in more than 50% of the cohorts. More than 600,000 incident cancer cases and more than 40,000 prevalent cases are reported, with 24 cancer sites represented. Conclusions: The CEDCD assembles detailed descriptive information on a large number of cancer cohorts in a searchable database. Impact: Information from the CEDCD may assist the interdisciplinary research community by facilitating identification of well-established population resources and large-scale collaborative and integrative research. PMID:27439404

  10. A searchable database for the genome of Phomopsis longicolla (isolate MSPL 10-6).

    PubMed

    Darwish, Omar; Li, Shuxian; May, Zane; Matthews, Benjamin; Alkharouf, Nadim W

    2016-01-01

    Phomopsis longicolla (syn. Diaporthe longicolla) is an important seed-borne fungal pathogen that primarily causes Phomopsis seed decay (PSD) in most soybean production areas worldwide. This disease severely decreases soybean seed quality by reducing seed viability and oil quality, altering seed composition, and increasing frequencies of moldy and/or split beans. To facilitate investigation of the genetic base of fungal virulence factors and understand the mechanism of disease development, we designed and developed a database for P. longicolla isolate MSPL 10-6 that contains information about the genome assemblies (contigs), gene models, gene descriptions and GO functional ontologies. A web-based front end to the database was built using ASP.NET, which allows researchers to search and mine the genome of this important fungus. This database represents the first reported genome database for a seed-borne fungal pathogen in the Diaporthe-Phomopsis complex. The database will also be a valuable resource for research and agricultural communities. It will aid in the development of new control strategies for this pathogen. http://bioinformatics.towson.edu/Phomopsis_longicolla/HomePage.aspx.

  11. A searchable database for the genome of Phomopsis longicolla (isolate MSPL 10-6)

    PubMed Central

    May, Zane; Matthews, Benjamin; Alkharouf, Nadim W.

    2016-01-01

    Phomopsis longicolla (syn. Diaporthe longicolla) is an important seed-borne fungal pathogen that primarily causes Phomopsis seed decay (PSD) in most soybean production areas worldwide. This disease severely decreases soybean seed quality by reducing seed viability and oil quality, altering seed composition, and increasing frequencies of moldy and/or split beans. To facilitate investigation of the genetic base of fungal virulence factors and understand the mechanism of disease development, we designed and developed a database for P. longicolla isolate MSPL 10-6 that contains information about the genome assemblies (contigs), gene models, gene descriptions and GO functional ontologies. A web-based front end to the database was built using ASP.NET, which allows researchers to search and mine the genome of this important fungus. This database represents the first reported genome database for a seed-borne fungal pathogen in the Diaporthe–Phomopsis complex. The database will also be a valuable resource for research and agricultural communities. It will aid in the development of new control strategies for this pathogen. Availability: http://bioinformatics.towson.edu/Phomopsis_longicolla/HomePage.aspx PMID:28197060

  12. 41. DISCOVERY, SEARCH, AND COMMUNICATION OF TEXTUAL KNOWLEDGE RESOURCES IN DISTRIBUTED SYSTEMS a. Discovering and Utilizing Knowledge Sources for Metasearch Knowledge Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamora, Antonio

    Advanced Natural Language Processing Tools for Web Information Retrieval, Content Analysis, and Synthesis. The goal of this SBIR was to implement and evaluate several advanced Natural Language Processing (NLP) tools and techniques to enhance the precision and relevance of search results by analyzing and augmenting search queries and by helping to organize the search output obtained from heterogeneous databases and web pages containing textual information of interest to DOE and the scientific-technical user communities in general. The SBIR investigated 1) the incorporation of spelling checkers in search applications, 2) identification of significant phrases and concepts using a combination of linguistic and statistical techniques, and 3) enhancement of the query interface and search retrieval results through the use of semantic resources, such as thesauri. A search program with a flexible query interface was developed to search reference databases with the objective of enhancing search results from web queries or queries of specialized search systems such as DOE's Information Bridge. The DOE ETDE/INIS Joint Thesaurus was processed to create a searchable database. Term frequencies and term co-occurrences were used to enhance the web information retrieval by providing algorithmically derived objective criteria to organize relevant documents into clusters containing significant terms. A thesaurus provides an authoritative overview and classification of a field of knowledge. By organizing the results of a search using the thesaurus terminology, the output is more meaningful than when the results are just organized based on the terms that co-occur in the retrieved documents, some of which may not be significant. An attempt was made to take advantage of the hierarchy provided by broader and narrower terms, as well as other field-specific information in the thesauri. The search program uses linguistic morphological routines to find relevant entries regardless of whether terms are stored in singular or plural form. Implementation of additional inflectional morphology processes for verbs can enhance retrieval further, but this has to be balanced against the possibility of broadening the results too much. In addition to the DOE energy thesaurus, other sources of specialized organized knowledge such as the Medical Subject Headings (MeSH), the Unified Medical Language System (UMLS), and Wikipedia were investigated. The supporting role of the NLP thesaurus search program was enhanced by incorporating spelling aid and a part-of-speech tagger to cope with misspellings in the queries, to determine the grammatical roles of the query words, and to identify nouns for special processing. To improve precision, multiple modes of searching were implemented, including Boolean operators and field-specific searches. Programs to convert a thesaurus or reference file into searchable support files can be deployed easily, and the resulting files are immediately searchable to produce relevance-ranked results with built-in spelling aid, morphological processing, and advanced search logic. Demonstration systems were built for several databases, including the DOE energy thesaurus.
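
    A toy version of two of the techniques described above, singular/plural normalization before lookup and Boolean AND matching of query terms, is sketched below; the thesaurus entries are invented, and real ETDE/INIS thesaurus processing (broader/narrower terms, co-occurrence statistics) is far more elaborate.

```python
# Normalize plural query terms, then require every term to resolve (AND).
THESAURUS = {
    "reactor": {"broader": ["nuclear facilities"], "narrower": ["breeder reactors"]},
    "neutron flux": {"broader": ["radiation fields"], "narrower": []},
}

def normalize(term):
    """Very rough plural -> singular reduction for lookup purposes."""
    term = term.lower().strip()
    if term.endswith("ies"):
        return term[:-3] + "y"
    if term.endswith("s") and not term.endswith("ss"):
        return term[:-1]
    return term

def lookup_all(terms):
    """Boolean AND: return entries only if every query term is found."""
    found = {}
    for t in terms:
        key = normalize(t)
        if key not in THESAURUS:
            return {}                     # one miss fails the AND query
        found[key] = THESAURUS[key]
    return found

print(lookup_all(["reactors"]))           # plural form still matches
print(lookup_all(["reactors", "xenon"]))  # -> {} (AND fails on 'xenon')
```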

  13. A novel application of the MIRC repository in medical education.

    PubMed

    Roth, Christopher J; Weadock, William J; Dipietro, Michael A

    2005-06-01

    Medical students on the radiology elective in our institution create electronic presentations to present to each other as part of the requirements for the rotation. Access to previous students' presentations was provided via the web-based Medical Imaging Resource Center (MIRC) system, a project created and supported by the Radiological Society of North America (RSNA). RadPix Power 2 MIRC (Weadock Software, LLC, Ann Arbor, MI) software converted the Microsoft PowerPoint (Redmond, WA) presentations to a MIRC-compatible format. The textual information on each slide is searchable across the entire MIRC database. Future students will be able to benefit from the work of their predecessors.

  14. StimulStat: A lexical database for Russian.

    PubMed

    Alexeeva, Svetlana; Slioussar, Natalia; Chernova, Daria

    2017-12-07

    In this article, we present StimulStat, a lexical database for the Russian language in the form of a web application. The database contains more than 52,000 of the most frequent Russian lemmas and more than 1.7 million word forms derived from them. These lemmas and forms are characterized according to more than 70 properties that have been shown to be relevant for psycholinguistic research, including frequency, length, phonological and grammatical properties, orthographic and phonological neighborhood frequency and size, grammatical ambiguity, homonymy and polysemy. Some properties were retrieved from various dictionaries and are presented collectively in searchable form for the first time; the others were computed specifically for the database. The database can be accessed freely at http://stimul.cognitivestudies.ru.
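
    A small sketch of the kind of property-based stimulus selection such a database supports; the entries and the property names used here (frequency per million, length in letters) are invented for illustration rather than drawn from StimulStat.

```python
# Filter a toy lexicon by frequency and length to assemble a stimulus set.
LEXICON = [
    {"lemma": "дом",     "freq_ipm": 580.0, "length": 3},
    {"lemma": "книга",   "freq_ipm": 310.5, "length": 5},
    {"lemma": "паровоз", "freq_ipm": 6.2,   "length": 7},
]

def select_stimuli(min_freq, max_length):
    """Return lemmas at or above a frequency cutoff and at or below a length cutoff."""
    return [e["lemma"] for e in LEXICON
            if e["freq_ipm"] >= min_freq and e["length"] <= max_length]

print(select_stimuli(min_freq=100.0, max_length=5))  # -> ['дом', 'книга']
```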

  15. The Cancer Epidemiology Descriptive Cohort Database: A Tool to Support Population-Based Interdisciplinary Research.

    PubMed

    Kennedy, Amy E; Khoury, Muin J; Ioannidis, John P A; Brotzman, Michelle; Miller, Amy; Lane, Crystal; Lai, Gabriel Y; Rogers, Scott D; Harvey, Chinonye; Elena, Joanne W; Seminara, Daniela

    2016-10-01

    We report on the establishment of a web-based Cancer Epidemiology Descriptive Cohort Database (CEDCD). The CEDCD's goals are to enhance awareness of resources, facilitate interdisciplinary research collaborations, and support existing cohorts for the study of cancer-related outcomes. Comprehensive descriptive data were collected from large cohorts established to study cancer as primary outcome using a newly developed questionnaire. These included an inventory of baseline and follow-up data, biospecimens, genomics, policies, and protocols. Additional descriptive data extracted from publicly available sources were also collected. This information was entered in a searchable and publicly accessible database. We summarized the descriptive data across cohorts and reported the characteristics of this resource. As of December 2015, the CEDCD includes data from 46 cohorts representing more than 6.5 million individuals (29% ethnic/racial minorities). Overall, 78% of the cohorts have collected blood at least once, 57% at multiple time points, and 46% collected tissue samples. Genotyping has been performed by 67% of the cohorts, while 46% have performed whole-genome or exome sequencing in subsets of enrolled individuals. Information on medical conditions other than cancer has been collected in more than 50% of the cohorts. More than 600,000 incident cancer cases and more than 40,000 prevalent cases are reported, with 24 cancer sites represented. The CEDCD assembles detailed descriptive information on a large number of cancer cohorts in a searchable database. Information from the CEDCD may assist the interdisciplinary research community by facilitating identification of well-established population resources and large-scale collaborative and integrative research. Cancer Epidemiol Biomarkers Prev; 25(10); 1392-401. ©2016 AACR. ©2016 American Association for Cancer Research.

  16. Distributed Structure-Searchable Toxicity (DSSTox) Database

    EPA Pesticide Factsheets

    The Distributed Structure-Searchable Toxicity network provides a public forum for publishing downloadable, structure-searchable, standardized chemical structure files associated with chemical inventories or toxicity data sets of environmental relevance.

  17. MEPD: a Medaka gene expression pattern database

    PubMed Central

    Henrich, Thorsten; Ramialison, Mirana; Quiring, Rebecca; Wittbrodt, Beate; Furutani-Seiki, Makoto; Wittbrodt, Joachim; Kondoh, Hisato

    2003-01-01

    The Medaka Expression Pattern Database (MEPD) stores and integrates information on gene expression during embryonic development of the small freshwater fish Medaka (Oryzias latipes). Expression patterns of genes identified by ESTs are documented by images and by descriptions through parameters such as staining intensity, category and comments, and through a comprehensive, hierarchically organized dictionary of anatomical terms. Sequences of the ESTs are available and searchable through BLAST. ESTs in the database are clustered upon entry and have been blasted against public databases. The BLAST results are updated regularly, stored within the database and searchable. The MEPD is a project within the Medaka Genome Initiative (MGI) and entries will be interconnected to integrated genomic map databases. MEPD is accessible through the WWW at http://medaka.dsp.jst.go.jp/MEPD. PMID:12519950

  18. PIQMIe: a web server for semi-quantitative proteomics data management and analysis

    PubMed Central

    Kuzniar, Arnold; Kanaar, Roland

    2014-01-01

    We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. PMID:24861615

  19. PIQMIe: a web server for semi-quantitative proteomics data management and analysis.

    PubMed

    Kuzniar, Arnold; Kanaar, Roland

    2014-07-01

    We present the Proteomics Identifications and Quantitations Data Management and Integration Service or PIQMIe that aids in reliable and scalable data management, analysis and visualization of semi-quantitative mass spectrometry based proteomics experiments. PIQMIe readily integrates peptide and (non-redundant) protein identifications and quantitations from multiple experiments with additional biological information on the protein entries, and makes the linked data available in the form of a light-weight relational database, which enables dedicated data analyses (e.g. in R) and user-driven queries. Using the web interface, users are presented with a concise summary of their proteomics experiments in numerical and graphical forms, as well as with a searchable protein grid and interactive visualization tools to aid in the rapid assessment of the experiments and in the identification of proteins of interest. The web server not only provides data access through a web interface but also supports programmatic access through RESTful web service. The web server is available at http://piqmie.semiqprot-emc.cloudlet.sara.nl or http://www.bioinformatics.nl/piqmie. This website is free and open to all users and there is no login requirement. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. DB-PABP: a database of polyanion-binding proteins

    PubMed Central

    Fang, Jianwen; Dong, Yinghua; Salamat-Miller, Nazila; Russell Middaugh, C.

    2008-01-01

    The interactions between polyanions (PAs) and polyanion-binding proteins (PABPs) have been found to play significant roles in many essential biological processes including intracellular organization, transport and protein folding. Furthermore, many neurodegenerative disease-related proteins are PABPs. Thus, a better understanding of PA/PABP interactions may not only enhance our understandings of biological systems but also provide new clues to these deadly diseases. The literature in this field is widely scattered, suggesting the need for a comprehensive and searchable database of PABPs. The DB-PABP is a comprehensive, manually curated and searchable database of experimentally characterized PABPs. It is freely available and can be accessed online at http://pabp.bcf.ku.edu/DB_PABP/. The DB-PABP was implemented as a MySQL relational database. An interactive web interface was created using Java Server Pages (JSP). The search page of the database is organized into a main search form and a section for utilities. The main search form enables custom searches via four menus: protein names, polyanion names, the source species of the proteins and the methods used to discover the interactions. Available utilities include a commonality matrix, a function of listing PABPs by the number of interacting polyanions and a string search for author surnames. The DB-PABP is maintained at the University of Kansas. We encourage users to provide feedback and submit new data and references. PMID:17916573

  1. DB-PABP: a database of polyanion-binding proteins.

    PubMed

    Fang, Jianwen; Dong, Yinghua; Salamat-Miller, Nazila; Middaugh, C Russell

    2008-01-01

    The interactions between polyanions (PAs) and polyanion-binding proteins (PABPs) have been found to play significant roles in many essential biological processes including intracellular organization, transport and protein folding. Furthermore, many neurodegenerative disease-related proteins are PABPs. Thus, a better understanding of PA/PABP interactions may not only enhance our understandings of biological systems but also provide new clues to these deadly diseases. The literature in this field is widely scattered, suggesting the need for a comprehensive and searchable database of PABPs. The DB-PABP is a comprehensive, manually curated and searchable database of experimentally characterized PABPs. It is freely available and can be accessed online at http://pabp.bcf.ku.edu/DB_PABP/. The DB-PABP was implemented as a MySQL relational database. An interactive web interface was created using Java Server Pages (JSP). The search page of the database is organized into a main search form and a section for utilities. The main search form enables custom searches via four menus: protein names, polyanion names, the source species of the proteins and the methods used to discover the interactions. Available utilities include a commonality matrix, a function of listing PABPs by the number of interacting polyanions and a string search for author surnames. The DB-PABP is maintained at the University of Kansas. We encourage users to provide feedback and submit new data and references.

  2. TRANSFORMATION OF DEVELOPMENTAL NEUROTOXICITY DATA INTO STRUCTURE-SEARCHABLE TOXML DATABASE IN SUPPORT OF STRUCTURE-ACTIVITY RELATIONSHIP (SAR) WORKFLOW.

    EPA Science Inventory

    Early hazard identification of new chemicals is often difficult due to lack of data on the novel material for toxicity endpoints, including neurotoxicity. At present, there are no structure searchable neurotoxicity databases. A working group was formed to construct a database to...

  3. ClassLess: A Comprehensive Database of Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Hillenbrand, Lynne A.; Baliber, Nairn

    2015-08-01

    We have designed and constructed a database intended to house catalog and literature-published measurements of Young Stellar Objects (YSOs) within ~1 kpc of the Sun. ClassLess, so called because it includes YSOs in all stages of evolution, is a relational database in which user interaction is conducted via HTML web browsers, queries are performed in scientific language, and all data are linked to the sources of publication. Each star is associated with a cluster (or clusters), and both spatially resolved and unresolved measurements are stored, allowing proper use of data from multiple star systems. With this fully searchable tool, myriad ground- and space-based instruments and surveys across wavelength regimes can be exploited. In addition to primary measurements, the database self-consistently calculates and serves higher-level data products such as extinction, luminosity, and mass. As a result, searches for young stars with specific physical characteristics can be completed with just a few mouse clicks. We are in the database population phase now, and are eager to engage with interested experts worldwide on local galactic star formation and young stellar populations.

  4. Digital London: Creating a Searchable Web of Interlinked Sources on Eighteenth Century London

    ERIC Educational Resources Information Center

    Shoemaker, Robert

    2005-01-01

    Purpose: To outline the conceptual and technical difficulties encountered, as well as the opportunities created, when developing an interlinked collection of web-based digitised primary sources on eighteenth century London. Design/methodology/approach: As a pilot study for a larger project, a variety of primary sources, including the "Old…

  5. Best Practices for Searchable Collection Pages

    EPA Pesticide Factsheets

    Searchable Collection pages are stand-alone documents that do not have any web area navigation. They should not recreate existing content on other sites and should be tagged with quality metadata and taxonomy terms.

  6. VIOLIN: vaccine investigation and online information network.

    PubMed

    Xiang, Zuoshuang; Todd, Thomas; Ku, Kim P; Kovacic, Bethany L; Larson, Charles B; Chen, Fang; Hodges, Andrew P; Tian, Yuying; Olenzek, Elizabeth A; Zhao, Boyang; Colby, Lesley A; Rush, Howard G; Gilsdorf, Janet R; Jourdian, George W; He, Yongqun

    2008-01-01

    Vaccines are among the most efficacious and cost-effective tools for reducing morbidity and mortality caused by infectious diseases. The vaccine investigation and online information network (VIOLIN) is a web-based central resource, allowing easy curation, comparison and analysis of vaccine-related research data across various human pathogens (e.g. Haemophilus influenzae, human immunodeficiency virus (HIV) and Plasmodium falciparum) of medical importance and across humans, other natural hosts and laboratory animals. Vaccine-related peer-reviewed literature data have been downloaded into the database from PubMed and are searchable through various literature search programs. Vaccine data are also annotated, edited and submitted to the database through a web-based interactive system that integrates efficient computational literature mining and accurate manual curation. Curated information includes general microbial pathogenesis and host protective immunity, vaccine preparation and characteristics, stimulated host responses after vaccination and protection efficacy after challenge. Vaccine-related pathogen and host genes are also annotated and available for searching through customized BLAST programs. All VIOLIN data are available for download in an eXtensible Markup Language (XML)-based data exchange format. VIOLIN is expected to become a centralized source of vaccine information and to provide investigators in basic and clinical sciences with curated data and bioinformatics tools for vaccine research and development. VIOLIN is publicly available at http://www.violinet.org.
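
    A brief sketch of consuming an XML-based data-exchange record of the kind VIOLIN is said to provide; the element and attribute names below are invented, since the abstract does not give the actual schema.

```python
# Parse a hypothetical vaccine record from an XML data-exchange document.
import xml.etree.ElementTree as ET

SAMPLE_XML = """\
<vaccine id="V-0001">
  <name>Example Hib conjugate vaccine</name>
  <pathogen>Haemophilus influenzae</pathogen>
  <host>human</host>
</vaccine>
"""

root = ET.fromstring(SAMPLE_XML)
record = {
    "id": root.get("id"),
    "name": root.findtext("name"),
    "pathogen": root.findtext("pathogen"),
    "host": root.findtext("host"),
}
print(record)
```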

  7. CHEMICAL STRUCTURE INDEXING OF TOXICITY DATA ON ...

    EPA Pesticide Factsheets

    Standardized chemical structure annotation of public toxicity databases and information resources is playing an increasingly important role in the 'flattening' and integration of diverse sets of biological activity data on the Internet. This review discusses public initiatives that are accelerating the pace of this transformation, with particular reference to toxicology-related chemical information. Chemical content annotators, structure locator services, large structure/data aggregator web sites, structure browsers, International Union of Pure and Applied Chemistry (IUPAC) International Chemical Identifier (InChI) codes, toxicity data models and public chemical/biological activity profiling initiatives are all playing a role in overcoming barriers to the integration of toxicity data, and are bringing researchers closer to the reality of a mineable chemical Semantic Web. An example of this integration of data is provided by the collaboration among researchers involved with the Distributed Structure-Searchable Toxicity (DSSTox) project, the Carcinogenic Potency Project, projects at the National Cancer Institute and the PubChem database. Standardizing chemical structure annotation of public toxicity databases…

  8. A PATO-compliant zebrafish screening database (MODB): management of morpholino knockdown screen information.

    PubMed

    Knowlton, Michelle N; Li, Tongbin; Ren, Yongliang; Bill, Brent R; Ellis, Lynda Bm; Ekker, Stephen C

    2008-01-07

    The zebrafish is a powerful model vertebrate amenable to high throughput in vivo genetic analyses. Examples include reverse genetic screens using morpholino knockdown, expression-based screening using enhancer trapping and forward genetic screening using transposon insertional mutagenesis. We have created a database to facilitate web-based distribution of data from such genetic studies. The MOrpholino DataBase is a MySQL relational database with an online, PHP interface. Multiple quality control levels allow differential access to data in raw and finished formats. MODBv1 includes sequence information relating to almost 800 morpholinos and their targets and phenotypic data regarding the dose effect of each morpholino (mortality, toxicity and defects). To improve the searchability of this database, we have incorporated a fixed-vocabulary defect ontology that allows for the organization of morpholino effects based on anatomical structure affected and defect produced. This also allows comparison between species utilizing Phenotypic Attribute Trait Ontology (PATO)-designated terminology. MODB is also cross-linked with ZFIN, allowing full searches between the two databases. MODB offers users the ability to retrieve morpholino data by sequence of morpholino or target, name of target, anatomical structure affected and defect produced. MODB data can be used for functional genomic analysis of morpholino design to maximize efficacy and minimize toxicity. MODB also serves as a template for future sequence-based functional genetic screen databases, and it is currently being used as a model for the creation of a mutagenic insertional transposon database.

  9. The CTSA Consortium's Catalog of Assets for Translational and Clinical Health Research (CATCHR)

    PubMed Central

    Mapes, Brandy; Basford, Melissa; Zufelt, Anneliese; Wehbe, Firas; Harris, Paul; Alcorn, Michael; Allen, David; Arnim, Margaret; Autry, Susan; Briggs, Michael S.; Carnegie, Andrea; Chavis‐Keeling, Deborah; De La Pena, Carlos; Dworschak, Doris; Earnest, Julie; Grieb, Terri; Guess, Marilyn; Hafer, Nathaniel; Johnson, Tesheia; Kasper, Amanda; Kopp, Janice; Lockie, Timothy; Lombardo, Vincetta; McHale, Leslie; Minogue, Andrea; Nunnally, Beth; O'Quinn, Deanna; Peck, Kelly; Pemberton, Kieran; Perry, Cheryl; Petrie, Ginny; Pontello, Andria; Posner, Rachel; Rehman, Bushra; Roth, Deborah; Sacksteder, Paulette; Scahill, Samantha; Schieri, Lorri; Simpson, Rosemary; Skinner, Anne; Toussant, Kim; Turner, Alicia; Van der Put, Elaine; Wasser, June; Webb, Chris D.; Williams, Maija; Wiseman, Lori; Yasko, Laurel; Pulley, Jill

    2014-01-01

    The 61 CTSA Consortium sites are home to valuable programs and infrastructure supporting translational science and all are charged with ensuring that such investments translate quickly to improved clinical care. Catalog of Assets for Translational and Clinical Health Research (CATCHR) is the Consortium's effort to collect and make available information on programs and resources to maximize efficiency and facilitate collaborations. By capturing information on a broad range of assets supporting the entire clinical and translational research spectrum, CATCHR aims to provide the necessary infrastructure and processes to establish and maintain an open‐access, searchable database of consortium resources to support multisite clinical and translational research studies. Data are collected using rigorous, defined methods, with the resulting information made visible through an integrated, searchable Web‐based tool. Additional easy‐to‐use Web tools assist resource owners in validating and updating resource information over time. In this paper, we discuss the design and scope of the project, data collection methods, current results, and future plans for development and sustainability. With increasing pressure on research programs to avoid redundancy, CATCHR aims to make available information on programs and core facilities to maximize efficient use of resources. PMID:24456567

  10. SeaBIRD: A Flexible and Intuitive Planetary Datamining Infrastructure

    NASA Astrophysics Data System (ADS)

    Politi, R.; Capaccioni, F.; Giardino, M.; Fonte, S.; Capria, M. T.; Turrini, D.; De Sanctis, M. C.; Piccioni, G.

    2018-04-01

    Description of SeaBIRD (Searchable and Browsable Infrastructure for Repository of Data), a software and hardware infrastructure for multi-mission planetary datamining, with a web-based GUI and an API set for integration into users' software.

  11. DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY (DSSTOX) DATABASE NETWORK: MAKING PUBLIC TOXICITY DATA RESOURCES MORE ACCESSIBLE AND USABLE FOR DATA EXPLORATION AND SAR DEVELOPMENT

    EPA Science Inventory


    Distributed Structure-Searchable Toxicity (DSSTox) Database Network: Making Public Toxicity Data Resources More Accessible and Usable for Data Exploration and SAR Development

    Many sources of public toxicity data are not currently linked to chemical structure, are not ...

  12. Updating a Searchable Database of Dropout Prevention Programs and Policies in Nine Low-Income Urban School Districts in the Northeast and Islands Region. REL Technical Brief. REL 2012-No. 020

    ERIC Educational Resources Information Center

    Myint-U, Athi; O'Donnell, Lydia; Phillips, Dawna

    2012-01-01

    This technical brief describes updates to a database of dropout prevention programs and policies in 2006/07 created by the Regional Education Laboratory (REL) Northeast and Islands and described in the Issues & Answers report, "Piloting a searchable database of dropout prevention programs in nine low-income urban school districts in the…

  13. PubDNA Finder: a web database linking full-text articles to sequences of nucleic acids.

    PubMed

    García-Remesal, Miguel; Cuevas, Alejandro; Pérez-Rey, David; Martín, Luis; Anguita, Alberto; de la Iglesia, Diana; de la Calle, Guillermo; Crespo, José; Maojo, Víctor

    2010-11-01

    PubDNA Finder is an online repository that we have created to link PubMed Central manuscripts to the sequences of nucleic acids appearing in them. It extends the search capabilities provided by PubMed Central by enabling researchers to perform advanced searches involving sequences of nucleic acids. This includes, among other features, (i) searching for papers mentioning one or more specific sequences of nucleic acids and (ii) retrieving the genetic sequences appearing in different articles. These additional query capabilities are provided by a searchable index that we created by using the full text of the 176 672 papers available at PubMed Central at the time of writing and the sequences of nucleic acids appearing in them. To automatically extract the genetic sequences occurring in each paper, we used an original method that we developed. The database is updated monthly by automatically connecting to the PubMed Central FTP site to retrieve and index new manuscripts. Users can query the database via the web interface provided. PubDNA Finder can be freely accessed at http://servet.dia.fi.upm.es:8080/pubdnafinder
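    For illustration only, a naive version of the extraction-and-indexing idea is sketched below; the abstract does not describe PubDNA Finder's actual method, so the regular expression, the length cut-off and the article identifier here are assumptions rather than their algorithm.

    import re
    from collections import defaultdict

    # Runs of >= 15 unambiguous bases, tolerating the whitespace that typesetting inserts.
    SEQ_RE = re.compile(r"(?:[ACGTU]\s*){15,}", re.IGNORECASE)

    def extract_sequences(text):
        """Return cleaned candidate nucleic-acid sequences found in free text."""
        return [re.sub(r"\s+", "", m.group(0)).upper() for m in SEQ_RE.finditer(text)]

    index = defaultdict(set)          # sequence -> set of article identifiers
    article_id = "EXAMPLE-ARTICLE-1"  # hypothetical identifier
    body = "the primer 5'-ACGT ACGT ACGT ACGT-3' was used"
    for seq in extract_sequences(body):
        index[seq].add(article_id)

    print(dict(index))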

  14. POLARIS: Helping Managers Get Answers Fast!

    NASA Technical Reports Server (NTRS)

    Corcoran, Patricia M.; Webster, Jeffery

    2007-01-01

    This viewgraph presentation reviews the Project Online Library and Resource Information System (POLARIS). It is a NASA-wide, web-based system providing access to information related to Program and Project Management. It will provide a one-stop shop for access to: a searchable, sortable database of all requirements for all product lines; project life cycle diagrams with reviews; project review definitions with products; review information from NPR 7123.1, NASA Systems Engineering Processes and Requirements; templates and examples of products; project standard WBSs with dictionaries and requirements for implementation and approval; information from NASA's Metadata Manager (MdM), including attributes of Missions, Themes, Programs and Projects; the NPR 7120.5 waiver form and instructions; and much more. The presentation reviews the plans and timelines for future revisions and modifications.

  15. BioPepDB: an integrated data platform for food-derived bioactive peptides.

    PubMed

    Li, Qilin; Zhang, Chao; Chen, Hongjun; Xue, Jitong; Guo, Xiaolei; Liang, Ming; Chen, Ming

    2018-03-12

    Food-derived bioactive peptides play critical roles in regulating most biological processes and have considerable biological, medical and industrial importance. However, a large amount of active peptide data, including sequence, function, source, commercial product information, references and other information, is poorly integrated. BioPepDB is a searchable database of food-derived bioactive peptides and their related articles, containing more than four thousand bioactive peptide entries. Moreover, BioPepDB provides prediction and hydrolysis-simulation modules for discovering novel peptides. It can serve as a reference database for investigating the function of different bioactive peptides. BioPepDB is available at http://bis.zju.edu.cn/biopepdbr/ . The web page utilises Apache, PHP5 and MySQL to provide the user interface for accessing the database and predicting novel peptides. The database itself is operated on a specialised server.
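    As a purely illustrative aside, a hydrolysis-simulation step can be sketched with one standard rule set (trypsin cleaving C-terminal to K or R unless the next residue is P); the abstract does not specify which enzymes or rules BioPepDB's module implements, so this is not its code.

    def tryptic_digest(protein):
        """Split a protein sequence into peptides using simple trypsin rules."""
        peptides, start = [], 0
        for i, aa in enumerate(protein):
            if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
                peptides.append(protein[start:i + 1])
                start = i + 1
        if start < len(protein):
            peptides.append(protein[start:])
        return peptides

    # The resulting fragments could then be looked up against stored peptide entries.
    print(tryptic_digest("MKWVTFISLLLLFSSAYSRGVFRRDTHK"))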

  16. Improving wilderness stewardship through searchable databases of U.S. legislative history and legislated special provisions

    Treesearch

    David R. Craig; Peter Landres; Laurie Yung

    2010-01-01

    The online resource Wilderness.net currently provides quick access to the text of every public law designating wilderness in the U.S. National Wilderness Preservation System (NWPS). This article describes two new searchable databases recently completed and added to the information available on Wilderness.net to help wilderness managers and others understand and...

  17. Application description and policy model in collaborative environment for sharing of information on epidemiological and clinical research data sets.

    PubMed

    de Carvalho, Elias César Araujo; Batilana, Adelia Portero; Simkins, Julie; Martins, Henrique; Shah, Jatin; Rajgor, Dimple; Shah, Anand; Rockart, Scott; Pietrobon, Ricardo

    2010-02-19

    Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological clinical research in a collaborative environment, and (2) create a policy model placing this collaborative environment into the current scientific social context. The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. Based on our empirical observations and the resulting model, the social network environment surrounding the application can help epidemiologists and clinical researchers contribute and search for metadata in a collaborative environment, thus potentially facilitating collaboration among research communities distributed around the globe.
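    A toy stock-and-flow loop of the kind used in System Dynamics models is sketched below; the stocks, rates and policy labels are invented for illustration and are not the parameters of the model reported in this paper.

    def simulate(sharing_rate, years=10, dt=0.25):
        """Crude stock-and-flow loop: shared datasets accumulate and feed publications."""
        shared_datasets, publications, t = 0.0, 0.0, 0.0
        while t < years:
            publications += 0.5 * shared_datasets * dt   # outflow driven by the data stock
            shared_datasets += sharing_rate * dt         # inflow set by the policy scenario
            t += dt
        return publications

    for policy, rate in [("current practice", 5), ("metadata sharing", 20), ("combined policy", 40)]:
        print(f"{policy:16s} -> {simulate(rate):7.1f} cumulative publications")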

  18. Effects of DDL Technology on Genre Learning

    ERIC Educational Resources Information Center

    Cotos, Elena; Link, Stephanie; Huffman, Sarah

    2017-01-01

    To better understand the promising effects of data-driven learning (DDL) on language learning processes and outcomes, this study explored DDL learning events enabled by the Research Writing Tutor (RWT), a web-based platform containing an English language corpus annotated to enhance rhetorical input, a concordancer that was searchable for…

  19. CODATA recommended values of the fundamental constants

    NASA Astrophysics Data System (ADS)

    Mohr, Peter J.; Taylor, Barry N.

    2000-11-01

    A review is given of the latest Committee on Data for Science and Technology (CODATA) adjustment of the values of the fundamental constants. The new set of constants, referred to as the 1998 values, replaces the values recommended for international use by CODATA in 1986. The values of the constants, and particularly the Rydberg constant, are of relevance to the calculation of precise atomic spectra. The standard uncertainty (estimated standard deviation) of the new recommended value of the Rydberg constant, which is based on precision frequency metrology and a detailed analysis of the theory, is approximately 1/160 times the uncertainty of the 1986 value. The new set of recommended values as well as a searchable bibliographic database that gives citations to the relevant literature are available on the World Wide Web at physics.nist.gov/constants and physics.nist.gov/constantsbib, respectively.

  20. A User-Friendly, Keyword-Searchable Database of Geoscientific References Through 2007 for Afghanistan

    USGS Publications Warehouse

    Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L. Sco

    2008-01-01

    This report includes a document and accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,489) and unpublished (n = 176) references compiled through calendar year 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface and only minimum knowledge of the use of Microsoft Access is required.

  1. Web-Based Requesting and Scheduling Use of Facilities

    NASA Technical Reports Server (NTRS)

    Yeager, Carolyn M.

    2010-01-01

    Automated User's Training Operations Facility Utilization Request (AutoFUR) is prototype software that administers a Web-based system for requesting and allocating facilities and equipment for astronaut-training classes in conjunction with scheduling the classes. AutoFUR also has potential for similar use in such applications as scheduling flight-simulation equipment and instructors in commercial airplane-pilot training, managing preventive-maintenance facilities, and scheduling operating rooms, doctors, nurses, and medical equipment for surgery. Whereas requesting and allocation of facilities was previously a manual process that entailed examination of documents (including paper drawings) from different sources, AutoFUR partly automates the process and makes all of the relevant information available via the requester's computer. By use of AutoFUR, an instructor can fill out a facility-utilization request (FUR) form on line, consult the applicable flight manifest(s) to determine what equipment is needed and where it should be placed in the training facility, reserve the corresponding hardware listed in a training-hardware inventory database, search for alternative hardware if necessary, submit the FUR for processing, and cause paper forms to be printed. AutoFUR also maintains a searchable archive of prior FURs.

  2. Machine Aided Indexing and the NASA Thesaurus

    NASA Technical Reports Server (NTRS)

    vonOfenheim, Bill

    2007-01-01

    Machine Aided Indexing (MAI) is a Web-based application program for aiding the indexing of literature in the NASA Scientific and Technical Information (STI) Database. MAI was designed to be a convenient, fully interactive tool for determining the subject matter of documents and identifying keywords. The heart of MAI is a natural-language processor that accepts, as input, any user-supplied text, including abstracts, full documents, and Web pages. Within seconds, the text is analyzed and a ranked list of terms is generated. The 17,800 terms of the NASA Thesaurus serve as the foundation of the knowledge base used by MAI. The NASA Thesaurus defines a standard vocabulary, the use of which enables MAI to assist in ensuring that STI documents are uniformly and consistently accessible. Of particular interest to traditional users of the NASA Thesaurus, MAI incorporates a fully searchable thesaurus display module that affords word-search and hierarchy-navigation capabilities that make it much easier and less time-consuming to look up terms and browse, relative to lookup and browsing in older print and Portable Document Format (PDF) digital versions of the Thesaurus. In addition, because MAI is centrally hosted, the Thesaurus data are always current.

  3. Arctic Logistics Information and Support: ALIAS

    NASA Astrophysics Data System (ADS)

    Warnick, W. K.

    2004-12-01

    The ALIAS web site is a gateway to logistics information for arctic research, funded by the U.S. National Science Foundation, and created and maintained by the Arctic Research Consortium of the United States (ARCUS). ALIAS supports the collaborative development and efficient use of all arctic logistics resources. It presents information from a searchable database, including both arctic terrestrial resources and arctic-capable research vessels, on a circumpolar scale. With this encompassing scope, ALIAS is uniquely valuable as a tool to promote and facilitate international collaboration between researchers, which is of increasing importance for vessel-based research due to the high cost and limited number of platforms. Users of the web site can identify vessels which are potential platforms for their research, examine and compare vessel specifications and facilities, learn about research cruises the vessel has performed in the past, and find contact information for scientists who have used the vessel, as well as for the owners and operators of the vessel. The purpose of this poster presentation is to inform the scientific community about the ALIAS website as a tool for planning arctic research generally, and particularly for identifying and contacting vessels which may be suitable for planned ship-based research projects in arctic seas.

  4. Vipie: web pipeline for parallel characterization of viral populations from multiple NGS samples.

    PubMed

    Lin, Jake; Kramna, Lenka; Autio, Reija; Hyöty, Heikki; Nykter, Matti; Cinek, Ondrej

    2017-05-15

    Next generation sequencing (NGS) technology allows laboratories to investigate virome composition in clinical and environmental samples in a culture-independent way. There is a need for bioinformatic tools capable of parallel processing of virome sequencing data by exactly identical methods: this is especially important in studies of multifactorial diseases, or in parallel comparison of laboratory protocols. We have developed a web-based application allowing direct upload of sequences from multiple virome samples using custom parameters. The samples are then processed in parallel using an identical protocol, and can be easily reanalyzed. The pipeline performs de-novo assembly, taxonomic classification of viruses as well as sample analyses based on user-defined grouping categories. Tables of virus abundance are produced from cross-validation by remapping the sequencing reads to a union of all observed reference viruses. In addition, read sets and reports are created after processing unmapped reads against known human and bacterial ribosome references. Secured interactive results are dynamically plotted with population and diversity charts, clustered heatmaps and a sortable and searchable abundance table. The Vipie web application is a unique tool for multi-sample metagenomic analysis of viral data, producing searchable hits tables, interactive population maps, alpha diversity measures and clustered heatmaps that are grouped in applicable custom sample categories. Known references such as human genome and bacterial ribosomal genes are optionally removed from unmapped ('dark matter') reads. Secured results are accessible and shareable on modern browsers. Vipie is a freely available web-based tool whose code is open source.
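    The counting step behind such an abundance table can be illustrated as follows, assuming read-to-reference assignments have already been produced by an aligner; the sample and virus names are made up and this is not Vipie's code.

    from collections import Counter, defaultdict

    # One (sample, reference_virus) pair per mapped read -- hypothetical aligner output.
    mapped_reads = [
        ("sampleA", "virus_ref_1"), ("sampleA", "virus_ref_1"),
        ("sampleA", "virus_ref_2"), ("sampleB", "virus_ref_2"),
    ]

    abundance = defaultdict(Counter)
    for sample, virus in mapped_reads:
        abundance[sample][virus] += 1

    # Print a small sample-by-virus table.
    viruses = sorted({v for counts in abundance.values() for v in counts})
    print("sample\t" + "\t".join(viruses))
    for sample, counts in sorted(abundance.items()):
        print(sample + "\t" + "\t".join(str(counts[v]) for v in viruses))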

  5. Insights on WWW-based geoscience teaching: Climbing the first year learning cliff

    NASA Astrophysics Data System (ADS)

    Lamberson, Michelle N.; Johnson, Mark; Bevier, Mary Lou; Russell, J. Kelly

    1997-06-01

    In early 1995, The University of British Columbia Department of Geological Sciences (now Earth and Ocean Sciences) initiated a project that explored the effectiveness of the World Wide Web as a teaching and learning medium. Four decisions made at the outset of the project have guided the department's educational technology plan: (1) over 90% of funding received from educational technology grants was committed towards personnel; (2) materials developed are modular in design; (3) a database approach was taken to resource development; and (4) a strong commitment was made to student involvement in courseware development. The project comprised development of a web site for an existing core course: Geology 202, Introduction to Petrology. The web site is a gateway to course information, content, resources, exercises, and several searchable databases (images, petrologic definitions, and minerals in thin section). Material was developed on either an IBM or UNIX machine, ported to a UNIX platform, and is accessed using the Netscape browser. The resources consist primarily of HTML files or CGI scripts with associated text, images, sound, digital movies, and animations. Students access the web site from the departmental student computer facility, from home, or from a computer station in the petrology laboratory. Results of a survey of the Geol 202 students indicate that they found the majority of the resources useful, and the site is being expanded. The Geology 202 project had a "trickle-up" effect throughout the department: prior to this project, there was minimal use of Internet resources in lower-level geology courses. By the end of the 1996-1997 academic year, we anticipate that at least 17 Earth and Ocean Science courses will have a WWW site for one or all of the following uses: (1) presenting basic information; (2) accessing lecture images; (3) providing a jumping-off point for exploring related WWW sites; (4) conducting on-line exercises; and/or (5) providing a communications forum for students and faculty via a Hypernews group. URL: http://www.science.ubc.ca/

  6. DISTRIBUTED STRUCTURE-SEARCHABLE TOXICITY (DSSTOX) PUBLIC DATABASE NETWORK: A PROPOSAL

    EPA Science Inventory

    The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These dive...

  7. Spreadsheets for Analyzing and Optimizing Space Missions

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Agrawal, Anil K.; Czikmantory, Akos J.; Weisbin, Charles R.; Hua, Hook; Neff, Jon M.; Cowdin, Mark A.; Lewis, Brian S.; Iroz, Juana; Ross, Rick

    2009-01-01

    XCALIBR (XML Capability Analysis LIBRary) is a set of Extensible Markup Language (XML) database and spreadsheet-based analysis software tools designed to assist in technology-return-on-investment analysis and optimization of technology portfolios pertaining to outer-space missions. XCALIBR is also being examined for use in planning, tracking, and documentation of projects. An XCALIBR database contains information on mission requirements and technological capabilities, which are related by use of an XML taxonomy. XCALIBR incorporates a standardized interface for exporting data and analysis templates to an Excel spreadsheet. Unique features of XCALIBR include the following: It is inherently hierarchical by virtue of its XML basis. The XML taxonomy codifies a comprehensive data structure and data dictionary that includes performance metrics for spacecraft, sensors, and spacecraft systems other than sensors. The taxonomy contains >700 nodes representing all levels, from system through subsystem to individual parts. All entries are searchable and machine readable. There is an intuitive Web-based user interface. The software automatically matches technologies to mission requirements. The software automatically generates, and makes the required entries in, an Excel return-on-investment analysis software tool. The results of an analysis are presented in both tabular and graphical displays.

  8. Optimized gene editing technology for Drosophila melanogaster using germ line-specific Cas9.

    PubMed

    Ren, Xingjie; Sun, Jin; Housden, Benjamin E; Hu, Yanhui; Roesel, Charles; Lin, Shuailiang; Liu, Lu-Ping; Yang, Zhihao; Mao, Decai; Sun, Lingzhu; Wu, Qujie; Ji, Jun-Yuan; Xi, Jianzhong; Mohr, Stephanie E; Xu, Jiang; Perrimon, Norbert; Ni, Jian-Quan

    2013-11-19

    The ability to engineer genomes in a specific, systematic, and cost-effective way is critical for functional genomic studies. Recent advances using the CRISPR-associated single-guide RNA system (Cas9/sgRNA) illustrate the potential of this simple system for genome engineering in a number of organisms. Here we report an effective and inexpensive method for genome DNA editing in Drosophila melanogaster whereby plasmid DNAs encoding short sgRNAs under the control of the U6b promoter are injected into transgenic flies in which Cas9 is specifically expressed in the germ line via the nanos promoter. We evaluate the off-targets associated with the method and establish a Web-based resource, along with a searchable, genome-wide database of predicted sgRNAs appropriate for genome engineering in flies. Finally, we discuss the advantages of our method in comparison with other recently published approaches.
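    As a hedged illustration of the basic rule such a genome-wide sgRNA catalogue rests on, the sketch below lists 20-nt protospacers followed by an NGG PAM on the forward strand; the selection and off-target criteria behind the published database are not given in the abstract and are certainly more involved.

    import re

    def find_sgRNA_sites(seq):
        """Yield (position, protospacer, PAM) for 20-mers followed by NGG."""
        seq = seq.upper()
        # The lookahead keeps overlapping sites; group(1) is the 20-nt protospacer.
        for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
            yield m.start(), m.group(1), m.group(2)

    toy = "TTGACGTACGTACGTACGTACGTAGGCATT"   # made-up sequence
    for pos, protospacer, pam in find_sgRNA_sites(toy):
        print(pos, protospacer, pam)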

  9. The new on-line Czech Food Composition Database.

    PubMed

    Machackova, Marie; Holasova, Marie; Maskova, Eva

    2013-10-01

    The new on-line Czech Food Composition Database (FCDB) was launched at http://www.czfcdb.cz in December 2010 as the main freely available channel for dissemination of Czech food composition data. The application is based on a compiled FCDB documented according to the EuroFIR standardised procedure for full value documentation and indexing of foods by the LanguaL™ Thesaurus. A content management system was implemented for administration of the website and performing data export (comma-separated values or EuroFIR XML transport package formats) by a compiler. References are provided for each published value, with links to freely accessible on-line data sources (e.g. full texts, EuroFIR Document Repository, on-line national FCDBs). LanguaL™ codes are displayed within each food record as searchable keywords of the database. A photo (or a photo gallery) is used as a visual descriptor of a food item. The application can be searched by food, component, food group and alphabetical index, as well as through a multi-field advanced search. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. The Mendeleev-Meyer force project.

    PubMed

    Santos, Sergio; Lai, Chia-Yun; Amadei, Carlo A; Gadelrab, Karim R; Tang, Tzu-Chieh; Verdaguer, Albert; Barcons, Victor; Font, Josep; Colchero, Jaime; Chiesa, Matteo

    2016-10-14

    Here we present the Mendeleev-Meyer Force Project which aims at tabulating all materials and substances in a fashion similar to the periodic table. The goal is to group and tabulate substances using nanoscale force footprints rather than atomic number or electronic configuration as in the periodic table. The process is divided into: (1) acquiring nanoscale force data from materials, (2) parameterizing the raw data into standardized input features to generate a library, (3) feeding the standardized library into an algorithm to generate, enhance or exploit a model to identify a material or property. We propose producing databases mimicking the Materials Genome Initiative, the Medical Literature Analysis and Retrieval System Online (MEDLARS) or the PRoteomics IDEntifications database (PRIDE) and making these searchable online via search engines mimicking Pubmed or the PRIDE web interface. A prototype exploiting deep learning algorithms, i.e. multilayer neural networks, is presented.

  11. Accessing a personalized bibliography with a searchable system on the World Wide Web

    Treesearch

    Malchus B. Baker; Daniel P. Huebner; Peter F. Ffolliott

    2000-01-01

    Researchers, educators, and land management personnel routinely construct bibliographies to assist them in managing publications that relate to their work. These personalized bibliographies are unique and valuable to others in the same discipline. This paper presents a computer data base system that provides users with the ability to search a bibliography through...

  12. Proteome of Caulobacter crescentus cell cycle publicly accessible on SWICZ server.

    PubMed

    Vohradsky, Jiri; Janda, Ivan; Grünenfelder, Björn; Berndt, Peter; Röder, Daniel; Langen, Hanno; Weiser, Jaroslav; Jenal, Urs

    2003-10-01

    Here we present the Swiss-Czech Proteomics Server (SWICZ), which hosts the proteomic database summarizing information about the cell cycle of the aquatic bacterium Caulobacter crescentus. The database provides a searchable tool for easy access to global protein synthesis and protein stability data as examined during the C. crescentus cell cycle. Protein synthesis data collected from five different cell cycle stages were determined for each protein spot as a relative value of the total amount of [(35)S]methionine incorporation. Protein stability of pulse-labeled extracts was measured during a chase period equivalent to one cell cycle unit. Quantitative information for individual proteins, together with descriptive data such as protein identities, apparent molecular masses and isoelectric points, was combined with information on protein function, genomic context, and the cell cycle stage, and was then assembled in a relational database with a World Wide Web interface (http://proteom.biomed.cas.cz), which allows the database records to be searched and displays the recovered information. A total of 1250 protein spots were reproducibly detected on two-dimensional gel electropherograms, 295 of which were identified by mass spectroscopy. The database is accessible either through clickable two-dimensional gel electrophoretic maps or by means of a set of dedicated search engines. Basic characterization of the experimental procedures, data processing, and a comprehensive description of the web site are presented. In its current state, the SWICZ proteome database provides a platform for the incorporation of new data emerging from extended functional studies on the C. crescentus proteome.

  13. Refining the Use of the Web (and Web Search) as a Language Teaching and Learning Resource

    ERIC Educational Resources Information Center

    Wu, Shaoqun; Franken, Margaret; Witten, Ian H.

    2009-01-01

    The web is a potentially useful corpus for language study because it provides examples of language that are contextualized and authentic, and is large and easily searchable. However, web contents are heterogeneous in the extreme, uncontrolled and hence "dirty," and exhibit features different from the written and spoken texts in other linguistic…

  14. Astronomical Software Directory Service

    NASA Technical Reports Server (NTRS)

    Hanisch, R. J.; Payne, H.; Hayes, J.

    1998-01-01

    This is the final report on the development of the Astronomical Software Directory Service (ASDS), a distributable, searchable, WWW-based database of software packages and their related documentation. ASDS provides integrated access to 56 astronomical software packages, with more than 16,000 URLs indexed for full-text searching.

  15. Ocean Instruments Web Site for Undergraduate, Secondary and Informal Education

    NASA Astrophysics Data System (ADS)

    Farrington, J. W.; Nevala, A.; Dolby, L. A.

    2004-12-01

    An Ocean Instruments web site has been developed that makes available information about ocean sampling and measurement instruments and platforms. The site features text, pictures, diagrams and background information written or edited by experts in ocean science and engineering and contains links to glossaries and multimedia technologies including video streaming, audio packages, and searchable databases. The site was developed after advisory meetings with selected professors teaching undergraduate classes who responded to the question of what Woods Hole Oceanographic Institution could supply to enhance undergraduate education in ocean sciences, life sciences, and geosciences. Prototypes were developed and tested with students, potential users, and potential contributors. The site is hosted by WHOI. The initial five instruments featured were provided by four WHOI scientists and engineers and by one Sea Education Association faculty member. The site is now open to contributions from scientists and engineers worldwide. The site will not advertise or promote the use of individual ocean instruments.

  16. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking images as user favorites, and image metadata searchable for instant results.

  17. TMDB: a literature-curated database for small molecular compounds found from tea.

    PubMed

    Yue, Yi; Chu, Gang-Xiu; Liu, Xue-Shi; Tang, Xing; Wang, Wei; Liu, Guang-Jin; Yang, Tao; Ling, Tie-Jun; Wang, Xiao-Gang; Zhang, Zheng-Zhu; Xia, Tao; Wan, Xiao-Chun; Bao, Guan-Hu

    2014-09-16

    Tea is one of the most consumed beverages worldwide. The health effects of tea are attributed to a wealth of different chemical components. Thousands of studies on the chemical constituents of tea have been reported. However, data from these individual reports have not been collected into a single database. The lack of a curated database of related information limits research in this field, and thus a cohesive database system is needed for data deposit and further application. The Tea Metabolome database (TMDB), a manually curated and web-accessible database, was developed to provide detailed, searchable descriptions of small molecular compounds found in Camellia spp., especially the plant Camellia sinensis, and compounds in its manufactured products (different kinds of tea infusion). TMDB is currently the most complete and comprehensive curated collection of tea compound data in the world. It contains records for more than 1393 constituents found in tea, with information gathered from 364 published books, journal articles, and electronic databases. It also contains experimental 1H NMR and 13C NMR data collected from purified reference compounds or from other database resources such as HMDB. The TMDB interface allows users to retrieve tea compound entries by keyword search using compound name, formula, occurrence, and CAS registry number. Each entry in the TMDB contains an average of 24 separate data fields, including its original plant species, compound structure, formula, molecular weight, name, CAS registry number, compound types, compound uses including health benefits, reference literature, NMR and MS data, and the corresponding ID from databases such as HMDB and PubMed. Users can also contribute novel entries by using a web-based submission page. The TMDB database is freely accessible at http://pcsb.ahau.edu.cn:8080/TCDB/index.jsp. The TMDB is designed to address the broad needs of tea biochemists, natural products chemists, nutritionists, and members of the tea-related research community. The TMDB database provides a solid platform for the collection, standardization, and searching of compound information found in tea. As such, this database will be a comprehensive repository for the tea biochemistry and tea health research community.

  18. Development of a Publicly Available, Comprehensive Database of Fiber and Health Outcomes: Rationale and Methods

    PubMed Central

    Livingston, Kara A.; Chung, Mei; Sawicki, Caleigh M.; Lyle, Barbara J.; Wang, Ding Ding; Roberts, Susan B.; McKeown, Nicola M.

    2016-01-01

    Background Dietary fiber is a broad category of compounds historically defined as partially or completely indigestible plant-based carbohydrates and lignin with, more recently, the additional criteria that fibers incorporated into foods as additives should demonstrate functional human health outcomes to receive a fiber classification. Thousands of research studies have been published examining fibers and health outcomes. Objectives (1) Develop a database listing studies testing fiber and physiological health outcomes identified by experts at the Ninth Vahouny Conference; (2) Use evidence mapping methodology to summarize this body of literature. This paper summarizes the rationale, methodology, and resulting database. The database will help both scientists and policy-makers to evaluate evidence linking specific fibers with physiological health outcomes, and identify missing information. Methods To build this database, we conducted a systematic literature search for human intervention studies published in English from 1946 to May 2015. Our search strategy included a broad definition of fiber search terms, as well as search terms for nine physiological health outcomes identified at the Ninth Vahouny Fiber Symposium. Abstracts were screened using a priori defined eligibility criteria and a low threshold for inclusion to minimize the likelihood of rejecting articles of interest. Publications then were reviewed in full text, applying additional a priori defined exclusion criteria. The database was built and published on the Systematic Review Data Repository (SRDR™), a web-based, publicly available application. Conclusions A fiber database was created. This resource will reduce the unnecessary replication of effort in conducting systematic reviews by serving as both a central database archiving PICO (population, intervention, comparator, outcome) data on published studies and as a searchable tool through which this data can be extracted and updated. PMID:27348733

  19. The Next Generation of NASA Night Sky Network: A Searchable Nationwide Database of Astronomy Events

    NASA Astrophysics Data System (ADS)

    Ames, Z.; Berendsen, M.; White, V.

    2010-08-01

    With support from NASA, the Astronomical Society of the Pacific (ASP) first developed the Night Sky Network (NSN) in 2004. The NSN was created in response to research conducted by the Institute for Learning Innovation (ILI) to determine what type of support amateur astronomers could use to increase the efficiency and extent of their educational outreach programs. Since its creation, the NSN has grown to include an online searchable database of toolkit resources, Presentation Skills Videos covering topics such as working with kids and how to answer difficult questions, and a searchable nationwide calendar of astronomy events that supports club organization. The features of the NSN have allowed the ASP to create a template that amateur science organizations might use to create a similar support network for their members and the public.

  20. Forensic Science and the Internet - Current Utilization and Future Potential.

    PubMed

    Chamakura, R P

    1997-12-01

    The Internet has become a very powerful and inexpensive tool for the free distribution of knowledge and information. It is a learning and research tool, a virtual library without borders and membership requirements, a help desk, and a publication house providing newspapers with current information and journals with instant publication. Very soon, when live audio and video transmission is perfected, the Internet (popularly referred to as the Net) also will be a live classroom and everyday conference site. This article provides a brief overview of the basic structure and essential components of the Internet. A limited number of home pages/Web sites that have already been made available on the Net by scientists, laboratories, and colleges in the forensic science community are presented in table form. Home pages/Web sites containing useful information pertinent to different disciplines of forensic science are also categorized in various tables. The ease and benefits of Internet use are exemplified by the author's personal experience. Currently, only a few forensic scientists and institutions have made their presence felt. More participation and active contribution and the creation of on-line searchable databases in all specialties of forensic science are urgently needed. Leading forensic journals should take the lead and create on-line searchable indexes with abstracts. Creating Internet repositories of unpublished papers is an idea worth looking into. Leading forensic science institutions should also develop use of the Net to provide training and retraining opportunities for forensic scientists. Copyright © 1997 Central Police University.

  1. ToxRefDB - Release user-friendly web-based tool for mining ToxRefDB

    EPA Science Inventory

    The updated URL link is for a table of NCCT ToxCast public datasets. The next to last row of the table has the link for the US EPA ToxCast ToxRefDB Data Release October 2014. ToxRefDB provides detailed chemical toxicity data in a publicly accessible searchable format. ToxRefD...

  2. THE GB/3D Fossil Types Online Database

    NASA Astrophysics Data System (ADS)

    Howe, M. P.; McCormick, T.

    2012-12-01

    The ICZN and the International Code of Nomenclature for algae, fungi and plants require that every species or subspecies of organism (living and fossil) should have a type or reference specimen to define its characteristic features. These specimens are held in collections around the world and must be available for study. Over time, type specimens can deteriorate or become lost. The British Geological Survey, the National Museum of Wales, the Sedgwick Museum Cambridge and the Oxford Museum of Natural History are working together to create an online database of the type fossils they hold. The web portal provides data about each specimen, searchable on taxonomic, stratigraphic and spatial criteria. For each specimen it is possible to view and download high resolution photographs, and for many of them, 'anaglyph' stereo pairs and 3D scans are available. The portal also provides educational resources (OERs). The rise to prominence of the Web has transformed expectations in accessing information and the Web is now usually the first port of call. However, while many geological museums are providing web-searchable text catalogues, few have undertaken a large-scale program of providing images and 3D models. This project has tackled the issues of merging four distinct data holdings, and setting up workflows to image and scan large numbers of disparate fossils, ranging from small invertebrate macrofossils to large vertebrate skeletal elements. There are three advantages in providing such resources: (1) All users can exploit the collections more efficiently. End-users can view specimens remotely and assess their nature, preservation quality and completeness - in some cases this may be sufficient. It will reduce the need for institutions to send specimens (which are often fragile and always irreplaceable) to researchers by post, or for researchers to make possibly long, expensive and environmentally damaging journeys. (2) A public outreach and education dividend - the ability to view specimens greatly enriches the experience and information content of an institution's website. (3) The ability to digitally image specimens enables museums to have an archive record in case the physical specimens are lost or destroyed by accident or warfare. The accompanying figure shows a digital model of the type of Kreterostephanus kreter Buckmann (GSM49334), an ammonite from the Jurassic of Dorset, UK, displayed as an anaglyph.

  3. Questions to Ask Your Doctor

    MedlinePlus


  4. 48 CFR 5.601 - Governmentwide database of contracts.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false Governmentwide database of... database of contracts. (a) A Governmentwide database of contracts and other procurement instruments.../contractdirectory/. This searchable database is a tool that may be used to identify existing contracts and other...

  5. 48 CFR 5.601 - Governmentwide database of contracts.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false Governmentwide database of... database of contracts. (a) A Governmentwide database of contracts and other procurement instruments.../contractdirectory/. This searchable database is a tool that may be used to identify existing contracts and other...

  6. 48 CFR 5.601 - Governmentwide database of contracts.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 1 2012-10-01 2012-10-01 false Governmentwide database of... database of contracts. (a) A Governmentwide database of contracts and other procurement instruments.../contractdirectory/. This searchable database is a tool that may be used to identify existing contracts and other...

  7. 48 CFR 5.601 - Governmentwide database of contracts.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 1 2013-10-01 2013-10-01 false Governmentwide database of... database of contracts. (a) A Governmentwide database of contracts and other procurement instruments.../contractdirectory/. This searchable database is a tool that may be used to identify existing contracts and other...

  8. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems

    PubMed Central

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K.; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C.; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well-annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered, and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com PMID:25887162
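    A rough sketch of the node/edge/evidence idea follows, using an invented JSON layout and a plain-Python filter; the real documents live in MongoDB and their exact schema is not given in the abstract.

    import json

    network = {                       # hypothetical, simplified document
        "name": "toy inflammation model",
        "nodes": [{"id": "TNF"}, {"id": "NFKB1"}],
        "edges": [
            {"source": "TNF", "target": "NFKB1", "relation": "increases",
             "evidence": [{"pmid": "0000000", "text": "example supporting sentence"}]},
        ],
    }

    def edges_for_node(net, node_id):
        """Return edges touching a node, with their supporting evidence."""
        return [e for e in net["edges"] if node_id in (e["source"], e["target"])]

    print(json.dumps(edges_for_node(network, "TNF"), indent=2))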

  9. HPMCD: the database of human microbial communities from metagenomic datasets and microbial reference genomes.

    PubMed

    Forster, Samuel C; Browne, Hilary P; Kumar, Nitin; Hunt, Martin; Denise, Hubert; Mitchell, Alex; Finn, Robert D; Lawley, Trevor D

    2016-01-04

    The Human Pan-Microbe Communities (HPMC) database (http://www.hpmcd.org/) provides a manually curated, searchable, metagenomic resource to facilitate investigation of human gastrointestinal microbiota. Over the past decade, the application of metagenome sequencing to elucidate the microbial composition and functional capacity present in the human microbiome has revolutionized many concepts in our basic biology. When sufficient high quality reference genomes are available, whole genome metagenomic sequencing can provide direct biological insights and high-resolution classification. The HPMC database provides species level, standardized phylogenetic classification of over 1800 human gastrointestinal metagenomic samples. This is achieved by combining a manually curated list of bacterial genomes from human faecal samples with over 21000 additional reference genomes representing bacteria, viruses, archaea and fungi with manually curated species classification and enhanced sample metadata annotation. A user-friendly, web-based interface provides the ability to search for (i) microbial groups associated with health or disease state, (ii) health or disease states and community structure associated with a microbial group, (iii) the enrichment of a microbial gene or sequence and (iv) enrichment of a functional annotation. The HPMC database enables detailed analysis of human microbial communities and supports research from basic microbiology and immunology to therapeutic development in human health and disease. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  10. eBASIS (Bioactive Substances in Food Information Systems) and Bioactive Intakes: Major Updates of the Bioactive Compound Composition and Beneficial Bioeffects Database and the Development of a Probabilistic Model to Assess Intakes in Europe.

    PubMed

    Plumb, Jenny; Pigat, Sandrine; Bompola, Foteini; Cushen, Maeve; Pinchen, Hannah; Nørby, Eric; Astley, Siân; Lyons, Jacqueline; Kiely, Mairead; Finglas, Paul

    2017-03-23

    eBASIS (Bioactive Substances in Food Information Systems), a web-based database that contains compositional and biological effects data for bioactive compounds of plant origin, has been updated with new data on fruits and vegetables, wheat and, due to some evidence of potential beneficial effects, extended to include meat bioactives. eBASIS remains one of only a handful of comprehensive and searchable databases, with up-to-date coherent and validated scientific information on the composition of food bioactives and their putative health benefits. The database has a user-friendly, efficient, and flexible interface facilitating use by both the scientific community and food industry. Overall, eBASIS contains data for 267 foods, covering the composition of 794 bioactive compounds, from 1147 quality-evaluated peer-reviewed publications, together with information from 567 publications describing beneficial bioeffect studies carried out in humans. This paper highlights recent updates and expansion of eBASIS and the newly-developed link to a probabilistic intake model, allowing exposure assessment of dietary bioactive compounds to be estimated and modelled in human populations when used in conjunction with national food consumption data. This new tool could assist small- and medium-sized enterprises (SMEs) in the development of food product health claim dossiers for submission to the European Food Safety Authority (EFSA).
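    The general shape of a probabilistic intake calculation can be sketched with a toy Monte Carlo run, assuming made-up consumption and concentration distributions; the actual model linked to eBASIS is not described in the abstract.

    import random

    random.seed(0)

    def simulate_intake(n=10000):
        """Sample daily intake (mg/day) = consumption (g/day) x concentration (mg/g)."""
        intakes = []
        for _ in range(n):
            consumption = max(0.0, random.gauss(150, 50))      # hypothetical g of food per day
            concentration = random.lognormvariate(-2.0, 0.5)   # hypothetical mg bioactive per g
            intakes.append(consumption * concentration)
        return sorted(intakes)

    intakes = simulate_intake()
    print("median intake  :", round(intakes[len(intakes) // 2], 1), "mg/day")
    print("95th percentile:", round(intakes[int(len(intakes) * 0.95)], 1), "mg/day")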

  11. An electronic infrastructure for research and treatment of the thalassemias and other hemoglobinopathies: the Euro-mediterranean ITHANET project.

    PubMed

    Lederer, Carsten W; Basak, A Nazli; Aydinok, Yesim; Christou, Soteroula; El-Beshlawy, Amal; Eleftheriou, Androulla; Fattoum, Slaheddine; Felice, Alex E; Fibach, Eitan; Galanello, Renzo; Gambari, Roberto; Gavrila, Lucian; Giordano, Piero C; Grosveld, Frank; Hassapopoulou, Helen; Hladka, Eva; Kanavakis, Emmanuel; Locatelli, Franco; Old, John; Patrinos, George P; Romeo, Giovanni; Taher, Ali; Traeger-Synodinos, Joanne; Vassiliou, Panayiotis; Villegas, Ana; Voskaridou, Ersi; Wajcman, Henri; Zafeiropoulos, Anastasios; Kleanthous, Marina

    2009-01-01

    Hemoglobin (Hb) disorders are common, potentially lethal monogenic diseases, posing a global health challenge. With worldwide migration and intermixing of carriers, demanding flexible health planning and patient care, hemoglobinopathies may serve as a paradigm for the use of electronic infrastructure tools in the collection of data, the dissemination of knowledge, the harmonization of treatment, and the coordination of research and preventive programs. ITHANET, a network covering thalassemias and other hemoglobinopathies, comprises 26 organizations from 16 countries, including non-European countries of origin for these diseases (Egypt, Israel, Lebanon, Tunisia and Turkey). Using electronic infrastructure tools, ITHANET aims to strengthen cross-border communication and data transfer, cooperative research and treatment of thalassemia, and to improve support and information of those affected by hemoglobinopathies. Moreover, the consortium has established the ITHANET Portal, a novel web-based instrument for the dissemination of information on hemoglobinopathies to researchers, clinicians and patients. The ITHANET Portal is a growing public resource, providing forums for discussion and research coordination, and giving access to courses and databases organized by ITHANET partners. Already a popular repository for diagnostic protocols and news related to hemoglobinopathies, the ITHANET Portal also provides a searchable, extendable database of thalassemia mutations and associated background information. The experience of ITHANET is exemplary for a consortium bringing together disparate organizations from heterogeneous partner countries to face a common health challenge. The ITHANET Portal as a web-based tool born out of this experience amends some of the problems encountered and facilitates education and international exchange of data and expertise for hemoglobinopathies.

  12. Information sources [Chapter 12

    Treesearch

    Daniel G. Neary; John N. Rinne; Alvin L. Medina

    2012-01-01

    The main information sources for the UVR consist of several web sites with general information and bibliographies. RMRS has publications on its Air, Water, Aquatic Environments (AWAE) Program Flagstaff web site. Another RMRS and University of Arizona website on semi-arid and arid watersheds contains a large, searchable bibliography of supporting information from the...

  13. Shared Web Information Systems for Heritage in Scotland and Wales - Flexibility in Partnership

    NASA Astrophysics Data System (ADS)

    Thomas, D.; McKeague, P.

    2013-07-01

    The Royal Commissions on the Ancient and Historical Monuments of Scotland and Wales were established in 1908 to investigate and record the archaeological and built heritage of their respective countries. The organisations have grown organically over the succeeding century, steadily developing their inventories and collections as card and paper indexes. Computerisation followed in the late 1980s and early 1990s, with RCAHMS releasing Canmore, an online searchable database, in 1998. Following a review of service provision in Wales, RCAHMW entered into partnership with RCAHMS in 2003 to deliver a database for their national inventories and collections. The resultant partnership enables both organisations to develop at their own pace whilst delivering efficiencies through a common experience and a shared IT infrastructure. Through innovative solutions the partnership has also delivered benefits to the wider historic environment community, providing online portals to a range of datasets, ultimately raising public awareness and appreciation of the heritage around them. Now celebrating its 10th year, Shared Web Information Systems for Heritage, or more simply SWISH, continues to underpin the work of both organisations in presenting information about the historic environment to the public.

  14. Genic insights from integrated human proteomics in GeneCards.

    PubMed

    Fishilevich, Simon; Zimmerman, Shahar; Kohn, Asher; Iny Stein, Tsippi; Olender, Tsviya; Kolker, Eugene; Safran, Marilyn; Lancet, Doron

    2016-01-01

    GeneCards is a one-stop shop for searchable human gene annotations (http://www.genecards.org/). Data are automatically mined from ∼120 sources and presented in an integrated web card for every human gene. We report the application of recent advances in proteomics to enhance gene annotation and classification in GeneCards. First, we constructed the Human Integrated Protein Expression Database (HIPED), a unified database of protein abundance in human tissues, based on the publicly available mass spectrometry (MS)-based proteomics sources ProteomicsDB, Multi-Omics Profiling Expression Database, Protein Abundance Across Organisms and The MaxQuant DataBase. The integrated database, residing within GeneCards, compares favourably with its individual sources, covering nearly 90% of human protein-coding genes. For gene annotation and comparisons, we first defined a protein expression vector for each gene, based on normalized abundances in 69 normal human tissues. This vector is portrayed in the GeneCards expression section as a bar graph, allowing visual inspection and comparison. These data are juxtaposed with transcriptome bar graphs. Using the protein expression vectors, we further defined a pairwise metric that helps assess expression-based pairwise proximity. This new metric for finding functional partners complements eight others, including sharing of pathways, gene ontology (GO) terms and domains, implemented in the GeneCards Suite. In parallel, we calculated proteome-based differential expression, highlighting a subset of tissues that overexpress a gene and subserving gene classification. This textual annotation allows users of VarElect, the suite's next-generation phenotyper, to more effectively discover causative disease variants. Finally, we define the protein-RNA expression ratio and correlation as yet another attribute of every gene in each tissue, adding further annotative information. The results constitute a significant enhancement of several GeneCards sections and help promote and organize the genome-wide structural and functional knowledge of the human proteome. Database URL: http://www.genecards.org/. © The Author(s) 2016. Published by Oxford University Press.
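
    The abstract does not spell out the pairwise proximity metric built from the HIPED expression vectors, so the sketch below uses rank correlation purely as an illustrative stand-in; the gene names, tissue values and protein/RNA numbers are invented for the example and are not taken from GeneCards.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical normalized protein abundances across tissues (the real HIPED
# vectors span 69 normal human tissues; three values are shown for brevity).
protein = {
    "GENE_A": np.array([12.1, 0.4, 7.8]),   # e.g. liver, brain, kidney
    "GENE_B": np.array([11.0, 0.9, 6.5]),
}
rna = {
    "GENE_A": np.array([30.0, 2.0, 15.0]),  # matching transcriptome values
}

def expression_proximity(gene1, gene2, table=protein):
    """Rank-correlation proximity between two expression vectors.
    The actual GeneCards metric is not described in the abstract; Spearman
    correlation is used here only as an illustrative stand-in."""
    rho, _ = spearmanr(table[gene1], table[gene2])
    return rho

def protein_rna_ratio(gene):
    """Per-tissue protein/RNA expression ratio for one gene."""
    return protein[gene] / rna[gene]

print(expression_proximity("GENE_A", "GENE_B"))
print(protein_rna_ratio("GENE_A"))
```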

  15. 4Kids.org: Topical, Searchable, and Safe Internet-Based Resource for Children and Youth

    ERIC Educational Resources Information Center

    Bacon, Melanie; Blood, Leslie; Ault, Marilyn; Adams, Doug

    2008-01-01

    4Kids.org is an online resource with an accompanying syndicated print publication created to promote safe access to websites and technology literacy. 4Kids.org, created by ALTEC at the University of Kansas in 1995, provides a variety of Internet-based activities as well as access to a database of websites reviewed for educational content,…

  16. Online Resources for Identifying Evidence-Based, Out-of-School Time Programs: A User's Guide. Research-to-Results Brief. Publication #2009-36

    ERIC Educational Resources Information Center

    Terzian, Mary; Moore, Kristin Anderson; Williams-Taylor, Lisa; Nguyen, Hoan

    2009-01-01

    Child Trends produced this Guide to assist funders, administrators, and practitioners in identifying and navigating online resources to find evidence-based programs that may be appropriate for their target populations and communities. The Guide offers an overview of 21 of these resources--11 searchable online databases, 2 online interactive…

  17. Transitioning Newborns from NICU to Home: Family Information Packet

    MedlinePlus

  18. Next Steps After Your Diagnosis: Finding Information and Support

    MedlinePlus

  19. Blood Thinner Pills: Your Guide to Using Them Safely

    MedlinePlus

  20. Question Builder: Be Prepared for Your Next Medical Appointment

    MedlinePlus

  1. An international database of radionuclide concentration ratios for wildlife: development and uses.

    PubMed

    Copplestone, D; Beresford, N A; Brown, J E; Yankovich, T

    2013-12-01

    A key element of most systems for assessing the impact of radionuclides on the environment is a means to estimate the transfer of radionuclides to organisms. To facilitate this, an international wildlife transfer database has been developed to provide an online, searchable compilation of transfer parameters in the form of equilibrium-based whole-organism to media concentration ratios. This paper describes the derivation of the wildlife transfer database, the key data sources it contains and highlights the applications for the data. Copyright © 2013 Elsevier Ltd. All rights reserved.
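
    As a minimal illustration of the transfer parameter the database compiles, the sketch below computes an equilibrium whole-organism to media concentration ratio from assumed activity concentrations; the numbers and units are illustrative only and are not drawn from the wildlife transfer database.

```python
def concentration_ratio(organism_bq_per_kg: float, media_bq_per_unit: float) -> float:
    """Equilibrium concentration ratio (CR): whole-organism activity
    concentration divided by the activity concentration in the reference
    medium (e.g. Bq/kg fresh weight over Bq/L for freshwater).
    The values used below are illustrative, not taken from the database."""
    return organism_bq_per_kg / media_bq_per_unit

# e.g. an organism containing 50 Bq/kg (fresh weight) of a radionuclide,
# living in water at 0.2 Bq/L, gives CR = 250 L/kg.
print(concentration_ratio(50.0, 0.2))
```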

  2. FDA toxicity databases and real-time data entry.

    PubMed

    Arvidson, Kirk B

    2008-11-15

    Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.

  3. The Primary Care Electronic Library: RSS feeds using SNOMED-CT indexing for dynamic content delivery.

    PubMed

    Robinson, Judas; de Lusignan, Simon; Kostkova, Patty; Madge, Bruce; Marsh, A; Biniaris, C

    2006-01-01

    Rich Site Summary (RSS) feeds are a method for disseminating and syndicating the contents of a website using extensible mark-up language (XML). The Primary Care Electronic Library (PCEL) distributes recent additions to the site in the form of an RSS feed. When new resources are added to PCEL, they are manually assigned medical subject headings (MeSH terms), which are then automatically mapped to SNOMED-CT terms using the Unified Medical Language System (UMLS) Metathesaurus. The library is thus searchable using MeSH or SNOMED-CT. Our syndicate partner wished to have remote access to PCEL coronary heart disease (CHD) information resources based on SNOMED-CT search terms. To pilot the supply of relevant information resources in response to clinically coded requests, using RSS syndication for transmission between web servers. Our syndicate partner provided a list of CHD SNOMED-CT terms to its end-users, a list which was coded according to UMLS specifications. When the end-user requested relevant information resources, this request was relayed from our syndicate partner's web server to the PCEL web server. The relevant resources were retrieved from the PCEL MySQL database. This database is accessed using a server side scripting language (PHP), which enables the production of dynamic RSS feeds on the basis of Source Asserted Identifiers (CODEs) contained in UMLS. Retrieving resources using SNOMED-CT terms using syndication can be used to build a functioning application. The process from request to display of syndicated resources took less than one second. The results of the pilot illustrate that it is possible to exchange data between servers using RSS syndication. This method could be utilised dynamically to supply digital library resources to a clinical system with SNOMED-CT data used as the standard of reference.
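
    A minimal sketch of the idea described above - returning a dynamically generated RSS feed for a clinically coded request - is shown below in Python, with SQLite standing in for PCEL's MySQL/PHP stack; the table name, columns and example SNOMED-CT code are assumptions made for illustration, not PCEL's actual schema.

```python
import sqlite3
from xml.sax.saxutils import escape

def resources_for_code(conn, snomed_code):
    """Look up library resources indexed under a SNOMED-CT concept code.
    Table and column names are assumptions for illustration; PCEL itself
    used MySQL and PHP with UMLS-mapped MeSH/SNOMED-CT terms."""
    cur = conn.execute(
        "SELECT title, url FROM resources WHERE snomed_code = ?", (snomed_code,))
    return cur.fetchall()

def to_rss(items, feed_title="CHD resources"):
    """Serialize (title, url) pairs as a minimal RSS 2.0 feed."""
    entries = "".join(
        f"<item><title>{escape(t)}</title><link>{escape(u)}</link></item>"
        for t, u in items)
    return (f'<?xml version="1.0"?><rss version="2.0"><channel>'
            f"<title>{escape(feed_title)}</title>{entries}</channel></rss>")

# Demo with an in-memory database, one hypothetical resource and an
# example concept code standing in for a coded CHD request.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE resources (title TEXT, url TEXT, snomed_code TEXT)")
conn.execute("INSERT INTO resources VALUES (?, ?, ?)",
             ("Angina guideline", "http://example.org/angina", "194828000"))
print(to_rss(resources_for_code(conn, "194828000")))
```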

  4. Be More Involved in Your Health Care: Tips for Patients

    MedlinePlus

  5. Re-Framing Teacher Evaluation Discourse in the Media: An Analysis and Narrative-Based Proposal

    ERIC Educational Resources Information Center

    Ulmer, Jasmine B.

    2016-01-01

    Recent publications by major newspapers in the USA have reinforced the perception that teacher quality represents a national crisis. By releasing individual teacher evaluation data in online, searchable databases, several newspapers have influenced public perceptions of teachers and teaching. A framing analysis of selected media events and…

  6. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies

    PubMed Central

    Cserhati, Matyas F.; Pandey, Sanjit; Beaudoin, James J.; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S.

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33 017 407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. Database URL: http://nntc-dcc.unmc.edu PMID:26228431

  7. Digital Pathology Evaluation in the Multicenter Nephrotic Syndrome Study Network (NEPTUNE)

    PubMed Central

    Nast, Cynthia C.; Jennette, J. Charles; Hodgin, Jeffrey B.; Herzenberg, Andrew M.; Lemley, Kevin V.; Conway, Catherine M.; Kopp, Jeffrey B.; Kretzler, Matthias; Lienczewski, Christa; Avila-Casado, Carmen; Bagnasco, Serena; Sethi, Sanjeev; Tomaszewski, John; Gasim, Adil H.

    2013-01-01

    Pathology consensus review for clinical trials and disease classification has historically been performed by manual light microscopy with sequential section review by study pathologists, or multi-headed microscope review. Limitations of this approach include high intra- and inter-reader variability, costs, and delays for slide mailing and consensus reviews. To improve this, the Nephrotic Syndrome Study Network (NEPTUNE) is systematically applying digital pathology review in a multicenter study using renal biopsy whole slide imaging (WSI) for observation-based data collection. Study pathology materials are acquired, scanned, uploaded, and stored in a web-based information system that is accessed through a web-browser interface. Quality control includes metadata and image quality review. Initially, digital slides are annotated, with each glomerulus identified, given a unique number, and maintained in all levels until the glomerulus disappears or sections end. The software allows viewing and annotation of multiple slide sections concurrently. Analysis utilizes “descriptors” for patterns of injury, rather than diagnoses, in renal parenchymal compartments. This multidimensional representation via WSI allows more accurate glomerular counting and identification of all lesions in each glomerulus, with data available in a searchable database. The use of WSI brings about efficiency critical to pathology review in a clinical trial setting, including independent review by multiple pathologists, improved intraobserver and interobserver reproducibility, efficiencies and risk reduction in slide circulation and mailing, centralized management of data integrity and slide images for current or future studies, and web-based consensus meetings. The overall effect is improved incorporation of pathology review in a budget-neutral approach. PMID:23393107

  8. Patent Databases. . .A Survey of What Is Available from DIALOG, Questel, SDC, Pergamon and INPADOC.

    ERIC Educational Resources Information Center

    Kulp, Carol S.

    1984-01-01

    Presents survey of two groups of databases covering patent literature: patent literature only and general literature that includes patents relevant to subject area of database. Description of databases and comparison tables for patent and general databases (cost, country coverage, years covered, update frequency, file size, and searchable data…

  9. RadNet Databases and Reports

    EPA Pesticide Factsheets

    EPA’s RadNet data are available for viewing in a searchable database or as PDF reports. Historical and current RadNet monitoring data are used to estimate long-term trends in environmental radiation levels.

  10. Double-u double-u double-u dot APIC dot org: a review of the APIC World Wide Web site.

    PubMed

    Harr, J

    1996-12-01

    The widespread use of the Internet and the development of the World Wide Web have led to a revolution in electronic communication and information access. The Association for Professionals in Infection Control and Epidemiology (APIC) has developed a site on the World Wide Web to provide mechanisms for international on-line information access and exchange on issues related to the practice of infection control and the application of epidemiology. From the home page of the APIC Web site, users can access information on professional resources, publications, educational offerings, governmental affairs, the APIC organization, and the infection control profession. Among the chief features of the site is a discussion forum for posing questions and sharing information about infection control and epidemiology. The site also contains a searchable database of practice-related abstracts and descriptions and order forms for APIC publications. Users will find continuing education course descriptions and registration forms, legislative and regulatory action alerts and a congressional mailer, chapter and committee information, and infection control information of interest to the general public. APIC is considering several potential future enhancements to its Web site and will continue to review the site's content and features to provide current and useful information to infection control professionals.

  11. The National NeuroAIDS Tissue Consortium (NNTC) Database: an integrated database for HIV-related studies.

    PubMed

    Cserhati, Matyas F; Pandey, Sanjit; Beaudoin, James J; Baccaglini, Lorena; Guda, Chittibabu; Fox, Howard S

    2015-01-01

    We herein present the National NeuroAIDS Tissue Consortium-Data Coordinating Center (NNTC-DCC) database, which is the only available database for neuroAIDS studies that contains data in an integrated, standardized form. This database has been created in conjunction with the NNTC, which provides human tissue and biofluid samples to individual researchers to conduct studies focused on neuroAIDS. The database contains experimental datasets from 1206 subjects for the following categories (which are further broken down into subcategories): gene expression, genotype, proteins, endo-exo-chemicals, morphometrics and other (miscellaneous) data. The database also contains a wide variety of downloadable data and metadata for 95 HIV-related studies covering 170 assays from 61 principal investigators. The data represent 76 tissue types, 25 measurement types, and 38 technology types, and reaches a total of 33,017,407 data points. We used the ISA platform to create the database and develop a searchable web interface for querying the data. A gene search tool is also available, which searches for NCBI GEO datasets associated with selected genes. The database is manually curated with many user-friendly features, and is cross-linked to the NCBI, HUGO and PubMed databases. A free registration is required for qualified users to access the database. © The Author(s) 2015. Published by Oxford University Press.

  12. Virtual Evidence Cart - RP (VEC-RP).

    PubMed

    Liu, Fang; Fontelo, Paul; Muin, Michael; Ackerman, Michael

    2005-01-01

    VEC-RP (Virtual Evidence Cart) is an open, Web-based, searchable collection of clinical questions and relevant references from MEDLINE/PubMed for healthcare professionals. The architecture consists of four parts: clinical questions, relevant articles from MEDLINE/PubMed, "bottom-line" answers, and peer reviews of entries. Only registered users can add reviews but unregistered users can read them. Feedback from physicians who tested the system, mostly in the Philippines (RP), is positive.

  13. PROGRESS REPORT ON THE DSSTOX DATABASE NETWORK: NEWLY LAUNCHED WEBSITE, APPLICATIONS, FUTURE PLANS

    EPA Science Inventory

    Progress Report on the DSSTox Database Network: Newly Launched Website, Applications, Future Plans

    Progress will be reported on development of the Distributed Structure-Searchable Toxicity (DSSTox) Database Network and the newly launched public website that coordinates and...

  14. OSTMED.DR®, an Osteopathic Medicine Digital Library.

    PubMed

    Fitterling, Lori; Powers, Elaine; Vardell, Emily

    2018-01-01

    The OSTMED.DR® database provides access to both citation and full-text osteopathic literature, including the Journal of the American Osteopathic Association. Currently, it is a free database searchable using basic and advanced search features.

  15. Data Mining the Ogle-II I-band Database for Eclipsing Binary Stars

    NASA Astrophysics Data System (ADS)

    Ciocca, M.

    2013-08-01

    The OGLE I-band database is a searchable database of quality photometric data available to the public. During Phase 2 of the experiment, known as "OGLE-II", I-band observations were made over a period of approximately 1,000 days, resulting in over 10^10 measurements of more than 40 million stars. This was accomplished by using a filter with a passband near the standard Cousins Ic. The database of these observations is fully searchable using the MySQL database engine, and provides the magnitude measurements and their uncertainties. In this work, a program of data mining the OGLE I-band database was performed, resulting in the discovery of 42 previously unreported eclipsing binaries. Using the software package Peranso (Vanmuster 2011) to analyze the light curves obtained from OGLE-II, the eclipsing types, the epochs and the periods of these eclipsing variables were determined, to one part in 10^6. A preliminary attempt to model the physical parameters of these binaries was also performed, using the Binary Maker 3 software (Bradstreet and Steelman 2004).
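
    The period search in the study was done with Peranso; the sketch below uses an Astropy Lomb-Scargle periodogram as a generic stand-in to show how a candidate period can be recovered from sparsely sampled photometry. The synthetic light curve and all parameter values are invented for the example.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic light curve: a simple sinusoid stands in for a periodic variable;
# real input would be the (time, magnitude, error) columns from an OGLE-II query,
# and eclipsing-binary classification would require phase-folding afterwards.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1000.0, 500))       # ~1000 days of sparse sampling
true_period = 2.47                                # days (arbitrary example)
mag = 15.0 + 0.05 * np.sin(2 * np.pi * t / true_period) + rng.normal(0.0, 0.01, t.size)
err = np.full_like(t, 0.01)

# Periodogram search over a chosen frequency range; the peak frequency
# gives a candidate period to refine and fold.
frequency, power = LombScargle(t, mag, err).autopower(
    minimum_frequency=0.05, maximum_frequency=5.0)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"candidate period: {best_period:.4f} d")
```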

  16. dbMDEGA: a database for meta-analysis of differentially expressed genes in autism spectrum disorder.

    PubMed

    Zhang, Shuyun; Deng, Libin; Jia, Qiyue; Huang, Shaoting; Gu, Junwang; Zhou, Fankun; Gao, Meng; Sun, Xinyi; Feng, Chang; Fan, Guangqin

    2017-11-16

    Autism spectrum disorders (ASD) are hereditary, heterogeneous and biologically complex neurodevelopmental disorders. Individual studies on gene expression in ASD cannot provide clear consensus conclusions. Therefore, a systematic review to synthesize the current findings from brain tissues and a search tool to share the meta-analysis results are urgently needed. Here, we conducted a meta-analysis of brain gene expression profiles in the current reported human ASD expression datasets (with 84 frozen male cortex samples, 17 female cortex samples, 32 cerebellum samples and 4 formalin fixed samples) and knock-out mouse ASD model expression datasets (with 80 collective brain samples). Then, we applied R language software and developed an interactive shared and updated database (dbMDEGA) displaying the results of meta-analysis of data from ASD studies regarding differentially expressed genes (DEGs) in the brain. This database, dbMDEGA ( https://dbmdega.shinyapps.io/dbMDEGA/ ), is a publicly available web-portal for manual annotation and visualization of DEGs in the brain from data from ASD studies. This database uniquely presents meta-analysis values and homologous forest plots of DEGs in brain tissues. Gene entries are annotated with meta-values, statistical values and forest plots of DEGs in brain samples. This database aims to provide searchable meta-analysis results based on the current reported brain gene expression datasets of ASD to help detect candidate genes underlying this disorder. This new analytical tool may provide valuable assistance in the discovery of DEGs and the elucidation of the molecular pathogenicity of ASD. This database model may be replicated to study other disorders.

  17. India Allele Finder: a web-based annotation tool for identifying common alleles in next-generation sequencing data of Indian origin.

    PubMed

    Zhang, Jimmy F; James, Francis; Shukla, Anju; Girisha, Katta M; Paciorkowski, Alex R

    2017-06-27

    We built India Allele Finder, an online searchable database and command line tool, that gives researchers access to variant frequencies of Indian Telugu individuals, using publicly available fastq data from the 1000 Genomes Project. Access to appropriate population-based genomic variant annotation can accelerate the interpretation of genomic sequencing data. In particular, exome analysis of individuals of Indian descent will identify population variants not reflected in European exomes, complicating genomic analysis for such individuals. India Allele Finder offers improved ease-of-use to investigators seeking to identify and annotate sequencing data from Indian populations. We describe the use of India Allele Finder to identify common population variants in a disease quartet whole exome dataset, reducing the number of candidate single nucleotide variants from 84 to 7. India Allele Finder is freely available to investigators to annotate genomic sequencing data from Indian populations. Use of India Allele Finder allows efficient identification of population variants in genomic sequencing data, and is an example of a population-specific annotation tool that simplifies analysis and encourages international collaboration in genomics research.
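
    The sketch below illustrates the filtering idea behind such population-frequency annotation - dropping candidate variants that are common in the reference panel - using a hypothetical allele-frequency table and cutoff; it is not the India Allele Finder command-line interface itself.

```python
# Minimal sketch of population-frequency filtering. The frequency table,
# threshold, and variant keys here are hypothetical.
indian_allele_freq = {
    ("chr1", 12345, "A", "G"): 0.31,    # common in the reference panel
    ("chr2", 67890, "C", "T"): 0.0005,  # rare
}

def filter_common(candidates, freq_table, max_af=0.01):
    """Keep only variants whose population allele frequency is below max_af
    (or that are absent from the panel altogether)."""
    return [v for v in candidates if freq_table.get(v, 0.0) < max_af]

candidates = [("chr1", 12345, "A", "G"), ("chr2", 67890, "C", "T")]
print(filter_common(candidates, indian_allele_freq))  # keeps only the rare variant
```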

  18. University Real Estate Development Database: A Database-Driven Internet Research Tool

    ERIC Educational Resources Information Center

    Wiewel, Wim; Kunst, Kara

    2008-01-01

    The University Real Estate Development Database is an Internet resource developed by the University of Baltimore for the Lincoln Institute of Land Policy, containing over six hundred cases of university expansion outside of traditional campus boundaries. The University Real Estate Development database is a searchable collection of real estate…

  19. FDA toxicity databases and real-time data entry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arvidson, Kirk B.

    Structure-searchable electronic databases are valuable new tools that are assisting the FDA in its mission to promptly and efficiently review incoming submissions for regulatory approval of new food additives and food contact substances. The Center for Food Safety and Applied Nutrition's Office of Food Additive Safety (CFSAN/OFAS), in collaboration with Leadscope, Inc., is consolidating genetic toxicity data submitted in food additive petitions from the 1960s to the present day. The Center for Drug Evaluation and Research, Office of Pharmaceutical Science's Informatics and Computational Safety Analysis Staff (CDER/OPS/ICSAS) is separately gathering similar information from their submissions. Presently, these data are distributed in various locations such as paper files, microfiche, and non-standardized toxicology memoranda. The organization of the data into a consistent, searchable format will reduce paperwork, expedite the toxicology review process, and provide valuable information to industry that is currently available only to the FDA. Furthermore, by combining chemical structures with genetic toxicity information, biologically active moieties can be identified and used to develop quantitative structure-activity relationship (QSAR) modeling and testing guidelines. Additionally, chemicals devoid of toxicity data can be compared to known structures, allowing for improved safety review through the identification and analysis of structural analogs. Four database frameworks have been created: bacterial mutagenesis, in vitro chromosome aberration, in vitro mammalian mutagenesis, and in vivo micronucleus. Controlled vocabularies for these databases have been established. The four separate genetic toxicity databases are compiled into a single, structurally-searchable database for easy accessibility of the toxicity information. Beyond the genetic toxicity databases described here, additional databases for subchronic, chronic, and teratogenicity studies have been prepared.

  20. SU-E-T-544: A Radiation Oncology-Specific Multi-Institutional Federated Database: Initial Implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrickson, K; Phillips, M; Fishburn, M

    Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed issues of differences between institutions in workflow and types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well-designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored for the workflow of individual institutions. Federation of database queries--the ultimate goal of the project--was tested using artificial patient data. The tests serve as proof-of-principle that the system as a whole--from data collection and entry to providing responses to research queries of the federated database--was viable. Questions surrounding inter-institutional use of patient data for research have not yet been fully resolved. Conclusions: The migration from unstructured data mainly in the form of notes and documents to searchable, structured data is difficult. Making the transition requires cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. The original database schema design is critical to providing enough flexibility for multi-institutional use to improve each institution's ability to study outcomes, determine best practices, and support research. The project has demonstrated the feasibility of deploying a federated database environment for research purposes to multiple institutions.

  1. eBASIS (Bioactive Substances in Food Information Systems) and Bioactive Intakes: Major Updates of the Bioactive Compound Composition and Beneficial Bioeffects Database and the Development of a Probabilistic Model to Assess Intakes in Europe

    PubMed Central

    Plumb, Jenny; Pigat, Sandrine; Bompola, Foteini; Cushen, Maeve; Pinchen, Hannah; Nørby, Eric; Astley, Siân; Lyons, Jacqueline; Kiely, Mairead; Finglas, Paul

    2017-01-01

    eBASIS (Bioactive Substances in Food Information Systems), a web-based database that contains compositional and biological effects data for bioactive compounds of plant origin, has been updated with new data on fruits and vegetables, wheat and, due to some evidence of potential beneficial effects, extended to include meat bioactives. eBASIS remains one of only a handful of comprehensive and searchable databases, with up-to-date coherent and validated scientific information on the composition of food bioactives and their putative health benefits. The database has a user-friendly, efficient, and flexible interface facilitating use by both the scientific community and food industry. Overall, eBASIS contains data for 267 foods, covering the composition of 794 bioactive compounds, from 1147 quality-evaluated peer-reviewed publications, together with information from 567 publications describing beneficial bioeffect studies carried out in humans. This paper highlights recent updates and expansion of eBASIS and the newly-developed link to a probabilistic intake model, allowing exposure assessment of dietary bioactive compounds to be estimated and modelled in human populations when used in conjunction with national food consumption data. This new tool could assist small- and medium-sized enterprises (SMEs) in the development of food product health claim dossiers for submission to the European Food Safety Authority (EFSA). PMID:28333085

  2. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems.

    PubMed

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com © The Author(s) 2015. Published by Oxford University Press.
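
    Since the networks are stored as JSON documents in a MongoDB instance, a query for networks mentioning a given gene might look like the hedged pymongo sketch below; the connection string, database, collection and field names are assumptions for illustration and do not reflect the actual CausalBioNet schema, which is normally accessed through the website.

```python
from pymongo import MongoClient

# The connection string, database, collection, and field names below are
# assumptions for illustration (and require a local MongoDB to run); the
# public resource is queried through its web interface at http://causalbionet.com.
client = MongoClient("mongodb://localhost:27017")
db = client["causalbionet_demo"]

def networks_mentioning(gene_symbol):
    """Return network documents whose node list mentions a gene symbol.
    Assumes each network is a JSON document with a 'nodes' array of
    objects carrying a 'label' field."""
    return list(db.networks.find({"nodes.label": gene_symbol},
                                 {"name": 1, "version": 1}))

for net in networks_mentioning("TNF"):
    print(net.get("name"), net.get("version"))
```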

  3. DSSTOX STRUCTURE-SEARCHABLE PUBLIC TOXICITY DATABASE NETWORK: CURRENT PROGRESS AND NEW INITIATIVES TO IMPROVE CHEMO-BIOINFORMATICS CAPABILITIES

    EPA Science Inventory

    The EPA DSSTox website (http://www.epa.gov/nheerl/dsstox) publishes standardized, structure-annotated toxicity databases, covering a broad range of toxicity disciplines. Each DSSTox database features documentation written in collaboration with the source authors and toxicity expe...

  4. Project management web tools at the MICE experiment

    NASA Astrophysics Data System (ADS)

    Coney, L. R.; Tunnell, C. D.

    2012-12-01

    Project management tools like Trac are commonly used within the open-source community to coordinate projects. The Muon Ionization Cooling Experiment (MICE) uses the project management web application Redmine to host mice.rl.ac.uk. Many groups within the experiment have a Redmine project: analysis, computing and software (including offline, online, controls and monitoring, and database subgroups), executive board, and operations. All of these groups use the website to communicate, track effort, develop schedules, and maintain documentation. The issue tracker is a rich tool that is used to identify tasks and monitor progress within groups on timescales ranging from immediate and unexpected problems to milestones that cover the life of the experiment. It allows the prioritization of tasks according to time-sensitivity, while providing a searchable record of work that has been done. This record of work can be used to measure both individual and overall group activity, identify areas lacking sufficient personnel or effort, and as a measure of progress against the schedule. Given that MICE, like many particle physics experiments, is an international community, such a system is required to allow easy communication within a global collaboration. Unlike systems that are purely wiki-based, the structure of a project management tool like Redmine allows information to be maintained in a more structured and logical fashion.

  5. DSSTox and Chemical Information Technologies in Support of PredictiveToxicology

    EPA Science Inventory

    The EPA NCCT Distributed Structure-Searchable Toxicity (DSSTox) Database project initially focused on the curation and publication of high-quality, standardized, chemical structure-annotated toxicity databases for use in structure-activity relationship (SAR) modeling. In recent y...

  6. The National State Policy Database. Quick Turn Around (QTA).

    ERIC Educational Resources Information Center

    Ahearn, Eileen; Jackson, Terry

    This paper describes the National State Policy Database (NSPD), a full-text searchable database of state and federal education regulations for special education. It summarizes the history of the NSPD and reports on a survey of state directors or their designees as to their use of the database and their suggestions for its future expansion. The…

  7. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning.

    PubMed

    Boas, F Edward; Srimathveeravalli, Govindarajan; Durack, Jeremy C; Kaye, Elena A; Erinjeri, Joseph P; Ziv, Etay; Maybody, Majid; Yarmohammadi, Hooman; Solomon, Stephen B

    2017-05-01

    To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Ice ball size and shape was simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1-6 cryoablation probes and 1-2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produces the desired ice ball shape and dimensions. Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
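
    The simulations above are based on the Pennes bioheat equation; the sketch below shows a one-dimensional explicit finite-difference step of that equation with generic soft-tissue parameters. It is only a toy illustration of the governing equation - the study's simulations were multi-probe, multi-dimensional and accounted for freezing - and every numerical value here is an assumption.

```python
import numpy as np

# One-dimensional explicit finite-difference step of the Pennes bioheat equation:
#   rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + Q_m
# Parameter values are generic soft-tissue estimates for illustration only;
# a real cryoablation model would be 3-D and include phase change (freezing).
rho, c, k = 1050.0, 3600.0, 0.5           # tissue density, heat capacity, conductivity
rho_b, c_b, w_b = 1050.0, 3600.0, 0.0005  # blood properties and perfusion rate (1/s)
T_a, Q_m = 37.0, 400.0                    # arterial temperature (C), metabolic heat (W/m^3)

dx, dt = 1e-3, 0.05                       # grid spacing (m), time step (s)
assert dt < rho * c * dx**2 / (2 * k)     # explicit-scheme stability check

T = np.full(101, 37.0)                    # 10 cm of tissue at body temperature
T[50] = -140.0                            # cryoprobe held at -140 C in the middle

for _ in range(2000):                     # ~100 s of simulated time
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt / (rho * c) * (k * lap + w_b * rho_b * c_b * (T_a - T[1:-1]) + Q_m)
    T[50] = -140.0                        # probe temperature fixed (Dirichlet condition)

print("tissue below 0 C spans about", (T < 0).sum() * dx * 100, "cm")
```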

  8. Distribution and Features of the Six Classes of Peroxiredoxins

    PubMed Central

    Poole, Leslie B.; Nelson, Kimberly J.

    2016-01-01

    Peroxiredoxins are cysteine-dependent peroxide reductases that group into 6 different, structurally discernable classes. In 2011, our research team reported the application of a bioinformatic approach called active site profiling to extract active site-proximal sequence segments from the 29 distinct, structurally-characterized peroxiredoxins available at the time. These extracted sequences were then used to create unique profiles for the six groups which were subsequently used to search GenBank(nr), allowing identification of ∼3500 peroxiredoxin sequences and their respective subgroups. Summarized in this minireview are the features and phylogenetic distributions of each of these peroxiredoxin subgroups; an example is also provided illustrating the use of the web accessible, searchable database known as PREX to identify subfamily-specific peroxiredoxin sequences for the organism Vitis vinifera (grape). PMID:26810075

  9. 76 FR 79225 - 2002 Reopened-Previously Denied Determinations; Notice of Negative Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-21

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria... Department's Web site at tradeact/taa/taa-- search--form.cfm under the searchable listing of determinations...

  10. 76 FR 81990 - 2002 Reopened-Previously Denied Determinations; Notice of Negative Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria... Department's Web site at tradeact/taa/taa-- search--form.cfm under the searchable listing of determinations...

  11. 77 FR 3506 - 2002 Reopened-Previously Denied Determinations; Notice of Negative Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-24

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria... Department's Web site at tradeact/taa/taa-- search--form.cfm under the searchable listing of determinations...

  12. Software Application Profile: Opal and Mica: open-source software solutions for epidemiological data management, harmonization and dissemination

    PubMed Central

    Doiron, Dany; Marcon, Yannick; Fortier, Isabel; Burton, Paul; Ferretti, Vincent

    2017-01-01

    Motivation: Improving the dissemination of information on existing epidemiological studies and facilitating the interoperability of study databases are essential to maximizing the use of resources and accelerating improvements in health. To address this, Maelstrom Research proposes Opal and Mica, two inter-operable open-source software packages providing out-of-the-box solutions for epidemiological data management, harmonization and dissemination. Implementation: Opal and Mica are two standalone but inter-operable web applications written in Java, JavaScript and PHP. They provide web services and modern user interfaces to access them. General features: Opal allows users to import, manage, annotate and harmonize study data. Mica is used to build searchable web portals disseminating study and variable metadata. When used conjointly, Mica users can securely query and retrieve summary statistics on geographically dispersed Opal servers in real-time. Integration with the DataSHIELD approach allows conducting more complex federated analyses involving statistical models. Availability: Opal and Mica are open-source and freely available at [www.obiba.org] under a General Public License (GPL) version 3, and the metadata models and taxonomies that accompany them are available under a Creative Commons licence. PMID:29025122

  13. How To Do Field Searching in Web Search Engines: A Field Trip.

    ERIC Educational Resources Information Center

    Hock, Ran

    1998-01-01

    Describes the field search capabilities of selected Web search engines (AltaVista, HotBot, Infoseek, Lycos, Yahoo!) and includes a chart outlining what fields (date, title, URL, images, audio, video, links, page depth) are searchable, where to go on the page to search them, the syntax required (if any), and how field search queries are entered.…

  14. An Analysis of Implementation Issues for the Searchable Content Object Reference Model (SCORM) in Navy Education and Training

    DTIC Science & Technology

    2003-09-01

    content objects to be used and reused within civilian and military education and training Learning Management Systems (LMS) across the World Wide Web.

  15. New Catalog of Resources Enables Paleogeosciences Research

    NASA Astrophysics Data System (ADS)

    Lingo, R. C.; Horlick, K. A.; Anderson, D. M.

    2014-12-01

    The 21st century promises a new era for scientists of all disciplines, the age where cyber infrastructure enables research and education and fuels discovery. EarthCube is a working community of over 2,500 scientists and students of many Earth Science disciplines who are looking to build bridges between disciplines. The EarthCube initiative will create a digital infrastructure that connects databases, software, and repositories. A catalog of resources (databases, software, repositories) has been produced by the Research Coordination Network for Paleogeosciences to improve the discoverability of resources. The Catalog is currently made available within the larger-scope CINERGI geosciences portal (http://hydro10.sdsc.edu/geoportal/catalog/main/home.page). Other distribution points and web services are planned, using linked data, content services for the web, and XML descriptions that can be harvested using metadata protocols. The databases provide searchable interfaces to find data sets that would otherwise remain dark data, hidden in drawers and on personal computers. The software will be described in catalog entries so just one click will lead users to methods and analytical tools that many geoscientists were unaware of. The repositories listed in the Paleogeosciences Catalog contain physical samples found all across the globe, from natural history museums to the basements of university buildings. EarthCube has over 250 databases, 300 software systems, and 200 repositories which will grow in the coming year. When completed, geoscientists across the world will be connected into a productive workflow for managing, sharing, and exploring geoscience data and information that expedites collaboration and innovation within the paleogeosciences, potentially bringing about new interdisciplinary discoveries.

  16. Nuclear Energy Infrastructure Database Description and User’s Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidrich, Brenden

    In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE’s infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.

  17. Transformation of Developmental Neurotoxicity Data into a Structure-Searchable Relational Database

    EPA Science Inventory

    A database of neurotoxicants is critical to support the development and validation of animal alternatives for neurotoxicity. Validation of in vitro test methods can only be done using known animal and human neurotoxicants producing defined responses for neurochemical, neuropatho...

  18. AN EPA SPONSORED LITERATURE REVIEW DATABASE TO SUPPORT STRESSOR IDENTIFICATION

    EPA Science Inventory

    The Causal Analysis/Diagnosis Decision Information System (CADDIS) is an EPA decision-support system currently under development for evaluating the biological impact of stressors on water bodies. In support of CADDIS, EPA is developing CADLIT, a searchable database of the scient...

  19. 77 FR 5577 - 2002 Reopened-Previously Denied Determinations; Notice of Revised Denied Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-03

    ... reconsideration investigation revealed that the following workers groups have met the certification criteria under... available on the Department's Web site at tradeact/taa/taa-- search--form.cfm under the searchable listing...

  20. 77 FR 5577 - 2002 Reopened-Previously Denied Determinations; Notice of Negative Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-03

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria... available on the Department's Web site at tradeact/taa/taa-- search--form.cfm under the searchable listing...

  1. 77 FR 6592 - Notice of Negative Determinations on Reconsideration Under the Trade Adjustment Assistance...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-08

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria... are available on the Department's Web site at tradeact/taa/taa--search--form.cfm under the searchable...

  2. 76 FR 77558 - 2002 Reopened-Previously Denied Determinations; Notice of Revised Denied Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-13

    ... reconsideration investigation revealed that the following workers groups have met the certification criteria under... are available on the Department's Web site at tradeact/taa/taa--search--form.cfm under the searchable...

  3. Assessment of medication adherence app features, functionality, and health literacy level and the creation of a searchable Web-based adherence app resource for health care professionals and patients.

    PubMed

    Heldenbrand, Seth; Martin, Bradley C; Gubbins, Paul O; Hadden, Kristie; Renna, Catherine; Shilling, Rebecca; Dayer, Lindsey

    2016-01-01

    To assess the features and level of health literacy (HL) of available medication adherence apps and to create a searchable website to assist health care providers (HCP) and patients in identifying quality adherence apps. Medication nonadherence continues to be a significant problem and leads to poor health outcomes and avoidable health care expense. The average adherence rate for chronic medications, regardless of disease state, is approximately 50%, leaving significant room for improvement. Smartphone adherence apps are a novel resource to address medication nonadherence. With widespread smartphone use and the growing number of adherence apps, both HCP and patients should be able to identify quality adherence apps to maximize potential benefits. Assess the features, functionality and level of HL of available adherence apps and create a searchable website to help both HCP and patients identify quality adherence apps. Online marketplaces (iTunes, Google Play, Blackberry) were searched in June of 2014 to identify available adherence apps. Online descriptions were recorded and scored based on 28 author-identified features across 4 domains. The 100 highest-scoring apps were user-tested with a standardized regimen to evaluate their functionality and level of HL. 461 adherence apps were identified. 367 unique apps were evaluated after removing "Lite/Trial" versions. The median initial score based on descriptions was 15 (max of 68; range: 3 to 47). Only 77 of the 100 highest-scoring apps completed user-testing and HL evaluations. The median overall user-testing score was 30 (max of 73; range: 16 to 55). App design, functionality, and level of HL vary widely among adherence apps. While no app is perfect, several apps scored highly across all domains. The website www.medappfinder.com is a searchable tool that helps HCP and patients identify quality apps in a crowded marketplace. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  4. The Technology Education Graduate Research Database, 1892-2000. CTTE Monograph.

    ERIC Educational Resources Information Center

    Reed, Philip A., Ed.

    The Technology Education Graduate Research Database (TEGRD) was designed in two parts. The first part was a 384 page bibliography of theses and dissertations from 1892-2000. The second part was an online, searchable database of graduate research completed within technology education from 1892 to the present. The primary goals of the project were:…

  5. 77 FR 3506 - 2002 Reopened-Previously Denied Determinations; Notice of Revised Denied Determinations On...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-24

    ... reconsideration investigation revealed that the following workers groups have met the certification criteria under... available on the Department's Web site at tradeact/taa/taa--search--form.cfm under the searchable listing of...

  6. DEVELOPMENT OF A STRUCTURE-SEARCHABLE DATABASE FOR PESTICIDE METABOLITES AND ENVIRONMENTAL DEGRADATES

    EPA Science Inventory

    USEPA is modifying and enhancing existing software for the depiction of metabolic maps to provide access via structures to metabolism information and associated data in EPA's Office of Pesticide Programs (OPP). The database includes information submitted to EPA in support of pest...

  7. A Magnetic Petrology Database for Satellite Magnetic Anomaly Interpretations

    NASA Astrophysics Data System (ADS)

    Nazarova, K.; Wasilewski, P.; Didenko, A.; Genshaft, Y.; Pashkevich, I.

    2002-05-01

    A Magnetic Petrology Database (MPDB) is now being compiled at NASA/Goddard Space Flight Center in cooperation with Russian and Ukrainian institutions. The purpose of this database is to provide the geomagnetic community with a comprehensive and user-friendly method of accessing magnetic petrology data via the Internet for more realistic interpretation of satellite magnetic anomalies. Magnetic petrology data have been accumulated at NASA/Goddard Space Flight Center, the United Institute of Physics of the Earth (Russia), and the Institute of Geophysics (Ukraine) over several decades and now consist of many thousands of records in our archives. The MPDB has been, and continues to be, in strong demand, especially since the recent launch into near-Earth orbit of the mini-constellation of three satellites - Oersted (in 1999), Champ (in 2000), and SAC-C (in 2000) - which will provide lithospheric magnetic maps with better spatial and amplitude resolution (about 1 nT). The MPDB is focused on lower crustal and upper mantle rocks and will include data on mantle xenoliths, serpentinized ultramafic rocks, granulites, iron quartzites, and rocks from Archean-Proterozoic metamorphic sequences from around the world. A substantial amount of data comes from the area of the unique Kursk Magnetic Anomaly and the Kola Deep Borehole (which recovered 12 km of continental crust). A prototype MPDB can be found on the Geodynamics Branch web server of Goddard Space Flight Center at http://core2.gsfc.nasa.gov/terr_mag/magnpetr.html. The MPDB employs a searchable relational design and consists of 7 interrelated tables; the schema of the database is shown at http://core2.gsfc.nasa.gov/terr_mag/doc.html. A MySQL database server was used to implement the MPDB, and SQL (Structured Query Language) is used to query it. Query results are presented on the Web using the PHP scripting language and CGI scripts. The prototype MPDB can be searched by major satellite magnetic anomaly, tectonic structure, geographical location, rock type, magnetic properties, chemistry, and reference; see http://core2.gsfc.nasa.gov/terr_mag/query1.html. The database output is an HTML table, a text file, or a downloadable file. This database will be very useful for studies of lithospheric satellite magnetic anomalies on the Earth and other terrestrial planets.
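
    As an illustration of the kind of relational query the record describes, here is a minimal, self-contained sketch in Python using SQLite in place of the MySQL/PHP stack; the table and column names are assumptions for illustration, not the actual MPDB schema.

```python
import sqlite3

# Stand-in for the MPDB relational design: one simplified table instead of
# the seven interrelated tables described in the abstract (schema assumed).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE samples (
        sample_id INTEGER PRIMARY KEY,
        rock_type TEXT,
        anomaly   TEXT,            -- e.g. major satellite magnetic anomaly
        latitude  REAL,
        longitude REAL,
        susceptibility_si REAL,
        reference TEXT
    )
""")
conn.executemany(
    "INSERT INTO samples VALUES (?, ?, ?, ?, ?, ?, ?)",
    [
        (1, "iron quartzite", "Kursk", 51.7, 36.2, 0.35, "hypothetical ref A"),
        (2, "serpentinized ultramafic", "Kursk", 51.9, 36.5, 0.12, "hypothetical ref B"),
        (3, "granulite", "Bangui", 4.4, 18.6, 0.02, "hypothetical ref C"),
    ],
)

# Query by anomaly and magnetic property, the kind of search the MPDB form exposes.
rows = conn.execute(
    "SELECT rock_type, latitude, longitude, susceptibility_si "
    "FROM samples WHERE anomaly = ? AND susceptibility_si > ? "
    "ORDER BY susceptibility_si DESC",
    ("Kursk", 0.1),
).fetchall()
for row in rows:
    print(row)
```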

  8. Number and impact of published scholarly works by pharmacy practice faculty members at accredited US colleges and schools of pharmacy (2001-2003).

    PubMed

    Coleman, Craig I; Schlesselman, Lauren S; Lao, Eang; White, C Michael

    2007-06-15

    To evaluate the quantity and quality of published literature conducted by pharmacy practice faculty members in US colleges and schools of pharmacy for the years 2001-2003. The Web of Science bibliographic database was used to identify publication citations for the years 2001-2003, which were then evaluated in a number of different ways. Faculty members were identified using American Association of Colleges of Pharmacy rosters for the 2000-2001, 2001-2002, and 2002-2003 academic years. Two thousand three hundred seventy-four pharmacy practice faculty members generated 1,896 publications in Web of Science searchable journals. A small number of faculty members (2.1%) were responsible for a large proportion of publications (30.6%), and only 4.9% of faculty members published 2 or more publications in these journals per year. The average impact factor for the top 200 publications was 7.6. Pharmacy practice faculty members contributed substantially to the biomedical literature and their work has had an important impact. A substantial portion of this work has come from a small subset of faculty members.

  9. YPD™, PombePD™ and WormPD™: model organism volumes of the BioKnowledge™ Library, an integrated resource for protein information

    PubMed Central

    Costanzo, Maria C.; Crawford, Matthew E.; Hirschman, Jodi E.; Kranz, Janice E.; Olsen, Philip; Robertson, Laura S.; Skrzypek, Marek S.; Braun, Burkhard R.; Hopkins, Kelley Lennon; Kondu, Pinar; Lengieza, Carey; Lew-Smith, Jodi E.; Tillberg, Michael; Garrels, James I.

    2001-01-01

    The BioKnowledge Library is a relational database and web site (http://www.proteome.com) composed of protein-specific information collected from the scientific literature. Each Protein Report on the web site summarizes and displays published information about a single protein, including its biochemical function, role in the cell and in the whole organism, localization, mutant phenotype and genetic interactions, regulation, domains and motifs, interactions with other proteins and other relevant data. This report describes four species-specific volumes of the BioKnowledge Library, concerned with the model organisms Saccharomyces cerevisiae (YPD), Schizosaccharomyces pombe (PombePD) and Caenorhabditis elegans (WormPD), and with the fungal pathogen Candida albicans (CalPD™). Protein Reports of each species are unified in format, easily searchable and extensively cross-referenced between species. The relevance of these comprehensively curated resources to analysis of proteins in other species is discussed, and is illustrated by a survey of model organism proteins that have similarity to human proteins involved in disease. PMID:11125054

  10. LEPER: Library of Experimental PhasE Relations

    NASA Astrophysics Data System (ADS)

    Davis, F.; Gordon, S.; Mukherjee, S.; Hirschmann, M.; Ghiorso, M.

    2006-12-01

    The Library of Experimental PhasE Relations (LEPER) seeks to compile published experimental determinations of magmatic phase equilibria and provide those data on the web with a searchable and downloadable interface. Compiled experimental data include the conditions and durations of experiments, the bulk compositions of experimental charges, the identity, compositions, and proportions of phases observed, and, where available, estimates of experimental and analytical uncertainties. Also included are metadata such as the type of experimental device, capsule material, and method(s) of quantitative analysis. The database may be of use to practicing experimentalists as well as the wider Earth science community. Experimentalists may find the data useful for planning new experiments and will easily be able to compare their results to the full body of previous experimental data. Geologists may use LEPER to compare rocks sampled in the field with experiments performed on similar bulk compositions or with experiments that produced similar-composition product phases. Modelers may use LEPER to parameterize partial melting of various lithologies. One motivation for compiling LEPER is the calibration of updated and revised versions of MELTS; however, it is hoped that the availability of LEPER will facilitate the formulation and calibration of additional thermodynamic or empirical models of magmatic phase relations and phase equilibria, geothermometers, and more. Data entry for LEPER is occurring presently: as of August 2006, >6200 experiments have been entered, chiefly from work published between 1997 and 2005. A prototype web interface has been written, and beta release on the web is anticipated in Fall 2006. Eventually, experimentalists will be able to submit their new experimental data to the database via the web. At present, the database contains only data pertaining to the phase equilibria of silicate melts, but extension to other experimental data involving other fluids or sub-solidus phase equilibria may be contemplated. Also, the data are at present limited to natural or near-natural systems, but in the future, extension to synthetic (i.e., CMAS, etc.) systems is also possible. Each would depend in part on whether there is community demand for such databases. A trace element adjunct to LEPER is presently in the planning stages.

  11. The U.S. Dairy Forage Research Center (USDFRC) condensed tannin NMR database

    USDA-ARS?s Scientific Manuscript database

    This perspective describes a solution-state NMR database for flavan-3-ol monomers and condensed tannin dimers through tetramers obtained from the literature to 2015, containing data searchable by structure, molecular formula, degrees of polymerization, 1H and 13C chemical shifts of the condensed tan...

  12. Development of a Searchable Database of Cryoablation Simulations for Use in Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boas, F. Edward, E-mail: boasf@mskcc.org; Srimathveeravalli, Govindarajan, E-mail: srimaths@mskcc.org; Durack, Jeremy C., E-mail: durackj@mskcc.org

    Purpose: To create and validate a planning tool for multiple-probe cryoablation, using simulations of ice ball size and shape for various ablation probe configurations, ablation times, and types of tissue ablated. Materials and Methods: Ice ball size and shape were simulated using the Pennes bioheat equation. Five thousand six hundred and seventy different cryoablation procedures were simulated, using 1–6 cryoablation probes and 1–2 cm spacing between probes. The resulting ice ball was measured along three perpendicular axes and recorded in a database. Simulated ice ball sizes were compared to gel experiments (26 measurements) and clinical cryoablation cases (42 measurements). The clinical cryoablation measurements were obtained from a HIPAA-compliant retrospective review of kidney and liver cryoablation procedures between January 2015 and February 2016. Finally, we created a web-based cryoablation planning tool, which uses the cryoablation simulation database to look up the probe spacing and ablation time that produce the desired ice ball shape and dimensions. Results: Average absolute error between the simulated and experimentally measured ice balls was 1 mm in gel experiments and 4 mm in clinical cryoablation cases. The simulations accurately predicted the degree of synergy in multiple-probe ablations. The cryoablation simulation database covers a wide range of ice ball sizes and shapes up to 9.8 cm. Conclusion: Cryoablation simulations accurately predict the ice ball size in multiple-probe ablations. The cryoablation database can be used to plan ablation procedures: given the desired ice ball size and shape, it will find the number and type of probes, probe configuration and spacing, and ablation time required.
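
    For reference, a standard form of the Pennes bioheat equation named in the Materials and Methods is shown below; the study's specific tissue parameters, treatment of freezing and latent heat, and boundary conditions are not reproduced here.

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \omega_b \rho_b c_b \left( T_a - T \right)
  + Q_m
```

    Here \rho, c, and k are the tissue density, specific heat, and thermal conductivity; \omega_b, \rho_b, and c_b are the volumetric blood perfusion rate, blood density, and blood specific heat; T_a is the arterial blood temperature; and Q_m is the metabolic heat source.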

  13. Syringomyelia

    MedlinePlus

    ... is the most reliable way to diagnose syringomyelia. Computer-generated radio waves and a powerful magnetic field ... a searchable database of current and past research projects supported by NIH and other federal agencies. RePORTER ...

  14. A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods

    PubMed Central

    Luo, Guangchun; Qin, Ke

    2014-01-01

    Searchable encryption techniques enable users to securely store and search their documents on a remote, semi-trusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders functional extension. We prove that an asymmetric searchable structure can be converted to a symmetric structure, and that functions can be modeled separately from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component, based on either a symmetric or an asymmetric setting, are converted to uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In this way, all functional components can directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric schemes) and range query (previously only available in asymmetric schemes). PMID:24719565
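
    The layering described above (a core searchable component emitting uniform mappings that loosely coupled functional components then filter) can be sketched structurally in Python. This is an illustration of the architecture only, with no cryptography; the class and method names are invented for the sketch and are not from the paper.

```python
from dataclasses import dataclass
from typing import Iterable, List, Protocol


@dataclass
class Match:
    """Uniform mapping emitted by the core component: a document id plus
    whatever auxiliary attributes downstream filters may need."""
    doc_id: str
    score: float          # used by a ranking component
    attribute: int        # used by a range-query component


class CoreSearchable(Protocol):
    """Core searchable component; could be backed by a symmetric or an
    asymmetric index, as long as it yields uniform Match objects."""
    def search(self, token: str) -> Iterable[Match]: ...


class RangeFilter:
    def __init__(self, low: int, high: int) -> None:
        self.low, self.high = low, high

    def apply(self, matches: Iterable[Match]) -> List[Match]:
        return [m for m in matches if self.low <= m.attribute <= self.high]


class RankFilter:
    def __init__(self, top_k: int) -> None:
        self.top_k = top_k

    def apply(self, matches: Iterable[Match]) -> List[Match]:
        return sorted(matches, key=lambda m: m.score, reverse=True)[: self.top_k]


class InMemoryCore:
    """Toy stand-in for the encrypted searchable index."""
    def __init__(self, postings: dict) -> None:
        self.postings = postings

    def search(self, token: str) -> Iterable[Match]:
        return self.postings.get(token, [])


if __name__ == "__main__":
    core = InMemoryCore({"keyword": [Match("d1", 0.9, 5), Match("d2", 0.4, 42)]})
    results = core.search("keyword")
    # Functional components are chained after the core, independent of its setting.
    for component in (RangeFilter(0, 10), RankFilter(1)):
        results = component.apply(results)
    print([m.doc_id for m in results])   # -> ['d1']
```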

  15. The Microbe Directory: An annotated, searchable inventory of microbes' characteristics.

    PubMed

    Shaaban, Heba; Westfall, David A; Mohammad, Rawhi; Danko, David; Bezdan, Daniela; Afshinnekoo, Ebrahim; Segata, Nicola; Mason, Christopher E

    2018-01-05

    The Microbe Directory is a collective research effort to profile and annotate more than 7,500 unique microbial species from the MetaPhlAn2 database that includes bacteria, archaea, viruses, fungi, and protozoa. By collecting and summarizing data on various microbes' characteristics, the project comprises a database that can be used downstream of large-scale metagenomic taxonomic analyses, allowing one to interpret and explore their taxonomic classifications to have a deeper understanding of the microbial ecosystem they are studying. Such characteristics include, but are not limited to: optimal pH, optimal temperature, Gram stain, biofilm-formation, spore-formation, antimicrobial resistance, and COGEM class risk rating. The database has been manually curated by trained student-researchers from Weill Cornell Medicine and CUNY-Hunter College, and its analysis remains an ongoing effort with open-source capabilities so others can contribute. Available in SQL, JSON, and CSV (i.e. Excel) formats, the Microbe Directory can be queried for the aforementioned parameters by a microorganism's taxonomy. In addition to the raw database, The Microbe Directory has an online counterpart ( https://microbe.directory/) that provides a user-friendly interface for storage, retrieval, and analysis into which other microbial database projects could be incorporated. The Microbe Directory was primarily designed to serve as a resource for researchers conducting metagenomic analyses, but its online web interface should also prove useful to any individual who wishes to learn more about any particular microbe.
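
    Since the directory is distributed in CSV (as well as SQL and JSON), a query by characteristics can be sketched in a few lines of Python; the file name and column names below are assumptions for illustration, not the published schema.

```python
import csv

# Hypothetical CSV export of the directory; the file name and the columns
# "species", "gram_stain", and "optimal_ph" are assumed for this sketch.
def find_microbes(path, gram_stain=None, max_optimal_ph=None):
    hits = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if gram_stain and row.get("gram_stain") != gram_stain:
                continue
            if max_optimal_ph is not None:
                try:
                    if float(row.get("optimal_ph", "")) > max_optimal_ph:
                        continue
                except ValueError:
                    continue  # skip rows with missing or non-numeric pH
            hits.append((row.get("species"), row.get("gram_stain"), row.get("optimal_ph")))
    return hits

if __name__ == "__main__":
    for hit in find_microbes("microbe_directory.csv", gram_stain="positive", max_optimal_ph=7.0):
        print(hit)
```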

  16. IDAAPM: integrated database of ADMET and adverse effects of predictive modeling based on FDA approved drug data.

    PubMed

    Legehar, Ashenafi; Xhaard, Henri; Ghemtio, Leo

    2016-01-01

    The disposition of a pharmaceutical compound within an organism, i.e. its Absorption, Distribution, Metabolism, Excretion, Toxicity (ADMET) properties and adverse effects, critically affects late stage failure of drug candidates and has led to the withdrawal of approved drugs. Computational methods are effective approaches to reduce the number of safety issues by analyzing possible links between chemical structures and ADMET or adverse effects, but this is limited by the size, quality, and heterogeneity of the data available from individual sources. Thus, large, clean and integrated databases of approved drug data, associated with fast and efficient predictive tools are desirable early in the drug discovery process. We have built a relational database (IDAAPM) to integrate available approved drug data such as drug approval information, ADMET and adverse effects, chemical structures and molecular descriptors, targets, bioactivity and related references. The database has been coupled with a searchable web interface and modern data analytics platform (KNIME) to allow data access, data transformation, initial analysis and further predictive modeling. Data were extracted from FDA resources and supplemented from other publicly available databases. Currently, the database contains information regarding about 19,226 FDA approval applications for 31,815 products (small molecules and biologics) with their approval history, 2505 active ingredients, together with as many ADMET properties, 1629 molecular structures, 2.5 million adverse effects and 36,963 experimental drug-target bioactivity data. IDAAPM is a unique resource that, in a single relational database, provides detailed information on FDA approved drugs including their ADMET properties and adverse effects, the corresponding targets with bioactivity data, coupled with a data analytics platform. It can be used to perform basic to complex drug-target ADMET or adverse effects analysis and predictive modeling. IDAAPM is freely accessible at http://idaapm.helsinki.fi and can be exploited through a KNIME workflow connected to the database. Graphical abstract: FDA approved drug data integration for predictive modeling.
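
    A relational layout like the one described (drugs linked to ADMET properties and adverse effects) can be exercised with a simple join; the schema below is a hypothetical miniature for illustration, not the actual IDAAPM tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE drugs (drug_id INTEGER PRIMARY KEY, ingredient TEXT);
    CREATE TABLE admet (drug_id INTEGER, property TEXT, value REAL);
    CREATE TABLE adverse_effects (drug_id INTEGER, effect TEXT);
    INSERT INTO drugs VALUES (1, 'example ingredient');
    INSERT INTO admet VALUES (1, 'logP', 2.3);
    INSERT INTO adverse_effects VALUES (1, 'headache'), (1, 'nausea');
""")

# Join approval data to ADMET values and reported adverse effects, the kind of
# cross-table question the abstract says IDAAPM is built to answer.
query = """
    SELECT d.ingredient, a.property, a.value, e.effect
    FROM drugs d
    JOIN admet a ON a.drug_id = d.drug_id
    JOIN adverse_effects e ON e.drug_id = d.drug_id
"""
for row in conn.execute(query):
    print(row)
```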

  17. Software Application Profile: Opal and Mica: open-source software solutions for epidemiological data management, harmonization and dissemination.

    PubMed

    Doiron, Dany; Marcon, Yannick; Fortier, Isabel; Burton, Paul; Ferretti, Vincent

    2017-10-01

    Improving the dissemination of information on existing epidemiological studies and facilitating the interoperability of study databases are essential to maximizing the use of resources and accelerating improvements in health. To address this, Maelstrom Research proposes Opal and Mica, two inter-operable open-source software packages providing out-of-the-box solutions for epidemiological data management, harmonization and dissemination. Opal and Mica are standalone but inter-operable web applications written in Java, JavaScript and PHP. They provide web services and modern user interfaces to access them. Opal allows users to import, manage, annotate and harmonize study data. Mica is used to build searchable web portals disseminating study and variable metadata. When used conjointly, Mica users can securely query and retrieve summary statistics on geographically dispersed Opal servers in real time. Integration with the DataSHIELD approach allows conducting more complex federated analyses involving statistical models. Opal and Mica are open-source and freely available at [www.obiba.org] under a General Public License (GPL) version 3, and the metadata models and taxonomies that accompany them are available under a Creative Commons licence. © The Author 2017; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.

  18. Software for Displaying Data from Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Powell, Mark; Backers, Paul; Norris, Jeffrey; Vona, Marsette; Steinke, Robert

    2003-01-01

    Science Activity Planner (SAP) DownlinkBrowser is a computer program that assists in the visualization of processed telemetric data [principally images, image cubes (that is, multispectral images), and spectra] that have been transmitted to Earth from exploratory robotic vehicles (rovers) on remote planets. It is undergoing adaptation to (1) the Field Integrated Design and Operations (FIDO) rover (a prototype Mars-exploration rover operated on Earth as a test bed) and (2) the Mars Exploration Rover (MER) mission. This program has evolved from its predecessor - the Web Interface for Telescience (WITS) software - and surpasses WITS in the processing, organization, and plotting of data. SAP DownlinkBrowser creates Extensible Markup Language (XML) files that organize data files, on the basis of content, into a sortable, searchable product database, without the overhead of a relational database. The data-display components of SAP DownlinkBrowser (descriptively named ImageView, 3DView, OrbitalView, PanoramaView, ImageCubeView, and SpectrumView) are designed to run in a memory footprint of at least 256MB on computers that utilize the Windows, Linux, and Solaris operating systems.
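
    The cataloging approach described here, XML files that organize data products into a sortable, searchable index without a relational database, can be sketched as follows; the element and attribute names are invented for illustration and are not the SAP file format.

```python
import xml.etree.ElementTree as ET

# Hypothetical downlink products to index (names invented for the sketch).
products = [
    {"file": "sol012_pancam_001.img", "type": "image", "sol": "12"},
    {"file": "sol012_spec_007.dat", "type": "spectrum", "sol": "12"},
    {"file": "sol013_cube_002.cub", "type": "image_cube", "sol": "13"},
]

root = ET.Element("productIndex")
for p in products:
    ET.SubElement(root, "product", attrib=p)

# Serialize the index; a viewer can later sort and filter on these attributes
# without any database engine behind it.
ET.ElementTree(root).write("product_index.xml", encoding="utf-8", xml_declaration=True)

# Searching the index is then a simple attribute lookup.
index = ET.parse("product_index.xml").getroot()
spectra = [p.get("file") for p in index.findall("product[@type='spectrum']")]
print(spectra)
```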

  19. A user-friendly phytoremediation database: creating the searchable database, the users, and the broader implications.

    PubMed

    Famulari, Stevie; Witz, Kyla

    2015-01-01

    Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database designed for ease of use by non-scientific users as well as by students in an educational setting ( http://www.steviefamulari.net/phytoremediation ). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized by common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to help those designing with phytoremediation search for potential plants that may be used to address their site's needs. The objective of the terminology section is to remove uncertainty for less experienced users and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.

  20. Which Online Resources Are Right for Your Collection?

    ERIC Educational Resources Information Center

    Pearlmutter, Jane

    1999-01-01

    Discusses important considerations for library media specialists creating a virtual-resources-collection policy, including selecting the right resources, navigating licensing fees, and free, searchable online sources. A sidebar lists resources for evaluating Web sites and places that lead to recommended sites for students. (AEF)

  1. WWW.LCACCESS -- GLOBAL DIRECTORY OF LCI RESOURCES

    EPA Science Inventory

    LCAccess is a USEPA sponsored web-site intended to promote the use of Life Cycle Assessments in business decision-making by facilitating access to data sources useful in developing a life cycle inventory (LCI). While LCAccess will not itself contain data, it will be a searchable...

  2. AnnoSys—implementation of a generic annotation system for schema-based data using the example of biodiversity collection data

    PubMed Central

    Kusber, W.-H.; Tschöpe, O.; Güntsch, A.; Berendsohn, W. G.

    2017-01-01

    Biological research collections holding billions of specimens world-wide provide the most important baseline information for systematic biodiversity research. Increasingly, specimen data records become available in virtual herbaria and data portals. The traditional (physical) annotation procedure fails here, so that an important pathway of research documentation and data quality control is broken. In order to create an online annotation system, we analysed, modeled and adapted traditional specimen annotation workflows. The AnnoSys system accesses collection data from either conventional web resources or the Biological Collection Access Service (BioCASe) and accepts XML-based data standards like ABCD or DarwinCore. It comprises a searchable annotation data repository, a user interface, and a subscription-based message system. We describe the main components of AnnoSys and its current and planned interoperability with biodiversity data portals and networks. Details are given on the underlying architectural model, which implements the W3C OpenAnnotation model and allows the adaptation of AnnoSys to different problem domains. Advantages and disadvantages of different digital annotation and feedback approaches are discussed. For the biodiversity domain, AnnoSys proposes best practice procedures for digital annotations of complex records. Database URL: https://annosys.bgbm.fu-berlin.de/AnnoSys/AnnoSys PMID:28365735
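
    To make the annotation model more concrete, here is a minimal sketch of a W3C Web Annotation (OpenAnnotation-style) record targeting a specimen data record, built as plain JSON in Python; the target URL, selector value, and body text are hypothetical, and the exact serialization AnnoSys uses may differ.

```python
import json

# Minimal Web Annotation-style record; the identifiers below are invented.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "editing",
    "body": {
        "type": "TextualBody",
        "value": "Determination should read 'Carex nigra'; label transcription error.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.org/specimens/B100012345",  # hypothetical ABCD/DarwinCore record
        "selector": {"type": "XPathSelector", "value": "/DataSets/DataSet/Units/Unit[1]"},
    },
    "creator": "https://example.org/people/annotator-42",
}

print(json.dumps(annotation, indent=2))
```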

  3. Virtual Solar Observatory Distributed Query Construction

    NASA Technical Reports Server (NTRS)

    Gurman, J. B.; Dimitoglou, G.; Bogart, R.; Davey, A.; Hill, F.; Martens, P.

    2003-01-01

    Through a prototype implementation (Tian et al., this meeting) the VSO has already demonstrated the capability of unifying geographically distributed data sources following the Web Services paradigm and utilizing mechanisms such as the Simple Object Access Protocol (SOAP). So far, four participating sites (Stanford, Montana State University, National Solar Observatory and the Solar Data Analysis Center) permit Web-accessible, time-based searches that allow browse access to a number of diverse data sets. Our latest work includes the extension of the simple, time-based queries to include numerous other searchable observation parameters. For VSO users, this extended functionality enables more refined searches. For the VSO, it is a proof of concept that more complex, distributed queries can be effectively constructed and that results from heterogeneous, remote sources can be synthesized and presented to users as a single, virtual data product.
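
    The distributed-query idea, sending the same parameterized search to several providers and synthesizing one result set, can be sketched without any real VSO endpoints; the provider functions and record fields below are invented for illustration and stand in for the SOAP calls to the participating sites.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for per-site adapters; in the real VSO these would wrap SOAP
# requests to the participating data centers (records here are invented).
def query_site_a(params):
    return [{"site": "A", "instrument": params["instrument"], "time": params["start"]}]

def query_site_b(params):
    return [{"site": "B", "instrument": params["instrument"], "time": params["start"]}]

def distributed_query(params, adapters):
    """Fan the query out, then merge heterogeneous answers into one list."""
    with ThreadPoolExecutor() as pool:
        per_site = pool.map(lambda fn: fn(params), adapters)
    merged = [record for site_results in per_site for record in site_results]
    return sorted(merged, key=lambda r: r["time"])

if __name__ == "__main__":
    params = {"instrument": "EIT", "start": "2003-01-01T00:00:00", "end": "2003-01-02T00:00:00"}
    for rec in distributed_query(params, [query_site_a, query_site_b]):
        print(rec)
```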

  4. The Edinburgh Electronic Veterinary Curriculum: an online program-wide learning and support environment for veterinary education.

    PubMed

    Ellaway, Rachel; Pettigrew, Graham; Rhind, Susan; Dewhurst, David

    2005-01-01

    The Edinburgh Electronic Veterinary Curriculum (EEVeC) is a purpose-built virtual learning support environment for the veterinary medicine program at the University of Edinburgh. It is Web based and adapted from a system developed for the human medical curriculum. It is built around a set of databases and learning objects and incorporates features such as course materials, personalized timetables, staff and student contact pages, a notice board, and discussion forums. The EEVeC also contains global or generic resources such as information on quality enhancement and research options. Many of these features contribute to the aim of building a learning community, but a challenge has been to introduce specific features that enhance student learning. One of these is a searchable lecture database in which learning activities such as quizzes and computer-aided learning exercises (CALs) can be embedded to supplement a synopsis of the lecture and address the key needs of integration and reinforcement of learning. Statistics of use indicate extensive student activity during evenings and weekends, with a pattern of increased usage over the years as more features become available and staff and students progressively engage with the system. An essential feature of EEVeC is its flexibility and the way in which it is evolving to meet the changing needs of the teaching program.

  5. User applications driven by the community contribution framework MPContribs in the Materials Project

    DOE PAGES

    Huck, P.; Gunter, D.; Cholia, S.; ...

    2015-10-12

    This paper discusses how the MPContribs framework in the Materials Project (MP) allows user-contributed data to be shown and analyzed alongside the core MP database. The MP is a searchable database of electronic structure properties of over 65,000 bulk solid materials, which is accessible through a web-based science gateway. We describe the motivation for enabling user contributions to the materials data and present the framework's features and challenges in the context of two real applications. These use cases illustrate how scientific collaborations can build applications with their own 'user-contributed' data using MPContribs. The Nanoporous Materials Explorer application provides a unique search interface to a novel dataset of hundreds of thousands of materials, each with tables of user-contributed values related to material adsorption and density at varying temperature and pressure. The Unified Theoretical and Experimental X-ray Spectroscopy application discusses a full workflow for the association, dissemination, and combined analyses of experimental data from the Advanced Light Source with MP's theoretical core data, using MPContribs tools for data formatting, management, and exploration. The capabilities being developed for these collaborations are serving as the model for how new materials data can be incorporated into the MP website with minimal staff overhead while giving powerful tools for data search and display to the user community.

  6. The International Outer Planets Watch atmospheres node database of giant-planet images

    NASA Astrophysics Data System (ADS)

    Hueso, R.; Legarreta, J.; Sánchez-Lavega, A.; Rojas, J. F.; Gómez-Forrellad, J. M.

    2011-10-01

    The Atmospheres Node of the International Outer Planets Watch (IOPW) aims to encourage observation and study of the atmospheres of the giant planets. One of its main activities is to foster interaction between the professional and amateur astronomical communities by maintaining an online, fully searchable database of images of the giant planets obtained by amateur astronomers and available to both professionals and amateurs [1]. The IOPW database contains about 13,000 image observations of Jupiter and Saturn obtained in the visible range, with a few contributions for Uranus and Neptune. We describe the organization and structure of the database as posted on the Internet and, in particular, the PVOL (Planetary Virtual Observatory & Laboratory) software designed to manage the site, which is based on concepts from Virtual Observatory projects.

  7. Developmental Gene Discovery in a Hemimetabolous Insect: De Novo Assembly and Annotation of a Transcriptome for the Cricket Gryllus bimaculatus

    PubMed Central

    Zeng, Victor; Ewen-Campen, Ben; Horch, Hadley W.; Roth, Siegfried; Mito, Taro; Extavour, Cassandra G.

    2013-01-01

    Most genomic resources available for insects represent the Holometabola, which are insects that undergo complete metamorphosis like beetles and flies. In contrast, the Hemimetabola (direct developing insects), representing the basal branches of the insect tree, have very few genomic resources. We have therefore created a large and publicly available transcriptome for the hemimetabolous insect Gryllus bimaculatus (cricket), a well-developed laboratory model organism whose potential for functional genetic experiments is currently limited by the absence of genomic resources. cDNA was prepared using mRNA obtained from adult ovaries containing all stages of oogenesis, and from embryo samples on each day of embryogenesis. Using 454 Titanium pyrosequencing, we sequenced over four million raw reads, and assembled them into 21,512 isotigs (predicted transcripts) and 120,805 singletons with an average coverage per base pair of 51.3. We annotated the transcriptome manually for over 400 conserved genes involved in embryonic patterning, gametogenesis, and signaling pathways. BLAST comparison of the transcriptome against the NCBI non-redundant protein database (nr) identified significant similarity to nr sequences for 55.5% of transcriptome sequences, and suggested that the transcriptome may contain 19,874 unique transcripts. For predicted transcripts without significant similarity to known sequences, we assessed their similarity to other orthopteran sequences, and determined that these transcripts contain recognizable protein domains, largely of unknown function. We created a searchable, web-based database to allow public access to all raw, assembled and annotated data. This database is to our knowledge the largest de novo assembled and annotated transcriptome resource available for any hemimetabolous insect. We therefore anticipate that these data will contribute significantly to more effective and higher-throughput deployment of molecular analysis tools in Gryllus. PMID:23671567
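
    The similarity-search step described in this record (comparing assembled transcripts against the NCBI non-redundant protein database) is typically run with the BLAST+ suite; a minimal wrapper is sketched below. The input and output file names and the e-value cutoff are placeholders, and the exact parameters used by the authors are not stated here.

```python
import subprocess

# Assumes the NCBI BLAST+ tools are installed and an 'nr' protein database is
# available locally; file names and the e-value cutoff are placeholders.
cmd = [
    "blastx",                        # translated nucleotide query vs. protein db
    "-query", "gryllus_isotigs.fasta",
    "-db", "nr",
    "-evalue", "1e-5",
    "-outfmt", "6",                  # tabular output: one hit per line
    "-max_target_seqs", "1",
    "-out", "gryllus_vs_nr.tsv",
]
subprocess.run(cmd, check=True)
```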

  8. WWW.LCACCESS - GLOBAL DIRECTORY OF LCI RESOURCES

    EPA Science Inventory

    LCAccess is a USEPA sponsored web-site intended to promote the use of Life Cycle Assessment in business decision-making by facilitating access to data sources useful in developing a life cycle inventory (LCI). While LCAccess will not itself contain data, it will be a searchable g...

  9. Semantic Networks and Social Networks

    ERIC Educational Resources Information Center

    Downes, Stephen

    2005-01-01

    Purpose: To illustrate the need for social network metadata within semantic metadata. Design/methodology/approach: Surveys properties of social networks and the semantic web, suggests that social network analysis applies to semantic content, argues that semantic content is more searchable if social network metadata is merged with semantic web…

  10. Science Inventory | US EPA

    EPA Pesticide Factsheets

    The Science Inventory is a searchable database of research products primarily from EPA's Office of Research and Development. Science Inventory records provide descriptions of the product, contact information, and links to available printed material or websites.

  11. Context indexing of digital cardiac ultrasound records in PACS

    NASA Astrophysics Data System (ADS)

    Lobodzinski, S. Suave; Meszaros, Georg N.

    1998-07-01

    Recent wide adoption of the DICOM 3.0 standard by ultrasound equipment vendors created a need for practical clinical implementations of cardiac imaging study visualization, management, and archiving. DICOM 3.0 defines only a logical and physical format for exchanging image data (still images, video, patient and study demographics). All DICOM-compliant imaging studies must presently be archived on a 650 MB recordable compact disc. This is a severe limitation for ultrasound applications, where studies 3 to 10 minutes long are common practice. In addition, DICOM digital echocardiography objects require physiological signal indexing, content segmentation, and characterization. Since DICOM 3.0 is an interchange standard only, it does not define how to store composite video objects in a database. The goal of this research was therefore to address the issues of efficient storage, retrieval, and management of DICOM-compliant cardiac video studies in a distributed PACS environment. Our Web-based implementation has the advantage of accommodating both DICOM-defined entity-relation modules (equipment data, patient data, video format, etc.) in standard relational database tables and digital indexed video with its attributes in an object-relational database. The object-relational data model facilitates content indexing of full-motion cardiac imaging studies through bi-directional hyperlink generation that ties searchable video attributes and related objects to individual video frames in the temporal domain. Benefits realized from the use of bi-directionally hyperlinked data models in an object-relational database include: (1) real-time video indexing during image acquisition, (2) random access and frame-accurate instant playback of previously recorded full-motion imaging data, and (3) time savings from faster and more accurate access to data through multiple navigation mechanisms such as multidimensional queries on an index, queries on a hyperlink attribute, free search, and browsing.
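
    The bidirectional frame-to-attribute indexing described here can be mimicked with a forward and a reverse lookup table; the study identifiers, frame numbers, and tags below are hypothetical and do not reproduce the paper's object-relational schema.

```python
from collections import defaultdict

# Hypothetical frame-level tags for one echo loop; keys are (study_id, frame_no).
frame_tags = {
    ("echo01", 120): ["mitral regurgitation jet"],
    ("echo01", 180): ["aortic valve closure"],
}

# Build the reverse index so the "hyperlink" works in both directions.
tag_frames = defaultdict(list)
for frame_key, tags in frame_tags.items():
    for tag in tags:
        tag_frames[tag].append(frame_key)

print(tag_frames["mitral regurgitation jet"])   # attribute -> frames lookup
print(frame_tags[("echo01", 120)])              # frame -> attributes lookup
```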

  12. A searchable, whole genome resource designed for protein variant analysis in diverse lineages of U.S. beef cattle

    USDA-ARS?s Scientific Manuscript database

    A key feature of a gene's function is the variety of protein isoforms it encodes in a population. However, the genetic diversity in bovine whole genome databases tends to be underrepresented because these databases contain an abundance of sequence from the most influential sires. Our first aim was ...

  13. CTD² Dashboard: a searchable web interface to connect validated results from the Cancer Target Discovery and Development Network* | Office of Cancer Genomics

    Cancer.gov

    The Cancer Target Discovery and Development (CTD2) Network aims to use functional genomics to accelerate the translation of high-throughput and high-content genomic and small-molecule data towards use in precision oncology.

  14. A Web-Based Database for Nurse Led Outreach Teams (NLOT) in Toronto.

    PubMed

    Li, Shirley; Kuo, Mu-Hsing; Ryan, David

    2016-01-01

    A web-based system can provide access to real-time data and information. Healthcare is moving towards digitizing patients' medical information and securely exchanging it through web-based systems. In one of Ontario's health regions, Nurse Led Outreach Teams (NLOT) provide emergency mobile nursing services to help reduce unnecessary transfers from long-term care homes to emergency departments. Currently the NLOT team uses a Microsoft Access database to keep track of the health information on the residents that they serve. The Access database lacks scalability, portability, and interoperability. The objective of this study is the development of a web-based database using Oracle Application Express that is easily accessible from mobile devices. The web-based database will allow NLOT nurses to enter and access resident information anytime and from anywhere.

  15. Neural network based chemical structure indexing.

    PubMed

    Rughooputh, S D; Rughooputh, H C

    2001-01-01

    Searches on chemical databases are presently dominated by the text-based content of a paper, which can be indexed into a keyword-searchable form. Such traditional searches can prove to be very time-consuming and discouraging to scientists who search infrequently. We report a simple chemical indexing method based on the molecular structure alone. The method used is based on a one-to-one correspondence between the chemical structure presented as an image to a neural network and the corresponding binary output. The method is direct and less cumbersome than traditional methods, and proves to be robust, elegant, and very versatile.
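
    A minimal sketch of the idea, feeding a rasterized structure drawing to a small network and reading off a binary index code, is given below with NumPy. The image size, network shape, and thresholding are invented for illustration and are not the authors' architecture; the untrained random weights stand in for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 16x16 binary "structure image" (stand-in for a rasterized 2-D structure).
image = rng.integers(0, 2, size=(16, 16)).astype(float)

# One hidden layer mapping the flattened image to an 8-bit index code.
# Weights are random here; in the paper's setting they would be trained so the
# image-to-code mapping is one-to-one over the indexed structures.
w1 = rng.normal(size=(256, 32))
w2 = rng.normal(size=(32, 8))

hidden = np.tanh(image.ravel() @ w1)
code_bits = (1 / (1 + np.exp(-(hidden @ w2))) > 0.5).astype(int)

print("index code:", "".join(map(str, code_bits)))
```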

  16. A few scenarios still do not fit all

    NASA Astrophysics Data System (ADS)

    Schweizer, Vanessa

    2018-05-01

    For integrated climate change research, the Scenario Matrix Architecture provides a tractable menu of possible emissions trajectories, socio-economic futures and policy environments. However, the future of decision support may lie in searchable databases.

  17. Distributed data discovery, access and visualization services to Improve Data Interoperability across different data holdings

    NASA Astrophysics Data System (ADS)

    Palanisamy, G.; Krassovski, M.; Devarakonda, R.; Santhana Vannan, S.

    2012-12-01

    The current climate debate is highlighting the importance of free, open, and authoritative sources of high-quality climate data that are available for peer review and for collaborative purposes. It is increasingly important to allow various organizations around the world to share climate data in an open manner, and to enable them to perform dynamic processing of climate data. This advanced access to data can be enabled via Web-based services, using common "community agreed" standards, without requiring organizations to change the internal structure used to describe their data. The modern scientific community has become diverse and increasingly complex in nature. To meet the demands of such a diverse user community, the modern data supplier has to provide data and other related information through searchable, data- and process-oriented tools. This can be accomplished by setting up an online, Web-based system with a relational database as a back end. The following common features of web data access/search systems will be outlined in the proposed presentation: - Flexible data discovery - Data in commonly used formats (e.g., CSV, NetCDF) - Preparation of metadata in standard formats (FGDC, ISO 19115, EML, DIF, etc.) - Data subsetting capabilities and the ability to narrow down to individual data elements - Standards-based data access protocols and mechanisms (SOAP, REST, OPeNDAP, OGC, etc.) - Integration of services across different data systems (discovery to access, visualization, and subsetting) This presentation will also include specific examples of integration of various data systems developed by Oak Ridge National Laboratory's Climate Change Science Institute and their ability to communicate with each other to enable better data interoperability and data integration. References: [1] Devarakonda, Ranjeet, and Harold Shanafield. "Drupal: Collaborative framework for science research." Collaboration Technologies and Systems (CTS), 2011 International Conference on. IEEE, 2011. [2] Devarakonda, R., Shrestha, B., Palanisamy, G., Hook, L. A., Killeffer, T. S., Boden, T. A., ... & Lazer, K. (2014). THE NEW ONLINE METADATA EDITOR FOR GENERATING STRUCTURED METADATA. Oak Ridge National Laboratory (ORNL).

  18. SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.

    PubMed

    Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan

    2014-08-15

    Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the just recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably facilitates usability as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
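
    The core translation step, turning a visual node-and-edge graph into a SPARQL query, can be sketched as a simple string builder; the variables, predicate IRIs, and defaults below are invented for illustration and do not reproduce SPARQLGraph's internals.

```python
# Each visual edge is (subject variable, predicate IRI, object variable/IRI);
# the example predicates are placeholders, not a specific EBI RDF schema.
edges = [
    ("?gene", "<http://example.org/encodes>", "?protein"),
    ("?protein", "<http://example.org/participatesIn>", "?pathway"),
]

def graph_to_sparql(edges, select=("?gene", "?pathway"), limit=10):
    """Serialize a drawn graph into a basic SELECT query."""
    patterns = " .\n  ".join(f"{s} {p} {o}" for s, p, o in edges)
    return (
        f"SELECT {' '.join(select)}\n"
        f"WHERE {{\n  {patterns} .\n}}\n"
        f"LIMIT {limit}"
    )

print(graph_to_sparql(edges))
```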

  19. From ontology selection and semantic web to an integrated information system for food-borne diseases and food safety.

    PubMed

    Yan, Xianghe; Peng, Yun; Meng, Jianghong; Ruzante, Juliana; Fratamico, Pina M; Huang, Lihan; Juneja, Vijay; Needleman, David S

    2011-01-01

    Several factors have hindered effective use of information and resources related to food safety, including inconsistency among semantically heterogeneous data resources, lack of knowledge on the profiling of food-borne pathogens, and knowledge gaps among research communities, government risk assessors/managers, and end-users of the information. This paper discusses technical aspects of the establishment of a comprehensive food safety information system consisting of the following steps: (a) computational collection and compilation of publicly available information, including published pathogen genomic, proteomic, and metabolomic data; (b) development of ontology libraries on food-borne pathogens and design of automatic algorithms with formal inference and fuzzy and probabilistic reasoning to address the consistency and accuracy of distributed information resources (e.g., PulseNet, FoodNet, OutbreakNet, PubMed, NCBI, EMBL, and other online genetic databases and information); (c) integration of collected pathogen profiling data, Foodrisk.org ( http://www.foodrisk.org ), PMP, Combase, and other relevant information into a user-friendly, searchable, "homogeneous" information system available to scientists in academia, the food industry, and government agencies; and (d) development of a computational model in the semantic web for greater adaptability and robustness.

  20. Number and Impact of Published Scholarly Works by Pharmacy Practice Faculty Members at Accredited US Colleges and Schools of Pharmacy (2001-2003)

    PubMed Central

    Coleman, Craig I.; Schlesselman, Lauren S.; Lao, Eang

    2007-01-01

    Objective: To evaluate the quantity and quality of published literature conducted by pharmacy practice faculty members in US colleges and schools of pharmacy for the years 2001-2003. Methods: The Web of Science bibliographic database was used to identify publication citations for the years 2001-2003, which were then evaluated in a number of different ways. Faculty members were identified using American Association of Colleges of Pharmacy rosters for the 2000-2001, 2001-2002, and 2002-2003 academic years. Results: Two thousand three hundred seventy-four pharmacy practice faculty members generated 1,896 publications in Web of Science searchable journals. A small number of faculty members (2.1%) were responsible for a large proportion of publications (30.6%), and only 4.9% of faculty members published 2 or more publications in these journals per year. The average impact factor for the top 200 publications was 7.6. Conclusion: Pharmacy practice faculty members contributed substantially to the biomedical literature and their work has had an important impact. A substantial portion of this work has come from a small subset of faculty members. PMID:17619644

  1. An image database management system for conducting CAD research

    NASA Astrophysics Data System (ADS)

    Gruszauskas, Nicholas; Drukker, Karen; Giger, Maryellen L.

    2007-03-01

    The development of image databases for CAD research is not a trivial task. The collection and management of images and their related metadata from multiple sources is a time-consuming but necessary process. By standardizing and centralizing the methods by which these data are maintained, one can generate subsets of a larger database that match the specific criteria needed for a particular research project in a quick and efficient manner. A research-oriented management system of this type is highly desirable in a multi-modality CAD research environment. An online, web-based database system for the storage and management of research-specific medical image metadata was designed for use with four modalities of breast imaging: screen-film mammography, full-field digital mammography, breast ultrasound and breast MRI. The system was designed to consolidate data from multiple clinical sources and provide the user with the ability to anonymize the data. Input concerning the type of data to be stored as well as desired searchable parameters was solicited from researchers in each modality. The backbone of the database was created using MySQL. A robust and easy-to-use interface for entering, removing, modifying and searching information in the database was created using HTML and PHP. This standardized system can be accessed using any modern web-browsing software and is fundamental for our various research projects on computer-aided detection, diagnosis, cancer risk assessment, multimodality lesion assessment, and prognosis. Our CAD database system stores large amounts of research-related metadata and successfully generates subsets of cases that match the user's desired search criteria.

  2. Vasculitis Syndromes of the Central and Peripheral Nervous Systems

    MedlinePlus

    ... VCRC, www.rarediseasesnetwork.org/vcrc/ ), a network of academic medical centers, patient support organizations, and clinical research ... NIH RePORTER ( http://projectreporter.nih.gov ), a searchable database of current and past research projects supported by ...

  3. The quantification of instream flow rights to water

    USGS Publications Warehouse

    Milhous, Robert T.

    1990-01-01

    Energy development of all types continues to grow in the Rocky Mountain Region of the western United States. Federal resource managers increasingly need to balance energy demands, their effects on the natural and human landscape, and public perceptions towards these issues. The Western Energy Citation Clearinghouse (WECC v.1.0), part of a suite of data and information management tools developed and managed by the Wyoming Landscape Conservation Initiative (WLCI), provides resource managers with a searchable online database of citations that covers a broad spectrum of energy and landscape related topics relevant to resource managers, such as energy sources, natural and human landscape effects, and new research, methods and models. Based on the 2011 USGS Open-file Report "Abbreviated bibliography on energy development" (Montag, et al. 2011), WECC is an extensive collection of energy-related citations, as well as categorized lists of additional online resources related to oil and gas development, best practices, energy companies and Federal agencies. WECC incorporates the powerful web services of Sciencebase 2.0, the enterprise data and information platform for USGS scientists and partners, to provide secure, role-based data management features. For example, public/unauthenticated WECC users have full search and read access to the entire energy citation collection, while authenticated WLCI data stewards can manage WECC's citation collection using Sciencebase data management forms.

  4. Data, Metadata - Who Cares?

    NASA Astrophysics Data System (ADS)

    Baumann, Peter

    2013-04-01

    There is a traditional saying that metadata are understandable, semantically rich, and searchable. Data, on the other hand, are big, with no accessible semantics, and just downloadable. Not only has this led to an imbalance of search support from a user perspective, but also, underneath, to a deep technology divide, with relational databases often used for metadata and bespoke archive solutions for data. Our vision is that this barrier will be overcome, and data and metadata will become searchable alike, leveraging the potential of semantic technologies in combination with scalability technologies. Ultimately, in this vision, ad-hoc processing and filtering will no longer be distinguished, forming a uniformly accessible data universe. In the European EarthServer initiative, we work towards this vision by federating database-style raster query languages with metadata search and geo broker technology. We present the approach taken, how it can leverage OGC standards, the benefits envisaged, and first results.

  5. The Microbe Directory: An annotated, searchable inventory of microbes’ characteristics

    PubMed Central

    Mohammad, Rawhi; Danko, David; Bezdan, Daniela; Afshinnekoo, Ebrahim; Segata, Nicola; Mason, Christopher E.

    2018-01-01

    The Microbe Directory is a collective research effort to profile and annotate more than 7,500 unique microbial species from the MetaPhlAn2 database that includes bacteria, archaea, viruses, fungi, and protozoa. By collecting and summarizing data on various microbes’ characteristics, the project comprises a database that can be used downstream of large-scale metagenomic taxonomic analyses, allowing one to interpret and explore their taxonomic classifications to have a deeper understanding of the microbial ecosystem they are studying. Such characteristics include, but are not limited to: optimal pH, optimal temperature, Gram stain, biofilm-formation, spore-formation, antimicrobial resistance, and COGEM class risk rating. The database has been manually curated by trained student-researchers from Weill Cornell Medicine and CUNY—Hunter College, and its analysis remains an ongoing effort with open-source capabilities so others can contribute. Available in SQL, JSON, and CSV (i.e. Excel) formats, the Microbe Directory can be queried for the aforementioned parameters by a microorganism’s taxonomy. In addition to the raw database, The Microbe Directory has an online counterpart ( https://microbe.directory/) that provides a user-friendly interface for storage, retrieval, and analysis into which other microbial database projects could be incorporated. The Microbe Directory was primarily designed to serve as a resource for researchers conducting metagenomic analyses, but its online web interface should also prove useful to any individual who wishes to learn more about any particular microbe. PMID:29630066

  6. The HLA dictionary 2008: a summary of HLA-A, -B, -C, -DRB1/3/4/5, and -DQB1 alleles and their association with serologically defined HLA-A, -B, -C, -DR, and -DQ antigens.

    PubMed

    Holdsworth, R; Hurley, C K; Marsh, S G E; Lau, M; Noreen, H J; Kempenich, J H; Setterholm, M; Maiers, M

    2009-02-01

    The 2008 report of the human leukocyte antigen (HLA) data dictionary presents serologic equivalents of HLA-A, -B, -C, -DRB1, -DRB3, -DRB4, -DRB5, and -DQB1 alleles. The dictionary is an update of the one published in 2004. The data summarize equivalents obtained by the World Health Organization Nomenclature Committee for Factors of the HLA System, the International Cell Exchange, UCLA, the National Marrow Donor Program, recent publications, and individual laboratories. The 2008 edition includes information on 832 new alleles (685 class I and 147 class II) and updated information on 766 previously listed alleles (577 class I and 189 class II). The tables list the alleles with remarks on the serologic patterns and the equivalents. The serological equivalents are listed as expert assigned types, and the data are useful for identifying potential stem cell donors who were typed by either serology or DNA-based methods. The tables with HLA equivalents are available as a searchable form on the IMGT/HLA database Web site (http://www.ebi.ac.uk/imgt/hla/dictionary.html).

  7. Design and Development of a Spectral Library for Different Vegetation and Landcover Types for Arctic, Antarctic and Chihuahua Desert Ecosystem

    NASA Astrophysics Data System (ADS)

    Matharasi, K.; Goswami, S.; Gamon, J.; Vargas, S.; Marin, R.; Lin, D.; Tweedie, C. E.

    2008-12-01

    All objects on the Earth's surface absorb and reflect portions of the electromagnetic spectrum. Every material has a characteristic spectral profile that depends on its composition. The characteristic spectral profile for vegetation is often used to study how vegetation patterns at large spatial scales affect ecosystem structure and function. Analysis of spectroscopic data from the laboratory, and from other platforms such as aircraft or spacecraft, requires a knowledge base of characteristic spectral profiles for different known materials. This study reports on the establishment of an online, searchable spectral library for a range of plant species and landcover types in the Arctic, Antarctic, and Chihuahuan desert ecosystems. Field data were collected from Arctic Alaska, the Antarctic Peninsula, and the Chihuahuan desert in the visible to near-infrared (IR) range using a handheld portable spectrometer. The data have been archived in a database created using PostgreSQL and have been made publicly available through a Plone web interface. This poster describes the data collected in more detail and offers instructions to users who wish to make use of this free online resource.

  8. Opportunities in Participatory Science and Citizen Science with MRO's High Resolution Imaging Science Experiment: A Virtual Science Team Experience

    NASA Astrophysics Data System (ADS)

    Gulick, Ginny

    2009-09-01

    We report on the accomplishments of the HiRISE EPO program over the last two and a half years of science operations. We have focused primarily on delivering high-impact science opportunities through our various participatory science and citizen science websites. Uniquely, we have invited students from around the world to become virtual HiRISE team members by submitting target suggestions via our HiRISE Quest Image challenges, using HiWeb, the team's image-suggestion web tools. When images are acquired, students analyze their returned images, write a report, and work with a HiRISE team member to write an image caption for release on the HiRISE website (http://hirise.lpl.arizona.edu). Another E/PO highlight has been our citizen scientist effort, HiRISE Clickworkers (http://clickworkers.arc.nasa.gov/hirise). Clickworkers enlists volunteers to identify geologic features (e.g., dunes, craters, wind streaks, gullies, etc.) in the HiRISE images and help generate searchable image databases. In addition, the large image sizes and incredible spatial resolution of the HiRISE camera can tax the capabilities of even the most capable computers, so we have also focused on enabling typical users to browse, pan, and zoom the HiRISE images using our HiRISE online image viewer (http://marsoweb.nas.nasa.gov/HiRISE/hirise_images/). Our educational materials, available on the HiRISE EPO website (http://hirise.seti.org/epo), include an assortment of K through college level, standards-based activity books, a K through 3 coloring/story book, a middle school level comic book, and several interactive educational games, including Mars jigsaw puzzles, crosswords, word searches, and flash cards.

  9. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage

    PubMed Central

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) addresses the problem of access control, while keyword-based searchable encryption addresses the problem of quickly finding the files a user is interested in within cloud storage. Designing a scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, the scheme provides keyword search functionality. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model. PMID:27898703
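
    The scheme above is a full CP-ABE construction with revocation, grant and keyword search, which is well beyond a short example. The sketch below shows only the basic idea behind keyword-searchable encryption, using HMAC-derived trapdoors over an index of encrypted files (a simple symmetric searchable-encryption pattern, not the authors' scheme); all identifiers and keywords are illustrative.

        # Illustrative only: a minimal symmetric searchable-encryption index.
        # NOT the CP-ABE scheme of the paper; it shows how a server can match
        # keyword queries without ever seeing the plaintext keywords.
        import hmac, hashlib, os

        def keyword_token(key: bytes, keyword: str) -> bytes:
            """Deterministic trapdoor for a keyword under a secret key."""
            return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

        key = os.urandom(32)       # user's secret search key
        index = {}                 # token -> list of (encrypted) file identifiers

        def add_document(doc_id: str, keywords: list[str]) -> None:
            for kw in keywords:
                index.setdefault(keyword_token(key, kw), []).append(doc_id)

        def search(keyword: str) -> list[str]:
            # The server only ever sees the HMAC token, not the keyword itself.
            return index.get(keyword_token(key, keyword), [])

        add_document("file-001.enc", ["cloud", "storage", "audit"])
        add_document("file-002.enc", ["cloud", "billing"])
        print(search("cloud"))     # ['file-001.enc', 'file-002.enc']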

  10. Efficiently Multi-User Searchable Encryption Scheme with Attribute Revocation and Grant for Cloud Storage.

    PubMed

    Wang, Shangping; Zhang, Xiaoxue; Zhang, Yaling

    2016-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) addresses the problem of access control, while keyword-based searchable encryption addresses the problem of quickly finding the files a user is interested in within cloud storage. Designing a scheme that is both searchable and attribute-based is a new challenge. In this paper, we propose an efficient multi-user searchable attribute-based encryption scheme with attribute revocation and grant for cloud storage. In the new scheme, the attribute revocation and grant processes are delegated to a proxy server, and multiple attributes can be revoked and granted simultaneously. Moreover, the scheme provides keyword search functionality. The security of our proposed scheme is reduced to the bilinear Diffie-Hellman (BDH) assumption. Furthermore, the scheme is proven secure under the security model of indistinguishability against selective ciphertext-policy and chosen plaintext attack (IND-sCP-CPA), and it is semantically secure under indistinguishability against chosen keyword attack (IND-CKA) in the random oracle model.

  11. The National Extreme Events Data and Research Center (NEED)

    NASA Astrophysics Data System (ADS)

    Gulledge, J.; Kaiser, D. P.; Wilbanks, T. J.; Boden, T.; Devarakonda, R.

    2014-12-01

    The Climate Change Science Institute at Oak Ridge National Laboratory (ORNL) is establishing the National Extreme Events Data and Research Center (NEED), with the goal of transforming how the United States studies and prepares for extreme weather events in the context of a changing climate. NEED will encourage the myriad, distributed extreme events research communities to move toward the adoption of common practices and will develop a new database compiling global historical data on weather- and climate-related extreme events (e.g., heat waves, droughts, hurricanes, etc.) and related information about impacts, costs, recovery, and available research. Currently, extreme event information is not easy to access and is largely incompatible and inconsistent across web sites. NEED's database development will take into account differences in time frames, spatial scales, treatments of uncertainty, and other parameters and variables, and leverage informatics tools developed at ORNL (i.e., the Metadata Editor [1] and Mercury [2]) to generate standardized, robust documentation for each database along with a web-searchable catalog. In addition, NEED will facilitate convergence on commonly accepted definitions and standards for extreme events data and will enable integrated analyses of coupled threats, such as hurricanes/sea-level rise/flooding and droughts/wildfires. Our goal and vision is that NEED will become the premier integrated resource for the general study of extreme events. References: [1] Devarakonda, Ranjeet, et al. "OME: Tool for generating and managing metadata to handle BigData." Big Data (Big Data), 2014 IEEE International Conference on. IEEE, 2014. [2] Devarakonda, Ranjeet, et al. "Mercury: reusable metadata management, data discovery and access system." Earth Science Informatics 3.1-2 (2010): 87-94.

  12. 49 CFR 573.15 - Public Availability of Motor Vehicle Recall Information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Internet. The information shall be in a format that is searchable by vehicle make and model and vehicle... following requirements: (1) Be free of charge and not require users to register or submit information, other... (Internet link) to it conspicuously placed on the manufacturer's main United States' Web page; (3) Not...

  13. 49 CFR 573.15 - Public availability of motor vehicle recall information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Internet. The information shall be in a format that is searchable by vehicle make and model and vehicle... following requirements: (1) Be free of charge and not require users to register or submit information, other... (Internet link) to it conspicuously placed on the manufacturer's main United States' Web page; (3) Not...

  14. 76 FR 81990 - 2002 Reopened-Previously Denied Determinations; Notice of Negative Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-29

    ... reconsideration investigation revealed that the following workers groups have not met the certification criteria... Myers, FL. TA-W-80,026; Computer Task Group, Mechanicsburg, PA. TA-W-80,047; Cenveo, Inc., Springfield... Department's Web site at tradeact/taa/taa--search--form.cfm under the searchable listing of determinations or...

  15. 76 FR 76189 - 2002 Reopened-Previously Denied Determinations; Notice of Revised Denied Determinations on...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-06

    ... reconsideration investigation revealed that the following workers groups have met the certification criteria under... Salt Lake City, UT: March 24, 2010. TA-W-80,153; Intercontinental Hotels Group, Alpharetta, GA: May 4... available on the Department's Web site at tradeact/taa/taa-- search--form.cfm under the searchable listing...

  16. Kratylos: A Tool for Sharing Interlinearized and Lexical Data in Diverse Formats

    ERIC Educational Resources Information Center

    Kaufman, Daniel; Finkel, Raphael

    2018-01-01

    In this paper we present Kratylos, at www.kratylos.org/, a web application that creates searchable multimedia corpora from data collections in diverse formats, including collections of interlinearized glossed text (IGT) and dictionaries. There exists a crucial lacuna in the electronic ecology that supports language documentation and linguistic…

  17. 78 FR 77475 - National Institute of Neurological Disorders and Stroke

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-23

    ... items including the National Pain Strategy, a searchable data base for the Federally-funded pain... Committee business items including the National Pain Strategy, a searchable data base for the Federally...

  18. KEGG orthology-based annotation of the predicted proteome of Acropora digitifera: ZoophyteBase - an open access and searchable database of a coral genome

    PubMed Central

    2013-01-01

    Background Contemporary coral reef research has firmly established that a genomic approach is urgently needed to better understand the effects of anthropogenic environmental stress and global climate change on coral holobiont interactions. Here we present KEGG orthology-based annotation of the complete genome sequence of the scleractinian coral Acropora digitifera and provide the first comprehensive view of the genome of a reef-building coral by applying advanced bioinformatics. Description Sequences from the KEGG database of protein function were used to construct hidden Markov models. These models were used to search the predicted proteome of A. digitifera to establish complete genomic annotation. The annotated dataset is published in ZoophyteBase, an open access format with different options for searching the data. A particularly useful feature is the ability to use a Google-like search engine that links query words to protein attributes. We present features of the annotation that underpin the molecular structure of key processes of coral physiology that include (1) regulatory proteins of symbiosis, (2) planula and early developmental proteins, (3) neural messengers, receptors and sensory proteins, (4) calcification and Ca2+-signalling proteins, (5) plant-derived proteins, (6) proteins of nitrogen metabolism, (7) DNA repair proteins, (8) stress response proteins, (9) antioxidant and redox-protective proteins, (10) proteins of cellular apoptosis, (11) microbial symbioses and pathogenicity proteins, (12) proteins of viral pathogenicity, (13) toxins and venom, (14) proteins of the chemical defensome and (15) coral epigenetics. Conclusions We advocate that providing annotation in an open-access searchable database available to the public domain will give an unprecedented foundation to interrogate the fundamental molecular structure and interactions of coral symbiosis and allow critical questions to be addressed at the genomic level based on combined aspects of evolutionary, developmental, metabolic, and environmental perspectives. PMID:23889801
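
    A proteome-annotation step of this kind (searching predicted proteins with profile hidden Markov models built from KEGG orthology groups) is commonly scripted around the HMMER tools; HMMER is assumed here, since the abstract does not name the software, and the file names are placeholders. The post-processing simply keeps the best-scoring KO hit per protein.

        # Hedged sketch of a KO-profile HMM search over a predicted proteome.
        # Assumes the HMMER3 command-line tools (hmmsearch) are installed.
        import subprocess

        subprocess.run(
            [
                "hmmsearch",
                "--tblout", "adigitifera_ko_hits.tbl",   # per-target tabular output
                "--cpu", "4",
                "ko_profiles.hmm",                        # profile HMMs (placeholder)
                "adigitifera_proteins.fasta",             # predicted proteome (placeholder)
            ],
            check=True,
        )

        # Keep the best-scoring KO assignment per protein from the tabular output.
        best = {}
        with open("adigitifera_ko_hits.tbl") as fh:
            for line in fh:
                if line.startswith("#"):
                    continue
                target, _, query, _, evalue = line.split()[:5]
                if target not in best or float(evalue) < best[target][1]:
                    best[target] = (query, float(evalue))
        print(len(best), "proteins assigned a KO profile")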

  19. Developing the "Compendium of Strategies to Reduce Teacher Turnover in the Northeast and Islands Region." A Companion to the Database. Issues & Answers. REL 2008-No. 052

    ERIC Educational Resources Information Center

    Ellis, Pamela; Grogan, Marian; Levy, Abigail Jurist; Tucker-Seeley, Kevon

    2008-01-01

    This report provides state-, regional-, and district-level decisionmakers in the Northeast and Islands Region with a description of the "Compendium of Strategies to Reduce Teacher Turnover in the Northeast and Islands Region," a searchable database of selected profiles of retention strategies implemented in Connecticut, Maine,…

  20. Bradbury Science Museum Collections Inventory Photos Disc #5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmeyer, Wendy J.

    The photos on Bradbury Science Museum Collections Inventory Photos Disc #5 are part of an ongoing effort to catalog all artifacts held by the Museum. Each photo will be used as part of the condition report for the artifact and will become part of that artifact's record in the collections database. The collections database will be publicly searchable on the Museum website.

  1. Bradbury Science Museum Collections Inventory Photos Disc #4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strohmeyer, Wendy J.

    The photos on Bradbury Science Museum Collections Inventory Photos Disc #4 are part of an ongoing effort to catalog all artifacts held by the Museum. Each photo will be used as part of the condition report for the artifact and will become part of that artifact's record in the collections database. The collections database will be publicly searchable on the Museum website.

  2. Data Portal for the Library of Integrated Network-based Cellular Signatures (LINCS) program: integrated access to diverse large-scale cellular perturbation response data

    PubMed Central

    Koleti, Amar; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Cooper, Daniel J; Turner, John P; Vidović, Dušica; Forlin, Michele; Kelley, Tanya T; D’Urso, Alessandro; Allen, Bryce K; Torre, Denis; Jagodnik, Kathleen M; Wang, Lily; Jenkins, Sherry L; Mader, Christopher; Niu, Wen; Fazel, Mehdi; Mahi, Naim; Pilarczyk, Marcin; Clark, Nicholas; Shamsaei, Behrouz; Meller, Jarek; Vasiliauskas, Juozas; Reichard, John; Medvedovic, Mario; Ma’ayan, Avi; Pillai, Ajay

    2018-01-01

    The Library of Integrated Network-based Cellular Signatures (LINCS) program is a national consortium funded by the NIH to generate a diverse and extensive reference library of cell-based perturbation-response signatures, along with novel data analytics tools to improve our understanding of human diseases at the systems level. In contrast to other large-scale data generation efforts, LINCS Data and Signature Generation Centers (DSGCs) employ a wide range of assay technologies cataloging diverse cellular responses. Integration of, and unified access to, LINCS data have therefore been particularly challenging. The Big Data to Knowledge (BD2K) LINCS Data Coordination and Integration Center (DCIC) has developed data standards specifications, data processing pipelines, and a suite of end-user software tools to integrate and annotate LINCS-generated data, to make LINCS signatures searchable and usable for different types of users. Here, we describe the LINCS Data Portal (LDP) (http://lincsportal.ccs.miami.edu/), a unified web interface to access datasets generated by the LINCS DSGCs, and its underlying database, LINCS Data Registry (LDR). LINCS data served on the LDP contains extensive metadata and curated annotations. We highlight the features of the LDP user interface, which is designed to enable search, browsing, exploration, download and analysis of LINCS data and related curated content. PMID:29140462

  3. International forensic automotive paint database

    NASA Astrophysics Data System (ADS)

    Bishea, Gregory A.; Buckle, Joe L.; Ryland, Scott G.

    1999-02-01

    The Technical Working Group for Materials Analysis (TWGMAT) is supporting an international forensic automotive paint database. The Federal Bureau of Investigation and the Royal Canadian Mounted Police (RCMP) are collaborating on this effort through TWGMAT. This paper outlines the support and further development of the RCMP's Automotive Paint Database, `Paint Data Query'. This cooperative agreement augments and supports a current, validated, searchable, automotive paint database that is used to identify make(s), model(s), and year(s) of questioned paint samples in hit-and-run fatalities and other associated investigations involving automotive paint.

  4. Development of a Searchable Metabolite Database and Simulator of Xenobiotic Metabolism

    EPA Science Inventory

    A computational tool (MetaPath) has been developed for storage and analysis of metabolic pathways and associated metadata. The system is capable of sophisticated text and chemical structure/substructure searching as well as rapid comparison of metabolites formed across chemicals,...

  5. EPA'S REPORT ON THE ENVIRONMENT (2003 Draft)

    EPA Science Inventory

    The RoE presents information on environmental indicators in the areas of air, water, land, human health, and ecological condition. The report is available for download and the RoE information is searchable via an on-line database site: www.epa.gov/roe.

  6. Accessibility, searchability, transparency and engagement of soil carbon data: The International Soil Carbon Network

    NASA Astrophysics Data System (ADS)

    Harden, Jennifer W.; Hugelius, Gustaf; Koven, Charlie; Sulman, Ben; O'Donnell, Jon; He, Yujie

    2016-04-01

    Soils are capacitors for carbon and water entering and exiting through land-atmosphere exchange. Capturing the spatiotemporal variations in soil C exchange through monitoring and modeling is difficult in part because data are reported unevenly across spatial, temporal, and management scales and in part because the unit of measure generally involves destructive harvest or non-recurrent measurements. In order to improve our fundamental basis for understanding soil C exchange, a multi-user, open source, searchable database and network of scientists has been formed. The International Soil Carbon Network (ISCN) is a self-chartered, member-based and member-owned network of scientists dedicated to soil carbon science. Attributes of the ISCN include 1) targeted ISCN Action Groups, teams of motivated researchers that propose and pursue specific soil C research questions with the aim of synthesizing seminal articles regarding soil C fate; 2) datasets contributed to date by institutions and individuals to a comprehensive, searchable, open-access database that currently includes over 70,000 geolocated profiles reporting soil C and other soil properties; and 3) derivative products resulting from the database, including depth-attenuation attributes for C concentration and storage, C storage maps, and model-based assessments of emission/sequestration for future climate scenarios. Several examples illustrate the power of such a database and its engagement with the science community. First, a simplified, data-constrained global ecosystem model estimated a global sensitivity of permafrost soil carbon to climate change (γ sensitivity) of -14 to -19 Pg C °C-1 of warming on a 100-year time scale. Second, using mathematical characterizations of depth profiles for organic carbon storage, C at the soil surface reflects Net Primary Production (NPP) and its allotment as moss or litter, while e-folding depths are correlated to rooting depth. Third, storage of deep C is highly correlated with bulk density and porosity of the rock/sediment matrix. Thus C storage is most stable at depth, yet is susceptible to changes in tillage, rooting depths, and erosion/sedimentation. Fourth, current ESMs likely overestimate the turnover time of soil organic carbon and subsequently overestimate soil carbon sequestration, thus datasets combined with other soil properties will help constrain ESM predictions. Last, analysis of soil horizon and carbon data showed that soils with a history of tillage had significantly lower carbon concentrations in both near-surface and deep layers, and that the effect persisted even in reforested areas. In addition to the opportunities for empirical science using a large database, the database has great promise for evaluation of biogeochemical and earth system models. The preservation of individual soil core measurements avoids issues with spatial averaging while facilitating evaluation of advanced model processes such as depth distributions of soil carbon, land use impacts, and spatial heterogeneity.
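
    The depth-attenuation characterization mentioned above is often expressed as an exponential profile, C(z) = C0 * exp(-z / ze), where ze is the e-folding depth. The sketch below fits such a profile to a hypothetical set of observations; the numbers are illustrative only and are not ISCN data.

        # Fit an exponential soil-carbon depth profile to recover the e-folding depth.
        import numpy as np
        from scipy.optimize import curve_fit

        depth_cm = np.array([5.0, 15.0, 30.0, 50.0, 80.0, 120.0])    # hypothetical depths
        carbon_pct = np.array([4.2, 2.9, 1.8, 1.1, 0.6, 0.3])        # hypothetical C concentrations

        def profile(z, c0, ze):
            """C(z) = C0 * exp(-z / ze), with ze the e-folding depth."""
            return c0 * np.exp(-z / ze)

        (c0, ze), _ = curve_fit(profile, depth_cm, carbon_pct, p0=(4.0, 30.0))
        print(f"surface C ~ {c0:.2f} %, e-folding depth ~ {ze:.0f} cm")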

  7. Data Collection, Collaboration, Analysis, and Publication Using the Open Data Repository's (ODR) Data Publisher

    NASA Astrophysics Data System (ADS)

    Lafuente, B.; Stone, N.; Bristow, T.; Keller, R. M.; Blake, D. F.; Downs, R. T.; Pires, A.; Dateo, C. E.; Fonda, M.

    2017-12-01

    In development for nearly four years, the Open Data Repository's (ODR) Data Publisher software has become a useful tool for researchers' data needs. Data Publisher facilitates the creation of customized databases with flexible permission sets that allow researchers to share data collaboratively while improving data discovery and maintaining ownership rights. The open source software provides an end-to-end solution from collection to final repository publication. A web-based interface allows researchers to enter data, view data, and conduct analysis using any programming language supported by JupyterHub (http://www.jupyterhub.org). This toolset makes it possible for a researcher to store and manipulate their data in the cloud from any internet capable device. Data can be embargoed in the system until a date selected by the researcher. For instance, open publication can be set to a date that coincides with publication of data analysis in a third party journal. In conjunction with teams at NASA Ames and the University of Arizona, a number of pilot studies are being conducted to guide the software development so that it allows them to publish and share their data. These pilots include (1) the Astrobiology Habitable Environments Database (AHED), a central searchable repository designed to promote and facilitate the integration and sharing of all the data generated by the diverse disciplines in astrobiology; (2) a database containing the raw and derived data products from the CheMin instrument on the MSL rover Curiosity (http://odr.io/CheMin), featuring a versatile graphing system, instructions and analytical tools to process the data, and a capability to download data in different formats; and (3) the Mineral Evolution project, which by correlating the diversity of mineral species with their ages, localities, and other measurable properties aims to understand how the episodes of planetary accretion and differentiation, plate tectonics, and origin of life lead to a selective evolution of mineral species through changes in temperature, pressure, and composition. Ongoing development will complete integration of third party meta-data standards and publishing data to the semantic web. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, MSL.

  8. Bottled SAFT: A Web App Providing SAFT-γ Mie Force Field Parameters for Thousands of Molecular Fluids.

    PubMed

    Ervik, Åsmund; Mejía, Andrés; Müller, Erich A

    2016-09-26

    Coarse-grained molecular simulation has become a popular tool for modeling simple and complex fluids alike. The defining aspects of a coarse grained model are the force field parameters, which must be determined for each particular fluid. Because the number of molecular fluids of interest in nature and in engineering processes is immense, constructing force field parameter tables by individually fitting to experimental data is a futile task. A step toward solving this challenge was taken recently by Mejía et al., who proposed a correlation that provides SAFT-γ Mie force field parameters for a fluid provided one knows the critical temperature, the acentric factor and a liquid density, all relatively accessible properties. Building on this, we have applied the correlation to more than 6000 fluids, and constructed a web application, called "Bottled SAFT", which makes this data set easily searchable by CAS number, name or chemical formula. Alternatively, the application allows the user to calculate parameters for components not present in the database. Once the intermolecular potential has been found through Bottled SAFT, code snippets are provided for simulating the desired substance using the "raaSAFT" framework, which leverages established molecular dynamics codes to run the simulations. The code underlying the web application is written in Python using the Flask microframework; this allows us to provide a modern high-performance web app while also making use of the scientific libraries available in Python. Bottled SAFT aims at taking the complexity out of obtaining force field parameters for a wide range of molecular fluids, and facilitates setting up and running coarse-grained molecular simulations. The web application is freely available at http://www.bottledsaft.org . The underlying source code is available on Bitbucket under a permissive license.
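
    As a rough sketch of the lookup pattern such a web app implements, the snippet below exposes force-field parameters keyed by CAS number through a small Flask route (Flask being the framework named above). The route name and the parameter values are hypothetical placeholders, not the actual Bottled SAFT API or data.

        # Minimal Flask lookup service in the spirit of a CAS-number parameter search.
        from flask import Flask, jsonify, abort

        app = Flask(__name__)

        # Hypothetical SAFT-gamma Mie parameters keyed by CAS number (illustrative values).
        PARAMETERS = {
            "110-54-3": {"name": "n-hexane", "sigma_A": 4.51, "epsilon_K": 376.4,
                         "lambda_r": 19.6, "segments": 2},
        }

        @app.route("/api/cas/<cas>")          # hypothetical route, not the real API
        def by_cas(cas):
            record = PARAMETERS.get(cas)
            if record is None:
                abort(404)
            return jsonify(record)

        if __name__ == "__main__":
            app.run(debug=True)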

  9. Volcanic eruptions, hazardous ash clouds and visualization tools for accessing real-time infrared remote sensing data

    NASA Astrophysics Data System (ADS)

    Webley, P.; Dehn, J.; Dean, K. G.; Macfarlane, S.

    2010-12-01

    Volcanic eruptions are a global hazard, affecting local infrastructure, impacting airports and hindering the aviation community, as seen in Europe during Spring 2010 from the Eyjafjallajokull eruption in Iceland. Here, we show how remote sensing data is used through web-based interfaces for monitoring volcanic activity, both ground-based thermal signals and airborne ash clouds. These ‘web tools’, http://avo.images.alaska.edu/, provide timely availability of polar orbiting and geostationary data from US National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration and Japan Meteorological Agency satellites for the North Pacific (NOPAC) region. This data is used operationally by the Alaska Volcano Observatory (AVO) for monitoring volcanic activity, especially at remote volcanoes, and generates ‘alarms’ of any detected volcanic activity and ash clouds. The webtools allow the remote sensing team of AVO to easily perform their twice-daily monitoring shifts. The web tools also assist the National Weather Service, Alaska and Kamchatkan Volcanic Emergency Response Team, Russia in their operational duties. Users are able to detect ash clouds and measure the distance from the source, the cloud area, and the signal strength. Within the web tools, there are 40 x 40 km datasets centered on each volcano and a searchable database of all acquired data from 1993 until present with the ability to produce time series data per volcano. Additionally, a data center illustrates the acquired data across the NOPAC within the last 48 hours, http://avo.images.alaska.edu/tools/datacenter/. We will illustrate new visualization tools allowing users to display the satellite imagery within Google Earth/Maps, and ArcGIS Explorer both as static maps and time-animated imagery. We will show these tools in real-time as well as examples of past large volcanic eruptions. In the future, we will develop the tools to produce real-time ash retrievals, run volcanic ash dispersion models from detected ash clouds and develop the browser interfaces to display other remote sensing datasets, such as volcanic sulfur dioxide detection.

  10. Computerization of ALDOT R & D library titles : a project for the Alabama Department of Transportation

    DOT National Transportation Integrated Search

    2003-06-01

    The Alabama Department of Transportation (ALDOT) Research and Development (R&D) Bureau wished to catalog the paper publications in its library and produce a searchable database that allows an internet user easily to identify if a particular document ...

  11. 78 FR 19243 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-29

    ... applicant; or stored in searchable database and retrievable by patent number. Safeguards: Buildings employ... DEPARTMENT OF COMMERCE United States Patent and Trademark Office Privacy Act of 1974; System of Records AGENCY: United States Patent and Trademark Office, Commerce. ACTION: Notice of amendment of...

  12. EDUCATIONPLANNER.CA: An External Review

    ERIC Educational Resources Information Center

    Atkinson, Al

    2009-01-01

    The Education Planner website provides a searchable database of approximately 1,700 undergraduate post-secondary programs in British Columbia (BC). It is intended as a "one-start entry point" for students looking for post-secondary options. This independent review of Education Planner was undertaken to determine its overall…

  13. HydroDesktop: An Open Source GIS-Based Platform for Hydrologic Data Discovery, Visualization, and Analysis

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Kadlec, J.; Cao, Y.; Grover, D.; Horsburgh, J. S.; Whiteaker, T.; Goodall, J. L.; Valentine, D. W.

    2010-12-01

    A growing number of hydrologic information servers are being deployed by government agencies, university networks, and individual researchers using the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS). The CUAHSI HIS Project has developed a standard software stack, called HydroServer, for publishing hydrologic observations data. It includes the Observations Data Model (ODM) database and Water Data Service web services, which together enable publication of data on the Internet in a standard format called Water Markup Language (WaterML). Metadata describing available datasets hosted on these servers is compiled within a central metadata catalog called HIS Central at the San Diego Supercomputer Center and is searchable through a set of predefined web services based queries. Together, these servers and central catalog service comprise a federated HIS of a scale and comprehensiveness never previously available. This presentation will briefly review/introduce the CUAHSI HIS system with special focus on a new HIS software tool called "HydroDesktop" and the open source software development web portal, www.HydroDesktop.org, which supports community development and maintenance of the software. HydroDesktop is a client-side, desktop software application that acts as a search and discovery tool for exploring the distributed network of HydroServers, downloading specific data series, visualizing and summarizing data series and exporting these to formats needed for analysis by external software. HydroDesktop is based on the open source DotSpatial GIS developer toolkit which provides it with map-based data interaction and visualization, and a plug-in interface that can be used by third party developers and researchers to easily extend the software using Microsoft .NET programming languages. HydroDesktop plug-ins that are presently available or currently under development within the project and by third party collaborators include functions for data search and discovery, extensive graphing, data editing and export, HydroServer exploration, integration with the OpenMI workflow and modeling system, and an interface for data analysis through the R statistical package.
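
    Client-side, the data-access pattern HydroDesktop implements amounts to requesting a WaterML document from a HydroServer and extracting the observation values. The sketch below illustrates that pattern with a hypothetical endpoint URL; real deployments expose WaterOneFlow web services whose exact addresses and parameters vary by server, so nothing here should be taken as the actual CUAHSI API.

        # Hedged sketch: fetch a WaterML time series and pull out the <value> elements.
        import urllib.request
        import xml.etree.ElementTree as ET

        # Placeholder endpoint; real HydroServer/WaterOneFlow URLs differ by deployment.
        URL = "http://example-hydroserver.org/waterml?site=USGS:10105900&variable=discharge"

        with urllib.request.urlopen(URL) as resp:
            tree = ET.parse(resp)

        # WaterML encodes observations as <value dateTime="...">12.3</value> elements;
        # match on the local tag name to avoid pinning a namespace version.
        series = [
            (el.get("dateTime"), float(el.text))
            for el in tree.iter()
            if el.tag.split("}")[-1] == "value" and el.text
        ]
        print(len(series), "observations retrieved")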

  14. A web-based genomic sequence database for the Streptomycetaceae: a tool for systematics and genome mining

    USDA-ARS?s Scientific Manuscript database

    The ARS Microbial Genome Sequence Database (http://199.133.98.43), a web-based database server, was established utilizing the BIGSdb (Bacterial Isolate Genomics Sequence Database) software package, developed at Oxford University, as a tool to manage multi-locus sequence data for the family Streptomy...

  15. Perspective: Interactive material property databases through aggregation of literature data

    NASA Astrophysics Data System (ADS)

    Seshadri, Ram; Sparks, Taylor D.

    2016-05-01

    Searchable, interactive databases of material properties, particularly those relating to functional materials (magnetics, thermoelectrics, photovoltaics, etc.), are curiously missing from discussions of machine-learning and other data-driven methods for advancing new materials discovery. Here we discuss the manual aggregation of experimental data from the published literature for the creation of interactive databases that allow the original experimental data, as well as additional metadata, to be visualized in an interactive manner. The databases described involve materials for thermoelectric energy conversion and for the electrodes of Li-ion batteries. The data can be subjected to machine learning, accelerating the discovery of new materials.

  16. A searchable database for the genome of Phomopsis longicolla (isolate MSPL 10-6)

    USDA-ARS?s Scientific Manuscript database

    Phomopsis longicolla (syn. Diaporthe longicolla) is an important seed-borne fungal pathogen that primarily causes Phomopsis seed decay (PSD) in most soybean production areas worldwide. This disease severely decreases soybean seed quality by reducing seed viability and oil quality, altering seed com...

  17. Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences

    PubMed Central

    Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi

    2006-01-01

    Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pilot project that illustrates how neuroscience databases can be expressed using the Web Ontology Language, which is a semantically-rich ontological language, as a common data representation language to facilitate complex cross-database queries. In this pilot project, an existing tool called “D2RQ” was used to translate two neuroscience databases (NeuronDB and CoCoDat) into OWL, and the resulting OWL ontologies were then merged. An OWL-based reasoner (Racer) was then used to provide a sophisticated query language (nRQL) to perform integrated queries across the two databases based on the merged ontology. This pilot project is one step toward exploring the use of semantic web technologies in the neurosciences. PMID:17238384
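
    The general approach (merging database exports into one ontology and querying across them) can be illustrated with rdflib in place of the D2RQ/Racer/nRQL toolchain used in the pilot project. In the sketch below the file names, namespace, and property names are hypothetical.

        # Sketch of the merge-and-query pattern, not the original toolchain.
        from rdflib import Graph

        merged = Graph()
        merged.parse("neurondb.owl", format="xml")   # placeholder export of database 1
        merged.parse("cocodat.owl", format="xml")    # placeholder export of database 2

        # SPARQL query spanning both sources; class and property names are hypothetical.
        results = merged.query("""
            PREFIX ex: <http://example.org/neuro#>
            SELECT ?neuron ?property
            WHERE {
                ?neuron a ex:Neuron ;
                        ex:hasProperty ?property .
            }
        """)
        for row in results:
            print(row.neuron, row.property)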

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fox, G.C.; Stevens, P.R.; Rittenberg, A.

    A compilation is presented of reaction data taken from experimental high energy physics journal articles, reports, preprints, theses, and other sources. Listings of all the data are given, and the data points are indexed by reaction and momentum, as well as by their source document. Much of the original compilation was done by others working in the field. The data presented also exist in the form of a computer-readable and searchable database; primitive access facilities for this database are available.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Udoeyop, Akaninyene W; Schlicher, Bob G

    This work examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific, conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources are identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. The combinations of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citation, patents, news archives, and online mapping networks) are assembled to become one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain we investigated.

  20. Database of Geoscientific References Through 2007 for Afghanistan, Version 2

    USGS Publications Warehouse

    Eppinger, Robert G.; Sipeki, Julianna; Scofield, M.L. Sco

    2007-01-01

    This report describes an accompanying Microsoft Access 2003 database of geoscientific references for the country of Afghanistan. The reference compilation is part of a larger joint study of Afghanistan's energy, mineral, and water resources, and geologic hazards, currently underway by the U.S. Geological Survey, the British Geological Survey, and the Afghanistan Geological Survey. The database includes both published (n = 2,462) and unpublished (n = 174) references compiled through September 2007. The references comprise two separate tables in the Access database. The reference database includes a user-friendly, keyword-searchable interface, and only minimal knowledge of Microsoft Access is required to use it.

  1. Just-in-time Database-Driven Web Applications

    PubMed Central

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  2. DAM-ing the Digital Flood

    ERIC Educational Resources Information Center

    Raths, David

    2008-01-01

    With the widespread digitization of art, photography, and music, plus the introduction of streaming video, many colleges and universities are realizing that they must develop or purchase systems to preserve their school's digitized objects; that they must create searchable databases so that researchers can find and share copies of digital files;…

  3. [A systematic evaluation of application of the web-based cancer database].

    PubMed

    Huang, Tingting; Liu, Jialin; Li, Yong; Zhang, Rui

    2013-10-01

    To support the theory and practice of web-based cancer database development in China, we carried out a systematic evaluation of the state of web-based cancer databases at home and abroad. We performed computer-based retrieval of the Ovid-MEDLINE, SpringerLink, EBSCOhost, Wiley Online Library and CNKI databases for papers published between Jan. 1995 and Dec. 2011, and searched the references of these papers by hand. We selected qualified papers according to pre-established inclusion and exclusion criteria, and carried out information extraction and analysis. The online search yielded 1244 papers, and checking the reference lists identified a further 19 articles. Thirty-one articles met the inclusion and exclusion criteria; the evidence they provided was extracted and assessed. Analysis of this evidence showed that the U.S.A. ranked first, accounting for 26% of the databases. Thirty-nine percent of these web-based cancer databases are comprehensive cancer databases. Among single-cancer databases, breast cancer and prostate cancer rank highest, each accounting for 10%. Thirty-two percent of the cancer databases are associated with cancer gene information. As for the technologies applied, MySQL and PHP were the most widely used, at nearly 23% each.

  4. Consumer involvement in Quality Use of Medicines (QUM) projects - lessons from Australia.

    PubMed

    Kirkpatrick, Carl M J; Roughead, Elizabeth E; Monteith, Gregory R; Tett, Susan E

    2005-12-01

    It is essential that knowledge gained through health services research is collated and made available for evaluation, for policy purposes and to enable collaboration between people working in similar areas (capacity building). The Australian Quality Use of Medicine (QUM) on-line, web-based project database, known as the QUMmap, was designed to meet these needs for a specific sub-section of health services research related to improving the use of medicines. Australia's National Strategy for Quality Use of Medicines identifies the primacy of consumers as a major principle for quality use of medicines, and aims to support consumer led research. The aim of this study was to determine how consumers as a group have been represented in QUM projects in Australia. A secondary aim was to investigate how the projects with consumer involvement fit into Australia's QUM policy framework. Using the web-based QUMmap, all projects which claimed consumer involvement were identified and stratified into four categories, projects undertaken by; (a) consumers for consumers, (b) health professionals for consumers, (c) health professionals for health professionals, and (d) other. Projects in the first two categories were then classified according to the policy 'building blocks' considered necessary to achieve QUM. Of the 143 'consumer' projects identified, the majority stated to be 'for consumers' were either actually by health professionals for health professionals (c) or by health professionals for consumers (b) (47% and 40% respectively). Only 12 projects (9%) were directly undertaken by consumers or consumer groups for consumers (a). The majority of the health professionals for consumers (b) projects were directed at the provision of services and interventions, but were not focusing on the education, training or skill development of consumers. Health services research relating to QUM is active in Australia and the projects are collated and searchable on the web-based interactive QUMmap. Healthcare professionals appear to be dominating nominally 'consumer focussed' research, with less than half of these projects actively involving the consumers or directly benefiting consumers. The QUMmap provides a valuable tool for policy analysis and for provision of future directions through identification of QUM initiatives.

  5. PLAN: a web platform for automating high-throughput BLAST searches and for managing and mining results.

    PubMed

    He, Ji; Dai, Xinbin; Zhao, Xuechun

    2007-02-09

    BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results. The PLAN web interface is platform-independent, user-intuitive, easily configurable, and capable of comprehensive expansion. PLAN is freely available to academic users at http://bioinfo.noble.org/plan/. The source code for local deployment is provided under a free license. Full support for system utilization, installation, configuration and customization is provided to academic users.
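
    The automated search step that PLAN wraps can be approximated with the standard NCBI BLAST+ command line and its tabular output, as in the hedged sketch below; the query file and database names are placeholders, and the post-processing simply keeps the first (best) hit per query.

        # Scripted BLAST+ run with tabular output, then keep the best hit per query.
        import subprocess
        import csv

        subprocess.run(
            ["blastp",
             "-query", "unknown_proteins.fasta",     # placeholder query set
             "-db", "nr_subset",                     # placeholder BLAST database
             "-evalue", "1e-5",
             "-outfmt", "6",                         # tabular: 12 standard columns
             "-out", "hits.tsv"],
            check=True,
        )

        # BLAST lists hits per query in ranked order, so the first row wins.
        best_hit = {}
        with open("hits.tsv") as fh:
            for qid, sid, pident, *_rest in csv.reader(fh, delimiter="\t"):
                best_hit.setdefault(qid, (sid, float(pident)))
        print(len(best_hit), "queries annotated")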

  6. PLAN: a web platform for automating high-throughput BLAST searches and for managing and mining results

    PubMed Central

    He, Ji; Dai, Xinbin; Zhao, Xuechun

    2007-01-01

    Background BLAST searches are widely used for sequence alignment. The search results are commonly adopted for various functional and comparative genomics tasks such as annotating unknown sequences, investigating gene models and comparing two sequence sets. Advances in sequencing technologies pose challenges for high-throughput analysis of large-scale sequence data. A number of programs and hardware solutions exist for efficient BLAST searching, but there is a lack of generic software solutions for mining and personalized management of the results. Systematically reviewing the results and identifying information of interest remains tedious and time-consuming. Results Personal BLAST Navigator (PLAN) is a versatile web platform that helps users to carry out various personalized pre- and post-BLAST tasks, including: (1) query and target sequence database management, (2) automated high-throughput BLAST searching, (3) indexing and searching of results, (4) filtering results online, (5) managing results of personal interest in favorite categories, (6) automated sequence annotation (such as NCBI NR and ontology-based annotation). PLAN integrates, by default, the Decypher hardware-based BLAST solution provided by Active Motif Inc. with a greatly improved efficiency over conventional BLAST software. BLAST results are visualized by spreadsheets and graphs and are full-text searchable. BLAST results and sequence annotations can be exported, in part or in full, in various formats including Microsoft Excel and FASTA. Sequences and BLAST results are organized in projects, the data publication levels of which are controlled by the registered project owners. In addition, all analytical functions are provided to public users without registration. Conclusion PLAN has proved a valuable addition to the community for automated high-throughput BLAST searches, and, more importantly, for knowledge discovery, management and sharing based on sequence alignment results. The PLAN web interface is platform-independent, user-intuitive, easily configurable, and capable of comprehensive expansion. PLAN is freely available to academic users at . The source code for local deployment is provided under a free license. Full support for system utilization, installation, configuration and customization is provided to academic users. PMID:17291345

  7. Implementing a Dynamic Database-Driven Course Using LAMP

    ERIC Educational Resources Information Center

    Laverty, Joseph Packy; Wood, David; Turchek, John

    2011-01-01

    This paper documents the formulation of a database driven open source architecture web development course. The design of a web-based curriculum faces many challenges: a) relative emphasis of client and server-side technologies, b) choice of a server-side language, and c) the cost and efficient delivery of a dynamic web development, database-driven…

  8. Spatial Distribution of Star Formation in High Redshift Galaxies

    NASA Astrophysics Data System (ADS)

    Cunnyngham, Ian; Takamiya, M.; Willmer, C.; Chun, M.; Young, M.

    2011-01-01

    Integral field unit spectroscopy of galaxies with redshifts between 0.6 and 0.8, obtained with Gemini Observatory’s GMOS instrument, was used to investigate the spatial distribution of star-forming regions by measuring the Hβ and [OII]λ3727 emission line fluxes. These galaxies were selected based on the strength of Hβ and [OII]λ3727 as measured from slit LRIS/Keck spectra. The process of calibrating and reducing data into cubes -- possessing two spatial dimensions, and one for wavelength -- was automated via a custom batch script using the Gemini IRAF routines. Among these galaxies only the bluest sources clearly show [OII] in the IFU regardless of total galaxy luminosity. The brightest galaxies lack [OII] emission and it is posited that two different modes of star formation exist among this seemingly homogeneous group of z=0.7 star-forming galaxies. In order to increase the galaxy sample to include redshifts from 0.3 to 0.9, public Gemini IFU data are being sought. Python scripts were written to mine the Gemini Science Archive for candidate observations, cross-reference the target of these observations with information from the NASA Extragalactic Database, and then present the resultant database in a sortable, searchable, cross-linked web interface built with Django to facilitate navigation. By increasing the sample, we expect to characterize these two different modes of star formation which could be high-redshift counterparts of the U/LIRGs and dwarf starburst galaxies like NGC 1569/NGC 4449. The authors acknowledge funds provided by the National Science Foundation (AST 0909240).

  9. Team X Spacecraft Instrument Database Consolidation

    NASA Technical Reports Server (NTRS)

    Wallenstein, Kelly A.

    2005-01-01

    In the past decade, many changes have been made to Team X's process of designing each spacecraft, with the purpose of making the overall procedure more efficient over time. One such improvement is the use of information databases from previous missions, designs, and research. By referring to these databases, members of the design team can locate relevant instrument data and significantly reduce the total time they spend on each design. The files in these databases were stored in several different formats with various levels of accuracy. During the past 2 months, efforts have been made in an attempt to combine and organize these files. The main focus was in the Instruments department, where spacecraft subsystems are designed based on mission measurement requirements. A common database was developed for all instrument parameters using Microsoft Excel to minimize the time and confusion experienced when searching through files stored in several different formats and locations. By making this collection of information more organized, the files within them have become more easily searchable. Additionally, the new Excel database offers the option of importing its contents into a more efficient database management system in the future. This potential for expansion enables the database to grow and acquire more search features as needed.
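
    The migration path mentioned above (moving the consolidated Excel sheet into a more efficient database management system) is straightforward to sketch with pandas and SQLite; the file, sheet, and column names below are hypothetical.

        # Read the consolidated instrument sheet and load it into a searchable SQL table.
        import sqlite3
        import pandas as pd

        instruments = pd.read_excel("teamx_instruments.xlsx", sheet_name="Instruments")  # placeholder file
        with sqlite3.connect("instruments.db") as conn:
            instruments.to_sql("instruments", conn, if_exists="replace", index=False)
            # Example search: lightweight instruments (column names are hypothetical).
            hits = conn.execute(
                "SELECT name, mass_kg, power_w FROM instruments WHERE mass_kg < ?", (10,)
            ).fetchall()
        print(hits)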

  10. Archive of mass spectral data files on recordable CD-ROMs and creation and maintenance of a searchable computerized database.

    PubMed

    Amick, G D

    1999-01-01

    A database containing names of mass spectral data files generated in a forensic toxicology laboratory and two Microsoft Visual Basic programs to maintain and search this database is described. The data files (approximately 0.5 KB/each) were collected from six mass spectrometers during routine casework. Data files were archived on 650 MB (74 min) recordable CD-ROMs. Each recordable CD-ROM was given a unique name, and its list of data file names was placed into the database. The present manuscript describes the use of search and maintenance programs for searching and routine upkeep of the database and creation of CD-ROMs for archiving of data files.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abercrombie, Robert K; Udoeyop, Akaninyene W

    This work examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific, conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources are identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. The combinations of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citation, worldwide patents, news archives, and on-line mapping networks) are assembled to become one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain we investigated.

  12. Scientometric methods for identifying emerging technologies

    DOEpatents

    Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T

    2015-11-03

    Provided is a method of generating a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific, conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources are identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. The combinations of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citation, worldwide patents, news archives, and on-line mapping networks) are assembled to become one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain.
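
    A minimal sketch of assembling several searchable sources into one collective network, using networkx as a stand-in for the tooling behind the patented method, is shown below; the nodes, edges, and years are illustrative placeholders.

        # Combine events from separate source networks into one collective graph.
        import networkx as nx

        collective = nx.MultiDiGraph()

        def add_event(source, origin, target, year):
            """Add a citation/assignment/mention edge tagged with its source network."""
            collective.add_edge(origin, target, source=source, year=year)

        add_event("scholarly", "paper:A", "paper:B", 2006)          # citation
        add_event("patents", "patent:US1234567", "paper:B", 2009)   # patent cites paper
        add_event("news", "article:xyz", "patent:US1234567", 2011)  # news mentions patent

        # Temporal flow of searchable events, grouped by year and source network.
        events_by_year = {}
        for _, _, data in collective.edges(data=True):
            events_by_year.setdefault(data["year"], []).append(data["source"])
        print(dict(sorted(events_by_year.items())))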

  13. Information systems in food safety management.

    PubMed

    McMeekin, T A; Baranyi, J; Bowman, J; Dalgaard, P; Kirk, M; Ross, T; Schmid, S; Zwietering, M H

    2006-12-01

    Information systems are concerned with data capture, storage, analysis and retrieval. In the context of food safety management they are vital to assist decision making in a short time frame, potentially allowing decisions to be made and practices to be actioned in real time. Databases with information on microorganisms pertinent to the identification of foodborne pathogens, response of microbial populations to the environment and characteristics of foods and processing conditions are the cornerstone of food safety management systems. Such databases find application in: Identifying pathogens in food at the genus or species level using applied systematics in automated ways. Identifying pathogens below the species level by molecular subtyping, an approach successfully applied in epidemiological investigations of foodborne disease and the basis for national surveillance programs. Predictive modelling software, such as the Pathogen Modeling Program and Growth Predictor (that took over the main functions of Food Micromodel) the raw data of which were combined as the genesis of an international web based searchable database (ComBase). Expert systems combining databases on microbial characteristics, food composition and processing information with the resulting "pattern match" indicating problems that may arise from changes in product formulation or processing conditions. Computer software packages to aid the practical application of HACCP and risk assessment and decision trees to bring logical sequences to establishing and modifying food safety management practices. In addition there are many other uses of information systems that benefit food safety more globally, including: Rapid dissemination of information on foodborne disease outbreaks via websites or list servers carrying commentary from many sources, including the press and interest groups, on the reasons for and consequences of foodborne disease incidents. Active surveillance networks allowing rapid dissemination of molecular subtyping information between public health agencies to detect foodborne outbreaks and limit the spread of human disease. Traceability of individual animals or crops from (or before) conception or germination to the consumer as an integral part of food supply chain management. Provision of high quality, online educational packages to food industry personnel otherwise precluded from access to such courses.
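
    As a small illustration of the predictive-modelling component referenced above, the sketch below implements the Ratkowsky square-root model, one of the simplest secondary models in predictive microbiology, sqrt(mu_max) = b * (T - Tmin). The parameter values are illustrative only and are not taken from ComBase or Growth Predictor.

        # Ratkowsky square-root model: growth rate versus temperature below the optimum.
        def ratkowsky_growth_rate(temp_c, b=0.023, t_min_c=-1.6):
            """Maximum specific growth rate (1/h) at a given temperature (deg C)."""
            if temp_c <= t_min_c:
                return 0.0               # no growth at or below the minimum temperature
            return (b * (temp_c - t_min_c)) ** 2

        for t in (4, 10, 20, 30):
            print(t, "degC ->", round(ratkowsky_growth_rate(t), 3), "1/h")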

  14. A novel approach: chemical relational databases, and the role of the ISSCAN database on assessing chemical carcinogenicity.

    PubMed

    Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae

    2008-01-01

    Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originated from the structure-activity relationships science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.

  15. Current Development at the Southern California Earthquake Data Center (SCEDC)

    NASA Astrophysics Data System (ADS)

    Appel, V. L.; Clayton, R. W.

    2005-12-01

    Over the past year, the SCEDC has completed, or is near completion of, three featured projects: Station Information System (SIS) Development: The SIS will provide users with an interface into complete and accurate station metadata for all current and historic data at the SCEDC. The goal of this project is to develop a system that can interact with a single database source to enter, update and retrieve station metadata easily and efficiently. The system will provide accurate station/channel information for active stations to the SCSN real-time processing system, as well as station/channel information for stations that have parametric data at the SCEDC, i.e., for users retrieving data via STP. Additionally, the SIS will supply information required to generate dataless SEED and COSMOS V0 volumes and allow stations to be added to the system with a minimal, but incomplete, set of information using predefined defaults that can be easily updated as more information becomes available. Finally, the system will facilitate statewide metadata exchange for real-time processing and provide a common approach to CISN historic station metadata. Moment Tensor Solutions: The SCEDC is currently archiving and delivering Moment Magnitudes and Moment Tensor Solutions (MTS) produced by the SCSN in real time, as well as post-processing solutions for events spanning back to 1999. The automatic MTS runs on all local events with magnitudes > 3.0, and all regional events > 3.5. The distributed solution automatically creates links from all USGS Simpson Maps to a text e-mail summary solution, creates a .gif image of the solution, and updates the moment tensor database tables at the SCEDC. Searchable Scanned Waveforms Site: The Caltech Seismological Lab has made available 12,223 scanned images of pre-digital analog recordings of major earthquakes recorded in Southern California between 1962 and 1992 at http://www.data.scec.org/research/scans/. The SCEDC has developed a searchable web interface that allows users to search the available files, select multiple files for download and then retrieve a zipped file containing the results. Scanned images of paper records for M>3.5 southern California earthquakes and several significant teleseisms are available for download via the SCEDC through this search tool.

  16. EFFORTS TO EXPAND THE DSSTOX STRUCTURE-SEARCHABLE PUBLIC TOXICITY DATABASE NETWORK

    EPA Science Inventory

    A major goal of the DSSTox website is to improve the utility of published toxicity data across different fields of research. The largest barriers in the exploration of toxicity data by chemists and modelers are the lack of chemical structure annotation in the research literature ...

  17. GEONETCast Americas

    Science.gov Websites

    This site is an access portal to GEONETCast products and includes a searchable database of the products, including their metadata. A version of the portal will run on a local computer at a user site, but internet links will not function unless the user has internet access on the same machine.

  18. [Systematic literature search in PubMed : A short introduction].

    PubMed

    Blümle, A; Lagrèze, W A; Motschall, E

    2018-03-01

    In order to identify current (and relevant) evidence for a specific clinical question within the unmanageable amount of information available, solid skills in performing a systematic literature search are essential. An efficient approach is to search a biomedical database containing relevant literature citations of study reports. The best-known database is MEDLINE, which is searchable for free via the PubMed interface. In this article, we explain step by step how to perform a systematic literature search via PubMed by means of an example research question in the field of ophthalmology. First, we demonstrate how to translate the clinical problem into a well-framed and searchable research question, how to identify relevant search terms and how to conduct a text word search and a keyword search using medical subject headings (MeSH) terms. We then show how to limit the number of search results if the search yields too many irrelevant hits and how to increase the number in the case of too few citations. Finally, we summarize all essential principles that guide a literature search via PubMed.
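    A query of the kind described above can also be issued programmatically through the NCBI E-utilities interface to PubMed. The sketch below is a minimal example: the esearch endpoint and its db/term/retmode parameters belong to the public E-utilities API, while the ophthalmology query string is a hypothetical example rather than the one used in the article.

```python
# Minimal sketch of a programmatic PubMed search via the NCBI E-utilities
# "esearch" endpoint, mirroring the MeSH-plus-text-word strategy described above.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed citations matching the search term."""
    params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{BASE}?{params}") as response:
        result = json.load(response)["esearchresult"]
    return int(result["count"])

if __name__ == "__main__":
    # Hypothetical example query combining a MeSH term with a free-text word.
    query = '"Macular Degeneration"[MeSH] AND ranibizumab[Title/Abstract]'
    print(pubmed_count(query), "citations found")
```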

  19. Spectroscopic Data for an Astronomy Data Base

    NASA Technical Reports Server (NTRS)

    Parkinson, W. H.; Smith, Peter L.

    1997-01-01

    When we began this work, very few of the atomic and molecular data used by astronomers in the analysis of astronomical spectra were available in on-line searchable databases. Our principal goals were to make the most useful of the atomic data files of R.L. Kurucz (1995a,b) available on the WWW, and to make the atomic data of R.L. Kelly for ultraviolet lines (i.e., essentially the same as the data in Kelly (1979) and Kelly (1987)) similarly available. In addition, we proposed to improve access to parameters for simple molecules of interest to astronomers.

  20. WAIS Searching of the Current Contents Database

    NASA Astrophysics Data System (ADS)

    Banholzer, P.; Grabenstein, M. E.

    The Homer E. Newell Memorial Library of NASA's Goddard Space Flight Center is developing capabilities to permit Goddard personnel to access electronic resources of the Library via the Internet. The Library's support services contractor, Maxima Corporation, and their subcontractor, SANAD Support Technologies have recently developed a World Wide Web Home Page (http://www-library.gsfc.nasa.gov) to provide the primary means of access. The first searchable database to be made available through the HomePage to Goddard employees is Current Contents, from the Institute for Scientific Information (ISI). The initial implementation includes coverage of articles from the last few months of 1992 to present. These records are augmented with abstracts and references, and often are more robust than equivalent records in bibliographic databases that currently serve the astronomical community. Maxima/SANAD selected Wais Incorporated's WAIS product with which to build the interface to Current Contents. This system allows access from Macintosh, IBM PC, and Unix hosts, which is an important feature for Goddard's multiplatform environment. The forms interface is structured to allow both fielded (author, article title, journal name, id number, keyword, subject term, and citation) and unfielded WAIS searches. The system allows a user to: Retrieve individual journal article records. Retrieve Table of Contents of specific issues of journals. Connect to articles with similar subject terms or keywords. Connect to other issues of the same journal in the same year. Browse journal issues from an alphabetical list of indexed journal names.

  1. OReFiL: an online resource finder for life sciences.

    PubMed

    Yamamoto, Yasunori; Takagi, Toshihisa

    2007-08-06

    Many online resources for the life sciences have been developed and introduced in peer-reviewed papers recently, ranging from databases and web applications to data-analysis software. Some have been introduced in special journal issues or websites with a search function, but others remain scattered throughout the Internet and in the published literature. The searchable resources on these sites are collected and maintained manually and are therefore of higher quality than automatically updated sites, but also require more time and effort. We developed an online resource search system called OReFiL to address these issues. We developed a crawler to gather all of the web pages whose URLs appear in MEDLINE abstracts and full-text papers on the BioMed Central open-access journals. The URLs were extracted using regular expressions and rules based on our heuristic knowledge. We then indexed the online resources to facilitate their retrieval and comparison by researchers. Because every online resource has at least one PubMed ID, we can easily acquire its summary with Medical Subject Headings (MeSH) terms and confirm its credibility through reference to the corresponding PubMed entry. In addition, because OReFiL automatically extracts URLs and updates the index, minimal time and effort is needed to maintain the system. We developed OReFiL, a search system for online life science resources, which is freely available. The system's distinctive features include the ability to return up-to-date query-relevant online resources introduced in peer-reviewed papers; the ability to search using free words, MeSH terms, or author names; easy verification of each hit following links to the corresponding PubMed entry or to papers citing the URL through the search systems of BioMed Central, Scirus, HighWire Press, or Google Scholar; and quick confirmation of the existence of an online resource web page.
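    The URL-harvesting step described above can be pictured with a small regular-expression sketch. The pattern below is a simplified stand-in for OReFiL's heuristic extraction rules, not the project's actual code.

```python
# Simplified sketch of extracting candidate resource URLs from abstract text
# with a regular expression; OReFiL's real rules are more elaborate.
import re

URL_PATTERN = re.compile(r"https?://[\w.-]+(?:/[\w./?%&=#~+-]*)?", re.IGNORECASE)

def extract_urls(text: str) -> list[str]:
    """Return URLs found in free text, with any trailing period stripped."""
    return [url.rstrip(".") for url in URL_PATTERN.findall(text)]

if __name__ == "__main__":
    abstract = ("The tool is freely available at http://example.org/orefil/. "
                "Source data come from MEDLINE and BioMed Central.")
    print(extract_urls(abstract))
```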

  2. OReFiL: an online resource finder for life sciences

    PubMed Central

    Yamamoto, Yasunori; Takagi, Toshihisa

    2007-01-01

    Background Many online resources for the life sciences have been developed and introduced in peer-reviewed papers recently, ranging from databases and web applications to data-analysis software. Some have been introduced in special journal issues or websites with a search function, but others remain scattered throughout the Internet and in the published literature. The searchable resources on these sites are collected and maintained manually and are therefore of higher quality than automatically updated sites, but also require more time and effort. Description We developed an online resource search system called OReFiL to address these issues. We developed a crawler to gather all of the web pages whose URLs appear in MEDLINE abstracts and full-text papers on the BioMed Central open-access journals. The URLs were extracted using regular expressions and rules based on our heuristic knowledge. We then indexed the online resources to facilitate their retrieval and comparison by researchers. Because every online resource has at least one PubMed ID, we can easily acquire its summary with Medical Subject Headings (MeSH) terms and confirm its credibility through reference to the corresponding PubMed entry. In addition, because OReFiL automatically extracts URLs and updates the index, minimal time and effort is needed to maintain the system. Conclusion We developed OReFiL, a search system for online life science resources, which is freely available. The system's distinctive features include the ability to return up-to-date query-relevant online resources introduced in peer-reviewed papers; the ability to search using free words, MeSH terms, or author names; easy verification of each hit following links to the corresponding PubMed entry or to papers citing the URL through the search systems of BioMed Central, Scirus, HighWire Press, or Google Scholar; and quick confirmation of the existence of an online resource web page. PMID:17683589

  3. The Hawaiian Freshwater Algal Database (HfwADB): a laboratory LIMS and online biodiversity resource

    PubMed Central

    2012-01-01

    Background Biodiversity databases serve the important role of highlighting species-level diversity from defined geographical regions. Databases that are specially designed to accommodate the types of data gathered during regional surveys are valuable in allowing full data access and display to researchers not directly involved with the project, while serving as a Laboratory Information Management System (LIMS). The Hawaiian Freshwater Algal Database, or HfwADB, was modified from the Hawaiian Algal Database to showcase non-marine algal specimens collected from the Hawaiian Archipelago by accommodating the additional level of organization required for samples including multiple species. Description The Hawaiian Freshwater Algal Database is a comprehensive and searchable database containing photographs and micrographs of samples and collection sites, geo-referenced collecting information, taxonomic data and standardized DNA sequence data. All data for individual samples are linked through unique 10-digit accession numbers (“Isolate Accession”), the first five of which correspond to the collection site (“Environmental Accession”). Users can search online for sample information by accession number, various levels of taxonomy, habitat or collection site. HfwADB is hosted at the University of Hawaii, and was made publicly accessible in October 2011. At the present time the database houses data for over 2,825 samples of non-marine algae from 1,786 collection sites from the Hawaiian Archipelago. These samples include cyanobacteria, red and green algae and diatoms, as well as lesser representation from some other algal lineages. Conclusions HfwADB is a digital repository that acts as a Laboratory Information Management System for Hawaiian non-marine algal data. Users can interact with the repository through the web to view relevant habitat data (including geo-referenced collection locations) and download images of collection sites, specimen photographs and micrographs, and DNA sequences. It is publicly available at http://algae.manoa.hawaii.edu/hfwadb/. PMID:23095476
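    The accession convention described above (a 10-digit Isolate Accession whose first five digits form the Environmental Accession for the collection site) lends itself to a short parsing sketch; the accession value used below is fabricated for illustration.

```python
# Minimal sketch of the HfwADB accession convention described above:
# a 10-digit "Isolate Accession" whose first five digits form the
# "Environmental Accession" identifying the collection site.

def split_accession(isolate_accession: str) -> tuple[str, str]:
    """Return (environmental_accession, isolate_suffix) for a 10-digit code."""
    if len(isolate_accession) != 10 or not isolate_accession.isdigit():
        raise ValueError("expected a 10-digit numeric accession")
    return isolate_accession[:5], isolate_accession[5:]

if __name__ == "__main__":
    env, isolate = split_accession("0012300045")  # fabricated example value
    print("collection site:", env, "| isolate:", isolate)
```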

  4. Users’ guide to the surgical literature: how to perform a high-quality literature search

    PubMed Central

    Waltho, Daniel; Kaur, Manraj Nirmal; Haynes, R. Brian; Farrokhyar, Forough; Thoma, Achilleas

    2015-01-01

    Summary The article “Users’ guide to the surgical literature: how to perform a literature search” was published in 2003, but the continuing technological developments in databases and search filters have rendered that guide out of date. The present guide fills an existing gap in this area; it provides the reader with strategies for developing a searchable clinical question, creating an efficient search strategy, accessing appropriate databases, and skillfully retrieving the best evidence to address the research question. PMID:26384150

  5. [A web-based integrated clinical database for laryngeal cancer].

    PubMed

    E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu

    2014-08-01

    To establish an integrated database for laryngeal cancer and to provide an information platform for clinical and fundamental research on laryngeal cancer that meets the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a Web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, laryngeal cancer specialist characteristics and tumor genetic information. A Web-based integrated clinical database for laryngeal carcinoma was developed. This database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system uses clinical data standards and exchanges information with existing electronic medical record systems to avoid information silos. Furthermore, the database forms are integrated with laryngeal cancer specialist characteristics and tumor genetic information. The Web-based integrated clinical database for laryngeal carcinoma provides comprehensive specialist information, strong expandability and high technical feasibility, and conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the collected patient and tumor information is highly informative. In addition, users can access and work with the database conveniently and quickly over the Internet.

  6. A Web-Based Tool to Support Data-Based Early Intervention Decision Making

    ERIC Educational Resources Information Center

    Buzhardt, Jay; Greenwood, Charles; Walker, Dale; Carta, Judith; Terry, Barbara; Garrett, Matthew

    2010-01-01

    Progress monitoring and data-based intervention decision making have become key components of providing evidence-based early childhood special education services. Unfortunately, there is a lack of tools to support early childhood service providers' decision-making efforts. The authors describe a Web-based system that guides service providers…

  7. Web-enabled Exercise Generation Tool for Battle Command Training

    DTIC Science & Technology

    2010-08-01

    perceived as fair. Some non-instructional bells and whistles, such as background music and two-dimensional animation, should be judiciously... modify the current image by changing its size or adding a caption (Figure 12). Second, the trainer can upload a new image from his/her computer and... specify properties such as size, caption, image credit, and whether the image is searchable and usable by other trainers (Figure 13). Finally, the...

  8. Systematic analysis of snake neurotoxins' functional classification using a data warehousing approach.

    PubMed

    Siew, Joyce Phui Yee; Khan, Asif M; Tan, Paul T J; Koh, Judice L Y; Seah, Seng Hong; Koo, Chuay Yeng; Chai, Siaw Ching; Armugam, Arunmozhiarasi; Brusic, Vladimir; Jeyaseelan, Kandiah

    2004-12-12

    Sequence annotations, functional and structural data on snake venom neurotoxins (svNTXs) are scattered across multiple databases and literature sources. Sequence annotations and structural data are available in the public molecular databases, while functional data are almost exclusively available in the published articles. There is a need for a specialized svNTXs database that contains NTX entries, which are organized, well annotated and classified in a systematic manner. We have systematically analyzed svNTXs and classified them using structure-function groups based on their structural, functional and phylogenetic properties. Using conserved motifs in each phylogenetic group, we built an intelligent module for the prediction of structural and functional properties of unknown NTXs. We also developed an annotation tool to aid the functional prediction of newly identified NTXs as an additional resource for the venom research community. We created a searchable online database of NTX proteins sequences (http://research.i2r.a-star.edu.sg/Templar/DB/snake_neurotoxin). This database can also be found under Swiss-Prot Toxin Annotation Project website (http://www.expasy.org/sprot/).
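    A conserved-motif check of the kind used by the prediction module above can be sketched with regular expressions over a protein sequence. The motifs, group names and example sequence below are hypothetical placeholders, not motifs from the svNTX analysis.

```python
# Sketch of assigning an unknown toxin sequence to a structure-function group
# by conserved motifs, in the spirit of the prediction module described above.
# The motifs and the example sequence are hypothetical placeholders.
import re

GROUP_MOTIFS = {
    "group_A": re.compile(r"C..C.{4}C"),    # hypothetical cysteine-spacing pattern
    "group_B": re.compile(r"W.{2}D.{3}R"),  # hypothetical motif
}

def predict_group(sequence: str) -> list[str]:
    """Return the names of all groups whose motif occurs in the sequence."""
    return [name for name, motif in GROUP_MOTIFS.items() if motif.search(sequence)]

if __name__ == "__main__":
    artificial_sequence = "MKTCLACDDDDCAAWQQDRRRR"  # not a real toxin sequence
    print(predict_group(artificial_sequence))
```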

  9. Cyberinfrastructure at IRIS: Challenges and Solutions Providing Integrated Data Access to EarthScope and Other Earth Science Data

    NASA Astrophysics Data System (ADS)

    Ahern, T. K.; Barga, R.; Casey, R.; Kamb, L.; Parastatidis, S.; Stromme, S.; Weertman, B. T.

    2008-12-01

    While mature methods of accessing seismic data from the IRIS DMC have existed for decades, the demands for improved interdisciplinary data integration call for new approaches. Talented software teams at the IRIS DMC, UNAVCO and the ICDP in Germany have been developing web services for all EarthScope data, including data from USArray, PBO and SAFOD. These web services are based upon SOAP and WSDL. The EarthScope Data Portal was the first external system to access data holdings from the IRIS DMC using Web Services. EarthScope will also draw more heavily upon products to aid in cross-disciplinary data reuse. A Product Management System called SPADE allows archiving of, and access to, heterogeneous data products, presented as XML documents, at the IRIS DMC. Searchable metadata are extracted from the XML and enable powerful searches for products from EarthScope and other data sources. IRIS is teaming with the External Research Group at Microsoft Research to leverage a powerful Scientific Workflow Engine (Trident) and interact with the web services developed at centers such as IRIS to enable access to data services as well as computational services. We believe that this approach will allow web-based control of workflows and the invocation of computational services that transform data. This capability will greatly improve access to data across scientific disciplines. This presentation will review some of the traditional access tools as well as many of the newer approaches that use web services and scientific workflows to improve interdisciplinary data access.

  10. TryTransDB: A web-based resource for transport proteins in Trypanosomatidae.

    PubMed

    Sonar, Krushna; Kabra, Ritika; Singh, Shailza

    2018-03-12

    TryTransDB is a web-based resource that stores transport protein data, which can be retrieved using a standalone BLAST tool. We have attempted to create an integrated database that can be a one-stop shop for researchers working with transport proteins of the Trypanosomatidae family. TryTransDB (Trypanosomatidae Transport Protein Database) is a comprehensive web-based resource that can run a BLAST search against most of the transport protein sequences (protein and nucleotide) from organisms of the Trypanosomatidae family. This web resource further allows users to compute a phylogenetic tree by performing a multiple sequence alignment (MSA) with the CLUSTALW suite embedded in it. Cross-links to other databases also help users gather more information about a given transport protein from a single website.

  11. Collection Fusion Using Bayesian Estimation of a Linear Regression Model in Image Databases on the Web.

    ERIC Educational Resources Information Center

    Kim, Deok-Hwan; Chung, Chin-Wan

    2003-01-01

    Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…

  12. EMU Lessons Learned Database

    NASA Technical Reports Server (NTRS)

    Matthews, Kevin M., Jr.; Crocker, Lori; Cupples, J. Scott

    2011-01-01

    As manned space exploration takes on the task of traveling beyond low Earth orbit, many problems arise that must be solved in order to make the journey possible. One major task is protecting humans from the harsh space environment. The current method of protecting astronauts during Extravehicular Activity (EVA) is through use of the specially designed Extravehicular Mobility Unit (EMU). As more rigorous EVA conditions need to be endured at new destinations, the suit will need to be tailored and improved in order to accommodate the astronaut. The objective behind the EMU Lessons Learned Database (LLD) is to create a tool that will assist in the development of next-generation EMUs, along with maintenance and improvement of the current EMU, by compiling data from Failure Investigation and Analysis Reports (FIARs), which have information on past suit failures. FIARs use a system of codes that give more information on the aspects of a failure, but someone unfamiliar with the EMU will be unable to decipher the information. A goal of the EMU LLD is not only to compile the information, but to present it in a user-friendly, organized, searchable database accessible to users at all levels of familiarity with the EMU, newcomers and veterans alike. The EMU LLD originally started as an Excel database, which allowed easy navigation and analysis of the data through pivot charts. Creating an entry requires access to the Problem Reporting And Corrective Action database (PRACA), which contains the original FIAR data for all hardware. FIAR data are then transferred to, defined, and formatted in the LLD. Work is being done to create a web-based version of the LLD in order to increase accessibility to all of Johnson Space Center (JSC), which includes converting entries from Excel to the HTML format. FIARs related to the EMU have been completed in the Excel version, and focus has now shifted to expanding FIAR data in the LLD to include EVA tools and support hardware such as the Pistol Grip Tool (PGT) and the Battery Charger Module (BCM), while adding any recently closed EMU-related FIARs.
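    The Excel-to-web migration step mentioned above can be pictured with a small pandas sketch that renders a spreadsheet of FIAR entries as an HTML table. The file name, sheet name and use of pandas are illustrative assumptions, not details of the actual LLD conversion.

```python
# Sketch of converting an Excel lessons-learned sheet to an HTML table,
# in the spirit of the Excel-to-web migration described above.
# File and sheet names are illustrative assumptions; requires pandas plus an
# Excel reader such as openpyxl.
import pandas as pd

def excel_sheet_to_html(xlsx_path: str, html_path: str, sheet: str = "FIARs") -> None:
    """Read one worksheet and write it out as a plain HTML table."""
    frame = pd.read_excel(xlsx_path, sheet_name=sheet)
    frame.to_html(html_path, index=False, na_rep="")

if __name__ == "__main__":
    excel_sheet_to_html("emu_lld.xlsx", "emu_lld.html")  # hypothetical file names
```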

  13. A review of instruments to measure interprofessional team-based primary care.

    PubMed

    Shoemaker, Sarah J; Parchman, Michael L; Fuda, Kathleen Kerwin; Schaefer, Judith; Levin, Jessica; Hunt, Meaghan; Ricciardi, Richard

    2016-07-01

    Interprofessional team-based care is increasingly regarded as an important feature of delivery systems redesigned to provide more efficient and higher quality care, including primary care. Measurement of the functioning of such teams might enable improvement of team effectiveness and could facilitate research on team-based primary care. Our aims were to develop a conceptual framework of high-functioning primary care teams to identify and review instruments that measure the constructs identified in the framework, and to create a searchable, web-based atlas of such instruments (available at: http://primarycaremeasures.ahrq.gov/team-based-care/ ). Our conceptual framework was developed from existing frameworks, the teamwork literature, and expert input. The framework is based on an Input-Mediator-Output model and includes 12 constructs to which we mapped both instruments as a whole, and individual instrument items. Instruments were also reviewed for relevance to measuring team-based care, and characterized. Instruments were identified from peer-reviewed and grey literature, measure databases, and expert input. From nearly 200 instruments initially identified, we found 48 to be relevant to measuring team-based primary care. The majority of instruments were surveys (n = 44), and the remainder (n = 4) were observational checklists. Most instruments had been developed/tested in healthcare settings (n = 30) and addressed multiple constructs, most commonly communication (n = 42), heedful interrelating (n = 42), respectful interactions (n = 40), and shared explicit goals (n = 37). The majority of instruments had some reliability testing (n = 39) and over half included validity testing (n = 29). Currently available instruments offer promise to researchers and practitioners to assess teams' performance, but additional work is needed to adapt these instruments for primary care settings.

  14. Construction of a Linux based chemical and biological information system.

    PubMed

    Molnár, László; Vágó, István; Fehér, András

    2003-01-01

    A chemical and biological information system with a Web-based easy-to-use interface and corresponding databases has been developed. The constructed system incorporates all chemical, numerical and textual data related to the chemical compounds, including numerical biological screen results. Users can search the database by traditional textual/numerical and/or substructure or similarity queries through the web interface. To build our chemical database management system, we utilized existing IT components such as ORACLE or Tripos SYBYL for database management and Zope application server for the web interface. We chose Linux as the main platform, however, almost every component can be used under various operating systems.
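    A substructure query of the kind exposed by the web interface above can be sketched as follows. The paper's system used ORACLE and Tripos SYBYL; RDKit is used here only as a freely available stand-in, and the molecules and SMARTS pattern are arbitrary examples.

```python
# Illustrative substructure search over a tiny in-memory compound set.
# RDKit stands in for the ORACLE/SYBYL cartridge used in the original system.
from rdkit import Chem

compounds = {
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",
    "ibuprofen": "CC(C)Cc1ccc(cc1)C(C)C(=O)O",
}

# Query pattern: a carboxylic acid group.
query = Chem.MolFromSmarts("C(=O)[OH]")

hits = [name for name, smiles in compounds.items()
        if Chem.MolFromSmiles(smiles).HasSubstructMatch(query)]
print("carboxylic acid present in:", hits)
```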

  15. An integrated computational approach can classify VHL missense mutations according to risk of clear cell renal carcinoma

    PubMed Central

    Gossage, Lucy; Pires, Douglas E. V.; Olivera-Nappa, Álvaro; Asenjo, Juan; Bycroft, Mark; Blundell, Tom L.; Eisen, Tim

    2014-01-01

    Mutations in the von Hippel–Lindau (VHL) gene are pathogenic in VHL disease, congenital polycythaemia and clear cell renal carcinoma (ccRCC). pVHL forms a ternary complex with elongin C and elongin B, critical for pVHL stability and function, which interacts with Cullin-2 and RING-box protein 1 to target hypoxia-inducible factor for polyubiquitination and proteasomal degradation. We describe a comprehensive database of missense VHL mutations linked to experimental and clinical data. We use predictions from in silico tools to link the functional effects of missense VHL mutations to phenotype. The risk of ccRCC in VHL disease is linked to the degree of destabilization resulting from missense mutations. An optimized binary classification system (symphony), which integrates predictions from five in silico methods, can predict the risk of ccRCC associated with VHL missense mutations with high sensitivity and specificity. We use symphony to generate predictions for risk of ccRCC for all possible VHL missense mutations and present these predictions, in association with clinical and experimental data, in a publicly available, searchable web server. PMID:24969085
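    The idea of integrating several in silico predictors into one binary call can be sketched as a simple consensus vote. This is only a conceptual illustration: the tool names, outputs and threshold below are placeholders, and the actual symphony classifier's integration scheme is not reproduced here.

```python
# Conceptual sketch of combining several in silico predictions into one binary
# call, in the spirit of an integrated classifier. Names, values and the
# threshold are placeholders, not the published method's parameters.

def consensus_call(predictions: dict[str, bool], threshold: int = 3) -> bool:
    """Return True (predicted high risk) if at least `threshold` tools agree."""
    return sum(predictions.values()) >= threshold

if __name__ == "__main__":
    example = {
        "tool_1": True,
        "tool_2": True,
        "tool_3": False,
        "tool_4": True,
        "tool_5": False,
    }
    print("high-risk call:", consensus_call(example))
```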

  16. Digital hand atlas and computer-aided bone age assessment via the Web

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente

    1999-07-01

    A frequently used assessment method of bone age is atlas matching by a radiological examination of a hand image against a reference set of atlas patterns of normal standards. We are in the process of developing a digital hand atlas with a large standard set of normal hand and wrist images that reflect the skeletal maturity, race and sex difference, and current child development. The digital hand atlas will be used for computer-aided bone age assessment via the Web. We have designed and partially implemented a computer-aided diagnostic (CAD) system for Web-based bone age assessment. The system consists of a digital hand atlas, a relational image database and a Web-based user interface. The digital atlas is based on a large standard set of normal hand and wrist images with extracted bone objects and quantitative features. The image database uses content-based indexing to organize the hand images and their attributes and presents them to users in a structured way. The Web-based user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features on the examined image, which reflect the skeletal maturity, will be extracted and compared with patterns from the atlas database to assess the bone age. The relevant reference images and the final assessment report will be sent back to the user's browser via the Web. The digital atlas will remove the disadvantages of the currently out-of-date one and allow the bone age assessment to be computerized and done conveniently via the Web. In this paper, we present the system design and Web-based client-server model for computer-assisted bone age assessment and our initial implementation of the digital atlas database.

  17. CiteAb: a searchable antibody database that ranks antibodies by the number of times they have been cited

    PubMed Central

    2014-01-01

    Background Research antibodies are used by thousands of scientists working in diverse disciplines, but it is common to hear concerns about antibody quality. This means that researchers need to carefully choose the antibodies they use to avoid wasting time and money. A well accepted way of selecting a research antibody is to identify one which has been used previously, where the associated data has been peer-reviewed and the results published. Description CiteAb is a searchable database which ranks antibodies by the number of times they have been cited. This allows researchers to easily find antibodies that have been used in peer-reviewed publications and the accompanying citations are listed, so users can check the data contained within the publications. This makes CiteAb a useful resource for identifying antibodies for experiments and also for finding information to demonstrate antibody validation. The database currently contains 1,400,000 antibodies which are from 90 suppliers, including 87 commercial companies and 3 academic resources. Associated with these antibodies are 140,000 publications which provide 306,000 antibody citations. In addition to searching, users can also browse through the antibodies and add their own publications to the CiteAb database. Conclusions CiteAb provides a new way for researchers to find research antibodies that have been used successfully in peer-reviewed publications. It aims to assist these researchers and will hopefully help promote progress in many areas of life science research. PMID:24528853

  18. The CHARA Array Database

    NASA Astrophysics Data System (ADS)

    Jones, Jeremy; Schaefer, Gail; ten Brummelaar, Theo; Gies, Douglas; Farrington, Christopher

    2018-01-01

    We are building a searchable database for the CHARA Array data archive. The Array consists of six telescopes linked together as an interferometer, providing sub-milliarcsecond resolution in the optical and near-infrared. The Array enables a variety of scientific studies, including measuring stellar angular diameters, imaging stellar shapes and surface features, mapping the orbits of close binary companions, and resolving circumstellar environments. This database is one component of an NSF/MSIP funded program to provide open access to the CHARA Array to the broader astronomical community. This archive goes back to 2004 and covers all the beam combiners on the Array. We discuss the current status of and future plans for the public database, and give directions on how to access it.

  19. Same City, New Scene: 2010 BEA Preview

    ERIC Educational Resources Information Center

    Katterjohn, Anna

    2010-01-01

    On May 25-26 in New York, BookExpo America (BEA) will present new events for participants. To lighten participants' totes, BEA is partnering with Above the Treeline to create a free online catalog (Books@BEA) of the new and forthcoming titles on the show floor. There will also be a searchable database of all the authors participating in show…

  20. Computers Track the Elusive Metaphor

    ERIC Educational Resources Information Center

    Guernsey, Lisa

    2009-01-01

    Computers may not be able to master poetics like Aristotle, but they have become smart enough to know a metaphor when they see one. An online database called The Mind Is a Metaphor, created by Brad Pasanek, an assistant professor of English at the University of Virginia, is a searchable bank of phrases, verses, and lines from literature that…

  1. Development of an In Silico Metabolic Simulator and Searchable Metabolism Database for Chemical Risk Assessments

    EPA Science Inventory

    The US EPA is faced with long lists of chemicals that need to be assessed for hazard, and a gap in evaluating chemical risk is accounting for metabolic activation resulting in increased toxicity. The goals of this project are to develop a capability to predict metabolic maps of x...

  2. Using Mobile App Development Tools to Build a GIS Application

    NASA Astrophysics Data System (ADS)

    Mital, A.; Catchen, M.; Mital, K.

    2014-12-01

    Our group designed and built working web, Android, and iOS applications using different mapping libraries as bases on which to overlay fire data from NASA. The group originally planned to make app versions for Google Maps, Leaflet, and OpenLayers. However, because the Leaflet library did not properly load on Android, the group focused efforts on the other two mapping libraries. For Google Maps, the group first designed a UI for the web app and made a working version of the app. After updating the source of fire data to one which also provided historical fire data, the design had to be modified to include the extra data. After completing a working version of the web app, the group used WebView in Android, a built-in component that allowed porting the web app to Android without rewriting the code for Android. Upon completing this, the group found Apple iOS devices had a similar capability, and so decided to add an iOS app to the project using a capability similar to WebView. Alongside this effort, the group began implementing an OpenLayers fire map using a simpler UI. This web app was completed fairly quickly relative to Google Maps; however, it did not include functionality such as satellite imagery or searchable locations. The group finished the project with a working Android version of the Google Maps-based app supporting API levels 14-19 and an OpenLayers-based app supporting API levels 8-19, as well as a Google Maps-based iOS app supporting both old and new screen formats. This project was implemented by high school and college students under an SGT Inc. STEM internship program.

  3. 75 FR 29155 - Publicly Available Consumer Product Safety Information Database

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-24

    ...The Consumer Product Safety Commission (``Commission,'' ``CPSC,'' or ``we'') is issuing a notice of proposed rulemaking that would establish a publicly available consumer product safety information database (``database''). Section 212 of the Consumer Product Safety Improvement Act of 2008 (``CPSIA'') amended the Consumer Product Safety Act (``CPSA'') to require the Commission to establish and maintain a publicly available, searchable database on the safety of consumer products, and other products or substances regulated by the Commission. The proposed rule would interpret various statutory requirements pertaining to the information to be included in the database and also would establish provisions regarding submitting reports of harm; providing notice of reports of harm to manufacturers; publishing reports of harm and manufacturer comments in the database; and dealing with confidential and materially inaccurate information.

  4. Analysis of governmental Web sites on food safety issues: a global perspective.

    PubMed

    Namkung, Young; Almanza, Barbara A

    2006-10-01

    Despite a growing concern over food safety issues, as well as a growing dependence on the Internet as a source of information, little research has been done to examine the presence and relevance of food safety-related information on Web sites. The study reported here conducted Web site analysis in order to examine the current operational status of governmental Web sites on food safety issues. The study also evaluated Web site usability, especially information dimensionalities such as utility, currency, and relevance of content, from the perspective of the English-speaking consumer. Results showed that out of 192 World Health Organization members, 111 countries operated governmental Web sites that provide information about food safety issues. Among 171 searchable Web sites from the 111 countries, 123 Web sites (71.9 percent) were accessible, and 81 of those 123 (65.9 percent) were available in English. The majority of Web sites offered search engine tools and related links for more information, but their availability and utility was limited. In terms of content, 69.9 percent of Web sites offered information on foodborne-disease outbreaks, compared with 31.5 percent that had travel- and health-related information.

  5. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage.

    PubMed

    Wang, Shangping; Ye, Jian; Zhang, Yaling

    2018-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is very suitable for cloud data storage because of its fine-grained access control. A keyword-based searchable encryption scheme enables users to quickly find interesting data stored on the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which is a combination of an attribute-based encryption scheme and a keyword searchable encryption scheme. The new scheme supports user attribute updates; in particular, when a user's attribute needs to be updated, only the components of that user's secret key related to the attribute need to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven to provide semantic security against chosen ciphertext-policy and chosen-plaintext attacks in the general bilinear group model, and it is also proven to provide semantic security against chosen-keyword attacks under the bilinear Diffie-Hellman (BDH) assumption.
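    The keyword-search idea underlying such schemes can be illustrated with a much simpler construction: a server-side index of keyed keyword tags that can be matched against a trapdoor without revealing the keyword. The sketch below is not the CP-ABE construction of the paper (it has no attribute-based access control or attribute update); it only shows the searchable-index concept with a shared secret key.

```python
# Conceptual sketch of keyword-searchable storage: the server holds only HMAC
# tags of keywords and can match a trapdoor without seeing the keyword itself.
# This is NOT the paper's CP-ABE scheme; it illustrates the search idea only.
import hashlib
import hmac

def keyword_tag(key: bytes, keyword: str) -> bytes:
    """Deterministic tag for a keyword; the server stores and compares tags."""
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).digest()

def build_index(key: bytes, documents: dict[str, list[str]]) -> dict[str, list[bytes]]:
    """Map document id -> list of keyword tags (this index is outsourced)."""
    return {doc_id: [keyword_tag(key, w) for w in words]
            for doc_id, words in documents.items()}

def search(index: dict[str, list[bytes]], trapdoor: bytes) -> list[str]:
    """Server-side search: return ids of documents whose index holds the tag."""
    return [doc_id for doc_id, tags in index.items()
            if any(hmac.compare_digest(trapdoor, t) for t in tags)]

if __name__ == "__main__":
    user_key = b"demo-secret-key"                 # shared by data owner and user
    docs = {"emr_001": ["diabetes", "insulin"], "emr_002": ["fracture", "x-ray"]}
    index = build_index(user_key, docs)           # stored on the cloud server
    trapdoor = keyword_tag(user_key, "insulin")   # generated by an authorized user
    print(search(index, trapdoor))                # -> ['emr_001']
```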

  6. A keyword searchable attribute-based encryption scheme with attribute update for cloud storage

    PubMed Central

    Wang, Shangping; Zhang, Yaling

    2018-01-01

    Ciphertext-policy attribute-based encryption (CP-ABE) is a new type of data encryption primitive that is very suitable for cloud data storage because of its fine-grained access control. A keyword-based searchable encryption scheme enables users to quickly find interesting data stored on the cloud server without revealing any information about the searched keywords. In this work, we provide a keyword searchable attribute-based encryption scheme with attribute update for cloud storage, which is a combination of an attribute-based encryption scheme and a keyword searchable encryption scheme. The new scheme supports user attribute updates; in particular, when a user's attribute needs to be updated, only the components of that user's secret key related to the attribute need to be updated, while other users' secret keys and the ciphertexts related to this attribute need not be updated, with the help of the cloud server. In addition, we outsource the operations with high computation cost to the cloud server to reduce the user's computational burden. Moreover, our scheme is proven to provide semantic security against chosen ciphertext-policy and chosen-plaintext attacks in the general bilinear group model, and it is also proven to provide semantic security against chosen-keyword attacks under the bilinear Diffie-Hellman (BDH) assumption. PMID:29795577

  7. The integrated web service and genome database for agricultural plants with biotechnology information.

    PubMed

    Kim, Changkug; Park, Dongsuk; Seol, Youngjoo; Hahn, Jangho

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web-based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage.

  8. Mining a Web Citation Database for Author Co-Citation Analysis.

    ERIC Educational Resources Information Center

    He, Yulan; Hui, Siu Cheung

    2002-01-01

    Proposes a mining process to automate author co-citation analysis based on the Web Citation Database, a data warehouse for storing citation indices of Web publications. Describes the use of agglomerative hierarchical clustering for author clustering and multidimensional scaling for displaying author cluster maps, and explains PubSearch, a…

  9. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software satisfy safety goals. This report discusses the difficulties encountered in doing so and discusses a solution based on integrated modelling of software, use of automatic information extraction tools, web technology and databases. To appear as an article in the Journal of Database Management.

  10. Exploring Chemical Space for Drug Discovery Using the Chemical Universe Database

    PubMed Central

    2012-01-01

    Herein we review our recent efforts in searching for bioactive ligands by enumeration and virtual screening of the unknown chemical space of small molecules. Enumeration from first principles shows that almost all small molecules (>99.9%) have never been synthesized and are still available to be prepared and tested. We discuss open access sources of molecules, the classification and representation of chemical space using molecular quantum numbers (MQN), its exhaustive enumeration in the form of the chemical universe generated databases (GDB), and examples of using these databases for prospective drug discovery. MQN-searchable GDB, PubChem, and DrugBank are freely accessible at www.gdb.unibe.ch. PMID:23019491
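    A similarity search in MQN space can be sketched as a nearest-neighbour lookup. Real MQN vectors have 42 integer components; the three-component vectors below are made-up stand-ins, and city-block distance is used here as a simple choice of metric for this kind of descriptor space.

```python
# Sketch of a nearest-neighbour lookup in a (toy) MQN-like descriptor space.
# The vectors below are fabricated three-component stand-ins for 42-component
# MQN vectors; city-block distance is used as the similarity metric.

def city_block(a: list[int], b: list[int]) -> int:
    return sum(abs(x - y) for x, y in zip(a, b))

def nearest(query: list[int], library: dict[str, list[int]], k: int = 2) -> list[str]:
    """Return the k library entries closest to the query vector."""
    return sorted(library, key=lambda name: city_block(query, library[name]))[:k]

if __name__ == "__main__":
    library = {"mol_A": [10, 4, 1], "mol_B": [12, 5, 0], "mol_C": [3, 9, 7]}
    print(nearest([11, 4, 1], library))
```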

  11. The U.S. Dairy Forage Research Center (USDFRC) Condensed Tannin NMR Database.

    PubMed

    Zeller, Wayne E; Schatz, Paul F

    2017-06-28

    This Perspective describes a solution-state NMR database for flavan-3-ol monomers and condensed tannin dimers through tetramers obtained from the literature up to 2015, containing data searchable by structure, molecular formula, degree of polymerization, and 1H and 13C chemical shifts of the condensed tannins. Citations for all literature references are provided and should serve as a valuable resource for scientists working in the field of condensed tannin research. The database will be updated periodically as additional information becomes available, typically on a yearly basis, and is available for use, free of charge, from the U.S. Dairy Forage Research Center (USDFRC) website.
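    A chemical-shift lookup of the kind the database supports can be sketched as a filter over stored shift lists. The records, shift values and tolerance below are fabricated placeholders, not entries from the USDFRC database.

```python
# Sketch of searching records by 13C chemical shift: return entries with a
# recorded shift within a tolerance of the query. All values are fabricated.

RECORDS = [
    {"name": "flavan-3-ol monomer (example)", "c13_shifts": [28.5, 66.8, 79.4, 95.5]},
    {"name": "procyanidin dimer (example)",   "c13_shifts": [36.9, 72.1, 76.8, 96.3]},
]

def search_by_c13(query_ppm: float, tolerance: float = 0.5) -> list[str]:
    """Return names of records with a 13C shift within +/- tolerance of query_ppm."""
    return [r["name"] for r in RECORDS
            if any(abs(s - query_ppm) <= tolerance for s in r["c13_shifts"])]

if __name__ == "__main__":
    print(search_by_c13(96.0))
```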

  12. SABER: The Searchable Annotated Bibliography of Education Research in Astronomy

    NASA Astrophysics Data System (ADS)

    Bruning, David; Bailey, Janelle M.; Brissenden, Gina

    Starting a new research project can be a challenge, but especially so in education research because the literature is scattered throughout many journals. Relevant astronomy education research may be in psychology journals, science education journals, physics education journals, or even in science journals. Tracking the vast realm of literature is difficult, especially because libraries frequently do not subscribe to many of the relevant journals and abstracting services. The Searchable Annotated Bibliography of Education Research (SABER) is an online resource that was started to service the needs of the astronomy education community, specifically to reduce this "scatter" by compiling an annotated bibliography of education research articles in one electronic location. Although SABER started in 2001, the database has a new URL (http://astronomy.uwp.edu/saber/) and has recently undergone a major update.

  13. DSSTOX (DISTRIBUTED STRUCTURE-SEARCHABLE ...

    EPA Pesticide Factsheets

    Distributed Structure-Searchable Toxicity Database Network. Major trends affecting public toxicity information resources have the potential to significantly alter the future of predictive toxicology. Chemical toxicity screening is undergoing shifts towards greater use of more fundamental information on gene/protein expression patterns and bioactivity and bioassay profiles, the latter generated with high-throughput screening technologies. Curated, systematically organized, and web-accessible toxicity and biological activity data in association with chemical structures, enabling the integration of diverse data information domains, will fuel the next frontier of advancement for QSAR (quantitative structure-activity relationship) and data mining technologies. The DSSTox project is supporting progress towards these goals on many fronts, promoting the use of formalized and structure-annotated toxicity data models, helping to interface these efforts with QSAR modelers, linking data from diverse sources, and creating a large, quality-reviewed, central chemical structure information resource linked to various toxicity data sources.

  14. Enabling Astronomy Research in High Schools with the START Collaboratory

    NASA Astrophysics Data System (ADS)

    Greenberg, G. J.; Pennypacker, C. R.

    2005-12-01

    The START Collaboratory is a three-year, NSF funded project to create a Web-based national astronomy research collaboratory for high school students that will bring authentic scientific research to classrooms across the country. The project brings together the resources and experience of Hands-On Universe at the University of California at Berkeley, the Sloan Digital Sky Survey / National Virtual Observatory at Johns Hopkins University and the Northwestern University Collaboratory Project. The START Collaboratory seamlessly integrates access to gigabytes of searchable data and images from the Sloan Digital Sky Survey and the NVO into Web-based research notebooks and research reports that can be shared and discussed online. Requests for observations can be made through the START Telescope Request Broker. These observations can be viewed with the START Web Visualization Tool for visualization and measurement of FITS files. The project has developed a set of research scenarios to introduce students to the resources and tools available through the START Collaboratory, and to provide a model for network-based collaboration that engages students, teachers and professional scientists. Great attention has been paid to ensuring that the research scenarios result in accurate and authentic research products that are of real interest to working astronomers. In this panel presentation, we will describe the educational benefits and opportunities being seen in pilot testing with teachers and students, and in preparations for a teacher professional development project with the Adler Planetarium.

  15. Digitizing Villanova University's Eclipsing Binary Card Catalogue

    NASA Astrophysics Data System (ADS)

    Guzman, Giannina; Dalton, Briana; Conroy, Kyle; Prsa, Andrej

    2018-01-01

    Villanova University's Department of Astrophysics and Planetary Science has years of hand-written archival data on Eclipsing Binaries at its disposal. This card catalog began at Princeton in the 1930s with notable contributions from scientists such as Henry Norris Russell. During World War II, the archive was moved to the University of Pennsylvania, which was one of the world centers for Eclipsing Binary research; consequently, the contributions to the catalog during this time were immense. It was then moved to the University of Florida at Gainesville before being accepted by Villanova in the 1990s. The catalog has been kept in storage since then. The objective of this project is to digitize this archive and create a fully functional online catalog that contains the information available on the cards, along with scans of the actual cards. Our group has built a database using a Python-powered infrastructure to contain the collected data. The team also built a prototype web-based searchable interface as a front-end to the catalog. Following the data-entry process, information such as the Right Ascension and Declination will be run against SIMBAD, and any differences between values will be noted as part of the catalog. Information published online from the card catalog, and even discrepancies in the information for a star, could be a catalyst for new studies on these Eclipsing Binaries. Once completed, the database-driven interface will be made available to astronomers worldwide. The group will also acquire, from the database, a list of referenced articles that have yet to be found online in order to further pursue their digitization. This list will comprise references in the cards that were neither found on ADS nor online during the data-entry process. Pursuing the integration of these references into online queries such as ADS will be an ongoing process that will contribute to and further facilitate studies on Eclipsing Binaries.
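    The SIMBAD cross-check described above can be sketched with astroquery, which provides a query interface to SIMBAD. The object name is an arbitrary example, the returned column layout depends on the astroquery version, and this is an illustration rather than the project's actual pipeline.

```python
# Sketch of the SIMBAD cross-check: query a catalogued star by name and print
# the returned row so its coordinates can be compared with the card's values.
# Requires the astroquery package and network access.
from astroquery.simbad import Simbad

def lookup(name: str):
    """Return the astropy Table SIMBAD gives back for the named object."""
    table = Simbad.query_object(name)
    if table is None:
        raise LookupError(f"SIMBAD returned no match for {name!r}")
    return table

if __name__ == "__main__":
    result = lookup("Algol")   # a well-known eclipsing binary, used as an example
    print(result)              # includes coordinates to compare against the card
```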

  16. The integrated web service and genome database for agricultural plants with biotechnology information

    PubMed Central

    Kim, ChangKug; Park, DongSuk; Seol, YoungJoo; Hahn, JangHo

    2011-01-01

    The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web-based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage. PMID:21887015

  17. How can scientists bring research to use: the HENVINET experience.

    PubMed

    Bartonova, Alena

    2012-06-28

    Health concerns have driven the European environmental policies of the last 25 years, with issues becoming more complex. Addressing these concerns requires an approach that is both interdisciplinary and engages scientists with society. In response to this requirement, the FP6 coordination action "Health and Environment Network" HENVINET was set up to create a permanent inter-disciplinary network of professionals in the field of health and environment tasked to bridge the communication gap between science and society. In this paper we describe how HENVINET delivered on this task. The HENVINET project approached the issue of inter-disciplinary collaboration in four ways. (1) The Drivers-Pressures-State-Exposure-Effect-Action framework was used to structure information gathering, collaboration and communication between scientists in the field of health and the environment. (2) Interactive web-based tools were developed to enhance methods for knowledge evaluation, and use these methods to formulate policy advice. (3) Quantification methods were adapted to measure scientific agreement. And (4) Open architecture web technology was used to develop an information repository and a web portal to facilitate collaboration and communication among scientists. Twenty-five organizations from Europe and five from outside Europe participated in the Health and Environment Network HENVINET, which lasted for 3.5 years. The consortium included partners in environmental research, public health and veterinary medicine; included medical practitioners and representatives of local administrations; and had access to national policy making and EEA and WHO expertise. Dedicated web-based tools for visualisation of environmental health issues and knowledge evaluation allowed remote expert elicitation, and were used as a basis for developing policy advice in five health areas (asthma and allergies; cancer; neurodevelopmental disorders; endocrine disruption; and engineered nanoparticles in the environment). An open searchable database of decision support tools was established and populated. A web based social networking tool was developed to enhance collaboration and communication between scientists and society. HENVINET addressed key issues that arise in inter-disciplinary research on health and environment and in communicating research results to policy makers and society. HENVINET went beyond traditional scientific tools and methods to bridge the communication gap between science and policy makers. The project identified the need for a common framework and delivered it. It developed and implemented a variety of novel methods and tools and, using several representative examples, demonstrated the process of producing politically relevant scientific advice based on an open participation of experts. It highlighted the need for, and benefits of, a liaison between health and environment professionals and professionals in the social sciences and liberal arts. By adopting critical complexity thinking, HENVINET extended the traditional approach to environment and health research, and set the standard for current approaches to bridge the gap between science and society.

  18. MoonProt: a database for proteins that are known to moonlight

    PubMed Central

    Mani, Mathew; Chen, Chang; Amblee, Vaishak; Liu, Haipeng; Mathur, Tanu; Zwicke, Grant; Zabad, Shadi; Patel, Bansi; Thakkar, Jagravi; Jeffery, Constance J.

    2015-01-01

    Moonlighting proteins comprise a class of multifunctional proteins in which a single polypeptide chain performs multiple biochemical functions that are not due to gene fusions, multiple RNA splice variants or pleiotropic effects. The known moonlighting proteins perform a variety of diverse functions in many different cell types and species, and information about their structures and functions is scattered in many publications. We have constructed the manually curated, searchable, internet-based MoonProt Database (http://www.moonlightingproteins.org) with information about the over 200 proteins that have been experimentally verified to be moonlighting proteins. The availability of this organized information provides a more complete picture of what is currently known about moonlighting proteins. The database will also aid researchers in other fields, including determining the functions of genes identified in genome sequencing projects, interpreting data from proteomics projects and annotating protein sequence and structural databases. In addition, information about the structures and functions of moonlighting proteins can be helpful in understanding how novel protein functional sites evolved on an ancient protein scaffold, which can also help in the design of proteins with novel functions. PMID:25324305

  19. THE FLAG: A Web Resource of Innovative Assessment Tools for Faculty in College Science, Mathematics, Engineering, and Technology

    NASA Astrophysics Data System (ADS)

    Zeilik, M.; Mathieu, R. D.; National Institute for Science Education; College Level-One Team

    2000-12-01

    Even the most dedicated college faculty often discover that their students fail to learn what was taught in their courses and that much of what students do learn is quickly forgotten after the final exam. To help college faculty improve student learning in college Science, Mathematics, Engineering and Technology (SMET), the College Level-One Team of the National Institute for Science Education has created the "FLAG," a Field-tested Learning Assessment Guide for SMET faculty. Developed with funding from the National Science Foundation, the FLAG presents in guidebook format a diverse and robust collection of field-tested classroom assessment techniques (CATs), with supporting information on how to apply them in the classroom. Faculty can download the tools and techniques from the website, which also provides a goals clarifier, an assessment primer, a searchable database, and links to additional resources. The CATs and tools have been reviewed by an expert editorial board and the NISE team. These assessment strategies can help faculty improve the learning environments in their SMET courses, especially the crucial introductory courses that most strongly shape students' college learning experiences. In addition, the FLAG includes the web-based Student Assessment of Learning Gains (SALG). The SALG offers a convenient way to evaluate the impact of a course on students. It is based on findings that students' estimates of what they gained are more reliable and informative than their observations of what they liked about the course or teacher. It offers accurate feedback on how well the different aspects of teaching helped the students to learn. Students complete the SALG online after a generic template has been modified to fit the learning objectives and activities of a particular course. The results are automatically presented to the teacher as summary statistics. The FLAG can be found at the NISE "Innovations in SMET Education" website at www.wcer.wisc.edu/nise/cl1

  20. NAWeb 2000: Web-Based Learning - On Track! International Conference on Web-Based Learning. (6th, New Brunswick, Canada, October 14-17, 2000).

    ERIC Educational Resources Information Center

    Hall, Richard, Ed.

    This proceedings of the Sixth International Conference on Web-Based Learning, NAWeb 2000, includes the following papers: "Is a Paradigm Shift Required To Effectively Teach Web-Based Instruction?"; "Issues in Courseware Reuse for a Web-Based Information System"; "The Digital Curriculum Database: Meeting the Needs of Industry and the Challenge of…

  1. An Efficient Searchable Encryption Against Keyword Guessing Attacks for Sharable Electronic Medical Records in Cloud-based System.

    PubMed

    Wu, Yilun; Lu, Xicheng; Su, Jinshu; Chen, Peixin

    2016-12-01

    Preserving the privacy of electronic medical records (EMRs) is extremely important, especially when medical systems adopt cloud services to store patients' electronic medical records. Considering both the privacy and the utilization of EMRs, some medical systems apply searchable encryption to encrypt EMRs and enable authorized users to search over these encrypted records. Since individuals would like to share their EMRs with multiple persons, how to design an efficient searchable encryption scheme for sharable EMRs is still a very challenging problem. In this paper, we propose a cost-efficient secure channel free searchable encryption (SCF-PEKS) scheme for sharable EMRs. Compared with existing SCF-PEKS solutions, our scheme reduces the storage overhead and achieves better computational performance. Moreover, our scheme can guard against keyword guessing attacks, which are neglected by most of the existing schemes. Finally, we implement both our scheme and a recent medical-based scheme to evaluate the performance. The evaluation results show that our scheme achieves much better performance than the recent one for sharable EMRs.
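
    The construction above is a pairing-based, public-key (SCF-PEKS) scheme, so its cryptography is beyond the scope of an abstract. As a rough, assumption-laden illustration of the general idea of searching encrypted records via trapdoors, the symmetric-key toy below uses HMAC tags: the record IDs, keywords and key handling are invented, and a deterministic tag like this leaks equality patterns and would not by itself resist keyword guessing, which is precisely the gap dedicated schemes such as the one described here aim to close.

```python
# Toy illustration of keyword search over encrypted records via trapdoors.
# NOT the pairing-based SCF-PEKS scheme from the abstract; record IDs,
# keywords and key handling are illustrative assumptions only.
import hmac
import hashlib
import os

KEY = os.urandom(32)  # secret shared by the data owner and authorized users


def keyword_tag(keyword):
    """Opaque tag stored next to an encrypted EMR for each indexed keyword."""
    return hmac.new(KEY, keyword.lower().encode(), hashlib.sha256).digest()


def build_index(records):
    """Map record id -> list of keyword tags (the ciphertexts live elsewhere)."""
    return {rid: [keyword_tag(w) for w in words] for rid, words in records.items()}


def trapdoor(keyword):
    """Search token computed by an authorized user holding KEY."""
    return keyword_tag(keyword)


def server_search(index, td):
    """The server compares opaque tags only; it never sees the keyword."""
    return [rid for rid, tags in index.items()
            if any(hmac.compare_digest(td, t) for t in tags)]


index = build_index({"emr-001": ["diabetes", "insulin"], "emr-002": ["asthma"]})
print(server_search(index, trapdoor("asthma")))  # -> ['emr-002']
```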

  2. Implementation and application of an interactive user-friendly validation software for RADIANCE

    NASA Astrophysics Data System (ADS)

    Sundaram, Anand; Boonn, William W.; Kim, Woojin; Cook, Tessa S.

    2012-02-01

    RADIANCE extracts CT dose parameters from dose sheets using optical character recognition and stores the data in a relational database. To facilitate validation of RADIANCE's performance, a simple user interface was initially implemented and about 300 records were evaluated. Here, we extend this interface to achieve a wider variety of functions and perform a larger-scale validation. The validator uses some data from the RADIANCE database to prepopulate quality-testing fields, such as correspondence between calculated and reported total dose-length product. The interface also displays relevant parameters from the DICOM headers. A total of 5,098 dose sheets were used to test the performance accuracy of RADIANCE in dose data extraction. Several search criteria were implemented. All records were searchable by accession number, study date, or dose parameters beyond chosen thresholds. Validated records were searchable according to additional criteria from validation inputs. An error rate of 0.303% was demonstrated in the validation. Dose monitoring is increasingly important and RADIANCE provides an open-source solution with a high level of accuracy. The RADIANCE validator has been updated to enable users to test the integrity of their installation and verify that their dose monitoring is accurate and effective.

  3. McMaster Optimal Aging Portal: an evidence-based database for geriatrics-focused health professionals.

    PubMed

    Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J

    2017-07-11

    The objective of this work was to provide easy access to reliable health information based on good quality research that will help health care professionals to learn what works best for seniors to stay as healthy as possible, manage health conditions and build supportive health systems. This will help meet the demands of our aging population that clinicians provide high quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is being continually assessed for value, usability and use. A number of improvements are planned, including French language translation of content, increased linkages between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.

  4. THGS: a web-based database of Transmembrane Helices in Genome Sequences

    PubMed Central

    Fernando, S. A.; Selvarani, P.; Das, Soma; Kumar, Ch. Kiran; Mondal, Sukanta; Ramakumar, S.; Sekar, K.

    2004-01-01

    Transmembrane Helices in Genome Sequences (THGS) is an interactive web-based database, developed to search for transmembrane helices in gene sequences of interest available in the Genome Database (GDB). The database also provides a facility to search for sequence motifs in transmembrane and globular proteins. In addition, the motif can be searched for in other sequence databases (Swiss-Prot and PIR) or in the macromolecular structure database, the Protein Data Bank (PDB). Further, the 3D structure of the corresponding queried motif, if it is available in the solved protein structures deposited in the Protein Data Bank, can also be visualized using the widely used graphics package RASMOL. All the sequence databases used in the present work are updated frequently and hence the results produced are up to date. The database THGS is freely available via the world wide web and can be accessed at http://pranag.physics.iisc.ernet.in/thgs/ or http://144.16.71.10/thgs/. PMID:14681375
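
    As an illustration of the kind of motif lookup such a service performs, the sketch below scans a small in-memory set of protein sequences with a user-supplied pattern. The sequences, the regular-expression motif syntax and the output format are assumptions for demonstration, not the THGS implementation.

```python
# Minimal sketch of a sequence-motif search of the kind THGS exposes.
# Sequences, motif syntax and result format are illustrative assumptions,
# not the actual THGS implementation.
import re

sequences = {
    "PROT_A": "MKTLLLTLVVVTIVCLDLGYTGIFTRAQWWLLLALLAGAAA",
    "PROT_B": "MSTNPKPQRKTKRNTNRRPQDVKFPGG",
}

def search_motif(motif_regex, seqs):
    """Return (sequence id, 1-based start, matched fragment) for every hit."""
    pattern = re.compile(motif_regex)
    hits = []
    for seq_id, seq in seqs.items():
        for m in pattern.finditer(seq):
            hits.append((seq_id, m.start() + 1, m.group()))
    return hits

# Example: a crude hydrophobic stretch, as might occur in a transmembrane helix.
print(search_motif(r"[AILVFWM]{7,}", sequences))
```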

  5. Digital hand atlas for web-based bone age assessment: system design and implementation

    NASA Astrophysics Data System (ADS)

    Cao, Fei; Huang, H. K.; Pietka, Ewa; Gilsanz, Vicente

    2000-04-01

    A frequently used assessment method of skeletal age is atlas matching by a radiological examination of a hand image against a small set of Greulich-Pyle patterns of normal standards. The method, however, can lead to significant deviation in age assessment, owing to the variety of observers with different levels of training. The Greulich-Pyle atlas, based on middle and upper class white populations in the 1950s, is also not fully applicable to children of today, especially regarding standards of development in other racial groups. In this paper, we present our system design and initial implementation of a digital hand atlas and computer-aided diagnostic (CAD) system for Web-based bone age assessment. The digital atlas will remove the disadvantages of the currently out-of-date one and allow bone age assessment to be computerized and done conveniently via the Web. The system consists of a hand atlas database, a CAD module and a Java-based Web user interface. The atlas database is based on a large set of clinically normal hand images of diverse ethnic groups. The Java-based Web user interface allows users to interact with the hand image database from browsers. Users can use a Web browser to push a clinical hand image to the CAD server for a bone age assessment. Quantitative features of the examined image, which reflect skeletal maturity, are then extracted and compared with patterns from the atlas database to assess the bone age.

  6. A Web-based open-source database for the distribution of hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Ferwerda, J. G.; Jones, S. D.; Du, Pei-Jun

    2006-10-01

    With the coming of age of field spectroscopy as a non-destructive means to collect information on the physiology of vegetation, there is a need for storage of signatures and, more importantly, their metadata. Without the proper organisation of metadata, the signatures themselves become of limited use. In order to facilitate re-distribution of data, a database for the storage and distribution of hyperspectral signatures and their metadata was designed. The database was built using open-source software, and can be used by the hyperspectral community to share their data. Data is uploaded through a simple web-based interface. The database recognizes major file formats from ASD, GER and International Spectronics. The database source code is available for download through the hyperspectral.info web domain, and we invite suggestions for additions and modifications to the database, to be submitted through the online forums on the same website.

  7. TIPdb: a database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan.

    PubMed

    Lin, Ying-Chi; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng; Tung, Chun-Wei

    2013-01-01

    The unique geographic features of Taiwan are attributed to the rich indigenous and endemic plant species in Taiwan. These plants serve as a resourceful bank for biologically active phytochemicals. Given that these plant-derived chemicals are prototypes of potential drugs for diseases, databases connecting the chemical structures and pharmacological activities may facilitate drug development. To enhance the utility of the data, it is desirable to develop a database of chemical compounds and corresponding activities from indigenous plants in Taiwan. A database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan was constructed. The database, TIPdb, is composed of a standardized format of published anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan. A browse function was implemented for users to browse the database in a taxonomy-based manner. Search functions can be utilized to filter records of interest by botanical name, part, chemical class, or compound name. The structured and searchable database TIPdb was constructed to serve as a comprehensive and standardized resource for anticancer, antiplatelet, and antituberculosis compounds search. The manually curated chemical structures and activities provide a great opportunity to develop quantitative structure-activity relationship models for the high-throughput screening of potential anticancer, antiplatelet, and antituberculosis drugs.

  8. TIPdb: A Database of Anticancer, Antiplatelet, and Antituberculosis Phytochemicals from Indigenous Plants in Taiwan

    PubMed Central

    Lin, Ying-Chi; Wang, Chia-Chi; Chen, Ih-Sheng; Jheng, Jhao-Liang; Li, Jih-Heng; Tung, Chun-Wei

    2013-01-01

    The unique geographic features of Taiwan are attributed to the rich indigenous and endemic plant species in Taiwan. These plants serve as a resourceful bank for biologically active phytochemicals. Given that these plant-derived chemicals are prototypes of potential drugs for diseases, databases connecting the chemical structures and pharmacological activities may facilitate drug development. To enhance the utility of the data, it is desirable to develop a database of chemical compounds and corresponding activities from indigenous plants in Taiwan. A database of anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan was constructed. The database, TIPdb, is composed of a standardized format of published anticancer, antiplatelet, and antituberculosis phytochemicals from indigenous plants in Taiwan. A browse function was implemented for users to browse the database in a taxonomy-based manner. Search functions can be utilized to filter records of interest by botanical name, part, chemical class, or compound name. The structured and searchable database TIPdb was constructed to serve as a comprehensive and standardized resource for anticancer, antiplatelet, and antituberculosis compounds search. The manually curated chemical structures and activities provide a great opportunity to develop quantitative structure-activity relationship models for the high-throughput screening of potential anticancer, antiplatelet, and antituberculosis drugs. PMID:23766708

  9. Integrated Functional and Executional Modelling of Software Using Web-Based Databases

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Marietta, Roberta

    1998-01-01

    NASA's software subsystems undergo extensive modification and updates over their operational lifetimes. It is imperative that modified software should satisfy safety goals. This report discusses the difficulties encountered in doing so and presents a solution based on integrated modelling of software, use of automatic information extraction tools, web technology and databases.

  10. Development of a Relational Database for Learning Management Systems

    ERIC Educational Resources Information Center

    Deperlioglu, Omer; Sarpkaya, Yilmaz; Ergun, Ertugrul

    2011-01-01

    In today's world, Web-Based Distance Education Systems are of great importance. Web-based Distance Education Systems are usually known as Learning Management Systems (LMS). In this article, a database design, which was developed to create an educational institution as a Learning Management System, is described. In this sense, developed Learning…

  11. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    DOT National Transportation Integrated Search

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  12. Construction and validation of a web-based epidemiological database for inflammatory bowel diseases in Europe An EpiCom study.

    PubMed

    Burisch, Johan; Cukovic-Cavka, Silvija; Kaimakliotis, Ioannis; Shonová, Olga; Andersen, Vibeke; Dahlerup, Jens F; Elkjaer, Margarita; Langholz, Ebbe; Pedersen, Natalia; Salupere, Riina; Kolho, Kaija-Leena; Manninen, Pia; Lakatos, Peter Laszlo; Shuhaibar, Mary; Odes, Selwyn; Martinato, Matteo; Mihu, Ion; Magro, Fernando; Belousova, Elena; Fernandez, Alberto; Almer, Sven; Halfvarson, Jonas; Hart, Ailsa; Munkholm, Pia

    2011-08-01

    The EpiCom study investigates a possible East-West gradient in Europe in the incidence of IBD and its association with environmental factors. A secured web-based database is used to facilitate and centralize data registration. The aim was to construct and validate a web-based inception cohort database available in both English and Russian. The EpiCom database has been constructed in collaboration with all 34 participating centers. The database was translated into Russian using forward translation; patient questionnaires were translated by simplified forward-backward translation. Data entry requires fulfillment of international diagnostic criteria and covers disease activity, medical therapy, quality of life, work productivity and activity impairment, outcome of pregnancy, surgery, cancer and death. Data is secured by the WinLog3 System, developed in cooperation with the Danish Data Protection Agency. Validation of the database has been performed in two consecutive rounds, each followed by corrections in accordance with comments. The EpiCom database fulfills the requirements of the participating countries' local data security agencies by being stored at a single location. The database was found overall to be "good" or "very good" by 81% of the participants after the second validation round, and the general applicability of the database was evaluated as "good" or "very good" by 77%. In the inclusion period January 1st to December 31st 2010, 1336 IBD patients were included in the database. A user-friendly, tailor-made and secure web-based inception cohort database has been successfully constructed, facilitating remote data input. The incidence of IBD in 23 European countries can be found at www.epicom-ecco.eu. Copyright © 2011 European Crohn's and Colitis Organisation. All rights reserved.

  13. Regulators of Androgen Action Resource: a one-stop shop for the comprehensive study of androgen receptor action.

    PubMed

    DePriest, Adam D; Fiandalo, Michael V; Schlanger, Simon; Heemers, Frederike; Mohler, James L; Liu, Song; Heemers, Hannelore V

    2016-01-01

    Androgen receptor (AR) is a ligand-activated transcription factor that is the main target for treatment of non-organ-confined prostate cancer (CaP). Failure of life-prolonging AR-targeting androgen deprivation therapy is due to flexibility in steroidogenic pathways that control intracrine androgen levels and variability in the AR transcriptional output. Androgen biosynthesis enzymes, androgen transporters and AR-associated coregulators are attractive novel CaP treatment targets. These proteins, however, are characterized by multiple transcript variants and isoforms, are subject to genomic alterations, and are differentially expressed among CaPs. Determining their therapeutic potential requires evaluation of extensive, diverse datasets that are dispersed over multiple databases, websites and literature reports. Mining and integrating these datasets are cumbersome, time-consuming tasks and provide only snapshots of relevant information. To overcome this impediment to effective, efficient study of AR and potential drug targets, we developed the Regulators of Androgen Action Resource (RAAR), a non-redundant, curated and user-friendly searchable web interface. RAAR centralizes information on gene function, clinical relevance, and resources for 55 genes that encode proteins involved in biosynthesis, metabolism and transport of androgens and for 274 AR-associated coregulator genes. Data in RAAR are organized in two levels: (i) Information pertaining to production of androgens is contained in a 'pre-receptor level' database, and coregulator gene information is provided in a 'post-receptor level' database, and (ii) an 'other resources' database contains links to additional databases that are complementary to and useful to pursue further the information provided in RAAR. For each of its 329 entries, RAAR provides access to more than 20 well-curated publicly available databases, and thus, access to thousands of data points. Hyperlinks provide direct access to gene-specific entries in the respective database(s). RAAR is a novel, freely available resource that provides fast, reliable and easy access to integrated information that is needed to develop alternative CaP therapies. Database URL: http://www.lerner.ccf.org/cancerbio/heemers/RAAR/search/. © The Author(s) 2016. Published by Oxford University Press.

  14. ProBiS-database: precalculated binding site similarities and local pairwise alignments of PDB structures.

    PubMed

    Konc, Janez; Cesnik, Tomo; Konc, Joanna Trykowska; Penca, Matej; Janežič, Dušanka

    2012-02-27

    ProBiS-Database is a searchable repository of precalculated local structural alignments in proteins detected by the ProBiS algorithm in the Protein Data Bank. Identification of functionally important binding regions of the protein is facilitated by structural similarity scores mapped to the query protein structure. PDB structures that have been aligned with a query protein may be rapidly retrieved from the ProBiS-Database, which is thus able to generate hypotheses concerning the roles of uncharacterized proteins. Presented with an uncharacterized protein structure, ProBiS-Database can discern relationships between such a query protein and other, better known proteins in the PDB. Fast access and a user-friendly graphical interface promote easy exploration of this database of over 420 million local structural alignments. The ProBiS-Database is updated weekly and is freely available online at http://probis.cmm.ki.si/database.

  15. IPD—the Immuno Polymorphism Database

    PubMed Central

    Robinson, James; Halliwell, Jason A.; McWilliam, Hamish; Lopez, Rodrigo; Marsh, Steven G. E.

    2013-01-01

    The Immuno Polymorphism Database (IPD), http://www.ebi.ac.uk/ipd/, is a set of specialist databases related to the study of polymorphic genes in the immune system. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. IPD currently consists of four databases: IPD-KIR, which contains the allelic sequences of killer-cell immunoglobulin-like receptors; IPD-MHC, a database of sequences of the major histocompatibility complex of different species; IPD-HPA, alloantigens expressed only on platelets; and IPD-ESTDAB, which provides access to the European Searchable Tumour Cell-Line Database, a cell bank of immunologically characterized melanoma cell lines. The data is currently available online from the website and FTP directory. This article describes the latest updates and additional tools added to the IPD project. PMID:23180793

  16. SNPversity: a web-based tool for visualizing diversity

    PubMed Central

    Schott, David A; Vinnakota, Abhinav G; Portwood, John L; Andorf, Carson M

    2018-01-01

    Many stand-alone desktop software suites exist to visualize single nucleotide polymorphism (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualization tool that can be implemented on a Unix-like machine and served through a web browser that can be accessible worldwide. SNPversity consists of an HDF5 database back-end for SNPs, a data exchange layer powered by TASSEL libraries that represent data in JSON format, and an interface layer using PHP to visualize SNP information. SNPversity displays data in real time through a web browser in grids that are color-coded according to a given SNP's allelic status and mutational state. SNPversity is currently available at MaizeGDB, the maize community's database, and will soon be available at GrainGenes, the clade-oriented database for Triticeae and Avena species, including wheat, barley, rye, and oat. The code and documentation are uploaded to GitHub, and they are freely available to the public. We expect that the tool will be highly useful for other biological databases with a similar need to display SNP diversity through their web interfaces. Database URL: https://www.maizegdb.org/snpversity PMID:29688387
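
    The stack described (an HDF5 genotype store read on demand and returned as JSON for a browser grid) can be sketched as follows. The dataset names, 0/1/2 genotype encoding and file layout are illustrative assumptions, not the actual SNPversity/TASSEL schema.

```python
# Minimal sketch of an HDF5-backed SNP lookup returning JSON for a web grid.
# Dataset names, the 0/1/2 genotype encoding and file layout are illustrative
# assumptions, not the actual SNPversity/TASSEL schema.
import json
import h5py
import numpy as np

PATH = "snps.h5"  # hypothetical store

# Build a tiny demo store: rows = samples, columns = SNP sites.
with h5py.File(PATH, "w") as f:
    f.create_dataset("genotypes", data=np.array([[0, 1, 2], [2, 2, 0]], dtype="i1"))
    f.create_dataset("samples", data=np.array([b"B73", b"Mo17"]))
    f.create_dataset("sites", data=np.array([b"chr1_101", b"chr1_250", b"chr2_77"]))

def snp_slice_as_json(path, site_index):
    """Return one SNP column (all samples) as a JSON document."""
    with h5py.File(path, "r") as f:
        samples = [s.decode() for s in f["samples"][...]]
        site = f["sites"][site_index].decode()
        calls = f["genotypes"][:, site_index].tolist()
    return json.dumps({"site": site, "calls": dict(zip(samples, calls))})

print(snp_slice_as_json(PATH, 1))
# {"site": "chr1_250", "calls": {"B73": 1, "Mo17": 2}}
```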

  17. GIS Technologies For The New Planetary Science Archive (PSA)

    NASA Astrophysics Data System (ADS)

    Docasal, R.; Barbarisi, I.; Rios, C.; Macfarlane, A. J.; Gonzalez, J.; Arviset, C.; De Marchi, G.; Martinez, S.; Grotheer, E.; Lim, T.; Besse, S.; Heather, D.; Fraga, D.; Barthelemy, M.

    2015-12-01

    Geographical information systems (GIS) are becoming increasingly used in planetary science. GIS are computerised systems for the storage, retrieval, manipulation, analysis, and display of geographically referenced data. Some data stored in the Planetary Science Archive (PSA), for instance a set of Mars Express/Venus Express data, have spatial metadata associated with them. To help users handle and visualise spatial data in GIS applications, the new PSA should support interoperability with interfaces implementing the standards approved by the Open Geospatial Consortium (OGC). These standards are followed in order to develop open interfaces and encodings that allow data to be exchanged with GIS client applications, well-known examples of which are Google Earth and NASA World Wind, as well as open-source tools such as OpenLayers. The technology already exists within PostgreSQL databases to store searchable geometrical data in the form of the PostGIS extension. GeoServer is an existing open-source map server; an instance of it has been deployed for the new PSA and uses the OGC standards to allow, among other things, the sharing, processing and editing of spatial data through the Web Feature Service (WFS) standard, as well as serving georeferenced map images through the Web Map Service (WMS). The final goal of the new PSA, being developed by the European Space Astronomy Centre (ESAC) Science Data Centre (ESDC), is to create an archive which enables science exploitation of ESA's planetary mission datasets. This can be facilitated through the GIS framework, offering interfaces (both a web GUI and scriptable APIs) that can be used more easily and scientifically by the community, and that will also enable the community to build added-value services on top of the PSA.
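
    Because the archive is fronted by an OGC-compliant map server, a client can pull a georeferenced map image with a plain WMS GetMap request. In the sketch below, the endpoint URL and layer name are placeholders (assumptions); the query parameters are the standard WMS 1.3.0 set.

```python
# Sketch of a standard WMS 1.3.0 GetMap request against an OGC map server.
# The endpoint URL and layer name are placeholders, not the real PSA service.
import requests

WMS_URL = "https://example.org/geoserver/psa/wms"  # hypothetical endpoint

params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "psa:mars_express_footprints",  # hypothetical layer
    "styles": "",
    "crs": "EPSG:4326",
    "bbox": "-30,0,30,90",   # minLat,minLon,maxLat,maxLon for EPSG:4326 in WMS 1.3.0
    "width": "800",
    "height": "400",
    "format": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=30)
response.raise_for_status()
with open("footprints.png", "wb") as fh:
    fh.write(response.content)  # georeferenced map image rendered by the server
```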

  18. A Genome-Wide Survey of the Microsatellite Content of the Globe Artichoke Genome and the Development of a Web-Based Database

    PubMed Central

    Portis, Ezio; Portis, Flavio; Valente, Luisa; Moglia, Andrea; Barchi, Lorenzo; Lanteri, Sergio; Acquadro, Alberto

    2016-01-01

    The recently acquired genome sequence of globe artichoke (Cynara cardunculus var. scolymus) has been used to catalog the genome's content of simple sequence repeat (SSR) markers. More than 177,000 perfect SSRs were revealed, equivalent to an overall density across the genome of 244.5 SSRs/Mbp, but some 224,000 imperfect SSRs were also identified. About 21% of these SSRs were complex (two stretches of repeats separated by <100 nt). Some 73% of the SSRs were composed of dinucleotide motifs. The SSRs were categorized by the number of repeats present and their overall length, and were allocated to their linkage group. A total of 4,761 perfect and 6,583 imperfect SSRs were present in 3,781 genes (14.11% of the total), corresponding to an overall density across the gene space of 32.5 and 44.9 SSRs/Mbp for perfect and imperfect motifs, respectively. A putative function has been assigned, using the gene ontology approach, to the set of genes harboring at least one SSR. The same search parameters were applied to reveal the SSR content of 14 other plant species for which genome sequence is available. Certain species-specific SSR motifs were identified, along with a hexa-nucleotide motif shared only with the other two Compositae species (sunflower (Helianthus annuus) and horseweed (Conyza canadensis)) included in the study. Finally, a database, called "Cynara cardunculus MicroSatellite DataBase" (CyMSatDB), was developed to provide a searchable interface to the SSR data. CyMSatDB facilitates the retrieval of SSR markers, as well as suggested forward and reverse primers, on the basis of genomic location, genomic vs genic context, perfect vs imperfect repeat, motif type, motif sequence and repeat number. The SSR markers were validated via an in silico PCR analysis adopting two available assembled transcriptomes, derived from contrasting globe artichoke accessions, as templates. PMID:27648830
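
    Perfect SSRs of the kind catalogued here can be detected with a back-referencing regular expression, as in the minimal sketch below. The motif-length and copy-number thresholds are arbitrary illustrative choices, not the parameters used to build CyMSatDB.

```python
# Minimal perfect-SSR (microsatellite) finder using a back-referencing regex.
# Thresholds (motif length 2-6, at least 5 copies) are illustrative choices,
# not the parameters used to build CyMSatDB.
import re

# Group 2 is the motif; \2{4,} requires at least four further copies of it.
SSR_RE = re.compile(r"(([ACGT]{2,6}?)\2{4,})")

def find_perfect_ssrs(seq):
    """Yield (1-based start, motif, copies, repeat_run) for perfect SSRs in seq."""
    for m in SSR_RE.finditer(seq.upper()):
        run, motif = m.group(1), m.group(2)
        if len(set(motif)) == 1:      # skip homopolymer runs such as AAAAA...
            continue
        yield m.start() + 1, motif, len(run) // len(motif), run

demo = "TTGAGAGAGAGAGAGATCCGATCGATCGATCGATCGATCGAAAAAAAAAC"
for hit in find_perfect_ssrs(demo):
    print(hit)
# (3, 'GA', 7, 'GAGAGAGAGAGAGA')
# (19, 'CGAT', 5, 'CGATCGATCGATCGATCGAT')
```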

  19. Development of a web-based video management and application processing system

    NASA Astrophysics Data System (ADS)

    Chan, Shermann S.; Wu, Yi; Li, Qing; Zhuang, Yueting

    2001-07-01

    How to facilitate efficient video manipulation and access in a web-based environment is becoming a popular trend for video applications. In this paper, we present a web-oriented video management and application processing system, based on our previous work on multimedia database and content-based retrieval. In particular, we extend the VideoMAP architecture with specific web-oriented mechanisms, which include: (1) Concurrency control facilities for the editing of video data among different types of users, such as Video Administrator, Video Producer, Video Editor, and Video Query Client; different users are assigned various priority levels for different operations on the database. (2) Versatile video retrieval mechanism which employs a hybrid approach by integrating a query-based (database) mechanism with content- based retrieval (CBR) functions; its specific language (CAROL/ST with CBR) supports spatio-temporal semantics of video objects, and also offers an improved mechanism to describe visual content of videos by content-based analysis method. (3) Query profiling database which records the `histories' of various clients' query activities; such profiles can be used to provide the default query template when a similar query is encountered by the same kind of users. An experimental prototype system is being developed based on the existing VideoMAP prototype system, using Java and VC++ on the PC platform.

  20. Web data mining

    NASA Astrophysics Data System (ADS)

    Wibonele, Kasanda J.; Zhang, Yanqing

    2002-03-01

    A web data mining system using granular computing and ASP programming is proposed. This is a web-based application, which allows web users to submit survey data for many different companies. This survey is a collection of questions that will help these companies develop and improve their business and customer service with their clients by analyzing survey data. This web application allows users to submit data anywhere. All the survey data is collected into a database for further analysis. An administrator of this web application can log in to the system and view all the data submitted. This web application resides on a web server, and the database resides on an MS SQL server.

  1. Metadata tables to enable dynamic data modeling and web interface design: the SEER example.

    PubMed

    Weiner, Mark; Sherr, Micah; Cohen, Abigail

    2002-04-01

    A wealth of information addressing health status, outcomes and resource utilization is compiled and made available by various government agencies. While exploration of the data is possible using existing tools, in general, would-be users of the resources must acquire CD-ROMs or download data from the web, and upload the data into their own database. Where web interfaces exist, they are highly structured, limiting the kinds of queries that can be executed. This work develops a web-based database interface engine whose content and structure are generated through interaction with a metadata table. The result is a dynamically generated web interface that can easily accommodate changes in the underlying data model by altering the metadata table, rather than requiring changes to the interface code. This paper discusses the background and implementation of the metadata table and web-based front end and provides examples of its use with the NCI's Surveillance, Epidemiology and End-Results (SEER) database.
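
    The core idea, interface widgets generated from rows of a metadata table rather than hard-coded, can be sketched in a few lines: adding or changing a field in the table changes the form without touching interface code. The field definitions below are invented for illustration and are not the actual SEER metadata schema.

```python
# Minimal sketch of a metadata-table-driven query form: the HTML is generated
# from metadata rows, so schema changes only touch the table, not the code.
# Field names and types are illustrative, not the actual SEER metadata schema.
FIELD_METADATA = [
    {"name": "site",      "label": "Primary site",      "type": "choice",
     "choices": ["Breast", "Lung", "Prostate"]},
    {"name": "dx_year",   "label": "Year of diagnosis", "type": "integer"},
    {"name": "age_group", "label": "Age group",         "type": "choice",
     "choices": ["0-19", "20-49", "50+"]},
]

def render_field(meta):
    """Emit one HTML widget from a metadata row."""
    if meta["type"] == "choice":
        options = "".join(f'<option>{c}</option>' for c in meta["choices"])
        widget = f'<select name="{meta["name"]}">{options}</select>'
    else:  # fall back to a plain text input for numeric/free-text fields
        widget = f'<input type="text" name="{meta["name"]}">'
    return f'<label>{meta["label"]}: {widget}</label>'

def render_form(metadata):
    rows = "\n".join(render_field(m) for m in metadata)
    return f'<form method="get" action="/query">\n{rows}\n<button>Search</button>\n</form>'

print(render_form(FIELD_METADATA))
```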

  2. WaveNet: A Web-Based Metocean Data Access, Processing and Analysis Tool; Part 5 - WW3 Database

    DTIC Science & Technology

    2015-02-01

    Program (CDIP); and Part 4 for the Great Lakes Observing System/Coastal Forecasting System (GLOS/GLCFS). Using step-by-step instructions, this Part 5... Demirbilek, Z., L. Lin, and D. Wilson. 2014a. WaveNet: A web-based metocean data access, processing, and analysis tool; Part 3 - CDIP database

  3. A web Accessible Framework for Discovery, Visualization and Dissemination of Polar Data

    NASA Astrophysics Data System (ADS)

    Kirsch, P. J.; Breen, P.; Barnes, T. D.

    2007-12-01

    A web-accessible information framework, currently under development within the Physical Sciences Division of the British Antarctic Survey, is described. The datasets accessed are generally heterogeneous in nature, from fields including space physics, meteorology, atmospheric chemistry, ice physics, and oceanography. Many of these are returned in near real time over a 24/7 limited-bandwidth link from remote Antarctic stations and ships. The requirement is to provide various user groups, each with disparate interests and demands, with a system incorporating a browsable and searchable catalogue; bespoke data summary visualization; metadata access facilities; and download utilities. The system allows timely access to raw and processed datasets through an easily navigable discovery interface. Once discovered, a summary of the dataset can be visualized in a manner prescribed by the particular projects and user communities, or the dataset may be downloaded, subject to any accessibility restrictions that may exist. In addition, access to related ancillary information, including software, documentation, related URLs and information concerning non-electronic media (of particular relevance to some legacy datasets), is made directly available, having automatically been associated with a dataset during the discovery phase. Major components of the framework include the relational database containing the catalogue; the organizational structure of the systems holding the data (enabling automatic updates of the system catalogue and real-time access to data); the user interface design; and administrative and data management scripts allowing straightforward incorporation of utilities, datasets and system maintenance.

  4. Database of Novel and Emerging Adsorbent Materials

    National Institute of Standards and Technology Data Gateway

    SRD 205 NIST/ARPA-E Database of Novel and Emerging Adsorbent Materials (Web, free access)   The NIST/ARPA-E Database of Novel and Emerging Adsorbent Materials is a free, web-based catalog of adsorbent materials and measured adsorption properties of numerous materials obtained from article entries from the scientific literature. Search fields for the database include adsorbent material, adsorbate gas, experimental conditions (pressure, temperature), and bibliographic information (author, title, journal), and results from queries are provided as a list of articles matching the search parameters. The database also contains adsorption isotherms digitized from the cataloged articles, which can be compared visually online in the web application or exported for offline analysis.

  5. FCDD: A Database for Fruit Crops Diseases.

    PubMed

    Chauhan, Rupal; Jasrai, Yogesh; Pandya, Himanshu; Chaudhari, Suman; Samota, Chand Mal

    2014-01-01

    Building the Fruit Crops Diseases Database (FCDD) requires a number of biotechnology and bioinformatics tools. The FCDD is a unique bioinformatics resource that compiles details on 162 fruit crop diseases, including disease type, causal organism, images, symptoms and their control. The FCDD contains 171 phytochemicals from 25 fruits, their 2D images and their 20 possible sequences. This information has been manually extracted and manually verified from numerous sources, including other electronic databases, textbooks and scientific journals. FCDD is fully searchable and supports extensive text search. The main focus of the FCDD is on providing possible information on fruit crop diseases, which will help in the discovery of potential drugs from one of the most common bioresources, fruits. The database was developed using MySQL. The database interface is developed in PHP, HTML and Java. FCDD is freely available. http://www.fruitcropsdd.com/

  6. MEGADOCK-Web: an integrated database of high-throughput structure-based protein-protein interaction predictions.

    PubMed

    Hayashi, Takanori; Matsuzaki, Yuri; Yanagisawa, Keisuke; Ohue, Masahito; Akiyama, Yutaka

    2018-05-08

    Protein-protein interactions (PPIs) play several roles in living cells, and computational PPI prediction is a major focus of many researchers. The three-dimensional (3D) structure and binding surface are important for the design of PPI inhibitors. Therefore, rigid body protein-protein docking calculations for two protein structures are expected to allow elucidation of PPIs different from known complexes in terms of 3D structures because known PPI information is not explicitly required. We have developed rapid PPI prediction software based on protein-protein docking, called MEGADOCK. In order to fully utilize the benefits of computational PPI predictions, it is necessary to construct a comprehensive database to gather prediction results and their predicted 3D complex structures and to make them easily accessible. Although several databases exist that provide predicted PPIs, the previous databases do not contain a sufficient number of entries for the purpose of discovering novel PPIs. In this study, we constructed an integrated database of MEGADOCK PPI predictions, named MEGADOCK-Web. MEGADOCK-Web provides more than 10 times the number of PPI predictions available in previous databases and enables users to conduct PPI predictions that cannot be found in conventional PPI prediction databases. In MEGADOCK-Web, there are 7528 protein chains and 28,331,628 predicted PPIs from all possible combinations of those proteins. Each protein structure is annotated with PDB ID, chain ID, UniProt AC, related KEGG pathway IDs, and known PPI pairs. Additionally, MEGADOCK-Web provides four powerful functions: 1) searching precalculated PPI predictions, 2) providing annotations for each predicted protein pair with an experimentally known PPI, 3) visualizing candidates that may interact with the query protein on biochemical pathways, and 4) visualizing predicted complex structures through a 3D molecular viewer. MEGADOCK-Web provides a huge amount of comprehensive PPI predictions based on docking calculations with biochemical pathways and enables users to easily and quickly assess PPI feasibilities by archiving PPI predictions. MEGADOCK-Web also promotes the discovery of new PPIs and protein functions and is freely available for use at http://www.bi.cs.titech.ac.jp/megadock-web/.

  7. A web-based platform for virtual screening.

    PubMed

    Watson, Paul; Verdonk, Marcel; Hartshorn, Michael J

    2003-09-01

    A fully integrated, web-based, virtual screening platform has been developed to allow rapid virtual screening of large numbers of compounds. ORACLE is used to store information at all stages of the process. The system includes a large database of historical compounds from high throughput screening (HTS) chemical suppliers, ATLAS, containing over 3.1 million unique compounds with their associated physicochemical properties (ClogP, MW, etc.). The database can be screened using a web-based interface to produce compound subsets for virtual screening or virtual library (VL) enumeration. In order to carry out the latter task within ORACLE, a reaction data cartridge has been developed. Virtual libraries can be enumerated rapidly using the web-based interface to the cartridge. The compound subsets can be seamlessly submitted for virtual screening experiments, and the results can be viewed via another web-based interface allowing ad hoc querying of the virtual screening data stored in ORACLE.
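
    A sketch of the kind of property filter used to carve compound subsets out of such a store is shown below, assuming RDKit is available for descriptor calculation. The SMILES entries, cut-offs and in-memory catalogue are illustrative and do not reflect the ATLAS schema or its stored descriptors.

```python
# Minimal sketch of carving a virtual-screening subset by physicochemical
# properties (MW, ClogP), assuming RDKit; the SMILES and cut-offs are
# illustrative, not the ATLAS database schema or its stored descriptors.
from rdkit import Chem
from rdkit.Chem import Descriptors

catalogue = {
    "CMPD-1": "CC(=O)Oc1ccccc1C(=O)O",         # aspirin
    "CMPD-2": "CCCCCCCCCCCCCCCCCC(=O)O",       # stearic acid
    "CMPD-3": "CN1C=NC2=C1C(=O)N(C)C(=O)N2C",  # caffeine
}

def lead_like_subset(compounds, max_mw=350.0, max_clogp=3.5):
    """Keep compounds whose calculated MW and ClogP fall under the cut-offs."""
    subset = []
    for cid, smiles in compounds.items():
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            continue  # skip unparsable entries
        mw, clogp = Descriptors.MolWt(mol), Descriptors.MolLogP(mol)
        if mw <= max_mw and clogp <= max_clogp:
            subset.append((cid, round(mw, 1), round(clogp, 2)))
    return subset

print(lead_like_subset(catalogue))  # aspirin and caffeine pass; stearic acid fails on ClogP
```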

  8. Establishment of Kawasaki disease database based on metadata standard.

    PubMed

    Park, Yu Rang; Kim, Jae-Jung; Yoon, Young Jo; Yoon, Young-Kwang; Koo, Ha Yeong; Hong, Young Mi; Jang, Gi Young; Shin, Soo-Yong; Lee, Jong-Keuk

    2016-07-01

    Kawasaki disease (KD) is a rare disease that occurs predominantly in infants and young children. To identify KD susceptibility genes and to develop a diagnostic test, a specific therapy, or a prevention method, collecting KD patients' clinical and genomic data is one of the major challenges. For this purpose, the Kawasaki Disease Database (KDD) was developed based on the efforts of the Korean Kawasaki Disease Genetics Consortium (KKDGC). KDD is a collection of 1292 clinical data records and genomic samples from 1283 patients at 13 KKDGC-participating hospitals. Each sample contains the relevant clinical data, genomic DNA and plasma samples isolated from patients' blood, omics data and KD-associated genotype data. Clinical data were collected and saved using common data elements based on the ISO/IEC 11179 metadata standard. Two genome-wide association study datasets totaling 482 samples and whole-exome sequencing data from 12 samples were also collected. In addition, KDD includes the rare cases of KD (16 cases with family history, 46 cases with recurrence, 119 cases with intravenous immunoglobulin non-responsiveness, and 52 cases with coronary artery aneurysm). As the first public database for KD, KDD can significantly facilitate KD studies. All data in KDD are searchable and downloadable. KDD was implemented in PHP, MySQL and Apache, with all major browsers supported. Database URL: http://www.kawasakidisease.kr. © The Author(s) 2016. Published by Oxford University Press.
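
    A minimal sketch of what common-data-element (CDE) style validation can look like is given below. The element names, permissible values and attribute set are assumptions that loosely follow ISO/IEC 11179 vocabulary; they do not reproduce the actual KDD data dictionary.

```python
# Minimal sketch of common-data-element (CDE) style validation, loosely using
# ISO/IEC 11179 vocabulary (definition, data type, permissible values).
# Element names and values are illustrative, not the actual KDD dictionary.
COMMON_DATA_ELEMENTS = {
    "ivig_response": {
        "definition": "Response to intravenous immunoglobulin therapy",
        "data_type": "string",
        "permissible_values": ["responsive", "non-responsive", "unknown"],
    },
    "coronary_artery_aneurysm": {
        "definition": "Presence of coronary artery aneurysm",
        "data_type": "string",
        "permissible_values": ["yes", "no", "unknown"],
    },
}

def validate_record(record):
    """Return a list of (field, problem) pairs for values outside the CDEs."""
    problems = []
    for field, value in record.items():
        cde = COMMON_DATA_ELEMENTS.get(field)
        if cde is None:
            problems.append((field, "not a registered data element"))
        elif value not in cde["permissible_values"]:
            problems.append((field, f"'{value}' not in {cde['permissible_values']}"))
    return problems

print(validate_record({"ivig_response": "non-responsive",
                       "coronary_artery_aneurysm": "maybe"}))
# [('coronary_artery_aneurysm', "'maybe' not in ['yes', 'no', 'unknown']")]
```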

  9. The Nuclear Protein Database (NPD): sub-nuclear localisation and functional annotation of the nuclear proteome

    PubMed Central

    Dellaire, G.; Farrall, R.; Bickmore, W.A.

    2003-01-01

    The Nuclear Protein Database (NPD) is a curated database that contains information on more than 1300 vertebrate proteins that are thought, or are known, to localise to the cell nucleus. Each entry is annotated with information on predicted protein size and isoelectric point, as well as any repeats, motifs or domains within the protein sequence. In addition, information on the sub-nuclear localisation of each protein is provided and the biological and molecular functions are described using Gene Ontology (GO) terms. The database is searchable by keyword, protein name, sub-nuclear compartment and protein domain/motif. Links to other databases are provided (e.g. Entrez, SWISS-PROT, OMIM, PubMed, PubMed Central). Thus, NPD provides a gateway through which the nuclear proteome may be explored. The database can be accessed at http://npd.hgu.mrc.ac.uk and is updated monthly. PMID:12520015

  10. The Androgen Receptor Gene Mutations Database.

    PubMed

    Gottlieb, B; Lehvaslaiho, H; Beitel, L K; Lumbroso, R; Pinsky, L; Trifiro, M

    1998-01-01

    The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 272 to 309 in the past year. We have expanded the database: (i) by giving each entry an accession number; (ii) by adding information on the length of polymorphic polyglutamine (polyGln) and polyglycine (polyGly) tracts in exon 1; (iii) by adding information on large gene deletions; (iv) by providing a direct link with a completely searchable database (courtesy EMBL-European Bioinformatics Institute). The addition of the exon 1 polymorphisms is discussed in light of their possible relevance as markers for predisposition to prostate or breast cancer. The database is also available on the internet (http://www.mcgill.ca/androgendb/), from EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FilemakerPro or Word file (MC33@musica.mcgill.ca).

  11. The Androgen Receptor Gene Mutations Database.

    PubMed Central

    Gottlieb, B; Lehvaslaiho, H; Beitel, L K; Lumbroso, R; Pinsky, L; Trifiro, M

    1998-01-01

    The current version of the androgen receptor (AR) gene mutations database is described. The total number of reported mutations has risen from 272 to 309 in the past year. We have expanded the database: (i) by giving each entry an accession number; (ii) by adding information on the length of polymorphic polyglutamine (polyGln) and polyglycine (polyGly) tracts in exon 1; (iii) by adding information on large gene deletions; (iv) by providing a direct link with a completely searchable database (courtesy EMBL-European Bioinformatics Institute). The addition of the exon 1 polymorphisms is discussed in light of their possible relevance as markers for predisposition to prostate or breast cancer. The database is also available on the internet (http://www.mcgill.ca/androgendb/), from EMBL-European Bioinformatics Institute (ftp.ebi.ac.uk/pub/databases/androgen), or as a Macintosh FilemakerPro or Word file (MC33@musica.mcgill.ca). PMID:9399843

  12. WCSTools 3.0: More Tools for Image Astrometry and Catalog Searching

    NASA Astrophysics Data System (ADS)

    Mink, Douglas J.

    For five years, WCSTools has provided image astrometry for astronomers who need accurate positions for objects they wish to observe. Other functions have been added and improved since the package was first released. Support has been added for new catalogs, such as the GSC-ACT, 2MASS Point Source Catalog, and GSC II, as they have been published. A simple command line interface can search any supported catalog, returning information in several standard formats, whether the catalog is on a local disk or searchable over the World Wide Web. The catalog searching routine can be located on either end (or both ends!) of such a web connection, and the output from one catalog search can be used as the input to another search.

  13. Turning Access into a web-enabled secure information system for clinical trials.

    PubMed

    Dongquan Chen; Chen, Wei-Bang; Soong, Mayhue; Soong, Seng-Jaw; Orthner, Helmuth F

    2009-08-01

    Organizations that have limited resources need to conduct clinical studies in a cost-effective, but secure way. Clinical data residing in various individual databases need to be easily accessed and secured. Although widely available, digital certificates, encryption, and secure web servers have not been implemented as widely, partly due to a lack of understanding of the needs and partly due to concerns over issues such as cost and difficulty of implementation. The objective of this study was to test the possibility of centralizing various databases and to demonstrate ways of offering an alternative to a large-scale, comprehensive and costly commercial product, especially for simple phase I and II trials, with reasonable convenience and security. We report a working procedure for transforming a standalone Access database into a secure Web-based information system. For data collection and reporting purposes, we centralized several individual databases and developed and tested a web-based secure server using self-issued digital certificates. The system lacks audit trails. The cost of development and maintenance may hinder its wide application. The clinical trial databases scattered in various departments of an institution could be centralized into a web-enabled secure information system. Limitations such as the lack of a calendar and audit trail can be partially addressed with additional programming. The centralized Web system may provide an alternative to a comprehensive clinical trial management system.

  14. A Web-Based Multi-Database System Supporting Distributed Collaborative Management and Sharing of Microarray Experiment Information

    PubMed Central

    Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco

    2006-01-01

    We developed MicroGen, a multi-database Web based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storing according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all multidisciplinary actors involved in spotted microarray experiments. PMID:17238488

  15. CUNY+ Web: Usability Study of the Web-Based GUI Version of the Bibliographic Database of the City University of New York (CUNY).

    ERIC Educational Resources Information Center

    Oulanov, Alexei; Pajarillo, Edmund J. Y.

    2002-01-01

    Describes the usability evaluation of the CUNY (City University of New York) information system in Web and Graphical User Interface (GUI) versions. Compares results to an earlier usability study of the basic information database available on CUNY's wide-area network and describes the applicability of the previous usability instrument to this…

  16. Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.

    PubMed

    Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel

    2012-01-01

    Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization. RSNA, 2012
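
    Because CouchDB exposes everything over HTTP, the pattern described (DICOM metadata stored as schema-free documents and queried through predefined JavaScript map views) can be sketched with plain REST calls. The server URL, credentials and chosen tags below are assumptions; the endpoints themselves (PUT database, POST document, GET _design/_view) are standard CouchDB.

```python
# Sketch of the document-store pattern described above: DICOM metadata as JSON
# documents in CouchDB, queried through a predefined JavaScript map view.
# Server URL, credentials and the chosen tags are assumptions; the REST
# endpoints (PUT db, POST doc, GET _design/_view) are standard CouchDB.
import requests

COUCH = "http://localhost:5984"
AUTH = ("admin", "password")          # hypothetical credentials
DB = f"{COUCH}/dicom_metadata"

requests.put(DB, auth=AUTH)           # create the database (fine for a demo if it exists)

# Store one DICOM header as a schema-free document (only a few tags shown).
doc = {"SOPInstanceUID": "1.2.3.4", "Modality": "MR",
       "StudyDate": "20120114", "SeriesDescription": "T1 AX"}
requests.post(DB, json=doc, auth=AUTH)

# A design document holding a JavaScript map view: index documents by modality.
design = {"views": {"by_modality": {
    "map": "function(doc){ if (doc.Modality) { emit(doc.Modality, doc.StudyDate); } }"}}}
requests.put(f"{DB}/_design/dicom", json=design, auth=AUTH)

# Query the view like a pre-materialised index: all MR studies.
resp = requests.get(f"{DB}/_design/dicom/_view/by_modality",
                    params={"key": '"MR"'}, auth=AUTH)
print(resp.json()["rows"])
```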

  17. GeoSymbio: a hybrid, cloud-based web application of global geospatial bioinformatics and ecoinformatics for Symbiodinium-host symbioses.

    PubMed

    Franklin, Erik C; Stat, Michael; Pochon, Xavier; Putnam, Hollie M; Gates, Ruth D

    2012-03-01

    The genus Symbiodinium encompasses a group of unicellular, photosynthetic dinoflagellates that are found free living or in hospite with a wide range of marine invertebrate hosts including scleractinian corals. We present GeoSymbio, a hybrid web application that provides an online, easy to use and freely accessible interface for users to discover, explore and utilize global geospatial bioinformatic and ecoinformatic data on Symbiodinium-host symbioses. The novelty of this application lies in the combination of a variety of query and visualization tools, including dynamic searchable maps, data tables with filter and grouping functions, and interactive charts that summarize the data. Importantly, this application is hosted remotely or 'in the cloud' using Google Apps, and therefore does not require any specialty GIS, web programming or data programming expertise from the user. The current version of the application utilizes Symbiodinium data based on the ITS2 genetic marker from PCR-based techniques, including denaturing gradient gel electrophoresis, sequencing and cloning of specimens collected during 1982-2010. All data elements of the application are also downloadable as spatial files, tables and nucleic acid sequence files in common formats for desktop analysis. The application provides a unique tool set to facilitate research on the basic biology of Symbiodinium and expedite new insights into their ecology, biogeography and evolution in the face of a changing global climate. GeoSymbio can be accessed at https://sites.google.com/site/geosymbio/. © 2011 Blackwell Publishing Ltd.

  18. Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics

    PubMed Central

    Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.

    2012-01-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849

  19. Applying World Wide Web technology to the study of patients with rare diseases.

    PubMed

    de Groen, P C; Barry, J A; Schaller, W J

    1998-07-15

    Randomized, controlled trials of sporadic diseases are rarely conducted. Recent developments in communication technology, particularly the World Wide Web, allow efficient dissemination and exchange of information. However, software for the identification of patients with a rare disease and subsequent data entry and analysis in a secure Web database are currently not available. To study cholangiocarcinoma, a rare cancer of the bile ducts, we developed a computerized disease tracing system coupled with a database accessible on the Web. The tracing system scans computerized information systems on a daily basis and forwards demographic information on patients with bile duct abnormalities to an electronic mailbox. If informed consent is given, the patient's demographic and preexisting medical information available in medical database servers are electronically forwarded to a UNIX research database. Information from further patient-physician interactions and procedures is also entered into this database. The database is equipped with a Web user interface that allows data entry from various platforms (PC-compatible, Macintosh, and UNIX workstations) anywhere inside or outside our institution. To ensure patient confidentiality and data security, the database includes all security measures required for electronic medical records. The combination of a Web-based disease tracing system and a database has broad applications, particularly for the integration of clinical research within clinical practice and for the coordination of multicenter trials.

  20. CellAtlasSearch: a scalable search engine for single cells.

    PubMed

    Srivastava, Divyanshu; Iyer, Arvind; Kumar, Vibhor; Sengupta, Debarka

    2018-05-21

    Owing to the advent of high-throughput single-cell transcriptomics, the past few years have seen exponential growth in the production of gene expression data. Recently, efforts have been made by various research groups to homogenize and store single-cell expression data from a large number of studies. The true value of this ever-increasing data deluge can be unlocked by making it searchable. To this end, we propose CellAtlasSearch, a novel search architecture for high-dimensional expression data that is massively parallel as well as lightweight, and thus highly scalable. In CellAtlasSearch, we use a Graphics Processing Unit (GPU) friendly version of Locality Sensitive Hashing (LSH) for unmatched speedup in data processing and querying. Currently, CellAtlasSearch features over 300 000 reference expression profiles, including both bulk and single-cell data. It enables the user to query individual single-cell transcriptomes and find matching samples from the database along with the necessary meta information. CellAtlasSearch aims to assist researchers and clinicians in characterizing unannotated single cells. It also facilitates noise-free, low-dimensional representation of single-cell expression profiles by projecting them onto a wide variety of reference samples. The web server is accessible at: http://www.cellatlassearch.com.
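
    The record attributes the speedup to a GPU-friendly variant of Locality Sensitive Hashing. The sketch below shows the standard random-hyperplane LSH family for cosine similarity on a CPU, which is one common way to reduce high-dimensional expression profiles to short binary signatures; the dimensions and data are invented and this is not the CellAtlasSearch implementation itself.

        # Random-hyperplane LSH sketch (illustrative only; not the GPU
        # implementation used by CellAtlasSearch).
        import numpy as np

        rng = np.random.default_rng(0)
        n_genes, n_bits = 2000, 64            # expression dimension, signature length

        # One random hyperplane per signature bit.
        planes = rng.standard_normal((n_bits, n_genes))

        def signature(expr):
            """Bit i records which side of hyperplane i the vector falls on."""
            return (planes @ expr > 0).astype(np.uint8)

        def hamming(a, b):
            return int(np.count_nonzero(a != b))

        # Reference profiles (rows) and a query cell, here random stand-ins.
        reference = rng.random((500, n_genes))
        query = rng.random(n_genes)

        ref_sigs = (planes @ reference.T > 0).astype(np.uint8).T   # precomputed once
        q_sig = signature(query)

        # The nearest reference by Hamming distance approximates the nearest
        # reference by cosine similarity.
        best = min(range(len(reference)), key=lambda i: hamming(ref_sigs[i], q_sig))
        print("closest reference profile:", best)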

  1. TOPSAN: a dynamic web database for structural genomics.

    PubMed

    Ellrott, Kyle; Zmasek, Christian M; Weekes, Dana; Sri Krishna, S; Bakolitsa, Constantina; Godzik, Adam; Wooley, John

    2011-01-01

    The Open Protein Structure Annotation Network (TOPSAN) is a web-based collaboration platform for exploring and annotating structures determined by structural genomics efforts. Characterization of those structures presents a challenge since the majority of the proteins themselves have not yet been characterized. Responding to this challenge, the TOPSAN platform facilitates collaborative annotation and investigation via a user-friendly web-based interface pre-populated with automatically generated information. Semantic web technologies expand and enrich TOPSAN's content through links to larger sets of related databases, and thus, enable data integration from disparate sources and data mining via conventional query languages. TOPSAN can be found at http://www.topsan.org.

  2. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain.

    PubMed

    Dominkovics, Pau; Granell, Carlos; Pérez-Navarro, Antoni; Casals, Martí; Orcau, Angels; Caylà, Joan A

    2011-11-29

    Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. The data used in this study are based on tuberculosis (TB) cases registered in the city of Barcelona during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. The results focus on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios.
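
    As a sketch of how a RESTful geoprocessing resource of this kind is typically invoked, the snippet below requests a rendered density map over HTTP; the endpoint path and parameter names are hypothetical and do not correspond to the actual Barcelona service.

        # Hypothetical REST call to a density-map geoprocessing resource
        # (endpoint and parameters are illustrative, not the published service).
        import requests

        BASE = "https://example.org/geoprocessing"     # placeholder service root

        params = {
            "disease": "tuberculosis",
            "year": 2009,
            "bandwidth_m": 500,        # kernel bandwidth of the density surface
            "format": "image/png",
        }
        resp = requests.get(f"{BASE}/density-maps/barcelona", params=params, timeout=30)
        resp.raise_for_status()

        with open("tb_density_2009.png", "wb") as fh:
            fh.write(resp.content)     # rendered spatial density map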

  3. Development of spatial density maps based on geoprocessing web services: application to tuberculosis incidence in Barcelona, Spain

    PubMed Central

    2011-01-01

    Background Health professionals and authorities strive to cope with heterogeneous data, services, and statistical models to support decision making on public health. Sophisticated analysis and distributed processing capabilities over geocoded epidemiological data are seen as driving factors to speed up control and decision making in these health risk situations. In this context, recent Web technologies and standards-based web services deployed on geospatial information infrastructures have rapidly become an efficient way to access, share, process, and visualize geocoded health-related information. Methods The data used in this study are based on tuberculosis (TB) cases registered in the city of Barcelona during 2009. Residential addresses are geocoded and loaded into a spatial database that acts as a backend database. The web-based application architecture and geoprocessing web services are designed according to the Representational State Transfer (REST) principles. These web processing services produce spatial density maps against the backend database. Results The results focus on the use of the proposed web-based application for the analysis of TB cases in Barcelona. The application produces spatial density maps to ease the monitoring and decision making process by health professionals. We also include a discussion of how spatial density maps may be useful for health practitioners in such contexts. Conclusions In this paper, we developed a web-based client application and a set of geoprocessing web services to support specific health-spatial requirements. Spatial density maps of TB incidence were generated to help health professionals in analysis and decision-making tasks. The combined use of geographic information tools, map viewers, and geoprocessing services leads to interesting possibilities in handling health data in a spatial manner. In particular, the use of spatial density maps has been effective in identifying the most affected areas and their spatial impact. This study is an attempt to demonstrate how web processing services together with web-based mapping capabilities suit the needs of health practitioners in epidemiological analysis scenarios. PMID:22126392

  4. Evidence-informed decision-making by professionals working in addiction agencies serving women: a descriptive qualitative study.

    PubMed

    Jack, Susan M; Dobbins, Maureen; Sword, Wendy; Novotna, Gabriela; Brooks, Sandy; Lipman, Ellen L; Niccols, Alison

    2011-11-07

    Effective approaches to the prevention and treatment of substance abuse among mothers have been developed but not widely implemented. Implementation studies suggest that the adoption of evidence-based practices in the field of addictions remains low. There is a need, therefore, to better understand decision making processes in addiction agencies in order to develop more effective approaches to promote the translation of knowledge gained from addictions research into clinical practice. A descriptive qualitative study was conducted to explore: 1) the types and sources of evidence used to inform practice-related decisions within Canadian addiction agencies serving women; 2) how decision makers at different levels report using research evidence; and 3) factors that influence evidence-informed decision making. A purposeful sample of 26 decision-makers providing addiction treatment services to women completed in-depth qualitative interviews. Interview data were coded and analyzed using directed and summative content analysis strategies as well as constant comparison techniques. Across all groups, individuals reported locating and using multiple types of evidence to inform decisions. Some decision-makers rely on their experiential knowledge of addiction and recovery in decision-making. Research evidence is often used directly in decision-making at program management and senior administrative levels. Information for decision-making is accessed from a range of sources, including web-based resources and experts in the field. Individual and organizational facilitators and barriers to using research evidence in decision making were identified. There is support at administrative levels for integrating EIDM in addiction agencies. Knowledge transfer and exchange strategies should be focussed towards program managers and administrators and include capacity building for locating, appraising and using research evidence, knowledge brokering, and for partnering with universities. Resources are required to maintain web-based databases of searchable evidence to facilitate access to research evidence. A need exists to address the perception that there is a paucity of research evidence available to inform program decisions. Finally, there is a need to consider how experiential knowledge influences decision-making and what guidance research evidence has to offer regarding the implementation of different treatment approaches within the field of addictions.

  5. Evidence-informed decision-making by professionals working in addiction agencies serving women: a descriptive qualitative study

    PubMed Central

    2011-01-01

    Background Effective approaches to the prevention and treatment of substance abuse among mothers have been developed but not widely implemented. Implementation studies suggest that the adoption of evidence-based practices in the field of addictions remains low. There is a need, therefore, to better understand decision making processes in addiction agencies in order to develop more effective approaches to promote the translation of knowledge gained from addictions research into clinical practice. Methods A descriptive qualitative study was conducted to explore: 1) the types and sources of evidence used to inform practice-related decisions within Canadian addiction agencies serving women; 2) how decision makers at different levels report using research evidence; and 3) factors that influence evidence-informed decision making. A purposeful sample of 26 decision-makers providing addiction treatment services to women completed in-depth qualitative interviews. Interview data were coded and analyzed using directed and summative content analysis strategies as well as constant comparison techniques. Results Across all groups, individuals reported locating and using multiple types of evidence to inform decisions. Some decision-makers rely on their experiential knowledge of addiction and recovery in decision-making. Research evidence is often used directly in decision-making at program management and senior administrative levels. Information for decision-making is accessed from a range of sources, including web-based resources and experts in the field. Individual and organizational facilitators and barriers to using research evidence in decision making were identified. Conclusions There is support at administrative levels for integrating EIDM in addiction agencies. Knowledge transfer and exchange strategies should be focussed towards program managers and administrators and include capacity building for locating, appraising and using research evidence, knowledge brokering, and for partnering with universities. Resources are required to maintain web-based databases of searchable evidence to facilitate access to research evidence. A need exists to address the perception that there is a paucity of research evidence available to inform program decisions. Finally, there is a need to consider how experiential knowledge influences decision-making and what guidance research evidence has to offer regarding the implementation of different treatment approaches within the field of addictions. PMID:22059528

  6. Automated Data Tagging in the HLA

    NASA Astrophysics Data System (ADS)

    Gaffney, N. I.; Miller, W. W.

    2008-08-01

    One of the more powerful and popular forms of data organization implemented in most information-sharing web applications is data tagging. With a rich user base from which to gather and digest tags, many interesting and often unanticipated yet very useful associations are revealed. The astronomical community already has a richer pool of digitally stored and searchable data than any of the currently popular web communities, such as YouTube or MySpace, had when they started. In initial experiments with the search engine for the Hubble Legacy Archive, we have created a simple yet powerful scheme by which information from a footprint service, the NED and SIMBAD catalog services, and ADS abstracts and keywords can be used to tag data with standard keywords. By ingesting these tags into a publicly available information search engine, such as Apache Lucene, one can create a simple and powerful tag search and association system. Augmenting this with user-provided keywords and usage-pattern analysis yields a powerful, modern data-mining system for any astronomical data warehouse.
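
    The following is a conceptual Python stand-in for the tag-and-search scheme described above: keywords harvested from catalog and bibliographic services are attached to observations and indexed for retrieval. A production system would delegate the indexing to a full-text engine such as Apache Lucene; the observation IDs and tags here are invented examples.

        # Conceptual tag search sketch (pure Python; a real system would use a
        # full-text engine such as Apache Lucene for indexing at scale).
        from collections import defaultdict

        # Observations tagged with keywords from footprint, NED/SIMBAD and ADS
        # metadata; the values below are invented examples.
        observations = {
            "hst_obs_001": {"M31", "spiral galaxy", "WFPC2"},
            "hst_obs_002": {"M31", "globular cluster", "ACS"},
            "hst_obs_003": {"Crab Nebula", "supernova remnant", "ACS"},
        }

        # Inverted index: tag -> set of observation IDs.
        index = defaultdict(set)
        for obs_id, tags in observations.items():
            for tag in tags:
                index[tag.lower()].add(obs_id)

        def search(*tags):
            """Return the observations carrying all of the requested tags."""
            sets = [index.get(t.lower(), set()) for t in tags]
            return set.intersection(*sets) if sets else set()

        print(search("M31", "ACS"))    # -> {'hst_obs_002'}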

  7. Development of Web-based Distributed Cooperative Development Environmentof Sign-Language Animation System and its Evaluation

    NASA Astrophysics Data System (ADS)

    Yuizono, Takaya; Hara, Kousuke; Nakayama, Shigeru

    A web-based distributed cooperative development environment for a sign-language animation system has been developed. The system extends a previous animation system that was constructed as a three-tiered architecture consisting of a sign-language animation interface layer, a sign-language data-processing layer, and a sign-language animation database. Two components, a web client using a VRML plug-in and a web servlet, have been added to the previous system. The system supports a humanoid-model avatar for interoperability and can use the stored sign-language animation data shared in the database. The evaluation of the system showed that the inverse kinematics function of the web client improves the creation of sign-language animations.

  8. Protein Information Resource: a community resource for expert annotation of protein data

    PubMed Central

    Barker, Winona C.; Garavelli, John S.; Hou, Zhenglin; Huang, Hongzhan; Ledley, Robert S.; McGarvey, Peter B.; Mewes, Hans-Werner; Orcutt, Bruce C.; Pfeiffer, Friedhelm; Tsugita, Akira; Vinayaka, C. R.; Xiao, Chunlin; Yeh, Lai-Su L.; Wu, Cathy

    2001-01-01

    The Protein Information Resource, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the most comprehensive and expertly annotated protein sequence database in the public domain, the PIR-International Protein Sequence Database. To provide timely and high quality annotation and promote database interoperability, the PIR-International employs rule-based and classification-driven procedures based on controlled vocabulary and standard nomenclature and includes status tags to distinguish experimentally determined from predicted protein features. The database contains about 200 000 non-redundant protein sequences, which are classified into families and superfamilies and their domains and motifs identified. Entries are extensively cross-referenced to other sequence, classification, genome, structure and activity databases. The PIR web site features search engines that use sequence similarity and database annotation to facilitate the analysis and functional identification of proteins. The PIR-International databases and search tools are accessible on the PIR web site at http://pir.georgetown.edu/ and at the MIPS web site at http://www.mips.biochem.mpg.de. The PIR-International Protein Sequence Database and other files are also available by FTP. PMID:11125041

  9. NPIDB: Nucleic acid-Protein Interaction DataBase.

    PubMed

    Kirsanov, Dmitry D; Zanegina, Olga N; Aksianov, Evgeniy A; Spirin, Sergei A; Karyagina, Anna S; Alexeevski, Andrei V

    2013-01-01

    The Nucleic acid-Protein Interaction DataBase (http://npidb.belozersky.msu.ru/) contains information derived from structures of DNA-protein and RNA-protein complexes extracted from the Protein Data Bank (3846 complexes in October 2012). It provides a web interface and a set of tools for extracting biologically meaningful characteristics of nucleoprotein complexes. The content of the database is updated weekly. The current version of the Nucleic acid-Protein Interaction DataBase is an upgrade of the version published in 2007. The improvements include a new web interface, new tools for the calculation of intermolecular interactions, a classification of SCOP families that contain DNA-binding protein domains, and data on conserved water molecules at the DNA-protein interface.

  10. The Xeno-glycomics database (XDB): a relational database of qualitative and quantitative pig glycome repertoire.

    PubMed

    Park, Hae-Min; Park, Ju-Hyeong; Kim, Yoon-Woo; Kim, Kyoung-Jin; Jeong, Hee-Jin; Jang, Kyoung-Soon; Kim, Byung-Gee; Kim, Yun-Gon

    2013-11-15

    In recent years, the improvement of mass spectrometry-based glycomics techniques (i.e. highly sensitive, quantitative and high-throughput analytical tools) has enabled us to obtain large datasets of glycans. Here we present a database named the Xeno-glycomics database (XDB) that contains cell- or tissue-specific pig glycomes analyzed with mass spectrometry-based techniques, including comprehensive pig glycan information on chemical structures, mass values, types and relative quantities. It was designed with a user-friendly web-based interface that allows users to query the database according to pig tissue/cell types or glycan masses. This database will contribute to providing qualitative and quantitative information on glycomes characterized from various pig cells/organs in xenotransplantation and might eventually provide new targets in the era of α1,3-galactosyltransferase gene-knockout pigs. The database can be accessed on the web at http://bioinformatics.snu.ac.kr/xdb.

  11. Informatics in radiology: RADTF: a semantic search-enabled, natural language processor-generated radiology teaching file.

    PubMed

    Do, Bao H; Wu, Andrew; Biswal, Sandip; Kamaya, Aya; Rubin, Daniel L

    2010-11-01

    Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex(®)-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. ©RSNA, 2010
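
    The record describes extraction and de-identification of teaching-relevant statements from reports. The toy sketch below shows rule-based de-identification with regular expressions; the three patterns are purely illustrative, and a system such as the one described relies on a much richer NLP pipeline.

        # Toy rule-based de-identification sketch (illustrative only; not the
        # NLP pipeline used by RADTF).
        import re

        PATTERNS = [
            (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),             # exam dates
            (re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE), "[MRN]"),    # record numbers
            (re.compile(r"\bDr\.\s+[A-Z][a-z]+\b"), "[PHYSICIAN]"),       # physician names
        ]

        def deidentify(report):
            for pattern, token in PATTERNS:
                report = pattern.sub(token, report)
            return report

        sample = "Exam 03/14/2009, MRN: 1234567, read by Dr. Smith. Osteoid osteoma noted."
        print(deidentify(sample))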

  12. Video Games for Diabetes Self-Management: Examples and Design Strategies

    PubMed Central

    Lieberman, Debra A.

    2012-01-01

    The July 2012 issue of the Journal of Diabetes Science and Technology includes a special symposium called “Serious Games for Diabetes, Obesity, and Healthy Lifestyle.” As part of the symposium, this article focuses on health behavior change video games that are designed to improve and support players’ diabetes self-management. Other symposium articles include one that recommends theory-based approaches to the design of health games and identifies areas in which additional research is needed, followed by five research articles presenting studies of the design and effectiveness of games and game technologies that require physical activity in order to play. This article briefly describes 14 diabetes self-management video games, and, when available, cites research findings on their effectiveness. The games were found by searching the Health Games Research online searchable database, three bibliographic databases (ACM Digital Library, PubMed, and Social Sciences Databases of CSA Illumina), and the Google search engine, using the search terms “diabetes” and “game.” Games were selected if they addressed diabetes self-management skills. PMID:22920805

  13. Video games for diabetes self-management: examples and design strategies.

    PubMed

    Lieberman, Debra A

    2012-07-01

    The July 2012 issue of the Journal of Diabetes Science and Technology includes a special symposium called "Serious Games for Diabetes, Obesity, and Healthy Lifestyle." As part of the symposium, this article focuses on health behavior change video games that are designed to improve and support players' diabetes self-management. Other symposium articles include one that recommends theory-based approaches to the design of health games and identifies areas in which additional research is needed, followed by five research articles presenting studies of the design and effectiveness of games and game technologies that require physical activity in order to play. This article briefly describes 14 diabetes self-management video games, and, when available, cites research findings on their effectiveness. The games were found by searching the Health Games Research online searchable database, three bibliographic databases (ACM Digital Library, PubMed, and Social Sciences Databases of CSA Illumina), and the Google search engine, using the search terms "diabetes" and "game." Games were selected if they addressed diabetes self-management skills. © 2012 Diabetes Technology Society.

  14. Integration of multiple DICOM Web servers into an enterprise-wide Web-based electronic medical record

    NASA Astrophysics Data System (ADS)

    Stewart, Brent K.; Langer, Steven G.; Martin, Kelly P.

    1999-07-01

    The purpose of this paper is to integrate multiple DICOM image web servers into the existing enterprise-wide, web-browsable electronic medical record. Over the last six years the University of Washington has created a clinical data repository (MIND) that combines, in a distributed relational database, information from multiple departmental databases. A character cell-based view of these data, called the Mini Medical Record (MMR), has been available for four years. MINDscape, unlike the text-based MMR, provides a platform-independent, dynamic, web browser view of the MIND database that can be easily linked with medical knowledge resources on the network, such as PubMed and the Federated Drug Reference. There are over 10,000 MINDscape user accounts at the University of Washington Academic Medical Centers. The weekday average number of hits to MINDscape is 35,302, and the weekday average number of individual users is 1252. DICOM images from multiple web servers are now being viewed through the MINDscape electronic medical record.

  15. 78 FR 42775 - CGI Federal, Inc., and Custom Applications Management; Transfer of Data

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-17

    ... develop applications, Web sites, Web pages, web-based applications and databases, in accordance with EPA policies and related Federal standards and procedures. The Contractor will provide [[Page 42776

  16. Design Considerations for a Web-based Database System of ELISpot Assay in Immunological Research

    PubMed Central

    Ma, Jingming; Mosmann, Tim; Wu, Hulin

    2005-01-01

    The enzyme-linked immunospot (ELISpot) assay has been a primary tool in immunological research (for example, in measuring HIV-specific T cell responses). Due to the huge amount of data involved in ELISpot assay testing, a database system is needed for efficient data entry, easy retrieval, secure storage, and convenient data processing. In addition, the NIH has recently issued a policy to promote the sharing of research data (see http://grants.nih.gov/grants/policy/data_sharing). A Web-based database system would clearly benefit data sharing among broad research communities. Here we present design considerations for a database system for the ELISpot assay (DBSEA). PMID:16779326
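
    As one way to make the design considerations concrete, the sketch below sets up a minimal relational schema for ELISpot results using SQLite from the Python standard library; the table and column names are assumptions for illustration and are not the actual DBSEA design.

        # Minimal ELISpot schema sketch (SQLite for brevity; table and column
        # names are assumptions, not the actual DBSEA design).
        import sqlite3

        conn = sqlite3.connect("elispot.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS subject (
            subject_id  TEXT PRIMARY KEY,
            cohort      TEXT
        );
        CREATE TABLE IF NOT EXISTS assay_plate (
            plate_id    INTEGER PRIMARY KEY,
            subject_id  TEXT REFERENCES subject(subject_id),
            assay_date  TEXT,
            antigen     TEXT
        );
        CREATE TABLE IF NOT EXISTS well (
            plate_id    INTEGER REFERENCES assay_plate(plate_id),
            well_pos    TEXT,        -- e.g. 'A1'
            spot_count  INTEGER,     -- spot-forming cells per well
            PRIMARY KEY (plate_id, well_pos)
        );
        """)

        # Typical retrieval: mean spot count per antigen for one subject.
        rows = conn.execute("""
            SELECT p.antigen, AVG(w.spot_count)
            FROM assay_plate p JOIN well w ON w.plate_id = p.plate_id
            WHERE p.subject_id = ?
            GROUP BY p.antigen
        """, ("SUBJ-001",)).fetchall()
        conn.close()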

  17. Improving data management and dissemination in web based information systems by semantic enrichment of descriptive data aspects

    NASA Astrophysics Data System (ADS)

    Gebhardt, Steffen; Wehrmann, Thilo; Klinger, Verena; Schettler, Ingo; Huth, Juliane; Künzer, Claudia; Dech, Stefan

    2010-10-01

    The German-Vietnamese water-related information system for the Mekong Delta (WISDOM) project supports business processes in Integrated Water Resources Management in Vietnam. Multiple disciplines bring together earth-observation and ground-based observation themes, such as environmental monitoring, water management, demographics, economy, information technology, and infrastructural systems. This paper introduces the components of the web-based WISDOM system, including the data, logic and presentation tiers. It focuses on the data models upon which the database management system is built, including techniques for tagging or linking metadata with the stored information. The model also uses ordered groupings of spatial, thematic and temporal reference objects to semantically tag datasets to enable fast data retrieval, such as finding all data in a specific administrative unit belonging to a specific theme. The PostgreSQL database employs a spatial database extension. This object-oriented database was chosen over a relational database to tag spatial objects to tabular data, improving the retrieval of census and observational data at regional, provincial, and local levels. Although the spatial database hinders the processing of raster data, a "work-around" was built into WISDOM to permit efficient management of both raster and vector data. The data model also incorporates styling aspects of the spatial datasets through styled layer descriptions (SLD) and web mapping service (WMS) layer specifications, allowing retrieval of rendered maps. Metadata elements of the spatial data are based on the ISO19115 standard. XML-structured information for the SLD and metadata is stored in an XML database. The data models and the data management system are robust for managing the large quantity of spatial objects, sensor observations, census and document data. The operational WISDOM information system prototype contains modules for data management, automatic data integration, and web services for data retrieval, analysis, and distribution. The graphical user interfaces facilitate metadata cataloguing, data warehousing, web sensor data analysis and thematic mapping.
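
    The kind of semantically tagged retrieval described above can be expressed as a spatially constrained SQL query against a PostGIS-enabled PostgreSQL backend. The sketch below uses psycopg2; the connection details, table names and column names are placeholders rather than the actual WISDOM schema.

        # Hedged sketch of a spatially and thematically constrained query against
        # a PostGIS backend (all names and credentials are placeholders, not the
        # WISDOM schema).
        import psycopg2

        conn = psycopg2.connect(dbname="wisdom", user="reader", host="localhost")
        cur = conn.cursor()

        # Datasets of one theme that intersect one administrative unit within a
        # given time window.
        cur.execute("""
            SELECT d.dataset_id, d.title
            FROM dataset d
            JOIN admin_unit a ON ST_Intersects(d.geom, a.geom)
            WHERE a.name = %s
              AND d.theme = %s
              AND d.observed_at BETWEEN %s AND %s
        """, ("Can Tho", "water_quality", "2009-01-01", "2009-12-31"))

        for dataset_id, title in cur.fetchall():
            print(dataset_id, title)
        cur.close()
        conn.close()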

  18. [The Development and Application of the Orthopaedics Implants Failure Database Software Based on WEB].

    PubMed

    Huang, Jiahua; Zhou, Hai; Zhang, Binbin; Ding, Biao

    2015-09-01

    This article describes the development of new Web-based failure-database software for orthopaedic implants. The software follows the browser/server (B/S) model: ASP dynamic web technology is used as the main development technology to achieve data interactivity, and Microsoft Access is used to create the database; these mature technologies make the software easy to extend or upgrade. The article presents the design and development ideas behind the software, its workflow and functions, and its main technical features. With this software, many different types of failure events of orthopaedic implants can be stored and the failure data can be statistically analyzed; at a macroscopic level, the software can be used to evaluate the reliability of orthopaedic implants and operations and can ultimately guide doctors in improving the level of clinical treatment.

  19. Organizational Alignment Through Information Technology: A Web-Based Approach to Change

    NASA Technical Reports Server (NTRS)

    Heinrichs, W.; Smith, J.

    1999-01-01

    This paper reports on the effectiveness of web-based Internet tools and databases in facilitating the integration of technical organizations through interfaces that minimize the modification required of each organization.

  20. GALT protein database, a bioinformatics resource for the management and analysis of structural features of a galactosemia-related protein and its mutants.

    PubMed

    d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna

    2009-06-01

    We describe the GALT-Prot database and its related web-based application, which have been developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), which is involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the results of analyses of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes the structures of over 100 single-point mutants simulated by means of a computational procedure, and the analysis of each mutant was performed with several bioinformatics programs in order to investigate the effect of the mutations. The web-based interface allows querying of the database, and several links are also provided in order to guarantee tight integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.

  1. dbHiMo: a web-based epigenomics platform for histone-modifying enzymes

    PubMed Central

    Choi, Jaeyoung; Kim, Ki-Tae; Huh, Aram; Kwon, Seomun; Hong, Changyoung; Asiegbu, Fred O.; Jeon, Junhyun; Lee, Yong-Hwan

    2015-01-01

    Over the past two decades, epigenetics has evolved into a key concept for understanding the regulation of gene expression. Among many epigenetic mechanisms, covalent modifications such as acetylation and methylation of lysine residues on core histones have emerged as a major mechanism in epigenetic regulation. Here, we present the database for histone-modifying enzymes (dbHiMo; http://hme.riceblast.snu.ac.kr/) aimed at facilitating functional and comparative analysis of histone-modifying enzymes (HMEs). HMEs were identified by applying a search pipeline built upon profile hidden Markov models (HMMs) to proteomes. The database incorporates 11 576 HMEs identified from 603 proteomes including 483 fungal, 32 plant and 51 metazoan species. dbHiMo provides users with web-based personalized data browsing and analysis tools, supporting comparative and evolutionary genomics. With comprehensive data entries and associated web-based tools, our database will be a valuable resource for future epigenetics/epigenomics studies. Database URL: http://hme.riceblast.snu.ac.kr/ PMID:26055100
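
    A central step in a pipeline of this kind is scanning a proteome with a profile HMM. The sketch below drives HMMER's hmmsearch from Python; the file names and E-value cutoff are assumptions for illustration and not the published dbHiMo pipeline.

        # Profile-HMM scan sketch using HMMER's hmmsearch (file names and the
        # E-value cutoff are assumptions, not the published pipeline).
        import subprocess

        profile = "histone_acetyltransferase.hmm"   # profile HMM for one HME family
        proteome = "proteome.faa"                   # predicted proteins of one species
        table_out = "hits.tbl"

        # --tblout writes one parseable line per target sequence hit.
        subprocess.run(
            ["hmmsearch", "--tblout", table_out, "-E", "1e-5", profile, proteome],
            check=True,
        )

        # Collect target sequence names of the reported hits (first column).
        hits = []
        with open(table_out) as fh:
            for line in fh:
                if not line.startswith("#"):
                    hits.append(line.split()[0])
        print(f"{len(hits)} candidate histone-modifying enzymes")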

  2. WebCN: A web-based computation tool for in situ-produced cosmogenic nuclides

    NASA Astrophysics Data System (ADS)

    Ma, Xiuzeng; Li, Yingkui; Bourgeois, Mike; Caffee, Marc; Elmore, David; Granger, Darryl; Muzikar, Paul; Smith, Preston

    2007-06-01

    Cosmogenic nuclide techniques are increasingly being utilized in geoscience research. For this it is critical to establish an effective, easily accessible and well defined tool for cosmogenic nuclide computations. We have been developing a web-based tool (WebCN) to calculate surface exposure ages and erosion rates based on the nuclide concentrations measured by the accelerator mass spectrometry. WebCN for 10Be and 26Al has been finished and published at http://www.physics.purdue.edu/primelab/for_users/rockage.html. WebCN for 36Cl is under construction. WebCN is designed as a three-tier client/server model and uses the open source PostgreSQL for the database management and PHP for the interface design and calculations. On the client side, an internet browser and Microsoft Access are used as application interfaces to access the system. Open Database Connectivity is used to link PostgreSQL and Microsoft Access. WebCN accounts for both spatial and temporal distributions of the cosmic ray flux to calculate the production rates of in situ-produced cosmogenic nuclides at the Earth's surface.
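
    For the simplest case handled by such calculators, a surface exposed at a constant production rate P with no erosion accumulates a radionuclide as N(t) = (P/lambda)(1 - exp(-lambda t)), so the apparent exposure age is t = -ln(1 - N lambda / P) / lambda. The sketch below applies this to rough, placeholder 10Be numbers; the real tool additionally scales production rates for latitude, altitude, erosion and the time-varying cosmic-ray flux.

        # Toy zero-erosion exposure-age calculation (placeholder values; WebCN
        # additionally applies spatial and temporal scaling of production rates).
        import math

        half_life_yr = 1.387e6                  # approximate 10Be half-life, years
        lam = math.log(2) / half_life_yr        # decay constant, 1/yr

        P = 4.0      # assumed local production rate, atoms g^-1 yr^-1
        N = 2.0e5    # measured concentration, atoms g^-1

        # N(t) = (P/lam) * (1 - exp(-lam*t))  =>  t = -ln(1 - N*lam/P) / lam
        t = -math.log(1.0 - N * lam / P) / lam
        print(f"apparent exposure age ~ {t:,.0f} yr")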

  3. WebCSD: the online portal to the Cambridge Structural Database

    PubMed Central

    Thomas, Ian R.; Bruno, Ian J.; Cole, Jason C.; Macrae, Clare F.; Pidcock, Elna; Wood, Peter A.

    2010-01-01

    WebCSD, a new web-based application developed by the Cambridge Crystallographic Data Centre, offers fast searching of the Cambridge Structural Database using only a standard internet browser. Search facilities include two-dimensional substructure, molecular similarity, text/numeric and reduced cell searching. Text, chemical diagrams and three-dimensional structural information can all be studied in the results browser using the efficient entry summaries and embedded three-dimensional viewer. PMID:22477776

  4. Columba: an integrated database of proteins, structures, and annotations.

    PubMed

    Trissl, Silke; Rother, Kristian; Müller, Heiko; Steinke, Thomas; Koch, Ina; Preissner, Robert; Frömmel, Cornelius; Leser, Ulf

    2005-03-31

    Structural and functional research often requires the computation of sets of protein structures based on certain properties of the proteins, such as sequence features, fold classification, or functional annotation. Compiling such sets using current web resources is tedious because the necessary data are spread over many different databases. To facilitate this task, we have created COLUMBA, an integrated database of annotations of protein structures. COLUMBA currently integrates twelve different databases, including PDB, KEGG, Swiss-Prot, CATH, SCOP, the Gene Ontology, and ENZYME. The database can be searched using either keyword search or data source-specific web forms. Users can thus quickly select and download PDB entries that, for instance, participate in a particular pathway, are classified as containing a certain CATH architecture, are annotated as having a certain molecular function in the Gene Ontology, and whose structures have a resolution under a defined threshold. The results of queries are provided in both machine-readable extensible markup language and human-readable format. The structures themselves can be viewed interactively on the web. The COLUMBA database facilitates the creation of protein structure data sets for many structure-based studies. It allows to combine queries on a number of structure-related databases not covered by other projects at present. Thus, information on both many and few protein structures can be used efficiently. The web interface for COLUMBA is available at http://www.columba-db.de.

  5. The GLIMS Glacier Database

    NASA Astrophysics Data System (ADS)

    Raup, B. H.; Khalsa, S. S.; Armstrong, R.

    2007-12-01

    The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application, or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER or glacier outlines from 2002 only, or from Autumn in any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats. The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language) and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which include various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
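
    Because the map interface is exposed as an OGC-compliant WMS, any client can request a rendered glacier layer with a standard GetMap call. The sketch below builds such a request with Python's requests library; the base URL, layer name and bounding box are placeholders, while the parameter set follows the WMS 1.1.1 convention.

        # OGC WMS 1.1.1 GetMap request sketch (base URL and layer name are
        # placeholders; the parameter set is the standard WMS one).
        import requests

        WMS_BASE = "https://example.org/glims/wms"    # placeholder endpoint

        params = {
            "SERVICE": "WMS",
            "VERSION": "1.1.1",
            "REQUEST": "GetMap",
            "LAYERS": "glacier_outlines",             # hypothetical layer name
            "STYLES": "",
            "SRS": "EPSG:4326",
            "BBOX": "86.5,27.5,87.5,28.5",            # lon/lat box, illustrative
            "WIDTH": 800,
            "HEIGHT": 800,
            "FORMAT": "image/png",
            "TRANSPARENT": "TRUE",
        }
        resp = requests.get(WMS_BASE, params=params, timeout=60)
        resp.raise_for_status()
        with open("glaciers.png", "wb") as fh:
            fh.write(resp.content)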

  6. Curatr: a web application for creating, curating and sharing a mass spectral library.

    PubMed

    Palmer, Andrew; Phapale, Prasad; Fay, Dominik; Alexandrov, Theodore

    2018-04-15

    We have developed a web application, curatr, for the rapid generation of high-quality mass spectral fragmentation libraries from liquid-chromatography mass spectrometry datasets. Curatr handles datasets from single or multiplexed standards and extracts chromatographic profiles and potential fragmentation spectra for multiple adducts. An intuitive interface helps users to select high-quality spectra that are stored along with searchable molecular information, the provenance of each standard and experimental metadata. Curatr supports exports to several standard formats for use with third-party software or submission to repositories. We demonstrate the use of curatr to generate the EMBL Metabolomics Core Facility spectral library http://curatr.mcf.embl.de. Source code and example data are at http://github.com/alexandrovteam/curatr/. palmer@embl.de. Supplementary data are available at Bioinformatics online.

  7. WheatGenome.info: A Resource for Wheat Genomics Resource.

    PubMed

    Lai, Kaitao

    2016-01-01

    An integrated database named WheatGenome.info, hosting wheat genome and genomic data through a variety of Web-based systems, has been developed to support wheat research and crop improvement. The resource includes multiple Web-based applications: a GBrowse2-based wheat genome viewer with a BLAST search portal, TAGdb for searching wheat second-generation genome sequence data, wheat autoSNPdb, links to wheat genetic maps using CMap and CMap3D, and a wheat genome Wiki to allow interaction between diverse wheat genome sequencing activities. The portal also provides links to a variety of wheat genome resources hosted at other research organizations. This integrated database aims to accelerate wheat genome research and is freely accessible via the web interface at http://www.wheatgenome.info/ .

  8. Management and Stewardship of Airborne Observational Data for the NSF/NCAR HIAPER (GV) and NSF/NCAR C-130 at the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL)

    NASA Astrophysics Data System (ADS)

    Aquino, J.

    2014-12-01

    The National Science Foundation (NSF) provides the National Center for Atmospheric Research (NCAR) Earth Observing Laboratory (EOL) funding for the operation, maintenance and upgrade of two research aircraft: the NSF/NCAR High-performance Instrumented Airborne Platform for Environmental Research (HIAPER) Gulfstream V and the NSF/NCAR Hercules C-130. A suite of in-situ and remote sensing airborne instruments housed at the EOL Research Aviation Facility (RAF) provide a basic set of measurements that are typically deployed on most airborne field campaigns. In addition, instruments to address more specific research requirements are provided by collaborating participants from universities, industry, NASA, NOAA or other agencies. The data collected are an important legacy of these field campaigns. A comprehensive metadata database and integrated cyber-infrastructure, along with a robust data workflow that begins during the field phase and extends to long-term archival (current aircraft data holdings go back to 1967), assures that: all data and associated software are safeguarded throughout the data handling process; community standards of practice for data stewardship and software version control are followed; simple and timely community access to collected data and associated software tools are provided; and the quality of the collected data is preserved, with the ultimate goal of supporting research and the reproducibility of published results. The components of this data system to be presented include: robust, searchable web access to data holdings; reliable, redundant data storage; web-based tools and scripts for efficient creation, maintenance and update of data holdings; access to supplemental data and documentation; storage of data in standardized data formats; comprehensive metadata collection; mature version control; human-discernable storage practices; and procedures to inform users of changes. In addition, lessons learned, shortcomings, and desired upgrades will be discussed.

  9. The Efficiency of Musical Emotions for the Reconciliation of Conceptual Dissonances

    DTIC Science & Technology

    2013-10-24

    Final/Annual/Midterm Report for AOARD Grant 114103 "The efficiency of musical emotions for the reconciliation of conceptual...and will be added to a searchable DoD database. In the present project, PI developed theoretical foundation for the evolution of music in...which was experimentally created in 4-year-old children, who obeyed an experimenter’s warning not to play with a desired toy. Without exposure to music

  10. Implementation of an EPN-TAP Service to Improve Accessibility to the Planetary Science Archive

    NASA Astrophysics Data System (ADS)

    Macfarlane, A.; Barabarisi, I.; Docasal, R.; Rios, C.; Saiz, J.; Vallejo, F.; Martinez, S.; Arviset, C.; Besse, S.; Vallat, C.

    2017-09-01

    The re-engineered PSA focuses on improved access to, and searchability of, ESA's planetary science data. In addition to the new web interface released in January 2017, the new PSA supports several common planetary protocols in order to increase the visibility of the data and the ways in which they may be queried and retrieved. Work is ongoing to provide an EPN-TAP service covering as wide a range of parameters as possible to facilitate the discovery of scientific data and interoperability of the archive.
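
    A TAP service of the kind described exposes a synchronous endpoint that accepts ADQL queries. The sketch below posts such a query with Python's requests library; the service URL and schema name are placeholders, and the column names follow common EPN-TAP conventions but should be checked against the actual service.

        # Synchronous TAP/ADQL query sketch (service URL and schema name are
        # placeholders; verify column names against the actual EPN-TAP service).
        import requests

        TAP_SYNC = "https://example.org/tap/sync"     # placeholder TAP endpoint

        adql = """
        SELECT TOP 20 granule_uid, target_name, dataproduct_type
        FROM schema_name.epn_core
        WHERE target_name = 'Mars'
        """

        resp = requests.post(TAP_SYNC, data={
            "REQUEST": "doQuery",
            "LANG": "ADQL",
            "FORMAT": "csv",
            "QUERY": adql,
        }, timeout=60)
        resp.raise_for_status()
        print(resp.text.splitlines()[:5])    # header plus the first few granules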

  11. Using Web-based Tutorials To Enhance Library Instruction.

    ERIC Educational Resources Information Center

    Kocour, Bruce G.

    2000-01-01

    Describes the development of a Web site for library instruction at Carson-Newman College (TN) and its integration into English composition courses. Describes the use of a virtual tour, a tutorial on database searching, tutorials on specific databases, and library guides to specific disciplines to create an effective mechanism for active learning.…

  12. Relax with CouchDB--into the non-relational DBMS era of bioinformatics.

    PubMed

    Manyam, Ganiraju; Payton, Michelle A; Roth, Jack A; Abruzzo, Lynne V; Coombes, Kevin R

    2012-07-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. NABIC marker database: A molecular markers information network of agricultural crops.

    PubMed

    Kim, Chang-Kug; Seol, Young-Joo; Lee, Dong-Jun; Jeong, In-Seon; Yoon, Ung-Han; Lee, Gang-Seob; Hahn, Jang-Ho; Park, Dong-Suk

    2013-01-01

    In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed a molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker and gene annotation. It provides 7250 marker locations, 3301 RSN marker properties and 3280 molecular marker annotations for agricultural plants. Each molecular marker record provides information such as the marker name, expressed sequence tag number, gene definition and general marker information. This updated marker database provides useful information through a user-friendly web interface that assists in tracing new chromosome structures and gene positions using specific molecular markers. The database is available for free at http://nabic.rda.go.kr/gere/rice/molecularMarkers/

  14. Analysis and visualization of Arabidopsis thaliana GWAS using web 2.0 technologies.

    PubMed

    Huang, Yu S; Horton, Matthew; Vilhjálmsson, Bjarni J; Seren, Umit; Meng, Dazhe; Meyer, Christopher; Ali Amer, Muhammad; Borevitz, Justin O; Bergelson, Joy; Nordborg, Magnus

    2011-01-01

    With large-scale genomic data becoming the norm in biological studies, the storing, integrating, viewing and searching of such data have become a major challenge. In this article, we describe the development of an Arabidopsis thaliana database that hosts the geographic information and genetic polymorphism data for over 6000 accessions and genome-wide association study (GWAS) results for 107 phenotypes representing the largest collection of Arabidopsis polymorphism data and GWAS results to date. Taking advantage of a series of the latest web 2.0 technologies, such as Ajax (Asynchronous JavaScript and XML), GWT (Google-Web-Toolkit), MVC (Model-View-Controller) web framework and Object Relationship Mapper, we have created a web-based application (web app) for the database, that offers an integrated and dynamic view of geographic information, genetic polymorphism and GWAS results. Essential search functionalities are incorporated into the web app to aid reverse genetics research. The database and its web app have proven to be a valuable resource to the Arabidopsis community. The whole framework serves as an example of how biological data, especially GWAS, can be presented and accessed through the web. In the end, we illustrate the potential to gain new insights through the web app by two examples, showcasing how it can be used to facilitate forward and reverse genetics research. Database URL: http://arabidopsis.usc.edu/

  15. Extending student knowledge and interest through super-curricular activities

    NASA Astrophysics Data System (ADS)

    Zetie, K. P.

    2018-03-01

    Any teacher of physics is likely to consider super-curricular reading as an important strategy for successful students. However, there are many more ways to extend a student’s interest in a subject than reading books, and undirected reading (such as providing a long out of date reading list) is not likely to be as helpful as targeted or directed study. I present an approach to directing and supporting additional study pioneered at St Paul’s School in the last 2 years based on two significant steps: • Providing a large, searchable database of reading and other material such as podcasts rather than simply a reading list. • Encouraging students to visualise and plot their trajectory toward a specific goal using a graph

  16. Resources | Division of Cancer Prevention

    Cancer.gov

    Manual of Operations Version 3, 12/13/2012 (PDF, 162KB) Database Sources Consortium for Functional Glycomics databases Design Studies Related to the Development of Distributed, Web-based European Carbohydrate Databases (EUROCarbDB)

  17. GrTEdb: the first web-based database of transposable elements in cotton (Gossypium raimondii).

    PubMed

    Xu, Zhenzhen; Liu, Jing; Ni, Wanchao; Peng, Zhen; Guo, Yue; Ye, Wuwei; Huang, Fang; Zhang, Xianggui; Xu, Peng; Guo, Qi; Shen, Xinlian; Du, Jianchang

    2017-01-01

    Although several diploid and tetraploid Gossypium species genomes have been sequenced, a well-annotated web-based transposable element (TE) database has been lacking. To better understand the roles of TEs in the structural, functional and evolutionary dynamics of the cotton genome, a comprehensive, specific, and user-friendly web-based database, the Gossypium raimondii transposable elements database (GrTEdb), was constructed. A total of 14 332 TEs were structurally annotated and clearly categorized in the G. raimondii genome, and these elements have been classified into seven distinct superfamilies based on the order of protein-coding domains, structures and/or sequence similarity, including 2929 Copia-like elements, 10 368 Gypsy-like elements, 299 L1s, 12 Mutators, 435 PIF-Harbingers, 275 CACTAs and 14 Helitrons. Meanwhile, web-based sequence browsing, searching, downloading and BLAST tools were implemented to help users easily and effectively annotate TEs or TE fragments in genomic sequences from G. raimondii and other closely related Gossypium species. GrTEdb provides resources and information related to TEs in G. raimondii, and will facilitate gene and genome analyses within or across Gossypium species, evaluation of the impact of TEs on their host genomes, and investigation of the potential interaction between TEs and protein-coding genes in Gossypium species. http://www.grtedb.org/. © The Author(s) 2017. Published by Oxford University Press.

  18. FERN Ethnomedicinal Plant Database: Exploring Fern Ethnomedicinal Plants Knowledge for Computational Drug Discovery.

    PubMed

    Thakar, Sambhaji B; Ghorpade, Pradnya N; Kale, Manisha V; Sonawane, Kailas D

    2015-01-01

    Fern plants are known for their ethnomedicinal applications. A huge amount of information on medicinal fern plants is scattered across the literature in text form; hence, database development is an appropriate endeavor to cope with the situation. Given the importance of medicinally useful fern plants, we developed a web-based database that contains information about several groups of ferns, their medicinal uses and chemical constituents, as well as protein/enzyme sequences isolated from different fern plants. The FERN ethnomedicinal plant database is an all-embracing, content-managed, web-based database system used to retrieve a collection of factual knowledge related to ethnomedicinal fern species. Most of the protein/enzyme sequences have been extracted from the NCBI protein sequence database. The fern species, family name, identification, NCBI taxonomy ID, geographical occurrence, conditions for which the plant has been tried, plant parts used, ethnomedicinal importance and morphological characteristics were collected from various scientific journals and other literature available in text form. Links to NCBI BLAST, InterPro, phylogeny and Clustal W web resources are also provided for future comparative studies, so users can obtain information related to fern plants and their medicinal applications in one place. The database includes information on 100 medicinal fern species. This web-based database should be advantageous for deriving information, particularly for computational drug discovery, and for botanists and others interested in botany, pharmacologists, researchers, biochemists, plant biotechnologists, ayurvedic practitioners, doctors/pharmacists, traditional medicine users, farmers, agricultural students and teachers from universities and colleges, and fern plant enthusiasts. This effort should provide users with essential knowledge about applications for drug discovery and about the conservation of fern species around the world, and should help create social awareness.

  19. CREDO: a structural interactomics database for drug discovery

    PubMed Central

    Schreyer, Adrian M.; Blundell, Tom L.

    2013-01-01

    CREDO is a unique relational database storing all pairwise atomic interactions of inter- as well as intra-molecular contacts between small molecules and macromolecules found in experimentally determined structures from the Protein Data Bank. These interactions are integrated with further chemical and biological data. The database implements useful data structures and algorithms such as cheminformatics routines to create a comprehensive analysis platform for drug discovery. The database can be accessed through a web-based interface, downloads of data sets and web services at http://www-cryst.bioc.cam.ac.uk/credo. Database URL: http://www-cryst.bioc.cam.ac.uk/credo PMID:23868908

  20. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    NASA Astrophysics Data System (ADS)

    Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal

    2005-11-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.

  1. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    USGS Publications Warehouse

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. ?? 2005 Elsevier Ltd. All rights reserved.

  2. Accessing the SEED genome databases via Web services API: tools for programmers.

    PubMed

    Disz, Terry; Akhter, Sajia; Cuevas, Daniel; Olson, Robert; Overbeek, Ross; Vonstein, Veronika; Stevens, Rick; Edwards, Robert A

    2010-06-14

    The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept that leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including the new genomes as soon as they come online.
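
    The record above describes programmatic access through a Web services API. As a hedged illustration only, the short Python sketch below shows the general pattern of calling such a JSON-over-HTTP annotation service; the endpoint URL, method name and parameter names are placeholders invented for this example and are not the actual SEED servers API.

```python
# Hypothetical sketch of programmatic access to an annotation web service, in the spirit
# of the SEED servers API described above. The URL, method name and parameters below are
# placeholders, not the real SEED interface.
import json
import urllib.request

SERVICE_URL = "https://servers.example.org/seed_api"   # placeholder endpoint

def call_service(method, params):
    """POST a JSON request to the (hypothetical) annotation service and return the parsed reply."""
    payload = json.dumps({"method": method, "params": params}).encode("utf-8")
    req = urllib.request.Request(SERVICE_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # e.g. fetch annotated features of one genome by a made-up identifier
    reply = call_service("get_genome_features", {"genome_id": "83333.1"})
    for feature in reply.get("features", [])[:5]:
        print(feature.get("id"), feature.get("function"))
```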

  3. NeisseriaBase: a specialised Neisseria genomic resource and analysis platform.

    PubMed

    Zheng, Wenning; Mutha, Naresh V R; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah; Choo, Siew Woh

    2016-01-01

    Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In the present study we describe the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI) and annotated using the RAST server, and were then stored in the MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity and molecular weight (Da) using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house bioinformatics tools implemented in NeisseriaBase were developed using the Python, Perl, BioPerl and R languages. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes. NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my.
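
    The Methods above mention computing per-gene statistics such as GC content with in-house Perl scripts. As a rough sketch of that kind of calculation (not the project's actual code), the following Python function computes the GC content of a nucleotide sequence; the example sequence is made up.

```python
# Minimal sketch of the kind of per-gene statistic described above (GC content of a
# coding sequence); the example sequence is invented for illustration.
def gc_content(seq: str) -> float:
    """Return GC content (%) of a nucleotide sequence, ignoring ambiguous bases."""
    seq = seq.upper()
    counted = [b for b in seq if b in "ACGT"]
    if not counted:
        return 0.0
    gc = sum(1 for b in counted if b in "GC")
    return 100.0 * gc / len(counted)

print(round(gc_content("ATGGCGTGCAACGTTAGCTAA"), 2))  # prints 47.62
```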

  4. NeisseriaBase: a specialised Neisseria genomic resource and analysis platform

    PubMed Central

    Zheng, Wenning; Mutha, Naresh V.R.; Heydari, Hamed; Dutta, Avirup; Siow, Cheuk Chuen; Jakubovics, Nicholas S.; Wee, Wei Yee; Tan, Shi Yang; Ang, Mia Yang; Wong, Guat Jah

    2016-01-01

    Background. The gram-negative Neisseria is associated with two of the most potent human epidemic diseases: meningococcal meningitis and gonorrhoea. In both cases, disease is caused by bacteria colonizing human mucosal membrane surfaces. Overall, the genus shows great diversity and genetic variation mainly due to its ability to acquire and incorporate genetic material from a diverse range of sources through horizontal gene transfer. Although a number of databases exist for the Neisseria genomes, they are mostly focused on the pathogenic species. In the present study we describe the freely available NeisseriaBase, a database dedicated to the genus Neisseria encompassing the complete and draft genomes of 15 pathogenic and commensal Neisseria species. Methods. The genomic data were retrieved from the National Center for Biotechnology Information (NCBI) and annotated using the RAST server, and were then stored in the MySQL database. The protein-coding genes were further analyzed to obtain information such as GC content (%), predicted hydrophobicity and molecular weight (Da) using in-house Perl scripts. The web application was developed following the secure four-tier web application architecture: (1) client workstation, (2) web server, (3) application server, and (4) database server. The web interface was constructed using PHP, JavaScript, jQuery, AJAX and CSS, utilizing the model-view-controller (MVC) framework. The in-house bioinformatics tools implemented in NeisseriaBase were developed using the Python, Perl, BioPerl and R languages. Results. Currently, NeisseriaBase houses 603,500 Coding Sequences (CDSs), 16,071 RNAs and 13,119 tRNA genes from 227 Neisseria genomes. The database is equipped with interactive web interfaces. Incorporation of the JBrowse genome browser in the database enables fast and smooth browsing of Neisseria genomes. NeisseriaBase includes the standard BLAST program to facilitate homology searching, and for Virulence Factor Database (VFDB) specific homology searches, the VFDB BLAST is also incorporated into the database. In addition, NeisseriaBase is equipped with in-house designed tools such as the Pairwise Genome Comparison tool (PGC) for comparative genomic analysis and the Pathogenomics Profiling Tool (PathoProT) for the comparative pathogenomics analysis of Neisseria strains. Discussion. This user-friendly database not only provides access to a host of genomic resources on Neisseria but also enables high-quality comparative genome analysis, which is crucial for the expanding scientific community interested in Neisseria research. This database is freely available at http://neisseria.um.edu.my. PMID:27017950

  5. Web-based access to near real-time and archived high-density time-series data: cyber infrastructure challenges & developments in the open-source Waveform Server

    NASA Astrophysics Data System (ADS)

    Reyes, J. C.; Vernon, F. L.; Newman, R. L.; Steidl, J. H.

    2010-12-01

    The Waveform Server is an interactive web-based interface to multi-station, multi-sensor and multi-channel high-density time-series data stored in Center for Seismic Studies (CSS) 3.0 schema relational databases (Newman et al., 2009). In the last twelve months, based on expanded specifications and current user feedback, both the server-side infrastructure and the client-side interface have been extensively rewritten. The Python Twisted server-side code base has been fundamentally modified and now presents waveform data stored in cluster-based databases using a multi-threaded architecture, in addition to supporting the pre-existing single-database model. This allows interactive web-based access to high-density (broadband @ 40 Hz to strong motion @ 200 Hz) waveform data that can span multiple years, the common lifetime of broadband seismic networks. The client-side interface expands on its use of simple JSON-based AJAX queries to incorporate a variety of User Interface (UI) improvements, including standardized calendars for defining time ranges, on-the-fly data calibration to display SI-unit data, and increased rendering speed. This presentation will outline the various cyber infrastructure challenges we have faced while developing this application, the use cases currently in existence, and the limitations of web-based application development.
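
    The client described above issues simple JSON-based AJAX queries for station/channel time windows. The sketch below is a hypothetical illustration of such a request from Python; the endpoint URL and parameter names are invented for this example and do not reflect the Waveform Server's actual request format.

```python
# Hypothetical sketch of the kind of JSON time-series query a waveform web client might
# send for one station/channel/time window; URL and field names are placeholders.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://waveforms.example.edu/query"   # placeholder endpoint

params = {
    "sta": "ANMO", "chan": "BHZ",
    "start": "2010-10-01T00:00:00", "end": "2010-10-01T01:00:00",
    "calib": "true",                 # ask for calibrated, SI-unit samples
}
url = BASE_URL + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(url) as resp:
    data = json.loads(resp.read().decode("utf-8"))
print(len(data.get("samples", [])), "samples returned")
```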

  6. Web-based Electronic Sharing and RE-allocation of Assets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leverett, Dave; Miller, Robert A.; Berlin, Gary J.

    2002-09-09

    The Electronic Asset Sharing Program is a web-based application that provides the capability for complex-wide sharing and reallocation of assets that are excess, under-utilized, or un-utilized. Through a web-based front-end and a supporting hash database with a search engine, users can search for assets that they need, search for assets needed by others, enter assets they need, and enter assets they have available for reallocation. In addition, entire listings of available assets and needed assets can be viewed. The application is written in Java; the hash database and search engine are in Object-oriented Java Database Management (OJDBM). The application will be hosted on an SRS-managed server outside the firewall and access will be controlled via a protected realm. An example of the application can be viewed at the following (temporary) URL: http://idgdev.srs.gov/servlet/srs.weshare.WeShare

  7. Modernized Techniques for Dealing with Quality Data and Derived Products

    NASA Astrophysics Data System (ADS)

    Neiswender, C.; Miller, S. P.; Clark, D.

    2008-12-01

    "I just want a picture of the ocean floor in this area" is expressed all too often by researchers, educators, and students in the marine geosciences. As more sophisticated systems are developed to handle data collection and processing, the demand for quality data, and standardized products continues to grow. Data management is an invisible bridge between science and researchers/educators. The SIOExplorer digital library presents more than 50 years of ocean-going research. Prior to publication, all data is checked for quality using standardized criterion developed for each data stream. Despite the evolution of data formats and processing systems, SIOExplorer continues to present derived products in well- established formats. Standardized products are published for each cruise, and include a cruise report, MGD77 merged data, multi-beam flipbook, and underway profiles. Creation of these products is made possible by processing scripts, which continue to change with ever-evolving data formats. We continue to explore the potential of database-enabled creation of standardized products, such as the metadata-rich MGD77 header file. Database-enabled, automated processing produces standards-compliant metadata for each data and derived product. Metadata facilitates discovery and interpretation of published products. This descriptive information is stored both in an ASCII file, and a searchable digital library database. SIOExplorer's underlying technology allows focused search and retrieval of data and products. For example, users can initiate a search of only multi-beam data, which includes data-specific parameters. This customization is made possible with a synthesis of database, XML, and PHP technology. The combination of standardized products and digital library technology puts quality data and derived products in the hands of scientists. Interoperable systems enable distribution these published resources using technology such as web services. By developing modernized strategies to deal with data, Scripps Institution of Oceanography is able to produce and distribute well-formed, and quality-tested derived products, which aid research, understanding, and education.

  8. The AVO Website - a Comprehensive Tool for Information Management and Dissemination

    NASA Astrophysics Data System (ADS)

    Snedigar, S.; Cameron, C.; Nye, C. J.

    2008-12-01

    The Alaska Volcano Observatory (AVO) website serves as a primary information management, browsing, and dissemination tool. It is database-driven, and thus easy to maintain and update. There are two different, yet fully integrated, parts of the website. An external site (www.avo.alaska.edu) allows the general public to track eruptive activity by viewing the latest photographs, webcam images, seismic data, and official information releases about the volcano, as well as maps, previous eruption information, and bibliographies. This website is also the single most comprehensive source of Alaska volcano information available. The database now contains 14,000 images, 3,300 of which are publicly viewable, and 4,300 bibliographic citations, many linked to full-text downloadable files. The internal portion of the website is essential to routine observatory operations, and hosts browse images of diverse geophysical and geological data in a format accessible by AVO staff regardless of location. An observation log allows users to enter information about anything from satellite passes to seismic activity to ash fall reports into a searchable database, and has become the permanent record of observatory function. The individual(s) on duty at home, at the watch office, or elsewhere use forms on the internal website to log information about volcano activity. These data are then automatically parsed into a number of primary activity notices which are the formal communication to appropriate agencies and interested individuals. Geochemistry, geochronology, and geospatial data modules are currently being developed. The website receives over 100 million hits and serves 1,300 GB of data annually. It is dynamically generated from a MySQL database with over 300 tables and several thousand lines of PHP code that generate the actual web display. The primary webserver is housed at (but not owned by) the University of Alaska Fairbanks, and currently holds 200 GB of data. Webcam images, webicorder graphs, earthquake location plots, and spectrograms are pulled and generated by other servers in Fairbanks and Anchorage.

  9. Developing a healthy web-based cookbook for pediatric cancer patients and survivors: rationale and methods.

    PubMed

    Li, Rhea; Raber, Margaret; Chandra, Joya

    2015-03-31

    Obesity has been a growing problem among children and adolescents in the United States for a number of decades. Childhood cancer survivors (CCS) are more susceptible to the downstream health consequences of obesity such as cardiovascular disease, endocrine issues, and risk of cancer recurrence due to late effects of treatment and suboptimal dietary and physical activity habits. The objective of this study was to document the development of a Web-based cookbook of healthy recipes and nutrition resources to help enable pediatric cancer patients and survivors to lead healthier lifestyles. The Web-based cookbook, named "@TheTable", was created by a committee of researchers, a registered dietitian, patients and family members, a hospital chef, and community advisors and donors. Recipes were collected from several sources including recipe contests and social media. We incorporated advice from current patients, parents, and CCS. Over 400 recipes, searchable by several categories and with accompanying nutritional information, are currently available on the website. In addition to healthy recipes, social media functionality and cooking videos are integrated into the website. The website also features nutrition information resources including nutrition and cooking tip sheets available on several subjects. The "@TheTable" website is a unique resource for promoting healthy lifestyles spanning pediatric oncology prevention, treatment, and survivorship. Through evaluations of the website's current and future use, as well as incorporation into interventions designed to promote energy balance, we will continue to adapt and build this unique resource to serve cancer patients, survivors, and the general public.

  10. Developing a Healthy Web-Based Cookbook for Pediatric Cancer Patients and Survivors: Rationale and Methods

    PubMed Central

    Raber, Margaret

    2015-01-01

    Background Obesity has been a growing problem among children and adolescents in the United States for a number of decades. Childhood cancer survivors (CCS) are more susceptible to the downstream health consequences of obesity such as cardiovascular disease, endocrine issues, and risk of cancer recurrence due to late effects of treatment and suboptimal dietary and physical activity habits. Objective The objective of this study was to document the development of a Web-based cookbook of healthy recipes and nutrition resources to help enable pediatric cancer patients and survivors to lead healthier lifestyles. Methods The Web-based cookbook, named “@TheTable”, was created by a committee of researchers, a registered dietitian, patients and family members, a hospital chef, and community advisors and donors. Recipes were collected from several sources including recipe contests and social media. We incorporated advice from current patients, parents, and CCS. Results Over 400 recipes, searchable by several categories and with accompanying nutritional information, are currently available on the website. In addition to healthy recipes, social media functionality and cooking videos are integrated into the website. The website also features nutrition information resources including nutrition and cooking tip sheets available on several subjects. Conclusions The “@TheTable” website is a unique resource for promoting healthy lifestyles spanning pediatric oncology prevention, treatment, and survivorship. Through evaluations of the website’s current and future use, as well as incorporation into interventions designed to promote energy balance, we will continue to adapt and build this unique resource to serve cancer patients, survivors, and the general public. PMID:25840596

  11. SNPchiMp: a database to disentangle the SNPchip jungle in bovine livestock.

    PubMed

    Nicolazzi, Ezequiel Luis; Picciolini, Matteo; Strozzi, Francesco; Schnabel, Robert David; Lawley, Cindy; Pirani, Ali; Brew, Fiona; Stella, Alessandra

    2014-02-11

    Currently, six commercial whole-genome SNP chips are available for cattle genotyping, produced by two different genotyping platforms. Technical issues need to be addressed to combine data that originate from the different platforms, or from different versions of the same array generated by the manufacturer. For example: i) genome coordinates for SNPs may refer to different genome assemblies; ii) reference genome sequences are updated over time, changing positions or even removing sequences which contain SNPs; iii) not all commercial SNP IDs are searchable within public databases; iv) SNPs can be coded using different formats and referencing different strands (e.g. A/B or A/C/T/G alleles, referencing forward/reverse, top/bottom or plus/minus strand); v) due to new information being discovered, higher-density chips do not necessarily include all the SNPs present in the lower-density chips; and vi) SNP IDs may not be consistent across chips and platforms. Most researchers and breed associations manage SNP data in real-time and thus require tools to standardise data in a user-friendly manner. Here we present SNPchiMp, a MySQL database linked to an open-access web-based interface. Features of this interface include, but are not limited to, the following functions: 1) referencing the SNP mapping information to the latest genome assembly, 2) extraction of information contained in dbSNP for SNPs present in all commercially available bovine chips, and 3) identification of SNPs in common between two or more bovine chips (e.g. for SNP imputation from lower to higher density). In addition, SNPchiMp can retrieve this information on subsets of SNPs, accessing such data either via physical position on a supported assembly, or by a list of SNP IDs, rs or ss identifiers. This tool combines many different sources of information that otherwise are time-consuming to obtain and difficult to integrate. SNPchiMp not only provides the information in a user-friendly format, but also enables researchers to perform a large number of operations with a few clicks of the mouse. This significantly reduces the time needed to execute the large number of operations required to manage SNP data.
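
    One of the functions listed above is identifying SNPs shared between two or more chips, for example as anchors for imputation from lower to higher density. The toy Python sketch below illustrates the underlying set operation with made-up SNP identifiers; the real service performs this against its MySQL database.

```python
# Generic illustration of the "SNPs in common between two chips" idea described above,
# using made-up SNP identifiers; SNPchiMp itself resolves these against dbSNP and its
# own MySQL tables.
chip_low_density  = {"rs109234250", "rs110343895", "rs29020544", "rs41255599"}
chip_high_density = {"rs109234250", "rs29020544", "rs137853337", "rs41255599", "rs43702337"}

shared = chip_low_density & chip_high_density      # usable as imputation anchors
only_high = chip_high_density - chip_low_density   # SNPs to be imputed on low-density animals

print(f"{len(shared)} shared SNPs:", sorted(shared))
print(f"{len(only_high)} high-density-only SNPs:", sorted(only_high))
```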

  12. Comprehensive mollusk acute toxicity database improves the use of Interspecies Correlation Estimation (ICE) models to predict toxicity of untested freshwater and endangered mussel species

    EPA Science Inventory

    Interspecies correlation estimation (ICE) models extrapolate acute toxicity data from surrogate test species to untested taxa. A suite of ICE models developed from a comprehensive database is available on the US Environmental Protection Agency’s web-based application, Web-I...

  13. Designing a Relational Database for the Basic School; Schools Command Web Enabled Officer and Enlisted Database (Sword)

    DTIC Science & Technology

    2002-06-01

    ... Student memo for personnel MCLLS ... Migrate data to SQL Server ... The Web Server is on the same server as the SWORD database in the current version. ... still be supported by Access. SQL Server would be a more viable tool for a fully developed application based on the number of potential users and

  14. Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize

    PubMed Central

    2010-01-01

    Background Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. Results In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. Conclusions CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. The web-based interface gives researchers different query options for mining the database across different types of experiments. The database is publicly available at http://agbase.msstate.edu. PMID:20946609

  15. Integrated database for identifying candidate genes for Aspergillus flavus resistance in maize.

    PubMed

    Kelley, Rowena Y; Gresham, Cathy; Harper, Jonathan; Bridges, Susan M; Warburton, Marilyn L; Hawkins, Leigh K; Pechanova, Olga; Peethambaran, Bela; Pechan, Tibor; Luthe, Dawn S; Mylroie, J E; Ankala, Arunkanth; Ozkan, Seval; Henry, W B; Williams, W P

    2010-10-07

    Aspergillus flavus Link:Fr, an opportunistic fungus that produces aflatoxin, is pathogenic to maize and other oilseed crops. Aflatoxin is a potent carcinogen, and its presence markedly reduces the value of grain. Understanding and enhancing host resistance to A. flavus infection and/or subsequent aflatoxin accumulation is generally considered an efficient means of reducing grain losses to aflatoxin. Different proteomic, genomic and genetic studies of maize (Zea mays L.) have generated large data sets with the goal of identifying genes responsible for conferring resistance to A. flavus, or aflatoxin. In order to maximize the usage of different data sets in new studies, including association mapping, we have constructed a relational database with web interface integrating the results of gene expression, proteomic (both gel-based and shotgun), Quantitative Trait Loci (QTL) genetic mapping studies, and sequence data from the literature to facilitate selection of candidate genes for continued investigation. The Corn Fungal Resistance Associated Sequences Database (CFRAS-DB) (http://agbase.msstate.edu/) was created with the main goal of identifying genes important to aflatoxin resistance. CFRAS-DB is implemented using MySQL as the relational database management system running on a Linux server, using an Apache web server, and Perl CGI scripts as the web interface. The database and the associated web-based interface allow researchers to examine many lines of evidence (e.g. microarray, proteomics, QTL studies, SNP data) to assess the potential role of a gene or group of genes in the response of different maize lines to A. flavus infection and subsequent production of aflatoxin by the fungus. CFRAS-DB provides the first opportunity to integrate data pertaining to the problem of A. flavus and aflatoxin resistance in maize in one resource and to support queries across different datasets. The web-based interface gives researchers different query options for mining the database across different types of experiments. The database is publicly available at http://agbase.msstate.edu.

  16. Korean Ministry of Environment's web-based visual consumer product exposure and risk assessment system (COPER).

    PubMed

    Lee, Hunjoo; Lee, Kiyoung; Park, Ji Young; Min, Sung-Gi

    2017-05-01

    With support from the Korean Ministry of the Environment (ME), our interdisciplinary research staff developed the COnsumer Product Exposure and Risk assessment system (COPER). This system includes various databases and features that enable the calculation of exposure and the determination of risk caused by consumer product use. COPER is divided into three tiers: the integrated database layer (IDL), the domain specific service layer (DSSL), and the exposure and risk assessment layer (ERAL). IDL is organized by the form of the raw data (mostly non-aggregated data) and includes four sub-databases: a toxicity profile, an inventory of Korean consumer products, the weight fractions of chemical substances in the consumer products determined by chemical analysis, and nationally representative exposure factors. DSSL provides web-based information services corresponding to each database within IDL. Finally, ERAL enables risk assessors to perform various exposure and risk assessments, including the design of exposure scenarios for either inhalation or dermal contact, by using and organizing each database in an intuitive manner. This paper outlines the overall architecture of the system and highlights some of the unique features of COPER, which are built on a visual, dynamic rendering engine for web-based exposure assessment modeling.
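
    ERAL, as described above, computes exposure for inhalation or dermal contact scenarios. As a generic, hedged illustration (not COPER's actual algorithms or Korean exposure factors), the sketch below applies the textbook average-daily-dose form of a screening-level inhalation estimate with invented parameter values.

```python
# Generic screening-level inhalation exposure estimate of the kind a consumer-product
# exposure system computes: ADD = (C * IR * ET * EF * ED) / (BW * AT).
# All parameter values here are invented for illustration and are NOT COPER's data.
def inhalation_add(conc_mg_m3, inh_rate_m3_h, hours_per_event, events_per_year,
                   years, body_weight_kg, averaging_days):
    """Average daily dose in mg/kg/day."""
    intake = conc_mg_m3 * inh_rate_m3_h * hours_per_event * events_per_year * years
    return intake / (body_weight_kg * averaging_days)

add = inhalation_add(conc_mg_m3=0.05, inh_rate_m3_h=0.83, hours_per_event=0.5,
                     events_per_year=365, years=1, body_weight_kg=60.0,
                     averaging_days=365)
print(f"ADD = {add:.2e} mg/kg/day")
```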

  17. Redesign of Advanced Education Processes the United States Coast Guard

    DTIC Science & Technology

    1999-09-01

    ... educational level. Els are assigned to help track individuals with specialized training and to facilitate statistical data collection. The El is used by ... just like every other officer in the Coast Guard. Currently, the Coast Guard’s personnel database does not include data on advanced education ... Advanced Education is not a searchable field in the Coast Guard’s Personnel Data System. PMs and AOs do not have direct access to ...

  18. The HLA Dictionary 2004: a summary of HLA-A, -B, -C, -DRB1/3/4/5 and -DQB1 alleles and their association with serologically defined HLA-A, -B, -C, -DR and -DQ antigens.

    PubMed

    Schreuder, G M Th; Hurley, C K; Marsh, S G E; Lau, M; Fernandez-Vina, M; Noreen, H J; Setterholm, M; Maiers, M

    2005-01-01

    This report presents serologic equivalents of human leucocyte antigen (HLA)-A, -B, -C, -DRB1, -DRB3, -DRB4, -DRB5 and -DQB1 alleles. The dictionary is an update of the one published in 2001. The data summarize equivalents obtained by the World Health Organization Nomenclature Committee for Factors of the HLA System, the International Cell Exchange, the National Marrow Donor Program, recent publications and individual laboratories. This latest update of the dictionary is enhanced by the inclusion of results from studies performed during the 13th International Histocompatibility Workshop and from neural network analyses. A summary of the data as recommended serologic equivalents is presented as expert-assigned types. The tables include remarks for alleles which are or may be expressed as antigens with serologic reaction patterns that differ from the well-established HLA specificities. The equivalents provided will be useful in guiding searches for unrelated hematopoietic stem cell donors in which patients and/or potential donors are typed by either serology or DNA-based methods. The serological DNA equivalent dictionary will also aid in typing and matching procedures for organ transplant programs whose waiting lists of potential donors and recipients comprise mixtures of serologic and DNA-based typings. The tables with HLA equivalents and a questionnaire for submission of serologic reaction patterns for poorly identified allelic products will be made available through the WMDA web page (www.worldmarrow.org) and, in the near future, also in a searchable form on the IMGT/HLA database.

  19. The HLA Dictionary 2004: a summary of HLA-A, -B, -C, -DRB1/3/4/5 and -DQB1 alleles and their association with serologically defined HLA-A, -B, -C, -DR and -DQ antigens.

    PubMed

    Schreuder, G M Th; Hurley, C K; Marsh, S G E; Lau, M; Fernandez-Vina, M; Noreen, H J; Setterholm, M; Maiers, M

    2005-02-01

    This report presents serological equivalents of HLA-A, -B, -C, -DRB1, -DRB3, -DRB4, -DRB5 and -DQB1 alleles. The dictionary is an update of that published in 2001. The data summarize equivalents obtained by the World Health Organization Nomenclature Committee for Factors of the HLA System, the International Cell Exchange (UCLA), the National Marrow Donor Program (NMDP), recent publications and individual laboratories. This latest update of the dictionary is enhanced by the inclusion of results from studies performed during the 13th International Histocompatibility Workshop and from neural network analyses. A summary of the data as recommended serological equivalents is presented as expert assigned types. The tables include remarks for alleles, which are or may be expressed as antigens with serological reaction patterns that differ from the well-established HLA specificities. The equivalents provided will be useful in guiding searches for unrelated haematopoietic stem cell donors in which patients and/or potential donors are typed by either serology or DNA-based methods. The serological DNA equivalent dictionary will also aid in typing and matching procedures for organ transplant programmes whose waiting lists of potential donors and recipients comprise mixtures of serological and DNA-based typings. The tables with HLA equivalents and a questionnaire for submission of serological reaction patterns for poorly identified allelic products will be made available through the WMDA web page (http://www.worldmarrow.org) and, in the near future, also in a searchable form on the IMGT/HLA database.

  20. An overview of biomedical literature search on the World Wide Web in the third millennium.

    PubMed

    Kumar, Prince; Goel, Roshni; Jain, Chandni; Kumar, Ashish; Parashar, Abhishek; Gond, Ajay Ratan

    2012-06-01

    Complete access to the existing pool of biomedical literature and the ability to "hit" upon the exact information of the relevant specialty are becoming essential elements of academic and clinical expertise. With the rapid expansion of the literature database, it is almost impossible to keep up to date with every innovation. Using the Internet, however, most people can freely access this literature at any time, from almost anywhere. This paper highlights the use of the Internet in obtaining valuable biomedical research information, which is mostly available from journals, databases, textbooks and e-journals in the form of web pages, text materials, images, and so on. The authors present an overview of web-based resources for biomedical researchers, providing information about Internet search engines (e.g., Google), web-based bibliographic databases (e.g., PubMed, IndMed) and how to use them, and other online biomedical resources that can assist clinicians in reaching well-informed clinical decisions.

  1. Astronaut Photography of the Earth: A Long-Term Dataset for Earth Systems Research, Applications, and Education

    NASA Technical Reports Server (NTRS)

    Stefanov, William L.

    2017-01-01

    The NASA Earth observations dataset obtained by humans in orbit using handheld film and digital cameras is freely accessible to the global community through the online searchable database at https://eol.jsc.nasa.gov, and offers a useful complement to traditional ground-commanded sensor data. The dataset includes imagery from the NASA Mercury (1961) through present-day International Space Station (ISS) programs, and currently totals over 2.6 million individual frames. Geographic coverage of the dataset includes land and ocean areas between approximately 52 degrees North and South latitudes, but is spatially and temporally discontinuous. The photographic dataset includes some significant impediments to immediate research, applied, and educational use: commercial RGB films and camera systems with overlapping bandpasses; use of different focal length lenses, unconstrained look angles, and variable spacecraft altitudes; and no native geolocation information. Such factors led to this dataset being underutilized by the community, but recent advances in automated and semi-automated image geolocation, image feature classification, and web-based services are adding new value to the astronaut-acquired imagery. A coupled ground software and on-orbit hardware system for the ISS is in development for planned deployment in mid-2017; this system will capture camera pose information for each astronaut photograph to allow automated, full georegistration of the data. The ground component of the system is currently in use to fully georeference imagery collected in response to International Disaster Charter activations, and the auto-registration procedures are being applied to the extensive historical database of imagery to add value for research and educational purposes. In parallel, machine learning techniques are being applied to automate feature identification and classification throughout the dataset, in order to build descriptive metadata that will improve search capabilities. It is expected that these value additions will increase interest in and use of the dataset by the global community.

  2. Web Proxy Auto Discovery for the WLCG

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.

    2017-10-01

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
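
    To make the mechanism concrete: a WPAD client fetches a PAC file over HTTP and then chooses a proxy from it. The Python sketch below is a simplified, assumption-laden illustration of that client side; the wpad.example.org hostname is a placeholder, and a real client would evaluate the PAC file's JavaScript FindProxyForURL() function rather than scanning it with a regular expression.

```python
# Simplified sketch of the client side of WPAD: fetch a PAC file over HTTP and pull out
# the proxy entries it mentions. The hostname is a placeholder; real Frontier/CVMFS
# clients evaluate the PAC JavaScript instead of regex-scanning it.
import re
import urllib.request

WPAD_URL = "http://wpad.example.org/wpad.dat"   # placeholder WPAD location

def fetch_pac(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def list_proxies(pac_text: str):
    """Return the PROXY host:port entries named anywhere in the PAC file."""
    return re.findall(r"PROXY\s+([\w.\-]+:\d+)", pac_text)

if __name__ == "__main__":
    pac = fetch_pac(WPAD_URL)
    print("proxies offered:", list_proxies(pac))
```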

  3. Web Proxy Auto Discovery for the WLCG

    DOE PAGES

    Dykstra, D.; Blomer, J.; Blumenfeld, B.; ...

    2017-11-23

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home), which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.

  4. Web Proxy Auto Discovery for the WLCG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Blomer, J.; Blumenfeld, B.

    All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home), which they direct to the nearest publicly accessible web proxy servers. Furthermore, the responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.

  5. A simple method for serving Web hypermaps with dynamic database drill-down

    PubMed Central

    Boulos, Maged N Kamel; Roudsari, Abdul V; Carson, Ewart R

    2002-01-01

    Background HealthCyberMap aims at mapping parts of health information cyberspace in novel ways to deliver a semantically superior user experience. This is achieved through "intelligent" categorisation and interactive hypermedia visualisation of health resources using metadata, clinical codes and GIS. HealthCyberMap is an ArcView 3.1 project. WebView, the Internet extension to ArcView, publishes HealthCyberMap ArcView Views as Web client-side imagemaps. The basic WebView set-up does not support any GIS database connection, and published Web maps become disconnected from the original project. A dedicated Internet map server would be the best way to serve HealthCyberMap database-driven interactive Web maps, but is an expensive and complex solution to acquire, run and maintain. This paper describes HealthCyberMap's simple, low-cost method for "patching" WebView to serve hypermaps with dynamic database drill-down functionality on the Web. Results The proposed solution is currently used for publishing HealthCyberMap GIS-generated navigational information maps on the Web while maintaining their links with the underlying resource metadata base. Conclusion The authors believe their map-serving approach as adopted in HealthCyberMap has been very successful, especially in cases when only map attribute data change without a corresponding effect on map appearance. It should also be possible to use the same solution to publish other interactive GIS-driven maps on the Web, e.g., maps of real-world health problems. PMID:12437788

  6. Cpf1-Database: web-based genome-wide guide RNA library design for gene knockout screens using CRISPR-Cpf1.

    PubMed

    Park, Jeongbin; Bae, Sangsu

    2018-03-15

    Following the type II CRISPR-Cas9 system, type V CRISPR-Cpf1 endonucleases have been found to be applicable for genome editing in various organisms in vivo. However, there are as yet no web-based tools capable of optimally selecting guide RNAs (gRNAs) among all possible genome-wide target sites. Here, we present Cpf1-Database, a genome-wide gRNA library design tool for LbCpf1 and AsCpf1, which have DNA recognition sequences of 5'-TTTN-3' at the 5' ends of target sites. Cpf1-Database provides a sophisticated but simple way to design gRNAs for AsCpf1 nucleases on the genome scale. One can easily access the data through a straightforward web interface, and the powerful collections feature makes it possible to design gRNAs for thousands of genes in a short time. Free access at http://www.rgenome.net/cpf1-database/. Contact: sangsubae@hanyang.ac.kr.
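
    Since Cpf1 target sites are defined by a 5'-TTTN-3' recognition sequence at the 5' end, a genome-wide gRNA design tool must first enumerate candidate sites. The toy Python scan below illustrates only that first step; the 23-nt protospacer length and the example sequence are assumptions for illustration, and Cpf1-Database itself performs genome-scale selection, filtering and scoring.

```python
# Toy scan for candidate Cpf1 target sites: a 5'-TTTN-3' recognition sequence followed
# by a protospacer. The 23-nt protospacer length and the example sequence are invented;
# a real design tool also handles the reverse strand and off-target filtering.
import re

def find_cpf1_sites(seq: str, spacer_len: int = 23):
    seq = seq.upper()
    pattern = re.compile(r"(?=(TTT[ACGT])([ACGT]{%d}))" % spacer_len)
    for m in pattern.finditer(seq):
        yield m.start(), m.group(1), m.group(2)   # position, PAM, protospacer

example = "ACGTTTTACGATCGGATACCGATTAGCTAGGCTAACGTTTGCCGATCGATCGTATCGGATC"
for pos, pam, spacer in find_cpf1_sites(example):
    print(pos, pam, spacer)
```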

  7. Image query and indexing for digital x rays

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Thoma, George R.

    1998-12-01

    The Web-based Medical Information Retrieval System (WebMIRS) allows Internet access to databases containing 17,000 digitized x-ray spine images and associated text data from the National Health and Nutrition Examination Surveys (NHANES). WebMIRS allows SQL query of the text, and viewing of the returned text records and images using a standard browser. We are now working (1) to determine the utility of data directly derived from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database based on image data, we are manually segmenting a subset of the vertebrae, using techniques from vertebral morphometry. From this, we will derive vertebral features and add them to the database. This image-derived data will enhance the user's data access capability by enabling the creation of combined SQL/image-content queries.
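
    The record above notes that WebMIRS supports SQL queries over the NHANES text records alongside the images. The self-contained SQLite sketch below illustrates that style of query against a made-up miniature table; the real WebMIRS schema and survey fields differ.

```python
# Self-contained sketch of the kind of SQL text query described above, using an
# in-memory SQLite table with made-up columns; the real WebMIRS/NHANES schema differs.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE exam (subject_id INTEGER, age INTEGER, back_pain TEXT, image_file TEXT)")
con.executemany("INSERT INTO exam VALUES (?, ?, ?, ?)", [
    (101, 54, "yes", "spine_101.png"),
    (102, 37, "no",  "spine_102.png"),
    (103, 61, "yes", "spine_103.png"),
])

# e.g. find images of older subjects who reported back pain
for row in con.execute(
        "SELECT subject_id, image_file FROM exam WHERE back_pain = 'yes' AND age >= 50"):
    print(row)
```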

  8. 77 FR 39269 - Submission for OMB Review, Comment Request, Proposed Collection: IMLS Museum Web Database...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-02

    ..., Proposed Collection: IMLS Museum Web Database: MuseumsCount.gov AGENCY: Institute of Museum and Library... general public. Information such as name, address, phone, email, Web site, staff size, program details... Museum Web Database: MuseumsCount.gov collection. The 60-day notice for the IMLS Museum Web Database...

  9. PathCase-SB architecture and database design

    PubMed Central

    2011-01-01

    Background Integration of metabolic pathways resources and regulatory metabolic network models, and deploying new tools on the integrated platform can help perform more effective and more efficient systems biology research on understanding the regulation in metabolic networks. Therefore, the tasks of (a) integrating under a single database environment regulatory metabolic networks and existing models, and (b) building tools to help with modeling and analysis are desirable and intellectually challenging computational tasks. Description PathCase Systems Biology (PathCase-SB) is built and released. The PathCase-SB database provides data and API for multiple user interfaces and software tools. The current PathCase-SB system provides a database-enabled framework and web-based computational tools towards facilitating the development of kinetic models for biological systems. PathCase-SB aims to integrate data of selected biological data sources on the web (currently, BioModels database and KEGG), and to provide more powerful and/or new capabilities via the new web-based integrative framework. This paper describes architecture and database design issues encountered in PathCase-SB's design and implementation, and presents the current design of PathCase-SB's architecture and database. Conclusions PathCase-SB architecture and database provide a highly extensible and scalable environment with easy and fast (real-time) access to the data in the database. PathCase-SB itself is already being used by researchers across the world. PMID:22070889

  10. Summary and status of the Horizons ephemeris system

    NASA Astrophysics Data System (ADS)

    Giorgini, J.

    2011-10-01

    Since 1996, the Horizons system has provided searchable access to JPL ephemerides for all known solar system bodies, several dozen spacecraft, planetary system barycenters, and some libration points. Responding to 18 400 000 requests from 300 000 unique addresses, the system has recently averaged 420 000 ephemeris requests per month. Horizons is accessed and automated using three interfaces: interactive telnet, web-browser form, and e-mail command-file. Asteroid and comet ephemerides are numerically integrated from JPL's database of initial conditions. This small-body database is updated hourly by a separate process as new measurements and discoveries are reported by the Minor Planet Center and automatically incorporated into new JPL orbit solutions. Ephemerides for other objects are derived by interpolating previously developed solutions whose trajectories have been represented in a file. For asteroids and comets, such files may be dynamically created and transferred to users, effectively recording integrator output. These small-body SPK files may then be interpolated by user software to reproduce the trajectory without duplicating the numerically integrated n-body dynamical model or PPN equations of motion. Other Horizons output is numerical and in the form of plain-text observer, vector, osculating-element, or close-approach tables, typically expected to be read by other software as input. About one hundred quantities can be requested in various time-scales and coordinate systems. For JPL small-body solutions, this includes statistical uncertainties derived from measurement covariance and state transition matrices. With the exception of some natural satellites, Horizons is consistent with DE405/DE406, the IAU 1976 constants, ITRF93, and IAU 2009 rotational models.

  11. A Model Based Mars Climate Database for the Mission Design

    NASA Technical Reports Server (NTRS)

    2005-01-01

    A viewgraph presentation on a model-based climate database is shown. The topics include: 1) Why a model-based climate database?; 2) Mars Climate Database v3.1: who uses it? (approx. 60 users!); 3) The new Mars Climate Database MCD v4.0; 4) MCD v4.0: what's new?; 5) Simulation of water ice clouds; 6) Simulation of the water ice cycle; 7) A new tool for surface pressure prediction; 8) Access to the database MCD 4.0; 9) How to access the database; and 10) New web access.

  12. miRToolsGallery: a tag-based and rankable microRNA bioinformatics resources database portal

    PubMed Central

    Chen, Liang; Heikkinen, Liisa; Wang, ChangLiang; Yang, Yang; Knott, K Emily

    2018-01-01

    Abstract Hundreds of bioinformatics tools have been developed for microRNA (miRNA) investigations, including those used for identification, target prediction, structure and expression profile analysis. However, finding the correct tool for a specific application requires the tedious and laborious process of locating, downloading, testing and validating the appropriate tool from a group of nearly a thousand. In order to facilitate this process, we developed a novel database portal named miRToolsGallery. We constructed the portal by manually curating > 950 miRNA analysis tools and resources. In the portal, locating the appropriate tool is expedited because the collection is searchable, filterable and rankable. The ranking feature is vital to quickly identify and prioritize the more useful tools over the obscure ones. Tools are ranked via different criteria including the PageRank algorithm, date of publication, number of citations, average of votes and number of publications. miRToolsGallery provides links and data for a comprehensive collection of currently available miRNA tools, with a ranking function that can be adjusted using different criteria according to specific requirements. Database URL: http://www.mirtoolsgallery.org PMID:29688355
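
    PageRank is listed above as one of the ranking criteria. The toy Python example below runs PageRank over an invented "tool links to tool" graph using networkx; the portal's actual graph, weights and tie-breaking rules are its own.

```python
# Toy illustration of PageRank-style ranking of tools, one of the criteria listed above.
# The edges ("tool A links/cites tool B") are invented for this example.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("toolA", "toolB"), ("toolC", "toolB"), ("toolD", "toolB"),
    ("toolB", "toolE"), ("toolD", "toolE"), ("toolE", "toolB"),
])

scores = nx.pagerank(G, alpha=0.85)
for tool, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{tool}\t{score:.3f}")
```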

  13. Evolution of a Structure-Searchable Database into a Prototype for a High-Fidelity SmartPhone App for 62 Common Pesticides Used in Delaware.

    PubMed

    D'Souza, Malcolm J; Barile, Benjamin; Givens, Aaron F

    2015-05-01

    Synthetic pesticides are widely used in the modern world for human benefit. They are usually classified according to their intended pest target. In Delaware (DE), approximately 42 percent of the arable land is used for agriculture. In order to manage insectivorous and herbaceous pests (such as insects, weeds, nematodes, and rodents), pesticides are used profusely to biologically control the normal pest's life stage. In this undergraduate project, we first created a usable relational database containing 62 agricultural pesticides that are common in Delaware. Chemically pertinent quantitative and qualitative information was first stored in Bio-Rad's KnowItAll® Informatics System. Next, we extracted the data out of the KnowItAll® system and created additional sections on a Microsoft® Excel spreadsheet detailing pesticide use(s) and safety and handling information. Finally, in an effort to promote good agricultural practices, to increase efficiency in business decisions, and to make pesticide data globally accessible, we developed a mobile application for smartphones that displayed the pesticide database using Appery.io™; a cloud-based HyperText Markup Language (HTML5), jQuery Mobile and Hybrid Mobile app builder.

  14. AlgoRun: a Docker-based packaging system for platform-agnostic implemented algorithms.

    PubMed

    Hosny, Abdelrahman; Vera-Licona, Paola; Laubenbacher, Reinhard; Favre, Thibauld

    2016-08-01

    There is a growing need in bioinformatics for easy-to-use software implementations of algorithms that are usable across platforms. At the same time, reproducibility of computational results is critical and often a challenge due to source code changes over time and dependencies. The approach introduced in this paper addresses both of these needs with AlgoRun, a dedicated packaging system for implemented algorithms, using Docker technology. Implemented algorithms, packaged with AlgoRun, can be executed through a user-friendly interface directly from a web browser or via a standardized RESTful web API to allow easy integration into more complex workflows. The packaged algorithm includes the entire software execution environment, thereby eliminating the common problem of software dependencies and the irreproducibility of computations over time. AlgoRun-packaged algorithms can be published on http://algorun.org, a centralized searchable directory to find existing AlgoRun-packaged algorithms. AlgoRun is available at http://algorun.org and the source code is available under the GPL license at https://github.com/algorun. Contact: laubenbacher@uchc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
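
    Because packaged algorithms expose a standardized RESTful interface, a script can drive them over HTTP. The sketch below is a guess at what such a call looks like: the container address, endpoint path and payload shape are assumptions for illustration, not AlgoRun's documented API.

    ```python
    # Sketch of invoking an AlgoRun-packaged algorithm over its REST interface.
    # The container address, endpoint path and payload shape are assumptions made
    # for illustration; consult the package's documentation for the actual API.
    import requests

    ALGORUN_URL = "http://localhost:31331/v1/run"   # assumed local container endpoint
    payload = "ACGTGACCTGAAT"                        # example input the algorithm might expect

    resp = requests.post(ALGORUN_URL, data=payload, timeout=120)
    resp.raise_for_status()
    print(resp.text)  # algorithm output returned in the HTTP response body
    ```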

  15. MetEd Resources for Embracing Advances with S-NPP and JPSS

    NASA Astrophysics Data System (ADS)

    Abshire, W. E.; Dills, P. N.; Weingroff, M.

    2014-12-01

    The COMET® Program (www.comet.ucar.edu), a part of the UCAR Community Programs (UCP) at UCAR, receives funding from NOAA NESDIS as well as EUMETSAT and the Meteorological Service of Canada to support education and training in satellite meteorology. For many years COMET's satellite education programs have focused on developing self-paced online educational materials that highlight the capabilities and applications of current and next-generation operational geostationary and polar-orbiting satellites and their relevance to operational forecasters and other user communities. By partnering with experts from the Naval Research Laboratory, NOAA-NESDIS and its Cooperative Institutes, Meteorological Service of Canada, EUMETSAT, and other user communities, COMET stimulates greater use of current and future satellite observations and products. This presentation provides a tour of COMET's satellite training and education offerings that are directly applicable to data and products from the S-NPP and JPSS satellite series. A recommended set of lessons for users who wish to learn more will be highlighted, including excerpts from the newest materials on the Suomi NPP VIIRS imager and its applications, as well as advances in nighttime visible observation with the VIIRS Day-Night Band. We'll show how the lessons introduce users to the advances these systems bring to forecasting, numerical weather prediction, and environmental monitoring. Over 90 satellite-focused, self-paced, online materials are freely available on the MetEd Web site (http://www.meted.ucar.edu) via the "Education & Training", "Satellite" topic area. Quite a few polar-orbiting-related lessons are available in English, Spanish, and French. Additionally, S-NPP and JPSS relevant information can also be found on the Environmental Satellite Resource Center (ESRC) Web site (www.meted.ucar.edu/esrc) that is maintained by COMET. The ESRC is a searchable, database-driven Web site that provides access to nearly 600 education, training, and informational resources on Earth-observing satellites.

  16. The Earth Science Women's Network (ESWN): A member-driven network approach to supporting women in the Geosciences

    NASA Astrophysics Data System (ADS)

    Hastings, M. G.; Kontak, R.; Adams, A. S.; Barnes, R. T.; Fischer, E. V.; Glessmer, M. S.; Holloway, T.; Marin-Spiotta, E.; Rodriguez, C.; Steiner, A. L.; Wiedinmyer, C.; Laursen, S. L.

    2013-12-01

    The Earth Science Women's Network (ESWN) is an organization of women geoscientists, many in the early stages of their careers. The mission of ESWN is to promote success in scientific careers by facilitating career development, community, informal mentoring and support, and professional collaborations. ESWN currently connects nearly 2000 women across the globe, and includes graduate students, postdoctoral scientists, tenure and non-tenure track faculty from diverse colleges and universities, program managers, and government, non-government and industry researchers. In 2009, ESWN received an NSF ADVANCE PAID award, with the primary goals to grow our membership to serve a wider section of the geosciences community, to design and administer career development workshops, to promote professional networking at scientific conferences, and to develop web resources to build connections, collaborations, and peer mentoring for and among women in the Earth Sciences. Now at the end of the grant, ESWN members have reported gains in a number of aspects of their personal and professional lives including: knowledge about career resources; a greater understanding of the challenges facing women in science and resources to overcome them; a sense of community and less isolation; greater confidence in their own career trajectories; professional collaborations; emotional support on a variety of issues; and greater engagement and retention in scientific careers. The new ESWN web center (www.ESWNonline.org), a major development supported by NSF ADVANCE and AGU, was created to facilitate communication and networking among our members. The web center offers a state-of-the-art social networking platform and features: 1) a public site offering information on ESWN, career resources for all early career scientists, and a 'members' spotlight' highlighting members' scientific and professional achievements; and 2) a password protected member area where users can personalize profiles, create and respond to discussions, and connect with other members. The new member area's archive of discussions and member database are searchable, providing better tools for targeted networking and collaboration.

  17. Sequencing Data Discovery and Integration for Earth System Science with MetaSeek

    NASA Astrophysics Data System (ADS)

    Hoarfrost, A.; Brown, N.; Arnosti, C.

    2017-12-01

    Microbial communities play a central role in biogeochemical cycles. Sequencing data resources from environmental sources have grown exponentially in recent years, and represent a singular opportunity to investigate microbial interactions with Earth system processes. Carrying out such meta-analyses depends on our ability to discover and curate sequencing data into large-scale integrated datasets. However, such integration efforts are currently challenging and time-consuming, with sequencing data scattered across multiple repositories and metadata that is not easily or comprehensively searchable. MetaSeek is a sequencing data discovery tool that integrates sequencing metadata from all the major data repositories, allowing the user to search and filter on datasets in a lightweight application with an intuitive, easy-to-use web-based interface. Users can save and share curated datasets, while other users can browse these data integrations or use them as a jumping off point for their own curation. Missing and/or erroneous metadata are inferred automatically where possible, and where not possible, users are prompted to contribute to the improvement of the sequencing metadata pool by correcting and amending metadata errors. Once an integrated dataset has been curated, users can follow simple instructions to download their raw data and quickly begin their investigations. In addition to the online interface, the MetaSeek database is easily queryable via an open API, further enabling users and facilitating integrations of MetaSeek with other data curation tools. This tool lowers the barriers to curation and integration of environmental sequencing data, clearing the path forward to illuminating the ecosystem-scale interactions between biological and abiotic processes.
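
    Because the MetaSeek database is exposed through an open API, curated searches can be scripted as well as run through the web interface. The endpoint URL, parameter names and response fields in the sketch below are hypothetical placeholders, not MetaSeek's documented routes.

    ```python
    # Hypothetical sketch of querying the MetaSeek API for sequencing datasets by
    # environmental metadata. The endpoint URL, parameters and response fields are
    # placeholders, not MetaSeek's documented routes.
    import requests

    API_URL = "https://api.metaseek.example/datasets"   # hypothetical endpoint
    params = {"env_package": "water", "library_strategy": "AMPLICON", "limit": 25}

    resp = requests.get(API_URL, params=params, timeout=30)
    resp.raise_for_status()
    for record in resp.json().get("datasets", []):
        print(record.get("db_source_uid"), record.get("investigation_type"))
    ```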

  18. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2008-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous, and independent from each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web services and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image is returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.
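
    The dispatcher pattern described above (parallel invocation of several map services, then transparent overlay of the returned images) can be sketched with standard WMS GetMap requests. The service URLs, layer name and bounding box below are placeholders; a real dispatcher would read them from its web service registration database.

    ```python
    # Sketch of the dispatcher idea described above: invoke several web map
    # services in parallel and transparently overlay the returned images.
    # The service URLs, layer name and bounding box are placeholders; a real
    # dispatcher would read them from its web service registration database.
    from concurrent.futures import ThreadPoolExecutor
    from io import BytesIO

    import requests
    from PIL import Image

    SERVICE_URLS = [
        "http://gis-node-a.example/wms",
        "http://gis-node-b.example/wms",
    ]
    GETMAP_PARAMS = {
        "SERVICE": "WMS", "REQUEST": "GetMap", "VERSION": "1.1.1",
        "LAYERS": "roads", "SRS": "EPSG:4326", "BBOX": "112,29,115,32",
        "WIDTH": 512, "HEIGHT": 512, "FORMAT": "image/png", "TRANSPARENT": "TRUE",
    }

    def fetch_layer(url):
        resp = requests.get(url, params=GETMAP_PARAMS, timeout=60)
        resp.raise_for_status()
        return Image.open(BytesIO(resp.content)).convert("RGBA")

    with ThreadPoolExecutor(max_workers=len(SERVICE_URLS)) as pool:
        layers = list(pool.map(fetch_layer, SERVICE_URLS))   # services run concurrently

    composite = layers[0]
    for layer in layers[1:]:
        composite = Image.alpha_composite(composite, layer)  # transparent overlay
    composite.save("integrated_map.png")
    ```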

  19. Distributed spatial information integration based on web service

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2009-10-01

    Spatial information systems and spatial information in different geographic locations usually belong to different organizations. They are distributed, often heterogeneous, and independent from each other. As a result, many isolated spatial information islands are formed, reducing the efficiency of information utilization. In order to address this issue, we present a method for effective spatial information integration based on web services. The method applies asynchronous and dynamic invocation of web services to implement distributed, parallel execution of web map services. All isolated information islands are connected by the dispatcher of web services and its registration database to form a uniform collaborative system. According to the web service registration database, the dispatcher of web services can dynamically invoke each web map service through an asynchronous delegating mechanism. All of the web map services can be executed at the same time. When each web map service is done, an image is returned to the dispatcher. After all of the web services are done, all images are transparently overlaid together in the dispatcher. Thus, users can browse and analyze the integrated spatial information. Experiments demonstrate that the utilization rate of spatial information resources is significantly raised through the proposed method of distributed spatial information integration.

  20. A Tactical Framework for Cyberspace Situational Awareness

    DTIC Science & Technology

    2010-06-01

    [Excerpt of a table from the report listing prioritized cyberspace assets by mission area. Command & Control: VOIP telephone, Internet chat, Web applications (TBMCS, PEX), email, databases (CAMS, ARMS, LogMod), Web resources (WWW), and applications (PFPS). Mission Planning: applications (PFPS), email, Web applications (TBMCS, PEX), Internet chat, databases (ARMS, CAMS), and VOIP telephone.]

  1. Administering a Web-Based Course on Database Technology

    ERIC Educational Resources Information Center

    de Oliveira, Leonardo Rocha; Cortimiglia, Marcelo; Marques, Luis Fernando Moraes

    2003-01-01

    This article presents a managerial experience with a web-based course on database technology for enterprise management. The course was developed and managed by a Department of Industrial Engineering at a public university in Brazil. The project's managerial experiences are described, covering its conception stage, where the Virtual Learning…

  2. Upgrades to the TPSX Material Properties Database

    NASA Technical Reports Server (NTRS)

    Squire, T. H.; Milos, F. S.; Partridge, Harry (Technical Monitor)

    2001-01-01

    The TPSX Material Properties Database is a web-based tool that serves as a database for properties of advanced thermal protection materials. TPSX provides an easy user interface for retrieving material property information in a variety of forms, both graphical and text. The primary purpose and advantage of TPSX is to maintain a high quality source of often used thermal protection material properties in a convenient, easily accessible form, for distribution to government and aerospace industry communities. Last year a major upgrade to the TPSX web site was completed. This year, through the efforts of researchers at several NASA centers, the Office of the Chief Engineer awarded funds to update and expand the databases in TPSX. The FY01 effort focuses on updating and correcting the Ames and Johnson thermal protection materials databases. In this session we will summarize the improvements made to the web site last year, report on the status of the on-going database updates, describe the planned upgrades for FY02 and FY03, and provide a demonstration of TPSX.

  3. The Brainomics/Localizer database.

    PubMed

    Papadopoulos Orfanos, Dimitri; Michel, Vincent; Schwartz, Yannick; Pinel, Philippe; Moreno, Antonio; Le Bihan, Denis; Frouin, Vincent

    2017-01-01

    The Brainomics/Localizer database exposes part of the data collected by the in-house Localizer project, which planned to acquire four types of data from volunteer research subjects: anatomical MRI scans, functional MRI data, behavioral and demographic data, and DNA sampling. Over the years, this local project has been collecting such data from hundreds of subjects. We had selected 94 of these subjects for their complete datasets, including all four types of data, as the basis for a prior publication; the Brainomics/Localizer database publishes the data associated with these 94 subjects. Since regulatory rules prevent us from making genetic data available for download, the database serves only anatomical MRI scans, functional MRI data, behavioral and demographic data. To publish this set of heterogeneous data, we use dedicated software based on the open-source CubicWeb semantic web framework. Through genericity in the data model and flexibility in the display of data (web pages, CSV, JSON, XML), CubicWeb helps us expose these complex datasets in original and efficient ways. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. Educational Technology Use Among US Colleges and Schools of Pharmacy

    PubMed Central

    Cain, Jeff J.; Malone, Patrick M.; Chapman, Tracy A.; Walters, Ryan W.; Thompson, David C.; Riedl, Steven T.

    2011-01-01

    Objective. To develop a searchable database of educational technologies used at schools and colleges of pharmacy. Methods. A cross-sectional survey design was used to determine what educational technologies were being used and to identify an individual at each institution who could serve as an information resource for peer-to-peer questions. Results. Eighty-nine survey instruments were returned for a response rate of 75.4%. The resulting data illustrated the almost ubiquitous presence of educational technology. The most frequently used technology was course management systems and the least frequently used technology was microblogging. Conclusions. Educational technology use is trending toward fee-based products for enterprise-level applications and free, open-source products for collaboration and presentation. Educational technology is allowing educators to restructure classroom time for something other than simple transmission of factual information and to adopt an evidence-based approach to instructional innovation and reform. PMID:21829261

  5. Educational technology use among US colleges and schools of pharmacy.

    PubMed

    Monaghan, Michael S; Cain, Jeff J; Malone, Patrick M; Chapman, Tracy A; Walters, Ryan W; Thompson, David C; Riedl, Steven T

    2011-06-10

    To develop a searchable database of educational technologies used at schools and colleges of pharmacy. A cross-sectional survey design was used to determine what educational technologies were being used and to identify an individual at each institution who could serve as an information resource for peer-to-peer questions. Eighty-nine survey instruments were returned for a response rate of 75.4%. The resulting data illustrated the almost ubiquitous presence of educational technology. The most frequently used technology was course management systems and the least frequently used technology was microblogging. Educational technology use is trending toward fee-based products for enterprise-level applications and free, open-source products for collaboration and presentation. Educational technology is allowing educators to restructure classroom time for something other than simple transmission of factual information and to adopt an evidence-based approach to instructional innovation and reform.

  6. Community-based risk assessment of water contamination from high-volume horizontal hydraulic fracturing.

    PubMed

    Penningroth, Stephen M; Yarrow, Matthew M; Figueroa, Abner X; Bowen, Rebecca J; Delgado, Soraya

    2013-01-01

    The risk of contaminating surface and groundwater as a result of shale gas extraction using high-volume horizontal hydraulic fracturing (HVHHF, commonly known as fracking) has not been assessed using conventional risk assessment methodologies. Baseline (pre-fracking) data on relevant water quality indicators, needed for meaningful risk assessment, are largely lacking. To fill this gap, the nonprofit Community Science Institute (CSI) partners with community volunteers who perform regular sampling of more than 50 streams in the Marcellus and Utica Shale regions of upstate New York; samples are analyzed for parameters associated with HVHHF. Similar baseline data on regional groundwater comes from CSI's testing of private drinking water wells. Analytic results for groundwater (with permission) and surface water are made publicly available in an interactive, searchable database. Baseline concentrations of potential contaminants from shale gas operations are found to be low, suggesting that early community-based monitoring is an effective foundation for assessing later contamination due to fracking.

  7. Bridging international law and rights-based litigation: mapping health-related rights through the development of the Global Health and Human Rights Database.

    PubMed

    Meier, Benjamin Mason; Cabrera, Oscar A; Ayala, Ana; Gostin, Lawrence O

    2012-06-15

    The O'Neill Institute for National and Global Health Law at Georgetown University, the World Health Organization, and the Lawyers Collective have come together to develop a searchable Global Health and Human Rights Database that maps the intersection of health and human rights in judgments, international and regional instruments, and national constitutions. Where states long remained unaccountable for violations of health-related human rights, litigation has arisen as a central mechanism in an expanding movement to create rights-based accountability. Facilitated by the incorporation of international human rights standards in national law, this judicial enforcement has supported the implementation of rights-based claims, giving meaning to states' longstanding obligations to realize the highest attainable standard of health. Yet despite these advancements, there has been insufficient awareness of the international and domestic legal instruments enshrining health-related rights and little understanding of the scope and content of litigation upholding these rights. As this accountability movement evolves, the Global Health and Human Rights Database seeks to chart this burgeoning landscape of international instruments, national constitutions, and judgments for health-related rights. Employing international legal research to document and catalogue these three interconnected aspects of human rights for the public's health, the Database's categorization by human rights, health topics, and regional scope provides a comprehensive means of understanding health and human rights law. Through these categorizations, the Global Health and Human Rights Database serves as a basis for analogous legal reasoning across states to serve as precedents for future cases, for comparative legal analysis of similar health claims in different country contexts, and for empirical research to clarify the impact of human rights judgments on public health outcomes. Copyright © 2012 Meier, Nygren-Krug, Cabrera, Ayala, and Gostin.

  8. JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System

    NASA Astrophysics Data System (ADS)

    Soppera, N.; Bossant, M.; Dupont, E.

    2014-06-01

    JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.

  9. JANIS 4: An Improved Version of the NEA Java-based Nuclear Data Information System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soppera, N., E-mail: nicolas.soppera@oecd.org; Bossant, M.; Dupont, E.

    JANIS is software developed to facilitate the visualization and manipulation of nuclear data, giving access to evaluated data libraries, and to the EXFOR and CINDA databases. It is stand-alone Java software, downloadable from the web and distributed on DVD. Used offline, the system also makes use of an internet connection to access the NEA Data Bank database. It is now also offered as a full web application, only requiring a browser. The features added in the latest version of the software and this new web interface are described.

  10. Phynx: an open source software solution supporting data management and web-based patient-level data review for drug safety studies in the general practice research database and other health care databases.

    PubMed

    Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan

    2010-01-01

    To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution named Phynx supports data management, Web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. This system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common Web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to an efficient validation of outcome assessment in drug safety database studies.

  11. WheatGenome.info: an integrated database and portal for wheat genome information.

    PubMed

    Lai, Kaitao; Berkman, Paul J; Lorenc, Michal Tadeusz; Duran, Chris; Smits, Lars; Manoli, Sahana; Stiller, Jiri; Edwards, David

    2012-02-01

    Bread wheat (Triticum aestivum) is one of the most important crop plants, globally providing staple food for a large proportion of the human population. However, improvement of this crop has been limited due to its large and complex genome. Advances in genomics are supporting wheat crop improvement. We provide a variety of web-based systems hosting wheat genome and genomic data to support wheat research and crop improvement. WheatGenome.info is an integrated database resource which includes multiple web-based applications. These include a GBrowse2-based wheat genome viewer with BLAST search portal, TAGdb for searching wheat second-generation genome sequence data, wheat autoSNPdb, links to wheat genetic maps using CMap and CMap3D, and a wheat genome Wiki to allow interaction between diverse wheat genome sequencing activities. This system includes links to a variety of wheat genome resources hosted at other research organizations. This integrated database aims to accelerate wheat genome research and is freely accessible via the web interface at http://www.wheatgenome.info/.

  12. dbHiMo: a web-based epigenomics platform for histone-modifying enzymes.

    PubMed

    Choi, Jaeyoung; Kim, Ki-Tae; Huh, Aram; Kwon, Seomun; Hong, Changyoung; Asiegbu, Fred O; Jeon, Junhyun; Lee, Yong-Hwan

    2015-01-01

    Over the past two decades, epigenetics has evolved into a key concept for understanding regulation of gene expression. Among many epigenetic mechanisms, covalent modifications such as acetylation and methylation of lysine residues on core histones emerged as a major mechanism in epigenetic regulation. Here, we present the database for histone-modifying enzymes (dbHiMo; http://hme.riceblast.snu.ac.kr/) aimed at facilitating functional and comparative analysis of histone-modifying enzymes (HMEs). HMEs were identified by applying a search pipeline built upon profile hidden Markov models (HMMs) to proteomes. The database incorporates 11,576 HMEs identified from 603 proteomes including 483 fungal, 32 plant and 51 metazoan species. The dbHiMo provides users with web-based personalized data browsing and analysis tools, supporting comparative and evolutionary genomics. With comprehensive data entries and associated web-based tools, our database will be a valuable resource for future epigenetics/epigenomics studies. © The Author(s) 2015. Published by Oxford University Press.
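
    The profile-HMM screening step described above is the kind of scan that HMMER performs; the sketch below shows one such pass over a single proteome. The file names are placeholders and this is not the authors' pipeline code.

    ```python
    # Sketch of a profile-HMM scan of one proteome, the kind of step the dbHiMo
    # pipeline applies. Uses HMMER's hmmsearch; the file names are placeholders
    # and this is not the authors' actual pipeline code.
    import subprocess

    profiles = "hme_domains.hmm"   # profile HMMs for histone-modifier domains (placeholder)
    proteome = "proteome.fasta"    # predicted protein sequences of one species (placeholder)

    subprocess.run(
        ["hmmsearch", "--tblout", "hits.tbl", "-E", "1e-5", profiles, proteome],
        check=True,
    )

    # Keep the target sequence IDs of significant hits (first column of the table).
    hits = []
    with open("hits.tbl") as table:
        for line in table:
            if not line.startswith("#"):
                hits.append(line.split()[0])
    print(f"{len(hits)} candidate histone-modifying enzymes")
    ```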

  13. Real-time Geographic Information System (GIS) for Monitoring the Area of Potential Water Level Using Rule Based System

    NASA Astrophysics Data System (ADS)

    Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro

    2018-02-01

    Management of water resources based on a Geographic Information System (GIS) can provide substantial benefits for water availability planning. Monitoring the potential water level is needed in the development, agriculture, and energy sectors, among others. In this research, a web-based water resource information system is developed using a real-time GIS concept to monitor the potential water level of an area by applying a rule based system method. The GIS consists of hardware, software, and a database. Based on the web-based GIS architecture, this study uses a set of networked computers running the Apache web server and the PHP programming language with a MySQL database. An ultrasound wireless sensor system supplies the water level data, together with time and geographic location information. The GIS maps the five sensor locations, and the readings are processed through a rule based system to determine the potential water level of the area. Monitoring results can be displayed as thematic maps by overlaying more than one layer, as tables generated from the database, and as graphs based on the timing of events and the water level values.
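
    A rule based system of this kind boils down to a small table of conditions mapped to categories. The thresholds, categories and record layout in the sketch below are assumptions for illustration, not the rules used in the study.

    ```python
    # Illustrative rule based classification of a sensor reading into a potential
    # water-level category. Thresholds, categories and the record layout are
    # assumptions for this sketch, not the rules used in the study.
    RULES = [
        (lambda level: level < 50.0,          "low potential"),
        (lambda level: 50.0 <= level < 150.0, "medium potential"),
        (lambda level: level >= 150.0,        "high potential"),
    ]

    def classify(level_cm):
        for condition, label in RULES:
            if condition(level_cm):
                return label
        return "unknown"

    # One reading per sensor: (sensor id, latitude, longitude, water level in cm).
    readings = [("S1", -6.98, 110.41, 42.0), ("S2", -7.01, 110.44, 173.5)]
    for sensor_id, lat, lon, level in readings:
        print(sensor_id, lat, lon, classify(level))
    ```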

  14. BioCarian: search engine for exploratory searches in heterogeneous biological databases.

    PubMed

    Zaki, Nazar; Tennakoon, Chandana

    2017-10-02

    There are a large number of biological databases publicly available to scientists on the web. Also, there are many private databases generated in the course of research projects. These databases are in a wide variety of formats. Web standards have evolved in recent times and semantic web technologies are now available to interconnect diverse and heterogeneous sources of data. Therefore, integration and querying of biological databases can be facilitated by techniques used in the semantic web. Heterogeneous databases can be converted into Resource Description Framework (RDF) and queried using the SPARQL language. Searching for exact queries in these databases is trivial. However, exploratory searches need customized solutions, especially when multiple databases are involved. This process is cumbersome and time-consuming for those without a sufficient background in computer science. In this context, a search engine facilitating exploratory searches of databases would be of great help to the scientific community. We present BioCarian, an efficient and user-friendly search engine for performing exploratory searches on biological databases. The search engine is an interface for SPARQL queries over RDF databases. We note that many of the databases can be converted to tabular form. We first convert the tabular databases to RDF. The search engine provides a graphical interface based on facets to explore the converted databases. The facet interface is more advanced than conventional facets. It allows complex queries to be constructed, and has additional features such as ranking of facet values based on several criteria, visually indicating the relevance of a facet value, and presenting the most important facet values when a large number of choices are available. For advanced users, SPARQL queries can be run directly on the databases. Using this feature, users will be able to incorporate federated searches of SPARQL endpoints. We used the search engine to do an exploratory search on previously published viral integration data and were able to deduce the main conclusions of the original publication. BioCarian is accessible via http://www.biocarian.com. We have developed a search engine to explore RDF databases that can be used by both novice and advanced users.
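
    The tabular-to-RDF conversion and SPARQL querying that the engine builds on can be sketched with rdflib on a toy table. The namespace, predicate names and example rows below are made up for illustration and are unrelated to BioCarian's actual schema.

    ```python
    # Minimal sketch of the tabular-to-RDF conversion and SPARQL querying that
    # BioCarian builds on, using rdflib. The namespace, predicates and rows are
    # made up for illustration and unrelated to BioCarian's actual schema.
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/viralintegration/")
    rows = [("site1", "chr7", 55000000), ("site2", "chr17", 7570000)]

    g = Graph()
    for site, chrom, pos in rows:
        subject = EX[site]
        g.add((subject, EX.chromosome, Literal(chrom)))
        g.add((subject, EX.position, Literal(pos)))

    query = """
    PREFIX ex: <http://example.org/viralintegration/>
    SELECT ?site ?pos WHERE {
        ?site ex:chromosome "chr7" ;
              ex:position ?pos .
    }
    """
    for site, pos in g.query(query):
        print(site, pos)
    ```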

  15. Lynx: a database and knowledge extraction engine for integrative medicine.

    PubMed

    Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T Conrad; Maltsev, Natalia

    2014-01-01

    We have developed Lynx (http://lynx.ci.uchicago.edu)--a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces.

  16. Mars Public Mapping Project: Public Participation in Science Research; Providing Opportunities for Kids of All Ages

    NASA Astrophysics Data System (ADS)

    Rogers, L. D.; Valderrama Graff, P.; Bandfield, J. L.; Christensen, P. R.; Klug, S. L.; Deva, B.; Capages, C.

    2007-12-01

    The Mars Public Mapping Project is a web-based education and public outreach tool developed by the Mars Space Flight Facility at Arizona State University. This tool allows the general public to identify and map geologic features on Mars, utilizing Thermal Emission Imaging System (THEMIS) visible images, allowing public participation in authentic scientific research. In addition, participants are able to rate each image (based on a 1 to 5 star scale) to help build a catalog of some of the more appealing and interesting martian surface features. Once participants have identified observable features in an image, they are able to view a map of the global distribution of the many geologic features they just identified. This automatic feedback, through a global distribution map, allows participants to see how their answers compare to the answers of other participants. Participants check boxes "yes, no, or not sure" for each feature that is listed on the Mars Public Mapping Project web page, including surface geologic features such as gullies, sand dunes, dust devil tracks, wind streaks, lava flows, several types of craters, and layers. Each type of feature has a quick and easily accessible description and example image. When a participant moves their mouse over each example thumbnail image, a window pops up with a picture and a description of the feature. This provides a form of "on the job training" for the participants that can vary with their background level. For users who are more comfortable with Mars geology, there is also an advanced feature identification section accessible by a drop down menu. This includes additional features that may be identified, such as streamlined islands, valley networks, chaotic terrain, yardangs, and dark slope streaks. The Mars Public Mapping Project achieves several goals: 1) It engages the public in a manner that encourages active participation in scientific research and learning about geologic features and processes. 2) It helps to build a mappable database that can be used by researchers (and the public in general) to quickly access image based data that contains particular feature types. 3) It builds a searchable database of images containing specific geologic features that the public deem to be visually appealing. Other education and public outreach programs at the Mars Space Flight Facility, such as the Rock Around the World and the Mars Student Imaging Project, have shown an increase in demand for programs that allow "kids of all ages" to participate in authentic scientific research. The Mars Public Mapping Project is a broadly accessible program that continues this theme by building a set of activities that is useful for both the public and scientists.

  17. Fine-grained Database Field Search Using Attribute-Based Encryption for E-Healthcare Clouds.

    PubMed

    Guo, Cheng; Zhuang, Ruhan; Jie, Yingmo; Ren, Yizhi; Wu, Ting; Choo, Kim-Kwang Raymond

    2016-11-01

    An effectively designed e-healthcare system can significantly enhance the quality of access and experience of healthcare users, including facilitating medical and healthcare providers in ensuring a smooth delivery of services. Ensuring the security of patients' electronic health records (EHRs) in the e-healthcare system is an active research area. EHRs may be outsourced to a third-party, such as a community healthcare cloud service provider, for storage due to cost-saving measures. Generally, encrypting the EHRs when they are stored in the system (i.e. data-at-rest) or prior to outsourcing the data is used to ensure data confidentiality. A searchable encryption (SE) scheme is a promising technique that can ensure the protection of private information without compromising on performance. In this paper, we propose a novel framework for controlling access to EHRs stored in semi-trusted cloud servers (e.g. a private cloud or a community cloud). To achieve fine-grained access control for EHRs, we leverage the ciphertext-policy attribute-based encryption (CP-ABE) technique to encrypt tables published by hospitals, including patients' EHRs, and the table is stored in the database with the primary key being the patient's unique identity. Our framework enables different users with different privileges to search on different database fields. Differing from previous attempts to secure the outsourcing of data, we emphasize control over which database fields each user can search. We demonstrate the utility of the scheme by evaluating it using datasets from the University of California, Irvine.

  18. Architecture for biomedical multimedia information delivery on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.

    1997-10-01

    Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extendibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.

  19. Java Web Simulation (JWS); a web based database of kinetic models.

    PubMed

    Snoep, J L; Olivier, B G

    2002-01-01

    Software to make a database of kinetic models accessible via the internet has been developed and a core database has been set up at http://jjj.biochem.sun.ac.za/. This repository of models, available to everyone with internet access, opens a whole new way in which we can make our models public. Via the database, a user can change enzyme parameters and run time simulations or steady state analyses. The interface is user friendly and no additional software is necessary. The database currently contains 10 models, but since the generation of the program code to include new models has largely been automated the addition of new models is straightforward and people are invited to submit their models to be included in the database.

  20. WE-E-BRB-11: RIVIEW: A Web-Based Viewer for Radiotherapy.

    PubMed

    Apte, A; Wang, Y; Deasy, J

    2012-06-01

    Collaborations involving radiotherapy data collection, such as the recently proposed international radiogenomics consortium, require robust, web-based tools to facilitate reviewing treatment planning information. We present the architecture and prototype characteristics for a web-based radiotherapy viewer. The web-based environment developed in this work consists of the following components: 1) Import of DICOM/RTOG data: CERR was leveraged to import DICOM/RTOG data and to convert to database friendly RT objects. 2) Extraction and Storage of RT objects: The scan and dose distributions were stored as .png files per slice and view plane. The file locations were written to the MySQL database. Structure contours and DVH curves were written to the database as numeric data. 3) Web interfaces to query, retrieve and visualize the RT objects: The Web application was developed using HTML 5 and Ruby on Rails (RoR) technology following the MVC philosophy. The open source ImageMagick library was utilized to overlay scan, dose and structures. The application allows users to (i) QA the treatment plans associated with a study, (ii) Query and Retrieve patients matching anonymized ID and study, (iii) Review up to 4 plans simultaneously in 4 window panes (iv) Plot DVH curves for the selected structures and dose distributions. A subset of data for lung cancer patients was used to prototype the system. Five user accounts were created to have access to this study. The scans, doses, structures and DVHs for 10 patients were made available via the web application. A web-based system to facilitate QA, and support Query, Retrieve and the Visualization of RT data was prototyped. The RIVIEW system was developed using open source and free technology like MySQL and RoR. We plan to extend the RIVIEW system further to be useful in clinical trial data collection, outcomes research, cohort plan review and evaluation. © 2012 American Association of Physicists in Medicine.

  1. SNPversity: A web-based tool for visualizing diversity

    USDA-ARS?s Scientific Manuscript database

    Background: Many stand-alone desktop software suites exist to visualize single nucleotide polymorphisms (SNP) diversity, but web-based software that can be easily implemented and used for biological databases is absent. SNPversity was created to answer this need by building an open-source visualizat...

  2. A web based relational database management system for filariasis control

    PubMed Central

    Murty, Upadhyayula Suryanarayana; Kumar, Duvvuri Venkata Rama Satya; Sriram, Kumaraswamy; Rao, Kadiri Madhusudhan; Bhattacharyulu, Chakravarthula Hayageeva Narasimha Venakata; Praveen, Bhoopathi; Krishna, Amirapu Radha

    2005-01-01

    The present study describes a RDBMS (relational database management system) for the effective management of Filariasis, a vector borne disease. Filariasis infects 120 million people from 83 countries. The possible re-emergence of the disease and the complexity of existing control programs warrant the development of new strategies. A database containing comprehensive data associated with filariasis finds utility in disease control. We have developed a database containing information on the socio-economic status of patients, mosquito collection procedures, mosquito dissection data, filariasis survey report and mass blood data. The database can be searched using a user friendly web interface. Availability http://www.webfil.org (login and password can be obtained from the authors) PMID:17597846

  3. National Vulnerability Database (NVD)

    National Institute of Standards and Technology Data Gateway

    National Vulnerability Database (NVD) (Web, free access)   NVD is a comprehensive cyber security vulnerability database that integrates all publicly available U.S. Government vulnerability resources and provides references to industry resources. It is based on and synchronized with the CVE vulnerability naming standard.

  4. WebBio, a web-based management and analysis system for patient data of biological products in hospital.

    PubMed

    Lu, Ying-Hao; Kuo, Chen-Chun; Huang, Yaw-Bin

    2011-08-01

    We selected HTML, PHP and JavaScript as the programming languages to build "WebBio", a web-based system for patient data of biological products, and used MySQL as the database. WebBio is based on the PHP-MySQL suite and is run by an Apache server on a Linux machine. WebBio provides the functions of data management, searching, and data analysis for 20 kinds of biological products (plasma expanders, human immunoglobulin and hematological products). There are two particular features in WebBio: (1) pharmacists can rapidly find out whose patients used contaminated products, for medication safety, and (2) the statistics charts for a specific product can be automatically generated to reduce pharmacists' workload. WebBio has successfully turned traditional paper work into web-based data management.

  5. The care pathway: concepts and theories: an introduction.

    PubMed

    Schrijvers, Guus; van Hoorn, Arjan; Huiskes, Nicolette

    2012-01-01

    This article addresses first the definition of a (care) pathway, and then follows a description of theories since the 1950s. It ends with a discussion of theoretical advantages and disadvantages of care pathways for patients and professionals. The objective of this paper is to provide a theoretical base for empirical studies on care pathways. The knowledge for this chapter is based on several books on pathways, which we found by searching in the digital encyclopedia Wikipedia. Although this is not usual in scientific publications, this method was used because books are not searchable in databases such as PubMed. From 2005 onward, we performed a literature search on PubMed and other literature databases with the keywords integrated care pathway, clinical pathway, critical pathway, theory, research, and evaluation. One of the inspirational sources was the website of the European Pathway Association (EPA) and its journal International Journal of Care Pathways. The authors visited several sites for this paper. These are mentioned as illustrations of a concept or theory. Most of them have English websites with more information. The URLs of these websites are not mentioned in this paper as references, because their content changes fast, sometimes every day.

  6. The care pathway: concepts and theories: an introduction

    PubMed Central

    Schrijvers, Guus; van Hoorn, Arjan; Huiskes, Nicolette

    2012-01-01

    This article addresses first the definition of a (care) pathway, and then follows a description of theories since the 1950s. It ends with a discussion of theoretical advantages and disadvantages of care pathways for patients and professionals. The objective of this paper is to provide a theoretical base for empirical studies on care pathways. The knowledge for this chapter is based on several books on pathways, which we found by searching in the digital encyclopedia Wikipedia. Although this is not usual in scientific publications, this method was used because books are not searchable in databases such as PubMed. From 2005 onward, we performed a literature search on PubMed and other literature databases with the keywords integrated care pathway, clinical pathway, critical pathway, theory, research, and evaluation. One of the inspirational sources was the website of the European Pathway Association (EPA) and its journal International Journal of Care Pathways. The authors visited several sites for this paper. These are mentioned as illustrations of a concept or theory. Most of them have English websites with more information. The URLs of these websites are not mentioned in this paper as references, because their content changes fast, sometimes every day. PMID:23593066

  7. The Implications of Well-Formedness on Web-Based Educational Resources.

    ERIC Educational Resources Information Center

    Mohler, James L.

    Within all institutions, Web developers are beginning to utilize technologies that make sites more than static information resources. Technologies such as XML (Extensible Markup Language) and XSL (Extensible Stylesheet Language) are key technologies that promise to extend the Web beyond the "information storehouse" paradigm and provide…

  8. CADB: Conformation Angles DataBase of proteins

    PubMed Central

    Sheik, S. S.; Ananthalakshmi, P.; Bhargavi, G. Ramya; Sekar, K.

    2003-01-01

    Conformation Angles DataBase (CADB) provides an online resource to access data on conformation angles (both main-chain and side-chain) of protein structures in two data sets corresponding to 25% and 90% sequence identity between any two proteins, available in the Protein Data Bank. In addition, the database contains the necessary crystallographic parameters. The package has several flexible options and display facilities to visualize the main-chain and side-chain conformation angles for a particular amino acid residue. The package can also be used to study the interrelationship between the main-chain and side-chain conformation angles. A web-based Java graphics interface has been deployed to display the information of interest to the user on the client machine. The database is being updated at regular intervals and can be accessed over the World Wide Web interface at the following URL: http://144.16.71.148/cadb/. PMID:12520049

  9. PROTICdb: a web-based application to store, track, query, and compare plant proteome data.

    PubMed

    Ferry-Dumazet, Hélène; Houel, Gwenn; Montalent, Pierre; Moreau, Luc; Langella, Olivier; Negroni, Luc; Vincent, Delphine; Lalanne, Céline; de Daruvar, Antoine; Plomion, Christophe; Zivy, Michel; Joets, Johann

    2005-05-01

    PROTICdb is a web-based application, mainly designed to store and analyze plant proteome data obtained by two-dimensional polyacrylamide gel electrophoresis (2-D PAGE) and mass spectrometry (MS). The purposes of PROTICdb are (i) to store, track, and query information related to proteomic experiments, i.e., from tissue sampling to protein identification and quantitative measurements, and (ii) to integrate information from the user's own expertise and other sources into a knowledge base, used to support data interpretation (e.g., for the determination of allelic variants or products of post-translational modifications). Data insertion into the relational database of PROTICdb is achieved either by uploading outputs of image analysis and MS identification software, or by filling web forms. 2-D PAGE annotated maps can be displayed, queried, and compared through a graphical interface. Links to external databases are also available. Quantitative data can be easily exported in a tabulated format for statistical analyses. PROTICdb is based on the Oracle or the PostgreSQL Database Management System and is freely available upon request at the following URL: http://moulon.inra.fr/bioinfo/PROTICdb.

  10. Systematic identification of human housekeeping genes possibly useful as references in gene expression studies.

    PubMed

    Caracausi, Maria; Piovesan, Allison; Antonaros, Francesca; Strippoli, Pierluigi; Vitale, Lorenza; Pelleri, Maria Chiara

    2017-09-01

    The ideal reference, or control, gene for the study of gene expression in a given organism should be expressed at a medium‑high level for easy detection, should be expressed at a constant/stable level throughout different cell types and within the same cell type undergoing different treatments, and should maintain these features through as many different tissues of the organism as possible. From a biological point of view, these theoretical requirements of an ideal reference gene appear to be best suited to housekeeping (HK) genes. Recent advancements in the quality and completeness of human expression microarray data and in their statistical analysis may provide new clues toward the quantitative standardization of human gene expression studies in biology and medicine, both cross‑ and within‑tissue. The systematic approach used by the present study is based on the Transcriptome Mapper tool and exploits the automated reassignment of probes to corresponding genes, intra‑ and inter‑sample normalization, elaboration and representation of gene expression values in linear form within an indexed and searchable database with a graphical interface recording quantitative levels of expression, expression variability and cross‑tissue width of expression for more than 31,000 transcripts. The present study conducted a meta‑analysis of a pool of 646 expression profile data sets from 54 different human tissues and identified actin γ 1 as the HK gene that best fits the combination of all the traditional criteria to be used as a reference gene for general use; two ribosomal protein genes, RPS18 and RPS27, and one aquaporin gene, POM121 transmembrane nucleoporin C, were also identified. The present study provided a list of tissue‑ and organ‑specific genes that may be most suited for the following individual tissues/organs: Adipose tissue, bone marrow, brain, heart, kidney, liver, lung, ovary, skeletal muscle and testis; and also provides in these cases a representative, quantitative portrait of the relative, typical gene‑expression profile in the form of searchable database tables.
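
    The selection criteria described above (medium-high mean expression, low variability, wide cross-tissue expression) can be combined into a simple composite score. The toy sketch below illustrates the idea with pandas; it is not the Transcriptome Mapper computation and the example values are invented.

    ```python
    # Illustrative scoring of candidate reference genes by the criteria described
    # above: medium-high mean expression, low variability and wide cross-tissue
    # expression. A toy sketch with invented values, not the Transcriptome Mapper
    # computation.
    import pandas as pd

    # rows = genes, columns = tissues, values = expression in linear form
    expr = pd.DataFrame(
        {"brain": [120, 900, 15], "liver": [130, 50, 18], "heart": [125, 700, 2]},
        index=["stable_gene", "variable_gene", "low_gene"],
    )

    mean_expr = expr.mean(axis=1)
    cv = expr.std(axis=1) / mean_expr                # coefficient of variation
    width = (expr > 10).sum(axis=1) / expr.shape[1]  # fraction of tissues with detectable expression

    score = mean_expr.rank() + (-cv).rank() + width.rank()
    print(score.sort_values(ascending=False))        # highest score = best reference candidate
    ```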

  11. CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.

    PubMed

    Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J

    2015-01-01

    CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.

  12. BISQUE: locus- and variant-specific conversion of genomic, transcriptomic and proteomic database identifiers.

    PubMed

    Meyer, Michael J; Geske, Philip; Yu, Haiyuan

    2016-05-15

    Biological sequence databases are integral to efforts to characterize and understand biological molecules and share biological data. However, when analyzing these data, scientists are often left holding disparate biological currency: molecular identifiers from different databases. For downstream applications that require converting the identifiers themselves, there are many resources available, but analyzing associated loci and variants can be cumbersome if data is not given in a form amenable to particular analyses. Here we present BISQUE, a web server and customizable command-line tool for converting molecular identifiers and their contained loci and variants between different database conventions. BISQUE uses a graph traversal algorithm to generalize the conversion process for residues in the human genome, genes, transcripts and proteins, allowing for conversion across classes of molecules and in all directions through an intuitive web interface and a URL-based web service. BISQUE is freely available via the web using any major web browser (http://bisque.yulab.org/). Source code is available in a public GitHub repository (https://github.com/hyulab/BISQUE). Contact: haiyuan.yu@cornell.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
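
    The graph traversal described above can be pictured as a breadth-first search over a graph whose nodes are identifier conventions and whose edges are mapping tables. The sketch below is conceptual, with tiny made-up mapping tables; it is not BISQUE's code.

    ```python
    # Conceptual sketch of converting identifiers by breadth-first search over a
    # graph whose nodes are database conventions and whose edges are mapping
    # tables, as the abstract describes. The mappings are tiny made-up examples;
    # this is not BISQUE's code.
    from collections import deque

    # (source convention, target convention) -> {source id: target id}
    MAPPINGS = {
        ("ensembl_gene", "entrez_gene"): {"ENSG00000141510": "7157"},
        ("entrez_gene", "uniprot"): {"7157": "P04637"},
    }
    GRAPH = {}
    for src, dst in MAPPINGS:
        GRAPH.setdefault(src, []).append(dst)

    def convert(identifier, source, target):
        """Search for a chain of mappings leading from source to target."""
        queue = deque([(source, identifier)])
        seen = {source}
        while queue:
            convention, current = queue.popleft()
            if convention == target:
                return current
            for nxt in GRAPH.get(convention, []):
                mapped = MAPPINGS[(convention, nxt)].get(current)
                if nxt not in seen and mapped is not None:
                    seen.add(nxt)
                    queue.append((nxt, mapped))
        return None

    print(convert("ENSG00000141510", "ensembl_gene", "uniprot"))  # -> P04637
    ```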

  13. A web-based approach for electrocardiogram monitoring in the home.

    PubMed

    Magrabi, F; Lovell, N H; Celler, B G

    1999-05-01

    A Web-based electrocardiogram (ECG) monitoring service, in which a longitudinal clinical record is used for the management of patients, is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components, including the Web server, the Web page, the specialised client-side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve and submit clinical data to and from the server. An intelligent software agent on the server is activated whenever new ECG data is sent from the home. The agent compares historical data with newly acquired data. Using this method, an optimum patient care strategy can be evaluated, and a summarised report along with reminders and suggestions for action is sent to the doctor and patient by email.

  14. ProXL (Protein Cross-Linking Database): A Platform for Analysis, Visualization, and Sharing of Protein Cross-Linking Mass Spectrometry Data

    PubMed Central

    2016-01-01

    ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app. PMID:27302480
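
    ProXL decouples search pipelines from its relational schema through the ProXL XML import format. The Python sketch below parses a hypothetical, much-simplified cross-link XML document to show the general shape of an importer; the real ProXL XML schema is defined in the project's documentation and is considerably richer than this.

      # Sketch of reading a *hypothetical*, simplified cross-link XML document; the real
      # ProXL XML format is documented in the project's repository and differs from this.
      import xml.etree.ElementTree as ET

      SAMPLE = """
      <crosslinks>
        <link protein1="P12345" pos1="101" protein2="Q67890" pos2="45" psm_count="3"/>
        <link protein1="P12345" pos1="212" protein2="P12345" pos2="330" psm_count="1"/>
      </crosslinks>
      """

      def parse_links(xml_text):
          root = ET.fromstring(xml_text)
          for link in root.findall("link"):
              yield {
                  "protein1": link.get("protein1"),
                  "pos1": int(link.get("pos1")),
                  "protein2": link.get("protein2"),
                  "pos2": int(link.get("pos2")),
                  "psm_count": int(link.get("psm_count")),
              }

      for record in parse_links(SAMPLE):
          print(record)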

  15. An XML-based Generic Tool for Information Retrieval in Solar Databases

    NASA Astrophysics Data System (ADS)

    Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain

    This paper presents the current architecture of the `Solar Web Project', now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java, and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
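
    Because the archive access systems are described externally in XML, adding a new data source amounts to writing a new configuration file. The Python sketch below shows the general idea with an invented configuration schema; the element names and endpoint are illustrative only and are not the Solar Web Project's actual format.

      # Sketch of an XML-described archive access system; element and attribute names
      # here are invented for illustration and are not the Solar Web Project's schema.
      import xml.etree.ElementTree as ET
      from urllib.parse import urlencode

      ARCHIVE_CONFIG = """
      <archive name="example-solar-catalog" type="http">
        <endpoint>https://archive.example.org/search</endpoint>
        <parameter name="instrument"/>
        <parameter name="start_date"/>
        <parameter name="end_date"/>
      </archive>
      """

      def build_query_url(config_xml, **values):
          root = ET.fromstring(config_xml)
          endpoint = root.findtext("endpoint")
          allowed = {p.get("name") for p in root.findall("parameter")}
          params = {k: v for k, v in values.items() if k in allowed}
          return f"{endpoint}?{urlencode(params)}"

      print(build_query_url(ARCHIVE_CONFIG, instrument="EIT",
                            start_date="1999-01-01", end_date="1999-01-31"))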

  16. ProXL (Protein Cross-Linking Database): A Platform for Analysis, Visualization, and Sharing of Protein Cross-Linking Mass Spectrometry Data.

    PubMed

    Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N

    2016-08-05

    ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app .

  17. Web application for detailed real-time database transaction monitoring for CMS condition data

    NASA Astrophysics Data System (ADS)

    de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio

    2012-12-01

    In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the large amount of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases, allocated in several servers, both inside and outside the CERN network. In this scenario, the task of monitoring the different databases is a crucial database administration issue, since different information may be required depending on different users' tasks such as data transfer, inspection, planning and security issues. We present here a web application based on a Python web framework and Python modules for data mining purposes. To customize the GUI, we record traces of user interactions that are used to build use case models. In addition, the application detects errors in database transactions (for example, identifying any mistake made by a user, an application failure, an unexpected network shutdown or a Structured Query Language (SQL) statement error) and provides warning messages from the different users' perspectives. Finally, in order to fulfill the requirements of the CMS experiment community, and to keep pace with new developments in Web client tools, our application was further developed, and new features were deployed.
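
    As a minimal illustration of the transaction-error detection described above, the Python sketch below wraps statement execution so that failures are caught and logged as warnings. It uses sqlite3 purely as a stand-in back end; the CMS system runs against Oracle, and its actual monitoring code is not shown here.

      # Sketch of transaction-error detection: execute a statement, catch database errors,
      # and record a warning for monitoring. sqlite3 stands in for the Oracle back end.
      import logging
      import sqlite3

      logging.basicConfig(level=logging.INFO)
      log = logging.getLogger("db-monitor")

      def monitored_execute(conn, sql, params=()):
          try:
              with conn:  # commits on success, rolls back on error
                  conn.execute(sql, params)
              log.info("OK: %s", sql)
          except sqlite3.Error as exc:
              log.warning("Transaction failed: %s -- %s", sql, exc)

      conn = sqlite3.connect(":memory:")
      monitored_execute(conn, "CREATE TABLE conditions (run INTEGER, payload TEXT)")
      monitored_execute(conn, "INSERT INTO conditions VALUES (?, ?)", (1, "tag_v1"))
      monitored_execute(conn, "INSERT INTO missing_table VALUES (1)")  # logged as a warning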

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals to readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. An SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites, and transmission of information to a central database server, takes place over a secured internet connection. The information stored in the central database server is shown on the web page. The users can view the web page on the internet. A dedicated and secured web and database server (https) is used to provide information security.
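
    The application software is said to encrypt data with 256-bit AES before sending it to the central server. The Python sketch below shows what such encryption of a single tag reading could look like using the third-party cryptography package (AES in GCM mode); the field names are invented, and key management is deliberately simplified for illustration.

      # Sketch of encrypting an RFID tag reading with 256-bit AES (GCM mode) using the
      # third-party `cryptography` package; key handling here is simplified for illustration.
      import json
      import os
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      key = AESGCM.generate_key(bit_length=256)   # in practice the key would be managed securely
      aesgcm = AESGCM(key)

      reading = {"tag_id": "E200-1234", "portal": "storage-A", "timestamp": "2010-06-01T12:00:00Z"}
      nonce = os.urandom(12)
      ciphertext = aesgcm.encrypt(nonce, json.dumps(reading).encode(), None)

      # The receiving server decrypts with the same key and nonce.
      restored = json.loads(aesgcm.decrypt(nonce, ciphertext, None))
      print(restored["tag_id"])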

  19. ASGARD: an open-access database of annotated transcriptomes for emerging model arthropod species.

    PubMed

    Zeng, Victor; Extavour, Cassandra G

    2012-01-01

    The increased throughput and decreased cost of next-generation sequencing (NGS) have shifted the bottleneck in genomic research from sequencing to annotation, analysis and accessibility. This is particularly challenging for research communities working on organisms that lack the basic infrastructure of a sequenced genome, or an efficient way to utilize whatever sequence data may be available. Here we present a new database, the Assembled Searchable Giant Arthropod Read Database (ASGARD). This database is a repository and search engine for transcriptomic data from arthropods that are of high interest to multiple research communities but currently lack sequenced genomes. We demonstrate the functionality and utility of ASGARD using de novo assembled transcriptomes from the milkweed bug Oncopeltus fasciatus, the cricket Gryllus bimaculatus and the amphipod crustacean Parhyale hawaiensis. We have annotated these transcriptomes to provide putative orthology assignment, coding region determination, protein domain identification and Gene Ontology (GO) term annotation for all possible assembly products. ASGARD allows users to search all assemblies by orthology annotation, GO term annotation or Basic Local Alignment Search Tool. User-friendly features of ASGARD include search term auto-completion suggestions based on database content, the ability to download assembly product sequences in FASTA format, direct links to NCBI data for predicted orthologs and graphical representation of the location of protein domains and matches to similar sequences from the NCBI non-redundant database. ASGARD will be a useful repository for transcriptome data from future NGS studies on these and other emerging model arthropods, regardless of sequencing platform, assembly or annotation status. This database thus provides easy, one-stop access to multi-species annotated transcriptome information. We anticipate that this database will be useful for members of multiple research communities, including developmental biology, physiology, evolutionary biology, ecology, comparative genomics and phylogenomics. Database URL: asgard.rc.fas.harvard.edu.

  20. Sagace: A web-based search engine for biomedical databases in Japan

    PubMed Central

    2012-01-01

    Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the number of life science databases is now too large for researchers to grasp the features and contents of each one. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816

  1. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    DOE PAGES

    Zerkin, V. V.; Pritychenko, B.

    2018-02-04

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally, it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  2. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    NASA Astrophysics Data System (ADS)

    Zerkin, V. V.; Pritychenko, B.

    2018-04-01

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ∼22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. It is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  3. The experimental nuclear reaction data (EXFOR): Extended computer database and Web retrieval system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zerkin, V. V.; Pritychenko, B.

    The EXchange FORmat (EXFOR) experimental nuclear reaction database and the associated Web interface provide access to the wealth of low- and intermediate-energy nuclear reaction physics data. This resource is based on numerical data sets and bibliographical information of ~22,000 experiments since the beginning of nuclear science. The principles of the computer database organization, its extended contents and Web applications development are described. New capabilities for the data sets uploads, renormalization, covariance matrix, and inverse reaction calculations are presented in this paper. The EXFOR database, updated monthly, provides an essential support for nuclear data evaluation, application development, and research activities. Finally, it is publicly available at the websites of the International Atomic Energy Agency Nuclear Data Section, http://www-nds.iaea.org/exfor, the U.S. National Nuclear Data Center, http://www.nndc.bnl.gov/exfor, and the mirror sites in China, India and Russian Federation.

  4. Practical guidelines for development of web-based interventions.

    PubMed

    Chee, Wonshik; Lee, Yaelim; Chee, Eunice; Im, Eun-Ok

    2014-10-01

    Despite a recent high funding priority on technological aspects of research and the high potential impact of Web-based interventions on health, few guidelines for the development of Web-based interventions are currently available. In this article, we propose practical guidelines for the development of Web-based interventions based on an empirical study and an integrative literature review. The empirical study aimed at the development of a Web-based physical activity promotion program that was specifically tailored to Korean American midlife women. The literature review included a total of 202 articles that were retrieved through multiple databases. On the basis of the findings of the study and the literature review, we propose directions for the development of Web-based interventions in the following steps: (1) meaningfulness and effectiveness, (2) target population, (3) theoretical basis/program theory, (4) focus and objectives, (5) components, (6) technological aspects, and (7) logistics for users. The guidelines could help promote further development of Web-based interventions at this early stage of their use in nursing.

  5. A Virtual "Hello": A Web-Based Orientation to the Library.

    ERIC Educational Resources Information Center

    Borah, Eloisa Gomez

    1997-01-01

    Describes the development of Web-based library services and resources available at the Rosenfeld Library of the Anderson Graduate School of Management at University of California at Los Angeles. Highlights include library orientation sessions; virtual tours of the library; a database of basic business sources; and research strategies, including…

  6. Design and Development of Web-Based Information Literacy Tutorials

    ERIC Educational Resources Information Center

    Su, Shiao-Feng; Kuo, Jane

    2010-01-01

    The current study conducts a thorough content analysis of recently built or up-to-date high-quality web-based information literacy tutorials contributed by academic libraries in a peer-reviewed database, PRIMO. This research analyzes the topics/skills PRIMO tutorials consider essential and the teaching strategies they consider effective. The…

  7. GSP: a web-based platform for designing genome-specific primers in polyploids

    USDA-ARS?s Scientific Manuscript database

    The primary goal of this research was to develop a web-based platform named GSP for designing genome-specific primers to distinguish subgenome sequences in the polyploid genome background. GSP uses BLAST to extract homeologous sequences of the subgenomes in the existing databases, performed a multip...

  8. dictyExpress: a Dictyostelium discoideum gene expression database with an explorative data analysis web-based interface.

    PubMed

    Rot, Gregor; Parikh, Anup; Curk, Tomaz; Kuspa, Adam; Shaulsky, Gad; Zupan, Blaz

    2009-08-25

    Bioinformatics often leverages recent advancements in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm, and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability. We have developed dictyExpress, a web application that features a graphical, highly interactive explorative interface to our database that consists of more than 1000 Dictyostelium discoideum gene expression experiments. In dictyExpress, the user can select experiments and genes, perform gene clustering, view gene expression profiles across time, view gene co-expression networks, perform analyses of Gene Ontology term enrichment, and simultaneously display expression profiles for a selected gene in various experiments. Most importantly, these tasks are achieved through web applications whose components are seamlessly interlinked and immediately respond to events triggered by the user, thus providing a powerful explorative data analysis environment. dictyExpress is a precursor for a new generation of web-based bioinformatics applications with simple but powerful interactive interfaces that resemble those of the modern desktop. While dictyExpress serves mainly the Dictyostelium research community, it is relatively easy to adapt it to other datasets. We propose that the design ideas behind dictyExpress will influence the development of similar applications for other model organisms.

  9. dictyExpress: a Dictyostelium discoideum gene expression database with an explorative data analysis web-based interface

    PubMed Central

    Rot, Gregor; Parikh, Anup; Curk, Tomaz; Kuspa, Adam; Shaulsky, Gad; Zupan, Blaz

    2009-01-01

    Background Bioinformatics often leverages recent advancements in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm, and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability. Results We have developed dictyExpress, a web application that features a graphical, highly interactive explorative interface to our database that consists of more than 1000 Dictyostelium discoideum gene expression experiments. In dictyExpress, the user can select experiments and genes, perform gene clustering, view gene expression profiles across time, view gene co-expression networks, perform analyses of Gene Ontology term enrichment, and simultaneously display expression profiles for a selected gene in various experiments. Most importantly, these tasks are achieved through web applications whose components are seamlessly interlinked and immediately respond to events triggered by the user, thus providing a powerful explorative data analysis environment. Conclusion dictyExpress is a precursor for a new generation of web-based bioinformatics applications with simple but powerful interactive interfaces that resemble those of the modern desktop. While dictyExpress serves mainly the Dictyostelium research community, it is relatively easy to adapt it to other datasets. We propose that the design ideas behind dictyExpress will influence the development of similar applications for other model organisms. PMID:19706156

  10. Lynx: a database and knowledge extraction engine for integrative medicine

    PubMed Central

    Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T. Conrad; Maltsev, Natalia

    2014-01-01

    We have developed Lynx (http://lynx.ci.uchicago.edu)—a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces. PMID:24270788

  11. An Approach of Web-based Point Cloud Visualization without Plug-in

    NASA Astrophysics Data System (ADS)

    Ye, Mengxuan; Wei, Shuangfeng; Zhang, Dongmei

    2016-11-01

    With the advances in three-dimensional laser scanning technology, the demand for visualization of massive point clouds is increasingly urgent. Until a few years ago, point cloud visualization was limited to desktop-based solutions; since the introduction of WebGL, several web renderers have become available. This paper addresses the current issues in web-based point cloud visualization and proposes a method of web-based point cloud visualization without plug-ins. The method combines ASP.NET and WebGL technologies, using the spatial database PostgreSQL to store data and the open web technologies HTML5 and CSS3 to implement the user interface; an online visualization system for 3D point clouds is developed in JavaScript with web interactions. Finally, the method is applied to a real case. Experiments show that the new approach is of great practical value and avoids the shortcomings of existing WebGIS solutions.
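
    On the server side, a system like this has to hand point data from the spatial database to the WebGL client. The Python sketch below (using psycopg2 against a hypothetical PostgreSQL table) shows one simple way to pull a block of points and serialize them as JSON; the table and column names are invented, and the published system is built on ASP.NET rather than Python.

      # Server-side sketch: fetch a block of points from a hypothetical PostgreSQL table
      # and serialize them as JSON for a WebGL client. Table and column names are invented.
      import json
      import psycopg2

      def fetch_points_json(dsn, limit=100000):
          conn = psycopg2.connect(dsn)
          try:
              with conn.cursor() as cur:
                  cur.execute("SELECT x, y, z, intensity FROM point_cloud LIMIT %s", (limit,))
                  rows = cur.fetchall()
          finally:
              conn.close()
          return json.dumps({"points": [list(r) for r in rows]})

      # fetch_points_json("dbname=scans user=viewer")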

  12. Demystifying the Search Button

    PubMed Central

    McKeever, Liam; Nguyen, Van; Peterson, Sarah J.; Gomez-Perez, Sandra

    2015-01-01

    A thorough review of the literature is the basis of all research and evidence-based practice. A gold-standard efficient and exhaustive search strategy is needed to ensure all relevant citations have been captured and that the search performed is reproducible. The PubMed database comprises both the MEDLINE and non-MEDLINE databases. MEDLINE-based search strategies are robust but capture only 89% of the total available citations in PubMed. The remaining 11% include the most recent and possibly relevant citations but are only searchable through less efficient techniques. An effective search strategy must employ both the MEDLINE and the non-MEDLINE portion of PubMed to ensure all studies have been identified. The robust MEDLINE search strategies are used for the MEDLINE portion of the search. Usage of the less robust strategies is then efficiently confined to search only the remaining 11% of PubMed citations that have not been indexed for MEDLINE. The current article offers step-by-step instructions for building such a search exploring methods for the discovery of medical subject heading (MeSH) terms to search MEDLINE, text-based methods for exploring the non-MEDLINE database, information on the limitations of convenience algorithms such as the “related citations feature,” the strengths and pitfalls associated with commonly used filters, the proper usage of Boolean operators to organize a master search strategy, and instructions for automating that search through “MyNCBI” to receive search query updates by email as new citations become available. PMID:26129895
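
    The step-by-step strategy described above (MeSH searching of the MEDLINE portion combined with text-word searching restricted to non-MEDLINE records) can also be run programmatically against the NCBI E-utilities esearch service. The Python sketch below is only an example of that pattern; the topic, field tags and retmax value are placeholders rather than the article's worked example.

      # Sketch of running a combined PubMed search via the NCBI E-utilities esearch service.
      # The query string is only an example of the MeSH + text-word pattern described above.
      import requests

      ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

      medline_arm = '"Enteral Nutrition"[MeSH]'                     # MeSH search of the MEDLINE portion
      non_medline_arm = '(enteral nutrition[tiab]) NOT medline[sb]'  # text words, non-MEDLINE records only
      query = f"({medline_arm}) OR ({non_medline_arm})"

      response = requests.get(ESEARCH, params={"db": "pubmed", "term": query,
                                               "retmode": "json", "retmax": 20})
      ids = response.json()["esearchresult"]["idlist"]
      print(f"{len(ids)} PMIDs retrieved, e.g. {ids[:5]}")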

  13. The Human Transcript Database: A Catalogue of Full Length cDNA Inserts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouck, John; McLeod, Michael; Worley, Kim

    1999-09-10

    The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that provided reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to the web-based access and arrangements that made using the functions easier, the BCM Search Launcher provided unique value-added applications like the BEAUTY sequence database search tool that combined information about protein domains and sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display on the BLAST search from the NCBI web site is one such outgrowth, as is the display of protein domain search results within BLAST search results, and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.

  14. Ontology-oriented retrieval of putative microRNAs in Vitis vinifera via GrapeMiRNA: a web database of de novo predicted grape microRNAs.

    PubMed

    Lazzari, Barbara; Caprera, Andrea; Cestaro, Alessandro; Merelli, Ivan; Del Corvo, Marcello; Fontana, Paolo; Milanesi, Luciano; Velasco, Riccardo; Stella, Alessandra

    2009-06-29

    Two complete genome sequences are available for Vitis vinifera Pinot noir. Based on the sequence and gene predictions produced by the IASMA, we performed an in silico detection of putative microRNA genes and of their targets, and collected the most reliable microRNA predictions in a web database. The application is available at http://www.itb.cnr.it/ptp/grapemirna/. The program FindMiRNA was used to detect putative microRNA genes in the grape genome. A very high number of predictions was retrieved, calling for validation. Nine parameters were calculated and, based on the grape microRNAs dataset available at miRBase, thresholds were defined and applied to FindMiRNA predictions having targets in gene exons. In the resulting subset, predictions were ranked according to precursor positions and sequence similarity, and to target identity. To further validate FindMiRNA predictions, comparisons to the Arabidopsis genome, to the grape Genoscope genome, and to the grape EST collection were performed. Results were stored in a MySQL database and a web interface was prepared to query the database and retrieve predictions of interest. The GrapeMiRNA database encompasses 5,778 microRNA predictions spanning the whole grape genome. Predictions are integrated with information that can be of use in selection procedures. Tools added to the web interface also allow users to inspect predictions according to Gene Ontology classes and metabolic pathways of targets. The GrapeMiRNA database can be of help in selecting candidate microRNA genes to be validated.

  15. The McIntosh Archive: A solar feature database spanning four solar cycles

    NASA Astrophysics Data System (ADS)

    Gibson, S. E.; Malanushenko, A. V.; Hewins, I.; McFadden, R.; Emery, B.; Webb, D. F.; Denig, W. F.

    2016-12-01

    The McIntosh Archive consists of a set of hand-drawn solar Carrington maps created by Patrick McIntosh from 1964 to 2009. McIntosh used mainly H-alpha, He I 10830 Å and photospheric magnetic measurements from both ground-based and NASA satellite observations. With these he traced coronal holes, polarity inversion lines, filaments, sunspots and plage, yielding a unique 45-year record of the features associated with the large-scale solar magnetic field. We will present the results of recent efforts to preserve and digitize this archive. Most of the original hand-drawn maps have been scanned, a method for processing these scans into a digital, searchable format has been developed and streamlined, and an archival repository at NOAA's National Centers for Environmental Information (NCEI) has been created. We will demonstrate how Solar Cycle 23 data may now be accessed and how it may be utilized for scientific applications. In addition, we will discuss how this database of human-recognized features, which overlaps with the onset of high-resolution, continuous modern solar data, may act as a training set for computer feature recognition algorithms.

  16. Prototype of web-based database of surface wave investigation results for site classification

    NASA Astrophysics Data System (ADS)

    Hayashi, K.; Cakir, R.; Martin, A. J.; Craig, M. S.; Lorenzo, J. M.

    2016-12-01

    As active and passive surface wave methods are getting popular for evaluating the site response of earthquake ground motion, demand for the development of databases of investigation results is also increasing. Seismic ground motion depends not only on 1D velocity structure but also on 2D and 3D structures, so that spatial information on S-wave velocity must be considered in ground motion prediction. Such a database can support the construction of 2D and 3D underground models. Inversion in surface wave processing is essentially non-unique, so other information must be incorporated into the processing. A database of existing geophysical, geological and geotechnical investigation results can provide indispensable information to improve the accuracy and reliability of investigations. Most investigations, however, are carried out by individual organizations, and investigation results are rarely stored in a unified and organized database. To study and discuss an appropriate database and digital standard format for surface wave investigations, we developed a prototype of a web-based database to store observed data and processing results of the surface wave investigations that we have performed at more than 400 sites in the U.S. and Japan. The database was constructed on a web server using MySQL and PHP so that users can access the database through the internet from anywhere with any device. All data are registered in the database with location information, and users can search geophysical data through Google Maps. The database stores dispersion curves, horizontal-to-vertical spectral ratios and S-wave velocity profiles at each site, saved in XML files as digital data so that users can review and reuse them. The database also stores a published 3D deep basin and crustal structure, which users can refer to during the processing of surface wave data.

  17. The designing and implementation of PE teaching information resource database based on broadband network

    NASA Astrophysics Data System (ADS)

    Wang, Jian

    2017-01-01

    In order to change the traditional PE teaching mode and realize the interconnection, interworking and sharing of PE teaching resources, a distance PE teaching platform based on a broadband network is designed and a PE teaching information resource database is set up. The design of the PE teaching information resource database takes Windows NT 4/2000 Server as the operating system platform and Microsoft SQL Server 7.0 as the RDBMS, and adopts NAS technology for data storage and streaming technology for video services. The analysis of the system design and implementation shows that a dynamic PE teaching information resource sharing platform based on Web Services can realize loosely coupled collaboration as well as dynamic and active integration, and has good integration, openness and encapsulation. The distance PE teaching platform based on Web Services and the design scheme of the PE teaching information resource database can effectively realize the interconnection, interworking and sharing of PE teaching resources and adapt to the demands of the informatization of PE teaching.

  18. Facilitating quality control for spectra assignments of small organic molecules: nmrshiftdb2--a free in-house NMR database with integrated LIMS for academic service laboratories.

    PubMed

    Kuhn, Stefan; Schlörer, Nils E

    2015-08-01

    With its laboratory information management system, nmrshiftdb2 supports the integration of electronic lab administration and management into academic NMR facilities. It also offers the setup of a local database, while full access to nmrshiftdb2's World Wide Web database is granted. For lab users, this freely available system allows the submission of orders for measurement, transfers recorded data automatically or manually, and enables the download of spectra via a web interface, as well as integrated access to the prediction, search, and assignment tools of the NMR database. For the staff and lab administration, the flow of all orders can be supervised; administrative tools also include user and hardware management, a statistics functionality for accounting purposes, and a 'QuickCheck' function for assignment control, to facilitate quality control of assignments submitted to the (local) database. The laboratory information management system and database are based on a web interface as the front end and are therefore independent of the operating system in use. Copyright © 2015 John Wiley & Sons, Ltd.

  19. KernPaeP - a web-based pediatric palliative documentation system for home care.

    PubMed

    Hartz, Tobias; Verst, Hendrik; Ueckert, Frank

    2009-01-01

    KernPaeP is a new web-based online and offline documentation system, which has been developed for pediatric palliative care teams, supporting patient documentation and communication among health care professionals. It provides a reliable system making fast and secure home care documentation possible. KernPaeP is accessible online by registered users using any web browser. Home care teams use an offline version of KernPaeP running on a netbook for patient documentation on site. Identifying and medical patient data are strictly separated and stored on two database servers. The system offers a stable, enhanced two-way algorithm for synchronization between the offline component and the central database servers. KernPaeP is implemented meeting highest security standards while still maintaining high usability. The web-based documentation system allows ubiquitous and immediate access to patient data. Cumbersome paperwork is replaced by secure and comprehensive electronic documentation. KernPaeP helps save time and improve the quality of documentation. Due to development in close cooperation with pediatric palliative professionals, KernPaeP fulfils the broad needs of home-care documentation. The technique of web-based online and offline documentation is in general applicable for arbitrary home care scenarios.
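
    The abstract mentions an enhanced two-way synchronization algorithm between the offline netbook and the central servers, but does not publish its details. The Python sketch below shows only the simplest timestamp-based variant of two-way sync to make the idea concrete; conflict resolution, deletions and the security layer of the real system are deliberately omitted.

      # Greatly simplified two-way sync sketch based on last-modified timestamps; the actual
      # KernPaeP algorithm is more elaborate (conflict handling, security, deletions, ...).
      def sync(local, remote):
          """local/remote: dict record_id -> (modified_timestamp, payload)."""
          for record_id in set(local) | set(remote):
              l, r = local.get(record_id), remote.get(record_id)
              if l is None:
                  local[record_id] = r
              elif r is None:
                  remote[record_id] = l
              elif l[0] > r[0]:
                  remote[record_id] = l          # local copy is newer
              elif r[0] > l[0]:
                  local[record_id] = r           # remote copy is newer

      offline = {"visit-1": (5, "pain score 3"), "visit-2": (9, "updated medication plan")}
      server  = {"visit-1": (7, "pain score 2")}
      sync(offline, server)
      print(offline["visit-1"], server["visit-2"])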

  20. Fifteen hundred guidelines and growing: the UK database of clinical guidelines.

    PubMed

    van Loo, John; Leonard, Niamh

    2006-06-01

    The National Library for Health offers a comprehensive searchable database of nationally approved clinical guidelines, called the Guidelines Finder. This resource, commissioned in 2002, is managed and developed by the University of Sheffield Health Sciences Library. The authors introduce the historical and political dimension of guidelines and the nature of guidelines as a mechanism to ensure clinical effectiveness in practice. The article then outlines the maintenance and organisation of the Guidelines Finder database itself, the criteria for selection, who publishes guidelines and guideline formats, usage of the Guidelines Finder service and finally looks at some lessons learnt from a local library offering a national service. Clinical guidelines are central to effective clinical practice at the national, organisational and individual level. The Guidelines Finder is one of the most visited resources within the National Library for Health and is successful in answering information needs related to specific patient care, clinical research, guideline development and education.

  1. Doors for memory: A searchable database.

    PubMed

    Baddeley, Alan D; Hitch, Graham J; Quinlan, Philip T; Bowes, Lindsey; Stone, Rob

    2016-11-01

    The study of human long-term memory has for over 50 years been dominated by research on words. This is partly due to lack of suitable nonverbal materials. Experience in developing a clinical test suggested that door scenes can provide an ecologically relevant and sensitive alternative to the faces and geometrical figures traditionally used to study visual memory. In pursuing this line of research, we have accumulated over 2000 door scenes providing a database that is categorized on a range of variables including building type, colour, age, condition, glazing, and a range of other physical characteristics. We describe an illustrative study of recognition memory for 100 doors tested by yes/no, two-alternative, or four-alternative forced-choice paradigms. These stimuli, together with the full categorized database, are available through a dedicated website. We suggest that door scenes provide an ecologically relevant and participant-friendly source of material for studying the comparatively neglected field of visual long-term memory.

  2. A simple versatile solution for collecting multidimensional clinical data based on the CakePHP web application framework.

    PubMed

    Biermann, Martin

    2014-04-01

    Clinical trials aiming for regulatory approval of a therapeutic agent must be conducted according to Good Clinical Practice (GCP). Clinical Data Management Systems (CDMS) are specialized software solutions geared toward GCP trials. They are however less suited for data management in small non-GCP research projects. For use in researcher-initiated non-GCP studies, we developed a client-server database application based on the open-source CakePHP framework. The underlying MySQL database uses a simple data model based on only five data tables. The graphical user interface can be run in any web browser inside the hospital network. Data are validated upon entry. Data contained in external database systems can be imported interactively. Data are automatically anonymized on import, with the key lists identifying the subjects being logged to a restricted part of the database. Data analysis is performed by separate statistics and analysis software connecting to the database via a generic Open Database Connectivity (ODBC) interface. Since its first pilot implementation in 2011, the solution has been applied to seven different clinical research projects covering different clinical problems in different organ systems such as cancer of the thyroid and the prostate glands. This paper shows how the adoption of a generic web application framework is a feasible, flexible, low-cost, and user-friendly way of managing multidimensional research data in researcher-initiated non-GCP clinical projects. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
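
    The anonymize-on-import step described above (pseudonymized research data in one place, a key list of identities in a restricted place) is language-independent, so the sketch below illustrates it in Python even though the published system is built on PHP/CakePHP and MySQL; the field names are invented.

      # Conceptual sketch (in Python rather than the paper's PHP/CakePHP stack) of
      # anonymizing records on import: subjects get pseudonyms, and the key list linking
      # pseudonyms to identities is kept apart from the research data.
      import uuid

      key_list = {}       # restricted: identity -> pseudonym
      research_rows = []  # analysis database: pseudonymized rows only

      def import_row(row):
          identity = (row["name"], row["birth_date"])
          pseudonym = key_list.setdefault(identity, f"SUBJ-{uuid.uuid4().hex[:8]}")
          research_rows.append({"subject": pseudonym, "measurement": row["measurement"]})

      import_row({"name": "Jane Doe", "birth_date": "1970-01-01", "measurement": 4.2})
      print(research_rows)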

  3. Web-based Visualization and Query of semantically segmented multiresolution 3D Models in the Field of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.

    2014-05-01

    Many important Cultural Heritage sites have been studied over long periods of time with different technical equipment, methods and intentions by different researchers. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform bringing spatial and non-spatial databases together and providing visualization and analysis tools. Especially the 3D components of the platform use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema to organize not only segmented models but also different Levels-of-Detail and other representations of the same entity. It is further implemented in a spatial database which allows the storing of georeferenced 3D data. This enables organization and queries by semantic, geometric and spatial properties. As a service for the delivery of the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented which uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).

  4. Accessibility and quality of online information for pediatric orthopaedic surgery fellowships.

    PubMed

    Davidson, Austin R; Murphy, Robert F; Spence, David D; Kelly, Derek M; Warner, William C; Sawyer, Jeffrey R

    2014-12-01

    Pediatric orthopaedic fellowship applicants commonly use online-based resources for information on potential programs. Two primary sources are the San Francisco Match (SF Match) database and the Pediatric Orthopaedic Society of North America (POSNA) database. We sought to determine the accessibility and quality of information that could be obtained by using these 2 sources. The online databases of the SF Match and POSNA were reviewed to determine the availability of embedded program links or external links for the included programs. If not available in the SF Match or POSNA data, Web sites for listed programs were located with a Google search. All identified Web sites were analyzed for accessibility, content volume, and content quality. At the time of online review, 50 programs, offering 68 positions, were listed in the SF Match database. Although 46 programs had links included with their information, 36 (72%) of them simply listed http://www.sfmatch.org as their unique Web site. Ten programs (20%) had external links listed, but only 2 (4%) linked directly to the fellowship web page. The POSNA database does not list any links to the 47 programs it lists, which offer 70 positions. On the basis of a Google search of the 50 programs listed in the SF Match database, web pages were found for 35. Of programs with independent web pages, all had a description of the program and 26 (74%) described their application process. Twenty-nine (83%) listed research requirements, 22 (63%) described the rotation schedule, and 12 (34%) discussed the on-call expectations. A contact telephone number and/or email address was provided by 97% of programs. Twenty (57%) listed both the coordinator and fellowship director, 9 (26%) listed the coordinator only, 5 (14%) listed the fellowship director only, and 1 (3%) had no contact information given. The SF Match and POSNA databases provide few direct links to fellowship Web sites, and individual program Web sites either do not exist or do not effectively convey information about the programs. Improved accessibility and accurate information online would allow potential applicants to obtain information about pediatric fellowships in a more efficient manner.

  5. A Web-Based GIS for Reporting Water Usage in the High Plains Underground Water Conservation District

    NASA Astrophysics Data System (ADS)

    Jia, M.; Deeds, N.; Winckler, M.

    2012-12-01

    The High Plains Underground Water Conservation District (HPWD) is the largest and oldest of the Texas water conservation districts, and oversees approximately 1.7 million irrigated acres. Recent rule changes have motivated HPWD to develop a more automated system to allow owners and operators to report well locations, meter locations, meter readings, the association between meters and wells, and contiguous acres. INTERA, Inc. has developed a web-based interactive system for HPWD water users to report water usage and for the district to better manage its water resources. The HPWD web management system utilizes state-of-the-art GIS techniques, including cloud-based Amazon EC2 virtual machine, ArcGIS Server, ArcSDE and ArcGIS Viewer for Flex, to support web-based water use management. The system enables users to navigate to their area of interest using a well-established base-map and perform a variety of operations and inquiries against their spatial features. The application currently has six components: user privilege management, property management, water meter registration, area registration, meter-well association and water use report. The system is composed of two main databases: spatial database and non-spatial database. With the help of Adobe Flex application at the front end and ArcGIS Server as the middle-ware, the spatial feature geometry and attributes update will be reflected immediately in the back end. As a result, property owners, along with the HPWD staff, collaborate together to weave the fabric of the spatial database. Interactions between the spatial and non-spatial databases are established by Windows Communication Foundation (WCF) services to record water-use report, user-property associations, owner-area associations, as well as meter-well associations. Mobile capabilities will be enabled in the near future for field workers to collect data and synchronize them to the spatial database. The entire solution is built on a highly scalable cloud server to dynamically allocate the computational resources so as to reduce the cost on security and hardware maintenance. In addition to the default capabilities provided by ESRI, customizations include 1) enabling interactions between spatial and non-spatial databases, 2) providing role-based feature editing, 3) dynamically filtering spatial features on the map based on user accounts and 4) comprehensive data validation.

  6. Six Online Periodical Databases: A Librarian's View.

    ERIC Educational Resources Information Center

    Willems, Harry

    1999-01-01

    Compares the following World Wide Web-based periodical databases, focusing on their usefulness in K-12 school libraries: EBSCO, Electric Library, Facts on File, SIRS, Wilson, and UMI. Search interfaces, display options, help screens, printing, home access, copyright restrictions, database administration, and making a decision are discussed. A…

  7. Web-based interventions for menopause: A systematic integrated literature review.

    PubMed

    Im, Eun-Ok; Lee, Yaelim; Chee, Eunice; Chee, Wonshik

    2017-01-01

    Advances in computer and Internet technologies have allowed health care providers to develop, use, and test various types of Web-based interventions for their practice and research. Indeed, an increasing number of Web-based interventions have recently been developed and tested in health care fields. Despite the great potential for Web-based interventions to improve practice and research, little is known about the current status of Web-based interventions, especially those related to menopause. To identify the current status of Web-based interventions used in the field of menopause, a literature review was conducted using multiple databases, with the keywords "online," "Internet," "Web," "intervention," and "menopause." Using these keywords, a total of 18 eligible articles were analyzed to identify the current status of Web-based interventions for menopause. Six themes reflecting the current status of Web-based interventions for menopause were identified: (a) there existed few Web-based intervention studies on menopause; (b) Web-based decision support systems were mainly used; (c) there was a lack of detail on the interventions; (d) there was a lack of guidance on the use of Web-based interventions; (e) counselling was frequently combined with Web-based interventions; and (f) the pros and cons were similar to those of Web-based methods in general. Based on these findings, directions for future Web-based interventions for menopause are provided. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. Vaxjo: a web-based vaccine adjuvant database and its application for analysis of vaccine adjuvants and their uses in vaccine development.

    PubMed

    Sayers, Samantha; Ulysse, Guerlain; Xiang, Zuoshuang; He, Yongqun

    2012-01-01

    Vaccine adjuvants are compounds that enhance host immune responses to co-administered antigens in vaccines. Vaxjo is a web-based central database and analysis system that curates, stores, and analyzes vaccine adjuvants and their usages in vaccine development. Basic information of a vaccine adjuvant stored in Vaxjo includes adjuvant name, components, structure, appearance, storage, preparation, function, safety, and vaccines that use this adjuvant. Reliable references are curated and cited. Bioinformatics scripts are developed and used to link vaccine adjuvants to different adjuvanted vaccines stored in the general VIOLIN vaccine database. Presently, 103 vaccine adjuvants have been curated in Vaxjo. Among these adjuvants, 98 have been used in 384 vaccines stored in VIOLIN against over 81 pathogens, cancers, or allergies. All these vaccine adjuvants are categorized and analyzed based on adjuvant types, pathogens used, and vaccine types. As a use case study of vaccine adjuvants in infectious disease vaccines, the adjuvants used in Brucella vaccines are specifically analyzed. A user-friendly web query and visualization interface is developed for interactive vaccine adjuvant search. To support data exchange, the information of vaccine adjuvants is stored in the Vaccine Ontology (VO) in the Web Ontology Language (OWL) format.

  9. Vaxjo: A Web-Based Vaccine Adjuvant Database and Its Application for Analysis of Vaccine Adjuvants and Their Uses in Vaccine Development

    PubMed Central

    Sayers, Samantha; Ulysse, Guerlain; Xiang, Zuoshuang; He, Yongqun

    2012-01-01

    Vaccine adjuvants are compounds that enhance host immune responses to co-administered antigens in vaccines. Vaxjo is a web-based central database and analysis system that curates, stores, and analyzes vaccine adjuvants and their usages in vaccine development. Basic information of a vaccine adjuvant stored in Vaxjo includes adjuvant name, components, structure, appearance, storage, preparation, function, safety, and vaccines that use this adjuvant. Reliable references are curated and cited. Bioinformatics scripts are developed and used to link vaccine adjuvants to different adjuvanted vaccines stored in the general VIOLIN vaccine database. Presently, 103 vaccine adjuvants have been curated in Vaxjo. Among these adjuvants, 98 have been used in 384 vaccines stored in VIOLIN against over 81 pathogens, cancers, or allergies. All these vaccine adjuvants are categorized and analyzed based on adjuvant types, pathogens used, and vaccine types. As a use case study of vaccine adjuvants in infectious disease vaccines, the adjuvants used in Brucella vaccines are specifically analyzed. A user-friendly web query and visualization interface is developed for interactive vaccine adjuvant search. To support data exchange, the information of vaccine adjuvants is stored in the Vaccine Ontology (VO) in the Web Ontology Language (OWL) format. PMID:22505817

  10. An advanced web query interface for biological databases

    PubMed Central

    Latendresse, Mario; Karp, Peter D.

    2010-01-01

    Although most web-based biological databases (DBs) offer some type of web-based form to allow users to author DB queries, these query forms are quite restricted in the complexity of DB queries that they can formulate. They can typically query only one DB, and can query only a single type of object at a time (e.g. genes) with no possible interaction between the objects—that is, in SQL parlance, no joins are allowed between DB objects. Writing precise queries against biological DBs is usually left to a programmer skillful enough in complex DB query languages like SQL. We present a web interface for building precise queries for biological DBs that can construct much more precise queries than most web-based query forms, yet that is user friendly enough to be used by biologists. It supports queries containing multiple conditions, and connecting multiple object types without using the join concept, which is unintuitive to biologists. This interactive web interface is called the Structured Advanced Query Page (SAQP). Users interactively build up a wide range of query constructs. Interactive documentation within the SAQP describes the schema of the queried DBs. The SAQP is based on BioVelo, a query language based on list comprehension. The SAQP is part of the Pathway Tools software and is available as part of several bioinformatics web sites powered by Pathway Tools, including the BioCyc.org site that contains more than 500 Pathway/Genome DBs. PMID:20624715
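
    The abstract states that the SAQP is built on BioVelo, a query language based on list comprehension. The Python fragment below is not BioVelo; it only uses Python's own list comprehensions over mock gene and pathway records to show how a condition spanning two object types can be expressed without explicit join syntax, which is the idea the interface exposes to biologists.

      # Sketch of the list-comprehension style of query that BioVelo is described as using;
      # the gene/pathway records below are mock data, not a real Pathway Tools database.
      genes = [
          {"id": "G1", "name": "trpA", "pathway": "P1"},
          {"id": "G2", "name": "lacZ", "pathway": "P2"},
      ]
      pathways = [
          {"id": "P1", "name": "tryptophan biosynthesis"},
          {"id": "P2", "name": "lactose degradation"},
      ]

      # "Genes whose pathway name mentions 'biosynthesis'": two object types are connected
      # by a condition inside the comprehension, with no explicit join syntax.
      result = [(g["name"], p["name"])
                for g in genes
                for p in pathways
                if g["pathway"] == p["id"] and "biosynthesis" in p["name"]]
      print(result)  # [('trpA', 'tryptophan biosynthesis')]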

  11. OpenFlyData: an exemplar data web integrating gene expression data on the fruit fly Drosophila melanogaster.

    PubMed

    Miles, Alistair; Zhao, Jun; Klyne, Graham; White-Cooper, Helen; Shotton, David

    2010-10-01

    Integrating heterogeneous data across distributed sources is a major requirement for in silico bioinformatics supporting translational research. For example, genome-scale data on patterns of gene expression in the fruit fly Drosophila melanogaster are widely used in functional genomic studies in many organisms to inform candidate gene selection and validate experimental results. However, current data integration solutions tend to be heavy weight, and require significant initial and ongoing investment of effort. Development of a common Web-based data integration infrastructure (a.k.a. data web), using Semantic Web standards, promises to alleviate these difficulties, but little is known about the feasibility, costs, risks or practical means of migrating to such an infrastructure. We describe the development of OpenFlyData, a proof-of-concept system integrating gene expression data on D. melanogaster, combining Semantic Web standards with light-weight approaches to Web programming based on Web 2.0 design patterns. To support researchers designing and validating functional genomic studies, OpenFlyData includes user-facing search applications providing intuitive access to and comparison of gene expression data from FlyAtlas, the BDGP in situ database, and FlyTED, using data from FlyBase to expand and disambiguate gene names. OpenFlyData's services are also openly accessible, and are available for reuse by other bioinformaticians and application developers. Semi-automated methods and tools were developed to support labour- and knowledge-intensive tasks involved in deploying SPARQL services. These include methods for generating ontologies and relational-to-RDF mappings for relational databases, which we illustrate using the FlyBase Chado database schema; and methods for mapping gene identifiers between databases. The advantages of using Semantic Web standards for biomedical data integration are discussed, as are open issues. In particular, although the performance of open source SPARQL implementations is sufficient to query gene expression data directly from user-facing applications such as Web-based data fusions (a.k.a. mashups), we found open SPARQL endpoints to be vulnerable to denial-of-service-type problems, which must be mitigated to ensure reliability of services based on this standard. These results are relevant to data integration activities in translational bioinformatics. The gene expression search applications and SPARQL endpoints developed for OpenFlyData are deployed at http://openflydata.org. FlyUI, a library of JavaScript widgets providing re-usable user-interface components for Drosophila gene expression data, is available at http://flyui.googlecode.com. Software and ontologies to support transformation of data from FlyBase, FlyAtlas, BDGP and FlyTED to RDF are available at http://openflydata.googlecode.com. SPARQLite, an implementation of the SPARQL protocol, is available at http://sparqlite.googlecode.com. All software is provided under the GPL version 3 open source license.
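
    Beyond the user-facing applications, OpenFlyData's SPARQL endpoints were intended for reuse by other developers. The Python sketch below shows the generic access pattern with the SPARQLWrapper package; the endpoint URL and query are placeholders, since the original openflydata.org services may no longer be live.

      # Sketch of querying a SPARQL endpoint with the SPARQLWrapper package; the endpoint
      # URL and the query are placeholders shown only to illustrate the access pattern.
      from SPARQLWrapper import SPARQLWrapper, JSON

      endpoint = SPARQLWrapper("https://sparql.example.org/query")  # placeholder endpoint
      endpoint.setQuery("""
          SELECT ?gene ?label WHERE {
              ?gene <http://www.w3.org/2000/01/rdf-schema#label> ?label .
          } LIMIT 10
      """)
      endpoint.setReturnFormat(JSON)
      results = endpoint.query().convert()
      for binding in results["results"]["bindings"]:
          print(binding["gene"]["value"], binding["label"]["value"])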

  12. Milliarcsecond Astronomy with the CHARA Array

    NASA Astrophysics Data System (ADS)

    Schaefer, Gail; ten Brummelaar, Theo; Gies, Douglas; Jones, Jeremy; Farrington, Christopher

    2018-01-01

    The Center for High Angular Resolution Astronomy offers 50 nights per year of open access time at the CHARA Array. The Array consists of six telescopes linked together as an interferometer, providing sub-milliarcsecond resolution in the optical and near-infrared. The Array enables a variety of scientific studies, including measuring stellar angular diameters, imaging stellar shapes and surface features, mapping the orbits of close binary companions, and resolving circumstellar environments. The open access time is part of an NSF/MSIP funded program to open the CHARA Array to the broader astronomical community. As part of the program, we will build a searchable database for the CHARA data archive and run a series of one-day community workshops at different locations across the country to expand the user base for stellar interferometry and encourage new scientific investigations with the CHARA Array.

  13. Bacteria use type IV pili to walk upright and detach from surfaces.

    PubMed

    Gibiansky, Maxsim L; Conrad, Jacinta C; Jin, Fan; Gordon, Vernita D; Motto, Dominick A; Mathewson, Margie A; Stopka, Wiktor G; Zelasko, Daria C; Shrout, Joshua D; Wong, Gerard C L

    2010-10-08

    Bacterial biofilms are structured multicellular communities involved in a broad range of infections. Knowing how free-swimming bacteria adapt their motility mechanisms near surfaces is crucial for understanding the transition between planktonic and biofilm phenotypes. By translating microscopy movies into searchable databases of bacterial behavior, we identified fundamental type IV pili-driven mechanisms for Pseudomonas aeruginosa surface motility involved in distinct foraging strategies. Bacteria stood upright and "walked" with trajectories optimized for two-dimensional surface exploration. Vertical orientation facilitated surface detachment and could influence biofilm morphology.

  14. ACToR – Aggregated Computational Toxicology Resource ...

    EPA Pesticide Factsheets

    ACToR (Aggregated Computational Toxicology Resource) is a collection of databases collated or developed by the US EPA National Center for Computational Toxicology (NCCT). More than 200 sources of publicly available data on environmental chemicals have been brought together and made searchable by chemical name and other identifiers, and by chemical structure. Data includes chemical structure, physico-chemical values, in vitro assay data and in vivo toxicology data. Chemicals include, but are not limited to, high and medium production volume industrial chemicals, pesticides (active and inert ingredients), and potential ground and drinking water contaminants.

  15. Implications of Web of Science journal impact factor for scientific output evaluation in 16 institutions and investigators' opinion.

    PubMed

    Wáng, Yì-Xiáng J; Arora, Richa; Choi, Yongdoo; Chung, Hsiao-Wen; Egorov, Vyacheslav I; Frahm, Jens; Kudo, Hiroyuki; Kuyumcu, Suleyman; Laurent, Sophie; Loffroy, Romaric; Maurea, Simone; Morcos, Sameh K; Ni, Yicheng; Oei, Edwin H G; Sabarudin, Akmal; Yu, Xin

    2014-12-01

    Journal-based metrics are known not to be ideal for measuring the quality of an individual researcher's scientific output. For the current report, 16 contributors from Hong Kong SAR, India, Korea, Taiwan, Russia, Germany, Japan, Turkey, Belgium, France, Italy, UK, The Netherlands, Malaysia, and USA were invited. The following six questions were asked: (I) Are the Web of Science journal impact factor (IF) and Institute for Scientific Information (ISI) citation counts the main academic output evaluation tools in your institution and your country? (II) How do Google citations count in your institution and your country? (III) If a paper is published in a non-SCI journal but is included in PubMed and searchable by Google Scholar, how is it valued compared with a paper published in a journal with an IF? (IV) Do you value publishing a piece of your work in a non-SCI journal as much as a paper published in a journal with an IF? (V) What is your personal view on the metric measurement of scientific output? (VI) Overall, do you think the Web of Science journal IF is beneficial, or is it actually doing more harm? The results show that the IF and ISI citations heavily affect academic life in most of the institutions. Google citation counts, while convenient, speedy and already in use, have not gained wide 'official' recognition as a tool for scientific output evaluation.

  16. A community effort to construct a gravity database for the United States and an associated Web portal

    USGS Publications Warehouse

    Keller, Gordon R.; Hildenbrand, T.G.; Kucks, R.; Webring, M.; Briesacher, A.; Rujawitz, K.; Hittleman, A.M.; Roman, D.R.; Winester, D.; Aldouri, R.; Seeley, J.; Rasillo, J.; Torres, R.; Hinze, W. J.; Gates, A.; Kreinovich, V.; Salayandia, L.

    2006-01-01

    Potential field data (gravity and magnetic measurements) are both useful and cost-effective tools for many geologic investigations. Significant amounts of these data are traditionally in the public domain. A new magnetic database for North America was released in 2002, and as a result, a cooperative effort between government agencies, industry, and universities to compile an upgraded digital gravity anomaly database, grid, and map for the conterminous United States was initiated and is the subject of this paper. This database is being crafted into a data system that is accessible through a Web portal. This data system features the database, software tools, and convenient access. The Web portal will enhance the quality and quantity of data contributed to the gravity database that will be a shared community resource. The system's totally digital nature ensures that it will be flexible so that it can grow and evolve as new data, processing procedures, and modeling and visualization tools become available. Another goal of this Web-based data system is facilitation of the efforts of researchers and students who wish to collect data from regions currently not represented adequately in the database. The primary goal of upgrading the United States gravity database and this data system is to provide more reliable data that support societal and scientific investigations of national importance. An additional motivation is the international intent to compile an enhanced North American gravity database, which is critical to understanding regional geologic features, the tectonic evolution of the continent, and other issues that cross national boundaries. © 2006 Geological Society of America. All rights reserved.

  17. Survey Software Evaluation

    DTIC Science & Technology

    2009-01-01

    Excerpt from a comparison table of Web-based survey packages (columns truncated): database servers supported include Oracle 9i/10g, MySQL, and MS SQL Server; operating systems supported include Windows 2003 Server and Windows 2000 Server (32 bit); Web servers supported include WebStar (Mac OS X), SunOne, and Internet Information Services (IIS). ...challenges of Web-based surveys are: 1) identifying the best Commercial Off the Shelf (COTS) Web-based survey packages to serve the particular

  18. A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

    NASA Astrophysics Data System (ADS)

    Pfaff, Matthias; Krcmar, Helmut

    2018-03-01

    In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
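
    The core of the architecture described above is a mapping layer that links ontology concepts to columns in the external benchmark databases, so that ontology-level queries can be resolved to SQL. The Python sketch below is a simplified, hypothetical illustration of such a layer; the concept names, table, and columns are invented for the example and are not taken from the ITBM ontology:

      import sqlite3

      # Hypothetical mapping from ontology concepts to (table, column) pairs.
      CONCEPT_MAP = {
          "itbm:ServerCost":  ("benchmark_2017", "server_cost_eur"),
          "itbm:StorageCost": ("benchmark_2017", "storage_cost_eur"),
      }

      def query_concept(conn, concept):
          """Resolve an ontology concept to SQL and return all of its values."""
          table, column = CONCEPT_MAP[concept]
          cursor = conn.execute(f"SELECT {column} FROM {table}")
          return [row[0] for row in cursor.fetchall()]

      # Toy database standing in for one of the distributed benchmark data sets.
      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE benchmark_2017 (server_cost_eur REAL, storage_cost_eur REAL)")
      conn.execute("INSERT INTO benchmark_2017 VALUES (1200.0, 300.0)")

      print(query_concept(conn, "itbm:ServerCost"))  # [1200.0]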

  19. Information Retrieval Strategies of Millennial Undergraduate Students in Web and Library Database Searches

    ERIC Educational Resources Information Center

    Porter, Brandi

    2009-01-01

    Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web-based and library-based online information retrieval systems. The content, ease of use, and required search…

  20. Development and Implementation of a Web-based Evaluation System for an Internal Medicine Residency Program.

    ERIC Educational Resources Information Center

    Rosenberg, Mark E.; Watson, Kathleen; Paul, Jeevan; Miller, Wesley; Harris, Ilene; Valdivia, Tomas D.

    2001-01-01

    Describes the development and implementation of a World Wide Web-based electronic evaluation system for the internal medicine residency program at the University of Minnesota. Features include automatic entry of evaluations by faculty or students into a database, compliance tracking, reminders, extensive reporting capabilities, automatic…

  1. 76 FR 1137 - Publicly Available Consumer Product Safety Information Database: Notice of Public Web Conferences

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ...: Notice of Public Web Conferences AGENCY: Consumer Product Safety Commission. ACTION: Notice. SUMMARY: The Consumer Product Safety Commission (``Commission,'' ``CPSC,'' or ``we'') is announcing two Web conferences... database (``Database''). The Web conferences will be webcast live from the Commission's headquarters in...

  2. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    Construction of 3D geological visualization systems has attracted increasing attention in the GIS, computer modeling, simulation and visualization fields. Such systems not only effectively support geological interpretation and analysis work, but can also help improve professional education in the geosciences. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was used to provide access to the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Finally, borehole data acquired from a geological survey were used to test the system, and the test results show that the methods described in this paper have practical application value.
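
    The first step of the workflow above extracts borehole records from Excel spreadsheets and loads them into a server-side database (SQL Server in the paper). As a rough sketch of that step only, the Python snippet below uses pandas with SQLite standing in for SQL Server; the file name and column names are assumptions:

      import sqlite3
      import pandas as pd

      # Hypothetical spreadsheet of borehole records; column names are assumed
      # (e.g. hole_id, x, y, depth, lithology).
      boreholes = pd.read_excel("boreholes.xlsx")

      # Load into a relational database (SQLite here as a stand-in for SQL Server).
      conn = sqlite3.connect("geology.db")
      boreholes.to_sql("borehole", conn, if_exists="replace", index=False)

      # The web tier can then serve 3D visualization queries against this table.
      print(pd.read_sql("SELECT hole_id, depth FROM borehole LIMIT 5", conn))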

  3. Climate change and human health: what are the research trends? A scoping review protocol.

    PubMed

    Herlihy, Niamh; Bar-Hen, Avner; Verner, Glenn; Fischer, Helen; Sauerborn, Rainer; Depoux, Anneliese; Flahault, Antoine; Schütte, Stefanie

    2016-12-23

    For 28 years, the Intergovernmental Panel on Climate Change (IPCC) has been assessing the potential risks associated with anthropogenic climate change. Although interest in climate change and health is growing, the implications arising from their interaction remain understudied. Generating a greater understanding of the health impacts of climate change could be a key step in inciting some of the changes necessary to decelerate global warming. A long-term and broad overview of the existing scientific literature in the field of climate change and health is currently missing in order to ensure that all priority areas are being adequately addressed. In this paper we outline our methods to conduct a scoping review of the published peer-reviewed literature on climate change and health between 1990 and 2015. A detailed search strategy will be used to search the PubMed and Web of Science databases. Specific inclusion and exclusion criteria will be applied in order to capture the most relevant literature in the time frame chosen. Data will be extracted, categorised and coded to allow for statistical analysis of the results. No ethical approval was required for this study. A searchable database of climate change and health publications will be developed and a manuscript will be compiled for publication and dissemination of the findings. We anticipate that this study will allow us to map the trends observed in publications over the 25-year time period in climate change and health research. It will also identify the research areas with the highest volume of publications as well as highlight the research trends in climate change and health. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  4. Climate change and human health: what are the research trends? A scoping review protocol

    PubMed Central

    Herlihy, Niamh; Bar-Hen, Avner; Verner, Glenn; Fischer, Helen; Sauerborn, Rainer; Depoux, Anneliese; Flahault, Antoine; Schütte, Stefanie

    2016-01-01

    Introduction For 28 years, the Intergovernmental Panel on Climate Change (IPCC) has been assessing the potential risks associated with anthropogenic climate change. Although interest in climate change and health is growing, the implications arising from their interaction remain understudied. Generating a greater understanding of the health impacts of climate change could be a key step in inciting some of the changes necessary to decelerate global warming. A long-term and broad overview of the existing scientific literature in the field of climate change and health is currently missing in order to ensure that all priority areas are being adequately addressed. In this paper we outline our methods to conduct a scoping review of the published peer-reviewed literature on climate change and health between 1990 and 2015. Methods and analysis A detailed search strategy will be used to search the PubMed and Web of Science databases. Specific inclusion and exclusion criteria will be applied in order to capture the most relevant literature in the time frame chosen. Data will be extracted, categorised and coded to allow for statistical analysis of the results. Ethics and dissemination No ethical approval was required for this study. A searchable database of climate change and health publications will be developed and a manuscript will be compiled for publication and dissemination of the findings. We anticipate that this study will allow us to map the trends observed in publications over the 25-year time period in climate change and health research. It will also identify the research areas with the highest volume of publications as well as highlight the research trends in climate change and health. PMID:28011805

  5. CottonGen: a genomics, genetics and breeding database for cotton research

    USDA-ARS?s Scientific Manuscript database

    CottonGen (http://www.cottongen.org) is a curated and integrated web-based relational database providing access to publicly available genomic, genetic and breeding data for cotton. CottonGen supersedes CottonDB and the Cotton Marker Database, with enhanced tools for easier data sharing, mining, vis...

  6. The Footprint Database and Web Services of the Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Dobos, László; Varga-Verebélyi, Erika; Verdugo, Eva; Teyssier, David; Exter, Katrina; Valtchanov, Ivan; Budavári, Tamás; Kiss, Csaba

    2016-10-01

    Data from the Herschel Space Observatory is freely available to the public but no uniformly processed catalogue of the observations has been published so far. To date, the Herschel Science Archive does not contain the exact sky coverage (footprint) of individual observations and supports search for measurements based on bounding circles only. Drawing on previous experience in implementing footprint databases, we built the Herschel Footprint Database and Web Services for the Herschel Space Observatory to provide efficient search capabilities for typical astronomical queries. The database was designed with the following main goals in mind: (a) provide a unified data model for meta-data of all instruments and observational modes, (b) quickly find observations covering a selected object and its neighbourhood, (c) quickly find every observation in a larger area of the sky, (d) allow for finding solar system objects crossing observation fields. As a first step, we developed a unified data model of observations of all three Herschel instruments for all pointing and instrument modes. Then, using telescope pointing information and observational meta-data, we compiled a database of footprints. As opposed to methods using pixellation of the sphere, we represent sky coverage in an exact geometric form allowing for precise area calculations. For easier handling of Herschel observation footprints with rather complex shapes, two algorithms were implemented to reduce the outline. Furthermore, a new visualisation tool to plot footprints with various spherical projections was developed. Indexing of the footprints using Hierarchical Triangular Mesh makes it possible to quickly find observations based on sky coverage, time and meta-data. The database is accessible via a web site http://herschel.vo.elte.hu and also as a set of REST web service functions, which makes it readily usable from programming environments such as Python or IDL. The web service allows downloading footprint data in various formats including Virtual Observatory standards.
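
    Because the footprint database is exposed as a set of REST web-service functions, it can be queried from Python with a plain HTTP client. The endpoint path and parameter names in the sketch below are assumptions for illustration only; the actual function names are documented on the project web site:

      import requests

      # Hypothetical REST call: find observations covering a given sky position.
      # The path and parameter names are assumed, not taken from the service docs.
      params = {"ra": 83.822, "dec": -5.391, "radius": 0.1, "format": "json"}
      response = requests.get("http://herschel.vo.elte.hu/ws/footprint/search", params=params)
      response.raise_for_status()

      for observation in response.json():
          print(observation)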

  7. A Holistic, Similarity-Based Approach for Personalized Ranking in Web Databases

    ERIC Educational Resources Information Center

    Telang, Aditya

    2011-01-01

    With the advent of the Web, the notion of "information retrieval" has acquired a completely new connotation and currently encompasses several disciplines ranging from traditional forms of text and data retrieval in unstructured and structured repositories to retrieval of static and dynamic information from the contents of the surface and deep Web.…

  8. Web based tools for data manipulation, visualisation and validation with interactive georeferenced graphs

    NASA Astrophysics Data System (ADS)

    Ivankovic, D.; Dadic, V.

    2009-04-01

    Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are imported from various files. All of these parameters require visualization, validation and manipulation from the research vessel or the scientific institution, as well as public presentation. For these purposes a web-based system was developed, containing dynamic SQL procedures and Java applets. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod PL/SQL). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. The graph is realized as a dynamically generated web page containing a Java applet. The mapping tool and graph are georeferenced, which means that a click on part of a graph automatically initiates a zoom or places a marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows data definitions to be separated from the code for data manipulation. Adding a new parameter to the system requires only a data definition and description, without programming an interface for this kind of data.

  9. Do Librarians Really Do That? Or Providing Custom, Fee-Based Services.

    ERIC Educational Resources Information Center

    Whitmore, Susan; Heekin, Janet

    This paper describes some of the fee-based, custom services provided by National Institutes of Health (NIH) Library to NIH staff, including knowledge management, clinical liaisons, specialized database searching, bibliographic database development, Web resource guide development, and journal management. The first section discusses selecting the…

  10. Semantic SenseLab: implementing the vision of the Semantic Web in neuroscience

    PubMed Central

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2011-01-01

    Summary Objective Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Methods Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. Conclusion We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/ PMID:20006477
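
    As a sketch of how a client could work with one of the published OWL ontologies, the Python snippet below loads an ontology with the rdflib library and enumerates its classes; the file name is a placeholder assumption, not a verified location of a SenseLab ontology:

      from rdflib import Graph
      from rdflib.namespace import RDF, RDFS, OWL

      graph = Graph()
      # Placeholder file name standing in for a downloaded SenseLab OWL ontology.
      graph.parse("senselab_neurondb.owl", format="xml")

      # List every owl:Class together with its human-readable label, if present.
      for cls in graph.subjects(RDF.type, OWL.Class):
          label = graph.value(cls, RDFS.label)
          print(cls, label)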

  11. Semantic SenseLab: Implementing the vision of the Semantic Web in neuroscience.

    PubMed

    Samwald, Matthias; Chen, Huajun; Ruttenberg, Alan; Lim, Ernest; Marenco, Luis; Miller, Perry; Shepherd, Gordon; Cheung, Kei-Hoi

    2010-01-01

    Integrative neuroscience research needs a scalable informatics framework that enables semantic integration of diverse types of neuroscience data. This paper describes the use of the Web Ontology Language (OWL) and other Semantic Web technologies for the representation and integration of molecular-level data provided by several databases of the SenseLab suite of neuroscience databases. Based on the original database structure, we semi-automatically translated the databases into OWL ontologies with manual addition of semantic enrichment. The SenseLab ontologies are extensively linked to other biomedical Semantic Web resources, including the Subcellular Anatomy Ontology, Brain Architecture Management System, the Gene Ontology, BIRNLex and UniProt. The SenseLab ontologies have also been mapped to the Basic Formal Ontology and Relation Ontology, which helps ease interoperability with many other existing and future biomedical ontologies for the Semantic Web. In addition, approaches to representing contradictory research statements are described. The SenseLab ontologies are designed for use on the Semantic Web, which enables their integration into a growing collection of biomedical information resources. We demonstrate that our approach can yield significant potential benefits and that the Semantic Web is rapidly becoming mature enough to realize its anticipated promises. The ontologies are available online at http://neuroweb.med.yale.edu/senselab/. 2009 Elsevier B.V. All rights reserved.

  12. REFOLDdb: a new and sustainable gateway to experimental protocols for protein refolding.

    PubMed

    Mizutani, Hisashi; Sugawara, Hideaki; Buckle, Ashley M; Sangawa, Takeshi; Miyazono, Ken-Ichi; Ohtsuka, Jun; Nagata, Koji; Shojima, Tomoki; Nosaki, Shohei; Xu, Yuqun; Wang, Delong; Hu, Xiao; Tanokura, Masaru; Yura, Kei

    2017-04-24

    More than 7000 papers related to "protein refolding" have been published to date, with approximately 300 reports each year during the last decade. Whilst some of these papers provide experimental protocols for protein refolding, a survey in the structural life science communities showed a necessity for a comprehensive database for refolding techniques. We therefore have developed a new resource - "REFOLDdb" that collects refolding techniques into a single, searchable repository to help researchers develop refolding protocols for proteins of interest. We based our resource on the existing REFOLD database, which has not been updated since 2009. We redesigned the data format to be more concise, allowing consistent representations among data entries compared with the original REFOLD database. The remodeled data architecture enhances the search efficiency and improves the sustainability of the database. After an exhaustive literature search we added experimental refolding protocols from reports published 2009 to early 2017. In addition to this new data, we fully converted and integrated existing REFOLD data into our new resource. REFOLDdb contains 1877 entries as of March 17th, 2017, and is freely available at http://p4d-info.nig.ac.jp/refolddb/. REFOLDdb is a unique database for the life sciences research community, providing annotated information for designing new refolding protocols and customizing existing methodologies. We envisage that this resource will find wide utility across broad disciplines that rely on the production of pure, active, recombinant proteins. Furthermore, the database also provides a useful overview of the recent trends and statistics in refolding technology development.

  13. Integration of Web-based and PC-based clinical research databases.

    PubMed

    Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M

    2004-01-01

    We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.

  14. A web-based quantitative signal detection system on adverse drug reaction in China.

    PubMed

    Li, Chanjuan; Xia, Jielai; Deng, Jianxiong; Chen, Wenge; Wang, Suzhen; Jiang, Jing; Chen, Guanquan

    2009-07-01

    To establish a web-based quantitative signal detection system for adverse drug reactions (ADRs) based on spontaneous reporting to the Guangdong province drug-monitoring database in China. Using Microsoft Visual Basic and Active Server Pages programming languages and SQL Server 2000, a web-based system with three software modules was programmed to perform data preparation and association detection, and to generate reports. Information component (IC), the internationally recognized measure of disproportionality for quantitative signal detection, was integrated into the system, and its capacity for signal detection was tested with ADR reports collected from 1 January 2002 to 30 June 2007 in Guangdong. A total of 2,496 associations including known signals were mined from the test database. Signals (e.g., cefradine-induced hematuria) were found early by using the IC analysis. In addition, 291 drug-ADR associations were alerted for the first time in the second quarter of 2007. The system can be used for the detection of significant associations from the Guangdong drug-monitoring database and could be an extremely useful adjunct to the expert assessment of very large numbers of spontaneously reported ADRs for the first time in China.
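
    The information component (IC) used for disproportionality-based signal detection compares the observed reporting frequency of a drug-ADR pair with the frequency expected if drug and reaction were reported independently. A minimal Python sketch of the commonly used point estimate, IC = log2((N_xy * N) / (N_x * N_y)), is shown below with invented counts; note that systems such as the one described (following the BCPNN approach) additionally apply Bayesian shrinkage and credibility intervals, which are omitted here:

      import math

      def information_component(n_xy, n_x, n_y, n_total):
          """Point estimate of the information component for a drug-ADR pair.

          n_xy    -- reports mentioning both the drug and the reaction
          n_x     -- reports mentioning the drug
          n_y     -- reports mentioning the reaction
          n_total -- all reports in the database
          """
          observed = n_xy / n_total
          expected = (n_x / n_total) * (n_y / n_total)
          return math.log2(observed / expected)

      # Invented counts for illustration; a pair is conventionally flagged when the
      # IC (or the lower bound of its credibility interval) exceeds zero.
      print(information_component(n_xy=25, n_x=400, n_y=900, n_total=120000))  # about 3.06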

  15. How best to structure interdisciplinary primary care teams: the study protocol for a systematic review with narrative framework synthesis.

    PubMed

    Wranik, W Dominika; Hayden, Jill A; Price, Sheri; Parker, Robin M N; Haydt, Susan M; Edwards, Jeanette M; Suter, Esther; Katz, Alan; Gambold, Liesl L; Levy, Adrian R

    2016-10-04

    Western publicly funded health care systems increasingly rely on interdisciplinary teams to support primary care delivery and management of chronic conditions. This knowledge synthesis focuses on what is known in the academic and grey literature about optimal structural characteristics of teams. Its goal is to assess which factors contribute to the effective functioning of interdisciplinary primary care teams and improved health system outcomes, with specific focus on (i) team structure contribution to team process, (ii) team process contribution to primary care goals, and (iii) team structure contribution to primary care goals. The systematic search of academic literature focuses on four chronic conditions and co-morbidities. Within this scope, qualitative and quantitative studies that assess the effects of team characteristics (funding, governance, organization) on care process and patient outcomes will be searched. Electronic databases (Ovid MEDLINE, Embase, CINAHL, PAIS, Web of Science) will be searched systematically. Online web-based searches will be supported by the Grey Matters Tool. Studies will be included, if they report on interdisciplinary primary care in publicly funded Western health systems, and address the relationships between team structure, process, and/or patient outcomes. Studies will be selected in a three-stage screening process (title/abstract/full text) by two independent reviewers in each stage. Study quality will be assessed using the Mixed Methods Assessment Tool. An a priori framework will be applied to data extraction, and a narrative framework approach is used for the synthesis. Using an integrated knowledge translation approach, an electronic decision support tool will be developed for decision makers. It will be searchable along two axes of inquiry: (i) what primary care goals are supported by specific team characteristics and (ii) how should teams be structured to support specific primary care goals? The results of this evidence review will contribute directly to the design of interdisciplinary primary care teams. The optimized design will support the goals of primary care, contributing to the improved health of populations. PROSPERO CRD42016041884.

  16. On-line Geoscience Data Resources for Today's Undergraduates

    NASA Astrophysics Data System (ADS)

    Goodwillie, A. M.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; Arko, R.; O'Hara, S.; Ferrini, V.; Leung, A.; Bonckzowski, J.

    2008-12-01

    Broadening the experience of undergraduates can be achieved by enabling free, unrestricted and convenient access to real scientific data. With funding from the U.S. National Science Foundation, the Marine Geoscience Data System (MGDS) (http://www.marine-geo.org/) serves as the integrated data portal for various NSF-funded projects and provides free public access to, and preservation of, a wide variety of marine and terrestrial data including rock, fluid, biology and sediment sample information, underway geophysical data and multibeam bathymetry, water column and multi-channel seismics data. Users can easily view cruise tracks and sample and station locations against a backdrop of a multi-resolution global digital elevation model. A Search For Data web page rapidly extracts data holdings from the database and can be filtered on data and device type, field program ID, investigator name, geographical and date bounds. The data access experience is boosted by MGDS's use of standardised OGC-compliant Web Services to support uniform programmatic interfaces. GeoMapApp (http://www.geomapapp.org/), a free MGDS data visualization tool, supports map-based dynamic exploration of a broad suite of geosciences data. Built-in land and marine data sets include tectonic plate boundary compilations, DSDP/ODP core logs, earthquake events, seafloor photos, and submersible dive tracks. Seamless links take users to data held by external partner repositories including PetDB, UNAVCO, IRIS and NGDC. Users can generate custom maps and grids and import their own data sets and grids. A set of short, video-style on-line tutorials familiarises users step-by-step with GeoMapApp functionality (http://www.geomapapp.org/tutorials/). Virtual Ocean (http://www.virtualocean.org/) combines the functionality of GeoMapApp with a 3-D earth browser built using the NASA WorldWind API for a powerful new data resource. MGDS education involvement (http://www.marine-geo.org/, go to Education tab) includes the searchable Media Bank of images and video; KML files for viewing several MGDS data sets in Google Earth (tm); and support in developing undergraduate-level teaching modules using NSF-MARGINS data. Examples of many of these data sets will be shown.
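
    The abstract notes that data access is exposed through standardised OGC-compliant Web Services. As a generic illustration of what such a uniform programmatic interface looks like, the Python snippet below issues a standard WMS GetCapabilities request; the base URL is a placeholder assumption, not a verified MGDS service address:

      import requests

      # Placeholder base URL for an OGC Web Map Service (assumed for illustration).
      WMS_URL = "http://www.marine-geo.org/services/wms"

      params = {
          "service": "WMS",
          "version": "1.3.0",
          "request": "GetCapabilities",  # standard OGC operation listing the layers on offer
      }
      response = requests.get(WMS_URL, params=params)
      response.raise_for_status()
      print(response.text[:500])  # start of the XML capabilities document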

  17. 76 FR 54807 - Notice of Proposed Information Collection: IMLS Museum Web Database: MuseumsCount.gov

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ...: IMLS Museum Web Database: MuseumsCount.gov AGENCY: Institute of Museum and Library Services, National..., and the general public. Information such as name, address, phone, e-mail, Web site, congressional...: IMLS Museum Web Database, MuseumsCount.gov . OMB Number: To be determined. Agency Number: 3137...

  18. NCBI GEO: mining millions of expression profiles--database and tools.

    PubMed

    Barrett, Tanya; Suzek, Tugba O; Troup, Dennis B; Wilhite, Stephen E; Ngau, Wing-Chi; Ledoux, Pierre; Rudnev, Dmitry; Lash, Alex E; Fujibuchi, Wataru; Edgar, Ron

    2005-01-01

    The Gene Expression Omnibus (GEO) at the National Center for Biotechnology Information (NCBI) is the largest fully public repository for high-throughput molecular abundance data, primarily gene expression data. The database has a flexible and open design that allows the submission, storage and retrieval of many data types. These data include microarray-based experiments measuring the abundance of mRNA, genomic DNA and protein molecules, as well as non-array-based technologies such as serial analysis of gene expression (SAGE) and mass spectrometry proteomic technology. GEO currently holds over 30,000 submissions representing approximately half a billion individual molecular abundance measurements, for over 100 organisms. Here, we describe recent database developments that facilitate effective mining and visualization of these data. Features are provided to examine data from both experiment- and gene-centric perspectives using user-friendly Web-based interfaces accessible to those without computational or microarray-related analytical expertise. The GEO database is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo.
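
    In addition to the Web-based interfaces described above, GEO metadata can be queried programmatically through NCBI's E-utilities. The minimal Python sketch below searches the GEO DataSets database (db=gds) via the public esearch endpoint; the search term is only an example:

      import requests

      # NCBI E-utilities esearch against the GEO DataSets database (db=gds).
      EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
      params = {
          "db": "gds",
          "term": "breast cancer[All Fields] AND Homo sapiens[Organism]",
          "retmax": 5,
          "retmode": "json",
      }
      response = requests.get(EUTILS, params=params)
      response.raise_for_status()

      ids = response.json()["esearchresult"]["idlist"]
      print(ids)  # internal IDs of matching GEO DataSets records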

  19. LigSearch: a knowledge-based web server to identify likely ligands for a protein target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, Tjaart A. P. de; Laskowski, Roman A.; Duban, Mark-Eugene

    LigSearch is a web server aimed at predicting ligands that might bind to and stabilize a given protein. Identifying which ligands might bind to a protein before crystallization trials could provide a significant saving in time and resources. Using a protein sequence and/or structure, the system searches against a variety of databases, combining available knowledge, and provides a clustered and ranked output of possible ligands. LigSearch can be accessed at http://www.ebi.ac.uk/thornton-srv/databases/LigSearch.

  20. Application of the Coastal and Marine Ecological Classification Standard to ROV Video Data for Enhanced Analysis of Deep-Sea Habitats in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Ruby, C.; Skarke, A. D.; Mesick, S.

    2016-02-01

    The Coastal and Marine Ecological Classification Standard (CMECS) is a network of common nomenclature that provides a comprehensive framework for organizing physical, biological, and chemical information about marine ecosystems. It was developed by the National Oceanic and Atmospheric Administration (NOAA) Coastal Services Center, in collaboration with other federal agencies and academic institutions, as a means for scientists to more easily access, compare, and integrate marine environmental data from a wide range of sources and time frames. CMECS has been endorsed by the Federal Geographic Data Committee (FGDC) as a national metadata standard. The research presented here is focused on the application of CMECS to deep-sea video and environmental data collected by the NOAA ROV Deep Discoverer and the NOAA Ship Okeanos Explorer in the Gulf of Mexico in 2011-2014. Specifically, a spatiotemporal index of the physical, chemical, biological, and geological features observed in ROV video records was developed in order to allow scientists otherwise unfamiliar with the specific content of existing video data to rapidly determine the abundance and distribution of features of interest, and thus evaluate the applicability of those video data to their research. CMECS units (setting, component, or modifier) for seafloor images extracted from high-definition ROV video data were established based upon visual assessment as well as analysis of coincident environmental sensor (temperature, conductivity), navigation (ROV position, depth, attitude), and log (narrative dive summary) data. The resulting classification units were integrated into easily searchable textual and geo-databases as well as an interactive web map. The spatial distribution and associations of deep-sea habitats as indicated by CMECS classifications are described, and optimized methodological approaches for application of CMECS to deep-sea video and environmental data are presented.
