48 CFR 804.1102 - Vendor Information Pages (VIP) Database.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...
Saunders, Brian; Lyon, Stephen; Day, Matthew; Riley, Brenda; Chenette, Emily; Subramaniam, Shankar
2008-01-01
The UCSD-Nature Signaling Gateway Molecule Pages (http://www.signaling-gateway.org/molecule) provides essential information on more than 3800 mammalian proteins involved in cellular signaling. The Molecule Pages contain expert-authored and peer-reviewed information based on the published literature, complemented by regularly updated information derived from public data source references and sequence analysis. The expert-authored data includes both a full-text review about the molecule, with citations, and highly structured data for bioinformatics interrogation, including information on protein interactions and states, transitions between states and protein function. The expert-authored pages are anonymously peer reviewed by the Nature Publishing Group. The Molecule Pages data is present in an object-relational database format and is freely accessible to the authors, the reviewers and the public from a web browser that serves as a presentation layer. The Molecule Pages are supported by several applications that along with the database and the interfaces form a multi-tier architecture. The Molecule Pages and the Signaling Gateway are routinely accessed by a very large research community. PMID:17965093
Resource Purpose:The Watershed Information Network is a set of about 30 web pages that are organized by topic. These pages access existing databases like the American Heritage Rivers Services database and Surf Your Watershed. WIN in itself has no data or data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The system is developed to collect, process, store and present the information provided by radio frequency identification (RFID) devices. The system contains three parts: the application software, the database and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through the application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals to readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated and secured web and database server. Two captured screen samples, one each for storage and transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely.
There are multiple local computers managing different sites or transport vehicles. Control from remote sites, and transmission of information to a central database server, take place over a secured Internet connection. The information stored in the central database server is shown on the web page. The users can view the web page on the Internet. A dedicated and secured web and database server (HTTPS) is used to provide information security.
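The update path described above (a reading is converted to readable information, serialized, sent to the database, and checked against preset alarm criteria) can be sketched as follows. This is an illustrative assumption, not the system's actual schema: the XML layout, tag names and threshold are all invented.

```python
# Sketch of the described pipeline: serialize one RFID reading to XML and
# flag it when a preset alarm threshold is exceeded. The XML element names
# and the threshold value are hypothetical, not the system's real schema.
import xml.etree.ElementTree as ET

ALARM_THRESHOLD = 40.0  # hypothetical preset limit

def reading_to_xml(tag_id: str, value: float) -> str:
    """Serialize one RFID tag reading to an XML fragment."""
    root = ET.Element("reading", attrib={"tag": tag_id})
    ET.SubElement(root, "value").text = str(value)
    return ET.tostring(root, encoding="unicode")

def check_alarm(value: float, threshold: float = ALARM_THRESHOLD) -> bool:
    """True when a reading exceeds the preset alarm threshold."""
    return value > threshold

payload = reading_to_xml("TAG-0042", 41.5)
print(payload)            # <reading tag="TAG-0042"><value>41.5</value></reading>
print(check_alarm(41.5))  # True
```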
Code of Federal Regulations, 2011 CFR
2011-07-01
... identified as such by VA's Veterans Benefits Administration and listed in its database of veterans and family...-owned small businesses and works with the Small Business Administration's Veterans Business Development... business concern that has verified status in the VetBiz Vendor Information Pages database. Primary industry...
This page is the starting point for EZ Query. It describes how to select key data elements from EPA's Facility Information Database and Geospatial Reference Database to build a tabular report or Comma-Separated Values (CSV) files for downloading.
PseudoBase: a database with RNA pseudoknots.
van Batenburg, F H; Gultyaev, A P; Pleij, C W; Ng, J; Oliehoek, J
2000-01-01
PseudoBase is a database containing structural, functional and sequence data related to RNA pseudoknots. It can be reached at http://wwwbio.LeidenUniv.nl/~Batenburg/PKB.html. This page directs the user to a retrieval page from which a particular pseudoknot can be chosen, to a submission page that enables the user to add pseudoknot information to the database, or to an informative page that elaborates on the various aspects of the database. For each pseudoknot, 12 items are stored, e.g. the nucleotides of the region that contains the pseudoknot, the stem positions of the pseudoknot, the EMBL accession number of the sequence that contains this pseudoknot and the support that can be given regarding the reliability of the pseudoknot. Access is via a small number of steps, using 16 different categories. The development process was done by applying the evolutionary methodology for software development rather than the classical waterfall model or the more modern spiral model.
AgeFactDB--the JenAge Ageing Factor Database--towards data integration in ageing research.
Hühne, Rolf; Thalheim, Torsten; Sühnel, Jürgen
2014-01-01
AgeFactDB (http://agefactdb.jenage.de) is a database aimed at the collection and integration of ageing phenotype data including lifespan information. Ageing factors are considered to be genes, chemical compounds or other factors such as dietary restriction, whose action results in a changed lifespan or another ageing phenotype. Any information related to the effects of ageing factors is called an observation and is presented on observation pages. To provide concise access to the complete information for a particular ageing factor, corresponding observations are also summarized on ageing factor pages. In a first step, ageing-related data were primarily taken from existing databases such as the Ageing Gene Database--GenAge, the Lifespan Observations Database and the Dietary Restriction Gene Database--GenDR. In addition, we have started to include new ageing-related information. Based on homology data taken from the HomoloGene Database, AgeFactDB also provides observation and ageing factor pages of genes that are homologous to known ageing-related genes. These homologues are considered as candidate or putative ageing-related genes. AgeFactDB offers a variety of search and browse options, and also allows the download of ageing factor or observation lists in TSV, CSV and XML formats.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... Request; FEMA Mitigation Success Story Database AGENCY: Federal Emergency Management Agency, DHS. ACTION... Database. Type of information collection: Revision of a currently approved information collection. ...
A radiology department intranet: development and applications.
Willing, S J; Berland, L L
1999-01-01
An intranet is a "private Internet" that uses the protocols of the World Wide Web to share information resources within a company or with the company's business partners and clients. The hardware requirements for an intranet begin with a dedicated Web server permanently connected to the departmental network. The heart of a Web server is the hypertext transfer protocol (HTTP) service, which receives a page request from a client's browser and transmits the page back to the client. Although knowledge of hypertext markup language (HTML) is not essential for authoring a Web page, a working familiarity with HTML is useful, as is knowledge of programming and database management. Security can be ensured by using scripts to write information in hidden fields or by means of "cookies." Interfacing databases and database management systems with the Web server and conforming the user interface to HTML syntax can be achieved by means of the common gateway interface (CGI), Active Server Pages (ASP), or other methods. An intranet in a radiology department could include the following types of content: on-call schedules, work schedules and a calendar, a personnel directory, resident resources, memorandums and discussion groups, software for a radiology information system, and databases.
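The database-to-HTML step that the CGI or ASP layer performs can be sketched in a few lines. This is a minimal illustration, not the department's actual system: sqlite3 stands in for the radiology information system database, and the table and column names are invented.

```python
# Minimal sketch of a CGI-style handler: query a database table and render
# the rows as an HTML page. sqlite3 and the "oncall" table are stand-ins
# for whatever database the department actually runs.
import sqlite3
from html import escape

def render_schedule(rows):
    """Render (name, shift) rows as a small HTML table, escaping content."""
    cells = "".join(
        f"<tr><td>{escape(name)}</td><td>{escape(shift)}</td></tr>"
        for name, shift in rows
    )
    return f"<html><body><table>{cells}</table></body></html>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE oncall (name TEXT, shift TEXT)")
conn.execute("INSERT INTO oncall VALUES ('Dr. Lee', 'nights')")
page = render_schedule(conn.execute("SELECT name, shift FROM oncall"))
print(page)
```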
Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian
2012-01-01
Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database, however, when matching criteria were more conservative, there were no observed differences in error between the databases. PMID:23066385
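The positional-error quartiles reported above come from comparing each database location against its GPS-measured location. A standard way to compute such errors is the great-circle (haversine) distance; the sketch below uses made-up coordinates and is not the study's actual code or data.

```python
# Great-circle distance between a database location and its GPS-measured
# location, then summary statistics over the errors. The coordinate pairs
# are invented for illustration.
import math
import statistics

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

pairs = [  # (db_lat, db_lon, gps_lat, gps_lon), hypothetical
    (44.2312, -76.4860, 44.2313, -76.4861),
    (44.2290, -76.4951, 44.2294, -76.4950),
]
errors = [haversine_m(*p) for p in pairs]
print(statistics.median(errors))
```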
Plant Genome Resources at the National Center for Biotechnology Information
Wheeler, David L.; Smith-White, Brian; Chetvernin, Vyacheslav; Resenchuk, Sergei; Dombrowski, Susan M.; Pechous, Steven W.; Tatusova, Tatiana; Ostell, James
2005-01-01
The National Center for Biotechnology Information (NCBI) integrates data from more than 20 biological databases through a flexible search and retrieval system called Entrez. A core Entrez database, Entrez Nucleotide, includes GenBank and is tightly linked to the NCBI Taxonomy database, the Entrez Protein database, and the scientific literature in PubMed. A suite of more specialized databases for genomes, genes, gene families, gene expression, gene variation, and protein domains dovetails with the core databases to make Entrez a powerful system for genomic research. Linked to the full range of Entrez databases is the NCBI Map Viewer, which displays aligned genetic, physical, and sequence maps for eukaryotic genomes including those of many plants. A specialized plant query page allows maps from all plant genomes covered by the Map Viewer to be searched in tandem to produce a display of aligned maps from several species. PlantBLAST searches against the sequences shown in the Map Viewer allow BLAST alignments to be viewed within a genomic context. In addition, precomputed sequence similarities, such as those for proteins offered by BLAST Link, enable fluid navigation from unannotated to annotated sequences, quickening the pace of discovery. NCBI Web pages for plants, such as Plant Genome Central, complete the system by providing centralized access to NCBI's genomic resources as well as links to organism-specific Web pages beyond NCBI. PMID:16010002
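The Entrez system described above is also scriptable through NCBI's E-utilities interface. The sketch below only composes an `esearch` request URL from the documented `db`, `term` and `retmax` parameters; it makes no network call, and the query term is just an example.

```python
# Compose an NCBI E-utilities esearch URL (no network request is made).
# db/term/retmax are documented esearch parameters; the query is an example.
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db: str, term: str, retmax: int = 20) -> str:
    """Build an esearch URL for a query against one Entrez database."""
    query = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{EUTILS}/esearch.fcgi?{query}"

print(esearch_url("nucleotide", "Arabidopsis[Organism]"))
```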
NASA Astrophysics Data System (ADS)
Gross, M. B.; Mayernik, M. S.; Rowan, L. R.; Khan, H.; Boler, F. M.; Maull, K. E.; Stott, D.; Williams, S.; Corson-Rikert, J.; Johns, E. M.; Daniels, M. D.; Krafft, D. B.
2015-12-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, an EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to address connectivity gaps across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page will show, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can also be queried using SPARQL, a query language for semantic data. EarthCollab will also extend the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. 
For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. Additional extensions, including enhanced geospatial capabilities, will be developed following task-centered usability testing.
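VIVO stores its objects as RDF triples that can be queried with SPARQL. The toy triple store below only illustrates that linking model (dataset to author, dataset to type) in plain Python; the identifiers and predicate names are invented, and this is not VIVO's actual data model or API.

```python
# Toy triple store illustrating the subject-predicate-object links that let
# a VIVO page list, e.g., all datasets by one author. All names are invented.
triples = {
    ("dataset:42", "hasAuthor", "person:smith"),
    ("dataset:42", "hasType", "GPS velocities"),
    ("dataset:7", "hasAuthor", "person:smith"),
}

def match(s=None, p=None, o=None):
    """Return the triples matching a pattern; None acts as a wildcard."""
    return {
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    }

# All datasets by person:smith, akin to the dataset list on an author page:
print(sorted(t[0] for t in match(p="hasAuthor", o="person:smith")))
```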
Using the Saccharomyces Genome Database (SGD) for analysis of genomic information
Skrzypek, Marek S.; Hirschman, Jodi
2011-01-01
Analysis of genomic data requires access to software tools that place the sequence-derived information in the context of biology. The Saccharomyces Genome Database (SGD) integrates functional information about budding yeast genes and their products with a set of analysis tools that facilitate exploring their biological details. This unit describes how the various types of functional data available at SGD can be searched, retrieved, and analyzed. Starting with the guided tour of the SGD Home page and Locus Summary page, this unit highlights how to retrieve data using YeastMine, how to visualize genomic information with GBrowse, how to explore gene expression patterns with SPELL, and how to use Gene Ontology tools to characterize large-scale datasets. PMID:21901739
The Web-Database Connection Tools for Sharing Information on the Campus Intranet.
ERIC Educational Resources Information Center
Thibeault, Nancy E.
This paper evaluates four tools for creating World Wide Web pages that interface with Microsoft Access databases: DB Gateway, Internet Database Assistant (IDBA), Microsoft Internet Database Connector (IDC), and Cold Fusion. The system requirements and features of each tool are discussed. A sample application, "The Virtual Help Desk"…
Accounting Data to Web Interface Using PERL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hargeaves, C
2001-08-13
This document will explain the process to create a web interface for the accounting information generated by the High Performance Storage System (HPSS) accounting report feature. The accounting report contains useful data but it is not easily accessed in a meaningful way. The accounting report is the only way to see summarized storage usage information. The first step is to take the accounting data, make it meaningful and store the modified data in persistent databases. The second step is to generate the various user interfaces, HTML pages, that will be used to access the data. The third step is to transfer all required files to the web server. The web pages pass parameters to Common Gateway Interface (CGI) scripts that generate dynamic web pages and graphs. The end result is a web page with specific information presented in text with or without graphs. The accounting report has a specific format that allows the use of regular expressions to verify if a line is storage data. Each storage data line is stored in a detailed database file with a name that includes the run date. The detailed database is used to create a summarized database file that also uses run date in its name. The summarized database is used to create the group.html web page that includes a list of all storage users. Scripts that query the database folder to build a list of available databases generate two additional web pages. A master script that is run monthly as part of a cron job, after the accounting report has completed, manages all of these individual scripts. All scripts are written in the PERL programming language. Whenever possible, data manipulation scripts are written as filters. All scripts are written to be single source, which means they will function properly on both the open and closed networks at LLNL. The master script handles the command line inputs for all scripts, file transfers to the web server and records run information in a log file.
The rest of the scripts manipulate the accounting data or use the files created to generate HTML pages. Each script will be described in detail herein. The following is a brief description of HPSS taken directly from an HPSS web site. ''HPSS is a major development project, which began in 1993 as a Cooperative Research and Development Agreement (CRADA) between government and industry. The primary objective of HPSS is to move very large data objects between high performance computers, workstation clusters, and storage libraries at speeds many times faster than is possible with today's software systems. For example, HPSS can manage parallel data transfers from multiple network-connected disk arrays at rates greater than 1 Gbyte per second, making it possible to access high definition digitized video in real time.'' The HPSS accounting report is a canned report whose format is controlled by the HPSS developers.
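The report parser described above used PERL regular expressions to keep only the storage-data lines; the same filter idea can be sketched in Python. The line format assumed here (user, gigabytes, file count) is a made-up stand-in for the actual HPSS accounting layout, which is not specified in the text.

```python
# Filter-style parser: keep only lines matching the storage-data format and
# yield typed fields. The (user, gbytes, files) layout is a hypothetical
# stand-in for the real HPSS accounting report format.
import re

STORAGE_LINE = re.compile(r"^(\w+)\s+([\d.]+)\s+(\d+)$")

def parse_report(lines):
    """Yield (user, gbytes, files) for lines matching the storage format."""
    for line in lines:
        m = STORAGE_LINE.match(line.strip())
        if m:
            yield m.group(1), float(m.group(2)), int(m.group(3))

report = ["HPSS Accounting Report", "alice 12.5 340", "bob 7.0 88"]
print(list(parse_report(report)))
# → [('alice', 12.5, 340), ('bob', 7.0, 88)]
```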
Automating Information Discovery Within the Invisible Web
NASA Astrophysics Data System (ADS)
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term known as the "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available in accessing it.
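The indexer step described above, identifying links and keywords in a fetched page, can be sketched with the standard-library HTML parser. This covers only the parsing, not crawling, ranking or storage, and the sample HTML is invented.

```python
# Minimal indexer: collect href links and visible words from one HTML page.
# Parsing only; fetching and index storage are out of scope for this sketch.
from html.parser import HTMLParser

class Indexer(HTMLParser):
    """Collect hyperlinks and visible words from an HTML document."""
    def __init__(self):
        super().__init__()
        self.links, self.words = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]

    def handle_data(self, data):
        self.words += data.split()

idx = Indexer()
idx.feed('<p>deep web <a href="/about">about</a></p>')
print(idx.links)  # ['/about']
print(idx.words)  # ['deep', 'web', 'about']
```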
78 FR 28848 - Information Collection Activities; Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... Quality's (AHRQ) Hospital Survey on Patient Safety Culture Comparative Database.'' In accordance with the... for Healthcare Research and Quality's (AHRQ) Hospital Survey on Patient Safety Culture Comparative... SOPS) Comparative Database; OMB NO. 0935- ...
NREL: U.S. Life Cycle Inventory Database - Project Management Team
Project Management Team. Information about the U.S. Life Cycle Inventory (LCI) Database project management team is listed on this page. Additional project information is available about the U.S. LCI Database.
Accessibility and quality of online information for pediatric orthopaedic surgery fellowships.
Davidson, Austin R; Murphy, Robert F; Spence, David D; Kelly, Derek M; Warner, William C; Sawyer, Jeffrey R
2014-12-01
Pediatric orthopaedic fellowship applicants commonly use online-based resources for information on potential programs. Two primary sources are the San Francisco Match (SF Match) database and the Pediatric Orthopaedic Society of North America (POSNA) database. We sought to determine the accessibility and quality of information that could be obtained by using these 2 sources. The online databases of the SF Match and POSNA were reviewed to determine the availability of embedded program links or external links for the included programs. If not available in the SF Match or POSNA data, Web sites for listed programs were located with a Google search. All identified Web sites were analyzed for accessibility, content volume, and content quality. At the time of online review, 50 programs, offering 68 positions, were listed in the SF Match database. Although 46 programs had links included with their information, 36 (72%) of them simply listed http://www.sfmatch.org as their unique Web site. Ten programs (20%) had external links listed, but only 2 (4%) linked directly to the fellowship web page. The POSNA database does not list any links to the 47 programs it lists, which offer 70 positions. On the basis of a Google search of the 50 programs listed in the SF Match database, web pages were found for 35. Of programs with independent web pages, all had a description of the program and 26 (74%) described their application process. Twenty-nine (83%) listed research requirements, 22 (63%) described the rotation schedule, and 12 (34%) discussed the on-call expectations. A contact telephone number and/or email address was provided by 97% of programs. Twenty (57%) listed both the coordinator and fellowship director, 9 (26%) listed the coordinator only, 5 (14%) listed the fellowship director only, and 1 (3%) had no contact information given. 
The SF Match and POSNA databases provide few direct links to fellowship Web sites, and individual program Web sites either do not exist or do not effectively convey information about the programs. Improved accessibility and accurate information online would allow potential applicants to obtain information about pediatric fellowships in a more efficient manner.
75 FR 61765 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-06
... September 17, 2010 (FR Doc. 201-023260), on page 57037, regarding the Black Lung Clinics Program Database... hours Database 15 1 15 20 300 Dated: September 28, 2010. Sahira Rafiullah, Director, Division of Policy...
Updated regulation curation model at the Saccharomyces Genome Database
Engel, Stacia R; Skrzypek, Marek S; Hellerstedt, Sage T; Wong, Edith D; Nash, Robert S; Weng, Shuai; Binkley, Gail; Sheppard, Travis K; Karra, Kalpana; Cherry, J Michael
2018-01-01
The Saccharomyces Genome Database (SGD) provides comprehensive, integrated biological information for the budding yeast Saccharomyces cerevisiae, along with search and analysis tools to explore these data, enabling the discovery of functional relationships between sequence and gene products in fungi and higher organisms. We have recently expanded our data model for regulation curation to address regulation at the protein level in addition to transcription, and are presenting the expanded data on the ‘Regulation’ pages at SGD. These pages include a summary describing the context under which the regulator acts, manually curated and high-throughput annotations showing the regulatory relationships for that gene and a graphical visualization of its regulatory network and connected networks. For genes whose products regulate other genes or proteins, the Regulation page includes Gene Ontology enrichment analysis of the biological processes in which those targets participate. For DNA-binding transcription factors, we also provide other information relevant to their regulatory function, such as DNA binding site motifs and protein domains. As with other data types at SGD, all regulatory relationships and accompanying data are available through YeastMine, SGD’s data warehouse based on InterMine. Database URL: http://www.yeastgenome.org PMID:29688362
Code of Federal Regulations, 2012 CFR
2012-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Code of Federal Regulations, 2011 CFR
2011-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Code of Federal Regulations, 2013 CFR
2013-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Code of Federal Regulations, 2014 CFR
2014-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Code of Federal Regulations, 2010 CFR
2010-07-01
... online Vendor Information Pages database forms at http://www.VetBiz.gov, and has been examined by VA's Center for Veterans Enterprise. Such businesses appear in the VIP database as “verified.” (b) Good... database and notify the business by phone and mail. Whenever CVE determines that the applicant submitted...
Thermal Protection System Imagery Inspection Management System -TIIMS
NASA Technical Reports Server (NTRS)
Goza, Sharon; Melendrez, David L.; Henningan, Marsha; LaBasse, Daniel; Smith, Daniel J.
2011-01-01
TIIMS is used during the inspection phases of every mission to provide the mission management team with quick visual feedback, detailed inspection data, and determinations. The system consists of a visual Web page interface, an SQL database, and a graphical image generator, which together allow a user to quickly ascertain the status of the inspection process and the current determination for any problem zones. TIIMS allows inspection engineers to enter their determinations into the database and to link pertinent images and video to those entries. The database then assigns criteria to each zone and tile and, via query, sends the information to a graphical image generation program. Using the official TIPS database tile positions and sizes, this program creates images of the current status of the orbiter, coloring zones and tiles according to a predefined key code. The images are displayed on a Web page that uses customized JavaScript to show the appropriate zone of the orbiter based on the location of the user's cursor. Selecting a zone brings up its close-up graphic and database entry, with links back into the database to the images the inspection engineer used in making the recorded determination. The status shown for each inspection zone changes, with the appropriate color code, as determinations are refined.
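The query-and-color step described above can be sketched as follows, using Python's sqlite3 as a stand-in for the actual SQL database; the schema, status values, and key code are hypothetical, not the real TIIMS design:

```python
import sqlite3

# Hypothetical key code mapping inspection determinations to display colors.
COLOR_KEY = {"cleared": "green", "under_review": "yellow", "problem": "red"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tiles (zone TEXT, tile_id TEXT, determination TEXT)")
conn.executemany("INSERT INTO tiles VALUES (?, ?, ?)", [
    ("nose_cap", "T-001", "cleared"),
    ("port_wing", "T-102", "problem"),
    ("port_wing", "T-103", "under_review"),
])

def zone_colors(zone):
    """Query determinations for a zone and map them to display colors,
    as an image generator would before rendering the orbiter graphic."""
    rows = conn.execute(
        "SELECT tile_id, determination FROM tiles WHERE zone = ?", (zone,))
    return {tile: COLOR_KEY[det] for tile, det in rows}

colors = zone_colors("port_wing")
# colors == {"T-102": "red", "T-103": "yellow"}
```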
Human Mitochondrial Protein Database
National Institute of Standards and Technology Data Gateway
SRD 131 Human Mitochondrial Protein Database (Web, free access) The Human Mitochondrial Protein Database (HMPDb) provides comprehensive data on mitochondrial and human nuclear encoded proteins involved in mitochondrial biogenesis and function. This database consolidates information from SwissProt, LocusLink, Protein Data Bank (PDB), GenBank, Genome Database (GDB), Online Mendelian Inheritance in Man (OMIM), Human Mitochondrial Genome Database (mtDB), MITOMAP, Neuromuscular Disease Center and Human 2-D PAGE Databases. This database is intended as a tool to aid not only in studying the mitochondrion but also the associated diseases.
75 FR 3908 - Agency Information Collection Activities: Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-25
... Comparative Database.'' In accordance with the Paperwork Reduction Act, 44 U.S.C. 3501-3520, AHRQ invites the... Assessment of Healthcare Providers and Systems (CAHPS) Health Plan Survey Comparative Database. [[Page 3909..., and the Centers for Medicare & Medicaid Services (CMS) to provide comparative data to support public...
Rice proteome database: a step toward functional analysis of the rice genome.
Komatsu, Setsuko
2005-09-01
The technique of proteome analysis using two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) has the power to monitor global changes that occur in the protein complement of tissues and subcellular compartments. In this study, the proteins of rice were cataloged, a rice proteome database was constructed, and a functional characterization of some of the identified proteins was undertaken. Proteins extracted from various tissues and subcellular compartments in rice were separated by 2D-PAGE and an image analyzer was used to construct a display of the proteins. The Rice Proteome Database contains 23 reference maps based on 2D-PAGE of proteins from various rice tissues and subcellular compartments. These reference maps comprise 13129 identified proteins, and the amino acid sequences of 5092 proteins are entered in the database. Major proteins involved in growth or stress responses were identified using the proteome approach. Some of these proteins, including a beta-tubulin, calreticulin, and ribulose-1,5-bisphosphate carboxylase/oxygenase activase in rice, have unexpected functions. The information obtained from the Rice Proteome Database will aid in cloning the genes for and predicting the function of unknown proteins.
2010-06-11
MODELING WITH IMPLEMENTED GBI AND MD DATA (STEADY STATE GB MIGRATION) PAGE 48 5. FORMATION AND ANALYSIS OF GB PROPERTIES DATABASE PAGE 53 5.1...Relative GB energy for specified GBM averaged on possible GBIs PAGE 53 5.2. Database validation on available experimental data PAGE 56 5.3. Comparison...PAGE 70 Fig. 6.11. MC Potts Rex. and GG software: (a) modeling volume analysis; (b) searching for GB energy value within included database. PAGE
COASTAL AND MARINE DATABASE SYSTEMS
Data miners trying to dig out new nuggets of insight from massive piles of rapidly expanding Web data; software bots skittering across the billion-page Web looking for specific information prey: Fast-paced developments in information technology make this an interesting time for c...
Enhancing Geoscience Research Discovery Through the Semantic Web
NASA Astrophysics Data System (ADS)
Rowan, Linda R.; Gross, M. Benjamin; Mayernik, Matthew; Khan, Huda; Boler, Frances; Maull, Keith; Stott, Don; Williams, Steve; Corson-Rikert, Jon; Johns, Erica M.; Daniels, Michael; Krafft, Dean B.; Meertens, Charles
2016-04-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, a U.S. National Science Foundation EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to enhance connectivity across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Much of the VIVO ontology was built for the life sciences, so we have added some components of existing geoscience-based ontologies and a few terms from a local ontology that we created. The UNAVCO VIVO instance, connect.unavco.org, utilizes persistent identifiers whenever possible; for example using ORCIDs for people, publication DOIs, data DOIs and unique NSF grant numbers. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page shows, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. 
In addition to VIVO's default display, the new database can be queried using SPARQL, a query language for semantic data. EarthCollab is extending the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. About half of UNAVCO's membership is international and we hope to connect our data to institutions in other countries with a similar approach. Additional extensions, including enhanced geospatial capabilities, will be developed based on task-centered usability testing.
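The kind of graph query that SPARQL answers over VIVO's triples can be sketched with a toy in-memory pattern matcher; the identifiers and predicate names below are hypothetical illustrations, not the VIVO-ISF ontology:

```python
# A toy triple store: (subject, predicate, object) statements of the kind
# VIVO holds about people, datasets, and publications.
triples = {
    ("dataset:42", "hasAuthor", "person:7"),
    ("dataset:42", "hasType", "GPS velocity field"),
    ("person:7", "name", "A. Researcher"),
    ("paper:9", "hasAuthor", "person:7"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts like a SPARQL variable."""
    return {(ts, tp, to) for ts, tp, to in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)}

# Everything authored by person:7, across datasets and papers:
works = {subj for subj, _, _ in match(p="hasAuthor", o="person:7")}
# works == {"dataset:42", "paper:9"}
```

This is the connective payoff the abstract describes: one query traverses objects that previously lived in unconnected databases.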
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... information. Address information for the CVE is also contained on the Web portal. Correspondence may be dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... information. Address information for the CVE is also contained on the Web portal. Correspondence may be dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... information. Address information for the CVE is also contained on the Web portal. Correspondence may be dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... information. Address information for the CVE is also contained on the Web portal. Correspondence may be dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
WATERSHED INFORMATION - SURF YOUR WATERSHED
Surf Your Watershed is both a database of URLs to World Wide Web pages associated with the watershed approach to environmental management and a set of queryable data sets of relevant environmental information. It is designed for citizens and decision makers across the count...
Publications - AR 2015 | Alaska Division of Geological & Geophysical
Please see our publication sales page for more information. Quadrangle(s): Alaska General. Bibliographic Reference: DGGS Staff
Publications - GMC 280 | Alaska Division of Geological & Geophysical
Please see our publication sales page for more information. Bibliographic Reference: Piggott, Neil, and
CADDIS Volume 5. Causal Databases: Home page
The Causal Analysis/Diagnosis Decision Information System, or CADDIS, is a website developed to help scientists and engineers in the Regions, States, and Tribes conduct causal assessments in aquatic systems.
Annotated checklist and database for vascular plants of the Jemez Mountains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foxx, T. S.; Pierce, L.; Tierney, G. D.
Studies done in the last 40 years have provided information to construct a checklist of the Jemez Mountains. The present database and checklist build on the basic list compiled by Teralene Foxx and Gail Tierney in the early 1980s. The checklist is annotated with taxonomic information, geographic and biological information, economic uses, wildlife cover, revegetation potential, and ethnographic uses. There are nearly 1000 species that have been noted for the Jemez Mountains. This list is cross-referenced with the US Department of Agriculture Natural Resource Conservation Service PLANTS database species names and acronyms. All information will soon be available on a Web page.
Publications - GMC 379 | Alaska Division of Geological & Geophysical
Info: Download below or please see our publication sales page for more information. Quadrangle(s
Science.gov: gateway to government science information.
Fitzpatrick, Roberta Bronson
2010-01-01
Science.gov is a portal to more than 40 scientific databases and 200 million pages of science information via a single query. It connects users to science information and research results from the U.S. government. This column will provide readers with an overview of the resource, as well as basic search hints.
Publications - GMC 322 | Alaska Division of Geological & Geophysical
Ordering Info: Download below or please see our publication sales page for more information. Quadrangle(s
Adleman, Jennifer N.; Cameron, Cheryl E.; Snedigar, Seth F.; Neal, Christina A.; Wallace, Kristi L.; Power, John A.; Coombs, Michelle L.; Freymueller, Jeffrey T.
2010-01-01
The AVO Web site, with its accompanying database, is the backbone of AVO's external and internal communications. This was the first Cook Inlet volcanic eruption with a public expectation of real-time access to data, updates, and hazards information over the Internet. In March 2005, AVO improved the Web site from individual static pages to a dynamic, database-driven site. This new system provided quick and straightforward access to the latest information for (1) staff within the observatory, (2) emergency managers from State and local governments and organizations, (3) the media, and (4) the public. From mid-December 2005 through April 2006, the AVO Web site served more than 45 million Web pages and about 5.5 terabytes of data.
Automatic Hidden-Web Table Interpretation by Sibling Page Comparison
NASA Astrophysics Data System (ADS)
Tao, Cui; Embley, David W.
The longstanding problem of automatic table interpretation still eludes us. Its solution would not only be an aid to table processing applications such as large volume table conversion, but would also be an aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains—car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.
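The core sibling-page comparison can be sketched as follows, assuming the cells of two sibling tables have already been extracted and aligned (the real system must also parse HTML and handle structural variation; the example rows are hypothetical):

```python
def classify_cells(table_a, table_b):
    """Cells identical across sibling tables are nonvarying (category labels);
    cells that differ are varying (data values). Returns labels and the
    first table's data values."""
    labels, values = [], []
    for cell_a, cell_b in zip(table_a, table_b):
        (labels if cell_a == cell_b else values).append(cell_a)
    return labels, values

# Two sibling pages from a hypothetical car-advertisement site:
page1 = ["Make", "Ford", "Year", "1998", "Price", "$2,500"]
page2 = ["Make", "Honda", "Year", "2003", "Price", "$4,900"]
labels, values = classify_cells(page1, page2)
# labels == ["Make", "Year", "Price"]; values == ["Ford", "1998", "$2,500"]
```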
Integrating Distributed Homogeneous and Heterogeneous Databases: Prototypes. Volume 3.
1987-12-01
Knowledge-Based Integrated Information Systems Engineering (KBIISE) prototypes...Transportation Systems Center, Broadway, MA 02142, December 1987...Number of pages: 218...Security class (of this report): Unclassified
DOE Office of Scientific and Technical Information (OSTI.GOV)
Studwell, Sara; Robinson, Carly; Elliott, Jannean
Scientific research is producing ever-increasing amounts of data. Organizing and reflecting relationships across data collections, datasets, publications, and other research objects are essential functionalities of the modern science environment, yet challenging to implement. Landing pages are often used for providing ‘big picture’ contextual frameworks for datasets and data collections, and many large-volume data holders are utilizing them in thoughtful, creative ways. The benefits of their organizational efforts, however, are not realized unless the user eventually sees the landing page at the end point of their search. What if that organization and ‘big picture’ context could benefit the user at the beginning of the search? That is a challenging approach, but the Department of Energy’s (DOE) Office of Scientific and Technical Information (OSTI) is redesigning the database functionality of the DOE Data Explorer (DDE) with that goal in mind. Phase I is focused on redesigning the DDE database to leverage relationships between two existing distinct populations in DDE, data Projects and individual Datasets, and then adding a third, intermediate population, data Collections. Mapped, structured linkages, designed to show users relationships, will allow users to make informed search choices. These linkages will be sustainable and scalable, created automatically with the use of new metadata fields and existing authorities. Phase II will study selected DOE Data ID Service clients, analyzing how their landing pages are organized and how that organization might be used to improve DDE search capabilities. At the heart of both phases is the realization that adding more metadata information for cross-referencing may require additional effort for data scientists. Finally, OSTI’s approach seeks to leverage existing metadata and landing page intelligence without imposing an additional burden on the data creators.
2017-04-04
HBVPathDB: a database of HBV infection-related molecular interaction network.
Zhang, Yi; Bo, Xiao-Chen; Yang, Jing; Wang, Sheng-Qi
2005-03-21
HBVPathDB describes interactions between hepatitis B virus (HBV) and host molecules and genes, to aid understanding of how viral and host genes and molecules are networked into a biological system and to illuminate the mechanism of HBV infection. The knowledge of HBV infection-related reactions is organized into various kinds of pathways with carefully drawn graphs in HBVPathDB. Pathway information is stored with a relational database management system (DBMS), currently the most efficient way to manage large amounts of data, and queries are implemented with the powerful Structured Query Language (SQL). The search engine is written in PHP with embedded SQL, and a web retrieval interface built with HTML is provided for searching. We present the first version of HBVPathDB, an HBV infection-related molecular interaction network database composed of 306 pathways involving 1,050 molecules. With carefully drawn graphs, pathway information stored in HBVPathDB can be browsed in an intuitive way. We have developed an easy-to-use interface for flexible access to the details of the database, with convenient software for querying and browsing pathway information. Four search options are supported by the search engine: category search, gene search, description search, and unitized search. The database is freely available at http://www.bio-inf.net/HBVPathDB/HBV/. HBVPathDB already contains a considerable amount of HBV infection-related pathway information, suitable for in-depth analysis of the molecular interaction network of virus and host. It integrates pathway datasets with convenient software for query, browsing, and visualization, giving users more opportunity to identify key regulatory molecules as potential drug targets and to explore the possible mechanism of HBV infection based on gene expression datasets.
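A gene search of the kind the abstract describes reduces to a relational join across pathway, molecule, and link tables; a minimal sketch using Python's sqlite3 in place of the actual DBMS and PHP engine (the schema and example rows are hypothetical):

```python
import sqlite3

# Hypothetical relational layout standing in for HBVPathDB's backend.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pathway (id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE molecule (id INTEGER PRIMARY KEY, gene TEXT);
CREATE TABLE pathway_molecule (pathway_id INTEGER, molecule_id INTEGER);
""")
conn.execute("INSERT INTO pathway VALUES (1, 'HBx signaling', 'signal transduction')")
conn.execute("INSERT INTO molecule VALUES (1, 'TP53')")
conn.execute("INSERT INTO pathway_molecule VALUES (1, 1)")

def pathways_for_gene(gene):
    """'Gene search': every pathway a gene's product participates in."""
    rows = conn.execute("""
        SELECT p.name FROM pathway p
        JOIN pathway_molecule pm ON pm.pathway_id = p.id
        JOIN molecule m ON m.id = pm.molecule_id
        WHERE m.gene = ?""", (gene,))
    return [name for (name,) in rows]

# pathways_for_gene("TP53") == ["HBx signaling"]
```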
Secure web-based access to radiology: forms and databases for fast queries
NASA Astrophysics Data System (ADS)
McColl, Roderick W.; Lane, Thomas J.
2002-05-01
Currently, Web-based access to mini-PACS or similar databases commonly utilizes JavaScript, Java applets, or ActiveX controls. Many sites do not permit applets, controls, or other binary objects for fear of viruses or worms sent by malicious users. In addition, the typical CGI query mechanism requires several parameters to be sent with the HTTP GET/POST request, which may identify the patient in some way; this is unacceptable for privacy protection. Also unacceptable are pages produced by server-side scripts that can be cached by the browser, since these may also contain sensitive information. We propose a simple mechanism for access to patient information, including images, which guarantees security of information and makes it impossible to bookmark the page or to return to it after some defined length of time. Because the mechanism is simple, it permits rapid access without the need to first download an interface such as an applet or control. In addition to image display, the design of the site allows the user to view and save movies of multi-phasic data, or to construct multi-frame datasets from entire series. These capabilities make the site attractive for research purposes such as teaching file preparation.
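One way to realize such an expiring, non-bookmarkable link is an HMAC-signed token that carries no patient identifiers; the following is a sketch under those assumptions, not the authors' implementation:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key; never sent to the client

def make_token(resource_id, now, ttl=300):
    """Sign an opaque resource reference with an expiry timestamp; the token
    reveals nothing about the patient and stops working after ttl seconds."""
    expires = int(now) + ttl
    msg = f"{resource_id}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{resource_id}:{expires}:{sig}"

def check_token(token, now):
    """Reject tampered or expired tokens."""
    resource_id, expires, sig = token.rsplit(":", 2)
    msg = f"{resource_id}:{expires}".encode()
    good = hmac.compare_digest(
        sig, hmac.new(SECRET, msg, hashlib.sha256).hexdigest())
    return good and now < int(expires)

tok = make_token("study-0042", now=1000)
# check_token(tok, now=1100) -> True; check_token(tok, now=1400) -> False
```

Because the signature covers the expiry time, neither bookmarking the URL nor editing the timestamp lets a user return to the page after the window closes.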
PROTICdb: a web-based application to store, track, query, and compare plant proteome data.
Ferry-Dumazet, Hélène; Houel, Gwenn; Montalent, Pierre; Moreau, Luc; Langella, Olivier; Negroni, Luc; Vincent, Delphine; Lalanne, Céline; de Daruvar, Antoine; Plomion, Christophe; Zivy, Michel; Joets, Johann
2005-05-01
PROTICdb is a web-based application, mainly designed to store and analyze plant proteome data obtained by two-dimensional polyacrylamide gel electrophoresis (2-D PAGE) and mass spectrometry (MS). The purposes of PROTICdb are (i) to store, track, and query information related to proteomic experiments, i.e., from tissue sampling to protein identification and quantitative measurements, and (ii) to integrate information from the user's own expertise and other sources into a knowledge base, used to support data interpretation (e.g., for the determination of allelic variants or products of post-translational modifications). Data insertion into the relational database of PROTICdb is achieved either by uploading outputs of image analysis and MS identification software, or by filling web forms. 2-D PAGE annotated maps can be displayed, queried, and compared through a graphical interface. Links to external databases are also available. Quantitative data can be easily exported in a tabulated format for statistical analyses. PROTICdb is based on the Oracle or PostgreSQL database management systems and is freely available upon request at the following URL: http://moulon.inra.fr/bioinfo/PROTICdb.
The Alaska Volcano Observatory Website a Tool for Information Management and Dissemination
NASA Astrophysics Data System (ADS)
Snedigar, S. F.; Cameron, C. E.; Nye, C. J.
2006-12-01
The Alaska Volcano Observatory's (AVO's) website served as a primary information management tool during the 2006 eruption of Augustine Volcano. The AVO website is dynamically generated from a database back-end. This system enabled AVO to quickly and easily update the website and provide content based on user queries to the database. During the Augustine eruption, the new AVO website was heavily used by members of the public (up to 19 million hits per day), largely because the AVO public pages were an excellent source of up-to-date information. There are two different, yet fully integrated, parts of the website. An external, public site (www.avo.alaska.edu) allows the general public to track eruptive activity by viewing the latest photographs, webcam images, webicorder graphs, and official information releases about activity at the volcano, as well as maps, previous eruption information, bibliographies, and rich information about other Alaska volcanoes. The internal half of the website hosts diverse geophysical and geological data (as browse images) in a format equally accessible by AVO staff in different locations. In addition, an observation log allows users to enter information about anything from satellite passes to seismic activity to ash fall reports into a searchable database. The individual(s) on duty at the watch office use forms on the internal website to post a summary of the latest activity directly to the public website, ensuring that the public website is always up to date. The internal website also serves as a starting point for monitoring Alaska's volcanoes. AVO's extensive image database allows AVO personnel to upload many photos, diagrams, and videos, which are then available to be browsed by anyone in the AVO community. Selected images are viewable from the public page.
The primary webserver is housed at the University of Alaska Fairbanks and holds a MySQL database with over 200 tables and several thousand lines of PHP code gluing the database and website together. The database currently holds 95 GB of data. Webcam images and webicorder graphs are pulled from servers in Anchorage every few minutes. Other servers in Fairbanks generate earthquake location plots and spectrograms.
NASA Technical Reports Server (NTRS)
Steeman, Gerald; Connell, Christopher
2000-01-01
Many librarians may feel that dynamic Web pages are out of their reach, financially and technically. Yet we are reminded in library and Web design literature that static home pages are a thing of the past. This paper describes how librarians at the Institute for Defense Analyses (IDA) library developed a database-driven, dynamic intranet site using commercial off-the-shelf applications. Administrative issues include surveying a library users group for interest and needs evaluation; outlining metadata elements; and committing resources, from time spent populating the database to training in Microsoft FrontPage and Web-to-database design. Technical issues covered include Microsoft Access database fundamentals and lessons learned in the Web-to-database process, including setting up Data Source Names (DSNs), redesigning queries to accommodate the Web interface, and understanding Access 97 query language vs. Structured Query Language (SQL). This paper also offers tips on editing Active Server Pages (ASP) scripting to create desired results. A how-to annotated resource list closes out the paper.
Owens, John
2009-01-01
Technological advances in the acquisition of DNA and protein sequence information, and the resulting onrush of data, can quickly overwhelm the scientist unprepared for the volume of information that must be evaluated and carefully dissected to discover its significance. Few laboratories have the luxury of dedicated personnel to organize, analyze, or consistently record a mix of arriving sequence data. A methodology based on a modern relational database manager is presented that is both a natural storage vessel for antibody sequence information and a conduit for organizing and exploring sequence data and accompanying annotation text. The expertise necessary to implement such a plan is comparable to that required to use electronic word processors or spreadsheet applications. Antibody sequence projects maintained as independent databases are selectively unified by the relational database manager into larger database families that contribute to local analyses, reports, or interactive HTML pages, or are exported to facilities dedicated to sophisticated sequence analysis techniques. Database files are transposable among current versions of Microsoft, Macintosh, and UNIX operating systems.
2004-03-01
with MySQL. This choice was made because MySQL is open source. Any significant database engine such as Oracle or MS-SQL or even MS Access can be used...10 Figure 6. The DoD vs. Commercial Life Cycle...necessarily be interested in SCADA network security 13. MySQL (Database server) - This station represents a typical data server for a web page
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-08
... ferry operations. The information to be collected will be used to produce a descriptive database of existing ferry operations. A summary report of survey findings will be published on the BTS Web page. The... conducted a survey of approximately 250 ferry operators to identify: (1) Existing ferry operations including...
A proteomic analysis of leaf sheaths from rice.
Shen, Shihua; Matsubae, Masami; Takao, Toshifumi; Tanaka, Naoki; Komatsu, Setsuko
2002-10-01
The proteins extracted from the leaf sheaths of rice seedlings were separated by 2-D PAGE, and analyzed by Edman sequencing and mass spectrometry, followed by database searching. Image analysis revealed 352 protein spots on 2-D PAGE after staining with Coomassie Brilliant Blue. The amino acid sequences of 44 of 84 proteins were determined; for 31 of these proteins, a clear function could be assigned, whereas for 12 proteins, no function could be assigned. Forty proteins did not yield amino acid sequence information, because they were N-terminally blocked, or the obtained sequences were too short and/or did not give unambiguous results. Fifty-nine proteins were analyzed by mass spectrometry; all of these proteins were identified by matching to the protein database. The amino acid sequences of 19 of 27 proteins analyzed by mass spectrometry were similar to the results of Edman sequencing. These results suggest that 2-D PAGE combined with Edman sequencing and mass spectrometry analysis can be effectively used to identify plant proteins.
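The database-searching step, matching observed masses against entries in a protein database, can be illustrated with a toy mass-matching routine. Every protein name and mass below is invented; real search engines score matches statistically rather than by simple counts.

```python
# Toy version of database matching by peptide mass: each observed mass is
# matched to candidate proteins within a tolerance, and candidates are
# scored by how many observed masses they explain. Illustrative data only.
def match_masses(observed, database, tol=0.5):
    hits = {}
    for name, masses in database.items():
        matched = sum(any(abs(o - m) <= tol for m in masses) for o in observed)
        if matched:
            hits[name] = matched
    return hits

db = {"RuBisCO LSU": [1234.6, 987.4, 1580.8],
      "OsGLP1":      [845.2, 1101.9]}
hits = match_masses([1234.5, 987.6, 2000.0], db)
```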
ERIC Educational Resources Information Center
Jackson, Mary E.
2002-01-01
Explains portals as tools that gather a variety of electronic information resources, including local library resources, into a single Web page. Highlights include cross-database searching; integration with university portals and course management software; the ARL (Association of Research Libraries) Scholars Portal Initiative; and selected vendors…
Enabling Scientists: Serving Sci-Tech Library Users with Disabilities.
ERIC Educational Resources Information Center
Coonin, Bryna
2001-01-01
Discusses how librarians in scientific and technical libraries can contribute to an accessible electronic library environment for users with disabilities to ensure independent access to information. Topics include relevant assistive technologies; creating accessible Web pages; monitoring accessibility of electronic databases; preparing accessible…
Liolios, Konstantinos; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Kyrpides, Nikos C.
2008-01-01
The Genomes On Line Database (GOLD) is a comprehensive resource that provides information on genome and metagenome projects worldwide. Complete and ongoing projects and their associated metadata can be accessed in GOLD through pre-computed lists and a search page. As of September 2007, GOLD contains information on more than 2900 sequencing projects, out of which 639 have been completed and their sequence data deposited in the public databases. GOLD continues to expand with the goal of providing metadata information related to the projects and the organisms/environments towards the 'Minimum Information about a Genome Sequence' (MIGS) guideline. GOLD is available at http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece at http://gold.imbb.forth.gr/ PMID:17981842
The PROTICdb database for 2-DE proteomics.
Langella, Olivier; Zivy, Michel; Joets, Johann
2007-01-01
PROTICdb is a web-based database mainly designed to store and analyze plant proteome data obtained by 2D polyacrylamide gel electrophoresis (2D PAGE) and mass spectrometry (MS). The goals of PROTICdb are (1) to store, track, and query information related to proteomic experiments, i.e., from tissue sampling to protein identification and quantitative measurements; and (2) to integrate information from the user's own expertise and other sources into a knowledge base, used to support data interpretation (e.g., for the determination of allelic variants or products of posttranslational modifications). Data insertion into the relational database of PROTICdb is achieved either by uploading outputs from Mélanie, PDQuest, IM2d, ImageMaster(tm) 2D Platinum v5.0, Progenesis, Sequest, MS-Fit, and Mascot software, or by filling in web forms (experimental design and methods). 2D PAGE-annotated maps can be displayed, queried, and compared through the GelBrowser. Quantitative data can be easily exported in a tabulated format for statistical analyses with any third-party software. PROTICdb is based on the Oracle or the PostgreSQL DataBase Management System (DBMS) and is freely available upon request at http://cms.moulon.inra.fr/content/view/14/44/.
Landfill Gas Energy Project Data and Landfill Technical Data
This page provides data from the LMOP Database for U.S. landfills and LFG energy projects in Excel files, a map of project and candidate landfill counts by state, project profiles for a select group of projects, and information about Project Expo sites.
ERIC Educational Resources Information Center
Fox, Megan K.
2003-01-01
Examines how librarians are customizing their services and collections for handheld computing. Discusses the widest adoption of PDAs (personal digital assistants) in libraries that serve health and medical communities; PDA-friendly information pages; the reference focus; journals and databases; lending materials; publicity; use of PDAs by library…
Optics survivability support, volume 2
NASA Astrophysics Data System (ADS)
Wild, N.; Simpson, T.; Busdeker, A.; Doft, F.
1993-01-01
This volume of the Optics Survivability Support Final Report contains plots of all the data contained in the computerized Optical Glasses Database. All of these plots are accessible through the Database, but are included here as a convenient reference. The first three pages summarize the types of glass included with a description of the radiation source, test date, and the original data reference. This information is included in the database as a macro button labeled 'LLNL DATABASE'. Following this summary is an Abbe chart showing which glasses are included and where they lie as a function of ν_d and n_d. This chart is also callable through the database as a macro button labeled 'ABBEC'.
Nuclear Science References (NSR)
be included. For more information, see the help page. The NSR database schema and Web applications have undergone some recent changes. This is a revised version of the NSR Web Interface. NSR Quick Manager: Boris Pritychenko, NNDC, Brookhaven National Laboratory Web Programming: Boris Pritychenko, NNDC
Using the TIGR gene index databases for biological discovery.
Lee, Yuandan; Quackenbush, John
2003-11-01
The TIGR Gene Index web pages provide access to analyses of ESTs and gene sequences for nearly 60 species, as well as a number of resources derived from these. Each species-specific database is presented using a common format with a homepage. A variety of methods exist that allow users to search each species-specific database. Methods implemented currently include nucleotide or protein sequence queries using WU-BLAST, text-based searches using various sequence identifiers, searches by gene, tissue and library name, and searches using functional classes through Gene Ontology assignments. This protocol provides guidance for using the Gene Index Databases to extract information.
CBS Genome Atlas Database: a dynamic storage for bioinformatic results and sequence data.
Hallin, Peter F; Ussery, David W
2004-12-12
Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and results from bioinformatic analysis. More than 150 sequenced bacterial genomes are now available, and comparisons of properties for taxonomically similar organisms are not readily available to many biologists. In addition to the most basic information, such as AT content, chromosome length, tRNA count and rRNA count, a large number of more complex calculations are needed to perform detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses presented on the CBS Genome Atlas Web page. Complex analyses, changing methods and the frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these results amount to more than 220 pieces of information. The backbone of this solution consists of a program package written in Perl, which enables administrators to synchronize and update the database content. The MySQL database has been connected to the CBS web server via PHP4, to present dynamic web content for users outside the center. This solution is tightly fitted to the existing server infrastructure, and the solutions proposed here can perhaps serve as a template for other research groups to solve database issues. A web-based user interface which is dynamically linked to the Genome Atlas Database can be accessed via www.cbs.dtu.dk/services/GenomeAtlas/.
This paper has a supplemental information page which links to the examples presented: www.cbs.dtu.dk/services/GenomeAtlas/suppl/bioinfdatabase.
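Two of the atlas's most basic per-genome quantities, AT content and GC skew, are simple enough to sketch directly (windowed skews and the DNA structural calculations are omitted; the sequence is a toy example):

```python
# AT content and GC skew, two of the basic per-genome quantities the
# Genome Atlas stores. Toy sequence; real input would be a full genome.
def at_content(seq):
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

def gc_skew(seq):
    # (G - C) / (G + C); sign flips at replication origin/terminus in bacteria
    seq = seq.upper()
    g, c = seq.count("G"), seq.count("C")
    return (g - c) / (g + c) if g + c else 0.0

seq = "ATGCGGCATA"
```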
Database resources of the National Center for Biotechnology Information
2015-01-01
The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank® nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (Bookshelf, PubMed Central (PMC) and PubReader); medical genetics (ClinVar, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen); genes and genomics (BioProject, BioSample, dbSNP, dbVar, Epigenomics, Gene, Gene Expression Omnibus (GEO), Genome, HomoloGene, the Map Viewer, Nucleotide, PopSet, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser, Trace Archive and UniGene); and proteins and chemicals (Biosystems, COBALT, the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB), Protein Clusters, Protein and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for many of these databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:25398906
Database resources of the National Center for Biotechnology Information
2016-01-01
The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank® nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (PubMed Central (PMC), Bookshelf and PubReader), health (ClinVar, dbGaP, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen), genomes (BioProject, Assembly, Genome, BioSample, dbSNP, dbVar, Epigenomics, the Map Viewer, Nucleotide, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser and the Trace Archive), genes (Gene, Gene Expression Omnibus (GEO), HomoloGene, PopSet and UniGene), proteins (Protein, the Conserved Domain Database (CDD), COBALT, Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB) and Protein Clusters) and chemicals (Biosystems and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for most of these databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized datasets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:26615191
DOE Research and Development Accomplishments Help
be used to search, locate, access, and electronically download full-text research and development (R&D) documents. Browse; Downloading, Viewing, and/or Searching Full-text Documents/Pages; Searching the Database; Search Features. Search allows you to search the OCRed full-text documents and bibliographic information...
Maxima and O-C Diagrams for 489 Mira Stars
NASA Astrophysics Data System (ADS)
Karlsson, T.
2013-11-01
Maxima for 489 Mira stars have been compiled. They were computed with data from AAVSO, AFOEV, VSOLJ, and BAA-VSS and collected from published maxima. The result is presented in a MySQL database and on web pages with O-C diagrams, periods, and some statistical information for each star.
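An O-C (observed minus computed) value of the kind plotted in these diagrams comes from a linear ephemeris: predict the nearest maximum from an epoch and period, then take the residual. The epoch and period below are illustrative, not those of any catalogued star.

```python
# O-C from a linear ephemeris: computed maximum = epoch + cycle * period,
# with the cycle count chosen nearest the observation. Values are made up.
def o_minus_c(observed_jd, epoch_jd, period_days):
    cycles = round((observed_jd - epoch_jd) / period_days)
    computed = epoch_jd + cycles * period_days
    return cycles, observed_jd - computed

cycle, oc = o_minus_c(observed_jd=2456200.0, epoch_jd=2450000.0, period_days=331.0)
```

A systematic drift of O-C with cycle number reveals a period change; scatter reflects cycle-to-cycle irregularity, which is common in Mira stars.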
38 CFR 74.10 - Where must an application be filed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) VETERANS SMALL BUSINESS REGULATIONS Application Guidelines § 74.10 Where must an application be... Information Pages database located in the Center for Veterans Enterprise's Web portal, http://www.VetBiz.gov... dispatched to: Director, Center for Veterans Enterprise (00VE), U.S. Department of Veterans Affairs, 810...
Representing metabolic pathway information: an object-oriented approach.
Ellis, L B; Speedie, S M; McLeish, R
1998-01-01
The University of Minnesota Biocatalysis/Biodegradation Database (UM-BBD) is a website providing information and dynamic links for microbial metabolic pathways, enzyme reactions, and their substrates and products. The Compound, Organism, Reaction and Enzyme (CORE) object-oriented database management system was developed to contain and serve this information. CORE was developed using Java, an object-oriented programming language, and PSE persistent object classes from Object Design, Inc. CORE dynamically generates descriptive web pages for reactions, compounds and enzymes, and reconstructs ad hoc pathway maps starting from any UM-BBD reaction. CORE code is available from the authors upon request. CORE is accessible through the UM-BBD at: http://www.labmed.umn.edu/umbbd/index.html.
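The ad hoc pathway reconstruction CORE performs can be sketched as a walk over reactions, chaining each reaction's product to the reaction that consumes it. The reaction table below is a simplified stand-in, not actual UM-BBD data.

```python
# Reconstruct a linear pathway starting from any reaction by following
# product -> substrate links, echoing CORE's ad hoc map building.
# Reaction names and compounds are illustrative stand-ins.
def walk_pathway(reactions, start):
    # reactions: {name: (substrate, product)}
    by_substrate = {s: name for name, (s, p) in reactions.items()}
    path, current = [start], reactions[start][1]
    while current in by_substrate:
        nxt = by_substrate[current]
        path.append(nxt)
        current = reactions[nxt][1]
    return path

rxns = {"r1": ("toluene", "benzyl alcohol"),
        "r2": ("benzyl alcohol", "benzaldehyde"),
        "r3": ("benzaldehyde", "benzoate")}
path = walk_pathway(rxns, "r1")
```

Real pathways branch, so CORE's maps are graphs rather than simple chains; the same lookup generalizes by collecting all reactions that consume each product.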
Database resources of the National Center for Biotechnology Information: 2002 update
Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.
2002-01-01
In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI’s web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, Human–Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:11752242
A Multi-Purpose Data Dissemination Infrastructure for the Marine-Earth Observations
NASA Astrophysics Data System (ADS)
Hanafusa, Y.; Saito, H.; Kayo, M.; Suzuki, H.
2015-12-01
To open the data from a variety of observations, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has developed a multi-purpose data dissemination infrastructure. Although many observations have been made in the earth sciences, not all of the data are fully open. We think data centers may provide researchers with a universal data dissemination service which can handle various kinds of observation data with little effort. For this purpose the JAMSTEC Data Management Office has developed the "Information Catalog Infrastructure System (Catalog System)". This is a catalog management system which can create, renew and delete catalogs (= databases) and has the following features: the Catalog System does not depend on data types or the granularity of data records; by registering a new metadata schema to the system, a new database can be created on the same system without system modification; as web pages are defined by cascading style sheets, databases can differ in look and feel, and in operability; the Catalog System provides databases with basic search tools (search by text, selection from a category tree, and selection from a timeline chart); and for domestic users it creates Japanese and English pages at the same time and has a dictionary to control terminology and proper nouns. As of August 2015 JAMSTEC operates 7 databases on the Catalog System. We expect to transfer existing databases to this system, or create new databases on it. In comparison with a dedicated database developed for a specific dataset, the Catalog System is suitable for the dissemination of small datasets at minimum cost. Metadata held in the catalogs may be transferred to other metadata schemas for exchange with global databases or portals.
Examples:
JAMSTEC Data Catalog: http://www.godac.jamstec.go.jp/catalog/data_catalog/metadataList?lang=en
JAMSTEC Document Catalog: http://www.godac.jamstec.go.jp/catalog/doc_catalog/metadataList?lang=en&tab=category
Research Information and Data Access Site of TEAMS: http://www.i-teams.jp/catalog/rias/metadataList?lang=en&tab=list
The unified database for the fixed target experiment BM@N
NASA Astrophysics Data System (ADS)
Gertsenberger, K. V.
2016-09-01
The article describes the database developed as the comprehensive data storage of the fixed-target experiment BM@N [1] at the Joint Institute for Nuclear Research (JINR) in Dubna. The structure and purposes of the BM@N facility are briefly presented. The scheme of the unified database and its parameters are described in detail. The BM@N database, implemented on the PostgreSQL database management system (DBMS), provides user access to up-to-date information on the experiment. The interfaces developed for access to the database are also presented: one was implemented as a set of C++ classes for accessing the data without SQL statements, the other as a Web interface available on the Web page of the BM@N experiment.
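The interface idea, access to experiment data without writing SQL statements, translates to a thin wrapper class. It is sketched here in Python with sqlite3 rather than the experiment's C++/PostgreSQL stack, and the run table is invented, not the actual BM@N schema.

```python
import sqlite3

# A thin data-access class that hides SQL behind methods, in the spirit of
# the BM@N C++ interface classes. The "run" table layout is hypothetical.
class RunCatalog:
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS run (number INTEGER PRIMARY KEY, energy REAL)"
        )

    def add_run(self, number, energy):
        self.conn.execute("INSERT INTO run VALUES (?, ?)", (number, energy))

    def energy_of(self, number):
        row = self.conn.execute(
            "SELECT energy FROM run WHERE number=?", (number,)
        ).fetchone()
        return row[0] if row else None

cat = RunCatalog(sqlite3.connect(":memory:"))
cat.add_run(77, 3.5)
e = cat.energy_of(77)
```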
NASA Astrophysics Data System (ADS)
Bößwetter, Daniel
Much has been written about the pros and cons of column orientation as a means to speed up read-mostly analytic workloads in relational databases. In this paper we try to dissect the primitive mechanisms of a database that help express the coherence of tuples, and present a novel way of organizing relational data in order to exploit the advantages of both the row-oriented and the column-oriented worlds. As we go, we break with yet another bad habit of databases, namely the equal granularity of reads and writes, which leads us to the introduction of consecutive clusters of disk pages called super-pages.
Linking NCBI to Wikipedia: a wiki-based approach.
Page, Roderic D M
2011-03-31
The NCBI Taxonomy underpins many bioinformatics and phyloinformatics databases, but by itself provides limited information on the taxa it contains. One readily available source of information on many taxa is Wikipedia. This paper describes iPhylo Linkout, a Semantic wiki that maps taxa in NCBI's taxonomy database onto corresponding pages in Wikipedia. Storing the mapping in a wiki makes it easy to edit, correct, or otherwise annotate the links between NCBI and Wikipedia. The mapping currently comprises some 53,000 taxa, and is available at http://iphylo.org/linkout. The links between NCBI and Wikipedia are also made available to NCBI users through the NCBI LinkOut service.
The Long Valley Caldera GIS database
Battaglia, Maurizio; Williams, M.J.; Venezky, D.Y.; Hill, D.P.; Langbein, J.O.; Farrar, C.D.; Howle, J.F.; Sneed, M.; Segall, P.
2003-01-01
This database provides an overview of the studies being conducted by the Long Valley Observatory in eastern California from 1975 to 2001. The database includes geologic, monitoring, and topographic datasets related to Long Valley caldera. The CD-ROM contains a scan of the original geologic map of the Long Valley region by R. Bailey. Real-time data of the current activity of the caldera (including earthquakes, ground deformation and the release of volcanic gas), information about volcanic hazards and the USGS response plan are available online at the Long Valley observatory web page (http://lvo.wr.usgs.gov). If you have any comments or questions about this database, please contact the Scientist in Charge of the Long Valley observatory.
BioPepDB: an integrated data platform for food-derived bioactive peptides.
Li, Qilin; Zhang, Chao; Chen, Hongjun; Xue, Jitong; Guo, Xiaolei; Liang, Ming; Chen, Ming
2018-03-12
Food-derived bioactive peptides play critical roles in regulating most biological processes and have considerable biological, medical and industrial importance. However, a large body of active-peptide data, including sequences, functions, sources, commercial product information, references and other information, is poorly integrated. BioPepDB is a searchable database of food-derived bioactive peptides and their related articles, including more than four thousand bioactive peptide entries. Moreover, BioPepDB provides modules of prediction and hydrolysis-simulation for discovering novel peptides. It can serve as a reference database to investigate the function of different bioactive peptides. BioPepDB is available at http://bis.zju.edu.cn/biopepdbr/. The web page utilises Apache, PHP5 and MySQL to provide the user interface for accessing the database and predicting novel peptides. The database itself is operated on a specialised server.
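A hydrolysis simulation of the kind BioPepDB offers can be sketched with the standard trypsin rule (cleave after K or R, except before P). This is a generic rule-based digest, not BioPepDB's actual implementation, and the input sequence is arbitrary.

```python
# Rule-based in-silico digestion: trypsin cleaves C-terminal to K or R,
# but not when the next residue is P. Generic sketch, arbitrary input.
def tryptic_digest(protein):
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in "KR" and (i + 1 == len(protein) or protein[i + 1] != "P"):
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])  # C-terminal fragment
    return peptides

peps = tryptic_digest("MKWVTFISLLFLFSSAYSRGVFRPAMK")
```

The resulting fragments would then be screened against the bioactive-peptide tables to flag candidate releases from a food protein.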
Bera, Maitreyee
2017-10-16
The U.S. Geological Survey (USGS), in cooperation with the DuPage County Stormwater Management Department, maintains a database of hourly meteorological and hydrologic data for use in a near real-time streamflow simulation system. This system is used in the management and operation of reservoirs and other flood-control structures in the West Branch DuPage River watershed in DuPage County, Illinois. The majority of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorological data (air temperature, dewpoint temperature, wind speed, and solar radiation) are collected at Argonne National Laboratory in Argonne, Ill. Potential evapotranspiration is computed from the meteorological data using the computer program LXPET (Lamoreux Potential Evapotranspiration). The hydrologic data (water-surface elevation [stage] and discharge) are collected at U.S. Geological Survey streamflow-gaging stations in and around DuPage County. These data are stored in a Watershed Data Management (WDM) database. This report describes a version of the WDM database that is quality-assured and quality-controlled annually to ensure datasets are complete and accurate. This database is named WBDR13.WDM. It contains data from January 1, 2007, through September 30, 2013. Each precipitation dataset may have time periods of inaccurate data. This report describes the methods used to estimate the data for the periods of missing, erroneous, or snowfall-affected data and thereby improve the accuracy of these data. The other meteorological datasets are described in detail in Over and others (2010), and the hydrologic datasets in the database are fully described in the online USGS annual water data reports for Illinois (U.S. Geological Survey, 2016) and, therefore, are described in less detail than the precipitation datasets in this report.
Stockburger, D W
1999-05-01
Active Server Pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (in a database), and returns the answer key to the student.
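The generate-and-grade loop described here is easy to sketch: a per-student seed makes each assignment unique yet reproducible, so the server can regenerate the answer key at grading time. The original runs as ASP scripts on NT Server; this is a generic re-creation with invented problem content.

```python
import random

# Per-student seeded generation: the same seed regenerates the same
# problems, so grading needs no stored answer key. Problems here are
# trivial sums standing in for the statistics exercises.
def make_assignment(student_id, n=3):
    rng = random.Random(student_id)  # deterministic per student
    return [(rng.randint(1, 9), rng.randint(1, 9)) for _ in range(n)]

def grade(student_id, answers):
    key = [a + b for a, b in make_assignment(student_id)]
    return sum(k == ans for k, ans in zip(key, answers))

probs = make_assignment("s42")
score = grade("s42", [a + b for a, b in probs])  # a perfect submission
```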
Toward two-dimensional search engines
NASA Astrophysics Data System (ADS)
Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.
2012-07-01
We study the statistical properties of various directed networks using a ranking of their nodes based on the dominant vectors of the Google matrix, known as PageRank and CheiRank. On average, PageRank orders nodes in proportion to their number of ingoing links, while CheiRank orders nodes in proportion to their number of outgoing links. In this way the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of information flow on the PageRank-CheiRank plane are analyzed for networks of British, French and Italian universities, Wikipedia, the Linux kernel, gene regulation and other networks. Special emphasis is placed on British university networks, using the large database publicly available in the UK. Methods of spam-link control are also analyzed.
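PageRank and CheiRank can be sketched with a short power iteration: CheiRank is simply PageRank computed on the same network with every link reversed. The toy graph below is illustrative only.

```python
# Power-iteration PageRank on an edge list; CheiRank = PageRank of the
# reversed graph. Dangling-node mass is redistributed uniformly.
def pagerank(links, n, d=0.85, iters=100):
    r = [1.0 / n] * n
    out = [sum(1 for (i, j) in links if i == k) for k in range(n)]
    for _ in range(iters):
        new = [(1 - d) / n] * n
        dangle = sum(r[k] for k in range(n) if out[k] == 0)
        for (i, j) in links:
            new[j] += d * r[i] / out[i]
        r = [v + d * dangle / n for v in new]
    return r

links = [(0, 1), (2, 1), (1, 0)]
pr = pagerank(links, 3)                            # node 1: most ingoing links
cr = pagerank([(j, i) for (i, j) in links], 3)     # CheiRank: reversed links
```

Plotting each node at (PageRank index, CheiRank index) gives the two-dimensional ranking plane the paper analyzes.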
An overview of biomedical literature search on the World Wide Web in the third millennium.
Kumar, Prince; Goel, Roshni; Jain, Chandni; Kumar, Ashish; Parashar, Abhishek; Gond, Ajay Ratan
2012-06-01
Complete access to the existing pool of biomedical literature and the ability to "hit" upon the exact information of the relevant specialty are becoming essential elements of academic and clinical expertise. With the rapid expansion of the literature database, it is almost impossible to keep up to date with every innovation. Using the Internet, however, most people can freely access this literature at any time, from almost anywhere. This paper highlights the use of the Internet in obtaining valuable biomedical research information, which is mostly available from journals, databases, textbooks and e-journals in the form of web pages, text materials, images, and so on. The authors present an overview of web-based resources for biomedical researchers, providing information about Internet search engines (e.g., Google), web-based bibliographic databases (e.g., PubMed, IndMed) and how to use them, and other online biomedical resources that can assist clinicians in reaching well-informed clinical decisions.
Li, Xiu-Qing
2012-01-01
Most protein PageRank studies do not use signal flow direction information in protein interactions, because this information was not readily available in large protein databases until recently. Therefore, four questions have yet to be answered: A) What is the general difference between signal emitting and receiving in a protein interactome? B) Which proteins are among the top ranked in directional ranking? C) Are high-ranked proteins more evolutionarily conserved than low-ranked ones? D) Do proteins with similar ranking tend to have similar subcellular locations? In this study, we address these questions using the forward, reverse, and non-directional PageRank approaches to rank an information-directional network of human proteins and study their evolutionary conservation. The forward ranking gives credit to information receivers, the reverse ranking to information emitters, and the non-directional ranking mainly to the number of interactions. The protein lists generated by the forward and non-directional rankings are highly correlated, but those by the reverse and non-directional rankings are not. The results suggest that the signal emitting/receiving system in the human protein interactome is characterized by a few key emitters and relatively even receiving. Signaling-pathway proteins are frequent among the top-ranked proteins. Eight proteins are both top informational emitters and top receivers. Top-ranked proteins, except a few species-related novel-function ones, are evolutionarily well conserved. Protein-subunit ranking position reflects subunit function. These results demonstrate the usefulness of different PageRank approaches in characterizing protein networks and provide insights into protein interaction in the cell. PMID:23028653
Brower, Stewart M
2004-10-01
The analysis included forty-one academic health sciences library (HSL) Websites as captured in the first two weeks of January 2001. Home pages and persistent navigational tools (PNTs) were analyzed for layout, technology, and links, and other general site metrics were taken. Websites were selected based on rank in the National Network of Libraries of Medicine, with regional and resource libraries given preference on the basis that these libraries are recognized as leaders in their regions and would be the most reasonable source of standards for best practice. A three-page evaluation tool was developed based on previous similar studies. All forty-one sites were evaluated in four specific areas: library general information, Website aids and tools, library services, and electronic resources. Metrics taken for electronic resources included orientation of bibliographic databases alphabetically by title or by subject area and with links to specifically named databases. Based on the results, a formula for determining obligatory links was developed, listing items that should appear on all academic HSL Web home pages and PNTs. These obligatory links demonstrate a series of best practices that may be followed in the design and construction of academic HSL Websites.
Database recovery using redundant disk arrays
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.
1992-01-01
Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper a method is proposed for using redundant disk arrays to support rapid-recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, it is shown that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
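The twin-page scheme can be illustrated in miniature: parity is kept in two page slots with version numbers, and each update overwrites the older slot, so a crash mid-write always leaves one consistent copy readable. This is a schematic model of the idea, not the paper's array-level implementation.

```python
# Schematic twin-page store: two versioned slots for one parity page.
# Writes always target the older slot, so the newer committed copy
# survives a crash during the write; reads take the highest version.
class TwinPage:
    def __init__(self):
        self.slots = [(0, None), (0, None)]  # (version, data)

    def write(self, data):
        older = 0 if self.slots[0][0] <= self.slots[1][0] else 1
        version = max(v for v, _ in self.slots) + 1
        self.slots[older] = (version, data)  # the other twin is untouched

    def read(self):
        return max(self.slots)[1]  # highest version wins

tp = TwinPage()
tp.write("parity-v1")
tp.write("parity-v2")  # overwrites the older slot; v1 survives as fallback
```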
Recovery issues in databases using redundant disk arrays
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. K.; Saab, Daniel G.
1993-01-01
Redundant disk arrays provide a way for achieving rapid recovery from media failures with a relatively low storage cost for large scale database systems requiring high availability. In this paper we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
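The twin-page scheme mentioned in both abstracts can be illustrated with a toy sketch (a simplified model under our own assumptions, not the authors' implementation): each parity page has two physical slots, writes alternate between them, and the last committed slot survives a crash in mid-write, so commit processing needs no extra logging of parity.

```python
class TwinPage:
    """Toy twin-page scheme: two physical slots per logical parity page."""

    def __init__(self, initial):
        self.slots = [initial, initial]  # two physical copies
        self.current = 0                 # index of the last committed slot

    def write(self, data):
        target = 1 - self.current        # write to the *other* slot
        self.slots[target] = data        # a crash here leaves self.current intact
        self.current = target            # atomic switch = commit point

    def read(self):
        return self.slots[self.current]

p = TwinPage(b"parity-v0")
p.write(b"parity-v1")
# simulate a crash during the next write: garbage lands in the shadow slot
p.slots[1 - p.current] = b"torn-write"
# the committed copy is still readable
assert p.read() == b"parity-v1"
```

The commit point is the single in-memory (or in practice, single-sector) switch of `current`, which is why a crash at any other moment leaves a consistent parity copy behind.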
Casting the Net: The Development of a Resource Collection for an Internet Database.
ERIC Educational Resources Information Center
McKiernan, Gerry
CyberStacks(sm), a demonstration prototype World Wide Web information service, was established on the home page server at Iowa State University with the intent of facilitating identification and use of significant Internet resources in science and technology. CyberStacks(sm) was created in response to perceived deficiencies in early efforts to…
Multimedia Database at National Museum of Ethnology
NASA Astrophysics Data System (ADS)
Sugita, Shigeharu
This paper describes the information management system at the National Museum of Ethnology, Osaka, Japan. The museum serves as a research center for cultural anthropology and operates many computer systems, such as an IBM 3090, a VAX 11/780, and a Fujitsu M340R. With these computers, distributed multimedia databases have been constructed that store not only bibliographic data but also artifact images, slide images, book page images, and more. The databases now hold about 1.3 million items, which can be retrieved and displayed on multimedia workstations equipped with several displays.
Curated protein information in the Saccharomyces genome database.
Hellerstedt, Sage T; Nash, Robert S; Weng, Shuai; Paskov, Kelley M; Wong, Edith D; Karra, Kalpana; Engel, Stacia R; Cherry, J Michael
2017-01-01
Due to recent advancements in the production of experimental proteomic data, the Saccharomyces genome database (SGD; www.yeastgenome.org) has been expanding our protein curation activities to make new data types available to our users. Because of broad interest in post-translational modifications (PTMs) and their importance to protein function and regulation, we have recently started incorporating expertly curated PTM information on individual protein pages. Here we also present the inclusion of new abundance and protein half-life data obtained from high-throughput proteome studies. These new data types have been included with the aim of facilitating cellular biology research. Database URL: www.yeastgenome.org. © The Author(s) 2017. Published by Oxford University Press.
Database resources of the National Center for Biotechnology Information
Wheeler, David L.; Barrett, Tanya; Benson, Dennis A.; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Kenton, David L.; Khovayko, Oleg; Lipman, David J.; Madden, Thomas L.; Maglott, Donna R.; Ostell, James; Pruitt, Kim D.; Schuler, Gregory D.; Schriml, Lynn M.; Sequeira, Edwin; Sherry, Stephen T.; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Suzek, Tugba O.; Tatusov, Roman; Tatusova, Tatiana A.; Wagner, Lukas; Yaschenko, Eugene
2006-01-01
In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups, Retroviral Genotyping Tools, HIV-1, Human Protein Interaction Database, SAGEmap, Gene Expression Omnibus, Entrez Probe, GENSAT, Online Mendelian Inheritance in Man, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized datasets. All of the resources can be accessed through the NCBI home page. PMID:16381840
Database resources of the National Center for Biotechnology Information.
Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bolton, Evan; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; Dicuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J; Lu, Zhiyong; Madden, Thomas L; Madej, Tom; Maglott, Donna R; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Wang, Yanli; Wilbur, W John; Yaschenko, Eugene; Ye, Jian
2012-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
Ligand.Info small-molecule Meta-Database.
von Grotthuss, Marcin; Koczyk, Grzegorz; Pas, Jakub; Wyrwicz, Lucjan S; Rychlewski, Leszek
2004-12-01
Ligand.Info is a compilation of various publicly available databases of small molecules. The total size of the Meta-Database is over 1 million entries. The compound records contain calculated three-dimensional coordinates and sometimes information about biological activity. Some molecules carry information about FDA drug approval status or about anti-HIV activity. The Meta-Database can be downloaded from the http://Ligand.Info web page. The database can also be screened using a Java-based tool, which can interactively cluster sets of molecules on the user side and automatically download similar molecules from the server. The application requires the Java Runtime Environment 1.4 or higher, which can be automatically downloaded from Sun Microsystems or Apple Computer and installed during the first use of Ligand.Info on desktop systems that support Java (MS Windows, Mac OS, Solaris, and Linux). The Ligand.Info Meta-Database can be used for virtual high-throughput screening of new potential drugs. The presented examples showed that, using a known antiviral drug as the query, the system was able to find other antiviral drugs and inhibitors.
Database resources of the National Center for Biotechnology Information
Acland, Abigail; Agarwala, Richa; Barrett, Tanya; Beck, Jeff; Benson, Dennis A.; Bollin, Colleen; Bolton, Evan; Bryant, Stephen H.; Canese, Kathi; Church, Deanna M.; Clark, Karen; DiCuccio, Michael; Dondoshansky, Ilya; Federhen, Scott; Feolo, Michael; Geer, Lewis Y.; Gorelenkov, Viatcheslav; Hoeppner, Marilu; Johnson, Mark; Kelly, Christopher; Khotomlianski, Viatcheslav; Kimchi, Avi; Kimelman, Michael; Kitts, Paul; Krasnov, Sergey; Kuznetsov, Anatoliy; Landsman, David; Lipman, David J.; Lu, Zhiyong; Madden, Thomas L.; Madej, Tom; Maglott, Donna R.; Marchler-Bauer, Aron; Karsch-Mizrachi, Ilene; Murphy, Terence; Ostell, James; O'Sullivan, Christopher; Panchenko, Anna; Phan, Lon; Preuss, Don; Pruitt, Kim D.; Rubinstein, Wendy; Sayers, Eric W.; Schneider, Valerie; Schuler, Gregory D.; Sequeira, Edwin; Sherry, Stephen T.; Shumway, Martin; Sirotkin, Karl; Siyan, Karanjit; Slotta, Douglas; Soboleva, Alexandra; Soussov, Vladimir; Starchenko, Grigory; Tatusova, Tatiana A.; Trawick, Bart W.; Vakatov, Denis; Wang, Yanli; Ward, Minghong; Wilbur, W. John; Yaschenko, Eugene; Zbicz, Kerry
2014-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, PubReader, Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link, Primer-BLAST, COBALT, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, the Genetic Testing Registry, Genome and related tools, the Map Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, ClinVar, MedGen, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Probe, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool, Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All these resources can be accessed through the NCBI home page. PMID:24259429
Database resources of the National Center for Biotechnology Information
Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.
2001-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI’s Web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, GeneMap’99, Human–Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, Cancer Genome Anatomy Project (CGAP), SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov. PMID:11125038
Searching Online Chemical Data Repositories via the ChemAgora Portal.
Zanzi, Antonella; Wittwehr, Clemens
2017-12-26
ChemAgora, a web application designed and developed in the context of the "Data Infrastructure for Chemical Safety Assessment" (diXa) project, provides search capabilities for chemical data from resources available online, enabling users to cross-reference their search results with both regulatory chemical information and public chemical databases. Through an on-the-fly search, ChemAgora indicates whether a chemical is known in each of the external data sources and provides clickable links leading to the third-party web site pages containing the information. The original purpose of the ChemAgora application was to correlate studies stored in the diXa data warehouse with available chemical data. Since the end of the diXa project, ChemAgora has evolved into an independent portal, currently accessible directly through the ChemAgora home page, with improved capabilities for searching online data sources.
A web-based repository of surgical simulator projects.
Leskovský, Peter; Harders, Matthias; Székely, Gábor
2006-01-01
The use of computer-based surgical simulators for training of prospective surgeons has been a topic of research for more than a decade. As a result, a large number of academic projects have been carried out, and a growing number of commercial products are available on the market. Keeping track of all these endeavors for established groups as well as for newly started projects can be quite arduous. Gathering information on existing methods, already traveled research paths, and problems encountered is a time consuming task. To alleviate this situation, we have established a modifiable online repository of existing projects. It contains detailed information about a large number of simulator projects gathered from web pages, papers and personal communication. The database is modifiable (with password protected sections) and also allows for a simple statistical analysis of the collected data. For further information, the surgical repository web page can be found at www.virtualsurgery.vision.ee.ethz.ch.
Dibblee, T. W.; Digital database compiled by Graham, S. E.; Mahony, T.M.; Blissenbach, J.L.; Mariant, J.J.; Wentworth, C.M.
1999-01-01
This Open-File Report is a digital geologic map database. The report serves to introduce and describe the digital data. There is no paper map included in the Open-File Report. The report includes PostScript and PDF plot files that can be used to plot images of the geologic map sheet and explanation sheet. This digital map database is prepared from a previously published map by Dibblee (1973). The geologic map database delineates map units that are identified by general age, lithology, and clast size following the stratigraphic nomenclature of the U.S. Geological Survey. For descriptions of the units, their stratigraphic relations, and sources of geologic mapping, consult the explanation sheet (of99-14_4b.ps or of99-14_4d.pdf), or the original published paper map (Dibblee, 1973). The scale of the source map limits the spatial resolution (scale) of the database to 1:125,000 or smaller. For those interested in the geology of Carrizo Plain and vicinity who do not use an ARC/INFO compatible Geographic Information System (GIS), but would like to obtain a paper map and explanation, PDF and PostScript plot files containing map images of the data in the digital database, as well as PostScript and PDF plot files of the explanation sheet and explanatory text, have been included in the database package (please see the section 'Digital Plot Files', page 5). The PostScript plot files require a gzip utility to access them. For those without computer capability, we can provide users with the PostScript or PDF files on tape that can be taken to a vendor for plotting. Paper plots can also be ordered directly from the USGS (please see the section 'Obtaining Plots from USGS Open-File Services', page 5). The content and character of the database, methods of obtaining it, and processes of extracting the map database from the tar (tape archive) file are described herein. 
The map database itself, consisting of six ARC/INFO coverages, can be obtained over the Internet or by magnetic tape copy as described below. The database was compiled using ARC/INFO, a commercial Geographic Information System (Environmental Systems Research Institute, Redlands, California), with version 3.0 of the menu interface ALACARTE (Fitzgibbon and Wentworth, 1991, Fitzgibbon, 1991, Wentworth and Fitzgibbon, 1991). The ARC/INFO coverages are stored in uncompressed ARC export format (ARC/INFO version 7.x). All data files have been compressed, and may be uncompressed with gzip, which is available free of charge over the Internet via links from the USGS Public Domain Software page (http://edcwww.cr.usgs.gov/doc/edchome/ndcdb/public.html). ARC/INFO export files (files with the .e00 extension) can be converted into ARC/INFO coverages in ARC/INFO (see below) and can be read by some other Geographic Information Systems, such as MapInfo via ArcLink and ESRI's ArcView.
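The unpack-and-decompress workflow the report describes can be sketched in a few lines (the archive and coverage file names here are hypothetical placeholders; the real names come with the Open-File Report download, and the final ARC/INFO import step is only noted in a comment):

```python
import gzip
import shutil
import tarfile
from pathlib import Path

def unpack_report(tar_path, dest):
    """Unpack a report tar archive and decompress its .e00.gz export files."""
    dest = Path(dest)
    dest.mkdir(parents=True, exist_ok=True)
    # 1. extract the tape-archive file into a working directory
    with tarfile.open(tar_path) as tar:
        tar.extractall(dest)
    # 2. decompress each gzip-compressed ARC export file (.e00.gz -> .e00)
    for gz in dest.rglob("*.e00.gz"):
        with gzip.open(gz, "rb") as src, open(gz.with_suffix(""), "wb") as out:
            shutil.copyfileobj(src, out)
        gz.unlink()
    # 3. each .e00 file would then be imported in ARC/INFO, e.g.:
    #    Arc: IMPORT COVER geology.e00 geology
    return sorted(p.name for p in dest.rglob("*.e00"))
```

This mirrors the report's own sequence: tar extraction, then `gzip` decompression, then conversion of the `.e00` export files into coverages inside a GIS.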
IMGT, the international ImMunoGeneTics information system®
Lefranc, Marie-Paule; Giudicelli, Véronique; Kaas, Quentin; Duprat, Elodie; Jabado-Michaloud, Joumana; Scaviner, Dominique; Ginestoux, Chantal; Clément, Oliver; Chaume, Denys; Lefranc, Gérard
2005-01-01
The international ImMunoGeneTics information system® (IMGT) (http://imgt.cines.fr), created in 1989, by the Laboratoire d'ImmunoGénétique Moléculaire LIGM (Université Montpellier II and CNRS) at Montpellier, France, is a high-quality integrated knowledge resource specializing in the immunoglobulins (IGs), T cell receptors (TRs), major histocompatibility complex (MHC) of human and other vertebrates, and related proteins of the immune systems (RPI) that belong to the immunoglobulin superfamily (IgSF) and to the MHC superfamily (MhcSF). IMGT includes several sequence databases (IMGT/LIGM-DB, IMGT/PRIMER-DB, IMGT/PROTEIN-DB and IMGT/MHC-DB), one genome database (IMGT/GENE-DB) and one three-dimensional (3D) structure database (IMGT/3Dstructure-DB), Web resources comprising 8000 HTML pages (IMGT Marie-Paule page), and interactive tools. IMGT data are expertly annotated according to the rules of the IMGT Scientific chart, based on the IMGT-ONTOLOGY concepts. IMGT tools are particularly useful for the analysis of the IG and TR repertoires in normal physiological and pathological situations. IMGT is used in medical research (autoimmune diseases, infectious diseases, AIDS, leukemias, lymphomas, myelomas), veterinary research, biotechnology related to antibody engineering (phage displays, combinatorial libraries, chimeric, humanized and human antibodies), diagnostics (clonalities, detection and follow up of residual diseases) and therapeutical approaches (graft, immunotherapy and vaccinology). IMGT is freely available at http://imgt.cines.fr. PMID:15608269
Wang, Luman; Mo, Qiaochu; Wang, Jianxin
2015-01-01
Most current gene coexpression databases support analysis of linear correlation between gene pairs but not of nonlinear correlation, which hinders precise evaluation of gene-gene coexpression strengths. Here, we report a new database, MIrExpress, which takes advantage of information theory, as well as the Pearson linear correlation method, to measure the linear correlation, nonlinear correlation, and their hybrid for cell-specific gene coexpressions in immune cells. For a given gene pair or probe set pair input by web users, both the mutual information (MI) and the Pearson correlation coefficient (r) are calculated, and several corresponding values are reported to reflect the nature of the coexpression correlation, including the MI and r values, their respective rank orderings, their rank comparison, and their hybrid correlation value. Furthermore, for a given gene, the 10 most relevant genes are displayed from the MI, r, or hybrid perspective, respectively. Currently, the database covers 16 human cell groups, involving 20,283 human genes. The expression data and the calculated correlation results are interactively accessible on the web page and can be used in other related applications and research. PMID:26881263
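The contrast between the two measures MIrExpress reports can be shown with a small sketch (equal-width binning is our own assumption; the database's exact MI estimator is not specified in the abstract): a purely quadratic gene-gene relationship yields a near-zero Pearson r but a clearly positive mutual information.

```python
import math
from collections import Counter

def pearson_r(x, y):
    # standard sample Pearson correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mutual_information(x, y, bins=4):
    # discretize each profile into equal-width bins, then estimate
    # MI = sum_ij p(i,j) * log2( p(i,j) / (p(i) * p(j)) )
    def binned(v):
        lo, hi = min(v), max(v)
        width = (hi - lo) / bins or 1.0
        return [min(int((a - lo) / width), bins - 1) for a in v]
    bx, by = binned(x), binned(y)
    n = len(x)
    cx, cy, cxy = Counter(bx), Counter(by), Counter(zip(bx, by))
    return sum((c / n) * math.log2((c / n) / ((cx[i] / n) * (cy[j] / n)))
               for (i, j), c in cxy.items())

# a purely nonlinear (quadratic) relationship between two "profiles"
x = [i / 10 for i in range(-10, 11)]
y = [a * a for a in x]
print(round(abs(pearson_r(x, y)), 3))   # ≈ 0.0: the linear measure misses it
print(mutual_information(x, y) > 0.5)   # True: MI still detects the dependence
```

This is exactly the motivation the abstract gives for reporting both values: r and MI rank gene pairs differently when the dependence is nonlinear.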
Database resources of the National Center for Biotechnology Information.
2016-01-04
The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank(®) nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (PubMed Central (PMC), Bookshelf and PubReader), health (ClinVar, dbGaP, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen), genomes (BioProject, Assembly, Genome, BioSample, dbSNP, dbVar, Epigenomics, the Map Viewer, Nucleotide, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser and the Trace Archive), genes (Gene, Gene Expression Omnibus (GEO), HomoloGene, PopSet and UniGene), proteins (Protein, the Conserved Domain Database (CDD), COBALT, Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB) and Protein Clusters) and chemicals (Biosystems and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for most of these databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized datasets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. Published by Oxford University Press on behalf of Nucleic Acids Research 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.
Database resources of the National Center for Biotechnology Information.
2015-01-01
The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank(®) nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (Bookshelf, PubMed Central (PMC) and PubReader); medical genetics (ClinVar, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen); genes and genomics (BioProject, BioSample, dbSNP, dbVar, Epigenomics, Gene, Gene Expression Omnibus (GEO), Genome, HomoloGene, the Map Viewer, Nucleotide, PopSet, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser, Trace Archive and UniGene); and proteins and chemicals (Biosystems, COBALT, the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB), Protein Clusters, Protein and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for many of these databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. Published by Oxford University Press on behalf of Nucleic Acids Research 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
[Integrated DNA barcoding database for identifying Chinese animal medicine].
Shi, Lin-Chun; Yao, Hui; Xie, Li-Fang; Zhu, Ying-Jie; Song, Jing-Yuan; Zhang, Hui; Chen, Shi-Lin
2014-06-01
To construct an integrated DNA barcoding database for identifying Chinese animal medicine, the authors and their collaborators have carried out extensive research on identifying Chinese animal medicines using DNA barcoding technology. Sequences from GenBank were analyzed simultaneously. Three different methods, BLAST, barcoding gap, and tree building, were used to confirm the reliability of the barcode records in the database. The integrated DNA barcoding database for identifying Chinese animal medicine was constructed from three parts: specimen, sequence, and literature information. The database contains about 800 animal medicines as well as their adulterants and closely related species. Unknown specimens can be identified by pasting their sequence record into the window on the ID page of the species identification system for traditional Chinese medicine (www.tcmbarcode.cn). The integrated DNA barcoding database for identifying Chinese animal medicine is of significant value for animal species identification, conservation of rare and endangered species, and sustainable utilization of animal resources.
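The "paste a sequence, get an identification" step can be illustrated with a toy matcher (our own sketch using plain pairwise identity in place of BLAST; the reference records below are hypothetical, not entries from the database):

```python
REFERENCES = {  # hypothetical barcode records, not entries from the database
    "Bufo gargarizans": "ATGCGTACGTTAGCCT",
    "Hirudo nipponia":  "ATGCTTACGGTAGCAT",
}

def identity(a, b):
    # fraction of aligned positions that agree (a stand-in for a BLAST score)
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

def identify(query):
    # return the reference species whose barcode is closest to the query
    return max(REFERENCES, key=lambda sp: identity(query, REFERENCES[sp]))

print(identify("ATGCGTACGTTAGCCA"))  # Bufo gargarizans (15/16 positions match)
```

A real pipeline would add the abstract's other two checks, barcoding-gap analysis and tree building, before accepting the top hit.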
2002-06-01
... Migrate data to SQL Server ... The Web Server is on the same server as the SWORD database in the current version ... still be supported by Access. SQL Server would be a more viable tool for a fully developed application based on the number of potential users and ...
Database resources of the National Center for Biotechnology Information
Wheeler, David L.; Barrett, Tanya; Benson, Dennis A.; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Feolo, Michael; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Khovayko, Oleg; Landsman, David; Lipman, David J.; Madden, Thomas L.; Maglott, Donna R.; Miller, Vadim; Ostell, James; Pruitt, Kim D.; Schuler, Gregory D.; Shumway, Martin; Sequeira, Edwin; Sherry, Steven T.; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusov, Roman L.; Tatusova, Tatiana A.; Wagner, Lukas; Yaschenko, Eugene
2008-01-01
In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data available through NCBI's web site. NCBI resources include Entrez, the Entrez Programming Utilities, My NCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link, Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genome, Genome Project and related tools, the Trace, Assembly, and Short Read Archives, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups, Influenza Viral Resources, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Entrez Probe, GENSAT, Database of Genotype and Phenotype, Online Mendelian Inheritance in Man, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool and the PubChem suite of small molecule databases. Augmenting the web applications are custom implementations of the BLAST program optimized to search specialized data sets. These resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:18045790
Summary report of journal operations, 2012.
2013-01-01
Presents the summary reports of American Psychological Association journal operations (compiled from the 2012 annual reports of the Council of Editors and from Central Office records) and Division journal operations (compiled from the 2012 annual reports of the Division journal editors). The information provided includes number of manuscripts, printed pages, and print subscriptions per journal. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei
2011-07-01
The aim of this study was to develop a therapeutic drug monitoring (TDM) network server for tacrolimus in Chinese renal transplant patients, which can help doctors manage patient information and provides three levels of prediction. The database management system MySQL was employed to build and manage the database of patient and doctor information, and hypertext markup language (HTML) and JavaServer Pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, these technologies were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. The network server is shown to provide the basic functions for database management and the three levels of prediction needed to help doctors optimize tacrolimus regimens for Chinese renal transplant patients.
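The Bayesian MAP step the abstract describes can be sketched under strong simplifying assumptions (our own one-parameter model with steady-state trough C = dose_rate / CL and a lognormal prior on clearance; the paper's actual population model for tacrolimus is more elaborate): the objective combines a residual-error likelihood term with a prior penalty, and minimizing it pulls the estimate between the observed data and the population value.

```python
import math

def neg_log_posterior(cl, dose_rate, c_obs, cl_pop, omega, sigma):
    c_pred = dose_rate / cl
    # additive residual error (s.d. sigma) + lognormal prior on CL (s.d. omega)
    likelihood = ((c_obs - c_pred) ** 2) / (2 * sigma ** 2)
    prior = (math.log(cl / cl_pop) ** 2) / (2 * omega ** 2)
    return likelihood + prior

def map_estimate(dose_rate, c_obs, cl_pop, omega=0.3, sigma=1.0,
                 lo=0.5, hi=100.0, iters=200):
    # golden-section search: minimize the posterior objective over CL
    g = (math.sqrt(5) - 1) / 2
    for _ in range(iters):
        a, b = hi - g * (hi - lo), lo + g * (hi - lo)
        if neg_log_posterior(a, dose_rate, c_obs, cl_pop, omega, sigma) < \
           neg_log_posterior(b, dose_rate, c_obs, cl_pop, omega, sigma):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

# population CL 20 L/h; one trough of 8 ng/mL on a dose rate of 200 ug/h
cl_map = map_estimate(dose_rate=200.0, c_obs=8.0, cl_pop=20.0)
# the estimate lies between the data-only fit (200/8 = 25 L/h) and the prior (20 L/h)
assert 20.0 < cl_map < 25.0
```

With no observation the objective is minimized at the population value (the "population prediction"); each added concentration shifts the MAP estimate toward the individual's data, which is the individualization the server's third prediction level provides.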
Murphy, Elizabeth A.; Ishii, Audrey L.
2006-01-01
The U.S. Geological Survey (USGS), in cooperation with DuPage County Department of Engineering, Stormwater Management Division, maintains a database of hourly meteorologic and hydrologic data for use in a near real-time streamflow simulation system, which assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek watershed in DuPage County, Illinois. The majority of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorologic data (wind speed, solar radiation, air temperature, and dewpoint temperature) are collected at Argonne National Laboratory in Argonne, Illinois. Potential evapotranspiration is computed from the meteorologic data. The hydrologic data (discharge and stage) are collected at USGS streamflow-gaging stations in DuPage County. These data are stored in a Watershed Data Management (WDM) database. This report describes a version of the WDM database that was quality-assured and quality-controlled annually to ensure the datasets were complete and accurate. This version of the WDM database contains data from January 1, 1997, through September 30, 2004, and is named SEP04.WDM. This report provides a record of time periods of poor data for each precipitation dataset and describes methods used to estimate the data for the periods when data were missing, flawed, or snowfall-affected. The precipitation dataset data-filling process was changed in 2001, and both processes are described. The other meteorologic and hydrologic datasets in the database are fully described in the annual U.S. Geological Survey Water Data Report for Illinois and, therefore, are described in less detail than the precipitation datasets in this report.
NASA Astrophysics Data System (ADS)
Miyaji, Kousuke; Sun, Chao; Soga, Ayumi; Takeuchi, Ken
2014-01-01
A relational database management system (RDBMS) is designed for storage on a NAND flash solid-state drive (SSD). By vertically integrating the storage engine (SE) and the flash translation layer (FTL), system performance is maximized and internal SSD overhead is minimized. The proposed RDBMS SE utilizes physical information about the NAND flash memory supplied by the FTL, and the query operation is also optimized for the SSD. Through these measures, page-copy-less garbage collection is achieved and data fragmentation in the NAND flash memory is suppressed. As a result, RDBMS performance increases by 3.8 times, SSD power consumption decreases by 46%, and SSD lifetime increases by 61%. The effectiveness of the proposed scheme grows with larger erase-block sizes, which matches the scaling trend of future three-dimensional (3D) NAND flash memories. The preferred row data size for the proposed scheme is below 500 bytes for a 16-kbyte page size.
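A toy model illustrates why small rows suit a large flash page: rows much smaller than the page rarely straddle a page boundary, so a single-row update rewrites one page, while rows approaching the page size frequently touch two. This is only an illustration of the fragmentation effect, not the paper's FTL design.

```python
PAGE = 16 * 1024  # NAND flash page size in bytes (16 kbyte, as in the paper)

def pages_touched(row_offset, row_size, page=PAGE):
    """Number of flash pages that an update of one row must rewrite."""
    first = row_offset // page
    last = (row_offset + row_size - 1) // page
    return last - first + 1

def avg_pages_per_update(row_size, n_rows=10000, page=PAGE):
    """Rows packed back-to-back with no page alignment: average pages
    rewritten per single-row update."""
    total = sum(pages_touched(i * row_size, row_size, page) for i in range(n_rows))
    return total / n_rows

# Small rows almost never straddle a page; large rows straddle most of the time.
small = avg_pages_per_update(400)    # ~400-byte rows
large = avg_pages_per_update(12000)  # rows close to the page size
```

With a 16-kbyte page, 400-byte rows average just over one page rewritten per update, while 12 000-byte rows average nearly two, which is consistent with the paper's preference for rows below 500 bytes.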
Pagani, Ioanna; Liolios, Konstantinos; Jansson, Jakob; Chen, I-Min A.; Smirnova, Tatyana; Nosrat, Bahador; Markowitz, Victor M.; Kyrpides, Nikos C.
2012-01-01
The Genomes OnLine Database (GOLD, http://www.genomesonline.org/) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2011, GOLD, now on version 4.0, contains information for 11,472 sequencing projects, of which 2907 have been completed and their sequence data deposited in a public repository. Of these complete projects, 1918 are finished and 989 are permanent drafts. Moreover, GOLD contains information for 340 metagenome studies associated with 1927 metagenome samples. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about any (x) Sequence specification and beyond. PMID:22135293
Pagani, Ioanna; Liolios, Konstantinos; Jansson, Jakob; Chen, I-Min A; Smirnova, Tatyana; Nosrat, Bahador; Markowitz, Victor M; Kyrpides, Nikos C
2012-01-01
The Genomes OnLine Database (GOLD, http://www.genomesonline.org/) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2011, GOLD, now on version 4.0, contains information for 11,472 sequencing projects, of which 2907 have been completed and their sequence data has been deposited in a public repository. Out of these complete projects, 1918 are finished and 989 are permanent drafts. Moreover, GOLD contains information for 340 metagenome studies associated with 1927 metagenome samples. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about any (x) Sequence specification and beyond.
Bera, Maitreyee
2014-01-01
The U.S. Geological Survey (USGS), in cooperation with DuPage County Stormwater Management Division, maintains a USGS database of hourly meteorologic and hydrologic data for use in a near real-time streamflow simulation system, which assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek watershed in DuPage County, Illinois. Most of the precipitation data are collected from a tipping-bucket rain-gage network located in and near DuPage County. The other meteorologic data (wind speed, solar radiation, air temperature, and dewpoint temperature) are collected at Argonne National Laboratory in Argonne, Ill. Potential evapotranspiration is computed from the meteorologic data. The hydrologic data (discharge and stage) are collected at USGS streamflow-gaging stations in DuPage County. These data are stored in a Watershed Data Management (WDM) database. An earlier report describes in detail the development of the WDM database, including the processing of data from January 1, 1997, through September 30, 2004, in the SEP04.WDM database. SEP04.WDM was updated by appending data from October 1, 2004, through September 30, 2011 (water years 2005–11), and renamed SEP11.WDM. This report details the processing of meteorologic and hydrologic data in SEP11.WDM and provides a record of snow-affected periods and the data used to fill missing-record periods for each precipitation site during water years 2005–11. The meteorologic data-filling methods are described in detail in Over and others (2010), and an update is provided in this report.
Userscripts for the life sciences.
Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J
2007-12-21
The web has seen an explosion of chemistry- and biology-related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like the Resource Description Framework and the Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Several userscripts are presented that enrich biology- and chemistry-related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources is extracted using open Application Programming Interfaces, while common Uniform Resource Locator schemes are used to make deep links to related information in those external resources. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources. Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server.
This flexibility allows the scientists to tune the features of web resources to optimise their productivity.
Userscripts for the Life Sciences
Willighagen, Egon L; O'Boyle, Noel M; Gopalakrishnan, Harini; Jiao, Dazhi; Guha, Rajarshi; Steinbeck, Christoph; Wild, David J
2007-01-01
Background The web has seen an explosion of chemistry- and biology-related resources in the last 15 years: thousands of scientific journals, databases, wikis, blogs and resources are available with a wide variety of types of information. There is a huge need to aggregate and organise this information. However, the sheer number of resources makes it unrealistic to link them all in a centralised manner. Instead, search engines to find information in those resources flourish, and formal languages like the Resource Description Framework and the Web Ontology Language are increasingly used to allow linking of resources. A recent development is the use of userscripts to change the appearance of web pages by on-the-fly modification of the web content. This opens possibilities to aggregate information and computational results from different web resources into the web page of one of those resources. Results Several userscripts are presented that enrich biology- and chemistry-related web resources by incorporating or linking to other computational or data sources on the web. The scripts make use of Greasemonkey-like plugins for web browsers and are written in JavaScript. Information from third-party resources is extracted using open Application Programming Interfaces, while common Uniform Resource Locator schemes are used to make deep links to related information in those external resources. The userscripts presented here use a variety of techniques and resources, and show the potential of such scripts. Conclusion This paper discusses a number of userscripts that aggregate information from two or more web resources. Examples are shown that enrich web pages with information from other resources, and show how information from web pages can be used to link to, search, and process information in other resources.
Due to the nature of userscripts, scientists are able to select those scripts they find useful on a daily basis, as the scripts run directly in their own web browser rather than on the web server. This flexibility allows the scientists to tune the features of web resources to optimise their productivity. PMID:18154664
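The "common URL scheme" idea behind such deep links can be sketched independently of any browser: a recognised identifier type maps to a stable URL pattern. The patterns below reflect current site layouts (and a placeholder DOI), so they are illustrative assumptions rather than the exact links the paper's userscripts generated.

```python
# Map of identifier types to deep-link URL patterns; patterns reflect
# current site layouts, not necessarily those of 2007.
DEEP_LINKS = {
    "pmid":    "https://pubmed.ncbi.nlm.nih.gov/{id}/",
    "doi":     "https://doi.org/{id}",
    "uniprot": "https://www.uniprot.org/uniprotkb/{id}",
    "pdb":     "https://www.rcsb.org/structure/{id}",
}

def deep_link(id_type, identifier):
    """Return a stable URL for a recognised identifier, or None if unknown."""
    pattern = DEEP_LINKS.get(id_type)
    return pattern.format(id=identifier) if pattern else None

# "10.1000/xyz123" is the example DOI from the DOI handbook, used as a placeholder.
link = deep_link("doi", "10.1000/xyz123")
```

A userscript would scan a page for such identifiers and rewrite them into anchors built this way; the table-driven design makes adding a new resource a one-line change.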
Brower, Stewart M.
2004-01-01
Background: The analysis included forty-one academic health sciences library (HSL) Websites as captured in the first two weeks of January 2001. Home pages and persistent navigational tools (PNTs) were analyzed for layout, technology, and links, and other general site metrics were taken. Methods: Websites were selected based on rank in the National Network of Libraries of Medicine, with regional and resource libraries given preference on the basis that these libraries are recognized as leaders in their regions and would be the most reasonable source of standards for best practice. A three-page evaluation tool was developed based on previous similar studies. All forty-one sites were evaluated in four specific areas: library general information, Website aids and tools, library services, and electronic resources. Metrics taken for electronic resources included orientation of bibliographic databases alphabetically by title or by subject area and with links to specifically named databases. Results: Based on the results, a formula for determining obligatory links was developed, listing items that should appear on all academic HSL Web home pages and PNTs. Conclusions: These obligatory links demonstrate a series of best practices that may be followed in the design and construction of academic HSL Websites. PMID:15494756
Database resources of the National Center for Biotechnology Information
Sayers, Eric W.; Barrett, Tanya; Benson, Dennis A.; Bolton, Evan; Bryant, Stephen H.; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M.; DiCuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M.; Geer, Lewis Y.; Helmberg, Wolfgang; Kapustin, Yuri; Krasnov, Sergey; Landsman, David; Lipman, David J.; Lu, Zhiyong; Madden, Thomas L.; Madej, Tom; Maglott, Donna R.; Marchler-Bauer, Aron; Miller, Vadim; Karsch-Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D.; Schuler, Gregory D.; Sequeira, Edwin; Sherry, Stephen T.; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A.; Wagner, Lukas; Wang, Yanli; Wilbur, W. John; Yaschenko, Eugene; Ye, Jian
2012-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Website. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Probe, Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov. PMID:22140104
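The Entrez Programming Utilities mentioned above are a documented HTTP API; a minimal sketch of building an ESearch request follows. The base URL and the `db`/`term`/`retmax` parameters are part of the documented interface, while the example query itself is illustrative, and the request is shown without being executed.

```python
from urllib.parse import urlencode

# Base URL of the NCBI Entrez Programming Utilities (E-utilities).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def esearch_url(db, term, retmax=20):
    """Build an ESearch request returning up to `retmax` UIDs matching
    `term` in the Entrez database `db`."""
    params = urlencode({"db": db, "term": term, "retmax": retmax})
    return f"{EUTILS}/esearch.fcgi?{params}"

url = esearch_url("pubmed", "tacrolimus pharmacokinetics")
# Fetching `url` returns XML containing an <IdList> of matching PubMed IDs,
# which can then be passed to efetch.fcgi to retrieve the records.
```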
Database resources of the National Center for Biotechnology Information
2013-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI, http://www.ncbi.nlm.nih.gov) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, the Genetic Testing Registry, Genome and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, BioProject, BioSample, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus, Probe, Online Mendelian Inheritance in Animals, the Molecular Modeling Database, the Conserved Domain Database, the Conserved Domain Architecture Retrieval Tool, Biosystems, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page. PMID:23193264
Database resources of the National Center for Biotechnology Information.
Wheeler, David L; Barrett, Tanya; Benson, Dennis A; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Geer, Lewis Y; Kapustin, Yuri; Khovayko, Oleg; Landsman, David; Lipman, David J; Madden, Thomas L; Maglott, Donna R; Ostell, James; Miller, Vadim; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Steven T; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusov, Roman L; Tatusova, Tatiana A; Wagner, Lukas; Yaschenko, Eugene
2007-01-01
In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, the Entrez Programming Utilities, My NCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link(BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genome, Genome Project and related tools, the Trace and Assembly Archives, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups (COGs), Viral Genotyping Tools, Influenza Viral Resources, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART) and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. These resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
Database resources of the National Center for Biotechnology Information.
Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Edgar, Ron; Federhen, Scott; Feolo, Michael; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Landsman, David; Lipman, David J; Madden, Thomas L; Maglott, Donna R; Miller, Vadim; Mizrachi, Ilene; Ostell, James; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Yaschenko, Eugene; Ye, Jian
2009-01-01
In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central, Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR, OrfFinder, Spidey, Splign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Clusters of Orthologous Groups (COGs), Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART) and the PubChem suite of small molecule databases. Augmenting many of the web applications is custom implementation of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
Wang, Yanli; Bryant, Stephen H.; Cheng, Tiejun; Wang, Jiyao; Gindulyte, Asta; Shoemaker, Benjamin A.; Thiessen, Paul A.; He, Siqian; Zhang, Jian
2017-01-01
PubChem's BioAssay database (https://pubchem.ncbi.nlm.nih.gov) has served as a public repository for small-molecule and RNAi screening data since 2004, providing open access to its data content for the community. PubChem accepts data submissions from researchers worldwide in academia, industry, and government agencies. PubChem also collaborates with other chemical-biology database stakeholders on data exchange. After more than a decade of development, it has become an important information resource supporting drug discovery and chemical-biology research. To facilitate data discovery, PubChem is integrated with all other databases at NCBI. In this work, we provide an update on the PubChem BioAssay database describing several recent developments, including added sources of research data, a redesigned BioAssay record page, a new BioAssay classification browser, and new features in the Upload system that facilitate data sharing. PMID:27899599
[Preparation of the database and the homepage on chemical accidents relating to health hazard].
Yamamoto, M; Morita, M; Kaminuma, T
1998-01-01
We collected data on accidents caused by chemicals that occurred in Japan and built a database from them. We also set up a World Wide Web home page containing explanations of chemical accidents and a retrieval page for the database. We designed the retrieval page so that users can search the data by keywords such as chemicals (e.g., chlorine gas, hydrogen sulfide, pesticides), places (e.g., home, factory, vehicles, tank), causes (e.g., reaction, leakage, exhaust gas), and other factors (e.g., cleaning, painting, transportation).
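The keyword retrieval described above can be sketched as a simple query over the database fields. The schema and rows below are invented for illustration; the original system's actual schema is not described in the abstract.

```python
import sqlite3

# Minimal in-memory sketch of keyword retrieval over accident records;
# the schema and example rows are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE accident (
    id INTEGER PRIMARY KEY, chemical TEXT, place TEXT, cause TEXT, note TEXT)""")
db.executemany(
    "INSERT INTO accident (chemical, place, cause, note) VALUES (?, ?, ?, ?)",
    [("chlorine gas", "home", "reaction", "mixing cleaning agents"),
     ("hydrogen sulfide", "factory", "leakage", "sewer maintenance"),
     ("pesticide", "vehicle", "transportation", "container spill")])

def search(keyword):
    """Match the keyword against every searchable field of each record."""
    like = f"%{keyword}%"
    cur = db.execute(
        "SELECT chemical, place, cause FROM accident "
        "WHERE chemical LIKE ? OR place LIKE ? OR cause LIKE ? OR note LIKE ?",
        (like, like, like, like))
    return cur.fetchall()

hits = search("cleaning")
```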
Designing testing service at baristand industri Medan’s liquid waste laboratory
NASA Astrophysics Data System (ADS)
Kusumawaty, Dewi; Napitupulu, Humala L.; Sembiring, Meilita T.
2018-03-01
Baristand Industri Medan is a technical implementation unit under the Industrial Research and Development Agency of the Ministry of Industry. One of the services most often used at Baristand Industri Medan is liquid-waste testing. The company's service standard for testing is nine working days. In 2015, 89.66% of liquid-waste testing services did not meet this standard because samples accumulated in a backlog. The purpose of this research is to design an online service for scheduling incoming liquid-waste samples. The method used is information-system design, consisting of model design, output design, input design, database design, and technology design. The resulting online liquid-waste testing information system consists of three pages: one for the customer, one for the sample recipient, and one for the laboratory. Simulation results with scheduled samples show that the nine-working-day service standard can be met.
The semantic web and computer vision: old AI meets new AI
NASA Astrophysics Data System (ADS)
Mundy, J. L.; Dong, Y.; Gilliam, A.; Wagner, R.
2018-04-01
There has been vast progress in linking semantic information across billions of web pages through ontologies encoded in the Web Ontology Language (OWL), built on the Resource Description Framework (RDF). A prime example is Wikipedia, where the knowledge contained in its more than four million pages is encoded in an ontological database called DBpedia (http://wiki.dbpedia.org/). Web-based query tools can retrieve semantic information from DBpedia, encoded in interlinked ontologies that can be accessed using natural language. This paper shows how this vast context can be used to automate querying of images and other geospatial data in support of reporting changes in structures and activities. Computer vision algorithms are selected and provided with context based on natural-language requests for monitoring and analysis. The resulting reports provide semantically linked observations from images and 3D surface models.
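Retrieval from DBpedia is typically done with SPARQL. The sketch below builds such a query as a string; `rdfs:label` and `dbo:abstract` are real DBpedia ontology terms, but the query is illustrative and is shown without executing it against the public endpoint.

```python
# Public DBpedia SPARQL endpoint (shown for reference, not queried here).
ENDPOINT = "https://dbpedia.org/sparql"

def resource_query(resource):
    """Build a SPARQL query for the label and English abstract of a
    DBpedia resource, e.g. resource_query('Eiffel_Tower')."""
    return f"""
    PREFIX dbo:  <http://dbpedia.org/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label ?abstract WHERE {{
        <http://dbpedia.org/resource/{resource}> rdfs:label ?label ;
                                                 dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }}"""

query = resource_query("Eiffel_Tower")
```

Posting `query` to the endpoint would return the structured context that, in the paper's pipeline, grounds a natural-language monitoring request before computer vision algorithms are applied.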
Kalium: a database of potassium channel toxins from scorpion venom.
Kuzmenkov, Alexey I; Krylov, Nikolay A; Chugunov, Anton O; Grishin, Eugene V; Vassilevski, Alexander A
2016-01-01
Kalium (http://kaliumdb.org/) is a manually curated database that accumulates data on potassium channel toxins purified from scorpion venom (KTx). The database is an open-access resource and provides easy access to pages of other databases of interest, such as UniProt, PDB, the NCBI Taxonomy Browser, and PubMed. Key achievements of Kalium include strict yet straightforward regulation of KTx classification based on the unified nomenclature supported by researchers in the field, removal of peptides with partial sequences and of entries supported only by transcriptomic information, classification of β-family toxins, and addition of a novel λ-family. Molecules presented in the database can be processed by the Clustal Omega server with a one-click option. Molecular masses of mature peptides are calculated, and available activity data are compiled for all KTx. We believe that Kalium is not only of high interest to professional toxinologists but also of general utility to the scientific community. Database URL: http://kaliumdb.org/
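The peptide mass calculation mentioned above is a standard computation: sum the monoisotopic residue masses of the sequence and add one water for the termini. The residue mass table is standard; the example sequence is hypothetical, and modifications such as disulfide bonds or C-terminal amidation (common in KTx) are deliberately ignored in this sketch.

```python
# Standard monoisotopic residue masses (Da) for the 20 proteinogenic amino acids.
RESIDUE = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056  # mass of H2O added for the N- and C-termini

def monoisotopic_mass(sequence):
    """Neutral monoisotopic mass of a linear, unmodified peptide."""
    return sum(RESIDUE[aa] for aa in sequence.upper()) + WATER

mass = monoisotopic_mass("GVIINVK")  # hypothetical sequence
```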
The use of a personal digital assistant for wireless entry of data into a database via the Internet.
Fowler, D L; Hogle, N J; Martini, F; Roh, M S
2002-01-01
Researchers typically record data on a worksheet and at some later time enter it into the database. Wireless data entry and retrieval using a personal digital assistant (PDA) at the site of patient contact can simplify this process and improve efficiency. A surgeon and a nurse coordinator provided the content for the database. The computer programmer created the database, placed the pages of the database on the PDA screen, and researched and installed security measures. Designing the database took 6 months. Meeting Health Insurance Portability and Accountability Act of 1996 (HIPAA) requirements for patient confidentiality, satisfying institutional Information Services requirements, and ensuring connectivity required an additional 8 months before the functional system was complete. It is now possible to achieve wireless entry and retrieval of data using a PDA. Potential advantages include collection and entry of data at the same time, easy entry of data from multiple sites, and retrieval of data at the patient's bedside.
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-10-01
Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited; e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
Liolios, Konstantinos; Chen, I-Min A; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor M; Kyrpides, Nikos C
2010-01-01
The Genomes On Line Database (GOLD) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2009, GOLD contains information for more than 5800 sequencing projects, of which 1100 have been completed and their sequence data deposited in a public repository. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about a (Meta)Genome Sequence (MIGS/MIMS) specification. GOLD is available at: http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece, at: http://gold.imbb.forth.gr/
Liolios, Konstantinos; Chen, I-Min A.; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor M.; Kyrpides, Nikos C.
2010-01-01
The Genomes On Line Database (GOLD) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2009, GOLD contains information for more than 5800 sequencing projects, of which 1100 have been completed and their sequence data deposited in a public repository. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about a (Meta)Genome Sequence (MIGS/MIMS) specification. GOLD is available at: http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece, at: http://gold.imbb.forth.gr/ PMID:19914934
Spatial Designation of Critical Habitats for Endangered and Threatened Species in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuttle, Mark A; Singh, Nagendra; Sabesan, Aarthy
Establishing biological reserves or "hot spots" for endangered and threatened species is critical to supporting real-world species regulatory and management problems. Geographic data on the distribution of endangered and threatened species can be used to improve ongoing species-conservation efforts in the United States. At present, no spatial database exists that maps the locations of endangered species for the US. Spatial descriptions do exist for the habitat associated with all endangered species, but in a form not readily usable in a geographic information system (GIS). In our study, the principal challenge was extracting spatial data describing these critical habitats for 472 species from over 1,000 pages of the Federal Register. In addition, an appropriate database schema was designed to accommodate the different tiers of information associated with the species, along with the confidence of designation; the interpreted location data were geo-referenced to the county enumeration unit, producing a spatial database of endangered species for the whole of the US. The significance of these critical habitat designations, the database schema, and the methodologies are discussed.
The YeastGenome app: the Saccharomyces Genome Database at your fingertips.
Wong, Edith D; Karra, Kalpana; Hitz, Benjamin C; Hong, Eurie L; Cherry, J Michael
2013-01-01
The Saccharomyces Genome Database (SGD) is a scientific database that provides researchers with high-quality curated data about the genes and gene products of Saccharomyces cerevisiae. To provide instant and easy access to this information on mobile devices, we have developed YeastGenome, a native application for the Apple iPhone and iPad. YeastGenome can be used to quickly find basic information about S. cerevisiae genes and chromosomal features regardless of internet connectivity. With or without network access, you can view basic information and Gene Ontology annotations about a gene of interest by searching gene names and gene descriptions or by browsing the database within the app to find the gene of interest. With internet access, the app provides more detailed information about the gene, including mutant phenotypes, references and protein and genetic interactions, as well as provides hyperlinks to retrieve detailed information by showing SGD pages and views of the genome browser. SGD provides online help describing basic ways to navigate the mobile version of SGD, highlights key features and answers frequently asked questions related to the app. The app is available from iTunes (http://itunes.com/apps/yeastgenome). The YeastGenome app is provided freely as a service to our community, as part of SGD's mission to provide free and open access to all its data and annotations.
Barbeyron, Tristan; Brillet-Guéguen, Loraine; Carré, Wilfrid; Carrière, Cathelène; Caron, Christophe; Czjzek, Mirjam; Hoebeke, Mark; Michel, Gurvan
2016-01-01
Sulfatases cleave sulfate groups from various molecules and constitute a biologically and industrially important group of enzymes. However, the number of sulfatases whose substrate has been characterized is limited in comparison to the huge diversity of sulfated compounds, making functional annotations of sulfatases particularly prone to flaws and misinterpretations. In the context of the explosion of genomic data, a classification system allowing better prediction of substrate specificity and setting the limits of functional annotation is urgently needed for sulfatases. Here, after an overview of the diversity of sulfated compounds and of the known sulfatases, we propose a classification database, SulfAtlas (http://abims.sb-roscoff.fr/sulfatlas/), based on sequence homology and composed of four families of sulfatases. The formylglycine-dependent sulfatases, which constitute the largest family, are further divided by a phylogenetic approach into 73 subfamilies, each corresponding either to a known specificity or to an uncharacterized substrate. SulfAtlas summarizes information about the different families of sulfatases. Within a family, a web page displays the list of its subfamilies (when they exist) and the list of EC numbers. The family or subfamily page shows some descriptors and a table with all the UniProt accession numbers linked to the databases UniProt, ExplorEnz, and PDB. PMID:27749924
Parents on the web: risks for quality management of cough in children.
Pandolfini, C; Impicciatore, P; Bonati, M
2000-01-01
Health information on the Internet, with respect to common, self-limited childhood illnesses, has been found to be unreliable. Therefore, parents navigating the Internet risk finding advice that is incomplete or, more importantly, not evidence-based. The importance of the Internet as a source of quality health information for consumers should, however, be taken into consideration. For this reason, studies need to be performed regarding the quality of the material provided. Various strategies have been proposed that would allow parents to distinguish trustworthy web documents from unreliable ones. One of these strategies is the use of a checklist for the appraisal of web pages based on their technical aspects. The purpose of this study was to assess the quality of information present on the Internet regarding the home management of cough in children and to examine the applicability of a checklist strategy that would allow consumers to select more trustworthy web pages. The Internet was searched for web pages regarding the home treatment of cough in children with the use of different search engines. Medline and the Cochrane database were searched for available evidence concerning the management of cough in children. Three checklists were created to assess different aspects of the web documents. The first checklist was designed to allow for a technical appraisal of the web pages and was based on components such as the name of the author and references used. The second was constructed to examine the completeness of the health information contained in the documents, such as the causes and mechanism of cough, and pharmacological and nonpharmacological treatment. The third checklist assessed the quality of the information by measuring it against a gold-standard document.
This document was created by combining the policy statement issued by the American Academy of Pediatrics regarding the pharmacological treatment of cough in children with the guide of the World Health Organization on drugs for children. For each checklist, the web page contents were analyzed and quantitative measurements were assigned. Of the 19 web pages identified, 9 explained the purpose and/or mechanism of cough and 14 the causes. The most frequently mentioned pharmacological treatments were single-ingredient suppressant preparations, followed by single-ingredient expectorants. Dextromethorphan was the most commonly referred to suppressant and guaifenesin the most common expectorant. No documents discouraged the use of suppressants, although 4 of the 10 web documents that addressed expectorants discouraged their use. Sixteen web pages addressed nonpharmacological treatment, 14 of which suggested exposure to a humid environment and/or extra fluid. In most cases, the criteria in the technical appraisal checklist were not present in the web documents; moreover, 2 web pages did not provide any of the items. Regarding content completeness, 3 web pages satisfied all the requirements considered in the checklist and 2 documents did not meet any of the criteria. Of the 3 web pages that scored highest in technical aspect, 2 also supplied complete information. No relationship was found, however, between the technical aspect and the content completeness. Concerning the quality of the health information supplied, 10 pages received a negative score because they contained more incorrect than correct information, and 1 web page received a high score. This document was 1 of the 2 that also scored high in technical aspect and content completeness. No relationship was found, however, among quality of information, technical aspect, and content completeness. 
As the results of this study show, a parent navigating the Internet for information on the home management of cough in children will no doubt find incorrect advice among the search results. (ABSTRACT TRUNCATED)
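The checklist-based appraisal used in this study can be sketched as a simple tally of satisfied items per checklist. The item names below are hypothetical stand-ins, not the study's actual instruments:

```python
# Hedged sketch of checklist-based appraisal of a web page.
# Item names are illustrative placeholders, not the study's real checklists.

TECHNICAL_ITEMS = {"author_named", "references_cited", "date_of_update", "ownership_disclosed"}
CONTENT_ITEMS = {"causes", "mechanism", "pharmacological_treatment", "nonpharmacological_treatment"}

def appraise(page_features: set) -> dict:
    """Return per-checklist scores as the fraction of items satisfied."""
    return {
        "technical": len(page_features & TECHNICAL_ITEMS) / len(TECHNICAL_ITEMS),
        "content": len(page_features & CONTENT_ITEMS) / len(CONTENT_ITEMS),
    }

# A page naming its author and covering causes and mechanism of cough:
scores = appraise({"author_named", "causes", "mechanism"})
print(scores)  # {'technical': 0.25, 'content': 0.5}
```

Scoring each checklist independently mirrors the study's finding that technical quality and content completeness need not be related.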
[Preparation of the database and the Internet (WWW) homepage for regulations on chemicals in Japan].
Yamamoto, M; Morita, M; Kaminuma, T
1999-01-01
We prepared a database on chemical regulations in Japan. The regulations covered are the "Law concerning the Examination and Regulation of Manufacture, etc., of Chemical Substances", the "Poisonous and Deleterious Substances Control Law", the "Waterworks Law", the "Law for the Control of Household Products Containing Harmful Substances", and the pesticide residue provisions of the "Food Sanitation Law". We also set up a World Wide Web (WWW) homepage containing an explanation of each law as well as chemical names, CAS registry numbers, and standards. The WWW pages contain lists of chemicals and a retrieval page for the database.
Design and development of a web-based application for diabetes patient data management.
Deo, S S; Deobagkar, D N; Deobagkar, Deepti D
2005-01-01
A web-based database management system developed for collecting, managing and analysing information on diabetes patients is described here. It is a searchable, client-server, relational database application developed on the Windows platform using Oracle, Active Server Pages (ASP), VBScript and JavaScript. The software is menu-driven and allows authorized healthcare providers to access, enter, update and analyse patient information. Graphical representations of the data can be generated by the system using bar charts and pie charts. An interactive web interface allows users to query the database and generate reports. Alpha- and beta-testing of the system were carried out; the system at present holds records of 500 diabetes patients and has proved useful in diagnosis and treatment. In addition to providing patient data on a continuous basis in a simple format, the system is used in population and comparative analysis. It has proved to be of significant advantage to the healthcare provider as compared to the paper-based system.
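The report-generation pattern described above (authorized queries over a relational patient database) can be sketched with a parameterized query. This is a stand-in using Python's sqlite3 rather than the paper's Oracle/ASP stack, and the schema, column names, and values are invented:

```python
import sqlite3

# Minimal stand-in for the patient-report queries described above.
# Schema and data are hypothetical; sqlite3 replaces Oracle purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, hba1c REAL)")
conn.executemany("INSERT INTO patients (name, hba1c) VALUES (?, ?)",
                 [("A", 6.1), ("B", 8.4), ("C", 7.9)])

# Parameterized query: patients above an HbA1c threshold, as a clinical report might need.
threshold = 7.0
rows = conn.execute(
    "SELECT name, hba1c FROM patients WHERE hba1c > ? ORDER BY hba1c DESC",
    (threshold,),
).fetchall()
print(rows)  # [('B', 8.4), ('C', 7.9)]
```

Parameterized placeholders (`?`) rather than string concatenation are the idiomatic way to keep such web-facing queries safe from injection.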
Silva-Lopes, Victor W; Monteiro-Leal, Luiz H
2003-07-01
The development of new technology and the possibility of fast information delivery by either Internet or Intranet connections are changing education. Microanatomy education depends basically on the correct interpretation of microscopy images by students. Modern microscopes coupled to computers enable the presentation of these images in digital form through image databases. However, access to this new technology is restricted to those living in cities and towns with an Information Technology (IT) infrastructure. This study describes the creation of a free Internet histology database composed of high-quality images and also presents an inexpensive way to supply it to a greater number of students through Internet/Intranet connections. Using state-of-the-art scientific instruments, we developed a Web page (http://www2.uerj.br/~micron/atlas/atlasenglish/index.htm) that, in association with a multimedia microscopy laboratory, is intended to help reduce the IT educational gap between developed and underdeveloped regions. Copyright 2003 Wiley-Liss, Inc.
78 FR 42775 - CGI Federal, Inc., and Custom Applications Management; Transfer of Data
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-17
... develop applications, Web sites, Web pages, web-based applications and databases, in accordance with EPA policies and related Federal standards and procedures. The Contractor will provide...
Database resources of the National Center for Biotechnology Information.
Sayers, Eric W; Barrett, Tanya; Benson, Dennis A; Bolton, Evan; Bryant, Stephen H; Canese, Kathi; Chetvernin, Vyacheslav; Church, Deanna M; DiCuccio, Michael; Federhen, Scott; Feolo, Michael; Fingerman, Ian M; Geer, Lewis Y; Helmberg, Wolfgang; Kapustin, Yuri; Landsman, David; Lipman, David J; Lu, Zhiyong; Madden, Thomas L; Madej, Tom; Maglott, Donna R; Marchler-Bauer, Aron; Miller, Vadim; Mizrachi, Ilene; Ostell, James; Panchenko, Anna; Phan, Lon; Pruitt, Kim D; Schuler, Gregory D; Sequeira, Edwin; Sherry, Stephen T; Shumway, Martin; Sirotkin, Karl; Slotta, Douglas; Souvorov, Alexandre; Starchenko, Grigory; Tatusova, Tatiana A; Wagner, Lukas; Wang, Yanli; Wilbur, W John; Yaschenko, Eugene; Ye, Jian
2011-01-01
In addition to maintaining the GenBank® nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides analysis and retrieval resources for the data in GenBank and other biological data made available through the NCBI Web site. NCBI resources include Entrez, the Entrez Programming Utilities, MyNCBI, PubMed, PubMed Central (PMC), Entrez Gene, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Primer-BLAST, COBALT, Electronic PCR, OrfFinder, Splign, ProSplign, RefSeq, UniGene, HomoloGene, ProtEST, dbMHC, dbSNP, dbVar, Epigenomics, Cancer Chromosomes, Entrez Genomes and related tools, the Map Viewer, Model Maker, Evidence Viewer, Trace Archive, Sequence Read Archive, Retroviral Genotyping Tools, HIV-1/Human Protein Interaction Database, Gene Expression Omnibus (GEO), Entrez Probe, GENSAT, Online Mendelian Inheritance in Man (OMIM), Online Mendelian Inheritance in Animals (OMIA), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), IBIS, Biosystems, Peptidome, OMSSA, Protein Clusters and the PubChem suite of small molecule databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at www.ncbi.nlm.nih.gov.
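One of the resources listed above, the Entrez Programming Utilities, is a URL-based API. A minimal sketch of constructing (but not sending) an esearch request follows; the query term is chosen arbitrarily for illustration:

```python
from urllib.parse import urlencode

# Build an Entrez E-utilities esearch request URL. No network request is made here;
# the db/term values are illustrative.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(db: str, term: str, retmax: int = 20) -> str:
    """Return an esearch URL for the given Entrez database and query term."""
    return BASE + "?" + urlencode({"db": db, "term": term, "retmax": retmax})

url = esearch_url("pubmed", "BRCA1[gene]")
print(url)
```

The same pattern (swapping `esearch.fcgi` for `efetch.fcgi` or `esummary.fcgi`) covers the other E-utilities endpoints.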
NASA Technical Reports Server (NTRS)
Daugherty, Colin C.
2010-01-01
International Space Station (ISS) crew and flight controller training documentation is used to aid in training operations. The Generic Simulations References SharePoint (Gen Sim) site is a database used as an aid during flight simulations. The Gen Sim site is used to make individual mission segment timelines, data, and flight information easily accessible to instructors. The Waste and Hygiene Compartment (WHC) training schematic includes simple and complex fluid schematics, as well as overall hardware locations. It is used as a teaching aid during WHC lessons for both ISS crew and flight controllers. ISS flight control documentation is used to support all aspects of ISS mission operations. The Quick Look Database and Consolidated Tool Page are imagery-based references used in real-time to help the Operations Support Officer (OSO) find data faster and improve discussions with the Flight Director and Capsule Communicator (CAPCOM). A Quick Look page was created for the Permanent Multipurpose Module (PMM) by locating photos of the module interior, labeling specific hardware, and organizing them in schematic form to match the layout of the PMM interior. A Tool Page was created for the Maintenance Work Area (MWA) by gathering images, detailed drawings, safety information, procedures, certifications, demonstration videos, and general facts of each MWA component and displaying them in an easily accessible and consistent format. Participation in ISS mechanisms and maintenance lessons, mission simulation On-the-Job Training (OJT), and real-time flight OJT was used as an opportunity to train for day-to-day operations as an OSO, as well as learn how to effectively respond to failures and emergencies during mission simulations and real-time flight operations.
Multimedia explorer: image database, image proxy-server and search-engine.
Frankewitsch, T.; Prokosch, U.
1999-01-01
Multimedia plays a major role in medicine. Databases containing images, movies or other types of multimedia objects are increasing in number, especially on the WWW. However, no good retrieval mechanism or search engine currently exists to efficiently track down such multimedia sources in the vast amount of information provided by the WWW. Moreover, the tools for searching databases are usually not adapted to the properties of images, and HTML pages do not allow complex searches. Establishing more comfortable retrieval therefore involves a higher-level programming platform such as Java. With this platform-independent language it is possible to create extensions to commonly used web browsers. These applets offer a graphical user interface for high-level navigation. We implemented a database using Java objects as the primary storage containers, stored in a Java-controlled Oracle8 database. Navigation depends on a structured vocabulary enhanced by a semantic network. With this approach, multimedia objects can be encapsulated within a logical module for quick data retrieval. PMID:10566463
Zhang, Shihua; Xuan, Hongdong; Zhang, Liang; Fu, Sicong; Wang, Yijun; Yang, Hua; Tai, Yuling; Song, Youhong; Zhang, Jinsong; Ho, Chi-Tang; Li, Shaowen; Wan, Xiaochun
2017-09-01
Tea is one of the most consumed beverages in the world. Considerable research shows exceptional health benefits (e.g. antioxidation, cancer prevention) of tea owing to its various bioactive components. However, the data from these extensively published papers had not been made available in a central database. To lay a foundation for improving the understanding of tea's health functions, we established the TBC2health database, which currently documents 1338 relationships between 497 tea bioactive compounds and 206 diseases (or phenotypes), manually culled from over 300 published articles. Each entry in TBC2health contains comprehensive information about a bioactive relationship that can be accessed in three aspects: (i) compound information, (ii) disease (or phenotype) information and (iii) evidence and reference. Using the curated bioactive relationships, a bipartite network was reconstructed, and the corresponding network (or sub-network) visualization and topological analyses are provided for users. The database has a user-friendly interface for entry browsing, search and download. In addition, TBC2health provides a submission page and several useful tools (e.g. BLAST, molecular docking) to facilitate use of the database. Consequently, TBC2health can serve as a valuable bioinformatics platform for exploring the beneficial effects of tea on human health. TBC2health is freely available at http://camellia.ahau.edu.cn/TBC2health. © The Author 2016. Published by Oxford University Press.
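The compound-disease bipartite network described above can be sketched with plain dictionaries; the edges below are invented placeholders, not curated TBC2health entries:

```python
from collections import defaultdict

# Sketch of a compound-disease bipartite network. These relationships are
# made-up examples, not entries from the TBC2health database.
edges = [
    ("EGCG", "oxidative stress"),
    ("EGCG", "colon cancer"),
    ("theanine", "anxiety"),
    ("caffeine", "fatigue"),
    ("caffeine", "oxidative stress"),
]

compound_degree = defaultdict(int)
disease_degree = defaultdict(int)
for compound, disease in edges:
    compound_degree[compound] += 1
    disease_degree[disease] += 1

# Simple topological analysis: rank compounds by number of linked phenotypes.
ranked = sorted(compound_degree.items(), key=lambda kv: -kv[1])
print(ranked)  # [('EGCG', 2), ('caffeine', 2), ('theanine', 1)]
```

Degree counts on each side of the bipartition are the simplest of the topological measures such a network supports; sub-network extraction amounts to filtering `edges` before the tally.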
Tuberous Sclerosis Complex National Database
2005-10-01
[Scanned-form text garbled in extraction. Recoverable details: annual report (October 2005) for award W81XWH-04-1-0896, Tuberous Sclerosis Complex National Database; PI: Steven P...; treatments tracked include VNS, the ketogenic diet, and AEDs; data fields cover AED dosage reduction and discontinuation, removal of the VNS device, discontinuation of the ketogenic diet, seizure remission, and epilepsy surgery.]
1984-12-01
[Report documentation page garbled in extraction. Recoverable details: prepared for the Air Force Office of Scientific Research under Grant No. AFOSR 82-0322, December 1984; security classification: Unclassified.]
VitisExpDB: a database resource for grape functional genomics.
Doddapaneni, Harshavardhan; Lin, Hong; Walker, M Andrew; Yao, Jiqiang; Civerolo, Edwin L
2008-02-28
The family Vitaceae consists of many different grape species that grow in a range of climatic conditions. In the past few years, several studies have generated functional genomic information on different Vitis species and cultivars, including the European grape vine, Vitis vinifera. Our goal is to develop a comprehensive web data source for Vitaceae. VitisExpDB is an online MySQL/PHP-driven relational database that houses annotated EST and gene expression data for V. vinifera and non-vinifera grape species and varieties. Currently, the database stores approximately 320,000 EST sequences derived from 8 species/hybrids, their annotation (BLAST top match) details and a Gene Ontology-based structured vocabulary. Putative homologs for each EST in other species and varieties, along with information on their percent nucleotide identities, phylogenetic relationships and common primers, can be retrieved. The database also includes information on the probe sequences and annotation features of the high-density 60-mer gene expression chip comprising a non-redundant set of approximately 20,000 ESTs. Finally, the database includes 14 processed global microarray expression profile sets; data from 12 of these have been mapped onto metabolic pathways. A user-friendly web interface with multiple search indices and extensively hyperlinked result features permits efficient data retrieval. Several online bioinformatics tools that interact with the database, along with other sequence analysis tools, have been added. In addition, users can submit their own ESTs to the database. VitisExpDB provides a genomic resource to the grape community for functional analysis of genes in the collection and for grape genome annotation and gene function identification. The database is available through our website http://cropdisease.ars.usda.gov/vitis_at/main-page.htm.
PlantCAZyme: a database for plant carbohydrate-active enzymes
Ekstrom, Alexander; Taujale, Rahil; McGinn, Nathan; Yin, Yanbin
2014-01-01
PlantCAZyme is a database built upon dbCAN (database for automated carbohydrate active enzyme annotation), aiming to provide pre-computed sequence and annotation data of carbohydrate active enzymes (CAZymes) to plant carbohydrate and bioenergy research communities. The current version contains data of 43 790 CAZymes of 159 protein families from 35 plants (including angiosperms, gymnosperms, lycophyte and bryophyte mosses) and chlorophyte algae with fully sequenced genomes. Useful features of the database include: (i) a BLAST server and a HMMER server that allow users to search against our pre-computed sequence data for annotation purpose, (ii) a download page to allow batch downloading data of a specific CAZyme family or species and (iii) protein browse pages to provide an easy access to the most comprehensive sequence and annotation data. Database URL: http://cys.bios.niu.edu/plantcazyme/ PMID:25125445
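The batch-download feature described in (ii) amounts to filtering precomputed records by CAZyme family or species. A minimal sketch, with invented records standing in for real PlantCAZyme data:

```python
# Sketch of family/species batch retrieval. The records are invented examples;
# real PlantCAZyme data would come from the site's download page.
records = [
    {"id": "P1", "family": "GT2", "species": "Arabidopsis thaliana"},
    {"id": "P2", "family": "GH9", "species": "Arabidopsis thaliana"},
    {"id": "P3", "family": "GT2", "species": "Zea mays"},
]

def batch(records, family=None, species=None):
    """Filter CAZyme records by family and/or species, as a batch download would."""
    return [r for r in records
            if (family is None or r["family"] == family)
            and (species is None or r["species"] == species)]

print([r["id"] for r in batch(records, family="GT2")])  # ['P1', 'P3']
```

Either filter may be omitted, matching the download page's choice of "a specific CAZyme family or species".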
Noda, Emi; Mifune, Taka; Nakayama, Takeo
2013-01-01
Our aim was to characterize information on diabetes prevention appearing in Japanese general health magazines and to examine the agreement of the content with that in clinical practice guidelines for the treatment of diabetes in Japan. We used the Japanese magazine databases provided by the Media Research Center and selected magazines with large print runs published in 2006. Two medical professionals independently conducted content analysis based on items in the diabetes prevention guidelines. The number of pages for each item and agreement with the information in the guidelines were determined. We found 63 issues of magazines amounting to 8,982 pages; 484 pages included diabetes prevention related content. For the 23 items included in the diabetes prevention guidelines, overall agreement of the information printed in the magazines with that in the guidelines was 64.5% (471 out of 730). The number of times these items were referred to in the magazines varied widely, from 247 times for food items to 0 times for items on screening for pregnancy-induced diabetes, dyslipidemia, and hypertension. Among the 20 items that were referred to at least once, 18 showed more than 90% agreement with the guidelines. However, there was poor agreement for information on vegetable oil (2/14, 14%) and for specific foods (5/247, 2%). In the fatty acids category, "fat" was not mentioned in the guidelines; however, the term frequently appeared in magazines. "Uncertainty" was never mentioned in magazines for specific food items. The diabetes prevention related content in the health magazines differed from that defined in clinical practice guidelines. Most information in the magazines agreed with the guidelines; however, some items were referred to inappropriately. To disseminate correct information to the public on diabetes prevention, health professionals and the media must collaborate.
BEAUTY-X: enhanced BLAST searches for DNA queries.
Worley, K C; Culpepper, P; Wiese, B A; Smith, R F
1998-01-01
BEAUTY (BLAST Enhanced Alignment Utility) is an enhanced version of the BLAST database search tool that facilitates identification of the functions of matched sequences. Three recent improvements to the BEAUTY program described here make the enhanced output (1) available for DNA queries, (2) available for searches of any protein database, and (3) more up-to-date, with periodic updates of the domain information. BEAUTY searches of the NCBI and EMBL non-redundant protein sequence databases are available from the BCM Search Launcher Web pages (http://gc.bcm.tmc.edu:8088/search-launcher/launcher.html). BEAUTY Post-Processing of submitted search results is available using the BCM Search Launcher Batch Client (version 2.6) (ftp://gc.bcm.tmc.edu/pub/software/search-launcher/). Example figures are available at http://dot.bcm.tmc.edu:9331/papers/beautypp.html. Contact: (kworley,culpep)@bcm.tmc.edu
NASA Astrophysics Data System (ADS)
Firestone, Richard B.; Chu, S. Y. Frank; Ekstrom, L. Peter; Wu, Shiu-Chin; Singh, Balraj
1997-10-01
The Isotopes Project is developing Internet home pages to provide data for radioactive decay, nuclear structure, nuclear astrophysics, spontaneous fission, thermal neutron capture, and atomic masses. These home pages can be accessed from the Table of Isotopes home page at http://isotopes.lbl.gov/isotopes/toi.html. Data from the Evaluated Nuclear Structure Data File (ENSDF) are now available on the WWW in Nuclear Data Sheets-style tables, complete with comments and hypertext-linked footnotes. Bibliographic information from the Nuclear Science References (NSR) file can be searched on the WWW by combinations of author, A, Z, reaction, and various keywords. Decay gamma-ray data from several databases can be searched by energy. The Table of Superdeformed Nuclear Bands and Fission Isomers is continuously updated. Reaction rates from Hoffman and Woosley and from Thielemann, fission yields from England and Rider, thermal neutron cross-sections from BNL-325, atomic masses from Audi, and skeleton scheme drawings and nuclear charts from the Table of Isotopes are among the information available through these websites. The nuclear data home pages are accessed by over 3500 different users each month.
The ISO Data Archive and Interoperability with Other Archives
NASA Astrophysics Data System (ADS)
Salama, Alberto; Arviset, Christophe; Hernández, José; Dowson, John; Osuna, Pedro
The ESA's Infrared Space Observatory (ISO), an unprecedented observatory for infrared astronomy launched in November 1995, successfully made nearly 30,000 scientific observations in its 2.5-year mission. The ISO data can be retrieved from the ISO Data Archive (IDA), which comprises about 150,000 observations, including parallel and serendipity mode observations. A user-friendly Java interface permits queries to the database and data retrieval. The interface currently offers a wide variety of links to other archives, such as name resolution with NED and SIMBAD, access to electronic articles from ADS and CDS/VizieR, and access to IRAS data. In the past year, development has focused on improving the IDA's interoperability with other astronomical archives, either by accessing other relevant archives or by providing direct access to the ISO data for external services. A mechanism of information transfer has been developed, allowing direct query of the IDA via a Java Server Page, returning quick-look ISO images and relevant, observation-specific information embedded in an HTML page. This method has been used to link from the CDS/VizieR Data Centre and ADS, and work with IPAC to allow access to the ISO Archive from IRSA, including display capabilities of the observed sky regions onto other mission images, is in progress. Prospects for further links to and from other archives and databases are also addressed.
The BioCyc collection of microbial genomes and metabolic pathways.
Karp, Peter D; Billington, Richard; Caspi, Ron; Fulcher, Carol A; Latendresse, Mario; Kothari, Anamika; Keseler, Ingrid M; Krummenacker, Markus; Midford, Peter E; Ong, Quang; Ong, Wai Kit; Paley, Suzanne M; Subhraveti, Pallavi
2017-08-17
BioCyc.org is a microbial genome Web portal that combines thousands of genomes with additional information inferred by computer programs, imported from other databases and curated from the biomedical literature by biologist curators. BioCyc also provides an extensive range of query tools, visualization services and analysis software. Recent advances in BioCyc include an expansion in the content of BioCyc in terms of both the number of genomes and the types of information available for each genome; an expansion in the amount of curated content within BioCyc; and new developments in the BioCyc software tools including redesigned gene/protein pages and metabolite pages; new search tools; a new sequence-alignment tool; a new tool for visualizing groups of related metabolic pathways; and a facility called SmartTables, which enables biologists to perform analyses that previously would have required a programmer's assistance. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A searchable database for the genome of Phomopsis longicolla (isolate MSPL 10-6).
Darwish, Omar; Li, Shuxian; May, Zane; Matthews, Benjamin; Alkharouf, Nadim W
2016-01-01
Phomopsis longicolla (syn. Diaporthe longicolla) is an important seed-borne fungal pathogen that primarily causes Phomopsis seed decay (PSD) in most soybean production areas worldwide. This disease severely decreases soybean seed quality by reducing seed viability and oil quality, altering seed composition, and increasing the frequencies of moldy and/or split beans. To facilitate investigation of the genetic basis of fungal virulence factors and to understand the mechanism of disease development, we designed and developed a database for P. longicolla isolate MSPL 10-6 that contains information about the genome assemblies (contigs), gene models, gene descriptions and GO functional ontologies. A web-based front end to the database was built using ASP.NET, which allows researchers to search and mine the genome of this important fungus. This database represents the first reported genome database for a seed-borne fungal pathogen in the Diaporthe-Phomopsis complex. The database will also be a valuable resource for the research and agricultural communities and will aid in the development of new control strategies for this pathogen. http://bioinformatics.towson.edu/Phomopsis_longicolla/HomePage.aspx.
Database resources of the National Center for Biotechnology Information.
Wheeler, David L.; Church, Deanna M.; Federhen, Scott; Lash, Alex E.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Sequeira, Edwin; Tatusova, Tatiana A.; Wagner, Lukas
2003-01-01
In addition to maintaining the GenBank(R) nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources for the data in GenBank and other biological data made available through NCBI's Web site. NCBI resources include Entrez, PubMed, PubMed Central (PMC), LocusLink, the NCBI Taxonomy Browser, BLAST, BLAST Link (BLink), Electronic PCR (e-PCR), Open Reading Frame (ORF) Finder, Reference Sequence (RefSeq), UniGene, HomoloGene, ProtEST, the Database of Single Nucleotide Polymorphisms (dbSNP), the Human/Mouse Homology Map, the Cancer Chromosome Aberration Project (CCAP), Entrez Genomes and related tools, the Map Viewer, Model Maker (MM), Evidence Viewer (EV), the Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB), the Conserved Domain Database (CDD), and the Conserved Domain Architecture Retrieval Tool (CDART). Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at: http://www.ncbi.nlm.nih.gov. PMID:12519941
Health information on internet: quality, importance, and popularity of persian health websites.
Samadbeik, Mahnaz; Ahmadi, Maryam; Mohammadi, Ali; Mohseni Saravi, Beniamin
2014-04-01
The Internet has provided great opportunities for disseminating both accurate and inaccurate health information, so the quality of that information is a widespread concern. Despite the substantial growth in the number of Persian health websites and in the proportion of internet-using patients, little is known about the quality of Persian medical and health websites. The current study aimed first to assess the quality, popularity and importance of websites providing Persian health-related information, and second to evaluate the correlation of popularity and importance rankings with quality scores. The sample websites were identified by entering health-related keywords into the four search engines most popular with Iranian users, based on the Alexa ranking at the time of the study. Each selected website was assessed using three instruments: the Bomba and Land Index, Google PageRank and the Alexa ranking. The characteristics of the evaluated sites (ownership structure, database, scope and objective) had no effect on the Alexa traffic global rank, Alexa traffic rank in Iran, Google PageRank or Bomba total score. Most websites (78.9 percent, n = 56) fell into the moderate category (8 ≤ x ≤ 11.99) based on their quality levels. There was no statistically significant association between Google PageRank and the Bomba index variables or the Alexa traffic global rank (P > 0.05). The Persian health websites had better Bomba quality scores on availability and usability guidelines than on other guidelines. Google PageRank did not properly reflect the real quality of the evaluated websites, and Internet users seeking online health information should not rely on it for any kind of prejudgment regarding Persian health websites. However, they can use the Iran Alexa rank as a primary filtering tool for these websites.
Therefore, designing search engines dedicated to exploring accredited Persian health-related websites could be an effective way to access high-quality Persian health websites.
Kim, Woo-Yeon; Kang, Sungsoo; Kim, Byoung-Chul; Oh, Jeehyun; Cho, Seongwoong; Bhak, Jong; Choi, Jong-Soon
2008-01-01
Cyanobacteria are model organisms for studying photosynthesis, carbon and nitrogen assimilation, evolution of plant plastids, and adaptability to environmental stresses. Despite many studies on cyanobacteria, there is no web-based database of their regulatory and signaling protein-protein interaction networks to date. We report a database and website SynechoNET that provides predicted protein-protein interactions. SynechoNET shows cyanobacterial domain-domain interactions as well as their protein-level interactions using the model cyanobacterium, Synechocystis sp. PCC 6803. It predicts the protein-protein interactions using public interaction databases that contain mutually complementary and redundant data. Furthermore, SynechoNET provides information on transmembrane topology, signal peptide, and domain structure in order to support the analysis of regulatory membrane proteins. Such biological information can be queried and visualized in user-friendly web interfaces that include the interactive network viewer and search pages by keyword and functional category. SynechoNET is an integrated protein-protein interaction database designed to analyze regulatory membrane proteins in cyanobacteria. It provides a platform for biologists to extend the genomic data of cyanobacteria by predicting interaction partners, membrane association, and membrane topology of Synechocystis proteins. SynechoNET is freely available at http://synechocystis.org/ or directly at http://bioportal.kobic.kr/SynechoNET/.
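The domain-based interaction prediction described above can be sketched minimally as follows; the protein names, domain annotations and domain-domain interaction used here are hypothetical illustrations, not data from SynechoNET:

```python
# Minimal sketch of domain-mediated protein-protein interaction prediction:
# two proteins are predicted to interact if any pair of their domains is a
# known domain-domain interaction (DDI), as compiled from public databases.
from itertools import product

# Hypothetical domain annotations per protein (not real Synechocystis data)
protein_domains = {
    "slr0001": {"HisKA", "HATPase_c"},
    "sll0002": {"Response_reg"},
    "sll0003": {"PAS"},
}

# Hypothetical known domain-domain interactions (unordered pairs)
ddi = {frozenset(("HisKA", "Response_reg"))}

def predict_interaction(p1, p2):
    """Predict an interaction if any domain pair of p1/p2 is a known DDI."""
    return any(
        frozenset((d1, d2)) in ddi
        for d1, d2 in product(protein_domains[p1], protein_domains[p2])
    )

print(predict_interaction("slr0001", "sll0002"))  # True: HisKA-Response_reg
print(predict_interaction("slr0001", "sll0003"))  # False: no known DDI
```

Real predictors additionally weight evidence from multiple source databases; this sketch shows only the core domain-lookup step.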
Addition of a breeding database in the Genome Database for Rosaceae
Evans, Kate; Jung, Sook; Lee, Taein; Brutcher, Lisa; Cho, Ilhyung; Peace, Cameron; Main, Dorrie
2013-01-01
Breeding programs produce large datasets that require efficient management systems to keep track of performance, pedigree, geographical and image-based data. With the development of DNA-based screening technologies, more breeding programs perform genotyping in addition to phenotyping for performance evaluation. The integration of breeding data with other genomic and genetic data is instrumental for the refinement of marker-assisted breeding tools, enhances genetic understanding of important crop traits and maximizes access and utility by crop breeders and allied scientists. New infrastructure in the Genome Database for Rosaceae (GDR) was designed and implemented to enable secure and efficient storage, management and analysis of large datasets from the Washington State University apple breeding program, and was subsequently expanded to fit datasets from other Rosaceae breeders. The infrastructure was built using the software Chado and Drupal, making use of the Natural Diversity module to accommodate large-scale phenotypic and genotypic data. Breeders can search accessions within the GDR to identify individuals with specific trait combinations. Results from Search by Parentage list individuals with parents in common, and results from Individual Variety pages link to all data available on each chosen individual, including pedigree, phenotypic and genotypic information. Genotypic data are searchable by markers and alleles; results are linked to other pages in the GDR to enable the user to access tools such as GBrowse and CMap. This breeding database provides users with the opportunity to search datasets in a fully targeted manner, retrieve and compare performance data from multiple selections, years and sites, and output the data needed for variety release publications and patent applications. The breeding database facilitates efficient program management.
Storing publicly available breeding data in a database together with genomic and genetic data will further accelerate the cross-utilization of diverse data types by researchers from various disciplines. Database URL: http://www.rosaceae.org/breeders_toolbox PMID:24247530
Design and implementation of a database for Brucella melitensis genome annotation.
De Hertogh, Benoît; Lahlimi, Leïla; Lambert, Christophe; Letesson, Jean-Jacques; Depiereux, Eric
2008-03-18
The genome sequences of three Brucella biovars and of some species close to Brucella sp. have become available, enabling new relationship analyses. Moreover, the automatic genome annotation of the pathogenic bacterium Brucella melitensis has been manually corrected by a consortium of experts, leading to 899 modifications of start site predictions among the 3198 open reading frames (ORFs) examined. This new annotation, coupled with the results of automatic annotation tools applied to the complete B. melitensis genome sequence (including BLASTs against 9 genomes close to Brucella), provides numerous data sets related to predicted functions, biochemical properties and phylogenetic comparisons. To make these results available, alphaPAGe, a functional auto-updatable database of the corrected genome sequence of B. melitensis, has been built using the entity-relationship (ER) approach and a multi-purpose database structure. A friendly graphical user interface has been designed, and users can retrieve different kinds of information through three levels of queries: (1) the basic search uses classical keywords or sequence identifiers; (2) the advanced search engine allows the user to combine (using logical operators) numerous criteria: (a) keywords (textual comparison) related to the pCDS's function, family domains and cellular localization; (b) physico-chemical characteristics (numerical comparison) such as isoelectric point or molecular weight, and structural criteria such as the nucleic acid length or the number of transmembrane helices (TMH); (c) similarity scores with Escherichia coli and 10 species phylogenetically close to B. melitensis; (3) complex queries can be performed using an SQL field, which allows any query respecting the database's structure. The database is publicly available through a Web server at the following URL: http://www.fundp.ac.be/urbm/bioinfo/aPAGe.
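The three query levels can be illustrated with a small in-memory SQL sketch; the table, columns and records below are assumptions for illustration, not alphaPAGe's actual schema:

```python
import sqlite3

# Illustrative schema and data, not the real alphaPAGe database structure
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE pcds (
    name TEXT, function TEXT, localization TEXT,
    isoelectric_point REAL, molecular_weight REAL, tmh_count INTEGER)""")
con.executemany("INSERT INTO pcds VALUES (?,?,?,?,?,?)", [
    ("BMEI0001", "ABC transporter permease", "membrane", 9.1, 28000.0, 6),
    ("BMEI0002", "DNA polymerase III", "cytoplasm", 5.2, 130000.0, 0),
])

# Advanced-search style query: combine a keyword criterion (textual), a
# physico-chemical criterion (numerical) and a structural criterion with AND
rows = con.execute("""
    SELECT name FROM pcds
    WHERE function LIKE '%transporter%'
      AND isoelectric_point > 7.0
      AND tmh_count >= 1
""").fetchall()
print(rows)  # [('BMEI0001',)]
```

The basic search corresponds to a single `LIKE` clause on keywords; the free SQL field of level (3) would accept the `SELECT` statement directly.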
Unique DNA database has helped advance scientific discoveries worldwide. Since its origin 25 years ago, the database of nucleic acid sequences known as GenBank has ...
Portales-Casamar, Elodie; Arenillas, David; Lim, Jonathan; Swanson, Magdalena I.; Jiang, Steven; McCallum, Anthony; Kirov, Stefan; Wasserman, Wyeth W.
2009-01-01
The PAZAR database unites independently created and maintained data collections of transcription factor and regulatory sequence annotation. The flexible PAZAR schema permits the representation of diverse information derived from experiments ranging from biochemical protein–DNA binding to cellular reporter gene assays. Data collections can be made available to the public, or restricted to specific system users. The data ‘boutiques’ within the shopping-mall-inspired system facilitate the analysis of genomics data and the creation of predictive models of gene regulation. Since its initial release, PAZAR has grown in terms of data, features and through the addition of an associated package of software tools called the ORCA toolkit (ORCAtk). ORCAtk allows users to rapidly develop analyses based on the information stored in the PAZAR system. PAZAR is available at http://www.pazar.info. ORCAtk can be accessed through convenient buttons located in the PAZAR pages or via our website at http://www.cisreg.ca/ORCAtk. PMID:18971253
Gupta, Amarnath; Bug, William; Marenco, Luis; Qian, Xufei; Condit, Christopher; Rangarajan, Arun; Müller, Hans Michael; Miller, Perry L.; Sanders, Brian; Grethe, Jeffrey S.; Astakhov, Vadim; Shepherd, Gordon; Sternberg, Paul W.; Martone, Maryann E.
2009-01-01
The overarching goal of the NIF (Neuroscience Information Framework) project is to be a one-stop-shop for Neuroscience. This paper provides a technical overview of how the system is designed. The technical goal of the first version of the NIF system was to develop an information system that a neuroscientist can use to locate relevant information from a wide variety of information sources by simple keyword queries. Although the user would provide only keywords to retrieve information, the NIF system is designed to treat them as concepts whose meanings are interpreted by the system. Thus, a search for a term should find records containing synonyms of the term. The system is targeted to find information from web pages, publications, databases, web sites built upon databases, XML documents and any other modality in which such information may be published. We have designed a system to achieve this functionality. A central element in the system is an ontology called NIFSTD (for NIF Standard) constructed by amalgamating a number of known and newly developed ontologies. NIFSTD is used by our ontology management module, called OntoQuest, to perform ontology-based search over data sources. The NIF architecture currently provides three different mechanisms for searching heterogeneous data sources including relational databases, web sites, XML documents and full text of publications. Version 1.0 of the NIF system is currently in beta test and may be accessed through http://nif.nih.gov. PMID:18958629
Toward a standard reference database for computer-aided mammography
NASA Astrophysics Data System (ADS)
Oliveira, Júlia E. E.; Gueld, Mark O.; de A. Araújo, Arnaldo; Ott, Bastian; Deserno, Thomas M.
2008-03-01
The lack of mammography databases containing a large number of codified images with identified characteristics such as pathology, breast tissue type and abnormality hampers the development of robust systems for computer-aided diagnosis. Integrated into the Image Retrieval in Medical Applications (IRMA) project, we present an available mammography database developed from the union of: The Mammographic Image Analysis Society Digital Mammogram Database (MIAS), The Digital Database for Screening Mammography (DDSM), the Lawrence Livermore National Laboratory (LLNL), and routine images from the Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen. Using the IRMA code, standardized coding of tissue type, tumor staging, and lesion description was developed according to the American College of Radiology (ACR) tissue codes and the ACR breast imaging reporting and data system (BI-RADS). The import was automated with scripts for image download, file format conversion, and browsing of file names, web pages and information files. Disregarding resolution, this resulted in a total of 10,509 reference images, of which 6,767 are associated with an IRMA contour information feature file. In accordance with the respective license agreements, the database will be made freely available for research purposes, and may be used for image-based evaluation campaigns such as the Cross Language Evaluation Forum (CLEF). We have also shown that it can be extended easily with further cases imported from a picture archiving and communication system (PACS).
ACToR: Aggregated Computational Toxicology Resource (T) ...
The EPA Aggregated Computational Toxicology Resource (ACToR) is a set of databases compiling information on chemicals in the environment from a large number of public and in-house EPA sources. ACToR has 3 main goals: (1) to serve as a repository of public toxicology information on chemicals of interest to the EPA, and in particular to be a central source for the testing data on all chemicals regulated by all EPA programs; (2) to be a source of in vivo training data sets for building in vitro to in vivo computational models; (3) to serve as a central source of chemical structure and identity information for the ToxCastTM and Tox21 programs. There are 4 main databases, all linked through a common set of chemical information and a common structure linking chemicals to assay data: the public ACToR system (available at http://actor.epa.gov); the ToxMiner database, holding ToxCast and Tox21 data along with results from statistical analyses of these data; the Tox21 chemical repository, which manages the ordering and sample tracking process for the larger Tox21 project; and the public version of ToxRefDB. The public ACToR system contains information on ~500K compounds with toxicology, exposure and chemical property information from >400 public sources. The web site is visited by ~1,000 unique users per month and generates ~1,000 page requests per day on average. The databases are built on open source technology, which has allowed us to export them to a number of col
Comparison of PubMed, Scopus, Web of Science, and Google Scholar: strengths and weaknesses.
Falagas, Matthew E; Pitsouni, Eleni I; Malietzis, George A; Pappas, Georgios
2008-02-01
The evolution of the electronic age has led to the development of numerous medical databases on the World Wide Web, offering search facilities on a particular subject and the ability to perform citation analysis. We compared the content coverage and practical utility of PubMed, Scopus, Web of Science, and Google Scholar. The official Web pages of the databases were used to extract information on the range of journals covered, search facilities and restrictions, and update frequency. We used the example of a keyword search to evaluate the usefulness of these databases in biomedical information retrieval and a specific published article to evaluate their utility in performing citation analysis. All databases were practical in use and offered numerous search facilities. PubMed and Google Scholar are accessed for free. The keyword search with PubMed offers optimal update frequency and includes online early articles; other databases can rate articles by number of citations, as an index of importance. For citation analysis, Scopus offers about 20% more coverage than Web of Science, whereas Google Scholar offers results of inconsistent accuracy. PubMed remains an optimal tool in biomedical electronic research. Scopus covers a wider journal range, of help both in keyword searching and citation analysis, but it is currently limited to recent articles (published after 1995) compared with Web of Science. Google Scholar, as for the Web in general, can help in the retrieval of even the most obscure information but its use is marred by inadequate, less often updated, citation information.
DMTB: the magnetotactic bacteria database
NASA Astrophysics Data System (ADS)
Pan, Y.; Lin, W.
2012-12-01
Magnetotactic bacteria (MTB) are of interest in biogeomagnetism, rock magnetism, microbiology, biomineralization, and advanced magnetic materials because of their ability to synthesize highly ordered intracellular nano-sized magnetic minerals, magnetite or greigite. Great strides in MTB studies have been made in the past few decades. More than 600 articles concerning MTB have been published. These rapidly growing data are stimulating cross-disciplinary studies in such fields as biogeomagnetism. We have compiled the first online database for MTB, the Database of Magnetotactic Bacteria (DMTB, http://database.biomnsl.com). It contains useful information on 16S rRNA gene sequences, oligonucleotides, and magnetic properties of MTB, and corresponding ecological metadata of sampling sites. The 16S rRNA gene sequences are collected from the GenBank database, while all other data are collected from the scientific literature. Rock magnetic properties for both uncultivated and cultivated MTB species are also included. In the DMTB database, data are accessible through four main interfaces: Site Sort, Phylo Sort, Oligonucleotides, and Magnetic Properties. References in each entry serve as links to specific pages within public databases. The online comprehensive DMTB will provide a very useful data resource for researchers from various disciplines, e.g., microbiology, rock magnetism and paleomagnetism, biogeomagnetism, magnetic material sciences and others.
Code of Federal Regulations, 2011 CFR
2011-10-01
... listed in the Department of Veterans Affairs' (VA) Veterans Benefits Administration (VBA) database of veterans and family members. To be eligible for inclusion in the VetBiz.gov VIP database, the following... Pages (VIP) database at http://www.vetbiz.gov. In addition, some businesses may be owned and controlled...
Code of Federal Regulations, 2013 CFR
2013-10-01
... listed in the Department of Veterans Affairs' (VA) Veterans Benefits Administration (VBA) database of veterans and family members. To be eligible for inclusion in the VetBiz.gov VIP database, the following... Pages (VIP) database at http://www.vetbiz.gov. In addition, some businesses may be owned and controlled...
Code of Federal Regulations, 2014 CFR
2014-10-01
... listed in the Department of Veterans Affairs' (VA) Veterans Benefits Administration (VBA) database of veterans and family members. To be eligible for inclusion in the VetBiz.gov VIP database, the following... Pages (VIP) database at http://www.vetbiz.gov. In addition, some businesses may be owned and controlled...
Code of Federal Regulations, 2012 CFR
2012-10-01
... listed in the Department of Veterans Affairs' (VA) Veterans Benefits Administration (VBA) database of veterans and family members. To be eligible for inclusion in the VetBiz.gov VIP database, the following... Pages (VIP) database at http://www.vetbiz.gov. In addition, some businesses may be owned and controlled...
Adaptation of a Knowledge-Based Decision-Support System in the Tactical Environment.
1981-12-01
Keywords: Artificial Intelligence; Decision-Support Systems; Tactical Decision-making; Knowledge-based Decision-support. ...tactical information to assist tactical commanders in making decisions. The system, TAC*, for "Tactical Adaptable Consultant," incorporates a database
NREL: U.S. Life Cycle Inventory Database Home Page
U.S. Life Cycle Inventory Database. NREL and its partners created the U.S. Life Cycle Inventory (LCI) Database to help life cycle assessment (LCA) practitioners answer questions about environmental
CCDST: A free Canadian climate data scraping tool
NASA Astrophysics Data System (ADS)
Bonifacio, Charmaine; Barchyn, Thomas E.; Hugenholtz, Chris H.; Kienzle, Stefan W.
2015-02-01
In this paper we present a new software tool that automatically fetches, downloads and consolidates climate data from a Web database where the data are spread across multiple Web pages. The tool is called the Canadian Climate Data Scraping Tool (CCDST) and was developed to enhance access to, and simplify analysis of, climate data from Canada's National Climate Data and Information Archive (NCDIA). The CCDST deconstructs the URL for a particular climate station in the NCDIA and then iteratively modifies the date parameters to download large volumes of data, remove individual file headers, and merge the data files into one output file. This automated sequence enhances access to climate data by substantially reducing the time needed to manually download data from multiple Web pages. As an example, we present a case study of the temporal dynamics of blowing snow events in which the tool yielded a time saving of ~3.1 weeks. Without the CCDST, the time involved in manually downloading climate data limits access and restrains researchers and students from exploring climate trends. The tool is coded as a Microsoft Excel macro and is available to researchers and students for free. The main concept and structure of the tool can be modified for other Web databases hosting geophysical data.
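The URL-deconstruction approach can be sketched as follows; since the abstract does not specify the NCDIA URL format, the endpoint and parameter names below are hypothetical:

```python
# Sketch of automated climate-data scraping across multiple pages:
# rewrite the date parameters in a station URL, fetch each month,
# strip the per-file header, and merge everything into one dataset.
from urllib.parse import urlencode

BASE = "https://climate.example.gov/data"  # hypothetical endpoint

def month_urls(station_id, start_year, end_year):
    """Yield one download URL per month by varying the date parameters."""
    for year in range(start_year, end_year + 1):
        for month in range(1, 13):
            query = urlencode({"stationID": station_id,
                               "Year": year, "Month": month,
                               "format": "csv"})
            yield f"{BASE}?{query}"

def merge_csv(files):
    """Keep the header of the first file only; append data rows of the rest."""
    merged = []
    for i, text in enumerate(files):
        lines = text.splitlines()
        merged.extend(lines if i == 0 else lines[1:])
    return "\n".join(merged)

urls = list(month_urls(2205, 2010, 2011))
print(len(urls))  # 24 monthly pages for two years
```

Fetching each URL (e.g. with `urllib.request`) and passing the responses to `merge_csv` yields the single consolidated output file the tool produces.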
A web-based approach for electrocardiogram monitoring in the home.
Magrabi, F; Lovell, N H; Celler, B G
1999-05-01
A Web-based electrocardiogram (ECG) monitoring service, in which a longitudinal clinical record is used for the management of patients, is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components, including the Web server, Web page, the specialised client-side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve clinical data from, and submit it to, the server. An intelligent software agent on the server is activated whenever new ECG data are sent from the home. The agent compares historical data with newly acquired data. Using this method, an optimum patient-care strategy can be evaluated, and a summarised report, along with reminders and suggestions for action, is sent to the doctor and patient by email.
Space physics analysis network node directory (The Yellow Pages): Fourth edition
NASA Technical Reports Server (NTRS)
Peters, David J.; Sisson, Patricia L.; Green, James L.; Thomas, Valerie L.
1989-01-01
The Space Physics Analysis Network (SPAN) is a component of the global DECnet Internet, which has over 17,000 host computers. The growth of SPAN from its implementation in 1981 to its present size of well over 2,500 registered SPAN host computers has created a need for users to acquire timely information about the network through a central source. The SPAN Network Information Center (SPAN-NIC), an online facility managed by the National Space Science Data Center (NSSDC), was developed to meet this need for SPAN-wide information. The remote node descriptive information in this document is not currently contained in the SPAN-NIC database, but will be incorporated in the near future. Access to this information is also available to non-DECnet users over a variety of networks such as Telenet, the NASA Packet Switched System (NPSS), and the TCP/IP Internet. This publication serves as the Yellow Pages for SPAN node information. The document also provides key information concerning other computer networks connected to SPAN, nodes associated with each SPAN routing center, science discipline nodes, contacts for primary SPAN nodes, and SPAN reference information. A section on DECnet internetworking discusses SPAN connections with other wide-area DECnet networks (many with thousands of nodes each). Another section lists node names and their disciplines, countries, and institutions in the SPAN Network Information Center Online Data Base System. All remote sites connected to US-SPAN and European-SPAN (E-SPAN) are indexed. Also provided is information on the SPAN tail circuits, i.e., those remote nodes connected directly to a SPAN routing center, which is the local point of contact for resolving SPAN-related problems. Reference material is included for those who wish to know more about SPAN. Because of the rapid growth of SPAN, the SPAN Yellow Pages is reissued periodically.
Curation accuracy of model organism databases
Keseler, Ingrid M.; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y.; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C.; Mladinich, Katherine M.; Chow, Edmond D.; Sherlock, Gavin; Karp, Peter D.
2014-01-01
Manual extraction of information from the biomedical literature, or biocuration, is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D.-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org// PMID:24923819
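The reported overall error rate follows directly from the stated counts; a quick check:

```python
# Overall biocuration error rate from the counts reported in the abstract:
# 10 erroneous assertions among 633 validated facts.
errors, facts = 10, 633
rate = 100 * errors / facts
print(f"{rate:.2f}%")  # 1.58%
```

The per-database rates (1.82% and 1.40%) are computed the same way from the per-database counts, which the abstract does not break out.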
The COG database: new developments in phylogenetic classification of proteins from complete genomes
Tatusov, Roman L.; Natale, Darren A.; Garkavtsev, Igor V.; Tatusova, Tatiana A.; Shankavaram, Uma T.; Rao, Bachoti S.; Kiryutin, Boris; Galperin, Michael Y.; Fedorova, Natalie D.; Koonin, Eugene V.
2001-01-01
The database of Clusters of Orthologous Groups of proteins (COGs), which represents an attempt on a phylogenetic classification of the proteins encoded in complete genomes, currently consists of 2791 COGs including 45 350 proteins from 30 genomes of bacteria, archaea and the yeast Saccharomyces cerevisiae (http://www.ncbi.nlm.nih.gov/COG). In addition, a supplement to the COGs is available, in which proteins encoded in the genomes of two multicellular eukaryotes, the nematode Caenorhabditis elegans and the fruit fly Drosophila melanogaster, and shared with bacteria and/or archaea were included. The new features added to the COG database include information pages with structural and functional details on each COG and literature references, improvements of the COGNITOR program that is used to fit new proteins into the COGs, and classification of genomes and COGs constructed by using principal component analysis. PMID:11125040
Performance evaluation of redundant disk array support for transaction recovery
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.
1991-01-01
Redundant disk arrays provide a way of achieving rapid recovery from media failures with a relatively low storage cost for large scale data systems requiring high availability. Here, we propose a method for using redundant disk arrays to support rapid recovery from system crashes and transaction aborts in addition to their role in providing media failure recovery. A twin page scheme is used to store the parity information in the array so that the time for transaction commit processing is not degraded. Using an analytical model, we show that the proposed method achieves a significant increase in the throughput of database systems using redundant disk arrays by reducing the number of recovery operations needed to maintain the consistency of the database.
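The media-recovery role of array parity can be illustrated with a minimal XOR sketch (block contents are arbitrary example data; this illustrates ordinary parity recovery, not the paper's twin-page commit scheme itself):

```python
# RAID-style parity: the parity block is the XOR of the data blocks,
# so any single lost block can be rebuilt from the surviving blocks.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# Simulate losing data[1] and rebuilding it from the survivors plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])  # True
```

The twin-page idea keeps two copies of each parity page so that a crash during a parity update never leaves the array without a consistent copy; the XOR relation shown here is what each consistent copy must satisfy.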
77 FR 26160 - Modification of VOR Federal Airway V-14; Missouri
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-03
... aeronautical database, matches the depiction on the associated charts, and to ensure the safety and efficiency... in the FAA's aeronautical database or the charted depiction of the airway. When V-14 was amended in... description in error. The FAA aeronautical database retained the ...
Fourment, Mathieu; Gibbs, Mark J
2008-02-05
Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically.
COM1/348: Design and Implementation of a Portal for the Market of the Medical Equipment (MEDICOM)
Palamas, S; Vlachos, I; Panou-Diamandi, O; Marinos, G; Kalivas, D; Zeelenberg, C; Nimwegen, C; Koutsouris, D
1999-01-01
Introduction The MEDICOM system provides the electronic means for medical equipment manufacturers to communicate online with their customers, supporting the purchasing process and post-market surveillance. The MEDICOM service will be provided over the Internet by the MEDICOM Portal and by a set of distributed subsystems dedicated to handling structured information related to medical devices. There are three kinds of subsystems: the Hypermedia Medical Catalogue (HMC), the Virtual Medical Exhibition (VME), which contains information in the form of virtual models, and the Post Market Surveillance system (PMS). The Universal Medical Devices Nomenclature System (UMDNS) is used to register all products. This work was partially funded by the ESPRIT Project 25289 (MEDICOM). Methods The Portal provides the end-user interface, acts as the yellow pages for finding both products and providers by linking to the providers' servers, implements system management, and supports subsystem database compatibility. The Portal hosts a database system composed of two parts: (a) the Common Database, which describes a set of encoded parameters (such as supported languages, geographic regions, and UMDNS codes) common to all subsystems, and (b) the Short Description Database, which contains summarised descriptions of medical devices, including a text description, the manufacturer's codes, the UMDNS code, attribute values, and links to the corresponding HTML pages of the HMC, VME and PMS servers. The Portal provides the MEDICOM user interface, including services such as end-user profiling and registration, end-user query forms, creation and hosting of newsgroups, links to online libraries, end-user subscription to manufacturers' mailing lists, online information about the MEDICOM system, and special messages or advertisements from manufacturers. Results Platform independence and interoperability characterise the system design.
A general-purpose RDBMS is used for the implementation of the databases. The end-user interface is implemented using HTML and Java applets, while the subsystem administration applications are developed in Java. The JDBC interface is used to provide database access to these applications. Communication between subsystems is implemented using CORBA objects, and Java servlets are used on subsystem servers for the activation of remote operations. Discussion In the second half of 1999, the MEDICOM Project will enter the phase of evaluation and pilot operation. The expected benefit of the MEDICOM system is the establishment of a worldwide accessible marketplace between providers and health care professionals. The latter will gain access to up-to-date, high-quality product information in an easy and friendly way, along with enhanced marketing procedures and more efficient after-sales support.
Academic medical center libraries on the Web.
Tannery, N H; Wessel, C B
1998-01-01
Academic medical center libraries are moving towards publishing electronically, utilizing networked technologies, and creating digital libraries. The catalyst for this movement has been the Web. An analysis of academic medical center library Web pages was undertaken to assess the information created and communicated in early 1997. A summary of present uses and suggestions for future applications is provided. A method for evaluating and describing the content of library Web sites was designed. The evaluation included categorizing basic information such as description and access to library services, access to commercial databases, and use of interactive forms. The main goal of the evaluation was to assess original resources produced by these libraries. PMID:9803298
ERIC Educational Resources Information Center
Cavaleri, Piero
2008-01-01
Purpose: The purpose of this paper is to describe the use of AJAX for searching the Biblioteche Oggi database of bibliographic records. Design/methodology/approach: The paper is a demonstration of how bibliographic database single page interfaces allow the implementation of more user-friendly features for social and collaborative tasks. Findings:…
Ratinaud, Pierre; Andersson, Gerhard
2018-01-01
Background When people with health conditions begin to manage their health issues, one important issue that emerges is the question as to what exactly do they do with the information that they have obtained through various sources (eg, news media, social media, health professionals, friends, and family). The information they gather helps form their opinions and, to some degree, influences their attitudes toward managing their condition. Objective This study aimed to understand how tinnitus is represented in the US newspaper media and in Facebook pages (ie, social media) using text pattern analysis. Methods This was a cross-sectional study based upon secondary analyses of publicly available data. The 2 datasets (ie, text corpuses) analyzed in this study were generated from US newspaper media during 1980-2017 (downloaded from the database US Major Dailies by ProQuest) and Facebook pages during 2010-2016. The text corpuses were analyzed using the Iramuteq software using cluster analysis and chi-square tests. Results The newspaper dataset had 432 articles. The cluster analysis resulted in 5 clusters, which were named as follows: (1) brain stimulation (26.2%), (2) symptoms (13.5%), (3) coping (19.8%), (4) social support (24.2%), and (5) treatment innovation (16.4%). A time series analysis of clusters indicated a change in the pattern of information presented in newspaper media during 1980-2017 (eg, more emphasis on cluster 5, focusing on treatment inventions). The Facebook dataset had 1569 texts. The cluster analysis resulted in 7 clusters, which were named as: (1) diagnosis (21.9%), (2) cause (4.1%), (3) research and development (13.6%), (4) social support (18.8%), (5) challenges (11.1%), (6) symptoms (21.4%), and (7) coping (9.2%). A time series analysis of clusters indicated no change in information presented in Facebook pages on tinnitus during 2011-2016. 
Conclusions The study highlights the specific aspects about tinnitus that the US newspaper media and Facebook pages focus on, as well as how these aspects change over time. These findings can help health care providers better understand the presuppositions that tinnitus patients may have. More importantly, the findings can help public health experts and health communication experts in tailoring health information about tinnitus to promote self-management, as well as assisting in appropriate choices of treatment for those living with tinnitus. PMID:29739734
Identification of metal ion binding sites based on amino acid sequences
Cao, Xiaoyong; Hu, Xiuzhen; Zhang, Xiaojin; Gao, Sujuan; Ding, Changjiang; Feng, Yonge; Bao, Weihua
2017-01-01
The identification of metal ion binding sites is important for protein function annotation and the design of new drug molecules. This study presents an effective method of analyzing and identifying the binding residues of metal ions based solely on sequence information. Ten metal ions were extracted from the BioLip database: Zn2+, Cu2+, Fe2+, Fe3+, Ca2+, Mg2+, Mn2+, Na+, K+ and Co2+. The analysis showed that Zn2+, Cu2+, Fe2+, Fe3+, and Co2+ were sensitive to the conservation of amino acids at binding sites, and promising results can be achieved using the Position Weight Scoring Matrix algorithm, with an accuracy of over 79.9% and a Matthews correlation coefficient of over 0.6. The binding sites of other metals can also be accurately identified using the Support Vector Machine algorithm with multifeature parameters as input. In addition, we found that Ca2+ was insensitive to hydrophobicity and hydrophilicity information and Mn2+ was insensitive to polarization charge information. An online server was constructed based on the framework of the proposed method and is freely available at http://60.31.198.140:8081/metal/HomePage/HomePage.html. PMID:28854211
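The Position Weight Scoring Matrix step can be illustrated with a generic log-odds PWM built from aligned binding-site windows. The paper's exact weighting, window size, and training data are not given here, so the pseudocount, uniform background, and toy zinc-site windows below are assumptions.

```python
import math

# Generic position weight scoring matrix over binding-site windows
# (a textbook PWM; the paper's exact weighting scheme may differ).

AA = "ACDEFGHIKLMNPQRSTVWY"
BACKGROUND = 1 / len(AA)   # assume a uniform background frequency

def build_pwm(windows, pseudocount=1.0):
    """Log-odds matrix from equal-length training windows."""
    length = len(windows[0])
    pwm = []
    for pos in range(length):
        counts = {a: pseudocount for a in AA}
        for w in windows:
            counts[w[pos]] += 1
        total = sum(counts.values())
        pwm.append({a: math.log((counts[a] / total) / BACKGROUND) for a in AA})
    return pwm

def score(pwm, window):
    # Sum of per-position log-odds; higher means more binding-site-like.
    return sum(col[a] for col, a in zip(pwm, window))

sites = ["CHEC", "CHDC", "CYEC"]   # toy Zn2+-binding windows (invented)
pwm = build_pwm(sites)
```

A window resembling the conserved training sites scores higher than an unrelated one, which is the property the reported accuracy figures rest on for the conservation-sensitive ions.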
NLTE4 Plasma Population Kinetics Database
National Institute of Standards and Technology Data Gateway
SRD 159 NLTE4 Plasma Population Kinetics Database (Web database for purchase) This database contains benchmark results for simulation of plasma population kinetics and emission spectra. The data were contributed by the participants of the 4th Non-LTE Code Comparison Workshop who have unrestricted access to the database. The only limitation for other users is in hidden labeling of the output results. Guest users can proceed to the database entry page without entering userid and password.
www.fallasdechile.cl, the First Online Repository for Neotectonic Faults in the Chilean Andes
NASA Astrophysics Data System (ADS)
Aron, F.; Salas, V.; Bugueño, C. J.; Hernández, C.; Leiva, L.; Santibanez, I.; Cembrano, J. M.
2016-12-01
We introduce the site www.fallasdechile.cl, created and maintained by undergraduate students and researchers at the Catholic University of Chile. Though the web page seeks to inform and educate the general public about potentially seismogenic faults of the country, layers of increasing content complexity allow students, researchers and educators to consult the site as a scientific tool as well. This is the first comprehensive, open access database on Chilean geologic faults; we envision that it may grow organically with contributions from peer scientists, resembling the SCEC community fault model for southern California. Our website aims to fill a gap between science and society, providing users the opportunity to get involved through self-driven learning with interactive education modules. The main page highlights recent developments and open questions in Chilean earthquake science. Front pages show first-level information on general concepts in earthquake topics such as tectonic settings, the definition of geologic faults, and space-time constraints of faults. Users can navigate interactive modules to explore, with real data, different earthquake scenarios and compute values of seismic moment and magnitude. A second level covers Chilean/Andean faults classified according to their geographic location, each containing at least one of the following parameters: mapped trace, 3D geometry, sense of slip, recurrence times and date of last event. Fault traces are displayed on an interactive map using a Google Maps API. The material is compiled and curated in an effort to present, to our knowledge, accurate and up-to-date information. If interested, the user can navigate to a third layer containing more advanced technical details, including primary sources of the data, a brief structural description, published scientific articles, and links to other online content complementing our site.
Also, geographically referenced fault traces with attributes (kml, shapefiles) and fault 3D surfaces (contours, tsurf files) will be available to download. Given its potential for becoming a reference database for active faults in Chile, this project demonstrates that undergraduates can go beyond the classroom, be of service to the scientific community, and make contributions with broader impacts.
A Tutorial in Creating Web-Enabled Databases with Inmagic DB/TextWorks through ODBC.
ERIC Educational Resources Information Center
Breeding, Marshall
2000-01-01
Explains how to create Web-enabled databases. Highlights include Inmagic's DB/Text WebPublisher product called DB/TextWorks; ODBC (Open Database Connectivity) drivers; Perl programming language; HTML coding; Structured Query Language (SQL); Common Gateway Interface (CGI) programming; and examples of HTML pages and Perl scripts. (LRW)
Web Database Development: Implications for Academic Publishing.
ERIC Educational Resources Information Center
Fernekes, Bob
This paper discusses the preliminary planning, design, and development of a pilot project to create an Internet accessible database and search tool for locating and distributing company data and scholarly work. Team members established four project objectives: (1) to develop a Web accessible database and decision tool that creates Web pages on the…
Intelligent medical information filtering.
Quintana, Y
1998-01-01
This paper describes an intelligent information filtering system to assist users to be notified of updates to new and relevant medical information. Among the major problems users face is the large volume of medical information that is generated each day, and the need to filter and retrieve relevant information. The Internet has dramatically increased the amount of electronically accessible medical information and reduced the cost and time needed to publish. The opportunity of the Internet for the medical profession and consumers is to have more information to make decisions and this could potentially lead to better medical decisions and outcomes. However, without the assistance from professional medical librarians, retrieving new and relevant information from databases and the Internet remains a challenge. Many physicians do not have access to the services of a medical librarian. Most physicians indicate on surveys that they do not prefer to retrieve the literature themselves, or visit libraries because of the lack of recent materials, poor organisation and indexing of materials, lack of appropriate and available material, and lack of time. The information filtering system described in this paper records the online web browsing behaviour of each user and creates a user profile of the index terms found on the web pages visited by the user. A relevance-ranking algorithm then matches the user profiles to the index terms of new health care web pages that are added each day. The system creates customised summaries of new information for each user. A user can then connect to the web site to read the new information. Relevance feedback buttons on each page ask the user to rate the usefulness of the page to their immediate information needs. Errors in relevance ranking are reduced in this system by having both the user profile and medical information represented in the same representation language using a controlled vocabulary. 
The system also updates the user profiles automatically, relieving the user of this burden while still allowing preferences to be stated explicitly. An initial evaluation of this system was done with health consumers using a web site on consumer health. It was found that users often modified their criteria for what they considered relevant not only between browsing sessions but also during a session. A user's criteria for what is relevant are constantly changing as they interact with the information. New, revised metrics of recall and precision are needed to account for partially relevant judgements and the dynamically changing criteria of users. Future research, development, and evaluation of interactive information retrieval systems will need to take into account users' dynamically changing criteria of relevance.
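The profile-matching step described above can be sketched as a term-vector comparison: the profile accumulates controlled-vocabulary index-term counts from browsed pages, and each new page is ranked by cosine similarity to that profile. This is a generic sketch, not the system's published relevance-ranking algorithm; the vocabulary terms and page names are illustrative.

```python
import math
from collections import Counter

# Rank new pages against a user profile of index-term frequencies
# (hypothetical terms; the real system used a controlled medical vocabulary).

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Profile built from index terms of pages the user has browsed.
profile = Counter(["diabetes", "insulin", "diet", "diabetes"])

# Index terms of newly added health pages.
new_pages = {
    "page1": Counter(["diabetes", "insulin", "glucose"]),
    "page2": Counter(["asthma", "inhaler"]),
}

ranked = sorted(new_pages, key=lambda p: cosine(profile, new_pages[p]),
                reverse=True)
```

Representing both the profile and the pages in the same term space is what reduces ranking errors, as the abstract notes; relevance feedback would then adjust the profile counts.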
Ortseifen, Vera; Stolze, Yvonne; Maus, Irena; Sczyrba, Alexander; Bremges, Andreas; Albaum, Stefan P; Jaenicke, Sebastian; Fracowiak, Jochen; Pühler, Alfred; Schlüter, Andreas
2016-08-10
To study the metaproteome of a biogas-producing microbial community, fermentation samples were taken from an agricultural biogas plant for microbial cell and protein extraction and corresponding metagenome analyses. Based on metagenome sequence data, taxonomic community profiling was performed to elucidate the composition of bacterial and archaeal sub-communities. The community's cytosolic metaproteome was represented in a 2D-PAGE approach. Metaproteome databases for protein identification were compiled based on the assembled metagenome sequence dataset for the biogas plant analyzed and non-corresponding biogas metagenomes. Protein identification results revealed that the corresponding biogas protein database facilitated the highest identification rate followed by other biogas-specific databases, whereas common public databases yielded insufficient identification rates. Proteins of the biogas microbiome identified as highly abundant were assigned to the pathways involved in methanogenesis, transport and carbon metabolism. Moreover, the integrated metagenome/-proteome approach enabled the examination of genetic-context information for genes encoding identified proteins by studying neighboring genes on the corresponding contig. Exemplarily, this approach led to the identification of a Methanoculleus sp. contig encoding 16 methanogenesis-related gene products, three of which were also detected as abundant proteins within the community's metaproteome. Thus, metagenome contigs provide additional information on the genetic environment of identified abundant proteins. Copyright © 2016 Elsevier B.V. All rights reserved.
WAIS Searching of the Current Contents Database
NASA Astrophysics Data System (ADS)
Banholzer, P.; Grabenstein, M. E.
The Homer E. Newell Memorial Library of NASA's Goddard Space Flight Center is developing capabilities to permit Goddard personnel to access electronic resources of the Library via the Internet. The Library's support services contractor, Maxima Corporation, and their subcontractor, SANAD Support Technologies have recently developed a World Wide Web Home Page (http://www-library.gsfc.nasa.gov) to provide the primary means of access. The first searchable database to be made available through the HomePage to Goddard employees is Current Contents, from the Institute for Scientific Information (ISI). The initial implementation includes coverage of articles from the last few months of 1992 to present. These records are augmented with abstracts and references, and often are more robust than equivalent records in bibliographic databases that currently serve the astronomical community. Maxima/SANAD selected Wais Incorporated's WAIS product with which to build the interface to Current Contents. This system allows access from Macintosh, IBM PC, and Unix hosts, which is an important feature for Goddard's multiplatform environment. The forms interface is structured to allow both fielded (author, article title, journal name, id number, keyword, subject term, and citation) and unfielded WAIS searches. The system allows a user to: Retrieve individual journal article records. Retrieve Table of Contents of specific issues of journals. Connect to articles with similar subject terms or keywords. Connect to other issues of the same journal in the same year. Browse journal issues from an alphabetical list of indexed journal names.
Content and Design Features of Academic Health Sciences Libraries' Home Pages.
McConnaughy, Rozalynd P; Wilson, Steven P
2018-01-01
The goal of this content analysis was to identify commonly used content and design features of academic health sciences library home pages. After developing a checklist, data were collected from 135 academic health sciences library home pages. The core components of these library home pages included a contact phone number, a contact email address, an Ask-a-Librarian feature, the physical address listed, a feedback/suggestions link, subject guides, a discovery tool or database-specific search box, multimedia, social media, a site search option, a responsive web design, and a copyright year or update date.
InverPep: A database of invertebrate antimicrobial peptides.
Gómez, Esteban A; Giraldo, Paula; Orduz, Sergio
2017-03-01
The aim of this work was to construct InverPep, a database specialised in experimentally validated antimicrobial peptides (AMPs) from invertebrates. AMP data contained in InverPep were manually curated from other databases and the scientific literature. MySQL was integrated with the development platform Laravel; this framework allows PHP to be integrated with HTML and was used to design the InverPep web page interface. InverPep contains 18 separate fields, including InverPep code, phylum and species source, peptide name, sequence, peptide length, secondary structure, molar mass, charge, isoelectric point, hydrophobicity, Boman index, aliphatic index and percentage of hydrophobic amino acids. CALCAMPI, an algorithm to calculate the physicochemical properties of multiple peptides simultaneously, was programmed in the Perl language. To date, InverPep contains 702 experimentally validated AMPs from invertebrate species. All of the peptides contain information associated with their source, physicochemical properties, secondary structure, biological activity and links to external literature. Most AMPs in InverPep have a length between 10 and 50 amino acids, a positive charge, a Boman index between 0 and 2 kcal/mol, and 30-50% hydrophobic amino acids. InverPep includes 33 AMPs not reported in other databases. In addition, CALCAMPI and a statistical analysis of the InverPep data are presented. The InverPep database is available in English and Spanish. InverPep is a useful database for studying invertebrate AMPs, and its information could be used for the design of new peptides. The user-friendly interface of InverPep and its information can be freely accessed via a web-based browser at http://ciencias.medellin.unal.edu.co/gruposdeinvestigacion/prospeccionydisenobiomoleculas/InverPep/public/home_en. Copyright © 2016 International Society for Chemotherapy of Infection and Cancer. Published by Elsevier Ltd. All rights reserved.
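A CALCAMPI-style batch computation of a few of the listed descriptors might look like the following. The formulas are standard textbook approximations (net charge at neutral pH from Lys/Arg vs. Asp/Glu, hydrophobic-residue percentage, Ikai's aliphatic index), not necessarily the ones InverPep actually implements, and the function names are invented.

```python
# Sketch of bulk physicochemical property calculation for peptides
# (CALCAMPI itself is written in Perl and computes more descriptors,
# e.g. Boman index and isoelectric point).

HYDROPHOBIC = set("AILMFWVC")   # one common hydrophobic-residue convention

def net_charge(seq):
    # Crude charge at neutral pH: Lys/Arg positive, Asp/Glu negative;
    # His and terminal groups ignored for simplicity.
    return sum(seq.count(a) for a in "KR") - sum(seq.count(a) for a in "DE")

def hydrophobic_pct(seq):
    return 100.0 * sum(1 for a in seq if a in HYDROPHOBIC) / len(seq)

def aliphatic_index(seq):
    # Ikai's aliphatic index from mole percentages of Ala, Val, Ile, Leu.
    n = len(seq)
    x = lambda a: 100.0 * seq.count(a) / n
    return x("A") + 2.9 * x("V") + 3.9 * (x("I") + x("L"))

def properties(peptides):
    """Compute descriptors for many peptides at once, CALCAMPI-style."""
    return {p: (net_charge(p), round(hydrophobic_pct(p), 1)) for p in peptides}
```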
Karabulut, Nevzat
2017-03-01
The aim of this study is to investigate the frequency of incorrect citations and its effects on the impact factor of a specific biomedical journal: the American Journal of Roentgenology. The Cited Reference Search function of Thomson Reuters' Web of Science database (formerly the Institute for Scientific Information's Web of Knowledge database) was used to identify erroneous citations. This was done by entering the journal name into the Cited Work field and entering "2011-2012" into the Cited Year(s) field. The errors in any part of the inaccurately cited references (e.g., author names, title, year, volume, issue, and page numbers) were recorded, and the types of errors (i.e., absent, deficient, or mistyped) were analyzed. Erroneous citations were corrected using the Suggest a Correction function of the Web of Science database. The effect of inaccurate citations on the impact factor of the AJR was calculated. Overall, 183 of 1055 citable articles published in 2011-2012 were inaccurately cited 423 times (mean [± SD], 2.31 ± 4.67 times; range, 1-44 times). Of these 183 articles, 110 (60.1%) were web-only articles and 44 (24.0%) were print articles. The most commonly identified errors were page number errors (44.8%) and misspelling of an author's name (20.2%). Incorrect citations adversely affected the impact factor of the AJR by 0.065 in 2012 and by 0.123 in 2013. Inaccurate citations are not infrequent in biomedical journals, yet they can be detected and corrected using the Web of Science database. Although the accuracy of references is primarily the responsibility of authors, the journal editorial office should also define a periodic inaccurate citation check task and correct erroneous citations to reclaim unnecessarily lost credit.
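The arithmetic behind the reported loss is straightforward: the impact factor for year Y divides citations received in Y to articles published in Y-1 and Y-2 by the number of citable items from those two years, so every citation lost to an erroneous reference costs 1/(citable items). In the sketch below only the 1055 citable articles come from the abstract; `counted` and `missed` are hypothetical, with `missed` chosen so the loss matches the reported 0.065.

```python
# Illustrative impact-factor arithmetic (citation counts are invented,
# not the AJR's actual figures; only the 1055 citable items is from the text).

def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    return citations_to_prev_two_years / citable_items_prev_two_years

citable = 1055   # citable AJR articles published in 2011-2012
counted = 3000   # hypothetical correctly attributed citations
missed = 69      # hypothetical citations lost to erroneous references

loss = impact_factor(counted + missed, citable) - impact_factor(counted, citable)
# loss == missed / citable: each uncredited citation costs 1/citable IF points.
```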
TMDB: a literature-curated database for small molecular compounds found from tea.
Yue, Yi; Chu, Gang-Xiu; Liu, Xue-Shi; Tang, Xing; Wang, Wei; Liu, Guang-Jin; Yang, Tao; Ling, Tie-Jun; Wang, Xiao-Gang; Zhang, Zheng-Zhu; Xia, Tao; Wan, Xiao-Chun; Bao, Guan-Hu
2014-09-16
Tea is one of the most consumed beverages worldwide. The health effects of tea are attributed to a wealth of different chemical components in tea. Thousands of studies on the chemical constituents of tea have been reported. However, data from these individual reports have not been collected into a single database. The lack of a curated database of related information limits research in this field, so a cohesive database system needs to be constructed for data deposit and further application. The Tea Metabolome database (TMDB), a manually curated and web-accessible database, was developed to provide detailed, searchable descriptions of small molecular compounds found in Camellia spp., especially in the plant Camellia sinensis, and compounds in its manufactured products (different kinds of tea infusion). TMDB is currently the most complete and comprehensive curated collection of tea compound data in the world. It contains records for more than 1393 constituents found in tea, with information gathered from 364 published books, journal articles, and electronic databases. It also contains experimental 1H NMR and 13C NMR data collected from purified reference compounds or from other database resources such as HMDB. The TMDB interface allows users to retrieve tea compound entries by keyword search using compound name, formula, occurrence, and CAS registry number. Each entry in the TMDB contains an average of 24 separate data fields, including the original plant species, compound structure, formula, molecular weight, name, CAS registry number, compound type, compound uses including health benefits, reference literature, NMR and MS data, and the corresponding IDs from databases such as HMDB and PubMed. Users can also contribute novel regulatory entries by using a web-based submission page. The TMDB database is freely accessible from the URL http://pcsb.ahau.edu.cn:8080/TCDB/index.jsp.
The TMDB is designed to address the broad needs of tea biochemists, natural products chemists, nutritionists, and members of the tea-related research community. The TMDB database provides a solid platform for the collection, standardization, and searching of compound information found in tea. As such, this database will be a comprehensive repository for the tea biochemistry and tea health research community.
Online nutrition information for pregnant women: a content analysis.
Storr, Tayla; Maher, Judith; Swanepoel, Elizabeth
2017-04-01
Pregnant women actively seek health information online, including nutrition and food-related topics. However, the accuracy and readability of this information have not been evaluated. The aim of this study was to describe and evaluate pregnancy-related food and nutrition information available online. Four search engines were used to search for pregnancy-related nutrition web pages. Content analysis of web pages was performed. Web pages were assessed against the 2013 Australian Dietary Guidelines to assess accuracy. Flesch-Kincaid (F-K), Simple Measure of Gobbledygook (SMOG), Gunning Fog Index (FOG) and Flesch reading ease (FRE) formulas were used to assess readability. Data were analysed descriptively. Spearman's correlation was used to assess the relationship between web page characteristics. The Kruskal-Wallis test was used to check for differences among readability and other web page characteristics. A total of 693 web pages were included. Web page types included commercial (n = 340), not-for-profit (n = 113), blogs (n = 112), government (n = 89), personal (n = 36) and educational (n = 3). The accuracy of online nutrition information varied, with 39.7% of web pages containing accurate information, 22.8% containing mixed information and 37.5% containing inaccurate information. The average reading grade of all pages analysed, as measured by F-K, SMOG and FOG, was 11.8. The mean FRE was 51.6, a 'fairly difficult to read' score. Only 0.5% of web pages were written at or below grade 6 according to F-K, SMOG and FOG. The findings suggest that the accuracy of pregnancy-related nutrition information is a problem on the internet. Web page readability is generally difficult, meaning that the information may not be accessible to those who cannot read at a sophisticated level. © 2016 John Wiley & Sons Ltd.
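The four readability measures used in the study are closed-form functions of word, sentence, and syllable counts. A rough implementation is sketched below using the published formulas; the naive vowel-group syllable counter is an assumption (real implementations count syllables and sentences more carefully), so scores will differ slightly from standard tools.

```python
import re

# Rough implementations of FRE, Flesch-Kincaid grade, SMOG, and Gunning FOG.

def syllables(word):
    # Naive heuristic: count groups of consecutive vowels, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    w, s = len(words), sentences
    syl = sum(syllables(x) for x in words)
    complex_words = sum(1 for x in words if syllables(x) >= 3)
    return {
        "FRE":  206.835 - 1.015 * (w / s) - 84.6 * (syl / w),
        "F-K":  0.39 * (w / s) + 11.8 * (syl / w) - 15.59,
        "SMOG": 1.0430 * (30 * complex_words / s) ** 0.5 + 3.1291,
        "FOG":  0.4 * ((w / s) + 100 * complex_words / w),
    }
```

A grade-11.8 average on F-K, SMOG and FOG, as the study reports, corresponds to long sentences and many three-plus-syllable words, well above the grade-6 target for accessible health material.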
Health Information on Internet: Quality, Importance, and Popularity of Persian Health Websites
Samadbeik, Mahnaz; Ahmadi, Maryam; Mohammadi, Ali; Mohseni Saravi, Beniamin
2014-01-01
Background: The Internet has provided great opportunities for disseminating both accurate and inaccurate health information. Therefore, the quality of information is considered a widespread concern affecting human life. Despite the increasingly substantial growth in the number of users, Persian health websites and the proportion of internet-using patients, little is known about the quality of Persian medical and health websites. Objectives: The current study aimed first to assess the quality, popularity and importance of websites providing Persian health-related information, and second to evaluate the correlation of the popularity and importance rankings with quality scores on the Internet. Materials and Methods: The sample websites were identified by entering health-related keywords into the four most popular search engines of Iranian users, based on the Alexa ranking at the time of the study. Each selected website was assessed using three qualified tools: the Bomba and Land Index, Google PageRank and the Alexa ranking. Results: The evaluated sites' characteristics (ownership structure, database, scope and objective) had no effect on the Alexa traffic global rank, Alexa traffic rank in Iran, Google PageRank or Bomba total score. Most websites (78.9 percent, n = 56) were in the moderate category (8 ≤ x ≤ 11.99) based on their quality levels. There was no statistically significant association between Google PageRank and the Bomba index variables or the Alexa traffic global rank (P > 0.05). Conclusions: The Persian health websites had better Bomba quality scores on availability and usability guidelines than on other guidelines. Google PageRank did not properly reflect the real quality of the evaluated websites, and Internet users seeking online health information should not merely rely on it for any kind of prejudgment regarding Persian health websites. However, they can use the Iran Alexa rank as a primary filtering tool for these websites.
Therefore, designing search engines dedicated to exploring accredited Persian health-related websites can be an effective method of accessing high-quality Persian health websites. PMID:24910795
Microcephaly Information Page
National Institute of Neurological Disorders and Stroke (NINDS)
Maccari, Giuseppe; Robinson, James; Ballingall, Keith; Guethlein, Lisbeth A.; Grimholt, Unni; Kaufman, Jim; Ho, Chak-Sum; de Groot, Natasja G.; Flicek, Paul; Bontrop, Ronald E.; Hammond, John A.; Marsh, Steven G. E.
2017-01-01
The IPD-MHC Database project (http://www.ebi.ac.uk/ipd/mhc/) collects and expertly curates sequences of the major histocompatibility complex from non-human species and provides the infrastructure and tools to enable accurate analysis. Since the first release of the database in 2003, IPD-MHC has grown and currently hosts a number of specific sections, with more than 7000 alleles from 70 species, including non-human primates, canines, felines, equids, ovids, suids, bovins, salmonids and murids. These sequences are expertly curated and made publicly available through an open access website. The IPD-MHC Database is a key resource in its field, and this has led to an average of 1500 unique visitors and more than 5000 viewed pages per month. As the database has grown in size and complexity, it has created a number of challenges in maintaining and organizing information, particularly the need to standardize nomenclature and taxonomic classification, while incorporating new allele submissions. Here, we describe the latest database release, the IPD-MHC 2.0 and discuss planned developments. This release incorporates sequence updates and new tools that enhance database queries and improve the submission procedure by utilizing common tools that are able to handle the varied requirements of each MHC-group. PMID:27899604
Structured Forms Reference Set of Binary Images (SFRS)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images (SFRS) (Web, free access) The NIST Structured Forms Database (Special Database 2) consists of 5,590 pages of binary, black-and-white images of synthesized documents. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
The Technology Education Graduate Research Database, 1892-2000. CTTE Monograph.
ERIC Educational Resources Information Center
Reed, Philip A., Ed.
The Technology Education Graduate Research Database (TEGRD) was designed in two parts. The first part was a 384 page bibliography of theses and dissertations from 1892-2000. The second part was an online, searchable database of graduate research completed within technology education from 1892 to the present. The primary goals of the project were:…
NASA Technical Reports Server (NTRS)
Reid, John; Egge, Robert; McAfee, Nancy
2000-01-01
This document summarizes the feedback gathered during the user-testing phase in the development of an electronic library application: the Aeronautics and Space Access Pages (ASAP). It first provides some historical background on the NASA Scientific and Technical Information (STI) program and its efforts to enhance the services it offers the aerospace community. Following a brief overview of the ASAP project, it reviews the results of an online user survey, and from the lessons learned therein, outlines direction for future development of the project.
Sleep Apnea Information Page (NINDS)
Googling endometriosis: a systematic review of information available on the Internet.
Hirsch, Martin; Aggarwal, Shivani; Barker, Claire; Davis, Colin J; Duffy, James M N
2017-05-01
The demand for health information online is increasing rapidly without clear governance. We aim to evaluate the credibility, quality, readability, and accuracy of online patient information concerning endometriosis. We searched 5 popular Internet search engines: aol.com, ask.com, bing.com, google.com, and yahoo.com. We developed a search strategy in consultation with patients with endometriosis, to identify relevant World Wide Web pages. Pages containing information related to endometriosis for women with endometriosis or the public were eligible. Two independent authors screened the search results. World Wide Web pages were evaluated using validated instruments across 3 of the 4 following domains: (1) credibility (White Paper instrument; range 0-10); (2) quality (DISCERN instrument; range 0-85); and (3) readability (Flesch-Kincaid instrument; range 0-100); and (4) accuracy (assessed by a prioritized criteria developed in consultation with health care professionals, researchers, and women with endometriosis based on the European Society of Human Reproduction and Embryology guidelines [range 0-30]). We summarized these data in diagrams, tables, and narratively. We identified 750 World Wide Web pages, of which 54 were included. Over a third of Web pages did not attribute authorship and almost half the included pages did not report the sources of information or academic references. No World Wide Web page provided information assessed as being written in plain English. A minority of web pages were assessed as high quality. A single World Wide Web page provided accurate information: evidentlycochrane.net. Available information was, in general, skewed toward the diagnosis of endometriosis. There were 16 credible World Wide Web pages, however the content limitations were infrequently discussed. No World Wide Web page scored highly across all 4 domains. 
In the unlikely event that a World Wide Web page reports high-quality, accurate, and credible health information it is typically challenging for a lay audience to comprehend. Health care professionals, and the wider community, should inform women with endometriosis of the risk of outdated, inaccurate, or even dangerous information online. The implementation of an information standard will incentivize providers of online information to establish and adhere to codes of conduct. Copyright © 2016 Elsevier Inc. All rights reserved.
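The readability instrument cited above (range 0-100) is, in its standard form, the Flesch Reading Ease formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words). A minimal sketch follows, using a crude vowel-group heuristic for syllable counting rather than the validated instrument applied in the study.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease (nominally 0-100; higher = easier to read).
    Syllables are estimated by counting vowel groups -- a crude
    heuristic, not the validated instrument used in the study."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

# Plain English scores far higher than dense clinical prose.
print(flesch_reading_ease("The cat sat on the mat."))
print(flesch_reading_ease("Considerable epidemiological heterogeneity complicates interpretation."))
```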
Kozlov, Elissa; Carpenter, Brian D
2017-04-01
Americans rely on the Internet for health information, and people are likely to turn to online resources to learn about palliative care as well. The purpose of this study was to analyze online palliative care information pages to evaluate the breadth of their content. We also compared how frequently basic facts about palliative care appeared on the Web pages to expert rankings of the importance of those facts to understanding palliative care. Twenty-six pages were identified. Two researchers independently coded each page for content. Palliative care professionals (n = 20) rated the importance of content domains for comparison with content frequency in the Web pages. We identified 22 recurring broad concepts about palliative care. Each information page included, on average, 9.2 of these broad concepts (standard deviation [SD] = 3.36, range = 5-15). Similarly, each broad concept was present in an average of 45% of the Web pages (SD = 30.4%, range = 8%-96%). Significant discrepancies emerged between expert ratings of the importance of the broad concepts and the frequency of their appearance in the Web pages ( r τ = .25, P > .05). This study demonstrates that palliative care information pages available online vary considerably in their content coverage. Furthermore, information that palliative care professionals rate as important for consumers to know is not always included in Web pages. We developed guidelines for information pages for the purpose of educating consumers in a consistent way about palliative care.
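The r τ statistic reported above is Kendall's rank correlation between expert importance ratings and how often each concept appears on the Web pages. A minimal sketch of the tau-a computation follows; the sample ratings and coverage figures are invented, not data from the study.

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a): concordant minus discordant
    pairs, divided by the total number of pairs."""
    assert len(x) == len(y)
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

expert = [5, 4, 3, 2, 1]               # hypothetical importance ratings
coverage = [0.9, 0.5, 0.7, 0.3, 0.1]   # hypothetical page-coverage rates
print(round(kendall_tau(expert, coverage), 2))  # 0.8
```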
CBD: a biomarker database for colorectal cancer.
Zhang, Xueli; Sun, Xiao-Feng; Cao, Yang; Ye, Benchen; Peng, Qiliang; Liu, Xingyun; Shen, Bairong; Zhang, Hong
2018-01-01
The colorectal cancer (CRC) biomarker database (CBD) was established from 870 identified CRC biomarkers and their relevant information, drawn from 1115 original articles in PubMed published from 1986 to 2017. In this version of the CBD, CRC biomarker data were collected, sorted, displayed, and analysed. With its credible content, the CBD is a powerful and time-saving tool that provides comprehensive and accurate information for further CRC biomarker research. The CBD was constructed on a MySQL server; HTML, PHP, and JavaScript were used to implement the web interface, Apache was selected as the HTTP server, and all web operations run under Windows. The CBD provides users with information on individual biomarkers, categorized by the biological category, source, and application of the biomarkers; the experimental methods, results, authors, and publication sources; and the research region, average cohort age, gender, race, number of tumours, tumour location, and stage. Data were collected only from articles with clear and credible results showing that the biomarkers are useful in the diagnosis, treatment, or prognosis of CRC. The CBD also provides a professional platform for researchers interested in CRC to communicate, exchange research ideas, and design further high-quality CRC studies; they can submit new findings via the submission page and communicate with us through the CBD. Database URL: http://sysbio.suda.edu.cn/CBD/.
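As an illustration of the kind of categorized biomarker query the CBD exposes, a toy sketch follows. The real CBD runs on MySQL/PHP/Apache, so SQLite, this schema, and the rows are stand-ins invented for illustration, not the actual database contents.

```python
import sqlite3

# Toy stand-in for a categorized biomarker table (fields loosely echo
# the abstract: category, source, application, tumour location, stage).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE biomarker (
        name TEXT, category TEXT, source TEXT,
        application TEXT, tumour_location TEXT, stage TEXT
    )""")
conn.executemany(
    "INSERT INTO biomarker VALUES (?, ?, ?, ?, ?, ?)",
    [("KRAS", "gene", "tissue", "prognosis", "colon", "III"),
     ("CEA", "protein", "serum", "diagnosis", "rectum", "II")])

# Query biomarkers by application, as a CBD search page might.
rows = conn.execute(
    "SELECT name FROM biomarker WHERE application = ?",
    ("diagnosis",)).fetchall()
print(rows)  # [('CEA',)]
```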
Weiss, Lizabeth M; Drake, Audrey
2007-01-01
An electronic database was developed for succession planning and placement of nursing leaders who are interested in, and ready, willing, and able to accept, an assignment to a nursing leadership position. The tool is a 1-page form used to identify candidates for nursing leadership assignments. It has been deployed nationally, with access to the database restricted to nurse executives at every Veterans Health Administration facility for the purpose of entering the names of developed nurse leaders ready for a leadership assignment. The tool is easily accessed through the Veterans Health Administration Office of Nursing Service, and limiting access to the nurse executive group ensures that identified candidates are qualified. The survey tool captures the candidate's demographic information and certifications/credentials. The completed form is entered into a database from which a report can be generated, producing a list of potential candidates to contact to supplement a local or Veterans Integrated Service Network-wide position announcement. The data forms can be sorted by position, area of clinical or functional experience, training programs completed, and geographic preference, and can be edited, updated, added, or deleted in the system as the need is identified. The tool gives facilities with few internal candidates a resource of Department of Veterans Affairs-prepared staff from which to seek additional candidates, and it provides a way for interested candidates to be considered for positions outside their local geographic area.
Buczkowski, Brian J.; Reid, Jane A.; Jenkins, Chris J.; Reid, Jamey M.; Williams, S. Jeffress; Flocks, James G.
2006-01-01
Over the past 50 years there has been an explosion in scientific interest, research effort and information gathered on the geologic sedimentary character of the United States continental margin. Data and information from thousands of publications have greatly increased our scientific understanding of the geologic origins of the shelf surface but rarely have those data been combined and integrated. This publication is the first release of the Gulf of Mexico and Caribbean (Puerto Rico and U.S. Virgin Islands) coastal and offshore data from the usSEABED database. The report contains a compilation of published and previously unpublished sediment texture and other geologic data about the sea floor from diverse sources. usSEABED is an innovative database system developed to bring assorted data together in a unified database. The dbSEABED system is used to process the data. Examples of maps displaying attributes such as grain size and sediment color are included. This database contains information that is a scientific foundation for the USGS Marine Aggregate Resources and Processes Assessment and Benthic Habitats projects, and will be useful to the marine science community for other studies of the Gulf of Mexico and Caribbean continental margins. This publication is divided into ten sections: Home, Introduction, Content, usSEABED (data), dbSEABED (processing), Data Catalog, References, Contacts, Acknowledgments and Frequently Asked Questions. Use the navigation bar on the left to navigate to specific sections of this report. Underlined topics throughout the publication are links to more information. Links to specific and detailed information on processing and those to pages outside this report will open in a new browser window.
[Title, abstract and keywords: essential issues in medical bibliographic research].
Bonciu, Carmen
2005-01-01
Medical information, conveyed by books, journal articles, conference and congress papers, or posters, represents the product of medical research. The informational cycle can be shown schematically as: Bibliographic information --> Medical research --> Research results --> Bibliographic information. The result of scientific research (articles, posters, etc.) re-enters the informational cycle as bibliographic information for new medical research. Bibliographic research is still a time- and effort-consuming activity, despite the explosive growth of information technology, and it requires specific medical, information-technology, and bibliographic knowledge. The present work aims to emphasize the importance of selecting title, keyword, and abstract terms for article writing and publication in medical journals, and of properly choosing meta-information in web pages. The bibliographic research was made using two databases with English-language information about articles from international medical journals: MEDLINE (PUBMED) and PROQUEST MEDICAL LIBRARY. The results were compared with GOOGLE and YAHOO searches; these search engines are now commonly used by all types of Internet users (including researchers, librarians, etc.). It is essential for researchers to know the article registration mechanism in a database and the means of bibliographic investigation of online databases, so that title, keyword, and abstract terms are selected properly. The use of words not related to the subject in the title, keywords, or abstract results in ambiguities. The writing and translation of scientific words must also be accurate, especially when article authors are non-native English speakers: e.g., chimiotherapy (sic)--20 articles in Medline, 270 articles in Google; morphopathology (sic)--78 articles in Medline, and 294 in Google; morphopatology (sic)--2 articles in Medline, and 12 articles in Google.
Air travel and children’s health issues
2007-01-01
With more children travelling by air, health care professionals should become more familiar with some of the unique health issues associated with air travel. A thorough literature search involving a number of databases (1966 to 2006) revealed very few evidence-based papers on air travel and children. Many of the existing recommendations are based on descriptive evidence and expert opinion. The present statement will help physicians to inform families about the health-related issues concerning air travel and children, including otitis media, cardiopulmonary disorders, allergies, diabetes, infection and injury prevention. An accompanying document (Information for Parents and Caregivers) is also available in this issue of Paediatrics & Child Health (pages 51-52) to help answer common questions from parents. PMID:19030341
MedlinePlus Videos and Cool Tools
X-ray safety information: see the Safety page for more information about pregnancy and x-rays, and about radiation dose.
CATS 1990 household travel survey : technical documentation for the household, person and trip files
DOT National Transportation Integrated Search
1994-04-01
This report contains the database documentation and data dictionary for the Chicago Area Transportation Study's 1990 Household Travel Survey. The database documentation can be found on pages 1 through 25, followed by the data dictionary. Any que...
An XML-based Generic Tool for Information Retrieval in Solar Databases
NASA Astrophysics Data System (ADS)
Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain
This paper presents the current architecture of the `Solar Web Project' now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
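The externally described archive access systems might look like the following sketch, in which the server reads each archive's declaration from an XML configuration file. The element and attribute names here are invented for illustration; the project's actual XML schema is not given in the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML description of two archive access systems, in the
# spirit of the configuration-driven design the abstract describes.
config = """
<archives>
  <archive name="soho-eit" type="sql" url="jdbc:mysql://medoc/eit"/>
  <archive name="ftp-site" type="ftp" url="ftp://example.org/solar"/>
</archives>
"""

root = ET.fromstring(config)
# Build a registry of archive name -> access type for the server.
systems = {a.get("name"): a.get("type") for a in root.iter("archive")}
print(systems)  # {'soho-eit': 'sql', 'ftp-site': 'ftp'}
```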
Burchill, C; Roos, L L; Fergusson, P; Jebamani, L; Turner, K; Dueck, S
2000-01-01
Comprehensive data available in the Canadian province of Manitoba since 1970 have aided study of the interaction between population health, health care utilization, and structural features of the health care system. Given a complex linked database and many ongoing projects, better organization of available epidemiological, institutional, and technical information was needed. The Manitoba Centre for Health Policy and Evaluation wished to develop a knowledge repository to handle data, document research Methods, and facilitate both internal communication and collaboration with other sites. This evolving knowledge repository consists of both public and internal (restricted access) pages on the World Wide Web (WWW). Information can be accessed using an indexed logical format or queried to allow entry at user-defined points. The main topics are: Concept Dictionary, Research Definitions, Meta-Index, and Glossary. The Concept Dictionary operationalizes concepts used in health research using administrative data, outlining the creation of complex variables. Research Definitions specify the codes for common surgical procedures, tests, and diagnoses. The Meta-Index organizes concepts and definitions according to the Medical Sub-Heading (MeSH) system developed by the National Library of Medicine. The Glossary facilitates navigation through the research terms and abbreviations in the knowledge repository. An Education Resources heading presents a web-based graduate course using substantial amounts of material in the Concept Dictionary, a lecture in the Epidemiology Supercourse, and material for Manitoba's Regional Health Authorities. Confidential information (including Data Dictionaries) is available on the Centre's internal website. Use of the public pages has increased dramatically since January 1998, with almost 6,000 page hits from 250 different hosts in May 1999. 
More recently, the number of page hits has averaged around 4,000 per month, while the number of unique hosts has climbed to around 400. This knowledge repository promotes standardization and increases efficiency by placing concepts and associated programming in the Centre's collective memory. Collaboration and project management are facilitated.
Sjogren's Syndrome Information Page
Suzuki, Lalita K; Beale, Ivan L
2006-01-01
The content of personal Web home pages created by adolescents with cancer is a new source of information about this population, of potential benefit to oncology nurses and psychologists. Individual Internet elements found on 21 home pages created by youths with cancer (14-22 years old) were rated for cancer-related self-presentation, information dissemination, and interpersonal connection. Examples of adolescents' online narratives were also recorded. Adolescents with cancer used various Internet elements on their home pages for cancer-related self-presentation (e.g., welcome messages, essays, personal history and diary pages, news articles, and poetry), information dissemination (e.g., through personal interest pages, multimedia presentations, lists, charts, and hyperlinks), and interpersonal connection (e.g., guestbook entries). Results suggest that various elements found on personal home pages are being used by a limited number of young patients with cancer for self-expression, information access, and contact with peers.
Fourment, Mathieu; Gibbs, Mark J
2008-01-01
Background Viruses of the Bunyaviridae have segmented negative-stranded RNA genomes and several of them cause significant disease. Many partial sequences have been obtained from the segments so that GenBank searches give complex results. Sequence databases usually use HTML pages to mediate remote sorting, but this approach can be limiting and may discourage a user from exploring a database. Results The VirusBanker database contains Bunyaviridae sequences and alignments and is presented as two spreadsheets generated by a Java program that interacts with a MySQL database on a server. Sequences are displayed in rows and may be sorted using information that is displayed in columns and includes data relating to the segment, gene, protein, species, strain, sequence length, terminal sequence and date and country of isolation. Bunyaviridae sequences and alignments may be downloaded from the second spreadsheet with titles defined by the user from the columns, or viewed when passed directly to the sequence editor, Jalview. Conclusion VirusBanker allows large datasets of aligned nucleotide and protein sequences from the Bunyaviridae to be compiled and winnowed rapidly using criteria that are formulated heuristically. PMID:18251994
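The column-based sorting described above (rows of sequences, sortable on metadata columns) can be sketched as follows. The records and column names are invented examples, not VirusBanker's actual schema.

```python
# Invented sequence records carrying the kind of column metadata the
# abstract lists (segment, length, country of isolation, etc.).
records = [
    {"accession": "AB1", "segment": "S", "length": 950, "country": "Kenya"},
    {"accession": "CD2", "segment": "L", "length": 6400, "country": "Brazil"},
    {"accession": "EF3", "segment": "M", "length": 3900, "country": "Kenya"},
]

def sort_records(rows, column, descending=False):
    """Sort sequence rows on any metadata column, as a spreadsheet
    client might when the user clicks a column header."""
    return sorted(rows, key=lambda r: r[column], reverse=descending)

by_length = sort_records(records, "length", descending=True)
print([r["accession"] for r in by_length])  # ['CD2', 'EF3', 'AB1']
```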
Fokkema, Ivo F A C; den Dunnen, Johan T; Taschner, Peter E M
2005-08-01
The completion of the human genome project has initiated, as well as provided the basis for, the collection and study of all sequence variation between individuals. Direct access to up-to-date information on sequence variation is currently provided most efficiently through web-based, gene-centered, locus-specific databases (LSDBs). We have developed the Leiden Open (source) Variation Database (LOVD) software approaching the "LSDB-in-a-Box" idea for the easy creation and maintenance of a fully web-based gene sequence variation database. LOVD is platform-independent and uses PHP and MySQL open source software only. The basic gene-centered and modular design of the database follows the recommendations of the Human Genome Variation Society (HGVS) and focuses on the collection and display of DNA sequence variations. With minimal effort, the LOVD platform is extendable with clinical data. The open set-up should both facilitate and promote functional extension with scripts written by the community. The LOVD software is freely available from the Leiden Muscular Dystrophy pages (www.DMD.nl/LOVD/). To promote the use of LOVD, we currently offer curators the possibility to set up an LSDB on our Leiden server. (c) 2005 Wiley-Liss, Inc.
How Intrusion Detection Can Improve Software Decoy Applications
2003-03-01
Information about epilepsy on the internet: An exploratory study of Arabic websites.
Alkhateeb, Jamal M; Alhadidi, Muna S
2018-01-01
The aim of this study was to explore information about epilepsy found on Arabic websites. Information was collected from the internet between November 2016 and January 2017 using the Google and Yahoo search engines. The search terms were the Arabic equivalents of two keywords: epilepsy (Al-saraa) and convulsion (Tashanoj). A total of 144 web pages addressing epilepsy in Arabic were reviewed. The majority of web pages were websites of medical institutions and general health websites, followed by informational and educational websites, others, blogs and websites of individuals, and news and media sites. The topics most commonly addressed were medical treatments for epilepsy (50% of all pages), followed by the definition of epilepsy (41%) and its etiology (34.7%). The results also revealed that the vast majority of web pages did not mention the source of their information, and many did not provide author information. Only a small proportion of the web pages provided adequate information, while relatively few provided inaccurate information or made sweeping generalizations. The findings therefore suggest that more credible Arabic websites on epilepsy need to be developed. These websites should go beyond basic information, offering more evidence-based and updated information about epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
DPS Planetary Science Graduate Programs Listing: A Resource for Students and Advisors
NASA Astrophysics Data System (ADS)
Klassen, David R.; Roman, Anthony; Meinke, Bonnie
2015-11-01
We began a web page on the DPS Education site in 2013 listing all the graduate programs we could find that can lead to a PhD with a planetary science focus. Since then the static page has evolved into a database-driven, filtered-search site. It is intended to be a useful resource for both undergraduate students and undergraduate advisers, allowing them to find and compare programs across a basic set of search criteria. From the filtered list, users can click on links to get a "quick look" at the database information and follow links to each program's main site. The reason for such a list is that planetary science is a heading that covers an extremely diverse set of disciplines. Planetary scientists are usually housed in discipline-based departments, so finding them is typically not easy: undergraduates cannot look for a Planetary Science department, but must (somehow) know to search for them in all their possible places. This can overwhelm even a determined undergraduate student, and even many advisers! We present here the updated site and a walk-through of its basic features. In addition, we ask for community feedback on additional features that would make the system more usable. Finally, we call upon those mentoring and advising undergraduates to use this resource, and upon program admission chairs to continue to review their entries and provide us with the most up-to-date information. The URL for our site is http://dps.aas.org/education/graduate-schools.
Geographic Information Systems and Web Page Development
NASA Technical Reports Server (NTRS)
Reynolds, Justin
2004-01-01
The Facilities Engineering and Architectural Branch is responsible for the design and maintenance of buildings, laboratories, and civil structures. In order to improve efficiency and quality, the FEAB has dedicated itself to establishing a data infrastructure based on Geographic Information Systems (GIS). The value of GIS was explained in an article dating back to 1980 entitled "Need for a Multipurpose Cadastre," which stated, "There is a critical need for a better land-information system in the United States to improve land-conveyance procedures, furnish a basis for equitable taxation, and provide much-needed information for resource management and environmental planning." Scientists and engineers both point to GIS as the solution. What is GIS? According to most textbooks, a Geographic Information System is a class of software that stores, manages, and analyzes mappable features on, above, or below the surface of the earth. GIS software is basically database management software applied to the management of spatial data and information. Simply put, Geographic Information Systems manage, analyze, chart, graph, and map spatial information. At the outset, I was given goals and expectations from my branch and from my mentor with regard to the further implementation of GIS. Those goals are as follows: (1) Continue the development of GIS for the underground structures. (2) Extract and export annotated data from AutoCAD drawing files and construct a database (to serve as a prototype for future work). (3) Examine existing underground record drawings to determine existing and non-existing underground tanks. Once this data was collected and analyzed, I set out on the task of creating a user-friendly database that could be accessed by all members of the branch. It was important that the database be built using programs that most employees already possess, ruling out most AutoCAD-based viewers.
Therefore, I set out to create an Access database that translated onto the web using Internet Explorer as the foundation. After some programming, it was possible to view AutoCAD files and other GIS-related applications in Internet Explorer, while providing the user with a variety of editing commands and setting options. I was also given the task of launching a divisional website using Macromedia Flash and other web-development programs.
[An evaluation of the quality of health web pages using a validated questionnaire].
Conesa Fuentes, Maria del Carmen; Aguinaga Ontoso, Enrique; Hernández Morante, Juan José
2011-01-01
The objective of the present study was to evaluate the quality of general health information on Spanish-language web pages, including the official web pages of the Regional Health Services of the different Autonomous Regions. It is a cross-sectional study. We used a previously validated questionnaire to study the present state of health information on the Internet from a lay user's point of view. By means of PageRank (Google®), we obtained a set of 65 health web pages. After applying exclusion criteria, 36 web pages remained. We also analyzed the official web pages of the different Health Services in Spain (19 pages), for a total of 54 health web pages. In the light of our data, the quality of the general health web pages was generally rather low, especially regarding information quality. Not one page reached the maximum score (19 points). The mean score of the web pages was 9.8±2.8. In conclusion, to avoid the problems arising from this lack of quality, health professionals should design advertising campaigns and other media to teach lay users how to evaluate information quality. Copyright © 2009 Elsevier España, S.L. All rights reserved.
Das, Sankha Subhra; Saha, Pritam
2018-01-01
MicroRNAs (miRNAs) are well known as key regulators of diverse biological pathways. A series of experimental studies has shown that abnormal miRNA expression profiles are responsible for various pathophysiological conditions by modulating genes in disease-associated pathways. In spite of the rapid increase in research data confirming such associations, scientists still do not have access to a consolidated database offering these miRNA-pathway association details for critical diseases. We have developed miRwayDB, a database providing comprehensive information on experimentally validated miRNA-pathway associations in various pathophysiological conditions, utilizing data collected from published literature. To the best of our knowledge, it is the first database that provides information about experimentally validated miRNA-mediated pathway dysregulation as seen specifically in critical human diseases, and hence indicative of a cause-and-effect relationship in most cases. The current version of miRwayDB collects an exhaustive list of miRNA-pathway association entries for 76 critical disease conditions by reviewing 663 published articles. Each database entry contains complete information on the name of the pathophysiological condition, associated miRNA(s), experimental sample type(s), regulation pattern (up/down) of the miRNA, pathway association(s), targeted member(s) of the dysregulated pathway(s), and a brief description. In addition, miRwayDB provides miRNA, gene, and pathway scores to evaluate the role of miRNA-regulated pathways in various pathophysiological conditions. The database can also be used for other biomedical approaches, such as validation of computational analyses, integrated analyses, and evaluation of computational model predictions. It also offers a submission page for novel data from recently published studies. We believe that miRwayDB will be a useful tool for the miRNA research community. Database URL: http://www.mirway.iitkgp.ac.in PMID:29688364
Code of Federal Regulations, 2010 CFR
2010-04-01
... you use § 230.430A of this chapter to omit pricing information and the prospectus is used before you... Statement and Outside Front Cover Page of Prospectus. The registrant must furnish the following information... page. If the following information applies to your offering, disclose it on the outside cover page of...
Martin-Facklam, Meret; Kostrzewa, Michael; Martin, Peter; Haefeli, Walter E
2004-01-01
The generally poor quality of health information on the world wide web (WWW) has caused preventable adverse outcomes. Quality management of information on the internet is therefore critical given its widespread use. In order to develop strategies for the safe use of drugs, we scored the general and content quality of pages about sildenafil and performed an intervention to improve their quality. The internet was searched with Yahoo and AltaVista for pages about sildenafil, and 303 pages were included. For assessment of content quality, a score based on the accuracy and completeness of essential drug information was assigned. For assessment of general quality, four criteria were evaluated and their association with high content quality was determined by multivariate logistic regression analysis. The pages were randomly allocated to either a control or an intervention group. Evaluation took place before, as well as 7 and 22 weeks after, an intervention which consisted of two letters with individualized feedback on the respective page, sent electronically to the address mentioned on the page. Providing references to scientific publications or prescribing information was significantly associated with high content quality (odds ratio: 8.2, 95% CI 3.2, 20.5). The intervention had no influence on general or content quality. To prevent adverse outcomes caused by misinformation on the WWW, individualized feedback to the address mentioned on the page was ineffective. Currently, the most straightforward approach is probably to inform lay persons about indicators of high information quality, i.e. the provision of references.
Manchaiah, Vinaya; Ratinaud, Pierre; Andersson, Gerhard
2018-05-08
When people with health conditions begin to manage their health issues, one important question is what exactly they do with the information they have obtained through various sources (eg, news media, social media, health professionals, friends, and family). The information they gather helps form their opinions and, to some degree, influences their attitudes toward managing their condition. This study aimed to understand how tinnitus is represented in the US newspaper media and in Facebook pages (ie, social media) using text pattern analysis. This was a cross-sectional study based upon secondary analyses of publicly available data. The 2 datasets (ie, text corpuses) analyzed in this study were generated from US newspaper media during 1980-2017 (downloaded from the database US Major Dailies by ProQuest) and Facebook pages during 2010-2016. The text corpuses were analyzed with the Iramuteq software, using cluster analysis and chi-square tests. The newspaper dataset had 432 articles. The cluster analysis resulted in 5 clusters, which were named as follows: (1) brain stimulation (26.2%), (2) symptoms (13.5%), (3) coping (19.8%), (4) social support (24.2%), and (5) treatment innovation (16.4%). A time series analysis of the clusters indicated a change in the pattern of information presented in newspaper media during 1980-2017 (eg, more emphasis on cluster 5, focusing on treatment innovations). The Facebook dataset had 1569 texts. The cluster analysis resulted in 7 clusters, which were named as follows: (1) diagnosis (21.9%), (2) cause (4.1%), (3) research and development (13.6%), (4) social support (18.8%), (5) challenges (11.1%), (6) symptoms (21.4%), and (7) coping (9.2%). A time series analysis of the clusters indicated no change in the information presented in Facebook pages on tinnitus during 2011-2016.
The study highlights the specific aspects of tinnitus that the US newspaper media and Facebook pages focus on, as well as how these aspects change over time. These findings can help health care providers better understand the presuppositions that tinnitus patients may have. More importantly, the findings can help public health and health communication experts tailor health information about tinnitus to promote self-management, as well as assist in appropriate treatment choices for those living with tinnitus. ©Vinaya Manchaiah, Pierre Ratinaud, Gerhard Andersson. Originally published in the Interactive Journal of Medical Research (http://www.i-jmr.org/), 08.05.2018.
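The per-term chi-square test used in cluster analyses of this kind can be illustrated on a 2x2 contingency table (word inside vs. outside a cluster). Below is a minimal sketch in Python with invented counts; the study itself used the Iramuteq software, so this is only an illustration of the underlying statistic, not the authors' code:

```python
# Chi-square test of association between one word and one text cluster,
# the statistic reported per cluster term in lexical analyses like Iramuteq's.
# All counts here are hypothetical, invented for illustration.

def chi_square_2x2(a, b, c, d):
    """2x2 contingency table:
    a: target word inside the cluster,  b: target word outside it,
    c: other tokens inside the cluster, d: other tokens outside it."""
    n = a + b + c + d
    chi2 = 0.0
    for obs, row, col in ((a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)):
        exp = row * col / n          # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2

# Hypothetical counts: a word appears 40 times in one cluster and 10 times
# elsewhere; the clusters hold 960 and 990 other tokens respectively.
print(round(chi_square_2x2(40, 10, 960, 990), 2))  # prints 18.46
```

A large statistic (here 18.46, well above the 3.84 critical value at p=0.05 for one degree of freedom) marks the word as characteristic of that cluster.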
An integrated database-pipeline system for studying single nucleotide polymorphisms and diseases.
Yang, Jin Ok; Hwang, Sohyun; Oh, Jeongsu; Bhak, Jong; Sohn, Tae-Kwon
2008-12-12
Studies on the relationship between disease and genetic variations such as single nucleotide polymorphisms (SNPs) are important. Genetic variations can cause disease by influencing important biological regulatory processes. Despite the need to analyze SNP-disease correlations, most existing databases provide information only on functional variants at specific locations on the genome, or deal with only a few genes associated with disease. There is no combined resource that broadly supports gene-, SNP-, and disease-related information and captures the relationships among such data. Therefore, we developed an integrated database-pipeline system for studying SNPs and diseases. To implement the pipeline system for the integrated database, we first unified complicated and redundant disease terms and gene names using the Unified Medical Language System (UMLS) for classification and noun modification, and the HUGO Gene Nomenclature Committee (HGNC) and NCBI gene databases. Next, we collected and integrated representative databases for three categories of information. For genes and proteins, we examined the NCBI mRNA, UniProt, UCSC Table Track and MitoDat databases. For genetic variants, we used the dbSNP, JSNP, ALFRED, and HGVbase databases. For disease, we employed the OMIM, GAD, and HGMD databases. The database-pipeline system provides a disease thesaurus, including genes and SNPs associated with disease. The search results for these categories are available on the web page http://diseasome.kobic.re.kr/, and a genome browser is also available to highlight findings, as well as to permit the convenient review of potentially deleterious SNPs among genes strongly associated with specific diseases and clinical phenotypes. Our system is designed to capture the relationships between SNPs associated with disease and disease-causing genes.
The integrated database-pipeline provides a list of candidate genes and SNP markers for evaluation in both epidemiological and molecular-biological approaches to disease-gene association studies. Furthermore, researchers can then semi-automatically select the data set for association studies while considering the relationships between genetic variation and disease. The database can also make disease-association studies more economical and facilitate an understanding of the processes that cause disease. Currently, the database contains 14,674 SNP records and 109,715 gene records associated with human diseases, and it is updated at regular intervals.
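The term-unification and linking steps this record describes (normalizing redundant disease names and gene symbols to canonical identifiers before joining gene, SNP, and disease records) can be sketched roughly as follows. The lookup tables, records, and identifiers below are hypothetical stand-ins for the UMLS/HGNC-based normalization the authors used, not their actual pipeline:

```python
# Sketch of a unification-and-join step: map synonyms to canonical names
# via lookup tables, then link SNPs to diseases through the gene symbol.
# All mappings and records are hypothetical examples.

GENE_SYNONYMS = {"p53": "TP53", "TP53": "TP53", "CFTR": "CFTR"}
DISEASE_SYNONYMS = {"cystic fibrosis": "CF", "mucoviscidosis": "CF"}

snp_records = [{"rsid": "rs113993960", "gene": "CFTR"}]
disease_records = [{"gene": "CFTR", "disease": "mucoviscidosis"}]

def unify_and_join(snps, diseases):
    # Normalize disease names to canonical terms, grouped by gene symbol.
    by_gene = {}
    for rec in diseases:
        gene = GENE_SYNONYMS.get(rec["gene"], rec["gene"])
        by_gene.setdefault(gene, set()).add(
            DISEASE_SYNONYMS.get(rec["disease"], rec["disease"]))
    # Join each SNP to the diseases of its (normalized) gene.
    return [
        {"rsid": s["rsid"], "gene": g, "diseases": sorted(by_gene.get(g, []))}
        for s in snps
        for g in [GENE_SYNONYMS.get(s["gene"], s["gene"])]
    ]

print(unify_and_join(snp_records, disease_records))
# [{'rsid': 'rs113993960', 'gene': 'CFTR', 'diseases': ['CF']}]
```

The point of the normalization pass is that "cystic fibrosis" and "mucoviscidosis" collapse to one canonical term before the join, so the SNP-disease link is not fragmented across synonyms.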
Structured Forms Reference Set of Binary Images II (SFRS2)
National Institute of Standards and Technology Data Gateway
NIST Structured Forms Reference Set of Binary Images II (SFRS2) (Web, free access) The second NIST database of structured forms (Special Database 6) consists of 5,595 pages of binary, black-and-white images of synthesized documents containing hand-print. The documents in this database are 12 different tax forms from the IRS 1040 Package X for the year 1988.
Brinkmann, Ulrich; Vasmatzis, George; Lee, Byungkook; Yerushalmi, Noga; Essand, Magnus; Pastan, Ira
1998-01-01
We have used a combination of computerized database mining and experimental expression analyses to identify a gene that is preferentially expressed in normal male and female reproductive tissues, prostate, testis, fallopian tube, uterus, and placenta, as well as in prostate cancer, testicular cancer, and uterine cancer. This gene is located on the human X chromosome, and it is homologous to a family of genes encoding GAGE-like proteins. GAGE proteins are expressed in a variety of tumors and in testis. We designate the novel gene PAGE-1 because the expression pattern in the Cancer Genome Anatomy Project libraries indicates that it is predominantly expressed in normal and neoplastic prostate. Further database analysis indicates the presence of other genes with high homology to PAGE-1, which were found in cDNA libraries derived from testis, pooled libraries (with testis), and in a germ cell tumor library. The expression of PAGE-1 in normal and malignant prostate, testicular, and uterine tissues makes it a possible target for the diagnosis and possibly for the vaccine-based therapy of neoplasms of prostate, testis, and uterus. PMID:9724777
Atmospheric Science Data Center
2018-06-15
Cannabis and Kratom online information in Thailand: Facebook trends 2015-2016.
Thaikla, Kanittha; Pinyopornpanish, Kanokporn; Jiraporncharoen, Wichuda; Angkurawaranon, Chaisiri
2018-05-09
Our study aims to evaluate trends in online information about cannabis and kratom on Facebook in Thailand, where there is current discussion about legalizing these drugs. Between April and November 2015, reviewers searched for cannabis and kratom Facebook pages in the Thai language via common search engines. Content analysis was performed and the contents of each page were categorized by the tone of the post (positive, negative, or neutral). A one-year follow-up search was then conducted to compare the contents. Twelve Facebook pages each were initially identified for cannabis and for kratom. Follower numbers were higher for the cannabis pages. Kratom pages were less active but had been open longer. Posts with positive tones and neutral tones were found for both drugs, but none had negative tones. Other drugs were mentioned on the cannabis pages, but they differed from those mentioned on the kratom pages. Issues regarding drug legalization were found on the cannabis pages but not on the kratom pages during the search period. One year later, the tone of the posts was in the same direction, but page activity had increased. The information currently available on the sampled Facebook pages was positive towards the use of cannabis and kratom. No information about harm from these drugs was found through our search.
48 CFR 19.703 - Eligibility requirements for participating in the program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... or Small Business Administration certification status of the ANC or Indian tribe. (ii) Where one or... accessing the Central Contractor Registration (CCR) database or by contacting the SBA. Options for contacting the SBA include— (i) HUBZone small business database search application Web page at http://dsbs...
48 CFR 19.703 - Eligibility requirements for participating in the program.
Code of Federal Regulations, 2011 CFR
2011-10-01
... or Small Business Administration certification status of the ANC or Indian tribe. (ii) Where one or... accessing the Central Contractor Registration (CCR) database or by contacting the SBA. Options for contacting the SBA include— (i) HUBZone small business database search application Web page at http://dsbs...
Maccari, Giuseppe; Robinson, James; Ballingall, Keith; Guethlein, Lisbeth A; Grimholt, Unni; Kaufman, Jim; Ho, Chak-Sum; de Groot, Natasja G; Flicek, Paul; Bontrop, Ronald E; Hammond, John A; Marsh, Steven G E
2017-01-04
The IPD-MHC Database project (http://www.ebi.ac.uk/ipd/mhc/) collects and expertly curates sequences of the major histocompatibility complex from non-human species and provides the infrastructure and tools to enable accurate analysis. Since the first release of the database in 2003, IPD-MHC has grown and currently hosts a number of specific sections, with more than 7000 alleles from 70 species, including non-human primates, canines, felines, equids, ovids, suids, bovids, salmonids and murids. These sequences are expertly curated and made publicly available through an open-access website. The IPD-MHC Database is a key resource in its field, and this has led to an average of 1500 unique visitors and more than 5000 viewed pages per month. As the database has grown in size and complexity, it has created a number of challenges in maintaining and organizing information, particularly the need to standardize nomenclature and taxonomic classification while incorporating new allele submissions. Here, we describe the latest database release, IPD-MHC 2.0, and discuss planned developments. This release incorporates sequence updates and new tools that enhance database queries and improve the submission procedure by utilizing common tools that are able to handle the varied requirements of each MHC group. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Database Reports Over the Internet
NASA Technical Reports Server (NTRS)
Smith, Dean Lance
2002-01-01
Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data is stored in a DBMS (Database Management System). The client asks for the information from the database using an HTML (Hypertext Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page; others are reported to the server. Access to the databases was restricted since the data are being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (Personal Computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
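The report flow this record describes (a parameterized SQL query against the DBMS, whose result fills a report template, with errors returned to the browser) can be sketched as follows. This is a minimal Python sketch rather than the original Java servlets; SQLite stands in for Microsoft Access, and the table, fields, and report layout are hypothetical:

```python
import sqlite3

# Build a small stand-in database (the original work queried Microsoft Access).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_reports (test_id TEXT, result TEXT, value REAL)")
conn.executemany("INSERT INTO test_reports VALUES (?, ?, ?)",
                 [("T-001", "PASS", 3.2), ("T-002", "FAIL", 9.7)])

def render_report(conn, test_id):
    # Parameterized SQL query, as the servlets issued against the DBMS ...
    row = conn.execute(
        "SELECT test_id, result, value FROM test_reports WHERE test_id = ?",
        (test_id,)).fetchone()
    if row is None:
        # ... with the error path reported back to the requesting browser.
        return "ERROR: no such test"
    # ... and the result fields fill a fixed report template
    # (a PDF form in the original system).
    return "Test {0}: {1} (measured {2})".format(*row)

print(render_report(conn, "T-001"))  # prints: Test T-001: PASS (measured 3.2)
```

The parameterized `?` placeholder mirrors the integrity checks mentioned in the abstract: the query never interpolates raw user input into the SQL string.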
Work-Facilitating Information Visualization Techniques for Complex Wastewater Systems
NASA Astrophysics Data System (ADS)
Ebert, Achim; Einsfeld, Katja
The design and the operation of urban drainage systems and wastewater treatment plants (WWTP) have become increasingly complex. This complexity is due to increased requirements concerning process technology and technical, environmental, economic, and occupational safety aspects. The plant operator has access not only to a few timeworn files and measured parameters but also to numerous on-line and off-line parameters that characterize the current state of the plant in detail. Moreover, expert databases and specific support pages of plant manufacturers are accessible through the World Wide Web. Thus, the operator is overwhelmed with predominantly unstructured data.
Looking for Cancer Clues in Publicly Accessible Databases
Medjahed, Djamel; Lemkin, Peter F.; Smythers, Gary W.; Munroe, David J.
2004-01-01
What started out as a mere attempt to tentatively identify proteins in experimental cancer-related 2D-PAGE maps developed into VIRTUAL2D, a web-accessible repository for theoretical pI/MW charts for 92 organisms. Using publicly available expression data, we developed a collection of tissue-specific plots based on differential gene expression between normal and diseased states. We use this comparative cancer proteomics knowledge base, known as the tissue molecular anatomy project (TMAP), to uncover threads of cancer markers common to several types of cancer and to relate this information to established biological pathways. PMID:18629065
Training Presentation for NASA Civil Helicopter Safety Website
NASA Technical Reports Server (NTRS)
Iseler, Laura
2002-01-01
NASA civil helicopter safety News & Updates include the following: Mar. 2002. The Air Medical Operations Survey has been completed! Check it out! Also accessible via the Mission pages under Air Medical Mission. Air Medical and Law Enforcement Mission pages have been added. They are accessible via the Mission pages. The Public Use, Personal, Offshore, Law Enforcement, External Load, Business and Gyro accident pages (accessible via the Mission page) have been updated. Feb. 2002. A Words of Wisdom section has been added. You can access it by clicking the Library button. A link to a Corporate Accident Response Plan has been added to the Accident page. The AMs, Aerial Application and Instruction accident pages (accessible via the Mission page) have been updated. Jan. 2002. A new searchable safety article database has been added. You can access it by clicking the Library button. The 2001 accident summaries have been updated and the statistics have been compiled - check it out by clicking the accident tab to the left. Dec. 2001. Please read the FAA Administrator's memo regarding the latest FBI warning. See the FAA column - Fall 2001. Read it now!
usSEABED: Pacific coast (California, Oregon, Washington) offshore surficial-sediment data release
Reid, Jane A.; Reid, Jamey M.; Jenkins, Chris J.; Zimmermann, Mark; Williams, S. Jeffress; Field, Michael E.
2006-01-01
Over the past 50 years there has been an explosion in scientific interest, research effort, and information gathered on the geologic sedimentary character of the continental margin of the United States. Data and information from thousands of publications have greatly increased our scientific understanding of the geologic origins of the margin surface, but rarely have those data been combined and integrated. This publication is the first release of the Pacific coast data from the usSEABED database. The report contains a compilation of published and unpublished sediment texture and other geologic data about the sea floor from diverse sources. usSEABED is an innovative database system developed to unify assorted data, with the data processed by the dbSEABED system. Examples of maps displaying attributes such as grain size and sediment color are included. This database contains information that is a scientific foundation for the U.S. Geological Survey (USGS) Sea floor Mapping and Benthic Habitats project and the Marine Aggregate Resources and Processes assessment project, and will be useful to the marine science community for other studies of the Pacific coast continental margin. The publication is divided into 10 sections: Home, Introduction, Content, usSEABED (data), dbSEABED (processing), Data Catalog, References, Contacts, Acknowledgments, and Frequently Asked Questions. Use the navigation bar on the left to navigate to specific sections of this report. Underlined topics throughout the publication are links to more information. Links to specific and detailed information on processing, and links to pages outside this report, will open in a new browser window.
Imaged document information location and extraction using an optical correlator
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-12-01
Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources and to provide rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited; e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.
Hydrologic data for an investigation of the Smith River Watershed through water year 2010
Nilges, Hannah L.; Caldwell, Rodney R.
2012-01-01
Hydrologic data collected through water year 2010 and compiled as part of a U.S. Geological Survey study of the water resources of the Smith River watershed in west-central Montana are presented in this report. Tabulated data presented in this report were collected at 173 wells and 65 surface-water sites. Figures include location maps of data-collection sites and hydrographs of streamflow. Digital data files used to construct the figures, hydrographs, and data tables are included in the report. Data collected by the USGS are also stored in the USGS National Water Information System database and are available through the USGS National Water Information System Water Data for Montana Web page at http://waterdata.usgs.gov/mt/nwis/.
Analysis and Development of a Web-Enabled Planning and Scheduling Database Application
2013-09-01
Establishes an entity-relationship diagram for the desired process, constructs an operable database using MySQL, and provides a web-enabled interface for the population of the database. Keywords: development, design, process reengineering, MySQL, structured query language (SQL), myPHPadmin.
G6PDdb, an integrated database of glucose-6-phosphate dehydrogenase (G6PD) mutations.
Kwok, Colin J; Martin, Andrew C R; Au, Shannon W N; Lam, Veronica M S
2002-03-01
G6PDdb (http://www.rubic.rdg.ac.uk/g6pd/ or http://www.bioinf.org.uk/g6pd/) is a newly created web-accessible locus-specific mutation database for the human glucose-6-phosphate dehydrogenase (G6PD) gene. The relational database integrates up-to-date mutational and structural data from various databanks (GenBank, Protein Data Bank, etc.) with biochemically characterized variants and their associated phenotypes obtained from the published literature and the Favism website. An automated analysis of the mutations likely to have a significant impact on the structure of the protein has been performed using a recently developed procedure. The database may be queried online, and the full results of the analysis of the structural impact of mutations are available. The web page provides a form for submitting additional mutation data and is linked to resources such as the Favism website, OMIM, HGMD, HGVBASE, and the PDB. This database provides insights into the molecular aspects and clinical significance of G6PD deficiency for researchers and clinicians, and the web page functions as a knowledge base relevant to the understanding of G6PD deficiency and its management. Copyright 2002 Wiley-Liss, Inc.
Making your database available through Wikipedia: the pros and cons
Finn, Robert D.; Gardner, Paul P.; Bateman, Alex
2012-01-01
Wikipedia, the online encyclopedia, is the most famous wiki in use today. It contains over 3.7 million pages of content; with many pages written on scientific subject matters that include peer-reviewed citations, yet are written in an accessible manner and generally reflect the consensus opinion of the community. In this, the 19th Annual Database Issue of Nucleic Acids Research, there are 11 articles that describe the use of a wiki in relation to a biological database. In this commentary, we discuss how biological databases can be integrated with Wikipedia, thereby utilising the pre-existing infrastructure, tools and above all, large community of authors (or Wikipedians). The limitations to the content that can be included in Wikipedia are highlighted, with examples drawn from articles found in this issue and other wiki-based resources, indicating why other wiki solutions are necessary. We discuss the merits of using open wikis, like Wikipedia, versus other models, with particular reference to potential vandalism. Finally, we raise the question about the future role of dedicated database biocurators in context of the thousands of crowdsourced, community annotations that are now being stored in wikis. PMID:22144683
Program for Critical Technologies in Breast Oncology
1999-07-01
…the tissues, and in an ethical manner that respects the patients' rights. The Program for Critical Technologies in Breast Oncology helps address all of these needs and brings technologies closer to clinical utility. Keywords: diagnosis, database.
A Center for Excellence in Mathematical Sciences Final Progress Report
1997-02-18
together with a sampling rule of the form of (5): ẋ(t) = G(x(t), t) + B(t) (4), where G(·, t) is the continualized version of g(·, k) for t ∈ [kΔ, (k+1)Δ), k ∈ ℕ (5). Report contents include a contribution by Nobuki Takayama (25 pages, 93-97); "Databases PF Activities and Modeling", Eugen Ardeleanu and Adriana Ardeleanu (6 pages, 93-98); and "Methodological Issues in…
ERIC Educational Resources Information Center
Kammerer, Yvonne; Kalbfell, Eva; Gerjets, Peter
2016-01-01
In two experiments we systematically examined whether contradictions between two web pages--of which one was commercially biased as stated in an "about us" section--stimulated university students' consideration of source information both during and after reading. In Experiment 1 "about us" information of the web pages was…
Signaling gateway molecule pages—a data model perspective
Dinasarapu, Ashok Reddy; Saunders, Brian; Ozerlat, Iley; Azam, Kenan; Subramaniam, Shankar
2011-01-01
Summary: The Signaling Gateway Molecule Pages (SGMP) database provides highly structured data on proteins which exist in different functional states participating in signal transduction pathways. A molecule page starts with a state of a native protein, without any modification and/or interactions. New states are formed with every post-translational modification or interaction with one or more proteins, small molecules or class molecules and with each change in cellular location. State transitions are caused by a combination of one or more modifications, interactions and translocations which then might be associated with one or more biological processes. In a characterized biological state, a molecule can function as one of several entities or their combinations, including channel, receptor, enzyme, transcription factor and transporter. We have also exported SGMP data to the Biological Pathway Exchange (BioPAX) and Systems Biology Markup Language (SBML) as well as in our custom XML. Availability: SGMP is available at www.signaling-gateway.org/molecule. Contact: shankar@ucsd.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21505029
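The state-and-transition data model described in this abstract can be sketched in a few lines of code. This is a minimal illustration only: the class names, field names, and the EGFR/GRB2 example below are invented for the sketch and are not SGMP's actual schema or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MoleculeState:
    """One functional state of a protein: its post-translational
    modifications, interaction partners, and cellular location."""
    protein: str
    modifications: tuple = ()
    partners: tuple = ()
    location: str = "cytosol"

@dataclass
class StateTransition:
    """A transition between two states, caused by some combination of
    modifications, interactions and translocations, and optionally
    associated with one or more biological processes."""
    source: MoleculeState
    target: MoleculeState
    causes: tuple
    processes: tuple = ()

# A molecule page starts with the native, unmodified state; each
# modification/interaction/translocation yields a new state.
native = MoleculeState("EGFR")
active = MoleculeState("EGFR",
                       modifications=("phospho-Y1068",),
                       partners=("GRB2",),
                       location="plasma membrane")
t = StateTransition(native, active,
                    causes=("phosphorylation", "interaction"),
                    processes=("signal transduction",))
```

A model in this shape serializes naturally to custom XML or to exchange formats such as BioPAX and SBML, which is essentially what the SGMP export described above does.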
Heads Up: Concussion in Youth Sports
MedlinePlus Videos and Cool Tools
More information, including technical requirements, is available on the course's overview page: https://www.cdc.gov/headsup/
The Library as Information Provider: The Home Page.
ERIC Educational Resources Information Center
Clyde, Laurel A.
1996-01-01
Discusses ways in which libraries are using the World Wide Web to provide information via a home page, based on information from a survey in Iceland as well as a larger study that conducted content analyses of home pages of public and school libraries in 13 countries. (Author/LRW)
An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information
NASA Astrophysics Data System (ADS)
Tsuruta, Masanobu; Masuyama, Shigeru
We propose a method for extracting the informative DOM node from a Web page as preprocessing for Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser, together with a learning set consisting of hundreds of Web pages annotated with their informative DOM nodes. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM so that it uses the information in the learning set more efficiently than the existing method trained on the same learning set. In our experiments, we evaluate combinations of an informative-DOM-node extraction method (either the proposed method or an existing one) with existing noise-elimination methods: Heur, which removes advertisements and link lists by heuristics, and CE, which removes DOM nodes that also appear in other Web pages of the Web site to which the target Web page belongs. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms the other combination methods.
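The idea of selecting a single informative DOM node can be illustrated with a toy scorer. Note this is not the paper's LM method: it uses only text statistics (the mass of non-link text in each node) where LM uses browser layout data and an annotated learning set, and the sample page and scoring formula are invented for the example.

```python
import xml.etree.ElementTree as ET

PAGE = """<html><body>
<div class="nav"><a href="/">Home</a><a href="/news">News archive</a>
<a href="/about">About this site and its maintainers</a></div>
<div class="content">The proposed method extracts the DOM node that
carries the main content of a page, using features of each node rather
than site-wide crawling, so a single page suffices as input.</div>
</body></html>"""

def text_len(el):
    # total character count of all text under an element
    return len("".join(el.itertext()))

def link_len(el):
    # text inside <a> descendants counts as navigation, not content
    return sum(text_len(a) for a in el.iter("a"))

def informative_node(root):
    # favour nodes with much non-link text relative to their total text;
    # a crude textual stand-in for layout-based scoring
    best, best_score = root, -1.0
    for el in root.iter():
        total = text_len(el)
        if total:
            nonlink = total - link_len(el)
            score = nonlink * nonlink / total
            if score > best_score:
                best, best_score = el, score
    return best

node = informative_node(ET.fromstring(PAGE))
print(node.get("class"))  # picks the content block, not the link list
```

Even this crude score prefers the content block over the navigation list, which is the behaviour the noise-elimination baselines (Heur, CE) approximate with heuristics and cross-page comparison.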
SPOT 5/HRS: A Key Source for Navigation Database
2003-09-02
Presentation fragments: "…strategic objective. Nice data… What after?" Filière SPOT, Marc Bernard. "Producing from HRS: partnership with IGN (French…
Database of Sources of Environmental Releases of Dioxin-Like Compounds in the United States
NASA Astrophysics Data System (ADS)
Goodwillie, A. M.; Carbotte, S. M.; Arko, R. A.; Haxby, W. F.; Ryan, W. B.; Chayes, D. N.; Lehnert, K. A.; Shank, T. M.
2005-12-01
Hosted at Lamont by the marine geoscience Data Management group, mgDMS, the NSF-funded Ridge 2000 electronic database, http://www.marine-geo.org/ridge2000/, is a key component of the Ridge 2000 multi-disciplinary program. The database covers each of the three Ridge 2000 Integrated Study Sites: Endeavour Segment, Lau Basin, and 8-11N Segment. It promotes the sharing of information to the broader community, facilitates integration of the suite of information collected at each study site, and enables comparisons between sites. The Ridge 2000 data system provides easy web access to a relational database that is built around a catalogue of cruise metadata. Any web browser can be used to perform a versatile text-based search which returns basic cruise and submersible dive information, sample and data inventories, navigation, and other relevant metadata such as shipboard personnel and links to NSF program awards. In addition, non-proprietary data files, images, and derived products which are hosted locally or in national repositories, as well as science and technical reports, can be freely downloaded. On the Ridge 2000 database page, our Data Link allows users to search the database using a broad range of parameters including data type, cruise ID, chief scientist, and geographical location. The first Ridge 2000 field programs sailed in 2004 and, in addition to numerous data sets collected prior to the Ridge 2000 program, the database currently contains information on fifteen Ridge 2000-funded cruises and almost sixty Alvin dives. Track lines can be viewed using a recently-implemented Web Map Service button labelled Map View. The Ridge 2000 database is fully integrated with databases hosted by the mgDMS group for MARGINS and the Antarctic multibeam and seismic reflection data initiatives. Links are provided to partner databases including PetDB, SIOExplorer, and the ODP Janus system.
Inter-operability with existing and new partner repositories continues to be strengthened. One major effort involves the gradual unification of the metadata across these partner databases. Standardised electronic metadata forms that can be filled in at sea are available from our web site. Interactive map-based exploration and visualisation of the Ridge 2000 database is provided by GeoMapApp, a freely-available Java(tm) application being developed within the mgDMS group. GeoMapApp includes high-resolution bathymetric grids for the 8-11N EPR segment and allows customised maps and grids for any of the Ridge 2000 ISS to be created. Vent and instrument locations can be plotted and saved as images, and Alvin dive photos are also available.
Wang, Jianjian; Cao, Yuze; Zhang, Huixue; Wang, Tianfeng; Tian, Qinghua; Lu, Xiaoyu; Lu, Xiaoyan; Kong, Xiaotong; Liu, Zhaojun; Wang, Ning; Zhang, Shuai; Ma, Heping; Ning, Shangwei; Wang, Lihua
2017-01-01
The Nervous System Disease NcRNAome Atlas (NSDNA) (http://www.bio-bigdata.net/nsdna/) is a manually curated database that provides comprehensive experimentally supported associations about nervous system diseases (NSDs) and noncoding RNAs (ncRNAs). NSDs represent a common group of disorders, some of which are characterized by high morbidity and disabilities. The pathogenesis of NSDs at the molecular level remains poorly understood. ncRNAs are a large family of functionally important RNA molecules. Increasing evidence shows that diverse ncRNAs play a critical role in various NSDs. Mining and summarizing NSD–ncRNA association data can help researchers discover useful information. Hence, we developed an NSDNA database that documents 24 713 associations between 142 NSDs and 8593 ncRNAs in 11 species, curated from more than 1300 articles. This database provides a user-friendly interface for browsing and searching and allows for data downloading flexibility. In addition, NSDNA offers a submission page for researchers to submit novel NSD–ncRNA associations. It represents an extremely useful and valuable resource for researchers who seek to understand the functions and molecular mechanisms of ncRNA involved in NSDs. PMID:27899613
Thompson, Andrew E; Graydon, Sara L
2009-01-01
With continuing use of the Internet, rheumatologists are referring patients to various websites to gain information about medications and diseases. Our goal was to develop and evaluate a Medication Website Assessment Tool (MWAT) for use by health professionals, and to explore the overall quality of methotrexate information presented on common English-language websites. Identification of websites was performed using a search strategy on the search engine Google. The first 250 hits were screened. Inclusion criteria included those English-language websites from authoritative sources, trusted medical, physicians', and common health-related websites. Websites from pharmaceutical companies, online pharmacies, and where the purpose seemed to be primarily advertisements were also included. Product monographs or technical-based web pages and web pages where the information was clearly directed at patients with cancer were excluded. Two reviewers independently scored each included web page for completeness and accuracy, format, readability, reliability, and credibility. An overall ranking was provided for each methotrexate information page. Twenty-eight web pages were included in the analysis. The average score for completeness and accuracy was 15.48 ± 3.70 (maximum 24), with 10 out of 28 pages scoring 18 (75%) or higher. The average format score was 6.00 ± 1.46 (maximum 8). The Flesch-Kincaid Grade Level revealed an average grade level of 10.07 ± 1.84, with 5 out of 28 websites written at a reading level less than grade 8; however, no web page scored at a grade 5 to 6 level. An overall ranking was calculated identifying 8 web pages as appropriate sources of accurate and reliable methotrexate information. With the enormous amount of information available on the Internet, it is important to direct patients to web pages that are complete, accurate, readable, and credible sources of information. We identified web pages that may serve the interests of both rheumatologists and patients.
A web-based 3D geological information visualization system
NASA Astrophysics Data System (ADS)
Song, Renbo; Jiang, Nan
2013-03-01
Construction of 3D geological visualization systems has attracted growing attention in the GIS, computer modeling, simulation, and visualization fields. Such a system not only can effectively support geological interpretation and analysis work, but can also help improve professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, the borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was utilized to provide the capability of accessing the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Finally, borehole data acquired from a geological survey were used to test the system, and the test results show that the methods of this paper have practical application value.
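The data pipeline described in this abstract (extract spreadsheet rows, load them into a relational table, query them for the viewer) can be sketched as follows. The paper's actual stack is Excel, SQL Server, and JDBC from a Java applet; this sketch substitutes CSV input and an in-memory SQLite database, and the column names are illustrative, not the paper's schema.

```python
import csv
import io
import sqlite3

# Borehole records as they might be exported from the Excel sheets.
CSV_DATA = """borehole_id,depth_m,lithology
BH-01,12.5,clay
BH-01,30.0,sandstone
BH-02,8.2,silt
"""

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE borehole_layer (
    borehole_id TEXT, depth_m REAL, lithology TEXT)""")

# Step 1-2: extract rows from the spreadsheet export and load the table.
rows = list(csv.DictReader(io.StringIO(CSV_DATA)))
conn.executemany(
    "INSERT INTO borehole_layer VALUES (:borehole_id, :depth_m, :lithology)",
    rows)

# Step 3: the viewer queries one borehole's layers ordered by depth.
layers = conn.execute(
    "SELECT depth_m, lithology FROM borehole_layer "
    "WHERE borehole_id = ? ORDER BY depth_m", ("BH-01",)).fetchall()
print(layers)  # [(12.5, 'clay'), (30.0, 'sandstone')]
```

In the paper's architecture the equivalent query runs through JDBC on the web server, and the ordered layer list feeds the Java3D scene that renders the borehole column.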
MetaRNA-Seq: An Interactive Tool to Browse and Annotate Metadata from RNA-Seq Studies.
Kumar, Pankaj; Halama, Anna; Hayat, Shahina; Billing, Anja M; Gupta, Manish; Yousri, Noha A; Smith, Gregory M; Suhre, Karsten
2015-01-01
The number of RNA-Seq studies has grown in recent years. The design of RNA-Seq studies varies from very simple (e.g., two-condition case-control) to very complicated (e.g., time series involving multiple samples at each time point with separate drug treatments). Most of these publicly available RNA-Seq studies are deposited in NCBI databases, but their metadata are scattered throughout four different databases: Sequence Read Archive (SRA), Biosample, Bioprojects, and Gene Expression Omnibus (GEO). Although the NCBI web interface is able to provide all of the metadata information, it often requires significant effort to retrieve study- or project-level information by traversing through multiple hyperlinks and going to another page. Moreover, project- and study-level metadata lack manual or automatic curation by categories, such as disease type, time series, case-control, or replicate type, which are vital to comprehending any RNA-Seq study. Here we describe "MetaRNA-Seq," a new tool for interactively browsing, searching, and annotating RNA-Seq metadata with the capability of semiautomatic curation at the study level.
CELLPEDIA: a repository for human cell information for cell studies and differentiation analyses.
Hatano, Akiko; Chiba, Hirokazu; Moesa, Harry Amri; Taniguchi, Takeaki; Nagaie, Satoshi; Yamanegi, Koji; Takai-Igarashi, Takako; Tanaka, Hiroshi; Fujibuchi, Wataru
2011-01-01
CELLPEDIA is a repository database for current knowledge about human cells. It contains various types of information, such as cell morphologies, gene expression and literature references. The major role of CELLPEDIA is to provide a digital dictionary of human cells for the biomedical field, including support for the characterization of artificially generated cells in regenerative medicine. CELLPEDIA features (i) its own cell classification scheme, in which whole human cells are classified by their physical locations in addition to conventional taxonomy; and (ii) cell differentiation pathways compiled from biomedical textbooks and journal papers. Currently, human differentiated cells and stem cells are classified into 2260 and 66 cell taxonomy keys, respectively, from which 934 parent-child relationships reported in cell differentiation or transdifferentiation pathways are retrievable. As far as we know, this is the first attempt to develop a digital cell bank to function as a public resource for the accumulation of current knowledge about human cells. The CELLPEDIA homepage is freely accessible except for the data submission pages that require authentication (please send a password request to cell-info@cbrc.jp). Database URL: http://cellpedia.cbrc.jp/
ERIC Educational Resources Information Center
Gilstrap, Donald L.
1998-01-01
Explains how to build World Wide Web home pages using frames-based HTML so that librarians can manage Web-based information and improve their home pages. Provides descriptions and 15 examples for writing frames-HTML code, including advanced concepts and additional techniques for home-page design. (Author/LRW)
Chen, Chen; Liu, Xiaohui; Zheng, Weimin; Zhang, Lei; Yao, Jun; Yang, Pengyuan
2014-04-04
To completely annotate the human genome, the task of identifying and characterizing proteins that currently lack mass spectrometry (MS) evidence is inevitable and urgent. In this study, as the first effort to screen missing proteins at large scale, we developed an approach based on SDS-PAGE followed by liquid chromatography-multiple reaction monitoring (LC-MRM) for screening those missing proteins with only a single peptide hit in the previous liver proteome data set. Proteins extracted from normal human liver were separated by SDS-PAGE and digested in split gel slices, and the resulting digests were then subjected to LC-scheduled MRM analysis. The MRM assays were developed using synthesized crude peptides for the target peptides. In total, the expression of 57 target proteins was confirmed from 185 MRM assays in normal human liver tissues. Among the 57 verified one-hit wonders, 50 proteins belong to the minimally redundant set in the PeptideAtlas database, and 7 proteins, involved in various biological processes, previously had no MS-based information at all. We conclude that our SDS-PAGE-MRM workflow can be a powerful approach to screen missing or poorly characterized proteins in different samples and to provide their quantity if detected. The MRM raw data have been uploaded to ISB/SRM Atlas/PASSEL (PXD000648).
Fabry Disease Information Page. What research is being done? The … treat and prevent lipid storage diseases such as Fabry disease. Researchers hope to identify biomarkers--signs that may …
Stewart, P A; Nathan, N; Nyhof-Young, J
2007-01-01
Functional Neuroanatomy, an interactive electronic neuroanatomical atlas, was designed for first year medical students. Medical students have much to learn in a limited time; therefore a major goal in the atlas design was that it facilitate rapid, accurate information retrieval. To assess this feature, we designed a testing scenario in which students who had never taken a neuroanatomy course were asked to complete two equivalent tests, one using the electronic atlas and one using a comparable hard copy atlas, in a limited period of time. The tests were too long to be completed in the time allotted, so test scores were measures of how quickly correct information could be retrieved from each source. Statistical analysis of the data showed that the tests were of equal difficulty and that accurate information retrieval was significantly faster using the electronic atlas when compared with the hard copy atlas (P < 0.0001). Post-test focus groups (n = 4) allowed us to infer that the following design features contributed to rapid information access: the number of structures in the database was limited to those that are relevant to a practicing physician; all of the program modules were presented in both text and image form on the index screen, which doubled as a site map; pages were layered electronically such that information was hidden until requested; structures available on each page were listed alphabetically and could be accessed by clicking on their names; and an illustrated glossary was provided and equipped with a search engine.
Braun, Bremen L.; Schott, David A.; Portwood, II, John L.; Schaeffer, Mary L.; Harper, Lisa C.; Gardiner, Jack M.; Cannon, Ethalinda K.; Andorf, Carson M.
2017-01-01
The Maize Genetics and Genomics Database (MaizeGDB) team prepared a survey to identify breeders’ needs for visualizing pedigrees, diversity data and haplotypes in order to prioritize tool development and curation efforts at MaizeGDB. The survey was distributed to the maize research community on behalf of the Maize Genetics Executive Committee in Summer 2015. The survey garnered 48 responses from maize researchers, of which more than half were self-identified as breeders. The survey showed that the maize researchers considered their top priorities for visualization as: (i) displaying single nucleotide polymorphisms in a given region for a given list of lines, (ii) showing haplotypes for a given list of lines and (iii) presenting pedigree relationships visually. The survey also asked which populations would be most useful to display. The following two populations were on top of the list: (i) 3000 publicly available maize inbred lines used in Romay et al. (Comprehensive genotyping of the USA national maize inbred seed bank. Genome Biol, 2013;14:R55) and (ii) maize lines with expired Plant Variety Protection Act (ex-PVP) certificates. Driven by this strong stakeholder input, MaizeGDB staff are currently working in four areas to improve its interface and web-based tools: (i) presenting immediate progenies of currently available stocks at the MaizeGDB Stock pages, (ii) displaying the most recent ex-PVP lines described in the Germplasm Resources Information Network (GRIN) on the MaizeGDB Stock pages, (iii) developing network views of pedigree relationships and (iv) visualizing genotypes from SNP-based diversity datasets. These survey results can help other biological databases to direct their efforts according to user preferences as they serve similar types of data sets for their communities. Database URL: https://www.maizegdb.org PMID:28605768
Exploiting Captions for Access to Multimedia Databases
1991-04-01
This report was prepared in conjunction with research funded by the Naval Postgraduate School under Direct Funding.
Going beyond Google for Faster and Smarter Web Searching
ERIC Educational Resources Information Center
Vine, Rita
2004-01-01
With more than 4 billion web pages in its database, Google is suitable for many different kinds of searches. When you know what you are looking for, Google can be a pretty good first choice, as long as you want to search a word pattern that can be expected to appear on any results pages. The problem starts when you don't know exactly what you're…
Construction of Database for Pulsating Variable Stars
NASA Astrophysics Data System (ADS)
Chen, B. Q.; Yang, M.; Jiang, B. W.
2011-07-01
A database of pulsating variable stars has been constructed for Chinese astronomers to study variable stars conveniently. The database currently includes about 230,000 variable stars in the Galactic bulge, LMC, and SMC observed by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux + Apache + MySQL + PHP. A web page is provided to search the photometric data and the light curves in the database by the right ascension and declination of the object. More data will be incorporated into the database.
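The web page's coordinate search can be sketched as a simple SQL box query against such a database. The table layout, star identifiers, and coordinates below are invented for the illustration (the abstract does not describe the actual schema), and SQLite stands in for the MySQL component of the LAMP stack.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE star (
    name TEXT, ra_deg REAL, dec_deg REAL, survey TEXT)""")
conn.executemany("INSERT INTO star VALUES (?, ?, ?, ?)", [
    ("OGLE-LMC-CEP-0001", 70.21, -69.52, "OGLE"),
    ("MACHO-80.6354.21",  81.10, -69.88, "MACHO"),
    ("OGLE-SMC-CEP-0100", 14.77, -72.64, "OGLE"),
])

def box_search(ra, dec, radius_deg):
    """Stars inside a square box around (ra, dec), in degrees.
    A production service would also handle RA wrap-around at 0/360
    and use a proper cone search rather than a box."""
    return conn.execute(
        "SELECT name FROM star WHERE ra_deg BETWEEN ? AND ? "
        "AND dec_deg BETWEEN ? AND ? ORDER BY name",
        (ra - radius_deg, ra + radius_deg,
         dec - radius_deg, dec + radius_deg)).fetchall()

# A query near the LMC returns both LMC stars but not the SMC star.
print(box_search(75.0, -69.7, 6.5))
```

The PHP layer of the LAMP stack would wrap a query like this and render the matching photometric data and light curves on the results page.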
Allen, J W; Finch, R J; Coleman, M G; Nathanson, L K; O'Rourke, N A; Fielding, G A
2002-01-01
This study was undertaken to determine the quality of information on the Internet regarding laparoscopy. Four popular World Wide Web search engines were used with the key word "laparoscopy." Advertisements, patient- or physician-directed information, and controversial material were noted. A total of 14,030 Web pages were found, but only 104 were unique Web sites. The majority of the sites were duplicate pages, subpages within a main Web page, or dead links. Twenty-eight of the 104 pages had a medical product for sale, 26 were patient-directed, 23 were written by a physician or group of physicians, and six represented corporations. The remaining 21 were "miscellaneous." The 46 pages containing educational material were critically reviewed. At least one of the senior authors found that 32 of the pages contained controversial or misleading statements. All of the three senior authors (LKN, NAO, GAF) independently agreed that 17 of the 46 pages contained controversial information. The World Wide Web is not a reliable source for patient or physician information about laparoscopy. Authenticating medical information on the World Wide Web is a difficult task, and no government or surgical society has taken the lead in regulating what is presented as fact on the World Wide Web.
NASA Astrophysics Data System (ADS)
Addison, J. A.
2015-12-01
The Past Global Changes (PAGES) project of IGBP and Future Earth supports research to understand the Earth's past environment to improve future climate predictions and inform strategies for sustainability. Within this framework, the PAGES 2k Network was established to provide a focus on the past 2000 years, a period that encompasses Medieval Climate Anomaly warming, Little Ice Age cooling, and recent anthropogenically-forced climate change. The results of these studies are used for testing earth system models, and for understanding decadal- to centennial-scale variability, which is needed for long-term planning. International coordination and cooperation among the nine regional Working Groups that make up the 2k Network has been critical to the success of PAGES 2k. The collaborative approach is moving toward scientific achievements across the regional groups, including: (i) the development of a community-driven open-access proxy climate database; (ii) integration of multi-resolution proxy records; (iii) development of multivariate climate reconstructions; and (iv) a leap forward in the spatial resolution of paleoclimate reconstructions. The last addition to the 2k Network, the Ocean2k Working Group, has further innovated the collaborative approach by: (1) creating an open, receptive environment to discuss ideas exclusively in the virtual space; (2) employing an array of real-time collaborative software tools to enable communication, group document writing, and data analysis; (3) consolidating executive leadership teams to oversee project development and manage grassroots-style volunteer pools; and (4) embracing the value-added role that international and interdisciplinary science can play in advancing paleoclimate hypotheses critical to understanding future change. Ongoing efforts for the PAGES 2k Network are focused on developing new standards for data quality control and archiving.
These tasks will provide the foundation for new and continuing "trans-regional" 2k projects which address paleoclimate science that transcend regional boundaries. The PAGES 2k Network encourages participation by all investigators interested in this community-wide project.
Frank, M S; Dreyer, K
2001-06-01
We describe a working software technology that enables educators to incorporate their expertise and teaching style into highly interactive and Socratic educational material for distribution on the world wide web. A graphically oriented interactive authoring system was developed to enable the computer novice to create and store within a database his or her domain expertise in the form of electronic knowledge. The authoring system supports and facilitates the input and integration of several types of content, including free-form, stylized text, miniature and full-sized images, audio, and interactive questions with immediate feedback. The system enables the choreography and sequencing of these entities for display within a web page as well as the sequencing of entire web pages within a case-based or thematic presentation. Images or segments of text can be hyperlinked with point-and-click to other entities such as adjunctive web pages, audio, or other images, cases, or electronic chapters. Miniature (thumbnail) images are automatically linked to their full-sized counterparts. The authoring system contains a graphically oriented word processor, an image editor, and capabilities to automatically invoke and use external image-editing software such as Photoshop. The system works in both local area network (LAN) and internet-centric environments. An internal metalanguage (invisible to the author but stored with the content) was invented to represent the choreographic directives that specify the interactive delivery of the content on the world wide web. A database schema was developed to objectify and store both this electronic knowledge and its associated choreographic metalanguage. 
A database engine was combined with page-rendering algorithms in order to retrieve content from the database and deliver it on the web in a Socratic style, assess the recipient's current fund of knowledge, and provide immediate feedback, thus simulating in-person interaction with a human expert. This technology enables the educator to choreograph a stylized, interactive delivery of his or her message using multimedia components assembled in virtually any order, spanning any number of web pages for a given case or theme. An educator can thus exercise precise influence on specific learning objectives, embody his or her personal teaching style within the content, and ultimately enhance its educational impact. The described technology amplifies the efforts of the educator and provides a more dynamic and enriching learning environment for web-based education.
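The retrieval-and-rendering idea above can be sketched in miniature. This is a hypothetical illustration, not the system's actual schema or metalanguage: content entities are stored with a sequence number standing in for the choreographic directives, then rendered into a web page in order, with thumbnails automatically linked to their full-sized counterparts.

```python
import sqlite3

# Hypothetical schema: each entity belongs to a page and carries a sequence
# number (a stand-in for the choreographic metalanguage) and a content type.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE entity (
    page_id INTEGER, seq INTEGER, kind TEXT, body TEXT)""")
conn.executemany("INSERT INTO entity VALUES (?, ?, ?, ?)", [
    (1, 1, "text", "A 55-year-old presents with right lower quadrant pain."),
    (1, 2, "image", "thumb_ct_001.jpg"),
    (1, 3, "question", "What is the most likely diagnosis?"),
])

def render_page(page_id):
    """Retrieve entities in choreographed order and emit simple HTML."""
    parts = []
    for kind, body in conn.execute(
            "SELECT kind, body FROM entity WHERE page_id=? ORDER BY seq",
            (page_id,)):
        if kind == "text":
            parts.append(f"<p>{body}</p>")
        elif kind == "image":
            # thumbnail automatically linked to its full-sized counterpart
            full = body.replace("thumb_", "full_")
            parts.append(f'<a href="{full}"><img src="{body}"></a>')
        elif kind == "question":
            parts.append(f"<form><p>{body}</p><input name='answer'></form>")
    return "\n".join(parts)

html = render_page(1)
print(html)
```

Sequencing entire pages within a case-based presentation would follow the same pattern one level up, with a table ordering pages instead of entities.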
[Health information on the Internet and trust marks as quality indicators: vaccines case study].
Mayer, Miguel Angel; Leis, Angela; Sanz, Ferran
2009-10-01
To find out the prevalence of quality trust marks present on websites and to analyse the quality of the websites displaying trust marks compared with those that do not display them, in order to put forward these trust marks as a quality indicator. Cross-sectional study. Internet. Websites on vaccines. Using "vacunas OR vaccines" as key words, the features of 40 web pages were analysed. These web pages were selected from the page results of two search engines, Google and Yahoo! Based on a total of 9 criteria, the average score of criteria fulfilled was 7 (95% CI 3.96-10.04) points for the web pages offered by Yahoo! and 7.3 (95% CI 3.86-10.74) for those offered by Google. Amongst the web pages offered by Yahoo!, three contained clearly inaccurate information, as did four of the pages offered by Google. Trust marks were displayed on 20% and 30% of the medical web pages, respectively, and web pages displaying them fulfilled significantly more quality criteria (P=0.033) than web pages without them. The search engines returned a wide variety of web pages, a large number of them with useless information. Although the websites analysed were generally of good quality, between 15% and 20% showed inaccurate information. Websites displaying trust marks were of higher quality than those that did not display one, and none of them were among those where inaccurate information was found.
Integrating Databases with Maps: The Delivery of Cultural Data through TimeMap.
ERIC Educational Resources Information Center
Johnson, Ian
TimeMap is a unique integration of database management, metadata and interactive maps, designed to contextualise and deliver cultural data through maps. TimeMap extends conventional maps with the time dimension, creating and animating maps "on-the-fly"; delivers them as a kiosk application or embedded in Web pages; links flexibly to…
40 CFR 60.2235 - In what form can I submit my reports?
Code of Federal Regulations, 2013 CFR
2013-07-01
... with ERT are subject to this requirement to be submitted electronically into EPA's WebFIRE database... tests required by this subpart to EPA's WebFIRE database by using the Compliance and Emissions Data...FIRE Administrator, MD C404-02, 4930 Old Page Rd., Durham, NC 27703. The same ERT file with the CBI...
Optimizing Crawler4j using MapReduce Programming Model
NASA Astrophysics Data System (ADS)
Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.
2017-06-01
The World Wide Web is a decentralized system consisting of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where web pages are indexed to form a corpus of information that users can query. Secondly, they are used for web archiving, where web pages are stored for later analysis phases. Thirdly, they can be used for web mining, where web pages are monitored for copyright purposes. The amount of information processed by a web crawler can be increased by using the capabilities of modern parallel processing technologies. In order to solve the problem of parallelism and the throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j coupled with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements in performance and throughput. Hence the proposed approach carves out a new methodology for optimizing web crawling by achieving significant performance gains.
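The map/reduce decomposition of crawling can be sketched in miniature. This is an illustrative Python analogue, not the paper's Hadoop/Crawler4j implementation: a tiny in-memory "web" stands in for fetched pages, parallel mappers emit (term, 1) pairs per page, and a reducer aggregates them into a term index.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Tiny in-memory "web" so the sketch runs without network access; the real
# system fetches pages with Crawler4j and distributes work via Hadoop.
WEB = {
    "a.html": "data analytics web crawler crawler",
    "b.html": "web pages index crawler",
    "c.html": "map reduce model web",
}

def map_phase(url):
    """Mapper: 'fetch' one page and emit (term, 1) pairs -- the parallel part."""
    return [(term, 1) for term in WEB[url].split()]

def reduce_phase(pairs):
    """Reducer: aggregate the pairs emitted by all mappers into one index."""
    totals = Counter()
    for term, n in pairs:
        totals[term] += n
    return totals

with ThreadPoolExecutor(max_workers=3) as pool:
    mapped = list(pool.map(map_phase, WEB))   # one mapper per page
index = reduce_phase(pair for result in mapped for pair in result)
print(index["web"], index["crawler"])
```

The shuffle step that Hadoop performs between map and reduce is collapsed here into the flattening generator; the structural point is that page processing is embarrassingly parallel while aggregation is a single fold.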
ERIC Educational Resources Information Center
Lindsay, Lorin
Designing a web home page involves many decisions that affect how the page will look, the kind of technology required to use the page, the links the page will provide, and the kinds of patrons who can use the page. The theme of information literacy needs to be built into every web page; users need to be taught the skills of sorting and applying…
GarlicESTdb: an online database and mining tool for garlic EST sequences.
Kim, Dae-Won; Jung, Tae-Sung; Nam, Seong-Hyeuk; Kwon, Hyuk-Ryul; Kim, Aeri; Chae, Sung-Hwa; Choi, Sang-Haeng; Kim, Dong-Wook; Kim, Ryong Nam; Park, Hong-Seog
2009-05-18
Allium sativum, commonly known as garlic, is a species in the onion genus (Allium), a large and diverse genus containing over 1,250 species. Its close relatives include chives, onion, leek and shallot. Garlic has been used throughout recorded history for culinary and medicinal purposes and for its health benefits. Interest in garlic is currently increasing because of its nutritional and pharmaceutical value, including effects on high blood pressure, cholesterol, atherosclerosis and cancer. Despite this, no comprehensive databases of garlic Expressed Sequence Tags (EST) are available for gene discovery and future genome annotation efforts. We therefore developed a new garlic database and applications to enable comprehensive analysis of garlic gene expression. GarlicESTdb is an integrated database and mining tool for large-scale garlic (Allium sativum) EST sequencing. A total of 21,595 ESTs collected from an in-house cDNA library were used to construct the database. The analysis pipeline is an automated system written in Java and consists of the following components: automatic preprocessing of EST reads, assembly of raw sequences, annotation of the assembled sequences, storage of the analyzed information in MySQL databases, and graphic display of all processed data. A web application was implemented with the latest J2EE (Java 2 Platform Enterprise Edition) software technology (JSP/EJB/Java Servlet) for browsing and querying the database and for creating dynamic web pages on the client side; for mapping annotated enzymes to KEGG pathways, the AJAX framework was also used in places. The online resources, such as putative annotation, single nucleotide polymorphism (SNP) and tandem repeat data sets, can be searched by text, explored on the website, searched using BLAST, and downloaded. To archive more significant BLAST results, a curation system was introduced with which biologists can easily edit best-hit annotation information for others to view.
The GarlicESTdb web application is freely available at http://garlicdb.kribb.re.kr. GarlicESTdb is the first incorporated online information database of EST sequences isolated from garlic that can be freely accessed and downloaded. It has many useful features for interactive mining of EST contigs and datasets from each library, including curation of annotated information, expression profiling, information retrieval, and summary of statistics of functional annotation. Consequently, the development of GarlicESTdb will provide a crucial contribution to biologists for data-mining and more efficient experimental studies.
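The first stage of the pipeline described above, automatic preprocessing of EST reads, can be illustrated with a small sketch. The real GarlicESTdb pipeline is written in Java and its actual parameters are not given here; the quality cutoff and minimum length below are hypothetical values chosen only to make the idea concrete.

```python
# Illustrative EST-read preprocessing: trim low-quality bases from both ends
# of a read, then discard reads that are too short to assemble usefully.
# Thresholds are hypothetical, not the project's actual parameters.
MIN_LEN = 8          # discard reads shorter than this after trimming
QUAL_CUTOFF = 20     # phred-style per-base quality threshold

def preprocess(read, quals):
    """Trim low-quality bases from both ends, then length-filter the read."""
    start = 0
    while start < len(read) and quals[start] < QUAL_CUTOFF:
        start += 1
    end = len(read)
    while end > start and quals[end - 1] < QUAL_CUTOFF:
        end -= 1
    trimmed = read[start:end]
    return trimmed if len(trimmed) >= MIN_LEN else None

read = "ACGTACGTACGT"
quals = [5, 10, 30, 30, 30, 30, 30, 30, 30, 30, 15, 8]
print(preprocess(read, quals))  # low-quality ends removed
```

Reads surviving this step would then flow into assembly, annotation, and MySQL storage as the abstract describes.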
Astrophysics Source Code Library Enhancements
NASA Astrophysics Data System (ADS)
Hanisch, R. J.; Allen, A.; Berriman, G. B.; DuPrie, K.; Mink, J.; Nemiroff, R. J.; Schmidt, J.; Shamir, L.; Shortridge, K.; Taylor, M.; Teuben, P. J.; Wallin, J.
2015-09-01
The Astrophysics Source Code Library (ASCL)1 is a free online registry of codes used in astronomy research; it currently contains over 900 codes and is indexed by ADS. The ASCL has recently moved a new infrastructure into production. The new site provides a true database for the code entries and integrates the WordPress news and information pages and the discussion forum into one site. Previous capabilities are retained and permalinks to ascl.net continue to work. This improvement offers more functionality and flexibility than the previous site, is easier to maintain, and offers new possibilities for collaboration. This paper covers these recent changes to the ASCL.
Development of a User-Oriented Data Classification for Information System Design Methodology.
1982-06-30
[COD79] Codd, E. F., "Extending the Database Relational Model to Capture More Meaning." ACM TODS 4:4, December 1979.
Identification of Malicious Web Pages by Inductive Learning
NASA Astrophysics Data System (ADS)
Liu, Peishun; Wang, Xuefang
Malicious web pages have become an increasing threat to computer systems in recent years. Traditional anti-virus techniques typically focus on detecting the static signatures of malware and are ineffective against these new threats because they cannot deal with zero-day attacks. In this paper, a novel classification method for detecting malicious web pages is presented. The method performs generalization and specialization of attack patterns based on inductive learning, which can be used to update and expand the knowledge database. An attack pattern is established from an example and generalized by inductive learning, so that it can detect unknown attacks whose behavior is similar to the example.
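The generalization step can be sketched as follows. This is a hedged illustration of inductive pattern generalization, not the paper's actual algorithm or feature set: two tokenized attack examples are generalized token-by-token into one pattern with wildcards, which then matches an unseen variant of the same attack.

```python
WILD = "*"

def generalize(a, b):
    """Least-general generalization: keep tokens shared by both examples,
    replace differing tokens with a wildcard."""
    assert len(a) == len(b)
    return [x if x == y else WILD for x, y in zip(a, b)]

def matches(pattern, sample):
    """A sample matches when every non-wildcard token agrees."""
    return len(pattern) == len(sample) and all(
        p == WILD or p == s for p, s in zip(pattern, sample))

# Two observed injection attempts (hypothetical, tokenized requests)
ex1 = ["GET", "login.php", "id=1", "OR", "1=1"]
ex2 = ["GET", "admin.php", "id=7", "OR", "1=1"]
pattern = generalize(ex1, ex2)   # page and parameter value generalize away
print(pattern)
print(matches(pattern, ["GET", "user.php", "id=9", "OR", "1=1"]))
```

Specialization would go the other way, re-constraining a pattern that matches benign traffic; together the two operations maintain the knowledge database as new examples arrive.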
Cuff, Alison L.; Sillitoe, Ian; Lewis, Tony; Clegg, Andrew B.; Rentzsch, Robert; Furnham, Nicholas; Pellegrini-Calace, Marialuisa; Jones, David; Thornton, Janet; Orengo, Christine A.
2011-01-01
CATH version 3.3 (class, architecture, topology, homology) contains 128 688 domains, 2386 homologous superfamilies and 1233 fold groups, and reflects a major focus on classifying structural genomics (SG) structures and transmembrane proteins, both of which are likely to add structural novelty to the database and therefore increase the coverage of protein fold space within CATH. For CATH version 3.4 we have significantly improved the presentation of sequence information and associated functional information for CATH superfamilies. The CATH superfamily pages now reflect both the functional and structural diversity within the superfamily and include structural alignments of close and distant relatives within the superfamily, annotated with functional information and details of conserved residues. A significantly more efficient search function for CATH has been established by implementing the search server Solr (http://lucene.apache.org/solr/). The CATH v3.4 webpages have been built using the Catalyst web framework. PMID:21097779
Representation of health conditions on Facebook: content analysis and evaluation of user engagement.
Hale, Timothy M; Pathipati, Akhilesh S; Zan, Shiyi; Jethwani, Kamal
2014-08-04
A sizable majority of adult Internet users report looking for health information online. Social networking sites (SNS) like Facebook represent a common place to seek information, but very little is known about the representation and use of health content on SNS. Our goal in this study was to understand the role of SNS in health information seeking. More specifically, we aimed to describe how health conditions are represented on Facebook Pages and how users interact with these different conditions. We used Google Insights to identify the 20 most searched for health conditions on Google and then searched each of the resulting terms on Facebook. We compiled a list of the first 50 Facebook "Pages" results for each health condition. After filtering results to identify pages relevant to our research, we categorized pages into one of seven categories based on the page's primary purpose. We then measured user engagement by evaluating the number of "Likes" for different conditions and types of pages. The search returned 50 pages for 18 of the health conditions, but only 48 pages were found for "anemia" and 5 pages were found for "flu symptoms", yielding a total of 953 pages. A large number of pages (29.4%, 280/953) were irrelevant to the health condition searched. Of the 673 relevant pages, 151 were not in English or originated outside the United States, leaving 522 pages to be coded for content. The most common type of page was marketing/promotion (32.2%, 168/522) followed by information/awareness (20.7%, 108/522), Wikipedia-type pages (15.5%, 81/522), patient support (9.4%, 49/522), and general support (3.6%, 19/522). Health conditions varied greatly by the primary page type. All health conditions had some marketing/promotion pages and this made up 76% (29/38) of pages on acquired immunodeficiency syndrome (AIDS). The largest percentage of general support pages were cancer (19%, 6/32) and stomach (16%, 4/25). 
For patient support, stroke (67%, 4/6), lupus (33%, 10/30), breast cancer (19%, 6/31), arthritis (16%, 6/36), and diabetes (16%, 6/37) ranked the highest. Six health conditions were not represented by any type of support pages (ie, human papillomavirus, diarrhea, flu symptoms, pneumonia, spine, human immunodeficiency virus). Marketing/promotion pages accounted for 46.73% (10,371,169/22,191,633) of all Likes, followed by support pages (40.66%, 9,023,234/22,191,633). Cancer and breast cancer accounted for 86.90% (19,284,066/22,191,633) of all page Likes. This research represents the first attempts to comprehensively describe publicly available health content and user engagement with health conditions on Facebook pages. Public health interventions using Facebook will need to be designed to ensure relevant information is easy to find and with an understanding that stigma associated with some health conditions may limit the users' engagement with Facebook pages. This line of research merits further investigation as Facebook and other SNS continue to evolve over the coming years.
Water fluoridation and the quality of information available online.
Frangos, Zachary; Steffens, Maryke; Leask, Julie
2018-02-13
The Internet has transformed the way in which people approach their health care, with online resources becoming a primary source of health information. Little work has assessed the quality of online information regarding community water fluoridation. This study sought to assess the information available to individuals searching online, with an emphasis on the credibility and quality of websites. We identified the top 10 web pages returned from different search engines, using common fluoridation search terms (identified in Google Trends). Web pages were scored using a credibility, quality and health literacy tool based on Global Advisory Committee on Vaccine Safety (GACVS) and Centers for Disease Control and Prevention (CDC) criteria. Scores were compared according to their fluoridation stance and domain type, then ranked by quality. The functionality of the scoring tool was analysed via a Bland-Altman plot of inter-rater reliability. Five hundred web pages were returned, of which 55 were scored following removal of duplicates and irrelevant pages. Of these, 28 (51%) were pro-fluoridation, 16 (29%) were neutral and 11 (20%) were anti-fluoridation. Pro, neutral and anti-fluoridation pages scored well against health literacy standards (0.91, 0.90 and 0.81/1 respectively). Neutral and pro-fluoridation web pages showed strong credibility, with mean scores of 0.80 and 0.85 respectively, while anti-fluoridation pages scored 0.62/1. Most pages scored poorly for content quality, providing a moderate amount of superficial information. Those seeking online information regarding water fluoridation are faced with comprehensible, yet poorly referenced, superficial information. Sites were credible and user friendly; however, our results suggest that online resources need to focus on providing more transparent information with appropriate figures to consolidate the information. © 2018 FDI World Dental Federation.
Reporting on post-menopausal hormone therapy: an analysis of gynaecologists' web pages.
Bucksch, Jens; Kolip, Petra; Deitermann, Bernhilde
2004-01-01
The present study was designed to analyse the Web pages of German gynaecologists with regard to post-menopausal hormone therapy (HT). There is a growing body of evidence that the overall health risks of HT exceed the benefits. Making one's own informed choice has become a central concern for menopausal women. The Internet is an important source of health information, but its quality is often dubious. The study focused on the analysis of basic criteria, such as the last modification date, and on the quality of the HT information content. The results of the Women's Health Initiative Study (WHI) were used as a benchmark. We searched for relevant Web pages by entering combinations of key words (9 x 13 = 117) into the search engine www.google.de. Each Web page was analysed using a standardized questionnaire. The basic criteria and the quality of content of each Web page were separately categorized by two evaluators. Disagreements were resolved by discussion. Of the 97 websites identified, the majority did not meet the basic criteria. For example, the modification date was displayed by only 23 (23.7%) Web pages. The quality of content of most Web pages regarding HT was inaccurate and incomplete. Whilst only nine (9.3%) took a balanced position, 66 (68%) recommended HT without any restrictions. In 22 cases the recommendation was indistinct, and none of the sites advised against HT. With regard to the basic criteria, there was no difference between HT-recommending Web pages and sites with a balanced position. Evidence-based information resulting from the WHI trial was insufficiently represented on gynaecologists' Web pages. Because of the growing number of consumers looking online for health information, the danger of obtaining harmful information has to be minimized. The Web pages of gynaecologists cannot be recommended to women because they do not provide recent evidence-based findings about HT.
PDBe: Protein Data Bank in Europe
Velankar, Sameer; Alhroub, Younes; Alili, Anaëlle; Best, Christoph; Boutselakis, Harry C.; Caboche, Ségolène; Conroy, Matthew J.; Dana, Jose M.; van Ginkel, Glen; Golovin, Adel; Gore, Swanand P.; Gutmanas, Aleksandras; Haslam, Pauline; Hirshberg, Miriam; John, Melford; Lagerstedt, Ingvar; Mir, Saqib; Newman, Laurence E.; Oldfield, Tom J.; Penkett, Chris J.; Pineda-Castillo, Jorge; Rinaldi, Luana; Sahni, Gaurav; Sawka, Grégoire; Sen, Sanchayita; Slowley, Robert; Sousa da Silva, Alan Wilter; Suarez-Uruena, Antonio; Swaminathan, G. Jawahar; Symmons, Martyn F.; Vranken, Wim F.; Wainwright, Michael; Kleywegt, Gerard J.
2011-01-01
The Protein Data Bank in Europe (PDBe; pdbe.org) is actively involved in managing the international archive of biomacromolecular structure data as one of the partners in the Worldwide Protein Data Bank (wwPDB; wwpdb.org). PDBe also develops new tools to make structural data more widely and more easily available to the biomedical community. PDBe has developed a browser to access and analyze the structural archive using classification systems that are familiar to chemists and biologists. The PDBe web pages that describe individual PDB entries have been enhanced through the introduction of plain-English summary pages and iconic representations of the contents of an entry (PDBprints). In addition, the information available for structures determined by means of NMR spectroscopy has been expanded. Finally, the entire web site has been redesigned to make it substantially easier to use for expert and novice users alike. PDBe works closely with other teams at the European Bioinformatics Institute (EBI) and in the international scientific community to develop new resources with value-added information. The SIFTS initiative is an example of such a collaboration—it provides extensive mapping data between proteins whose structures are available from the PDB and a host of other biomedical databases. SIFTS is widely used by major bioinformatics resources. PMID:21045060
Abel, Olubunmi; Shatunov, Aleksey; Jones, Ashley R; Andersen, Peter M; Powell, John F
2013-01-01
Background The ALS Online Genetics Database (ALSoD) website holds mutation, geographical, and phenotype data on genes implicated in amyotrophic lateral sclerosis (ALS) and links to bioinformatics resources, publications, and tools for analysis. On average, there are 300 unique visits per day, suggesting a high demand from the research community. To enable wider access, we developed a mobile-friendly version of the website and a smartphone app. Objective We sought to compare data traffic before and after implementation of a mobile version of the website to assess utility. Methods We identified the most frequently viewed pages using Google Analytics and our in-house analytic monitoring. For these, we optimized the content layout of the screen, reduced image sizes, and summarized available information. We used the Microsoft .NET framework mobile detection property (HttpRequest.IsMobileDevice in the Request.Browser object in conjunction with HttpRequest.UserAgent), which returns a true value if the browser is a recognized mobile device. For app development, we used the Eclipse integrated development environment with Android plug-ins. We wrapped the mobile website version with the WebView object in Android. Simulators were downloaded to test and debug the applications. Results The website automatically detects access from a mobile phone and redirects pages to fit the smaller screen. Because the amount of data stored on ALSoD is very large, the available information for display using smartphone access is deliberately restricted to improve usability. Visits to the website increased from 2231 to 2820, yielding a 26% increase from the pre-mobile to post-mobile period and an increase from 103 to 340 visits (230%) using mobile devices (including tablets). The smartphone app is currently available on BlackBerry and Android devices and will be available shortly on iOS as well. 
Conclusions Further development of the ALSoD website has allowed access through smartphones and tablets, either through the website or directly through a mobile app, making genetic data stored on the database readily accessible to researchers and patients across multiple devices. PMID:25098641
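The mobile-detection step described above uses ASP.NET's HttpRequest.IsMobileDevice. As a hedged, simplified analogue, the same decision can be made with a keyword scan of the User-Agent header; the keyword list and routing scheme below are illustrative assumptions, not ALSoD's actual implementation.

```python
import re

# Illustrative User-Agent sniffing: recognize common mobile identifiers and
# route the request to a reduced mobile layout, as the website description
# explains. The hint list is deliberately small; real detection databases
# (like the one behind IsMobileDevice) track many more device signatures.
MOBILE_HINTS = re.compile(r"android|iphone|ipad|blackberry|mobile", re.I)

def is_mobile_device(user_agent):
    """True when the User-Agent string looks like a mobile browser."""
    return bool(MOBILE_HINTS.search(user_agent or ""))

def route(path, user_agent):
    """Serve the mobile version of a page to recognized mobile devices."""
    return ("/m" + path) if is_mobile_device(user_agent) else path

print(route("/gene/SOD1", "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0)"))
print(route("/gene/SOD1", "Mozilla/5.0 (Windows NT 10.0; Win64)"))
```

Restricting what the mobile layout displays, as ALSoD does, then becomes a property of the pages behind the `/m` prefix rather than of the detection itself.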
Abel, Olubunmi; Shatunov, Aleksey; Jones, Ashley R; Andersen, Peter M; Powell, John F; Al-Chalabi, Ammar
2013-09-04
Protocol for the E-Area Low Level Waste Facility Disposal Limits Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swingle, R
2006-01-31
A database has been developed to contain the disposal limits for the E-Area Low Level Waste Facility (ELLWF). This database originates in the form of a Microsoft Excel workbook. The pertinent sheets are translated to PDF format using Adobe Acrobat. The PDF version of the database is accessible from the Solid Waste Division web page on SHRINE. In addition to containing the various disposal unit limits, the database also contains hyperlinks to the original references for all limits. It is anticipated that the database will be revised each time there is an addition, deletion, or revision of any of the ELLWF radionuclide disposal limits.
Construction of the Database for Pulsating Variable Stars
NASA Astrophysics Data System (ADS)
Chen, Bing-Qiu; Yang, Ming; Jiang, Bi-Wei
2012-01-01
A database for pulsating variable stars has been constructed to support the study of variable stars in China. The database includes about 230,000 variable stars in the Galactic bulge, LMC and SMC, observed over a roughly 10-year period by the MACHO (MAssive Compact Halo Objects) and OGLE (Optical Gravitational Lensing Experiment) projects. The software used for the construction is LAMP, i.e., Linux + Apache + MySQL + PHP. A web page is provided for searching the photometric data and light curves in the database by the right ascension and declination of an object. Because of the flexibility of this database, more up-to-date data on variable stars can be incorporated conveniently.
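The positional search the web page performs can be sketched against a MySQL-like store. This is a minimal illustration with an in-memory SQLite table; the schema, star names, and coordinates are invented, and a production service would use a proper cone search rather than the simple RA/Dec box shown here.

```python
import sqlite3

# Illustrative star table: name, right ascension and declination in degrees,
# and the survey of origin (MACHO or OGLE, as in the abstract).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE star (name TEXT, ra REAL, dec REAL, survey TEXT)")
db.executemany("INSERT INTO star VALUES (?, ?, ?, ?)", [
    ("OGLE-LMC-CEP-0001", 75.102, -69.341, "OGLE"),
    ("MACHO-118.18278.189", 81.550, -69.912, "MACHO"),
    ("OGLE-SMC-RRLYR-0042", 13.871, -72.530, "OGLE"),
])

def box_search(ra, dec, radius):
    """Return stars inside a simple RA/Dec box around the requested position."""
    return db.execute(
        "SELECT name FROM star WHERE ra BETWEEN ? AND ? AND dec BETWEEN ? AND ?",
        (ra - radius, ra + radius, dec - radius, dec + radius)).fetchall()

print(box_search(75.0, -69.3, 0.5))
```

Light curves would hang off this table by a foreign key on the star name, so the same query drives both the photometric-data and light-curve views.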
ASM Based Synthesis of Handwritten Arabic Text Pages
Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed
2015-01-01
Document analysis tasks such as text recognition, word spotting, or segmentation are highly dependent on comprehensive and suitable databases for training and validation, but generating such databases is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with its own demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28,046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever insufficient naturally ground-truthed data is available. PMID:26295059
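The smoothing step can be illustrated with a runnable stand-in. The paper uses B-spline interpolation; this sketch uses Chaikin corner-cutting, a subdivision scheme whose limit curve is a quadratic B-spline, applied to an invented stroke so the effect on a sharp pen corner is visible.

```python
# Chaikin corner-cutting: each pass replaces every segment with two points at
# its 1/4 and 3/4 marks, rounding sharp corners; repeated passes converge to a
# quadratic B-spline curve. Stroke coordinates here are illustrative only.
def chaikin(points, iterations=2):
    for _ in range(iterations):
        smoothed = [points[0]]                       # keep the stroke start
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        smoothed.append(points[-1])                  # keep the stroke end
        points = smoothed
    return points

stroke = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]        # a sharp pen corner
smooth = chaikin(stroke)
print(len(smooth))
```

After smoothing, the peak of the corner drops below its original height, which is exactly the rounding a handwritten stroke needs before rendering with pen-width and speed effects.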
ASM Based Synthesis of Handwritten Arabic Text Pages.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed
2015-01-01
2012-09-01
Relative performance of several conventional SQL and NoSQL ("Not only SQL", i.e., non-relational) databases with a set of one billion file block hashes. Digital Forensics, Sector Hashing, Full...
Acquiring geographical data with web harvesting
NASA Astrophysics Data System (ADS)
Dramowicz, K.
2016-04-01
Many websites contain very attractive and up-to-date geographical information. This information can be extracted, stored, analyzed, and mapped using web harvesting techniques. Web harvesting transforms poorly organized data from websites into a more structured format, which can be stored in a database and analyzed. Almost 25% of web traffic is related to web harvesting, mostly through search engines. This paper presents how to harvest geographic information from web documents using Beautiful Soup, a free tool and one of the most commonly used Python libraries for pulling data from HTML and XML files. Processing one static HTML table is a relatively easy task; the more challenging task is to extract and save information from tables located in multiple, poorly organized websites. Legal and ethical aspects of web harvesting are discussed as well. The paper demonstrates two case studies. The first shows how to extract various types of information about the Good Country Index from multiple web pages, load it into one attribute table, and map the results. The second shows how script tools and GIS can be used to extract information from one hundred thirty-six websites about Nova Scotia wines. In a little more than three minutes, a database containing one hundred and six liquor stores selling these wines is created. Then the availability and spatial distribution of various types of wines (by grape type, by winery, and by liquor store) are mapped and analyzed.
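The easier of the two tasks described above, parsing one static HTML table with Beautiful Soup, can be sketched as follows. The HTML snippet and the column names are invented for illustration; a real harvest would first fetch the page over HTTP.

```python
from bs4 import BeautifulSoup

# Toy stand-in for a harvested page; a real run would download this markup.
html = """
<table>
  <tr><th>Country</th><th>Rank</th></tr>
  <tr><td>Canada</td><td>5</td></tr>
  <tr><td>Japan</td><td>25</td></tr>
</table>
"""

def harvest_table(markup):
    """Parse the first <table> into a list of header-keyed dicts."""
    soup = BeautifulSoup(markup, "html.parser")
    rows = soup.find("table").find_all("tr")
    header = [th.get_text(strip=True) for th in rows[0].find_all("th")]
    return [
        dict(zip(header, (td.get_text(strip=True) for td in row.find_all("td"))))
        for row in rows[1:]
    ]

print(harvest_table(html))
```

The harder multi-site case in the paper amounts to looping this kind of parser over many URLs and normalizing the inconsistently structured tables before loading them into one attribute table.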
Tozzi, Alberto Eugenio; Buonuomo, Paola Sabrina; Ciofi degli Atti, Marta Luisa; Carloni, Emanuela; Meloni, Marco; Gamba, Fiorenza
2010-01-01
Information available on the Internet about immunizations may influence parents' perception about human papillomavirus (HPV) immunization and their attitude toward vaccinating their daughters. We hypothesized that the quality of information on HPV available on the Internet may vary with language and with the level of knowledge of parents. To this end we compared the quality of a sample of Web pages in Italian with a sample of Web pages in English. Five reviewers assessed the quality of Web pages retrieved with popular search engines using criteria adapted from the Good Information Practice Essential Criteria for Vaccine Safety Web Sites recommended by the World Health Organization. Quality of Web pages was assessed in the domains of accessibility, credibility, content, and design. Scores in these domains were compared through nonparametric statistical tests. We retrieved and reviewed 74 Web sites in Italian and 117 in English. Most retrieved Web pages (33.5%) were from private agencies. Median scores were higher in Web pages in English compared with those in Italian in the domains of accessibility (p < .01), credibility (p < .01), and content (p < .01). The highest credibility and content scores were those of Web pages from governmental agencies or universities. Accessibility scores were positively associated with content scores (p < .01) and with credibility scores (p < .01). A total of 16.2% of Web pages in Italian opposed HPV immunization compared with 6.0% of those in English (p < .05). Quality of information and number of Web pages opposing HPV immunization may vary with the Web site language. High-quality Web pages on HPV, especially from public health agencies and universities, should be easily accessible and retrievable with common Web search engines.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-17
...: Application for Additional Visa Pages or Miscellaneous Passport Services ACTION: Notice of request for public... Information Collection: Application for Additional Visa Pages or Miscellaneous Passport Services. OMB Control...: Bureau of Consular Affairs, Passport Services, Office of Program Management and Operational Support...
Bera, Maitreyee; Ortel, Terry W.
2018-01-12
The U.S. Geological Survey, in cooperation with DuPage County Stormwater Management Department, is testing a near real-time streamflow simulation system that assists in the management and operation of reservoirs and other flood-control structures in the Salt Creek and West Branch DuPage River drainage basins in DuPage County, Illinois. As part of this effort, the U.S. Geological Survey maintains a database of hourly meteorological and hydrologic data for use in this near real-time streamflow simulation system. Among these data are next generation weather radar-multisensor precipitation estimates and quantitative precipitation forecast data, which are retrieved from the North Central River Forecasting Center of the National Weather Service. The DuPage County streamflow simulation system uses these quantitative precipitation forecast data to create streamflow predictions for the two simulated drainage basins. This report discusses in detail how these data are processed for inclusion in the Watershed Data Management files used in the streamflow simulation system for the Salt Creek and West Branch DuPage River drainage basins.
2005-11-01
care for localized prostate cancer. To date, we have completed all survey mailings, collected responses, entered these into an Access database, and...vignette, patient socioeconomic status, not race, influenced treatment recommendations for localized prostate cancer. A majority of urologists rate their...in patterns of care for localized prostate cancer. See Introduction (page 14) and Methods (pages 15-17) in Appendix B for details. Key research
An incremental database access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, Nicholas; Sellis, Timos
1994-01-01
We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
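The query-feedback idea described above can be sketched as follows: observed selectivities from executed queries are recorded, and a least-squares curve fit is refit so later estimates reflect actual data rather than static off-line statistics. The class name, the degree-2 polynomial, and the 0.5 default are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

class FeedbackSelectivityEstimator:
    """Refines selectivity estimates from query feedback via least squares."""

    def __init__(self, degree=2):
        self.degree = degree
        self.values, self.selectivities = [], []

    def record(self, predicate_value, observed_selectivity):
        # Called after a query executes, with its actual observed selectivity.
        self.values.append(predicate_value)
        self.selectivities.append(observed_selectivity)

    def estimate(self, predicate_value):
        if len(self.values) <= self.degree:
            return 0.5  # uninformed default before enough feedback arrives
        coeffs = np.polyfit(self.values, self.selectivities, self.degree)
        est = float(np.polyval(coeffs, predicate_value))
        return min(1.0, max(0.0, est))  # clamp to a valid selectivity

est = FeedbackSelectivityEstimator()
for value, observed in [(10, 0.1), (20, 0.2), (30, 0.3), (40, 0.4)]:
    est.record(value, observed)
print(est.estimate(25))
```

Each executed query thus sharpens the curve used to cost later query plans, which is the sense in which the optimization is "adaptive".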
Representation of Health Conditions on Facebook: Content Analysis and Evaluation of User Engagement
Pathipati, Akhilesh S; Zan, Shiyi; Jethwani, Kamal
2014-01-01
Background A sizable majority of adult Internet users report looking for health information online. Social networking sites (SNS) like Facebook represent a common place to seek information, but very little is known about the representation and use of health content on SNS. Objective Our goal in this study was to understand the role of SNS in health information seeking. More specifically, we aimed to describe how health conditions are represented on Facebook Pages and how users interact with these different conditions. Methods We used Google Insights to identify the 20 most searched-for health conditions on Google and then searched each of the resulting terms on Facebook. We compiled a list of the first 50 Facebook “Pages” results for each health condition. After filtering results to identify pages relevant to our research, we categorized pages into one of seven categories based on the page’s primary purpose. We then measured user engagement by evaluating the number of “Likes” for different conditions and types of pages. Results The search returned 50 pages for 18 of the health conditions, but only 48 pages were found for “anemia” and 5 pages were found for “flu symptoms”, yielding a total of 953 pages. A large number of pages (29.4%, 280/953) were irrelevant to the health condition searched. Of the 673 relevant pages, 151 were not in English or originated outside the United States, leaving 522 pages to be coded for content. The most common type of page was marketing/promotion (32.2%, 168/522) followed by information/awareness (20.7%, 108/522), Wikipedia-type pages (15.5%, 81/522), patient support (9.4%, 49/522), and general support (3.6%, 19/522). Health conditions varied greatly by the primary page type. All health conditions had some marketing/promotion pages; for acquired immunodeficiency syndrome (AIDS), these made up 76% (29/38) of pages. The conditions with the largest percentages of general support pages were cancer (19%, 6/32) and stomach conditions (16%, 4/25). 
For patient support, stroke (67%, 4/6), lupus (33%, 10/30), breast cancer (19%, 6/31), arthritis (16%, 6/36), and diabetes (16%, 6/37) ranked the highest. Six health conditions were not represented by any type of support pages (ie, human papillomavirus, diarrhea, flu symptoms, pneumonia, spine, human immunodeficiency virus). Marketing/promotion pages accounted for 46.73% (10,371,169/22,191,633) of all Likes, followed by support pages (40.66%, 9,023,234/22,191,633). Cancer and breast cancer accounted for 86.90% (19,284,066/22,191,633) of all page Likes. Conclusions This research represents the first attempts to comprehensively describe publicly available health content and user engagement with health conditions on Facebook pages. Public health interventions using Facebook will need to be designed to ensure relevant information is easy to find and with an understanding that stigma associated with some health conditions may limit the users’ engagement with Facebook pages. This line of research merits further investigation as Facebook and other SNS continue to evolve over the coming years. PMID:25092386
Enhancement of the Earth Science and Remote Sensing Group's Website and Related Projects
NASA Technical Reports Server (NTRS)
Coffin, Ashley; Vanderbloemen, Lisa
2014-01-01
The major problem addressed throughout the term was the need to update the group's current website, as it was outdated and required streamlining and modernization. The old Gateway to Astronaut Photography of the Earth website had multiple components, many of which involved searches through expansive databases. The amount of work required to update the website was large and due to a desired release date, assistance was needed to help build new pages and to transfer old information. Additionally, one of the tools listed on the website called Image Detective had been underutilized in the past. It was important to address why the public was not using the tool and how it could potentially become more of a resource for the team. In order to help with updating the website, it was necessary to first learn HTML. After assisting with small edits, I began creating new pages. I utilized the "view page source" and "developer" tools in the internet browser to observe how other websites created their features and to test changes without editing the code. I then edited the code to create an interactive feature on the new page. For the Image Detective Page I began an evaluation of the current page. I also asked my fellow interns and friends at my University to offer their input. I took all of the opinions into account and wrote up a document regarding my recommendations. The recommendations will be considered as I help to improve the Image Detective page for the updated website. In addition to the website, other projects included the need for additional, and updated image collections, along with various project requests. The image collections have been used by educators in the classroom and the impact crater collection was highly requested. The glaciers collection focused mostly on South American glaciers and needed to include more of the earth's many glaciers. The collections had not been updated or created due to the fact that related imagery had not been catalogued. 
The process of cataloging involves identifying the center point location of the image and feature identification. Other project needs included collecting night images of India for publishing. Again, many of the images were not catalogued, and the database was lacking in nighttime imagery for that region. The last project was to calculate the size of mega fans in South Africa. Calculating the fan sizes involved several steps. To expedite the study, calculations needed to be made after the base maps had been created. Using data files that included an outline of the mega fans on a topographic map, I opened the file in Photoshop, determined the number of pixels within the outlined area, created a one-degree-squared box, determined the pixels within the box, converted the pixels within the box to kilometers, and then calculated the fan size using this information. Overall, the internship has been a learning experience for me. I have learned how to use new programs and I developed new skills. These skills can help me as I enter the next phase of my career. Learning Photoshop and HTML, in addition to coding in Dreamweaver, are highly sought-after skills used in a variety of fields. Additionally, the exposure to different aspects of the team and working with different people helped me gain a broader set of skills and allowed me to work with people with different experiences. The various projects I have worked on this summer have directly benefitted the team, whether by completing projects they did not have time to do or by helping the team reach deadlines sooner. The new website will be the best place to see all of my work, as it will include the newly designed pages and will feature my updates to collections.
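The pixel-counting area calculation described above reduces to a simple proportion: the pixel count of a reference box of known area calibrates a km²-per-pixel factor, which then converts the fan's pixel count to an area. The numbers below are invented for illustration; the box area would come from the one-degree box actually drawn on the map.

```python
def fan_area_km2(fan_pixels, box_pixels, box_area_km2):
    """Convert a measured pixel count to km^2 via a reference box of known area."""
    km2_per_pixel = box_area_km2 / box_pixels
    return fan_pixels * km2_per_pixel

# e.g. a one-degree box near the equator covers roughly 111 km x 111 km
print(fan_area_km2(fan_pixels=50_000, box_pixels=10_000, box_area_km2=111 * 111))
```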
NASA Astrophysics Data System (ADS)
Brissebrat, Guillaume; Fleury, Laurence; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Asencio, Nicole; Favot, Florence; Roussot, Odile
2013-04-01
The AMMA information system aims at expediting the communication of data and scientific results inside the AMMA community and beyond. It has already been adopted as the data management system by several projects and is meant to become a reference information system about the West Africa area for the whole scientific community. The AMMA database and the associated on line tools have been developed and are managed by two French teams (IPSL Database Centre, Palaiseau and OMP Data Service, Toulouse). The complete system has been fully duplicated and is operated by the AGRHYMET Regional Centre in Niamey, Niger. The AMMA database contains a wide variety of datasets: - about 250 local observation datasets that cover geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health...). They come from either operational networks or scientific experiments, and include historical data in West Africa from 1850; - 1350 outputs of a socio-economics questionnaire; - 60 operational satellite products and several research products; - 10 output sets of meteorological and ocean operational models and 15 of research simulations. Database users can access all the data using either the portal http://database.amma-international.org or http://amma.agrhymet.ne/amma-data. Different modules are available. The complete catalogue provides access to metadata (i.e., information about the datasets) that are compliant with international standards (ISO 19115, INSPIRE...). Registration pages enable users to read and sign the data and publication policy and to apply for a database user account. The data access interface enables users to easily build a data extraction request by selecting various criteria such as location, time, and parameters. 
At present, the AMMA database counts more than 740 registered users and processes about 80 data requests every month. In order to monitor day-to-day meteorological and environmental information over West Africa, some quick-look and report display websites have been developed. They met the operational needs of the observational teams during the AMMA 2006 (http://aoc.amma-international.org) and FENNEC 2011 (http://fenoc.sedoo.fr) campaigns, but they also enable scientific teams to share physical indices along the monsoon season (http://misva.sedoo.fr from 2011). A collaborative WIKINDX tool has been set on line to manage scientific publications and communications of interest to AMMA (http://biblio.amma-international.org). The bibliographic database now counts about 1200 references; it is the most exhaustive document collection about the African monsoon available to all. Every scientist is invited to make use of the different AMMA on line tools and data. Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
Atlas of Nonindigenous Marine and Estuarine Species in the ...
The product consists of a report synthesizing available information on nonindigenous species in the North Pacific. We note that while this product focuses on invasive species, the tools and approaches developed for this research are the precursors to how we will address identifying species vulnerable to climate change. The hierarchical “Marine Ecoregions of the World” (MEOW) biogeographic schema was used as the framework for assessing species’ distributions, with the modification that we added a “region” level to differentiate the eastern and western sides of oceans in the North Pacific. The two North Pacific regions are the Northeast Pacific (NEP), which extends from the Gulf of California to the Aleutian Islands, and the Northwest Pacific (NWP), which extends from the East China Sea to the Kamchatka Shelf. To have complete coverage of the United States, we included the MEOW Hawaii ecoregion as a separate reporting unit. To have complete coverage of Japan and China, we combined five MEOW ecoregions in southern China and Japan into the North Central-Indo Pacific (NCIP) Region. The various types of information were synthesized in a Microsoft Access database, the “PICES Nonindigenous Species Information System”, which is further described in the “User’s Guide and Metadata for the PICES Nonindigenous Species Information System” (Lee et al., 2012). The PICES database was then used to generate two-page individual “species profiles” that map the native
Wang, Jianjian; Cao, Yuze; Zhang, Huixue; Wang, Tianfeng; Tian, Qinghua; Lu, Xiaoyu; Lu, Xiaoyan; Kong, Xiaotong; Liu, Zhaojun; Wang, Ning; Zhang, Shuai; Ma, Heping; Ning, Shangwei; Wang, Lihua
2017-01-04
The Nervous System Disease NcRNAome Atlas (NSDNA) (http://www.bio-bigdata.net/nsdna/) is a manually curated database that provides comprehensive experimentally supported associations about nervous system diseases (NSDs) and noncoding RNAs (ncRNAs). NSDs represent a common group of disorders, some of which are characterized by high morbidity and disabilities. The pathogenesis of NSDs at the molecular level remains poorly understood. ncRNAs are a large family of functionally important RNA molecules. Increasing evidence shows that diverse ncRNAs play a critical role in various NSDs. Mining and summarizing NSD-ncRNA association data can help researchers discover useful information. Hence, we developed an NSDNA database that documents 24 713 associations between 142 NSDs and 8593 ncRNAs in 11 species, curated from more than 1300 articles. This database provides a user-friendly interface for browsing and searching and allows for data downloading flexibility. In addition, NSDNA offers a submission page for researchers to submit novel NSD-ncRNA associations. It represents an extremely useful and valuable resource for researchers who seek to understand the functions and molecular mechanisms of ncRNA involved in NSDs.
Yellow pages advertising by physicians. Are doctors providing the information consumers want most?
Butler, D D; Abernethy, A M
1996-01-01
Yellow pages listings are the most widely used form of physician advertising. Every month, approximately 21.6 million adults in the United States refer to the yellow pages before obtaining medical care. Mobile consumers--the approximately 17% of the U.S. population who move each year--are heavy users of yellow pages. Consumers desire information on a physician's experience, but it is included in less than 1% of all physician display ads.
Publications - AR 2008 | Alaska Division of Geological & Geophysical
Publications - AR 2007 | Alaska Division of Geological & Geophysical
Publications - AR 2001 | Alaska Division of Geological & Geophysical
Publications - AR 2002 | Alaska Division of Geological & Geophysical
Neurodegeneration with Brain Iron Accumulation
Web-based X-ray quality control documentation.
David, George; Burnett, Lou Ann; Schenkel, Robert
2003-01-01
The department of radiology at the Medical College of Georgia Hospital and Clinics has developed an equipment quality control web site. Our goal is to provide immediate access to virtually all medical physics survey data. The web site is designed to assist equipment engineers, department management and technologists. By improving communications and access to equipment documentation, we believe productivity is enhanced. The creation of the quality control web site was accomplished in three distinct steps. First, survey data had to be placed in a computer format. The second step was to convert these various computer files to a format supported by commercial web browsers. Third, a comprehensive home page had to be designed to provide convenient access to the multitude of surveys done in the various x-ray rooms. Because we had spent years previously fine-tuning the computerization of the medical physics quality control program, most survey documentation was already in spreadsheet or database format. A major technical decision was the method of conversion of survey spreadsheet and database files into documentation appropriate for the web. After an unsatisfactory experience with a HyperText Markup Language (HTML) converter (packaged with spreadsheet and database software), we tried creating Portable Document Format (PDF) files using Adobe Acrobat software. This process preserves the original formatting of the document and takes no longer than conventional printing; therefore, it has been very successful. Although the PDF file generated by Adobe Acrobat is a proprietary format, it can be displayed through a conventional web browser using the freely distributed Adobe Acrobat Reader program that is available for virtually all platforms. Once a user installs the software, it is automatically invoked by the web browser whenever the user follows a link to a file with a PDF extension. 
Although no confidential patient information is available on the web site, our legal department recommended that we secure the site in order to keep out those wishing to make mischief. Our interim solution has been not to password-protect the page, since we feared that would hinder access for occasional legitimate users, but simply not to provide links to it from other hospital and department pages. Utility and productivity were improved, and time and money were saved, by making radiological equipment quality control documentation instantly available on-line.
PAGES-Powell North America 2k database
NASA Astrophysics Data System (ADS)
McKay, N.
2014-12-01
Syntheses of paleoclimate data in North America are essential for understanding long-term spatiotemporal variability in climate and for properly assessing risk on decadal and longer timescales. Existing reconstructions of the past 2,000 years rely almost exclusively on tree-ring records, which can underestimate low-frequency variability and rarely extend beyond the last millennium. Meanwhile, many records from the full spectrum of paleoclimate archives are available and hold the potential of enhancing our understanding of past climate across North America over the past 2000 years. The second phase of the Past Global Changes (PAGES) North America 2k project began in 2014, with a primary goal of assembling these disparate paleoclimate records into a unified database. This effort is currently supported by the USGS Powell Center together with PAGES. Its success requires grassroots support from the community of researchers developing and interpreting paleoclimatic evidence relevant to the past 2000 years. Most likely, fewer than half of the published records appropriate for this database are publicly archived, and far fewer include the data needed to quantify geochronologic uncertainty, or to concisely describe how best to interpret the data in context of a large-scale paleoclimatic synthesis. The current version of the database includes records that (1) have been published in a peer-reviewed journal (including evidence of the record's relationship to climate), (2) cover a substantial portion of the past 2000 yr (>300 yr for annual records, >500 yr for lower frequency records) at relatively high resolution (<50 yr/observation), and (3) have reasonably small and quantifiable age uncertainty. Presently, the database includes records from boreholes, ice cores, lake and marine sediments, speleothems, and tree rings. This poster presentation will display the site locations and basic metadata of the records currently in the database. 
We invite anyone with interest in participating in the project to visit the poster or contact the author to help identify and assimilate relevant records that have not yet been included. The goal is to develop a comprehensive and open-access resource that will serve the diverse community interested in the climate of the Common Era in North America.
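The inclusion criteria stated above are concrete enough to express as a screening function. This is a sketch only; the field names (span_yr, resolution_yr, and so on) are invented, not the database's actual schema.

```python
def meets_criteria(rec):
    """Screen a candidate record against the stated PAGES-Powell criteria."""
    annual = rec["resolution_yr"] <= 1            # annually resolved record?
    min_span = 300 if annual else 500             # >300 yr annual, >500 yr lower frequency
    return (rec["peer_reviewed"]                  # published with evidence of climate relation
            and rec["span_yr"] > min_span         # covers a substantial part of the past 2000 yr
            and rec["resolution_yr"] < 50         # relatively high resolution (<50 yr/observation)
            and rec["age_uncertainty_quantified"])  # reasonably small, quantifiable age uncertainty

tree_ring = dict(peer_reviewed=True, span_yr=800, resolution_yr=1,
                 age_uncertainty_quantified=True)
lake_core = dict(peer_reviewed=True, span_yr=400, resolution_yr=10,
                 age_uncertainty_quantified=True)
print(meets_criteria(tree_ring), meets_criteria(lake_core))
```

In this toy example the tree-ring record qualifies, while the lake-sediment record fails only on span (400 yr is under the 500 yr threshold for non-annual records).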
Atmospheric Science Data Center
2013-03-21
Web links to relevant CERES information: CERES references, the CERES Instrument Working Group home page, and the Aerosol Retrieval Web Page (Center for Satellite Applications and Research).
Badging information, foreign national requirements, SLAC-internal gate information, and a site entry form are available. A SLAC-internal page provides videos on how to use the automated gates; see the Gate Information page (also SLAC-internal) for more information about the main gate, security assistance, and holidays.
FRS exposes several REST services that allow developers to utilize a live feed of data from the FRS database. This web page is intended for a technical audience and describes the content and purpose of each available service.
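Consuming such a REST service typically means building a parameterized query URL and fetching it. The host, path, and parameter names below are placeholders for illustration, not documented FRS endpoints; the real ones are described on the service page itself.

```python
from urllib.parse import urlencode

BASE = "https://example.gov/frs/api"  # placeholder host, not the real FRS endpoint

def facility_query_url(state, fmt="JSON"):
    """Build a query URL for facilities in a state (hypothetical endpoint)."""
    return f"{BASE}/facilities?{urlencode({'state': state, 'output': fmt})}"

print(facility_query_url("GA"))
# Fetching would then be: urllib.request.urlopen(facility_query_url("GA"))
```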
DB-PABP: a database of polyanion-binding proteins
Fang, Jianwen; Dong, Yinghua; Salamat-Miller, Nazila; Russell Middaugh, C.
2008-01-01
The interactions between polyanions (PAs) and polyanion-binding proteins (PABPs) have been found to play significant roles in many essential biological processes including intracellular organization, transport and protein folding. Furthermore, many neurodegenerative disease-related proteins are PABPs. Thus, a better understanding of PA/PABP interactions may not only enhance our understanding of biological systems but also provide new clues to these deadly diseases. The literature in this field is widely scattered, suggesting the need for a comprehensive and searchable database of PABPs. The DB-PABP is a comprehensive, manually curated and searchable database of experimentally characterized PABPs. It is freely available and can be accessed online at http://pabp.bcf.ku.edu/DB_PABP/. The DB-PABP was implemented as a MySQL relational database. An interactive web interface was created using Java Server Pages (JSP). The search page of the database is organized into a main search form and a section for utilities. The main search form enables custom searches via four menus: protein names, polyanion names, the source species of the proteins, and the methods used to discover the interactions. Available utilities include a commonality matrix, a function for listing PABPs by the number of interacting polyanions, and a string search for author surnames. The DB-PABP is maintained at the University of Kansas. We encourage users to provide feedback and submit new data and references. PMID:17916573
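The four-menu search described above maps naturally onto a relational query over an interactions table. The sketch below uses an in-memory SQLite table; the schema, column names, and rows are invented for illustration and are not the actual DB-PABP schema (which is MySQL behind a JSP front end).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE interactions (
    protein TEXT, polyanion TEXT, species TEXT, method TEXT)""")
con.executemany(
    "INSERT INTO interactions VALUES (?, ?, ?, ?)",
    [("tau", "heparin", "Homo sapiens", "ITC"),
     ("alpha-synuclein", "heparin", "Homo sapiens", "turbidity"),
     ("tau", "RNA", "Homo sapiens", "EMSA")])

def search(**criteria):
    """Combine any of the four menus, e.g. search(polyanion='heparin')."""
    where = " AND ".join(f"{k} = ?" for k in criteria) or "1=1"
    rows = con.execute(
        f"SELECT protein FROM interactions WHERE {where}", list(criteria.values()))
    return sorted({r[0] for r in rows})

print(search(polyanion="heparin"))
```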
Radiation Database for Earth and Mars Entry
2008-11-17
…wall, and zero otherwise. The radiative…coupling scheme, we have the additional selection rules for the electric dipolar transition ΔS = 0 (16), ΔL = 0, ±1 (17), L = 0 ↮ L = 0 (18), where we have…
Enhanced DIII-D Data Management Through a Relational Database
NASA Astrophysics Data System (ADS)
Burruss, J. R.; Peng, Q.; Schachter, J.; Schissel, D. P.; Terpstra, T. B.
2000-10-01
A relational database is being used to serve data about DIII-D experiments. The database is optimized for queries across multiple shots, allowing for rapid data mining by SQL-literate researchers. The relational database relates different experiments and datasets, thus providing a big picture of DIII-D operations. Users are encouraged to add their own tables to the database. Summary physics quantities about DIII-D discharges are collected and stored in the database automatically. Metadata about code runs, MDSplus usage, and visualization tool usage are collected, stored in the database, and later analyzed to improve computing. The database may be accessed through programming languages such as C, Java, and IDL, or through ODBC-compliant applications such as Excel and Access. A database-driven web page also provides a convenient means for viewing database quantities through the World Wide Web. Demonstrations will be given at the poster.
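A cross-shot query of the kind this abstract describes can be sketched with any SQL engine: instead of opening each discharge's data file individually, one statement filters and ranks many shots at once. The table and column names below are hypothetical, not the real DIII-D schema.

```python
import sqlite3

# Hypothetical per-shot summary table; names are illustrative only.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE shot_summary (
    shot INTEGER PRIMARY KEY,  -- discharge number
    ip_mega_amps REAL,         -- plasma current
    beta_n REAL)               -- normalized beta""")
db.executemany("INSERT INTO shot_summary VALUES (?, ?, ?)",
               [(101001, 1.2, 2.1), (101002, 1.5, 2.8), (101003, 0.9, 1.7)])

# One cross-shot query finds high-performance discharges directly,
# which is the kind of rapid data mining the abstract refers to.
rows = db.execute("""SELECT shot, beta_n FROM shot_summary
                     WHERE ip_mega_amps >= 1.0
                     ORDER BY beta_n DESC""").fetchall()
print(rows)  # [(101002, 2.8), (101001, 2.1)]
```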
Forensic Science and the Internet - Current Utilization and Future Potential.
Chamakura, R P
1997-12-01
The Internet has become a very powerful and inexpensive tool for the free distribution of knowledge and information. It is a learning and research tool, a virtual library without borders and membership requirements, a help desk, and a publication house providing newspapers with current information and journals with instant publication. Very soon, when live audio and video transmission is perfected, the Internet (popularly referred to as the Net) will also be a live classroom and everyday conference site. This article provides a brief overview of the basic structure and essential components of the Internet. A limited number of home pages/Web sites already made available on the Net by scientists, laboratories, and colleges in the forensic science community are presented in tabular form. Home pages/Web sites containing useful information pertinent to different disciplines of forensic science are also categorized in various tables. The ease and benefits of Internet use are exemplified by the author's personal experience. Currently, only a few forensic scientists and institutions have made their presence felt. More participation, active contribution, and the creation of on-line searchable databases in all specialties of forensic science are urgently needed. Leading forensic journals should take the lead and create on-line searchable indexes with abstracts. Creating Internet repositories of unpublished papers is an idea worth looking into. Leading forensic science institutions should also develop use of the Net to provide training and retraining opportunities for forensic scientists. Copyright © 1997 Central Police University.
RefPrimeCouch—a reference gene primer CouchApp
Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus
2013-01-01
To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html PMID:24368831
RefPrimeCouch--a reference gene primer CouchApp.
Silbermann, Jascha; Wernicke, Catrin; Pospisil, Heike; Frohme, Marcus
2013-01-01
To support a quantitative real-time polymerase chain reaction standardization project, a new reference gene database application was required. The new database application was built with the explicit goal of simplifying not only the development process but also making the user interface more responsive and intuitive. To this end, CouchDB was used as the backend with a lightweight dynamic user interface implemented client-side as a one-page web application. Data entry and curation processes were streamlined using an OpenRefine-based workflow. The new RefPrimeCouch database application provides its data online under an Open Database License. Database URL: http://hpclife.th-wildau.de:5984/rpc/_design/rpc/view.html.
ERIC Educational Resources Information Center
Brockmann, R. John
1996-01-01
Discusses Victor Page, one of the first people to make a living as a technical communicator. Focuses on his 33 automotive and aviation books, popular with the public and critics, which contained information on novel technology, profuse illustrations, and easy-to-access information. States that Page published quickly, had firsthand expertise, and…
NOAO observing proposal processing system
NASA Astrophysics Data System (ADS)
Bell, David J.; Gasson, David; Hartman, Mia
2002-12-01
Since going electronic in 1994, NOAO has continued to refine and enhance its observing proposal handling system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form or via Gemini's downloadable Phase-I Tool. NOAO staff can use online interfaces for administrative tasks, technical reviews, telescope scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available online. The system, now known as ANDES, is designed as a thin-client architecture (web pages are now used for almost all database functions) built using open source tools (FreeBSD, Apache, MySQL, Perl, PHP) to process descriptively-marked (LaTeX, XML) proposal documents.
NASA Astrophysics Data System (ADS)
Candela, L.; Ruggieri, G.; Giancaspro, A.
2004-09-01
In the sphere of the "Multi-Mission Ground Segment" Italian Space Agency project, several innovative technologies such as CORBA[1], Z39.50[2], XML[3], Java[4], Java Server Pages[4] and C++ have been tested. The SSPI system (Space Service Provider Infrastructure) is the prototype of a distributed environment aimed at facilitating access to Earth Observation (EO) data. SSPI allows users to ingest, archive, consolidate, visualize and evaluate these data. Hence, SSPI is not just a database or a data repository, but an application that, by means of a set of protocols, standards and specifications, provides unified access to multi-mission EO data.
Historical literature review on waste classification and categorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Croff, A.G.; Richmond, A.A.; Williams, J.P.
1995-03-01
The staff of the Waste Management Document Library (WMDL), in cooperation with Allen Croff, have been requested to provide information support for a historical search concerning waste categorization/classification. This bibliography has been compiled under the sponsorship of Oak Ridge National Laboratory's Chemical Technology Division to support Croff's ongoing committee work with the NRC/NRCP. After examining the search, Croff saw the value of publishing it. Permission was sought from the database providers to allow limited publication (i.e., 20-50 copies) of the search for internal distribution at the Oak Ridge National Laboratory and for Croff's associated committee. Citations from database providers who did not grant legal permission for their material to be published have been omitted from the literature review. Some of the longer citations have been included in abbreviated form to shorten the published document from approximately 1,400 pages. The bibliography contains 372 citations.
A web-based quantitative signal detection system on adverse drug reaction in China.
Li, Chanjuan; Xia, Jielai; Deng, Jianxiong; Chen, Wenge; Wang, Suzhen; Jiang, Jing; Chen, Guanquan
2009-07-01
To establish a web-based quantitative signal detection system for adverse drug reactions (ADRs) based on spontaneous reporting to the Guangdong province drug-monitoring database in China. Using the Microsoft Visual Basic and Active Server Pages programming languages and SQL Server 2000, a web-based system with three software modules was programmed to perform data preparation and association detection, and to generate reports. The information component (IC), the internationally recognized measure of disproportionality for quantitative signal detection, was integrated into the system, and its capacity for signal detection was tested with ADR reports collected from 1 January 2002 to 30 June 2007 in Guangdong. A total of 2,496 associations including known signals were mined from the test database. Signals (e.g., cefradine-induced hematuria) were found early by using the IC analysis. In addition, 291 drug-ADR associations were alerted for the first time in the second quarter of 2007. The system can be used to detect significant associations in the Guangdong drug-monitoring database and, for the first time in China, could serve as an extremely useful adjunct to expert assessment of very large numbers of spontaneously reported ADRs.
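The information component (IC) mentioned above is, in its simplest form, the base-2 logarithm of the ratio between the observed and the expected co-reporting frequency of a drug-ADR pair. The sketch below shows only this crude point estimate; production systems of the kind described typically use a Bayesian (shrunken) IC estimate with credibility intervals, which this example does not attempt.

```python
from math import log2

def information_component(n_xy, n_x, n_y, n_total):
    """Crude IC point estimate for a drug x / reaction y pair:
    IC = log2( P(x,y) / (P(x) * P(y)) ).
    A positive IC flags pairs reported together more often than
    expected under independence (a candidate signal)."""
    p_xy = n_xy / n_total
    p_x = n_x / n_total
    p_y = n_y / n_total
    return log2(p_xy / (p_x * p_y))

# Invented example counts: 40 reports mention both the drug and the
# reaction; the drug appears in 200 reports and the reaction in 400,
# out of 10,000 reports in the database overall.
ic = information_component(40, 200, 400, 10_000)
print(round(ic, 2))  # 2.32 -> reported together ~5x more than expected
```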
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuli, J.K.; Sonzogni,A.
The National Nuclear Data Center (NNDC) has provided remote access to the nuclear physics databases it maintains and to other resources since 1986. The major databases and other resources currently available through the NNDC Web site are summarized. With considerable innovation, access is now mostly through the Web, and the NNDC Web pages have been modernized to provide a consistent, state-of-the-art style. The improved database services and other resources available from the NNDC site at www.nndc.bnl.gov will be described.
Example of a Participant Agreement for the Food Recovery Challenge
This page contains an example of the participant agreement from the Sustainable Materials Management database system, which anyone thinking about joining the Food Recovery Challenge can view to learn what it entails.
Health Effects of Exposures to Mercury
... risk assessment for mercuric chloride in EPA's IRIS database ...
Home Page: Division of Mammals: Department of Vertebrate Zoology: NMNH
The Results of Development of the Project ZOOINT and its Future Perspectives
NASA Astrophysics Data System (ADS)
Smirnov, I. S.; Lobanov, A. L.; Alimov, A. F.; Medvedev, S. G.; Golikov, A. A.
Work on computerizing the main processes of accumulating and analyzing collection, expert and literary data on the systematics and faunistics of various animal taxa (a basis for the study of biological diversity) began at the Zoological Institute in 1987. In 1991 the idea arose of creating the software package ZOOlogical INTegrated system (ZOOINT), which could support the loading of collection records while allowing the accumulated data to be analyzed through various queries. During its execution the project was somewhat transformed and yielded results slightly different from those originally planned, but even more valuable. A web site about the information retrieval system (IRS) ZOOINT was also created on the Internet. Remote access to the taxonomic information, with the possibility of working with the databases (DB) of the IRS ZOOINT online, was planned. This required not only renewal of the developers' and users' computers but also mastery of new software: the HTML language, the Windows NT operating system, and Active Server Pages (ASP) technology. One of the serious problems in creating zoological databases and IRSs is the representation of hierarchical classification. This problem was solved by building classifiers, specialized standard taxonomic databases named ZOOCOD. The recent growth in attempts to create taxonomic electronic lists, tables and databases has required the development of some basic rules for the unification of zoological systematic databases. These rules are intended for use in institutes of biological profile, where computerization proceeds very slowly and database construction remains in a rudimentary state.
These positions and standards for the construction of biological (taxonomic) databases should facilitate communication among biologists, the application in the near future of the most advanced database technologies (for example, use of the XML platform) and, eventually, the building of modern information systems. The work on the project is supported by RFBR grant N 02-07-90217 and the programs "The Information System on a Biodiversity of Russia" and Project N 15 "Antarctic Regions".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamilton, C.
1995-02-01
Views of the Solar System has been created as an educational tour of the solar system. It contains images and information about the Sun, planets, moons, asteroids and comets found within the solar system. The image processing for many of the images was done by the author. This tour uses hypertext to allow space travel by simply clicking on a desired planet. This causes information and images about the planet to appear on screen. While on a planet page, hyperlinks travel to pages about the moons and other relevant available resources. Unusual terms are linked to and defined in the Glossary page. Statistical information on the planets and satellites can be browsed through lists sorted by name, radius and distance. History of Space Exploration contains information about rocket history, early astronauts, space missions, spacecraft and detailed chronology tables of space exploration. The Table of Contents page has links to all of the various pages within Views of the Solar System.
Method and apparatus for faulty memory utilization
Cher, Chen-Yong; Andrade Costa, Carlos H.; Park, Yoonho; Rosenburg, Bryan S.; Ryu, Kyung D.
2016-04-19
A method for faulty memory utilization in a memory system includes: obtaining information regarding memory health status of at least one memory page in the memory system; determining an error tolerance of the memory page when the information regarding memory health status indicates that a failure is predicted to occur in an area of the memory system affecting the memory page; initiating a migration of data stored in the memory page when it is determined that the data stored in the memory page is non-error-tolerant; notifying at least one application regarding a predicted operating system failure and/or a predicted application failure when it is determined that data stored in the memory page is non-error-tolerant and cannot be migrated; and notifying at least one application regarding the memory failure predicted to occur when it is determined that data stored in the memory page is error-tolerant.
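The decision flow in this abstract can be paraphrased as a small dispatch function: when a failure is predicted for a page, non-error-tolerant data is migrated if possible and the application is warned otherwise, while error-tolerant data only triggers a notification. The names below are invented for illustration and do not come from the patent.

```python
from enum import Enum, auto

class Action(Enum):
    MIGRATE = auto()                # move non-error-tolerant data away
    NOTIFY_UNRECOVERABLE = auto()   # warn apps: data cannot be migrated
    NOTIFY_ERROR_TOLERANT = auto()  # inform apps of the predicted failure
    NONE = auto()                   # healthy page, nothing to do

def triage_page(failure_predicted, error_tolerant, migratable):
    """Sketch of the triage described in the abstract, applied to one
    memory page given its predicted health and data characteristics."""
    if not failure_predicted:
        return Action.NONE
    if error_tolerant:
        return Action.NOTIFY_ERROR_TOLERANT
    if migratable:
        return Action.MIGRATE
    return Action.NOTIFY_UNRECOVERABLE

print(triage_page(True, False, True))  # Action.MIGRATE
```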
CALS Database Usage and Analysis Tool Study
1991-09-01
…inference aggregation and cardinality aggregation as two distinct aspects of the aggregation problem. The paper develops the concept of a semantic… Keywords: aggregation, cardinality aggregation.
This page will house information leading up to the 2017 Urban Waters National Training Workshop. The agenda, hotel details and other quarterly updates will be posted to this page, including information about how to register.
Using the web to validate document recognition results: experiments with business cards
NASA Astrophysics Data System (ADS)
Oertel, Clemens; O'Shea, Shauna; Bodnar, Adam; Blostein, Dorothea
2004-12-01
The World Wide Web is a vast information resource which can be useful for validating the results produced by document recognizers. Three computational steps are involved, all of them challenging: (1) use the recognition results in a Web search to retrieve Web pages that contain information similar to that in the document, (2) identify the relevant portions of the retrieved Web pages, and (3) analyze these relevant portions to determine what corrections (if any) should be made to the recognition result. We have conducted exploratory implementations of steps (1) and (2) in the business-card domain: we use fields of the business card to retrieve Web pages and identify the most relevant portions of those Web pages. In some cases, this information appears suitable for correcting OCR errors in the business card fields. In other cases, the approach fails due to stale information: when business cards are several years old and the business-card holder has changed jobs, then websites (such as the home page or company website) no longer contain information matching that on the business card. Our exploratory results indicate that in some domains it may be possible to develop effective means of querying the Web with recognition results, and to use this information to correct the recognition results and/or detect that the information is stale.
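Step (3) above, using retrieved web text to correct a recognized field, can be approximated with simple fuzzy matching: compare the OCR'd value against tokens found on the retrieved page and propose the closest one. The sketch below assumes steps (1) and (2) have already produced the relevant page text; the page content and field values are invented examples.

```python
import difflib

def validate_field(ocr_value, web_page_text, cutoff=0.8):
    """Compare an OCR'd field against tokens from a retrieved web page.
    Returns (suggested_value, corrected): the closest sufficiently
    similar token, and whether it differs from the OCR result.
    If no token is similar enough, the OCR value is kept unchanged."""
    candidates = web_page_text.split()
    matches = difflib.get_close_matches(ocr_value, candidates,
                                        n=1, cutoff=cutoff)
    if not matches:
        return ocr_value, False  # no web evidence either way
    return matches[0], matches[0] != ocr_value

# Invented retrieved-page text and OCR output with one character error.
page = "Jane Doe Senior Engineer Acme Corporation jane.doe@acme.example"
print(validate_field("Corporatlon", page))  # ('Corporation', True)
```

A real system would weigh staleness (as the abstract notes, old cards may disagree with current pages), so disagreement alone does not prove an OCR error.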
History of Bioterrorism: Botulism
MedlinePlus Videos and Cool Tools
Chemical Speciation - General Information
This page includes general information about the Chemical Speciation Network that is not covered on the main page. Commonly visited documents, including calendars, site lists, and historical files for the program, are listed here.
Core Technical Capability Laboratory Management System
NASA Technical Reports Server (NTRS)
Shaykhian, Linda; Dugger, Curtis; Griffin, Laurie
2008-01-01
The Core Technical Capability Laboratory Management System (CTCLMS) consists of dynamically generated Web pages used to access a database containing detailed CTC lab data, with the software hosted on a server that allows users remote access.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-02
... curriculum instructions, physical equipment, fiscal affairs, and academic methods of the ... Officer can be obtained from the GSA's FACA Database: https://www.fido.gov/facadatabase/public.asp . The...
Health Information in Polish (polski)
URL of this page: https://medlineplus.gov/languages/polish.html
Poisson, J; Six, M; Morin, C; Fardet, L
2013-05-01
About 1% of the general population receive systemic glucocorticoids. The information about this treatment sought by patients is unknown. The website www.cortisone-info.fr aims to provide therapeutic information about glucocorticoids and glucocorticoid therapy. It was posted on January 16, 2012. The information available on the website is documented and based on the recent medical literature. The website consists of 43 pages divided into five main sections (generalities about glucocorticoids, adverse events, measures associated with glucocorticoid therapy, discontinuation of glucocorticoids and situations requiring attention). The website traffic between February 1st, 2012 and January 4, 2013 was analyzed using Google Analytics. During the study period, the website was visited by 67,496 people (average number of visitors per day: 33 in February 2012, 326 in December 2012). The number of page views was 230,496, or an average of 3.5 pages per visitor. Of these 230,496 page views, 145,431 (63.1%) were related to adverse events and 37,722 (16.4%) were related to generalities about glucocorticoids (e.g., what is cortisone? For which disease? How does it work?). Information particularly sought by visitors related to the diet to follow during glucocorticoid therapy (page accessed 11,946 times), what cortisone is (page accessed 11,829 times) and the effects of glucocorticoids on weight (page accessed 10,442 times). Knowledge of glucocorticoid-treated patients' expectations may help physicians optimize the information they give, thereby helping to reduce patients' concerns about glucocorticoids and to improve adherence to treatment. Copyright © 2013 Société nationale française de médecine interne (SNFMI). Published by Elsevier SAS. All rights reserved.
Web-based surveillance of public information needs for informing preconception interventions.
D'Ambrosio, Angelo; Agricola, Eleonora; Russo, Luisa; Gesualdo, Francesco; Pandolfi, Elisabetta; Bortolus, Renata; Castellani, Carlo; Lalatta, Faustina; Mastroiacovo, Pierpaolo; Tozzi, Alberto Eugenio
2015-01-01
The risk of adverse pregnancy outcomes can be minimized through the adoption of healthy lifestyles before pregnancy by women of childbearing age. Initiatives for the promotion of preconception health may be difficult to implement. The Internet can be used to build tailored health interventions through identification of the public's information needs. To this aim, we developed a semi-automatic web-based system for monitoring Google searches, web pages and activity on social networks regarding preconception health. Based on the American College of Obstetricians and Gynecologists (ACOG) guidelines and on the actual search behaviors of Italian Internet users, we defined a set of keywords targeting preconception care topics. Using these keywords, we analyzed the usage of the Google search engine and identified web pages containing preconception care recommendations. We also monitored how the selected web pages were shared on social networks. We analyzed discrepancies between searched and published information and the sharing pattern of the topics. We identified 1,807 Google search queries which generated a total of 1,995,030 searches during the study period. Less than 10% of the reviewed pages contained preconception care information, and in 42.8% the information was consistent with ACOG guidelines. Facebook was the most used social network for sharing. Nutrition, Chronic Diseases and Infectious Diseases were the most published and searched topics. Regarding Genetic Risk and Folic Acid, a high search volume was not associated with a high web page production, while Medication pages were more frequently published than searched. Vaccinations elicited high sharing although web page production was low; this effect was quite variable in time. Our study represents a resource to prioritize communication on specific topics on the web, to address misconceptions, and to tailor interventions to specific populations.
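The searched-versus-published discrepancy analysis this abstract describes can be illustrated with a simple ratio of search volume to page count per topic. All counts below are invented placeholders, not the study's data; they merely mimic its qualitative finding that some topics are heavily searched but sparsely published.

```python
# Hypothetical counts per preconception-care topic: how often the topic
# was searched on Google vs. how many web pages covered it.
topics = {
    "folic acid":   {"searches": 50_000, "pages": 40},
    "genetic risk": {"searches": 30_000, "pages": 25},
    "medications":  {"searches": 5_000,  "pages": 120},
    "nutrition":    {"searches": 80_000, "pages": 300},
}

def discrepancy(t):
    """Searches per available page: large values mark topics the public
    looks for but the web under-serves; small values mark topics
    published more often than they are searched."""
    return t["searches"] / t["pages"]

# Rank topics by how under-served they are, most under-served first.
ranked = sorted(topics, key=lambda k: discrepancy(topics[k]), reverse=True)
print(ranked)  # ['folic acid', 'genetic risk', 'nutrition', 'medications']
```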
Wang, Likun; Yang, Luhe; Peng, Zuohan; Lu, Dan; Jin, Yan; McNutt, Michael; Yin, Yuxin
2015-01-01
With the burgeoning development of cloud technology and services, there are an increasing number of users who prefer cloud to run their applications. All software and associated data are hosted on the cloud, allowing users to access them via a web browser from any computer, anywhere. This paper presents cisPath, an R/Bioconductor package deployed on cloud servers for client users to visualize, manage, and share functional protein interaction networks. With this R package, users can easily integrate downloaded protein-protein interaction information from different online databases with private data to construct new and personalized interaction networks. Additional functions allow users to generate specific networks based on private databases. Since the results produced with the use of this package are in the form of web pages, cloud users can easily view and edit the network graphs via the browser, using a mouse or touch screen, without the need to download them to a local computer. This package can also be installed and run on a local desktop computer. Depending on user preference, results can be publicized or shared by uploading to a web server or cloud driver, allowing other users to directly access results via a web browser. This package can be installed and run on a variety of platforms. Since all network views are shown in web pages, such package is particularly useful for cloud users. The easy installation and operation is an attractive quality for R beginners and users with no previous experience with cloud services.
Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G
2007-10-05
An effective strategy for managing protein databases is to provide mechanisms that transform raw data into consistent, accurate and reliable information. Such mechanisms greatly reduce operational inefficiencies and improve one's ability to address scientific objectives and interpret research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it with efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier, text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, the data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.
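The gain from moving a flat-file protein store into a relational database can be illustrated with a small, hedged sketch. The tables and columns below are invented for illustration only and are not Sting_RDB's actual schema; the point is the kind of cross-table query a text-based store cannot answer directly.

```python
import sqlite3

# Hypothetical miniature of a structural-parameter database (illustrative
# schema, not Sting_RDB's): one table of structures, one of per-residue
# parameters.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE protein (pdb_id TEXT PRIMARY KEY, resolution REAL);
CREATE TABLE residue (
    pdb_id TEXT REFERENCES protein(pdb_id),
    pos INTEGER, aa TEXT, accessibility REAL
);
""")
con.executemany("INSERT INTO protein VALUES (?, ?)",
                [("1abc", 1.8), ("2xyz", 2.9)])
con.executemany("INSERT INTO residue VALUES (?, ?, ?, ?)",
                [("1abc", 1, "GLY", 0.75),
                 ("1abc", 2, "TRP", 0.05),
                 ("2xyz", 1, "ALA", 0.60)])

# A query a flat file cannot answer without custom parsing code:
# solvent-exposed residues restricted to high-resolution structures.
rows = con.execute("""
    SELECT r.pdb_id, r.pos, r.aa
    FROM residue r JOIN protein p USING (pdb_id)
    WHERE r.accessibility > 0.5 AND p.resolution < 2.5
    ORDER BY r.pdb_id, r.pos
""").fetchall()
print(rows)  # -> [('1abc', 1, 'GLY')]
```

The join-plus-filter here is exactly the class of "complex queries" the abstract says became easy once the flat files were normalized into relations.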
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAM DEFENSE INTELLIGENCE AGENCY (DIA) FREEDOM OF INFORMATION ACT Pt. 292, App. A Appendix A to Part... search site, conducting the search and return may be charged as FOIA search costs. General Pre-Printed material, per printed page .02 Office copy, per page .15 Microfiche, per page .25 Aerial Photography...
Imaged Document Optical Correlation and Conversion System (IDOCCS)
NASA Astrophysics Data System (ADS)
Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.
1999-03-01
Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). In addition, many organizations are converting their paper archives to electronic images, which are stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources. The Imaged Document Optical Correlation and Conversion System (IDOCCS) provides a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides the search and retrieval capability of document images. The IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and can even determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo, or documents with a particular individual's signature block, can be singled out. With this dual capability, IDOCCS outperforms systems that rely on optical character recognition as a basis for indexing and storing only the textual content of documents for later retrieval.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-12
..., page 71473. FAA Form 7480-1 (Notice of Landing Area Proposal) is used to collect information about any... Activities: Requests for Comments; Clearance of Renewed Approval of Information Collection: Notice of Landing... information collection. The Federal Register Notice with a 60-day comment period soliciting [[Page 22022...
Information to Include in Curriculum Vitae | Cancer Prevention Fellowship Program
Applicants are encouraged to use their current curriculum vitae and to add any necessary information. Please include your name and a page number on each page. Some of the information requested below will not be applicable to all individuals. Perso
Artieta-Pinedo, Isabel; Paz-Pascual, Carmen; Grandes, Gonzalo; Villanueva, Gemma
2018-03-01
The aim of this study is to evaluate the quality of web pages found by women when carrying out an exploratory search concerning pregnancy, childbirth, the postpartum period and breastfeeding. A descriptive study of the first 25 web pages that appeared in the search engines Google, Yahoo and Bing in October 2014 in the Basque Country (Spain), when entering eight Spanish words and seven English words related to pregnancy, childbirth, the postpartum period, breastfeeding and newborns. Web pages aimed at healthcare professionals and forums were excluded. Reliability was evaluated using the LIDA questionnaire, and the contents of the web pages with the highest scores were then described. A total of 126 web pages were found using the key search words. Of these, 14 scored in the top 30% for reliability. The content analysis of these found that the mean score for "references to the source of the information" was 3.4 (SD: 2.17), that for "up-to-date" was 4.30 (SD: 1.97) and that for "conflict of interest statement" was 5.90 (SD: 2.16). The mean for web pages created by universities and official bodies was 13.64 (SD: 4.47), whereas the mean for those created by private bodies was 11.23 (SD: 4.51) (F(1,124) = 5.27, p = 0.02). The content analysis of these web pages found that the most commonly discussed topic was breastfeeding, followed by self-care during pregnancy and the onset of childbirth. In this study, web pages from established healthcare or academic institutions were found to contain the most reliable information. The significant number of web pages with poor-quality information found in this study indicates the need for healthcare professionals to guide women when sourcing information online. As the origin of a web page has a direct effect on reliability, the involvement of healthcare professionals in the use, counselling and generation of new technologies as an intervention tool is increasingly essential. Copyright © 2017 Elsevier Ltd. All rights reserved.
MedlinePlus Connect: Technical Information
... Service Technical Information Page MedlinePlus Connect Implementation Options Web Application How does it work? Responds to requests ... examples of MedlinePlus Connect Web Application response pages. Web Service How does it work? Responds to requests ...
Know How to Use Your Asthma Inhaler
MedlinePlus Videos and Cool Tools
Health Information in Indonesian (Bahasa Indonesia)
URL of this page: https://medlineplus.gov/languages/indonesian.html
Polio vaccine - what you need to know
... is taken in its entirety from the CDC Polio Vaccine Information Statement (VIS): www.cdc.gov/vaccines/ ... statements/ipv.html CDC review information for the Polio VIS: Page last reviewed: July 20, 2016 Page ...
Experience with a Spanish-language laparoscopy website.
Moreno-Sanz, Carlos; Seoane-González, Jose B
2006-02-01
Although there are no clearly defined electronic tools for continuing medical education (CME), new information technologies offer a basic platform for presenting training content on the internet. Due to the shortage of websites about minimally invasive surgery in the Spanish language, we set up a topical website in Spanish. This study considers the experience with the website between April 2001 and January 2005. To study the activity of the website, the registry information was analyzed descriptively using the log files of the server. To study the characteristics of the users, we searched the database of registered users. We found a total of 107,941 visits to our website and a total of 624,895 page downloads. Most visits to the site were made from Spanish-speaking countries. The most frequent professional profile of the registered users was that of general surgeon. The development, implementation, and evaluation of Spanish-language CME initiatives over the internet is promising but presents challenges.
A Reference Proteomic Database of Lactobacillus plantarum CMCC-P0002
Tian, Wanhong; Yu, Gang; Liu, Xiankai; Wang, Jie; Feng, Erling; Zhang, Xuemin; Chen, Bei; Zeng, Ming; Wang, Hengliang
2011-01-01
Lactobacillus plantarum is a widespread probiotic bacterium found in many fermented food products. In this study, the whole-cell proteins and secretory proteins of L. plantarum were separated by two-dimensional electrophoresis. A total of 434 proteins were identified by tandem mass spectrometry, including a plasmid-encoded hypothetical protein, pLP9000_05. The 20 most abundant proteins are listed to support further genetic manipulation of L. plantarum, such as the construction of high-level expression systems. Furthermore, the first interaction map of L. plantarum was established by the Blue-Native/SDS-PAGE technique. A heterodimeric complex composed of the maltose phosphorylases Map3 and Map2, and two homodimeric complexes composed of Map3 and of Map2, respectively, were identified at the same time, indicating the important roles of these proteins. These findings provide valuable information for further proteomic research on L. plantarum. PMID:21998671
A reference proteomic database of Lactobacillus plantarum CMCC-P0002.
Zhu, Li; Hu, Wei; Liu, Datao; Tian, Wanhong; Yu, Gang; Liu, Xiankai; Wang, Jie; Feng, Erling; Zhang, Xuemin; Chen, Bei; Zeng, Ming; Wang, Hengliang
2011-01-01
Lactobacillus plantarum is a widespread probiotic bacterium found in many fermented food products. In this study, the whole-cell proteins and secretory proteins of L. plantarum were separated by two-dimensional electrophoresis. A total of 434 proteins were identified by tandem mass spectrometry, including a plasmid-encoded hypothetical protein, pLP9000_05. The 20 most abundant proteins are listed to support further genetic manipulation of L. plantarum, such as the construction of high-level expression systems. Furthermore, the first interaction map of L. plantarum was established by the Blue-Native/SDS-PAGE technique. A heterodimeric complex composed of the maltose phosphorylases Map3 and Map2, and two homodimeric complexes composed of Map3 and of Map2, respectively, were identified at the same time, indicating the important roles of these proteins. These findings provide valuable information for further proteomic research on L. plantarum.
... all breast tumors occur within the ducts, and a tumor may be present that is not identified on the galactogram. Additional Information and Resources: RTAnswers.org, Radiation Therapy for Breast Cancer.
Atmospheric Science Data Center
2017-12-22
... The First ISCCP Regional Experiment is a series of field missions which have collected cirrus and marine stratocumulus ... Home Page (tar file) FIRE I - Extended Time Observations Home Page (tar file) FIRE Project Home Page for ...
Chapter 07: Species description pages
Alex C. Wiedenhoeft
2011-01-01
These pages are written to be the final step in the identification process; you will be directed to them by the key in Chapter 6. Each species or group of similar species in the same genus has its own set of pages. The information in the first page describes the characteristics of the wood covered in the manual. The page shows images of similar or confusable woods,...
Pinyopornpanish, Kanokporn; Jiraporncharoen, Wichuda; Thaikla, Kanittha; Yoonut, Kulyapa; Angkurawaranon, Chaisiri
2018-03-21
Evidence from other countries has suggested that many controlled drugs are also offered online, even though it is illegal to sell these drugs without a license. The aim was to evaluate the current content related to the supply and demand of sedatives and analgesic drugs available online in Thailand, with a particular focus on Facebook. A team of reviewers manually searched for data by entering keywords related to analgesic drugs and sedatives. The contents of the websites were screened for supply- and demand-related information. A total of 5,352 websites were found publicly available. The number of websites and Facebook pages containing information potentially related to the supply and demand of analgesic drugs and sedatives was limited. Nine websites sold sedatives, and six websites sold analgesics directly. Fourteen Facebook pages were found, including 7 sedative pages and 7 analgesic pages. Within one year, the three pages that remained active multiplied their number of followers three- to nine-fold. The most popular Facebook page had over 2,900 followers. Both the internet and social media contain sites and pages where sedatives and analgesics are illegally advertised. These websites are searchable through common search engines. Although the number of websites is limited, the number of followers on these Facebook pages does suggest a growing number of people who are interested in such pages. Our study emphasizes the importance of monitoring, and of developing plans to address, the online marketing of prescription drugs in Thailand.
Cloud/web mapping and geoprocessing services - Intelligently linking geoinformation
NASA Astrophysics Data System (ADS)
Veenendaal, Bert; Brovelli, Maria Antonia; Wu, Lixin
2016-04-01
We live in a world that is alive with information and geographies. "Everything happens somewhere" (Tosta, 2001). This reality is being exposed in the digital earth technologies providing a multi-dimensional, multi-temporal and multi-resolution model of the planet, based on the needs of diverse actors: from scientists to decision makers, communities and citizens (Brovelli et al., 2015). We are building up a geospatial information infrastructure updated in real time thanks to mobile, positioning and sensor observations. Users can navigate, not only through space but also through time, to access historical data and future predictions based on social and/or environmental models. But how do we find the information about certain geographic locations or localities when it is scattered in the cloud and across the web of data behind a diversity of databases, web services and hyperlinked pages? We need to be able to link geoinformation together in order to integrate it, make sense of it, and use it appropriately for managing the world and making decisions.
Using the World Wide Web for GIDEP Problem Data Processing at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
McPherson, John W.; Haraway, Sandra W.; Whirley, J. Don
1999-01-01
Since April 1997, Marshall Space Flight Center has been using electronic transfer and the web to support our processing of Government-Industry Data Exchange Program (GIDEP) and NASA ALERT information. Specific aspects include: (1) extraction of ASCII text information from GIDEP for loading into Word documents for e-mail to ALERT actionees; (2) downloading of GIDEP form image formats in Adobe Acrobat (.pdf) for internal storage and display on the MSFC ALERT web page; (3) linkage of stored GIDEP problem forms with summary information for access from the MSFC ALERT Distribution Summary Chart or from an HTML table of released MSFC ALERTs; (4) archival of historic ALERTs for reference by GIDEP ID, MSFC ID, or MSFC release date; (5) on-line tracking of ALERT response status using a Microsoft Access database and the web; and (6) on-line response to ALERTs from MSFC actionees through interactive web forms. The technique, benefits, effort, coordination, and lessons learned for each aspect are covered herein.
Delayed Instantiation Bulk Operations for Management of Distributed, Object-Based Storage Systems
2009-08-01
source and destination object sets, while they have attribute pages to indicate that history. Fourth, we allow for operations to occur on any objects...client dialogue to the PostgreSQL database, where server-side functions implement the service logic for the requests. The translation is done...to satisfy client requests, and performs delayed instantiation bulk operations. It is built around a PostgreSQL database with tables for storing
Belmonte, M
In this article we review two of the main Internet information services for seeking references to bibliographies and journals, and the electronic publications on the Internet, with particular emphasis on those related to the neurosciences. The main bibliographic indices are: 1. MEDLINE. By definition, this is the bibliographic database. It is an online version of the smaller-format journal, published weekly, containing the title pages and summaries of most of the biomedical journals. It is based on the Index Medicus, a bibliographic index (on paper) which annually collects references to the most important biomedical journals. 2. EMBASE (Excerpta Medica). It is a direct competitor to MEDLINE, although it has the disadvantage of lacking government subsidies and being privately financed only. This bibliographic database, produced by the publisher Elsevier of Holland, covers approximately 3,500 biomedical journals from 110 countries, and is particularly useful for articles on drugs and toxicology. 3. Current Contents. It publishes the index Current Contents, a classic in this field, much appreciated by scientists in all areas: medicine, social sciences, technology, arts and humanities. At present, it is available in an online version known as CCC (Current Contents Connect), accessible through the web, but only to subscribers. There is a growing tendency towards the publication of biomedical journals on the Internet. Its full development, if correctly carried out, will mean the opportunity to have the best information available and will result in great benefit to all those who are already using new information technology.
Information system of mineral deposits in Slovenia
NASA Astrophysics Data System (ADS)
Hribernik, K.; Rokavec, D.; Šinigioj, J.; Šolar, S.
2010-03-01
At the Geological Survey of Slovenia, the need for a comprehensive overview and control of the deposits of available non-metallic mineral raw materials and of their exploitation became urgent. Within the framework of the Geologic Information System, we established the Database of non-metallic mineral deposits, comprising all important data on deposits and concessionaires. The relational database was built with the program package MS Access, but in 2008 we plan to transfer it to an SQL server. The register holds 272 deposits and 200 concessionaires. The mineral resources information system of Slovenia, which was started back in 2002, consists of two integrated parts: the aforementioned relational database of mineral deposits, which relates information in a tabular way so that the rules of relational algebra can be applied, and a geographic information system (GIS), which relates spatial information on the deposits. The complex relationships between objects and the concepts of normalized data structures lead to a practical, informative and useful data model, transparent to the user and supporting better decision-making by allowing future scenarios to be developed and inspected. The computerized storage and display system is, as already said, developed and managed under the support of the Geological Survey of Slovenia, which conducts research on the occurrence, quality, quantity, and availability of mineral resources in order to help the nation make informed decisions using earth-science information. Information about each deposit is stored in a record of approximately one hundred data fields. A numeric record number uniquely identifies each site. The data fields are grouped under principal categories. Each record comprises elementary data on the deposit (name, type, location, prospect, rock), administrative data (concessionaire, number of the decree in the official paper, object of the decree, number of the contract and its duration) and mineral resource data (produced amount and size of the exploration area).
The data can also be searched, sorted and printed using any of these fields. New records are added annually, and existing records are updated or upgraded. The relational database is connected with the scanned exploration/exploitation areas of the deposits, defined on the basis of digital orthophotos. A register of those areas is indispensable for spatial planning and for municipal and regional spatial strategy development. The database is also part of an internet application for quick search and review of the data, and part of the web page of the mineral resources of Slovenia. The technology chosen for the internet application is ESRI's ArcIMS Internet Map Server. ArcIMS allows users to readily display, analyze, and interpret spatial data from the desktop using a web browser connected to the Internet. We believe that there is an opportunity for cooperation within this activity. We can offer a single location where users can browse relatively simply for geoscience-related digital data sets.
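The record layout described above (elementary, administrative, and resource fields keyed by a unique record number, searchable and sortable on any field) can be sketched as a minimal relational schema. The field names and sample rows below are illustrative assumptions, not the Survey's actual MS Access schema:

```python
import sqlite3

# Hypothetical miniature of the deposit register (illustrative fields only).
con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE deposit (
    record_no INTEGER PRIMARY KEY,        -- unique numeric site identifier
    name TEXT, rock TEXT,                 -- elementary data
    concessionaire TEXT, decree_no TEXT,  -- administrative data
    produced_tonnes REAL, area_ha REAL    -- mineral resource data
)
""")
con.executemany("INSERT INTO deposit VALUES (?, ?, ?, ?, ?, ?, ?)",
                [(1, "Site A", "limestone", "Firm X", "D-12/04", 15000.0, 4.2),
                 (2, "Site B", "clay", "Firm Y", "D-03/06", 8200.0, 2.7)])

# "Searched, sorted and printed using any of these fields":
rows = con.execute(
    "SELECT name FROM deposit WHERE rock = ? ORDER BY produced_tonnes DESC",
    ("limestone",)).fetchall()
print(rows)  # -> [('Site A',)]
```

Normalizing the register this way is what makes the later steps (linking records to scanned exploitation areas and serving them through ArcIMS) straightforward joins rather than bespoke file handling.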
2013-01-01
Background Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes, no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore, software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Results Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: • Support for multi-component compounds (mixtures) • Import and export of SD-files • Optional security (authorization) For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files.
Conclusions By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-File import and export to simple method calls. The framework offers good search performance on a standard laptop without any database tuning. This is also due to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the projects web page on bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework. PMID:24325762
Kiener, Joos
2013-12-11
Research in organic chemistry generates samples of novel chemicals together with their properties and other related data. The involved scientists must be able to store this data and search it by chemical structure. There are commercial solutions for common needs like chemical registration systems or electronic lab notebooks. However, for the specific requirements of in-house databases and processes, no such solutions exist. Another issue is that commercial solutions carry the risk of vendor lock-in and may require an expensive license for a proprietary relational database management system. To speed up and simplify development for applications that require chemical structure search capabilities, I have developed Molecule Database Framework. The framework abstracts the storing and searching of chemical structures into method calls. Therefore, software developers do not require extensive knowledge about chemistry and the underlying database cartridge. This decreases application development time. Molecule Database Framework is written in Java and I created it by integrating existing free and open-source tools and frameworks. The core functionality includes: support for multi-component compounds (mixtures); import and export of SD-files; and optional security (authorization). For chemical structure searching, Molecule Database Framework leverages the capabilities of the Bingo Cartridge for PostgreSQL and provides type-safe searching, caching, transactions and optional method-level security. Molecule Database Framework supports multi-component chemical compounds (mixtures). Furthermore, the design of entity classes and the reasoning behind it are explained. By means of a simple web application I describe how the framework could be used. I then benchmarked this example application to create some basic performance expectations for chemical structure searches and import and export of SD-files.
By using a simple web application it was shown that Molecule Database Framework successfully abstracts chemical structure searches and SD-File import and export to simple method calls. The framework offers good search performance on a standard laptop without any database tuning. This is also due to the fact that chemical structure searches are paged and cached. Molecule Database Framework is available for download on the projects web page on bitbucket: https://bitbucket.org/kienerj/moleculedatabaseframework.
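The framework itself is Java; purely as an illustration of the pattern the abstract describes (structure storage and search hidden behind method calls, with searches paged and cached), here is a hypothetical sketch in Python. The substring match is a deliberate stand-in for a real substructure search, which the framework delegates to the Bingo cartridge:

```python
# Illustrative sketch only: not the framework's actual API.
class MoleculeRepository:
    def __init__(self):
        self._store = {}   # id -> SMILES string
        self._cache = {}   # query -> full sorted hit list

    def register(self, mol_id, smiles):
        self._store[mol_id] = smiles

    def substructure_search(self, query, page=0, page_size=10):
        # Results are cached per query and returned page by page,
        # mirroring the paging and caching the abstract mentions.
        if query not in self._cache:
            self._cache[query] = sorted(
                i for i, s in self._store.items() if query in s)
        start = page * page_size
        return self._cache[query][start:start + page_size]

repo = MoleculeRepository()
repo.register("m1", "CCO")       # ethanol
repo.register("m2", "CC(=O)O")   # acetic acid
repo.register("m3", "c1ccccc1")  # benzene
print(repo.substructure_search("CC"))  # -> ['m1', 'm2']
```

The point of the abstraction is visible in the last line: the caller sees only a method call and never touches SQL, the cartridge, or chemistry-aware matching code.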
Jia, Ying; Cantu, Bruno A; Sánchez, Elda E; Pérez, John C
2008-06-15
To advance our knowledge of snake venom composition and of the transcripts expressed in the venom gland at the molecular level, we constructed a cDNA library from the venom gland of Agkistrodon piscivorus leucostoma to generate an expressed sequence tag (EST) database. From the 2112 randomly sequenced independent clones, we obtained ESTs for 1309 (62%) cDNAs, which showed significant deduced amino acid sequence similarity (scores >80) to previously characterized proteins in the National Center for Biotechnology Information (NCBI) database. Ribosomal proteins make up 47 clones (2%), and the remaining 756 (36%) cDNAs either are of unknown identity or show BLASTX sequence identity scores of <80 with known GenBank accessions. The most highly expressed gene, encoding phospholipase A(2) (PLA(2)) and accounting for 35% of A. p. leucostoma venom gland cDNAs, was identified and further confirmed by applying crude venom to sodium dodecyl sulfate/polyacrylamide gel electrophoresis (SDS-PAGE) and protein sequencing. A total of 180 representative genes were obtained from the sequence assemblies and deposited in the EST database. Clones showing sequence identity to disintegrins, thrombin-like enzymes, hemorrhagic toxins, fibrinogen clotting inhibitors and plasminogen activators were also identified in our EST database. These data can be used to develop a research program that will help us identify genes encoding proteins of medical importance or proteins involved in the mechanisms of venom toxicity.
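The clone tallies quoted in the abstract (1309 matched, 47 ribosomal, 756 unknown, out of 2112 sequenced) can be checked with a few lines of arithmetic:

```python
# Verify that the three reported categories partition the sequenced clones
# and that the quoted percentages follow from the counts.
total = 2112
matched, ribosomal, unknown = 1309, 47, 756

assert matched + ribosomal + unknown == total
print(round(100 * matched / total))    # -> 62  (matches the quoted 62%)
print(round(100 * ribosomal / total))  # -> 2   (matches the quoted 2%)
print(round(100 * unknown / total))    # -> 36  (matches the quoted 36%)
```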
Fallis, Don; Frické, Martin
2002-01-01
To identify indicators of accuracy for consumer health information on the Internet. The results will help lay people distinguish accurate from inaccurate health information on the Internet. Several popular search engines (Yahoo, AltaVista, and Google) were used to find Web pages on the treatment of fever in children. The accuracy and completeness of these Web pages was determined by comparing their content with that of an instrument developed from authoritative sources on treating fever in children. The presence on these Web pages of a number of proposed indicators of accuracy, taken from published guidelines for evaluating the quality of health information on the Internet, was noted. Correlation between the accuracy of Web pages on treating fever in children and the presence of proposed indicators of accuracy on these pages. Likelihood ratios for the presence (and absence) of these proposed indicators. One hundred Web pages were identified and characterized as "more accurate" or "less accurate." Three indicators correlated with accuracy: displaying the HONcode logo, having an organization domain, and displaying a copyright. Many proposed indicators taken from published guidelines did not correlate with accuracy (e.g., the author being identified and the author having medical credentials) or inaccuracy (e.g., lack of currency and advertising). This method provides a systematic way of identifying indicators that are correlated with the accuracy (or inaccuracy) of health information on the Internet. Three such indicators have been identified in this study. Identifying such indicators and informing the providers and consumers of health information about them would be valuable for public health care.
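The likelihood-ratio calculation the study reports can be sketched as follows. The counts are illustrative, not the study's data; here the "positive" condition is a page falling in the "more accurate" group and the test is whether a proposed indicator (say, a displayed copyright) is present:

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Likelihood ratios for an indicator's presence and absence.

    tp: accurate pages showing the indicator
    fp: inaccurate pages showing the indicator
    fn: accurate pages lacking the indicator
    tn: inaccurate pages lacking the indicator
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)  # indicator present
    lr_neg = (1 - sensitivity) / specificity  # indicator absent
    return lr_pos, lr_neg

# Hypothetical counts: 30 of 40 accurate pages and 20 of 60 inaccurate
# pages display a copyright.
lr_pos, lr_neg = likelihood_ratios(tp=30, fp=20, fn=10, tn=40)
print(round(lr_pos, 2))  # -> 2.25  (presence favours accuracy)
print(round(lr_neg, 2))  # -> 0.38  (absence favours inaccuracy)
```

An LR+ well above 1, as here, is what qualifies an indicator as correlated with accuracy in the study's sense.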
MedlinePlus FAQ: MedlinePlus and MEDLINE/PubMed
... What is the difference between MedlinePlus and MEDLINE/PubMed? ... latest health professional articles on your topic. MEDLINE/PubMed: Is a database of professional biomedical literature Is ...
Error Tracking System (ETS) is a database used to store and track error notifications sent by users of EPA's web site. ETS is managed by OIC/OEI. OECA's ECHO and OEI Envirofacts use it. Error notifications submitted through the Contact Us link on EPA's home page also flow into it.
Pneumococcal polysaccharide vaccine - what you need to know
... taken in its entirety from the CDC Pneumococcal Polysaccharide Vaccine Information Statement (VIS): www.cdc.gov/vaccines/ ... statements/ppv.html CDC review information for Pneumococcal Polysaccharide VIS: Page last reviewed: April 24, 2015 Page ...
Varicella (chickenpox) vaccine - what you need to know
... is taken in its entirety from the CDC Chickenpox Vaccine Information Statement (VIS): www.cdc.gov/vaccines/ ... statements/varicella.html CDC review information for the Chickenpox VIS: Page last reviewed: February 12, 2018 Page ...
Learn what search terms led users to choose your page in their search results, and what terms they entered in the EPA search box after visiting your page. Use this information to improve the links and content on the page.
Visual Design Principles Applied To World Wide Web Construction.
ERIC Educational Resources Information Center
Luck, Donald D.; Hunter, J. Mark
This paper describes basic types of World Wide Web pages and presents design criteria for page layout based on principles of visual literacy. Discussion focuses on pages that present information in the following styles: billboard; directory/index; textual; and graphics. Problems and solutions in Web page construction are explored according to…
Promotion of tobacco products on Facebook: policy versus practice.
Jackler, Robert K; Li, Vanessa Y; Cardiff, Ryan A L; Ramamurthi, Divya
2018-04-05
Facebook has a comprehensive set of policies intended to inhibit promotion and sales of tobacco products. Their effectiveness has yet to be studied. Leading tobacco brands (388) were identified via Nielsen and Ranker databases and 108 were found to maintain brand-sponsored Facebook pages. Key indicators of alignment with Facebook policy were evaluated. Purchase links (eg, 'shop now' button) on brand-sponsored pages were found for hookah tobaccos (41%), e-cigarettes (74%), smokeless (50%) and cigars (31%). Sales promotions (eg, discount coupons) were present in hookah tobacco (48%), e-cigarette (76%) and cigar (69%) brand-sponsored pages. While conventional cigarettes did not maintain brand-sponsored pages, they were featured in 80% of online tobacco vendors' Facebook pages. The requirement for age gating, to exclude those <18 from viewing tobacco promotion, was absent in hookah tobacco (78%), e-cigarette (62%) and cigar (21%) brand-sponsored pages and for 90% of online tobacco stores which promote leading cigarette brands (eg, Marlboro, Camel). Many of the brand-sponsored tobacco product pages had thousands of 'likes'. It is laudable that Facebook has policies intended to interdict tobacco promotion throughout its platform. Nevertheless, widespread tobacco promotion and sales were found at variance with the company's policies governing advertising, commerce, page content and under age access. Vetting could be improved by automated screening in partnership with human reviewers. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Evaluation models and criteria of the quality of hospital websites: a systematic review study.
Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar
2017-02-01
Hospital websites are important tools in establishing communication and exchanging information between patients and staff, and thus should enjoy an acceptable level of quality. The aim of this study was to identify proper models and criteria to evaluate the quality of hospital websites. This research was a systematic review study. International databases such as Science Direct, Google Scholar, PubMed, ProQuest, Ovid, Elsevier, Springer, and EBSCO, together with regional databases such as Magiran, Scientific Information Database, Persian Journal Citation Report (PJCR), and IranMedex, were searched using suitable keywords including website, evaluation, and quality of website. Full-text papers related to the research were included, and the criteria and sub-criteria for evaluating website quality were extracted and classified. Various models and criteria for evaluating website quality were identified: the WEB-Q-IM, Mile, Minerva, Seruni Luci, and Web-Qual models. The criteria of accessibility, content, apparent features of the website, design procedure, graphics, and page attractiveness were mentioned in the majority of studies. Accessibility, content, design method, security, and confidentiality of personal information are the essential criteria in the evaluation of all websites. It is suggested that ease of use, graphics, attractiveness, and other apparent properties of websites be considered sub-criteria of user-friendliness, and that website speed and accessibility be considered sub-criteria of efficiency. When determining criteria for evaluating website quality, attention to major differences in the specific features of each website is essential.
Fact Sheets for the Architectural Coating Rule for Volatile Organic Compounds
This page contains an August 1998 fact sheet with information regarding the National Volatile Organic Compounds Emission Standards for Architectural Coatings Rule. This page also contains information on applicability and compliance for this rule.
NASA Technical Reports Server (NTRS)
Garcia, Joseph A.; Smith, Charles A. (Technical Monitor)
1998-01-01
The document consists of a publicly available web site (george.arc.nasa.gov) for Joseph A. Garcia's personal web pages in the AI division. Only general information, and no technical material, will be posted. All the information is unclassified.
Hepatitis B vaccine - what you need to know
... is taken in its entirety from the CDC Hepatitis B Vaccine Information Statement (VIS): www.cdc.gov/vaccines/ ... statements/hep-b.html CDC review information for Hepatitis B VIS: Page last reviewed: July 20, 2016 Page ...
Demonstration of holographic smart card system using the optical memory technology
NASA Astrophysics Data System (ADS)
Kim, JungHoi; Choi, JaeKwang; An, JunWon; Kim, Nam; Lee, KwonYeon; Jeon, SeckHee
2003-05-01
In this paper, we demonstrate a holographic smart card system using a digital holographic memory technique, in which the reference beam is encrypted by a random phase mask to prevent unauthorized users from accessing the stored digital page. The input data, which include document data, a facial photograph, and a fingerprint for identification, are encoded digitally and then coupled with the reference beam modulated by the random phase mask. The proposed system can therefore record data on the order of MB to GB and read out all personal information from a single card without any additional database system. Moreover, the recorded digital holograms cannot be reconstructed without the phase key and cannot be copied using computers, scanners, or photography.
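The phase-key principle described above can be illustrated numerically: a data page multiplied by a random phase is only recoverable with the conjugate of that phase. This is a minimal single-plane sketch of the idea, not the paper's actual optical setup; all names and the binary-page model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def encrypt(page, phase_key):
    """Couple the data page with a random phase mask."""
    return page * np.exp(1j * phase_key)

def decrypt(hologram, phase_key):
    """Recover the page by applying the conjugate phase."""
    return np.real(hologram * np.exp(-1j * phase_key))

page = rng.integers(0, 2, size=(8, 8)).astype(float)  # binary data page
key = rng.uniform(0, 2 * np.pi, size=(8, 8))          # random phase mask

recovered = decrypt(encrypt(page, key), key)          # correct key: page restored
wrong_key = rng.uniform(0, 2 * np.pi, size=(8, 8))
garbled = decrypt(encrypt(page, key), wrong_key)      # wrong key: page lost
```

With the correct key the page is recovered exactly; with any other key the nonzero pixels are scaled by the cosine of the phase mismatch and the page is unreadable.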
NASA Astrophysics Data System (ADS)
Templeton, Matthew R.
2009-08-01
Nova Ophiuchi 2009 was discovered by Koichi Itagaki, Teppo-Cho, Yamagata, Japan, at unfiltered CCD magnitude 10.1 on August 16.515 UT, and confirmed by him on Aug. 16.526. After posting to the CBET Unconfirmed Observations page, the object was confirmed independently by several observers. The discovery and confirmatory information were initially reported in CBET 1910, CBET 1911, and AAVSO Special Notice #166. The nova, located in a very crowded field within the Milky Way, is reported by T. Kato (vsnet-alert 11399) to have a large B-V (+1.6), indicating it is highly reddened. N Oph 2009 has been assigned the identifiers VSX J173819.7-264413 and the AUID 000-BJP-605. Please submit observations to the AAVSO International Database using the name N OPH 2009.
Google Analytics Reports about Search Terms
Learn what search terms brought users to choose your page in their search results, and what terms they entered in the EPA search box after visiting your page. Use this information to improve links and content on the page.
Fei, Lin; Zhao, Jing; Leng, Jiahao; Zhang, Shujian
2017-10-12
The ALIPORC full-text database is a specialized full-text database of acupuncture literature from the Republic of China period. Under construction since 2015, it focuses on books, articles, and advertising documents on acupuncture completed or published during the Republic of China. The database aims to make Republic-of-China acupuncture literature broadly available through diverse retrieval approaches and accurate content presentation, to support exchange among scholars, to reduce the paper damage caused by repeated paging of fragile originals, and to simplify retrieval of rare literature. The authors describe the database's sources, characteristics, and current state of construction, and discuss improving its efficiency and completeness and deepening the study of Republic-of-China acupuncture literature.
Gavrielides, Mike; Furney, Simon J; Yates, Tim; Miller, Crispin J; Marais, Richard
2014-01-01
Whole genomes, whole exomes and transcriptomes of tumour samples are sequenced routinely to identify the drivers of cancer. The systematic sequencing and analysis of tumour samples, as well as other oncogenomic experiments, necessitates the tracking of relevant sample information throughout the investigative process. These meta-data of the sequencing and analysis procedures include information about the samples and projects as well as the sequencing centres, platforms, data locations, results locations, alignments, analysis specifications and further information relevant to the experiments. The current work presents a sample tracking system for oncogenomic studies (Onco-STS) to store these data and make them easily accessible to the researchers who work with the samples. The system is a web application, which includes a database and a front-end web page that allows remote access, submission and updating of the sample data in the database. The web-application development framework Grails was used for the development and implementation of the system. The resulting Onco-STS solution is efficient, secure and easy to use and is intended to replace the manual data handling of text records. Onco-STS allows simultaneous remote access to the system, making collaboration among researchers more effective. The system stores both information on the samples in oncogenomic studies and details of the analyses conducted on the resulting data. Onco-STS is based on open-source software, is easy to develop and can be modified according to a research group's needs. Hence it is suitable for laboratories that do not require a commercial system.
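The kind of sample-and-analysis metadata store the abstract describes can be sketched with a relational table. This uses Python's sqlite3 rather than the Grails stack the authors used, and the field names are illustrative assumptions, not Onco-STS's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sample (
        sample_id     TEXT PRIMARY KEY,
        project       TEXT,
        centre        TEXT,   -- sequencing centre
        platform      TEXT,   -- sequencing platform
        data_location TEXT,   -- where the raw reads live
        analysis      TEXT    -- analysis specification / results location
    )""")

def add_sample(sample_id, project, centre, platform, data_location, analysis):
    """Register one sample's tracking metadata."""
    conn.execute("INSERT INTO sample VALUES (?, ?, ?, ?, ?, ?)",
                 (sample_id, project, centre, platform, data_location, analysis))

def samples_for_project(project):
    """List (sample_id, platform) pairs for a project."""
    cur = conn.execute(
        "SELECT sample_id, platform FROM sample WHERE project = ? "
        "ORDER BY sample_id", (project,))
    return cur.fetchall()

add_sample("S001", "melanoma-wgs", "centre-A", "HiSeq", "/data/S001", "bwa+mutect")
add_sample("S002", "melanoma-wgs", "centre-B", "HiSeq", "/data/S002", "bwa+mutect")
```

A front-end web page, as in Onco-STS, would sit on top of queries like `samples_for_project` to let collaborators view and update records remotely.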
PrimerZ: streamlined primer design for promoters, exons and human SNPs.
Tsai, Ming-Fang; Lin, Yi-Jung; Cheng, Yu-Chang; Lee, Kuo-Hsi; Huang, Cheng-Chih; Chen, Yuan-Tsong; Yao, Adam
2007-07-01
PrimerZ (http://genepipe.ngc.sinica.edu.tw/primerz/) is a web application dedicated primarily to primer design for genes and human SNPs. PrimerZ accepts genes by gene name or Ensembl accession code, and SNPs by dbSNP rs or AFFY_Probe IDs. The promoter and exon sequence information of all gene transcripts fetched from the Ensembl database (http://www.ensembl.org) are processed before being passed on to Primer3 (http://frodo.wi.mit.edu/cgi-bin/primer3/primer3_www.cgi) for individual primer design. All results returned from Primer3 are organized and integrated in a specially designed web page for easy browsing. Besides the web page presentation, CSV text file export is also provided for enhanced user convenience. PrimerZ automates highly standard but tedious gene primer design to improve the success rate of PCR experiments. More than 2000 primers have been designed with PrimerZ at our institute since 2004 and the success rate is over 70%. The addition of several new features has made PrimerZ even more useful to the research community in facilitating primer design for promoters, exons and SNPs.
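Primer3's command-line interface reads Boulder-IO records (tag=value lines terminated by a lone `=`), so the hand-off PrimerZ automates can be sketched as assembling such a record from a fetched sequence. The specific tags and the product-size default below are illustrative assumptions, not PrimerZ's actual settings.

```python
def boulder_record(seq_id, template, target_start, target_len,
                   product_range="200-500"):
    """Build a Boulder-IO input record for one Primer3 design job."""
    lines = [
        f"SEQUENCE_ID={seq_id}",
        f"SEQUENCE_TEMPLATE={template}",
        f"SEQUENCE_TARGET={target_start},{target_len}",  # region primers must flank
        f"PRIMER_PRODUCT_SIZE_RANGE={product_range}",
        "=",  # record terminator
    ]
    return "\n".join(lines)

# One record per transcript region, ready to pipe into primer3_core.
record = boulder_record("exon1", "ACGT" * 100, 180, 40)
```

A batch tool in this style would emit one record per promoter or exon and parse the PRIMER_LEFT/PRIMER_RIGHT tags Primer3 returns.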
78 FR 55337 - Buy America Waiver Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-10
... DEPARTMENT OF TRANSPORTATION Federal Highway Administration Buy America Waiver Notification AGENCY... regarding the FHWA's finding that a Buy America waiver is appropriate for the use of five non- domestic 14... Office's database at: http://www.access.gpo.gov/nara . [[Page 55338
Teaching the structure of immunoglobulins by molecular visualization and SDS-PAGE analysis.
Rižner, Tea Lanišnik
2014-01-01
This laboratory class combines molecular visualization and laboratory experimentation to teach the structure of the immunoglobulins (Ig). In the first part of the class, the three-dimensional structures of the human IgG and IgM molecules available through the RCSB PDB database are visualized using freely available software. In the second part, IgG and IgM are studied using electrophoretic methods. Through SDS-PAGE analysis under reducing conditions, the students determine the number and molecular masses of the polypeptide chains, while through SDS-PAGE under nonreducing conditions, the students assess the oligomerization of these Ig molecules. The aims of this class are to expand upon the knowledge and understanding of the Ig structure that the students have gained from classroom lectures. The combination of this molecular visualization of the Ig molecules and the SDS-PAGE experimentation ensures variety in the teaching techniques, while the implication of the Ig molecules in human disease promotes interest for biomedical students. © 2014 by The International Union of Biochemistry and Molecular Biology.
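Determining chain masses from an SDS-PAGE gel, as in the class above, conventionally uses the roughly linear relation between log(molecular mass) and relative migration (Rf) of a marker ladder. This sketch fits that standard curve by least squares; the marker masses are typical ladder values used purely as illustrative assumptions, not the class's data.

```python
import math

markers = [  # (Rf, mass in kDa) for a hypothetical protein ladder
    (0.20, 116.0), (0.35, 66.2), (0.50, 45.0), (0.65, 31.0), (0.80, 21.5),
]

def fit_line(points):
    """Least-squares fit y = a*x + b through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Fit Rf against log10(mass), then invert the line for an unknown band.
a, b = fit_line([(rf, math.log10(m)) for rf, m in markers])

def estimate_mass(rf):
    """Estimated mass (kDa) of a band at relative migration rf."""
    return 10 ** (a * rf + b)
```

A band migrating at Rf 0.50 here is estimated near the 45 kDa marker, and faster-migrating bands give smaller masses, as expected.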
Oceanography Information System of Spanish Institute of Oceanography (IEO)
NASA Astrophysics Data System (ADS)
Tello, Olvido; Gómez, María; González, Sonsoles
2016-04-01
Since 1914, the Spanish Institute of Oceanography (IEO) has performed multidisciplinary studies of the marine environment, some systematic and others responding to specific requirements (the El Hierro submarine volcanic episode, the Prestige spill, and others). Different methodologies and data-acquisition techniques are used depending on the aims of each study, and the acquired data are stored and presented in different formats. The information is organized into different databases according to subject and the variables represented (geology, fisheries, aquaculture, pollution, habitats, etc.). For physical and chemical oceanography data, the IEO DATA CENTER (CEDO) was created in 1964 to organize data on physical and chemical variables, to standardize this information, and to serve the international data network SeaDataNet (www.seadatanet.org). This database integrates temperature, salinity, nutrient, and tidal data, and CEDO allows the data to be consulted and downloaded (http://indamar.ieo.es). For data on marine species, the SIRENO database was developed in 1999. All species data collected in oceanographic surveys carried out by IEO researchers, together with data from observers on fishing vessels, are incorporated into SIRENO, which stores catch, biomass, and abundance data and is based on an ORACLE architecture. Given the large amount of information collected over the IEO's 100 years of history, there is a clear need to organize, standardize, integrate, and relate the different databases and information, and to provide interoperability and access to it. Consequently, the first initiative to organize the IEO's spatial information into an Oceanography Information System, based on a Geographical Information System (GIS), emerged in 2000. The GIS was consolidated as the IEO's institutional GIS, and the Spatial Data Infrastructure of IEO (IDEO) was created following the INSPIRE trend.
All data included in the GIS have corresponding metadata conforming to ISO 19115 and INSPIRE. IDEO is based on Web services, quality of service, and open ISO (OGC) and INSPIRE standards, and both provide access to the IEO's geographical marine information. The GIS allows the information to be organized, visualized, consulted, and analyzed. Data from the different IEO databases are integrated into a corporate GIS geodatabase (Esri format). This tool is essential for decision making on aspects such as: - Protection of the marine environment. - Sustainable management of resources. - Natural hazards. - Marine spatial planning. Examples of the use of GIS as a spatial-analysis tool include: - Mud volcanoes explored in the LIFE-INDEMARES project. - The cartographic series on the Spanish continental shelf, developed from data integrated in the IEO marine GIS and acquired during oceanographic surveys in the ESPACE project. - Cartography developed from the information gathered for the Initial Assessment of the Marine Strategy Framework Directive. - Studies of natural hazards related to submarine canyons in the marine region of southeast Spain. Currently the IEO participates in many European initiatives, especially several lots of EMODNET, and works in line with INSPIRE, Blue Growth, Horizon 2020, etc., since the knowledge, protection, and spatial planning of the marine environment are extremely relevant issues. To facilitate access to the Spatial Data Infrastructure of IEO, the IEO Geoportal was developed in 2012. It mainly comprises a metadata catalog, access to the data viewers, and the Web services of IDEO. http://www.geo-ideo.ieo.es/geoportalideo/catalog/main/home.page
32 CFR 2103.32 - Mandatory review for declassification.
Code of Federal Regulations, 2012 CFR
2012-07-01
... specific, no further action will be taken. (b) Review. (1) The requestor shall be informed of the National... page for all copying of four pages or more. No fee shall be assessed for reproducing documents that are three pages or less, or for the first three pages of longer documents. (2) Where it is anticipated that...
32 CFR 2103.32 - Mandatory review for declassification.
Code of Federal Regulations, 2013 CFR
2013-07-01
... specific, no further action will be taken. (b) Review. (1) The requestor shall be informed of the National... page for all copying of four pages or more. No fee shall be assessed for reproducing documents that are three pages or less, or for the first three pages of longer documents. (2) Where it is anticipated that...
32 CFR 2103.32 - Mandatory review for declassification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... specific, no further action will be taken. (b) Review. (1) The requestor shall be informed of the National... page for all copying of four pages or more. No fee shall be assessed for reproducing documents that are three pages or less, or for the first three pages of longer documents. (2) Where it is anticipated that...
32 CFR 2103.32 - Mandatory review for declassification.
Code of Federal Regulations, 2014 CFR
2014-07-01
... specific, no further action will be taken. (b) Review. (1) The requestor shall be informed of the National... page for all copying of four pages or more. No fee shall be assessed for reproducing documents that are three pages or less, or for the first three pages of longer documents. (2) Where it is anticipated that...
Code of Federal Regulations, 2013 CFR
1998-01-01
... 14 Aeronautics and Space 4 1998-01-01 1998-01-01 false Title page. 221.31 Section 221.31 ECONOMIC REGULATIONS TARIFFS Contents of Tariff § 221.31 Title page. (a) Contents. Except as otherwise required in this part, or by other regulatory agencies, the title page of every tariff shall contain the following information to be shown in the order...
Code of Federal Regulations, 2011 CFR
1999-01-01
... 14 Aeronautics and Space 4 1999-01-01 1999-01-01 false Title page. 221.31 Section 221.31 ECONOMIC REGULATIONS TARIFFS Contents of Tariff § 221.31 Title page. (a) Contents. Except as otherwise required in this part, or by other regulatory agencies, the title page of every tariff shall contain the following information to be shown in the order...
Code of Federal Regulations, 2012 CFR
1997-01-01
... 14 Aeronautics and Space 4 1997-01-01 1997-01-01 false Title page. 221.31 Section 221.31 ECONOMIC REGULATIONS TARIFFS Contents of Tariff § 221.31 Title page. (a) Contents. Except as otherwise required in this part, or by other regulatory agencies, the title page of every tariff shall contain the following information to be shown in the order...
Comprehensive computerized diabetes registry. Serving the Cree of Eeyou Istchee (eastern James Bay).
Dannenbaum, D.; Verronneau, M.; Torrie, J.; Smeja, H.; Robinson, E.; Dumont, C.; Kovitch, I.; Webster, T.
1999-01-01
PROBLEM BEING ADDRESSED: Diabetes is rapidly evolving as a major health concern in the Cree population of eastern James Bay (Eeyou Istchee). The Cree Board of Health and Social Services of James Bay (CBHSSJB) diabetes registry was the initial phase in the development of a comprehensive program for diabetes in this region. OBJECTIVE OF PROGRAM: The CBHSSJB diabetes registry was developed to provide a framework to track the prevalence of diabetes and the progression of diabetic complications. The database will also identify patients not receiving appropriate clinical and laboratory screening for diabetic complications, and will provide standardized clinical flow sheets for routine patient management. MAIN COMPONENTS OF PROGRAM: The CBHSSJB diabetes registry uses a system of paper registration forms and clinical flow sheets kept in the nine community clinics. Information from these sheets is entered into a computer database annually. The flow sheets serve as a guideline for appropriate management of patients with diabetes, and provide a one-page summary of relevant clinical and laboratory information. CONCLUSIONS: A diabetes registry is vital to follow the progression of diabetes and diabetic complications in the region served by the CBHSSJB. The registry system incorporates both a means for regional epidemiologic monitoring of diabetes mellitus and clinical tools for managing patients with the disease. PMID:10065310
Indicators of Accuracy of Consumer Health Information on the Internet
Fallis, Don; Frické, Martin
2002-01-01
Objectives: To identify indicators of accuracy for consumer health information on the Internet. The results will help lay people distinguish accurate from inaccurate health information on the Internet. Design: Several popular search engines (Yahoo, AltaVista, and Google) were used to find Web pages on the treatment of fever in children. The accuracy and completeness of these Web pages was determined by comparing their content with that of an instrument developed from authoritative sources on treating fever in children. The presence on these Web pages of a number of proposed indicators of accuracy, taken from published guidelines for evaluating the quality of health information on the Internet, was noted. Main Outcome Measures: Correlation between the accuracy of Web pages on treating fever in children and the presence of proposed indicators of accuracy on these pages. Likelihood ratios for the presence (and absence) of these proposed indicators. Results: One hundred Web pages were identified and characterized as “more accurate” or “less accurate.” Three indicators correlated with accuracy: displaying the HONcode logo, having an organization domain, and displaying a copyright. Many proposed indicators taken from published guidelines did not correlate with accuracy (e.g., the author being identified and the author having medical credentials) or inaccuracy (e.g., lack of currency and advertising). Conclusions: This method provides a systematic way of identifying indicators that are correlated with the accuracy (or inaccuracy) of health information on the Internet. Three such indicators have been identified in this study. Identifying such indicators and informing the providers and consumers of health information about them would be valuable for public health care. PMID:11751805
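The study's outcome measures can be made concrete: for a candidate indicator, the likelihood ratio compares how often it appears on "more accurate" versus "less accurate" pages. This sketch computes LR+ (presence) and LR- (absence) from a 2x2 table; the counts are made up for illustration, not the study's data.

```python
def likelihood_ratios(present_acc, total_acc, present_inacc, total_inacc):
    """LR+ for an indicator's presence and LR- for its absence."""
    sens = present_acc / total_acc        # P(present | more accurate)
    fpr = present_inacc / total_inacc     # P(present | less accurate)
    lr_present = sens / fpr               # >1: presence suggests accuracy
    lr_absent = (1 - sens) / (1 - fpr)    # <1: absence suggests inaccuracy
    return lr_present, lr_absent

# Hypothetical indicator seen on 30/50 accurate and 10/50 inaccurate pages.
lr_pos, lr_neg = likelihood_ratios(30, 50, 10, 50)
```

An LR+ well above 1 (here 3.0) marks an indicator worth telling consumers to look for; an LR+ near 1 marks one that carries no information about accuracy.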
TRAC Innovation Report (FY15 to FY18)
2017-11-01
81 FR 40262 - Notice of Intent To Seek Approval To Collect Information
Federal Register 2010, 2011, 2012, 2013, 2014
2016-06-21
... their level of satisfaction with existing services. The NAL Internet sites are a vast collection of Web pages. NAL Web pages are visited by an average of 8.6 million people per month. All NAL Information Centers have an established web presence that provides information to their respective audiences...
76 FR 2754 - Agency Information Collection (Pay Now Enter Info Page) Activity Under OMB Review
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-14
... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0663] Agency Information Collection (Pay Now... No. 2900-0663.'' SUPPLEMENTARY INFORMATION: Title: Pay Now Enter Info Page. OMB Control Number: 2900... participated in VA's benefit programs and owe debts to VA can voluntary make online payments through VA's Pay...
NASA Astrophysics Data System (ADS)
Manea, M.; Norini, G.; Capra, L.; Manea, V. C.
2009-04-01
The Colima Volcano is currently the most active Mexican volcano. After the 1913 plinian activity, the volcano presented several eruptive phases that lasted a few years, but since 1991 its activity has become more persistent, with vulcanian eruptions and lava and dome extrusions. During the last 15 years the volcano has undergone several eruptive episodes, as in 1991, 1994, 1998-1999, 2001-2003, 2004 and 2005, with the emplacement of pyroclastic flows. During rainy seasons lahars are frequent, affecting infrastructure such as bridges and electricity towers. Researchers from different institutions (Mexico, USA, Germany, Italy, and Spain) are currently working on several aspects of the volcano, from remote sensing, field data on old and recent deposits, structural framework, and monitoring (rain, seismicity, deformation, and visual observations) to laboratory experiments (analogue models and numerical simulations). Each investigation is focused on explaining a single process, but it is fundamental to visualize the global status of the volcano in order to understand its behavior and to mitigate future hazards. The Colima Volcano WebGIS is an initiative aimed at collecting and storing, on a systematic basis, all the data obtained so far for the volcano and at continuously updating the database with new information. The Colima Volcano WebGIS is hosted on the Computational Geodynamics Laboratory web server and is based entirely on Open Source software. The web pages, written in PHP/HTML, extract information from a MySQL relational database, which hosts the information needed for the MapBender application.
There will be two types of intended users: 1) researchers working on the Colima Volcano, interested in this project and collaborating in common projects will be provided with open access to the database and will be able to introduce their own data, results, interpretation or recommendations; 2) general users, interested in accessing information on Colima Volcano will be provided with restricted access and will be able to visualize maps, images, diagrams, and current activity. The website can be visited at: http://www.geociencias.unam.mx/colima
Néri, Eugenie Desirèe Rabelo; Meira, Assuero Silva; Vasconcelos, Hemerson Bruno da Silva; Woods, David John; Fonteles, Marta Maria de França
2017-01-01
This study aimed to identify the knowledge, skills and attitudes of Brazilian hospital pharmacists in the use of information technology and electronic tools to support clinical practice. Methods: A questionnaire was sent by email to clinical pharmacists working in public and private hospitals in Brazil. The instrument was validated using the method of Polit and Beck to determine the content validity index. Data (n = 348) were analyzed using descriptive statistics, Pearson's Chi-square test and Gamma correlation tests. Results: Pharmacists had 1–4 electronic devices for personal use, mainly smartphones (84.8%; n = 295) and laptops (81.6%; n = 284). At work, pharmacists had access to a computer (89.4%; n = 311), mostly connected to the internet (83.9%; n = 292). They felt competent (very capable/capable) searching for a web page/web site on a specific subject (100%; n = 348), downloading files (99.7%; n = 347), using spreadsheets (90.2%; n = 314), searching using MeSH terms in PubMed (97.4%; n = 339) and general searching for articles in bibliographic databases (such as Medline/PubMed: 93.4%; n = 325). Pharmacists did not feel competent in using statistical analysis software (somewhat capable/incapable: 78.4%; n = 273). Most pharmacists reported that they had not received formal education to perform most of these actions except searching using MeSH terms. Access to bibliographic databases was available in Brazilian hospitals; however, most pharmacists (78.7%; n = 274) reported daily use of a non-specific search engine such as Google. This result may reflect the lack of formal knowledge and training in the use of bibliographic databases and difficulty with the English language. The need to expand knowledge about information search tools was recognized by most pharmacists in clinical practice in Brazil, especially those with less time dedicated exclusively to clinical activity (Chi-square, p = 0.006).
Conclusion: These results will assist in defining minimal competencies for the training of pharmacists in the field of information technology to support clinical practice. Knowledge and skill gaps are evident in the use of bibliographic databases, spreadsheets and statistical tools. PMID:29272292
[The virtual library in equity, health, and human development].
Valdés, América
2002-01-01
This article attempts to describe the rationale that has led to the development of information sources dealing with equity, health, and human development in countries of Latin America and the Caribbean within the context of the Virtual Health Library (Biblioteca Virtual en Salud, BVS). Such information sources include the scientific literature, databases in printed and electronic format, institutional directories and lists of specialists, lists of events and courses, distance education programs, specialty journals and bulletins, as well as other means of disseminating health information. The pages that follow deal with the development of a Virtual Library in Equity, Health, and Human Development, an effort rooted in the conviction that decision-making and policy geared toward achieving greater equity in health must, of necessity, be based on coherent, well-organized, and readily accessible first-rate scientific information. Information is useless unless it is converted into knowledge that benefits society. The Virtual Library in Equity, Health, and Human Development is a coordinated effort to develop a decentralized regional network of scientific information sources, with strict quality control, from which public officials can draw data and practical examples that can help them set health and development policies geared toward achieving greater equity for all.
Marenco, Luis; Ascoli, Giorgio A; Martone, Maryann E; Shepherd, Gordon M; Miller, Perry L
2008-09-01
This paper describes the NIF LinkOut Broker (NLB) that has been built as part of the Neuroscience Information Framework (NIF) project. The NLB is designed to coordinate the assembly of links to neuroscience information items (e.g., experimental data, knowledge bases, and software tools) that are (1) accessible via the Web, and (2) related to entries in the National Center for Biotechnology Information's (NCBI's) Entrez system. The NLB collects these links from each resource and passes them to the NCBI which incorporates them into its Entrez LinkOut service. In this way, an Entrez user looking at a specific Entrez entry can LinkOut directly to related neuroscience information. The information stored in the NLB can also be utilized in other ways. A second approach, which is operational on a pilot basis, is for the NLB Web server to create dynamically its own Web page of LinkOut links for each NCBI identifier in the NLB database. This approach can allow other resources (in addition to the NCBI Entrez) to LinkOut to related neuroscience information. The paper describes the current NLB system and discusses certain design issues that arose during its implementation.
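The per-identifier link pages the NLB serves in its pilot mode can be pictured as a lookup from NCBI identifiers to registered resource links. A minimal illustrative sketch (the dictionary schema, function names, and example URL are invented here, not the actual NLB implementation):

```python
# Illustrative broker state: NCBI identifier -> registered (resource, url) links.
links = {}

def register_link(ncbi_id, resource, url):
    """Record a link harvested from a participating neuroscience resource."""
    links.setdefault(ncbi_id, []).append((resource, url))

def render_page(ncbi_id):
    """Dynamically build an HTML page of LinkOut-style links for one identifier."""
    rows = "\n".join(f'<li><a href="{url}">{name}</a></li>'
                     for name, url in links.get(ncbi_id, []))
    return f"<html><body><h1>Links for {ncbi_id}</h1><ul>\n{rows}\n</ul></body></html>"

register_link("6532", "NeuroMorpho.Org", "https://neuromorpho.org/")
html = render_page("6532")
```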
Home Page: The Mode of Transport through the Information Superhighway
NASA Technical Reports Server (NTRS)
Lujan, Michelle R.
1995-01-01
The purpose of the project with the Aeroacoustics Branch was to create and submit a home page for the internet about branch information. In order to do this, one must also become familiar with the way that the internet operates. Learning HyperText Markup Language (HTML), and the ability to create a document using this language was the final objective in order to place a home page on the internet (World Wide Web). A manual of instructions regarding maintenance of the home page, and how to keep it up to date was also necessary in order to provide branch members with the opportunity to make any pertinent changes.
G-Bean: an ontology-graph based web tool for biomedical literature retrieval.
Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S
2014-01-01
Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographic information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph-based biomedical search engine, to search biomedical articles in the MEDLINE database more efficiently. G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in the National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph, and the Term Frequency-Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on the user's search intention: after the user selects any article from the existing search results, G-Bean analyzes the user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database.
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user.
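The TF-IDF weighting used above to re-rank expansion concepts is the standard term-frequency times inverse-document-frequency score. A self-contained sketch on a toy corpus (not the actual MEDLINE index or UMLS concept graph):

```python
import math

def tf_idf(term, doc, corpus):
    """Term frequency in one document times inverse document frequency."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)      # document frequency
    return tf * (math.log(len(corpus) / df) if df else 0.0)

corpus = [["insulin", "diabetes", "therapy"],
          ["diabetes", "diet"],
          ["gene", "expression"]]

# "insulin" is rarer across the corpus than "diabetes", so it scores higher
# within the first document even though both occur once there.
rare = tf_idf("insulin", corpus[0], corpus)
common = tf_idf("diabetes", corpus[0], corpus)
```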
Atmospheric Science Data Center
2018-04-04
Surface meteorology and Solar Energy (SSE) Data and Information A new POWER home page ... The Release 6.0 Surface meteorology and Solar Energy (SSE) data set contains parameters formulated for assessing and designing renewable energy systems. This latest release contains new parameters based on ...
Chest X-Ray (Chest Radiography)
... may be necessary to clarify the results of a chest x-ray or to look for abnormalities not visible on the chest x-ray. Additional Information and Resources: RTAnswers.org (Radiation Therapy for Lung Cancer) ...
Identifying potential kidney donors using social networking web sites.
Chang, Alexander; Anderson, Emily E; Turner, Hang T; Shoham, David; Hou, Susan H; Grams, Morgan
2013-01-01
Social networking sites like Facebook may be a powerful tool for increasing rates of live kidney donation. They allow for wide dissemination of information and discussion and could lessen anxiety associated with a face-to-face request for donation. However, sparse data exist on the use of social media for this purpose. We searched Facebook, the most popular social networking site, for publicly available English-language pages seeking kidney donors for a specific individual, abstracting information on the potential recipient, characteristics of the page itself, and whether potential donors were tested. In the 91 pages meeting inclusion criteria, the mean age of potential recipients was 37 (range: 2-69); 88% were US residents. Other posted information included the individual's photograph (76%), blood type (64%), cause of kidney disease (43%), and location (71%). Thirty-two percent of pages reported having potential donors tested, and 10% reported receiving a live-donor kidney transplant. Those reporting donor testing shared more potential recipient characteristics, provided more information about transplantation, and had higher page traffic. Facebook is already being used to identify potential kidney donors. Future studies should focus on how to safely, ethically, and effectively use social networking sites to inform potential donors and potentially expand live kidney donation. © 2013 John Wiley & Sons A/S.
A Semantically Enabled Metadata Repository for Solar Irradiance Data Products
NASA Astrophysics Data System (ADS)
Wilson, A.; Cox, M.; Lindholm, D. M.; Nadiadi, I.; Traver, T.
2014-12-01
The Laboratory for Atmospheric and Space Physics, LASP, has been conducting research in atmospheric and space science for over 60 years, and providing the associated data products to the public. LASP has a long history, in particular, of making space-based measurements of the solar irradiance, which serves as crucial input to several areas of scientific research, including solar-terrestrial interactions, atmospheric science, and climate studies. LISIRD, the LASP Interactive Solar Irradiance Data Center, serves these datasets to the public, including solar spectral irradiance (SSI) and total solar irradiance (TSI) data. The LASP extended metadata repository, LEMR, is a database of information about the datasets served by LASP, such as parameters, uncertainties, temporal and spectral ranges, current version, alerts, etc. It serves as the definitive, single source of truth for that information. The database is populated with information garnered via web forms and automated processes. Dataset owners keep the information current and verified for datasets under their purview. This information can be pulled dynamically for many purposes. Web sites such as LISIRD can include this information in web page content as it is rendered, ensuring users get current, accurate information. It can also be pulled to create metadata records in various metadata formats, such as SPASE (for heliophysics) and ISO 19115. Once these records are made available to the appropriate registries, our data will be discoverable by users coming in via those organizations. The database is implemented as an RDF triplestore, a collection of subject-predicate-object statements identifiable with URIs. This capability, coupled with SPARQL-over-HTTP read access, enables semantic queries over the repository contents. To create the repository we leveraged VIVO, an open source semantic web application, to manage and create new ontologies and populate repository content.
A variety of ontologies were used in creating the triplestore, including ontologies that came with VIVO such as FOAF. Also, the W3C DCAT ontology was integrated and extended to describe properties of our data products that we needed to capture, such as spectral range. The presentation will describe the architecture, ontology issues, and tools used to create LEMR and plans for its evolution.
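The triplestore model described above (statements queried by pattern, as in a SPARQL basic graph pattern) can be sketched without any RDF library. The dataset names and properties below are illustrative, not LEMR's actual contents:

```python
# Toy triplestore: a set of (subject, predicate, object) statements.
triples = {
    ("ex:TSI_dataset", "rdf:type",         "dcat:Dataset"),
    ("ex:TSI_dataset", "ex:measures",      "total solar irradiance"),
    ("ex:SSI_dataset", "rdf:type",         "dcat:Dataset"),
    ("ex:SSI_dataset", "ex:spectralRange", "115-2400 nm"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a variable (wildcard)."""
    return [t for t in triples
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]

# Analogue of: SELECT ?d WHERE { ?d rdf:type dcat:Dataset }
datasets = sorted(t[0] for t in match(p="rdf:type", o="dcat:Dataset"))
```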
Information and research needs of acute-care clinical nurses.
Spath, M; Buttlar, L
1996-01-01
The majority of nurses surveyed used the library on a regular but limited basis to obtain information needed in caring for or making decisions about their patients. A minority indicated that the libraries in their own institutions totally met their information needs. In fact, only 4% depended on the library to stay abreast of new information and developments in the field. Many of the nurses had their own journal subscriptions, which could account in part for the limited use of libraries and the popularity of the professional journal as the key information source. This finding correlates with the research of Binger and Huntsman, who found that 95% of staff development educators relied on professional journal literature to keep up with current information in the field, and only 45% regularly monitored indexing-and-abstracting services. The present study also revealed that nurses seek information from colleagues more than from any other source, supporting the findings of Corcoran-Perry and Graves. Further research is necessary to clarify why nurses use libraries on a limited basis. It appears, as Bunyan and Lutz contend, that a more aggressive approach to marketing the library to nurses is needed. Further research should include an assessment of how the library can meet the information needs of nurses for both research and patient care. Options to be considered include offering library orientation sessions for new staff nurses, providing current-awareness services by circulating photocopied table-of-contents pages, sending out reviews of new monographs, inviting nurses to submit search requests on a topic, scheduling seminars and workshops that teach CD-ROM and online search strategies, and providing information about electronic databases covering topics related to nursing. Information on databases may be particularly important in light of the present study's finding that databases available in CD-ROM format are consulted very little. 
Nursing education programs should be expanded to include curricula bibliographic sessions where the librarian, in cooperation with the teaching faculty, visits the classroom to explain all pertinent information sources or invites the class to the library for hands-on demonstration and practice. Nurses who gain working knowledge of the tools that open the doors to retrieval of research findings and who have information about new innovations in medicine and medical technology have superior chances for success in their chosen profession. PMID:8938341
Fungal genome resources at NCBI.
Robbertse, B; Tatusova, T
2011-09-01
The National Center for Biotechnology Information (NCBI) is well known for the nucleotide sequence archive GenBank and the sequence analysis tool BLAST. However, NCBI integrates many types of biomolecular data from a variety of sources and makes them available to the scientific community as interactive web resources as well as organized releases of bulk data. These tools are available to explore and compare fungal genomes. Searching all databases with Fungi [organism] at http://www.ncbi.nlm.nih.gov/ is the quickest way to find resources of interest with fungal entries. Some tools, though, are resource specific and can be indirectly accessed from a particular database in the Entrez system. These include graphical viewers and comparative analysis tools such as TaxPlot, TaxMap and UniGene DDD (found via the UniGene homepage). Gene and BioProject pages also serve as portals to external data such as community annotation websites, BioGRID and UniProt. There are many different ways of accessing genomic data at NCBI. Depending on the focus and goal of research projects or the level of interest, a user would select a particular route for accessing genomic databases and resources. This review article describes methods of accessing fungal genome data and provides examples that illustrate the use of analysis tools.
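Beyond the interactive pages, the Entrez databases mentioned above are also queryable programmatically through NCBI's E-utilities. A sketch that only constructs an ESearch request URL for fungal entries (no request is sent here; retmax is the standard parameter capping the number of returned IDs):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(db, term, retmax=20):
    """Build an Entrez ESearch URL for the given database and query term."""
    return EUTILS + "?" + urlencode({"db": db, "term": term, "retmax": retmax})

# Same query as searching "Fungi[Organism]" on the NCBI site:
url = esearch_url("genome", "Fungi[Organism]")
```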
Ning, Shangwei; Zhang, Jizhou; Wang, Peng; Zhi, Hui; Wang, Jianjian; Liu, Yue; Gao, Yue; Guo, Maoni; Yue, Ming; Wang, Lihua; Li, Xia
2016-01-01
Lnc2Cancer (http://www.bio-bigdata.net/lnc2cancer) is a manually curated database of cancer-associated long non-coding RNAs (lncRNAs) with experimental support that aims to provide a high-quality and integrated resource for exploring lncRNA deregulation in various human cancers. LncRNAs represent a large category of functional RNA molecules that play a significant role in human cancers. A curated collection and summary of deregulated lncRNAs in cancer is essential to thoroughly understand the mechanisms and functions of lncRNAs. Here, we developed the Lnc2Cancer database, which contains 1057 manually curated associations between 531 lncRNAs and 86 human cancers. Each association includes lncRNA and cancer name, the lncRNA expression pattern, experimental techniques, a brief functional description, the original reference and additional annotation information. Lnc2Cancer provides a user-friendly interface to conveniently browse, retrieve and download data. Lnc2Cancer also offers a submission page for researchers to submit newly validated lncRNA-cancer associations. With the rapidly increasing interest in lncRNAs, Lnc2Cancer will significantly improve our understanding of lncRNA deregulation in cancer and has the potential to be a timely and valuable resource. PMID:26481356
Educating Normal Breast Mucosa to Prevent Breast Cancer
2013-05-01
Vaona, Alberto; Marcon, Alessandro; Rava, Marta; Buzzetti, Roberto; Sartori, Marco; Abbinante, Crescenza; Moser, Andrea; Seddaiu, Antonia; Prontera, Manuela; Quaglio, Alessandro; Pallazzoni, Piera; Sartori, Valentina; Rigon, Giulio
2011-12-01
Many medical journals provide patient information leaflets on the correct use of medicines and/or appropriate lifestyles. Only a few studies have assessed the quality of this patient-specific literature. The purpose of this study was to evaluate the quality of JAMA Patient Pages on diabetes using the Ensuring Quality Information for Patients (EQIP) tool. A multidisciplinary group of 10 medical doctors analyzed all diabetes-related Patient Pages published by JAMA from 1998 to 2010 using the EQIP tool. Inter-rater reliability was assessed using the percentage of observed total agreement (p(o)). A quality score between 0 and 1 (a higher score indicating higher quality) was calculated for each item on every page as a function of the raters' answers to the EQIP checklist. A mean score per item and a mean score per page were then calculated. We found 8 Patient Pages on diabetes on the JAMA web site. The overall quality score of the documents ranged between 0.55 (Managing Diabetes and Diabetes) and 0.67 (Weight and Diabetes). p(o) was at least moderate (>50%) for 15 of the 20 EQIP items. Despite generally favorable quality scores, some items received low scores. The worst scores were for the item assessing provision of an empty space to customize information for individual patients (score = 0.01, p(o) = 95%) and patient involvement in document drafting (score = 0.11, p(o) = 79%). The Patient Pages on diabetes published by JAMA were found to present weak points that limit their overall quality and may jeopardize their efficacy. We therefore recommend that authors and publishers of written patient information comply with published quality criteria. Further research is needed to evaluate the quality and efficacy of existing written health care information. Copyright © 2011 Primary Care Diabetes Europe. Published by Elsevier Ltd. All rights reserved.
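The 0-to-1 item and page scores described above can be reproduced with a simple mapping from raters' checklist answers to numbers. The mapping used here (yes = 1, partly = 0.5, no = 0) is a common convention and an assumption of this sketch, not a detail taken from the study:

```python
ANSWER_VALUE = {"yes": 1.0, "partly": 0.5, "no": 0.0}

def item_score(answers):
    """Mean score for one EQIP item across all raters."""
    return sum(ANSWER_VALUE[a] for a in answers) / len(answers)

def page_score(items):
    """Mean of the per-item scores for one Patient Page."""
    return sum(item_score(a) for a in items) / len(items)

# Hypothetical answers from 10 raters on two items:
items = [["yes"] * 8 + ["partly", "no"],   # item score 0.85
         ["partly"] * 6 + ["no"] * 4]      # item score 0.30
overall = page_score(items)                # 0.575
```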
Khawaja, Zain-Ul-Abdin; Ali, Khudejah Iqbal; Khan, Shanze
2017-02-01
Social marketing related to sexual health is a problematic task, especially in religiously and/or culturally conservative countries. Social media presents a possible alternative channel for sexual health efforts to disseminate information and engage new users. In an effort to understand how well sexual health campaigns and organizations have leveraged this opportunity, this study presents a systematic examination of ongoing Facebook-based sexual health efforts in conservative Asian countries. It was discovered that out of hundreds of sexual health organizations identified in the region, less than half had created a Facebook page. Of those that had, only 31 were found to have posted sexual health-relevant content at least once a month. Many of these 31 organizations were also unsuccessful in maintaining regular official and user activity on their page. In order to assess the quality of the Facebook pages as Web-based information resources, the sexual health-related official activity on each page was analyzed for information (a) value, (b) reliability, (c) currency, and (d) system accessibility. User responsiveness to official posts on the pages was also used to discuss the potential of Facebook as a sexual health information delivery platform.
MBGD update 2013: the microbial genome database for exploring the diversity of microbial world.
Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu
2013-01-01
The microbial genome database for comparative analysis (MBGD, available at http://mbgd.genome.ad.jp/) is a platform for microbial genome comparison based on orthology analysis. As its unique feature, MBGD allows users to conduct orthology analysis among any specified set of organisms; this flexibility allows MBGD to adapt to a variety of microbial genomic studies. Reflecting the huge diversity of the microbial world, the number of microbial genome projects has now reached several thousand. To efficiently explore the diversity of the entire body of microbial genomic data, MBGD now provides summary pages for pre-calculated ortholog tables among various taxonomic groups. For some closely related taxa, MBGD also provides conserved synteny information (core genome alignments) pre-calculated using the CoreAligner program. In addition, an efficient incremental updating procedure can create an extended ortholog table by adding genomes to the default ortholog table generated from the representative set of genomes. Combined with the functionality of dynamic orthology calculation for any specified set of organisms, MBGD is an efficient and flexible tool for exploring microbial genome diversity.
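An ortholog table of the kind MBGD pre-calculates can be pictured as clusters of corresponding genes across organisms, and incremental updating as folding a new genome into the existing clusters. A much-simplified sketch with invented gene and organism names (MBGD's actual clustering procedure is far more involved):

```python
# Each ortholog cluster maps organism -> gene identifier.
ortholog_table = [
    {"E_coli": "recA", "B_subtilis": "recA"},
    {"E_coli": "ftsZ", "B_subtilis": "ftsZ"},
]

def add_genome(table, organism, assignments):
    """Incrementally extend the table. 'assignments' maps each new gene to
    the index of the cluster it is orthologous to, or None if it has no
    ortholog yet (which starts a new, organism-specific cluster)."""
    for gene, cluster_idx in assignments.items():
        if cluster_idx is None:
            table.append({organism: gene})
        else:
            table[cluster_idx][organism] = gene

add_genome(ortholog_table, "M_genitalium", {"recA": 0, "mg200": None})
```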
ERIC Educational Resources Information Center
Chuang, Hsueh-Hua; Liu, Han-Chin
2012-01-01
This study implemented eye-tracking technology to understand the impact of different multimedia instructional materials, i.e., five successive pages versus a single page with the same amount of information, on information-processing activities in 21 non-science-major college students. The findings showed that students demonstrated the same number…
WAI-ARIA vocabulary (extraction fragment): contentinfo: contains meta information about the content on the page or the page as a whole; alert: a message with important, and usually time-sensitive, information; columnheader: a cell containing header information for a column; combobox: ...
College of DuPage Information Technology Plan, Fiscal Year 1994-95.
ERIC Educational Resources Information Center
College of DuPage, Glen Ellyn, IL.
Building upon four previous planning documents for computing at College of DuPage in Illinois, this plan for fiscal year 1995 (FY95) provides a starting point for future plans to address all activities that relate to the use of information technology on campus. The FY95 "Information Technology Plan" is divided into six sections, each…
Evaluation of Learning Unit Design with Use of Page Flip Information Analysis
ERIC Educational Resources Information Center
Horikoshi, Izumi; Noguchi, Masato; Tamura, Yasuhisa
2016-01-01
In this paper, the authors attempted to evaluate the design of learning units using a Learning Analytics technique applied to page-flip information. Traditional formative assessment has been carried out by giving assignments and evaluating their results. However, the information that teachers can get from such evaluation is limited and coarse-grained. The…
2014-08-17
The COMET Initiative database: progress and activities update (2015).
Gargon, E; Williamson, P R; Altman, D G; Blazeby, J M; Tunis, S; Clarke, M
2017-02-03
This letter describes the substantial activity on the Core Outcome Measure in Effectiveness Trials (COMET) website in 2015, updating our earlier progress reports for the period from the launch of the COMET website and database in August 2011 to December 2014. As in previous years, 2015 saw further increases in the annual number of visits to the website, the number of pages viewed and the number of searches undertaken. The sustained growth in use of the website and database suggests that COMET is continuing to gain interest and prominence, and that the resources are useful to people interested in the development of core outcome sets.
Fels, Deborah I; Richards, Jan; Hardman, Jim; Lee, Daniel G
2006-01-01
The World Wide Web has changed the way people interact. It has also become an important equalizer of information access for many social sectors. However, for many people, including some sign language users, accessing the Web can be difficult. For some, it not only presents another barrier to overcome but has left them without cultural equality. The present article describes a system that allows sign language-only Web pages to be created and linked through a video-based technique called sign-linking. In two studies, 14 Deaf participants examined two iterations of sign-linked Web pages to gauge the usability and learnability of a signing Web page interface. The first study indicated that signing Web pages were usable by sign language users but that some interface features required improvement. The second study showed increased usability for those features; users could consequently navigate sign language information with ease and pleasure.
49 CFR 573.9 - Address for submitting required reports and other information.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Internet Web page http://www.safercar.gov/Vehicle+Manufacturers. A manufacturer must use the templates provided at this Web page for all submissions required under this section. Defect and noncompliance... at this Web page. [78 FR 51421, Aug. 20, 2013] ...
ERIC Educational Resources Information Center
Nesbeitt, Sarah
1997-01-01
Numerous Web-based phone and address directories provide advantages over the white and yellow pages. Although many share a common database, each has features that set it apart: maps, suggested driving directions, and phone dialing. This article examines eight (Bigfoot, BigBook, BigYellow, Switchboard, Infospace, Contractjobs, InterNIC)…
Who Do You Think You Are? Personal Home Pages and Self-Presentation on the World Wide Web.
ERIC Educational Resources Information Center
Dominick, Joseph R.
1999-01-01
Analyzes 319 personal home pages. Finds the typical page had a brief biography, a counter or guest book, and links to other pages but did not contain much personal information. Finds that strategies of self-presentation were employed with the same frequency as they were in interpersonal settings, and gender differences in self-presentation were…
National Environmental Satellite, Data, and Information Service Home Page
Overview of Historical Earthquake Document Database in Japan and Future Development
NASA Astrophysics Data System (ADS)
Nishiyama, A.; Satake, K.
2014-12-01
In Japan, damage and disasters from historical large earthquakes have been documented and preserved. Compilation of historical earthquake documents started in the early 20th century, and 33 volumes of historical document source books (about 27,000 pages) have been published. However, these source books are not used effectively by researchers, because low-reliability historical records are mixed in and keyword searching by characters and dates is difficult. To overcome these problems and to promote historical earthquake studies in Japan, construction of a text database started in the 21st century. For historical earthquakes from the beginning of the 7th century to the early 17th century, the "Online Database of Historical Documents in Japanese Earthquakes and Eruptions in the Ancient and Medieval Ages" (Ishibashi, 2009) has already been constructed. Its compilers investigated the source books or original texts of historical literature, emended the descriptions, and assigned a reliability to each historical document on the basis of written age. Another project compiled the historical documents for seven damaging earthquakes that occurred along the Sea of Japan coast in Honshu, central Japan, in the Edo period (from the beginning of the 17th century to the middle of the 19th century) and constructed a text database and a seismic intensity database. These are now available on the web (in Japanese only). However, only about 9% of the earthquake source books have been digitized so far. We therefore plan to digitize all of the remaining historical documents under a research program that started in 2014. The specification of the database will be similar to the previous ones. We also plan to combine this database with a liquefaction-trace database, to be constructed by another research program, by adding the location information described in historical documents. The constructed database will be used to estimate the distributions of seismic intensities and tsunami heights.
Using old technology to implement modern computer-aided decision support for primary diabetes care.
Hunt, D. L.; Haynes, R. B.; Morgan, D.
2001-01-01
BACKGROUND: Implementation rates of interventions known to be beneficial for people with diabetes mellitus are often suboptimal. Computer-aided decision support systems (CDSSs) can improve these rates. The complexity of establishing a fully integrated electronic medical record that provides decision support, however, often prevents their use. OBJECTIVE: To develop a CDSS for diabetes care that can be easily introduced into primary care settings and diabetes clinics. THE SYSTEM: The CDSS uses fax-machine-based optical character recognition software for acquiring patient information. Simple, 1-page paper forms, completed by patients or health practitioners, are faxed to a central location. The information is interpreted and recorded in a database. This initiates a routine that matches the information against a knowledge base so that patient-specific recommendations can be generated. These are formatted and faxed back within 4-5 minutes. IMPLEMENTATION: The system is being introduced into 2 diabetes clinics. We are collecting information on frequency of use of the system, as well as satisfaction with the information provided. CONCLUSION: Computer-aided decision support can be provided in any setting with a fax machine, without the need for integrated electronic medical records or computerized data-collection devices. PMID:11825194
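The matching step the abstract describes (form data checked against a knowledge base to generate patient-specific recommendations) can be sketched as a small rule engine. The field names, thresholds, and advice texts below are illustrative assumptions, not those of the actual CDSS:

```python
# Minimal sketch of the rule-matching step: faxed form fields (already
# OCR'd into a dict) are checked against a small knowledge base to
# produce patient-specific recommendations. Fields, thresholds, and
# rule texts are invented for illustration.
RULES = [
    (lambda p: p.get("hba1c", 0) > 7.0,
     "HbA1c above target: review glycemic management."),
    (lambda p: p.get("ldl", 0) > 2.0,
     "LDL above target: consider lipid-lowering therapy."),
    (lambda p: p.get("months_since_eye_exam", 0) >= 12,
     "Annual retinal screening is due."),
]

def recommendations(patient):
    """Return the advice strings whose conditions match this patient."""
    return [advice for cond, advice in RULES if cond(patient)]

if __name__ == "__main__":
    patient = {"hba1c": 8.1, "ldl": 1.8, "months_since_eye_exam": 14}
    for line in recommendations(patient):
        print(line)
```

In the real system the output of such a routine would then be formatted and faxed back to the sender.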
Ascoli, Giorgio A.; Martone, Maryann E.; Shepherd, Gordon M.; Miller, Perry L.
2009-01-01
This paper describes the NIF LinkOut Broker (NLB) that has been built as part of the Neuroscience Information Framework (NIF) project. The NLB is designed to coordinate the assembly of links to neuroscience information items (e.g., experimental data, knowledge bases, and software tools) that are (1) accessible via the Web, and (2) related to entries in the National Center for Biotechnology Information’s (NCBI’s) Entrez system. The NLB collects these links from each resource and passes them to the NCBI which incorporates them into its Entrez LinkOut service. In this way, an Entrez user looking at a specific Entrez entry can LinkOut directly to related neuroscience information. The information stored in the NLB can also be utilized in other ways. A second approach, which is operational on a pilot basis, is for the NLB Web server to create dynamically its own Web page of LinkOut links for each NCBI identifier in the NLB database. This approach can allow other resources (in addition to the NCBI Entrez) to LinkOut to related neuroscience information. The paper describes the current NLB system and discusses certain design issues that arose during its implementation. PMID:18975149
[Tracing the map of medication errors outside the hospital environment in the Madrid Community].
Taravilla-Cerdán, Belén; Larrubia-Muñoz, Olga; de la Corte-García, María; Cruz-Martos, Encarnación
2011-12-01
Preparation of a map of medication errors reported by health professionals outside hospitals within the framework of Medication Errors Reporting for the Community of Madrid during the period 2008-2009. Retrospective observational study. Notification database of medication errors in the Community of Madrid. Notifications sent to the web page: Safe Use of Medicines and Health Products of the Community of Madrid. Information collected: the originator of the report, date of incident, shift, type of error and causes, outcome, patient characteristics, stage, place where it was produced and detected, whether the medication was administered, lot number, expiry date, the general nature of the drug, and a brief description of the incident. There were 5470 medication errors analysed, of which 3412 (62%) came from outside hospitals, occurring mainly at the prescription stage (56.92%) and reported most often by pharmacists. No harm was done in 92.9% of cases, harm occurred in 4.8%, and in 2.3% the error could not be followed up. The centralization of information has confirmed that prescription is a vulnerable point in the chain of drug therapy. Cleaning up prescription databases, preventing the marketing of commercial presentations that give rise to confusion, enhancing information for professionals and patients, establishing standardised procedures, and avoiding ambiguous or illegible prescriptions and abbreviations are useful strategies to try to minimise these errors. Copyright © 2010 Elsevier España, S.L. All rights reserved.
An ant colony optimization based feature selection for web page classification.
Saraç, Esra; Özel, Selma Ayşe
2014-01-01
The increased popularity of the web has caused a huge amount of information to be added to it, and as a result of this explosive growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features used, to improve both the runtime and the accuracy of web page classification. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k-nearest-neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO-based algorithm can select better features than the well-known information gain and chi-square feature selection methods.
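The ACO idea the abstract leans on (candidate subsets sampled in proportion to pheromone, with the best subset reinforced each iteration) can be sketched as follows. This is a toy version under stated assumptions; the paper's actual heuristic information, parameters, and classifier-based fitness differ:

```python
# Toy ant-colony feature selection: each "ant" samples a feature subset
# with probability proportional to pheromone; after each iteration the
# pheromone evaporates and the best subset found so far is reinforced.
import random

def aco_select(n_features, score, n_ants=10, n_iter=20,
               subset_size=3, evaporation=0.1, seed=0):
    rng = random.Random(seed)
    pheromone = [1.0] * n_features
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            # sample distinct features, weighted by pheromone
            subset = set()
            while len(subset) < subset_size:
                subset.add(rng.choices(range(n_features),
                                       weights=pheromone)[0])
            s = score(frozenset(subset))
            if s > best_score:
                best_subset, best_score = frozenset(subset), s
        # evaporate, then reinforce the best subset found so far
        pheromone = [(1 - evaporation) * p for p in pheromone]
        for f in best_subset:
            pheromone[f] += 1.0
    return best_subset, best_score

if __name__ == "__main__":
    # toy objective: features 0, 2, 5 are the informative ones
    target = {0, 2, 5}
    best, s = aco_select(10, lambda sub: len(sub & target))
    print(sorted(best), s)
```

In the study, `score` would be the accuracy of a classifier (C4.5, naive Bayes, or k-NN) trained on the candidate subset rather than a toy overlap count.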
Xu, Yanjun; Yang, Haixiu; Wu, Tan; Dong, Qun; Sun, Zeguo; Shang, Desi; Li, Feng; Xu, Yingqi; Su, Fei; Liu, Siyao
2017-01-01
BioM2MetDisease is a manually curated database that aims to provide a comprehensive and experimentally supported resource of associations between metabolic diseases and various biomolecules. Recently, metabolic diseases such as diabetes have become one of the leading threats to people's health. Metabolic diseases are associated with alterations of multiple types of biomolecules such as miRNAs and metabolites. An integrated, high-quality data source that collects metabolic disease-associated biomolecules is essential for exploring the underlying molecular mechanisms and discovering novel therapeutics. Here, we developed the BioM2MetDisease database, which currently documents 2681 entries of relationships between 1147 biomolecules (miRNAs, metabolites and small molecules/drugs) and 78 metabolic diseases across 14 species. Each entry includes biomolecule category, species, biomolecule name, disease name, dysregulation pattern, experimental technique, a brief description of the metabolic disease-biomolecule relationship, the reference, additional annotation information, etc. BioM2MetDisease provides a user-friendly interface to explore and retrieve all data conveniently. A submission page is also offered for researchers to submit new associations between biomolecules and metabolic diseases. BioM2MetDisease provides a comprehensive resource for studying the biomolecules that act in metabolic diseases, and it is helpful for understanding the molecular mechanisms of, and developing novel therapeutics for, metabolic diseases. Database URL: http://www.bio-bigdata.com/BioM2MetDisease/ PMID:28605773
77 FR 70454 - Proposed Flood Hazard Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-26
... which included a Web page address through which the Preliminary Flood Insurance Rate Map (FIRM), and... be accessed. The information available through the Web page address has subsequently been updated... through the web page address listed in the table has been updated to reflect the Revised Preliminary...
One-Shot Decoupling and Page Curves from a Dynamical Model for Black Hole Evaporation.
Brádler, Kamil; Adami, Christoph
2016-03-11
One-shot decoupling is a powerful primitive in quantum information theory and was hypothesized to play a role in the black hole information paradox. We study black hole dynamics modeled by a trilinear Hamiltonian whose semiclassical limit gives rise to Hawking radiation. An explicit numerical calculation of the discretized path integral of the S matrix shows that decoupling is exact in the continuous limit, implying that quantum information is perfectly transferred from the black hole to radiation. A striking consequence of decoupling is the emergence of an output radiation entropy profile that follows Page's prediction. We argue that information transfer and the emergence of Page curves is a robust feature of any multilinear interaction Hamiltonian with a bounded spectrum.
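For context, the Page curve referenced in the abstract is the average subsystem entropy derived by Page (1993): for a random pure state on a bipartite Hilbert space of dimension $mn$ with $1 \le m \le n$ (here, radiation and black hole), the mean entanglement entropy is

```latex
S_{m,n} \;=\; \sum_{k=n+1}^{mn} \frac{1}{k} \;-\; \frac{m-1}{2n}
\;\approx\; \ln m \;-\; \frac{m}{2n} .
```

As evaporation proceeds ($m$ grows while $n$ shrinks), this entropy rises to a maximum near the half-evaporation "Page time" and then falls back toward zero, which is the profile the paper's numerical S-matrix calculation reproduces.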
Frank, M S; Dreyer, K
2001-06-01
We describe a virtual web site hosting technology that enables educators in radiology to emblazon and make available for delivery on the world wide web their own interactive educational content, free from dependencies on in-house resources and policies. This suite of technologies includes a graphically oriented software application, designed for the computer novice, to facilitate the input, storage, and management of domain expertise within a database system. The database stores this expertise as choreographed and interlinked multimedia entities including text, imagery, interactive questions, and audio. Case-based presentations or thematic lectures can be authored locally, previewed locally within a web browser, then uploaded at will as packaged knowledge objects to an educator's (or department's) personal web site housed within a virtual server architecture. This architecture can host an unlimited number of unique educational web sites for individuals or departments in need of such service. Each virtual site's content is stored within that site's protected back-end database connected to Internet Information Server (Microsoft Corp, Redmond WA) using a suite of Active Server Page (ASP) modules that incorporate Microsoft's Active Data Objects (ADO) technology. Each person's or department's electronic teaching material appears as an independent web site with different levels of access--controlled by a username-password strategy--for teachers and students. There is essentially no static hypertext markup language (HTML). Rather, all pages displayed for a given site are rendered dynamically from case-based or thematic content that is fetched from that virtual site's database. The dynamically rendered HTML is displayed within a web browser in a Socratic fashion that can assess the recipient's current fund of knowledge while providing instantaneous user-specific feedback. Each site is emblazoned with the logo and identification of the participating institution. 
Individuals with teacher-level access can use a web browser to upload new content as well as manage content already stored on their virtual site. Each virtual site stores, collates, and scores participants' responses to the interactive questions posed on line. This virtual web site strategy empowers the educator with an end-to-end solution for creating interactive educational content and hosting that content within the educator's personalized and protected educational site on the world wide web, thus providing a valuable outlet that can magnify the impact of his or her talents and contributions.
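The core dynamic-rendering idea described above (no static HTML; every page is assembled on request from database rows) can be sketched in a few lines. The record schema and field names below are invented for illustration; the original system used ASP/ADO against a per-site database:

```python
# Minimal sketch of database-driven page rendering: no static HTML is
# stored; each case page is built on request from a record. The "CASES"
# dict stands in for the back-end database; its schema is hypothetical.
CASES = {
    17: {"title": "Chest radiograph, case 17",
         "question": "What is the most likely diagnosis?",
         "choices": ["Pneumothorax", "Pleural effusion", "Normal study"]},
}

def render_case(case_id):
    """Render one case-based teaching page as an HTML string."""
    case = CASES[case_id]
    parts = ["<html><body>", f"<h1>{case['title']}</h1>",
             f"<p>{case['question']}</p>", "<ol>"]
    parts += [f"<li>{c}</li>" for c in case["choices"]]
    parts += ["</ol>", "</body></html>"]
    return "\n".join(parts)

if __name__ == "__main__":
    print(render_case(17))
```

In the described architecture, the equivalent rendering step also scored the learner's answers and emblazoned each page with the hosting institution's logo.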
Bapat, Shweta S; Patel, Harshali K; Sansgiry, Sujit S
2017-10-16
In this study, we evaluate the role of information anxiety and information load on the intention to read information from prescription drug information leaflets (PILs). These PILs were developed based on the principles of information load and consumer information processing. This was an experimental prospective repeated-measures study conducted in the United States in which 360 (62% response rate) university students (>18 years old) participated. Participants were presented with a scenario followed by exposure to the three drug product information sources used to operationalize information load. The three sources were: (i) current practice; (ii) pre-existing one-page text only; and (iii) interventional one-page prototype PILs designed for the study. Information anxiety was measured as anxiety experienced by the individual when encountering information. The outcome variable of intention to read PILs was defined as the likelihood that the patient will read the information provided in the leaflets. A survey questionnaire was used to capture the data and the objectives were analyzed by performing a repeated-measures MANOVA using SAS version 9.3. When compared to current practice and one-page text-only leaflets, one-page PILs had significantly lower scores on information anxiety (p < 0.001) and information load (p < 0.001). The intention to read was highest and significantly different (p < 0.001) for PILs as compared to current practice or text-only leaflets. Information anxiety and information load significantly impacted intention to read (p < 0.001). Newly developed PILs increased patients' intention to read and can help in improving the counseling services provided by pharmacists.
Army Investment Casting Industry Report
1987-04-01
Page 9 #5 Capacity Utilization Rates Page 10 #6 Labor Intensity Indicator Page 11 #7 Market Share By Firm Size Page 12 #8 Market Share By Type of Firm...information gathered from the U.S. casters. These similarities coupled with the relatively small Canadian market share resulted in similar conclusions...investment castings further expanding into the Defense market some foreseeable difficulties that could arise would be: a. Lack of adequate tooling. b
The "New Oxford English Dictionary" Project.
ERIC Educational Resources Information Center
Fawcett, Heather
1993-01-01
Describes the conversion of the 22,000-page Oxford English Dictionary to an electronic version incorporating a modified Standard Generalized Markup Language (SGML) syntax. Explains that the database designers chose structured markup because it supports users' data searching needs, allows textual components to be extracted or modified, and allows…
Correction to: Nuclear, chloroplast, and mitochondrial data of a US cannabis DNA database.
Houston, Rachel; Birck, Matthew; LaRue, Bobby; Hughes-Stamm, Sheree; Gangitano, David
2018-05-01
The original version of this article contained a mistake. On page 10 of the original article, the significance level (p > 0.01) is incorrect. The correct significance level is (p < 0.01). The original article has been corrected.
FAA Registry - Aircraft - N-Number Inquiry
The quality and readability of online consumer information about gynecologic cancer.
Sobota, Aleksandra; Ozakinci, Gozde
2015-03-01
The Internet has become an important source of health-related information for consumers, among whom younger women constitute a notable group. The aims of this study were (1) to evaluate the quality and readability of online information about gynecologic cancer using validated instruments and (2) to relate the quality of information to its readability. Using the Alexa Rank, we obtained a list of 35 Web pages providing information about 7 gynecologic malignancies. These were assessed using the Health on the Net (HON) seal of approval, the Journal of the American Medical Association (JAMA) benchmarks, and the DISCERN instrument. Flesch readability score was calculated for sections related to symptoms and signs and treatment. Less than 30% of the Web pages displayed the HON seal or achieved all JAMA benchmarks. The majority of the treatment sections were of moderate to high quality according to the DISCERN. There was no significant relationship between the presence of the HON seal and readability. Web pages achieving all JAMA benchmarks were significantly more difficult to read and understand than Web pages that missed any of the JAMA benchmarks. Treatment-related content of moderate to high quality as assessed by the DISCERN had a significantly better readability score than the low-quality content. The online information about gynecologic cancer provided by the most frequently visited Web pages is of variable quality and in general difficult to read and understand. The relationship between the quality and readability remains unclear. Health care providers should direct their patients to reliable material online because patients consider the Internet as an important source of information.
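The Flesch score used in the study above is a fixed formula over sentence length and syllable counts. A minimal sketch follows; the vowel-group syllable counter is a crude heuristic, so scores will differ slightly from published readability tools:

```python
# Rough sketch of the Flesch Reading Ease score:
#   206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
# Higher scores mean easier text. The syllable counter below just
# counts vowel groups, which is only an approximation.
import re

def syllables(word):
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n = max(1, len(words))
    syl = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syl / n)

if __name__ == "__main__":
    print(round(flesch_reading_ease("The cat sat on the mat."), 1))
```

Scores in the 60-70 range are conventionally read as "plain English"; the study's finding was that much of the assessed cancer content falls well below that.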
6 CFR 7.31 - Mandatory review for declassification requests.
Code of Federal Regulations, 2012 CFR
2012-01-01
... action will be taken and of the requester's right to appeal. (c) Requests for review of information that... processing of a mandatory review request. Such notification shall include the number of pages declassified in full; the number of pages declassified in part; and the number of pages where declassification was...
6 CFR 7.31 - Mandatory review for declassification requests.
Code of Federal Regulations, 2014 CFR
2014-01-01
... action will be taken and of the requester's right to appeal. (c) Requests for review of information that... processing of a mandatory review request. Such notification shall include the number of pages declassified in full; the number of pages declassified in part; and the number of pages where declassification was...
Where do I find documentation/more information concerning a data set?
Atmospheric Science Data Center
2015-11-30
To access documentation, locate and select the link from the Projects Supported page for the project that you would like ... page where you can access it if it is available, note that a missing tab on the product page indicates that there is no documentation ...
The National Center for Atmospheric Research (NCAR) Research Data Archive: a Data Education Center
NASA Astrophysics Data System (ADS)
Peng, G. S.; Schuster, D.
2015-12-01
The National Center for Atmospheric Research (NCAR) Research Data Archive (RDA), rda.ucar.edu, is not just another data center or data archive. It is a data education center. We not only serve data, we TEACH data. Weather and climate data is the original "Big Data" dataset and lessons learned while playing with weather data are applicable to a wide range of data investigations. Erroneous data assumptions are the Achilles heel of Big Data. It doesn't matter how much data you crunch if the data is not what you think it is. Each dataset archived at the RDA is assigned to a data specialist (DS) who curates the data. If a user has a question not answered in the dataset information web pages, they can call or email a skilled DS for further clarification. The RDA's diverse staff—with academic training in meteorology, oceanography, engineering (electrical, civil, ocean and database), mathematics, physics, chemistry and information science—means we likely have someone who "speaks your language." Data discovery is another difficult Big Data problem; one can only solve problems with data if one can find the right data. Metadata, both machine and human-generated, underpin the RDA data search tools. Users can quickly find datasets by name or dataset ID number. They can also perform a faceted search that successively narrows the options by user requirements or simply kick off an indexed search with a few words. Weather data formats can be difficult to read for non-expert users; it's usually packed in binary formats requiring specialized software and parameter names use specialized vocabularies. DSs create detailed information pages for each dataset and maintain lists of helpful software, documentation and links of information around the web. We further grow the level of sophistication of the users with tips, tutorials and data stories on the RDA Blog, http://ncarrda.blogspot.com/. 
How-to video tutorials are also posted on the NCAR Computational and Information Systems Laboratory (CISL) YouTube channel.
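The faceted search described above successively narrows the candidate datasets by user requirements. A minimal sketch of that filtering step follows; the metadata records and facet names are invented for illustration and are not the RDA's actual schema:

```python
# Sketch of faceted dataset search: each requested facet value narrows
# the candidate list. Records and facet names here are hypothetical.
DATASETS = [
    {"id": "ds-a", "type": "reanalysis",  "format": "GRIB",   "freq": "6-hourly"},
    {"id": "ds-b", "type": "forecast",    "format": "GRIB2",  "freq": "3-hourly"},
    {"id": "ds-c", "type": "observation", "format": "netCDF", "freq": "hourly"},
]

def faceted_search(records, **facets):
    """Keep only records matching every requested facet value."""
    return [r for r in records
            if all(r.get(k) == v for k, v in facets.items())]

if __name__ == "__main__":
    hits = faceted_search(DATASETS, type="reanalysis")
    print([r["id"] for r in hits])
```

Each additional keyword argument plays the role of one more facet selection in the search interface, so chaining selections is just passing more facets.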
75 FR 68040 - Proposed Information Collection Activity: Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-04
... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0663 (Pay Now Enter Info Page)] Proposed... of automated collection techniques or the use of other forms of information technology. Title: Pay... make online payments through VA's Pay Now Enter Info Page Web site. Data entered on the Pay Now Enter...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-21
... DS-4085 Application for Additional Visa Pages or Miscellaneous Passport Services, OMB Control Number... of Information Collection: Application for Additional Visa Pages or Miscellaneous Passport Services.... Originating Office: Bureau of Consular Affairs, Passport Services CA/PPT. Form Number: DS-4085. Respondents...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-22
... DEPARTMENT OF EDUCATION Office of Special Education and Rehabilitative Services Overview Information; Migrant and Seasonal Farmworkers Program Correction In notice document 2010-5976 beginning on page 13106 in the issue of Thursday, March 18, 2010 make the following correction: On page 13106, in...
Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif
2016-03-11
Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the list of words used is of importance when training samples should reflect the input of a specific area of application. However, generation of training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. However, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
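The vocabulary-based error correction mentioned at the end can be sketched as snapping a recognizer's raw output to the nearest vocabulary word by edit distance. The paper used the 50,000 most common Arabic words; the tiny transliterated vocabulary below is purely illustrative:

```python
# Sketch of vocabulary-based error correction: the recognizer's raw
# output is replaced by the closest word (Levenshtein distance) in a
# frequency vocabulary. The example vocabulary is hypothetical.
def edit_distance(a, b):
    """Levenshtein distance via a rolling single-row DP table."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary):
    """Return the vocabulary word closest to the recognized word."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))

if __name__ == "__main__":
    vocab = ["kitab", "qalam", "madrasa"]  # illustrative transliterations
    print(correct("kitap", vocab))
```

In practice ties and distance thresholds matter (a badly misrecognized word should not be forced onto an unrelated vocabulary entry), but the nearest-word lookup is the core of the step.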
Clique-based data mining for related genes in a biomedical database.
Matsunaga, Tsutomu; Yonemori, Chikara; Tomita, Etsuji; Muramatsu, Masaaki
2009-07-01
Progress in the life sciences cannot be made without integrating biomedical knowledge on numerous genes in order to help formulate hypotheses on the genetic mechanisms behind various biological phenomena, including diseases. There is thus a strong need for a way to search biomedical databases automatically and comprehensively for related genes, such as genes in the same families and genes encoding components of the same pathways. Here we address the extraction of related genes by searching for densely connected subgraphs, which are modeled as cliques, in a biomedical relational graph. We constructed a graph whose nodes were gene or disease pages, and whose edges were the hyperlink connections between those pages in the Online Mendelian Inheritance in Man (OMIM) database. We obtained over 20,000 sets of related genes (called 'gene modules') by enumerating cliques computationally. The modules included genes in the same family, genes for proteins that form a complex, and genes for components of the same signaling pathway. The results of experiments using 'metabolic syndrome'-related gene modules show that the gene modules can be used to get a coherent holistic picture helpful for interpreting relations among genes. We presented a data-mining approach that extracts related genes by enumerating cliques. The extracted gene sets provide a holistic picture useful for comprehending complex disease mechanisms.
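The clique-enumeration step can be sketched with the classic Bron-Kerbosch algorithm (shown here without pivoting; the paper's own enumeration procedure may differ). Nodes stand for gene/disease pages and edges for hyperlinks between them; the toy graph is invented for illustration:

```python
# Bron-Kerbosch (no pivoting): yields every maximal clique of an
# undirected graph given as {node: frozenset(neighbors)}.
def bron_kerbosch(graph, r=frozenset(), p=None, x=frozenset()):
    if p is None:
        p = frozenset(graph)
    if not p and not x:
        yield r          # r cannot be extended: it is a maximal clique
        return
    for v in list(p):
        yield from bron_kerbosch(graph, r | {v},
                                 p & graph[v], x & graph[v])
        p = p - {v}      # v fully explored: move it from candidates...
        x = x | {v}      # ...to the excluded set

if __name__ == "__main__":
    # two triangles sharing the edge (2, 3)
    g = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
    g = {k: frozenset(v) for k, v in g.items()}
    print(sorted(sorted(c) for c in bron_kerbosch(g)))
```

At OMIM scale a pivoting variant (or a library routine such as NetworkX's `find_cliques`) would be used, but the recursion above is the underlying technique.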
Leo, C A; Murphy, J; Hodgkinson, J D; Vaizey, C J; Maeda, Y
2018-01-01
The Internet has become an important platform for information communication. This study aims to investigate the utility of social media and search engines for disseminating faecal incontinence information. We examined social media platforms and search engines. There was no direct patient recruitment, and any available patient information was already in the public domain at the time of the search. A quantitative analysis of the types and volumes of information regarding faecal incontinence was made. Twelve valid pages were identified on Facebook: 5 (41%) were advertising commercial incontinence products, 4 (33%) were dedicated to patient support groups, and 3 (25%) provided healthcare information. We also found 192 Facebook posts. On Twitter, 2890 tweets were found, of which 51% provided healthcare information; 675 (45%) were sent by healthcare professionals to patients, 530 (35.3%) were between healthcare professionals, 201 (13.4%) were from medical journals or scientific books, and 103 (7%) were from hospitals or clinics with information about events and meetings. The second commonest type of tweet was advertising of commercial incontinence products (27%). Patients tweeted to exchange information and advice between themselves (20.5%). In contrast, search engines such as Google, Yahoo, and Bing had a higher proportion of healthcare information (over 70%). The Internet appears to have the potential to be a useful platform for patients to learn about faecal incontinence and share information; however, given the lack of focus of the available data, patients may struggle to identify valid and useful information.
Bates, Benjamin R; Romina, Sharon; Ahmed, Rukhsana; Hopson, Danielle
2006-03-01
Recent use of the Internet as a source of health information has raised concerns about consumers' ability to tell 'good' information from 'bad' information. Although consumers report that they use source credibility to judge information quality, several observational studies suggest that consumers make little use of source credibility. This study examines consumer evaluations of web pages attributed to a credible source as compared to generic web pages on measures of message quality. In spring 2005, a community-wide convenience survey was distributed in a regional hub city in Ohio, USA. 519 participants were randomly assigned one of six messages discussing lung cancer prevention: three messages each attributed to a highly credible national organization and three identical messages each attributed to a generic web page. Independent-samples t-tests were conducted to compare each attributed message to its counterpart attributed to a generic web page on measures of trustworthiness, truthfulness, readability, and completeness. The results demonstrated that differences in attribution to a source did not have a significant effect on consumers' evaluations of the quality of the information. Conclusions: The authors offer suggestions for national organizations to promote credibility to consumers as a heuristic for choosing better online health information through the use of media co-channels to emphasize credibility.
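The analysis described is an independent-samples (two-group) t-test comparing ratings of the attributed and generic versions of each message. A minimal sketch of that comparison with a pooled-variance t statistic; the ratings below are hypothetical illustrations, not the study's data:

```python
import math

def independent_t(a, b):
    """Two-sample Student's t statistic with pooled variance.

    Returns (t, degrees of freedom). Assumes equal variances,
    as in a standard independent-samples t-test.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    se = math.sqrt(sp2 * (1 / na + 1 / nb))  # standard error of mean difference
    return (ma - mb) / se, na + nb - 2

# Hypothetical trustworthiness ratings on a Likert-type scale
# (illustrative only; equal group means give t = 0, i.e. no difference):
credible_source = [5, 6, 5, 4, 6, 5]
generic_source  = [5, 5, 6, 4, 5, 6]
t, df = independent_t(credible_source, generic_source)
print(t, df)  # → 0.0 10
```

In practice one would compute the p-value from the t distribution (e.g. `scipy.stats.ttest_ind`); the point of the sketch is the structure of the comparison: one test per quality measure, attributed vs. generic.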
Spectr-W3 Online Database On Atomic Properties Of Atoms And Ions
NASA Astrophysics Data System (ADS)
Faenov, A. Ya.; Magunov, A. I.; Pikuz, T. A.; Skobelev, I. Yu.; Loboda, P. A.; Bakshayev, N. N.; Gagarin, S. V.; Komosko, V. V.; Kuznetsov, K. S.; Markelenkov, S. A.
2002-10-01
Recent progress in information technologies based on the World-Wide Web (WWW) offers new possibilities for the worldwide exchange of atomic spectral and collisional data. This facilitates joint efforts of the international scientific community in basic and applied research, promising technological developments, and university education programs. Special-purpose atomic databases (ADBs) are needed for effective use of large-scale datasets. The ADB SPECTR, developed at MISDC of VNIIFTRI, has been used during the last decade in several laboratories around the world, including RFNC-VNIITF. The DB SPECTR accumulates a considerable amount of atomic data (about 500,000 records). These data were extracted from publications on experimental and theoretical studies in atomic physics, astrophysics, and plasma spectroscopy over the last few decades. The information for atoms and ions comprises the ionization potentials, the energy levels, the wavelengths and transition probabilities, and, to a lesser extent, the autoionization rates and the electron-ion collision cross-sections and rates. The data are supplied with source references and comments elucidating the details of computations or measurements. Our goal is to create an interactive WWW information resource based on the extended and updated Web-oriented database version SPECTR-W3 and to further integrate it into the family of specialized atomic databases on the Internet. The new version will incorporate novel experimental and theoretical data. An appropriate revision of the previously accumulated data will be performed from the viewpoint of their consistency with the current state of the art. We are particularly interested in cooperation on storing atomic collision data. Presently, a software shell with an up-to-date Web interface is being developed to work with the SPECTR-W3 database.
The shell will include subsystems for information retrieval, input, update, and output to/from the database, and will offer users a range of capabilities: formulating queries with various modes of search prescription and presenting the information in tabular, graphic, and alphanumeric form using text and HTML document formats. The SPECTR-W3 Website is now being arranged and is intended to be freely accessible round-the-clock on a dedicated Web server at RFNC-VNIITF. The Website is being created with advanced Internet technologies and database development techniques, using up-to-date software from the world's leading software manufacturers. The SPECTR-W3 ADB front page will also include a feedback channel for user comments and proposals, as well as hyperlinks to the Websites of other ADBs and research centers in Europe, the USA, and the Middle and Far East that conduct investigations in atomic physics, plasma spectroscopy, astrophysics, and adjacent areas. The effort is being supported by the International Science and Technology Center under project #1785-01.
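The retrieval subsystem described above amounts to parameterized searches over tables of atomic records (element, ion stage, wavelength, transition probability, source reference). A minimal sketch of such a query interface using an in-memory SQLite table; the schema and every number below are hypothetical illustrations, not SPECTR-W3's actual layout or data:

```python
import sqlite3

# Hypothetical table of spectral-line records, loosely modeled on the kinds
# of quantities the abstract lists (wavelengths, transition probabilities,
# source references). Not the real SPECTR-W3 schema.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE line (
    element        TEXT,
    ion_charge     INTEGER,   -- e.g. 16 for Fe XVII
    wavelength_nm  REAL,
    transition_prob REAL,     -- s^-1
    source_ref     TEXT)""")
con.executemany("INSERT INTO line VALUES (?, ?, ?, ?, ?)", [
    ("Fe", 16, 1.50, 2.0e13, "ref-a"),   # illustrative values only
    ("Fe", 16, 1.74, 5.0e12, "ref-b"),
    ("Al", 12, 0.78, 1.1e13, "ref-c"),
])

# A search-prescription-style query: lines of one ion in a wavelength window,
# as a Web search form over the database might issue it.
rows = con.execute(
    "SELECT wavelength_nm, transition_prob FROM line "
    "WHERE element = ? AND ion_charge = ? AND wavelength_nm BETWEEN ? AND ? "
    "ORDER BY wavelength_nm",
    ("Fe", 16, 1.0, 2.0)).fetchall()
print(rows)  # → [(1.5, 20000000000000.0), (1.74, 5000000000000.0)]
```

The tabular/graphic/alphanumeric output modes the abstract mentions would then be different renderings of result sets like `rows`.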