47 CFR 69.120 - Line information database.
Code of Federal Regulations, 2010-2014 CFR editions
... 47 Telecommunication 3 ... false Line information database. 69.120 Section 69...) ACCESS CHARGES Computation of Charges § 69.120 Line information database. (a) A charge that is expressed... from a local exchange carrier database to recover the costs of: (1) The transmission facilities between...
Development and Implementation of Kumamoto Technopolis Regional Database T-KIND
NASA Astrophysics Data System (ADS)
Onoue, Noriaki
T-KIND (Techno-Kumamoto Information Network for Data-Base) is a system for effectively searching the information on technology, human resources and industries needed to realize Kumamoto Technopolis. It comprises a coded database, an image database and a LAN inside the techno-research park that serves as the center of R&D in the Technopolis. The on-line system is built by networking general-purpose computers, minicomputers, optical disk file systems and related equipment, and the service is provided over public telephone lines. Two databases are now available, covering enterprise information and human resource information. The former covers about 4,000 enterprises and the latter about 2,000 persons.
The Genomes On Line Database (GOLD) v.2: a monitor of genome projects worldwide
Liolios, Konstantinos; Tavernarakis, Nektarios; Hugenholtz, Philip; Kyrpides, Nikos C.
2006-01-01
The Genomes On Line Database (GOLD) is a web resource for comprehensive access to information regarding complete and ongoing genome sequencing projects worldwide. The database currently incorporates information on over 1500 sequencing projects, of which 294 have been completed and the data deposited in the public databases. GOLD v.2 has been expanded to provide information related to organism properties such as phenotype, ecotype and disease. Furthermore, project relevance and availability information is now included. GOLD is available online and is mirrored at the Institute of Molecular Biology and Biotechnology, Crete, Greece. PMID:16381880
This fact sheet provides an overview of the 10 on-line characterization and remediation databases available on the Hazardous Waste Clean-Up Information (CLU-IN) website sponsored by the U.S. Environmental Protection Agency.
Consumption Database: The California Energy Commission has created this on-line database for informal reporting ... classifications. The database also provides easy downloading of energy consumption data into Microsoft Excel (XLSX) format.
S&MPO - An information system for ozone spectroscopy on the WEB
NASA Astrophysics Data System (ADS)
Babikov, Yurii L.; Mikhailenko, Semen N.; Barbe, Alain; Tyuterev, Vladimir G.
2014-09-01
Spectroscopy and Molecular Properties of Ozone ("S&MPO") is an Internet-accessible information system devoted to high-resolution spectroscopy of the ozone molecule, related properties and data sources. S&MPO contains original spectroscopic data (line positions, line intensities, energies, transition moments, spectroscopic parameters) recovered from comprehensive analyses and modeling of experimental spectra, as well as associated software for data representation written in PHP, JavaScript, C++ and Fortran. The line-by-line list of vibration-rotation transitions and other information is organized as a relational database under the control of MySQL database tools. The main goal of S&MPO is to provide access to all available information on vibration-rotation molecular states and transitions under extended conditions, based on extrapolations of laboratory measurements using validated theoretical models. Applications of S&MPO may include education and training in molecular physics, radiative processes and laser physics; spectroscopic applications (analysis, Fourier transform spectroscopy, atmospheric optics, optical standards, spectroscopic atlases); environment studies and atmospheric physics (remote sensing); data supply for specific databases; and photochemistry (laser excitation, multiphoton processes). The system is accessible via the Internet at two sites: http://smpo.iao.ru and http://smpo.univ-reims.fr.
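The S&MPO line list is described above as a relational database of vibration-rotation transitions under MySQL. A minimal sketch of that idea, using SQLite in place of MySQL and an invented schema and invented line values (not the actual S&MPO tables), might look like this:

```python
# Hypothetical sketch: a line-by-line spectroscopic list held in a
# relational table, queried by wavenumber window. Schema, state labels,
# and numeric values are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transitions (
        position_cm1 REAL,   -- line position (wavenumber, cm-1)
        intensity    REAL,   -- line intensity
        upper_state  TEXT,   -- vibration state label (assumed notation)
        lower_state  TEXT
    )""")
conn.executemany(
    "INSERT INTO transitions VALUES (?, ?, ?, ?)",
    [(1007.6, 3.1e-21, "(0,0,1)", "(0,0,0)"),
     (1042.1, 4.5e-21, "(0,0,1)", "(0,0,0)"),
     (2105.3, 1.2e-22, "(0,0,2)", "(0,0,1)")])

# A typical retrieval: all lines in a wavenumber window, strongest first.
rows = conn.execute(
    "SELECT position_cm1, intensity FROM transitions "
    "WHERE position_cm1 BETWEEN 1000 AND 1100 "
    "ORDER BY intensity DESC").fetchall()
```

The same range-and-sort query pattern is what makes a relational engine a natural fit for line-by-line spectroscopic data.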
JICST Factual Database JICST DNA Database
NASA Astrophysics Data System (ADS)
Shirokizawa, Yoshiko; Abe, Atsushi
The Japan Information Center of Science and Technology (JICST) started its on-line DNA database service in October 1988. This database is composed of the EMBL Nucleotide Sequence Library and the Genetic Sequence Data Bank. The authors outline the database system, data items and search commands. Examples of retrieval sessions are presented.
Information integration for a sky survey by data warehousing
NASA Astrophysics Data System (ADS)
Luo, A.; Zhang, Y.; Zhao, Y.
The virtualization service of the data system for the sky survey LAMOST is very important for astronomers. The service needs to integrate information from data collections, catalogs and references, and to support simple federation of a set of distributed files and associated metadata. Data warehousing has been in existence for several years and has demonstrated superiority over traditional relational database management systems by providing novel indexing schemes that support efficient on-line analytical processing (OLAP) of large databases. Now relational database systems such as Oracle support the warehouse capability, which includes extensions to the SQL language to support OLAP operations, and a number of metadata management tools have been created. The information integration of LAMOST by applying data warehousing aims to effectively provide data and knowledge on-line.
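The OLAP roll-up mentioned above is, at its core, aggregation of a fact table along chosen dimensions. A minimal sketch of that operation, with an invented toy fact table (the catalog names and counts are not from LAMOST), could be:

```python
# Hypothetical sketch of an OLAP-style roll-up: aggregating a fact table
# along one dimension at a time. The records are invented for illustration.
from collections import defaultdict

# (catalog, object_type, count) -- a toy "fact table"
facts = [
    ("catalogA", "star",   120),
    ("catalogA", "galaxy",  30),
    ("catalogB", "star",    80),
    ("catalogB", "galaxy",  70),
]

def rollup(facts, dim):
    """Sum counts grouped by one dimension (0 = catalog, 1 = object type)."""
    totals = defaultdict(int)
    for row in facts:
        totals[row[dim]] += row[2]
    return dict(totals)

by_catalog = rollup(facts, 0)
by_type = rollup(facts, 1)
```

A warehouse engine performs the same grouping with SQL extensions (e.g. GROUP BY with ROLLUP/CUBE) and specialized indexes, which is what makes interactive analysis of large catalogs feasible.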
Liolios, Konstantinos; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Kyrpides, Nikos C.
2008-01-01
The Genomes On Line Database (GOLD) is a comprehensive resource that provides information on genome and metagenome projects worldwide. Complete and ongoing projects and their associated metadata can be accessed in GOLD through pre-computed lists and a search page. As of September 2007, GOLD contains information on more than 2900 sequencing projects, out of which 639 have been completed and their sequence data deposited in the public databases. GOLD continues to expand with the goal of providing metadata information related to the projects and the organisms/environments towards the 'Minimum Information about a Genome Sequence' (MIGS) guideline. GOLD is available at http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece at http://gold.imbb.forth.gr/ PMID:17981842
Sakurai, Tetsuya; Kondou, Youichi; Akiyama, Kenji; Kurotani, Atsushi; Higuchi, Mieko; Ichikawa, Takanari; Kuroda, Hirofumi; Kusano, Miyako; Mori, Masaki; Saitou, Tsutomu; Sakakibara, Hitoshi; Sugano, Shoji; Suzuki, Makoto; Takahashi, Hideki; Takahashi, Shinya; Takatsuji, Hiroshi; Yokotani, Naoki; Yoshizumi, Takeshi; Saito, Kazuki; Shinozaki, Kazuo; Oda, Kenji; Hirochika, Hirohiko; Matsui, Minami
2011-01-01
Identification of gene function is important not only for basic research but also for applied science, especially with regard to improvements in crop production. For rapid and efficient elucidation of useful traits, we developed a system named FOX hunting (Full-length cDNA Over-eXpressor gene hunting) using full-length cDNAs (fl-cDNAs). A heterologous expression approach provides a solution for the high-throughput characterization of gene functions in agricultural plant species. Since fl-cDNAs contain all the information of functional mRNAs and proteins, we introduced rice fl-cDNAs into Arabidopsis plants for systematic gain-of-function mutation. We generated >30,000 independent Arabidopsis transgenic lines expressing rice fl-cDNAs (rice FOX Arabidopsis mutant lines). These rice FOX Arabidopsis lines were screened systematically for various criteria such as morphology, photosynthesis, UV resistance, element composition, plant hormone profile, metabolite profile/fingerprinting, bacterial resistance, and heat and salt tolerance. The information obtained from these screenings was compiled into a database named ‘RiceFOX’. This database contains around 18,000 records of rice FOX Arabidopsis lines and allows users to search against all the observed results, ranging from morphological to invisible traits. The number of searchable items is approximately 100; moreover, the rice FOX Arabidopsis lines can be searched by rice and Arabidopsis gene/protein identifiers, sequence similarity to the introduced rice fl-cDNA and traits. The RiceFOX database is available at http://ricefox.psc.riken.jp/. PMID:21186176
The HITRAN molecular data base - Editions of 1991 and 1992
NASA Technical Reports Server (NTRS)
Rothman, Laurence S.; Gamache, R. R.; Tipping, R. H.; Rinsland, C. P.; Smith, M. A. H.; Benner, D. C.; Devi, V. M.; Flaud, J.-M.; Camy-Peyret, C.; Perrin, A.
1992-01-01
We describe in this paper the modifications, improvements, and enhancements to the HITRAN molecular absorption database that have occurred in the two editions of 1991 and 1992. The current database includes line parameters for 31 species and their isotopomers that are significant for terrestrial atmospheric studies. This line-by-line portion of HITRAN presently contains about 709,000 transitions between 0 and 23,000 cm-1 and contains three molecules not present in earlier versions: COF2, SF6, and H2S. The HITRAN compilation has substantially more information on chlorofluorocarbons and other molecular species that exhibit dense spectra which are not amenable to line-by-line representation. User access to the database has been advanced, and new media forms are now available for use on personal computers.
Bonfill, Xavier; Osorio, Dimelza; Solà, Ivan; Pijoan, Jose Ignacio; Balasso, Valentina; Quintana, Maria Jesús; Puig, Teresa; Bolibar, Ignasi; Urrútia, Gerard; Zamora, Javier; Emparanza, José Ignacio; Gómez de la Cámara, Agustín; Ferreira-González, Ignacio
2016-01-01
Objective To describe the development of a novel on-line database aimed to serve as a source of information concerning healthcare interventions appraised for their clinical value and appropriateness by several initiatives worldwide, and to present a retrospective analysis of the appraisals already included in the database. Methods and Findings Database development and a retrospective analysis. The database DianaHealth.com is already on-line and it is regularly updated, independent, open access and available in English and Spanish. Initiatives are identified in medical news, in article references, and by contacting experts in the field. We include appraisals in the form of clinical recommendations, expert analyses, conclusions from systematic reviews, and original research that label any health care intervention as low-value or inappropriate. We obtain the information necessary to classify the appraisals according to type of intervention, specialties involved, publication year, authoring initiative, and key words. The database is accessible through a search engine which retrieves a list of appraisals and a link to the website where they were published. DianaHealth.com also provides a brief description of the initiatives and a section where users can report new appraisals or suggest new initiatives. From January 2014 to July 2015, the on-line database included 2940 appraisals from 22 initiatives: eleven campaigns gathering clinical recommendations from scientific societies, five sets of conclusions from literature review, three sets of recommendations from guidelines, two collections of articles on low clinical value in medical journals, and an initiative of our own. Conclusions We have developed an open access on-line database of appraisals about healthcare interventions considered of low clinical value or inappropriate. DianaHealth.com could help physicians and other stakeholders make better decisions concerning patient care and healthcare systems sustainability. 
Future efforts should focus on assessing the impact of these appraisals in clinical practice. PMID:26840451
Romano, Paolo; Manniello, Assunta; Aresu, Ottavia; Armento, Massimiliano; Cesaro, Michela; Parodi, Barbara
2009-01-01
The Cell Line Data Base (CLDB) is a well-known reference information source on human and animal cell lines including information on more than 6000 cell lines. Main biological features are coded according to controlled vocabularies derived from international lists and taxonomies. HyperCLDB (http://bioinformatics.istge.it/hypercldb/) is a hypertext version of CLDB that improves data accessibility by also allowing information retrieval through web spiders. Access to HyperCLDB is provided through indexes of biological characteristics and navigation in the hypertext is granted by many internal links. HyperCLDB also includes links to external resources. Recently, an interest was raised for a reference nomenclature for cell lines and CLDB was seen as an authoritative system. Furthermore, to overcome the cell line misidentification problem, molecular authentication methods, such as fingerprinting, single-locus short tandem repeat (STR) profile and single nucleotide polymorphisms validation, were proposed. Since this data is distributed, a reference portal on authentication of human cell lines is needed. We present here the architecture and contents of CLDB, its recent enhancements and perspectives. We also present a new related database, the Cell Line Integrated Molecular Authentication (CLIMA) database (http://bioinformatics.istge.it/clima/), that allows to link authentication data to actual cell lines. PMID:18927105
NASA Technical Reports Server (NTRS)
Grams, R. R.
1982-01-01
A system designed to access a large range of available medical textbook information in an online interactive fashion is described. A high level query type database manager, INQUIRE, is used. Operating instructions, system flow diagrams, database descriptions, text generation, and error messages are discussed. User information is provided.
The relational clinical database: a possible solution to the star wars in registry systems.
Michels, D K; Zamieroski, M
1990-12-01
In summary, having data from other service areas available in a relational clinical database could resolve many of the problems existing in today's registry systems. Uniting sophisticated information systems into a centralized database system could definitely be a corporate asset in managing the bottom line.
HOPE: An On-Line Piloted Handling Qualities Experiment Data Book
NASA Technical Reports Server (NTRS)
Jackson, E. B.; Proffitt, Melissa S.
2010-01-01
A novel on-line database for capturing most of the information obtained during piloted handling qualities experiments (either flight or simulated) is described. The Hyperlinked Overview of Piloted Evaluations (HOPE) web application is based on an open-source, object-oriented web front end (Ruby on Rails) that can be used with a variety of back-end relational database engines. The hyperlinked, on-line data book approach allows an easily traversed way of looking at a variety of collected data, including pilot ratings, pilot information, vehicle and configuration characteristics, test maneuvers, and individual flight test cards and repeat runs. It allows for on-line retrieval of pilot comments, both audio and transcribed, as well as time history data retrieval and video playback. Pilot questionnaires are recorded, as are pilot biographies. Simple statistics are calculated for each selected group of pilot ratings, allowing multiple ways to aggregate the data set (by pilot, by task, or by vehicle configuration, for example). Any number of per-run or per-task metrics can be captured in the database. The entire run metrics dataset can be downloaded in comma-separated text for further analysis off-line. It is expected that this tool will be made available upon request.
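The per-group statistics described above (aggregating ratings by pilot, task, or configuration) can be sketched in a few lines. The rating records and grouping keys below are invented for illustration and are not actual HOPE data:

```python
# Hypothetical sketch: simple statistics over pilot ratings, grouped by
# any chosen key (pilot, task, configuration). Data is invented.
from statistics import mean, median

ratings = [
    {"pilot": "A", "task": "landing",  "rating": 4},
    {"pilot": "A", "task": "tracking", "rating": 3},
    {"pilot": "B", "task": "landing",  "rating": 5},
    {"pilot": "B", "task": "tracking", "rating": 3},
]

def stats_by(records, key):
    """Group records by one field and summarize the ratings in each group."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r["rating"])
    return {k: {"mean": mean(v), "median": median(v), "n": len(v)}
            for k, v in groups.items()}

by_task = stats_by(ratings, "task")
by_pilot = stats_by(ratings, "pilot")
```

The same grouping could equally run as a SQL GROUP BY in the relational back end; doing it in the application layer keeps the aggregation key flexible.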
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
2000-05-31
Grey Literature Network Service (Farace, Dominic, 1997) as, "that which is produced on all levels of government, academics, business and industry in... literature is available, on-line, to scientific workers throughout the world, for a world scientific database." These reports served as the base to begin...
DOE technology information management system database study report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widing, M.A.; Blodgett, D.W.; Braun, M.D.
1994-11-01
To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System.
Cross-Service Investigation of Geographical Information Systems
2004-03-01
Figure 8 illustrates the combined layers. Information for the layers is stored in a database format. The two types of storage are vector and raster models. In a vector model, the image and information are stored as geometric objects such as points, lines, or polygons. In a raster model... DNCs are a vector-based digital database with selected maritime significant physical features from hydrographic charts. Layers within the DNC are data
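The vector/raster contrast described above can be made concrete with toy data structures. The coordinates and grid values below are invented, not taken from any DNC layer:

```python
# Hypothetical sketch of the two GIS storage models: a vector model
# stores geometric objects; a raster model stores a grid of cells.
# All coordinates and cell values are invented for illustration.

# Vector model: geometry as explicit coordinate lists.
point = (2.0, 3.0)
line = [(0.0, 0.0), (2.0, 3.0)]
polygon = [(0, 0), (4, 0), (4, 4), (0, 4), (0, 0)]  # closed ring

# Raster model: the same area as a grid of cells (1 = feature present).
raster = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

def cell_count(grid, value=1):
    """Count raster cells carrying a given value (a raster-style query)."""
    return sum(row.count(value) for row in grid)
```

The trade-off the text implies follows directly: vector geometry preserves exact shapes and scales cleanly, while a raster answers area queries by simple cell counting at fixed resolution.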
On-line searching: costly or cost effective? A marketing perspective.
Dunn, R G; Boyle, H F
1984-05-01
The value of acquiring and using information is not well understood. Decisions to purchase information are made on the basis of the perceived need for the information, the anticipated benefit of using it, and the price. The current pricing of on-line information services, which emphasizes the connect hour as the unit of price, does not relate the price of a search to the value of a search, and the education programs of on-line vendors and database suppliers concentrate on the mechanics of information retrieval rather than on the application of information to the customer's problem. The on-line information industry needs to adopt a strong marketing orientation that focuses on the needs of customers rather than the needs of suppliers or vendors.
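The pricing critique above (connect-hour pricing charges for time, not value) reduces to simple arithmetic. The rates and durations below are hypothetical figures chosen only to illustrate the point:

```python
# Back-of-the-envelope sketch: under connect-hour pricing, two searches
# that deliver the same information are priced very differently if one
# searcher is slower. All figures are hypothetical.
def search_cost(connect_minutes, rate_per_hour):
    """Price of one on-line search under per-connect-hour pricing."""
    return rate_per_hour * connect_minutes / 60.0

novice = search_cost(30, rate_per_hour=90.0)  # 30-minute search
expert = search_cost(6, rate_per_hour=90.0)   # same result in 6 minutes
```

Here the same search result costs the novice five times what it costs the expert, which is the disconnect between price and value the article argues the industry's marketing should address.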
HIPdb: a database of experimentally validated HIV inhibiting peptides.
Qureshi, Abid; Thakur, Nishant; Kumar, Manoj
2013-01-01
Besides antiretroviral drugs, peptides have also demonstrated potential to inhibit the Human immunodeficiency virus (HIV). For example, T20 has been discovered to effectively block HIV entry and was approved by the FDA as a novel anti-HIV peptide (AHP). We have collated all experimental information on AHPs on a single platform. HIPdb is a manually curated database of experimentally verified HIV-inhibiting peptides targeting various steps or proteins involved in the life cycle of HIV, e.g. fusion, integration and reverse transcription. This database provides experimental information on 981 peptides. These are of varying length, obtained from natural as well as synthetic sources and tested on different cell lines. Important fields included are peptide sequence, length, source, target, cell line, inhibition/IC50, assay and reference. The database provides user-friendly browse, search, sort and filter options. It also contains useful services like BLAST and 'Map' for alignment with user-provided sequences. In addition, predicted structure and physicochemical properties of the peptides are also included. HIPdb is freely available at http://crdd.osdd.net/servers/hipdb. Comprehensive information in this database will be helpful in selecting and designing effective anti-HIV peptides. Thus it may prove a useful resource to researchers for peptide-based therapeutics development.
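The browse/filter behavior described for HIPdb amounts to selecting records by field value. A minimal sketch, with invented records (the peptide IDs and values below are not actual HIPdb entries):

```python
# Hypothetical sketch of field-based filtering over curated peptide
# records, as a database browse page might do. Records are invented.
peptides = [
    {"id": "P1", "target": "fusion",      "length": 36, "cell_line": "MT-4"},
    {"id": "P2", "target": "integration", "length": 20, "cell_line": "HeLa"},
    {"id": "P3", "target": "fusion",      "length": 23, "cell_line": "MT-2"},
]

def filter_by(records, field, value):
    """Return the records whose given field matches the requested value."""
    return [r for r in records if r[field] == value]

fusion_inhibitors = filter_by(peptides, "target", "fusion")
```

Chaining such filters (by target, then cell line, then length range) reproduces the browse-search-sort-filter workflow the abstract describes.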
Insect barcode information system.
Pratheepa, Maria; Jalali, Sushil Kumar; Arokiaraj, Robinson Silvester; Venkatesan, Thiruvengadam; Nagesh, Mandadi; Panda, Madhusmita; Pattar, Sharath
2014-01-01
The Insect Barcode Information System, called Insect Barcode Informática (IBIn), is an online database resource developed by the National Bureau of Agriculturally Important Insects, Bangalore. This database provides acquisition, storage, analysis and publication of DNA barcode records of agriculturally important insects, for researchers in India and other countries. It bridges a gap in bioinformatics by integrating molecular, morphological and distribution details of agriculturally important insects. IBIn was developed with PHP/MySQL using the relational database management concept. It is based on a client-server architecture, where many clients can access data simultaneously. IBIn is freely available on-line and is user-friendly. IBIn allows registered users to input new information and to search and view information related to DNA barcodes of agriculturally important insects. This paper provides the current status of insect barcoding in India and a brief introduction to the IBIn database. http://www.nabg-nbaii.res.in/barcode.
76 FR 42677 - Notice of Intent To Seek Approval To Collect Information
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
... and maintains an on-line recipe database, the Recipe Finder, as a popular feature to the SNAP-Ed Connection Web site. The purpose of the Recipe Finder database is to provide SNAP-Ed providers with low-cost... inclusion in the database. SNAP-Ed staff and providers benefit from collecting and posting feedback on...
ERIC Educational Resources Information Center
Molina, Enzo
1986-01-01
Use of online bibliographic databases in Mexico is provided through Servicio de Consulta a Bancos de Informacion, a public service that provides information retrieval, document delivery, translation, technical support, and training services. Technical infrastructure is based on a public packet-switching network and institutional users may receive…
Analysis of high accuracy, quantitative proteomics data in the MaxQB database.
Schaab, Christoph; Geiger, Tamar; Stoehr, Gabriele; Cox, Juergen; Mann, Matthias
2012-03-01
MS-based proteomics generates rapidly increasing amounts of precise and quantitative information. Analysis of individual proteomic experiments has made great strides, but the crucial ability to compare and store information across different proteome measurements still presents many challenges. For example, it has been difficult to avoid contamination of databases with low quality peptide identifications, to control for the inflation in false positive identifications when combining data sets, and to integrate quantitative data. Although, for example, the contamination with low quality identifications has been addressed by joint analysis of deposited raw data in some public repositories, we reasoned that there should be a role for a database specifically designed for high resolution and quantitative data. Here we describe a novel database termed MaxQB that stores and displays collections of large proteomics projects and allows joint analysis and comparison. We demonstrate the analysis tools of MaxQB using proteome data of 11 different human cell lines and 28 mouse tissues. The database-wide false discovery rate is controlled by adjusting the project specific cutoff scores for the combined data sets. The 11 cell line proteomes together identify proteins expressed from more than half of all human genes. For each protein of interest, expression levels estimated by label-free quantification can be visualized across the cell lines. Similarly, the expression rank order and estimated amount of each protein within each proteome are plotted. We used MaxQB to calculate the signal reproducibility of the detected peptides for the same proteins across different proteomes. Spearman rank correlation between peptide intensity and detection probability of identified proteins was greater than 0.8 for 64% of the proteome, whereas a minority of proteins have negative correlation. 
This information can be used to pinpoint false protein identifications, independently of peptide database scores. The information contained in MaxQB, including high resolution fragment spectra, is accessible to the community via a user-friendly web interface at http://www.biochem.mpg.de/maxqb.
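The MaxQB abstract above reports Spearman rank correlations between peptide intensity and detection probability. For illustration only, here is a self-contained Spearman implementation (Pearson correlation computed on average ranks, handling ties); the sample data in the usage note are invented, not MaxQB values:

```python
def ranks(xs):
    # Assign 1-based ranks, averaging ranks over tied values.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman rho = Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A perfectly monotone intensity/probability relationship yields rho = 1; a reversed one yields rho = -1, matching the "negative correlation" minority mentioned above.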
NASA Technical Reports Server (NTRS)
Lepore, K. H.; Mackie, J.; Dyar, M. D.; Fassett, C. I.
2017-01-01
Information on emission lines for major and minor elements is readily available from the National Institute of Standards and Technology (NIST) as part of the Atomic Spectra Database. However, tabulated emission lines are scarce for some minor elements, and the wavelength ranges presented in the NIST database are limited to those included in existing studies. Previous work concerning minor element calibration curves measured using laser-induced breakdown spectroscopy found evidence of Zn emission lines that were not documented in the NIST database. In this study, rock powders were doped with Rb, Ce, La, Sr, Y, Zr, Pb and Se in concentrations ranging from 10 percent to 10 parts per million. The difference between normalized spectra collected on samples containing 10 percent dopant and those containing only 10 parts per million was used to identify all emission lines that can be detected using LIBS (Laser-Induced Breakdown Spectroscopy) in a ChemCam-like configuration at the Mount Holyoke College LIBS facility. These emission spectra provide evidence of many previously undocumented emission lines for the elements measured here.
MIPS: analysis and annotation of proteins from whole genomes in 2005
Mewes, H. W.; Frishman, D.; Mayer, K. F. X.; Münsterkötter, M.; Noubibou, O.; Pagel, P.; Rattei, T.; Oesterheld, M.; Ruepp, A.; Stümpflen, V.
2006-01-01
The Munich Information Center for Protein Sequences (MIPS at the GSF), Neuherberg, Germany, provides resources related to genome information. Manually curated databases for several reference organisms are maintained. Several of these databases are described elsewhere in this and other recent NAR database issues. In a complementary effort, a comprehensive set of >400 genomes automatically annotated with the PEDANT system is maintained. The main goal of our current work on creating and maintaining genome databases is to extend gene-centered information to information on interactions within a generic comprehensive framework. We have concentrated our efforts along three lines: (i) the development of suitable comprehensive data structures and database technology, communication and query tools to include a wide range of different types of information, enabling the representation of complex information such as functional modules or networks (the Genome Research Environment System); (ii) the development of databases covering computable information, such as the basic evolutionary relations among all genes, namely SIMAP, the sequence similarity matrix, and the CABiNet network analysis framework; and (iii) the compilation and manual annotation of information related to interactions, such as protein–protein interactions or other types of relations (e.g. MPCDB, MPPI, CYGD). All databases described and the detailed descriptions of our projects can be accessed through the MIPS WWW server (http://mips.gsf.de). PMID:16381839
ZeBase: an open-source relational database for zebrafish laboratories.
Hensley, Monica R; Hassenplug, Eric; McPhail, Rodney; Leung, Yuk Fai
2012-03-01
ZeBase is an open-source relational database for zebrafish inventory. It is designed for the recording of genetic, breeding, and survival information of fish lines maintained in a single- or multi-laboratory environment. Users can easily access ZeBase through standard web-browsers anywhere on a network. Convenient search and reporting functions are available to facilitate routine inventory work; such functions can also be automated by simple scripting. Optional barcode generation and scanning are also built-in for easy access to the information related to any fish. Further information of the database and an example implementation can be found at http://zebase.bio.purdue.edu.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Transportation Legislative Database (TLDB) is an on-line information service containing detailed information on legislation and regulations regarding the transportation of radioactive materials in the United States. The system is dedicated to serving the legislative and regulatory information needs of the US Department of Energy and other federal agencies; state, tribal, and local governments; the hazardous materials transportation industry; and interested members of the general public. In addition to the on-line information service, quarterly and annual Legal Developments Reports are produced using information from the TLDB. These reports summarize important changes in federal and state legislation, regulations, administrative agency rulings, and judicial decisions over the reporting period. Information on significant legal developments at the tribal and local levels is also included on an as-available basis. Battelle's Office of Transportation Systems and Planning (OTSP) will also perform customized searches of the TLDB and produce formatted printouts in response to specific information requests.
Akiyama, Kenji; Kurotani, Atsushi; Iida, Kei; Kuromori, Takashi; Shinozaki, Kazuo; Sakurai, Tetsuya
2014-01-01
Arabidopsis thaliana is one of the most popular experimental plants. However, only 40% of its genes have at least one experimental Gene Ontology (GO) annotation assigned. Systematic observation of mutant phenotypes is an important technique for elucidating gene functions. Indeed, several large-scale phenotypic analyses have been performed and have generated phenotypic data sets from many Arabidopsis mutant lines and overexpressing lines, which are freely available online. Since each Arabidopsis mutant line database describes phenotypes in its own vocabulary, differences in the structured term sets used by each database make it difficult to compare data sets and impossible to search across databases. Therefore, we obtained publicly available information for a total of 66,209 Arabidopsis mutant lines, including loss-of-function (RATM and TARAPPER) and gain-of-function (AtFOX and OsFOX) lines, and integrated the phenotype data by mapping the descriptions onto Plant Ontology (PO) and Phenotypic Quality Ontology (PATO) terms. This approach made it possible to manage the four different phenotype databases as one large data set. Here, we report a publicly accessible web-based database, the RIKEN Arabidopsis Genome Encyclopedia II (RARGE II; http://rarge-v2.psc.riken.jp/), in which all of the data described in this study are included. Using the database, we demonstrated consistency (in terms of protein function) with a previous study and identified the presumed function of an unknown gene. We provide examples of AT1G21600, which is a subunit in the plastid-encoded RNA polymerase complex, and AT5G56980, which is related to the jasmonic acid signaling pathway.
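The integration step described above boils down to mapping each database's free-text phenotype wording onto shared ontology terms, typically as entity-quality pairs (a PO entity plus a PATO quality). The two-entry lexicon below is invented for illustration; real mappings use curated PO/PATO identifiers:

```python
# Hypothetical phrase -> (entity, quality) lexicon; real systems map
# to curated PO/PATO term identifiers rather than plain labels.
LEXICON = {
    "dwarf": ("plant height", "decreased"),
    "pale green leaves": ("leaf", "pale green"),
}

def map_phenotype(description):
    """Return the (entity, quality) pair for a phenotype phrase, if known.

    Normalizing case and whitespace lets the same lexicon serve
    databases that phrase the same phenotype slightly differently.
    """
    return LEXICON.get(description.strip().lower())
```

Once every source database is mapped through one such lexicon, the four phenotype collections can be queried as a single data set.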
Regional early flood warning system: design and implementation
NASA Astrophysics Data System (ADS)
Chang, L. C.; Yang, S. N.; Kuo, C. L.; Wang, Y. F.
2017-12-01
This study proposes a prototype of a regional early flood inundation warning system for Tainan City, Taiwan. AI technology is used to forecast multi-step-ahead regional flood inundation maps during storm events. The computing time is only a few seconds, which makes real-time regional flood inundation forecasting possible. A database is built to organize data and information for building the real-time forecasting models, maintaining the relations of forecasted points, and displaying forecasted results, while real-time data acquisition is another key task, since the model requires immediate access to rain gauge information to provide forecast services. All database-related programs are constructed in Microsoft SQL Server, using Visual C# to extract real-time hydrological data, manage data, store the forecasted data and provide the information to the visual map-based display. The regional early flood inundation warning system uses up-to-date Web technologies, driven by the database and real-time data acquisition, to display the on-line forecast flood inundation depths in the study area. The friendly interface sequentially shows the inundated area on Google Maps together with the maximum inundation depth and its location, and provides a KMZ file download of the results, which can be viewed in Google Earth. The developed system can provide all the relevant information and on-line forecast results, helping city authorities make decisions during typhoon events and take actions to mitigate losses.
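The acquisition step above, immediate access to the latest rain gauge readings as model input, can be sketched schematically (in Python rather than the paper's C#/SQL Server; the class and method names are illustrative assumptions, not the system's actual API):

```python
from collections import deque

class GaugeBuffer:
    """Keeps the last `n` readings per rain gauge as forecast-model input."""

    def __init__(self, n=6):
        self.n = n
        self.readings = {}  # gauge id -> deque of (timestamp, rain in mm)

    def ingest(self, gauge_id, timestamp, rain_mm):
        # Called as each real-time reading arrives; old readings
        # fall off the fixed-length window automatically.
        buf = self.readings.setdefault(gauge_id, deque(maxlen=self.n))
        buf.append((timestamp, rain_mm))

    def latest_window(self, gauge_id):
        # The multi-step-ahead inundation model would consume this
        # most-recent window of rainfall values.
        return [mm for _, mm in self.readings.get(gauge_id, [])]
```

In the deployed system this role is played by SQL Server tables populated by the C# acquisition programs; the windowing logic is the same idea.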
47 CFR 69.306 - Central office equipment (COE).
Code of Federal Regulations, 2010 CFR
2010-10-01
... exchange carrier's signalling transfer point and the database shall be assigned to the Line Information Database subelement at § 69.120(a). All other COE Category 2 shall be assigned to the interexchange... requirement. Non-price cap local exchange carriers may use thirty percent of the interstate Local Switching...
Moving BASISplus and TECHLIBplus from VAX/VMS to UNIX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dominiak, R.
1993-12-31
BASISplus is used at the Laboratory by the Technical Information Services (TIS) Department, which is part of the Information and Publishing Division at Argonne. TIS operates the Argonne Information Management System (AIM). The AIM System consists of the ANL Libraries On-Line Database (a TECHLIBplus database), the Current Journals Database (IDI's current contents search), the ANL Publications Tracking Database (a TECHLIBplus database), the Powder Diffraction File Database, and several CD-ROM databases available through a Novell network. The AIM System is available from the desktop of ANL staff through modem and network connections, as well as from the 10 science libraries at Argonne. TIS has been a BASISplus and TECHLIBplus site from the start, and never migrated from BASIS K. A decision was made to migrate from the VAX/VMS platform to a UNIX platform. Migrating a product from one platform to another involves many decisions and considerations. These justifications, decisions, and considerations are explored in this report.
Pagani, Ioanna; Liolios, Konstantinos; Jansson, Jakob; Chen, I-Min A.; Smirnova, Tatyana; Nosrat, Bahador; Markowitz, Victor M.; Kyrpides, Nikos C.
2012-01-01
The Genomes OnLine Database (GOLD, http://www.genomesonline.org/) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2011, GOLD, now on version 4.0, contains information for 11 472 sequencing projects, of which 2907 have been completed and their sequence data has been deposited in a public repository. Out of these complete projects, 1918 are finished and 989 are permanent drafts. Moreover, GOLD contains information for 340 metagenome studies associated with 1927 metagenome samples. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about any (x) Sequence specification and beyond. PMID:22135293
A call for standardized naming and reporting of human ESC and iPSC lines.
Luong, Mai X; Auerbach, Jonathan; Crook, Jeremy M; Daheron, Laurence; Hei, Derek; Lomax, Geoffrey; Loring, Jeanne F; Ludwig, Tenneille; Schlaeger, Thorsten M; Smith, Kelly P; Stacey, Glyn; Xu, Ren-He; Zeng, Fanyi
2011-04-08
Human embryonic and induced pluripotent stem cell lines are being generated at a rapid pace and now number in the thousands. We propose a standard nomenclature and suggest the use of a centralized database for all cell line names and a minimum set of information for reporting new derivations. Copyright © 2011 Elsevier Inc. All rights reserved.
Co-PylotDB - A Python-Based Single-Window User Interface for Transmitting Information to a Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnette, Daniel W.
2012-01-05
Co-PylotDB, written completely in Python, provides a user interface (UI) with which to select user and data file(s), directories, and file content, and provide or capture various other information for sending data collected from running any computer program to a pre-formatted database table for persistent storage. The interface allows the user to select input, output, make, source, executable, and qsub files. It also provides fields for specifying the machine name on which the software was run, capturing compile and execution lines, and listing relevant user comments. Data automatically captured by Co-PylotDB and sent to the database are user, current directory, local hostname, current date, and time of send. The UI provides fields for logging into a local or remote database server, specifying a database and a table, and sending the information to the selected database table. If a server is not available, the UI provides for saving the command that would have saved the information to a database table for either later submission or for sending via email to a collaborator who has access to the desired database.
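The automatically captured fields listed above (user, current directory, local hostname, date and time of send) can all be gathered from the Python standard library. A minimal sketch; the function name and record layout are assumptions for illustration, not Co-PylotDB's actual code:

```python
import datetime
import getpass
import os
import socket

def capture_run_metadata(comments=""):
    """Collect the environment metadata that would accompany a run record."""
    try:
        user = getpass.getuser()
    except (KeyError, OSError):
        user = "unknown"  # e.g. no passwd entry in a minimal container
    return {
        "user": user,
        "cwd": os.getcwd(),
        "hostname": socket.gethostname(),
        "sent_at": datetime.datetime.now().isoformat(timespec="seconds"),
        "comments": comments,  # free-text field, as in the UI
    }
```

A record like this would then be written to the pre-formatted database table, or serialized to a file when no server is reachable.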
NASA Astrophysics Data System (ADS)
Jacquinet-Husson, N.; Lmd Team
The GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database system, in its former 1997 and 2001 versions, was updated in 2003 (GEISA-03). It has been developed by the ARA (Atmospheric Radiation Analysis) group at LMD (Laboratoire de Météorologie Dynamique, France) since 1974. This early effort implemented the so-called "line-by-line and layer-by-layer" approach to forward radiative transfer modelling. The GEISA 2003 system comprises three databases with their associated management software: (i) a database of spectroscopic parameters required to adequately describe the individual spectral lines belonging to 42 molecules (96 isotopic species), located in a spectral range from the microwave to the limit of the visible; the featured molecules are of interest in studies of the terrestrial as well as the other planetary atmospheres, especially those of the Giant Planets; (ii) a database of absorption cross-sections of molecules, such as chlorofluorocarbons, which exhibit unresolvable spectra; and (iii) a database of refractive indices of basic atmospheric aerosol components. Illustrations will be given of the GEISA-03 data archiving method, contents, management software and Web access facilities at http://ara.lmd.polytechnique.fr. The performance of instruments like AIRS (Atmospheric Infrared Sounder; http://www-airs.jpl.nasa.gov) in the USA and IASI (Infrared Atmospheric Sounding Interferometer; http://smsc.cnes.fr/IASI/index.htm) in Europe, which have a better vertical resolution and accuracy compared to the presently existing satellite infrared vertical sounders, is directly related to the quality of the spectroscopic parameters of the optically active gases, since these are essential inputs to the forward models used to simulate recorded radiance spectra.
For these upcoming atmospheric sounders, the so-called GEISA/IASI sub-database system has been elaborated, from GEISA. Its content, will be described, as well. This work is ongoing, with the purpose of assessing the IASI measurements capabilities and the spectroscopic information quality, within the ISSWG (IASI Sounding Science Working Group), in the frame of the CNES (Centre National d'Etudes Spatiales, France)/EUMETSAT (EUropean organization for the exploitation of METeorological SATellites) Polar System (EPS) project, by simulating high resolution radiances and/or using experimental data. EUMETSAT will implement GEISA/IASI into the EPS ground segment. The IASI soundings spectroscopic data archive requirements will be discussed in the context of comparisons between recorded and calculated experimental spectra, using the ARA/4A forward line-by-line radiative transfer modelling code in its latest version.
Collecting and Using Student Information for School Improvement.
ERIC Educational Resources Information Center
Riegel, N. Blyth
This paper suggests methods for collecting and using student information for school improvement by describing how the Richardson Independent School District (RISD), Texas, determines data for effective school management decisionmaking. RISD readily accesses student information via a networked database on line with the central office's IBM…
Annual patents review, January-December 2004
Roland Gleisner; Karen Scallon; Michael Fleischmann; Julie Blankenburg; Marguerite Sykes
2005-01-01
This review summarizes patents related to paper recycling that first appeared in patent databases during 2004. Two on-line databases, Claims/U.S. Patents Abstracts and Derwent World Patents Index, were searched for this review. This feature is intended to inform readers about recent developments in equipment design, chemicals, and process technologies for recycling...
The Design and Product of National 1:1000000 Cartographic Data of Topographic Map
NASA Astrophysics Data System (ADS)
Wang, Guizhi
2016-06-01
The National Administration of Surveying, Mapping and Geoinformation launched the project of national fundamental geographic information database dynamic updating in 2012. Within this project, the 1:50000 database is updated once a year, and the 1:250000 database is downsized and linkage-updated on that basis. In 2014, the latest achievements of the 1:250000 database were used to comprehensively update the 1:1000000 digital line graph database and, at the same time, to generate cartographic data of topographic maps and digital elevation model data. This article mainly introduces the national 1:1000000 cartographic data of topographic maps, including feature content, database structure, database-driven mapping technology, workflow and so on.
A Prototype System for Retrieval of Gene Functional Information
Folk, Lillian C.; Patrick, Timothy B.; Pattison, James S.; Wolfinger, Russell D.; Mitchell, Joyce A.
2003-01-01
Microarrays allow researchers to gather data about the expression patterns of thousands of genes simultaneously. Statistical analysis can reveal which genes show statistically significant results. Making biological sense of those results requires the retrieval of functional information about the genes thus identified, typically a manual gene-by-gene retrieval of information from various on-line databases. For experiments generating thousands of genes of interest, retrieval of functional information can become a significant bottleneck. To address this issue, we are currently developing a prototype system to automate the process of retrieval of functional information from multiple on-line sources. PMID:14728346
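The bottleneck described above, gene-by-gene manual lookups across several on-line databases, is removed by fanning each gene identifier out to every source automatically. A toy sketch of that retrieval pattern; the fetchers here are stand-ins, not the prototype's real data sources:

```python
def gather_annotations(gene_ids, fetchers):
    """Collect per-gene functional annotations from several sources.

    gene_ids: list of gene identifiers of interest.
    fetchers: mapping of source name -> callable(gene_id) -> annotation.
              In the real system each callable would query an on-line
              database; here they are arbitrary functions.
    """
    results = {}
    for gene in gene_ids:
        # One pass per gene replaces a manual lookup in each source.
        results[gene] = {src: fetch(gene) for src, fetch in fetchers.items()}
    return results
```

For thousands of genes of interest, batching the queries this way (plus caching and rate limiting against the remote sources) is what turns a manual bottleneck into an automated step.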
NASA Astrophysics Data System (ADS)
Geyer, Adelina; Marti, Joan
2015-04-01
Collapse calderas are one of the most important volcanic structures, not only because of their hazard implications, but also because of their high geothermal energy potential and their association with mineral deposits of economic interest. In 2008 we presented a new general worldwide Collapse Caldera DataBase (CCDB), in order to provide a useful and accessible tool for studying and understanding caldera collapse processes. The principal aim of the CCDB is to update the current field-based knowledge on calderas, merging together the existing databases and complementing them with new examples found in the bibliography, and leaving it open for the incorporation of new data from future studies. Currently, the database includes over 450 documented calderas around the world, aiming to be representative enough to promote further studies and analyses. We have performed a comprehensive compilation of published field studies of collapse calderas including more than 500 references, and their information has been summarized in a database linked to a Geographical Information System (GIS) application. Thus, it is possible to visualize the selected calderas on a world map and to filter them according to different features recorded in the database (e.g. age, structure). The information recorded in the CCDB can be grouped in seven main information classes: caldera features, properties of the caldera-forming deposits, magmatic system, geodynamic setting, pre-caldera volcanism, caldera-forming eruption sequence and post-caldera activity. Additionally, we have added two extra classes. The first records the references consulted for each caldera. The second allows users to introduce comments on the caldera sample, such as possible controversies concerning the caldera origin. During the last seven years, the database has been available on-line at http://www.gvb-csic.es/CCDB.htm upon prior registration.
This year, the CCDB webpage will be updated and improved so the database content can be queried on-line. This research was partially funded by the research fellowship RYC-2012-11024.
Fuller, Pamela L.; Cannister, Matthew; Johansen, Rebecca; Estes, L. Dwayne; Hamilton, Steven W.; Barrass, Andrew N.
2013-01-01
The Nonindigenous Aquatic Species (NAS) database (http://nas.er.usgs.gov) functions as a national repository and clearinghouse for occurrence data for introduced species within the United States. Included is locality information on over 1,100 species of vertebrates, invertebrates, and vascular plants introduced as early as 1850. Taxa include foreign (exotic) species and species native to North America that have been transported outside of their natural range. Locality data are obtained from published and unpublished literature, state, federal and local monitoring programs, museum accessions, on-line databases, websites, professional communications and on-line reporting forms. The NAS web site provides immediate access to new occurrence records through a real-time interface with the NAS database. Visitors to the web site are presented with a set of pre-defined queries that generate lists of species according to state or hydrologic basin of interest. Fact sheets, distribution maps, and information on new occurrences are updated as new records and information become available. The NAS database allows resource managers to learn of new introductions reported in their region or nearby regions, improving response time. Conversely, managers are encouraged to report their observations of new occurrences to the NAS database so information can be disseminated to other managers, researchers, and the public. In May 2004, the NAS database incorporated an Alert System to notify registered users of new introductions as part of a national early detection/rapid response system. Users can register to receive alerts based on geographic or taxonomic criteria. The NAS database was used to identify 23 fish species introduced into the lower Tennessee and Cumberland drainages. 
Most of these are sport fish stocked to support fisheries, but the list also includes accidental and illegal introductions such as Asian Carps, clupeids, various species popular in the aquarium trade, and Atlantic Needlefish (Strongylura marina) that was introduced via the newly-constructed Tennessee-Tombigbee Canal.
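The NAS Alert System described above notifies registered users of new occurrences matching their geographic or taxonomic criteria. A hypothetical sketch of that matching step; the field names and record layout are assumptions for illustration, not the NAS database's actual schema:

```python
def matching_subscribers(record, subscriptions):
    """Return subscriber ids whose criteria accept a new occurrence record.

    record: dict with 'species' and 'state' keys.
    subscriptions: list of (subscriber_id, criteria) pairs, where
    criteria is a dict; a key absent from criteria means "match any".
    """
    hits = []
    for sub_id, criteria in subscriptions:
        if "species" in criteria and criteria["species"] != record["species"]:
            continue  # taxonomic criterion not met
        if "state" in criteria and criteria["state"] != record["state"]:
            continue  # geographic criterion not met
        hits.append(sub_id)
    return hits
```

Running each newly reported occurrence through a filter like this is what lets alerts go out in near real time, supporting the early detection/rapid response goal.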
NASA Astrophysics Data System (ADS)
Noelle, A.; Hartmann, G. K.; Martin-Torres, F. J.
2010-05-01
The science-softCon "UV/Vis+ Spectra Data Base" is a non-profit project established in August 2000 and operated in accordance with the "Open Access" definitions and regulations of the CSPR Assessment Panel on Scientific Data and Information (International Council for Science, 2004, ICSU Report of the CSPR Assessment Panel on Data and Information, ISBN 0-930357-60-4; http://www.science-softcon.de/spectra/cspr.pdf). The on-line database currently contains about 5600 spectra (from low to very high resolution, at different temperatures and pressures) and datasheets (metadata) for about 850 substances. Additional spectra and datasheets will be added continuously. In addition, more than 250 links to freely available on-line original publications are provided. The interdisciplinary character of this photochemistry database fosters interaction between different research areas, making it an excellent tool for scientists working in fields such as atmospheric chemistry, astrophysics, agriculture, analytical chemistry, environmental chemistry, medicine, and remote sensing. To ensure the high quality standard of the fast-growing UV/Vis+ Spectra Data Base, an international Scientific Advisory Group (SAG) was established in 2004. Because maintenance of the database is essential, the support of the scientific community is crucial; we would therefore like to encourage all scientists to support this data compilation project through the provision of new or missing spectral data and information.
Digital geologic map database of the Nevada Test Site area, Nevada
Wahl, R.R.; Sawyer, D.A.; Minor, S.A.; Carr, M.D.; Cole, J.C.; Swadley, W.C.; Laczniak, R.J.; Warren, R.G.; Green, K.S.; Engle, C.M.
1997-01-01
Forty years of geologic investigations at the Nevada Test Site (NTS) have been digitized. These data include all geologic information that (1) has been collected and (2) can be represented on a map within the map borders at the map scale. The following coverages are included with this dataset:

Coverage   Type      Description
geolpoly   polygon   Geologic outcrops
geolflts   line      Fault traces
geolatts   point     Bedding attitudes, etc.
geolcald   line      Caldera boundaries
geollins   line      Interpreted lineaments
geolmeta   line      Metamorphic gradients

The above coverages are attributed with numeric values and interpreted information. The entity files documented below show the data associated with each coverage.
EPA'S REPORT ON THE ENVIRONMENT (2003 Draft)
The RoE presents information on environmental indicators in the areas of air, water, land, human health, and ecological condition. The report is available for download and the RoE information is searchable via an on-line database site: www.epa.gov/roe.
NASA Astrophysics Data System (ADS)
Michold, U.; Cummins, M.; Watson, J. M.; Holmquist, J.; Shobbrook, R.
Contents: library catalogs and holdings; indexing and abstract services; preprint services; electronic journals and newsletters; alerting services; commercial databases; informal networking; use of a thesaurus for on-line searching. An extensive list of access pointers for library catalogs and services, electronic newsletters, and publishers and bookshops is enclosed.
Liolios, Konstantinos; Chen, I-Min A.; Mavromatis, Konstantinos; Tavernarakis, Nektarios; Hugenholtz, Philip; Markowitz, Victor M.; Kyrpides, Nikos C.
2010-01-01
The Genomes On Line Database (GOLD) is a comprehensive resource for centralized monitoring of genome and metagenome projects worldwide. Both complete and ongoing projects, along with their associated metadata, can be accessed in GOLD through precomputed tables and a search page. As of September 2009, GOLD contains information for more than 5800 sequencing projects, of which 1100 have been completed and their sequence data deposited in a public repository. GOLD continues to expand, moving toward the goal of providing the most comprehensive repository of metadata information related to the projects and their organisms/environments in accordance with the Minimum Information about a (Meta)Genome Sequence (MIGS/MIMS) specification. GOLD is available at: http://www.genomesonline.org and has a mirror site at the Institute of Molecular Biology and Biotechnology, Crete, Greece, at: http://gold.imbb.forth.gr/ PMID:19914934
ERIC Educational Resources Information Center
RESNA: Association for the Advancement of Rehabilitation Technology, Washington, DC.
This resource directory provides a selective listing of electronic networks, online databases, and bulletin boards that highlight technology-related services and products. For each resource, the following information is provided: name, address, and telephone number; description; target audience; hardware/software needs to access the system;…
1982-12-01
management, plus the comments received from the faculty and staff. A major assumption in this thesis is that automated database techniques offer the...and major advantage of a DBMS is that of real-time, on-line data accessibility. Routine queries, reports and ad hoc queries can be performed...used or as applications programs evolve. Such changes can have a major impact on the organization and storage of data and ultimately on the response
Modern Hardware Technologies and Software Techniques for On-Line Database Storage and Access.
1985-12-01
of the information in a message narrative. This method employs artificial intelligence techniques to extract information. In simplest terms, an...distribution (tape replacement) systems, database distribution, on-line mass storage, videogame ROM (juke-box)...Media Cost $2-10/GB $10-50/GB...training of the great intelligence for the analyst would be required. If, on the other hand, a sentence analysis scheme simple enough for the low-level
NASA Astrophysics Data System (ADS)
Brissebrat, Guillaume; Fleury, Laurence; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Asencio, Nicole; Favot, Florence; Roussot, Odile
2013-04-01
The AMMA information system aims to expedite the communication of data and scientific results inside the AMMA community and beyond. It has already been adopted as the data management system by several projects and is meant to become a reference information system about the West Africa area for the whole scientific community. The AMMA database and the associated on-line tools have been developed and are managed by two French teams (IPSL Database Centre, Palaiseau and OMP Data Service, Toulouse). The complete system has been fully duplicated and is operated by AGRHYMET Regional Centre in Niamey, Niger. The AMMA database contains a wide variety of datasets:
- about 250 local observation datasets, which cover geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health...). They come from either operational networks or scientific experiments, and include historical data in West Africa from 1850;
- 1350 outputs of a socio-economic questionnaire;
- 60 operational satellite products and several research products;
- 10 output sets of meteorological and ocean operational models and 15 of research simulations.
Database users can access all the data using either the portal http://database.amma-international.org or http://amma.agrhymet.ne/amma-data. Different modules are available. The complete catalogue enables users to access metadata (i.e. information about the datasets) that are compliant with international standards (ISO19115, INSPIRE...). Registration pages enable users to read and sign the data and publication policy, and to apply for a user database account. The data access interface enables users to easily build a data extraction request by selecting various criteria like location, time, parameters...
At present, the AMMA database counts more than 740 registered users and processes about 80 data requests every month. In order to monitor day-to-day meteorological and environmental information over West Africa, some quick-look and report display websites have been developed. They met the operational needs of the observational teams during the AMMA 2006 (http://aoc.amma-international.org) and FENNEC 2011 (http://fenoc.sedoo.fr) campaigns. But they also enable scientific teams to share physical indices throughout the monsoon season (http://misva.sedoo.fr from 2011). A collaborative WIKINDX tool has been set on line in order to manage scientific publications and communications of interest to AMMA (http://biblio.amma-international.org). The bibliographic database now counts about 1200 references. It is the most exhaustive document collection about the African Monsoon available to all. Every scientist is invited to make use of the different AMMA on-line tools and data. Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
The GEISA Spectroscopic Database System in its latest Edition
NASA Astrophysics Data System (ADS)
Jacquinet-Husson, N.; Crépeau, L.; Capelle, V.; Scott, N. A.; Armante, R.; Chédin, A.
2009-04-01
GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Spectroscopic Information)[1] is a computer-accessible spectroscopic database system, designed to facilitate accurate forward planetary radiative transfer calculations using a line-by-line and layer-by-layer approach. It was initiated in 1976. Currently, GEISA is involved in activities related to the assessment of the capabilities of IASI (Infrared Atmospheric Sounding Interferometer, on board the METOP European satellite; http://earth-sciences.cnes.fr/IASI/) through the GEISA/IASI database[2] derived from GEISA. Since the Metop (http://www.eumetsat.int) launch (October 19th, 2006), GEISA/IASI has been the reference spectroscopic database for the validation of the level-1 IASI data, using the 4A radiative transfer model[3] (4A/LMD, http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and Noveltis with the support of CNES). GEISA is also involved in planetary research, e.g. modelling of Titan's atmosphere, in comparison with observations performed by Voyager (http://voyager.jpl.nasa.gov/), by ground-based telescopes, and by the instruments on board the Cassini-Huygens mission (http://www.esa.int/SPECIALS/Cassini-Huygens/index.html). The updated 2008 edition of GEISA (GEISA-08), a system comprising three independent sub-databases devoted, respectively, to line transition parameters, infrared and ultraviolet/visible absorption cross-sections, and microphysical and optical properties of atmospheric aerosols, will be described. Spectroscopic parameter quality requirements will be discussed in the context of comparisons between observed or simulated spectra of the Earth's and other planetary atmospheres. GEISA is implemented on the CNES/CNRS Ether Products and Services Centre web site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities.
More than 350 researchers are registered for on-line use of GEISA.
Refs:
1. Jacquinet-Husson, N., N. A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle, J. Orphal, A. Coustenis, C. Boonne, N. Poulet-Crovisier, et al. The GEISA spectroscopic database: Current and future archive for Earth and planetary atmosphere studies. JQSRT, 109, 1043-1059, 2008.
2. Jacquinet-Husson, N., N. A. Scott, A. Chédin, K. Garceran, R. Armante, et al. The 2003 edition of the GEISA/IASI spectroscopic database. JQSRT, 95, 429-467, 2005.
3. Scott, N. A. and A. Chédin, 1981: A fast line-by-line method for atmospheric absorption computations: The Automatized Atmospheric Absorption Atlas. J. Appl. Meteor., 20, 556-564.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
FEDIX is an on-line information service that links the higher education community and the federal government to facilitate research, education, and services. The system provides accurate, timely federal agency information to colleges, universities, and other research organizations. There are no registration fees or access charges. Participating agencies include DOE, FAA, NASA, ONR, AFOSR, NSF, NSA, DOEd, HUD, and AID. This guide is intended to help users access and utilize FEDIX.
Leveraging Information Technology. Track VII: Outstanding Applications.
ERIC Educational Resources Information Center
CAUSE, Boulder, CO.
Eight papers from the 1987 CAUSE conference's Track VII, Outstanding Applications, are presented. They include: "Image Databases in the University" (Reid Kaplan and Gordon Mathieson); "Using Information Technology for Travel Management at the University of Michigan" (Robert E. Russell and John C. Hufziger); "On-Line Access…
How Community Colleges Can Capitalize on Changes in Information Services.
ERIC Educational Resources Information Center
Nourse, Jimmie Anne; Widman, Rudy
1991-01-01
Urges community college librarians to become leaders in library instruction by developing aggressive teaching programs using high-technology information resources, such as compact disc read-only-memory (CD-ROM), telecommunications, and on-line databases. Discusses training, hardware, software, and funding issues. (DMM)
Initiation of a Database of CEUS Ground Motions for NGA East
NASA Astrophysics Data System (ADS)
Cramer, C. H.
2007-12-01
The Nuclear Regulatory Commission has funded the first stage of development of a database of central and eastern US (CEUS) broadband and accelerograph records, along the lines of the existing Next Generation Attenuation (NGA) database for active tectonic areas. This database will form the foundation of an NGA East project for the development of CEUS ground-motion prediction equations that include the effects of soils. This initial effort covers the development of a database design and the beginning of data collection to populate the database. It also includes some processing for important source parameters (Brune corner frequency and stress drop) and site parameters (kappa, Vs30). Besides collecting appropriate earthquake recordings and information, existing information about site conditions at recording sites will also be gathered, including geology and geotechnical information. The long-range goal of the database development is to complete the database and make it available in 2010. The database design is centered on CEUS ground motion information needs but is built on the Pacific Earthquake Engineering Research Center's (PEER) NGA experience. Documentation from the PEER NGA website was reviewed and relevant fields incorporated into the CEUS database design. CEUS database tables include ones for earthquake, station, component, record, and references. As was done for NGA, a CEUS ground- motion flat file of key information will be extracted from the CEUS database for use in attenuation relation development. A short report on the CEUS database and several initial design-definition files are available at https://umdrive.memphis.edu:443/xythoswfs/webui/_xy-7843974_docstore1. Comments and suggestions on the database design can be sent to the author. More details will be presented in a poster at the meeting.
GABI-Kat SimpleSearch: new features of the Arabidopsis thaliana T-DNA mutant database.
Kleinboelting, Nils; Huep, Gunnar; Kloetgen, Andreas; Viehoever, Prisca; Weisshaar, Bernd
2012-01-01
T-DNA insertion mutants are very valuable for reverse genetics in Arabidopsis thaliana. Several projects have generated large sequence-indexed collections of T-DNA insertion lines, of which GABI-Kat is the second largest resource worldwide. User access to the collection and its Flanking Sequence Tags (FSTs) is provided by the front end SimpleSearch (http://www.GABI-Kat.de). Several significant improvements have been implemented recently. The database now relies on the TAIRv10 genome sequence and annotation dataset. All FSTs have been newly mapped using an optimized procedure that leads to improved accuracy of insertion site predictions. A fraction of the collection with weak FST yield was re-analysed by generating new FSTs. Along with newly found predictions for older sequences, about 20,000 new FSTs were included in the database. Information is included about groups of FSTs that point to the same insertion site, which is predicted for several lines but is real in only a single line, and many problematic FST-to-line links have been corrected using new wet-lab data. SimpleSearch currently contains data from ~71,000 lines with predicted insertions covering 62.5% of the 27,206 nuclear protein coding genes, and offers insertion allele-specific data from 9545 confirmed lines that are available from the Nottingham Arabidopsis Stock Centre.
Enhancements to the NASA Astrophysics Science Information and Abstract Service
NASA Astrophysics Data System (ADS)
Kurtz, M. J.; Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Murray, S. S.
1995-05-01
The NASA Astrophysics Data System Astrophysics Science Information and Abstract Service, the extension of the ADS Abstract Service, continues to expand rapidly in both use and capabilities. Each month the service is used by about 4,000 different people, and returns about 1,000,000 pieces of bibliographic information. Among the recent additions to the system are: 1. Whole Text Access. In addition to the ApJ Letters, we now have the whole text of the ApJ on-line; soon we will have AJ and Rev. Mexicana. Discussions with other publishers are in progress. 2. Space Instrumentation Database. We now provide a second abstract service, covering papers related to space instruments. This is larger than the astronomy and astrophysics database in terms of total abstracts. 3. Reference Books and Historical Journals. We have begun putting the SAO Annals and the HCO Annals on-line. We have put the Handbook of Space Astronomy and Astrophysics by M. V. Zombeck (Cambridge U.P.) on-line. 4. Author Abstracts. We can now include original abstracts in addition to those we get from the NASA STI Abstracts Database. We have included abstracts for A&A in collaboration with the CDS in Strasbourg, and are collaborating with the AAS and the ASP on others. We invite publishers and editors of journals and conference proceedings to include their original abstracts in our service; send inquiries via e-mail to ads@cfa.harvard.edu. 5. Author Notes. We now accept notes and comments from authors of articles in our database. These are arbitrary html files and may contain pointers to other WWW documents; they are listed along with the abstracts, whole text, and data available in the index listing for every reference. The ASIAS is available at: http://adswww.harvard.edu/
NASA Astrophysics Data System (ADS)
Endres, Christian P.; Schlemmer, Stephan; Schilke, Peter; Stutzki, Jürgen; Müller, Holger S. P.
2016-09-01
The Cologne Database for Molecular Spectroscopy, CDMS, was founded in 1998 to provide in its catalog section line lists of mostly molecular species which are or may be observed in various astronomical sources (usually) by radio astronomical means. The line lists contain transition frequencies with qualified accuracies, intensities, and quantum numbers, as well as further auxiliary information. They have been generated from critically evaluated experimental line lists, mostly from laboratory experiments, employing established Hamiltonian models. Separate entries exist for different isotopic species and usually also for different vibrational states. As of December 2015, the number of entries is 792. They are available online as ASCII tables with additional files documenting information on the entries. The Virtual Atomic and Molecular Data Centre, VAMDC, was founded more than 5 years ago as a common platform for atomic and molecular data. This platform facilitates exchange not only between spectroscopic databases related to astrophysics or astrochemistry, but also with collisional and kinetic databases. A dedicated infrastructure was developed to provide a common data format across the various databases, enabling queries to a large variety of databases on atomic and molecular data at once. For CDMS, the incorporation into VAMDC was combined with several modifications to the generation of CDMS catalog entries. Here we introduce the related changes to the data structure and the data content of the CDMS. The new data scheme allows us not only to incorporate all previous data entries, but also to include entries based on new theoretical descriptions. Moreover, the CDMS entries have been transferred into a MySQL database format.
These developments within the VAMDC framework have in part been driven by the needs of the astronomical community to be able to deal efficiently with large data sets obtained with the Herschel Space Telescope or, more recently, with the Atacama Large Millimeter Array.
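The move from flat catalog entries to a relational database, as described above, can be illustrated with a minimal sketch. The table layout and column names below are hypothetical (the real CDMS/VAMDC schema differs), but they capture the idea of one row per transition carrying a frequency, its qualified accuracy, an intensity, and quantum-number labels; sqlite3 stands in for MySQL to keep the sketch self-contained:

```python
import sqlite3

# Hypothetical sketch of a molecular line-catalog table; the actual CDMS
# schema differs.  One row per transition: frequency (MHz), its uncertainty,
# a log10 intensity, and upper/lower state quantum-number labels.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transitions (
        species   TEXT,   -- isotopic species / vibrational state tag
        freq_mhz  REAL,   -- transition frequency
        unc_mhz   REAL,   -- qualified accuracy of the frequency
        lg_int    REAL,   -- log10 intensity
        qn_upper  TEXT,   -- upper-state quantum numbers
        qn_lower  TEXT    -- lower-state quantum numbers
    )""")
rows = [
    ("CO, v=0", 115271.2018, 0.0005, -5.0, "J=1", "J=0"),
    ("CO, v=0", 230538.0000, 0.0005, -4.1, "J=2", "J=1"),
]
conn.executemany("INSERT INTO transitions VALUES (?,?,?,?,?,?)", rows)

# A typical query: all lines of one species inside a receiver band.
cur = conn.execute(
    "SELECT freq_mhz FROM transitions "
    "WHERE species = ? AND freq_mhz BETWEEN ? AND ?",
    ("CO, v=0", 100000.0, 200000.0))
print([f for (f,) in cur.fetchall()])
```

A query like this is the relational analogue of scanning an ASCII catalog file for lines in a frequency window, which is what makes the SQL format convenient for large data sets.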
NASA Technical Reports Server (NTRS)
1990-01-01
In 1981 Wayne Erickson founded Microrim, Inc, a company originally focused on marketing a microcomputer version of RIM (Relational Information Manager). Dennis Comfort joined the firm and is now vice president, development. The team developed an advanced spinoff from the NASA system they had originally created, a microcomputer database management system known as R:BASE 4000. Microrim added many enhancements and developed a series of R:BASE products for various environments. R:BASE is now the second largest selling line of microcomputer database management software in the world.
Staradmin -- Starlink User Database Maintainer
NASA Astrophysics Data System (ADS)
Fish, Adrian
The subject of this SSN is a utility called STARADMIN. This utility allows the system administrator to build and maintain a Starlink User Database (UDB). The principal source of information for each user is a text file, named after their username. The content of each file is a list consisting of one keyword followed by the relevant user data per line. These user database files reside in a single directory. The STARADMIN program is used to manipulate these user data files and automatically generate user summary lists.
Starbase Data Tables: An ASCII Relational Database for Unix
NASA Astrophysics Data System (ADS)
Roll, John
2011-11-01
Database management is an increasingly important part of astronomical data analysis. Astronomers need easy and convenient ways of storing, editing, filtering, and retrieving data about data. Commercial databases do not provide good solutions for many of the everyday and informal types of database access astronomers need. The Starbase database system with simple data file formatting rules and command line data operators has been created to answer this need. The system includes a complete set of relational and set operators, fast search/index and sorting operators, and many formatting and I/O operators. Special features are included to enhance the usefulness of the database when manipulating astronomical data. The software runs under UNIX, MSDOS and IRAF.
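The "simple data file formatting rules" mentioned above can be sketched as follows. The example assumes a tab-separated ASCII table with a header row followed by a row of dashes, in the general spirit of such ASCII relational tables; the column names and data are invented, and no actual Starbase operator names are reproduced:

```python
# Sketch of a "select" over an ASCII relational table of the
# header-plus-dashes, tab-separated kind.  Data and columns are invented.
table = """Name\tRA\tDec\tVmag
----\t--\t---\t----
M31\t0.712\t41.269\t3.4
M42\t5.588\t-5.391\t4.0
M45\t3.790\t24.117\t1.6
"""

lines = table.strip().split("\n")
header = lines[0].split("\t")
# Skip the dash row; parse each remaining line into a column->value dict.
rows = [dict(zip(header, ln.split("\t"))) for ln in lines[2:]]

# Relational "select": keep rows brighter than Vmag 3.5.
bright = [r["Name"] for r in rows if float(r["Vmag"]) < 3.5]
print(bright)  # ['M31', 'M45']
```

Because the format is plain text with one record per line, such filters compose naturally on a UNIX command line, which is the design point the abstract highlights.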
Establishment of Low Energy Building materials and Equipment Database Based on Property Information
NASA Astrophysics Data System (ADS)
Kim, Yumin; Shin, Hyery; Lee, Seung-eon
2018-03-01
The purpose of this study is to provide a reliable materials information portal service through the establishment of public big data, collecting and integrating scattered low energy building materials and equipment data. Few existing low energy building materials databases in Korea have provided material properties as factors influencing material pricing. The framework of the database was defined with reference to the Korea On-line E-procurement system. More than 45,000 data records were gathered according to the specification of entities, and from the gathered data, price prediction models for chillers were suggested. To improve the usability of the prediction models, detailed properties should be analysed for each item.
Distributed On-line Monitoring System Based on Modem and Public Phone Net
NASA Astrophysics Data System (ADS)
Chen, Dandan; Zhang, Qiushi; Li, Guiru
In order to solve the monitoring problem of urban sewage disposal, a distributed on-line monitoring system is proposed. By introducing dial-up communication technology based on a modem, the serial communication program can rationally solve the information transmission problem between the master station and the slave stations. The serial communication program is realized with the MSComm control of C++ Builder 6.0. The software includes a real-time data operation part and a history data handling part, using Microsoft SQL Server 2000 for the database and C++ Builder 6.0 for the user interface. The monitoring center displays a user interface with alarm information for over-standard data and real-time curves. Practical application shows that the system has successfully accomplished real-time data acquisition from the data gathering stations and stored the data in the terminal database.
Chesapeake Bay Program Water Quality Database
The Chesapeake Information Management System (CIMS), designed in 1996, is an integrated, accessible information management system for the Chesapeake Bay Region. CIMS is an organized, distributed library of information and software tools designed to increase basin-wide public access to Chesapeake Bay information. The information delivered by CIMS includes technical and public information, educational material, environmental indicators, policy documents, and scientific data. Through the use of relational databases, web-based programming, and web-based GIS, a large number of Internet resources have been established. These resources include multiple distributed on-line databases, on-demand graphing and mapping of environmental data, and geographic searching tools for environmental information. Also available are baseline monitoring data, summarized data, and environmental indicators that document ecosystem status and trends and confirm linkages between water quality, habitat quality and abundance, and the distribution and integrity of biological populations. One of the major features of the CIMS network is the Chesapeake Bay Program's Data Hub, providing users access to a suite of long-term water quality and living resources databases. Chesapeake Bay mainstem and tidal tributary water quality, benthic macroinvertebrate, toxics, plankton, and fluorescence data can be obtained for a network of over 800 monitoring stations.
An Empirical Spectroscopic Database for Acetylene in the Regions of 5850-9415 CM^{-1}
NASA Astrophysics Data System (ADS)
Campargue, Alain; Lyulin, Oleg
2017-06-01
Six studies have recently been devoted to a systematic analysis of the high-resolution near infrared absorption spectrum of acetylene recorded by Cavity Ring Down Spectroscopy (CRDS) in Grenoble and by Fourier-transform spectroscopy (FTS) in Brussels and Hefei. On the basis of these works, in the present contribution, we construct an empirical database for acetylene in the 5850-9415 cm^{-1} region, excluding the 6341-7000 cm^{-1} interval corresponding to the very strong ν1 + ν3 manifold. The database gathers and extends information included in our CRDS and FTS studies. In particular, the intensities of about 1700 lines measured by CRDS in the 7244-7920 cm^{-1} region are reported for the first time, together with those of several bands of ^{12}C^{13}CH_{2} present in natural isotopic abundance in the acetylene sample. The Herman-Wallis coefficients of most of the bands are derived from a fit of the measured intensity values. A recommended line list is provided, with positions calculated using empirical spectroscopic parameters of the lower and upper vibrational energy levels and intensities calculated using the derived Herman-Wallis coefficients. This approach allows us to complete the experimental list by adding missing lines and improving poorly determined positions and intensities. As a result, the constructed line list includes a total of 10973 lines belonging to 146 bands of ^{12}C_{2}H_{2} and 29 bands of ^{12}C^{13}CH_{2}. For comparison, the HITRAN2012 database in the same region includes 869 lines of 14 bands, all belonging to ^{12}C_{2}H_{2}. Our weakest lines have an intensity on the order of 10^{-29} cm/molecule, about three orders of magnitude smaller than the HITRAN intensity cut-off. Line profile parameters are added to the line list, which is provided in HITRAN format. The comparison to the HITRAN2012 line list and to results obtained using the global effective operator approach is discussed in terms of completeness and accuracy.
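The intensity-modelling step mentioned above, fitting Herman-Wallis coefficients and then recomputing line intensities from them, can be sketched as follows. A common low-order form multiplies the unperturbed line intensity by the factor F(m) = (1 + A1*m + A2*m^2)^2, with m = -J'' for P-branch lines and m = J'' + 1 for R-branch lines; the coefficient values below are invented for illustration and are not the fitted acetylene values:

```python
# Sketch of applying a low-order Herman-Wallis correction to line intensities.
# F(m) = (1 + A1*m + A2*m**2)**2, with m = -J'' (P branch) or J''+1 (R branch).
# A1 and A2 here are illustration values, not fitted C2H2 results.

def herman_wallis_factor(m, a1, a2):
    """Low-order Herman-Wallis factor for a vibration-rotation line."""
    return (1.0 + a1 * m + a2 * m**2) ** 2

def line_intensity(s0, j_lower, branch, a1=0.01, a2=0.0001):
    """Corrected intensity: unperturbed value s0 times F(m)."""
    m = -j_lower if branch == "P" else j_lower + 1
    return s0 * herman_wallis_factor(m, a1, a2)

# With a positive A1, an R(5) line is slightly strengthened and a
# P(5) line slightly weakened relative to the rigid-rotor value.
print(line_intensity(1.0e-23, 5, "R"))
print(line_intensity(1.0e-23, 5, "P"))
```

In a fit, A1 and A2 (and the band strength) are adjusted to the measured intensities; the recommended list is then rebuilt from the fitted coefficients, which is how missing or poorly measured lines acquire calculated intensities.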
Digital release of the Alaska Quaternary fault and fold database
NASA Astrophysics Data System (ADS)
Koehler, R. D.; Farrell, R.; Burns, P.; Combellick, R. A.; Weakland, J. R.
2011-12-01
The Alaska Division of Geological & Geophysical Surveys (DGGS) has designed a Quaternary fault and fold database for Alaska in conformance with standards defined by the U.S. Geological Survey for the National Quaternary fault and fold database. Alaska is the most seismically active region of the United States; however, little information exists on the location, style of deformation, and slip rates of Quaternary faults. Thus, to provide an accurate, user-friendly, reference-based fault inventory to the public, we are producing a digital GIS shapefile of Quaternary fault traces and compiling summary information on each fault. Here, we present relevant information pertaining to the digital GIS shapefile and the online access and availability of the Alaska database. This database will be useful for engineering geologic studies; geologic, geodetic, and seismic research; and policy planning. The data will also contribute to the fault source database being constructed by the Global Earthquake Model (GEM) Faulted Earth project, which is developing tools to better assess earthquake risk. We derived the initial list of Quaternary active structures from The Neotectonic Map of Alaska (Plafker et al., 1994) and supplemented it with more recent data where available. Due to the limited level of knowledge of Quaternary faults in Alaska, pre-Quaternary fault traces from the Plafker map are shown as a layer in our digital database, so users may view a more accurate distribution of mapped faults and to suggest the possibility that some older traces may be active yet unstudied. The database will be updated as new information is developed. We selected each fault by reviewing the literature and georegistered the faults from 1:250,000-scale paper maps contained in 1970s-vintage and earlier bedrock maps. However, paper map scales range from 1:20,000 to 1:500,000.
Fault parameters in our GIS fault attribute tables include fault name, age, slip rate, slip sense, dip direction, fault line type (i.e., well constrained, moderately constrained, or inferred), and mapped scale. Each fault is assigned a three-integer CODE, based upon age, slip rate, and how well the fault is located. This CODE dictates the line type for the GIS files. To host the database, we are developing an interactive web-map application with ArcGIS for Server and the ArcGIS API for JavaScript from Environmental Systems Research Institute, Inc. (Esri). The web-map application will present the database through a visible scale range, with each fault displayed at the resolution of the original map. Application functionality includes: search by name or location, identification of a fault by manual selection, and choice of base map. Base map options include topographic, satellite imagery, and digital elevation maps available from ArcGIS Online. We anticipate that the database will be publicly accessible from a portal embedded in the DGGS website by the end of 2011.
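A three-integer CODE of the kind described above, one digit each for age class, slip-rate class, and location quality, can be sketched as a simple encoding. The class boundaries and digit meanings below are hypothetical, not the actual DGGS definitions:

```python
# Hypothetical sketch of a three-integer fault CODE: one digit each for age
# class, slip-rate class, and location quality.  The digit meanings are
# invented for illustration; the real DGGS scheme differs.

def encode_fault(age_class, slip_class, location_class):
    """Pack three one-digit classes into a single three-integer code."""
    for v in (age_class, slip_class, location_class):
        assert 0 <= v <= 9, "each class must be a single digit"
    return age_class * 100 + slip_class * 10 + location_class

def decode_fault(code):
    """Recover the (age, slip-rate, location) classes from a code."""
    return code // 100, (code // 10) % 10, code % 10

code = encode_fault(1, 2, 3)   # e.g. youngest age class, moderate slip,
print(code)                    # well-located trace -> 123
print(decode_fault(code))      # (1, 2, 3)
```

A compact code like this is convenient in a GIS attribute table because a single field can drive the line-type symbology, exactly the role the abstract assigns to the CODE.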
Sagace: A web-based search engine for biomedical databases in Japan
2012-01-01
Background In the big data era, biomedical research continues to generate a large amount of data, and the generated information is often stored in a database and made publicly available. Although combining data from multiple databases should accelerate further studies, the current number of life sciences databases is too large for users to grasp the features and contents of each one. Findings We have developed Sagace, a web-based search engine that enables users to retrieve information from a range of biological databases (such as gene expression profiles and proteomics data) and biological resource banks (such as mouse models of disease and cell lines). With Sagace, users can search more than 300 databases in Japan. Sagace offers features tailored to biomedical research, including manually tuned ranking, a faceted navigation to refine search results, and rich snippets constructed with retrieved metadata for each database entry. Conclusions Sagace will be valuable for experts who are involved in biomedical research and drug development in both academia and industry. Sagace is freely available at http://sagace.nibio.go.jp/en/. PMID:23110816
Space Station Freedom environmental database system (FEDS) for MSFC testing
NASA Technical Reports Server (NTRS)
Story, Gail S.; Williams, Wendy; Chiu, Charles
1991-01-01
The Water Recovery Test (WRT) at Marshall Space Flight Center (MSFC) is the first demonstration of integrated water recovery systems for potable and hygiene water reuse as envisioned for Space Station Freedom (SSF). In order to satisfy the safety and health requirements placed on the SSF program and facilitate test data assessment, an extensive laboratory analysis database was established to provide a central archive and data retrieval function. The database is required to store analysis results for physical, chemical, and microbial parameters measured from water, air and surface samples collected at various locations throughout the test facility. The Oracle Relational Database Management System (RDBMS) was utilized to implement a secured on-line information system with the ECLSS WRT program as the foundation for this system. The database is supported on a VAX/VMS 8810 series mainframe and is accessible from the Marshall Information Network System (MINS). This paper summarizes the database requirements, system design, interfaces, and future enhancements.
PASCAL Data Base: File Description and On Line Access on ESA/IRS.
ERIC Educational Resources Information Center
Pelissier, Denise
This report describes the PASCAL database, a machine readable version of the French abstract journal Bulletin Signaletique, which allows use of the file for (1) batch and online retrieval of information, (2) selective dissemination of information, and (3) publishing of the 50 sections of Bulletin Signaletique. The system, which covers nine…
The Joy of Telecomputing: Everything You Need to Know about Going On-Line at Home.
ERIC Educational Resources Information Center
Pearlman, Dara
1984-01-01
Discusses advantages and pleasures of utilizing a personal computer at home to receive electronic mail; participate in online conferences, software exchanges, and game networks; do shopping and banking; and have access to databases storing volumes of information. Information sources for the services mentioned are included. (MBR)
TNAURice: Database on rice varieties released from Tamil Nadu Agricultural University
Ramalingam, Jegadeesan; Arul, Loganathan; Sathishkumar, Natarajan; Vignesh, Dhandapani; Thiyagarajan, Katiannan; Samiyappan, Ramasamy
2010-01-01
We developed TNAURice, a database of the rice varieties released from a public institution, Tamil Nadu Agricultural University (TNAU), Coimbatore, India. Backed by MS-SQL, with ASP.NET at the front end, the database provides information on both quantitative and qualitative descriptors of the rice varieties, including their parental details. A user-friendly search utility allows the database to be searched by varietal descriptors, and the entire contents are navigable as well. The database comes in handy for plant breeders involved in varietal improvement programs when deciding on the choice of parental lines. TNAURice is available for public access at http://www.btistnau.org/germdefault.aspx. PMID:21364829
NASA Technical Reports Server (NTRS)
Ho, C. Y.; Li, H. H.
1989-01-01
A computerized comprehensive numerical database system on the mechanical, thermophysical, electronic, electrical, magnetic, optical, and other properties of various types of technologically important materials such as metals, alloys, composites, dielectrics, polymers, and ceramics has been established and is operational at the Center for Information and Numerical Data Analysis and Synthesis (CINDAS) of Purdue University. This is an on-line, interactive, menu-driven, user-friendly database system. Users can easily search, retrieve, and manipulate data from the system without learning a special query language, special commands, or standardized names of materials, properties, variables, etc. It enables both the direct mode of search/retrieval of data for specified materials, properties, independent variables, etc., and the inverted mode of search/retrieval of candidate materials that meet a set of specified requirements (i.e., computer-aided materials selection). It also enables tabular and graphical displays and on-line data manipulations such as unit conversion, variable transformation, and statistical analysis of the retrieved data. The development, content, and accessibility of the database system are presented and discussed.
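The two retrieval modes described above lend themselves to a compact illustration. The sketch below is a hypothetical, minimal model of the direct and inverted search modes; the material names, property fields, and values are invented and are not the actual CINDAS schema or data.

```python
# Hypothetical sketch of CINDAS-style retrieval modes. The material
# names, property fields, and values below are invented for the
# illustration; they are not the actual CINDAS schema or data.

materials = {
    "alloy-A":   {"density": 2.70, "thermal_conductivity": 167.0},
    "ceramic-B": {"density": 3.95, "thermal_conductivity": 30.0},
    "polymer-C": {"density": 1.10, "thermal_conductivity": 0.25},
}

def select_candidates(materials, requirements):
    """Inverted mode: return materials whose properties fall inside
    every (low, high) range in `requirements`."""
    hits = []
    for name, props in materials.items():
        if all(prop in props and lo <= props[prop] <= hi
               for prop, (lo, hi) in requirements.items()):
            hits.append(name)
    return hits

# Direct mode: look up a property of a named material.
density_of_A = materials["alloy-A"]["density"]
# Inverted mode: computer-aided materials selection.
light_materials = select_candidates(materials, {"density": (1.0, 3.0)})
```

The inverted mode is simply a filter over property ranges, which is why such systems can answer "which materials satisfy these requirements?" without any special query language.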
NASA Astrophysics Data System (ADS)
Gasser, Deta; Viola, Giulio; Bingen, Bernard
2016-04-01
Since 2010, the Geological Survey of Norway has been implementing and continuously developing a digital workflow for geological bedrock mapping in Norway, from fieldwork to final product. Our workflow is based on the ESRI ArcGIS platform, and we use rugged Windows computers in the field. Three different hardware solutions have been tested over the past 5 years (2010-2015): (1) Panasonic Toughbook CF-19 (2.3 kg), (2) Panasonic Toughbook CF-H2 Field (1.6 kg) and (3) Motion MC F5t tablet (1.5 kg). For collection of point observations in the field we mainly use the SIGMA Mobile application for ESRI ArcGIS developed by the British Geological Survey, which allows mappers to store georeferenced comments, structural measurements, sample information, photographs, sketches, log information, etc. in a Microsoft Access database. The application is freely downloadable from the BGS website. For line and polygon work we use our in-house database, which is currently under revision. Our line database consists of three feature classes: (1) bedrock boundaries, (2) bedrock lineaments, and (3) bedrock lines, with each feature class having up to 24 different attribute fields. Our polygon database consists of one feature class with 38 attribute fields storing information on lithology, stratigraphic order, age, metamorphic grade and tectonic subdivision. The polygon and line databases are coupled via topology in ESRI ArcGIS, which allows us to edit them simultaneously. This approach has been applied in two large-scale 1:50 000 bedrock mapping projects, one in the Kongsberg domain of the Sveconorwegian orogen, and the other in the greater Trondheim area (Orkanger) in the Caledonian belt. The mapping projects combined collection of high-resolution geophysical data, digital acquisition of field data, and collection of geochronological, geochemical and petrological data. 
During the Kongsberg project, some 25,000 field observation points were collected by eight geologists. For the Orkanger project, some 2,100 field observation points were collected by three geologists. Several advantages of the digital approach became clear during these projects: (1) the systematic collection of geological field data in a common format allows easy access and exchange of data among different geologists, (2) background information such as geophysics and DEMs is easier to access in the field, and (3) the workflow from field data collection to final map product is faster. Obvious disadvantages include: (1) heavy(ish) and expensive hardware, (2) battery life and other technical issues in the field, (3) the need for central in-house storage of field observation points (large amounts of data!), and (4) the need for acceptance of, and training in, a common workflow by all involved geologists.
NPInter v3.0: an upgraded database of noncoding RNA-associated interactions
Hao, Yajing; Wu, Wei; Li, Hui; Yuan, Jiao; Luo, Jianjun; Zhao, Yi; Chen, Runsheng
2016-01-01
Although a large number of noncoding RNAs (ncRNAs) have been identified, their functions remain unclear. To give researchers a better understanding of ncRNA functions, we updated the NPInter database to version 3.0, which contains experimentally verified interactions between ncRNAs (excluding tRNAs and rRNAs), especially long noncoding RNAs (lncRNAs), and other biomolecules (proteins, mRNAs, miRNAs and genomic DNAs). In NPInter v3.0, interactions pertaining to ncRNAs are not only manually curated from the scientific literature but also curated from high-throughput technologies. In addition, we curated lncRNA–miRNA interactions from in silico predictions supported by AGO CLIP-seq data. Compared with NPInter v2.0, the interactions are more informative (with additional information on tissues or cell lines, binding sites, conservation, co-expression values and other features) and more organized (with the data sets divided by data source, tissue or cell line, experiment and other criteria). NPInter v3.0 expands the data set to 491,416 interactions in 188 tissues (or cell lines) from 68 kinds of experimental technologies. NPInter v3.0 also improves the user interface and adds new web services, including a local UCSC Genome Browser to visualize binding sites. Additionally, NPInter v3.0 defines a high-confidence set of interactions and predicts the functions of lncRNAs in human and mouse based on the interactions curated in the database. NPInter v3.0 is available at http://www.bioinfo.org/NPInter/. Database URL: http://www.bioinfo.org/NPInter/ PMID:27087310
NASA Astrophysics Data System (ADS)
Lyulin, O. M.; Campargue, A.
2017-12-01
Six studies have recently been devoted to a systematic analysis of the high-resolution near-infrared absorption spectrum of acetylene recorded by cavity ring-down spectroscopy (CRDS) in Grenoble and by Fourier-transform spectroscopy (FTS) in Brussels and Hefei. On the basis of these works, in the present contribution we construct an empirical database for acetylene in the 5850-9415 cm-1 region, excluding the 6341-7000 cm-1 interval corresponding to the very strong ν1+ν3 manifold. Our database gathers and extends information included in our CRDS and FTS studies. In particular, the intensities of about 1700 lines measured by CRDS in the 7244-7920 cm-1 region are reported for the first time, together with those of several bands of 12C13CH2 present in natural isotopic abundance in the acetylene sample. The Herman-Wallis coefficients of most of the bands are derived from a fit of the measured intensity values. A recommended line list is provided, with positions calculated using empirical spectroscopic parameters of the lower and upper vibrational energy levels and intensities calculated using the derived Herman-Wallis coefficients. This approach allows us to complete the experimental list by adding missing lines and to improve poorly determined positions and intensities. As a result, the constructed line list includes a total of 11113 transitions belonging to 150 bands of 12C2H2 and 29 bands of 12C13CH2. For comparison, the HITRAN database in the same region includes 869 transitions of 14 bands, all belonging to 12C2H2. Our weakest lines have an intensity on the order of 10-29 cm/molecule, about three orders of magnitude smaller than the HITRAN intensity cut-off. Line-profile parameters are added to the line list, which is provided in HITRAN format. The comparison of the acetylene database to the HITRAN2012 line list and to results obtained using the global effective operator approach is discussed in terms of completeness and accuracy.
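For context, the Herman-Wallis factor mentioned above multiplies the rigid-rotor line intensity to account for vibration-rotation interaction. A commonly used parameterization (the exact form adopted in the cited works may differ) is:

```latex
F(m) = \left(1 + A_1 m + A_2 m^2\right)^2,
\qquad
m =
\begin{cases}
  -J'' & \text{P branch,} \\
  \phantom{-}J''+1 & \text{R branch,}
\end{cases}
```

where $J''$ is the lower-state rotational quantum number and $A_1$, $A_2$ are the fitted Herman-Wallis coefficients; fitting these coefficients to the measured intensities is what allows the recommended list to fill in missing lines of a band.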
Code of Federal Regulations, 2011 CFR
2011-01-01
... OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS... public reference room. Copies of information contained in a filer's on-line tariff database may be...
Code of Federal Regulations, 2014 CFR
2014-01-01
... OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS... public reference room. Copies of information contained in a filer's on-line tariff database may be...
Code of Federal Regulations, 2012 CFR
2012-01-01
... OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS... public reference room. Copies of information contained in a filer's on-line tariff database may be...
Code of Federal Regulations, 2013 CFR
2013-01-01
... OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS... public reference room. Copies of information contained in a filer's on-line tariff database may be...
Developing Data Systems To Support the Analysis and Development of Large-Scale, On-Line Assessment.
ERIC Educational Resources Information Center
Yu, Chong Ho
Today many data warehousing systems are data rich, but information poor. Extracting useful information from an ocean of data to support administrative, policy, and instructional decisions becomes a major challenge to both database designers and measurement specialists. This paper focuses on the development of a data processing system that…
Kleinboelting, Nils; Huep, Gunnar; Weisshaar, Bernd
2017-01-01
SimpleSearch provides access to a database containing information about T-DNA insertion lines of the GABI-Kat collection of Arabidopsis thaliana mutants. These mutants are an important tool for reverse genetics, and GABI-Kat is the second largest collection of such T-DNA insertion mutants. Insertion sites were deduced from flanking sequence tags (FSTs), and the database contains information about mutant plant lines as well as insertion alleles. Here, we describe improvements within the interface (available at http://www.gabi-kat.de/db/genehits.php) and with regard to the database content that have been realized in the last five years. These improvements include the integration of the Araport11 genome sequence annotation data containing the recently updated A. thaliana structural gene descriptions, an updated visualization component that displays groups of insertions with very similar insertion positions, mapped confirmation sequences, and primers. The visualization component provides a quick way to identify insertions of interest, and access to improved data about the exact structure of confirmed insertion alleles. In addition, the database content has been extended by incorporating additional insertion alleles that were detected during the confirmation process, as well as by adding new FSTs that have been produced during continued efforts to complement gaps in FST availability. Finally, the current database content regarding predicted and confirmed insertion alleles as well as primer sequences has been made available as downloadable flat files. © The Author 2016. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists.
Wilson, Frederic H.; Hults, Chad P.; Mull, Charles G.; Karl, Susan M.
2015-12-31
This Alaska compilation is unique in that it is integrated with a rich database of information provided in the spatial datasets and standalone attribute databases. Within the spatial files every line and polygon is attributed to its original source; the references to these sources are contained in related tables, as well as in stand-alone tables. Additional attributes include typical lithology, geologic setting, and age range for the map units. Also included are tables of radiometric ages.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Browne, S.V.; Green, S.C.; Moore, K.
1994-04-01
The Netlib repository, maintained by the University of Tennessee and Oak Ridge National Laboratory, contains freely available software, documents, and databases of interest to the numerical, scientific computing, and other communities. This report includes both the Netlib User's Guide and the Netlib System Manager's Guide, and contains information about Netlib's databases, interfaces, and system implementation. The Netlib repository's databases include the Performance Database, the Conferences Database, and the NA-NET mail forwarding and Whitepages Databases. A variety of user interfaces enable users to access the Netlib repository in the manner most convenient and compatible with their networking capabilities. These interfaces include the Netlib email interface, the Xnetlib X Windows client, the netlibget command-line TCP/IP client, anonymous FTP, anonymous RCP, and gopher.
The NSO FTS database program and archive (FTSDBM)
NASA Technical Reports Server (NTRS)
Lytle, D. M.
1992-01-01
Data from the NSO Fourier transform spectrometer is being re-archived from half inch tape onto write-once compact disk. In the process, information about each spectrum and a low resolution copy of each spectrum is being saved into an on-line database. FTSDBM is a simple database management program in the NSO external package for IRAF. A command language allows the FTSDBM user to add entries to the database, delete entries, select subsets from the database based on keyword values including ranges of values, create new database files based on these subsets, make keyword lists, examine low resolution spectra graphically, and make disk number/file number lists. Once the archive is complete, FTSDBM will allow the database to be efficiently searched for data of interest to the user and the compact disk format will allow random access to that data.
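As a rough illustration of the kind of keyword-range selection FTSDBM provides, the Python sketch below filters an archive of spectrum records and produces a disk number/file number list; the field names and values are hypothetical and do not reflect the actual NSO database layout.

```python
# Sketch of FTSDBM-style subset selection on an archive of spectrum
# records. Field names and values are hypothetical, not the actual
# NSO FTS database schema.

archive = [
    {"disk": 1, "file": 12, "wavenumber_lo": 800.0,  "wavenumber_hi": 1200.0},
    {"disk": 1, "file": 13, "wavenumber_lo": 1900.0, "wavenumber_hi": 2400.0},
    {"disk": 2, "file": 4,  "wavenumber_lo": 950.0,  "wavenumber_hi": 1600.0},
]

def select_subset(records, **ranges):
    """Keep records whose keyword values fall in the given (lo, hi)
    ranges, mimicking FTSDBM's keyword/range selection commands."""
    out = []
    for rec in records:
        if all(lo <= rec[key] <= hi for key, (lo, hi) in ranges.items()):
            out.append(rec)
    return out

# e.g. spectra whose lower band edge lies between 900 and 2000 cm^-1
subset = select_subset(archive, wavenumber_lo=(900.0, 2000.0))
disk_file_list = [(r["disk"], r["file"]) for r in subset]
```

The selected subset can then be written out as a new database file or as the disk/file list used to locate spectra on the compact disks.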
[Construction of chemical information database based on optical structure recognition technique].
Lv, C Y; Li, M N; Zhang, L R; Liu, Z M
2018-04-18
To create a protocol that can be used to construct a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as the fundamental datasets. Chemical structures were transformed from published documents and images into machine-readable data by using name-conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools in the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographical references, including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information from figures, tables, and textual paragraphs that could be used to populate the structured chemical database. After this step, detailed manual revision and annotation were conducted in order to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate physical and chemical properties and predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated, comprising the molecular structure data file, molecular information, bibliographical references, predictable attributes and known bioactivities. With the canonical SMILES (simplified molecular-input line-entry specification) as the primary key, the metadata files were associated through common key nodes, including molecule number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for a chemical information database was created successfully. 
A total of 174 research articles and 25 reviews published in Marine Drugs from January 2015 to June 2016 were collected as the essential data source, and an elementary marine natural product database named PKU-MNPD was built in accordance with this protocol; it contains 3,262 molecules and 19,821 records. This data aggregation protocol greatly improves the accuracy, comprehensiveness and efficiency of constructing chemical information databases from original documents. The structured chemical information database can facilitate access to medical intelligence and accelerate the transformation of scientific research achievements.
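The key-based association described above can be sketched in a few lines of SQL. The snippet below uses Python's built-in sqlite3 module as a stand-in for the database engine; all table and column names, the molecule, and the property value are illustrative inventions, not PKU-MNPD's actual schema.

```python
# Minimal sketch of linking separate metadata files through common key
# nodes (a molecule number), with canonical SMILES kept unique. Table
# and column names are invented for the illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE molecules    (mol_id INTEGER PRIMARY KEY, smiles TEXT UNIQUE);
CREATE TABLE bibliography (mol_id INTEGER, pdf_number TEXT, title TEXT);
CREATE TABLE properties   (mol_id INTEGER, logp REAL);
""")
con.execute("INSERT INTO molecules VALUES (1, 'CCO')")
con.execute("INSERT INTO bibliography VALUES (1, 'PDF-001', 'Example article')")
con.execute("INSERT INTO properties VALUES (1, -0.14)")

# An integrated record assembled by joining the metadata tables on the
# shared molecule number.
row = con.execute("""
    SELECT m.smiles, b.pdf_number, p.logp
    FROM molecules m
    JOIN bibliography b ON b.mol_id = m.mol_id
    JOIN properties   p ON p.mol_id = m.mol_id
""").fetchone()
```

Keeping each metadata file in its own table and joining on the shared key is what lets the structure file, bibliography, predicted attributes, and bioactivities be maintained independently yet queried as one database.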
Miller, Stephan W.
1981-01-01
A second set of related problems deals with how this format and other representations of spatial entities, such as vector formats for point and line features, can be interrelated for manipulation, retrieval, and analysis by a spatial database management subsystem. Methods have been developed for interrelating areal data sets in the raster format with point and line data in a vector format and these are described.
Knoppers, Bartha M; Isasi, Rosario; Benvenisty, Nissim; Kim, Ock-Joo; Lomax, Geoffrey; Morris, Clive; Murray, Thomas H; Lee, Eng Hin; Perry, Margery; Richardson, Genevra; Sipp, Douglas; Tanner, Klaus; Wahlström, Jan; de Wert, Guido; Zeng, Fanyi
2011-09-01
Novel methods and associated tools permitting individual identification in publicly accessible SNP databases have become a debatable issue. There is growing concern that current technical and ethical safeguards to protect the identities of donors could be insufficient. In the context of human embryonic stem cell research, there are no studies focusing on the probability that an hESC line donor could be identified by analyzing published SNP profiles and associated genotypic and phenotypic information. We present the International Stem Cell Forum (ISCF) Ethics Working Party's Policy Statement on "Publishing SNP Genotypes of Human Embryonic Stem Cell Lines (hESC)". The Statement prospectively addresses issues surrounding the publication of genotypic data and associated annotations of hESC lines in open access databases. It proposes a balanced approach between the goals of open science and data sharing with the respect for fundamental bioethical principles (autonomy, privacy, beneficence, justice and research merit and integrity).
NASA Astrophysics Data System (ADS)
Jacquinet-Husson, Nicole; Crépeau, Laurent; Capelle, Virginie; Scott, Noëlle; Armante, Raymond; Chédin, Alain; Boonne, Cathy; Poulet-Crovisier, Nathalie
2010-05-01
The GEISA (1) (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database, initiated in 1976, is developed and maintained at LMD (Laboratoire de Météorologie Dynamique, France) as a system comprising three independent sub-databases devoted respectively to: line transition parameters; infrared and ultraviolet/visible absorption cross-sections; and microphysical and optical properties of atmospheric aerosols. The updated 2009 edition (GEISA-09) archives, in its line transition parameters sub-database, 50 molecules, corresponding to 111 isotopes, for a total of 3,807,997 entries in the spectral range from 10-6 to 35,877.031 cm-1. A detailed description of the whole database contents will be given. GEISA and GEISA/IASI are implemented on the CNES/CNRS Ether Products and Services Centre web site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general and user-friendly associated management software facilities. These facilities will be described and widely illustrated as well. Interactive demonstrations will be given if technically feasible at the time of the Poster Display Session. More than 350 researchers are registered for on-line use of GEISA on Ether. Currently, GEISA is involved in activities (2) related to the remote sensing of the terrestrial atmosphere, thanks to the sounding performance of the new generation of hyperspectral Earth atmosphere sounders, like AIRS (Atmospheric Infrared Sounder -http://www-airs.jpl.nasa.gov/), in the USA, and IASI (Infrared Atmospheric Sounding Interferometer -http://earth-sciences.cnes.fr/IASI/) in Europe, using the 4A radiative transfer model (3) (4A/LMD http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and NOVELTIS -http://www.noveltis.fr/) with the support of CNES (2006). Refs: (1) Jacquinet-Husson N., N.A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle, J. 
Orphal, A. Coustenis, C. Boonne, N. Poulet-Crovisier, et al. : THE GEISA SPECTROSCOPIC DATABASE: Current and future archive for Earth and planetary atmosphere studies. JQSRT 109 (2008) 1043-1059. (2) Jacquinet-Husson N., N.A. Scott, A. Chédin, K. Garceran, R. Armante, et al. : The 2003 edition of the GEISA/IASI spectroscopic database. JQSRT, 95 (2005) 429-467. (3) Scott, N.A. and A. Chedin. A fast line-by-line method for atmospheric absorption computations: The Automatized Atmospheric Absorption Atlas. J. Appl. Meteor., 20 (1981) 556-564.
Web client and ODBC access to legacy database information: a low cost approach.
Sanders, N. W.; Mann, N. H.; Spengler, D. M.
1997-01-01
A new method has been developed for the Department of Orthopaedics of Vanderbilt University Medical Center to access departmental clinical data. Previously this data was stored only in the medical center's mainframe DB2 database; it is now additionally stored in a departmental SQL database. Access to this data is available via any ODBC-compliant front end or a web client. With a small budget and no full-time staff, we were able to give our department on-line access to many years' worth of patient data that was previously inaccessible. PMID:9357735
X-ray Photoelectron Spectroscopy Database (Version 4.1)
National Institute of Standards and Technology Data Gateway
SRD 20 X-ray Photoelectron Spectroscopy Database (Version 4.1) (Web, free access) The NIST XPS Database gives access to energies of many photoelectron and Auger-electron spectral lines. The database contains over 22,000 line positions, chemical shifts, doublet splittings, and energy separations of photoelectron and Auger-electron lines.
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes the State and County boundaries from the TIGER shapefiles compiled into a single national coverage for each layer. The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB).
An Algorithm to Compress Line-transition Data for Radiative-transfer Calculations
NASA Astrophysics Data System (ADS)
Cubillos, Patricio E.
2017-11-01
Molecular line-transition lists are an essential ingredient for radiative-transfer calculations. With recent databases now surpassing the billion-line mark, handling them has become computationally prohibitive, due to both the required processing power and memory. Here I present a temperature-dependent algorithm to separate strong from weak line transitions, reformatting the large majority of the weaker lines into a cross-section data file, and retaining the detailed line-by-line information of the fewer strong lines. For any given molecule over the 0.3-30 μm range, this algorithm reduces the number of lines to a few million, enabling faster radiative-transfer computations without a significant loss of information. The final compression rate depends on how densely populated the spectrum is. I validate this algorithm by comparing Exomol’s HCN extinction-coefficient spectra between the complete (65 million line transitions) and compressed (7.7 million) line lists. Over the 0.6-33 μm range, the average difference between extinction-coefficient values is less than 1%. A Python/C implementation of this algorithm is open-source and available at https://github.com/pcubillos/repack. So far, this code handles the Exomol and HITRAN line-transition format.
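The core strong/weak split can be illustrated with a toy sketch: lines above an intensity threshold are retained line by line, while the rest are co-added onto a coarse cross-section grid. The real repack algorithm is temperature dependent and far more careful; the threshold, grid, and line data below are invented for the illustration.

```python
# Toy sketch of a strong/weak line-list split. Lines at or above the
# threshold keep their line-by-line detail; weaker lines are co-added
# into cross-section bins. All numbers are invented.
import bisect

lines = [  # (wavenumber in cm^-1, intensity in cm/molecule)
    (1000.1, 1e-21), (1000.3, 5e-27), (1001.2, 2e-22),
    (1001.9, 8e-28), (1002.4, 3e-26),
]
threshold = 1e-24                       # strong/weak cutoff
grid = [1000.0, 1001.0, 1002.0, 1003.0]  # cross-section bin edges

strong = [ln for ln in lines if ln[1] >= threshold]
xsec = [0.0] * (len(grid) - 1)
for wn, s in lines:
    if s < threshold:
        xsec[bisect.bisect_right(grid, wn) - 1] += s  # co-add weak lines
```

Here only two of the five lines survive as explicit transitions; the compression rate grows with how densely the weak lines populate the spectrum, which mirrors the behavior reported for the HCN test case.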
A self-organized learning strategy for object recognition by an embedded line of attraction
NASA Astrophysics Data System (ADS)
Seow, Ming-Jung; Alex, Ann T.; Asari, Vijayan K.
2012-04-01
For humans, a picture is worth a thousand words, but to a machine, it is just a seemingly random array of numbers. Although machines are very fast and efficient, they are vastly inferior to humans for everyday information processing. Algorithms that mimic the way the human brain computes and learns may be the solution. In this paper we present a theoretical model based on the observation that images of similar visual perceptions reside in a complex manifold in an image space. The perceived features are often highly structured and hidden in a complex set of relationships or high-dimensional abstractions. To model the pattern manifold, we present a novel learning algorithm using a recurrent neural network. The brain memorizes information using a dynamical system made of interconnected neurons. Retrieval of information is accomplished in an associative sense. It starts from an arbitrary state that might be an encoded representation of a visual image and converges to another state that is stable. The stable state is what the brain remembers. In designing a recurrent neural network, it is usually of prime importance to guarantee the convergence in the dynamics of the network. We propose to modify this picture: if the brain remembers by converging to the state representing familiar patterns, it should also diverge from such states when presented with an unknown encoded representation of a visual image belonging to a different category. That is, the identification of an instability mode is an indication that a presented pattern is far away from any stored pattern and therefore cannot be associated with current memories. These properties can be used to circumvent the plasticity-stability dilemma by using the fluctuating mode as an indicator to create new states. We capture this behavior using a novel neural architecture and learning algorithm, in which the system performs self-organization utilizing a stability mode and an instability mode for the dynamical system. 
Based on this observation we developed a self-organizing line attractor, which is capable of generating new lines in the feature space to learn unrecognized patterns. Experiments performed on the UMIST pose database and the CMU face expression variant database for face recognition have shown that the proposed nonlinear line attractor is able to successfully identify individuals, and it provided a better recognition rate than state-of-the-art face recognition techniques. Experiments on the FRGC version 2 database have also yielded an excellent recognition rate for images captured in complex lighting environments. Experiments performed on the Japanese female facial expression database and the Essex Grimace database using the self-organizing line attractor have also shown successful expression-invariant face recognition. These results show that the proposed model is able to create nonlinear manifolds in a multidimensional feature space to distinguish complex patterns.
Price, Ronald N; Chandrasekhar, Arcot J; Tamirisa, Balaji
1990-01-01
The Department of Medicine at Loyola University Medical Center (LUMC) of Chicago has implemented a local area network (LAN) based Patient Information Management System (PIMS) as part of its integrated departmental database management system. PIMS consists of related database applications encompassing demographic information, current medications, problem lists, clinical data, prior events, and on-line procedure results. Integration into the existing departmental database system permits PIMS to capture and manipulate data in other departmental applications. Standardization of clinical data is accomplished through three data tables that verify diagnosis codes, procedure codes and a standardized set of clinical data elements. The modularity of the system, coupled with standardized data formats, allowed the development of a Patient Information Protocol System (PIPS). PIPS, a user-definable protocol processor, provides physicians with individualized data entry or review screens customized for their specific research protocols or practice habits. Physician feedback indicates that the PIMS/PIPS combination enhances their ability to collect and review specific patient information by filtering large amounts of clinical data.
CliniWeb: managing clinical information on the World Wide Web.
Hersh, W R; Brown, K E; Donohoe, L C; Campbell, E M; Horacek, A E
1996-01-01
The World Wide Web is a powerful new way to deliver on-line clinical information, but several problems limit its value to health care professionals: content is highly distributed and difficult to find, clinical information is not separated from non-clinical information, and the current Web technology is unable to support some advanced retrieval capabilities. A system called CliniWeb has been developed to address these problems. CliniWeb is an index to clinical information on the World Wide Web, providing a browsing and searching interface to clinical content at the level of the health care student or provider. Its database contains a list of clinical information resources on the Web that are indexed by terms from the Medical Subject Headings disease tree and retrieved with the assistance of SAPHIRE. Limitations of the processes used to build the database are discussed, together with directions for future research.
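Indexing by MeSH disease-tree terms means a resource filed under a specific code is also reachable from any ancestor code, since tree numbers encode the hierarchy as dotted prefixes. The sketch below is a hypothetical illustration of that prefix-based browsing; the tree codes and URLs are invented and are not CliniWeb's actual data.

```python
# Hypothetical sketch of browsing an index organized by MeSH disease
# tree numbers: a resource indexed under a code is found via any
# ancestor code by prefix matching. Codes and URLs are invented.

index = {
    "C14.280":     ["http://example.org/heart-failure-overview"],
    "C14.280.647": ["http://example.org/myocardial-infarction"],
    "C04":         ["http://example.org/oncology-handbook"],
}

def browse(index, tree_code):
    """Collect resources indexed at `tree_code` or any descendant code."""
    hits = []
    for code, urls in sorted(index.items()):
        if code == tree_code or code.startswith(tree_code + "."):
            hits.extend(urls)
    return hits
```

Browsing "C14.280" (a heart-disease node in this toy hierarchy) returns both the resource filed at that node and the one filed under its child code, which is the behavior a disease-tree browser needs.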
ERIC Educational Resources Information Center
Tobias, Christine
2017-01-01
The Michigan State University (MSU) Libraries' Website has a case of TMI: too much information organized by librarians for librarians. Finding relevant information about various library services, including the 24/7 Distance Learning Support Line, and access points to scholarly resources is often cumbersome, and given the limited time and staffing…
De-MA: a web Database for electron Microprobe Analyses to assist EMP lab manager and users
NASA Astrophysics Data System (ADS)
Allaz, J. M.
2012-12-01
Lab managers and users of electron microprobe (EMP) facilities require comprehensive, yet flexible documentation structures, as well as an efficient scheduling mechanism. A single on-line database system for managing reservations, and providing information on standards, quantitative and qualitative setups (element mapping, etc.), and X-ray data has been developed for this purpose. This system is particularly useful in multi-user facilities where experience ranges from beginners to the highly experienced. New users and occasional facility users will find these tools extremely useful in developing and maintaining high quality, reproducible, and efficient analyses. This user-friendly database is available through the web, and uses MySQL as the database and PHP/HTML as the scripting languages (dynamic website). The database includes several tables for standards information, X-ray lines, X-ray element mapping, PHA, element setups, and agenda. It is configurable for up to five different EMPs in a single lab, each of them having up to five spectrometers and as many diffraction crystals as required. The installation should be done on a web server supporting PHP/MySQL, although installation on a personal computer is possible using third-party freeware to create a local Apache server, and to enable PHP/MySQL. Since it is web-based, any user outside the EMP lab can access this database anytime through any web browser and on any operating system. The access can be secured using a general password protection (e.g. htaccess). The web interface consists of 6 main menus. (1) "Standards" lists standards defined in the database, and displays detailed information on each (e.g. material type, name, reference, comments, and analyses). Images such as EDS spectra or BSE can be associated with a standard.
(2) "Analyses" lists typical setups to use for quantitative analyses, allows calculation of mineral composition based on a mineral formula, or calculation of mineral formula based on a fixed amount of oxygen, or of cation (using an analysis in element or oxide weight-%); this latter includes re-calculation of H2O/CO2 based on stoichiometry, and oxygen correction for F and Cl. Another option offers a list of any available standards and possible peak or background interferences for a series of elements. (3) "X-ray maps" lists the different setups recommended for element mapping using WDS, and a map calculator to facilitate maps setups and to estimate the total mapping time. (4) "X-ray data" lists all x-ray lines for a specific element (K, L, M, absorption edges, and satellite peaks) in term of energy, wavelength and peak position. A check for possible interferences on peak or background is also possible. Theoretical x-ray peak positions for each crystal are calculated based on the 2d spacing of each crystal and the wavelength of each line. (5) "Agenda" menu displays the reservation dates for each month and for each EMP lab defined. It also offers a reservation request option, this request being sent by email to the EMP manager for approval. (6) Finally, "Admin" is password restricted, and contains all necessary options to manage the database through user-friendly forms. The installation of this database is made easy and knowledge of HTML, PHP, or MySQL is unnecessary to install, configure, manage, or use it. A working database is accessible at http://cub.geoloweb.ch.
In the Jungle of Astronomical On-line Data Services
NASA Astrophysics Data System (ADS)
Egret, D.
The author tried to survive in the jungle of astronomical on-line data services. In order to find efficient answers to common scientific data retrieval requests, he had to collect many pieces of information, formulate typical user scenarios, and try them against a number of different databases, catalogue services, and information systems. He soon discovered how frustrating treasure coffers can be when their keys are not available, but he also realized that nice widgets and gadgets are of no help when the information is not there. And, before long, he knew he would have to navigate through several systems because no one was yet offering a general answer to all his questions. I will present examples of common user scenarios and show how they were tested against a number of services. I will propose some elements of classification which should help the end-user evaluate how adequate the different services may be for providing satisfying answers to specific queries. For that, many aspects of the user interaction will be considered: documentation, access, query formulation, functionalities, qualification of the data, overall efficiency, etc. I will also suggest possible improvements to the present situation, the first of them being to encourage system managers to increase collaboration with one another, for the benefit of the whole astronomical community. The subjective review I will present is based on publicly available astronomical on-line services from the U.S. and from Europe, most of which (excepting the newcomers) were described in "Databases and On-Line Data in Astronomy" (Albrecht & Egret, eds., 1991): this includes databases (such as NED and Simbad), catalog services (StarCat, DIRA, XCatScan, etc.), and information systems (ADS and ESIS).
47 CFR 69.307 - General support facilities.
Code of Federal Regulations, 2014 CFR
2014-10-01
....307 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED... computer investment used in the provision of the Line Information Database sub-element at § 69.120(b) shall be assigned to that sub-element. (b) General purpose computer investment used in the provision of the...
Deng, Youping; Dong, Yinghua; Thodima, Venkata; Clem, Rollie J; Passarelli, A Lorena
2006-01-01
Background: Little is known about the genome sequences of lepidopteran insects, although this group of insects has been studied extensively in the fields of endocrinology, development, immunity, and pathogen-host interactions. In addition, cell lines derived from Spodoptera frugiperda and other lepidopteran insects are routinely used for baculovirus foreign gene expression. This study reports the results of an expressed sequence tag (EST) sequencing project in cells from the lepidopteran insect S. frugiperda, the fall armyworm. Results: We have constructed an EST database using two cDNA libraries from the S. frugiperda-derived cell line, SF-21. The database consists of 2,367 ESTs which were assembled into 244 contigs and 951 singlets for a total of 1,195 unique sequences. Conclusion: S. frugiperda is an agriculturally important pest insect and genomic information will be instrumental for establishing initial transcriptional profiling and gene function studies, and for obtaining information about genes manipulated during infections by insect pathogens such as baculoviruses. PMID:17052344
Prieto, Claudia I; Palau, María J; Martina, Pablo; Achiary, Carlos; Achiary, Andrés; Bettiol, Marisa; Montanaro, Patricia; Cazzola, María L; Leguizamón, Mariana; Massillo, Cintia; Figoli, Cecilia; Valeiras, Brenda; Perez, Silvia; Rentería, Fernando; Diez, Graciela; Yantorno, Osvaldo M; Bosch, Alejandra
2016-01-01
The epidemiological and clinical management of cystic fibrosis (CF) patients suffering from acute pulmonary exacerbations or chronic lung infections demands continuous updating of medical and microbiological processes associated with the constant evolution of pathogens during host colonization. In order to monitor the dynamics of these processes, it is essential to have expert systems capable of storing and subsequently extracting the information generated from different studies of the patients and microorganisms isolated from them. In this work we have designed and developed an on-line database based on an information system that allows users to store, manage and visualize data from clinical studies and microbiological analysis of bacteria obtained from the respiratory tract of patients suffering from cystic fibrosis. The information system, named Cystic Fibrosis Cloud database, is available on the http://servoy.infocomsa.com/cfc_database site and is composed of a main database and a web-based interface, which uses Servoy's product architecture based on Java technology. Although the CFC database system can be implemented as a local program for private use in CF centers, it can also be used, updated and shared by different users who can access the stored information in a systematic, practical and safe manner. The implementation of the CFC database could have a significant impact on the monitoring of respiratory infections, the prevention of exacerbations, the detection of emerging organisms, and the adequacy of control strategies for lung infections in CF patients. Copyright © 2015 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.
This EnviroAtlas web service supports research and online mapping activities related to EnviroAtlas (https://www.epa.gov/enviroatlas). This web service includes the State, County, and Census Block Groups boundaries from the TIGER shapefiles compiled into a single national coverage for each layer. The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB).
An On-line Technology Information System (OTIS) for Advanced Life Support
NASA Technical Reports Server (NTRS)
Levri, Julie A.; Boulanger, Richard; Hogan, John A.; Rodriquez, Luis
2003-01-01
OTIS is an on-line communication platform designed for smooth flow of technology information between advanced life support (ALS) technology developers, researchers, system analysts, and managers. With pathways for efficient transfer of information, several improvements in the ALS Program will result. With OTIS, it will be possible to provide programmatic information for technology developers and researchers, technical information for analysts, and managerial decision support. OTIS is a platform that enables the effective research, development, and delivery of complex systems for life support. An electronic data collection form has been developed for the solid waste element, drafted by the Solid Waste Working Group. Forms for other elements (air revitalization, water recovery, food processing, biomass production and thermal control) will also be developed, based on lessons learned from the development of the solid waste form. All forms will be developed by consultation with other working groups, comprised of experts in the area of interest. Forms will be converted to an on-line data collection interface that technology developers will use to transfer information into OTIS. Funded technology developers will log in to OTIS annually to complete the element-specific forms for their technology. The type and amount of information requested expands as the technology readiness level (TRL) increases. The completed forms will feed into a regularly updated and maintained database that will store technology information and allow for database searching. To ensure confidentiality of proprietary information, security permissions will be customized for each user. Principal investigators of a project will be able to designate certain data as proprietary, and only technical monitors of a task, ALS Management, and the principal investigator will have the ability to view this information.
The typical OTIS user will be able to read all non-proprietary information about all projects. Interaction with the database will occur over encrypted connections, and data will be stored on the server in an encrypted form. Implementation of OTIS will initiate a community-accessible repository of technology development information. With OTIS, ALS element leads and managers will be able to carry out informed technology selection for programmatic decisions. OTIS will also allow analysts to make accurate evaluations of technology options. Additionally, the range and specificity of information solicited will help educate technology developers of program needs. With augmentation, OTIS reporting is capable of replacing the current fiscal year-end reporting process. Overall, the system will enable more informed R&TD decisions and more rapid attainment of ALS Program goals.
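The visibility rule described above (proprietary data visible only to the project's principal investigator, the task's technical monitor, and ALS Management) could be sketched as follows; the record layout and field names are assumptions for illustration, not the actual OTIS schema:

```python
# Sketch of the OTIS-style visibility rule: non-proprietary records are
# readable by everyone, proprietary records only by the PI, the task's
# technical monitor, or ALS management. Field names are assumed.

def can_view(record, user):
    """Return True if `user` may read `record` under the rule above."""
    if not record["proprietary"]:
        return True  # non-proprietary data is open to all OTIS users
    return (user == record["pi"]
            or user == record["technical_monitor"]
            or user in record["als_management"])

record = {"proprietary": True, "pi": "alice",
          "technical_monitor": "bob", "als_management": {"carol"}}
print(can_view(record, "alice"), can_view(record, "dave"))  # True False
```

In a deployed system this check would sit behind the encrypted connection layer the abstract mentions, applied per query rather than per record in memory.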
New approach for logo recognition
NASA Astrophysics Data System (ADS)
Chen, Jingying; Leung, Maylor K. H.; Gao, Yongsheng
2000-03-01
The problem of logo recognition is of great interest in the document domain, especially for document databases. By recognizing the logo we obtain semantic information about the document which may be useful in deciding whether or not to analyze the textual components. In order to develop a logo recognition method that is efficient to compute and produces intuitively reasonable results, we investigate the Line Segment Hausdorff Distance for logo recognition. Researchers apply the Hausdorff Distance to measure the dissimilarity of two point sets. It has been extended to match two sets of line segments. The new approach has the advantage of incorporating structural and spatial information in computing the dissimilarity. The added information can conceptually provide more and better distinctive capability for recognition. The proposed technique has been applied to line segments of logos with encouraging results that support the concept experimentally. This might imply a new way for logo recognition.
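For reference, the classical point-set Hausdorff distance that the Line Segment Hausdorff Distance extends can be sketched as:

```python
# Classical (undirected) Hausdorff distance between two 2D point sets:
# the largest distance from any point in one set to its nearest
# neighbour in the other set, taken in both directions.
import math

def directed_hausdorff(A, B):
    """max over a in A of the distance from a to its nearest point in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Undirected Hausdorff distance: the larger of the two directed values."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = [(0, 0), (1, 0)]
B = [(0, 0), (0, 2)]
print(hausdorff(A, B))  # 2.0
```

The line-segment variant replaces point-to-point distance with a segment dissimilarity that also accounts for orientation and displacement, which is what lets it exploit the structural information the abstract describes.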
An Integrated Korean Biodiversity and Genetic Information Retrieval System
Lim, Jeongheui; Bhak, Jong; Oh, Hee-Mock; Kim, Chang-Bae; Park, Yong-Ha; Paek, Woon Kee
2008-01-01
Background: On-line biodiversity information databases are growing quickly and being integrated into general bioinformatics systems due to the advances of fast gene sequencing technologies and the Internet. These can reduce the cost and effort of performing biodiversity surveys and genetic searches, which allows scientists to spend more time researching and less time collecting and maintaining data. This will cause an increased rate of knowledge build-up and improve conservation. The biodiversity databases in Korea have been scattered among several institutes and local natural history museums with incompatible data types. Therefore, a comprehensive database and a nationwide web portal for biodiversity information is necessary in order to integrate diverse information resources, including molecular and genomic databases. Results: The Korean Natural History Research Information System (NARIS) was built and serviced as the central biodiversity information system to collect and integrate the biodiversity data of various institutes and natural history museums in Korea. This database aims to be an integrated resource that contains additional biological information, such as genome sequences and molecular level diversity. Currently, twelve institutes and museums in Korea are integrated by the DiGIR (Distributed Generic Information Retrieval) protocol, with Darwin Core 2.0 format as its metadata standard for data exchange. Data quality control and statistical analysis functions have been implemented. In particular, integrating molecular and genetic information from the National Center for Biotechnology Information (NCBI) databases with NARIS was recently accomplished. NARIS can also be extended to accommodate other institutes abroad, and the whole system can be exported to establish local biodiversity management servers. Conclusion: A Korean data portal, NARIS, has been developed to efficiently manage and utilize biodiversity data, which includes genetic resources.
NARIS aims to be integral in maximizing bio-resource utilization for conservation, management, research, education, industrial applications, and integration with other bioinformation data resources. It can be found at . PMID:19091024
Retrieving Historical Electrorefining Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, Meagan Daniella
Pyrochemical Operations began at Los Alamos National Laboratory (LANL) during 1962 (1). Electrorefining (ER) has been implemented as a routine process since the 1980s. The process data that went through the ER operation was recorded but had never been logged in an online database. Without a database, new staff members are hindered in their work by the lack of information. To combat the issue, a database in Access was created to collect the historical data. The years from 2000 onward were entered and queries were created to analyze trends. These trends will aid engineering and operations staff to reach optimal performance for the startup of the new lines.
ERIC Educational Resources Information Center
Curtis, Rick
This paper summarizes information about using computer hardware and software to aid in making purchase decisions that are based on user needs. The two major options in hardware are IBM-compatible machines and the Apple Macintosh line. The three basic software applications include word processing, database management, and spreadsheet applications.…
Educational Technology: Transitioning from Business Continuity to Mission Continuity
ERIC Educational Resources Information Center
Mekdeci, Kelly Broyles
2011-01-01
United States schools and American Overseas (A/OS) schools depend upon educational technology (ET) to support business operations and student learning experiences. Schools rely upon administrative software, on-line course modules, information databases, digital communications systems, and many other ET processes. However, ET's fragility compared…
H2^16O line list for the study of atmospheres of Venus and Mars
NASA Astrophysics Data System (ADS)
Lavrentieva, N. N.; Voronin, B. A.; Fedorova, A. A.
2015-01-01
IR spectroscopy is an important method of remote measurement of the H2^16O content in planetary atmospheres, with initial spectroscopic information from the HITRAN, GEISA, etc., databases adapted for studies in the Earth's atmosphere. Unlike the Earth, the atmospheres of Mars and Venus mainly consist of carbon dioxide, with a CO2 content of about 95%. In this paper, the line list of H2^16O is obtained on the basis of the BT2 line list (R.J. Barber, J. Tennyson, G.J. Harris, et al., Mon. Not. R. Astron. Soc. 368, 1087 (2006)). The BT2 line list, containing information on the centers, intensities, and quantum identification of lines, is supplemented with the line contour parameters: the self-broadening and carbon dioxide broadening coefficients and the temperature dependence coefficient at 296 K in the range of 0.001-30000 cm^-1. Transitions with intensity values of 10^-30, 10^-32, and 10^-35 cm/molecule, the total number of which is 323310, 753529, and 2011072, respectively, were chosen from the BT2 line list.
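The intensity-cutoff selection described above amounts to filtering the line list by a minimum line intensity. A sketch with made-up records (these are not actual BT2 entries):

```python
# Sketch of intensity-threshold selection from a spectroscopic line list:
# keep only transitions whose intensity (cm/molecule) meets a cutoff.
# The example records are illustrative, not real BT2 data.

def select_lines(lines, cutoff):
    """lines: iterable of (center_cm1, intensity_cm_per_molecule) tuples."""
    return [ln for ln in lines if ln[1] >= cutoff]

lines = [(1594.7, 1.2e-19), (2250.1, 4.0e-31), (3651.3, 7.5e-36)]
print(len(select_lines(lines, 1e-30)))  # 1
print(len(select_lines(lines, 1e-35)))  # 2
```

Lowering the cutoff keeps weaker transitions, which is why the three thresholds in the abstract yield progressively larger line counts.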
NASA Technical Reports Server (NTRS)
Campbell, William J.
1985-01-01
Intelligent data management is the concept of interfacing a user to a database management system with a value-added service that will allow a full range of data management operations at a high level of abstraction using human-written language. The development of such a system will be based on expert systems and related artificial intelligence technologies, and will allow the capturing of procedural and relational knowledge about data management operations and the support of a user with such knowledge in an on-line, interactive manner. Such a system will have the following capabilities: (1) the ability to construct a model of the user's view of the database, based on the query syntax; (2) the ability to transform English queries and commands into database instructions and processes; (3) the ability to use heuristic knowledge to rapidly prune the data space in search processes; and (4) the ability to use an on-line explanation system to allow the user to understand what the system is doing and why it is doing it. Additional information is given in outline form.
FIREDOC users manual, 3rd edition
NASA Astrophysics Data System (ADS)
Jason, Nora H.
1993-12-01
FIREDOC is the on-line bibliographic database which reflects the holdings (published reports, journal articles, conference proceedings, books, and audiovisual items) of the Fire Research Information Services (FRIS) at the Building and Fire Research Laboratory (BFRL), National Institute of Standards and Technology (NIST). This manual provides step-by-step procedures for entering and exiting the database via telecommunication lines, as well as a number of techniques for searching the database and processing the results of the searches. This Third Edition is necessitated by the change to a UNIX platform. The new computer allows for faster response time if searching via a modem and, in addition, offers Internet accessibility. FIREDOC may be used with personal computers, using DOS or Windows, or with Macintosh computers and workstations. A new section on how to access the Internet is included, and one on how to obtain the references of interest to you. Appendix F: Quick Guide to Getting Started will be useful to both modem and Internet users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnette, Daniel W.
eCo-PylotDB, written completely in Python, provides a script that parses incoming emails and prepares extracted data for submission to a database table. The script extracts the database server, the database table, the server password, and the server username all from the email address to which the email is sent. The database table is specified on the Subject line. Any text in the body of the email is extracted as user comments for the database table. Attached files are extracted as data files, with each file submitted to a specified table field but in separate rows of the targeted database table. Other information such as sender, date, time, and machine from which the email was sent is extracted and submitted to the database table as well. An email is sent back to the user specifying whether the data from the initial email was accepted or rejected by the database server. If rejected, the return email includes details as to why.
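The email-parsing step described above can be sketched with Python's standard `email` module. The address format ("user.password@dbserver") and the field layout below are assumptions for illustration; the abstract does not specify eCo-PylotDB's actual encoding:

```python
# Sketch of extracting submission metadata from an incoming email.
# ASSUMED address convention: local part is "username.password",
# domain is the database server; table name is on the Subject line.
from email.parser import Parser

RAW = """\
From: sender@example.org
To: dbuser.s3cret@dbhost.example.com
Subject: results_table

run finished cleanly
"""

msg = Parser().parsestr(RAW)
local, server = msg["To"].split("@")
username, password = local.split(".", 1)
table = msg["Subject"].strip()        # table named on the Subject line
comments = msg.get_payload().strip()  # body text becomes user comments
print(username, server, table)  # dbuser dbhost.example.com results_table
```

A production script would also walk `msg` for attachments (the data files) and validate every extracted field before touching the database.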
14 CFR 221.180 - Requirements for electronic filing of tariffs.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of Transportation, for the maintenance and security of the on-line tariff database. (b) No carrier or... to its on-line tariff database. The filer shall be responsible for the transportation, installation... installation or maintenance. (3) The filer shall provide public access to its on-line tariff database, at...
"TPSX: Thermal Protection System Expert and Material Property Database"
NASA Technical Reports Server (NTRS)
Squire, Thomas H.; Milos, Frank S.; Rasky, Daniel J. (Technical Monitor)
1997-01-01
The Thermal Protection Branch at NASA Ames Research Center has developed a computer program for storing, organizing, and accessing information about thermal protection materials. The program, called Thermal Protection Systems Expert and Material Property Database, or TPSX, is available for the Microsoft Windows operating system. An "on-line" version is also accessible on the World Wide Web. TPSX is designed to be a high-quality source for TPS material properties presented in a convenient, easily accessible form for use by engineers and researchers in the field of high-speed vehicle design. Data can be displayed and printed in several formats. An information window displays a brief description of the material with properties at standard pressure and temperature. A spreadsheet window displays complete, detailed property information. Properties which are a function of temperature and/or pressure can be displayed as graphs. In any display the data can be converted from English to SI units with the click of a button. Two material databases included with TPSX are: 1) materials used and/or developed by the Thermal Protection Branch at NASA Ames Research Center, and 2) a database compiled by NASA Johnson Space Center (JSC). The Ames database contains over 60 advanced TPS materials including flexible blankets, rigid ceramic tiles, and ultra-high temperature ceramics. The JSC database contains over 130 insulative and structural materials. The Ames database is periodically updated and expanded as required to include newly developed materials and material property refinements.
Automating testbed documentation and database access using World Wide Web (WWW) tools
NASA Technical Reports Server (NTRS)
Ames, Charles; Auernheimer, Brent; Lee, Young H.
1994-01-01
A method for providing uniform transparent access to disparate distributed information systems was demonstrated. A prototype testing interface was developed to access documentation and information using publicly available hypermedia tools. The prototype gives testers a uniform, platform-independent user interface to on-line documentation, user manuals, and mission-specific test and operations data. Mosaic was the common user interface, and HTML (Hypertext Markup Language) provided hypertext capability.
Shah, Sachin D.; Maltby, David R.
2010-01-01
The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, compiled salinity-related water-quality data and information in a geodatabase containing more than 6,000 sampling sites. The geodatabase was designed as a tool for water-resource management and includes readily available digital data sources from the U.S. Geological Survey, U.S. Environmental Protection Agency, New Mexico Interstate Stream Commission, Sustainability of semi-Arid Hydrology and Riparian Areas, Paso del Norte Watershed Council, numerous other State and local databases, and selected databases maintained by the University of Arizona and New Mexico State University. Salinity information was compiled for an approximately 26,000-square-mile area of the Rio Grande Basin from the Rio Arriba-Sandoval County line, New Mexico, to Presidio, Texas. The geodatabase relates the spatial location of sampling sites with salinity-related water-quality data reported by multiple agencies. The sampling sites are stored in a geodatabase feature class; each site is linked by a relationship class to the corresponding sample and results stored in data tables.
Accounting Data to Web Interface Using PERL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hargeaves, C
2001-08-13
This document will explain the process to create a web interface for the accounting information generated by the High Performance Storage System (HPSS) accounting report feature. The accounting report contains useful data but it is not easily accessed in a meaningful way. The accounting report is the only way to see summarized storage usage information. The first step is to take the accounting data, make it meaningful, and store the modified data in persistent databases. The second step is to generate the various user interfaces, HTML pages, that will be used to access the data. The third step is to transfer all required files to the web server. The web pages pass parameters to Common Gateway Interface (CGI) scripts that generate dynamic web pages and graphs. The end result is a web page with specific information presented in text with or without graphs. The accounting report has a specific format that allows the use of regular expressions to verify if a line is storage data. Each storage data line is stored in a detailed database file with a name that includes the run date. The detailed database is used to create a summarized database file that also uses the run date in its name. The summarized database is used to create the group.html web page that includes a list of all storage users. Scripts that query the database folder to build a list of available databases generate two additional web pages. A master script that is run monthly as part of a cron job, after the accounting report has completed, manages all of these individual scripts. All scripts are written in the PERL programming language. Whenever possible, data manipulation scripts are written as filters. All scripts are written to be single source, which means they will function properly on both the open and closed networks at LLNL. The master script handles the command line inputs for all scripts, file transfers to the web server, and records run information in a log file.
The rest of the scripts manipulate the accounting data or use the files created to generate HTML pages. Each script will be described in detail herein. The following is a brief description of HPSS taken directly from an HPSS web site. "HPSS is a major development project, which began in 1993 as a Cooperative Research and Development Agreement (CRADA) between government and industry. The primary objective of HPSS is to move very large data objects between high performance computers, workstation clusters, and storage libraries at speeds many times faster than is possible with today's software systems. For example, HPSS can manage parallel data transfers from multiple network-connected disk arrays at rates greater than 1 Gbyte per second, making it possible to access high definition digitized video in real time." The HPSS accounting report is a canned report whose format is controlled by the HPSS developers.
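The filter-and-summarize pipeline described above can be sketched as follows (in Python rather than the authors' Perl; the column layout of the accounting report is an assumption, since the real format is controlled by the HPSS developers):

```python
import re
from collections import defaultdict

# Hypothetical storage-data line format: "<account>  <files>  <bytes>".
# The real HPSS report layout is not published in the abstract.
STORAGE_LINE = re.compile(r"^(\w+)\s+(\d+)\s+(\d+)\s*$")

def parse_report(lines):
    """Filter-style pass: keep only lines matching the storage-data
    pattern and return (account, files, bytes) tuples -- the detailed
    database records."""
    records = []
    for line in lines:
        m = STORAGE_LINE.match(line)
        if m:
            records.append((m.group(1), int(m.group(2)), int(m.group(3))))
    return records

def summarize(records):
    """Roll detailed records up into per-account totals, the way the
    summarized database is derived from the detailed one."""
    totals = defaultdict(lambda: [0, 0])
    for account, files, nbytes in records:
        totals[account][0] += files
        totals[account][1] += nbytes
    return dict(totals)

report = [
    "HPSS Accounting Report 2001-08-13",   # header line: filtered out
    "alice  120  5000000",
    "bob    30   750000",
    "alice  10   250000",
]
detailed = parse_report(report)
summary = summarize(detailed)
```

Written as a filter, `parse_report` could equally read from standard input, matching the single-source, pipeline-friendly design the abstract describes.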
Acquisition-Management Program
NASA Technical Reports Server (NTRS)
Avery, Don E.; Vann, A. Vernon; Jones, Richard H.; Rew, William E.
1987-01-01
NASA Acquisition Management Subsystem (AMS) program integrated NASA-wide standard automated-procurement-system program developed in 1985. Designed to provide each NASA installation with procurement database concept with on-line terminals for managing, tracking, reporting, and controlling contractual actions and associated procurement data. Subsystem provides control, status, and reporting for various procurement areas. Purpose of standardization is to decrease costs of procurement and operation of automatic data processing; increase procurement productivity; furnish accurate, on-line management information; and improve customer support. Written in ADABAS NATURAL.
The HITRAN 2008 Molecular Spectroscopic Database
NASA Technical Reports Server (NTRS)
Rothman, Laurence S.; Gordon, Iouli E.; Barbe, Alain; Benner, D. Chris; Bernath, Peter F.; Birk, Manfred; Boudon, V.; Brown, Linda R.; Campargue, Alain; Champion, J.-P.;
2009-01-01
This paper describes the status of the 2008 edition of the HITRAN molecular spectroscopic database. The new edition is the first official public release since the 2004 edition, although a number of crucial updates had been made available online since 2004. The HITRAN compilation consists of several components that serve as input for radiative-transfer calculation codes: individual line parameters for the microwave through visible spectra of molecules in the gas phase; absorption cross-sections for molecules having dense spectral features, i.e., spectra in which the individual lines are not resolved; individual line parameters and absorption cross sections for bands in the ultra-violet; refractive indices of aerosols, tables and files of general properties associated with the database; and database management software. The line-by-line portion of the database contains spectroscopic parameters for forty-two molecules including many of their isotopologues.
ERIC Educational Resources Information Center
Perusse, Lyse
This study, which was conducted at the University of Quebec over a period of six months, focuses on four systems for online information retrieval that are available to users at the university and provide access to some of the same databases: BRS (Bibliographic Retrieval Services Inc.), CAN/OLE (Canadian On-Line Inquiry), Lockheed DIALOG, and…
PlantDB – a versatile database for managing plant research
Exner, Vivien; Hirsch-Hoffmann, Matthias; Gruissem, Wilhelm; Hennig, Lars
2008-01-01
Background Research in plant science laboratories often involves usage of many different species, cultivars, ecotypes, mutants, alleles or transgenic lines. This creates a great challenge to keep track of the identity of experimental plants and stored samples or seeds. Results Here, we describe PlantDB – a Microsoft® Office Access database – with a user-friendly front-end for managing information relevant for experimental plants. PlantDB can hold information about plants of different species, cultivars or genetic composition. Introduction of a concise identifier system allows easy generation of pedigree trees. In addition, all information about any experimental plant – from growth conditions and dates, through extracted samples such as RNA, to files containing images of the plants – can be linked unequivocally. Conclusion We have been using PlantDB for several years in our laboratory and found that it greatly facilitates access to relevant information. PMID:18182106
An Abstraction-Based Data Model for Information Retrieval
NASA Astrophysics Data System (ADS)
McAllister, Richard A.; Angryk, Rafal A.
Language ontologies provide an avenue for automated lexical analysis that may be used to supplement existing information retrieval methods. This paper presents a method of information retrieval that takes advantage of WordNet, a lexical database, to generate paths of abstraction, and uses them as the basis for an inverted index structure to be used in the retrieval of documents from an indexed corpus. We present this method as an entrée to a line of research on using ontologies to perform word-sense disambiguation and improve the precision of existing information retrieval techniques.
INFOMAT: The international materials assessment and application centre's internet gateway
NASA Astrophysics Data System (ADS)
Branquinho, Carmen Lucia; Colodete, Leandro Tavares
2004-08-01
INFOMAT is an electronic directory structured to facilitate the search and retrieval of materials science and technology information sources. Linked to the homepage of the International Materials Assessment and Application Centre, INFOMAT presents descriptions of 392 proprietary databases with links to their host systems as well as direct links to over 180 public domain databases and over 2,400 web sites. Among the web sites are associations/unions, governmental and non-governmental institutions, industries, library holdings, market statistics, news services, on-line publications, standardization and intellectual property organizations, and universities/research groups.
GIS and RDBMS Used with Offline FAA Airspace Databases
NASA Technical Reports Server (NTRS)
Clark, J.; Simmons, J.; Scofield, E.; Talbott, B.
1994-01-01
A geographic information system (GIS) and relational database management system (RDBMS) were used in a Macintosh environment to access, manipulate, and display off-line FAA databases of airport and navigational aid locations, airways, and airspace boundaries. This proof-of-concept effort used data available from the Adaptation Controlled Environment System (ACES) and Digital Aeronautical Chart Supplement (DACS) databases to allow FAA cartographers and others to create computer-assisted charts and overlays as reference material for air traffic controllers. These products were created on an engineering model of the future GRASP (GRaphics Adaptation Support Position) workstation that will be used to make graphics and text products for the Advanced Automation System (AAS), which will upgrade and replace the current air traffic control system. Techniques developed during the prototyping effort have shown the viability of using databases to create graphical products without the need for an intervening data entry step.
Semiannual patents review, July 2001-December 2001
Roland Gleisner; Marguerite Sykes; Julie Blankenburg
2002-01-01
This review summarizes patents related to paper recycling that were issued during the last six months of 2001. Two on-line databases, Claims/U.S. Patents Abstracts and Derwent World Patents Index, were searched for this review. This semiannual feature is intended to inform readers about recent developments in equipment design, chemicals and process technology for...
Tracking Community College Transfers Using National Student Clearinghouse Data.
ERIC Educational Resources Information Center
Romano, Richard M.; Wisniewski, Martin
This study shows how community colleges can track almost all of their own students who transfer into both public and private colleges and across state lines using the National Student Clearinghouse (NSC) database. It utilizes data from the student information systems of Broome Community College, New York; Cayuga Community College, New York; the…
NPACT: Naturally Occurring Plant-based Anti-cancer Compound-Activity-Target database
Mangal, Manu; Sagar, Parul; Singh, Harinder; Raghava, Gajendra P. S.; Agarwal, Subhash M.
2013-01-01
Plant-derived molecules have been highly valued by biomedical researchers and pharmaceutical companies for developing drugs, as they are thought to be optimized during evolution. Therefore, we have collected and compiled a central resource, the Naturally Occurring Plant-based Anti-cancer Compound-Activity-Target database (NPACT, http://crdd.osdd.net/raghava/npact/), that gathers information related to experimentally validated plant-derived natural compounds exhibiting anti-cancerous activity (in vitro and in vivo), to complement the other databases. It currently contains 1574 compound entries, and each record provides information on structure, manually curated published data on in vitro and in vivo experiments along with references for users' referral, inhibitory values (IC50/ED50/EC50/GI50), properties (physical, elemental and topological), cancer types, cell lines, protein targets, commercial suppliers and drug likeness of compounds. NPACT can easily be browsed or queried using various options, and an online similarity tool has also been made available. Further, to facilitate retrieval of existing data, each record is hyperlinked to similar databases such as SuperNatural, Herbal Ingredients' Targets, Comparative Toxicogenomics Database, PubChem and NCI-60 GI50 data. PMID:23203877
MySQL/PHP web database applications for IPAC proposal submission
NASA Astrophysics Data System (ADS)
Crane, Megan K.; Storrie-Lombardi, Lisa J.; Silbermann, Nancy A.; Rebull, Luisa M.
2008-07-01
The Infrared Processing and Analysis Center (IPAC) is NASA's multi-mission center of expertise for long-wavelength astrophysics. Proposals for various IPAC missions and programs are ingested via MySQL/PHP web database applications. Proposers use web forms to enter coversheet information and upload PDF files related to the proposal. Upon proposal submission, a unique directory is created on the webserver into which all of the uploaded files are placed. The coversheet information is converted into a PDF file using a PHP extension called FPDF. The files are concatenated into one PDF file using the command-line tool pdftk and then forwarded to the review committee. This work was performed at the California Institute of Technology under contract to the National Aeronautics and Space Administration.
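The submission flow described above might look roughly like the following sketch (in Python rather than the site's actual PHP; the directory-naming scheme and file names are assumptions not stated in the abstract):

```python
import uuid
from pathlib import Path

def build_pdftk_command(coversheet, uploads, output):
    """Assemble the pdftk 'cat' invocation that concatenates the
    generated coversheet PDF with the proposer's uploaded PDFs into
    one output file, as the abstract describes."""
    return ["pdftk", str(coversheet), *map(str, uploads),
            "cat", "output", str(output)]

def stage_proposal(base_dir, filename="proposal.pdf"):
    """Create a unique per-submission directory on the web server and
    return the path for the merged proposal. (Using a UUID for the
    directory name is an assumption; IPAC's scheme is not published.)"""
    proposal_dir = Path(base_dir) / uuid.uuid4().hex
    proposal_dir.mkdir(parents=True)
    return proposal_dir / filename

cmd = build_pdftk_command("coversheet.pdf",
                          ["science.pdf", "figures.pdf"],
                          "proposal.pdf")
# subprocess.run(cmd, check=True)  # requires pdftk on the PATH
```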
The Eclipsing Binary On-Line Atlas (EBOLA)
NASA Astrophysics Data System (ADS)
Bradstreet, D. H.; Steelman, D. P.; Sanders, S. J.; Hargis, J. R.
2004-05-01
In conjunction with the upcoming release of Binary Maker 3.0, an extensive on-line database of eclipsing binaries is being made available. The purposes of the atlas are: (1) allow quick and easy access to information on published eclipsing binaries; (2) amass a consistent database of light and radial velocity curve solutions to aid in solving new systems; (3) provide invaluable querying capabilities on all of the parameters of the systems so that informative research can be quickly accomplished on a multitude of published results; (4) aid observers in establishing new observing programs based upon stars needing new light and/or radial velocity curves; (5) encourage workers to submit their published results so that others may have easy access to their work; and (6) provide a vast but easily accessible storehouse of information on eclipsing binaries to accelerate the process of understanding analysis techniques and current work in the field. The database will eventually consist of all published eclipsing binaries with light curve solutions. The following information and data will be supplied whenever available for each binary: original light curves in all bandpasses, original radial velocity observations, light curve parameters, RA and Dec, V-magnitudes, spectral types, color indices, periods, binary type, 3D representation of the system near quadrature, plots of the original light curves and synthetic models, plots of the radial velocity observations with theoretical models, and Binary Maker 3.0 data files (parameter, light curve, radial velocity). The pertinent references for each star are also given with hyperlinks directly to the papers via the NASA Abstract website for downloading, if available. In addition, the Atlas has extensive searching options so that workers can search for binaries with specific characteristics. The website has more than 150 systems already uploaded. The URL for the site is http://ebola.eastern.edu/.
Incorporating the APS Catalog of the POSS I and Image Archive in ADS
NASA Technical Reports Server (NTRS)
Humphreys, Roberta M.
1998-01-01
The primary purpose of this contract was to develop the software to both create and access an on-line database of images from digital scans of the Palomar Sky Survey. This required modifying our DBMS (called Star Base) to create an image database from the actual raw pixel data from the scans. The digitized images are processed into a set of coordinate-reference index and pixel files that are stored in run-length files, thus achieving an efficient lossless compression. For efficiency and ease of referencing, each digitized POSS I plate is then divided into 900 subplates. Our custom DBMS maps each query into the corresponding POSS plate(s) and subplate(s). All images from the appropriate subplates are retrieved from disk with byte-offsets taken from the index files. These are assembled on-the-fly into a GIF image file for browser display, and a FITS format image file for retrieval. The FITS images have a pixel size of 0.33 arcseconds. The FITS header contains astrometric and photometric information. This method keeps the disk requirements manageable while allowing for future improvements. When complete, the APS Image Database will contain over 130 Gb of data. A set of web query forms is available on-line, as well as an on-line tutorial and documentation. The database is distributed to the Internet by a high-speed SGI server and a high-bandwidth disk system. URL is http://aps.umn.edu/IDB/. The image database software is written in Perl and C and has been compiled on SGI computers with IRIX 5.3. A copy of the written documentation is included and the software is on the accompanying exabyte tape.
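The plate-to-subplate mapping could be sketched as follows (plate dimensions, the 30x30 grid, and the index-file structure are assumptions; the abstract states only that each plate is divided into 900 subplates and that pixel blocks are located by byte offsets):

```python
def subplate_index(x, y, plate_size=33000, grid=30):
    """Map a pixel position on a digitized POSS plate to one of the
    900 subplates (a 30x30 grid is assumed, since 30*30 = 900; the
    plate size in pixels is illustrative)."""
    sub = plate_size // grid          # pixels per subplate edge
    col = min(x // sub, grid - 1)
    row = min(y // sub, grid - 1)
    return row * grid + col

# Hypothetical index file: subplate number -> (byte offset, length)
# of its run-length-compressed pixel block on disk.
index = {0: (0, 4096), 31: (4096, 8192)}

def fetch_block(index, x, y):
    """Look up the byte range to read for the subplate covering (x, y)."""
    return index.get(subplate_index(x, y))
```

A server would seek to the returned offset, decompress the run-length block, and assemble the requested subplates into a GIF or FITS image on the fly, as the abstract describes.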
The GRIDView Visualization Package
NASA Astrophysics Data System (ADS)
Kent, B. R.
2011-07-01
Large three-dimensional data cubes, catalogs, and spectral line archives are increasingly important elements of the data discovery process in astronomy. Visualization of large data volumes is of vital importance for the success of large spectral line surveys. Examples of data reduction utilizing the GRIDView software package are shown. The package allows users to manipulate data cubes, extract spectral profiles, and measure line properties. The package and included graphical user interfaces (GUIs) are designed with pipeline infrastructure in mind. The software has been used with great success analyzing spectral line and continuum data sets obtained from large radio survey collaborations. The tools are also important for multi-wavelength cross-correlation studies and incorporate Virtual Observatory client applications for overlaying database information in real time as cubes are examined by users.
Models in Translational Oncology: A Public Resource Database for Preclinical Cancer Research.
Galuschka, Claudia; Proynova, Rumyana; Roth, Benjamin; Augustin, Hellmut G; Müller-Decker, Karin
2017-05-15
The devastating diseases of human cancer are mimicked in basic and translational cancer research by a steadily increasing number of tumor models, a situation requiring a platform with standardized reports to share model data. The Models in Translational Oncology (MiTO) database was developed as a unique Web platform aiming for a comprehensive overview of preclinical models covering genetically engineered organisms, models of transplantation, chemical/physical induction, or spontaneous development, reviewed here. MiTO serves data entry for metastasis profiles and interventions. Moreover, cell lines and animal lines including tool strains can be recorded. Hyperlinks for connection with other databases and file uploads as supplementary information are supported. Several communication tools are offered to facilitate exchange of information. Notably, intellectual property can be protected prior to publication by inventor-defined accessibility of any given model. Data recall is via a highly configurable keyword search. Genome editing is expected to result in changes of the spectrum of model organisms, a reason to open MiTO for species-independent data. Registered users may deposit their own model fact sheets (FS). MiTO experts check them for plausibility. Independently, manually curated FS are provided to principal investigators for revision and publication. Importantly, noneditable versions of reviewed FS can be cited in peer-reviewed journals. Cancer Res; 77(10); 2557-63. ©2017 American Association for Cancer Research.
The Alaska Volcano Observatory Website a Tool for Information Management and Dissemination
NASA Astrophysics Data System (ADS)
Snedigar, S. F.; Cameron, C. E.; Nye, C. J.
2006-12-01
The Alaska Volcano Observatory's (AVO's) website served as a primary information management tool during the 2006 eruption of Augustine Volcano. The AVO website is dynamically generated from a database back-end. This system enabled AVO to quickly and easily update the website, and provide content based on user queries to the database. During the Augustine eruption, the new AVO website was heavily used by members of the public (up to 19 million hits per day), and this was largely because the AVO public pages were an excellent source of up-to-date information. There are two different, yet fully integrated parts of the website. An external, public site (www.avo.alaska.edu) allows the general public to track eruptive activity by viewing the latest photographs, webcam images, webicorder graphs, and official information releases about activity at the volcano, as well as maps, previous eruption information, bibliographies, and rich information about other Alaska volcanoes. The internal half of the website hosts diverse geophysical and geological data (as browse images) in a format equally accessible by AVO staff in different locations. In addition, an observation log allows users to enter information about anything from satellite passes to seismic activity to ash fall reports into a searchable database. The individual(s) on duty at the watch office use forms on the internal website to post a summary of the latest activity directly to the public website, ensuring that the public website is always up to date. The internal website also serves as a starting point for monitoring Alaska's volcanoes. AVO's extensive image database allows AVO personnel to upload many photos, diagrams, and videos which are then available to be browsed by anyone in the AVO community. Selected images are viewable from the public page.
The primary webserver is housed at the University of Alaska Fairbanks, and holds a MySQL database with over 200 tables and several thousand lines of PHP code gluing the database and website together. The database currently holds 95 GB of data. Webcam images and webicorder graphs are pulled from servers in Anchorage every few minutes. Other servers in Fairbanks generate earthquake location plots and spectrograms.
MeDReaders: a database for transcription factors that bind to methylated DNA.
Wang, Guohua; Luo, Ximei; Wang, Jianan; Wan, Jun; Xia, Shuli; Zhu, Heng; Qian, Jiang; Wang, Yadong
2018-01-04
Understanding the molecular principles governing interactions between transcription factors (TFs) and DNA targets is one of the main subjects for transcriptional regulation. Recently, emerging evidence demonstrated that some TFs could bind to DNA motifs containing highly methylated CpGs both in vitro and in vivo. Identification of such TFs and elucidation of their physiological roles now become an important stepping-stone toward understanding the mechanisms underlying the methylation-mediated biological processes, which have crucial implications for human disease and disease development. Hence, we constructed a database, named MeDReaders, to collect information about methylated DNA binding activities. A total of 731 TFs, which could bind to methylated DNA sequences, were manually curated in human and mouse studies reported in the literature. In silico approaches were applied to predict methylated and unmethylated motifs of 292 TFs by integrating whole genome bisulfite sequencing (WGBS) and ChIP-Seq datasets in six human cell lines and one mouse cell line extracted from the ENCODE and GEO databases. The MeDReaders database will provide a comprehensive resource for further studies and aid related experiment designs. The database implements unified access for users to most TFs involved in such methylation-associated binding activities. The website is available at http://medreader.org/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Jacquinet-Husson, Nicole; Crépeau, Laurent; Capelle, Virginie; Scott, Noëlle; Armante, Raymond; Chédin, Alain
2010-05-01
Remote sensing of the terrestrial atmosphere has advanced significantly in recent years, and this has placed greater demands on the compilations in terms of accuracy, additional species, and spectral coverage. The successful performance of the new generation of hyperspectral Earth atmospheric sounders like AIRS (Atmospheric Infrared Sounder, http://www-airs.jpl.nasa.gov/), in the USA, and IASI (Infrared Atmospheric Sounding Interferometer, http://earth-sciences.cnes.fr/IASI/), in Europe, which have a better vertical resolution and accuracy compared to the previous satellite infrared vertical sounders, depends ultimately on the accuracy to which the spectroscopic parameters of the optically active gases are known, since they constitute an essential input to the forward radiative transfer models that are used to interpret their observations. In this context, the GEISA (1) (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) computer-accessible database, initiated in 1976, is continuously developed and maintained at LMD (Laboratoire de Météorologie Dynamique, France). The updated 2009 edition of GEISA (GEISA-09) is a system comprising three independent sub-databases devoted respectively to: line transition parameters, infrared and ultraviolet/visible absorption cross-sections, and microphysical and optical properties of atmospheric aerosols. In this edition, the contents of which will be summarized, 50 molecules are involved in the line transition parameters sub-database, including 111 isotopes, for a total of 3,807,997 entries, in the spectral range from 10^-6 to 35,877.031 cm^-1. Currently, GEISA is involved in activities related to the assessment of the capabilities of IASI through the GEISA/IASI database derived from GEISA (2).
Since the Metop (http://www.eumetsat.int) launch (October 19th 2006), GEISA/IASI is the reference spectroscopic database for the validation of the level-1 IASI data, using the 4A radiative transfer model (3) (4A/LMD http://ara.lmd.polytechnique.fr; 4A/OP co-developed by LMD and NOVELTIS, http://www.noveltis.fr/) with the support of CNES (2006). Special emphasis will be given to the description of GEISA/IASI. Spectroscopic-parameter quality requirements will be discussed in the context of comparisons between observed and simulated Earth's atmosphere spectra. GEISA and GEISA/IASI are implemented on the CNES/CNRS Ether Products and Services Centre web site (http://ether.ipsl.jussieu.fr), where all archived spectroscopic data can be handled through general, user-friendly management software facilities. More than 350 researchers are registered for on-line use of GEISA. Refs: (1) Jacquinet-Husson N., N.A. Scott, A. Chédin, L. Crépeau, R. Armante, V. Capelle, J. Orphal, A. Coustenis, C. Boonne, N. Poulet-Crovisier, et al. THE GEISA SPECTROSCOPIC DATABASE: Current and future archive for Earth and planetary atmosphere studies. JQSRT 109 (2008) 1043-1059. (2) Jacquinet-Husson N., N.A. Scott, A. Chédin, K. Garceran, R. Armante, et al. The 2003 edition of the GEISA/IASI spectroscopic database. JQSRT 95 (2005) 429-467. (3) Scott, N.A. and A. Chedin. A fast line-by-line method for atmospheric absorption computations: The Automatized Atmospheric Absorption Atlas. J. Appl. Meteor. 20 (1981) 556-564.
NASA Technical Reports Server (NTRS)
1983-01-01
Castle Industries, Inc. is a small machine shop manufacturing replacement plumbing repair parts, such as faucet, tub and ballcock seats. Therese Castley, president of Castle, decided to introduce Monel because it offered a chance to improve competitiveness and expand the product line. Before expanding, Castley sought NERAC assistance on Monel technology. NERAC (New England Research Application Center) provided an information package which proved very helpful. The NASA database was included in NERAC's search and yielded a wealth of information on machining Monel.
Amadoz, Alicia; González-Candelas, Fernando
2007-04-20
Most research scientists working in the fields of molecular epidemiology, population and evolutionary genetics are confronted with the management of large volumes of data. Moreover, the data used in studies of infectious diseases are complex and usually derive from different institutions such as hospitals or laboratories. Since no public database scheme incorporating clinical and epidemiological information about patients and molecular information about pathogens is currently available, we have developed an information system, composed of a main database and a web-based interface, which integrates both types of data and satisfies requirements of good organization, simple accessibility, data security and multi-user support. From the moment a patient arrives at a hospital or health centre until the processing and analysis of molecular sequences obtained from infectious pathogens in the laboratory, a great deal of information is collected from different sources. We have divided the most relevant data into 12 conceptual modules around which we have organized the database schema. Our schema is very complete and covers many aspects of sample sources, samples, laboratory processes, molecular sequences, phylogenetics results, clinical tests and results, clinical information, treatments, pathogens, transmissions, outbreaks and bibliographic information. Communication between end-users and the selected Relational Database Management System (RDBMS) is carried out by default through a command-line window or through a user-friendly, web-based interface which provides access and management tools for the data. epiPATH is an information system for managing clinical and molecular information from infectious diseases. It facilitates daily work related to infectious pathogens and sequences obtained from them. This software is intended for local installation in order to safeguard private data and gives advanced SQL users the flexibility to adapt it to their needs.
The database schema, tool scripts and web-based interface are free software but data stored in our database server are not publicly available. epiPATH is distributed under the terms of GNU General Public License. More details about epiPATH can be found at http://genevo.uv.es/epipath.
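The linked-module idea behind the schema can be illustrated with a minimal sketch (three of the twelve conceptual modules, using SQLite in place of the unnamed RDBMS; all table and column names are illustrative, not epiPATH's):

```python
import sqlite3

# Three modules -- patients, samples, sequences -- chained by foreign
# keys, so clinical and molecular data stay linked end to end.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient  (id INTEGER PRIMARY KEY, hospital TEXT);
    CREATE TABLE sample   (id INTEGER PRIMARY KEY,
                           patient_id INTEGER REFERENCES patient(id),
                           collected TEXT);
    CREATE TABLE sequence (id INTEGER PRIMARY KEY,
                           sample_id INTEGER REFERENCES sample(id),
                           pathogen TEXT, bases TEXT);
""")
conn.execute("INSERT INTO patient VALUES (1, 'Hospital A')")
conn.execute("INSERT INTO sample VALUES (1, 1, '2007-01-15')")
conn.execute("INSERT INTO sequence VALUES (1, 1, 'HCV', 'ACGT')")

# Join across modules: pathogens sequenced for patients at Hospital A.
rows = conn.execute("""
    SELECT p.hospital, q.pathogen FROM sequence q
    JOIN sample s ON q.sample_id = s.id
    JOIN patient p ON s.patient_id = p.id
    WHERE p.hospital = 'Hospital A'
""").fetchall()
```

The same join pattern extends naturally to the remaining modules (treatments, transmissions, outbreaks, and so on), which is the organizational benefit the abstract claims.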
Consensus Assignments for Water Vapor Lines Not Assigned on the HITRAN Database: 13,200 to 16,500/cm
NASA Technical Reports Server (NTRS)
Giver, Lawerence P.; Chackerian, Charles, Jr.; Freedman, Richard S.; Varanasi, Prasad; Gore, Warren (Technical Monitor)
2000-01-01
There are nearly 800 water vapor lines in the 13,200-16,500/cm region that do not have rovibrational assignments in the HITRAN database. The positions and intensities in the database were determined by Mandin et al., but assignments could not be determined at that time. Polyansky et al. have now assigned over 600 of the unassigned lines in the 11,200-16,500/cm region. Schwenke has also given rovibrational assignments to many of these unassigned lines throughout the visible and near-infrared. Both articles changed the assignments of some HITRAN lines. Carleer et al. extend assignments to some weaker lines measured by them on new spectra with excellent signal/noise. However, some lines measured by Mandin et al. were omitted by Carleer et al. because of blends due to lower spectral resolution. The rovibrational assignments of Polyansky et al. completely agree with those in Schwenke's article for only about 200 lines. However, Schwenke's ab initio line list is available on his internet site (http://ccf.arc.nasa.gov/-dschwenke). A detailed comparison of the Polyansky et al. line list, the Carleer et al. line list, and Schwenke's ab initio line list shows a larger number of agreements. In many cases the disagreement is only about the vibrational and/or rotational upper level, while there is agreement on the lower state assignment and energy level, "E", which is of primary importance for atmospheric applications. We will present a line list of "consensus" assignments in the 13,200-16,500/cm region for consideration for inclusion in the HITRAN and GEISA databases. This will substantially reduce the number of unassigned lines in the databases in this spectral region.
NASA gateway requirements analysis
NASA Technical Reports Server (NTRS)
Duncan, Denise R.; Doby, John S.; Shockley, Cynthia W.
1991-01-01
NASA devotes approximately 40 percent of its budget to R&D. Twelve NASA Research Centers and their contractors conduct this R&D, which ranges across many disciplines and is fueled by information about previous endeavors. Locating the right information is crucial. While NASA researchers use peer contacts as their primary source of scientific and technical information (STI), on-line bibliographic databases - both government-owned and commercial - are also frequently consulted. Once identified, the STI must be delivered in a usable format. This report assesses the appropriateness of developing an intelligent gateway interface for the NASA R&D community as a means of obtaining improved access to relevant STI resources outside of NASA's Remote Console (RECON) on-line bibliographic database. A study was conducted to determine (1) the information requirements of the R&D community, (2) the information sources to meet those requirements, and (3) ways of facilitating access to those information sources. Findings indicate that NASA researchers need more comprehensive STI coverage of disciplines not now represented in the RECON database. This augmented subject coverage should preferably be provided by both domestic and foreign STI sources. It was also found that NASA researchers frequently request rapid delivery of STI, in its original format. Finally, it was found that researchers need a better system for alerting them to recent developments in their areas of interest. A gateway that provides access to domestic and international information sources can also solve several shortcomings in the present STI delivery system. NASA should further test the practicality of a gateway as a mechanism for improved STI access.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.
2004-05-12
An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data that are stored in a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystal and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using the secure SSL connection using secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments.
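A length-prefixed message framing for such a client/server protocol might look like the following sketch (the command names and wire format are assumptions; the paper does not publish them; the four remote-control functions become a command field):

```python
import json
import struct

# Hypothetical command set mirroring the four remote-control functions
# named in the abstract.
COMMANDS = {"create", "modify", "view", "process"}

def pack(command, payload):
    """Frame a message: 4-byte big-endian length prefix + JSON body."""
    if command not in COMMANDS:
        raise ValueError("unknown command: %s" % command)
    body = json.dumps({"cmd": command, "data": payload}).encode()
    return struct.pack("!I", len(body)) + body

def unpack(frame):
    """Inverse of pack(): read the length prefix, decode the body."""
    (length,) = struct.unpack("!I", frame[:4])
    msg = json.loads(frame[4:4 + length])
    return msg["cmd"], msg["data"]
```

In the real system these frames would travel over the secure UNIX sockets described above; the framing merely ensures each behavior's message can be read whole off the stream.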
Automatic drawing for traffic marking with MMS LIDAR intensity
NASA Astrophysics Data System (ADS)
Takahashi, G.; Takeda, H.; Shimano, Y.
2014-05-01
Upgrading the database of CYBER JAPAN has been strategically promoted since the "Basic Act on Promotion of Utilization of Geographical Information" was enacted in May 2007. In particular, there is high demand for the road information that forms a framework of this database. Road inventory mapping must therefore be accurate and free of the variation introduced by individual human operators. Further, the large number of traffic markings that are periodically maintained, and possibly changed, requires an efficient method for updating spatial data. Currently, we map traffic markings by manual photogrammetric drawing. However, this method is not sufficiently productive, and data variation can arise from individual operators. In contrast, Mobile Mapping Systems (MMS) and high-density Laser Imaging Detection and Ranging (LIDAR) scanners are rapidly gaining popularity. The aim of this study is to build an efficient method for automatically drawing traffic markings from MMS LIDAR data. The key idea is to extract lines using a Hough transform strategically focused on changes in local reflection intensity along scan lines; note, however, that the method must process every traffic marking. In this paper, we discuss a highly accurate, operator-independent method that applies the following steps: (1) binarizing LIDAR points by intensity and extracting higher-intensity points; (2) generating a Triangulated Irregular Network (TIN) from the higher-intensity points; (3) deleting arcs by length and generating outline polygons on the TIN; (4) generating buffers from the outline polygons; (5) extracting points within the buffers from the original LIDAR points; (6) extracting local-intensity-changing points along scan lines from the extracted points; (7) extracting lines from the intensity-changing points through a Hough transform; and (8) connecting lines to generate automated traffic marking mapping data.
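Step (7) of the pipeline, extracting lines from intensity-changing points via a Hough transform, can be sketched in a few lines. The point coordinates below are invented stand-ins for the intensity-changing points of step (6), and the bin sizes are arbitrary; this is a toy accumulator, not the paper's implementation.

```python
import math
from collections import Counter

# Hypothetical stand-ins for step (6): local-intensity-changing points
# along scan lines. Four of the five points lie on the line y = 1.
points = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (3.0, 1.0), (1.5, 4.0)]

def hough_lines(points, n_theta=180, rho_step=0.1):
    """Step (7): each point votes for every (theta, rho) line through it;
    peaks in the accumulator are candidate traffic-marking edges."""
    votes = Counter()
    for x, y in points:
        for tbin in range(n_theta):
            theta = math.pi * tbin / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(tbin, round(rho / rho_step))] += 1
    return votes

votes = hough_lines(points)
(tbin, rbin), count = votes.most_common(1)[0]
print(count)  # 4: the four collinear points dominate the accumulator
```

The strongest accumulator cell corresponds to theta = 90 degrees (a horizontal line), recovering the collinear subset while the outlier contributes no competing peak.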
ERIC Educational Resources Information Center
Kasner, Melanie; Reid, Greg; MacDonald, Cathy
2012-01-01
The purpose of the research was to conduct a quality indicator analysis of studies exploring the effects of antecedent exercise on self-stimulatory behaviors of individuals with autism spectrum disorders (ASD). Educational Resources Information Center (ERIC), Google Scholar, SPORTDiscus, PsycINFO, and PubMed/MEDLINE databases from 1980 to October…
Semiannual patents review July 2002–December 2002
Roland Gleisner; Julie Blankenburg
2003-01-01
This review summarizes patents related to paper recycling that were issued during the last six months of 2002. Two on-line databases, Claims/U.S. Patents Abstracts and Derwent World Patents Index, were searched for this review. This semiannual feature is intended to inform readers about recent developments in equipment design, chemicals, and process technology for...
Semiannual patents review, January-June 1999
Marguerite Sykes; Julie Blankenburg
1999-01-01
This review summarizes patents related to paper recycling that were issued during the first 6 months of 1999. The two on-line databases used for this search were Claims/U.S. Patents Abstracts and Derwent World Patents Index. This semiannual feature is intended to inform readers about the latest developments in equipment design, chemicals, and process technology for...
JANE, A new information retrieval system for the Radiation Shielding Information Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trubey, D.K.
A new information storage and retrieval system has been developed for the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory to replace mainframe systems that have become obsolete. The database contains citations and abstracts of literature that were selected by RSIC analysts and indexed with terms from a controlled vocabulary. The database, begun in 1963, has been maintained continuously since that time. The new system, called JANE, incorporates automatic indexing techniques and on-line retrieval using the RSIC Data General Eclipse MV/4000 minicomputer. The automatic indexing and retrieval techniques, based on fuzzy-set theory, allow the presentation of results in order of Retrieval Status Value. The fuzzy-set membership function depends on term frequency in the titles and abstracts and on Term Discrimination Values, which indicate the resolving power of the individual terms. These values are determined by the Cover Coefficient method. The use of a commercial database to store and retrieve the indexing information permits rapid retrieval of the stored documents. Comparisons of the new and presently used systems for actual searches of the literature indicate that it is practical to replace the mainframe systems with a minicomputer system similar to the present version of JANE. 18 refs., 10 figs.
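The ranking idea in JANE — a bounded fuzzy membership per query term, weighted by how well the term discriminates documents, summed into a Retrieval Status Value — can be sketched as follows. The mini-collection is invented, and an IDF-style weight stands in for the Cover Coefficient method, which is not reproduced here.

```python
import math
from collections import Counter

# Hypothetical mini-collection standing in for RSIC abstracts.
docs = {
    "d1": "neutron shielding concrete neutron attenuation",
    "d2": "gamma ray shielding lead",
    "d3": "reactor core neutron flux calculation",
}
tokenized = {d: text.split() for d, text in docs.items()}
N = len(docs)

def weight(term):
    """Stand-in discrimination value: rarer terms resolve better.
    (The paper's Cover Coefficient method is not reproduced here.)"""
    df = sum(term in toks for toks in tokenized.values())
    return math.log(1 + N / df) if df else 0.0

def rsv_ranking(query):
    """Fuzzy-style Retrieval Status Value: per-term memberships in [0, 1)
    combined per document; documents returned in ranked order."""
    scores = {}
    for d, toks in tokenized.items():
        tf = Counter(toks)
        memberships = [1 - math.exp(-tf[t] * weight(t)) for t in query.split()]
        scores[d] = sum(memberships)
    return sorted(scores, key=scores.get, reverse=True)

print(rsv_ranking("neutron shielding"))  # 'd1' ranks first
```

Document d1 matches both query terms (one of them twice), so its summed membership dominates and it heads the ranked list.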
Braun, Bremen L.; Schott, David A.; Portwood, II, John L.; Schaeffer, Mary L.; Harper, Lisa C.; Gardiner, Jack M.; Cannon, Ethalinda K.; Andorf, Carson M.
2017-01-01
Abstract The Maize Genetics and Genomics Database (MaizeGDB) team prepared a survey to identify breeders’ needs for visualizing pedigrees, diversity data and haplotypes in order to prioritize tool development and curation efforts at MaizeGDB. The survey was distributed to the maize research community on behalf of the Maize Genetics Executive Committee in Summer 2015. The survey garnered 48 responses from maize researchers, of which more than half were self-identified as breeders. The survey showed that the maize researchers considered their top priorities for visualization as: (i) displaying single nucleotide polymorphisms in a given region for a given list of lines, (ii) showing haplotypes for a given list of lines and (iii) presenting pedigree relationships visually. The survey also asked which populations would be most useful to display. The following two populations were on top of the list: (i) 3000 publicly available maize inbred lines used in Romay et al. (Comprehensive genotyping of the USA national maize inbred seed bank. Genome Biol, 2013;14:R55) and (ii) maize lines with expired Plant Variety Protection Act (ex-PVP) certificates. Driven by this strong stakeholder input, MaizeGDB staff are currently working in four areas to improve its interface and web-based tools: (i) presenting immediate progenies of currently available stocks at the MaizeGDB Stock pages, (ii) displaying the most recent ex-PVP lines described in the Germplasm Resources Information Network (GRIN) on the MaizeGDB Stock pages, (iii) developing network views of pedigree relationships and (iv) visualizing genotypes from SNP-based diversity datasets. These survey results can help other biological databases to direct their efforts according to user preferences as they serve similar types of data sets for their communities. Database URL: https://www.maizegdb.org PMID:28605768
A storage scheme for the real-time database supporting the on-line commitment
NASA Astrophysics Data System (ADS)
Dai, Hong-bin; Jing, Yu-jian; Wang, Hui
2013-07-01
Modern SCADA (Supervisory Control and Data Acquisition) systems have been applied to many aspects of everyday life. As time goes on, the requirements of the applications of these systems vary, and thus the data structure of the real-time database, which is the core of a SCADA system, often needs modification. As a result, a commitment consisting of a sequence of configuration operations that modify the data structure of the real-time database is performed from time to time. Although it is simple to perform an off-line commitment by stopping and then restarting the system, reconstructing all the data in the real-time database in the process, it is much preferable, and in some cases even necessary, to perform the commitment on-line, with the real-time database still providing real-time service and the system continuing to work normally. In this paper, a storage scheme for the data in the real-time database is proposed. It helps the real-time database support on-line commitment, during which real-time service remains available.
CancerDR: cancer drug resistance database.
Kumar, Rahul; Chaudhary, Kumardeep; Gupta, Sudheer; Singh, Harinder; Kumar, Shailesh; Gautam, Ankur; Kapoor, Pallavi; Raghava, Gajendra P S
2013-01-01
Cancer therapies are limited by the development of drug resistance, and mutations in drug targets are one of the main reasons for acquired resistance. Adequate knowledge of these mutations in drug targets would help in designing effective personalized therapies. Keeping this in mind, we have developed a database, "CancerDR", which provides information on 148 anti-cancer drugs and their pharmacological profiling across 952 cancer cell lines. CancerDR provides comprehensive information about each drug target, including: (i) sequences of natural variants, (ii) mutations, (iii) tertiary structure, and (iv) alignment profiles of mutants/variants. A number of web-based tools have been integrated into CancerDR. This database will be very useful for identification of genetic alterations in genes encoding drug targets and, in turn, the residues responsible for drug resistance. CancerDR allows users to identify promiscuous drug molecules that can kill a wide range of cancer cells. CancerDR is freely accessible at http://crdd.osdd.net/raghava/cancerdr/
Silva-Lopes, Victor W; Monteiro-Leal, Luiz H
2003-07-01
The development of new technology and the possibility of fast information delivery over either Internet or Intranet connections are changing education. Microanatomy education depends basically on the correct interpretation of microscopy images by students. Modern microscopes coupled to computers enable the presentation of these images in digital form through image databases. However, access to this new technology is restricted to those living in cities and towns with an Information Technology (IT) infrastructure. This study describes the creation of a free Internet histology database composed of high-quality images and also presents an inexpensive way to supply it to a greater number of students through Internet/Intranet connections. Using state-of-the-art scientific instruments, we developed a Web page (http://www2.uerj.br/~micron/atlas/atlasenglish/index.htm) that, in association with a multimedia microscopy laboratory, is intended to help reduce the IT educational gap between developed and underdeveloped regions. Copyright 2003 Wiley-Liss, Inc.
IUEAGN: A database of ultraviolet spectra of active galactic nuclei
NASA Technical Reports Server (NTRS)
Pike, G.; Edelson, R.; Shull, J. M.; Saken, J.
1993-01-01
In 13 years of operation, IUE has gathered approximately 5000 spectra of almost 600 Active Galactic Nuclei (AGN). In order to undertake AGN studies that require large amounts of data, we are reducing this entire archive in a consistent manner and creating a homogeneous, easy-to-use database. First, the spectra are extracted using the optimal extraction algorithm. Continuum fluxes are then measured across predefined bands, and line fluxes are measured with a multi-component fit. These results, along with source information such as redshifts and positions, are placed in the IUEAGN relational database. Analysis algorithms, statistical tests, and plotting packages run within this structure, and the flexible database can accommodate future data when they are released. This archival approach has already been used to survey line and continuum variability in six bright Seyfert 1s and rapid continuum variability in 14 blazars. Among the results that could only be obtained from a large archival study is evidence that blazars show a positive correlation between degree of variability and apparent luminosity, while Seyfert 1s show an anti-correlation. This suggests that beaming dominates the ultraviolet properties of blazars, while thermal emission from an accretion disk dominates for Seyfert 1s. Our future plans include a survey of line ratios in Seyfert 1s, to be fitted with photoionization models to test the models and determine the range of temperatures, densities, and ionization parameters. We will also include data from IRAS, Einstein, EXOSAT, and ground-based telescopes to measure multi-wavelength correlations and broadband spectral energy distributions.
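The "continuum flux across a predefined band" measurement mentioned above amounts to averaging the spectrum's flux over a wavelength window. A toy sketch, with made-up wavelengths (Angstroms) and flux values standing in for an extracted IUE spectrum:

```python
# Illustrative spectrum as (wavelength, flux) samples; the emission
# feature at 1550 A is deliberately outside the measured band.
spectrum = [(1400, 2.0), (1450, 2.2), (1500, 2.1), (1550, 5.0), (1600, 2.0)]

def band_flux(spectrum, lo, hi):
    """Mean flux over the predefined band [lo, hi] (inclusive)."""
    vals = [f for (w, f) in spectrum if lo <= w <= hi]
    return sum(vals) / len(vals)

print(round(band_flux(spectrum, 1400, 1500), 2))  # 2.1
```

Measuring several such bands per spectrum yields the homogeneous continuum light curves the archival variability surveys rely on; line fluxes need the multi-component fit instead, which this sketch does not attempt.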
A Bioinformatics Workflow for Variant Peptide Detection in Shotgun Proteomics*
Li, Jing; Su, Zengliu; Ma, Ze-Qiang; Slebos, Robbert J. C.; Halvey, Patrick; Tabb, David L.; Liebler, Daniel C.; Pao, William; Zhang, Bing
2011-01-01
Shotgun proteomics data analysis usually relies on database search. However, commonly used protein sequence databases do not contain information on protein variants and thus prevent variant peptides and proteins from being identified. Including known coding variations in protein sequence databases could help alleviate this problem. Based on our recently published human Cancer Proteome Variation Database, we have created a protein sequence database that comprehensively annotates thousands of cancer-related coding variants collected in the Cancer Proteome Variation Database as well as noncancer-specific ones from the Single Nucleotide Polymorphism Database (dbSNP). Using this database, we then developed a data analysis workflow for variant peptide identification in shotgun proteomics. The high risk of false positive variant identifications was addressed by a modified false discovery rate estimation method. Analysis of the colorectal cancer cell lines SW480, RKO, and HCT-116 revealed a total of 81 peptides that contain either noncancer-specific or cancer-related variations. Twenty-three of 26 variants randomly selected from the 81 were confirmed by genomic sequencing. We further applied the workflow to data sets from three individual colorectal tumor specimens. A total of 204 distinct variant peptides were detected, and five carried known cancer-related mutations. Each individual showed a specific pattern of cancer-related mutations, suggesting potential use of this type of information for personalized medicine. Compatibility of the workflow has been tested with four popular database search engines: Sequest, Mascot, X!Tandem, and MyriMatch. In summary, we have developed a workflow that effectively uses existing genomic data to enable variant peptide detection in proteomics. PMID:21389108
Flexible data registration and automation in semiconductor production
NASA Astrophysics Data System (ADS)
Dudde, Ralf; Staudt-Fischbach, Peter; Kraemer, Benedict
1997-08-01
The need for cost reduction and flexibility in semiconductor production will result in wider application of computer-based automation systems. With the setup of a new and advanced CMOS semiconductor line at the Fraunhofer Institute for Silicon Technology [ISIT, Itzehoe (D)], a new line information system (LIS) was introduced, based on an advanced model for the underlying data structure. This data model was implemented in an ORACLE RDBMS. A cellworks-based system (JOSIS) was used for the integration of the production equipment, communication, and automated database bookings and information retrievals. During the ramp-up of the production line, this new system is used for fab control. The data model and the cellworks-based system integration are explained. The system enables an on-line overview of the work in progress in the fab, lot order history, and equipment status and history. Based on these figures, improved production and cost monitoring and optimization are possible. First examples of the information gained by this system are presented. The modular set-up of the LIS will allow easy data exchange with additional software tools such as schedulers, fab control systems such as PROMIS, and accounting systems such as SAP. Modifications necessary for the integration of PROMIS are described.
NASA Astrophysics Data System (ADS)
Nakaike, Shin'ichi; Tanaka, Masao
The authors describe the present status of the patent information service provided by JAPIO, the new on-line system project (PATOLIS-III), the Paperless Project of the Patent Office, and the input of domestic patent gazettes onto optical disks. They also describe the CD-ROM created from image information of the patent gazettes produced under the Paperless Project, its production method, and the terminals and their functions. Some problems found with the JAPIO CD-ROM, such as the time lag before issuance and the treatment of multiple copies, are mentioned along with countermeasures against them.
CHOmine: an integrated data warehouse for CHO systems biology and modeling
Hanscho, Michael; Ruckerbauer, David E.; Zanghellini, Jürgen; Borth, Nicole
2017-01-01
Abstract The last decade has seen a surge in published genome-scale information for Chinese hamster ovary (CHO) cells, which are the main production vehicles for therapeutic proteins. While a single access point is available at www.CHOgenome.org, the primary data are distributed over several databases at different institutions. Research is currently hampered by a plethora of gene names and IDs that vary between published draft genomes and databases, making systems biology analyses cumbersome and elaborate. Here we present CHOmine, an integrative data warehouse connecting data from various databases and providing links to others. Furthermore, we introduce CHOmodel, a web-based resource that provides access to recently published CHO cell line-specific metabolic reconstructions. Both resources allow users to query CHO-relevant data and find interconnections between different types of data, and thus provide a simple, standardized entry point to the world of CHO systems biology. Database URL: http://www.chogenome.org PMID:28605771
The Forum, 1998-2002. Research Forum on Children, Families, and the New Federalism.
ERIC Educational Resources Information Center
Oshinsky, Carole J., Ed.
2002-01-01
This document contains 16 issues of the first 5 years of a newsletter encouraging collaborative research and informed policy on welfare reform and focusing on the use of an on-line database of child welfare research projects, as well as research and policy issues related to implementation studies, indicators of well-being, and administrative data.…
Code of Federal Regulations, 2010 CFR
2010-01-01
...(s) located in Department's public reference room. 221.550 Section 221.550 Aeronautics and Space... public reference room. Copies of information contained in a filer's on-line tariff database may be... Reference Room by the filer. The filer may assess a fee for copying, provided it is reasonable and that no...
Digital Bedrock Compilation: A Geodatabase Covering Forest Service Lands in California
NASA Astrophysics Data System (ADS)
Elder, D.; de La Fuente, J. A.; Reichert, M.
2010-12-01
This digital database contains bedrock geologic mapping for Forest Service lands within California. The compilation began in 2004, and the first version was completed in 2005. The second publication of this geodatabase was completed in 2010 and filled major gaps in the southern Sierra Nevada and Modoc/Medicine Lake/Warner Mountains areas. The digital map database was compiled from previously published and unpublished geologic mapping, with source mapping and review from the California Geological Survey, the U.S. Geological Survey, and others. Much of the source data was itself compilation mapping. The geodatabase is large, containing ~107,000 polygons and ~280,000 arcs. Mapping was compiled from more than one thousand individual sources and covers over 41,000,000 acres (~166,000 km2). It was compiled from source maps at various scales, from ~1:4,000 to 1:250,000, and represents the best available geologic mapping at the largest scale possible. An estimated 70-80% of the source information was digitized from geologic mapping at 1:62,500 scale or better. The Forest Service ACT2 Enterprise Team compiled the bedrock mapping and developed a geodatabase to store this information. The geodatabase supports feature classes for polygons (e.g., map units), lines (e.g., contacts, boundaries, faults, and structural lines) and points (e.g., orientation data, structural symbology). Lookup tables provide detailed information for feature class items. Lookup/type tables contain legal values and hierarchical groupings for geologic ages and lithologies. Type tables link coded values with descriptions for line and point attributes, such as line type, line location, and point type. This digital mapping is at the core of many quantitative analyses and derivative map products. Queries of the database are used to produce maps and to quantify rock types of interest.
These include the following: (1) ultramafic rocks - where hazards from naturally occurring asbestos are high, (2) granitic rocks - increased erosion hazards, (3) limestone, chert, sedimentary rocks - paleontological resources (Potential Fossil Yield Classification maps), (4) calcareous rocks (cave resources, water chemistry), and (5) lava flows - lava tubes (more caves). Map unit groupings (e.g., belts, terranes, tectonic & geomorphic provinces) can also be derived from the geodatabase. Digital geologic mapping was used in ground water modeling to predict effects of tunneling through the San Bernardino Mountains. Bedrock mapping is used in models that characterize watershed sediment regimes and quantify anthropogenic influences. When combined with digital geomorphology mapping, this geodatabase helps to assess landslide hazards.
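The lookup-table pattern this geodatabase uses — coded values on features resolved through a type table, then queried to pull out rock types of interest — can be sketched with plain Python structures. All unit names, codes, and rows below are invented stand-ins, not the actual geodatabase schema or contents.

```python
# Lookup/type table: coded lithology value -> description.
lithology_lookup = {
    "um": "ultramafic rocks",   # e.g., naturally occurring asbestos hazard
    "gr": "granitic rocks",     # e.g., increased erosion hazard
    "ls": "limestone",          # e.g., paleontological and cave resources
}

# Polygon feature class rows: (map unit name, coded lithology value).
polygons = [
    {"unit": "ophiolite A", "lith": "um"},
    {"unit": "batholith B", "lith": "gr"},
    {"unit": "formation C", "lith": "ls"},
    {"unit": "ophiolite D", "lith": "um"},
]

def units_of_type(code):
    """Query pattern: select polygons whose coded value matches,
    as used e.g. for asbestos-hazard mapping of ultramafic rocks."""
    return [p["unit"] for p in polygons if p["lith"] == code]

print(units_of_type("um"))  # ['ophiolite A', 'ophiolite D']
```

Keeping the legal values in a type table rather than free text is what makes such queries, and the derivative hazard maps built from them, reliable across a thousand-source compilation.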
Overview of Nuclear Physics Data: Databases, Web Applications and Teaching Tools
NASA Astrophysics Data System (ADS)
McCutchan, Elizabeth
2017-01-01
The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
Development Status of the Advanced Life Support On-Line Project Information System
NASA Technical Reports Server (NTRS)
Levri, Julie A.; Hogan, John A.; Cavazzoni, Jim; Brodbeck, Christina; Morrow, Rich; Ho, Michael; Kaehms, Bob; Whitaker, Dawn R.
2005-01-01
The Advanced Life Support Program has recently accelerated an effort to develop an On-line Project Information System (OPIS) for research project and technology development data centralization and sharing. The core functionality of OPIS will launch in October of 2005. This paper presents the current OPIS development status. OPIS core functionality involves a Web-based annual solicitation of project and technology data directly from ALS Principal Investigators (PIs) through customized data collection forms. Data provided by PIs will be reviewed by a Technical Task Monitor (TTM) before the information is posted to OPIS for ALS Community viewing via the Web. The data will be stored in an object-oriented relational database (created in MySQL(R)) located on a secure server at NASA ARC. Upon launch, OPIS can be utilized by managers to identify research and technology development gaps and to assess task performance. Analysts can employ OPIS to obtain.
A Standard Nomenclature for Referencing and Authentication of Pluripotent Stem Cells.
Kurtz, Andreas; Seltmann, Stefanie; Bairoch, Amos; Bittner, Marie-Sophie; Bruce, Kevin; Capes-Davis, Amanda; Clarke, Laura; Crook, Jeremy M; Daheron, Laurence; Dewender, Johannes; Faulconbridge, Adam; Fujibuchi, Wataru; Gutteridge, Alexander; Hei, Derek J; Kim, Yong-Ou; Kim, Jung-Hyun; Kolb-Kokocinski, Anja; Lekschas, Fritz; Lomax, Geoffrey P; Loring, Jeanne F; Ludwig, Tenneille; Mah, Nancy; Matsui, Tohru; Müller, Robert; Parkinson, Helen; Sheldon, Michael; Smith, Kelly; Stachelscheid, Harald; Stacey, Glyn; Streeter, Ian; Veiga, Anna; Xu, Ren-He
2018-01-09
Unambiguous cell line authentication is essential to avoid loss of association between data and cells. The risk for loss of references increases with the rapidity that new human pluripotent stem cell (hPSC) lines are generated, exchanged, and implemented. Ideally, a single name should be used as a generally applied reference for each cell line to access and unify cell-related information across publications, cell banks, cell registries, and databases and to ensure scientific reproducibility. We discuss the needs and requirements for such a unique identifier and implement a standard nomenclature for hPSCs, which can be automatically generated and registered by the human pluripotent stem cell registry (hPSCreg). To avoid ambiguities in PSC-line referencing, we strongly urge publishers to demand registration and use of the standard name when publishing research based on hPSC lines. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
S.I.I.A for monitoring crop evolution and anomaly detection in Andalusia by remote sensing
NASA Astrophysics Data System (ADS)
Rodriguez Perez, Antonio Jose; Louakfaoui, El Mostafa; Munoz Rastrero, Antonio; Rubio Perez, Luis Alberto; de Pablos Epalza, Carmen
2004-02-01
A new remote sensing application was developed and incorporated into the Agrarian Integrated Information System (S.I.I.A), a project that integrates the regional farming databases from a geographical point of view, adding new value and uses to the original information. The project is supported by the Studies and Statistical Service of the Regional Government Ministry of Agriculture and Fisheries (CAP). The process integrates NDVI values from daily NOAA-AVHRR and monthly IRS-WIFS images with crop class location maps. Local agrarian information and meteorological information are being included in the working process to produce a synergistic effect. An updated crop-growing evaluation state is obtained by 10-day periods, crop class, sensor type (including data fusion), and administrative geographical borders. The crop database for the last ten years (1992-2002) has been organized according to these variables. The crop class database can be accessed through an application that helps users with crop statistical analysis. Multi-temporal and multi-geographical comparative analyses can be done by the user, not only for a single year but also from a historical point of view. Moreover, real-time crop anomalies can be detected and analyzed. Most of the output products will be available on the Internet in the near future through an on-line application.
Work-Facilitating Information Visualization Techniques for Complex Wastewater Systems
NASA Astrophysics Data System (ADS)
Ebert, Achim; Einsfeld, Katja
The design and operation of urban drainage systems and wastewater treatment plants (WWTP) have become increasingly complex. This complexity is due to increased requirements concerning process technology and technical, environmental, economic, and occupational safety aspects. The plant operator has access not only to a few timeworn files and measured parameters but also to numerous on-line and off-line parameters that characterize the current state of the plant in detail. Moreover, expert databases and specific support pages of plant manufacturers are accessible through the World Wide Web. Thus, the operator is overwhelmed with predominantly unstructured data.
Interactive Scene Analysis Module - A sensor-database fusion system for telerobotic environments
NASA Technical Reports Server (NTRS)
Cooper, Eric G.; Vazquez, Sixto L.; Goode, Plesent W.
1992-01-01
Accomplishing a task with telerobotics typically involves a combination of operator control/supervision and a 'script' of preprogrammed commands. These commands usually assume that the locations of various objects in the task space conform to some internal representation (database) of that task space. The ability to quickly and accurately verify the task environment against the internal database would improve the robustness of these preprogrammed commands. In addition, the on-line initialization and maintenance of a task space database is difficult for operators using Cartesian coordinates alone. This paper describes the Interactive Scene Analysis Module (ISAM), developed to provide taskspace database initialization and verification utilizing 3-D graphic overlay modelling, video imaging, and laser-radar-based range imaging. Through the fusion of taskspace database information and image sensor data, a verifiable taskspace model is generated, providing location and orientation data for objects in a task space. This paper also describes applications of ISAM in the Intelligent Systems Research Laboratory (ISRL) at NASA Langley Research Center and discusses its performance relative to representation accuracy and operator interface efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodman, John A.
MOLIS is an online database of minority institutions, and is used by federal agencies to identify peer reviewers and by majority institutions to identify possible collaborations and sub-contracts. MOLIS includes in-depth information about the research and educational capabilities of Historically Black Colleges and Universities (HBCUs), Hispanic Serving Institutions (HSIs), and Tribal Colleges. Included with this report are several annual progress reports, a list of all minority institutions currently on MOLIS, a list of outreach activities, etc.
Information need in local government and online network system ; LOGON
NASA Astrophysics Data System (ADS)
Ohta, Masanori
The Local Authorities Systems DEvelopment Center started trial operation of the LOcal Government information service On-line Network system (LOGON) in April 1988. Considering the background of the LOGON construction, this paper introduces the present status of informatization in municipalities and the needs for network systems and information centers, based on the results of various types of research. It also compares database systems with communication by personal computers, both of which are typical communication forms, and investigates the necessary functions of LOGON. The actual system functions, services, and operation of LOGON, and some problems that occurred during the trial, are discussed.
Belmonte, M
In this article we review two of the main Internet information services for seeking references to bibliography and journals, and the electronic publications on the Internet, with particular emphasis on those related to the neurosciences. The main bibliographic indices are: 1. MEDLINE. By definition, this is the bibliographic database. It is an 'on line' version, in a smaller format, published weekly with the title pages and summaries of most of the biomedical journals. It is based on the Index Medicus, a bibliographic index (on paper) which annually collects references to the most important biomedical journals. 2. EMBASE (Excerpta Medica). A direct competitor to MEDLINE, although it has the disadvantage of lacking government subsidies and being privately financed only. This bibliographic database, produced by the publisher Elsevier of Holland, covers approximately 3,500 biomedical journals from 110 countries and is particularly useful for articles on drugs and toxicology. 3. Current Contents. It publishes the index Current Contents, a classic in this field, much appreciated by scientists in all areas: medicine, the social sciences, technology, arts, and humanities. At present, it is available in an on-line version known as CCC (Current Contents Connect), accessible through the web, but only to subscribers. There is a growing tendency towards the publication of biomedical journals on the Internet. Its full development, if correctly carried out, will provide the opportunity to have the best information available and will result in great benefit to all those who are already using new information technology.
NASA Technical Reports Server (NTRS)
vonOfenheim, William H. C.; Heimerl, N. Lynn; Binkley, Robert L.; Curry, Marty A.; Slater, Richard T.; Nolan, Gerald J.; Griswold, T. Britt; Kovach, Robert D.; Corbin, Barney H.; Hewitt, Raymond W.
1998-01-01
This paper discusses the technical aspects of and the project background for the NASA Image eXchange (NIX). NIX, which provides a single entry point to search selected image databases at the NASA Centers, is a meta-search engine (i.e., a search engine that communicates with other search engines). It uses these distributed digital image databases to access photographs, animations, and their associated descriptive information (meta-data). NIX is available at the following URL: http://nix.nasa.gov/. NIX, which was sponsored by NASA's Scientific and Technical Information (STI) Program, currently serves images from seven NASA Centers, and plans are under way to link image databases from three additional NASA Centers. Images and their associated meta-data, which are accessible by NIX, reside at the originating Centers, and NIX utilizes a virtual central site that communicates with each of these sites. Incorporated into the virtual central site are several protocols to support searches from a diverse collection of database engines. The searches are performed in parallel to ensure optimization of response times. To augment the search capability, browse functionality with pre-defined categories has been built into NIX, thereby ensuring dissemination of 'best-of-breed' imagery. As a final recourse, NIX offers access to a help desk via an on-line form to help locate images and information either within the scope of NIX or from available external sources.
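The fan-out-and-merge pattern NIX uses can be sketched in a few lines. This is an illustrative sketch only: the Center names, the in-memory indexes, and the search functions below are invented stand-ins, not the actual NIX protocols or data.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for per-Center image search backends (hypothetical data).
CENTER_INDEXES = {
    "GSFC": {"hubble": ["hst_deep_field.jpg"]},
    "JPL": {"mars": ["pathfinder_pan.jpg"], "hubble": ["wfpc2_orion.jpg"]},
    "JSC": {"apollo": ["as11_40_5903.jpg"]},
}

def search_center(center, query):
    """Search one Center's index; returns (center, matching image ids)."""
    return center, CENTER_INDEXES[center].get(query, [])

def meta_search(query):
    """Fan the query out to every Center in parallel and merge the hits."""
    with ThreadPoolExecutor(max_workers=len(CENTER_INDEXES)) as pool:
        results = list(pool.map(lambda c: search_center(c, query),
                                CENTER_INDEXES))
    # Keep only Centers that returned hits, preserving per-Center grouping.
    return {center: hits for center, hits in results if hits}
```

Because each backend query runs in its own worker, the overall response time is governed by the slowest Center rather than the sum of all of them, which is the optimization the abstract describes.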
Begley, Dale A; Sundberg, John P; Krupke, Debra M; Neuhauser, Steven B; Bult, Carol J; Eppig, Janan T; Morse, Herbert C; Ward, Jerrold M
2015-12-01
Many mouse models have been created to study hematopoietic cancer types. There are over thirty hematopoietic tumor types and subtypes, both human and mouse, with various origins, characteristics and clinical prognoses. Determining the specific type of hematopoietic lesion produced in a mouse model, and identifying mouse models that correspond to the human subtypes of these lesions, has been a continuing challenge for the scientific community. The Mouse Tumor Biology Database (MTB; http://tumor.informatics.jax.org) is designed to facilitate the use of mouse models of human cancer by providing detailed histopathologic and molecular information on lymphoma subtypes, including expertly annotated on-line whole-slide scans, and by providing a repository for storing and querying information on specific lymphoma models. Copyright © 2015 Elsevier Inc. All rights reserved.
CellLineNavigator: a workbench for cancer cell line analysis
Krupp, Markus; Itzel, Timo; Maass, Thorsten; Hildebrandt, Andreas; Galle, Peter R.; Teufel, Andreas
2013-01-01
The CellLineNavigator database, freely available at http://www.medicalgenomics.org/celllinenavigator, is a web-based workbench for large-scale comparisons across a large collection of diverse cell lines. It aims to support experimental design in the fields of genomics, systems biology and translational biomedical research. Currently, this compendium holds genome-wide expression profiles of 317 different cancer cell lines, categorized into 57 different pathological states and 28 individual tissues. To enlarge the scope of CellLineNavigator, the database was furthermore closely linked to commonly used bioinformatics databases and knowledge repositories. To ensure easy data access and searchability, a simple data-access interface and an intuitive querying interface were implemented. They allow the user to explore and filter gene expression, focusing on pathological or physiological conditions. For a more complex search, the advanced query interface may be used to query for (i) differentially expressed genes; (ii) pathological or physiological conditions; or (iii) gene names or functional attributes, such as Kyoto Encyclopaedia of Genes and Genomes pathway maps. These queries may also be combined. Finally, CellLineNavigator allows additional advanced analysis of differentially regulated genes by a direct link to the Database for Annotation, Visualization and Integrated Discovery (DAVID) Bioinformatics Resources. PMID:23118487
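The combined filtering that such an advanced query interface offers can be illustrated with a toy example; the records, field names and thresholds below are invented for illustration and are not CellLineNavigator's actual schema.

```python
# Toy expression table: one row per (cell line, gene) measurement.
records = [
    {"cell_line": "HepG2", "tissue": "liver",  "gene": "AFP",  "log2fc": 3.1},
    {"cell_line": "HeLa",  "tissue": "cervix", "gene": "AFP",  "log2fc": 0.2},
    {"cell_line": "HepG2", "tissue": "liver",  "gene": "TP53", "log2fc": -0.4},
]

def query(records, tissue=None, min_abs_log2fc=0.0):
    """Combine a condition filter with a differential-expression filter."""
    return [
        r for r in records
        if (tissue is None or r["tissue"] == tissue)
        and abs(r["log2fc"]) >= min_abs_log2fc
    ]
```

Combining the two filters in one pass mirrors the "these queries may also be combined" behavior: a tissue/condition constraint and a differential-expression constraint are applied together.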
Return to sports after ankle fractures: a systematic review.
Del Buono, Angelo; Smith, Rebecca; Coco, Manuela; Woolley, Laurence; Denaro, Vincenzo; Maffulli, Nicola
2013-01-01
This review aims to provide information on the time athletes take to resume sports activity following ankle fractures. We systematically searched the Medline (PubMed), EMBASE, CINAHL, Cochrane, SPORTDiscus and Google Scholar databases using the combined keywords 'ankle fractures', 'ankle injuries', 'athletes', 'sports', 'return to sport', 'recovery', 'operative fixation', 'pinning' and 'return to activity' to identify articles published in English, Spanish, French, Portuguese and Italian. Seven retrospective studies fulfilled our inclusion criteria. Of the 793 patients, 469 (59%) were male and 324 (41%) were female; of the 356 ankle fractures on which we obtained information, 338 were acute fractures and 18 were stress fractures. The general principles were to undertake open reduction and internal fixation of acute fractures, and to manage stress fractures conservatively unless a thin fracture line was visible on radiographs. The best timing for return to sports after an acute ankle fracture is still undefined, given the heterogeneity of the outcome measures and results. The time to return to sports after an acute stress injury ranged from 3 to 51 weeks. When facing athletes with ankle fractures, associated injuries have to be assessed and addressed to improve current treatment lines and meet future expectations. The best timing for return to sports after an ankle fracture has not yet been established. The ideas of a return-to-activity parameter and of surgeon databases including sports-related information could spur further research.
Structure and software tools of AIDA.
Duisterhout, J S; Franken, B; Witte, F
1987-01-01
AIDA consists of a set of software tools that allow for fast development of easy-to-maintain Medical Information Systems. AIDA supports all aspects of such a system, both during development and in operation. It contains tools to build and maintain forms for interactive data entry and on-line input validation, a database management system including a data dictionary and a set of run-time routines for database access, and routines for querying the database and formatting output. Unlike with an application generator, the user of AIDA may select parts of the tools to fulfill his needs and program other subsystems not developed with AIDA. The AIDA software uses as its host language the ANSI-standard programming language MUMPS, an interpreted language embedded in an integrated database and programming environment. This greatly facilitates the portability of AIDA applications. The database facilities supported by AIDA are based on a relational data model. This data model is built on top of the MUMPS database, the so-called global structure, and overcomes the global structure's restrictions regarding string length. The global structure is especially powerful for sorting purposes. Using MUMPS as a host language gives the user an easy interface between user-defined data validation checks, or other user-defined code, and the AIDA tools. AIDA has been designed primarily for prototyping and for the construction of Medical Information Systems in a research environment, which requires a flexible approach. The prototyping facility of AIDA is terminal-independent and even, to a great extent, multilingual. Most of these features are table-driven; this allows on-line changes of terminal type and language, but also incurs overhead. AIDA provides a set of optimizing tools with which it is possible to build faster, but (of course) less flexible, code from these table definitions.
By separating the AIDA software in a source and a run-time version, one is able to write implementation-specific code which can be selected and loaded by a special source loader, being part of the AIDA software. This feature is also accessible for maintaining software on different sites and on different installations.
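A rough Python analogue of the MUMPS global structure mentioned above (a sparse, hierarchical key-value tree whose subscripts are kept in sorted order) suggests why it is convenient for sorting. This is an illustrative sketch, not AIDA code; the global name and subscripts are invented.

```python
patients = {}  # stands in for a MUMPS global such as ^PATIENT

def set_node(tree, keys, value):
    """Analogue of: SET ^PATIENT(k1,k2,...)=value"""
    for k in keys[:-1]:
        tree = tree.setdefault(k, {})
    tree[keys[-1]] = value

set_node(patients, ("Smith", "1987-03-01", "diagnosis"), "hypertension")
set_node(patients, ("Adams", "1987-01-15", "diagnosis"), "fracture")

# $ORDER-style traversal: visit first-level subscripts in sorted order.
names = sorted(patients)
```

In real MUMPS the subscripts are stored sorted on disk, so ordered traversal with $ORDER needs no explicit sort step; here `sorted()` stands in for that property.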
R2 & NE State - 2010 Census; Housing and Population Summary
The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line File is designed to stand alone as an independent data set, or the files can be combined to cover the entire nation. States and equivalent entities are the primary governmental divisions of the United States. In addition to the fifty States, the Census Bureau treats the District of Columbia, Puerto Rico, and each of the Island Areas (American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, and the U.S. Virgin Islands) as the statistical equivalents of States for the purpose of data presentation. This record includes a table of housing data derived from the U.S. Census 2010 Summary File 1 (SF 1) database for states, containing data on housing units, owner-occupied and rental, and a table of population data derived from the same database, containing data on ancestry, age, and sex. The 2010 SF 1 contains data compiled from the 2010 Decennial Census questions.
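Joining the two SF1-derived state tables on a shared state identifier can be sketched as follows. GEOID "31" is Nebraska's state FIPS code, but the attribute values below are illustrative examples, not quoted Census figures.

```python
# Two toy state tables keyed by GEOID, as the housing and population
# tables described above would be after loading from their .dbf files.
housing = {"31": {"housing_units": 796793}}
population = {"31": {"total_pop": 1826341}}

def join_on_geoid(a, b):
    """Inner-join two {geoid: attribute-dict} tables on their shared keys."""
    return {g: {**a[g], **b[g]} for g in a.keys() & b.keys()}

nebraska = join_on_geoid(housing, population)["31"]
```

Keying every table on GEOID is what lets independent TIGER/Line extracts "stand alone" yet still be combined into a seamless national picture.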
NASA Astrophysics Data System (ADS)
Perry, William G.
2006-04-01
One goal of database mining is to draw unique and valid perspectives from multiple data sources. Insights that are fashioned from closely-held data stores are likely to possess a high degree of reliability. The degree of information assurance comes into question, however, when external databases are accessed, combined and analyzed to form new perspectives. ISO/IEC 17799, Information technology - Security techniques - Code of practice for information security management, can be used to establish a higher level of information assurance among disparate entities using data mining in the defense, homeland security, commercial and other civilian/commercial domains. Organizations that meet ISO/IEC information security standards have identified and assessed risks, threats and vulnerabilities and have taken significant proactive steps to meet their unique security requirements. The ISO standards address twelve domains: risk assessment and treatment; security policy; organization of information security; asset management; human resources security; physical and environmental security; communications and operations management; access control; information systems acquisition, development and maintenance; information security incident management; business continuity management; and compliance. Analysts can be relatively confident that if organizations are ISO 17799 compliant, a high degree of information assurance is likely to be a characteristic of the data sets being used. The reverse may also be true: extracting, fusing and drawing conclusions based upon databases with a low degree of information assurance may be fraught with all the hazards that come from knowingly using bad data to make decisions. Using ISO/IEC 17799 as a baseline for information assurance can help mitigate these risks.
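As a sketch of how an analyst might operationalize the standard as a baseline, the checklist below scores a partner organization's coverage of the twelve domains named above. The scoring scheme and the assessment input are hypothetical conveniences, not part of ISO/IEC 17799 itself.

```python
# The twelve ISO/IEC 17799 domains listed in the text above.
ISO_17799_DOMAINS = [
    "risk assessment and treatment",
    "security policy",
    "organization of information security",
    "asset management",
    "human resources security",
    "physical and environmental security",
    "communications and operations management",
    "access control",
    "information systems acquisition, development and maintenance",
    "information security incident management",
    "business continuity management",
    "compliance",
]

def assurance_score(assessed):
    """Fraction of the twelve domains an organization satisfies.

    `assessed` maps domain name -> bool; missing domains count as unmet.
    """
    covered = sum(1 for d in ISO_17799_DOMAINS if assessed.get(d, False))
    return covered / len(ISO_17799_DOMAINS)
```

An analyst could then require a minimum score before admitting an external database into a fusion pipeline, which is the risk-mitigation use the abstract suggests.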
Design and implementation of a portal for the medical equipment market: MEDICOM.
Palamas, S; Kalivas, D; Panou-Diamandi, O; Zeelenberg, C; van Nimwegen, C
2001-01-01
The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. The objective is to establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design.
Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, JavaScript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126, Information technology - Software product evaluation - Quality characteristics and guidelines for their use. The overall user evaluation of the MEDICOM system was very positive, and the system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support.
Design and Implementation of a Portal for the Medical Equipment Market: MEDICOM
Kalivas, Dimitris; Panou-Diamandi, Ourania; Zeelenberg, Cees; van Nimwegen, Chris
2001-01-01
Background The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. Objective To establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). Methods The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design. 
Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, Javascript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126 Information technology - Software product evaluation - Quality characteristics and guidelines for their use. Results The overall user evaluation of the MEDICOM system was very positive. The MEDICOM system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. Conclusions The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support. PMID:11772547
TOMATOMA Update: Phenotypic and Metabolite Information in the Micro-Tom Mutant Resource.
Shikata, Masahito; Hoshikawa, Ken; Ariizumi, Tohru; Fukuda, Naoya; Yamazaki, Yukiko; Ezura, Hiroshi
2016-01-01
TOMATOMA (http://tomatoma.nbrp.jp/) is a tomato mutant database providing visible phenotypic data of tomato mutant lines generated by ethylmethane sulfonate (EMS) treatment or γ-ray irradiation in the genetic background of Micro-Tom, a small and rapidly growing variety. To increase mutation efficiency further, mutagenized M3 seeds were subjected to a second round of EMS treatment, generating M3M1 populations. These plants were self-pollinated, and 4,952 lines of M3M2 mutagenized seeds were obtained. We checked for visible phenotypes in the M3M2 plants, and 618 mutant lines with 1,194 phenotypic categories were identified. In addition to the phenotypic information, we investigated Brix values and carotenoid contents in the fruits of individual mutants. In 466 samples from 171 mutant lines, Brix values ranged from 3.2% to 11.6% and carotenoid contents from 6.9 to 37.3 µg g(-1) FW. This metabolite information on the mutant fruits should be useful in breeding programs as well as for the elucidation of metabolic regulation. Researchers can browse and search this phenotypic and metabolite information and order seeds of individual mutants via TOMATOMA. Our new Micro-Tom double-mutagenized populations and the associated metabolic information could provide a valuable genetic toolkit to accelerate tomato research and breeding programs. © The Author 2015. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Ellis, J A; Kierulf, J C; Klassen, T P
1995-01-01
In-line skating, also known as rollerblading, is an increasingly popular recreational activity that carries with it the potential for injury. As reported in the Canadian Hospitals Injury Reporting and Prevention Program database (CHIRPP), 194 children were injured while in-line skating. Fractures to the radius and ulna were the most common type of injury sustained (57.5%), followed by lacerations and abrasions (14.9%). Five children had concussions and very few children reported wearing protective gear such as a helmet or wrist, elbow and knee protectors. Compared to the database overall, in-line skaters suffered more severe injuries and were more likely to require follow-up treatment. Safety implications in relation to protective gear and learning the sport of in-line skating are discussed.
Laituri, Melinda; Kodrich, Kris
2008-01-01
The Indian Ocean tsunami (2004) and Hurricane Katrina (2005) reveal the coming of age of the on-line disaster response community. Due to the integration of key geospatial technologies (remote sensing, RS; geographic information systems, GIS; global positioning systems, GPS) and the Internet, on-line disaster response communities have grown. They include the traditional aspects of disaster preparedness, response, recovery, mitigation, and policy as facilitated by governmental agencies and relief response organizations. However, the contribution from the public via the Internet has changed significantly. The on-line disaster response community has several key characteristics: the ability to donate money quickly and efficiently, due to improved Internet security and reliable donation sites; a computer-savvy segment of the public that creates blogs, uploads pictures, and disseminates information, oftentimes faster than government agencies; and message boards that create interactive information exchange for seeking family members and identifying shelters. A critical and novel occurrence is the development of "people as sensors" - networks of government, NGOs, private companies, and the public - that build rapid-response databases of the disaster area for various aspects of disaster relief and response using geospatial technologies. This paper examines these networks, their products, and their future potential. PMID:27879864
The 2015 edition of the GEISA spectroscopic database
NASA Astrophysics Data System (ADS)
Jacquinet-Husson, N.; Armante, R.; Scott, N. A.; Chédin, A.; Crépeau, L.; Boutammine, C.; Bouhdaoui, A.; Crevoisier, C.; Capelle, V.; Boonne, C.; Poulet-Crovisier, N.; Barbe, A.; Chris Benner, D.; Boudon, V.; Brown, L. R.; Buldyreva, J.; Campargue, A.; Coudert, L. H.; Devi, V. M.; Down, M. J.; Drouin, B. J.; Fayt, A.; Fittschen, C.; Flaud, J.-M.; Gamache, R. R.; Harrison, J. J.; Hill, C.; Hodnebrog, Ø.; Hu, S.-M.; Jacquemart, D.; Jolly, A.; Jiménez, E.; Lavrentieva, N. N.; Liu, A.-W.; Lodi, L.; Lyulin, O. M.; Massie, S. T.; Mikhailenko, S.; Müller, H. S. P.; Naumenko, O. V.; Nikitin, A.; Nielsen, C. J.; Orphal, J.; Perevalov, V. I.; Perrin, A.; Polovtseva, E.; Predoi-Cross, A.; Rotger, M.; Ruth, A. A.; Yu, S. S.; Sung, K.; Tashkun, S. A.; Tennyson, J.; Tyuterev, Vl. G.; Vander Auwera, J.; Voronin, B. A.; Makie, A.
2016-09-01
The GEISA database (Gestion et Etude des Informations Spectroscopiques Atmosphériques: Management and Study of Atmospheric Spectroscopic Information) has been developed and is maintained by the ARA group at LMD (http://ara.abct.lmd.polytechnique.fr). The "line parameters database" contains 52 molecular species (118 isotopologues) and transitions in the spectral range from 10^-6 to 35,877.031 cm^-1, representing 5,067,351 entries, against 3,794,297 in GEISA-2011. Among the previously existing molecules, 20 molecular species have been updated. A new molecule (SO3) has been added, and HDO, an isotopologue of H2O, is now identified as an independent molecular species. Seven new isotopologues have been added to the GEISA-2015 database. The "cross-section sub-database" has been enriched by the addition of 43 new molecular species in its infrared part, and 4 molecules (ethane, propane, acetone, acetonitrile) have also been updated; they represent 3% of the update. A new section has been added in the near-infrared spectral region, involving 7 molecular species: CH3CN, CH3I, CH3O2, H2CO, HO2, HONO, NH3. The "microphysical and optical properties of atmospheric aerosols sub-database" has been updated for the first time since 2003. It contains more than 40 species originating from NCAR and 20 from the Aerosol Refractive Index Archive (ARIA; http://eodg.atm.ox.ac.uk/ARIA/introduction_nocol.html). As for the previous versions, this new release of GEISA and the associated management software facilities are implemented and freely accessible at http://cds-espri.ipsl.fr/etherTypo/?id=950.
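A typical consumer-side use of such a line-parameter list, selecting one molecule's transitions inside a wavenumber window, can be sketched as follows. The field names and sample lines are invented for illustration and do not reproduce the actual GEISA record format.

```python
# Toy line list: one dict per transition, in the spirit of a
# (molecule, wavenumber, intensity) extract from a spectroscopic database.
lines = [
    {"molecule": "H2O", "wavenumber_cm1": 1554.353, "intensity": 1.2e-21},
    {"molecule": "CO2", "wavenumber_cm1": 2349.145, "intensity": 3.5e-19},
    {"molecule": "H2O", "wavenumber_cm1": 3657.053, "intensity": 8.0e-20},
]

def select_lines(lines, molecule, wn_min, wn_max):
    """Return the transitions of `molecule` with wn_min <= sigma <= wn_max."""
    return [l for l in lines
            if l["molecule"] == molecule
            and wn_min <= l["wavenumber_cm1"] <= wn_max]
```

Radiative-transfer codes of the 4A/LARA family consume GEISA this way: they pull only the species and spectral interval relevant to the channel being modeled, rather than the full multi-million-entry list.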
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2011 CFR
2011-01-01
... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2013 CFR
2013-01-01
... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2012 CFR
2012-01-01
... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
Al-Zahrani, Ateeq Ahmed
2018-01-01
Several anticancer drugs have been developed from natural products such as plants. Successful experiments in inhibiting the growth of human cancer cell lines using Saudi plants have been published over the last three decades. To date, there is no Saudi anticancer plants database serving as a comprehensive source for the interesting data generated from these experiments. Therefore, there was a need to create a database to collect, organize, search and retrieve such data. As a result, the current paper describes the generation of the Saudi anti-human cancer plants database (SACPD). The database contains most of the reported information about the naturally growing Saudi anticancer plants. SACPD comprises the scientific and local names of 91 plant species that grow naturally in Saudi Arabia. These species belong to 38 different taxonomic families. In addition, 18 species that represent 16 families of medicinal plants and are intensively sold in the local markets in Saudi Arabia were added to the database. The website provides interesting details, including the plant part containing the anticancer bioactive compounds, plant locations and the cancer/cell type against which they exhibit their anticancer activity. Our survey revealed that breast, liver and leukemia were the most studied cancer cell lines in Saudi Arabia, with percentages of 27%, 19% and 15%, respectively. The current SACPD represents a nucleus around which more development efforts can expand to accommodate all future submissions about new Saudi plant species with anticancer activities. SACPD will provide an excellent starting point for researchers and pharmaceutical companies who are interested in developing new anticancer drugs. SACPD is available online at https://teeqrani1.wixsite.com/sapd. PMID:29774137
Al-Zahrani, Ateeq Ahmed
2018-01-30
Several anticancer drugs have been developed from natural products such as plants. Successful experiments in inhibiting the growth of human cancer cell lines using Saudi plants have been published over the last three decades. To date, there is no Saudi anticancer plants database serving as a comprehensive source for the interesting data generated from these experiments. Therefore, there was a need to create a database to collect, organize, search and retrieve such data. As a result, the current paper describes the generation of the Saudi anti-human cancer plants database (SACPD). The database contains most of the reported information about the naturally growing Saudi anticancer plants. SACPD comprises the scientific and local names of 91 plant species that grow naturally in Saudi Arabia. These species belong to 38 different taxonomic families. In addition, 18 species that represent 16 families of medicinal plants and are intensively sold in the local markets in Saudi Arabia were added to the database. The website provides interesting details, including the plant part containing the anticancer bioactive compounds, plant locations and the cancer/cell type against which they exhibit their anticancer activity. Our survey revealed that breast, liver and leukemia were the most studied cancer cell lines in Saudi Arabia, with percentages of 27%, 19% and 15%, respectively. The current SACPD represents a nucleus around which more development efforts can expand to accommodate all future submissions about new Saudi plant species with anticancer activities. SACPD will provide an excellent starting point for researchers and pharmaceutical companies who are interested in developing new anticancer drugs. SACPD is available online at https://teeqrani1.wixsite.com/sapd.
A Firefly Algorithm-based Approach for Pseudo-Relevance Feedback: Application to Medical Database.
Khennak, Ilyes; Drias, Habiba
2016-11-01
The difficulty of disambiguating the sense of the incomplete and imprecise keywords that are extensively used in search queries has caused search systems to fail to retrieve the desired information. One of the most powerful and promising methods to overcome this shortcoming and improve the performance of search engines is Query Expansion, whereby the user's original query is augmented with new keywords that best characterize the user's information needs and produce a more useful query. In this paper, a new Firefly Algorithm-based approach is proposed to enhance the retrieval effectiveness of query expansion while maintaining low computational complexity. In contrast to the existing literature, the proposed approach uses a Firefly Algorithm to find the best expanded query among a set of expanded-query candidates. Moreover, the new approach allows the length of the expanded query to be determined empirically. Experimental results on MEDLINE, the on-line medical information database, show that our proposed approach is more effective and efficient than the state-of-the-art.
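A minimal sketch of the idea, under stated assumptions: each firefly encodes a candidate set of expansion terms as a bit vector, brightness is a toy relevance score (the paper's actual fitness is retrieval effectiveness on MEDLINE), and dimmer fireflies copy bits from brighter ones with an attractiveness that decays with Hamming distance. The terms and weights are invented.

```python
import math
import random

TERMS = ["myocardial", "infarction", "cardiac", "tumor", "lesion"]
WEIGHTS = [0.9, 0.8, 0.7, -0.5, -0.4]  # toy per-term relevance weights

def brightness(firefly):
    """Toy fitness: summed weight of the selected expansion terms."""
    return sum(w for bit, w in zip(firefly, WEIGHTS) if bit)

def firefly_search(n=8, iters=40, beta0=1.0, gamma=0.5, seed=1):
    rng = random.Random(seed)
    # Each firefly is a bit vector selecting a subset of candidate terms.
    swarm = [[rng.randint(0, 1) for _ in TERMS] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if brightness(swarm[j]) > brightness(swarm[i]):
                    # Attractiveness decays with the Hamming distance r.
                    r = sum(a != b for a, b in zip(swarm[i], swarm[j]))
                    beta = beta0 * math.exp(-gamma * r * r)
                    swarm[i] = [bj if rng.random() < beta else bi
                                for bi, bj in zip(swarm[i], swarm[j])]
    best = max(swarm, key=brightness)
    return [t for bit, t in zip(best, TERMS) if bit]
```

Note that the expanded-query length falls out of the search rather than being fixed in advance, which parallels the paper's claim that the length is determined empirically.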
An on-line expert system for diagnosing environmentally induced spacecraft anomalies using CLIPS
NASA Technical Reports Server (NTRS)
Lauriente, Michael; Rolincik, Mark; Koons, Harry C.; Gorney, David
1993-01-01
A new rule-based expert system for diagnosing spacecraft anomalies is under development. The knowledge base consists of over two hundred rules and provides links to historical and environmental databases. The environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose. The system's driver translates forward-chaining rules into a backward-chaining sequence, prompting the user for information pertinent to the causes considered. The use of heuristics frees the user from searching through large amounts of irrelevant information and allows the user to supply partial information (varying degrees of confidence in an answer) or to answer 'unknown' to any question. The expert system not only provides scientists with needed risk analysis and confidence estimates not available in standard numerical models or databases, but is also an effective learning tool. In addition, the architecture of the expert system allows easy additions to the knowledge base and the database; for example, new frames concerning orbital debris and ionospheric scintillation are being considered. The system currently runs on a MicroVAX and uses the C Language Integrated Production System (CLIPS).
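The backward-chaining idea can be sketched with a toy rule base: to test a hypothesized cause, the engine recursively tries to prove each of its required conditions, consulting the user (here, a dict of observations) only for facts relevant to that hypothesis. The rules, fact names and causes below are invented examples, not the actual knowledge base (which is written in CLIPS, not Python).

```python
# hypothesis -> facts (or sub-hypotheses) that must all hold
RULES = {
    "surface_charging": ["geomagnetic_storm", "anomaly_in_eclipse"],
    "geomagnetic_storm": ["kp_index_high"],
    "single_event_upset": ["high_energy_particle_flux", "bit_flip_signature"],
}

def holds(goal, facts):
    """Backward chain: prove `goal` from facts, recursing through rules."""
    if goal in facts:
        return facts[goal]
    if goal in RULES:
        return all(holds(sub, facts) for sub in RULES[goal])
    return False  # unknown and unprovable

observations = {"kp_index_high": True, "anomaly_in_eclipse": True}
likely_causes = [c for c in ("surface_charging", "single_event_upset")
                 if holds(c, observations)]
```

Goal-directed recursion is why such a driver only asks about facts pertinent to the cause under consideration: questions about particle flux are never raised while pursuing the surface-charging hypothesis.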
Improvement of the Database on the 1.13-microns Band of Water Vapor
NASA Technical Reports Server (NTRS)
Giver, Lawrence P.; Schwenke, David W.; Chackerian, Charles, Jr.; Varanasi, Prasad; Freedman, Richard S.; Gore, Warren J. (Technical Monitor)
2000-01-01
Corrections have recently been reported (Giver et al.) on the short-wave (visible and near-infrared) line intensities of water vapor that were catalogued in the spectroscopic database known as HITRAN. These updates have been posted on www.hitran.com, and are being used to reanalyze the polar stratospheric absorption in the 0.94 microns band as observed in POAM. We are currently investigating additional improvements in the 1.13 microns band using data obtained by us with an absorption path length of 1.107 km and 4 torr of water vapor, and the ab initio line list of Partridge and Schwenke (needs ref). We are proposing the following four types of improvements to the HITRAN database in this region: 1) HITRAN has nearly 200 lines in this region without proper assignments of rotational quantum levels; nearly all of them can now be assigned. 2) We have measured positions of the observable H2O-17 and H2O-18 lines; these lines in HITRAN currently have approximate positions based upon rather aged computations. 3) Some additional lines are observed and assigned which should be included in the database. 4) Corrections are necessary for the lower-state energies E" for the HITRAN lines of the 121-010 "hot" band.
Tran, Trung T; Bollineni, Ravi C; Strozynski, Margarita; Koehler, Christian J; Thiede, Bernd
2017-07-07
Alternative splicing is a mechanism in eukaryotes by which different forms of mRNAs are generated from the same gene. Identification of alternative splice variants requires the identification of peptides specific for alternative splice forms. For this purpose, we generated a human database that contains only unique tryptic peptides specific for alternative splice forms from Swiss-Prot entries. Using this database allows easy access to splice-variant-specific peptide sequences that match MS data. Furthermore, we combined this database, without alternative splice variant-1-specific peptides, with human Swiss-Prot. This combined database can be used as a general database for searching LC-MS data. LC-MS data derived from in-solution digests of two different cell lines (LNCaP, HeLa) and from phosphoproteomics studies were analyzed using these two databases. Several nonalternative splice variant-1-specific peptides were found in both cell lines, and some of them seemed to be cell-line-specific. Control and apoptotic phosphoproteomes from Jurkat T cells revealed several nonalternative splice variant-1-specific peptides, and some of them showed clear quantitative differences between the two states.
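A sketch of the underlying idea, generating isoform-specific tryptic peptides in silico. The sequences below are toy examples and the digestion uses the standard trypsin rule (cleave after K/R, not before P); the published database was of course built from full Swiss-Prot entries:

```python
import re
from collections import defaultdict

def tryptic_peptides(seq, min_len=6):
    # Cleave C-terminal to K or R, except before P (the standard trypsin rule).
    return {p for p in re.split(r'(?<=[KR])(?!P)', seq) if len(p) >= min_len}

def isoform_specific_peptides(isoforms):
    """Map each isoform name to the tryptic peptides found in no other isoform."""
    seen_in = defaultdict(set)
    for name, seq in isoforms.items():
        for pep in tryptic_peptides(seq):
            seen_in[pep].add(name)
    return {name: {p for p in tryptic_peptides(seq) if seen_in[p] == {name}}
            for name, seq in isoforms.items()}

# Toy isoforms sharing an N-terminal exon and diverging at the C-terminus.
isoforms = {
    "iso1": "MKTAYIAKQRQISFVK" + "SGDEFGHIK",
    "iso2": "MKTAYIAKQRQISFVK" + "TTPWNMVLR",
}
specific = isoform_specific_peptides(isoforms)
```

Only the peptides from the divergent exon survive the uniqueness filter; the shared N-terminal peptides are discarded, which is exactly why such a database contains far fewer entries than a full proteome digest.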
The SPM Kinematic Catalogue of Planetary Nebulae
NASA Astrophysics Data System (ADS)
López, J. A.; Richer, M. G.; Riesgo, H.; Steffen, W.; García-Segura, G.; Meaburn, J.; Bryce, M.
The San Pedro Mártir Kinematic Catalogue of Planetary Nebulae aims at providing detailed kinematic information for galactic planetary nebulae (PNe) and bright PNe in the Local Group. The database provides long-slit echelle spectra and images in which the locations of the slits on the nebula are indicated. As a tool to help interpret the 2D line profiles or position-velocity data, an atlas of synthetic emission-line spectra accompanies the Catalogue. The atlas has been produced with the code SHAPE and contains synthetic spectra for all the main morphological groups over a wide range of spatial orientations and slit locations on the nebula.
Olejniczak, Marta; Galka-Marciniak, Paulina; Polak, Katarzyna; Fligier, Andrzej; Krzyzosiak, Wlodzimierz J.
2012-01-01
The RNAimmuno database was created to provide easy access to information regarding the nonspecific effects generated in cells by RNA interference triggers and microRNA regulators. Various RNAi and microRNA reagents, which differ in length and structure, often cause non-sequence-specific immune responses, in addition to triggering the intended sequence-specific effects. The activation of the cellular sensors of foreign RNA or DNA may lead to the induction of type I interferon and proinflammatory cytokine release. Subsequent changes in the cellular transcriptome and proteome may result in adverse effects, including cell death during therapeutic treatments or the misinterpretation of experimental results in research applications. The manually curated RNAimmuno database gathers the majority of the published data regarding the immunological side effects that are caused in investigated cell lines, tissues, and model organisms by different reagents. The database is accessible at http://rnaimmuno.ibch.poznan.pl and may be helpful in the further application and development of RNAi- and microRNA-based technologies. PMID:22411954
Ridge 2000 Data Management System
NASA Astrophysics Data System (ADS)
Goodwillie, A. M.; Carbotte, S. M.; Arko, R. A.; Haxby, W. F.; Ryan, W. B.; Chayes, D. N.; Lehnert, K. A.; Shank, T. M.
2005-12-01
Hosted at Lamont by the marine geoscience Data Management group, mgDMS, the NSF-funded Ridge 2000 electronic database, http://www.marine-geo.org/ridge2000/, is a key component of the Ridge 2000 multi-disciplinary program. The database covers each of the three Ridge 2000 Integrated Study Sites: Endeavour Segment, Lau Basin, and 8-11N Segment. It promotes the sharing of information with the broader community, facilitates integration of the suite of information collected at each study site, and enables comparisons between sites. The Ridge 2000 data system provides easy web access to a relational database that is built around a catalogue of cruise metadata. Any web browser can be used to perform a versatile text-based search which returns basic cruise and submersible dive information, sample and data inventories, navigation, and other relevant metadata such as shipboard personnel and links to NSF program awards. In addition, non-proprietary data files, images, and derived products which are hosted locally or in national repositories, as well as science and technical reports, can be freely downloaded. On the Ridge 2000 database page, our Data Link allows users to search the database using a broad range of parameters including data type, cruise ID, chief scientist, and geographical location. The first Ridge 2000 field programs sailed in 2004 and, in addition to numerous data sets collected prior to the Ridge 2000 program, the database currently contains information on fifteen Ridge 2000-funded cruises and almost sixty Alvin dives. Track lines can be viewed using a recently-implemented Web Map Service button labelled Map View. The Ridge 2000 database is fully integrated with databases hosted by the mgDMS group for MARGINS and the Antarctic multibeam and seismic reflection data initiatives. Links are provided to partner databases including PetDB, SIOExplorer, and the ODP Janus system.
Inter-operability with existing and new partner repositories continues to be strengthened. One major effort involves the gradual unification of the metadata across these partner databases. Standardised electronic metadata forms that can be filled in at sea are available from our web site. Interactive map-based exploration and visualisation of the Ridge 2000 database is provided by GeoMapApp, a freely available Java(tm) application being developed within the mgDMS group. GeoMapApp includes high-resolution bathymetric grids for the 8-11N EPR segment and allows customised maps and grids for any of the Ridge 2000 ISS to be created. Vent and instrument locations can be plotted and saved as images, and Alvin dive photos are also available.
[A Terahertz Spectral Database Based on Browser/Server Technique].
Zhang, Zhuo-yong; Song, Yue
2015-09-01
With the solution of key scientific and technical problems and the development of instrumentation, the application of terahertz technology in various fields has attracted more and more attention. Owing to its unique advantages, terahertz technology shows broad promise for fast, non-damaging detection, as well as for many other fields. Combined with complementary methods, terahertz technology can be used to tackle many difficult practical problems that could not be solved before. One of the critical points for the further development of practical terahertz detection methods is a good and reliable terahertz spectral database. We recently developed a browser/server (B/S)-based terahertz spectral database, designing its main structure and functions to meet practical requirements. The database now includes more than 240 items, with spectral information collected from three sources: (1) citations from other terahertz spectral databases abroad; (2) published literature; and (3) spectra measured in our laboratory. The present paper introduces the basic structure and fundamental functions of the terahertz spectral database developed in our laboratory. One of its key functions is the calculation of optical parameters: the absorption coefficient, refractive index, and other quantities can be calculated from input THz time-domain spectra. The other main functions and search methods of the browser/server-based terahertz spectral database are also discussed. The search system provides users with convenient functions including user registration, inquiry, display of spectral figures and molecular structures, and spectral matching; on-line searching is available to registered users.
Registered users can compare an input THz spectrum with the spectra in the database; from the resulting correlation coefficients, the search can be performed quickly and conveniently. Our terahertz spectral database can be accessed at http://www.teralibrary.com. The database is based on spectral information so far and will be improved in the future. We hope it will provide users with powerful, convenient, and highly efficient functions and promote broader applications of terahertz technology.
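The optical-parameter calculation mentioned above typically follows the standard thick-sample THz-TDS relations: the refractive index from the sample/reference phase delay, and the absorption coefficient from the amplitude ratio after removing Fresnel losses. A sketch with hypothetical measurement values (the database's exact processing chain is not described here):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def thz_optical_params(freq_hz, amp_ratio, phase_diff_rad, thickness_m):
    """Thick-sample THz-TDS extraction (no etalon correction): n from the
    sample/reference phase difference, alpha from the amplitude ratio after
    dividing out the Fresnel transmission factor."""
    omega = 2 * math.pi * freq_hz
    n = 1.0 + phase_diff_rad * C / (omega * thickness_m)
    fresnel = 4 * n / (n + 1) ** 2
    alpha = -(2.0 / thickness_m) * math.log(amp_ratio / fresnel)  # in 1/m
    return n, alpha

# Hypothetical measurement at 1 THz for a 1 mm thick pellet: the sample lags
# the reference by two full cycles and transmits 25% of the field amplitude.
n, alpha = thz_optical_params(1.0e12, 0.25, 2 * math.pi * 2.0, 1.0e-3)
```

With these inputs the extraction yields n of roughly 1.6 and an absorption coefficient of roughly 26-27 /cm, i.e. the order of magnitude typical for organic solids in this band.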
Stade, Björn; Seelow, Dominik; Thomsen, Ingo; Krawczak, Michael; Franke, Andre
2014-01-01
Next Generation Sequencing (NGS) of whole exomes or genomes is increasingly being used in human genetic research and diagnostics. Sharing NGS data with third parties can help physicians and researchers to identify causative or predisposing mutations for a specific sample of interest more efficiently. In many cases, however, the exchange of such data may collide with data privacy regulations. GrabBlur is a newly developed tool to aggregate and share NGS-derived single nucleotide variant (SNV) data in a public database while keeping individual samples unidentifiable. In contrast to other existing SNV databases, GrabBlur includes phenotypic information and the contact details of the submitter of a given database entry. By means of GrabBlur, human geneticists can securely and easily share SNV data from resequencing projects. GrabBlur can ease the interpretation of SNV data by offering basic annotations, genotype frequencies and, in particular, phenotypic information (given that this information was shared) for the SNV of interest. GrabBlur facilitates the combination of phenotypic and NGS data (VCF files) via a local interface or command-line operations. Data submissions may include HPO (Human Phenotype Ontology) terms, other trait descriptions, NGS technology information and the identity of the submitter. Most of this information is optional and provided at the discretion of the submitter. Upon initial intake, GrabBlur merges and aggregates all sample-specific data. If a certain SNV is rare, the sample-specific information is replaced with the submitter identity. Generally, all data in GrabBlur are highly aggregated so that they can be shared with others while ensuring maximum privacy. Thus, it is impossible to reconstruct complete exomes or genomes from the database or to re-identify single individuals.
After the individual information has been sufficiently "blurred", the data can be uploaded into a publicly accessible domain where aggregated genotypes are provided alongside phenotypic information. A web interface allows querying the database and extracting gene-wise SNV information. If an interesting SNV is found, the interrogator can contact the submitter to exchange further information on the carrier and clarify, for example, whether the carrier's phenotype matches the phenotype of their own patient.
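A minimal sketch of the aggregate-and-blur step as described: genotypes are pooled per variant, and for rare variants the sample-level detail is replaced by the submitter's contact. The record layout, threshold, and field names below are hypothetical, not GrabBlur's actual schema:

```python
from collections import defaultdict

RARE_THRESHOLD = 2  # carriers; below this, sample-level detail is "blurred"

def aggregate_snvs(records):
    """records: (chrom, pos, ref, alt, sample_id, submitter) tuples.
    Returns per-variant aggregates; rare variants keep only a submitter
    contact instead of any sample-level information."""
    by_variant = defaultdict(list)
    for chrom, pos, ref, alt, sample, submitter in records:
        by_variant[(chrom, pos, ref, alt)].append((sample, submitter))
    public = {}
    for variant, carriers in by_variant.items():
        entry = {"allele_count": len(carriers)}
        if len(carriers) < RARE_THRESHOLD:
            # Blur: expose only whom to contact, never the sample itself.
            entry["contact"] = sorted({sub for _, sub in carriers})
        public[variant] = entry
    return public

records = [
    ("1", 100, "A", "G", "s1", "lab_a"),
    ("1", 100, "A", "G", "s2", "lab_b"),
    ("2", 555, "C", "T", "s3", "lab_a"),
]
db = aggregate_snvs(records)
```

The public dictionary never contains sample identifiers, which is the property that makes re-identification of single individuals impossible from the shared data alone.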
The AVO Website - a Comprehensive Tool for Information Management and Dissemination
NASA Astrophysics Data System (ADS)
Snedigar, S.; Cameron, C.; Nye, C. J.
2008-12-01
The Alaska Volcano Observatory (AVO) website serves as a primary information management, browsing, and dissemination tool. It is database-driven, thus easy to maintain and update. There are two different, yet fully integrated, parts of the website. An external site (www.avo.alaska.edu) allows the general public to track eruptive activity by viewing the latest photographs, webcam images, seismic data, and official information releases about the volcano, as well as maps, previous eruption information, and bibliographies. This website is also the single most comprehensive source of Alaska volcano information available. The database now contains 14,000 images, 3,300 of which are publicly viewable, and 4,300 bibliographic citations, many linked to full-text downloadable files. The internal portion of the website is essential to routine observatory operations, and hosts browse images of diverse geophysical and geological data in a format accessible by AVO staff regardless of location. An observation log allows users to enter information about anything from satellite passes to seismic activity to ash-fall reports into a searchable database, and has become the permanent record of observatory function. The individual(s) on duty at home, at the watch office, or elsewhere use forms on the internal website to log information about volcano activity. These data are then automatically parsed into a number of primary activity notices, which are the formal communication to appropriate agencies and interested individuals. Geochemistry, geochronology, and geospatial data modules are currently being developed. The website receives over 100 million hits and serves 1,300 GB of data annually. It is dynamically generated from a MySQL database with over 300 tables and several thousand lines of PHP code which write the actual web display. The primary webserver is housed at (but not owned by) the University of Alaska Fairbanks, and currently holds 200 GB of data.
Webcam images, webicorder graphs, earthquake location plots, and spectrograms are pulled and generated by other servers in Fairbanks and Anchorage.
NPCARE: database of natural products and fractional extracts for cancer regulation.
Choi, Hwanho; Cho, Sun Young; Pak, Ho Jeong; Kim, Youngsoo; Choi, Jung-Yun; Lee, Yoon Jae; Gong, Byung Hee; Kang, Yeon Seok; Han, Taehoon; Choi, Geunbae; Cho, Yeeun; Lee, Soomin; Ryoo, Dekwoo; Park, Hwangseo
2017-01-01
Natural products have increasingly attracted attention as a valuable resource for the development of anticancer medicines due to their structural novelty and good bioavailability. This necessitates a comprehensive database of the natural products and fractional extracts whose anticancer activities have been verified. NPCARE (http://silver.sejong.ac.kr/npcare) is a publicly accessible online database of natural products and fractional extracts for cancer regulation. At NPCARE, one can explore 6578 natural compounds and 2566 fractional extracts isolated from 1952 distinct biological species including plants, marine organisms, fungi, and bacteria, whose anticancer activities were validated with 1107 cell lines for 34 cancer types. Each entry in NPCARE is annotated with the cancer type, the genus and species names of the biological resource, the cell line used for demonstrating the anticancer activity, the PubChem ID, and a wealth of information about the target gene or protein. Besides the augmentation of plant entries up to 743 genera and 197 families, NPCARE is further enriched with the natural products and fractional extracts of diverse non-traditional biological resources. NPCARE is anticipated to serve as a dominant gateway for the discovery of new anticancer medicines due to the inclusion of a large number of fractional extracts as well as natural compounds isolated from a variety of biological resources.
Developing a list of reference chemicals for testing alternatives to whole fish toxicity tests.
Schirmer, Kristin; Tanneberger, Katrin; Kramer, Nynke I; Völker, Doris; Scholz, Stefan; Hafner, Christoph; Lee, Lucy E J; Bols, Niels C; Hermens, Joop L M
2008-11-11
This paper details the derivation of a list of 60 reference chemicals for the development of alternatives to animal testing in ecotoxicology, with a particular focus on fish. The chemicals were selected as a prerequisite for gathering mechanistic information on the performance of alternative testing systems, namely vertebrate cell lines and fish embryos, in comparison to the fish acute lethality test. To avoid the need for additional experiments with fish, the U.S. EPA fathead minnow database was consulted as the reference for whole-organism responses. This database was compared to the Halle Registry of Cytotoxicity and a collation of data by the German EPA (UBA) on acute toxicity derived from zebrafish embryos. Chemicals that were present in the fathead minnow database and in at least one of the other two databases were subject to selection. Criteria included the coverage of a wide range of toxicity and physico-chemical parameters as well as the determination of outliers of the in vivo/in vitro correlations. While the reference list of chemicals now guides our research on improving cell line and fish embryo assays to make them widely applicable, the list could benefit the search for alternatives in ecotoxicology in general. One example would be the use of this list to validate structure-activity prediction models, which in turn would benefit from a continuous extension of this list with regard to physico-chemical and toxicological data.
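One of the selection criteria, flagging outliers of the in vivo/in vitro correlation, can be sketched as a simple least-squares fit with a residual cutoff. The data and the two-standard-deviation cutoff below are hypothetical illustrations, not the paper's actual procedure:

```python
import math

def fit_line(xs, ys):
    # Ordinary least squares for y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def correlation_outliers(in_vivo, in_vitro, z_cut=2.0):
    """Indices of chemicals whose in vivo toxicity deviates from the
    in vitro regression by more than z_cut residual standard deviations."""
    a, b = fit_line(in_vitro, in_vivo)
    resid = [y - (a * x + b) for x, y in zip(in_vitro, in_vivo)]
    sd = math.sqrt(sum(r * r for r in resid) / (len(resid) - 1))
    return [i for i, r in enumerate(resid) if abs(r) > z_cut * sd]

# Hypothetical log-toxicity values: one chemical far off the common trend.
in_vitro = [float(i) for i in range(10)]
in_vivo = [float(i) for i in range(9)] + [30.0]
flagged = correlation_outliers(in_vivo, in_vitro)
```

Chemicals flagged this way are exactly the ones worth carrying into a reference list, since they expose mechanisms (e.g. metabolic activation) that the in vitro system misses.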
Avatar DNA Nanohybrid System in Chip-on-a-Phone
NASA Astrophysics Data System (ADS)
Park, Dae-Hwan; Han, Chang Jo; Shul, Yong-Gun; Choy, Jin-Ho
2014-05-01
Long admired for their informational role and recognition function in multidisciplinary science, DNA nanohybrids have been emerging as ideal materials for molecular nanotechnology and genetic information coding. Here, we designed an optical machine-readable DNA icon on a microarray, Avatar DNA, for automatic identification and data capture, in the manner of Quick Response and ColorZip codes. The Avatar icon is made of telepathic DNA-DNA hybrids inscribed on chips, which can be identified by a smartphone camera with application software. Information encoded in base sequences can be accessed by connecting the off-line icon to an on-line web-server network to provide a message, index, or URL from a database library. Avatar DNA thus converges nano-bio-info-cogno science: each building block stands for inorganic nanosheets, nucleotides, digits, and pixels. This convergence could address item-level identification that strengthens supply-chain security against drug counterfeits. It can, therefore, provide molecular-level vision through a mobile network to coordinate and integrate data-management channels for visual detection and recording.
Using the World Wide Web for GIDEP Problem Data Processing at Marshall Space Flight Center
NASA Technical Reports Server (NTRS)
McPherson, John W.; Haraway, Sandra W.; Whirley, J. Don
1999-01-01
Since April 1997, Marshall Space Flight Center has been using electronic transfer and the web to support our processing of the Government-Industry Data Exchange Program (GIDEP) and NASA ALERT information. Specific aspects include: (1) extraction of ASCII text information from GIDEP for loading into Word documents for e-mail to ALERT actionees; (2) downloading of GIDEP form image formats in Adobe Acrobat (.pdf) for internal storage and display on the MSFC ALERT web page; (3) linkage of stored GIDEP problem forms with summary information for access from the MSFC ALERT Distribution Summary Chart or from an HTML table of released MSFC ALERTs; (4) archival of historic ALERTs for reference by GIDEP ID, MSFC ID, or MSFC release date; (5) on-line tracking of ALERT response status using a Microsoft Access database and the web; and (6) on-line response to ALERTs from MSFC actionees through interactive web forms. The technique, benefits, effort, coordination, and lessons learned for each aspect are covered herein.
NASA Astrophysics Data System (ADS)
Rennoll, V.
2016-02-01
The National Centers for Environmental Information provide public access to a wealth of seafloor mapping data, both from National Ocean Service hydrographic surveys and from outside source collections. Utilizing the outside source data to improve nautical charts created by the National Oceanic and Atmospheric Administration (NOAA) is an appealing alternative to traditional surveys, largely in areas with significant data gaps where hydrographic surveys are not planned. However, much of the outside data are collected along transit lines and lack traditional overlapping main scheme lines and crosslines. Spanning multiple years and vessels, these transit-line data collections were obtained using disparate operating procedures and have inconsistent quality. Here, a workflow was developed to ingest these variable depth data within a defined region by assessing their quality and utility for nautical charting. The workflow was evaluated in a navigationally significant area of the Bering Sea, where bathymetric data collected from ten vessels over a period of twelve years were available. The outside data were shown to be of sufficient quality through comparisons with existing NOAA surveys and were then used to demonstrate where the data could provide new or updated information on nautical charts and provide reconnaissance for future hydrographic planning. The utility assessment of the data, however, was hindered by the lack of a verified survey-scale sounding database against which the outside source data could be compared. Having developed the workflow, it is recommended that further outside data be ingested by NOAA's Office of Coast Survey and that a database be developed with full-scale chart soundings for outside-data comparisons.
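A sketch of the kind of quality comparison such a workflow performs: transit-line soundings are binned onto a grid and the cell-mean depths are checked against reference survey depths. The cell size, tolerance, and coordinates below are hypothetical, not NOAA's actual specifications:

```python
from collections import defaultdict

def grid_key(lon, lat, cell_deg=0.01):
    # Snap a position to a coarse grid cell for co-location.
    return (round(lon / cell_deg), round(lat / cell_deg))

def compare_to_reference(outside, reference, cell_deg=0.01, tol_m=1.0):
    """outside/reference: iterables of (lon, lat, depth_m) soundings.
    Returns the fraction of co-located cells whose mean depths agree within
    tol_m, plus the per-cell differences for further screening."""
    def cell_means(points):
        cells = defaultdict(list)
        for lon, lat, depth in points:
            cells[grid_key(lon, lat, cell_deg)].append(depth)
        return {k: sum(v) / len(v) for k, v in cells.items()}
    out_m, ref_m = cell_means(outside), cell_means(reference)
    diffs = {k: out_m[k] - ref_m[k] for k in out_m.keys() & ref_m.keys()}
    if not diffs:
        return 0.0, diffs
    ok = sum(1 for d in diffs.values() if abs(d) <= tol_m)
    return ok / len(diffs), diffs

# Hypothetical soundings in the Bering Sea region.
outside = [(-170.003, 60.001, 52.0), (-170.012, 60.001, 40.0)]
reference = [(-170.004, 60.002, 52.4), (-170.013, 60.002, 45.0)]
frac, diffs = compare_to_reference(outside, reference)
```

Cells with large differences would be excluded from charting and flagged as reconnaissance targets for future hydrographic planning.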
Monitoring outcomes with relational databases: does it improve quality of care?
Clemmer, Terry P
2004-12-01
There are 3 key ingredients in improving the quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and is used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process, and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and potentially harmful. This article explores examples of these concepts.
Centre-based restricted nearest feature plane with angle classifier for face recognition
NASA Astrophysics Data System (ADS)
Tang, Linlin; Lu, Huifen; Zhao, Liang; Li, Zuohua
2017-10-01
An improved classifier based on the nearest feature plane (NFP), called the centre-based restricted nearest feature plane with angle (RNFPA) classifier, is proposed here for face recognition problems. The well-known NFP uses the geometrical information of samples to increase the effective number of training samples, but it increases computational complexity and suffers from an inaccuracy problem caused by the extended feature plane. To solve these problems, RNFPA exploits a centre-based feature plane and uses an angle threshold to restrict the extended feature space. By choosing an appropriate angle threshold, RNFPA improves performance and decreases computational complexity. Experiments on the AT&T face database, the AR face database and the FERET face database are used to evaluate the proposed classifier. Compared with the original NFP classifier, the nearest feature line (NFL) classifier, the nearest neighbour (NN) classifier and other improved NFP classifiers, the proposed one achieves competitive performance.
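The NFP distance, and one way to realize an angle-based restriction of the extended plane, can be sketched as follows. The exact restriction used by RNFPA differs in detail; here the plane distance is kept only when the query's offsets from the projection foot and from the class centre are nearly parallel, with a hypothetical threshold:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a): return math.sqrt(dot(a, a))
def scale(a, s): return [x * s for x in a]

def plane_distance(q, x1, x2, x3, max_angle_rad=math.pi / 4):
    """Distance from query q to the feature plane through x1, x2, x3.
    If the projection foot lies far off (large angle between the offsets
    from foot and centre), fall back to the centre distance rather than
    trusting the extrapolated plane."""
    centre = scale(add(add(x1, x2), x3), 1.0 / 3.0)
    # Orthonormal basis of the plane (Gram-Schmidt on the two edge vectors).
    u = sub(x2, x1)
    e1 = scale(u, 1.0 / norm(u))
    v = sub(x3, x1)
    v = sub(v, scale(e1, dot(v, e1)))
    e2 = scale(v, 1.0 / norm(v))
    d = sub(q, x1)
    foot = add(x1, add(scale(e1, dot(d, e1)), scale(e2, dot(d, e2))))
    qf, qc = sub(q, foot), sub(q, centre)
    if norm(qf) > 1e-12 and norm(qc) > 1e-12:
        cosang = max(-1.0, min(1.0, dot(qf, qc) / (norm(qf) * norm(qc))))
        if math.acos(cosang) > max_angle_rad:
            return norm(qc)   # restricted: ignore the extrapolated plane
    return norm(qf)

x1, x2, x3 = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
d_near = plane_distance([1/3, 1/3, 1.0], x1, x2, x3)  # above the class centre
d_far = plane_distance([5.0, 5.0, 1.0], x1, x2, x3)   # far extrapolation
```

Near the centre the plane distance is used unchanged, while a query whose projection lands far outside the sample region falls back to the centre distance, which is the inaccuracy the restriction is meant to suppress.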
Catalogue of HI PArameters (CHIPA)
NASA Astrophysics Data System (ADS)
Saponara, J.; Benaglia, P.; Koribalski, B.; Andruchow, I.
2015-08-01
The Catalogue of HI PArameters (CHIPA) is the natural continuation of the compilation by M.C. Martin in 1998. CHIPA provides the most important parameters of nearby galaxies derived from observations of the neutral hydrogen line. The catalogue contains information on 1400 galaxies across the sky and of different morphological types. Parameters such as the optical diameter of the galaxy, the blue magnitude, the distance, the morphological type, and the HI extension are listed, among others. Maps of the HI distribution, velocity, and velocity dispersion can also be displayed in some cases. The main objective of this catalogue is to facilitate bibliographic queries through a database accessible from the internet that will be available in 2015 (the website is under construction). The database was built using the open-source MySQL relational database management system (SQL: Structured Query Language), while the website was built with HTML (Hypertext Markup Language) and PHP (Hypertext Preprocessor).
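The kind of bibliographic query such a relational catalogue supports can be sketched with SQL. The schema, column names, and values below are hypothetical stand-ins for CHIPA's actual MySQL tables; sqlite3 is used here only to keep the example self-contained:

```python
import sqlite3

# In-memory stand-in for the catalogue schema (names and values invented).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE galaxies (
    name TEXT, morph_type TEXT, blue_mag REAL,
    distance_mpc REAL, hi_extent_arcmin REAL)""")
conn.executemany("INSERT INTO galaxies VALUES (?, ?, ?, ?, ?)", [
    ("NGC 253", "SAB(s)c", 8.0, 3.5, 40.0),
    ("NGC 300", "SA(s)d", 8.7, 2.0, 25.0),
    ("NGC 1316", "SAB0", 9.4, 20.0, 5.0),
])
# A typical query: nearby galaxies with extended HI, brightest first.
rows = conn.execute(
    "SELECT name FROM galaxies"
    " WHERE distance_mpc < 10 AND hi_extent_arcmin > 20"
    " ORDER BY blue_mag").fetchall()
```

A PHP front end, as described in the abstract, would issue the same SQL against the MySQL server and render the rows as an HTML table.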
Nurse-computer performance. Considerations for the nurse administrator.
Mills, M E; Staggers, N
1994-11-01
Regulatory reporting requirements and economic pressures to create a unified healthcare database are leading to the development of a fully computerized patient record. Nursing staff members will increasingly be responsible for using this technology, yet little is known about the interaction effect of staff characteristics and computer screen design on on-line accuracy and speed. In examining these issues, new considerations are raised for nurse administrators interested in facilitating staff use of clinical information systems.
A Multi-Wavelength Study of the Hot Component of the Interstellar Medium
NASA Technical Reports Server (NTRS)
Nichols, Joy; Oliversen, Ronald K. (Technical Monitor)
2002-01-01
The goals of this research are as follows: (1) Using the large number of lines of sight available in the ME database, identify the lines of sight with high-velocity components in interstellar lines, from neutral species through Si VI, C IV, and N V; (2) Compare the column density of the main components (i.e. low-velocity components) of the interstellar lines with distance, galactic longitude and latitude, and galactic radial position; derive statistics on the distribution of components in space (e.g. mean free path, mean column density of a component); and compare with model predictions for the column densities in the walls of old SNR bubbles and superbubbles, in evaporating cloud boundaries, and in turbulent mixing layers; (3) For the lines of sight associated with multiple high-velocity, high-ionization components, model the shock parameters for the associated superbubble and SNR to provide more accurate energy input information for hot-phase models and galactic halo models; thus far, 49 lines of sight with at least one high-velocity component in the C IV lines have been identified; and (4) Obtain higher-resolution data for the lines of sight with high-velocity components (and a few without) to further refine these models.
Valletta, Elisa; Kučera, Lukáš; Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr
2016-01-01
Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general.
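The calibration idea can be sketched with the smallest possible "network": a single linear neuron trained on synthetic fingerprints of known mixtures and then applied to an unknown. The spectral profiles and noise level below are invented; the published work used real intact-cell MS fingerprints and a proper ANN:

```python
import random

def synth_spectrum(frac_mesc, rng):
    """Hypothetical 8-bin fingerprint: linear mixture of two pure-cell
    profiles plus a little noise (invented values, not real MS data)."""
    hesc = [1.0, 0.2, 0.8, 0.1, 0.9, 0.3, 0.7, 0.4]
    mesc = [0.1, 0.9, 0.2, 0.8, 0.1, 0.9, 0.3, 0.8]
    return [(1 - frac_mesc) * h + frac_mesc * m + rng.gauss(0, 0.01)
            for h, m in zip(hesc, mesc)]

def train_one_neuron(samples, labels, epochs=500, lr=0.05):
    # A single linear neuron trained by gradient descent: the smallest "ANN".
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

rng = random.Random(0)
fracs = [i / 10 for i in range(11)]              # calibration mixtures 0%..100%
calibration = [synth_spectrum(f, rng) for f in fracs]
w, b = train_one_neuron(calibration, fracs)
unknown = synth_spectrum(0.3, rng)               # "unknown" contamination level
estimate = sum(wi * xi for wi, xi in zip(w, unknown)) + b
```

Because the model is trained only on known calibration mixtures, it maps a new fingerprint directly to an estimated contamination fraction, which is the core of the quantification scheme described.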
Prokeš, Lubomír; Amato, Filippo; Pivetta, Tiziana; Hampl, Aleš; Havel, Josef; Vaňhara, Petr
2016-01-01
Cross-contamination of eukaryotic cell lines used in biomedical research represents a highly relevant problem. Analysis of repetitive DNA sequences, such as Short Tandem Repeats (STR), or Simple Sequence Repeats (SSR), is a widely accepted, simple, and commercially available technique to authenticate cell lines. However, it provides only qualitative information that depends on the extent of reference databases for interpretation. In this work, we developed and validated a rapid and routinely applicable method for evaluation of cell culture cross-contamination levels based on mass spectrometric fingerprints of intact mammalian cells coupled with artificial neural networks (ANNs). We used human embryonic stem cells (hESCs) contaminated by either mouse embryonic stem cells (mESCs) or mouse embryonic fibroblasts (MEFs) as a model. We determined the contamination level using a mass spectra database of known calibration mixtures that served as training input for an ANN. The ANN was then capable of correct quantification of the level of contamination of hESCs by mESCs or MEFs. We demonstrate that MS analysis, when linked to proper mathematical instruments, is a tangible tool for unraveling and quantifying heterogeneity in cell cultures. The analysis is applicable in routine scenarios for cell authentication and/or cell phenotyping in general. PMID:26821236
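The quantification step can be illustrated in miniature. The paper trains an ANN on calibration mixtures; the sketch below solves the same two-component quantification problem with ordinary least-squares unmixing on synthetic fingerprints (all spectra here are random stand-ins, not real MS data, and the linear model is a simplification of the ANN approach):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference fingerprints: mean intensity at 200 m/z bins
# for pure hESC and pure mESC cultures (random stand-ins for real spectra).
hesc = rng.random(200)
mesc = rng.random(200)

def contamination_level(spectrum, ref_pure, ref_contaminant):
    """Estimate the contaminant fraction by least-squares unmixing.

    Two-component linear model (a simplification of the ANN approach):
    spectrum ~= (1 - f) * ref_pure + f * ref_contaminant.
    """
    A = np.column_stack([ref_pure, ref_contaminant])
    coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    coef = np.clip(coef, 0, None)   # intensities cannot be negative
    return coef[1] / coef.sum()     # normalized contaminant share

# Simulate a culture contaminated at 20%, with measurement noise.
true_f = 0.20
observed = (1 - true_f) * hesc + true_f * mesc + rng.normal(0, 0.01, 200)
est = contamination_level(observed, hesc, mesc)
```

With low noise, the recovered fraction tracks the true mixing level closely; the ANN in the paper plays the same role for nonlinear, real-world spectra.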
GeneSCF: a real-time based functional enrichment tool with support for multiple organisms.
Subhash, Santhilal; Kanduri, Chandrasekhar
2016-09-13
High-throughput technologies such as ChIP-sequencing, RNA-sequencing, DNA sequencing and quantitative metabolomics generate a huge volume of data. Researchers often rely on functional enrichment tools to interpret the biological significance of the affected genes from these high-throughput studies. However, currently available functional enrichment tools need to be updated frequently to adapt to new entries from the functional database repositories. Hence there is a need for a simplified tool that can perform functional enrichment analysis using updated information directly from source databases such as KEGG, Reactome or Gene Ontology. In this study, we focused on designing a command-line tool called GeneSCF (Gene Set Clustering based on Functional annotations) that can predict the functionally relevant biological information for a set of genes in a real-time updated manner. It is designed to handle information from more than 4000 organisms from freely available prominent functional databases like KEGG, Reactome and Gene Ontology. We successfully employed our tool on two published datasets to predict the biologically relevant functional information. The core features of this tool were tested on Linux machines without the need to install additional dependencies. GeneSCF is more reliable compared to other enrichment tools because of its ability to use reference functional databases in real time to perform enrichment analysis. It is an easy-to-integrate tool with other pipelines available for downstream analysis of high-throughput data. More importantly, GeneSCF can run multiple gene lists simultaneously on different organisms, thereby saving time for the users. Since the tool is designed to be ready-to-use, there is no need for any complex compilation and installation procedures.
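The statistical core of enrichment tools of this kind is a hypergeometric (Fisher-type) over-representation test: how surprising is it that k of the n submitted genes fall in a term annotating K of the N background genes? A generic sketch of that test, not GeneSCF's actual implementation:

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """Hypergeometric upper-tail p-value P(X >= k), the usual test behind
    gene set enrichment tools.

    N: genes in the background, K: genes annotated to the term,
    n: genes in the user's list, k: list genes annotated to the term.
    """
    denom = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / denom

# Toy example: 10 of 50 submitted genes hit a pathway annotating
# 200 genes out of a 20000-gene background (expected overlap: 0.5).
p = enrichment_pvalue(20000, 200, 50, 10)
```

A tool would compute this p-value per term (per KEGG pathway, GO category, etc.) and then correct for multiple testing.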
Powell, Robert E.
2001-01-01
This data set maps and describes the geology of the Conejo Well 7.5 minute quadrangle, Riverside County, southern California. The quadrangle, situated in Joshua Tree National Park in the eastern Transverse Ranges physiographic and structural province, encompasses part of the northern Eagle Mountains and part of the south flank of Pinto Basin. It is underlain by a basement terrane comprising Proterozoic metamorphic rocks, Mesozoic plutonic rocks, and Mesozoic and Mesozoic or Cenozoic hypabyssal dikes. The basement terrane is capped by a widespread Tertiary erosion surface preserved in remnants in the Eagle Mountains and buried beneath Cenozoic deposits in Pinto Basin. Locally, Miocene basalt overlies the erosion surface. A sequence of at least three Quaternary pediments is planed into the north piedmont of the Eagle Mountains, each in turn overlain by successively younger residual and alluvial deposits. The Tertiary erosion surface is deformed and broken by north-northwest-trending, high-angle, dip-slip faults in the Eagle Mountains and an east-west trending system of high-angle dip- and left-slip faults. In and adjacent to the Conejo Well quadrangle, faults of the northwest-trending set displace Miocene sedimentary rocks and basalt deposited on the Tertiary erosion surface and Pliocene and (or) Pleistocene deposits that accumulated on the oldest pediment. Faults of this system appear to be overlain by Pleistocene deposits that accumulated on younger pediments. East-west trending faults are younger than and perhaps in part coeval with faults of the northwest-trending set. The Conejo Well database was created using ARCVIEW and ARC/INFO, which are geographical information system (GIS) software products of Environmental Systems Research Institute (ESRI).
The database consists of the following items: (1) a map coverage showing faults and geologic contacts and units, (2) a separate coverage showing dikes, (3) a coverage showing structural data, (4) a point coverage containing line ornamentation, and (5) a scanned topographic base at a scale of 1:24,000. The coverages include attribute tables for geologic units (polygons and regions), contacts (arcs), and site-specific data (points). The database, accompanied by a pamphlet file and this metadata file, also includes the following graphic and text products: (1) A portable document file (.pdf) containing a navigable graphic of the geologic map on a 1:24,000 topographic base. The map is accompanied by a marginal explanation consisting of a Description of Map and Database Units (DMU), a Correlation of Map and Database Units (CMU), and a key to point- and line-symbols. (2) Separate .pdf files of the DMU and CMU, individually. (3) A PostScript graphic file containing the geologic map on a 1:24,000 topographic base accompanied by the marginal explanation. (4) A pamphlet that describes the database and how to access it. Within the database, geologic contacts, faults, and dikes are represented as lines (arcs), geologic units as polygons and regions, and site-specific data as points. Polygon, arc, and point attribute tables (.pat, .aat, and .pat, respectively) uniquely identify each geologic datum and link it to other tables (.rel) that provide more detailed geologic information.
DASTCOM5: A Portable and Current Database of Asteroid and Comet Orbit Solutions
NASA Astrophysics Data System (ADS)
Giorgini, Jon D.; Chamberlin, Alan B.
2014-11-01
A portable direct-access database containing all NASA/JPL asteroid and comet orbit solutions, with the software to access it, is available for download (ftp://ssd.jpl.nasa.gov/pub/xfr/dastcom5.zip; unzip -ao dastcom5.zip). DASTCOM5 contains the latest heliocentric IAU76/J2000 ecliptic osculating orbital elements for all known asteroids and comets as determined by a least-squares best-fit to ground-based optical, spacecraft, and radar astrometric measurements. Other physical, dynamical, and covariance parameters are included when known. A total of 142 parameters per object are supported within DASTCOM5. This information is suitable for initializing high-precision numerical integrations, assessing orbit geometry, computing trajectory uncertainties, visual magnitude, and summarizing physical characteristics of the body. The DASTCOM5 distribution is updated as often as hourly to include newly discovered objects or orbit solution updates. It includes an ASCII index of objects that supports look-ups based on name, current or past designation, SPK ID, MPC packed-designations, or record number. DASTCOM5 is the database used by the NASA/JPL Horizons ephemeris system. It is a subset exported from a larger MySQL-based relational Small-Body Database ("SBDB") maintained at JPL. The DASTCOM5 distribution is intended for programmers comfortable with UNIX/LINUX/MacOSX command-line usage who need to develop stand-alone applications. The goal of the implementation is to provide small, fast, portable, and flexibly programmatic access to JPL comet and asteroid orbit solutions. The supplied software library, examples, and application programs have been verified under gfortran, Lahey, Intel, and Sun 32/64-bit Linux/UNIX FORTRAN compilers. A command-line tool ("dxlook") is provided to enable database access from shell or script environments.
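Look-ups against the ASCII index might be scripted as follows. The fixed-column record layout below is invented for illustration; the real index format is documented in the DASTCOM5 distribution, and the `dxlook` tool already provides this from the shell:

```python
# Hypothetical fixed-width index: record ID, name, designation.
INDEX = """\
20000001  Ceres      A801 AA
20000004  Vesta      A807 FA
 1000041  1P/Halley  1982 U1
"""

def lookup(index_text, query):
    """Return the numeric record ID whose name or designation matches,
    or None if the object is not in the index."""
    for line in index_text.splitlines():
        rec_id = line[:8].strip()
        name = line[10:21].strip()
        desig = line[21:].strip()
        if query in (name, desig):
            return int(rec_id)
    return None
```

The record ID returned here would then be used for the direct-access read into the binary database itself.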
A Multi-Purpose Data Dissemination Infrastructure for the Marine-Earth Observations
NASA Astrophysics Data System (ADS)
Hanafusa, Y.; Saito, H.; Kayo, M.; Suzuki, H.
2015-12-01
To open the data from a variety of observations, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has developed a multi-purpose data dissemination infrastructure. Although many observations have been made in the earth sciences, not all the data are fully open. We think data centers may provide researchers with a universal data dissemination service which can handle various kinds of observation data with little effort. For this purpose the JAMSTEC Data Management Office has developed the "Information Catalog Infrastructure System (Catalog System)". This is a kind of catalog management system which can create, renew, and delete catalogs (= databases) and has the following features: - The Catalog System does not depend on data types or granularity of data records. - By registering a new metadata schema to the system, a new database can be created on the same system without system modification. - As web pages are defined by cascading style sheets, databases have different look and feel, and operability. - The Catalog System provides databases with basic search tools: search by text, selection from a category tree, and selection from a timeline chart. - For domestic users it creates the Japanese and English pages at the same time and has a dictionary to control terminology and proper nouns. As of August 2015 JAMSTEC operates 7 databases on the Catalog System. We expect to transfer existing databases to this system, or create new databases on it. In comparison with a dedicated database developed for a specific dataset, the Catalog System is suitable for the dissemination of small datasets at minimum cost. Metadata held in the catalogs may be transferred to other metadata schemas for exchange with global databases or portals.
Examples: JAMSTEC Data Catalog: http://www.godac.jamstec.go.jp/catalog/data_catalog/metadataList?lang=enJAMSTEC Document Catalog: http://www.godac.jamstec.go.jp/catalog/doc_catalog/metadataList?lang=en&tab=categoryResearch Information and Data Access Site of TEAMS: http://www.i-teams.jp/catalog/rias/metadataList?lang=en&tab=list
Expanding Academic Vocabulary with an Interactive On-Line Database
ERIC Educational Resources Information Center
Horst, Marlise; Cobb, Tom; Nicolae, Ioana
2005-01-01
University students used a set of existing and purpose-built on-line tools for vocabulary learning in an experimental ESL course. The resources included concordance, dictionary, cloze-builder, hypertext, and a database with interactive self-quizzing feature (all freely available at www.lextutor.ca). The vocabulary targeted for learning consisted…
14 CFR 221.500 - Transmission of electronic tariffs to subscribers.
Code of Federal Regulations, 2014 CFR
2014-01-01
... TRANSPORTATION (AVIATION PROCEEDINGS) ECONOMIC REGULATIONS TARIFFS Electronically Filed Tariffs § 221.500... to any subscriber to the on-line tariff database, including access to the justification required by... machine-readable data (raw tariff data) of all daily transactions made to its on-line tariff database. The...
NASA Astrophysics Data System (ADS)
Damm, Bodo; Klose, Martin
2014-05-01
This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database and outlines its major data sources and the strategy of information retrieval. Furthermore, the contribution exemplifies the database potentials in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Thanks to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 single data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores, besides information on landslide types, dimensions, and processes, additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Using web and RSS feeds, and soon also a focused web crawler, this enables effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This enables gathering detailed landslide information from, amongst others, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data is extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation.
The landslide database is currently being migrated to a spatial database system running on PostgreSQL/PostGIS. This provides advanced functionality for spatial data analysis and forms the basis for future data provision and visualization using a WebGIS application. Analysis of landslide database contents shows that in most parts of Germany landslides primarily affect transportation infrastructures. Although with distinctly lower frequency, recent landslides are also recorded as causing serious damage to hydraulic facilities and waterways, supply and disposal infrastructures, sites of cultural heritage, as well as forest, agricultural, and mining areas. The main types of landslide damage are failure of cut and fill slopes, destruction of retaining walls, street lights, and forest stocks, burial of roads, backyards, and garden areas, as well as crack formation in foundations, sewer lines, and building walls. Landslide repair and mitigation at transportation infrastructures is dominated by simple solutions such as catch barriers or rock fall drapery. These solutions are often undersized and fail under stress. The use of costly slope stabilization or protection systems is proven to reduce these risks effectively over longer maintenance cycles. The right balancing of landslide mitigation is thus a crucial problem in managing landslide risks. Development and analysis of such landslide databases helps to support decision-makers in finding efficient solutions to minimize landslide risks for human beings, infrastructures, and financial assets.
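The damage statistics described above map naturally onto SQL queries over an event table. A minimal sketch using SQLite as a stand-in for the PostgreSQL/PostGIS system (the table, columns, and rows are hypothetical, chosen only to mirror the categories mentioned):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE landslide_event (
    id INTEGER PRIMARY KEY,
    year INTEGER,
    infrastructure TEXT,   -- e.g. 'road', 'railway', 'waterway'
    damage_type TEXT       -- e.g. 'fill slope failure', 'burial'
)""")
con.executemany(
    "INSERT INTO landslide_event (year, infrastructure, damage_type)"
    " VALUES (?, ?, ?)",
    [(1999, "road", "fill slope failure"),
     (2002, "road", "burial"),
     (2002, "railway", "cut slope failure"),
     (2013, "waterway", "bank failure")],
)

# Frequency of damaged infrastructure types, most affected first --
# the kind of aggregate the database supports for impact research.
stats = con.execute(
    "SELECT infrastructure, COUNT(*) AS n FROM landslide_event "
    "GROUP BY infrastructure ORDER BY n DESC"
).fetchall()
```

In the real system the same aggregation would be joined with PostGIS geometry columns to map damage frequencies spatially.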
Handwriting Identification, Matching, and Indexing in Noisy Document Images
2006-01-01
algorithm to detect all parallel lines simultaneously. Our method can detect 96.8% of the severely broken rule lines in the Arabic database we collected...in the database to guide later processing. It is widely used in banks, post offices, and tax offices where the types of forms are most often pre...be used for different fields), and output the recognition results to a database . Although special anchors may be avail- able to facilitate form
MAPU: Max-Planck Unified database of organellar, cellular, tissue and body fluid proteomes
Zhang, Yanling; Zhang, Yong; Adachi, Jun; Olsen, Jesper V.; Shi, Rong; de Souza, Gustavo; Pasini, Erica; Foster, Leonard J.; Macek, Boris; Zougman, Alexandre; Kumar, Chanchal; Wiśniewski, Jacek R.; Jun, Wang; Mann, Matthias
2007-01-01
Mass spectrometry (MS)-based proteomics has become a powerful technology to map the protein composition of organelles, cell types and tissues. In our department, a large-scale effort to map these proteomes is complemented by the Max-Planck Unified (MAPU) proteome database. MAPU contains several body fluid proteomes; including plasma, urine, and cerebrospinal fluid. Cell lines have been mapped to a depth of several thousand proteins and the red blood cell proteome has also been analyzed in depth. The liver proteome is represented with 3200 proteins. By employing high resolution MS and stringent validation criteria, false positive identification rates in MAPU are lower than 1:1000. Thus MAPU datasets can serve as reference proteomes in biomarker discovery. MAPU contains the peptides identifying each protein, measured masses, scores and intensities and is freely available at using a clickable interface of cell or body parts. Proteome data can be queried across proteomes by protein name, accession number, sequence similarity, peptide sequence and annotation information. More than 4500 mouse and 2500 human proteins have already been identified in at least one proteome. Basic annotation information and links to other public databases are provided in MAPU and we plan to add further analysis tools. PMID:17090601
Operator Performance Support System (OPSS)
NASA Technical Reports Server (NTRS)
Conklin, Marlen Z.
1993-01-01
In the complex and fast-reaction world of military operations, present technologies, combined with tactical situations, have flooded the operator with assorted information that he is expected to process instantly. As technologies progress, this flow of data and information has both guided and overwhelmed the operator. However, the technologies that have confounded many operators today can be used to assist him -- thus the Operator Performance Support System. In this paper we propose an operator support station that incorporates the elements of Video and Image Databases, Productivity Software, Interactive Computer Based Training, Hypertext/Hypermedia Databases, Expert Programs, and Human Factors Engineering. The Operator Performance Support System will provide the operator with an integrated on-line information/knowledge system that will guide expert or novice users to correct systems operations. Although the OPSS is being developed for the Navy, the performance of the workforce in today's competitive industry is of major concern. The concepts presented in this paper, which address ASW systems software design issues, are also directly applicable to industry. The OPSS will propose practical applications in how to more closely align the relationships between technical knowledge and equipment operator performance.
NASA Astrophysics Data System (ADS)
Yamada, A.; Saitoh, N.; Nonogaki, R.; Imasu, R.; Shiomi, K.; Kuze, A.
2016-12-01
The thermal infrared (TIR) band of the Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) onboard the Greenhouse Gases Observing Satellite (GOSAT) observes CH4 profiles in the wavenumber range from 1210 cm-1 to 1360 cm-1, which includes the CH4 ν4 band. The current retrieval algorithm (V1.0) uses LBLRTM V12.1 with the AER V3.1 line database to calculate optical depth. LBLRTM V12.1 includes the MT_CKD 2.5.2 model to calculate continuum absorption. The continuum absorption has large uncertainty, especially in the temperature-dependent coefficient, between the BPS and MT_CKD models in the wavenumber region of 1210-1250 cm-1 (Paynter and Ramaswamy, 2014). The purpose of this study is to assess the impact on CH4 retrieval of the line parameter databases and the uncertainty of continuum absorption. We used the AER V1.0 database, HITRAN2004 database, HITRAN2008 database, AER V3.2 database, and HITRAN2012 database (Rothman et al. 2005, 2009, and 2013; Clough et al., 2005). The AER V1.0 database is based on HITRAN2000. The CH4 line parameters of the AER V3.1 and V3.2 databases are derived from HITRAN2008, including updates through May 2009, with line-mixing parameters. We compared the retrieved CH4 with the HIPPO CH4 observations (Wofsy et al., 2012). The difference for AER V3.2 was the smallest, at 24.1 ± 45.9 ppbv. The differences for AER V1.0, HITRAN2004, HITRAN2008, and HITRAN2012 were 35.6 ± 46.5 ppbv, 37.6 ± 46.3 ppbv, 32.1 ± 46.1 ppbv, and 35.2 ± 46.0 ppbv, respectively. Comparing the AER V3.2 case to the HITRAN2008 case, the line-coupling effect reduced the difference by 8.0 ppbv. Median values of the residual difference from HITRAN2008 to AER V1.0, HITRAN2004, AER V3.2, and HITRAN2012 were 0.6 K, 0.1 K, -0.08 K, and 0.08 K, respectively, while median values of the transmittance difference were less than 0.0003 and the transmittance differences showed only small wavenumber dependence.
We also discuss the retrieval error from the uncertainty of the continuum absorption, the test of full grid configuration for retrieval, and the retrieval results using GOSAT TIR L1B V203203, which are sample products to evaluate the next level 1B algorithm.
Application of machine vision to pup loaf bread evaluation
NASA Astrophysics Data System (ADS)
Zayas, Inna Y.; Chung, O. K.
1996-12-01
Intrinsic end-use quality of hard winter wheat breeding lines is routinely evaluated at the USDA, ARS, USGMRL, Hard Winter Wheat Quality Laboratory. Experimental baking test of pup loaves is the ultimate test for evaluating hard wheat quality. Computer vision was applied to developing an objective methodology for bread quality evaluation for the 1994 and 1995 crop wheat breeding line samples. Computer extracted features for bread crumb grain were studied, using subimages (32 by 32 pixel) and features computed for the slices with different threshold settings. A subsampling grid was located with respect to the axis of symmetry of a slice to provide identical topological subimage information. Different ranking techniques were applied to the databases. Statistical analysis was run on the database with digital image and breadmaking features. Several ranking algorithms and data visualization techniques were employed to create a sensitive scale for porosity patterns of bread crumb. There were significant linear correlations between machine vision extracted features and breadmaking parameters. Crumb grain scores by human experts were correlated more highly with some image features than with breadmaking parameters.
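One simple crumb-grain feature of the kind described, porosity measured per 32 by 32 pixel subimage, can be sketched as follows. The threshold value and the synthetic "slice" images are illustrative assumptions, not the laboratory's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def porosity_features(slice_img, sub=32, threshold=128):
    """Split a grayscale slice image into sub x sub tiles and return the
    fraction of 'pore' pixels (below threshold) in each tile -- one
    simple stand-in for the crumb-grain features described above."""
    h, w = slice_img.shape
    feats = []
    for r in range(0, h - sub + 1, sub):
        for c in range(0, w - sub + 1, sub):
            tile = slice_img[r:r + sub, c:c + sub]
            feats.append(float((tile < threshold).mean()))
    return np.array(feats)

# Synthetic 'slices' of uniform random pixels; real crumb images would
# show coarser crumb as larger dark pores, i.e. higher tile porosity.
slices = [rng.integers(0, 256, (128, 128)) for _ in range(5)]
porosity = np.array([porosity_features(s).mean() for s in slices])
```

Per-tile features like these, averaged or ranked per slice, are what would then be correlated against breadmaking parameters and expert crumb-grain scores.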
Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits.
Gámez Serna, Citlalli; Ruichek, Yassine
2017-06-14
A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature along with speed limits to automatically adjust the vehicle's speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, 'curve analysis extraction' and 'speed limits database creation' are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits, and consequently to increase safety and comfort for the passenger.
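A common way to implement the curvature-to-speed step is to estimate curvature from three consecutive GPS fixes (projected to local metres) and cap the speed so that lateral acceleration stays within a comfort bound. The sketch below works under those assumptions; the comfort bound and the exact formulation in the DSA method may differ:

```python
import math

def curvature(p1, p2, p3):
    """Curvature (1/m) of the circle through three planar points --
    a standard estimate of road curvature from consecutive GPS fixes."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    area2 = abs((x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1))
    if area2 == 0:
        return 0.0                      # collinear points: straight segment
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    return 2 * area2 / (a * b * c)      # kappa = 1/R = 4*Area / (a*b*c)

def curve_speed(kappa, a_lat_max=2.0, v_limit=27.8):
    """Ideal speed (m/s): the posted limit, capped so that lateral
    acceleration v^2 * kappa stays below a_lat_max (assumed bound)."""
    if kappa <= 0:
        return v_limit
    return min(v_limit, math.sqrt(a_lat_max / kappa))

# Points on a circle of radius 50 m -> kappa = 0.02 1/m,
# so the capped speed is sqrt(2.0 / 0.02) = 10 m/s.
pts = [(50 * math.cos(t), 50 * math.sin(t)) for t in (0.0, 0.1, 0.2)]
k = curvature(*pts)
v = curve_speed(k)
```

Smoothing the transition between the current speed and this ideal speed is then the job of the longitudinal controller.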
TIGER/Line Shapefile, 2010, 2010 Census Block State-based
The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts, however, each TIGER/Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Census Blocks are statistical areas bounded on all sides by visible features, such as streets, roads, streams, and railroad tracks, and/or by nonvisible boundaries such as city, town, township, and county limits, and short line-of-sight extensions of streets and roads. Census blocks are relatively small in area; for example, a block in a city bounded by streets. However, census blocks in remote areas are often large and irregular and may even be many square miles in area. A common misunderstanding is that data users think census blocks are used geographically to build all other census geographic areas, rather all other census geographic areas are updated and then used as the primary constraints, along with roads and water features, to delineate the tabulation blocks. As a result, all 2010 Census blocks nest within every other 2010 Census geographic area, so that Census Bureau statistical data can be tabulated at the block level and aggregated up to the appropr
A Framework for Cloudy Model Optimization and Database Storage
NASA Astrophysics Data System (ADS)
Calvén, Emilia; Helton, Andrew; Sankrit, Ravi
2018-01-01
We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in SQL database format for later usage. The database can be searched for the models best fitting observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines, either by directly accessing the database using the framework code or via a website specifically made for this purpose.
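Searching a model database for the best fit to observed line ratios reduces to a chi-squared minimization over the stored predictions. A minimal sketch with SQLite, a hypothetical schema, and invented line ratios (the real framework's schema and Cloudy grid are richer):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE model (
    id INTEGER PRIMARY KEY,
    log_density REAL,        -- log n_H of the ejecta shell (illustrative)
    ratio_oiii_hbeta REAL,   -- predicted [O III]/H-beta
    ratio_nii_halpha REAL    -- predicted [N II]/H-alpha
)""")
con.executemany("INSERT INTO model VALUES (?, ?, ?, ?)",
                [(1, 6.0, 8.2, 0.4),
                 (2, 7.0, 5.1, 0.9),
                 (3, 8.0, 2.3, 1.6)])

def best_fit(observed, errors):
    """Return the model id minimizing chi-squared over the two ratios."""
    best = None
    for mid, _, r1, r2 in con.execute("SELECT * FROM model"):
        chi2 = (((r1 - observed[0]) / errors[0]) ** 2 +
                ((r2 - observed[1]) / errors[1]) ** 2)
        if best is None or chi2 < best[1]:
            best = (mid, chi2)
    return best[0]

match = best_fit(observed=(5.0, 1.0), errors=(0.5, 0.1))
```

The optimization feature described above would then seed new Cloudy runs near the matched model's parameters to refine the fit.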
IPD—the Immuno Polymorphism Database
Robinson, James; Halliwell, Jason A.; McWilliam, Hamish; Lopez, Rodrigo; Marsh, Steven G. E.
2013-01-01
The Immuno Polymorphism Database (IPD), http://www.ebi.ac.uk/ipd/ is a set of specialist databases related to the study of polymorphic genes in the immune system. The IPD project works with specialist groups or nomenclature committees who provide and curate individual sections before they are submitted to IPD for online publication. The IPD project stores all the data in a set of related databases. IPD currently consists of four databases: IPD-KIR, contains the allelic sequences of killer-cell immunoglobulin-like receptors, IPD-MHC, a database of sequences of the major histocompatibility complex of different species; IPD-HPA, alloantigens expressed only on platelets; and IPD-ESTDAB, which provides access to the European Searchable Tumour Cell-Line Database, a cell bank of immunologically characterized melanoma cell lines. The data is currently available online from the website and FTP directory. This article describes the latest updates and additional tools added to the IPD project. PMID:23180793
The New Partnership for Africa’s Development (NEPAD) -- Will It Succeed or Fail?
2005-03-18
elites.” Global Information Network . (30 May 2003):1. Database on-line. Available from ProQuest. Accessed 21 October 2004. Ishikawa , Kaoru . Nation...Threat to Regional Integration?” TWN Africa, available from http://twnafrica.org/news_detail.asp?twnID=541; Internet: accessed 21 October 2004. 41 Kaoru ... Ishikawa , Nation Building and Development Assistance in Africa , (New York, N.Y.: St. Martin’s Press, 1999), 79-80. 42 “African Women Respond: Summary
Nicholson, Suzanne W.; Stoeser, Douglas B.; Wilson, Frederic H.; Dicken, Connie L.; Ludington, Steve
2007-01-01
The growth in the use of Geographic Information Systems (GIS) has highlighted the need for regional and national digital geologic maps attributed with age and rock type information. Such spatial data can be conveniently used to generate derivative maps for purposes that include mineral-resource assessment, metallogenic studies, tectonic studies, human health and environmental research. In 1997, the United States Geological Survey's Mineral Resources Program initiated an effort to develop national digital databases for use in mineral resource and environmental assessments. One primary activity of this effort was to compile a national digital geologic map database, utilizing state geologic maps, to support mineral resource studies in the range of 1:250,000- to 1:1,000,000-scale. Over the course of the past decade, state databases were prepared using a common standard for the database structure, fields, attributes, and data dictionaries. As of late 2006, standardized geological map databases for all conterminous (CONUS) states have been available on-line as USGS Open-File Reports. For Alaska and Hawaii, new state maps are being prepared, and the preliminary work for Alaska is being released as a series of 1:500,000-scale regional compilations. See below for a list of all published databases.
On-Line Database of Vibration-Based Damage Detection Experiments
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Doebling, Scott W.; Kholwad, Tina D.
2000-01-01
This paper describes a new, on-line bibliographic database of vibration-based damage detection experiments. Publications in the database discuss experiments conducted on actual structures as well as those conducted with simulated data. The database can be searched and sorted in many ways, and it provides photographs of test structures when available. It currently contains 100 publications, which is estimated to be about 5-10% of the number of papers written to date on this subject. Additional entries are forthcoming. This database is available for public use on the Internet at the following address: http://sdbpappa-mac.larc.nasa.gov. Click on the link named "dd_experiments.fp3" and then type "guest" as the password. No user name is required.
IAS telecommunication infrastructure and value added network services provided by IASNET
NASA Astrophysics Data System (ADS)
Smirnov, Oleg L.; Marchenko, Sergei
The topology of a packet switching network for the Soviet National Centre for Automated Data Exchange with Foreign Computer Networks and Databanks (NCADE), based on a design by the Institute for Automated Systems (IAS), is discussed. NCADE has partners all over the world: it is linked to East European countries via telephone lines, while satellites are used for communication with remote partners, such as Cuba, Mongolia, and Vietnam. Moreover, there is a connection to the Austrian, British, Canadian, Finnish, French, U.S. and other western networks through which users can have access to databases on each network. At the same time, NCADE provides western customers with access to more than 70 Soviet databases. Software and hardware of IASNET follow data exchange recommendations agreed with the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT). Technical parameters of IASNET are compatible with the majority of foreign networks, such as DATAPAK, TRANSPAC, TELENET, and others. By means of IASNET, the NCADE provides connection of Soviet and foreign users to information and computer centers around the world on the basis of the CCITT X.25 and X.75 recommendations. The information resources of IASNET and its value added network services, such as computer teleconferences, E-mail, information retrieval systems, and intelligent support of access to databanks and databases, are discussed. The topology of the ACADEMNET, connected to IASNET over an X.25 gateway, is also discussed.
BrEPS 2.0: Optimization of sequence pattern prediction for enzyme annotation.
Dudek, Christian-Alexander; Dannheim, Henning; Schomburg, Dietmar
2017-01-01
The prediction of gene functions is crucial for a large number of life science areas. Faster high-throughput sequencing techniques generate more and larger datasets. Manual annotation by classical wet-lab experiments is not suitable for these large amounts of data. We showed earlier that the automatic sequence pattern-based BrEPS protocol, based on manually curated sequences, can be used for the prediction of enzymatic functions of genes. The growing sequence databases provide the opportunity for more reliable patterns, but are also a challenge for the implementation of automatic protocols. We reimplemented and optimized the BrEPS pattern generation to be applicable to larger datasets on an acceptable timescale. The primary improvement of the new BrEPS protocol is the enhanced data selection step. Manually curated annotations from Swiss-Prot are used as a reliable source for function prediction of enzymes observed at the protein level. The pool of sequences is extended by highly similar sequences from TrEMBL and Swiss-Prot. This allows us to restrict the selection of Swiss-Prot entries without losing the diversity of sequences needed to generate significant patterns. Additionally, a supporting pattern type was introduced by extending the patterns at semi-conserved positions with highly similar amino acids. Extended patterns have an increased complexity, increasing the chance of matching more sequences without losing the essential structural information of the pattern. To enhance the usability of the database, we introduced enzyme function prediction based on consensus EC numbers and IUBMB enzyme nomenclature. BrEPS is part of the Braunschweig Enzyme Database (BRENDA) and is available on a completely redesigned website and as a download. The database can be downloaded and used with the BrEPScmd command line tool for large-scale sequence analysis.
The BrEPS website and downloads for the database creation tool, command line tool and database are freely accessible at http://breps.tu-bs.de.
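As a rough illustration of the kind of sequence-pattern generation the BrEPS protocol automates, the sketch below derives a PROSITE-style pattern from a toy alignment: fully conserved columns keep their residue, semi-conserved columns become a bracketed choice, and variable columns become a wildcard. The threshold and notation are assumptions for this example, not the actual BrEPS algorithm.

```python
# Hypothetical simplification of pattern generation from aligned sequences;
# not the actual BrEPS implementation.

def make_pattern(alignment, max_variants=3):
    """Build a PROSITE-style pattern: conserved columns keep the residue,
    semi-conserved columns become a bracketed choice, others become 'x'."""
    pattern = []
    for column in zip(*alignment):
        residues = sorted(set(column))
        if len(residues) == 1:
            pattern.append(residues[0])                    # fully conserved
        elif len(residues) <= max_variants:
            pattern.append("[" + "".join(residues) + "]")  # semi-conserved
        else:
            pattern.append("x")                            # variable position
    return "-".join(pattern)

seqs = ["GDSGGP", "GDSAGP", "GDSGGS"]
print(make_pattern(seqs))  # → G-D-S-[AG]-G-[PS]
```

BrEPS's "extended" patterns would additionally widen the bracketed choices at semi-conserved positions with chemically similar amino acids, which this sketch omits.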
New spectroscopy in the HITRAN2016 database and its impact on atmospheric retrievals
NASA Astrophysics Data System (ADS)
Gordon, I.; Rothman, L. S.; Kochanov, R. V.; Tan, Y.; Toon, G. C.
2017-12-01
The HITRAN spectroscopic database is a backbone of the interpretation of atmospheric spectral retrievals and an important input to radiative transfer codes. The database has served the atmospheric community for nearly half a century, with a new edition released every four years. The most recent release is HITRAN2016 [1]. It consists of line-by-line lists, experimental absorption cross-sections, collision-induced absorption data, and aerosol indices of refraction. This presentation stresses the importance of using the most recent edition of the database in radiative transfer codes. The line-by-line lists for most of the HITRAN molecules were updated (and two new molecules added) in comparison with the previous compilation, HITRAN2012 [2], which has been in use, along with some intermediate updates, since 2012. The extent of the updates ranges from revising a few lines of certain molecules to complete replacements of the lists and the introduction of additional isotopologues. In addition, the number of molecules in the cross-sectional part of the database has increased dramatically, from nearly 50 to over 300. The molecules covered by the HITRAN database are important in planetary remote sensing, environmental monitoring (in particular, biomass burning detection), climate applications, industrial pollution tracking, astrophysics, and more. Taking advantage of the new structure and interface available at www.hitran.org [3] and the HITRAN Application Programming Interface [4], the number of parameters has also been significantly increased, now incorporating, for instance, non-Voigt line profiles [5]; broadening by gases other than air and "self" [6]; and other phenomena, including line mixing. This is an important novelty that needs to be properly introduced into radiative transfer codes in order to advance accurate interpretation of remote sensing retrievals.
This work is supported by the NASA PDART (NNX16AG51G) and AURA (NNX 17AI78G) programs. References[1] I.E. Gordon et al, JQSRT in press (2017) http://doi.org/10.1016/j.jqsrt.2017.06.038. [2] L.S. Rothman et al, JQSRT 130, 4 (2013). [3] C. Hill et al, JQSRT 177, 4 (2016). [4] R.V. Kochanov et al, JQSRT 177, 15 (2016). [5] P. Wcisło et al., JQSRT 177, 75 (2016). [6] J. S. Wilzewski et al., JQSRT 168, 193 (2016).
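As a toy illustration of how a line-by-line list such as HITRAN's is consumed by a radiative transfer code, the sketch below sums area-normalized Lorentz profiles over a small invented line list. Real codes use Voigt or more advanced profiles, temperature-dependent intensities, and actual HITRAN parameters; the numbers here are placeholders.

```python
import math

# Toy line-by-line absorption sketch: each record gives a line center (cm^-1),
# an intensity, and a half-width, as a HITRAN-style list would. The values
# below are invented for illustration, not real HITRAN data.

def lorentz(nu, nu0, gamma):
    """Area-normalized Lorentz profile (units of cm)."""
    return gamma / math.pi / ((nu - nu0) ** 2 + gamma ** 2)

def absorption_coefficient(nu, lines):
    """Sum intensity-weighted profiles; lines = (center, intensity, half-width)."""
    return sum(s * lorentz(nu, nu0, g) for nu0, s, g in lines)

toy_lines = [(1000.0, 1.0e-20, 0.07), (1000.5, 5.0e-21, 0.05)]
k = absorption_coefficient(1000.0, toy_lines)   # monochromatic coefficient
```

The non-Voigt profiles and foreign-gas broadening added in HITRAN2016 replace the simple `lorentz` term above with more elaborate, parameter-rich line shapes.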
Development Approach of the Advanced Life Support On-line Project Information System
NASA Technical Reports Server (NTRS)
Levri, Julie A.; Hogan, John A.; Morrow, Rich; Ho, Michael C.; Kaehms, Bob; Cavazzoni, Jim; Brodbeck, Christina A.; Whitaker, Dawn R.
2005-01-01
The Advanced Life Support (ALS) Program has recently accelerated an effort to develop an On-line Project Information System (OPIS) for research project and technology development data centralization and sharing. There has been significant advancement in OPIS over the past year (Hogan et al., 2004). This paper presents the resultant OPIS development approach. OPIS is being built as an application framework consisting of an underlying Linux/Apache/MySQL/PHP (LAMP) stack and supporting class libraries that provide database abstraction and automatic code generation, simplifying the ongoing development and maintenance process. Such a development approach allows for quick adaptation to serve multiple Programs, although initial deployment is for an ALS module. OPIS core functionality will involve a Web-based annual solicitation of project and technology data directly from ALS Principal Investigators (PIs) through customized data collection forms. Data provided by PIs will be reviewed by a Technical Task Monitor (TTM) before posting the information to OPIS for ALS Community viewing via the Web. Such Annual Reports will be permanent, citable references within OPIS. OPIS core functionality will also include Project Home Sites, which will allow PIs to provide updated technology information to the Community in between Annual Report updates. All data will be stored in an object-oriented relational database, created in MySQL (registered trademark) and located on a secure server at NASA Ames Research Center (ARC). Upon launch, OPIS can be utilized by Managers to identify research and technology development (R&TD) gaps and to assess task performance. Analysts can employ OPIS to obtain the current, comprehensive, accurate information about advanced technologies that is required to perform trade studies of various life support system options.
ALS researchers and technology developers can use OPIS to achieve an improved understanding of NASA and ALS Program needs and to understand how other researchers and technology developers are addressing those needs. OPIS core functionality will launch for the ALS Program in October 2005. However, the system has been developed with the ability to evolve with Program needs. Because of open-source construction, software costs are minimized. Any functionality that is technologically feasible can be built into OPIS, and OPIS can expand, through module cloning and adaptation, to any level deemed useful to the Agency.
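The "database abstraction and automatic code generation" idea described for the OPIS framework can be illustrated with a minimal, hypothetical sketch: a table definition held as data, from which SQL (and, in the real system, PHP form code) is generated. Python is used here rather than OPIS's PHP, and the table and field names are invented; OPIS's actual schema is not given in the text.

```python
# Hypothetical schema-driven code generation, in the spirit of the LAMP
# framework described above. Table and field names are illustrative only.

SCHEMA = {"project": [("id", "INT PRIMARY KEY"),
                      ("title", "VARCHAR(255)"),
                      ("pi_name", "VARCHAR(128)")]}

def create_table_sql(table, fields):
    """Generate a CREATE TABLE statement from a declarative field list."""
    cols = ", ".join(f"{name} {ddl}" for name, ddl in fields)
    return f"CREATE TABLE {table} ({cols});"

print(create_table_sql("project", SCHEMA["project"]))
```

Keeping the schema as data is what lets such a framework regenerate both the database layer and the data-collection forms when the annual solicitation changes, instead of hand-editing each.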
JANIS: NEA JAva-based Nuclear Data Information System
NASA Astrophysics Data System (ADS)
Soppera, Nicolas; Bossant, Manuel; Cabellos, Oscar; Dupont, Emmeric; Díez, Carlos J.
2017-09-01
JANIS (JAva-based Nuclear Data Information System) software is developed by the OECD Nuclear Energy Agency (NEA) Data Bank to facilitate the visualization and manipulation of nuclear data, giving access to evaluated nuclear data libraries, such as ENDF, JEFF, JENDL, and TENDL, and also to experimental nuclear data (EXFOR) and bibliographical references (CINDA). It is available as a standalone Java program, downloadable and distributed on DVD, and also as a web application on the NEA website. One of the main new features in JANIS is the scripting capability via the command line, which notably automates plot generation and permits automatic extraction of data from the JANIS database. Recent NEA software developments rely on these JANIS features to access nuclear data; for example, the Nuclear Data Sensitivity Tool (NDaST) makes use of covariance data in BOXER and COVERX formats, which are retrieved from the JANIS database. New features added in this version of the JANIS software are described in this paper with some examples.
Sources of Cryogenic Data and Information
NASA Astrophysics Data System (ADS)
Mohling, R. A.; Hufferd, W. L.; Marquardt, E. D.
It is commonly known that cryogenic data, technology, and information are applied across many military, National Aeronautics and Space Administration (NASA), and civilian product lines. Before 1950, however, there was no centralized US source of cryogenic technology data. The Cryogenic Data Center of the National Bureau of Standards (NBS) maintained a database of cryogenic technical documents that served the national need well from the mid 1950s to the early 1980s. The database, maintained on a mainframe computer, was a highly specific bibliography of cryogenic literature and thermophysical properties that covered over 100 years of data. In 1983, however, the Cryogenic Data Center was discontinued when NBS's mission and scope were redefined. In 1998, NASA contracted with the Chemical Propulsion Information Agency (CPIA) and Technology Applications, Inc. (TAI) to reconstitute and update Cryogenic Data Center information and establish a self-sufficient entity to provide technical services for the cryogenic community. The Cryogenic Information Center (CIC) provided this service until 2004, when it was discontinued due to a lack of market interest. The CIC technical assets were distributed to NASA Marshall Space Flight Center and the National Institute of Standards and Technology. Plans are under way in 2006 for CPIA to launch an e-commerce cryogenic website to offer bibliography data with capability to download cryogenic documents.
Optical measurements of paintings and the creation of an artwork database for authenticity
Hwang, Seonhee; Song, Hyerin; Cho, Soon-Woo; Kim, Chang Eun; Kim, Chang-Seok; Kim, Kyujung
2017-01-01
Paintings have high cultural and commercial value and therefore need to be preserved. Many techniques have been applied to analyze the properties of paintings, including X-ray analysis and optical coherence tomography (OCT), and to enable their protection against forgeries. In this paper, we suggest a simple and accurate optical analysis system to protect paintings from counterfeiting, comprising fiber-optic reflectance spectroscopy (FORS) and line-laser-based topographic analysis. The system is designed to fully cover the whole area of a painting, regardless of its size, for accurate analysis. For additional assessment, a line-laser-based high-resolution OCT was utilized. For the experiments, experts created forgeries of three genuine paintings in different styles. After measuring the surface properties of the paintings, we observed that the results from the genuine works and the forgeries have distinctive characteristics. The forgeries could be distinguished with up to 76.5% accuracy from the RGB spectra obtained by FORS and with 100% accuracy by topographic analysis. Through several repetitions, the reliability of the system was confirmed. We verified that the measurement system is worthwhile for the conservation of valuable paintings. To store the surface information of the paintings at micron scale, we created a numerical database. Consequently, we secured databases of three famous Korean paintings for accurate authentication. PMID:28151981
Manchester visual query language
NASA Astrophysics Data System (ADS)
Oakley, John P.; Davis, Darryl N.; Shann, Richard T.
1993-04-01
We report a database language for visual retrieval which allows queries on image feature information that has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data actually obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator, which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high-scoring bins. The query could be directed towards one particular image or an entire image database; in the latter case, the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.
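The Hough-group operator described above can be sketched as a voting procedure whose output is a ranked list of high-scoring bins. The quantization of (theta, rho) space and the one-vote-per-point rule below are illustrative assumptions, not MVQL's actual implementation.

```python
import math
from collections import Counter

# Toy Hough voting: each point votes for every quantized (theta, rho) bin
# consistent with a line through it; high-scoring bins indicate groups of
# (near-)collinear points. Bin sizes here are arbitrary choices.

def hough_vote(points, n_theta=36, rho_step=1.0):
    votes = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(t, round(rho / rho_step))] += 1
    return votes.most_common()          # ranked list of ((t, rho_bin), score)

collinear = [(i, 2.0) for i in range(5)]    # five points on the line y = 2
ranking = hough_vote(collinear)             # top bins score 5 (all points)
```

In MVQL the analogous output set would then feed the next query line, directed at one image or at bins drawn from an entire image database.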
NASA Astrophysics Data System (ADS)
Susanto, Arif; Mulyono, Nur Budi
2018-02-01
The revision of the environmental management system standard to its latest version, ISO 14001:2015, may change the data and information needed for decision making and for achieving objectives across the organization. Information management is the organization's responsibility: it must ensure effectiveness and efficiency from the creation, storage, and processing of information through its distribution, in order to support operations and effective decision making in environmental performance management. The objective of this research was to set up an information management program, with supporting technology, at the PTFI Concentrating Division, so that it aligns with the organization's environmental management objectives under the ISO 14001:2015 standard. Materials and methods covered the technical aspects of information management, namely web-based application development using usage-centered design. The results showed that Single Sign On made it easy for users to interact further with the environmental management system. The web application was developed by creating an entity-relationship diagram (ERD) and performing information extraction focused on attributes, keys, and the determination of constraints; the ERD was derived from the relational schemas of several environmental-performance databases in the Concentrating Division.
Interfacing 1990 US Census TIGER map files with New S graphics software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzardi, M.; Mohr, M.S.; Merrill, D.W.
1992-07-01
In 1990, the United States Bureau of the Census released detailed geographic base files known as TIGER/Line (Topologically Integrated Geographic Encoding and Referencing) which contain detail on the physical features and census tract boundaries of every county in the United States. The TIGER database is attractive for two reasons. First, it is publicly available through the Bureau of the Census on tape or CD-ROM for a minimal fee. Second, it contains 24 billion characters of data which describe geographic features of interest to the Census Bureau such as coastlines, hydrography, transportation networks, political boundaries, etc. Unfortunately, the large TIGER database only provides raw alphanumeric data; no utility software, graphical or otherwise, is included. On the other hand, New S, a popular statistical software package by AT&T, has easily operated functions that permit advanced graphics in conjunction with data analysis. New S has the ability to plot contours, lines, segments, and points. However, of special interest is the New S function map and its options. Using the map function, which requires polygons as input, census tracts can be quickly selected, plotted, shaded, etc. New S graphics combined with the TIGER database has obvious potential. This paper reports on our efforts to use the TIGER map files with New S, especially to construct census tract maps of counties. While census tract boundaries are inherently polygonal, they are not organized as such in the TIGER database. This conversion of the TIGER "line" format into New S "polygon/polyline" format is one facet of the work reported here. Also we discuss the selection and extraction of auxiliary geographic information from TIGER files for graphical display using New S.
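The "line"-to-"polygon/polyline" conversion described above amounts to chaining TIGER's unordered boundary segments into the ordered vertex rings a map/polygon routine needs. A minimal sketch, under the assumption that the segments form a single closed ring, is:

```python
# Toy segment-chaining step for turning unordered boundary segments into an
# ordered polygon ring, as needed by a polygon-plotting function. This is an
# illustrative sketch, not the authors' actual conversion code.

def chain_segments(segments):
    segs = [list(s) for s in segments]
    ring = list(segs.pop(0))              # start from an arbitrary segment
    while segs:
        for i, (a, b) in enumerate(segs):
            if a == ring[-1]:             # segment continues the ring forward
                ring.append(b); segs.pop(i); break
            if b == ring[-1]:             # segment stored in reverse order
                ring.append(a); segs.pop(i); break
        else:
            raise ValueError("segments do not form a single connected ring")
    return ring

square = [((0, 0), (1, 0)), ((1, 1), (0, 1)), ((1, 0), (1, 1)), ((0, 1), (0, 0))]
print(chain_segments(square))  # → [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
```

Real census tracts complicate this with shared boundaries, holes, and multi-part tracts, but the core operation of matching segment endpoints is the same.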
Specdata: Automated Analysis Software for Broadband Spectra
NASA Astrophysics Data System (ADS)
Oliveira, Jasmine N.; Martin-Drumel, Marie-Aline; McCarthy, Michael C.
2017-06-01
With the advancement of chirped-pulse techniques, broadband rotational spectra with a few tens to several hundred GHz of spectral coverage are now routinely recorded. When studying multi-component mixtures that might result, for example, from the use of an electrical discharge, lines of new chemical species are often obscured by those of known compounds, and analysis can be laborious. To address this issue, we have developed SPECdata, an open-source, interactive tool designed to simplify and greatly accelerate spectral analysis and discovery. Our software tool combines both automated and manual components that free users from computation while giving them considerable flexibility to assign, manipulate, interpret, and export their analysis. The automated, and key, component of the new software is a database query system that rapidly assigns transitions of known species in an experimental spectrum. For each experiment, the software identifies spectral features and subsequently assigns them to known molecules within an in-house database (Pickett .cat files, lists of frequencies...), or to those catalogued in Splatalogue (using automatic on-line queries). With suggested assignments, control is then handed over to the user, who can choose to accept, decline, or add additional species. Data visualization, statistical information, and interactive widgets assist the user in making decisions about their data. SPECdata has several other useful features intended to improve the user experience. Exporting a full report of the analysis, or a peak file in which assigned lines are removed, are among several options. A user may also save their progress to continue at another time. Additional features of SPECdata help the user maintain and expand their database for future use. A user-friendly interface allows one to search, upload, edit, or update catalog or experiment entries.
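The automated assignment step described above can be sketched as matching each observed peak to the nearest catalog frequency within a tolerance, leaving unmatched peaks as candidate new lines. The species names, frequencies, and 0.5 MHz tolerance below are invented for illustration; they are not SPECdata's actual catalog or defaults.

```python
# Toy peak-to-catalog assignment, in the spirit of the database query system
# described above. Catalog contents and tolerance are illustrative only.

def assign_peaks(peaks_mhz, catalog, tol=0.5):
    """catalog: {species: [rest frequencies in MHz]}.
    Returns {peak: species or None}; None marks a candidate new line."""
    assignments = {}
    for peak in peaks_mhz:
        best = None
        for species, freqs in catalog.items():
            for f in freqs:
                d = abs(peak - f)
                if d <= tol and (best is None or d < best[1]):
                    best = (species, d)
        assignments[peak] = best[0] if best else None
    return assignments

catalog = {"HC3N": [9098.3, 18196.2], "OCS": [12162.9]}
result = assign_peaks([9098.5, 15000.0, 12163.0], catalog)
```

In the real tool the unassigned (`None`) peaks are exactly what survives into the exported peak file for follow-up as potential new species.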
POLARIS: Helping Managers Get Answers Fast!
NASA Technical Reports Server (NTRS)
Corcoran, Patricia M.; Webster, Jeffery
2007-01-01
This viewgraph presentation reviews the Project Online Library and Resource Information System (POLARIS). It is a NASA-wide, web-based system providing access to information related to Program and Project Management. It will provide a one-stop shop for access to: a searchable, sortable database of all requirements for all product lines; project life cycle diagrams with reviews; project review definitions with products; review information from NPR 7123.1, NASA Systems Engineering Processes and Requirements; templates and examples of products; project standard WBSs with dictionaries and requirements for implementation and approval; information from NASA's Metadata Manager (MdM) on attributes of Missions, Themes, Programs & Projects; the NPR 7120.5 waiver form and instructions; and much more. The presentation reviews the plans and timelines for future revisions and modifications.
NASA Astrophysics Data System (ADS)
Ren, Tao; Modest, Michael F.; Fateev, Alexander; Clausen, Sønnik
2015-01-01
In this study, we present an inverse calculation model based on the Levenberg-Marquardt optimization method to reconstruct temperature and species concentration from measured line-of-sight spectral transmissivity data for homogeneous gaseous media. The high temperature gas property database HITEMP 2010 (Rothman et al. (2010) [1]), which contains line-by-line (LBL) information for several combustion gas species, such as CO2 and H2O, was used to predict gas spectral transmissivities. The model was validated by retrieving temperatures and species concentrations from experimental CO2 and H2O transmissivity measurements. Optimal wavenumber ranges for CO2 and H2O transmissivity measured across a wide range of temperatures and concentrations were determined according to the performance of inverse calculations. Results indicate that the inverse radiation model shows good feasibility for measurements of temperature and gas concentration.
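The retrieval described above can be sketched with a hand-rolled Levenberg-Marquardt loop on a toy forward model: temperature T and mole fraction x are recovered from simulated line-of-sight transmissivities. The four-"line" model and all constants below are illustrative stand-ins for the authors' HITEMP-based line-by-line calculations, not their actual model.

```python
import math

# Toy LM retrieval of (T, x) from transmissivities tau_i = exp(-x*s0*e^(-E/T)*phi).
# Line parameters (s0, E, phi) are invented; real retrievals evaluate the
# forward model from an LBL database such as HITEMP 2010.

LINES = [(10.0, 500.0, 0.8), (10.0, 2500.0, 0.5),
         (8.0, 1200.0, 0.6), (12.0, 3000.0, 0.9)]

def model(T, x):
    return [math.exp(-x * s0 * math.exp(-E / T) * phi) for s0, E, phi in LINES]

def residuals(theta, data):
    return [m - d for m, d in zip(model(*theta), data)]

def levenberg_marquardt(data, theta, lam=1e-3, steps=200):
    theta = list(theta)
    for _ in range(steps):
        r = residuals(theta, data)
        cols = []                          # forward-difference Jacobian columns
        for j in range(2):
            h = 1e-6 * max(1.0, abs(theta[j]))
            tp = list(theta); tp[j] += h
            rp = residuals(tp, data)
            cols.append([(rp[i] - r[i]) / h for i in range(len(r))])
        # damped normal equations (J^T J + lam*diag) delta = -J^T r, solved 2x2
        A = [[sum(cols[a][i] * cols[b][i] for i in range(len(r)))
              for b in range(2)] for a in range(2)]
        g = [sum(cols[a][i] * r[i] for i in range(len(r))) for a in range(2)]
        A[0][0] *= 1.0 + lam; A[1][1] *= 1.0 + lam
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        d0 = (-g[0] * A[1][1] + g[1] * A[0][1]) / det
        d1 = (-g[1] * A[0][0] + g[0] * A[1][0]) / det
        trial = [theta[0] + d0, theta[1] + d1]
        if sum(v * v for v in residuals(trial, data)) < sum(v * v for v in r):
            theta, lam = trial, lam / 2.0  # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, increase damping
    return theta

data = model(1500.0, 0.2)                  # synthetic "measured" transmissivities
T_fit, x_fit = levenberg_marquardt(data, [1000.0, 0.1])
```

Because the lines have different lower-state energies E, the ratios of optical depths constrain T separately from x, which is what makes the two-parameter inversion well posed.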
Powell, Robert E.
2001-01-01
This data set maps and describes the geology of the Porcupine Wash 7.5 minute quadrangle, Riverside County, southern California. The quadrangle, situated in Joshua Tree National Park in the eastern Transverse Ranges physiographic and structural province, encompasses parts of the Hexie Mountains, Cottonwood Mountains, northern Eagle Mountains, and south flank of Pinto Basin. It is underlain by a basement terrane comprising Proterozoic metamorphic rocks, Mesozoic plutonic rocks, and Mesozoic and Mesozoic or Cenozoic hypabyssal dikes. The basement terrane is capped by a widespread Tertiary erosion surface preserved in remnants in the Eagle and Cottonwood Mountains and buried beneath Cenozoic deposits in Pinto Basin. Locally, Miocene basalt overlies the erosion surface. A sequence of at least three Quaternary pediments is planed into the north piedmont of the Eagle and Hexie Mountains, each in turn overlain by successively younger residual and alluvial deposits. The Tertiary erosion surface is deformed and broken by north-northwest-trending, high-angle, dip-slip faults and an east-west trending system of high-angle dip- and left-slip faults. East-west trending faults are younger than and perhaps in part coeval with faults of the northwest-trending set. The Porcupine Wash database was created using ARCVIEW and ARC/INFO, which are geographical information system (GIS) software products of Environmental Systems Research Institute (ESRI). The database consists of the following items: (1) a map coverage showing faults and geologic contacts and units, (2) a separate coverage showing dikes, (3) a coverage showing structural data, (4) a scanned topographic base at a scale of 1:24,000, and (5) attribute tables for geologic units (polygons and regions), contacts (arcs), and site-specific data (points).
The database, accompanied by a pamphlet file and this metadata file, also includes the following graphic and text products: (1) A portable document file (.pdf) containing a navigable graphic of the geologic map on a 1:24,000 topographic base. The map is accompanied by a marginal explanation consisting of a Description of Map and Database Units (DMU), a Correlation of Map and Database Units (CMU), and a key to point- and line-symbols. (2) Separate .pdf files of the DMU and CMU, individually. (3) A PostScript graphic file containing the geologic map on a 1:24,000 topographic base accompanied by the marginal explanation. (4) A pamphlet that describes the database and how to access it. Within the database, geologic contacts, faults, and dikes are represented as lines (arcs), geologic units as polygons and regions, and site-specific data as points. Polygon, arc, and point attribute tables (.pat, .aat, and .pat, respectively) uniquely identify each geologic datum and link it to other tables (.rel) that provide more detailed geologic information.
EnviroNET: On-line information for LDEF
NASA Technical Reports Server (NTRS)
Lauriente, Michael
1993-01-01
EnviroNET is an on-line, free-form database intended to provide a centralized repository for a wide range of technical information on environmentally induced interactions of use to Space Shuttle customers and spacecraft designers. It provides a user-friendly, menu-driven format on networks that are connected globally and is available twenty-four hours a day - every day. The information, updated regularly, includes expository text, tabular numerical data, charts and graphs, and models. The system pools space data collected over the years by NASA, USAF, other government research facilities, industry, universities, and the European Space Agency. The models accept parameter input from the user, then calculate and display the derived values corresponding to that input. In addition to the archive, interactive graphics programs are also available on space debris, the neutral atmosphere, radiation, magnetic fields, and the ionosphere. A user-friendly, informative interface is standard for all the models and includes a pop-up help window with information on inputs, outputs, and caveats. The system will eventually simplify mission analysis with analytical tools and deliver solutions for computationally intense graphical applications to do 'What if...' scenarios. A proposed plan for developing a repository of information from the Long Duration Exposure Facility (LDEF) for a user group is presented.
Computerized engineering logic for procurement and dedication processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tulay, M.P.
1996-12-31
This paper summarizes the work performed in designing the system, especially the on-line calculation of expected performance, and gives some significant results. In an attempt to better meet the needs of operations and maintenance organizations, many nuclear utility procurement engineering groups have simplified their procedures, developed on-line tools for performing the specification of replacement items, and developed relational databases containing part-level information necessary to automate the procurement process. Although these improvements have helped to reduce the engineering necessary to properly specify and accept/dedicate items for nuclear safety-related applications, a number of utilities have recognized that additional long-term savings can be realized by integrating a computerized logic to assist technical procurement engineering personnel.
Mirel, Barbara; Ackerman, Mark S; Kerber, Kevin; Klinkman, Michael
2006-01-01
Clinical care management promises to help diminish the major health problem of depression. To realize this promise, front-line clinicians must know which care management interventions are best for which patients and act accordingly. Unfortunately, the detailed intervention data required for such differentiated assessments are missing in most clinical information systems (CIS). To determine front-line clinicians' needs for these data and to identify the data that CIS should keep, we conducted an 18-month ethnographic study and discourse analysis of telehealth depression care management. Results show care managers need data-based evidence to choose best options, and discourse analysis suggests some personalized interventions that CIS should and can feasibly capture for evidence.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
FEDIX is an on-line information service that links the higher education community and the federal government to facilitate research, education, and services. The system provides accurate and timely federal agency information to colleges, universities, and other research organizations. There are no registration fees and no access charges for using FEDIX. Agencies participating in the FEDIX system include: Department of Energy (DOE), Federal Aviation Administration (FAA), National Aeronautics and Space Administration (NASA), Office of Naval Research (ONR), Air Force Office of Scientific Research (AFOSR), National Science Foundation (NSF), National Security Agency (NSA), Department of Commerce (DOC), Department of Education (DOEd), Department of Housing and Urban Development (HUD), and Agency for International Development (AID). Additional government agencies are expected to join FEDIX in the near future. This guide is intended to help users access and utilize the FEDIX system. Because the system is frequently updated, however, some menus and tables used as examples in this text may not exactly match those displayed on the live system.
41 CFR 60-1.12 - Record retention.
Code of Federal Regulations, 2012 CFR
2012-07-01
... individual for a particular position, such as on-line resumes or internal resume databases, records... recordkeeping with respect to internal resume databases, the contractor must maintain a record of each resume added to the database, a record of the date each resume was added to the database, the position for...
41 CFR 60-1.12 - Record retention.
Code of Federal Regulations, 2010 CFR
2010-07-01
... individual for a particular position, such as on-line resumes or internal resume databases, records... recordkeeping with respect to internal resume databases, the contractor must maintain a record of each resume added to the database, a record of the date each resume was added to the database, the position for...
41 CFR 60-1.12 - Record retention.
Code of Federal Regulations, 2011 CFR
2011-07-01
... individual for a particular position, such as on-line resumes or internal resume databases, records... recordkeeping with respect to internal resume databases, the contractor must maintain a record of each resume added to the database, a record of the date each resume was added to the database, the position for...
41 CFR 60-1.12 - Record retention.
Code of Federal Regulations, 2013 CFR
2013-07-01
... individual for a particular position, such as on-line resumes or internal resume databases, records... recordkeeping with respect to internal resume databases, the contractor must maintain a record of each resume added to the database, a record of the date each resume was added to the database, the position for...
A broadband multimedia TeleLearning system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ruiping; Karmouch, A.
1996-12-31
In this paper we discuss a broadband multimedia TeleLearning system under development in the Multimedia Information Research Laboratory at the University of Ottawa. The system aims at providing a seamless environment for TeleLearning using the latest telecommunication and multimedia information processing technology. It basically consists of a media production center, a courseware author site, a courseware database, a courseware user site, and an on-line facilitator site. All these components are distributed over an ATM network and work together to offer a multimedia interactive courseware service. An MHEG-based model is exploited in designing the system architecture to achieve real-time, interactive, and reusable information interchange across heterogeneous platforms. The system architecture, courseware processing strategies, and courseware document models are presented.
Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA; Hart, Michelle L [Richland, WA; Hatley, Wes L [Kennewick, WA
2008-05-13
A method of displaying correlations among information objects comprises receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
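A minimal sketch of the idea in this abstract: deriving nodes (distinct data values) and links (co-occurrence correlations) from a tabular query result set. The field names, values, and dict-based representation are illustrative assumptions, not the patented implementation.

```python
# Sketch: build nodes and weighted links from a query result set.
# Each row is a dict mapping field name -> value; fields/values are invented.
from collections import Counter
from itertools import combinations

def build_graph(rows):
    """Return (nodes, links): nodes are (field, value) pairs; links count
    how often two values occur in the same record."""
    nodes = set()
    links = Counter()
    for row in rows:
        pairs = sorted(row.items())
        nodes.update(pairs)
        for a, b in combinations(pairs, 2):
            links[(a, b)] += 1  # co-occurrence in one record = one correlation
    return nodes, links

rows = [
    {"author": "Risch", "topic": "visualization"},
    {"author": "Risch", "topic": "databases"},
    {"author": "Dowson", "topic": "visualization"},
]
nodes, links = build_graph(rows)
```

A renderer would then place fields on planes or lines, draw each node on its field, and draw each link with a thickness proportional to its count.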
Risch, John S [Kennewick, WA; Dowson, Scott T [West Richland, WA
2012-03-06
A method of displaying correlations among information objects includes receiving a query against a database; obtaining a query result set; and generating a visualization representing the components of the result set, the visualization including one of a plane and line to represent a data field, nodes representing data values, and links showing correlations among fields and values. Other visualization methods and apparatus are disclosed.
HASH: the Hong Kong/AAO/Strasbourg Hα planetary nebula database
NASA Astrophysics Data System (ADS)
Parker, Quentin A.; Bojičić, Ivan S.; Frew, David J.
2016-07-01
By incorporating our major recent discoveries with re-measured and verified contents of existing catalogues we provide, for the first time, an accessible, reliable, on-line SQL database for essential, up-to-date information for all known Galactic planetary nebulae (PNe). We have attempted to: i) reliably remove PN mimics/false IDs that have biased previous studies and ii) provide accurate positions, sizes, morphologies, multi-wavelength imagery and spectroscopy. We also provide a link to CDS/Vizier for the archival history of each object and other valuable links to external data. With the HASH interface, users can sift, select, browse, collate, investigate, download and visualise the entire currently known Galactic PNe diversity. HASH provides the community with the most complete and reliable data with which to undertake new science.
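The kind of SQL selection such a database enables can be sketched as follows. The HASH schema is not published in this abstract, so the table, columns, and sample objects below are invented for illustration only.

```python
# Hypothetical sketch of an SQL query against a PNe catalogue; the real
# HASH schema differs. Uses an in-memory SQLite database for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pne (
    name TEXT, glon REAL, glat REAL,
    major_axis_arcsec REAL, morphology TEXT, status TEXT)""")
conn.executemany("INSERT INTO pne VALUES (?,?,?,?,?,?)", [
    ("PN G000.1+17.2", 0.1, 17.2,   8.0, "Bipolar",   "True"),
    ("PN G002.4-03.7", 2.4, -3.7, 120.0, "Round",     "Likely"),
    ("Mimic J1730",    5.0,  1.0,  15.0, "Irregular", "Mimic"),
])
# Select confirmed, compact PNe only; catalogued mimics are filtered out.
rows = conn.execute(
    "SELECT name FROM pne WHERE status = 'True' AND major_axis_arcsec < 30"
).fetchall()
```

The status filter mirrors the abstract's point i): mimics stay in the catalogue for provenance but are excluded from science selections.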
Computer network access to scientific information systems for minority universities
NASA Astrophysics Data System (ADS)
Thomas, Valerie L.; Wakim, Nagi T.
1993-08-01
The evolution of computer networking technology has led to the establishment of a massive networking infrastructure which interconnects various types of computing resources at many government, academic, and corporate institutions. A large segment of this infrastructure has been developed to facilitate information exchange and resource sharing within the scientific community. The National Aeronautics and Space Administration (NASA) supports both the development and the application of computer networks which provide its community with access to many valuable multi-disciplinary scientific information systems and on-line databases. Recognizing the need to extend the benefits of this advanced networking technology to the under-represented community, the National Space Science Data Center (NSSDC) in the Space Data and Computing Division at the Goddard Space Flight Center has developed the Minority University-Space Interdisciplinary Network (MU-SPIN) Program: a major networking and education initiative for Historically Black Colleges and Universities (HBCUs) and Minority Universities (MUs). In this paper, we will briefly explain the various components of the MU-SPIN Program while highlighting how, by providing access to scientific information systems and on-line data, it promotes a higher level of collaboration among faculty and students and NASA scientists.
Geography: The TIGER/Line Files are feature classes and related database files that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Census blocks are statistical areas bounded on all sides by visible features, such as streets, roads, streams, and railroad tracks, and/or by non-visible boundaries such as city, town, township, and county limits, and short line-of-sight extensions of streets and roads. Census blocks are relatively small in area; for example, a block in a city bounded by streets. However, census blocks in remote areas are often large and irregular and may even be many square miles in area. A common misunderstanding is that data users think census blocks are used geographically to build all other census geographic areas; rather, all other census geographic areas are updated and then used as the primary constraints, along with roads and water features, to delineate the tabulation blocks. As a result, all 2010 Census blocks nest within every other 2010 Census geographic area, so that Census Bureau statistical data can be tabulated at the block level and aggregated up t
NASA Astrophysics Data System (ADS)
Zemek, Peter G.; Plowman, Steven V.
2010-04-01
Advances in hardware have miniaturized the emissions spectrometer and associated optics, rendering them easily deployed in the field. Such systems are also suitable for vehicle mounting, and can provide high quality data and concentration information in minutes. Advances in software have accompanied this hardware evolution, enabling the development of portable point-and-click OP-FTIR systems that weigh less than 16 lbs. These systems are ideal for first responders, military, law enforcement, forensics, and screening applications using optical remote sensing (ORS) methodologies. With canned methods and interchangeable detectors, the new generation of OP-FTIR technology is coupled to the latest forward reference-type model software to provide point-and-click technology. These software models have been established for some time. However, refined user-friendly models that use active, passive, and solar occultation methodologies now allow the user to quickly field-screen and quantify plumes, fence lines, and combustion incident scenarios at high temporal resolution. Synthetic background generation is now redundant as the models use highly accurate instrument line shape (ILS) convolutions and several other parameters, in conjunction with radiative transfer model databases, to model a single calibration spectrum to collected sample spectra. Data retrievals are performed directly on single beam spectra using non-linear classical least squares (NLCLS). Typically, the Hitran line database is used to generate the initial calibration spectrum contained within the software.
CHOmine: an integrated data warehouse for CHO systems biology and modeling.
Gerstl, Matthias P; Hanscho, Michael; Ruckerbauer, David E; Zanghellini, Jürgen; Borth, Nicole
2017-01-01
The last decade has seen a surge in published genome-scale information for Chinese hamster ovary (CHO) cells, which are the main production vehicles for therapeutic proteins. While a single access point is available at www.CHOgenome.org, the primary data is distributed over several databases at different institutions. Currently research is frequently hampered by a plethora of gene names and IDs that vary between published draft genomes and databases, making systems biology analyses cumbersome and elaborate. Here we present CHOmine, an integrative data warehouse connecting data from various databases and linking to others. Furthermore, we introduce CHOmodel, a web-based resource that provides access to recently published CHO cell line specific metabolic reconstructions. Both resources allow users to query CHO-relevant data and find interconnections between different types of data, and thus provide a simple, standardized entry point to the world of CHO systems biology. http://www.chogenome.org. © The Author(s) 2017. Published by Oxford University Press.
DSSTox: New On-line Resource for Publishing Structure-Standardized Toxicity Databases
Ann M Richard1, Jamie Burch2, ClarLynda Williams3
1 Nat. Health and Environ. Effects Res. Lab, US EPA, Res. Triangle Park, NC 27711; 2 EPA-NC Central Univ Student COOP, US EPA, Res. Tri...
Machine Learning and Decision Support in Critical Care
Johnson, Alistair E. W.; Ghassemi, Mohammad M.; Nemati, Shamim; Niehaus, Katherine E.; Clifton, David A.; Clifford, Gari D.
2016-01-01
Clinical data management systems typically provide caregiver teams with useful information, derived from large, sometimes highly heterogeneous, data sources that are often changing dynamically. Over the last decade there has been a significant surge in interest in using these data sources, from simply re-using the standard clinical databases for event prediction or decision support, to including dynamic and patient-specific information into clinical monitoring and prediction problems. However, in most cases, commercial clinical databases have been designed to document clinical activity for reporting, liability and billing reasons, rather than for developing new algorithms. With increasing excitement surrounding “secondary use of medical records” and “Big Data” analytics, it is important to understand the limitations of current databases and what needs to change in order to enter an era of “precision medicine.” This review article covers many of the issues involved in the collection and preprocessing of critical care data. The three challenges in critical care are considered: compartmentalization, corruption, and complexity. A range of applications addressing these issues are covered, including the modernization of static acuity scoring; on-line patient tracking; personalized prediction and risk assessment; artifact detection; state estimation; and incorporation of multimodal data sources such as genomic and free text data. PMID:27765959
MAPU: Max-Planck Unified database of organellar, cellular, tissue and body fluid proteomes.
Zhang, Yanling; Zhang, Yong; Adachi, Jun; Olsen, Jesper V; Shi, Rong; de Souza, Gustavo; Pasini, Erica; Foster, Leonard J; Macek, Boris; Zougman, Alexandre; Kumar, Chanchal; Wisniewski, Jacek R; Jun, Wang; Mann, Matthias
2007-01-01
Mass spectrometry (MS)-based proteomics has become a powerful technology to map the protein composition of organelles, cell types and tissues. In our department, a large-scale effort to map these proteomes is complemented by the Max-Planck Unified (MAPU) proteome database. MAPU contains several body fluid proteomes, including plasma, urine, and cerebrospinal fluid. Cell lines have been mapped to a depth of several thousand proteins and the red blood cell proteome has also been analyzed in depth. The liver proteome is represented with 3200 proteins. By employing high resolution MS and stringent validation criteria, false-positive identification rates in MAPU are lower than 1:1000. Thus MAPU datasets can serve as reference proteomes in biomarker discovery. MAPU contains the peptides identifying each protein, measured masses, scores and intensities and is freely available at http://www.mapuproteome.com using a clickable interface of cell or body parts. Proteome data can be queried across proteomes by protein name, accession number, sequence similarity, peptide sequence and annotation information. More than 4500 mouse and 2500 human proteins have already been identified in at least one proteome. Basic annotation information and links to other public databases are provided in MAPU and we plan to add further analysis tools.
The Human Transcript Database: A Catalogue of Full Length cDNA Inserts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bouck, John; McLeod, Michael; Worley, Kim
1999-09-10
The BCM Search Launcher provided improved access to web-based sequence analysis services during the granting period and beyond. The Search Launcher web site grouped analysis procedures by function and provided default parameters that gave reasonable search results for most applications. For instance, most queries were automatically masked for repeat sequences prior to sequence database searches to avoid spurious matches. In addition to web-based access and arrangements that made the functions easier to use, the BCM Search Launcher provided unique value-added applications like the BEAUTY sequence database search tool, which combined information about protein domains with sequence database search results to give an enhanced, more complete picture of the reliability and relative value of the information reported. This enhanced search tool made evaluating search results more straightforward and consistent. Some of the favorite features of the web site are the sequence utilities and the batch client functionality that allows processing of multiple samples from the command line interface. One measure of the success of the BCM Search Launcher is the number of sites that have adopted the models first developed on the site. The graphic display on the BLAST search from the NCBI web site is one such outgrowth, as is the display of protein domain search results within BLAST search results, and the design of the Biology Workbench application. The logs of usage and comments from users confirm the great utility of this resource.
Ethical management in the constitution of a European database for leukodystrophies rare diseases.
Duchange, Nathalie; Darquy, Sylviane; d'Audiffret, Diane; Callies, Ingrid; Lapointe, Anne-Sophie; Loeve, Boris; Boespflug-Tanguy, Odile; Moutel, Grégoire
2014-09-01
The EU LeukoTreat program aims to connect, enlarge and improve existing national databases for leukodystrophies (LDs) and other genetic diseases affecting the white matter of the brain. Ethical issues have been placed high on the agenda by pairing the participating LD expert research teams with experts in medical ethics and LD patient families and associations. The overarching goal is to apply core ethics principles to specific project needs and ensure patient rights and protection in research addressing the context of these rare diseases. This paper looks at how ethical issues were identified and handled at project management level when setting up an ethics committee. Through a work performed as a co-construction between health professionals, ethics experts, and patient representatives, we expose the major ethical issues identified. The committee acts as the forum for tackling specific issues tied to data sharing and patient participation: the thin line between care and research, the need for a charter establishing the commitments binding health professionals and the information items to be delivered. Ongoing feedback on the database, including delivering global results in a broad-audience format, emerged as a key recommendation. Information should be available to all patients in the partner countries developing the database and should be scaled to different patient profiles. This work led to a number of recommendations for ensuring transparency and optimizing the partnership between scientists and patients. Copyright © 2014 European Paediatric Neurology Society. Published by Elsevier Ltd. All rights reserved.
Robertson, Eden G; Wakefield, Claire E; Signorelli, Christina; Cohn, Richard J; Patenaude, Andrea; Foster, Claire; Pettit, Tristan; Fardell, Joanna E
2018-07-01
We conducted a systematic review to identify the strategies that have been recommended in the literature to facilitate shared decision-making regarding enrollment in pediatric oncology clinical trials. We searched seven databases for peer-reviewed literature published 1990-2017. Of 924 articles identified, 17 studies were eligible for the review. We assessed study quality using the 'Mixed-Methods Appraisal Tool'. We coded the results and discussions of papers line-by-line using nVivo software. We categorized strategies thematically. Five main themes emerged: 1) decision-making as a process, 2) individuality of the process, 3) information provision, 4) the role of communication, and 5) decision and psychosocial support. Families should have adequate time to make a decision. Healthcare professionals (HCPs) should elicit parents' and patients' preferences for level of information and decision involvement. Information should be clear and provided in multiple modalities. Articles also recommended providing training for healthcare professionals and access to psychosocial support for families. High-quality, individually tailored information, open communication and psychosocial support appear vital in supporting decision-making regarding enrollment in clinical trials. These data will usefully inform future decision-making interventions/tools to support families making clinical trial decisions. A solid evidence base for effective strategies which facilitate shared decision-making is needed. Copyright © 2018 Elsevier B.V. All rights reserved.
Large-scale feature searches of collections of medical imagery
NASA Astrophysics Data System (ADS)
Hedgcock, Marcus W.; Karshat, Walter B.; Levitt, Tod S.; Vosky, D. N.
1993-09-01
Large scale feature searches of accumulated collections of medical imagery are required for multiple purposes, including clinical studies, administrative planning, epidemiology, teaching, quality improvement, and research. To perform a feature search of large collections of medical imagery, one can either search text descriptors of the imagery in the collection (usually the interpretation), or (if the imagery is in digital format) the imagery itself. At our institution, text interpretations of medical imagery are all available in our VA Hospital Information System. These are downloaded daily into an off-line computer. The text descriptors of most medical imagery are usually formatted as free text, and so require a user friendly database search tool to make searches quick and easy for any user to design and execute. We are tailoring such a database search tool (Liveview), developed by one of the authors (Karshat). To further facilitate search construction, we are constructing (from our accumulated interpretation data) a dictionary of medical and radiological terms and synonyms. If the imagery database is digital, the imagery which the search discovers is easily retrieved from the computer archive. We describe our database search user interface, with examples, and compare the efficacy of computer assisted imagery searches from a clinical text database with manual searches. Our initial work on direct feature searches of digital medical imagery is outlined.
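The dictionary-of-synonyms idea described above can be sketched in a few lines: expand the query term through a synonym table, then scan the free-text interpretations. The synonym entries and sample reports below are invented for illustration and are not the actual Liveview tool or its dictionary.

```python
# Sketch of a synonym-aware free-text search over imagery interpretations.
# SYNONYMS and the sample reports are hypothetical.
SYNONYMS = {
    "pneumothorax": {"pneumothorax", "ptx", "collapsed lung"},
}

def search(reports, term):
    """Return indices of reports containing the term or any known synonym."""
    variants = SYNONYMS.get(term.lower(), {term.lower()})
    hits = []
    for i, text in enumerate(reports):
        low = text.lower()
        if any(v in low for v in variants):
            hits.append(i)
    return hits

reports = [
    "No acute disease.",
    "Small apical PTX on the left.",
    "Findings consistent with collapsed lung.",
]
```

Without the synonym expansion, a plain substring search for "pneumothorax" would miss both positive reports, which is the gap the dictionary is meant to close.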
Dynamic Speed Adaptation for Path Tracking Based on Curvature Information and Speed Limits †
Gámez Serna, Citlalli; Ruichek, Yassine
2017-01-01
A critical concern of autonomous vehicles is safety. Different approaches have tried to enhance driving safety to reduce the number of fatal crashes and severe injuries. As an example, Intelligent Speed Adaptation (ISA) systems warn the driver when the vehicle exceeds the recommended speed limit. However, these systems only take into account fixed speed limits without considering factors like road geometry. In this paper, we consider road curvature together with speed limits to automatically adjust the vehicle’s speed to the ideal one through our proposed Dynamic Speed Adaptation (DSA) method. Furthermore, ‘curve analysis extraction’ and ‘speed limits database creation’ are also part of our contribution. An algorithm that analyzes GPS information off-line identifies high-curvature segments and estimates the speed for each curve. The speed limit database contains information about the different speed limit zones for each traveled path. Our DSA senses speed limits and curves of the road using GPS information and ensures smooth speed transitions between current and ideal speeds. Through experimental simulations with different control algorithms on real and simulated datasets, we prove that our method is able to significantly reduce lateral errors on sharp curves, to respect speed limits and consequently increase safety and comfort for the passenger. PMID:28613251
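The curve-speed idea behind such a method can be sketched as follows: estimate curvature from three consecutive GPS points via the circumradius, derive an ideal curve speed from a lateral-acceleration limit, and clamp it by the zone speed limit. The 2 m/s² comfort limit and the sample points are assumptions, not the paper's actual parameters.

```python
# Sketch: curvature-limited speed adaptation from three path points.
import math

def curvature(p1, p2, p3):
    """Curvature (1/m) of the circle through three (x, y) points in metres:
    kappa = 4*Area / (a*b*c), where a, b, c are the side lengths."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = math.dist(p1, p2); b = math.dist(p2, p3); c = math.dist(p1, p3)
    area2 = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))  # 2*Area
    if area2 == 0:
        return 0.0                      # collinear points: straight segment
    return 2.0 * area2 / (a * b * c)

def adapted_speed(p1, p2, p3, limit_mps, a_lat_max=2.0):
    """Ideal speed v = sqrt(a_lat_max / kappa), never above the speed limit."""
    k = curvature(p1, p2, p3)
    if k == 0:
        return limit_mps
    return min(limit_mps, math.sqrt(a_lat_max / k))
```

On a straight segment the function simply returns the zone limit; on a sharp curve the lateral-acceleration bound dominates and the speed is reduced accordingly.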
The new on-line Czech Food Composition Database.
Machackova, Marie; Holasova, Marie; Maskova, Eva
2013-10-01
The new on-line Czech Food Composition Database (FCDB) was launched at http://www.czfcdb.cz in December 2010 as the main freely available channel for dissemination of Czech food composition data. The application is based on a compiled FCDB documented according to the EuroFIR standardised procedure for full value documentation and indexing of foods by the LanguaL™ Thesaurus. A content management system was implemented for administration of the website and for performing data export (comma-separated values or EuroFIR XML transport package formats) by a compiler. References are provided for each published value, with links to freely accessible on-line sources of data (e.g. full texts, the EuroFIR Document Repository, on-line national FCDBs). LanguaL™ codes are displayed within each food record as searchable keywords of the database. A photo (or a photo gallery) is used as a visual descriptor of a food item. The application is searchable by food, component, food group, alphabet and a multi-field advanced search. Copyright © 2013 Elsevier Ltd. All rights reserved.
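A compiler-side comma-separated-values export of the kind mentioned above, where each published value carries its reference, can be sketched like this. The component names, values, and reference string are invented examples, not actual Czech FCDB records.

```python
# Sketch: export food composition values with per-value references as CSV.
# The records below are hypothetical.
import csv
import io

records = [
    {"food": "Rye bread", "component": "Protein", "value": 6.6,
     "unit": "g/100 g", "reference": "Czech FCDB analytical report 2010"},
    {"food": "Rye bread", "component": "Fat", "value": 1.2,
     "unit": "g/100 g", "reference": "Czech FCDB analytical report 2010"},
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["food", "component", "value", "unit", "reference"])
writer.writeheader()
writer.writerows(records)   # one row per (food, component) value
csv_text = buf.getvalue()
```

A real EuroFIR XML export would serialize the same records into the transport package schema instead; the key design point is that the reference travels with every value.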
Multispectrum analysis of the oxygen A-band.
Drouin, Brian J; Benner, D Chris; Brown, Linda R; Cich, Matthew J; Crawford, Timothy J; Devi, V Malathy; Guillaume, Alexander; Hodges, Joseph T; Mlawer, Eli J; Robichaud, David J; Oyafuso, Fabiano; Payne, Vivienne H; Sung, Keeyoon; Wishnow, Edward H; Yu, Shanshan
2017-01-01
Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision induced absorption (CIA). Models of each of these phenomena exist, however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) allow complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. The extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database.
Multispectrum analysis of the oxygen A-band
Drouin, Brian J.; Benner, D. Chris; Brown, Linda R.; Cich, Matthew J.; Crawford, Timothy J.; Devi, V. Malathy; Guillaume, Alexander; Hodges, Joseph T.; Mlawer, Eli J.; Robichaud, David J.; Oyafuso, Fabiano; Payne, Vivienne H.; Sung, Keeyoon; Wishnow, Edward H.; Yu, Shanshan
2016-01-01
Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision induced absorption (CIA). Models of each of these phenomena exist, however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) allow complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. The extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database. PMID:27840454
Multispectrum analysis of the oxygen A-band
Drouin, Brian J.; Benner, D. Chris; Brown, Linda R.; ...
2016-04-11
Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision induced absorption (CIA). Models of each of these phenomena exist, however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) allow complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. As a result, the extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database.
Multispectrum analysis of the oxygen A-band
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drouin, Brian J.; Benner, D. Chris; Brown, Linda R.
Retrievals of atmospheric composition from near-infrared measurements require measurements of airmass to better than the desired precision of the composition. The oxygen bands are obvious choices to quantify airmass since the mixing ratio of oxygen is fixed over the full range of atmospheric conditions. The OCO-2 mission is currently retrieving carbon dioxide concentration using the oxygen A-band for airmass normalization. The 0.25% accuracy desired for the carbon dioxide concentration has pushed the required state-of-the-art for oxygen spectroscopy. To measure O2 A-band cross-sections with such accuracy through the full range of atmospheric pressure requires a sophisticated line-shape model (Rautian or Speed-Dependent Voigt) with line mixing (LM) and collision induced absorption (CIA). Models of each of these phenomena exist, however, this work presents an integrated self-consistent model developed to ensure the best accuracy. It is also important to consider multiple sources of spectroscopic data for such a study in order to improve the dynamic range of the model and to minimize effects of instrumentation and associated systematic errors. The techniques of Fourier Transform Spectroscopy (FTS) and Cavity Ring-Down Spectroscopy (CRDS) allow complementary information for such an analysis. We utilize multispectrum fitting software to generate a comprehensive new database with improved accuracy based on these datasets. As a result, the extensive information will be made available as a multi-dimensional cross-section (ABSCO) table and the parameterization will be offered for inclusion in the HITRANonline database.
HITRAN2016 Database Part II: Overview of the Spectroscopic Parameters of the Trace Gases
NASA Astrophysics Data System (ADS)
Tan, Yan; Gordon, Iouli E.; Rothman, Laurence S.; Kochanov, Roman V.; Hill, Christian
2017-06-01
The 2016 edition of the HITRAN database is now available. This new edition takes advantage of the new structure and can be accessed through HITRANonline (www.hitran.org). The line-by-line lists for almost all of the trace atmospheric species were updated in comparison with the previous edition, HITRAN2012. These extended updates cover not only revisions of a few transitions for certain molecules but also complete replacements of whole line lists, as well as the introduction of new spectroscopic parameters for non-Voigt line shapes. The new line lists for NH3, HNO3, OCS, HCN, CH3Cl, C2H2, C2H6, PH3, C2H4, CH3CN, CF4, C4H2, and SO3 feature substantial expansion of the spectral and dynamic ranges in addition to improved accuracy of the parameters for already existing lines. A semi-empirical procedure was developed to update the air-broadening and self-broadening coefficients of N2O, SO2, NH3, CH3Cl, H2S, and HO2. We draw particular attention to flaws in the commonly used expression n_air = 0.79 n_N2 + 0.21 n_O2 for determining the air-broadening temperature-dependence exponent in the power law from the exponents for nitrogen and oxygen broadening; a more meaningful approach will be presented. Semi-empirical line widths, pressure shifts, and temperature-dependence exponents of CO, NH3, HF, HCl, OCS, C2H2, and SO2 perturbed by H2, He, and CO2 have been added to the database based on the algorithm described in Wilzewski et al. New spectroscopic parameters for the HT profile were implemented into the database for the hydrogen molecule. The HITRAN database is supported by NASA AURA program grant NNX14AI55G and NASA PDART grant NNX16AG51G. I. E. Gordon, L. S. Rothman, et al., J Quant Spectrosc Radiat Transf 2017; submitted. C. Hill et al., J Quant Spectrosc Radiat Transf 2013;130:51-61. J. S. Wilzewski et al., J Quant Spectrosc Radiat Transf 2016;168:193-206. P. Wcisło et al., J Quant Spectrosc Radiat Transf 2016;177:75-91.
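The power law that the flagged mixing rule feeds into can be sketched as follows. This is an illustrative snippet, not code from the HITRAN project; the function names and sample coefficient values are hypothetical, while the 0.79/0.21 mixing rule comes from the abstract and the 296 K reference temperature is the standard HITRAN convention:

```python
# The commonly used (and, per the abstract, flawed) mixing rule for the
# air-broadening temperature-dependence exponent n_air:
def n_air_simple(n_N2: float, n_O2: float) -> float:
    return 0.79 * n_N2 + 0.21 * n_O2

# Temperature scaling of the pressure-broadened half-width via the power law
# gamma(T) = gamma(T_ref) * (T_ref / T) ** n_air, with T_ref = 296 K in HITRAN.
def gamma_at_T(gamma_ref: float, T: float, n_air: float, T_ref: float = 296.0) -> float:
    return gamma_ref * (T_ref / T) ** n_air

# Hypothetical sample values: exponents for N2- and O2-broadening of some line.
n_air = n_air_simple(0.75, 0.70)       # 0.79*0.75 + 0.21*0.70 = 0.7395
gamma_220 = gamma_at_T(0.07, 220.0, n_air)  # half-width grows at colder T
```

The sketch only shows why the exponent matters: any error in n_air propagates directly into the temperature-scaled half-width.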
NASA Astrophysics Data System (ADS)
Chursin, Alexei A.; Jacquinet-Husson, N.; Lefevre, G.; Scott, Noelle A.; Chedin, Alain
2000-01-01
This paper presents the recently developed information-dissemination facilities, e.g. the WWW server of GEISA and the MS-DOS, Windows 95/NT, and UNIX software packages, associated with the 1997 version of the GEISA (Gestion et Etude des Informations Spectroscopiques Atmosphériques; in translation: Management and Study of Atmospheric Spectroscopic Information) infrared spectroscopic databank developed at LMD (Laboratoire de Météorologie Dynamique, France). The GEISA-97 individual-lines file covers 42 molecules (96 isotopic species) and contains 1,346,266 entries between 0 and 22,656 cm-1. GEISA-97 also has a catalog of cross-sections at different temperatures and pressures for species (such as chlorofluorocarbons) with complex spectra. The current version of the GEISA-97 cross-section databank contains 4,716,743 entries related to 23 molecules between 555 and 1700 cm-1.
Three-dimensional object recognition based on planar images
NASA Astrophysics Data System (ADS)
Mital, Dinesh P.; Teoh, Eam-Khwang; Au, K. C.; Chng, E. K.
1993-01-01
This paper presents the development and realization of a robotic vision system for the recognition of 3-dimensional (3-D) objects. The system can recognize a single object, from among a group of known regular convex polyhedral objects, that is constrained to lie on a calibrated flat platform. The approach adopted comprises a series of image processing operations on a single 2-dimensional (2-D) intensity image to derive an image line drawing. Subsequently, a feature-matching technique is employed to determine 2-D spatial correspondences of the image line drawing with the models in the database. Besides its identification ability, the system can also provide important position and orientation information about the recognized object. The system was implemented on an IBM PC/AT machine executing at 8 MHz without the 80287 maths co-processor. In our overall performance evaluation, based on a test of 600 recognition cycles, the system demonstrated an accuracy above 80% with recognition time well within 10 seconds. The recognition time is, however, indirectly dependent on the number of models in the database. The reliability of the system is also affected by illumination conditions, which must be clinically controlled, as in any industrial robotic vision system.
Genome-Wide SNP Genotyping to Infer the Effects on Gene Functions in Tomato
Hirakawa, Hideki; Shirasawa, Kenta; Ohyama, Akio; Fukuoka, Hiroyuki; Aoki, Koh; Rothan, Christophe; Sato, Shusei; Isobe, Sachiko; Tabata, Satoshi
2013-01-01
The genotype data of 7054 single nucleotide polymorphism (SNP) loci in 40 tomato lines, including inbred lines, F1 hybrids, and wild relatives, were collected using Illumina's Infinium and GoldenGate assay platforms, the latter of which was utilized in our previous study. The dendrogram based on the genotype data corresponded well to the breeding types of tomato and wild relatives. The SNPs were classified into six categories according to their positions in the genes predicted on the tomato genome sequence. The genes with SNPs were annotated by homology searches against the nucleotide and protein databases, as well as by domain searches, and they were classified into the functional categories defined by the NCBI's eukaryotic orthologous groups (KOG). To infer the SNPs' effects on the gene functions, the three-dimensional structures of the 843 proteins that were encoded by the genes with SNPs causing missense mutations were constructed by homology modelling, and 200 of these proteins were considered to carry non-synonymous amino acid substitutions in the predicted functional sites. The SNP information obtained in this study is available at the Kazusa Tomato Genomics Database (http://plant1.kazusa.or.jp/tomato/). PMID:23482505
Modeling and Databases for Teaching Petrology
NASA Astrophysics Data System (ADS)
Asher, P.; Dutrow, B.
2003-12-01
With the widespread availability of high-speed computers with massive storage and ready transport capability for large amounts of data, computational and petrologic modeling and the use of databases provide new tools with which to teach petrology. Modeling can be used to gain insights into a system, predict system behavior, describe a system's processes, compare with a natural system, or simply to illustrate. These aspects result from data-driven or empirical, analytical, or numerical models, or the concurrent examination of multiple lines of evidence. At the same time, the use of models can enhance core foundations of the geosciences by improving critical thinking skills and by reinforcing prior knowledge. However, the use of modeling to teach petrology is dictated by the level of expectation we have for students and their facility with modeling approaches. For example, do we expect students to push buttons and navigate a program, understand the conceptual model, and/or evaluate the results of a model? Whatever the desired level of sophistication, specific elements of design should be incorporated into a modeling exercise for effective teaching. These include, but are not limited to: use of the scientific method, use of prior knowledge, a clear statement of purpose and goals, attainable goals, a connection to the natural/actual system, a demonstration that complex heterogeneous natural systems are amenable to analysis by these techniques and, ideally, connections to other disciplines and the larger earth system. Databases offer another avenue with which to explore petrology. Large datasets are available that allow integration of multiple lines of evidence to attack a petrologic problem or understand a petrologic process. These are collected into a database that offers a tool for exploring, organizing, and analyzing the data. For example, datasets may be geochemical, mineralogic, experimental, and/or visual in nature, covering global, regional, and local scales.
These datasets provide students with access to large amounts of related data through space and time. Goals of the database working group include educating earth scientists about information systems in general, about the importance of metadata, about ways of using databases and datasets as educational tools, and about the availability of existing datasets and databases. The modeling and databases groups hope to create additional petrologic teaching tools using these aspects and invite the community to contribute to the effort.
NIST Databases on Atomic Spectra
NASA Astrophysics Data System (ADS)
Reader, J.; Wiese, W. L.; Martin, W. C.; Musgrove, A.; Fuhr, J. R.
2002-11-01
The NIST atomic and molecular spectroscopic databases now available on the World Wide Web through the NIST Physics Laboratory homepage include Atomic Spectra Database, Ground Levels and Ionization Energies for the Neutral Atoms, Spectrum of Platinum Lamp for Ultraviolet Spectrograph Calibration, Bibliographic Database on Atomic Transition Probabilities, Bibliographic Database on Atomic Spectral Line Broadening, and Electron-Impact Ionization Cross Section Database. The Atomic Spectra Database (ASD) [1] offers evaluated data on energy levels, wavelengths, and transition probabilities for atoms and atomic ions. Data are given for some 950 spectra and 70,000 energy levels. About 91,000 spectral lines are included, with transition probabilities for about half of these. Additional data resulting from our ongoing critical compilations will be included in successive new versions of ASD. We plan to include, for example, our recently published data for some 16,000 transitions covering most ions of the iron-group elements, as well as Cu, Kr, and Mo [2]. Our compilations benefit greatly from experimental and theoretical atomic-data research being carried out in the NIST Atomic Physics Division. A new compilation covering spectra of the rare gases in all stages of ionization, for example, revealed a need for improved data in the infrared. We have thus measured these needed data with our high-resolution Fourier transform spectrometer [3]. An upcoming new database will give wavelengths and intensities for the stronger lines of all neutral and singly-ionized atoms, along with energy levels and transition probabilities for the persistent lines [4]. A critical compilation of the transition probabilities of Ba I and Ba II [5] has been completed and several other compilations of atomic transition probabilities are nearing completion. These include data for all spectra of Na, Mg, Al, and Si [6]. 
Newly compiled data for selected ions of Ne, Mg, Si, and S will form the basis for a new database intended to assist interpretation of soft X-ray astronomical spectra, such as those from the Chandra X-ray Observatory. These data will be available soon on the World Wide Web [7].
Electron Stark Broadening Database for Atomic N, O, and C Lines
NASA Technical Reports Server (NTRS)
Liu, Yen; Yao, Winifred M.; Wray, Alan A.; Carbon, Duane F.
2012-01-01
A database for efficiently computing the electron Stark broadening line widths for atomic N, O, and C lines is constructed. The line width is expressed in terms of the electron number density and electron-atom scattering cross sections based on the Baranger impact theory. The state-to-state cross sections are computed using the semiclassical approximation, in which the atom is treated quantum mechanically whereas the motion of the free electron follows a classical trajectory. These state-to-state cross sections are calculated based on newly compiled line lists. Each atomic line list consists of a careful merger of the NIST, Vanderbilt, and TOPbase line datasets from wavelength 50 nm to 50 micrometers, covering the VUV to IR spectral regions. There are over 10,000 lines in each atomic line list. The widths for each line are computed at 13 electron temperatures between 1,000 K and 50,000 K. A linear least-squares method using a four-term fractional power series is then employed to obtain an analytical fit for each line-width variation as a function of the electron temperature. The maximum L2 error of the analytic fits for all lines in our line lists is about 5%.
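The analytical-fit step described above can be illustrated with a short sketch. The abstract does not give the exponents of the four-term fractional power series, so the set used here (T^1/4, T^1/2, T^3/4, T) and the synthetic widths are assumptions for illustration only:

```python
import numpy as np

# Least-squares fit of a line width w(T_e) to a four-term fractional power
# series; the exponents below are assumed, not taken from the paper.
POWERS = (0.25, 0.5, 0.75, 1.0)

def fit_width(T, w, powers=POWERS):
    """Return coefficients c_k minimizing ||sum_k c_k * T**p_k - w||_2."""
    A = np.column_stack([T**p for p in powers])
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    return coeffs

def eval_width(T, coeffs, powers=POWERS):
    return sum(c * T**p for c, p in zip(coeffs, powers))

# 13 electron temperatures between 1,000 K and 50,000 K, as in the database.
T = np.linspace(1_000.0, 50_000.0, 13)
w = 0.002 * T**0.5 + 1e-7 * T        # synthetic "computed" widths
c = fit_width(T, w)
max_rel_err = np.max(np.abs(eval_width(T, c) - w) / w)
```

Once the four coefficients are stored per line, a width at any electron temperature is a cheap polynomial-style evaluation rather than a full cross-section calculation, which is the point of the analytic fit.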
Kaas, Quentin; Ruiz, Manuel; Lefranc, Marie-Paule
2004-01-01
IMGT/3Dstructure-DB and IMGT/StructuralQuery are a novel 3D structure database and a new tool for immunological proteins. They are part of IMGT, the international ImMunoGeneTics information system®, a high-quality integrated knowledge resource specializing in immunoglobulins (IG), T cell receptors (TR), major histocompatibility complex (MHC), and related proteins of the immune system (RPI) of human and other vertebrate species, which consists of databases, Web resources, and interactive on-line tools. IMGT/3Dstructure-DB data are described according to the IMGT Scientific chart rules based on the IMGT-ONTOLOGY concepts. IMGT/3Dstructure-DB provides IMGT gene and allele identification of IG, TR, and MHC proteins with known 3D structures, domain delimitations, amino acid positions according to the IMGT unique numbering, and renumbered coordinate flat files. Moreover, IMGT/3Dstructure-DB provides 2D graphical representations (or Colliers de Perles) and results of contact analysis. The IMGT/StructuralQuery tool allows searching this database by specific structural characteristics. IMGT/3Dstructure-DB and IMGT/StructuralQuery are freely available at http://imgt.cines.fr. PMID:14681396
The Results of Development of the Project ZOOINT and its Future Perspectives
NASA Astrophysics Data System (ADS)
Smirnov, I. S.; Lobanov, A. L.; Alimov, A. F.; Medvedev, S. G.; Golikov, A. A.
Work on the computerization of the main processes of accumulating and analysing collection, expert, and literature data on the systematics and faunistics of various animal taxa (a basis for the study of biological diversity) was started at the Zoological Institute in 1987. In 1991 the idea of creating a software package, the ZOOlogical INTegrated system (ZOOINT), was born. ZOOINT was to support the loading of collection data and simultaneously allow analysis of the accumulated data through various queries. In the course of its execution the project was somewhat transformed and gave results slightly different from those planned earlier, but even more valuable. A site about the information retrieval system (IRS) ZOOINT was also built on the Internet. The implementation of remote access to the taxonomic information, with the possibility of working with the databases (DB) of the IRS ZOOINT in on-line mode, was scheduled. This required not only renewal of the developers' and users' computers, but also mastery of new software: the HTML language, the Windows NT operating system, and Active Server Pages (ASP) technology. One of the serious problems in creating databases and an IRS in zoology is the representation of hierarchical classification. This problem was solved by building classifiers: specialized standard taxonomic databases, which have been given the name ZOOCOD. The recently increased number of attempts to create taxonomic electronic lists, tables, and databases has required the development of some primary rules for the unification of zoological systematic databases. These rules are intended for application in institutes of a biological profile in which computerization proceeds very slowly and the building of databases is in a rudimentary state.
These positions and standards for the construction of biological (taxonomic) databases should facilitate dialogue among biologists, the application in the near future of the most advanced database technologies (for example, use of the XML platform), and, eventually, the building of modern information systems. The work on the project was carried out with the support of RFBR grant N 02-07-90217, the program "The Information System on the Biodiversity of Russia", and Project N 15 "Antarctic Regions".
LIRIS flight database and its use toward noncooperative rendezvous
NASA Astrophysics Data System (ADS)
Mongrard, O.; Ankersen, F.; Casiez, P.; Cavrois, B.; Donnard, A.; Vergnol, A.; Southivong, U.
2018-06-01
ESA's fifth and last Automated Transfer Vehicle, ATV Georges Lemaître, tested new rendezvous technology before docking with the International Space Station (ISS) in August 2014. The technology demonstration, called Laser Infrared Imaging Sensors (LIRIS), provides an unseen view of the ISS. During Georges Lemaître's rendezvous, the LIRIS sensors, composed of two infrared cameras, one visible camera, and a scanning LIDAR (Light Detection and Ranging), were turned on two and a half hours out, at 3500 m from the Space Station. All sensors worked as expected, and a large amount of data was recorded and stored within ATV-5's cargo hold before being returned to Earth with Soyuz flight 38S in September 2014. As part of the LIRIS post-flight activities, the information gathered by all sensors was collected into a flight database together with the reference ATV trajectory and attitude estimated by the ATV main navigation sensors. Although decoupled from the ATV main computer, the LIRIS data were carefully synchronized with ATV guidance, navigation, and control (GNC) data. Hence, the LIRIS database can be used to assess the performance of various image processing algorithms that provide range and line-of-sight (LoS) navigation at long/medium range as well as 6 degree-of-freedom (DoF) navigation at short range. The database also contains information on the overall ATV position with respect to Earth and on the Sun direction in the ATV frame, so that the effect of the environment on the sensors can also be investigated. This paper introduces the structure of the LIRIS database and provides some examples of applications to increase the technology readiness level of noncooperative rendezvous.
Highlights of the HITRAN2016 database
NASA Astrophysics Data System (ADS)
Gordon, I.; Rothman, L. S.; Hill, C.; Kochanov, R. V.; Tan, Y.
2016-12-01
The HITRAN2016 database will be released just before the AGU meeting. It is a titanic effort of worldwide collaboration between experimentalists, theoreticians, and atmospheric scientists, who measure, calculate, and validate the HITRAN data. The line-by-line lists for almost all of the HITRAN molecules were updated in comparison with the previous compilation, HITRAN2012 [1], which has been in use, along with some intermediate updates, since 2012. The extent of the updates ranges from updating a few lines of certain molecules to complete replacements of the lists and the introduction of additional isotopologues. Many more vibrational bands were added to the database, extending the spectral coverage and completeness of the datasets. For several molecules, including H2O, CO2, and CH4, the extent of the updates is so complex that separate task groups were assembled to make strategic decisions about the choice of sources for various parameters in different spectral regions. The number of parameters has also been significantly increased, now incorporating, for instance, non-Voigt line profiles [2]; broadening by gases other than air and "self" [3]; and other phenomena, including line mixing. In addition, the number of cross-section sets in the database has increased dramatically and includes many recent experiments as well as adaptations of existing databases that were not in HITRAN previously (for instance, the PNNL database [4]). The HITRAN2016 edition takes full advantage of the new structure and interface available at www.hitran.org [5] and the HITRAN Application Programming Interface [6]. This poster will provide a summary of the updates, emphasizing details of some of the most important or dramatic improvements. Users of the database will have an opportunity to discuss the updates relevant to their research and request a demonstration of how to work with the database.
This work is supported by the NASA PATM (NNX13AI59G), PDART (NNX16AG51G) and AURA (NNX14AI55G) programs. References[1] L.S. Rothman et al, JQSRT 130, 4 (2013). [2] P. Wcisło et al., JQSRT 177, 75 (2016). [3] J. S. Wilzewski et al., JQSRT 168, 193 (2016). [4] S.W. Sharpe et al, Appl Spectrosc 58, 1452 (2004). [5] C. Hill et al, JQSRT 177, 4 (2016). [6] R.V. Kochanov et al, JQSRT 177, 15 (2016).
KA-SB: from data integration to large scale reasoning
Roldán-García, María del Mar; Navas-Delgado, Ismael; Kerzazi, Amine; Chniber, Othmane; Molina-Castro, Joaquín; Aldana-Montes, José F
2009-01-01
Background: The analysis of information in the biological domain is usually focused on the analysis of data from single on-line data sources. Unfortunately, studying a biological process requires access to disperse, heterogeneous, autonomous data sources. In this context, an analysis of the information is not possible without the integration of such data. Methods: KA-SB is a querying and analysis system for end users based on combining a data integration solution with a reasoner. Thus, the tool has been created with a process divided into two steps: 1) KOMF, the Khaos Ontology-based Mediator Framework, is used to retrieve information from heterogeneous and distributed databases; 2) the integrated information is crystallized in a (persistent and high-performance) reasoner (DBOWL). This information can then be further analyzed (by means of querying and reasoning). Results: In this paper we present a novel system that combines the use of a mediation system with the reasoning capabilities of a large-scale reasoner to provide a way of finding new knowledge and of analyzing the integrated information from different databases, which is retrieved as a set of ontology instances. This tool uses a graphical query interface to build user queries easily; it shows a graphical representation of the ontology and allows users to build queries by clicking on the ontology concepts. Conclusion: These kinds of systems (based on KOMF) will provide users with very large amounts of information (interpreted as ontology instances once retrieved), which cannot be managed using traditional main-memory-based reasoners. We propose a process for creating persistent and scalable knowledge bases from sets of OWL instances obtained by integrating heterogeneous data sources with KOMF. This process has been applied to develop a demo tool, which uses the BioPAX Level 3 ontology as the integration schema, and integrates the UNIPROT, KEGG, ChEBI, BRENDA, and SABIO-RK databases. PMID:19796402
New Directions in the NOAO Observing Proposal System
NASA Astrophysics Data System (ADS)
Gasson, David; Bell, Dave
For the past eight years NOAO has been refining its on-line observing proposal system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form, or via the Gemini Phase I Tool. NOAO staff can use the system to do administrative tasks, scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available on-line, including the proposals themselves (in HTML, PDF and PostScript) and technical comments. Grades and TAC comments are entered and edited through web forms, and can be sorted and filtered according to specified criteria. Current developments include a move away from proprietary solutions, toward open standards such as SQL (in the form of the MySQL relational database system), Perl, PHP and XML.
NASA Technical Reports Server (NTRS)
2003-01-01
When NASA needed a real-time, online database system capable of tracking documentation changes in its propulsion test facilities, engineers at Stennis Space Center joined with ECT International, of Brookfield, Wisconsin, to create a solution. Through NASA's Dual-Use Program, ECT developed Exdata, a software program that works within the company's existing Promise software. Exdata not only satisfied NASA's requirements, but also expanded ECT's commercial product line. Promise, ECT's primary product, is an intelligent software program with specialized functions for designing and documenting electrical control systems. An add-on to AutoCAD software, Promise generates control system schematics, panel layouts, bills of material, wire lists, and terminal plans. The drawing functions include symbol libraries, macros, and automatic line breaking. Primary Promise customers include manufacturing companies, utilities, and other organizations with complex processes to control.
Convolutional Neural Network-Based Finger-Vein Recognition Using NIR Image Sensors
Hong, Hyung Gil; Lee, Min Beom; Park, Kang Ryoung
2017-01-01
Conventional finger-vein recognition systems perform recognition based on the finger-vein lines extracted from the input images or image enhancement, and texture feature extraction from the finger-vein images. In these cases, however, the inaccurate detection of finger-vein lines lowers the recognition accuracy. In the case of texture feature extraction, the developer must experimentally decide on a form of the optimal filter for extraction considering the characteristics of the image database. To address this problem, this research proposes a finger-vein recognition method that is robust to various database types and environmental changes based on the convolutional neural network (CNN). In the experiments using the two finger-vein databases constructed in this research and the SDUMLA-HMT finger-vein database, which is an open database, the method proposed in this research showed a better performance compared to the conventional methods. PMID:28587269
NASA Technical Reports Server (NTRS)
Mallasch, Paul G.; Babic, Slavoljub
1994-01-01
The United States Air Force (USAF) provides NASA Lewis Research Center with monthly reports containing the Synchronous Satellite Catalog and the associated Two Line Mean Element Sets. The USAF Synchronous Satellite Catalog supplies satellite orbital parameters collected by an automated monitoring system and provided to Lewis Research Center as text files on magnetic tape. Software was developed to facilitate automated formatting, data normalization, cross-referencing, and error correction of Synchronous Satellite Catalog files before loading into the NASA Geosynchronous Satellite Orbital Statistics Database System (GSOSTATS). This document contains the User's Guide and Software Maintenance Manual with information necessary for installation, initialization, start-up, operation, error recovery, and termination of the software application. It also contains implementation details, modification aids, and software source code adaptations for use in future revisions.
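The Two Line Mean Element Sets mentioned above use the standard NORAD TLE format, in which each 69-character line ends in a modulo-10 checksum (digits summed, each minus sign counted as 1), so automated error detection of the kind described can begin with a checksum pass. This is a minimal sketch, not the GSOSTATS code itself; the sample line is the widely published ISS element set:

```python
# Modulo-10 checksum used by NORAD two-line element (TLE) sets: sum all
# digits in the first 68 columns, count each '-' sign as 1, ignore the rest.
def tle_checksum(line: str) -> int:
    total = 0
    for ch in line[:68]:
        if ch.isdigit():
            total += int(ch)
        elif ch == "-":
            total += 1
    return total % 10

def line_is_valid(line: str) -> bool:
    # Column 69 (index 68) carries the checksum digit.
    return len(line) == 69 and line[68].isdigit() and tle_checksum(line) == int(line[68])

# Widely published sample: line 1 of an ISS element set.
l1 = "1 25544U 98067A   08264.51782528 -.00002182  00000-0 -11606-4 0  2927"
```

A corrupted digit changes the computed checksum nine times out of ten, so this simple test catches most single-character transcription errors before the orbital fields are parsed.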
FRBCAT: The Fast Radio Burst Catalogue
NASA Astrophysics Data System (ADS)
Petroff, E.; Barr, E. D.; Jameson, A.; Keane, E. F.; Bailes, M.; Kramer, M.; Morello, V.; Tabbara, D.; van Straten, W.
2016-09-01
Here, we present a catalogue of known Fast Radio Burst sources in the form of an online catalogue, FRBCAT. The catalogue includes information about the instrumentation used for the observations of each detected burst, the measured quantities from each observation, and model-dependent quantities derived from the observed quantities. To aid consistent comparisons of burst properties such as width and signal-to-noise ratio, we have re-processed all the bursts for which we have access to the raw data, with software which we make available. The originally derived properties are also listed for comparison. The catalogue is hosted online as a MySQL database which can also be downloaded in tabular or plain-text format for off-line use. This database will be maintained for use by the community for studies of the Fast Radio Burst population as it grows.
A new relational database structure and online interface for the HITRAN database
NASA Astrophysics Data System (ADS)
Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan
2013-11-01
A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database to ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command line tools and from within computer programs is described.
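To make the idea of linked, searchable line-transition tables concrete, here is a toy relational store queried with SQL via Python's built-in sqlite3. The schema, table names, and sample values are invented for illustration and are far simpler than HITRAN's actual schema:

```python
import sqlite3

# Toy two-table schema: molecules linked to their line transitions.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE molecule (id INTEGER PRIMARY KEY, formula TEXT);
CREATE TABLE transition (
    id INTEGER PRIMARY KEY,
    molecule_id INTEGER REFERENCES molecule(id),
    nu REAL,   -- transition wavenumber, cm-1
    sw REAL    -- line intensity (arbitrary illustrative values)
);
""")
con.execute("INSERT INTO molecule VALUES (1, 'H2O'), (2, 'CO2')")
con.executemany(
    "INSERT INTO transition(molecule_id, nu, sw) VALUES (?, ?, ?)",
    [(1, 1554.3, 2.1e-22), (2, 667.4, 7.8e-20), (2, 2349.1, 3.5e-18)],
)

# A typical query: all CO2 lines in a wavenumber window, strongest first.
rows = con.execute("""
    SELECT m.formula, t.nu, t.sw FROM transition t
    JOIN molecule m ON m.id = t.molecule_id
    WHERE m.formula = 'CO2' AND t.nu BETWEEN 600 AND 2400
    ORDER BY t.sw DESC
""").fetchall()
```

Because each parameter lives in its own column of a linked table rather than a fixed-width text record, new per-line quantities can be added as extra columns or tables without breaking existing queries, which is the flexibility argument the abstract makes.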
Cabazitaxel for the treatment of prostate cancer.
Michielsen, Dirk P J; Braeckman, Johan G; Denis, Louis
2011-04-01
Prostate cancer is a frequently diagnosed male cancer. In men presenting with locally advanced or metastatic disease, the mainstay of treatment is hormonal suppression. Despite castrate levels of testosterone, prostate cancer gradually evolves with time into a castration-refractory state. Chemotherapeutic agents are able to influence the natural history of metastatic castration-resistant prostate cancer. Docetaxel is a clinically relevant, FDA-approved taxane. Today, it is the first-line chemotherapeutic agent in castration-refractory prostate cancer (CRPC). There is no standard second-line chemotherapeutic regimen. This review provides information on the efficacy of cabazitaxel as a second-line treatment for CRPC. The MEDLINE database was searched for clinical trials on chemotherapeutic treatment options for castration-resistant prostate cancer, and all available data on the efficacy of cabazitaxel are summarized. New treatment strategies for castration-resistant prostate cancer should primarily focus on quality of life. In this respect, vaccination therapy seems promising because of its acceptable level of toxicity; however, more research is needed to prove its efficacy in the treatment of castration-resistant prostate cancer. Cabazitaxel seems to be a promising second-line therapy in CRPC.
Impact of line parameter database and continuum absorption on GOSAT TIR methane retrieval
NASA Astrophysics Data System (ADS)
Yamada, A.; Saitoh, N.; Nonogaki, R.; Imasu, R.; Shiomi, K.; Kuze, A.
2017-12-01
The current methane retrieval algorithm (V1) for the wavenumber range 1210-1360 cm-1, which includes the CH4 ν4 band, from the thermal infrared (TIR) band of the Thermal and Near-infrared Sensor for Carbon Observation Fourier Transform Spectrometer (TANSO-FTS) onboard the Greenhouse Gases Observing Satellite (GOSAT) uses LBLRTM V12.1 with the AER V3.1 line database and the MT_CKD 2.5.2 continuum absorption model to calculate optical depth. Since line parameter databases have been updated and the continuum absorption may have large uncertainty, the purpose of this study is to assess the impact on CH4 retrieval of the choice of line parameter database and of the uncertainty of continuum absorption. We retrieved CH4 profiles with the line parameter database replaced by AER V1.0, HITRAN 2004, HITRAN 2008, AER V3.2, or HITRAN 2012 (Rothman et al., 2005, 2009, and 2013; Clough et al., 2005). In addition, we assumed 10% larger continuum absorption coefficients and a 50% larger temperature-dependence coefficient of continuum absorption, based on the report by Paynter and Ramaswamy (2014). We compared the retrieved CH4 with the HIPPO CH4 observations (Wofsy et al., 2012). The difference from the HIPPO observations was smallest for AER V3.2, at 24.1 ± 45.9 ppbv. The differences for AER V1.0, HITRAN 2004, HITRAN 2008, and HITRAN 2012 were 35.6 ± 46.5 ppbv, 37.6 ± 46.3 ppbv, 32.1 ± 46.1 ppbv, and 35.2 ± 46.0 ppbv, respectively. The maximum CH4 retrieval difference was -0.4 ppbv at the 314 hPa layer when we used 10% larger absorption coefficients for the H2O foreign continuum. Comparing the AER V3.2 case to the HITRAN 2008 case, the line coupling effect reduced the difference by 8.0 ppbv. Line coupling effects are therefore important for GOSAT TIR CH4 retrieval, while effects from the uncertainty of continuum absorption were negligibly small.
The human-induced pluripotent stem cell initiative—data resources for cellular genetics
Streeter, Ian; Harrison, Peter W.; Faulconbridge, Adam; Flicek, Paul; Parkinson, Helen; Clarke, Laura
2017-01-01
The Human Induced Pluripotent Stem Cell Initiative (HipSci) is establishing a large catalogue of human iPSC lines, arguably the most well-characterized collection to date. The HipSci portal enables researchers to choose the right cell line for their experiment, and makes HipSci's rich catalogue of assay data easy to discover and reuse. Each cell line has genomic, transcriptomic, proteomic and cellular phenotyping data. Data are deposited in the appropriate EMBL-EBI archives, including the European Nucleotide Archive (ENA), European Genome-phenome Archive (EGA), ArrayExpress and PRoteomics IDEntifications (PRIDE) databases. The project will produce 500 cell lines from healthy individuals, and lines from 150 patients with rare genetic diseases; these will be available through the European Collection of Authenticated Cell Cultures (ECACC). As of August 2016, 238 cell lines are available for purchase. Project data are presented through the HipSci data portal (http://www.hipsci.org/lines) and are downloadable from the associated FTP site (ftp://ftp.hipsci.ebi.ac.uk/vol1/ftp). The data portal presents a summary matrix of the HipSci cell lines, showing available data types. Each line has its own page containing descriptive metadata, quality information, and links to archived assay data. Analysis results are also available in a Track Hub, allowing visualization in the context of public genomic annotations (http://www.hipsci.org/data/trackhubs). PMID:27733501
Genomes OnLine Database (GOLD) v.6: data updates and feature enhancements
Mukherjee, Supratim; Stamatis, Dimitri; Bertsch, Jon; Ovchinnikova, Galina; Verezemska, Olena; Isbandi, Michelle; Thomas, Alex D.; Ali, Rida; Sharma, Kaushal; Kyrpides, Nikos C.; Reddy, T. B. K.
2017-01-01
The Genomes Online Database (GOLD) (https://gold.jgi.doe.gov) is a manually curated data management system that catalogs sequencing projects with associated metadata from around the world. In the current version of GOLD (v.6), all projects are organized based on a four-level classification system in the form of a Study, Organism (for isolates) or Biosample (for environmental samples), Sequencing Project and Analysis Project. Currently, GOLD provides information for 26 117 Studies, 239 100 Organisms, 15 887 Biosamples, 97 212 Sequencing Projects and 78 579 Analysis Projects. These are integrated with over 312 metadata fields, of which 58 are controlled vocabularies with 2067 terms. The web interface facilitates submission of a diverse range of Sequencing Projects (such as isolate genome, single-cell genome, metagenome, metatranscriptome) and complex Analysis Projects (such as genome from metagenome, or combined assembly from multiple Sequencing Projects). GOLD provides a seamless interface with the Integrated Microbial Genomes (IMG) system and supports and promotes the Genomic Standards Consortium (GSC) Minimum Information standards. This paper describes the data updates and additional features added during the last two years. PMID:27794040
NASA Technical Reports Server (NTRS)
1991-01-01
R:BASE for DOS, a computer program developed under NASA contract, has been adapted by the National Marine Mammal Laboratory and the College of the Atlantic to provide an advanced computerized photo-matching technique for the identification of humpback whales. The program compares photos with stored digitized descriptions, enabling researchers to track and determine distribution and migration patterns. R:BASE is a spinoff of RIM (Relational Information Manager), which was used to store data for analyzing heat-shielding tiles on the Space Shuttle Orbiter. It is now the world's second largest selling line of microcomputer database management software.
Code of Federal Regulations, 2011 CFR
2011-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Background and Definitions... Product Safety Information Database. (2) Commission or CPSC means the Consumer Product Safety Commission... Information Database, also referred to as the Database, means the database on the safety of consumer products...
Economic evaluation for first-line anti-hypertensive medicines: applications for the Philippines
2012-01-01
Background Medicines to control hypertension, a leading cause of morbidity and mortality, are a major component of health expenditures in the Philippines. This study aims to review economic studies for first line anti-hypertensive medical treatment without co-morbidities; and discuss practical, informational and policy implications on the use of economic evaluation in the Philippines. Methods A systematic literature review was performed using the following databases: MEDLINE, EMBASE, BIOSIS, PubMed, The Cochrane Library, Health Economics Evaluations Database (HEED) and the Centre for Reviews and Dissemination – NHS NICE. Six existing economic analytical frameworks were reviewed and one framework for critical appraisal was developed. Results Out of 1336 searched articles, 12 fulfilled the inclusion criteria. The studies were summarized according to their background characteristics (year, journal, intervention and comparators, objective/study question, target audience, economic study type, study population, setting and country and source of funding/conflict of interest) and technical characteristics (perspective, time horizon, methodology/modeling, search strategy for parameters, costs, effectiveness measures, discounting, assumptions and biases, results, cost-effectiveness ratio, endpoints, sensitivity analysis, generalizability, strengths and limitations, conclusions, implications and feasibility and recommendations). The studies represented different countries, perspectives and stakeholders. Conclusions Diuretics were the most cost-effective drug class for first-line treatment of hypertension without co-morbidities. Although the Philippine Health Insurance Corporation may apply the recommendations given in previous studies (i.e. to subsidize diuretics, ACE inhibitors and calcium channel blockers), it is uncertain how much public funding is justified. 
There is an information gap on clinical data (transition probabilities, relative risks and risk reduction) and utility values on hypertension and related diseases from middle- and low- income countries. Considering the national relevance of the disease, a study on the costs of hypertension in the Philippines including in-patient, out-patient, out-of-pocket, local government and national government expenditure must be made. Economic evaluation may be incorporated in health technology assessment, planning, proposal development, research, prioritization and evaluation of health programmes. The approaches will vary depending on the policy questions. The information gap calls for building strong economic evaluative capacity in growing economies. PMID:23227952
Bromus rubens: line drawing of Bromus rubens (USDA-NRCS PLANTS Database / Hitchcock, A.S., rev. A. Chase), from the Global Invasive Species Database (GISD), which was developed and is managed by the Invasive Species Specialist Group.
The Effect of Positive Mood on Flexible Processing of Affective Information.
Grol, Maud; De Raedt, Rudi
2017-07-17
Recent efforts have been made to understand the cognitive mechanisms underlying psychological resilience. Cognitive flexibility in the context of affective information has been related to individual differences in resilience. However, it is unclear whether flexible affective processing is sensitive to mood fluctuations. Furthermore, it remains to be investigated how effects on flexible affective processing interact with the affective valence of information that is presented. To fill this gap, we tested the effects of positive mood and individual differences in self-reported resilience on affective flexibility, using a task switching paradigm (N = 80). The main findings showed that positive mood was related to lower task switching costs, reflecting increased flexibility, in line with previous findings. In line with this effect of positive mood, we showed that greater resilience levels, specifically levels of acceptance of self and life, also facilitated task set switching in the context of affective information. However, the effects of resilience on affective flexibility seem more complex. Resilience tended to relate to more efficient task switching when negative information was preceded by positive information, possibly because the presentation of positive information, as well as positive mood, can facilitate task set switching. Positive mood also influenced costs associated with switching affective valence of the presented information. This latter effect was indicative of a reduced impact of no longer relevant negative information and more impact of no longer relevant positive information. Future research should confirm these effects of individual differences in resilience on affective flexibility, considering the affective valence of the presented information. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Rey, Michaël; Nikitin, Andrei V.; Babikov, Yurii L.; Tyuterev, Vladimir G.
2016-09-01
Knowledge of the intensities of rovibrational transitions of various molecules and their isotopic species over wide spectral and temperature ranges is essential for modeling the optical properties of planetary atmospheres and brown dwarfs, and for other astrophysical applications. TheoReTS ("Theoretical Reims-Tomsk Spectral data") is an Internet-accessible information system devoted to ab initio based, rotationally resolved spectra predictions for some relevant molecular species. All data were generated from potential energy and dipole moment surfaces computed via high-level electronic structure calculations, using variational methods for vibration-rotation energy levels and transitions. When available, empirical corrections to band centers were applied, with all line intensities remaining purely ab initio. The current TheoReTS implementation contains information on four-to-six-atomic molecules, including phosphine, methane, ethylene, silane, methyl fluoride, and their isotopic species 13CH4 , 12CH3D , 12CH2D2 , 12CD4 , 13C2H4, … . Predicted hot methane line lists up to T = 2000 K are included. The information system provides the associated software for spectra simulation, including absorption coefficients, absorption and emission cross-sections, transmittance and radiance. The simulations allow Lorentz, Gauss and Voigt line shapes. Rectangular, triangular, Lorentzian, Gaussian, sinc and sinc-squared apparatus functions can be used, with user-defined specifications for broadening parameters and spectral resolution. All information is organized as a relational database with a user-friendly graphical interface following the Model-View-Controller architecture. The full-featured web application is written in PHP using the Yii framework and C++ software modules. For very large high-temperature line lists, data compression is implemented for fast interactive spectra simulations of the quasi-continual absorption due to the high line density.
Applications of TheoReTS include education and training in molecular absorption/emission, radiative and non-LTE processes, spectroscopic applications, and opacity calculations for planetary and astrophysical applications. The system is freely accessible via the Internet on the two mirror sites: in Reims, France
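The line shapes mentioned above can be sketched in a few lines: below are the unit-area Lorentz and Gauss (Doppler) profiles, and an absorption coefficient built as an intensity-weighted sum over lines. This is a pure-Python illustration under invented line positions, intensities and widths, not TheoReTS code.

```python
import math

def lorentz(nu, nu0, gamma):
    """Lorentzian profile with half-width at half-maximum gamma, unit area."""
    return (gamma / math.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def gauss(nu, nu0, alpha):
    """Gaussian (Doppler) profile with HWHM alpha, unit area."""
    sigma = alpha / math.sqrt(2 * math.log(2))
    return math.exp(-((nu - nu0) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def absorption(nu, lines, gamma=0.08):
    """Absorption coefficient at wavenumber nu: sum of intensity-weighted profiles."""
    return sum(sw * lorentz(nu, nu0, gamma) for nu0, sw in lines)

# Two invented lines: (wavenumber cm-1, intensity).
lines = [(1310.5, 2.1e-22), (1405.2, 8.7e-23)]
k_peak = absorption(1310.5, lines)   # strongest at a line center
k_wing = absorption(1320.0, lines)   # much weaker in the wing
```

A Voigt shape would follow the same pattern as a convolution of the two profiles; tools like TheoReTS evaluate such sums over millions of lines, which is why compression and fast evaluation matter.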
Update of ECTOM - European catalogue of training opportunities in meteorology
NASA Astrophysics Data System (ADS)
Halenka, Tomas; Belda, Michal
2016-04-01
After the Bologna Declaration (1999), a process of integration of university-level education was started in most European countries, with the aim of unifying the system and structure of tertiary education and enabling transnational mobility of students across Europe. The goal was to achieve compatibility between the systems and levels in individual countries to support this mobility. To support this effort, it is useful to provide information about educational opportunities in different countries in a centralised form, with uniform shape and content, but validated at the national level. For meteorology and climatology this could reasonably be done under the auspices of the European Meteorological Society, ideally with the contribution and guidance of the individual National Meteorological Societies. A brief history of the original ECTOM I and of previous attempts to start ECTOM II is given. The need to update the content is discussed, with emphasis on several aspects. There are several reasons for such an update of ECTOM I. First, there are many more new EMS members which could contribute to the catalogue. Second, corrected, new, more precise and expanded information will be available in addition to the existing records, particularly regarding changes in the education systems of EC countries and of associated countries approaching the EC following the main goals of the Bologna Declaration. Third, contemporary technology should be adopted to organize a real database, with easier navigation and searching of the appropriate information and the feasibility of keeping it permanently up to date through a WWW interface. In this presentation, the engine of the ECTOM II database will be shown, together with practical information on how to find and submit information on access to education or training possibilities.
Finally, as we have started filling the database using freely available information from the web, practical examples of use will be demonstrated on-line.
Kuwabara, Kazuaki; Matsuda, Shinya; Fushimi, Kiyohide; Ishikawa, Koichi B; Horiguchi, Hiromasa; Fujimori, Kenji
2012-01-01
Public health emergencies like earthquakes and tsunamis underscore the need for an evidence-based approach to disaster preparedness. Using a Japanese administrative database and a geographical information system (GIS), the interruption of hospital-based mechanical ventilation by a hypothetical disaster in three areas of the southeastern mainland (Tokai, Tonankai, and Nankai) was simulated, and the repercussions on ventilator care in the prefectures adjacent to the damaged prefectures were estimated. Using the 2010 database, which includes 3,181,847 hospitalized patients among 952 hospitals, the maximum daily ventilator capacity of each hospital was calculated and the number of patients who were administered ventilation on October xx was counted. Using GIS and patient zip codes, the straight-line distances among the damaged hospitals, the hospitals in the prefectures nearest to the damaged prefectures, and ventilated patients' zip codes were measured. The simulation assumed that ventilated patients were transferred to the closest hospitals outside the damaged prefectures, and the increase in ventilator operating rates in the three areas was aggregated. One hundred twenty-four and 236 patients were administered ventilation in the damaged hospitals and in the closest hospitals outside the damaged prefectures of Tokai, 92 and 561 of Tonankai, and 35 and 85 of Nankai, respectively. The increases in ventilator operating rates among prefectures ranged from 1.04- to 26.33-fold in Tokai, 1.03- to 1.74-fold in Tonankai, and 1.00- to 2.67-fold in Nankai. Administrative databases and GIS can contribute to evidence-based disaster preparedness and to determining appropriate receiving hospitals with available medical resources.
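The transfer step described above can be sketched as a nearest-hospital assignment by straight-line distance. All hospital locations, patient coordinates and ventilator loads below are invented examples, not data from the study; distance is the great-circle approximation of the straight-line measure.

```python
import math

def dist_km(a, b):
    """Great-circle ('straight-line') distance between (lat, lon) pairs, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

# Receiving hospitals outside the damaged prefecture: location and
# current number of ventilated patients (invented).
location = {"H1": (35.0, 137.0), "H2": (34.2, 135.5)}
load = {"H1": 4, "H2": 2}
before = dict(load)

# Home-zip centroids of ventilated patients needing transfer (invented).
patients = [(34.9, 136.9), (34.3, 135.6)]

# Assign each patient to the closest receiving hospital.
for p in patients:
    nearest = min(load, key=lambda h: dist_km(p, location[h]))
    load[nearest] += 1

# Fold increase in each receiving hospital's ventilator operating rate.
increase = {h: load[h] / before[h] for h in load}
print(increase)  # → {'H1': 1.25, 'H2': 1.5}
```

A real analysis would also cap assignments at each hospital's maximum daily ventilator capacity; the sketch omits that constraint for brevity.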
TopoCad - A unified system for geospatial data and services
NASA Astrophysics Data System (ADS)
Felus, Y. A.; Sagi, Y.; Regev, R.; Keinan, E.
2013-10-01
"E-government" is a leading trend in public sector activities in recent years. The Survey of Israel has set as its vision providing all of its services and datasets online. The TopoCad system is the latest software tool developed to unify a number of services and databases into one on-line and user-friendly system. The TopoCad system is based on Web 1.0 technology, hence the customer is only a consumer of data. All data and services are accessible to surveyors and geo-information professionals in an easy and comfortable way. The future lies in Web 2.0 and Web 3.0 technologies, through which professionals can upload their own data for quality control and future assimilation into the national database. A key issue in the development of this complex system was to implement a simple and comfortable user experience (UX). The user interface employs a natural-language dialog box to understand the user's requirements. The system then links spatial data with alphanumeric data seamlessly. The operation of TopoCad requires no user guide or training; it is intuitive and self-explanatory. The system utilizes semantic engines and machine-understanding technologies to link records from diverse databases in a meaningful way. Thus, the next generation of TopoCad will include five main modules: user and project information, coordinate transformations and calculation services, geospatial data quality control, links to governmental systems and databases, and smart forms and applications. The article describes the first stage of the TopoCad system and gives an overview of its future development.
Poprach, Alexandr; Bortlíček, Zbyněk; Büchler, Tomáš; Melichar, Bohuslav; Lakomý, Radek; Vyzula, Rostislav; Brabec, Petr; Svoboda, Marek; Dušek, Ladislav; Gregor, Jakub
2012-12-01
The incidence and mortality of renal cell carcinoma (RCC) in the Czech Republic are among the highest in the world. Several targeted agents have been recently approved for the treatment of advanced/metastatic RCC. Presentation of a national clinical database for monitoring and assessment of patients with advanced/metastatic RCC treated with targeted therapy. The RenIS (RENal Information System, http://renis.registry.cz ) registry is a non-interventional post-registration database of epidemiological and clinical data of patients with RCC treated with targeted therapies in the Czech Republic. Twenty cancer centres eligible for targeted therapy administration participate in the project. As of November 2011, six agents were approved and reimbursed from public health insurance, including bevacizumab, everolimus, pazopanib, sorafenib, sunitinib, and temsirolimus. As of 10 October 2011, 1,541 patients with valid records were entered into the database. Comparison with population-based data from the Czech National Cancer Registry revealed that RCC patients treated with targeted therapy are significantly younger (median age at diagnosis 59 vs. 66 years). Most RenIS registry patients were treated with sorafenib and sunitinib, many patients sequentially with both agents. Over 10 % of patients were also treated with everolimus in the second or third line. Progression-free survival times achieved were comparable to phase III clinical trials. The RenIS registry has become an important tool and source of information for the management of cancer care and clinical practice, providing comprehensive data on monitoring and assessment of RCC targeted therapy on a national level.
LONI visualization environment.
Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W
2006-06-01
Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenetic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and for other clinical, pedagogical, and research endeavors.
NASA Technical Reports Server (NTRS)
Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriquez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.
2004-01-01
An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise and deploy OPIS to enhance the quality of decision-making metrics and the attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible to the ALS Community via a web-accessible interface. The data will be stored in an object-oriented relational database (created in MySQL(TM)) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including being an R&TD status information hub that can potentially serve as the primary annual reporting mechanism. Using OPIS, ALS managers and element leads will be able to make informed research and technology development investment decisions, and analysts will be able to perform accurate systems evaluations. Additionally, the range and specificity of the information solicited will serve to educate technology developers about programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, and Control). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.
Using Third Party Data to Update a Reference Dataset in a Quality Evaluation Service
NASA Astrophysics Data System (ADS)
Xavier, E. M. A.; Ariza-López, F. J.; Ureña-Cámara, M. A.
2016-06-01
Nowadays it is easy to find many data sources for various regions around the globe. In this 'data overload' scenario there is little, if any, information available about the quality of these data sources. In order to provide such data quality information easily, we presented the architecture of a web service for the automation of quality control of spatial datasets running over a Web Processing Service (WPS). For quality procedures that require an external reference dataset, such as positional accuracy or completeness, the architecture permits the use of a reference dataset. However, this reference dataset is not ageless, since it suffers the natural time degradation inherent to geospatial features. In order to mitigate this problem we propose the Time Degradation & Updating Module, which applies assessed data as a tool to keep the reference database updated. The main idea is to use datasets sent to the quality evaluation service as a source of 'candidate data elements' for updating the reference database. After evaluation, if some elements of a candidate dataset reach a determined quality level, they can be used as input data to improve the current reference database. In this work we present the first design of the Time Degradation & Updating Module. We believe the outcomes can be applied in the pursuit of a fully automatic on-line quality evaluation platform.
Nastasi, Giovanni; Miceli, Carla; Pittalà, Valeria; Modica, Maria N; Prezzavento, Orazio; Romeo, Giuseppe; Rescifina, Antonio; Marrazzo, Agostino; Amata, Emanuele
2017-01-01
Sigma (σ) receptors are accepted as a particular receptor class consisting of two subtypes: sigma-1 (σ1) and sigma-2 (σ2). The two receptor subtypes have specific drug actions, pharmacological profiles and molecular characteristics. The σ2 receptor is overexpressed in several tumor cell lines, and its ligands are currently under investigation for their role in tumor diagnosis and treatment. The σ2 receptor structure has not been disclosed, and researchers rely on σ2 receptor radioligand binding assays to understand the receptor's pharmacological behavior and to design new lead compounds. Here we present the Sigma-2 Receptor Selective Ligands Database (S2RSLDB), a manually curated database of σ2 receptor selective ligands containing more than 650 compounds. The database is built with chemical structure information, radioligand binding affinity data, computed physicochemical properties, and experimental radioligand binding procedures. The S2RSLDB is freely available online without account login; with its powerful search engine, users may build complex queries, sort tabulated results, generate color-coded 2D and 3D graphs, and download the data for additional screening. The collection reported here is extremely useful for the development of new ligands endowed with σ2 receptor affinity, selectivity, and appropriate physicochemical properties. The database will be updated yearly and, in the near future, an online submission form will be made available to help keep the database widely known in the research community and continually updated. The database is available at http://www.researchdsf.unict.it/S2RSLDB.
Manosroi, Jiradej; Boonpisuttinant, Korawinwich; Manosroi, Worapaka; Manosroi, Aranya
2012-07-13
The Thai/Lanna medicinal plant recipe database "MANOSROI II" contains the medicinal plant recipes of all regions of Thailand for the treatment of various diseases, including anti-cancer medicinal plant recipes. The aim of this study was to investigate the anti-proliferative activity on HeLa cell lines of medicinal plant recipes selected from the Thai/Lanna medicinal plant recipe database "MANOSROI II". Forty aqueous extracts of Thai/Lanna medicinal plant recipes selected from the database were investigated for anti-proliferative activity on the HeLa cell line by SRB assay. Apoptosis induction (caspase-3 activity) and MMP-2 inhibition activity (zymography) on the HeLa cell line were determined for the three aqueous extracts that gave the highest anti-proliferative activity. Phytochemicals and anti-oxidative activities, including free radical scavenging, inhibition of lipid peroxidation and metal chelating inhibition activities, were also investigated. Sixty percent of the medicinal plant recipes selected from the "MANOSROI II" database showed anti-proliferative activity on the HeLa cell line. Recipes N031 (Albizia chinensis (Osbeck) Merr., Cassia fistula L., and Dargea volubilis Benth. ex Hook., etc.), N039 (Nymphoides indica L., Peltophorum pterocarpum (DC.), and Polyalthia debilis Finet et Gagnep, etc.) and N040 (Nymphoides indica L. Kuntze, Sida rhombifolia L., and Xylinbaria minutiflora Pierre., etc.) gave anti-proliferative activity higher than that of the standard anti-cancer drug cisplatin by 1.25, 1.29 and 30.18 times, respectively. A positive relationship was observed between the anti-proliferative activity and both the MMP-2 inhibition activity and the metal chelating inhibition activity, but no relationship with apoptosis induction, free radical scavenging activity or lipid peroxidation inhibition activity.
Phytochemicals found in these extracts were alkaloids, flavonoids, tannins and xanthones, but not anthraquinones and carotenoids. Recipe N040 exhibited the highest anti-proliferative and MMP-2 inhibition activities on the HeLa cancer cell line, at 30- and three-fold those of cisplatin, respectively (p<0.05), while recipe N031 gave the highest caspase-3 activity (1.29-fold over the control) (p<0.05). This study demonstrates that recipe N040, selected from the MANOSROI II database, appears to be a good candidate with high potential for further development as an anti-cancer agent. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
An incremental database access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, Nicholas; Sellis, Timos
1994-01-01
We investigated a number of design and performance issues of interoperable database management systems (DBMSs). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMSs, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMSs. Experiments and simulations were then run to compare its performance with standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values, as opposed to static ones computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions of the selectivities, we use curve-fitting techniques, such as least squares and splines, to regress on these values.
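The query-feedback idea for selectivity estimation can be sketched as follows: after each range query, the optimizer records the predicate constant and the selectivity actually observed, then fits a curve to those pairs by least squares; the fitted curve answers future selectivity estimates. The synthetic data, the underlying x² distribution and the polynomial degree below are illustrative assumptions, not the paper's actual regression setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden cumulative value distribution of attribute "a" on [0, 1];
# the optimizer never sees this function directly.
true_cdf = lambda x: x ** 2

# Feedback gathered from past executions of queries "SELECT ... WHERE a < c":
# each pair is (constant c, observed fraction of qualifying tuples).
xs = rng.uniform(0, 1, 50)
observed = true_cdf(xs)

# Least-squares polynomial fit to the feedback pairs.
coeffs = np.polyfit(xs, observed, deg=3)

# Estimated selectivity of a future predicate "a < 0.5".
est = float(np.polyval(coeffs, 0.5))
print(round(est, 2))  # → 0.25 (the true value is 0.5**2)
```

Because the fit is refreshed as new feedback arrives, the estimates track the actual data distribution instead of relying on statistics computed off-line; splines would replace `polyfit` in the same role.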
GIS Application System Design Applied to Information Monitoring
NASA Astrophysics Data System (ADS)
Qun, Zhou; Yujin, Yuan; Yuena, Kang
A natural environment information management system involves on-line instrument monitoring, data communications, database establishment, information management software development, and so on. Its core lies in collecting effective and reliable environmental information, increasing the utilization rate and sharing degree of environmental information through advanced information technology, and providing as far as possible a timely and scientific foundation for environmental monitoring and management. This thesis adopts C# plug-in application development and uses a complete set of embedded GIS component and tool libraries provided by GIS Engine to build the core of the plug-in GIS application framework, namely the design and implementation of the framework host program and each functional plug-in, as well as the design and implementation of the plug-in GIS application framework platform. The thesis takes advantage of the dynamic plug-in loading and configuration technique, quickly establishes a GIS application through visualized component collaborative modeling, and realizes GIS application integration. The developed platform is applicable to any integration task related to GIS applications (on the ESRI platform) and can serve as a base development platform for GIS application development.
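The host-plus-plug-ins framework described above can be illustrated with a minimal registry: plug-ins register themselves with the host, and the host activates them by name at run time. The thesis used C# and GIS Engine components; this Python sketch (class and method names are invented for illustration) shows only the dynamic-registration pattern itself.

```python
class Plugin:
    """Base contract every functional plug-in implements."""
    name = "base"

    def activate(self, host):
        raise NotImplementedError

class HostFramework:
    """Minimal plug-in host: plug-ins register, the host activates on demand."""

    def __init__(self):
        self._registry = {}

    def register(self, plugin_cls):
        # Store the plug-in class under its declared name; usable as a
        # class decorator so registration happens at load time.
        self._registry[plugin_cls.name] = plugin_cls
        return plugin_cls

    def activate(self, name):
        # Instantiate lazily, so unused plug-ins cost nothing at startup.
        plugin = self._registry[name]()
        return plugin.activate(self)

host = HostFramework()

@host.register
class LayerLoader(Plugin):
    """Hypothetical functional plug-in, e.g. one that loads map layers."""
    name = "layer_loader"

    def activate(self, host):
        return "map layers loaded"
```

Because the host only depends on the `Plugin` contract, new GIS functionality can be dropped in by configuration without recompiling the framework, which is the point of the plug-in architecture.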
Classification of Palmprint Using Principal Line
NASA Astrophysics Data System (ADS)
Prasad, Munaga V. N. K.; Kumar, M. K. Pramod; Sharma, Kuldeep
In this paper, a new classification scheme for palmprints is proposed. The palmprint is one of the reliable physiological characteristics that can be used to authenticate an individual. Palmprint classification provides an important indexing mechanism for a very large palmprint database. Here, the palmprint database is initially categorized into two groups, a right-hand group and a left-hand group. Each group is then further classified based on the distance traveled by a principal line, i.e., the heart line. During pre-processing, a rectangular Region of Interest (ROI) in which only the heart line is present is extracted. The ROI is then divided into 6 regions, and the palmprint is classified according to the regions that the heart line traverses. Consequently, our scheme allows 64 categories for each group, for a total of 128 possible categories. The technique proposed in this paper uses only 15 such categories, and it classifies not more than 20.96% of the images into any single category.
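The 6-region scheme above yields 2^6 = 64 codes per hand group, which is where the 128-category figure comes from. A minimal sketch of such an encoding (the function name and bitmask convention are assumptions for illustration, not the paper's exact indexing rule):

```python
def heartline_category(regions_traversed, num_regions=6):
    """Encode the set of ROI regions crossed by the heart line as a bitmask.

    regions_traversed: iterable of region indices in 0..num_regions-1.
    With 6 regions there are 2**6 = 64 possible codes per hand group,
    i.e. 128 categories over both hand groups.
    """
    code = 0
    for r in regions_traversed:
        if not 0 <= r < num_regions:
            raise ValueError(f"region index {r} out of range")
        code |= 1 << r   # set one bit per region the line passes through
    return code
```

At query time, only database entries sharing the probe's category code need to be matched, which is the indexing benefit the paper describes.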
The HITRAN2016 molecular spectroscopic database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, I. E.; Rothman, L. S.; Hill, C.
This paper describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation comprises five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 200 additional significant molecules have been added to the database.
A Public-Use, Full-Screen Interface for SPIRES Databases.
ERIC Educational Resources Information Center
Kriz, Harry M.
This paper describes the techniques for implementing a full-screen, custom SPIRES interface for a public-use library database. The database-independent protocol that controls the system is described in detail. Source code for an entire working application using this interface is included. The protocol, with less than 170 lines of procedural code,…
NASA Astrophysics Data System (ADS)
Tóbiás, Roland; Furtenbacher, Tibor; Császár, Attila G.
2017-12-01
Cycle bases of graph theory are introduced for the analysis of transition data deposited in line-by-line rovibronic spectroscopic databases. The principal advantage of using cycle bases is that outlier transitions (almost always present in spectroscopic databases built from experimental data originating from many different sources) can be detected and identified straightforwardly and automatically. The data available for six water isotopologues, H
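The cycle-closure idea can be sketched with standard-library code: treat energy levels as graph nodes and measured transitions as edges carrying wavenumbers; around any closed cycle the signed wavenumbers must sum to approximately zero. A spanning tree assigns each level a relative energy, and every non-tree edge then tests one independent cycle of a cycle basis. This is a toy illustration, not the authors' algorithm: it reports the closing edge and the cycle's closure defect, and pinpointing which transition on a flagged cycle is actually wrong requires further analysis.

```python
from collections import deque

def find_outlier_cycles(transitions, tol=1e-3):
    """Flag basis cycles whose closure defect exceeds tol (sketch).

    transitions: list of (lower_level, upper_level, wavenumber).
    Returns (closing_edge_index, defect) pairs for inconsistent cycles.
    """
    graph = {}
    for i, (lo, up, wn) in enumerate(transitions):
        graph.setdefault(lo, []).append((up, wn, i))
        graph.setdefault(up, []).append((lo, -wn, i))

    energy, tree_edges, outliers = {}, set(), []
    for root in graph:                      # handle disconnected components
        if root in energy:
            continue
        energy[root] = 0.0
        queue = deque([root])
        while queue:                        # BFS spanning tree: assign
            node = queue.popleft()          # relative energies to levels
            for nbr, delta, i in graph[node]:
                if nbr not in energy:
                    energy[nbr] = energy[node] + delta
                    tree_edges.add(i)
                    queue.append(nbr)

    # Each non-tree edge closes one independent cycle; check its defect.
    for i, (lo, up, wn) in enumerate(transitions):
        if i not in tree_edges:
            defect = (energy[up] - energy[lo]) - wn
            if abs(defect) > tol:
                outliers.append((i, defect))
    return outliers
```

Because the check is purely structural, it scales to databases assembled from many independent experimental sources, which is exactly the situation the paper addresses.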
Evaluation of personal digital assistant drug information databases for the managed care pharmacist.
Lowry, Colleen M; Kostka-Rokosz, Maria D; McCloskey, William W
2003-01-01
Personal digital assistants (PDAs) are becoming a necessity for practicing pharmacists. They offer a time-saving and convenient way to obtain current drug information. Several software companies now offer general drug information databases for use on handheld computers. PDAs priced less than 200 US dollars often have limited memory capacity; therefore, the user must choose from a growing list of general drug information database options in order to maximize utility without exceeding memory capacity. This paper reviews the attributes of available general drug information software databases for the PDA. It provides information on the content, advantages, limitations, pricing, memory requirements, and accessibility of drug information software databases. Ten drug information databases were subjectively analyzed and evaluated based on information from the product's Web site, vendor Web sites, and our experience. Some of these databases have attractive auxiliary features such as kinetics calculators, disease references, drug-drug and drug-herb interaction tools, and clinical guidelines, which may make them more useful to the PDA user. Not all drug information databases are equal with regard to content, author credentials, frequency of updates, and memory requirements. The user must therefore evaluate databases for completeness, currency, and cost effectiveness before purchase. In addition, consideration should be given to the ease of use and flexibility of individual programs.
HITRAN2016: Part I. Line lists for H_2O, CO_2, O_3, N_2O, CO, CH_4, and O_2
NASA Astrophysics Data System (ADS)
Gordon, Iouli E.; Rothman, Laurence S.; Tan, Yan; Kochanov, Roman V.; Hill, Christian
2017-06-01
The HITRAN2016 database is now officially released. A plethora of experimental and theoretical molecular spectroscopic data were collected, evaluated, and vetted before compiling the new edition of the database. The database is now distributed through the dynamic user interface HITRANonline (available at www.hitran.org), which offers many flexible options for browsing and downloading the data. In addition, the HITRAN Application Programming Interface (HAPI) offers modern ways to download the HITRAN data and use it to carry out sophisticated calculations. The line-by-line lists for almost all of the 47 HITRAN molecules were updated in comparison with the previous compilation (HITRAN2012). Some of the most important updates for major atmospheric absorbers, such as H_2O, CO_2, O_3, N_2O, CO, CH_4, and O_2, will be presented in this talk, while the trace gases will be presented in the next talk by Y. Tan. The HITRAN2016 database now provides alternative line-shape representations for a number of molecules, as well as broadening by gases dominant in planetary atmospheres. In addition, substantial extension and improvement of the cross-section data is featured, which will be described in a dedicated talk by R. V. Kochanov. The new edition of the database is a substantial step forward for improving retrievals of planetary atmospheric constituents in comparison with previous editions, while offering new ways of working with the data. The HITRAN database is supported by the NASA AURA and PDART program grants NNX14AI55G and NNX16AG51G. I. E. Gordon, L. S. Rothman, C. Hill, R. V. Kochanov, Y. Tan, et al. The HITRAN2016 Molecular Spectroscopic Database. JQSRT 2017; submitted. Many spectroscopists and atmospheric scientists worldwide have contributed data to the database or provided invaluable validations. C. Hill, I. E. Gordon, R. V. Kochanov, L. Barrett, J. S. Wilzewski, L. S. Rothman, JQSRT 177 (2016) 4-14. R. V. Kochanov, I. E. Gordon, L. S. Rothman, P. Wcislo, C. Hill, J. S. Wilzewski, JQSRT 177 (2016) 15-30. L. S. Rothman, I. E. Gordon, et al. The HITRAN2012 Molecular Spectroscopic Database. JQSRT 113 (2013) 4-50.
The Meteoritical Bulletin, No. 103
NASA Astrophysics Data System (ADS)
Ruzicka, Alex; Grossman, Jeffrey; Bouvier, Audrey; Agee, Carl B.
2017-05-01
Meteoritical Bulletin 103 contains 2582 meteorites including 10 falls (Ardón, Demsa, Jinju, Križevci, Kuresoi, Novato, Tinajdad, Tirhert, Vicência, Wolcott), with 2174 ordinary chondrites, 130 HED achondrites, 113 carbonaceous chondrites, 41 ureilites, 27 lunar meteorites, 24 enstatite chondrites, 21 iron meteorites, 15 primitive achondrites, 11 mesosiderites, 10 Martian meteorites, 6 Rumuruti chondrites, 5 ungrouped achondrites, 2 enstatite achondrites, 1 relict meteorite, 1 pallasite, and 1 angrite, and with 1511 from Antarctica, 588 from Africa, 361 from Asia, 86 from South America, 28 from North America, and 6 from Europe. Note: 1 meteorite from Russia was counted as European. The complete contents of this bulletin (244 pages) are available online. Information about approved meteorites can be obtained from the Meteoritical Bulletin Database (MBD) available online at
Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung
2017-07-08
Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods.
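The key design point above, feeding the physical size of each denomination into the classifier alongside learned image features, can be shown with a toy forward pass. The paper's CNN is far larger and trained on real line-sensor images; this numpy sketch (all layer shapes, normalization constants, and names are illustrative assumptions) only demonstrates appending size information before the final linear layer.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive single-channel 'valid' 2-D convolution (no framework assumed)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def size_aware_logits(image, width_mm, height_mm, kernel, w, b):
    """Toy forward pass: conv -> ReLU -> global pooling, then append the
    physical banknote size before the final linear layer."""
    fmap = np.maximum(conv2d_valid(image, kernel), 0.0)        # ReLU
    features = np.array([fmap.mean(), fmap.max(),              # crude pooling
                         width_mm / 200.0, height_mm / 200.0]) # + size info
    return w @ features + b

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()
```

Since denominations of the same currency often differ in physical size even when their designs are similar, the size features disambiguate classes the image features alone might confuse.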
15 Years of Chandra Observations of Capella
NASA Astrophysics Data System (ADS)
Kashyap, Vinay
2014-11-01
Capella is the strongest coronal line source accessible to Chandra. It has been cumulatively observed with gratings for over 1.2 Ms. The accumulated spectrum represents astrophysical ground truth for atomic physics calculations that is unprecedented in quality. We analyze co-added spectra to generate a comprehensive list of detectable lines and their locations, spanning two orders of magnitude in photon energy. We compare the locations of identifiable lines with locations from the atomic databases ATOMDB and Chianti and characterize the uncertainties in the databases. The full line lists and comparisons will be made available at the Dataverse at http://dx.doi.org/10.7910/DVN/27084. This work is supported by Chandra grant AR0-11001X and NASA Contract NAS8-03060 to the Chandra X-Ray Center.
H2-, He-, and CO2-line broadening coefficients and pressure shifts for the HITRAN database
NASA Astrophysics Data System (ADS)
Wilzewski, Jonas; Gordon, Iouli E.; Rothman, Laurence S.
2014-06-01
To increase the potential of the HITRAN database in astronomy, experimental and theoretical line broadening coefficients and line shifts of molecules of planetary interest broadened by H2, He, and CO2 have been assembled from available peer-reviewed sources. Since H2 and He are major constituents in the atmospheres of gas giants, and CO2 predominates in the atmospheres of some rocky planets with volcanic activity, these spectroscopic data are important for studying planetary atmospheres. The collected data were used to create semi-empirical models for complete data sets from the microwave to the UV part of the spectrum of the studied molecules. The presented work will help identify the need for further investigations of the broadening and shifting of spectral lines.
National launch strategy vehicle data management system
NASA Technical Reports Server (NTRS)
Cordes, David
1990-01-01
The national launch strategy vehicle data management system (NLS/VDMS) was developed as part of the 1990 NASA Summer Faculty Fellowship Program. The system was developed under the guidance of the Engineering Systems Branch of the Information Systems Office, and is intended for use within the Program Development Branch PD34. The NLS/VDMS is an on-line database system that permits the tracking of various launch vehicle configurations within the program development office. The system is designed to permit the definition of new launch vehicles, as well as the ability to display and edit existing launch vehicles. Vehicles can be grouped in logical architectures within the system. Reports generated from this package include vehicle data sheets, architecture data sheets, and vehicle flight rate reports. The topics covered include: (1) system overview; (2) initial system development; (3) the SuperCard hypermedia authoring system; (4) the Oracle database; and (5) system evaluation.
Data management of a multilaboratory field program using distributed processing. [PRECP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tichler, J.L.
The PRECP program is a multilaboratory research effort conducted by the US Department of Energy as a part of the National Acid Precipitation Assessment Program (NAPAP). The primary objective of PRECP is to provide essential information for the quantitative description of chemical wet deposition as a function of air pollution loadings, geographic location, and atmospheric processing. The program is broken into four closely interrelated sectors: Diagnostic Modeling; Field Measurements; Laboratory Measurements; and Climatological Evaluation. Data management tasks are: compile databases of the data collected in field studies; verify the contents of data sets; make data available to program participants either on-line or by means of computer tapes; perform requested analyses, graphical displays, and data aggregations; provide an index of what data is available; and provide documentation for field programs both as part of the computer database and as data reports.
Registered File Support for Critical Operations Files at (Space Infrared Telescope Facility) SIRTF
NASA Technical Reports Server (NTRS)
Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.
2001-01-01
The SIRTF Science Center's (SSC) Science Operations System (SOS) has to contend with nearly one hundred critical operations files via comprehensive file management services. The management is accomplished via the registered file system (otherwise known as TFS) which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and meta-data in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command line client implementing this API has been developed as a client tool. This paper describes the architecture, current implementation, but more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.
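The registered-file pattern described above (controlled transfer plus a registration database holding metadata for every transaction) can be sketched with the standard library. This is a toy illustration of the idea, not the actual TFS API: the class, table, and method names are all invented, and SQLite stands in for the real file registration database.

```python
import hashlib
import sqlite3
import time

class FileRegistry:
    """Toy registered-file repository: every stored file is logged with its
    metadata and checksum so later fetches can be verified."""

    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS registry ("
            " name TEXT PRIMARY KEY, sha256 TEXT, size INTEGER,"
            " registered_at REAL, payload BLOB)"
        )

    def register(self, name, payload: bytes):
        # Record the transaction: checksum, size, timestamp, and content.
        digest = hashlib.sha256(payload).hexdigest()
        self.db.execute(
            "INSERT OR REPLACE INTO registry VALUES (?, ?, ?, ?, ?)",
            (name, digest, len(payload), time.time(), payload),
        )
        self.db.commit()
        return digest

    def fetch(self, name):
        # Retrieval re-verifies the checksum, giving reliable transfer.
        row = self.db.execute(
            "SELECT sha256, payload FROM registry WHERE name = ?", (name,)
        ).fetchone()
        if row is None:
            raise KeyError(name)
        digest, payload = row
        if hashlib.sha256(payload).hexdigest() != digest:
            raise IOError(f"checksum mismatch for {name}")
        return payload
```

Registering every transaction in one place is what lets such a system offer both an application API and a command-line client over the same repository.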
Recently active traces of the Bartlett Springs Fault, California: a digital database
Lienkaemper, James J.
2010-01-01
The purpose of this map is to show the location of and evidence for recent movement on active fault traces within the Bartlett Springs Fault Zone, California. The location and recency of the mapped traces are primarily based on geomorphic expression of the fault as interpreted from large-scale aerial photography. In a few places, evidence of fault creep and offset Holocene strata in trenches and natural exposures has confirmed the activity of some of these traces. This publication is formatted both as a digital database for use within a geographic information system (GIS) and, for broader public access, as map images that may be browsed online or downloaded as a summary map. The report text describes the types of scientific observations used to make the map, gives references pertaining to the fault and the evidence of faulting, and provides guidance for the use and limitations of the map.
ExoMol line lists - XXIX. The rotation-vibration spectrum of methyl chloride up to 1200 K
NASA Astrophysics Data System (ADS)
Owens, A.; Yachmenev, A.; Thiel, W.; Fateev, A.; Tennyson, J.; Yurchenko, S. N.
2018-06-01
Comprehensive rotation-vibration line lists are presented for the two main isotopologues of methyl chloride, 12CH335Cl and 12CH337Cl. The line lists, OYT-35 and OYT-37, are suitable for temperatures up to T = 1200 K and consider transitions with rotational excitation up to J = 85 in the wavenumber range 0-6400 cm-1 (wavelengths λ > 1.56 μm). Over 166 billion transitions between 10.2 million energy levels have been calculated variationally for each line list using a new empirically refined potential energy surface, determined by refining to 739 experimentally derived energy levels up to J = 5, and an established ab initio dipole moment surface. The OYT line lists show excellent agreement with newly measured high-temperature infrared absorption cross-sections, reproducing both strong and weak intensity features across the spectrum. The line lists are available from the ExoMol database and the CDS database.
Activity computer program for calculating ion irradiation activation
NASA Astrophysics Data System (ADS)
Palmer, Ben; Connolly, Brian; Read, Mark
2017-07-01
A computer program, Activity, was developed to predict the activity and gamma lines of materials irradiated with an ion beam. It uses the TENDL (Koning and Rochman, 2012) [1] proton reaction cross section database, the Stopping and Range of Ions in Matter (SRIM) (Biersack et al., 2010) code, a Nuclear Data Services (NDS) radioactive decay database (Sonzogni, 2006) [2] and an ENDF gamma decay database (Herman and Chadwick, 2006) [3]. An extended version of Bateman's equation is used to calculate the activity at time t, and this equation is solved analytically, with the option to also solve by numeric inverse Laplace Transform as a failsafe. The program outputs the expected activity and gamma lines of the activated material.
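The analytic route mentioned above can be made concrete. For a linear decay chain 1 -> 2 -> ... -> n with distinct decay constants, Bateman's equation gives the atom count of the last member in closed form, and the activity is that count times its decay constant. The function names below are illustrative, not those of the Activity program itself.

```python
import math

def bateman_number(t, lambdas, n0=1.0):
    """Number of atoms of the last nuclide of a decay chain at time t.

    Analytic Bateman solution for a chain 1 -> 2 -> ... -> n with distinct
    decay constants `lambdas` and n0 atoms of nuclide 1 at t = 0:
        N_n(t) = n0 * (prod_{i<n} lambda_i)
                    * sum_i exp(-lambda_i t) / prod_{j != i} (lambda_j - lambda_i)
    """
    n = len(lambdas)
    prefactor = n0
    for lam in lambdas[:-1]:
        prefactor *= lam
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= (lambdas[j] - lambdas[i])
        total += math.exp(-lambdas[i] * t) / denom
    return prefactor * total

def activity(t, lambdas, n0=1.0):
    """Activity A_n(t) = lambda_n * N_n(t) of the chain's last member."""
    return lambdas[-1] * bateman_number(t, lambdas, n0)
```

The closed form breaks down when two decay constants coincide (the denominator vanishes), which is one reason a numeric inverse Laplace transform is a sensible failsafe, as the paper notes.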
Water line positions in the 782-840 nm region
NASA Astrophysics Data System (ADS)
Hu, S.-M.; Chen, B.; Tan, Y.; Wang, J.; Cheng, C.-F.; Liu, A.-W.
2015-10-01
A set of water transitions in the 782-840 nm region, including 38 H216O lines, 12 HD16O lines, and 30 D216O lines, was recorded with a cavity ring-down spectrometer calibrated using precise atomic lines. Absolute frequencies of the lines were determined with an accuracy of about 5 MHz. Systematic shifts were found in the line positions given in the HITRAN database and in the upper energy levels given in recent MARVEL studies.
A Study of Memory Effects in a Chess Database.
Schaigorodsky, Ana L; Perotti, Juan I; Billoni, Orlando V
2016-01-01
A series of recent works studying a database of chronologically sorted chess games (containing 1.4 million games played by humans between 1998 and 2007) have shown that the popularity distribution of chess game-lines follows a Zipf's law, and that time series inferred from the sequences of those game-lines exhibit long-range memory effects. The presence of Zipf's law together with long-range memory effects has been observed in several systems; however, the two phenomena had always been studied separately up to now. In this work, by making use of a variant of the Yule-Simon preferential growth model introduced by Cattuto et al., we provide an explanation for the simultaneous emergence of Zipf's law and long-range memory effects in a chess database. We find that Cattuto's Model (CM) is able to reproduce both Zipf's law and the long-range correlations, including the size-dependent scaling of the Hurst exponent for the corresponding time series. CM explains the simultaneous emergence of these two phenomena via a preferential growth dynamics, including a memory kernel, in the popularity distribution of chess game-lines. This mechanism results in an aging process in the choice of chess game-lines as the database grows. Moreover, we find burstiness in the activity of subsets of the most active players, although the aggregated activity of the pool of players displays inter-event times without burstiness. We show that CM is not able to produce time series with bursty behavior, providing evidence that burstiness is not required to explain the long-range correlation effects in the chess database. Our results provide further evidence favoring the hypothesis that long-range correlation effects are a consequence of the aging of game-lines and not of burstiness, and shed light on the mechanism that operates in the simultaneous emergence of Zipf's law and long-range correlations in a community of chess players.
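The growth dynamics described above can be sketched as a simulation: with some probability a brand-new game-line enters the database; otherwise an earlier position in the history is re-sampled with a weight that decays with its age (the memory kernel), and that position's game-line is repeated. This is a simplified, hypothetical rendering of a Cattuto-style process, with all parameter values chosen arbitrarily, not the paper's calibrated model.

```python
import math
import random

def chess_line_growth(steps, p_new=0.05, tau=200.0, seed=7):
    """Yule-Simon-like growth with an exponential memory kernel (sketch).

    Returns the sequence of game-line ids added to the database. Popular
    lines get repeated (preferential growth), but the kernel biases the
    choice toward recent history, so old lines age out of circulation.
    """
    rng = random.Random(seed)
    history = [0]          # sequence of chosen game-line ids
    next_id = 1
    for t in range(1, steps):
        if rng.random() < p_new:
            history.append(next_id)   # a brand-new game-line appears
            next_id += 1
        else:
            # Re-sample an earlier position, weighting recent positions
            # exponentially more, then copy the game-line played there.
            weights = [math.exp(-(t - s) / tau) for s in range(t)]
            pos = rng.choices(range(t), weights=weights)[0]
            history.append(history[pos])
    return history
```

Repetition concentrated on a few lines produces the heavy-tailed popularity distribution, while the kernel-induced aging is the ingredient the paper links to the long-range correlations.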
[A web-based integrated clinical database for laryngeal cancer].
E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu
2014-08-01
The aim was to establish an integrated database for laryngeal cancer and to provide an information platform for laryngeal cancer in clinical and fundamental research; the database also had to meet the needs of clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards, Apache+PHP+MySQL technology, the characteristics of the laryngeal cancer specialty, and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. The database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system utilizes clinical data standards and exchanges information with the existing electronic medical records system to avoid information silos. Furthermore, the database forms were integrated with the characteristics of the laryngeal cancer specialty and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma has comprehensive specialist information, strong expandability, and high technical feasibility, and it conforms to the clinical characteristics of the laryngeal cancer specialty. Using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and swiftly over the Internet.
Networking of Bibliographical Information: Lessons learned for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Genova, Françoise; Egret, Daniel
Networking of bibliographic information is particularly remarkable in astronomy. On-line journals, the ADS bibliographic database, SIMBAD, and NED are everyday tools for research, and provide easy navigation from one resource to another. Tables are published online, in close collaboration with data centers. Recent developments include the links between observatory archives and the ADS, as well as the large-scale prototyping of object links between Astronomy and Astrophysics and SIMBAD, following those implemented a few years ago with New Astronomy and the International Bulletin of Variable Stars. This networking has been made possible by close collaboration between the ADS, data centers such as the CDS and NED, and the journals, with this partnership now being extended to observatory archives. Simple, de facto exchange standards, like the bibcode used to refer to a published paper, have been the key to building links and exchanging data. This partnership, in which practitioners from different disciplines agree to link their resources and to work together to define useful and usable standards, has produced a revolution in scientists' practice. It is an excellent model for the Virtual Observatory projects.
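The bibcode standard mentioned above is a fixed-width 19-character identifier (year, journal abbreviation, volume, qualifier, page, first-author initial, with dot padding), which is what makes cross-resource linking mechanical. A minimal parser sketch, splitting the fields by position (the function name is an assumption for illustration):

```python
def parse_bibcode(bibcode):
    """Split a 19-character ADS bibcode (YYYYJJJJJVVVVMPPPPA) into fields.

    Fields are dot-padded; this sketch strips the padding. The layout is:
    year (4), journal (5), volume (4), qualifier (1), page (4), author (1).
    """
    if len(bibcode) != 19:
        raise ValueError("a bibcode is exactly 19 characters long")
    return {
        "year": bibcode[0:4],
        "journal": bibcode[4:9].strip("."),
        "volume": bibcode[9:13].strip("."),
        "qualifier": bibcode[13].strip("."),
        "page": bibcode[14:18].strip("."),
        "author_initial": bibcode[18],
    }
```

Because every participating service can compute and compare bibcodes independently, a journal, an archive, and a data center can link the same paper without any central coordination, which is the point made in the abstract.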
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-30
...] FDA's Public Database of Products With Orphan-Drug Designation: Replacing Non-Informative Code Names... replaced non- informative code names with descriptive identifiers on its public database of products that... on our public database with non-informative code names. After careful consideration of this matter...
Real-time line-width measurements: a new feature for reticle inspection systems
NASA Astrophysics Data System (ADS)
Eran, Yair; Greenberg, Gad; Joseph, Amnon; Lustig, Cornel; Mizrahi, Eyal
1997-07-01
The significance of line width control in mask production has grown as defect sizes have shrunk. There are two conventional methods for controlling line width dimensions employed in the manufacturing of masks for sub-micron devices: critical dimension (CD) measurement and the detection of edge defects. Achieving reliable and accurate control of line width errors is one of the most challenging tasks in mask production. Neither of the two methods cited above guarantees the detection of line width errors with good sensitivity over the whole mask area. This stems from the fact that CD measurement provides only statistical data on the mask features, whereas the edge defect detection method checks defects on each edge by itself and does not supply information on the combined result of error detection on two adjacent edges. For example, a combination of a small edge defect and a CD non-uniformity that are both within the allowed tolerance may yield a significant line width error which will not be detected using the conventional methods (see figure 1). A new approach for the detection of line width errors that overcomes this difficulty is presented. Based on this approach, a new sensitive line width error detector was developed and added to Orbot's RT-8000 die-to-database reticle inspection system. This innovative detector operates continuously during the mask inspection process and scans the entire area of the reticle for line width errors. The detection is based on a comparison of line width measurements taken on both the design database and the scanned image of the reticle. In section 2, the motivation for developing this new detector is presented. The section covers an analysis of various defect types which are difficult to detect using conventional edge detection methods or, alternatively, CD measurements.
In section 3, the basic concept of the new approach is introduced, together with a description of the new detector and its characteristics. In section 4, the calibration process carried out to achieve reliable and repeatable line width measurements is presented. A description of the experiments conducted to evaluate the sensitivity of the new detector is given in section 5, followed by a report of the results of this evaluation. The conclusions are presented in section 6.
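The die-to-database width comparison can be sketched with run-length measurements on binarized scan rows: measure each feature's width on the design data and on the scanned image, and flag features whose combined error exceeds tolerance, even when each individual edge deviation would pass an edge-only check. This toy sketch assumes aligned rows with matching feature counts, which a real inspection system must establish first.

```python
def row_line_widths(row):
    """Widths of dark runs (1s) in one scan row of a binarized image."""
    widths, run = [], 0
    for pixel in row:
        if pixel:
            run += 1
        elif run:
            widths.append(run)
            run = 0
    if run:
        widths.append(run)
    return widths

def line_width_errors(db_row, scan_row, tol=2):
    """Compare widths measured on the design database row and the scanned
    reticle row; flag features whose total width error exceeds `tol`,
    catching cases where two in-tolerance edge deviations combine."""
    db_w, scan_w = row_line_widths(db_row), row_line_widths(scan_row)
    errors = []
    for i, (w_ref, w_meas) in enumerate(zip(db_w, scan_w)):
        if abs(w_ref - w_meas) > tol:
            errors.append((i, w_ref, w_meas))
    return errors
```

In the example below, each edge of the scanned line is off by only one or two pixels, yet the combined width error of three pixels exceeds the tolerance, which is exactly the failure mode the paper's figure 1 illustrates.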
Region 9 2010 Census Web Service
This web service displays data collected during the 2010 U.S. Census. The data are organized into layers representing Tract, Block, and Block Group visualizations. Geography The TIGER Line Files are feature classes and related database files that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts, however, each TIGER Line File is designed to stand alone as an independent data set, or they can be combined to cover the entire nation. Census tracts are small, relatively permanent statistical subdivisions of a county or equivalent entity, and were defined by local participants as part of the 2010 Census Participant Statistical Areas Program. The Census Bureau delineated the census tracts in situations where no local participant existed or where all the potential participants declined to participate. The primary purpose of census tracts is to provide a stable set of geographic units for the presentation of census data and comparison back to previous decennial censuses. Census tracts generally have a population size between 1,200 and 8,000 people, with an optimum size of 4,000 people. When first delineated, census tracts were designed to be homogeneous with respect to population characteristics, economic status
2016 American Indian/Alaska Native/Native Hawaiian Areas (AIANNH) Michigan, Minnesota, and Wisconsin
The TIGER/Line shapefiles and related database files (.dbf) are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line shapefile is designed to stand alone as an independent data set, or the files can be combined to cover the entire nation. The American Indian/Alaska Native/Native Hawaiian (AIANNH) Areas Shapefile includes the following legal entities: federally recognized American Indian reservations and off-reservation trust land areas, state-recognized American Indian reservations, and Hawaiian home lands (HHLs). The statistical entities included are Alaska Native village statistical areas (ANVSAs), Oklahoma tribal statistical areas (OTSAs), tribal designated statistical areas (TDSAs), and state designated tribal statistical areas (SDTSAs). Joint use areas, which are also included in this shapefile, refer to areas that are administered jointly and/or claimed by two or more American Indian tribes. The Census Bureau designates both legal and statistical joint use areas as unique geographic entities for the purpose of presenting statistical data. The Bureau of Indian Affairs (BIA) within the U.S. Department of the Interior (DOI) provides the list of federally recognized tribes and only provides legal boundary infor
Russell-Rose, Tony; Chamberlain, Jon
2017-10-02
Healthcare information professionals play a key role in closing the knowledge gap between medical research and clinical practice. Their work involves meticulous searching of literature databases using complex search strategies that can consist of hundreds of keywords, operators, and ontology terms. This process is prone to error and can lead to inefficiency and bias if performed incorrectly. The aim of this study was to investigate the search behavior of healthcare information professionals, uncovering their needs, goals, and requirements for information retrieval systems. A survey was distributed to healthcare information professionals via professional association email discussion lists. It investigated the search tasks they undertake, their techniques for search strategy formulation, their approaches to evaluating search results, and their preferred functionality for searching library-style databases. The popular literature search system PubMed was then evaluated to determine the extent to which their needs were met. The 107 respondents indicated that their information retrieval process relied on the use of complex, repeatable, and transparent search strategies. On average it took 60 minutes to formulate a search strategy, with a search task taking 4 hours and consisting of 15 strategy lines. Respondents reviewed a median of 175 results per search task, far more than they would ideally like (100). The most desired features of a search system were merging search queries and combining search results. Healthcare information professionals routinely address some of the most challenging information retrieval problems of any profession. However, their needs are not fully supported by current literature search systems and there is demand for improved functionality, in particular regarding the development and management of search strategies. ©Tony Russell-Rose, Jon Chamberlain. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.10.2017.
Open Clients for Distributed Databases
NASA Astrophysics Data System (ADS)
Chayes, D. N.; Arko, R. A.
2001-12-01
We are actively developing a collection of open-source example clients that demonstrate use of our "back end" data management infrastructure. The data management system is reported elsewhere at this meeting (Arko and Chayes: A Scaleable Database Infrastructure). In addition to their primary goal of being examples for others to build upon, some of these clients may have limited utility in themselves. More information about the clients and the data infrastructure is available online at http://data.ldeo.columbia.edu. The examples to be demonstrated include several web-based clients: those developed for the Community Review System of the Digital Library for Earth System Education, a real-time watchstander's logbook, an offline interface to logbook entries, and a simple client to search multibeam metadata. These are Internet-enabled, generally web-based front ends that support searches against one or more relational databases using industry-standard SQL queries. In addition to the web-based clients, simple SQL searches from within Excel and similar applications will be demonstrated. By defining, documenting, and publishing a clear interface to the fully searchable databases, it becomes relatively easy to construct client interfaces that are optimized for specific applications, in comparison to building a monolithic data and user interface system.
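As a rough illustration of the kind of thin client described, the sketch below issues an industry-standard SQL query against a small in-memory metadata table. The schema, column names, and cruise entries are hypothetical stand-ins, not the actual LDEO database layout.

```python
import sqlite3

# Build a tiny stand-in for a multibeam metadata table (invented schema/rows).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE multibeam_meta (
    cruise_id TEXT, instrument TEXT, year INTEGER, region TEXT)""")
conn.executemany(
    "INSERT INTO multibeam_meta VALUES (?, ?, ?, ?)",
    [("EW9602", "Hydrosweep", 1996, "East Pacific Rise"),
     ("KN145",  "SeaBeam",    1997, "Mid-Atlantic Ridge"),
     ("EW0004", "Hydrosweep", 2000, "East Pacific Rise")])

# The same parameterized query a web front end (or Excel via ODBC) could issue.
rows = conn.execute(
    "SELECT cruise_id, year FROM multibeam_meta "
    "WHERE region = ? AND year >= ? ORDER BY year",
    ("East Pacific Rise", 1996)).fetchall()
print(rows)  # [('EW9602', 1996), ('EW0004', 2000)]
conn.close()
```

Because every client speaks the same published SQL interface, a spreadsheet, a web form, and a custom tool can all reuse the identical query with no back-end changes.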
Inoue, J
1991-12-01
When occupational health personnel, especially occupational physicians, search bibliographies, they usually have to do so by themselves. If a library is not available because of the location of their workplace, they may have to rely on online databases. Although there are many commercial databases in the world, people who seldom use them will have problems with online searching, such as the user-computer interface, keyword selection, and so on. The present study surveyed, by questionnaire, what the best bibliographic searching system in the field of occupational medicine would be, using DIALOG OnDisc MEDLINE as a commercial database. To ascertain the problems involved in determining the best bibliographic searching system, a prototype bibliographic searching system was constructed and then evaluated. Finally, solutions to the problems were discussed. These led to the following conclusions for constructing the best bibliographic searching system at the present time: 1) a concept of micro-to-mainframe links (MML) is needed for the computer hardware network; 2) multi-lingual font standards and an excellent common user-computer interface are needed for the computer software; 3) a short course on database management systems, and support for personal information processing of retrieved data, are necessary for the practical use of the system.
The human-induced pluripotent stem cell initiative-data resources for cellular genetics.
Streeter, Ian; Harrison, Peter W; Faulconbridge, Adam; Flicek, Paul; Parkinson, Helen; Clarke, Laura
2017-01-04
The Human Induced Pluripotent Stem Cell Initiative (HipSci) is establishing a large catalogue of human iPSC lines, arguably the most well-characterized collection to date. The HipSci portal enables researchers to choose the right cell line for their experiment, and makes HipSci's rich catalogue of assay data easy to discover and reuse. Each cell line has genomic, transcriptomic, proteomic and cellular phenotyping data. Data are deposited in the appropriate EMBL-EBI archives, including the European Nucleotide Archive (ENA), European Genome-phenome Archive (EGA), ArrayExpress and PRoteomics IDEntifications (PRIDE) databases. The project will make 500 cell lines from healthy individuals, and lines from 150 patients with rare genetic diseases, available through the European Collection of Authenticated Cell Cultures (ECACC). As of August 2016, 238 cell lines are available for purchase. Project data are presented through the HipSci data portal (http://www.hipsci.org/lines) and are downloadable from the associated FTP site (ftp://ftp.hipsci.ebi.ac.uk/vol1/ftp). The data portal presents a summary matrix of the HipSci cell lines, showing available data types. Each line has its own page containing descriptive metadata, quality information, and links to archived assay data. Analysis results are also available in a Track Hub, allowing visualization in the context of public genomic annotations (http://www.hipsci.org/data/trackhubs). © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Binette, L.; Ortiz, P.; Joguet, B.; Rola, C.
1998-11-01
A widely accessible data bank (available through Netscape), consisting of all (or most) of the emission lines reported in the literature, is being built. It will comprise objects as diverse as HII regions, PN, AGN, and HHO. One of its uses will be to define/refine existing diagnostic emission-line diagrams.
Dirks, Wilhelm Gerhard; Faehnrich, Silke; Estella, Isabelle Annick Janine; Drexler, Hans Guenter
2005-01-01
Cell lines have wide applications as model systems in the medical and pharmaceutical industry. Much drug and chemical testing is now first carried out exhaustively on in vitro systems, reducing the need for complicated and invasive animal experiments. The basis for any research, development or production program involving cell lines is the choice of an authentic cell line. Microsatellites in the human genome that harbour short tandem repeat (STR) DNA markers allow individualisation of established cell lines at the DNA level. Fluorescence polymerase chain reaction amplification of eight highly polymorphic microsatellite STR loci plus gender determination was found to be the best tool to screen the uniqueness of DNA profiles in a fingerprint database. Our results demonstrate that cross-contamination and misidentification remain chronic problems in the use of human continuous cell lines. The combination of rapidly generated DNA types based on single-locus STR and their authentication or individualisation by screening the fingerprint database constitutes a highly reliable and robust method for the identification and verification of cell lines.
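The database screening step can be illustrated with the shared-allele percent-match statistic commonly used for STR-based cell line authentication, where a score above roughly 0.8 is usually taken to indicate a common origin. The loci and allele values below are arbitrary examples, not real reference profiles.

```python
# Percent-match of two STR profiles, each given as {locus: set of alleles}.
# This mirrors the widely used shared-allele statistic for cell line
# authentication; profiles here are invented for illustration.

def str_match(query: dict, reference: dict) -> float:
    """2 * (shared alleles) / (total alleles in both profiles),
    over the loci the two profiles have in common."""
    shared = sum(len(query[locus] & reference[locus])
                 for locus in query if locus in reference)
    total = sum(len(query[locus]) + len(reference[locus])
                for locus in query if locus in reference)
    return 2 * shared / total if total else 0.0

reference = {"D5S818": {11, 12}, "TH01": {7}, "TPOX": {8, 12}}
sample    = {"D5S818": {11, 12}, "TH01": {7}, "TPOX": {8}}

score = str_match(sample, reference)
print(round(score, 2))  # 0.89, above the ~0.8 identity threshold often used
```

Screening a new profile against every entry in the fingerprint database with this score is what flags cross-contaminated or misidentified lines.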
EarthChem: International Collaboration for Solid Earth Geochemistry in Geoinformatics
NASA Astrophysics Data System (ADS)
Walker, J. D.; Lehnert, K. A.; Hofmann, A. W.; Sarbas, B.; Carlson, R. W.
2005-12-01
The current on-line information systems for igneous rock geochemistry (PetDB, GEOROC, and NAVDAT) convincingly demonstrate the value of rigorous scientific data management of geochemical data for research and education. The next generation of hypothesis formulation and testing can be vastly facilitated by enhancing these electronic resources through integration of available datasets; expansion of data coverage in location, time, and tectonic setting; timely updates with new data; and intuitive, efficient access and data analysis tools for the broader geosciences community. PetDB, GEOROC, and NAVDAT have therefore formed the EarthChem consortium (www.earthchem.org) as an international collaborative effort to address these needs and serve the larger earth science community by facilitating the compilation, communication, serving, and visualization of geochemical data, and their integration with other geological, geochronological, geophysical, and geodetic information to maximize their scientific application. We report on the status of and future plans for EarthChem activities. EarthChem's development plan includes: (1) expanding the functionality of the web portal to become a 'one-stop shop for geochemical data' with search capability across databases, standardized and integrated data output, generally applicable tools for data quality assessment, and data analysis/visualization including plotting methods and an information-rich map interface; and (2) expanding data holdings by generating new datasets as identified and prioritized through community outreach, and facilitating data contributions from the community by offering web-based data submission capability and technical assistance for the design, implementation, and population of new databases and their integration with all EarthChem data holdings. Such federated databases and datasets will retain their identity within the EarthChem system.
We also plan on working with publishers to ease the assimilation of geochemical data into the EarthChem database. As a community resource, EarthChem will address user concerns and respond to broad scientific and educational needs. EarthChem will hold yearly workshops, town hall meetings, and/or exhibits at major meetings. The group has established a two-tier committee structure to help ease the communication and coordination of database and IT issues between existing data management projects, and to receive feedback and support from individuals and groups from the larger geosciences community.
Integrated Space Asset Management Database and Modeling
NASA Technical Reports Server (NTRS)
MacLeod, Todd; Gagliano, Larry; Percy, Thomas; Mason, Shane
2015-01-01
Effective Space Asset Management is one key to addressing the ever-growing issue of space congestion. It is imperative that agencies around the world have access to data regarding the numerous active assets and pieces of space junk currently tracked in orbit around the Earth. At the center of this issue is the effective management of many types of data related to orbiting objects. As the population of tracked objects grows, so too should the data management structure used to catalog technical specifications, orbital information, and metadata related to those populations. Marshall Space Flight Center's Space Asset Management Database (SAM-D) was implemented in order to effectively catalog a broad set of data related to known objects in space by ingesting information from a variety of databases and processing those data into useful technical information. Using the universal NORAD number as a unique identifier, SAM-D processes two-line element data into orbital characteristics and cross-references this technical data with metadata related to functional status, country of ownership, and application category. SAM-D began as an Excel spreadsheet and was later upgraded to an Access database. While SAM-D performs its task very well, it is limited by its current platform and is not available outside of the local user base. Further, while modeling and simulation can be powerful tools to exploit the information contained in SAM-D, the current system does not allow proper integration options for combining the data with both legacy and new M&S tools. This paper provides a summary of SAM-D development efforts to date and outlines a proposed data management infrastructure that extends SAM-D to support the larger data sets to be generated.
A service-oriented architecture model using an information sharing platform named SIMON will allow it to easily expand to incorporate new capabilities, including advanced analytics, M&S tools, fusion techniques and user interface for visualizations. In addition, tight control of information sharing policy will increase confidence in the system, which would encourage industry partners to provide commercial data. Combined with the integration of new and legacy M&S tools, a SIMON-based architecture will provide a robust environment that can be extended and expanded indefinitely.
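The core transformation mentioned above, keying on the NORAD number and turning two-line element (TLE) data into orbital characteristics, can be sketched as follows. The fixed column positions come from the standard TLE format; the sample element set is the widely circulated ISS example. This is an illustrative parser, not SAM-D's actual ingestion code.

```python
import math

MU_EARTH = 398600.4418  # km^3/s^2, Earth's gravitational parameter

def parse_tle(line1: str, line2: str) -> dict:
    """Extract the NORAD catalog number and basic orbital characteristics
    from a two-line element set (standard fixed-column format)."""
    n_rev_day = float(line2[52:63])              # mean motion, rev/day
    n_rad_s = n_rev_day * 2 * math.pi / 86400.0  # mean motion, rad/s
    return {
        "norad_id": int(line1[2:7]),             # the unique identifier
        "inclination_deg": float(line2[8:16]),
        "eccentricity": float("0." + line2[26:33]),  # decimal point implied
        "mean_motion_rev_day": n_rev_day,
        # Kepler's third law: a = (mu / n^2)^(1/3)
        "semi_major_axis_km": (MU_EARTH / n_rad_s**2) ** (1.0 / 3.0),
    }

# A classic ISS element set, often used as a format example:
l1 = "1 25544U 98067A   08264.51782528 -.00002182  00000-0 -11606-4 0  2927"
l2 = "2 25544  51.6416 247.4627 0006703 130.5360 325.0288 15.72125391563537"
orbit = parse_tle(l1, l2)
# Catalog number plus a low-Earth-orbit semi-major axis (~6700-6800 km):
print(orbit["norad_id"], round(orbit["semi_major_axis_km"], 1))
```

Cross-referencing then amounts to joining these derived characteristics with status and ownership metadata on `norad_id`.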
Modeling Constellation Virtual Missions Using the Vdot(Trademark) Process Management Tool
NASA Technical Reports Server (NTRS)
Hardy, Roger; ONeil, Daniel; Sturken, Ian; Nix, Michael; Yanez, Damian
2011-01-01
The authors have identified a software tool suite that will support NASA's Virtual Mission (VM) effort. This is accomplished by transforming a spreadsheet database of mission events, task inputs and outputs, timelines, and organizations into process visualization tools and a Vdot process management model that includes embedded analysis software as well as requirements and information related to data manipulation and transfer. This paper describes the progress to date, the application of the Virtual Mission not only to Constellation but to other architectures, and the pertinence to other aerospace applications. Vdot's intuitive visual interface brings VMs to life by turning static, paper-based processes into active, electronic processes that can be deployed, executed, managed, verified, and continuously improved. A VM can be executed using a computer-based, human-in-the-loop, real-time format, under the direction and control of the NASA VM Manager. Engineers in the various disciplines will not have to be Vdot-proficient but rather can fill out on-line, Excel-type databases with the mission information discussed above. The authors' tool suite converts this database into several process visualization tools for review and into Microsoft Project, which can be imported directly into Vdot. Many tools can be embedded directly into Vdot, and when the necessary data/information is received from a preceding task, the analysis can be initiated automatically. Other NASA analysis tools are too complex for this process, but Vdot automatically notifies the tool user that the data has been received and analysis can begin. The VM can be simulated from end to end using the authors' tool suite.
The planned approach for the Vdot-based process simulation is to generate the process model from a database. Among the advantages of this semi-automated approach are that participants can be geographically remote and that, after the process models are refined via the human-in-the-loop simulation, the system can evolve into a process management server for the actual process.
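The spreadsheet-to-process transformation described above can be sketched minimally: rows of tasks with named inputs and outputs become a dependency graph, and a topological order of that graph is a valid execution sequence. Task and product names below are invented; this is not the authors' actual tool suite.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Rows as they might come from an Excel-type task database (invented names).
rows = [
    {"task": "TrajectoryAnalysis", "inputs": ["MissionEvents"], "outputs": ["DeltaV"]},
    {"task": "DefineMissionEvents", "inputs": [], "outputs": ["MissionEvents"]},
    {"task": "MassBudget", "inputs": ["DeltaV"], "outputs": ["DryMass"]},
]

# A task depends on whichever task produces each of its inputs.
producer = {out: r["task"] for r in rows for out in r["outputs"]}
graph = {r["task"]: {producer[i] for i in r["inputs"] if i in producer}
         for r in rows}

order = list(TopologicalSorter(graph).static_order())
print(order)  # ['DefineMissionEvents', 'TrajectoryAnalysis', 'MassBudget']
```

The same graph is what lets a process manager fire a downstream analysis automatically as soon as its inputs arrive from the preceding task.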
EVITHERM: The Virtual Institute of Thermal Metrology
NASA Astrophysics Data System (ADS)
Redgrove, J.; Filtz, J.-R.; Fischer, J.; Le Parlouër, P.; Mathot, V.; Nesvadba, P.; Pavese, F.
2007-12-01
Evitherm is a web-based thermal resource centre, resulting from a 3-year project partly funded by the EU's GROWTH programme (2002-05). Evitherm links together the widely distributed centres of excellence (NMIs, research and teaching institutes, consultants, etc.) and others concerned with thermal measurements and technology to provide a focal point for information exchange and knowledge transfer between all these organizations and industry. To facilitate the quick and easy flow of thermal knowledge to users of thermal technologies, evitherm has a website (www.evitherm.org) through which it disseminates information and provides access to resources such as training, property data, measurements and experts. Among the resources available from the website are (1) thermal property data: access to some of the world's leading databases; (2) expertise: evitherm has a database of consultants, an Advice line, a public Forum and a unique Consultancy Brokering Service whereby users are linked to the expert they need to solve their thermal industrial problems; (3) industry resources: thermal information for particular industry sectors; (4) services: information directories on thermal property measurement, training, equipment supply, reference materials, etc.; (5) literature: links to books, papers, standards, etc.; (6) events: conferences, meetings, seminars, organizations and networks, and what's happening. A user only has to register (for free) to gain access to all the information on the evitherm website. Much of the thermal property data can be accessed for free, and in a few cases affordable rates have been negotiated for access to some leading databases, such as CINDAS, THERSYST and NELFOOD.
This article illustrates the aims and structure of the evitherm Society, how it is directed, and how it serves the thermal community worldwide in its need for quick and easy access to the resources needed to help ensure a well resourced industrial work force and clean and efficient thermal processes.
[Establishment of database with standard 3D tooth crowns based on 3DS MAX].
Cheng, Xiaosheng; An, Tao; Liao, Wenhe; Dai, Ning; Yu, Qing; Lu, Peijun
2009-08-01
The database with standard 3D tooth crowns has laid the groundwork for dental CAD/CAM systems. In this paper, we design standard tooth crowns in 3DS MAX 9.0 and successfully create a database of these models. First, key lines are collected from standard tooth pictures. We then use 3DS MAX 9.0 to design the digital tooth model based on these lines, referring throughout the design process to the standard plaster tooth model. Tests show that the standard tooth models designed with this method are accurate and adaptable; furthermore, it is easy to perform operations on the models such as deformation and translation. This method provides a new idea for building a database of standard 3D tooth crowns and a basis for dental CAD/CAM systems.
The study on the real estate integrated cadastral information system based on shared plots
NASA Astrophysics Data System (ADS)
Xu, Huan; Liu, Nan; Liu, Renyi; Huang, Jie
2008-10-01
Solving the problem of land property rights on shared parcels demands the integration of real estate information into cadastral management; a new cadastral feature named the Shared Plot is therefore introduced. After defining the shared plot clearly and describing its characteristics in detail, we focus on the impact of the new feature on the traditional cadastral model, which is composed of three cadastral features (parcels, parcel boundary lines, and parcel boundary points), and put forward a four-feature cadastral model that amends the three-feature one. The new model has been applied to the development of a new generation of real estate integrated cadastral information system, which incorporates real estate attribute and spatial information into the cadastral database in addition to cadastral information. The system has been used in several cities of Zhejiang Province and has received a favorable response, which verifies the feasibility and effectiveness of the model to some extent.
Problem formulation, metrics, open government, and on-line collaboration
NASA Astrophysics Data System (ADS)
Ziegler, C. R.; Schofield, K.; Young, S.; Shaw, D.
2010-12-01
Problem formulation leading to effective environmental management, including synthesis and application of science by government agencies, may benefit from collaborative on-line environments. This is illustrated by two interconnected projects: 1) literature-based evidence tools that support causal assessment and problem formulation, and 2) development of output, outcome, and sustainability metrics for tracking environmental conditions. Specifically, peer-production mechanisms allow for global contribution to science-based causal evidence databases, and subsequent crowd-sourced development of causal networks supported by that evidence. In turn, science-based causal networks may inform problem formulation and selection of metrics or indicators to track environmental condition (or problem status). Selecting and developing metrics in a collaborative on-line environment may improve stakeholder buy-in, the explicit relevance of metrics to planning, and the ability to approach problem apportionment or accountability, and to define success or sustainability. Challenges include contribution governance, data-sharing incentives, linking on-line interfaces to data service providers, and the intersection of environmental science and social science. Degree of framework access and confidentiality may vary by group and/or individual, but may ultimately be geared at demonstrating connections between science and decision making and supporting a culture of open government, by fostering transparency, public engagement, and collaboration.
Comparison of Online Agricultural Information Services.
ERIC Educational Resources Information Center
Reneau, Fred; Patterson, Richard
1984-01-01
Outlines major online agricultural information services--agricultural databases, databases with agricultural services, educational databases in agriculture--noting services provided, access to the database, and costs. Benefits of online agricultural database sources (availability of agricultural marketing, weather, commodity prices, management…
WMC Database Evaluation. Case Study Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palounek, Andrea P. T
The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.
MIPS: analysis and annotation of proteins from whole genomes
Mewes, H. W.; Amid, C.; Arnold, R.; Frishman, D.; Güldener, U.; Mannhaupt, G.; Münsterkötter, M.; Pagel, P.; Strack, N.; Stümpflen, V.; Warfsmann, J.; Ruepp, A.
2004-01-01
The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein–protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de). PMID:14681354
The HITRAN2016 molecular spectroscopic database
NASA Astrophysics Data System (ADS)
Gordon, I. E.; Rothman, L. S.; Hill, C.; Kochanov, R. V.; Tan, Y.; Bernath, P. F.; Birk, M.; Boudon, V.; Campargue, A.; Chance, K. V.; Drouin, B. J.; Flaud, J.-M.; Gamache, R. R.; Hodges, J. T.; Jacquemart, D.; Perevalov, V. I.; Perrin, A.; Shine, K. P.; Smith, M.-A. H.; Tennyson, J.; Toon, G. C.; Tran, H.; Tyuterev, V. G.; Barbe, A.; Császár, A. G.; Devi, V. M.; Furtenbacher, T.; Harrison, J. J.; Hartmann, J.-M.; Jolly, A.; Johnson, T. J.; Karman, T.; Kleiner, I.; Kyuberis, A. A.; Loos, J.; Lyulin, O. M.; Massie, S. T.; Mikhailenko, S. N.; Moazzen-Ahmadi, N.; Müller, H. S. P.; Naumenko, O. V.; Nikitin, A. V.; Polyansky, O. L.; Rey, M.; Rotger, M.; Sharpe, S. W.; Sung, K.; Starikova, E.; Tashkun, S. A.; Auwera, J. Vander; Wagner, G.; Wilzewski, J.; Wcisło, P.; Yu, S.; Zak, E. J.
2017-12-01
This paper describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation is composed of five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 300 additional molecules important in different areas of atmospheric science have been added to the database. The compilation can be accessed through www.hitran.org. Most of the HITRAN data have now been cast into an underlying relational database structure that offers many advantages over the long-standing sequential text-based structure. The new structure empowers the user in many ways. It enables the incorporation of an extended set of fundamental parameters per transition, sophisticated line-shape formalisms, easy user-defined output formats, and very convenient searching, filtering, and plotting of data. A powerful application programming interface making use of structured query language (SQL) features for higher-level applications of HITRAN is also provided.
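To illustrate what the relational structure buys the user, the sketch below runs an SQL filter over a simplified stand-in for a line-by-line transition table. The table layout and the line parameters are invented and do not reproduce the actual HITRAN schema; the column names `nu` and `sw` merely echo HITRAN's conventional wavenumber and line-intensity labels.

```python
import sqlite3

# A toy relational line list (invented values, simplified schema).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transition (
    molecule TEXT, isotopologue INTEGER, nu REAL, sw REAL)""")
conn.executemany("INSERT INTO transition VALUES (?, ?, ?, ?)", [
    ("H2O", 1, 1554.353, 1.2e-21),
    ("H2O", 1, 2105.880, 3.4e-24),
    ("CO2", 1, 2349.143, 9.8e-19),
])

# The kind of searching and filtering a relational back end makes trivial:
# strong H2O lines in a wavenumber window, sorted by intensity.
rows = conn.execute(
    "SELECT nu, sw FROM transition "
    "WHERE molecule = ? AND nu BETWEEN ? AND ? AND sw > ? "
    "ORDER BY sw DESC",
    ("H2O", 1000.0, 2000.0, 1e-23)).fetchall()
print(rows)  # [(1554.353, 1.2e-21)]
conn.close()
```

With the long-standing sequential text format, the same selection would require parsing fixed-width records line by line; here it is a single declarative query.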
ERIC Educational Resources Information Center
Blackwell, Michael Lind
This study evaluates the "Education Resources Information Center" (ERIC), "Library and Information Science Abstracts" (LISA), and "Library Literature" (LL) databases, determining how long the databases take to enter records (indexing delay), how much duplication of effort exists among the three databases (indexing…
Analysis and fit of stellar spectra using a mega-database of CMFGEN models
NASA Astrophysics Data System (ADS)
Fierro-Santillán, Celia; Zsargó, Janos; Klapp, Jaime; Díaz-Azuara, Santiago Alfredo; Arrieta, Anabel; Arias, Lorena
2017-11-01
We present a tool for the analysis and fitting of stellar spectra using a mega-database of 15,000 atmosphere models for OB stars. We have developed software tools that find the model that best fits an observed spectrum by comparing equivalent widths and line ratios in the observed spectrum with all models in the database. We use the Hα, Hβ, Hγ, and Hδ lines as criteria for stellar gravity, and the ratios He II λ4541/He I λ4471, He II λ4200/(He I+He II λ4026), He II λ4541/He I λ4387, and He II λ4200/He I λ4144 as criteria for Teff.
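The grid-search idea behind such a fit can be sketched in a few lines: score every model against the observed equivalent widths and line ratios, and keep the minimum. The line set, model labels, and numerical values below are invented for illustration, not taken from the CMFGEN grid:

```python
# Hypothetical sketch of grid-based spectral fitting: pick the model
# whose predicted equivalent widths / line ratios best match the
# observation. All values below are made up for illustration.

def chi2(observed, model):
    """Sum of squared differences over the measured line set."""
    return sum((observed[line] - model[line]) ** 2 for line in observed)

def best_fit(observed, model_grid):
    """Return the (label, parameters) pair with minimum chi2."""
    return min(model_grid, key=lambda m: chi2(observed, m[1]))

# Equivalent widths (arbitrary units) for gravity-sensitive Balmer
# lines plus a Teff-sensitive He II / He I ratio.
observed = {"Hgamma": 2.1, "Hdelta": 1.8, "HeII4541/HeI4471": 0.45}

model_grid = [
    ("Teff=30kK logg=3.5", {"Hgamma": 2.6, "Hdelta": 2.2, "HeII4541/HeI4471": 0.20}),
    ("Teff=35kK logg=3.8", {"Hgamma": 2.2, "Hdelta": 1.9, "HeII4541/HeI4471": 0.42}),
    ("Teff=40kK logg=4.0", {"Hgamma": 1.7, "Hdelta": 1.5, "HeII4541/HeI4471": 0.75}),
]

name, _ = best_fit(observed, model_grid)
print(name)  # the 35 kK model minimizes chi2 here
```

A real fit would weight each line by its measurement uncertainty; the unweighted sum keeps the sketch short.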
Flight deck party line issues : an Aviation Safety Reporting System analysis
DOT National Transportation Integrated Search
1995-06-01
This document describes an analysis of the Aviation Safety Reporting System (ASRS) database with regard to human factors aspects concerning the implementation of Data Link into the flight deck. The ASRS database contains thousands of reports co...
An Integrated Molecular Database on Indian Insects.
Pratheepa, Maria; Venkatesan, Thiruvengadam; Gracy, Gandhi; Jalali, Sushil Kumar; Rangheswaran, Rajagopal; Antony, Jomin Cruz; Rai, Anil
2018-01-01
MOlecular Database on Indian Insects (MODII) is an online database linking several resources: Insect Pest Info, the Insect Barcode Information System (IBIn), insect whole-genome sequences, other genomic resources of the National Bureau of Agricultural Insect Resources (NBAIR), whole-genome sequencing of honey bee viruses, an insecticide resistance gene database, and genomic tools. The database was developed with a holistic approach to collecting phenomic and genomic information on agriculturally important insects. This insect resource database is freely available online at http://cib.res.in/.
Nascimento, Leandro Costa; Salazar, Marcela Mendes; Lepikson-Neto, Jorge; Camargo, Eduardo Leal Oliveira; Parreiras, Lucas Salera; Carazzolle, Marcelo Falsarella
2017-01-01
Tree species of the genus Eucalyptus are the most valuable and widely planted hardwoods in the world. Given the economic importance of Eucalyptus trees, much effort has been made towards the generation of specimens with superior forestry properties that can deliver high-quality feedstocks, customized to the industry's needs for both cellulosic (paper) and lignocellulosic biomass production. In line with these efforts, large sets of molecular data have been generated by several scientific groups, providing invaluable information that can be applied in the development of improved specimens. To fully explore the potential of the available datasets, a public database providing integrated access to genomic and transcriptomic data from Eucalyptus is needed. EUCANEXT is a database that analyses and integrates publicly available Eucalyptus molecular data, such as the E. grandis genome assembly and predicted genes, ESTs from several species, and digital gene expression from 26 RNA-Seq libraries. The database has been implemented on a Fedora Linux machine running MySQL and Apache, with Perl CGI used for the web interfaces. EUCANEXT provides a user-friendly web interface for easy access and analysis of publicly available molecular data from Eucalyptus species. This integrated database allows complex searches by gene name, keyword or sequence similarity and is publicly accessible at http://www.lge.ibi.unicamp.br/eucalyptusdb. Through EUCANEXT, users can perform complex analyses to identify genes related to traits of interest using RNA-Seq libraries and tools for differential expression analysis. Moreover, the entire bioinformatics pipeline described here, including the database schema and Perl scripts, is readily available and can be applied to any genomic and transcriptomic project, regardless of the organism. Database URL: http://www.lge.ibi.unicamp.br/eucalyptusdb PMID:29220468
chemalot and chemalot_knime: Command line programs as workflow tools for drug discovery.
Lee, Man-Ling; Aliagas, Ignacio; Feng, Jianwen A; Gabriel, Thomas; O'Donnell, T J; Sellers, Benjamin D; Wiswedel, Bernd; Gobbi, Alberto
2017-06-12
Analyzing files containing chemical information is at the core of cheminformatics. Each analysis may require a unique workflow. This paper describes the chemalot and chemalot_knime open source packages. Chemalot is a set of command line programs with a wide range of functionalities for cheminformatics. The chemalot_knime package allows command line programs that read and write SD files from stdin and to stdout to be wrapped into KNIME nodes. The combination of chemalot and chemalot_knime not only facilitates the compilation and maintenance of sequences of command line programs but also allows KNIME workflows to take advantage of the compute power of a LINUX cluster. Use of the command line programs is demonstrated in three different workflow examples: (1) a workflow to create a data file with project-relevant data for structure-activity or property analysis and other types of investigation, (2) the creation of a quantitative structure-property-relationship model using the command line programs via KNIME nodes, and (3) the analysis of strain energy in small-molecule ligand conformations from the Protein Data Bank database. The chemalot and chemalot_knime packages provide lightweight and powerful tools for many tasks in cheminformatics. They are easily integrated with other open source and commercial command line tools and can be combined to build new and even more powerful tools. The chemalot_knime package facilitates the generation and maintenance of user-defined command line workflows, taking advantage of the graphical design capabilities in KNIME. Graphical abstract: an example KNIME workflow with chemalot nodes and the corresponding command line pipe.
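The stdin-to-stdout SD-file contract that makes such programs composable into pipes can be sketched as follows. In the SD (structure-data) file format, each record ends with a line containing `$$$$`; the toy filter below keeps records whose title line contains a keyword. It illustrates the pattern only and is not a chemalot program:

```python
import io

# Sketch of a composable SD-file filter: read records from an input
# stream, write the matching ones to an output stream. Each SD record
# is terminated by a "$$$$" line.

def sd_records(stream):
    """Yield SD records (as strings) from a stream, one per molecule."""
    buf = []
    for line in stream:
        buf.append(line)
        if line.strip() == "$$$$":
            yield "".join(buf)
            buf = []

def filter_sd(instream, outstream, keyword):
    """Pass through only records whose title line contains keyword."""
    for rec in sd_records(instream):
        if keyword in rec.splitlines()[0]:  # first line is the title
            outstream.write(rec)

# Two minimal (structureless) records, standing in for stdin/stdout.
src = io.StringIO("mol-A\n\n\n$$$$\nmol-B\n\n\n$$$$\n")
out = io.StringIO()
filter_sd(src, out, "B")
print(out.getvalue().splitlines()[0])  # prints "mol-B"
```

Because input and output obey the same record contract, any number of such filters can be chained with ordinary shell pipes, which is exactly what makes them wrappable as workflow nodes.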
Architecture and Functionality of the Advanced Life Support On-Line Project Information System
NASA Technical Reports Server (NTRS)
Hogan, John A.; Levri, Julie A.; Morrow, Rich; Cavazzoni, Jim; Rodriguez, Luis F.; Riano, Rebecca; Whitaker, Dawn R.
2004-01-01
An ongoing effort is underway at NASA Ames Research Center (ARC) to develop an On-line Project Information System (OPIS) for the Advanced Life Support (ALS) Program. The objective of this three-year project is to develop, test, revise and deploy OPIS to enhance the quality of decision-making metrics and attainment of Program goals through improved knowledge sharing. OPIS will centrally locate detailed project information solicited from investigators on an annual basis and make it readily accessible to the ALS Community via a Web-accessible interface. The data will be stored in a relational database (created in MySQL) located on a secure server at NASA ARC. OPIS will simultaneously serve several functions, including acting as a research and technology development (R&TD) status information hub that can potentially serve as the primary annual reporting mechanism for ALS-funded projects. Using OPIS, ALS managers and element leads will be able to make informed R&TD investment decisions, and analysts will be able to perform accurate systems evaluations. Additionally, the range and specificity of the information solicited will serve to educate technology developers about programmatic needs. OPIS will collect comprehensive information from all ALS projects as well as highly detailed information specific to technology development in each ALS area (Waste, Water, Air, Biomass, Food, Thermal, Controls and Systems Analysis). Because the scope of needed information can vary dramatically between areas, element-specific technology information is being compiled with the aid of multiple specialized working groups. This paper presents the current development status in terms of the architecture and functionality of OPIS. Possible implementation approaches for OPIS are also discussed.
Fils, D.; Cervato, C.; Reed, J.; Diver, P.; Tang, X.; Bohling, G.; Greer, D.
2009-01-01
CHRONOS's purpose is to transform Earth history research by seamlessly integrating stratigraphic databases and tools into a virtual on-line stratigraphic record. In this paper, we describe the various components of CHRONOS's distributed data system, including the encoding of semantic and descriptive data into a service-based architecture. We give examples of how we have integrated well-tested resources from the open-source and geoinformatics communities, such as the GeoSciML schema and the Simple Knowledge Organization System (SKOS), into the service-oriented architecture to encode timescale and phylogenetic synonymy data. We also describe ongoing efforts to use geospatially enhanced data syndication and to embed semantic information informally, directly in the XHTML Document Object Model (DOM), which allows machine-discoverable descriptive data such as licensing and citation information to be incorporated directly into data sets retrieved by users. © 2008 Elsevier Ltd. All rights reserved.
Spectroscopic data for an astronomy database
NASA Technical Reports Server (NTRS)
Parkinson, W. H.; Smith, Peter L.
1995-01-01
Very few of the atomic and molecular data used in analyses of astronomical spectra are currently available in World Wide Web (WWW) databases that are searchable with hypertext browsers. We have begun to rectify this situation by making extensive atomic data files available with simple search procedures. We have also established links to other on-line atomic and molecular databases. All can be accessed from our database homepage with URL: http://cfa-www.harvard.edu/amp/data/amdata.html.
Database on unstable rock slopes in Norway
NASA Astrophysics Data System (ADS)
Oppikofer, Thierry; Nordahl, Bo; Bunkholt, Halvor; Nicolaisen, Magnus; Hermanns, Reginald L.; Böhme, Martina; Yugsi Molina, Freddy X.
2014-05-01
Several large rockslides have occurred in historic times in Norway, causing many casualties. Most of these casualties were due to displacement waves triggered by a rock avalanche, affecting the coastlines of entire lakes and fjords. The Geological Survey of Norway performs systematic mapping of unstable rock slopes in Norway and has so far detected more than 230 unstable slopes with significant postglacial deformation. This systematic mapping aims to detect future rock avalanches before they occur. The registered unstable rock slopes are stored in a database developed and maintained by the Geological Survey of Norway. The main aims of this database are (1) to serve as a national archive for unstable rock slopes in Norway; (2) to support data collection and storage during field mapping; (3) to provide decision-makers with hazard zones and other necessary information on unstable rock slopes for land-use planning and mitigation; and (4) to inform the public through an online map service. The database is organized hierarchically, with a main point for each unstable rock slope to which several feature classes and tables are linked. This main point feature class includes general attributes of the unstable rock slope, such as site name, general and geological descriptions, executed works, recommendations, technical parameters (volume, lithology, mechanism and others), displacement rates, possible consequences, and hazard and risk classification. Feature classes and tables linked to the main feature class include the run-out area, the area affected by secondary effects, the hazard and risk classification, subareas and scenarios of an unstable rock slope, field observation points, displacement measurement stations, URL links for further documentation, and references. The database on unstable rock slopes in Norway will be publicly consultable through the online map service on www.skrednett.no in 2014.
Only publicly relevant parts of the database will be shown in the online map service (e.g. processed results of displacement measurements), while more detailed data (e.g. raw data of displacement measurements) will not. Factsheets with key information on unstable rock slopes can be automatically generated and downloaded for each site, a municipality, a county or the entire country. Selected data will also be downloadable free of charge. The database will continue to evolve in the coming years as the systematic mapping conducted by the Geological Survey of Norway progresses and as available techniques and tools evolve.
The implementation of non-Voigt line profiles in the HITRAN database: H2 case study
NASA Astrophysics Data System (ADS)
Wcisło, P.; Gordon, I. E.; Tran, H.; Tan, Y.; Hu, S.-M.; Campargue, A.; Kassi, S.; Romanini, D.; Hill, C.; Kochanov, R. V.; Rothman, L. S.
2016-07-01
Experimental capabilities of molecular spectroscopy and its applications nowadays require a sub-percent or even sub-per mille accuracy of the representation of the shapes of molecular transitions. This implies the necessity of using more advanced line-shape models which are characterized by many more parameters than a simple Voigt profile. It is a great challenge for modern molecular spectral databases to store and maintain the extended set of line-shape parameters as well as their temperature dependences. It is even more challenging to reliably retrieve these parameters from experimental spectra over a large range of pressures and temperatures. In this paper we address this problem starting from the case of the H2 molecule for which the non-Voigt line-shape effects are exceptionally pronounced. For this purpose we reanalyzed the experimental data reported in the literature. In particular, we performed detailed line-shape analysis of high-quality spectra obtained with cavity-enhanced techniques. We also report the first high-quality cavity-enhanced measurement of the H2 fundamental vibrational mode. We develop a correction to the Hartmann-Tran profile (HTP) which adjusts the HTP to the particular model of the velocity-changing collisions. This allows the measured spectra to be better represented over a wide range of pressures. The problem of storing the HTP parameters in the HITRAN database together with their temperature dependences is also discussed.
The Protein Information Resource: an integrated public resource of functional annotation of proteins
Wu, Cathy H.; Huang, Hongzhan; Arminski, Leslie; Castro-Alvear, Jorge; Chen, Yongxing; Hu, Zhang-Zhi; Ledley, Robert S.; Lewis, Kali C.; Mewes, Hans-Werner; Orcutt, Bruce C.; Suzek, Baris E.; Tsugita, Akira; Vinayaka, C. R.; Yeh, Lai-Su L.; Zhang, Jian; Barker, Winona C.
2002-01-01
The Protein Information Resource (PIR) serves as an integrated public resource of functional annotation of protein data to support genomic/proteomic research and scientific discovery. The PIR, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the PIR-International Protein Sequence Database (PSD), the major annotated protein sequence database in the public domain, containing about 250 000 proteins. To improve protein annotation and the coverage of experimentally validated data, a bibliography submission system is developed for scientists to submit, categorize and retrieve literature information. Comprehensive protein information is available from iProClass, which includes family classification at the superfamily, domain and motif levels, structural and functional features of proteins, as well as cross-references to over 40 biological databases. To provide timely and comprehensive protein data with source attribution, we have introduced a non-redundant reference protein database, PIR-NREF. The database consists of about 800 000 proteins collected from PIR-PSD, SWISS-PROT, TrEMBL, GenPept, RefSeq and PDB, with composite protein names and literature data. To promote database interoperability, we provide XML data distribution and open database schema, and adopt common ontologies. The PIR web site (http://pir.georgetown.edu/) features data mining and sequence analysis tools for information retrieval and functional identification of proteins based on both sequence and annotation information. The PIR databases and other files are also available by FTP (ftp://nbrfa.georgetown.edu/pir_databases). PMID:11752247
Kim, Changkug; Park, Dongsuk; Seol, Youngjoo; Hahn, Jangho
2011-01-01
The National Agricultural Biotechnology Information Center (NABIC) constructed an agricultural biology-based infrastructure and developed a Web based relational database for agricultural plants with biotechnology information. The NABIC has concentrated on functional genomics of major agricultural plants, building an integrated biotechnology database for agro-biotech information that focuses on genomics of major agricultural resources. This genome database provides annotated genome information from 1,039,823 records mapped to rice, Arabidopsis, and Chinese cabbage.
Numeric Databases in Chemical Thermodynamics at the National Institute of Standards and Technology
Chase, Malcolm W.
1989-01-01
During the past year the activities of the Chemical Thermodynamics Data Center and the JANAF Thermochemical Tables project have been combined to obtain an extensive collection of thermodynamic information for many chemical species, including the elements. Currently available are extensive bibliographic collections and data files of heat capacity, enthalpy, vapor pressure, phase transitions, etc. Future plans related to materials science are to improve the metallic oxide temperature-dependent tabulations, upgrade the recommended values periodically, and keep the bibliographic citations and the thermochemical data current. The recommended thermochemical information is maintained on-line and tied to the calculational routines within the data center. Recent thermodynamic evaluations on the elements and oxides will be discussed, as well as studies in related activities at NIST. PMID:28053395
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Library Services Alliance is a unique multi-type library consortium committed to resource sharing. As a voluntary association of university and governmental laboratory libraries supporting scientific research, the Alliance has become a leader in New Mexico in using cooperative ventures to cost-effectively expand resources supporting their scientific and technical communities. During 1994, the Alliance continued to build on its strategic planning foundation to enhance access to research information for the scientific and technical communities. Significant progress was made in facilitating easy access to the on-line catalogs of member libraries via connections through the Internet. Access to Alliance resources is now available via the World Wide Web and Gopher, as well as links to other databases and electronic information. This report highlights the accomplishments of the Alliance during calendar year 1994.
Automated extraction of chemical structure information from digital raster images
Park, Jungkap; Rosania, Gus R; Shedden, Kerby A; Nguyen, Mandee; Lyu, Naesung; Saitou, Kazuhiro
2009-01-01
Background: To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results: This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams in research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion: The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links to scientific research articles.
PMID:19196483
SeedStor: A Germplasm Information Management System and Public Database
Horler, RSP; Turner, AS; Fretter, P; Ambrose, M
2018-01-01
Abstract SeedStor (https://www.seedstor.ac.uk) acts as the publicly available database for the seed collections held by the Germplasm Resources Unit (GRU) based at the John Innes Centre, Norwich, UK. The GRU is a national capability supported by the Biotechnology and Biological Sciences Research Council (BBSRC). The GRU curates germplasm collections of a range of temperate cereal, legume and Brassica crops and their associated wild relatives, as well as precise genetic stocks, near-isogenic lines and mapping populations. With >35,000 accessions, the GRU forms part of the UK’s plant conservation contribution to the Multilateral System (MLS) of the International Treaty for Plant Genetic Resources for Food and Agriculture (ITPGRFA) for wheat, barley, oat and pea. SeedStor is a fully searchable system that allows our various collections to be browsed species by species through to complicated multipart phenotype criteria-driven queries. The results from these searches can be downloaded for later analysis or used to order germplasm via our shopping cart. The user community for SeedStor is the plant science research community, plant breeders, specialist growers, hobby farmers and amateur gardeners, and educationalists. Furthermore, SeedStor is much more than a database; it has been developed to act internally as a Germplasm Information Management System that allows team members to track and process germplasm requests, determine regeneration priorities, handle cost recovery and Material Transfer Agreement paperwork, manage the Seed Store holdings and easily report on a wide range of the aforementioned tasks. PMID:29228298
National Scale Marine Geophysical Data Portal for the Israel EEZ with Public Access Web-GIS Platform
NASA Astrophysics Data System (ADS)
Ketter, T.; Kanari, M.; Tibor, G.
2017-12-01
Recent offshore discoveries and regulation in the Israel Exclusive Economic Zone (EEZ) are the driving forces behind increasing marine research and development initiatives, such as infrastructure development, environmental protection and decision making, among many others. All marine operations rely on existing seabed information, while some also generate new data. We aim to create a single-platform knowledge base that enables access to existing information through a comprehensive, publicly accessible web-based interface. The Israel EEZ covers approximately 26,000 km² and has been surveyed continuously with various geophysical instruments over the past decades, including 10,000 km of multibeam survey lines, 8,000 km of sub-bottom seismic lines, and hundreds of sediment sampling stations. Our database consists of vector and raster datasets from multiple sources compiled into a repository of geophysical data and metadata acquired nation-wide by several research institutes and universities. The repository will enable public access via a web portal based on a GIS platform, including datasets from multibeam, sub-bottom profiling, single- and multi-channel seismic surveys and sediment sampling analysis. Respective data products will also be available, e.g. bathymetry, substrate type, granulometry and geological structure. Operating a web-GIS-based repository allows retrieval of pre-existing data so that potential users can plan future activities, e.g. conducting marine surveys, construction of marine infrastructure and other private or public projects. The user interface is based on map-oriented spatial selection, which reveals any relevant data for designated areas of interest. Querying the database allows the user to obtain information about the data owner and to address them for data retrieval as required.
Wide and free public access to existing data and metadata can save time and funds for academia, government and commercial sectors, while aiding in cooperation and data sharing among the various stakeholders.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Unique Device Identification Database. 830.350 Section 830.350 Food and Drugs FOOD AND DRUG... Global Unique Device Identification Database § 830.350 Correction of information submitted to the Global Unique Device Identification Database. (a) If FDA becomes aware that any information submitted to the...
Design and Establishment of Quality Model of Fundamental Geographic Information Database
NASA Astrophysics Data System (ADS)
Ma, W.; Zhang, J.; Zhao, Y.; Zhang, P.; Dang, Y.; Zhao, T.
2018-04-01
In order to make the quality evaluation of Fundamental Geographic Information Databases (FGIDB) more comprehensive, objective and accurate, this paper develops a quality model of FGIDB, formed from the standardization of database construction and quality control, the conformity of data-set quality, and the functionality of the database management system. It also sets out the overall principles, contents and methods of quality evaluation for FGIDB, providing a basis and reference for carrying out quality control and quality evaluation. The quality elements, evaluation items and properties of the FGIDB are designed step by step on the basis of the quality model framework; organically connected, these quality elements and evaluation items constitute the quality model of the FGIDB. This model is the foundation for stipulating quality requirements and for quality evaluation, and is of great significance for quality assurance in the design and development stage, requirement formulation in the testing and evaluation stage, and the construction of a standard system for quality evaluation technology of FGIDB.
Evaluation of consumer drug information databases.
Choi, J A; Sullivan, J; Pankaskie, M; Brufsky, J
1999-01-01
To evaluate prescription drug information contained in six consumer drug information databases available on CD-ROM, and to make health care professionals aware of the information provided, so that they may appropriately recommend these databases for use by their patients. Observational study of six consumer drug information databases: The Corner Drug Store, Home Medical Advisor, Mayo Clinic Family Pharmacist, Medical Drug Reference, Mosby's Medical Encyclopedia, and PharmAssist. Information on 20 frequently prescribed drugs was evaluated in each database. The databases were ranked using a point-scale system based on primary and secondary assessment criteria. For the primary assessment, 20 categories of information based on those included in the 1998 edition of the USP DI Volume II, Advice for the Patient: Drug Information in Lay Language were evaluated for each of the 20 drugs, so each database could earn up to 400 points (for example, 1 point was awarded if the database mentioned a drug's mechanism of action). For the secondary assessment, the inclusion of 8 additional features that could enhance the utility of the databases was evaluated (for example, 1 point was awarded if the database contained a picture of the drug), and each database could earn up to 8 points. The results of the primary and secondary assessments, listed from highest to lowest number of points earned, are as follows: primary assessment: Mayo Clinic Family Pharmacist (379), Medical Drug Reference (251), PharmAssist (176), Home Medical Advisor (113.5), The Corner Drug Store (98), and Mosby's Medical Encyclopedia (18.5); secondary assessment: Mayo Clinic Family Pharmacist (8), The Corner Drug Store (5), Mosby's Medical Encyclopedia (5), Home Medical Advisor (4), Medical Drug Reference (4), and PharmAssist (3).
The Mayo Clinic Family Pharmacist was the most accurate and complete source of prescription drug information based on the USP DI Volume II and would be an appropriate database for health care professionals to recommend to patients.
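The point-scale ranking used in that evaluation (one point per information category present, summed over the drugs assessed) can be sketched as follows. The category names and the sample entries below are invented for illustration and are not the study's data:

```python
# Sketch of a point-scale database ranking: award 1 point per
# information category covered, per drug entry, then sort by total.
# Categories and entries below are invented sample data.

CATEGORIES = ["mechanism", "dosing", "interactions", "side effects"]

def score(database_entries):
    """Total points: 1 per category covered, summed over all entries."""
    return sum(
        sum(1 for c in CATEGORIES if c in entry_categories)
        for entry_categories in database_entries)

# Each set is the coverage for one drug's entry in that database.
db_a = [{"mechanism", "dosing", "side effects"},
        {"mechanism", "dosing", "interactions", "side effects"}]
db_b = [{"dosing"}, {"dosing", "side effects"}]

ranked = sorted([("A", score(db_a)), ("B", score(db_b))],
                key=lambda t: t[1], reverse=True)
print(ranked)  # [('A', 7), ('B', 3)]
```

With 20 categories and 20 drugs, the same scheme yields the study's 400-point maximum for the primary assessment.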
Utah Virtual Lab: JAVA interactivity for teaching science and statistics on line.
Malloy, T E; Jensen, G C
2001-05-01
The Utah on-line Virtual Lab is a Java program run dynamically off a database. It is embedded in StatCenter (www.psych.utah.edu/learn/statsampler.html), an on-line collection of tools and text for teaching and learning statistics. Instructors author a statistical virtual reality that simulates theories and data in a specific research focus area by defining independent, predictor, and dependent variables and the relations among them. Students work in an on-line virtual environment to discover the principles of this simulated reality: they go to a library, read theoretical overviews and scientific puzzles, and then go to a lab, design a study, collect and analyze data, and write a report. Each student's design and data-analysis decisions are computer-graded and recorded in a database; the written research report can be read by the instructor or by other students in peer groups simulating scientific conventions.
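The core mechanism (an instructor defines variables and the relations among them, and student "experiments" then sample data from that simulated reality) can be sketched in a few lines. The variable names, the linear relation, and the noise level below are invented for illustration, not taken from StatCenter:

```python
import random

# Sketch of an instructor-authored simulated reality: the dependent
# variable is a hidden function of a predictor plus noise, and each
# student "study" draws samples from it. All names and the relation
# are hypothetical.

def make_simulation(effect, noise_sd, seed=0):
    """Build a run_study function for one instructor-defined reality."""
    rng = random.Random(seed)  # seeded so runs are reproducible
    def run_study(hours_studied, n):
        # dependent variable = effect * predictor + Gaussian noise
        return [effect * hours_studied + rng.gauss(0, noise_sd)
                for _ in range(n)]
    return run_study

study = make_simulation(effect=2.0, noise_sd=0.5)
scores = study(hours_studied=3, n=5)
mean = sum(scores) / len(scores)
print(round(mean, 1))  # a value near 6.0 for this seeded run
```

Students who run enough studies can recover the hidden effect size empirically, which is the discovery exercise the virtual lab is built around.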
The Binding Database: data management and interface design.
Chen, Xi; Lin, Yuhmei; Liu, Ming; Gilson, Michael K
2002-01-01
The large and growing body of experimental data on biomolecular binding is of enormous value in developing a deeper understanding of molecular biology, in developing new therapeutics, and in various molecular design applications. However, most of these data are found only in the published literature and are therefore difficult to access and use. No existing public database has focused on measured binding affinities and has provided query capabilities that include chemical structure and sequence homology searches. We have created Binding DataBase (BindingDB), a public, web-accessible database of measured binding affinities. BindingDB is based upon a relational data specification for describing binding measurements via Isothermal Titration Calorimetry (ITC) and enzyme inhibition. A corresponding XML Document Type Definition (DTD) is used to create and parse intermediate files during the on-line deposition process and will also be used for data interchange, including collection of data from other sources. The on-line query interface, which is constructed with Java Servlet technology, supports standard SQL queries as well as searches for molecules by chemical structure and sequence homology. The on-line deposition interface uses Java Server Pages and JavaBean objects to generate dynamic HTML and to store intermediate results. The resulting data resource provides a range of functionality with brisk response times, and lends itself well to continued development and enhancement.
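The kind of standard SQL query such a relational specification supports can be sketched with a toy affinity table. The table layout, column names, and measurement values below are illustrative only, not the actual BindingDB schema or data:

```python
import sqlite3

# Hypothetical relational sketch of binding-affinity records of the
# kind described above; names and values are illustrative, not the
# actual BindingDB schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE binding (
        target TEXT,
        ligand TEXT,
        method TEXT,   -- 'ITC' or 'enzyme inhibition'
        ki_nM  REAL    -- inhibition constant, nanomolar
    )""")
conn.executemany("INSERT INTO binding VALUES (?, ?, ?, ?)", [
    ("trypsin",  "benzamidine", "ITC",               18000.0),
    ("trypsin",  "ligand-X",    "enzyme inhibition",   250.0),
    ("thrombin", "ligand-Y",    "ITC",                  40.0)])

# A standard SQL query: sub-micromolar binders of one target.
hits = conn.execute(
    "SELECT ligand, ki_nM FROM binding "
    "WHERE target = ? AND ki_nM < 1000 ORDER BY ki_nM",
    ("trypsin",)).fetchall()
print(hits)  # [('ligand-X', 250.0)]
```

Structure and sequence-homology searches layer chemistry-aware indexing on top of this relational core; the plain-SQL part is what the sketch shows.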
NASA Astrophysics Data System (ADS)
Kulchitsky, A.; Maurits, S.; Watkins, B.
2006-12-01
With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for decisions concerning their future studies. Such Web resources are also important for clarifying scientific research to the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions. Continuous operation of such systems also provides ongoing research opportunities for statistically massive validation of the models. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources; its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database filling, and PHP scripts for Web-page preparation. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories.
This information is processed to provide inputs for the next ionospheric model time step and then stored in a MySQL database as the first part of the time-specific record. The RMM then synchronizes the input times with the current model time, decides whether to initialize the next model time step, and monitors its execution. As soon as the model completes computations for a time step, the RMM visualizes the current model output into various short-term (about 1-2 hours) forecasting products and compares prior results with available ionospheric measurements. The RMM places the prepared images into the MySQL database, which can be located on a different computer node, and then proceeds to the next time interval, continuing the time loop. The upper-level interface of this real-time system is a PHP-based Web site (http://www.arsc.edu/SpaceWeather/new). This site provides general information about the Earth's polar and adjacent mid-latitude ionosphere, allows monitoring of current developments and short-term forecasts, and facilitates access to the comparison archive stored in the database.
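The time-loop logic the RMM implements can be sketched as follows. This is a minimal sketch under stated assumptions: the function names (`fetch_inputs`, `run_model`, `store_outputs`) are hypothetical stand-ins for the real download, model-execution, and database/visualization stages, not the RMM's actual API.

```python
# Hypothetical sketch of the RMM time loop: advance the model one step
# for each time whose geophysical inputs have already been posted.
def run_cycle(times, fetch_inputs, run_model, store_outputs):
    completed = []
    for t in times:
        geo = fetch_inputs(t)        # current geophysical inputs, if available
        if geo is None:
            continue                 # inputs not yet posted: skip until next poll
        output = run_model(t, geo)   # one ionospheric model time step
        store_outputs(t, output)     # fill database records / render forecasts
        completed.append(t)
    return completed

# Stand-in stages for demonstration; step 1's inputs have not yet arrived.
db = {}
inputs = {0: "kp=2", 1: None, 2: "kp=3"}
done = run_cycle(
    times=[0, 1, 2],
    fetch_inputs=inputs.get,
    run_model=lambda t, geo: f"model({t},{geo})",
    store_outputs=db.__setitem__,
)
print(done)  # [0, 2]
```

The real system adds polling, error handling, and image generation around this skeleton, but the input-synchronization decision (run the step only when its inputs exist) is the core of it.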
Computer-Based Method for On-Line Service and Compact Storage of Data
NASA Astrophysics Data System (ADS)
Vasilyev, S. V.
A new method for compressing some types of astronomical data is proposed and discussed. The method is intended to provide astronomers with a more convenient technique for data retrieval from observational databases. The technique is based on the principal component method (PCM) of data analysis and the representation of data by characteristic vectors and eigenvalues. It allows the variety of data records to be replaced by a relatively small number of parameters; the initial data can be restored simply as linear combinations of the obtained characteristic vectors. This approach can substantially reduce the volume of data stored in databases and transferred over a network. Our study shows that the resulting data volumes depend on the required accuracy of the representation and can be several times smaller than the initial ones. We note that using this method does not preclude applying widely used software for further data compression. As the PCM can represent data analytically, it can be used to adapt the requested information to the researcher's aims. Finally, taking into account that the method itself is a powerful tool for data smoothing, modeling, and comparison, we find that it has good prospects for use in computer databases. Some examples of PCM applications are described.
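The compression scheme described can be sketched in a few lines: each record is stored as a handful of expansion coefficients, and restored as a linear combination of shared characteristic vectors. The synthetic data below are illustrative (intrinsically rank-3, so three components restore them essentially losslessly).

```python
import numpy as np

# Principal-component compression sketch: store k coefficients per record
# plus a shared set of characteristic vectors, instead of full records.
rng = np.random.default_rng(0)
basis_true = rng.normal(size=(3, 50))              # data are intrinsically rank-3
records = rng.normal(size=(200, 3)) @ basis_true   # 200 records x 50 samples each

mean = records.mean(axis=0)
U, S, Vt = np.linalg.svd(records - mean, full_matrices=False)
k = 3                                  # required accuracy sets the component count
coeffs = U[:, :k] * S[:k]              # k parameters per record
components = Vt[:k]                    # shared characteristic vectors

restored = coeffs @ components + mean  # linear combination restores the data
error = np.abs(restored - records).max()

stored = coeffs.size + components.size + mean.size   # 600 + 150 + 50 = 800
original = records.size                              # 200 * 50 = 10000
print(error, original / stored)        # near-zero error, 12.5x smaller
```

For noisy real data the reconstruction is approximate rather than exact, and the achievable ratio depends on how fast the eigenvalue spectrum decays, which is the accuracy/volume trade-off the abstract refers to.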
Real-time traffic sign detection and recognition
NASA Astrophysics Data System (ADS)
Herbschleb, Ernst; de With, Peter H. N.
2009-01-01
The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images at resolutions up to 4,800×2,400 pixels. Because of the size of the database, both high reliability and high throughput are required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step that is performance-critical for both the detection rate and the overall processing time. The second stage locates candidate traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput is 35 Hz for line-of-sight images of 800×600 pixels and 4 Hz for panorama images. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.
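The three-stage structure can be expressed as a simple pipeline skeleton. This is a schematic sketch only: the stage functions below are trivial stand-ins, whereas the paper's real stages operate on color and spatial features of full images.

```python
# Schematic three-stage pipeline: area limitation, candidate location,
# then validation/recognition. Stage implementations are toy stand-ins.
def detect_signs(image, limit_area, locate_candidates, validate):
    regions = limit_area(image)                    # stage 1: area limitation
    candidates = [c for r in regions
                  for c in locate_candidates(r)]   # stage 2: candidate location
    return [c for c in candidates if validate(c)]  # stage 3: validation

# Toy example: "image" is a list of pixel rows; a "sign" is the token 'S'.
image = [["S", ".", "."], [".", ".", "."], [".", "S", "S"]]
signs = detect_signs(
    image,
    limit_area=lambda img: [row for row in img if "S" in row],  # drop empty rows
    locate_candidates=lambda row: [v for v in row if v != "."],
    validate=lambda c: c == "S",
)
print(signs)  # ['S', 'S', 'S']
```

The design point is that stage 1 discards most of the image cheaply, so the expensive stages 2 and 3 run only on a small fraction of the data, which is why the area-limitation step dominates both throughput and achievable detection rate.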
NASA Astrophysics Data System (ADS)
Steckler, M. S.; Haxby, W.; Persaud, P.; Stock, J.; Martín-Barajas, A.; Diebold, J.; Gonzalez-Fernandez, A.; Mountain, G. S.
2003-04-01
A multi-channel seismic reflection database has been developed to give access to high-resolution MCS data collected in the northern Gulf of California in May-June 1999. This data set consists of 3500 km of high-resolution MCS data acquired by the LDEO portable 48-channel MCS system using a 600-m streamer, a 1-ms sampling interval, and a CDP spacing of 6.25/12.5 m on board the B/O Ulloa, the 28-m research vessel of CICESE. The resulting images have vertical resolution on the scale of meters to depths of up to 2 km below the seafloor. In addition, 48 sonobuoys recorded to 7 sec TWTT provided refraction velocities to greater depths. The northern Gulf of California is a transitional region between the oceanic ridge-transform system of the central and southern Gulf and the continental San Andreas fault system of southern California. The data image the active deformation associated with the plate boundary zone in the northern Gulf of California, where multiple parallel rifts are simultaneously active in a wide, complex zone of regional extension overprinted by shearing and a high sediment influx. The public-access database makes the cruise results, from a US MARGINS Program focus area, available to the broader geoscience community. The database includes navigation, final stacks, and images for 80 seismic lines and 48 sonobuoys. The database may be accessed with MapApp, a downloadable Java application. Java applets offer many advantages over static or scripted web pages; they permit dynamic local interaction with data sets and limit time-consuming interaction with a remote server. MapApp displays the seismic lines on a map and provides a viewer for inspecting images of the lines. Users may select a line from a list or by clicking on the map. Once a line is selected, a user may load the image into the viewer, or download navigation, image, or SEG-Y files.
The viewer includes capability to zoom in and out, scroll, stretch or shrink horizontally, reverse direction, and toggle between black-on-white and white-on-black display. The section of the line in the viewer is indicated on the map, as is the current cursor location.
The development of digital library system for drug research information.
Kim, H J; Kim, S R; Yoo, D S; Lee, S H; Suh, O K; Cho, J H; Shin, H T; Yoon, J P
1998-01-01
The sophistication of computer technology and information transmission over the Internet has made various cyber information repositories available to information consumers. In the era of the information superhighway, the digital library, which can be accessed from remote sites at any time, is considered the prototype of the information repository. Using an object-oriented DBMS, the very first model of a digital library for pharmaceutical researchers and related professionals in Korea has been developed. Published research papers and researchers' personal information were included in the database. For the research-paper database, 13 domestic journals were abstracted and scanned into full-text image files that can be viewed with Internet web browsers. A database of researchers' personal information was also developed and interlinked to the research-paper database. These databases will be continuously updated and combined with worldwide information as the unique digital library in the field of pharmacy.
Remote sensing and GIS-based prediction and assessment of copper-gold resources in Thailand
NASA Astrophysics Data System (ADS)
Yang, Shasha; Wang, Gongwen; Du, Wenhui; Huang, Luxiong
2014-03-01
Quantitative integration of geological information is a frontier and hotspot of prospecting-decision research worldwide. The formation of large-scale Cu-Au deposits is influenced by complicated geological events and controlled by various geological factors (stratum, structure, and alteration). In this paper, using Thailand's copper-gold deposit district as a case study, geological anomaly theory is applied together with the typical copper-gold metallogenic model, and ETM+ remote sensing images, geological maps, and the mineral geology database of the study area are combined using GIS techniques. These techniques extract ore-forming information such as geological information (strata, linear and ring faults, intrusions) and remote sensing information (hydroxyl alteration, iron alteration, linear and ring structures), and delineate Cu-Au prospect targets, which were identified using a weights-of-evidence model. The results show that remote sensing and geological data can be combined to quickly predict and assess mineral resource potential in a regional metallogenic belt.
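The weights-of-evidence calculation behind such target ranking is compact enough to sketch. The counts below are purely illustrative, not figures from the Thailand study: for one binary evidence layer, W+ and W- compare how often the layer coincides with known deposits versus barren ground, and the contrast C = W+ - W- measures the layer's predictive power.

```python
import math

# Weights-of-evidence for one binary evidence layer (illustrative counts).
def weights_of_evidence(n_cells, n_deposits, n_evidence, n_dep_on_evidence):
    """Return (W+, W-, contrast C) from grid-cell counts."""
    p_e_d = n_dep_on_evidence / n_deposits                        # P(E | deposit)
    p_e_nd = (n_evidence - n_dep_on_evidence) / (n_cells - n_deposits)
    w_plus = math.log(p_e_d / p_e_nd)        # weight where evidence is present
    w_minus = math.log((1 - p_e_d) / (1 - p_e_nd))  # weight where it is absent
    return w_plus, w_minus, w_plus - w_minus

# 1000 grid cells, 50 known deposits; a hypothetical alteration layer
# covers 200 cells and captures 30 of the deposits.
w_plus, w_minus, contrast = weights_of_evidence(1000, 50, 200, 30)
print(round(contrast, 3))  # positive contrast: the layer is predictive
```

Summing the appropriate weight of each layer per cell (under an assumption of conditional independence) gives a posterior favorability map from which prospect targets are delineated.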
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-13
... information. Access to any such database system is limited to system administrators, individuals responsible... during the certification process. The above information will be contained in one or more databases (such as Lotus Notes) that reside on servers in EPA offices. The database(s) may be specific to one...
The HITRAN2016 Molecular Spectroscopic Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordon, I. E.; Rothman, L. S.; Hill, C.
This article describes the contents of the 2016 edition of the HITRAN molecular spectroscopic compilation. The new edition replaces the previous HITRAN edition of 2012 and its updates during the intervening years. The HITRAN molecular absorption compilation is composed of five major components: the traditional line-by-line spectroscopic parameters required for high-resolution radiative-transfer codes, infrared absorption cross-sections for molecules not yet amenable to representation in a line-by-line form, collision-induced absorption data, aerosol indices of refraction, and general tables such as partition sums that apply globally to the data. The new HITRAN is greatly extended in terms of accuracy, spectral coverage, additional absorption phenomena, added line-shape formalisms, and validity. Moreover, molecules, isotopologues, and perturbing gases have been added that address the issues of atmospheres beyond the Earth. Of considerable note, experimental IR cross-sections for almost 300 additional molecules important in different areas of atmospheric science have been added to the database. The compilation can be accessed through www.hitran.org. Most of the HITRAN data have now been cast into an underlying relational database structure that offers many advantages over the long-standing sequential text-based structure. The new structure empowers the user in many ways. It enables the incorporation of an extended set of fundamental parameters per transition, sophisticated line-shape formalisms, easy user-defined output formats, and very convenient searching, filtering, and plotting of data. Finally, a powerful application programming interface making use of structured query language (SQL) features for higher-level applications of HITRAN is also provided.
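The kind of searching and filtering the relational structure enables can be sketched as below. The table layout and line values are illustrative stand-ins, not HITRAN's actual schema or data; the point is that band and intensity filters become ordinary SQL predicates instead of sequential scans of a fixed-width text file.

```python
import sqlite3

# Illustrative (not HITRAN's actual) relational layout for transitions.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE transitions (
    molecule TEXT,
    isotopologue INTEGER,
    nu REAL,   -- line position, cm^-1
    sw REAL)   -- line intensity
""")
conn.executemany("INSERT INTO transitions VALUES (?, ?, ?, ?)", [
    ("H2O", 1, 1554.353, 1.2e-20),
    ("H2O", 1, 2105.877, 3.4e-23),
    ("CO2", 1, 2349.145, 3.5e-18),
])
# Select strong CO2 lines in the 2300-2400 cm^-1 band:
rows = conn.execute("""
    SELECT molecule, nu FROM transitions
    WHERE molecule = 'CO2' AND nu BETWEEN 2300 AND 2400 AND sw > 1e-21
""").fetchall()
print(rows)  # [('CO2', 2349.145)]
```

A per-transition row model also makes it natural to attach extended parameter sets and alternative line-shape formalisms as additional columns or joined tables, which is exactly the flexibility the sequential text format lacked.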
Manosroi, Jiradej; Sainakham, Mathukorn; Manosroi, Worapaka; Manosroi, Aranya
2012-05-07
ETHNOPHARMACOLOGICAL RELEVANCE: Traditional medicines have long been used by the Thai people. Several medicinal recipes prepared from mixtures of plants are often used by traditional medicinal practitioners for the treatment of many diseases, including cancer. Recipes collected from Thai medicinal textbooks were recorded in the MANOSROI II database. Anticancer recipes were searched and selected by a computer program using recipe indication keywords, including Ma-reng and San, which mean cancer in Thai, for anticancer activity investigation. The aim was to investigate the anti-cancer activities of Thai medicinal plant recipes selected from the "MANOSROI II" database. Anti-proliferative and apoptotic activities of extracts from 121 recipes, selected from the 56,137 recipes in the Thai medicinal plant recipe "MANOSROI II" database, were investigated in two cancer cell lines, human mouth epidermal carcinoma (KB) and human colon adenocarcinoma (HT-29), using the sulforhodamine B (SRB) assay and the acridine orange (AO)/ethidium bromide (EB) staining technique, respectively. In the SRB assay, recipes NE028 and S003 gave the highest anti-proliferative activity on KB and HT-29, with IC(50) values of 2.48±0.24 and 6.92±0.49 μg/ml, respectively. In the AO/EB staining assay, recipes S016 and NE028 exhibited the highest apoptotic induction in the KB and HT-29 cell lines, respectively. This study has demonstrated that the three Thai medicinal plant recipes selected from the "MANOSROI II" database (NE028, S003 and S016) showed active anti-cancer activities according to the NCI classification and can be further developed for anti-cancer treatment. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NABIC marker database: A molecular markers information network of agricultural crops.
Kim, Chang-Kug; Seol, Young-Joo; Lee, Dong-Jun; Jeong, In-Seon; Yoon, Ung-Han; Lee, Gang-Seob; Hahn, Jang-Ho; Park, Dong-Suk
2013-01-01
In 2013, the National Agricultural Biotechnology Information Center (NABIC) reconstructed its molecular marker database for useful genetic resources. The web-based marker database consists of three major functional categories: map viewer, RSN marker, and gene annotation. It provides 7,250 marker locations, 3,301 RSN marker properties, and 3,280 molecular marker annotation records for agricultural plants. Each molecular marker entry provides information such as marker name, expressed sequence tag number, gene definition, and general marker information. This updated marker database provides useful information through a user-friendly web interface that assists in tracing new chromosome structures and positional gene functions using specific molecular markers. The database is available for free at http://nabic.rda.go.kr/gere/rice/molecularMarkers/
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.
Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen
2010-12-21
There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. 
Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
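The model-driven idea of generating code from a compact data model can be illustrated with a toy generator. This sketch is not MOLGENIS's actual model language or generator suite (which are far richer and emit whole applications); it only shows the mechanism of a template turning a declared model into DDL.

```python
# Toy model-driven generator: a small data model is translated into SQL
# DDL by a template, illustrating the "model once, generate code" idea.
TYPE_MAP = {"string": "VARCHAR(255)", "int": "INTEGER", "decimal": "REAL"}

def generate_ddl(model):
    """model: {entity_name: [(field_name, field_type), ...]} -> DDL string."""
    statements = []
    for entity, fields in model.items():
        cols = ", ".join(f"{name} {TYPE_MAP[t]}" for name, t in fields)
        statements.append(f"CREATE TABLE {entity} ({cols});")
    return "\n".join(statements)

model = {
    "Sample": [("id", "int"), ("label", "string")],
    "Measurement": [("sample_id", "int"), ("value", "decimal")],
}
print(generate_ddl(model))
```

A real generator suite emits many targets (database back-end, user interfaces, exchange formats, scriptable APIs) from the same model, which is why a few hundred lines of model can replace thousands of lines of hand-written code.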
77 FR 24925 - Privacy Act of 1974; System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-26
... CES Personnel Information System database of NIFA. This database is updated annually from data provided by 1862 and 1890 land-grant universities. This database is maintained by the Agricultural Research... reviewer. NIFA maintains a database of potential reviewers. Information in the database is used to match...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... Extension of Approval; Comment Request--Publicly Available Consumer Product Safety Information Database... Publicly Available Consumer Product Safety Information Database. The Commission will consider all comments... intention to seek extension of approval of a collection of information for a database on the safety of...
78 FR 18232 - Amendment of VOR Federal Airway V-233, Springfield, IL
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-26
... it matches the information contained in the FAA's aeronautical database, matches the depiction on the... description did not match the airway information contained in the FAA's aeronautical database or the charted... information that should have been used. The FAA aeronautical database contains the correct radial information...
ERIC Educational Resources Information Center
American Society for Information Science, Washington, DC.
This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…
Martone, Maryann E.; Tran, Joshua; Wong, Willy W.; Sargis, Joy; Fong, Lisa; Larson, Stephen; Lamont, Stephan P.; Gupta, Amarnath; Ellisman, Mark H.
2008-01-01
Databases have become integral parts of data management, dissemination and mining in biology. At the Second Annual Conference on Electron Tomography, held in Amsterdam in 2001, we proposed that electron tomography data should be shared in a manner analogous to structural data at the protein and sequence scales. At that time, we outlined our progress in creating a database to bring together cell level imaging data across scales, The Cell Centered Database (CCDB). The CCDB was formally launched in 2002 as an on-line repository of high-resolution 3D light and electron microscopic reconstructions of cells and subcellular structures. It contains 2D, 3D and 4D structural and protein distribution information from confocal, multiphoton and electron microscopy, including correlated light and electron microscopy. Many of the data sets are derived from electron tomography of cells and tissues. In the five years since its debut, we have moved the CCDB from a prototype to a stable resource and expanded the scope of the project to include data management and knowledge engineering. Here we provide an update on the CCDB and how it is used by the scientific community. We also describe our work in developing additional knowledge tools, e.g., ontologies, for annotation and query of electron microscopic data. PMID:18054501
Development of the Tensoral Computer Language
NASA Technical Reports Server (NTRS)
Ferziger, Joel; Dresselhaus, Eliot
1996-01-01
The research scientist or engineer wishing to perform large-scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods, and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general: the fundamental objects in Tensoral represent tensor fields and the operators that act on them, the numerical implementation of these tensors and operators is completely and flexibly programmable, and new mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages: Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very high level: tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient: it is a compiled language, and database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
Second Line of Defense Master Spares Catalog
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henderson, Dale L.; Muller, George; Mercier, Theresa M.
This catalog is intended to be a comprehensive listing of repair parts, components, kits, and consumable items used on the equipment deployed at SLD sites worldwide. The catalog covers detection, CAS, network, ancillary equipment, and tools. The catalog is backed by a Master Parts Database which is used to generate the standard report views of the catalog. The master parts database is a relational database containing a record for every part in the master parts catalog along with supporting tables for normalizing fields in the records. The database also includes supporting queries, database maintenance forms, and reports.
Annual Review of Database Developments: 1993.
ERIC Educational Resources Information Center
Basch, Reva
1993-01-01
Reviews developments in the database industry for 1993. Topics addressed include scientific and technical information; environmental issues; social sciences; legal information; business and marketing; news services; documentation; databases and document delivery; electronic bulletin boards and the Internet; and information industry organizational…
16 CFR 1102.24 - Designation of confidential information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...
16 CFR 1102.24 - Designation of confidential information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...
National Institute of Standards and Technology Data Gateway
SRD 115 Hydrocarbon Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.
National Institute of Standards and Technology Data Gateway
SRD 114 Diatomic Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 121 diatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty, and reference are given for each transition reported.
National Institute of Standards and Technology Data Gateway
SRD 117 Triatomic Spectral Database (Web, free access) All of the rotational spectral lines observed and reported in the open literature for 55 triatomic molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.
R2 & NE Block Group - 2010 Census; Housing and Population Summary
The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line File is designed to stand alone as an independent data set, or the files can be combined to cover the entire nation. Block Groups (BGs) are defined before tabulation block delineation and numbering, but are clusters of blocks within the same census tract that have the same first digit of their 4-digit census block number from the same decennial census. For example, Census 2000 tabulation blocks 3001, 3002, 3003, ..., 3999 within Census 2000 tract 1210.02 are also within BG 3 within that census tract. Census 2000 BGs generally contained between 600 and 3,000 people, with an optimum size of 1,500 people. Most BGs were delineated by local participants in the Census Bureau's Participant Statistical Areas Program (PSAP). The Census Bureau delineated BGs only where the PSAP participant declined to delineate BGs or where the Census Bureau could not identify any local PSAP participant. A BG usually covers a contiguous area. Each census tract contains at least one BG, and BGs are uniquely numbered within each census tract. Within the standard census geographic hierarchy, BGs never cross
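The block-group numbering rule described above is mechanical enough to express directly: a tabulation block's BG is the first digit of its 4-digit block number, scoped to its census tract. The helper below is a sketch for working with such identifiers, not part of any Census Bureau tooling.

```python
# A census block's Block Group is the first digit of its 4-digit block
# number, within its census tract (illustrative helper, not Census code).
def block_group(tract, block):
    """Return the (tract, BG) pair a 4-digit census block belongs to."""
    return (tract, block[0])

# Blocks 3001..3999 of tract 1210.02 all fall in BG 3 of that tract:
assert block_group("1210.02", "3001") == ("1210.02", "3")
assert block_group("1210.02", "3999") == ("1210.02", "3")
print(block_group("1210.02", "3002"))  # ('1210.02', '3')
```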
R2 & NE Tract - 2010 Census; Housing and Population Summary
The TIGER/Line Files are shapefiles and related database files (.dbf) that are an extract of selected geographic and cartographic information from the U.S. Census Bureau's Master Address File / Topologically Integrated Geographic Encoding and Referencing (MAF/TIGER) Database (MTDB). The MTDB represents a seamless national file with no overlaps or gaps between parts; however, each TIGER/Line File is designed to stand alone as an independent data set, or the files can be combined to cover the entire nation. Census tracts are small, relatively permanent statistical subdivisions of a county or equivalent entity, and were defined by local participants as part of the 2010 Census Participant Statistical Areas Program. The Census Bureau delineated the census tracts in situations where no local participant existed or where all the potential participants declined to participate. The primary purpose of census tracts is to provide a stable set of geographic units for the presentation of census data and comparison back to previous decennial censuses. Census tracts generally have a population size between 1,200 and 8,000 people, with an optimum size of 4,000 people. When first delineated, census tracts were designed to be homogeneous with respect to population characteristics, economic status, and living conditions. The spatial size of census tracts varies widely depending on the density of settlement. Physical changes in street patterns caused by highway construction, new
Radiative transfer and spectroscopic databases: A line-sampling Monte Carlo approach
NASA Astrophysics Data System (ADS)
Galtier, Mathieu; Blanco, Stéphane; Dauchet, Jérémi; El Hafi, Mouna; Eymet, Vincent; Fournier, Richard; Roger, Maxime; Spiesser, Christophe; Terrée, Guillaume
2016-03-01
Dealing with molecular-state transitions for radiative transfer purposes involves two successive steps that both reach the complexity level at which physicists start thinking about statistical approaches: (1) constructing line-shaped absorption spectra as the result of very numerous state-transitions, (2) integrating over optical-path domains. For the first time, we show here how these steps can be addressed simultaneously using the null-collision concept. This opens the door to the design of Monte Carlo codes directly estimating radiative transfer observables from spectroscopic databases. The intermediate step of producing accurate high-resolution absorption spectra is no longer required. A Monte Carlo algorithm is proposed and applied to six one-dimensional test cases. It allows the computation of spectrally integrated intensities (over 25 cm-1 bands or the full IR range) in a few seconds, regardless of the retained database and line model. But free parameters need to be selected and they impact the convergence. A first possible selection is provided in full detail. We observe that this selection is highly satisfactory for quite distinct atmospheric and combustion configurations, but a more systematic exploration is still in progress.
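The null-collision idea summarized above can be illustrated with a minimal sketch: tentative collision locations are drawn from the exponential law of a constant majorant coefficient k_hat, and each is accepted as a true collision with probability k(x)/k_hat, otherwise treated as a null collision. The function names, the constant majorant, and the homogeneous test case are assumptions for illustration, not the authors' algorithm in full.

```python
import math
import random

def sample_free_path(k_true, k_hat, x0=0.0, rng=random):
    """Sample a collision location using null collisions.

    k_true : callable giving the true absorption coefficient k(x) <= k_hat
    k_hat  : constant majorant coefficient
    Tentative collisions follow the exponential law of k_hat; each is a
    true collision with probability k(x)/k_hat, otherwise a null collision
    and the walk continues from the tentative location.
    """
    x = x0
    while True:
        x += -math.log(rng.random()) / k_hat   # exponential step of the majorant
        if rng.random() < k_true(x) / k_hat:   # accept as a true collision?
            return x

# Homogeneous sanity check: with k(x) = k constant, the mean free path is 1/k.
random.seed(0)
k = 2.0
paths = [sample_free_path(lambda x: k, k_hat=5.0) for _ in range(20000)]
mean_path = sum(paths) / len(paths)
```

The point of the trick is visible in the sketch: no high-resolution spectrum of `k_true` is ever tabulated; the coefficient is only evaluated at the sampled locations.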
Code of Federal Regulations, 2014 CFR
2014-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...
16 CFR § 1102.24 - Designation of confidential information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... SAFETY ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Procedural... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...
Code of Federal Regulations, 2012 CFR
2012-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011) Notice and Disclosure... of the contents of the Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database...
Ang, Darwin N; Behrns, Kevin E
2013-07-01
The emphasis on high-quality care has spawned the development of quality programs, most of which focus on broad outcome measures across a diverse group of providers. Our aim was to investigate the clinical outcomes for a department of surgery with multiple service lines of patient care using a relational database. Mortality, length of stay (LOS), patient safety indicators (PSIs), and hospital-acquired conditions were examined for each service line. Expected values for mortality and LOS were derived from University HealthSystem Consortium regression models, whereas expected values for PSIs were derived from Agency for Healthcare Research and Quality regression models. Overall, 5200 patients were evaluated from the months of January through May of both 2011 (n = 2550) and 2012 (n = 2650). The overall observed-to-expected (O/E) ratio of mortality improved from 1.03 to 0.92. The overall O/E ratio for LOS improved from 0.92 to 0.89. PSIs that predicted mortality included postoperative sepsis (O/E:1.89), postoperative respiratory failure (O/E:1.83), postoperative metabolic derangement (O/E:1.81), and postoperative deep vein thrombosis or pulmonary embolus (O/E:1.8). Mortality and LOS can be improved by using a relational database with outcomes reported to specific service lines. Service line quality can be influenced by distribution of frequent reports, group meetings, and service line-directed interventions.
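The observed-to-expected (O/E) ratios reported above are simple quotients of observed counts and model-predicted counts, computed separately for each service line. A minimal sketch with hypothetical counts (not the study's data):

```python
def observed_to_expected(observed, expected):
    """Observed-to-expected (O/E) ratio: values below 1 are better than predicted."""
    if expected <= 0:
        raise ValueError("expected count must be positive")
    return observed / expected

# Hypothetical per-service-line mortality counts (illustrative only):
service_lines = {
    "vascular":   (12, 13.0),   # (observed deaths, model-expected deaths)
    "colorectal": (9,  9.8),
    "transplant": (7,  6.5),
}
ratios = {name: round(observed_to_expected(o, e), 2)
          for name, (o, e) in service_lines.items()}
```

Reporting such ratios per service line, rather than department-wide, is what lets a relational database direct quality interventions to the teams that need them.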
[Establishment of a comprehensive database for laryngeal cancer related genes and the miRNAs].
Li, Mengjiao; E, Qimin; Liu, Jialin; Huang, Tingting; Liang, Chuanyu
2015-09-01
To collect and analyze laryngeal cancer-related genes and miRNAs in order to build a comprehensive laryngeal cancer-related gene database that, unlike current biological information databases with complex and unwieldy structures, focuses on genes and miRNAs, making research and teaching more convenient and efficient. Based on the B/S architecture, using Apache as the Web server, MySQL for the database, and PHP for the web front end, a comprehensive database for laryngeal cancer-related genes was established, providing gene tables, protein tables, miRNA tables, and clinical information tables for patients with laryngeal cancer. The established database contained 207 laryngeal cancer-related genes, 243 proteins, 26 miRNAs, and their particular information such as mutations, methylations, differential expression, and the empirical references of laryngeal cancer-relevant molecules. The database could be accessed and operated via the Internet, through which browsing and retrieval of the information were performed. The database was maintained and updated regularly. The database for laryngeal cancer-related genes is resource-integrated and user-friendly, providing a genetic information query tool for the study of laryngeal cancer.
The nature of advocacy vs. paternalism in nursing: clarifying the 'thin line'.
Zomorodi, Meg; Foley, Barbara Jo
2009-08-01
This paper is an exploration of the concepts of advocacy and paternalism in nursing and discusses the thin line between the two. Nurses are involved in care more than any other healthcare professionals and they play a central role in advocating for patients and families. It is difficult to obtain a clear definition of advocacy, yet the concepts of advocacy and paternalism must be compared, contrasted, and discussed extensively. In many situations, only a thin line distinguishes advocacy from paternalism. A literature search was conducted using PubMed and CINAHL databases (2000-2008) as well as a library catalogue for texts. Four case stories were described in order to discuss the 'thin line' between advocacy and paternalism and develop communication strategies to eliminate ambiguity. Weighing the ethical principles of beneficence and autonomy helps to clarify advocacy and paternalism and provides an avenue for discussion among nurses practicing in a variety of settings. Advocacy and paternalism should be discussed at interdisciplinary rounds, and taken into consideration when making patient care decisions. It is difficult to clarify advocacy vs. paternalism, but strategies such as knowing the patient, clarifying information, and educating all involved are initial steps in distinguishing advocacy from paternalism. Truly 'knowing' patients, their life experiences, values, beliefs and wishes can help clarify the 'thin line' and gain a grasp of these difficult to distinguish theoretical concepts.
Seliske, Laura; Pickett, William; Bates, Rebecca; Janssen, Ian
2012-01-01
Many studies examining the food retail environment rely on geographic information system (GIS) databases for location information. The purpose of this study was to validate information provided by two GIS databases, comparing the positional accuracy of food service places within a 1 km circular buffer surrounding 34 schools in Ontario, Canada. A commercial database (InfoCanada) and an online database (Yellow Pages) provided the addresses of food service places. Actual locations were measured using a global positioning system (GPS) device. The InfoCanada and Yellow Pages GIS databases provided the locations for 973 and 675 food service places, respectively. Overall, 749 (77.1%) and 595 (88.2%) of these were located in the field. The online database had a higher proportion of food service places found in the field. The GIS locations of 25% of the food service places were located within approximately 15 m of their actual location, 50% were within 25 m, and 75% were within 50 m. This validation study provided a detailed assessment of errors in the measurement of the location of food service places in the two databases. The location information was more accurate for the online database, however, when matching criteria were more conservative, there were no observed differences in error between the databases. PMID:23066385
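The positional-error percentiles reported above come from comparing each database-listed location with its GPS-measured location. A minimal sketch of that comparison using the haversine great-circle distance; the coordinate pairs below are invented for illustration, not the study data:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical (GIS-listed, GPS-measured) coordinate pairs:
pairs = [
    ((44.2312, -76.4860), (44.2313, -76.4861)),
    ((44.2250, -76.4950), (44.2254, -76.4951)),
]
errors_m = [haversine_m(a[0], a[1], b[0], b[1]) for a, b in pairs]
```

Sorting `errors_m` and reading off the 25th, 50th, and 75th percentiles yields exactly the kind of summary the abstract quotes (15 m, 25 m, 50 m).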
Tabak, Ying P.; Johannes, Richard S.; Sun, Xiaowu; Crosby, Cynthia T.
2016-01-01
The Centers for Medicare and Medicaid Services (CMS) Hospital Compare central line-associated bloodstream infection (CLABSI) data and private databases containing new-generation intravenous needleless connector (study NC) use at the hospital level were linked. The relative risk (RR) of CLABSI associated with the study NCs was estimated, adjusting for hospital characteristics. Among 3074 eligible hospitals in the 2013 CMS database, 758 (25%) hospitals used the study NCs. The study NC hospitals had a lower unadjusted CLABSI rate (1.03 vs 1.13 CLABSIs per 1000 central line days, P < .0001) compared with comparator hospitals. The adjusted RR for CLABSI was 0.94 (95% confidence interval: 0.86, 1.02; P = .11). PMID:27598072
Municipal GIS incorporates database from pipe lines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1994-05-01
League City, a coastal community of about 35,000 people in Galveston County, Texas, has developed an impressive municipal GIS program. The system represents a textbook example of what a municipal GIS can represent and produce. In 1987, the city engineer was authorized to begin developing the area information system. City survey personnel used state-of-the-art Global Positioning System (GPS) technology to establish a first-order monumentation program with a grid of 78 monuments set over 54 sq mi. Street, subdivision, survey, utility, taxing, hydrology, topography, environmental, and other concerns were layered into the municipal GIS database program. Today, area developers submit all layout, design, and land use plan data to the city in digital format without hard copy. Multi-color maps with high-resolution graphics can be quickly generated for cross-referenced queries sensitive to political, environmental, engineering, taxing, and/or utility capacity jurisdictions. The design of both the GIS and the database system is described.
The EXOSAT database and archive
NASA Technical Reports Server (NTRS)
Reynolds, A. P.; Parmar, A. N.
1992-01-01
The EXOSAT database provides on-line access to the results and data products (spectra, images, and lightcurves) from the EXOSAT mission as well as access to data and logs from a number of other missions (such as EINSTEIN, COS-B, ROSAT, and IRAS). In addition, a number of familiar optical, infrared, and x ray catalogs, including the Hubble Space Telescope (HST) guide star catalog are available. The complete database is located at the EXOSAT observatory at ESTEC in the Netherlands and is accessible remotely via a captive account. The database management system was specifically developed to efficiently access the database and to allow the user to perform statistical studies on large samples of astronomical objects as well as to retrieve scientific and bibliographic information on single sources. The system was designed to be mission independent and includes timing, image processing, and spectral analysis packages as well as software to allow the easy transfer of analysis results and products to the user's own institute. The archive at ESTEC comprises a subset of the EXOSAT observations, stored on magnetic tape. Observations of particular interest were copied in compressed format to an optical jukebox, allowing users to retrieve and analyze selected raw data entirely from their terminals. Such analysis may be necessary if the user's needs are not accommodated by the products contained in the database (in terms of time resolution, spectral range, and the finesse of the background subtraction, for instance). Long-term archiving of the full final observation data is taking place at ESRIN in Italy as part of the ESIS program, again using optical media, and ESRIN have now assumed responsibility for distributing the data to the community. Tests showed that raw observational data (typically several tens of megabytes for a single target) can be transferred via the existing networks in reasonable time.
Wachtel, Ruth E; Dexter, Franklin
2013-12-01
The purpose of this article is to teach operating room managers, financial analysts, and those with a limited knowledge of search engines, including PubMed, how to locate articles they need in the areas of operating room and anesthesia group management. Many physicians are unaware of current literature in their field and evidence-based practices. The most common source of information is colleagues. Many people making management decisions do not read published scientific articles. Databases such as PubMed are available to search for such articles. Other databases, such as citation indices and Google Scholar, can be used to uncover additional articles. Nevertheless, most people who do not know how to use these databases are reluctant to utilize help resources when they do not know how to accomplish a task. Most people are especially reluctant to use on-line help files. Help files and search databases are often difficult to use because they have been designed for users already familiar with the field. The help files and databases have specialized vocabularies unique to the application. MeSH terms in PubMed are not useful alternatives for operating room management, an important limitation, because MeSH is the default when search terms are entered in PubMed. Librarians or those trained in informatics can be valuable assets for searching unusual databases, but they must possess the domain knowledge relative to the subject they are searching. The search methods we review are especially important when the subject area (e.g., anesthesia group management) is so specific that only 1 or 2 articles address the topic of interest. The materials are presented broadly enough that the reader can extrapolate the findings to other areas of clinical and management issues in anesthesiology.
GenoMycDB: a database for comparative analysis of mycobacterial genes and genomes.
Catanho, Marcos; Mascarenhas, Daniel; Degrave, Wim; Miranda, Antonio Basílio de
2006-03-31
Several databases and computational tools have been created with the aim of organizing, integrating, and analyzing the wealth of information generated by large-scale sequencing projects of mycobacterial genomes and those of other organisms. However, with very few exceptions, these databases and tools do not allow for massive and/or dynamic comparison of these data. GenoMycDB (http://www.dbbm.fiocruz.br/GenoMycDB) is a relational database built for large-scale comparative analyses of completely sequenced mycobacterial genomes, based on their predicted protein content. Its central structure is composed of the results obtained after pair-wise sequence alignments among all the predicted proteins coded by the genomes of six mycobacteria: Mycobacterium tuberculosis (strains H37Rv and CDC1551), M. bovis AF2122/97, M. avium subsp. paratuberculosis K10, M. leprae TN, and M. smegmatis MC2 155. The database stores the computed similarity parameters of every aligned pair, providing for each protein sequence the predicted subcellular localization, the assigned cluster of orthologous groups, the features of the corresponding gene, and links to several important databases. Tables containing pairs or groups of potential homologs between selected species/strains can be produced dynamically by user-defined criteria, based on one or multiple sequence similarity parameters. In addition, searches can be restricted according to the predicted subcellular localization of the protein, the DNA strand of the corresponding gene, and/or the description of the protein. Massive data search and/or retrieval are available, and different ways of exporting the results are offered. GenoMycDB provides an on-line resource for the functional classification of mycobacterial proteins as well as for the analysis of genome structure, organization, and evolution.
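The kind of user-defined homolog query GenoMycDB supports can be sketched against a toy relational table of pairwise alignment results. The table layout, column names, and rows below are assumptions for illustration, not the actual GenoMycDB schema:

```python
import sqlite3

# In-memory sketch of the central table: one row per aligned protein pair.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE alignment (
    query_id   TEXT,
    subject_id TEXT,
    identity   REAL,   -- percent identity of the alignment
    coverage   REAL,   -- fraction of the query covered
    evalue     REAL)""")
rows = [  # hypothetical alignments, not GenoMycDB records
    ("Rv0001", "Mb0001",     98.5, 0.99, 1e-180),
    ("Rv0002", "Mb0002",     54.0, 0.60, 1e-20),
    ("Rv0003", "MSMEG_0003", 91.2, 0.95, 1e-150),
]
con.executemany("INSERT INTO alignment VALUES (?,?,?,?,?)", rows)

# User-defined criteria: potential homologs above identity/coverage cut-offs.
homologs = con.execute(
    "SELECT query_id, subject_id FROM alignment "
    "WHERE identity >= 90 AND coverage >= 0.9 ORDER BY query_id").fetchall()
```

Changing the thresholds or adding predicates (subcellular localization, DNA strand, description) is just editing the `WHERE` clause, which is what "dynamically produced by user-defined criteria" amounts to in a relational design.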
THE HUMAN EXPOSURE DATABASE SYSTEM (HEDS)-PUTTING THE NHEXAS DATA ON-LINE
The EPA's National Exposure Research Laboratory (NERL) has developed an Internet accessible Human Exposure Database System (HEDS) to provide the results of NERL human exposure studies to both the EPA and the external scientific communities. The first data sets that will be ava...
16 CFR 1102.24 - Designation of confidential information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... ACT REGULATIONS PUBLICLY AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE (Eff. Jan. 10, 2011... allegedly confidential information is not placed in the database, a request for designation of confidential... publication in the Database until it makes a determination regarding confidential treatment. (e) Assistance...
16 CFR § 1102.42 - Disclaimers.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AVAILABLE CONSUMER PRODUCT SAFETY INFORMATION DATABASE Notice and Disclosure Requirements § 1102.42... Consumer Product Safety Information Database, particularly with respect to the accuracy, completeness, or adequacy of information submitted by persons outside of the CPSC. The Database will contain a notice to...
49 CFR 535.8 - Reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... information. (2) Manufacturers must submit information electronically through the EPA database system as the... year 2012 the agencies are not prepared to receive information through the EPA database system... applications for certificates of conformity in accordance through the EPA database including both GHG emissions...
NASA Astrophysics Data System (ADS)
Parker, Jay; Donnellan, Andrea; Glasscoe, Margaret; Fox, Geoffrey; Wang, Jun; Pierce, Marlon; Ma, Yu
2015-08-01
High-resolution maps of earth surface deformation are available in public archives for scientific interpretation, but are primarily available as bulky downloads on the internet. The NASA uninhabited aerial vehicle synthetic aperture radar (UAVSAR) archive of airborne radar interferograms delivers very high resolution images (approximately seven meter pixels) making remote handling of the files that much more pressing. Data exploration requiring data selection and exploratory analysis has been tedious. QuakeSim has implemented an archive of UAVSAR data in a web service and browser system based on GeoServer (http://geoserver.org). This supports a variety of services that supply consistent maps, raster image data and geographic information systems (GIS) objects including standard earthquake faults. Browsing the database is supported by initially displaying GIS-referenced thumbnail images of the radar displacement maps. Access is also provided to image metadata and links for full file downloads. One of the most widely used features is the QuakeSim line-of-sight profile tool, which calculates the radar-observed displacement (from an unwrapped interferogram product) along a line specified through a web browser. Displacement values along a profile are updated to a plot on the screen as the user interactively redefines the endpoints of the line and the sampling density. The profile and also a plot of the ground height are available as CSV (text) files for further examination, without any need to download the full radar file. Additional tools allow the user to select a polygon overlapping the radar displacement image, specify a downsampling rate and extract a modest sized grid of observations for display or for inversion, for example, the QuakeSim simplex inversion tool which estimates a consistent fault geometry and slip model.
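The line-of-sight profile tool described above amounts to sampling a raster of unwrapped displacements at evenly spaced points along a user-chosen line, so only the profile values, never the bulky radar file, need to reach the browser. A minimal sketch using bilinear interpolation on a toy grid (an illustration, not QuakeSim's implementation):

```python
def bilinear(grid, x, y):
    """Bilinearly interpolate a 2-D list-of-lists grid at fractional (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def profile(grid, p0, p1, n):
    """Sample n evenly spaced displacement values along the line p0 -> p1."""
    (x0, y0), (x1, y1) = p0, p1
    return [bilinear(grid, x0 + (x1 - x0) * t / (n - 1),
                           y0 + (y1 - y0) * t / (n - 1)) for t in range(n)]

# Toy unwrapped-displacement grid whose values increase linearly with x.
grid = [[float(x) for x in range(5)] for _ in range(5)]
samples = profile(grid, (0.0, 0.0), (4.0, 4.0), 5)
```

Interactively moving the endpoints or changing `n` only re-runs `profile`, which is why the plot can update live as the user drags the line.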
Na, Tong; Xie, Jianyang; Zhao, Yitian; Zhao, Yifan; Liu, Yue; Wang, Yongtian; Liu, Jiang
2018-05-09
Automatic methods of analyzing retinal vascular networks, such as retinal blood vessel detection, vascular network topology estimation, and arteries/veins classification, are of great assistance to the ophthalmologist in terms of diagnosis and treatment of a wide spectrum of diseases. We propose a new framework for precisely segmenting retinal vasculatures, constructing retinal vascular network topology, and separating the arteries and veins. A nonlocal total variation inspired Retinex model is employed to remove the image intensity inhomogeneities and relatively poor contrast. For better generalizability and segmentation performance, a superpixel-based line operator is proposed to distinguish between lines and edges, thus allowing more tolerance in the position of the respective contours. The concept of dominant sets clustering is adopted to estimate retinal vessel topology and classify the vessel network into arteries and veins. The proposed segmentation method yields competitive results on three public data sets (STARE, DRIVE, and IOSTAR), and it has superior performance when compared with unsupervised segmentation methods, with accuracy of 0.954, 0.957, and 0.964, respectively. The topology estimation approach has been applied to five public databases (DRIVE, STARE, INSPIRE, IOSTAR, and VICAVR) and achieved high accuracy of 0.830, 0.910, 0.915, 0.928, and 0.889, respectively. The accuracies of arteries/veins classification based on the estimated vascular topology on three public databases (INSPIRE, DRIVE, and VICAVR) are 0.909, 0.910, and 0.907, respectively. The experimental results show that the proposed framework has effectively addressed the crossover problem, a bottleneck issue in segmentation and vascular topology reconstruction. The vascular topology information significantly improves the accuracy of arteries/veins classification. © 2018 American Association of Physicists in Medicine.
The production and sales of anti-tuberculosis drugs in China.
Huang, Yang-Mu; Zhao, Qi-Peng; Ren, Qiao-Meng; Peng, Dan-Lu; Guo, Yan
2016-10-04
Tuberculosis (TB) is a major infectious disease globally. Adequate and proper use of anti-TB drugs is essential for TB control. This study aims to examine China's production capacity and sales of anti-TB drugs, and to further discuss the potential for China to contribute to global TB control. The production data of anti-TB drugs in China from 2011 to 2013 and the sales data from 2010 to 2014 were extracted from the Ministry of Industry and Information Technology database of China and the IMS Health database, respectively. The number of drugs was standardized to the molecular level of the key components before calculation. All data were described and analyzed in Microsoft Excel. First-line drugs were the majority in both sales (89.5 %) and production (92.3 %) of anti-TB drugs in China. The production of rifampicin held the majority share in active pharmaceutical ingredients (APIs) and finished products, whilst ethambutol and pyrazinamide were the top two sellers among finished products. Fixed-dose combinations only held small percentages of total production and sales weight, though a slight increase was observed. The production and sales of streptomycin showed a decreasing tendency after 2012. The trends and proportions of different anti-TB drugs were similar in production and sales; however, the production weight was much larger than that of sales, especially for rifampicin and isoniazid. First-line drugs were the predominant medicines produced and used in China, while the low production and sales of second-line TB drugs and FDCs raise concerns for the treatment of multidrug-resistant TB. The redundant production amount, as well as the prompt influence of national policy on drug production and sales, indicated the potential for China to better contribute to global TB control.
Contextualization of drug-mediator relations using evidence networks.
Tran, Hai Joey; Speyer, Gil; Kiefer, Jeff; Kim, Seungchan
2017-05-31
Genomic analysis of drug response can provide unique insights into therapies that can be used to match the "right drug to the right patient." However, the process of discovering such therapeutic insights using genomic data is not straightforward and represents an area of active investigation. EDDY (Evaluation of Differential DependencY), a statistical test to detect differential statistical dependencies, is one method that leverages genomic data to identify differential genetic dependencies. EDDY has been used in conjunction with the Cancer Therapeutics Response Portal (CTRP), a dataset with drug-response measurements for more than 400 small molecules, and RNAseq data of cell lines in the Cancer Cell Line Encyclopedia (CCLE) to find potential drug-mediator pairs. Mediators were identified as genes that showed significant change in genetic statistical dependencies within annotated pathways between drug sensitive and drug non-sensitive cell lines, and the results are presented as a public web-portal (EDDY-CTRP). However, the interpretability of drug-mediator pairs currently hinders further exploration of these potentially valuable results. In this study, we address this challenge by constructing evidence networks built with protein and drug interactions from the STITCH and STRING interaction databases. STITCH and STRING are sister databases that catalog known and predicted drug-protein interactions and protein-protein interactions, respectively. Using these two databases, we have developed a method to construct evidence networks to "explain" the relation between a drug and a mediator. We applied this approach to drug-mediator relations discovered in EDDY-CTRP analysis and identified evidence networks for ~70% of drug-mediator pairs where most mediators were not known direct targets for the drug. Constructed evidence networks enable researchers to contextualize the drug-mediator pair with current research and knowledge.
Using evidence networks, we were able to improve the interpretability of the EDDY-CTRP results by linking the drugs and mediators with genes associated with both the drug and the mediator. We anticipate that these evidence networks will help inform EDDY-CTRP results and enhance the generation of important insights to drug sensitivity that will lead to improved precision medicine applications.
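An evidence network that "explains" a drug-mediator pair is, at its simplest, a chain of catalogued interactions linking the two. A hedged sketch using breadth-first search over hypothetical STITCH/STRING-style edges (the gene and drug names are invented, not records from those databases):

```python
from collections import deque

def shortest_path(edges, start, goal):
    """Breadth-first search for the shortest chain of interactions."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)  # treat interactions as undirected
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in sorted(adj.get(path[-1], ())):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no evidence chain found

# Hypothetical drug-protein and protein-protein edges:
edges = [("drugX", "TP53"), ("TP53", "MDM2"), ("MDM2", "mediatorY"),
         ("drugX", "EGFR")]
evidence_chain = shortest_path(edges, "drugX", "mediatorY")
```

In a real setting the edges would also carry confidence scores from STITCH/STRING, and paths would be ranked by those scores rather than by length alone.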
SORTEZ: a relational translator for NCBI's ASN.1 database.
Hart, K W; Searls, D B; Overton, G C
1994-07-01
The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for the purpose of exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase) where information can be accessed through the relational query language, SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
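The automatable core of such a transformation, flattening nested records into per-table rows joined by foreign keys, can be sketched as follows. The recursion scheme, key naming, and the ASN.1-like sample record are illustrative assumptions, not SORTEZ's actual schema mapping, which (as the abstract notes) required domain expertise:

```python
def flatten(record, table, parent_key=None, tables=None, counters=None):
    """Recursively split one nested record into flat per-table row lists.

    Scalar fields stay in the current table's row; nested dicts and lists
    of dicts become child tables whose rows carry a foreign key
    ('parent_id') pointing back to the parent row.
    """
    if tables is None:
        tables, counters = {}, {}
    counters[table] = counters.get(table, 0) + 1
    row = {"id": counters[table], "parent_id": parent_key}
    for key, value in record.items():
        if isinstance(value, dict):
            flatten(value, key, row["id"], tables, counters)
        elif isinstance(value, list):
            for item in value:
                flatten(item, key, row["id"], tables, counters)
        else:
            row[key] = value
    tables.setdefault(table, []).append(row)
    return tables

# Hypothetical nested entry (not NCBI's actual ASN.1 schema):
entry = {"accession": "U00001", "features": [{"kind": "CDS"}, {"kind": "gene"}]}
tables = flatten(entry, "sequence")
```

Each key of `tables` corresponds to one relational table; loading the row lists into an RDBMS and querying with SQL joins over `parent_id` recovers the original nesting.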
Massive NGS Data Analysis Reveals Hundreds Of Potential Novel Gene Fusions in Human Cell Lines.
Gioiosa, Silvia; Bolis, Marco; Flati, Tiziano; Massini, Annalisa; Garattini, Enrico; Chillemi, Giovanni; Fratelli, Maddalena; Castrignanò, Tiziana
2018-06-01
Gene fusions derive from chromosomal rearrangements and the resulting chimeric transcripts are often endowed with oncogenic potential. Furthermore, they serve as diagnostic tools for the clinical classification of cancer subgroups with different prognosis and, in some cases, they can provide specific drug targets. So far, many efforts have been carried out to study gene fusion events occurring in tumor samples. In recent years, the availability of a comprehensive Next Generation Sequencing dataset for all the existing human tumor cell lines has provided the opportunity to further investigate these data in order to identify novel and still uncharacterized gene fusion events. In our work, we have extensively reanalyzed 935 paired-end RNA-seq experiments downloaded from "The Cancer Cell Line Encyclopedia" repository, aiming at addressing novel putative cell-line-specific gene fusion events in human malignancies. The bioinformatics analysis was performed by running four different gene fusion detection algorithms. The results were further prioritized by running a Bayesian classifier that performs an in silico validation. The collection of fusion events supported by all of the prediction tools results in a robust set of ∼ 1,700 in-silico predicted novel candidates suitable for downstream analyses. Given the huge amount of data and information produced, computational results have been systematized in a database named LiGeA. The database can be browsed through a dynamic and interactive web portal, further integrated with validated data from other well-known repositories. Taking advantage of the intuitive query forms, users can easily access, navigate, filter, and select the putative gene fusions for further validations and studies. They can also find suitable experimental models for a given fusion of interest.
We believe that the LiGeA resource can represent not only the first compendium of both known and putative novel gene fusion events in the catalog of all of the human malignant cell lines, but it can also become a handy starting point for wet-lab biologists who wish to investigate novel cancer biomarkers and specific drug targets.
The methodology of database design in organization management systems
NASA Astrophysics Data System (ADS)
Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.
2017-01-01
The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to designing the conceptual information model, the main principles of developing relational databases are provided and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes the process of presenting the results of analyzing users' information needs and the rationale for the use of classifiers.
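As a minimal illustration of the design step described above (mapping a conceptual information model onto relational tables that serve users' information needs), the following sketch uses two hypothetical entities, Department and Employee, in SQLite. All names are invented for illustration and are not taken from the paper:

```python
import sqlite3

# Hypothetical conceptual model: Department has a one-to-many
# relationship with Employee, mapped onto relational tables
# linked by a foreign key.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""CREATE TABLE department (
    dept_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL UNIQUE)""")
conn.execute("""CREATE TABLE employee (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    dept_id INTEGER NOT NULL REFERENCES department(dept_id))""")
conn.execute("INSERT INTO department VALUES (1, 'Planning')")
conn.execute("INSERT INTO employee VALUES (10, 'Ivanova', 1)")

# A user's information need ("who works in Planning?") becomes a join:
rows = conn.execute("""SELECT e.name FROM employee e
    JOIN department d ON d.dept_id = e.dept_id
    WHERE d.name = 'Planning'""").fetchall()
print(rows)
```

The point of the conceptual-modeling stage is that entities and relationships are fixed before any table is created; the tables and the join above simply realize that model.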
Rhode Island Water Supply System Management Plan Database (WSSMP-Version 1.0)
Granato, Gregory E.
2004-01-01
In Rhode Island, the availability of water of sufficient quality and quantity to meet current and future environmental and economic needs is vital to life and the State's economy. Water suppliers, the Rhode Island Water Resources Board (RIWRB), and other State agencies responsible for water resources in Rhode Island need information about available resources, the water-supply infrastructure, and water use patterns. These decision makers need historical, current, and future water-resource information. In 1997, the State of Rhode Island formalized a system of Water Supply System Management Plans (WSSMPs) to characterize and document relevant water-supply information. All major water suppliers (those that obtain, transport, purchase, or sell more than 50 million gallons of water per year) are required to prepare, maintain, and carry out WSSMPs. An electronic database for this WSSMP information has been deemed necessary by the RIWRB for water suppliers and State agencies to consistently document, maintain, and interpret the information in these plans. Availability of WSSMP data in standard formats will allow water suppliers and State agencies to improve the understanding of water-supply systems and to plan for future needs or water-supply emergencies. In 2002, however, the Rhode Island General Assembly passed a law that classifies some of the WSSMP information as confidential to protect the water-supply infrastructure from potential terrorist threats. Therefore the WSSMP database was designed for an implementation method that will balance security concerns with the information needs of the RIWRB, suppliers, other State agencies, and the public. A WSSMP database was developed by the U.S. Geological Survey in cooperation with the RIWRB. The database was designed to catalog WSSMP information in a format that would accommodate synthesis of current and future information about Rhode Island's water-supply infrastructure. 
This report documents the design and implementation of the WSSMP database. All WSSMP information in the database is, ultimately, linked to the individual water suppliers and to a WSSMP 'cycle' (which is currently a 5-year planning cycle for compiling WSSMP information). The database file contains 172 tables: 47 data tables, 61 association tables, 61 domain tables, and 3 example import-link tables. The database is currently implemented in the Microsoft Access database software because it is widely used within and outside of government and is familiar to many existing and potential customers. Design documentation facilitates current use and potential modification for future use of the database. The structure of the WSSMP database file (WSSMPv01.mdb), a data dictionary file (WSSMPDD1.pdf), a detailed database-design diagram (WSSMPPL1.pdf), and this database-design report (OFR2004-1231.pdf) together document the design of the database. This report includes a discussion of each WSSMP data structure with an accompanying database-design diagram. Appendix 1 of this report is an index of the diagrams in the report and on the plate; this index is organized by table name in alphabetical order. Each of these products is included in digital format on the enclosed CD-ROM to facilitate use or modification of the database.
Burley, Thomas E.
2011-01-01
The U.S. Geological Survey, in cooperation with the New Mexico Interstate Stream Commission, developed a geodatabase compendium (hereinafter referred to as the 'geodatabase') of available water-resources data for the reach of the Rio Grande from Rio Arriba-Sandoval County line, New Mexico, to Presidio, Texas. Since 1889, a wealth of water-resources data has been collected in the Rio Grande Basin from Rio Arriba-Sandoval County line, New Mexico, to Presidio, Texas, for a variety of purposes. Collecting agencies, researchers, and organizations have included the U.S. Geological Survey, Bureau of Reclamation, International Boundary and Water Commission, State agencies, irrigation districts, municipal water utilities, universities, and other entities. About 1,750 data records were recently (2010) evaluated to enhance their usability by compiling them into a single geospatial relational database (geodatabase). This report is intended as a user's manual and administration guide for the geodatabase. All data available, including water quality, water level, and discharge data (both instantaneous and daily) from January 1, 1889, through December 17, 2009, were compiled for the study area. A flexible and efficient geodatabase design was used, enhancing the ability of the geodatabase to handle data from diverse sources and helping to ensure sustainability of the geodatabase with long-term maintenance. Geodatabase tables include daily data values, site locations and information, sample event information, and parameters, as well as data sources and collecting agencies. The end products of this effort are a comprehensive water-resources geodatabase that enables the visualization of primary sampling sites for surface discharges, groundwater elevations, and water-quality and associated data for the study area. 
In addition, repeatable data processing scripts, Structured Query Language queries for loading prepared data sources, and a detailed process for refreshing all data in the compendium have been developed. The geodatabase functionality allows users to explore spatial characteristics of the data, conduct spatial analyses, and pose questions to the geodatabase in the form of queries. Users can also customize and extend the geodatabase, combine it with other databases, or use the geodatabase design for other water-resources applications.
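A toy version of the geodatabase pattern described above (a site table keyed to a table of daily values, with questions posed as SQL queries) might look like the following sketch in SQLite; the table names, columns, and data values are hypothetical, not the actual geodatabase schema:

```python
import sqlite3

# Hypothetical miniature of the described layout: site locations plus
# daily data values keyed to site and parameter.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE site (site_id INTEGER PRIMARY KEY, name TEXT,
                   latitude REAL, longitude REAL);
CREATE TABLE daily_value (site_id INTEGER REFERENCES site(site_id),
                          date TEXT, parameter TEXT, value REAL);
INSERT INTO site VALUES (1, 'Rio Grande at Presidio', 29.56, -104.37);
INSERT INTO daily_value VALUES (1, '1990-06-01', 'discharge_cfs', 812.0);
INSERT INTO daily_value VALUES (1, '1990-06-02', 'discharge_cfs', 790.0);
""")

# Posing a question to the geodatabase as a query:
# mean daily discharge at a given site.
(mean_q,) = db.execute("""SELECT AVG(value) FROM daily_value
    WHERE site_id = 1 AND parameter = 'discharge_cfs'""").fetchone()
print(mean_q)
```

Keeping sites, parameters, and values in separate tables is what lets one design absorb data from many collecting agencies: a new source only adds rows, not new table structures.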
User assumptions about information retrieval systems: Ethical concerns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Froehlich, T.J.
Information professionals, whether designers, intermediaries, database producers, or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current, and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in forms of author names or journal titles or indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter; etc. The truth is that few of these assumptions are valid in commercial, corporate, or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should, where possible, intervene to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals and systems have an obligation to provide some framework or context for the information that users are accessing.
A perceptive method for handwritten text segmentation
NASA Astrophysics Data System (ADS)
Lemaitre, Aurélie; Camillerapp, Jean; Coüasnon, Bertrand
2011-01-01
This paper presents a new method to address the problem of segmenting handwritten text into text lines and words. We propose a method based on the cooperation among points of view that enables the localization of the text lines in a low-resolution image and then associates the pixels at a higher level of resolution. Thanks to the combination of levels of vision, we can detect overlapping characters and re-segment the connected components during the analysis. Then, we propose a segmentation of lines into words based on the cooperation between digital data and symbolic knowledge. The digital data are obtained from distances inside a Delaunay graph, which gives a precise distance between connected components at the pixel level. We introduce structural rules in order to take into account some generic knowledge about the organization of a text page. This cooperation among information sources provides greater expressive power and ensures the global coherence of the recognition. We validate this work using the metrics and the database proposed for the segmentation contest of ICDAR 2009, and show that our method obtains very good results compared with the other methods in the literature. More precisely, we are able to deal with slope and curvature, overlapping text lines, and varied kinds of writing, which are the main difficulties met by the other methods.
Singh, Vinayak; Goel, Ridhi; Pande, Veena; Asif, Mehar Hasan; Mohanty, Chandra Sekhar
2017-01-01
Condensed tannins (CTs), or proanthocyanidins (PAs), are a unique group of phenolic metabolites with high molecular weight and specific structure. It is reported that the presence of high CT content in legumes adversely affects the nutrients in the plant and impairs digestibility upon consumption by animals. Winged bean (Psophocarpus tetragonolobus (L.) DC.) is one of the promising underutilized legumes, with high protein and oil content. One of the reasons for its underutilization is the presence of CT. Transcriptome sequencing of leaves of two diverse CT-containing lines of P. tetragonolobus was carried out on an Illumina NextSeq 500 sequencer to identify the underlying genes and contigs responsible for CT biosynthesis. RNA-Seq data generated 102,586 and 88,433 contigs for the high-CT (HCTW) and low-CT (LCTW) lines of P. tetragonolobus, respectively. Similarity searches against the gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) databases revealed 5,210 contigs involved in 229 different pathways. A total of 1,235 contigs were found to be differentially expressed between the HCTW and LCTW lines. This study and its findings will be helpful in providing information for functional and comparative genomic analysis of condensed tannin biosynthesis in this plant specifically and in legumes in general. PMID:28322296
Kheifets, Leeka; Crespi, Catherine M; Hooper, Chris; Oksuzyan, Sona; Cockburn, Myles; Ly, Thomas; Mezei, Gabor
2015-01-01
We conducted a large epidemiologic case-control study in California to examine the association between childhood cancer risk and distance from the home address at birth to the nearest high-voltage overhead transmission line as a replication of the study of Draper et al. in the United Kingdom. We present a detailed description of the study design, methods of case ascertainment, control selection, exposure assessment and data analysis plan. A total of 5788 childhood leukemia cases and 3308 childhood central nervous system cancer cases (included for comparison) and matched controls were available for analysis. Birth and diagnosis addresses of cases and birth addresses of controls were geocoded. Distance from the home to nearby overhead transmission lines was ascertained on the basis of the electric power companies’ geographic information system (GIS) databases, additional Google Earth aerial evaluation and site visits to selected residences. We evaluated distances to power lines up to 2000 m and included consideration of lower voltages (60–69 kV). Distance measures based on GIS and Google Earth evaluation showed close agreement (Pearson correlation >0.99). Our three-tiered approach to exposure assessment allowed us to achieve high specificity, which is crucial for studies of rare diseases with low exposure prevalence. PMID:24045429
Gallo, Marco; Malandrino, Pasqualino; Fanciulli, Giuseppe; Rota, Francesca; Faggiano, Antongiulio; Colao, Annamaria
2017-07-01
Everolimus has been shown to be effective for advanced pancreatic neuroendocrine tumours (pNETs), but its positioning in the therapeutic algorithm for pNETs is a matter of debate. With the aim of shedding light on this point, we performed an up-to-date critical review taking into account the results of both retrospective and prospective published studies, and the recommendations of international guidelines. In addition, we performed an extensive search of the Clinical Trial Registries databases worldwide to gather information on the ongoing clinical trials related to this specific topic. We identified eight retrospective published studies, two prospective published studies, and five registered clinical trials. Moreover, we analyzed the content of four widespread international guidelines. Our critical review confirms the lack of high-quality data to recommend everolimus as the first-line therapy for pNETs. The ongoing clinical trials reported in this review will hopefully help clinicians, in the near future, to better evaluate the role of everolimus as the first-line therapy for pNETs. However, at the moment, there is already enough evidence to recommend everolimus as the first-line therapy for patients with symptomatic malignant unresectable insulin-secreting pNETs, to control the endocrine syndrome regardless of tumour growth.
Pharmacogenomic agreement between two cancer cell line data sets.
2015-12-03
Large cancer cell line collections broadly capture the genomic diversity of human cancers and provide valuable insight into anti-cancer drug response. Here we show substantial agreement and biological consilience between drug sensitivity measurements and their associated genomic predictors from two publicly available large-scale pharmacogenomics resources: The Cancer Cell Line Encyclopedia and the Genomics of Drug Sensitivity in Cancer databases.
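One common way to summarize such cross-resource agreement is a rank (Spearman) correlation between the drug-sensitivity values the two resources report for the same cell lines. The sketch below computes it from made-up values with no external libraries, assuming no tied values; it illustrates the agreement measure, not the authors' actual analysis:

```python
# Hypothetical IC50-style sensitivity values for the same five cell
# lines as measured by two resources. Agreement is summarized by the
# Spearman correlation: Pearson correlation of the value ranks.
def ranks(xs):
    """Return 1-based ranks of xs (assumes no tied values)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

resource_a = [0.1, 0.4, 0.9, 2.0, 5.5]   # made-up values
resource_b = [0.2, 0.5, 1.1, 1.8, 6.0]   # made-up values
rho = pearson(ranks(resource_a), ranks(resource_b))
print(rho)  # identical rankings give rho = 1.0
```

Rank correlation is a natural choice here because the two resources use different assays and units, so only the ordering of cell lines by sensitivity is directly comparable.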
A New Approach To Secure Federated Information Bases Using Agent Technology.
ERIC Educational Resources Information Center
Weippl, Edgar; Klug, Ludwig; Essmayr, Wolfgang
2003-01-01
Discusses database agents which can be used to establish federated information bases by integrating heterogeneous databases. Highlights include characteristics of federated information bases, including incompatible database management systems, schemata, and frequently changing context; software agent technology; Java agents; system architecture;…
A Database of Historical Information on Landslides and Floods in Italy
NASA Astrophysics Data System (ADS)
Guzzetti, F.; Tonelli, G.
2003-04-01
For the past 12 years we have maintained and updated a database of historical information on landslides and floods in Italy, known as the National Research Council's AVI (Damaged Urban Areas) Project archive. The database was originally designed to respond to a specific request of the Minister of Civil Protection, and was aimed at helping the regional assessment of landslide and flood risk in Italy. The database was first constructed in 1991-92 to cover the period 1917 to 1990. Information on damaging landslide and flood events was collected by searching archives, by screening thousands of newspaper issues, by reviewing the existing technical and scientific literature on landslides and floods in Italy, and by interviewing landslide and flood experts. The database was then updated chiefly through the analysis of hundreds of newspaper articles, and it now covers systematically the period 1900 to 1998, and non-systematically the periods 1900 to 1916 and 1999 to 2002. Non-systematic information on landslide and flood events older than the 20th century is also present in the database. The database currently contains information on more than 32,000 landslide events that occurred at more than 25,700 sites, and on more than 28,800 flood events that occurred at more than 15,600 sites. After a brief outline of the history and evolution of the AVI Project archive, we present and discuss: (a) the present structure of the database, including the hardware and software solutions adopted to maintain, manage, use and disseminate the information stored in the database; (b) the type and amount of information stored in the database, including an estimate of its completeness; and (c) examples of recent applications of the database, including a web-based GIS system to show the location of sites historically affected by landslides and floods, and an estimate of geo-hydrological (i.e., landslide and flood) risk in Italy based on the available historical information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldman, U.; Doschek, G.A. (Space Science Division, Naval Research Laboratory, Washington, DC 20375-5320)
We list observed parity-forbidden and spin-forbidden lines in the 500-1600 A range emitted by solar coronal plasmas and derive improved energy levels from their wavelengths. The lines, emitted by astrophysically abundant elements, belong to transitions within the ground configurations of the type ns^2 np^k, for n = 2, 3 and k = 0-5, and between the lowest term of the first excited configuration 2s2p^(k+1) and the 2s^2 2p^k ground configurations for k = 0, 1, 2. For each line we give the newly measured wavelength, the measured or predicted wavelength from the NIST Atomic Spectra Database (ASD) (which, except for a few cases, includes the previously reported compilation of Kaufman and Sugar [J. Phys. Chem. Ref. Data 15 (1986) 321]), and the values of the transition probability taken from the ASD and CHIANTI databases. The list contains measured wavelengths of 136 lines, of which over 100 were not available for the Kaufman and Sugar compilation. In addition, we provide energy levels that were derived from the reported lines.
An On-Line Technology Information System (OTIS) for Advanced Life Support
NASA Technical Reports Server (NTRS)
Levri, Julie A.; Boulanger, Richard; Hogan, John A.; Rodriguez, Luis
2003-01-01
An On-line Technology Information System (OTIS) is currently being developed for the Advanced Life Support (ALS) Program. This paper describes the preliminary development of OTIS, which is a system designed to provide centralized collection and organization of technology information. The lack of thorough, reliable and easily understood technology information is a major obstacle in effective assessment of technology development progress, trade studies, metric calculations, and technology selection for integrated testing. OTIS will provide a formalized, well-organized protocol to communicate thorough, accurate, current and relevant technology information between the hands-on technology developer and the ALS Community. The need for this type of information transfer system within the Solid Waste Management (SWM) element was recently identified and addressed. A SWM Technology Information Form (TIF) was developed specifically for collecting detailed technology information in the area of SWM. In the TIF, information is requested from SWM technology developers, based upon the Technology Readiness Level (TRL). Basic information is requested for low-TRL technologies, and more detailed information is requested as the TRL of the technology increases. A comparable form is also being developed for the wastewater processing element. In the future, similar forms will also be developed for the ALS elements of air revitalization, food processing, biomass production and thermal control. These ALS element-specific forms will be implemented in OTIS via a web-accessible interface, with the data stored in an object-oriented relational database (created in MySQL) located on a secure server at NASA Ames Research Center. With OTIS, ALS element leads and managers will be able to carry out informed research and development investment, thereby promoting technology through the TRL scale. OTIS will also allow analysts to make accurate evaluations of technology options.
Additionally, the range and specificity of information solicited will help educate technology developers of programmatic needs.
Human interface to large multimedia databases
NASA Astrophysics Data System (ADS)
Davis, Ben; Marks, Linn; Collins, Dave; Mack, Robert; Malkin, Peter; Nguyen, Tam
1994-04-01
The emergence of high-speed networking for multimedia will have the effect of turning the computer screen into a window on a very large information space. As this information space increases in size and complexity, providing users with easy and intuitive means of accessing information will become increasingly important. Providing access to large amounts of text has been the focus of work for hundreds of years and has resulted in the evolution of a set of standards, from the Dewey Decimal System for libraries to the recently proposed ANSI standards for representing information on-line: KIF (Knowledge Interchange Format) and CGs (Conceptual Graphs). Certain problems remain unsolved by these efforts, though: how to let users know the contents of the information space, so that they know whether or not they want to search it in the first place; how to facilitate browsing; and, more specifically, how to facilitate visual browsing. These issues are particularly important for users in educational contexts and have been the focus of much of our recent work. In this paper we discuss some of the solutions we have prototyped: specifically, visual means, visual browsers, and visual definitional sequences.
77 FR 66617 - HIT Policy and Standards Committees; Workgroup Application Database
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-06
... Database AGENCY: Office of the National Coordinator for Health Information Technology, HHS. ACTION: Notice of New ONC HIT FACA Workgroup Application Database. The Office of the National Coordinator (ONC) has launched a new Health Information Technology Federal Advisory Committee Workgroup Application Database...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, K.; Chubb, C.; Huberman, E.
High-resolution two-dimensional gel electrophoresis (2DE) and database analysis were used to establish protein expression patterns for cultured normal human mammary epithelial cells and thirteen breast cancer cell lines. The Human Breast Epithelial Cell database contains the 2DE protein patterns, including relative protein abundances, for each cell line, plus a composite pattern that contains all the common and specifically expressed proteins from all the cell lines. Significant differences in protein expression, both qualitative and quantitative, were observed not only between normal cells and tumor cells, but also among the tumor cell lines. Eight percent of the consistently detected proteins were found in significantly (P < 0.001) variable levels among the cell lines. Using a combination of immunostaining, comigration with purified protein, subcellular fractionation, and amino-terminal protein sequencing, we identified a subset of the differentially expressed proteins. These identified proteins include the cytoskeletal proteins actin, tubulin, vimentin, and cytokeratins. The cell lines can be classified into four distinct groups based on their intermediate filament protein profile. We also identified heat shock proteins; hsp27, hsp60, and hsp70 varied in abundance and in some cases in relative phosphorylation levels among the cell lines. Finally, we identified IMP dehydrogenase in each of the cell lines, and found the levels of this enzyme in the tumor cell lines elevated 2- to 20-fold relative to the levels in normal cells.
Directory of Assistive Technology: Data Sources.
ERIC Educational Resources Information Center
Council for Exceptional Children, Reston, VA. Center for Special Education Technology.
The annotated directory describes in detail both on-line and print databases in the area of assistive technology for individuals with disabilities. For each database, the directory provides the name, address, and telephone number of the sponsoring organization; disability areas served; number of hardware and software products; types of information…
49 CFR 375.103 - What are the definitions of terms used in this part?
Code of Federal Regulations, 2011 CFR
2011-10-01
...) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY... household goods transportation service. This includes written or electronic database listings of your name, address, and telephone number in an on-line database. This excludes listings of your name, address, and...
49 CFR 375.103 - What are the definitions of terms used in this part?
Code of Federal Regulations, 2010 CFR
2010-10-01
...) FEDERAL MOTOR CARRIER SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION FEDERAL MOTOR CARRIER SAFETY... household goods transportation service. This includes written or electronic database listings of your name, address, and telephone number in an on-line database. This excludes listings of your name, address, and...
E-MSD: an integrated data resource for bioinformatics.
Velankar, S; McNeil, P; Mittard-Runte, V; Suarez, A; Barrell, D; Apweiler, R; Henrick, K
2005-01-01
The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the worldwide Protein Data Bank (wwPDB) and to work towards the integration of various bioinformatics data resources. One of the major obstacles to the improved integration of structural databases such as MSD and sequence databases like UniProt is the absence of up-to-date and well-maintained mappings between corresponding entries. We have worked closely with the UniProt group at the EBI to clean up the taxonomy and sequence cross-reference information in the MSD and UniProt databases. This information is vital for the reliable integration of sequence family databases such as Pfam and Interpro with the structure-oriented databases of SCOP and CATH. This information has been made available to the eFamily group (http://www.efamily.org.uk/) and now forms the basis of the regular interchange of information between the member databases (MSD, UniProt, Pfam, Interpro, SCOP and CATH). This exchange of annotation information has enriched the structural information in the MSD database with annotation from wider sequence-oriented resources. This work was carried out under the 'Structure Integration with Function, Taxonomy and Sequences (SIFTS)' initiative (http://www.ebi.ac.uk/msd-srv/docs/sifts) in the MSD group.
75 FR 29155 - Publicly Available Consumer Product Safety Information Database
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-24
...The Consumer Product Safety Commission (``Commission,'' ``CPSC,'' or ``we'') is issuing a notice of proposed rulemaking that would establish a publicly available consumer product safety information database (``database''). Section 212 of the Consumer Product Safety Improvement Act of 2008 (``CPSIA'') amended the Consumer Product Safety Act (``CPSA'') to require the Commission to establish and maintain a publicly available, searchable database on the safety of consumer products, and other products or substances regulated by the Commission. The proposed rule would interpret various statutory requirements pertaining to the information to be included in the database and also would establish provisions regarding submitting reports of harm; providing notice of reports of harm to manufacturers; publishing reports of harm and manufacturer comments in the database; and dealing with confidential and materially inaccurate information.
The CIS Database: Occupational Health and Safety Information Online.
ERIC Educational Resources Information Center
Siegel, Herbert; Scurr, Erica
1985-01-01
Describes document acquisition, selection, indexing, and abstracting and discusses online searching of the CIS database, an online system produced by the International Occupational Safety and Health Information Centre. This database comprehensively covers information in the field of occupational health and safety. Sample searches and search…
Access to DNA and protein databases on the Internet.
Harper, R
1994-02-01
During the past year, the number of biological databases that can be queried via Internet has dramatically increased. This increase has resulted from the introduction of networking tools, such as Gopher and WAIS, that make it easy for research workers to index databases and make them available for on-line browsing. Biocomputing in the nineties will see the advent of more client/server options for the solution of problems in bioinformatics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Tengfang; Piette, Mary Ann
2004-08-05
The original scope of work was to obtain and analyze existing and emerging data in four states: California, Florida, New York, and Wisconsin. The goal of this data collection was to deliver a baseline database or recommendations for such a database that could possibly contain window and daylighting features and energy performance characteristics of Kindergarten through 12th grade (K-12) school buildings (or those of classrooms when available). In particular, data analyses were performed based upon the California Commercial End-Use Survey (CEUS) databases to understand school energy use, features of window glazing, and availability of daylighting in California K-12 schools. The outcomes from this baseline task can be used to assist in establishing a database of school energy performance, assessing applications of existing technologies relevant to window and daylighting design, and identifying future R&D needs. These are in line with the overall project goals as outlined in the proposal. Through the review and analysis of this data, it is clear that there are many compounding factors impacting energy use in K-12 school buildings in the U.S., and that there are various challenges in understanding the impact of K-12 classroom energy use associated with design features of window glazing and skylight. First, the energy data in the existing CEUS databases has, at most, provided the aggregated electricity and/or gas usages for the building establishments that include other school facilities on top of the classroom spaces. Although the percentage of classroom floor area in schools is often available from the databases, there is no additional information that can be used to quantitatively segregate the EUI for classroom spaces. In order to quantify the EUI for classrooms, sub-metering of energy usage by classrooms must be obtained.
Second, magnitudes of energy use for electric lighting are not attainable from the existing databases, nor are the lighting levels contributed by artificial lighting or daylight. It is therefore impossible to reasonably estimate the lighting energy consumption for classroom areas in the sample of schools studied in this project. Third, many other compounding factors may also influence overall classroom energy use, e.g., ventilation, insulation, system efficiency, occupancy, controls, schedules, and weather. Fourth, although we have examined school EUI grouped by various factors such as climate zone and window and daylighting design features from the California databases, no statistically significant associations can be identified from the sampled California K-12 schools in the current California CEUS. There are opportunities to expand such analyses by developing and including more powerful CEUS databases in the future. Finally, a list of parameters is recommended for future database development and for future investigation of K-12 classroom energy use, window and skylight design, and possible relations between them. Some of the key parameters include: (1) energy end-use data for lighting systems, classrooms, and schools; (2) building design and operation, including features for windows and daylighting; and (3) other key parameters and information that would be available to investigate overall energy use, building and systems design, their operation, and services provided.
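The abstract's first finding turns on the distinction between whole-building CEUS figures and a classroom-level EUI derived from sub-metering. A minimal sketch of that calculation, with invented readings and a hypothetical classroom area (nothing here is drawn from the CEUS databases):

```python
# Hypothetical sketch: deriving a classroom-level Energy Use Intensity (EUI)
# from sub-metered readings, as the abstract notes whole-building CEUS data
# cannot. All numbers and names are illustrative.

def classroom_eui(monthly_kwh, floor_area_sqft):
    """Annual EUI in kWh/ft^2 from 12 monthly sub-meter readings."""
    if len(monthly_kwh) != 12:
        raise ValueError("expected 12 monthly readings")
    return sum(monthly_kwh) / floor_area_sqft

# Example: a 960 ft^2 classroom with lower summer usage.
readings = [380, 390, 410, 400, 420, 150, 120, 140, 430, 410, 400, 390]
print(round(classroom_eui(readings, 960.0), 2))  # 4.21
```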
Dimensional modeling: beyond data processing constraints.
Bunardzic, A
1995-01-01
The focus of information processing requirements is shifting from on-line transaction processing (OLTP) issues to on-line analytical processing (OLAP) issues. While the former serves to ensure the feasibility of real-time on-line transaction processing (which already exceeds 1,000 transactions per second under normal conditions), the latter aims at enabling more sophisticated analytical manipulation of data. The OLTP requirements, or how to efficiently get data into the system, have been solved by applying relational theory in the form of the Entity-Relationship model. There is presently no theory related to OLAP that would resolve the analytical processing requirements as efficiently as relational theory did for transaction processing. The "relational dogma" also provides the mathematical foundation for the Centralized Data Processing paradigm, in which mission-critical information is incorporated as 'one and only one instance' of data, thus ensuring data integrity. In such surroundings, the information that supports business analysis and decision support activities is obtained by running predefined reports and queries provided by the IS department. In today's intensified competitive climate, businesses are finding that this traditional approach is not good enough. The only way to stay on top of things, and to survive and prosper, is to decentralize the IS services. The newly emerging Distributed Data Processing, with its increased emphasis on empowering the end user, does not find enough merit in the relational database model to justify relying upon it. Relational theory has proved too rigid and complex to accommodate analytical processing needs. To satisfy the OLAP requirements, or how to efficiently get the data out of the system, different models, metaphors, and theories have been devised.
All of them point to the need to simplify the highly non-intuitive mathematical constraints found in relational databases normalized to third normal form. The object-oriented approach insists on the importance of the common-sense component of data processing activities. Particularly interesting, though, is the approach that advocates 'flattening' the structure of business models as we know them today. This discipline is called Dimensional Modeling, and it enables users to form multidimensional views of the relevant facts, which are stored in a 'flat' (non-structured), easy-to-comprehend and easy-to-access database. When using dimensional modeling, we relax many of the axioms inherent in the relational model and focus on the relevant facts that reflect business operations and are the real basis for decision support and business analysis. At the core of dimensional modeling are fact tables, which contain the non-discrete, additive data. To determine the level of aggregation of these facts, we use granularity tables that specify the resolution, or level of detail, that the user is allowed to entertain. The third component is dimension tables, which embody the knowledge of the constraints used to form the views.
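The fact-table/dimension-table split the abstract describes can be sketched as a minimal star schema in an in-memory SQLite database. Table and column names here are illustrative inventions, not taken from the paper:

```python
# Minimal star-schema sketch: dimension tables carry the constraints, the
# fact table carries additive measures, and a "multidimensional view" is a
# join-and-aggregate over them.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, month TEXT);
-- Fact table: foreign keys into the dimensions plus an additive measure.
CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount REAL);
""")
con.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "books"), (2, "games")])
con.executemany("INSERT INTO dim_date VALUES (?, ?)",
                [(10, "Jan"), (11, "Feb")])
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 10, 100.0), (1, 11, 50.0), (2, 10, 75.0)])

# Aggregate the additive fact, constrained by a dimension rather than by a
# normalized join graph.
rows = con.execute("""
    SELECT p.category, SUM(f.amount)
      FROM fact_sales f JOIN dim_product p USING (product_id)
     GROUP BY p.category ORDER BY p.category
""").fetchall()
print(rows)  # [('books', 150.0), ('games', 75.0)]
```

Note how the query constrains and groups only through the dimension table, which is the "view-forming" role the abstract assigns to dimensions.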
48 CFR 804.1102 - Vendor Information Pages (VIP) Database.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...
48 CFR 804.1102 - Vendor Information Pages (VIP) Database.
Code of Federal Regulations, 2013 CFR
2013-10-01
... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...
48 CFR 804.1102 - Vendor Information Pages (VIP) Database.
Code of Federal Regulations, 2014 CFR
2014-10-01
... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...
48 CFR 804.1102 - Vendor Information Pages (VIP) Database.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...
48 CFR 804.1102 - Vendor Information Pages (VIP) Database.
Code of Federal Regulations, 2010 CFR
2010-10-01
... (VIP) Database. 804.1102 Section 804.1102 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS GENERAL ADMINISTRATIVE MATTERS Contract Execution 804.1102 Vendor Information Pages (VIP) Database. Prior to January 1, 2012, all VOSBs and SDVOSBs must be listed in the VIP database, available at http...
Alternative treatment technology information center computer database system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, D.
1995-10-01
The Alternative Treatment Technology Information Center (ATTIC) computer database system was developed pursuant to the 1986 Superfund law amendments. It provides up-to-date information on innovative treatment technologies to clean up hazardous waste sites. ATTIC v2.0 provides access to several independent databases as well as a mechanism for retrieving full-text documents of key literature. It can be accessed with a personal computer and modem 24 hours a day, and there are no user fees. ATTIC provides "one-stop shopping" for information on alternative treatment options by accessing several databases: (1) treatment technology database: contains abstracts from the literature on all types of treatment technologies, including biological, chemical, physical, and thermal methods; the best literature as viewed by experts is highlighted. (2) treatability study database: provides performance information on technologies to remove contaminants from wastewaters and soils, derived from treatability studies; this database is available through ATTIC or separately as a disk that can be mailed to you. (3) underground storage tank database: presents information on underground storage tank corrective actions, surface spills, emergency response, and remedial actions. (4) oil/chemical spill database: provides abstracts on treatment and disposal of spilled oil and chemicals. In addition to these separate databases, ATTIC allows immediate access to other disk-based systems such as the Vendor Information System for Innovative Treatment Technologies (VISITT) and the Bioremediation in the Field Search System (BFSS). Users may download these programs to their own PC via a high-speed modem. Also via modem, users are able to download entire documents through the ATTIC system. Currently, about fifty publications are available, including Superfund Innovative Technology Evaluation (SITE) program documents.
Potentials of Advanced Database Technology for Military Information Systems
2001-04-01
UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADP010866. TITLE: Potentials of Advanced Database Technology for Military Information Systems. Sunil Choenni (a), Ben Bruggeman (b); (a) National Aerospace Laboratory, NLR, P.O. Box 90502, 1006 BM Amsterdam... application of advanced information technology, including database technology, as underpinning...
SeedStor: A Germplasm Information Management System and Public Database.
Horler, R S P; Turner, A S; Fretter, P; Ambrose, M
2018-01-01
SeedStor (https://www.seedstor.ac.uk) acts as the publicly available database for the seed collections held by the Germplasm Resources Unit (GRU) based at the John Innes Centre, Norwich, UK. The GRU is a national capability supported by the Biotechnology and Biological Sciences Research Council (BBSRC). The GRU curates germplasm collections of a range of temperate cereal, legume and Brassica crops and their associated wild relatives, as well as precise genetic stocks, near-isogenic lines and mapping populations. With >35,000 accessions, the GRU forms part of the UK's plant conservation contribution to the Multilateral System (MLS) of the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA) for wheat, barley, oat and pea. SeedStor is a fully searchable system that allows the various collections to be browsed species by species or interrogated with complex, multipart phenotype-driven queries. The results from these searches can be downloaded for later analysis or used to order germplasm via the shopping cart. The user community for SeedStor comprises the plant science research community, plant breeders, specialist growers, hobby farmers, amateur gardeners and educationalists. Furthermore, SeedStor is much more than a database: it has been developed to act internally as a Germplasm Information Management System that allows team members to track and process germplasm requests, determine regeneration priorities, handle cost recovery and Material Transfer Agreement paperwork, manage the Seed Store holdings and easily report on a wide range of the aforementioned tasks. © The Author(s) 2017. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists.
48 CFR 52.204-10 - Reporting Executive Compensation and First-Tier Subcontract Awards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... System for Award Management (SAM) database (FAR provision 52.204-7), the Contractor shall report the... information from SAM and FPDS databases. If FPDS information is incorrect, the contractor should notify the contracting officer. If the SAM database information is incorrect, the contractor is responsible for...
48 CFR 52.204-10 - Reporting Executive Compensation and First-Tier Subcontract Awards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... System for Award Management (SAM) database (FAR provision 52.204-7), the Contractor shall report the... information from SAM and FPDS databases. If FPDS information is incorrect, the contractor should notify the contracting officer. If the SAM database information is incorrect, the contractor is responsible for...
ERIC Educational Resources Information Center
Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David
1999-01-01
Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
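The core idea in that abstract, holding an inverted index in relational tables and ranking with portable SQL, can be sketched as follows. The schema and the simple term-frequency score are illustrative assumptions, not the engine's actual design:

```python
# Sketch of database-driven retrieval: postings and query terms live in
# relational tables, and ranking is one join-and-aggregate query that stays
# within plain SQL-92 constructs. Demonstrated with SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE posting (doc_id INTEGER, term TEXT, tf INTEGER);
CREATE TABLE query_term (term TEXT);
""")
con.executemany("INSERT INTO posting VALUES (?, ?, ?)", [
    (1, "database", 3), (1, "parallel", 1),
    (2, "database", 1), (2, "retrieval", 2),
    (3, "parallel", 2),
])
con.executemany("INSERT INTO query_term VALUES (?)",
                [("database",), ("retrieval",)])

# Join postings against the query terms and sum term frequencies per
# document; documents sharing no query term simply drop out of the join.
rows = con.execute("""
    SELECT p.doc_id, SUM(p.tf) AS score
      FROM posting p JOIN query_term q ON p.term = q.term
     GROUP BY p.doc_id ORDER BY score DESC, p.doc_id
""").fetchall()
print(rows)  # [(1, 3), (2, 3)]
```

A production engine would weight terms (e.g., tf-idf) and fold relevance-feedback terms into `query_term`, but the relational shape of the computation is the same.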
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-09
... DEPARTMENT OF STATE [Public Notice 7976] 30-Day Notice of Proposed Information Collection: Civilian Response Corps Database In-Processing Electronic Form, OMB Control Number 1405-0168, Form DS-4096.... Title of Information Collection: Civilian Response Corps Database In-Processing Electronic Form. OMB...
Integrated Primary Care Information Database (IPCI)
The Integrated Primary Care Information Database is a longitudinal observational database created specifically for pharmacoepidemiological and pharmacoeconomic studies, including data from computer-based patient records supplied voluntarily by general practitioners.
Fernández, José M; Valencia, Alfonso
2004-10-12
Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodical dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
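YAdumper itself is a Java, DTD-template-driven tool; the sketch below only illustrates the general task it addresses, streaming rows out of a relational database as XML with flat memory use instead of materializing the whole document. Table and element names are invented for the example:

```python
# Streaming relational-to-XML dump: iterate the cursor row by row and write
# each element immediately, so memory use does not grow with table size.
import io
import sqlite3
from xml.sax.saxutils import escape

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE protein (id INTEGER, name TEXT)")
con.executemany("INSERT INTO protein VALUES (?, ?)",
                [(1, "p53"), (2, "BRCA1")])

def dump_xml(con, out):
    out.write("<proteins>")
    for pid, name in con.execute("SELECT id, name FROM protein ORDER BY id"):
        # escape() guards against markup characters in the data.
        out.write(f'<protein id="{pid}">{escape(name)}</protein>')
    out.write("</proteins>")

buf = io.StringIO()
dump_xml(con, buf)
print(buf.getvalue())
```

In a real dumper the element layout would come from an external template (as YAdumper's DTD-based template does) rather than being hard-coded.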
Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro
2013-08-01
Evidence-Based Practice requires health care decisions to be based on the best available evidence. The "Information Mastery" model proposes that clinicians should use sources of information whose relevance and validity have already been evaluated, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. Objective: to develop and test a method to evaluate the relevancy of DB contents by assessing the inclusion of information items deemed relevant for effective and safe drug use. Method: hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in the literature with morbidity and mortality and also widely consumed in Portugal. Main outcome measure: individual and global scores for clinically relevant information items of drug monographs in databases, calculated using the categorisation and quantification system created. A--Method development: selection of sections, subsections, relevant information items and corresponding requisites; system to categorise and quantify their inclusion; score and RD calculation procedure.
B--Pilot test: scores calculated for the 9 databases; globally, all databases evaluated differed significantly from the "ideal" database; some DB performed better, but performance was inconsistent at the subsection level within the same DB. The method developed allows quantification of the inclusion of relevant information items in DB and comparison with an "ideal" database. It remains necessary to consult diverse DB in order to find all the relevant information needed to support clinical drug use.
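The abstract does not give its scoring formulas, so the following is only a hedged illustration of the stated idea: quantify the inclusion of each relevant information item, sum to a score, and express the gap to the "ideal" database as a relative difference. The item names, weights, and RD formula are all assumptions for the sketch:

```python
# Hedged illustration of score and relative-difference (RD) calculation
# against an "ideal" database. All items and weights are invented.

def score(db_items, quantification):
    """Sum the quantification value of each item the database includes."""
    return sum(quantification[item] for item in db_items)

quantification = {"interactions": 3, "contraindications": 3,
                  "dosage": 2, "storage": 1}
ideal_score = sum(quantification.values())   # ideal DB includes everything: 9

db = {"interactions", "dosage"}              # a hypothetical database
s = score(db, quantification)                # 3 + 2 = 5
rd = (s - ideal_score) / ideal_score         # negative: falls short of ideal
print(s, round(rd, 3))  # 5 -0.444
```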
NASA Astrophysics Data System (ADS)
Wilzewski, Jonas S.; Gordon, Iouli E.; Kochanov, Roman V.; Hill, Christian; Rothman, Laurence S.
2016-01-01
To increase the potential for use of the HITRAN database in astronomy, experimental and theoretical line-broadening coefficients, line shifts and temperature-dependence exponents of molecules of planetary interest broadened by H2, He, and CO2 have been assembled from available peer-reviewed sources. The collected data were used to create semi-empirical models so that every HITRAN line of the studied molecules has corresponding parameters. Since H2 and He are major constituents in the atmospheres of gas giants, and CO2 predominates in the atmospheres of some rocky planets with volcanic activity, these spectroscopic data are important for remote sensing studies of planetary atmospheres. In this paper we make the first step in assembling complete sets of these parameters, thereby creating datasets for SO2, NH3, HF, HCl, OCS and C2H2.
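The temperature-dependence exponent mentioned above enters through the standard HITRAN power-law convention for the pressure-broadened half-width, gamma(T) = gamma(T_ref) * (T_ref/T)^n with T_ref = 296 K. The sample coefficient below is illustrative, not a value from the assembled datasets:

```python
# Evaluating a pressure-broadened Lorentz half-width at temperature T using
# the HITRAN power-law convention. The numerical inputs are illustrative.

T_REF = 296.0  # HITRAN reference temperature, kelvin

def halfwidth(gamma_ref, n, T, p=1.0):
    """Half-width (cm^-1) from gamma_ref (cm^-1/atm), exponent n,
    temperature T (K), and pressure p (atm)."""
    return gamma_ref * (T_REF / T) ** n * p

# e.g. a hypothetical H2-broadening coefficient of 0.08 cm^-1/atm with
# n = 0.5, evaluated at a cooler atmospheric temperature of 200 K:
print(round(halfwidth(0.08, 0.5, 200.0), 4))  # 0.0973
```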