An Overview of the Object Protocol Model (OPM) and the OPM Data Management Tools.
ERIC Educational Resources Information Center
Chen, I-Min A.; Markowitz, Victor M.
1995-01-01
Discussion of database management tools for scientific information focuses on the Object Protocol Model (OPM) and data management tools based on OPM. Topics include the need for new constructs for modeling scientific experiments, modeling object structures and experiments in OPM, queries and updates, and developing scientific database applications…
Data structures and organisation: Special problems in scientific applications
NASA Astrophysics Data System (ADS)
Read, Brian J.
1989-12-01
In this paper we discuss and offer answers to the following questions: What, really, are the benefits of databases in physics? Are scientific databases essentially different from conventional ones? What are the drawbacks of a commercial database management system for use with scientific data? Do they outweigh the advantages? Do database systems have adequate graphics facilities, or is a separate graphics package necessary? SQL as a standard language has deficiencies, but what are they for scientific data in particular? Indeed, is the relational model appropriate anyway? Or should we turn to object-oriented databases?
Geoscience research databases for coastal Alabama ecosystem management
Hummell, Richard L.
1995-01-01
Effective management of complex coastal ecosystems necessitates access to scientific knowledge that can be acquired through a multidisciplinary approach involving Federal and State scientists who take advantage of agency expertise and resources for the benefit of all participants working toward a set of common research and management goals. Cooperative geoscientific investigations have led to building databases of fundamental scientific knowledge that can be utilized to manage coastal Alabama's natural resources and future development. These databases have been used to assess the occurrence and economic potential of hard mineral resources in the Alabama EEZ, and to support oil spill contingency planning and environmental analysis for coastal Alabama.
Information Management System, Materials Research Society Fall Meeting (2013). Photovoltaics informatics: scientific data management, database and data systems design, database clusters, storage systems integration, and distributed data analytics. She has used her experience in laboratory data management systems…
Database Software Selection for the Egyptian National STI Network.
ERIC Educational Resources Information Center
Slamecka, Vladimir
The evaluation and selection of information/data management system software for the Egyptian National Scientific and Technical (STI) Network are described. An overview of the state-of-the-art of database technology elaborates on the differences between information retrieval and database management systems (DBMS). The desirable characteristics of…
1992-02-01
What Information Should Be Included in the TR Database? What Types of Media Can Be Used to Submit Information to the TR Database? How Is… reports, contract administration documents, regulations, commercially published books. …TOWARD DTIC'S WUIS DATABASE? The WUIS database, used to control and report technical and management data, summarizes ongoing research and technology
A DBMS architecture for global change research
NASA Astrophysics Data System (ADS)
Hachem, Nabil I.; Gennert, Michael A.; Ward, Matthew O.
1993-08-01
The goal of this research is the design and development of an integrated system for the management of very large scientific databases, cartographic/geographic information processing, and exploratory scientific data analysis for global change research. The system will represent both spatial and temporal knowledge about natural and man-made entities on the earth's surface, following an object-oriented paradigm. A user will be able to derive, modify, and apply procedures to perform operations on the data, including comparison, derivation, prediction, validation, and visualization. This work represents an effort to extend database technology with an intrinsic class of operators, which is extensible and responds to the growing needs of scientific research. Of significance is the integration of many diverse forms of data into the database, including cartography, geography, hydrography, hypsography, images, and urban planning data. Equally important is the maintenance of metadata, that is, data about the data, such as coordinate transformation parameters, map scales, and audit trails of previous processing operations. This project will impact the fields of geographical information systems and global change research as well as the database community. It will provide an integrated database management testbed for scientific research, and a testbed for the development of analysis tools to understand and predict global change.
The Marshall Islands Data Management Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoker, A.C.; Conrado, C.L.
1995-09-01
This report is a resource document of the methods and procedures currently used in the Data Management Program of the Marshall Islands Dose Assessment and Radioecology Project. Since 1973, over 60,000 environmental samples have been collected. Our program includes relational database design, programming, and maintenance; sample and information management; sample tracking; quality control; and data entry, evaluation, and reduction. A useful scientific database requires careful planning in order to fulfill the requirements of any large research program. Compilation of scientific results requires consolidation of information from several databases and incorporation of new information as it is generated. The success in combining and organizing all radionuclide analysis, sample information, and statistical results into a readily accessible form is critical to our project.
Reef Ecosystem Services and Decision Support Database
This scientific and management information database utilizes systems thinking to describe the linkages between decisions, human activities, and provisioning of reef ecosystem goods and services. This database provides: (1) Hierarchy of related topics - Click on topics to navigat...
NASA Technical Reports Server (NTRS)
1984-01-01
Management of the data within a planetary data system (PDS) is addressed. Principles of modern data management are described and several large NASA scientific data base systems are examined. Data management in PDS is outlined and the major data management issues are introduced.
Transitioning Newborns from NICU to Home: Family Information Packet
Next Steps After Your Diagnosis: Finding Information and Support
Blood Thinner Pills: Your Guide to Using Them Safely
Question Builder: Be Prepared for Your Next Medical Appointment
International Soil Carbon Network (ISCN) Database v3-1
Nave, Luke [University of Michigan] (ORCID:0000000182588335); Johnson, Kris [USDA-Forest Service]; van Ingen, Catharine [Microsoft Research]; Agarwal, Deborah [Lawrence Berkeley National Laboratory] (ORCID:0000000150452396); Humphrey, Marty [University of Virginia]; Beekwilder, Norman [University of Virginia]
2016-01-01
The ISCN is an international scientific community devoted to the advancement of soil carbon research. The ISCN manages an open-access, community-driven soil carbon database. This is version 3-1 of the ISCN Database, released in December 2015. It gathers 38 separate dataset contributions, totalling 67,112 sites with data from 71,198 soil profiles and 431,324 soil layers. For more information about the ISCN, its scientific community and resources, data policies and partner networks visit: http://iscn.fluxdata.org/.
Be More Involved in Your Health Care: Tips for Patients
DataHub: Knowledge-based data management for data discovery
NASA Astrophysics Data System (ADS)
Handley, Thomas H.; Li, Y. Philip
1993-08-01
Currently available database technology is largely designed for business data-processing applications and seems inadequate for scientific applications. The research described in this paper, the DataHub, will address the issues associated with this shortfall in technology utilization and development. The DataHub development is addressing the key issues in scientific data management of scientific database models and resource sharing in a geographically distributed, multi-disciplinary science research environment. Thus, the DataHub will be a server between the data suppliers and data consumers to facilitate data exchanges, to assist science data analysis, and to provide a systematic approach for science data management. More specifically, the DataHub's objectives are to provide support for (1) exploratory data analysis (i.e., data-driven analysis); (2) data transformations; (3) data semantics capture and usage; (4) analysis-related knowledge capture and usage; and (5) data discovery, ingestion, and extraction. Applying technologies ranging from deductive databases, semantic data models, data discovery, knowledge representation and inferencing, and exploratory data analysis techniques to modern man-machine interfaces, DataHub will provide a prototype, integrated environment to support research scientists' needs in multiple disciplines (i.e., oceanography, geology, and atmospheric science) while addressing the more general science data management issues. Additionally, the DataHub will provide data management services to exploratory data analysis applications such as LinkWinds and NCSA's XIMAGE.
Scientific information repository assisting reflectance spectrometry in legal medicine.
Belenki, Liudmila; Sterzik, Vera; Bohnert, Michael; Zimmermann, Klaus; Liehr, Andreas W
2012-06-01
Reflectance spectrometry is a fast and reliable method for the characterization of human skin if the spectra are analyzed with respect to a physical model describing the optical properties of human skin. For a field study performed at the Institute of Legal Medicine and the Freiburg Materials Research Center of the University of Freiburg, a scientific information repository has been developed, which is a variant of an electronic laboratory notebook and assists in the acquisition, management, and high-throughput analysis of reflectance spectra in heterogeneous research environments. At the core of the repository is a database management system hosting the master data. It is filled with primary data via a graphical user interface (GUI) programmed in Java, which also enables the user to browse the database and access the results of data analysis. The latter is carried out via Matlab, Python, and C programs, which retrieve the primary data from the scientific information repository, perform the analysis, and store the results in the database for further usage.
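The round trip described above, in which analysis programs retrieve primary data from the repository, compute results, and store them back in the database, can be sketched as follows. This is a minimal illustration using Python's built-in sqlite3 as a stand-in for the repository's DBMS; the table and column names are hypothetical, not taken from the paper.

```python
# Minimal sketch of the acquire-analyze-store cycle; sqlite3 stands in
# for the actual repository DBMS, and all names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spectra (spectrum_id INTEGER, wavelength_nm REAL, reflectance REAL)")
conn.execute("CREATE TABLE results (spectrum_id INTEGER, mean_reflectance REAL)")
conn.executemany("INSERT INTO spectra VALUES (?, ?, ?)",
                 [(1, 450.0, 0.31), (1, 550.0, 0.42), (1, 650.0, 0.55)])

# An analysis program retrieves the primary data, performs a (trivial)
# analysis, and stores the result back in the database for further use.
rows = conn.execute("SELECT reflectance FROM spectra WHERE spectrum_id = 1").fetchall()
mean_r = sum(r[0] for r in rows) / len(rows)
conn.execute("INSERT INTO results VALUES (?, ?)", (1, mean_r))
conn.commit()
print(conn.execute("SELECT * FROM results").fetchall())
```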
NASA Astrophysics Data System (ADS)
Brissebrat, Guillaume; Fleury, Laurence; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Asencio, Nicole; Favot, Florence; Roussot, Odile
2013-04-01
The AMMA information system aims at expediting the communication of data and scientific results inside the AMMA community and beyond. It has already been adopted as the data management system by several projects and is meant to become a reference information system about the West Africa area for the whole scientific community. The AMMA database and the associated online tools have been developed and are managed by two French teams (IPSL Database Centre, Palaiseau, and OMP Data Service, Toulouse). The complete system has been fully duplicated and is operated by AGRHYMET Regional Centre in Niamey, Niger. The AMMA database contains a wide variety of datasets: about 250 local observation datasets that cover geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health...), coming from either operational networks or scientific experiments and including historical data in West Africa from 1850; 1350 outputs of a socio-economics questionnaire; 60 operational satellite products and several research products; and 10 output sets of meteorological and ocean operational models and 15 of research simulations. Database users can access all the data using either the portal http://database.amma-international.org or http://amma.agrhymet.ne/amma-data. Different modules are available. The complete catalogue enables users to access metadata (i.e., information about the datasets) that are compliant with international standards (ISO19115, INSPIRE...). Registration pages enable users to read and sign the data and publication policy and to apply for a user database account. The data access interface enables users to easily build a data extraction request by selecting various criteria like location, time, parameters... At present, the AMMA database counts more than 740 registered users and processes about 80 data requests every month. In order to monitor day-to-day meteorological and environmental information over West Africa, quick-look and report display websites have been developed. They met the operational needs of the observational teams during the AMMA 2006 (http://aoc.amma-international.org) and FENNEC 2011 (http://fenoc.sedoo.fr) campaigns, but they also enable scientific teams to share physical indices along the monsoon season (http://misva.sedoo.fr from 2011). A collaborative WIKINDX tool has been set up online in order to manage scientific publications and communications of interest to AMMA (http://biblio.amma-international.org). The bibliographic database now counts about 1200 references and is the most exhaustive document collection about the African Monsoon available to all. Every scientist is invited to make use of the different AMMA online tools and data. Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
ENVIRONMENTAL INFORMATION MANAGEMENT SYSTEM (EIMS)
The Environmental Information Management System (EIMS) organizes descriptive information (metadata) for data sets, databases, documents, models, projects, and spatial data. The EIMS design provides a repository for scientific documentation that can be easily accessed with standar...
Application of cloud database in the management of clinical data of patients with skin diseases.
Mao, Xiao-fei; Liu, Rui; DU, Wei; Fan, Xue; Chen, Dian; Zuo, Ya-gang; Sun, Qiu-ning
2015-04-01
To evaluate the needs and applications of a cloud database in the daily practice of a dermatology department. The cloud database was established for systemic scleroderma and localized scleroderma. Paper forms were used to record the original data, including personal information, pictures, specimens, blood biochemical indicators, skin lesions, and scores of self-rating scales. The results were input into the cloud database. The applications of the cloud database in the dermatology department were summarized and analyzed. The personal and clinical information of 215 systemic scleroderma patients and 522 localized scleroderma patients was included and analyzed using the cloud database. The disease status, quality of life, and prognosis were obtained by statistical calculations. The cloud database can efficiently and rapidly store and manage the data of patients with skin diseases. As a simple, prompt, safe, and convenient tool, it can be used in patient information management, clinical decision-making, and scientific research.
Data, Data Everywhere but Not a Byte to Read: Managing Monitoring Information.
ERIC Educational Resources Information Center
Stafford, Susan G.
1993-01-01
Describes the Forest Science Data Bank that contains 2,400 data sets from over 350 existing ecological studies. Database features described include involvement of the scientific community; database documentation; data quality assurance; security; data access and retrieval; and data import/export flexibility. Appendices present the Quantitative…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Browne, S.V.; Green, S.C.; Moore, K.
1994-04-01
The Netlib repository, maintained by the University of Tennessee and Oak Ridge National Laboratory, contains freely available software, documents, and databases of interest to the numerical, scientific computing, and other communities. This report includes both the Netlib User's Guide and the Netlib System Manager's Guide, and contains information about Netlib's databases, interfaces, and system implementation. The Netlib repository's databases include the Performance Database, the Conferences Database, and the NA-NET mail forwarding and Whitepages Databases. A variety of user interfaces enable users to access the Netlib repository in the manner most convenient and compatible with their networking capabilities. These interfaces include the Netlib email interface, the Xnetlib X Windows client, the netlibget command-line TCP/IP client, anonymous FTP, anonymous RCP, and gopher.
The personal receiving document management and the realization of email function in OAS
NASA Astrophysics Data System (ADS)
Li, Biqing; Li, Zhao
2017-05-01
This software is an independent system suitable for small and medium enterprises. It contains personal office, scientific research project management, and system management functions, runs independently in the relevant environment, and addresses practical needs. It is developed using the currently popular B/S (browser/server) structure and ASP.NET technology, on the Windows 7 operating system, with Visual Studio 2008 and a Microsoft SQL Server 2005 database as the development platform.
[Establishment of a regional pelvic trauma database in Hunan Province].
Cheng, Liang; Zhu, Yong; Long, Haitao; Yang, Junxiao; Sun, Buhua; Li, Kanghua
2017-04-28
To establish a database for pelvic trauma in Hunan Province and to start the work of a multicenter pelvic trauma registry. Methods: To establish the database, literature relevant to pelvic trauma was screened, experience from established trauma databases in China and abroad was drawn upon, and the actual conditions for pelvic trauma rescue in Hunan Province were considered. The database for pelvic trauma was established based on PostgreSQL and the programming language Java 1.6. Results: The complex procedure for pelvic trauma rescue was described structurally. The contents of the database include general patient information, injury condition, prehospital rescue, condition on admission, treatment in hospital, status on discharge, diagnosis, classification, complications, trauma scoring, and therapeutic effect. The database can be accessed through the Internet via browser/server. The functions of the database include patient information management, data export, history query, progress report, video-image management, and personal information management. Conclusion: A whole-life-cycle pelvic trauma database has been successfully established for the first time in China. It is scientific, functional, practical, and user-friendly.
Use of a secure Internet Web site for collaborative medical research.
Marshall, W W; Haley, R W
2000-10-11
Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
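Two of the security measures listed above, user-name/password authentication and an audit trail of all accesses, can be sketched as below. This is a minimal illustration assuming a sqlite3 store and PBKDF2 password hashing; it is not the authors' implementation, and all names are hypothetical.

```python
# Hedged sketch: salted password verification plus an audit trail.
import hashlib, os, sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT PRIMARY KEY, salt BLOB, pwhash BLOB)")
db.execute("CREATE TABLE audit (ts REAL, name TEXT, action TEXT)")

def add_user(name, password):
    salt = os.urandom(16)  # per-user random salt
    pwhash = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    db.execute("INSERT INTO users VALUES (?, ?, ?)", (name, salt, pwhash))

def authenticate(name, password):
    row = db.execute("SELECT salt, pwhash FROM users WHERE name = ?",
                     (name,)).fetchone()
    ok = row is not None and hashlib.pbkdf2_hmac(
        "sha256", password.encode(), row[0], 100_000) == row[1]
    # Every access attempt, successful or not, goes into the audit trail.
    db.execute("INSERT INTO audit VALUES (?, ?, ?)",
               (time.time(), name, "login-ok" if ok else "login-fail"))
    return ok

add_user("collaborator", "s3cret")
print(authenticate("collaborator", "s3cret"))  # True
print(authenticate("collaborator", "wrong"))   # False
```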
Researchers in the National Exposure Research Laboratory (NERL) have performed a number of large human exposure measurement studies during the past decade. It is the goal of the NERL to make the data available to other researchers for analysis in order to further the scientific ...
Hosting and publishing astronomical data in SQL databases
NASA Astrophysics Data System (ADS)
Galkin, Anastasia; Klar, Jochen; Riebe, Kristin; Matokevic, Gal; Enke, Harry
2017-04-01
In astronomy, terabytes and petabytes of data are produced by ground instruments, satellite missions, and simulations. At the Leibniz-Institute for Astrophysics Potsdam (AIP) we host and publish terabytes of cosmological simulation and observational data. The public archive at AIP has now reached a size of 60 TB and continues to grow, and it has helped to produce numerous scientific papers. The web framework Daiquiri offers a dedicated web interface for each of the hosted scientific databases. Scientists all around the world run SQL queries, which include specific astrophysical functions, and get their desired data in reasonable time. Daiquiri supports the scientific projects by offering a number of administration tools such as database and user management, contact messages to the staff, and support for the organization of meetings and workshops. The webpages can be customized, and the WordPress integration supports the participating scientists in maintaining the documentation and the projects' news sections.
Scanlon, Kathryn M.; Waller, Rhian G.; Sirotek, Alexander R.; Knisel, Julia M.; O'Malley, John; Alesandrini, Stian
2010-01-01
The USGS Cold-Water Coral Geographic Database (CoWCoG) provides a tool for researchers and managers interested in studying, protecting, and/or utilizing cold-water coral habitats in the Gulf of Mexico and western North Atlantic Ocean. The database makes information about the locations and taxonomy of cold-water corals available to the public in an easy-to-access form while preserving the scientific integrity of the data. The database includes over 1700 entries, mostly from published scientific literature, museum collections, and other databases. The CoWCoG database is easy to search in a variety of ways, and data can be quickly displayed in table form and on a map by using only the software included with this publication. Subsets of the database can be selected on the basis of geographic location, taxonomy, or other criteria and exported to one of several available file formats. Future versions of the database are being planned to cover a larger geographic area and additional taxa.
Salary Management System for Small and Medium-sized Enterprises
NASA Astrophysics Data System (ADS)
Hao, Zhang; Guangli, Xu; Yuhuan, Zhang; Yilong, Lei
In the past, wage entry, calculation, and totaling in small and medium-sized enterprises (SMEs) had to be done manually; the data volume is quite large, processing speed is low, and errors are easy to make, resulting in low efficiency. The main purpose of this paper is to present the basis of a salary management system: establishing a scientific database and a computerized payroll system, using the computer to replace much of the past manual work in order to reduce duplicated staff labor and improve working efficiency. The system combines the actual needs of SMEs; through in-depth study and practice of the C/S mode, the PowerBuilder 10.0 development tool, databases, and the SQL language, the payroll system's needs analysis, database design, and application design and development were completed. Wage, department, unit, and personnel database files are included in this system, which offers data management, department management, personnel management, and other functions; through control and management of the database, query, add, delete, and modify functions are realized. The system has a reasonable design and fairly complete functions, and it has been tested to run stably and meet the basic needs of the work.
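The core of such a payroll database, with related personnel and wage tables whose totals come from a query instead of manual tallying, can be sketched as follows. This is a minimal illustration in Python with sqlite3 rather than the paper's PowerBuilder/C-S stack; table and column names are invented for the example.

```python
# Sketch of the payroll idea: totals come from one aggregate query
# instead of error-prone manual work. Names are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE personnel (emp_id INTEGER PRIMARY KEY, name TEXT, dept TEXT);
CREATE TABLE wages (emp_id INTEGER, base REAL, bonus REAL);
""")
db.executemany("INSERT INTO personnel VALUES (?, ?, ?)",
               [(1, "Li", "Sales"), (2, "Wang", "Sales"), (3, "Zhao", "R&D")])
db.executemany("INSERT INTO wages VALUES (?, ?, ?)",
               [(1, 3000, 200), (2, 3200, 150), (3, 4100, 300)])

# Per-department wage totals in a single query.
for dept, total in db.execute("""
        SELECT dept, SUM(base + bonus) FROM personnel
        JOIN wages USING (emp_id) GROUP BY dept"""):
    print(dept, total)
```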
The National Nonindigenous Aquatic Species Database
Neilson, Matthew E.; Fuller, Pamela L.
2012-01-01
The U.S. Geological Survey (USGS) Nonindigenous Aquatic Species (NAS) Program maintains a database that monitors, records, and analyzes sightings of nonindigenous aquatic plant and animal species throughout the United States. The program is based at the USGS Wetland and Aquatic Research Center in Gainesville, Florida. The initiative to maintain scientific information on nationwide occurrences of nonindigenous aquatic species began with the Aquatic Nuisance Species Task Force, created by Congress in 1990 to provide timely information to natural resource managers. Since then, the NAS database has been a clearinghouse of information for confirmed sightings of nonindigenous, also known as nonnative, aquatic species throughout the Nation. The database is used to produce email alerts, maps, summary graphs, publications, and other information products to support natural resource managers.
Management of information in distributed biomedical collaboratories.
Keator, David B
2009-01-01
Organizing and annotating biomedical data in structured ways has gained much interest and focus in the last 30 years. Driven by decreases in digital storage costs and advances in genetics sequencing, imaging, electronic data collection, and microarray technologies, data is being collected at an alarming rate. The specialization of fields in biology and medicine demonstrates the need for somewhat different structures for storage and retrieval of data. For biologists, the need for structured information and integration across a number of domains drives development. For clinical researchers and hospitals, the need for a structured medical record accessible to, ideally, any medical practitioner who might require it during the course of research or patient treatment, patient confidentiality, and security are the driving developmental factors. Scientific data management systems generally consist of a few core services: a backend database system, a front-end graphical user interface, and an export/import mechanism or data interchange format to both get data into and out of the database and share data with collaborators. The chapter introduces some existing databases, distributed file systems, and interchange languages used within the biomedical research and clinical communities for scientific data management and exchange.
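The export/import mechanism mentioned above, a data interchange format for getting records out of one store and into a collaborator's, can be sketched as a simple JSON round trip. The record fields and format tag below are hypothetical, chosen only to illustrate the pattern.

```python
# Sketch of an export/import cycle over a JSON interchange format.
import json

site_a = [{"subject": "S001", "age": 34, "scan": "T1"},
          {"subject": "S002", "age": 29, "scan": "T2"}]

# Export: serialize local records into a tagged interchange document.
payload = json.dumps({"format": "demo-exchange-v1", "records": site_a})

# Import at the collaborating site: parse the document and load the
# records into that site's own store (here, a plain dict).
site_b = {}
for rec in json.loads(payload)["records"]:
    site_b[rec["subject"]] = rec
print(sorted(site_b))  # ['S001', 'S002']
```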
NASA Astrophysics Data System (ADS)
Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin
2017-08-01
As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of the massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Built on a key-value database, the ND technique makes complete use of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive absent records, its overall performance, including querying and deriving the data, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required by next-generation telescopes.
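The negative-database idea, storing the complement set and deriving the records that are present, can be illustrated with a toy example. The sketch below assumes the keys form a known, regular space (as time-stamped observation slots roughly do); it is a conceptual illustration, not the MUSER implementation.

```python
# Toy negative database: keep only the absent keys and derive the
# present ones from the complement. Purely illustrative.

WINDOW = range(0, 10)   # the full, known key space (e.g. time slots)
absent = {3, 7}         # the 'negative' store: keys with no data

def present_keys():
    """Derive the keys that do have records from the complement set."""
    return [k for k in WINDOW if k not in absent]

def has_record(key):
    return key in WINDOW and key not in absent

print(present_keys())                # [0, 1, 2, 4, 5, 6, 8, 9]
print(has_record(3), has_record(4))  # False True
```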
Coordinating Council. First Meeting: NASA/RECON database
NASA Technical Reports Server (NTRS)
1990-01-01
A Council of NASA Headquarters, American Institute of Aeronautics and Astronautics (AIAA), and the NASA Scientific and Technical Information (STI) Facility management met (1) to review and discuss issues of NASA concern, and (2) to promote new and better ways to collect and disseminate scientific and technical information. Topics mentioned for study and discussion at subsequent meetings included the pros and cons of transferring the NASA/RECON database to the commercial sector, the quality of the database, and developing ways to increase foreign acquisitions. The input systems at AIAA and the STI Facility were described. Also discussed were the proposed RECON II retrieval system, the transmittal of document orders received by the Facility and sent to AIAA, and the handling of multimedia input by the Departments of Defense and Commerce. A second meeting was scheduled for six weeks later to discuss database quality and international foreign input.
Uses of the Drupal CMS Collaborative Framework in the Woods Hole Scientific Community (Invited)
NASA Astrophysics Data System (ADS)
Maffei, A. R.; Chandler, C. L.; Work, T. T.; Shorthouse, D.; Furfey, J.; Miller, H.
2010-12-01
Organizations that comprise the Woods Hole scientific community (Woods Hole Oceanographic Institution, Marine Biological Laboratory, USGS Woods Hole Coastal and Marine Science Center, Woods Hole Research Center, NOAA NMFS Northeast Fisheries Science Center, SEA Education Association) have a long history of collaborative activity regarding computing, computer network, and information technologies that support common, interdisciplinary science needs. Over the past several years there has been growing interest in the use of the Drupal Content Management System (CMS) playing a variety of roles in support of research projects resident at several of these organizations. Many of these projects are part of science programs that are national and international in scope. Here we survey the current uses of Drupal within the Woods Hole scientific community and examine the reasons it has been adopted. The promise of emerging semantic features in the Drupal framework is examined, and projections of how pre-existing Drupal-based websites might benefit are made. Closer examination of Drupal software design exposes it as more than simply a content management system. The flexibility of its architecture; the power of its taxonomy module; the care taken in nurturing the open-source developer community that surrounds it (including organized and often well-attended code sprints); the ability to bind emerging software technologies as Drupal modules; the careful selection process used in adopting core functionality; multi-site hosting and cross-site deployment of updates; and a recent trend towards development of use-case-inspired Drupal distributions all cast Drupal as a general-purpose application deployment framework. Recent work in the semantic arena casts Drupal as an emerging RDF framework as well. Examples of roles played by Drupal-based websites within the Woods Hole scientific community that will be discussed include: science data metadata database, organization main website, biological taxonomy development, bibliographic database, physical media data archive inventory manager, disaster-response website development framework, science project task management, science conference planning, and spreadsheet-to-database converter.
London, Sue; Brahmi, Frances A
2005-01-01
As end-user demand for easy access to electronic full text continues to climb, an increasing number of information providers are combining that access with their other products and services, making it more daunting than ever for librarians seeking information on a given product or service to navigate their Web sites. One such provider of a complex array of products and services is Thomson Scientific. This paper looks at some of the many products and tools available from two of Thomson Scientific's businesses, Thomson ISI and Thomson ResearchSoft. Among the items of most interest to health sciences and veterinary librarians and their users are the variety of databases available via the ISI Web of Knowledge platform and the information management products available from ResearchSoft.
Seabird databases and the new paradigm for scientific publication and attribution
Hatch, Scott A.
2010-01-01
For more than 300 years, the peer-reviewed journal article has been the principal medium for packaging and delivering scientific data. With new tools for managing digital data, a new paradigm is emerging—one that demands open and direct access to data and that enables and rewards a broad-based approach to scientific questions. Ground-breaking papers in the future will increasingly be those that creatively mine and synthesize vast stores of data available on the Internet. This is especially true for conservation science, in which essential data can be readily captured in standard record formats. For seabird professionals, a number of globally shared databases are in the offing, or should be. These databases will capture the salient results of inventories and monitoring, pelagic surveys, diet studies, and telemetry. A number of real or perceived barriers to data sharing exist, but none is insurmountable. Our discipline should take an important stride now by adopting a specially designed markup language for annotating and sharing seabird data.
Object-oriented structures supporting remote sensing databases
NASA Technical Reports Server (NTRS)
Wichmann, Keith; Cromp, Robert F.
1995-01-01
Object-oriented databases show promise for modeling the complex interrelationships pervasive in scientific domains. To examine the utility of this approach, we have developed an Intelligent Information Fusion System based on this technology, and applied it to the problem of managing an active repository of remotely-sensed satellite scenes. The design and implementation of the system is compared and contrasted with conventional relational database techniques, followed by a presentation of the underlying object-oriented data structures used to enable fast indexing into the data holdings.
Mass-storage management for distributed image/video archives
NASA Astrophysics Data System (ADS)
Franchi, Santina; Guarda, Roberto; Prampolini, Franco
1993-04-01
The realization of an image/video database requires a specific design for both database structures and mass storage management. These issues were addressed in the project of the digital image/video database system designed at the IBM SEMEA Scientific & Technical Solution Center. Proper database structures have been defined to catalog image/video coding techniques with their related parameters, and the description of image/video contents. User workstations and servers are distributed along a local area network. Image/video files are not managed directly by the DBMS server. Because of their large size, they are stored outside the database on network devices. The database contains the pointers to the image/video files and the description of the storage devices. The system can use different kinds of storage media, organized in a hierarchical structure. Three levels of functions are available to manage the storage resources. The functions of the lower level provide media management: they catalog devices and modify device status and device network location. The medium level manages image/video files on a physical basis; it manages file migration between high-capacity media and low-access-time media. The functions of the upper level work on image/video files on a logical basis, as they archive, move, and copy image/video data selected by user-defined queries. These functions are used to support the implementation of a storage management strategy. The database information about the characteristics of both storage devices and coding techniques is used by the third-level functions to fit delivery/visualization requirements and to reduce archiving costs.
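The pointer scheme described above, with the database holding only references to externally stored image/video files plus a catalog of storage devices, can be sketched as follows. Table and column names are hypothetical; sqlite3 merely stands in for the actual DBMS.

```python
# Sketch: media files live outside the database; the DBMS keeps
# pointers and a device catalog, so tier migration is a row update.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE devices (
    device_id INTEGER PRIMARY KEY,
    location  TEXT,   -- network address of the storage server
    tier      TEXT    -- e.g. 'fast-disk' or 'high-capacity-tape'
);
CREATE TABLE media_files (
    file_id   INTEGER PRIMARY KEY,
    device_id INTEGER REFERENCES devices(device_id),
    path      TEXT,   -- pointer to the file, outside the database
    codec     TEXT    -- coding technique and its parameters
);
""")
db.execute("INSERT INTO devices VALUES (1, 'nas01.example.org', 'fast-disk')")
db.execute("INSERT INTO media_files VALUES "
           "(10, 1, '/archive/scene_0001.mpg', 'MPEG-1')")

rows = db.execute("""SELECT path, location, tier FROM media_files
                     JOIN devices USING (device_id)""").fetchall()
print(rows)
```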
NASA Technical Reports Server (NTRS)
Campbell, William J.; Roelofs, Larry H.; Short, Nicholas M., Jr.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components the development of an Intelligent User Interface (IUI). The intent of the latter is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose is to support the large number of potential scientific and engineering users who presently need space and land related research and technical data but who have little or no experience in query languages or understanding of the information content or architecture of the databases involved. This technical memorandum presents a prototype Intelligent User Interface Subsystem (IUIS) using the Crustal Dynamics Project Database as a test bed for the implementation of CRUDDES (the Crustal Dynamics Expert System). The knowledge base has more than 200 rules and represents a single application view and the architectural view. Operational performance using CRUDDES has allowed nondatabase users to obtain useful information from the database previously accessible only to an expert database user or the database designer.
Duda, Jeffrey J.; Wieferich, Daniel J.; Bristol, R. Sky; Bellmore, J. Ryan; Hutchison, Vivian B.; Vittum, Katherine M.; Craig, Laura; Warrick, Jonathan A.
2016-08-18
The removal of dams has recently increased over historical levels due to aging infrastructure, changing societal needs, and modern safety standards rendering some dams obsolete. Where possibilities for river restoration, or improved safety, exceed the benefits of retaining a dam, removal is more often being considered as a viable option. Yet, as this is a relatively new development in the history of river management, science is just beginning to guide our understanding of the physical and ecological implications of dam removal. Ultimately, the “lessons learned” from previous scientific studies on the outcomes of dam removal could inform future scientific understanding of ecosystem outcomes, as well as aid in decision-making by stakeholders. We created a database visualization tool, the Dam Removal Information Portal (DRIP), to display map-based, interactive information about the scientific studies associated with dam removals. Serving both as a bibliographic source as well as a link to other existing databases like the National Hydrography Dataset, the derived National Dam Removal Science Database serves as the foundation for a Web-based application that synthesizes the existing scientific studies associated with dam removals. Thus, using the DRIP application, users can explore information about completed dam removal projects (for example, their location, height, and date removed), as well as discover sources and details of associated scientific studies. As such, DRIP is intended to be a dynamic collection of scientific information related to dams that have been removed in the United States and elsewhere. This report describes the architecture and concepts of this “metaknowledge” database and the DRIP visualization tool.
Fahy, Michael; Doyle, Orla; Denny, Kevin; McAuliffe, Fionnuala M; Robson, Michael
2013-05-01
Increasing birth rates have raised questions for policy makers and hospital management about the economic costs of childbirth. The purpose of this article is to identify and review all existing scientific studies in relation to the economic costs of alternative modes of childbirth delivery and to highlight deficiencies in the existing scientific research. We searched Cochrane, Centre for Reviews and Dissemination, EconLit, the Excerpta Medica Database, the Health Economic Evaluations Database, MEDLINE and PubMed. Thirty articles are included in this review. The main findings suggest that there is no internationally acceptable childbirth cost and clinical outcome classification system that allows for comparisons across different delivery modes. This review demonstrates that a better understanding and classification of the costs and associated clinical outcomes of childbirth is required to allow for valid comparisons between maternity units, and to inform policy makers and hospital management. © 2013 The Authors Acta Obstetricia et Gynecologica Scandinavica © 2013 Nordic Federation of Societies of Obstetrics and Gynecology.
Organization of Heterogeneous Scientific Data Using the EAV/CR Representation
Nadkarni, Prakash M.; Marenco, Luis; Chen, Roland; Skoufos, Emmanouil; Shepherd, Gordon; Miller, Perry
1999-01-01
Entity-attribute-value (EAV) representation is a means of organizing highly heterogeneous data using a relatively simple physical database schema. EAV representation is widely used in the medical domain, most notably in the storage of data related to clinical patient records. Its potential strengths suggest its use in other biomedical areas, in particular research databases whose schemas are complex as well as constantly changing to reflect evolving knowledge in rapidly advancing scientific domains. When deployed for such purposes, the basic EAV representation needs to be augmented significantly to handle the modeling of complex objects (classes) as well as to manage interobject relationships. The authors refer to their modification of the basic EAV paradigm as EAV/CR (EAV with classes and relationships). They describe EAV/CR representation with examples from two biomedical databases that use it. PMID:10579606
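The basic EAV layout the paper builds on can be shown in a few lines: one narrow table of (entity, attribute, value) triples, so adding a new attribute requires no schema change. The sketch below omits the CR extensions (classes and interobject relationships) and uses invented names.

```python
# Minimal EAV store: heterogeneous entities share one triple table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE eav (entity TEXT, attribute TEXT, value TEXT)")
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    ("patient-1", "diagnosis",   "hypertension"),
    ("patient-1", "systolic_bp", "150"),
    ("neuron-7",  "cell_type",   "pyramidal"),  # very different entity, same schema
])

# Reassemble one entity's sparse record by pivoting its triples.
record = dict(db.execute(
    "SELECT attribute, value FROM eav WHERE entity = ?", ("patient-1",)))
print(record)  # {'diagnosis': 'hypertension', 'systolic_bp': '150'}
```

The trade-off is that attribute-centered queries require pivoting the triples back into columns, which is why EAV suits schemas that change faster than they are queried.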
The Admissions Office Goes Scientific.
ERIC Educational Resources Information Center
Bryant, Peter; Crockett, Kevin
1993-01-01
Data-based planning and management is revolutionizing college student recruitment. Data analysis focuses on historical trends, marketing and recruiting strategies, cost-effectiveness strategy, and markets. Data sources include primary market demographics, geo-demographics, secondary sources, student price response information, and institutional…
The Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Kirby, Michael
2014-06-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy to use job submission services for processing physics tasks on the Open Science Grid and elsewhere; 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data; 3) custom and generic database applications for calibrations, beam information, and other purposes; 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
ERIC Educational Resources Information Center
Kurtz, Michael J.; Eichorn, Guenther; Accomazzi, Alberto; Grant, Carolyn S.; Demleitner, Markus; Murray, Stephen S.; Jones, Michael L. W.; Gay, Geri K.; Rieger, Robert H.; Millman, David; Bruggemann-Klein, Anne; Klein, Rolf; Landgraf, Britta; Wang, James Ze; Li, Jia; Chan, Desmond; Wiederhold, Gio; Pitti, Daniel V.
1999-01-01
Includes six articles that discuss a digital library for astronomy; comparing evaluations of digital collection efforts; cross-organizational access management of Web-based resources; searching scientific bibliographic databases based on content-based relations between documents; semantics-sensitive retrieval for digital picture libraries; and…
Evaluating non-relational storage technology for HEP metadata and meta-data catalog
NASA Astrophysics Data System (ADS)
Grigorieva, M. A.; Golosova, M. V.; Gubin, M. Y.; Klimentov, A. A.; Osipova, V. V.; Ryabinkin, E. A.
2016-10-01
Large-scale scientific experiments produce vast volumes of data. These data are stored, processed, and analyzed in a distributed computing environment. The life cycle of an experiment is managed by specialized software like Distributed Data Management and Workload Management Systems. In order to be interpreted and mined, experimental data must be accompanied by auxiliary metadata, which are recorded at each data processing step. Metadata describe the scientific data and represent scientific objects or results of scientific experiments, allowing them to be shared by various applications, recorded in databases, or published via the Web. Processing and analysis of the constantly growing volume of auxiliary metadata is a challenging task, no simpler than the management and processing of the experimental data itself. Furthermore, metadata sources are often loosely coupled and may potentially lead to end-user inconsistency in combined information queries. To aggregate and synthesize a range of primary metadata sources, and enhance them with flexible schema-less addition of aggregated data, we are developing the Data Knowledge Base architecture serving as the intelligence behind GUIs and APIs.
An adaptable XML based approach for scientific data management and integration
NASA Astrophysics Data System (ADS)
Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo
2008-03-01
Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform supporting scientific data management and integration with a central-server-based distributed architecture, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to model complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make it convenient for quick browsing. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so it is possible to quickly adapt the system to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access, and customization with XML, SciPort provides a flexible and powerful platform for sharing scientific data among scientific research communities, and it has been successfully used in both biomedical research and clinical trials.
An Adaptable XML Based Approach for Scientific Data Management and Integration.
Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo
2008-02-20
Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform supporting scientific data management and integration with a central-server-based distributed architecture, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to model complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make it convenient for quick browsing. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so it is possible to quickly adapt the system to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access, and customization with XML, SciPort provides a flexible and powerful platform for sharing scientific data among scientific research communities, and it has been successfully used in both biomedical research and clinical trials.
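The document-centric approach described above, with a scientific record as an XML document mixing structured fields and references to external raw data, can be sketched with a toy record. SciPort itself uses a native XML database queried with XQuery; the sketch below uses Python's ElementTree and an invented schema purely for illustration.

```python
# Toy XML record: hierarchical fields plus a reference to raw data.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<experiment id="exp-42">
  <site>institution-A</site>
  <measurement name="temperature" unit="K">293.1</measurement>
  <image ref="file:///data/raw/exp-42/scan001.tif"/>
</experiment>
""")

print(doc.find("site").text)                   # institution-A
print(doc.find("measurement").attrib["unit"])  # K
print(doc.find("image").attrib["ref"])         # pointer to raw data
```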
MouseNet database: digital management of a large-scale mutagenesis project.
Pargent, W; Heffner, S; Schäble, K F; Soewarto, D; Fuchs, H; Hrabé de Angelis, M
2000-07-01
The Munich ENU Mouse Mutagenesis Screen is a large-scale mutant production, phenotyping, and mapping project. It encompasses two animal breeding facilities and a number of screening groups located in the general area of Munich. A central database is required to manage and process the immense amount of data generated by the mutagenesis project. This database, which we named MouseNet(c), runs on a Sybase platform and will eventually store and process all data from the entire project. In addition, the system comprises a portfolio of functions needed to support the workflow management of the core facility and the screening groups. MouseNet(c) will make all of the data available to the participating screening groups, and later to the international scientific community. MouseNet(c) will consist of three major software components: an Animal Management System (AMS), a Sample Tracking System (STS), and a Result Documentation System (RDS). MouseNet(c) provides the following major advantages: it is accessible from different client platforms via the Internet; it is a full-featured multi-user system (including access restriction and data locking mechanisms); it relies on a professional RDBMS (relational database management system) running on a UNIX server platform; and it supplies workflow functions and a variety of plausibility checks.
ERIC Educational Resources Information Center
Town, William G.; And Others
1980-01-01
Discusses the problems encountered and solutions adopted in application of the ADABAS database management system to the ECDIN (Environmental Chemicals Data and Information Network) data bank. SIMAS, the pilot system, and ADABAS are compared, and ECDIN ADABAS design features are described. Appendices provide additional facts about ADABAS and SIMAS.…
The utilization of neural nets in populating an object-oriented database
NASA Technical Reports Server (NTRS)
Campbell, William J.; Hill, Scott E.; Cromp, Robert F.
1989-01-01
Existing NASA-supported scientific databases are usually developed, managed, and populated in a tedious, error-prone, and self-limiting way in terms of what can be described in a relational Database Management System (DBMS). The next generation of Earth remote sensing platforms (i.e., the Earth Observation System (EOS)) will be capable of generating data at a rate of over 300 Mb per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, and catalog data, are manageable in a domain-specific context, and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data is then dynamically allocated to an object-oriented database where it can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven, knowledge-based scientific information system.
An Array Library for Microsoft SQL Server with Astrophysical Applications
NASA Astrophysics Data System (ADS)
Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.
2012-09-01
Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management, but they lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that enables us to address these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. Also, the library is designed to be seamlessly integrated with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on the fly, from SQL code, inside the database server process. We are currently testing the prototype with two different scientific data sets: the Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory project will use it to store galaxy simulation data.
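The Array Library itself is custom code inside SQL Server, but the underlying idea, packing a fixed-size numeric array into a single binary value that a relational column can hold and unpacking it for computation near the data, can be illustrated independently. The sketch below uses Python's struct and sqlite3 and is not the authors' library.

```python
# Illustrative only: serialize a fixed-size float64 array to a BLOB
# column and read it back, mimicking a native array value in SQL.
import sqlite3, struct

def pack_f64(values):
    return struct.pack(f"<{len(values)}d", *values)

def unpack_f64(blob):
    return list(struct.unpack(f"<{len(blob) // 8}d", blob))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE particles (id INTEGER PRIMARY KEY, position BLOB)")
db.execute("INSERT INTO particles VALUES (1, ?)",
           (pack_f64([0.5, 1.25, -3.0]),))

blob = db.execute("SELECT position FROM particles WHERE id = 1").fetchone()[0]
print(unpack_f64(blob))  # [0.5, 1.25, -3.0]
```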
J. David Creswell: Award for Distinguished Scientific Early Career Contributions to Psychology.
2014-11-01
APA's Awards for Distinguished Scientific Early Career Contributions to Psychology recognize excellent young psychologists who have not held a doctoral degree for more than nine years. One of the 2014 award winners is J. David Creswell, for "outstanding and innovative research on mechanisms linking stress management strategies to disease." Creswell's award citation, biography, and a selected bibliography are presented here. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Groundwater modeling in integrated water resources management--visions for 2020.
Refsgaard, Jens Christian; Højberg, Anker Lajer; Møller, Ingelise; Hansen, Martin; Søndergaard, Verner
2010-01-01
Groundwater modeling is undergoing a change from traditional stand-alone studies toward being an integrated part of holistic water resources management procedures. This is illustrated by the development in Denmark, where comprehensive national databases for geologic borehole data, groundwater-related geophysical data, geologic models, as well as a national groundwater-surface water model have been established and integrated to support water management. This has enhanced the benefits of using groundwater models. Based on insight gained from this Danish experience, a scientifically realistic scenario for the use of groundwater modeling in 2020 has been developed, in which groundwater models will be a part of sophisticated databases and modeling systems. The databases and numerical models will be seamlessly integrated, and the tasks of monitoring and modeling will be merged. Numerical models for atmospheric, surface water, and groundwater processes will be coupled in one integrated modeling system that can operate at a wide range of spatial scales. Furthermore, the management systems will be constructed with a focus on building credibility of model and data use among all stakeholders and on facilitating a learning process whereby data and models, as well as stakeholders' understanding of the system, are updated to currently available information. The key scientific challenges for achieving this are (1) developing new methodologies for integration of statistical and qualitative uncertainty; (2) mapping geological heterogeneity and developing scaling methodologies; (3) developing coupled model codes; and (4) developing integrated information systems, including quality assurance and uncertainty information that facilitate active stakeholder involvement and learning.
Respiratory cancer database: An open access database of respiratory cancer gene and miRNA.
Choubey, Jyotsna; Choudhari, Jyoti Kant; Patel, Ashish; Verma, Mukesh Kumar
2017-01-01
Respiratory cancer database (RespCanDB) is a genomic and proteomic database of cancers of the respiratory organs. It also includes information on medicinal plants used for the treatment of various respiratory cancers, with the structures of their active constituents, as well as pharmacological and chemical information on drugs associated with various respiratory cancers. Data in RespCanDB have been manually collected from published research articles and from other databases. Data have been integrated using MySQL, a relational database management system. MySQL manages all data in the back-end and provides commands to retrieve and store data in the database. The web interface of the database has been built in ASP. RespCanDB is expected to contribute to the scientific community's understanding of respiratory cancer biology, as well as to the development of new ways of diagnosing and treating respiratory cancer. Currently, the database contains the oncogenomic information of lung cancer, laryngeal cancer, and nasopharyngeal cancer. Data for other cancers, such as oral and tracheal cancers, will be added in the near future. The URL of RespCanDB is http://ridb.subdic-bioinformatics-nitrr.in/.
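The abstract does not publish the database schema; as a rough illustration of how such cancer/gene/plant records might be organized relationally, here is a minimal sketch using Python's sqlite3 module as a stand-in for MySQL. Table and column names are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the MySQL back-end
conn.executescript("""
    CREATE TABLE cancer (
        cancer_id INTEGER PRIMARY KEY,
        name      TEXT NOT NULL            -- e.g. 'lung cancer'
    );
    CREATE TABLE gene (
        gene_id   INTEGER PRIMARY KEY,
        symbol    TEXT NOT NULL,
        cancer_id INTEGER REFERENCES cancer(cancer_id)
    );
    CREATE TABLE medicinal_plant (
        plant_id    INTEGER PRIMARY KEY,
        species     TEXT NOT NULL,
        constituent TEXT,                  -- active-constituent structure ref
        cancer_id   INTEGER REFERENCES cancer(cancer_id)
    );
""")
conn.execute("INSERT INTO cancer VALUES (1, 'lung cancer')")
conn.execute("INSERT INTO gene VALUES (1, 'EGFR', 1)")
for row in conn.execute(
        "SELECT g.symbol, c.name FROM gene g JOIN cancer c USING (cancer_id)"):
    print(row)   # ('EGFR', 'lung cancer')
```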
The intelligent user interface for NASA's advanced information management systems
NASA Technical Reports Server (NTRS)
Campbell, William J.; Short, Nicholas, Jr.; Rolofs, Larry H.; Wattawa, Scott L.
1987-01-01
NASA has initiated the Intelligent Data Management Project to design and develop advanced information management systems. The project's primary goal is to formulate, design and develop advanced information systems that are capable of supporting the agency's future space research and operational information management needs. The first effort of the project was the development of a prototype Intelligent User Interface to an operational scientific database, using expert systems and natural language processing technologies. An overview of Intelligent User Interface formulation and development is given.
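The abstract does not detail the interface's internals; as a toy illustration of the natural-language-to-query idea only (not the project's actual implementation), here is a self-contained Python sketch that maps a small pattern vocabulary onto SQL. The grammar and table names are invented.

```python
import re

# A tiny pattern grammar standing in for the NLP front end.
PATTERNS = [
    (re.compile(r"show (?:me )?all (\w+) from (\d{4})", re.I),
     "SELECT * FROM {0} WHERE year = {1};"),
    (re.compile(r"how many (\w+) are there", re.I),
     "SELECT COUNT(*) FROM {0};"),
]

def to_sql(question: str) -> str:
    """Translate a restricted natural-language question into SQL."""
    for pattern, template in PATTERNS:
        m = pattern.search(question)
        if m:
            return template.format(*m.groups())
    raise ValueError(f"cannot parse: {question!r}")

print(to_sql("Show me all observations from 1986"))
# -> SELECT * FROM observations WHERE year = 1986;
```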
The Office of Environmental Management technical reports: a bibliography
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1997-07-01
The Office of Environmental Management's (EM) technical reports bibliography is an annual publication that contains information on scientific and technical reports sponsored by the Office of Environmental Management and added to the Energy Science and Technology Database from July 1, 1995 through September 30, 1996. This information is divided into the following categories: Focus Areas and Crosscutting Programs; Support Programs, Technology Integration and International Technology Exchange are now included in the General category. EM's Office of Science and Technology sponsors this bibliography.
DOT National Transportation Integrated Search
2006-05-01
This research has provided NCDOT with (1) scientific observations to validate the pollutant removal : performance of selected structural BMPs, (2) a database management option for BMP monitoring and : non-monitoring sites, (3) pollution prevention pl...
Comparison of scientific and administrative database management systems
NASA Technical Reports Server (NTRS)
Stoltzfus, J. C.
1983-01-01
Some characteristics found to differ between scientific and administrative databases are identified, and some of the corresponding generic requirements for database management systems (DBMS) are discussed. Some of the requirements discussed are especially stringent for scientific databases, others for administrative ones. For some, no commercial DBMS is fully satisfactory, and the database designer must invent a suitable approach. For others, commercial systems are available with elegant solutions, and a wrong choice would mean an expensive work-around to provide the missing features. It is concluded that selection of a DBMS must be based on the requirements for the information system. There is no unique distinction between scientific and administrative databases or DBMS. The distinction comes from the logical structure of the data, and understanding the data and their relationships is the key to defining the requirements and selecting an appropriate DBMS for a given set of applications.
Updated Palaeotsunami Database for Aotearoa/New Zealand
NASA Astrophysics Data System (ADS)
Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.
2016-12-01
The updated configuration, design, and implementation of a national palaeotsunami (prehistoric tsunami) database for Aotearoa/New Zealand (A/NZ) is near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on the frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database were funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of the frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database, and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling the Pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a similar one currently under development in Japan. Expressions of interest in collaborating with the A/NZ team to expand the database are invited from other Pacific nations.
Development of Human Face Literature Database Using Text Mining Approach: Phase I.
Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K
2018-06-01
The face is an important part of the human body, by which an individual communicates in society. Its importance is highlighted by the fact that a person deprived of a face cannot be sustained in the living world. The number of experiments being performed and research papers being published in the domain of the human face has surged in the past few decades. Several scientific disciplines conduct research on the human face, including medical science, anthropology, information technology (biometrics, robotics, artificial intelligence, etc.), psychology, forensic science, and neuroscience. This highlights the need to collect and manage the data concerning the human face so that free public access can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers are stored in the form of a database. Hence, the database will be beneficial to the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using HyperText Markup Language and Cascading Style Sheets. The back end has been developed using the hypertext preprocessor (PHP), with JavaScript as the scripting language. MySQL is used for database development, as it is the most widely used relational database management system. The XAMPP (X (cross-platform), Apache, MySQL, PHP, Perl) open-source web application software has been used as the server. The database is still in its developmental phase, and the current paper discusses the initial steps of its creation and the work done to date.
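As a rough sketch of the keyword-based retrieval the abstract describes (search by keyword, journal, date, or author), here is an illustrative Python query against a stand-in sqlite3 literature table; the schema and sample record are invented, and sqlite3 stands in for the MySQL back end.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE paper (
    title TEXT, abstract TEXT, journal TEXT, author TEXT, pub_date TEXT)""")
conn.execute("INSERT INTO paper VALUES (?,?,?,?,?)", (
    "Facial asymmetry in forensic identification",
    "We study facial asymmetry ...", "J. Forensic Sci.",
    "Kaur P", "2017-03-01"))

def search(keyword, journal=None):
    """Keyword search over title and abstract, optionally filtered by journal."""
    sql = ("SELECT title, author, pub_date FROM paper "
           "WHERE (title LIKE :kw OR abstract LIKE :kw)")
    params = {"kw": f"%{keyword}%"}
    if journal:
        sql += " AND journal = :journal"
        params["journal"] = journal
    return conn.execute(sql, params).fetchall()

print(search("asymmetry"))
```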
NASA Astrophysics Data System (ADS)
Ehlmann, Bryon K.
Current scientific experiments are often characterized by massive amounts of very complex data and the need for complex data analysis software. Object-oriented database (OODB) systems have the potential of improving the description of the structure and semantics of this data and of integrating the analysis software with the data. This dissertation results from research to enhance OODB functionality and methodology to support scientific databases (SDBs) and, more specifically, to support a nuclear physics experiments database for the Continuous Electron Beam Accelerator Facility (CEBAF). This research to date has identified a number of problems related to the practical application of OODB technology to the conceptual design of the CEBAF experiments database and other SDBs: the lack of a generally accepted OODB design methodology, the lack of a standard OODB model, the lack of a clear conceptual level in existing OODB models, and the limited support in existing OODB systems for many common object relationships inherent in SDBs. To address these problems, the dissertation describes an Object-Relationship Diagram (ORD) and an Object-oriented Database Definition Language (ODDL) that provide tools that allow SDB design and development to proceed systematically and independently of existing OODB systems. These tools define multi-level, conceptual data models for SDB design, which incorporate a simple notation for describing common types of relationships that occur in SDBs. ODDL allows these relationships and other desirable SDB capabilities to be supported by an extended OODB system. A conceptual model of the CEBAF experiments database is presented in terms of ORDs and the ODDL to demonstrate their functionality and use and provide a foundation for future development of experimental nuclear physics software using an OODB approach.
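ODDL itself is not reproduced in the abstract; as a loose illustration of the kinds of conceptual relationships such a notation must capture, here is a Python sketch of an experiment/run/detector model with simple cardinality conventions. All names are invented, and this is not the dissertation's actual notation.

```python
from dataclasses import dataclass, field

@dataclass
class Run:
    run_number: int
    beam_energy_gev: float

@dataclass
class Detector:
    name: str

@dataclass
class Experiment:
    """One experiment OWNS many runs (composition: runs die with it)
    and REFERENCES shared detectors (association: detectors outlive it)."""
    title: str
    runs: list[Run] = field(default_factory=list)             # 1-to-many, owned
    detectors: list[Detector] = field(default_factory=list)   # many-to-many

hall_a = Detector("Hall A spectrometer")
exp = Experiment("e-p elastic scattering")
exp.detectors.append(hall_a)          # shared reference
exp.runs.append(Run(1001, 4.0))       # owned component
print(exp.runs[0].run_number, exp.detectors[0].name)
```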
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-19
... Medicines Agency (EMA) European Community Herbal Monographs, and World Health Organization (WHO) Monographs... that several authoritative labeling standards monographs for herbal products specify traditional use... the major scientific reference databases, such as the National Library of Medicine's literature...
Do we need a Unique Scientist ID for publications in biomedicine?
Bohne-Lang, Andreas; Lang, Elke
2005-03-22
BACKGROUND: The PubMed database contains nearly 15 million references from more than 4,800 biomedical journals. In general, authors of scientific articles are addressed by their last name and forename initial. DISCUSSION: Names can be too common, and not unique enough, to serve as search criteria. Far more people publish scientific work today, and a person may have not just one name but several (for example, after marriage) and publish under each of them. A Unique Scientist ID could help to address people in peer-to-peer (P2P) networks. As a starting point, perhaps PubMed could generate and manage such a scientist ID. SUMMARY: A Unique Scientist ID would improve knowledge management in science. Unfortunately, in some publications, and then within the online databases, the author's forename is abbreviated to a single letter. A search for a common name with only one initial could retrieve the pertinent citations, but would also include many false drops (retrievals matching the search criteria but indisputably irrelevant).
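ORCID now fills this role in practice. As a toy sketch of the registry idea the authors propose (issue an opaque ID, then record name variants under it), consider the following Python fragment; it is purely illustrative, not PubMed's design.

```python
import itertools

class ScientistRegistry:
    """Issue opaque unique IDs and track the name variants behind them."""
    def __init__(self):
        self._seq = itertools.count(1)
        self._names = {}          # scientist_id -> set of name variants

    def register(self, *names: str) -> str:
        sid = f"SCI-{next(self._seq):07d}"
        self._names[sid] = set(names)
        return sid

    def add_variant(self, sid: str, name: str) -> None:
        self._names[sid].add(name)   # e.g. a new surname after marriage

    def resolve(self, name: str) -> list[str]:
        """All IDs that have ever published under this name (may be many)."""
        return [s for s, ns in self._names.items() if name in ns]

reg = ScientistRegistry()
sid = reg.register("Lang E", "Lang Elke")
reg.add_variant(sid, "Bohne-Lang E")
print(reg.resolve("Lang E"))   # -> ['SCI-0000001']
```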
1984-12-01
Prepared for the Air Force Office of Scientific Research under Grant No. AFOSR 82-0322, December 1984; unclassified. (Only the grant information is recoverable from the scanned report documentation page.)
The Fabric for Frontier Experiments Project at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirby, Michael
2014-01-01
The FabrIc for Frontier Experiments (FIFE) project is a new, far-reaching initiative within the Fermilab Scientific Computing Division to drive the future of computing services for experiments at FNAL and elsewhere. It is a collaborative effort between computing professionals and experiment scientists to produce an end-to-end, fully integrated set of services for computing on the grid and clouds, managing data, accessing databases, and collaborating within experiments. FIFE includes 1) easy-to-use job submission services for processing physics tasks on the Open Science Grid and elsewhere, 2) an extensive data management system for managing local and remote caches, cataloging, querying, moving, and tracking the use of data, 3) custom and generic database applications for calibrations, beam information, and other purposes, 4) collaboration tools including an electronic log book, speakers bureau database, and experiment membership database. All of these aspects will be discussed in detail. FIFE sets the direction of computing at Fermilab experiments now and in the future, and therefore is a major driver in the design of computing services worldwide.
Challenges and Experiences of Building Multidisciplinary Datasets across Cultures
NASA Astrophysics Data System (ADS)
Jamiyansharav, K.; Laituri, M.; Fernandez-Gimenez, M.; Fassnacht, S. R.; Venable, N. B. H.; Allegretti, A. M.; Reid, R.; Baival, B.; Jamsranjav, C.; Ulambayar, T.; Linn, S.; Angerer, J.
2017-12-01
Efficient data sharing and management are key challenges for multidisciplinary scientific research, and they are further complicated by a multicultural component. We address the construction of a complex database for social-ecological analysis in Mongolia. Funded by the National Science Foundation (NSF) Dynamics of Coupled Natural and Human (CNH) Systems program, the Mongolian Rangelands and Resilience (MOR2) project focuses on the vulnerability of Mongolian pastoral systems to climate change and their adaptive capacity. The MOR2 study spans over three years of fieldwork in 36 paired districts (Soum) from 18 provinces (Aimag) of Mongolia, covering the steppe, mountain forest steppe, desert steppe and eastern steppe ecological zones. Our project team is composed of hydrologists, social scientists, geographers, and ecologists. The MOR2 database includes multiple ecological, social, meteorological, geospatial and hydrological datasets, as well as archives of original data and surveys in multiple formats. Managing this complex database requires significant organizational skills, attention to detail, and the ability to communicate with team members from diverse disciplines across multiple institutions in the US and Mongolia. We describe the database's rich content, organization, structure and complexity, and discuss lessons learned, best practices and recommendations for complex database management, sharing, and archiving in creating a cross-cultural, multi-disciplinary database.
Wachtel, Ruth E; Dexter, Franklin
2013-12-01
The purpose of this article is to teach operating room managers, financial analysts, and those with a limited knowledge of search engines, including PubMed, how to locate articles they need in the areas of operating room and anesthesia group management. Many physicians are unaware of the current literature in their field and of evidence-based practices; the most common source of information is colleagues, and many people making management decisions do not read published scientific articles. Databases such as PubMed are available to search for such articles, and other databases, such as citation indices and Google Scholar, can be used to uncover additional articles. Nevertheless, most people are reluctant to use help resources, especially on-line help files, when they do not know how to accomplish a task. Help files and search databases are often difficult to use because they have been designed for users already familiar with the field and have specialized vocabularies unique to the application. MeSH terms in PubMed are not useful alternatives for operating room management, an important limitation because MeSH is the default when search terms are entered in PubMed. Librarians or those trained in informatics can be valuable assets for searching unusual databases, but they must possess domain knowledge relevant to the subject they are searching. The search methods we review are especially important when the subject area (e.g., anesthesia group management) is so specific that only 1 or 2 articles address the topic of interest. The materials are presented broadly enough that the reader can extrapolate the findings to other clinical and management issues in anesthesiology.
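As a concrete example of scripted PubMed searching, the sketch below uses NCBI's public E-utilities esearch endpoint from Python. The [tiab] field tag restricts matching to title and abstract words, sidestepping the automatic MeSH mapping the authors flag as a limitation; the query term itself is just illustrative.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    # [tiab] searches title/abstract words, avoiding automatic MeSH mapping
    "term": '"operating room management"[tiab]',
    "retmode": "json",
    "retmax": 20,
}
reply = requests.get(ESEARCH, params=params, timeout=30).json()
result = reply["esearchresult"]
print(result["count"], "matching citations")
print(result["idlist"])   # PubMed IDs (PMIDs), usable with efetch
```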
An environmental database for Venice and tidal zones
NASA Astrophysics Data System (ADS)
Macaluso, L.; Fant, S.; Marani, A.; Scalvini, G.; Zane, O.
2003-04-01
The natural environment is a complex, highly variable and physically non-reproducible system (neither in the laboratory nor in a confined territory). Environmental experimental studies are thus necessarily based on field measurements distributed in time and space. Only extensive data collections can provide the representative samples of system behavior that are essential for scientific advancement. The assimilation of large data collections into accessible archives must necessarily be implemented in electronic databases. In the case of tidal environments in general, and of the Venice lagoon in particular, it is useful to establish a database, freely accessible to the scientific community, documenting the dynamics of such systems and their response to anthropic pressures and climatic variability. At the Istituto Veneto di Scienze, Lettere ed Arti in Venice (Italy), two internet environmental databases have been developed: one collects detailed information regarding the Venice lagoon; the other coordinates the research consortium of the "TIDE" EU RTD project, which covers three different tidal areas: the Venice Lagoon (Italy), Morecambe Bay (England), and the Forth Estuary (Scotland). The archives may be accessed through the URL www.istitutoveneto.it. The first database is freely available to anyone who is interested; it is continuously updated and has been structured to promote documentation concerning the Venetian environment and to disseminate this information for educational purposes (see the "Dissemination" section). The second is supplied by scientists and engineers working on these tidal systems for various purposes (scientific, management, conservation, etc.); it is aimed at interested researchers and grows with their contributions. Both intend to promote scientific communication, to contribute to the realization of a distributed information system collecting homogeneous themes, and to initiate interconnection among databases covering different kinds of environments.
Rocha, Vania; Ximenes, Elisa Francioli; Carvalho, Mauren Lopes de; Alpino, Tais de Moura Ariza; Freitas, Carlos Machado de
2014-09-01
Within the specialized databases of the Virtual Health Library (VHL), the DISASTER database highlights the importance of the theme for the health sector. The scope of this article is to identify the profile of technical and scientific publications in this specialized database. Based on systematic searches and the analysis of results, it is possible to determine the type of publication; the main topics addressed; the types of disasters most commonly mentioned in published materials; the countries and regions studied; the historic periods with the most publications; and the current trend of publications. When examining the specialized data in detail, it soon becomes clear that the number of major topics is very high, making a specific search of this database a challenging exercise. On the other hand, it is encouraging that the disaster topic is discussed and assessed in a broad and diversified manner, associated with different aspects of the natural and social sciences. The disaster issue requires interdisciplinary knowledge production to reduce the impacts of disasters and to support risk management. Since the health sector is itself an interdisciplinary area, it can contribute to this knowledge production.
LARCRIM user's guide, version 1.0
NASA Technical Reports Server (NTRS)
Davis, John S.; Heaphy, William J.
1993-01-01
LARCRIM is a relational database management system (RDBMS) which performs the conventional duties of an RDBMS with the added feature that it can store attributes which consist of arrays or matrices. This makes it particularly valuable for scientific data management. It is accessible as a stand-alone system and through an application program interface. The stand-alone system may be executed in two modes: menu or command. The menu mode prompts the user for the input required to create, update, and/or query the database. The command mode requires the direct input of LARCRIM commands. Although LARCRIM is an update of an old database family, its performance on modern computers is quite satisfactory. LARCRIM is written in FORTRAN 77 and runs under the UNIX operating system. Versions have been released for the following computers: SUN (3 & 4), Convex, IRIS, Hewlett-Packard, CRAY 2 & Y-MP.
NASA Astrophysics Data System (ADS)
Rack, F. R.
2005-12-01
The Integrated Ocean Drilling Program (IODP: 2003-2013 initial phase) is the successor to the Deep Sea Drilling Project (DSDP: 1968-1983) and the Ocean Drilling Program (ODP: 1985-2003). These earlier scientific drilling programs amassed collections of sediment and rock cores (over 300 kilometers stored in four repositories) and data organized in distributed databases and in print or electronic publications. International members of the IODP have established, through memoranda, the right to have access to: (1) all data, samples, scientific and technical results, all engineering plans, data or other information produced under contract to the program; and, (2) all data from geophysical and other site surveys performed in support of the program which are used for drilling planning. The challenge that faces the individual platform operators and management of IODP is to find the right balance and appropriate synergies among the needs, expectations and requirements of stakeholders. The evolving model for IODP database services consists of the management and integration of data collected onboard the various IODP platforms (including downhole logging and syn-cruise site survey information), legacy data from DSDP and ODP, data derived from post-cruise research and publications, and other IODP-relevant information types, to form a common, program-wide IODP information system (e.g., IODP Portal) which will be accessible to both researchers and the public. The JANUS relational database of ODP was introduced in 1997 and the bulk of ODP shipboard data has been migrated into this system, which is comprised of a relational data model consisting of over 450 tables. The JANUS database includes paleontological, lithostratigraphic, chemical, physical, sedimentological, and geophysical data from a global distribution of sites. For ODP Legs 100 through 210, and including IODP Expeditions 301 through 308, JANUS has been used to store data from 233,835 meters of core recovered, which are comprised of 38,039 cores, with 202,281 core sections stored in repositories, which have resulted in the taking of 2,299,180 samples for scientists and other users (http://iodp.tamu.edu/janusweb/general/dbtable.cgi). JANUS and other IODP databases are viewed as components of an evolving distributed network of databases, supported by metadata catalogs and middleware with XML workflows, that are intended to provide access to DSDP/ODP/IODP cores and sample-based data as well as other distributed geoscience data collections (e.g., CHRONOS, PetDB, SedDB). These data resources can be explored through the use of emerging data visualization environments, such as GeoWall, CoreWall (http://www.evl.uic.edu/cavern/corewall), a multi-screen display for viewing cores and related data, GeoWall-2 and LambdaVision, a very-high resolution, networked environment for data exploration and visualization, and others. The U.S. Implementing Organization (USIO) for the IODP, also known as the JOI Alliance, is a partnership between Joint Oceanographic Institutions (JOI), Texas A&M University, and Lamont-Doherty Earth Observatory of Columbia University. JOI is a consortium of 20 premier oceanographic research institutions that serves the U.S. scientific community by leading large-scale, global research programs in scientific ocean drilling and ocean observing. For more than 25 years, JOI has helped facilitate discovery and advance global understanding of the Earth and its oceans through excellence in program management.
The development of an intelligent user interface for NASA's scientific databases
NASA Technical Reports Server (NTRS)
Campbell, William J.; Roelofs, Larry H.
1986-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI effort is to develop a friendly and intelligent user interface service that is based on expert systems and natural language processing technologies. This paper presents the design concepts, development approach and evaluation of performance of a prototype Intelligent User Interface Subsystem (IUIS) supporting an operational database.
Planetary Data Archiving Plan at JAXA
NASA Astrophysics Data System (ADS)
Shinohara, Iku; Kasaba, Yasumasa; Yamamoto, Yukio; Abe, Masanao; Okada, Tatsuaki; Imamura, Takeshi; Sobue, Shinichi; Takashima, Takeshi; Terazono, Jun-Ya
After the successful rendezvous of Hayabusa with the small body Itokawa, and the successful launch of Kaguya to the Moon, the Japanese planetary community has obtained its own full-scale datasets. At the moment, however, these datasets are only available from the data sites managed by each mission team. The databases are individually constructed in different formats, and the user interfaces of these data sites are not compatible with foreign databases. To improve the usability of the planetary archives at JAXA and to make international data exchange smooth, we are investigating the construction of a new planetary database. Within the coming decade, Japan will have fruitful datasets in the planetary science field: Venus (Planet-C), Mercury (BepiColombo), and several small-body missions in the planning phase. In order to strongly support international scientific collaboration using these mission archive data, the planned planetary data archive at JAXA should be managed in a unified manner, and the database should be constructed in the international planetary database standard style. In this presentation, we will show the current status and future plans of planetary data archiving at JAXA.
Omics databases on kidney disease: where they can be found and how to benefit from them.
Papadopoulos, Theofilos; Krochmal, Magdalena; Cisek, Katryna; Fernandes, Marco; Husi, Holger; Stevens, Robert; Bascands, Jean-Loup; Schanstra, Joost P; Klein, Julie
2016-06-01
In recent decades, the evolution of omics technologies has led to advances in all biological fields, creating a demand for effective storage, management and exchange of rapidly generated data and research discoveries. To address this need, the development of databases of experimental outputs has become a common part of scientific practice, serving as knowledge sources and data-sharing platforms that provide information about genes, transcripts, proteins or metabolites. In this review, we present the omics databases currently available, with a special focus on their application in kidney research and possibly in clinical practice. Databases are divided into two categories: general databases with a broad information scope, and kidney-specific databases distinctively concentrated on kidney pathologies. In research, databases can be used as a rich source of information about pathophysiological mechanisms and molecular targets. In the future, databases will support clinicians with their decisions, providing better and faster diagnoses and setting the direction towards more preventive, personalized medicine. We also provide a test case demonstrating the potential of biological databases in comparing multi-omics datasets and generating new hypotheses to answer a critical and common diagnostic problem in nephrology practice. In the future, employment of databases combined with data integration and data mining should provide powerful insights into unlocking the mysteries of kidney disease, leading to a potential impact on pharmacological intervention and therapeutic disease management.
Data Mining Research with the LSST
NASA Astrophysics Data System (ADS)
Borne, Kirk D.; Strauss, M. A.; Tyson, J. A.
2007-12-01
The LSST catalog database will exceed 10 petabytes, comprising several hundred attributes for 5 billion galaxies, 10 billion stars, and over 1 billion variable sources (optical variables, transients, or moving objects), extracted from over 20,000 square degrees of deep imaging in 5 passbands with thorough time-domain coverage: 1000 visits over the 10-year LSST survey lifetime. The opportunities are enormous for novel scientific discoveries within this rich time-domain, ultra-deep, multi-band survey database. Data mining, machine learning, and knowledge discovery research opportunities with the LSST are now under study, with the potential for new collaborations to contribute to these investigations. We will describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. We also give some illustrative examples of current scientific data mining research in astronomy, and point out where new research is needed. In particular, the data mining research community will need to address several issues in the coming years as we prepare for the LSST data deluge. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; visual data mining algorithms for visual exploration of the data; indexing of multi-attribute, multi-dimensional astronomical databases (beyond RA-Dec spatial indexing) for rapid querying of petabyte databases; and more. Finally, we will identify opportunities for synergistic collaboration between the data mining research group and the LSST Data Management and Science Collaboration teams.
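For flavor, here is a small outlier-detection example of the kind this agenda mentions, using scikit-learn's IsolationForest on synthetic catalog attributes. The magnitudes and colors are simulated stand-ins; a real pipeline would of course operate at a vastly larger scale.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated catalog: rows = sources, cols = [g-r color, r mag, variability]
normal = rng.normal([0.6, 21.0, 0.02], [0.15, 1.5, 0.01], size=(10_000, 3))
weird  = rng.normal([2.5, 17.0, 0.50], [0.30, 0.5, 0.10], size=(10, 3))
catalog = np.vstack([normal, weird])

clf = IsolationForest(contamination=0.002, random_state=0)
labels = clf.fit_predict(catalog)       # -1 marks outliers
print("flagged as anomalous:", np.flatnonzero(labels == -1).size, "sources")
```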
Perryman, Sarah A M; Castells-Brooke, Nathalie I D; Glendining, Margaret J; Goulding, Keith W T; Hawkesford, Malcolm J; Macdonald, Andy J; Ostler, Richard J; Poulton, Paul R; Rawlings, Christopher J; Scott, Tony; Verrier, Paul J
2018-05-15
The electronic Rothamsted Archive, e-RA (www.era.rothamsted.ac.uk) provides a permanent managed database to both securely store and disseminate data from Rothamsted Research's long-term field experiments (since 1843) and meteorological stations (since 1853). Both historical and contemporary data are made available via this online database which provides the scientific community with access to a unique continuous record of agricultural experiments and weather measured since the mid-19 th century. Qualitative information, such as treatment and management practices, plans and soil information, accompanies the data and are made available on the e-RA website. e-RA was released externally to the wider scientific community in 2013 and this paper describes its development, content, curation and the access process for data users. Case studies illustrate the diverse applications of the data, including its original intended purposes and recent unforeseen applications. Usage monitoring demonstrates the data are of increasing interest. Future developments, including adopting FAIR data principles, are proposed as the resource is increasingly recognised as a unique archive of data relevant to sustainable agriculture, agroecology and the environment.
United States national land cover database development: 1992-2001 and beyond
Yang, L.
2008-01-01
An accurate, up-to-date and spatially explicit national land cover database is required for monitoring the status and trends of the nation's terrestrial ecosystems, and for managing and conserving land resources at the national scale. Given all the challenges and resources required to develop such a database, innovative and scientifically sound planning must be in place and a partnership formed among users from government agencies, research institutes and the private sector. In this paper, we summarize the major scientific and technical issues in the development of the NLCD 1992 and 2001. Experiences and lessons learned from the project are documented with regard to project design, technical approaches, accuracy assessment strategy, and project implementation. Future improvements for developing the next-generation NLCD beyond 2001 are suggested, including: 1) enhanced satellite data preprocessing for correction of atmospheric and adjacency effects and for topographic normalization; 2) improved classification accuracy through comprehensive and consistent training data and new algorithm development; 3) a multi-resolution and multi-temporal database targeting major land cover changes and land cover database updates; 4) enriched database contents, including additional biophysical parameters and/or more detailed land cover classes, through synergizing multi-sensor, multi-temporal, and multi-spectral satellite data and ancillary data; and 5) transformation of the NLCD project into a national land cover monitoring program. © 2008 IEEE.
Energy science and technology database (on the internet). Online data
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The Energy Science and Technology Database (EDB) is a multidisciplinary file containing worldwide references to basic and applied scientific and technical research literature. The information is collected for use by government managers, researchers at the national laboratories, and other research efforts sponsored by the U.S. Department of Energy, and the results of this research are transferred to the public. Abstracts are included for records from 1976 to the present. The EDB also contains Nuclear Science Abstracts, a comprehensive abstract and index collection of the international nuclear science and technology literature for the period 1948 through 1976. Included are scientific and technical reports of the U.S. Atomic Energy Commission, U.S. Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Approximately 25% of the records in the file contain abstracts. Nuclear Science Abstracts contains over 900,000 bibliographic records. The entire Energy Science and Technology Database contains over 3 million bibliographic records. This database is now available for searching through the GOV.Research-Center (GRC) service. GRC is a single online web-based search service to well-known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.
NASA Astrophysics Data System (ADS)
Tyupikova, T. V.; Samoilov, V. N.
2003-04-01
Modern information technologies push the natural sciences toward further development, but this comes together with the evolution of infrastructures, highlighting the need for favorable conditions for the development of science and for a financial base to prove and legally protect new research. Any scientific development entails accounting and legal protection. In this report, we consider a new direction in the software, organization, and control of common databases, using the example of the electronic document handling system that functions in several departments of the Joint Institute for Nuclear Research.
EarthChem: International Collaboration for Solid Earth Geochemistry in Geoinformatics
NASA Astrophysics Data System (ADS)
Walker, J. D.; Lehnert, K. A.; Hofmann, A. W.; Sarbas, B.; Carlson, R. W.
2005-12-01
The current on-line information systems for igneous rock geochemistry - PetDB, GEOROC, and NAVDAT - convincingly demonstrate the value of rigorous scientific data management of geochemical data for research and education. The next generation of hypothesis formulation and testing can be vastly facilitated by enhancing these electronic resources through integration of available datasets, expansion of data coverage in location, time, and tectonic setting, timely updates with new data, and through intuitive and efficient access and data analysis tools for the broader geosciences community. PetDB, GEOROC, and NAVDAT have therefore formed the EarthChem consortium (www.earthchem.org) as an international collaborative effort to address these needs and serve the larger earth science community by facilitating the compilation, communication, serving, and visualization of geochemical data, and their integration with other geological, geochronological, geophysical, and geodetic information to maximize their scientific application. We report on the status of and future plans for EarthChem activities. EarthChem's development plan includes: (1) expanding the functionality of the web portal to become a 'one-stop shop for geochemical data' with search capability across databases, standardized and integrated data output, generally applicable tools for data quality assessment, and data analysis/visualization including plotting methods and an information-rich map interface; and (2) expanding data holdings by generating new datasets as identified and prioritized through community outreach, and facilitating data contributions from the community by offering web-based data submission capability and technical assistance for design, implementation, and population of new databases and their integration with all EarthChem data holdings. Such federated databases and datasets will retain their identity within the EarthChem system. We also plan on working with publishers to ease the assimilation of geochemical data into the EarthChem database. As a community resource, EarthChem will address user concerns and respond to broad scientific and educational needs. EarthChem will hold yearly workshops, town hall meetings, and/or exhibits at major meetings. The group has established a two-tier committee structure to help ease the communication and coordination of database and IT issues between existing data management projects, and to receive feedback and support from individuals and groups from the larger geosciences community.
Unified Access Architecture for Large-Scale Scientific Datasets
NASA Astrophysics Data System (ADS)
Karna, Risav
2014-05-01
Data-intensive sciences have to deploy diverse large-scale database technologies for data analytics, as scientists now deal with much larger volumes of data than ever before. While array databases have bridged many gaps between the needs of data-intensive research fields and DBMS technologies (Zhang 2011), invocation of the other big data tools accompanying these databases is still manual and separate from the database management interface. We identify this as an architectural challenge that will increasingly complicate the user's workflow, owing to the growing number of useful but isolated and niche database tools. Such use of data analysis tools in effect leaves the burden on the user's end to synchronize the results from other data manipulation and analysis tools with the database management system. To this end, we propose a unified access interface for using big data tools within a large-scale scientific array database, using the database queries themselves to embed foreign routines belonging to the big data tools. Such an invocation of foreign data manipulation routines inside a query can be made possible through a user-defined function (UDF). UDFs that allow such levels of freedom as to call modules from another language, and to interface back and forth between the query body and the side-loaded functions, would be needed for this purpose. For the purpose of this research we attempt the coupling of four widely used tools, Hadoop (hadoop1), Matlab (matlab1), R (r1), and ScaLAPACK (scalapack1), with the UDF feature of rasdaman (Baumann 98), an array-based data manager, to investigate this concept. The native array data model used by an array-based data manager provides compact data storage and high-performance operations on ordered data such as spatial data, temporal data, and matrix-based data for linear algebra operations (scidbusr1). Performance issues arising from the coupling of tools with different paradigms, niche functionalities, separate processes, and output data formats have been anticipated and considered during the design of the unified architecture. The research focuses on the feasibility of the designed coupling mechanism and the evaluation of the efficiency and benefits of our proposed unified access architecture. Zhang 2011: Zhang, Ying and Kersten, Martin and Ivanova, Milena and Nes, Niels, SciQL: Bridging the Gap Between Science and Relational DBMS, Proceedings of the 15th Symposium on International Database Engineering Applications, 2011. Baumann 98: Baumann, P., Dehmel, A., Furtado, P., Ritsch, R., Widmann, N., "The Multidimensional Database System RasDaMan", SIGMOD 1998, Proceedings ACM SIGMOD International Conference on Management of Data, June 2-4, 1998, Seattle, Washington, 1998. hadoop1: hadoop.apache.org, "Hadoop", http://hadoop.apache.org/, [Online; accessed 12-Jan-2014]. scalapack1: netlib.org/scalapack, "ScaLAPACK", http://www.netlib.org/scalapack, [Online; accessed 12-Jan-2014]. r1: r-project.org, "R", http://www.r-project.org/, [Online; accessed 12-Jan-2014]. matlab1: mathworks.com, "Matlab Documentation", http://www.mathworks.de/de/help/matlab/, [Online; accessed 12-Jan-2014]. scidbusr1: scidb.org, "SciDB User's Guide", http://scidb.org/HTMLmanual/13.6/scidb_ug, [Online; accessed 01-Dec-2013].
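The actual UDF mechanism is not shown in the abstract; as a conceptual toy only, the following Python sketch dispatches a foreign routine named inside a query-like string to a side-loaded library (here numpy's FFT and mean), mimicking the embed-and-synchronize idea. The query grammar, registry, and array store are all invented.

```python
import re
import numpy as np

# Registry of side-loaded foreign routines callable from a query string.
FOREIGN = {
    "fft":  np.fft.fft,     # stand-in for an FFTW-backed routine
    "mean": np.mean,
}

ARRAYS = {"temperature": np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0])}

def run(query: str):
    """Evaluate 'select <routine>(<array>)' by calling the foreign module."""
    m = re.fullmatch(r"select (\w+)\((\w+)\)", query.strip())
    if not m:
        raise ValueError("unsupported query")
    routine, array = m.groups()
    return FOREIGN[routine](ARRAYS[array])   # result returns to the query layer

print(run("select mean(temperature)"))   # -> 3.833...
```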
Optical components damage parameters database system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong
2012-10-01
Optical components are key elements of the large-scale laser devices under development; their load capacity is directly related to the device's output capacity, and that load capacity depends on many factors. By digitizing the various factors affecting load capacity into an optical-component damage parameter database, a scientific, data-supported basis is provided for assessing the load capacity of optical components. Using business-process and model-driven approaches, a component damage parameter information model and database system were established. Application of the system shows that it meets the business-process and data-management requirements of optical-component damage testing; component parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical-component damage tests.
Rezaei, Satar; Hajizadeh, Mohammad; Zandian, Hamed; Fathi, Afshin; Nouri, Bijan
2017-08-01
The purpose of this systematic review and meta-analysis was to provide a precise estimate of the period prevalence of needlestick injuries (NSI) among nurses working in hospitals in Iran and the reporting rate of NSI to nurse managers. We searched both international (PubMed, Scopus and the Institute for Scientific Information) and Iranian (Scientific Information Database, Iranmedex and Magiran) scientific databases to find studies published from 2000 to 2016 of NSI among Iranian nurses. The following keywords in Persian and English were used: "needle-stick" or "needle stick" or "needlestick," with and without "injury" or "injuries," "prevalence" or "frequency," "nurses" or "nursing staff," and "Iran." In a sample of 21 articles with 6,480 participants, we estimated that the overall 1-year period prevalence of NSI was 44% (95% confidence interval [CI], 35-53%) among Iranian nurses. The overall 1-year period prevalence of reporting NSI to nurse managers was 42% (95% CI, 33-52%). In meta-regression analysis, sample size, mean age, years of experience, and gender ratio were not associated with prevalence of NSI or reporting rate. The year of data collection was positively associated with period prevalence of NSI (p < .05), but not with the period prevalence of reporting NSI to nurse managers. Results indicated a high NSI period prevalence and low NSI reporting rate among nurses in Iran. Thus, effective interventions are required in hospitals in Iran to reduce the prevalence and increase the reporting rate of NSI. © 2017 Wiley Periodicals, Inc.
Rasdaman for Big Spatial Raster Data
NASA Astrophysics Data System (ADS)
Hu, F.; Huang, Q.; Scheele, C. J.; Yang, C. P.; Yu, M.; Liu, K.
2015-12-01
Spatial raster data have grown exponentially over the past decade. Recent advancements in data acquisition technology, such as remote sensing, have allowed us to collect massive observation data of various spatial resolutions and domain coverages. The volume, velocity, and variety of such spatial data, along with the computationally intensive nature of spatial queries, pose grand challenges to the storage technologies needed for effective big data management. While high-performance computing platforms (e.g., cloud computing) can be used to solve the computing-intensive issues in big data analysis, data have to be managed in a way that is suitable for distributed parallel processing. Recently, rasdaman (raster data manager) has emerged as a scalable and cost-effective database solution to store and retrieve massive multi-dimensional arrays, such as sensor, image, and statistics data. In this paper, the pros and cons of using rasdaman to manage and query spatial raster data will be examined and compared with other common approaches, including file-based systems, relational databases (e.g., PostgreSQL/PostGIS), and NoSQL databases (e.g., MongoDB and Hive). Earth Observing System (EOS) data collected from NASA's Atmospheric Science Data Center (ASDC) will be used and stored in these selected database systems, and a set of spatial and non-spatial queries will be designed to benchmark their performance in retrieving large-scale, multi-dimensional arrays of EOS data. Lessons learned from using rasdaman will be discussed as well.
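As a sketch of how such array data might be retrieved from a rasdaman server, the fragment below issues a standard OGC Web Coverage Service (WCS) GetCoverage request with Python's requests library. The endpoint URL and coverage name are placeholders, and the /rasdaman/ows path is only the conventional default, not necessarily the deployment used in the paper.

```python
import requests

# Placeholder endpoint; a real deployment exposes WCS at its own URL.
ENDPOINT = "http://example.org/rasdaman/ows"

params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "GetCoverage",
    "coverageId": "AvgLandTemp",                 # illustrative coverage name
    "subset": ["Lat(30,40)", "Long(-100,-90)"],  # trim server-side
    "format": "image/tiff",
}
resp = requests.get(ENDPOINT, params=params, timeout=60)
resp.raise_for_status()
open("subset.tif", "wb").write(resp.content)  # only the subset crosses the wire
```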
[Construction of chemical information database based on optical structure recognition technique].
Lv, C Y; Li, M N; Zhang, L R; Liu, Z M
2018-04-18
The aim was to create a protocol that could be used to construct a chemical information database from the scientific literature quickly and automatically. Scientific literature, patents and technical reports from different chemical disciplines were collected and stored in PDF format as the fundamental datasets. Chemical structures were transformed from published documents and images into machine-readable data by using name-conversion technology and the optical structure recognition tool CLiDE. In the process of molecular structure information extraction, Markush structures were enumerated into well-defined monomer molecules by means of the QueryTools in the molecule editor ChemDraw. The document management software EndNote X8 was applied to acquire bibliographical references, including title, author, journal and year of publication. The text mining toolkit ChemDataExtractor was adopted to retrieve information that could be used to populate a structured chemical database from figures, tables, and textual paragraphs. After this step, detailed manual revision and annotation were conducted in order to ensure the accuracy and completeness of the data. In addition to the literature data, the computing simulation platform Pipeline Pilot 7.5 was utilized to calculate physical and chemical properties and predict molecular attributes. Furthermore, the open database ChEMBL was linked to fetch known bioactivities, such as indications and targets. After information extraction and data expansion, five separate metadata files were generated, including the molecular structure data file, molecular information, bibliographical references, predictable attributes and known bioactivities. With the canonical simplified molecular-input line-entry specification (SMILES) as the primary key, the metadata files were associated through common key nodes, including molecule number and PDF number, to construct an integrated chemical information database. A reasonable construction protocol for chemical information databases was created successfully. A total of 174 research articles and 25 reviews published in Marine Drugs from January 2015 to June 2016 were collected as the essential data source, and an elementary marine natural product database named PKU-MNPD was built in accordance with this protocol, containing 3,262 molecules and 19,821 records. This data aggregation protocol greatly aids the accurate, comprehensive and efficient construction of chemical information databases from original documents. The structured chemical information database can facilitate access to medical intelligence and accelerate the transformation of scientific research achievements.
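Canonical SMILES as a primary key can be computed with an open cheminformatics toolkit; for instance, the minimal RDKit snippet below (RDKit is our illustrative choice here, not necessarily the toolkit the authors used) collapses different spellings of the same molecule to one key.

```python
from rdkit import Chem

def canonical_smiles(smiles: str) -> str:
    """Normalize any valid SMILES string to its canonical form."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"unparsable SMILES: {smiles!r}")
    return Chem.MolToSmiles(mol)   # canonical by default

# Two different spellings of ethanol collapse to one primary key:
print(canonical_smiles("OCC"), canonical_smiles("C(C)O"))  # 'CCO' 'CCO'
```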
Techniques for Efficiently Managing Large Geosciences Data Sets
NASA Astrophysics Data System (ADS)
Kruger, A.; Krajewski, W. F.; Bradley, A. A.; Smith, J. A.; Baeck, M. L.; Steiner, M.; Lawrence, R. E.; Ramamurthy, M. K.; Weber, J.; Delgreco, S. A.; Domaszczynski, P.; Seo, B.; Gunyon, C. A.
2007-12-01
We have developed techniques and software tools for efficiently managing large geosciences data sets. While the techniques were developed as part of an NSF-funded ITR project that focuses on making NEXRAD weather data and rainfall products available to hydrologists and other scientists, they are relevant to other geosciences disciplines that deal with large data sets. Metadata, relational databases, data compression, and networking are central to our methodology. Data and derived products are stored on file servers in a compressed format. URLs to, and metadata about, the data and derived products are managed in a PostgreSQL database. Virtually all access to the data and products is through this database. Geosciences data normally require a number of processing steps to transform the raw data into useful products: data quality assurance, coordinate transformations and georeferencing, applying calibration information, and many more. We have developed the concept of crawlers that manage this scientific workflow. Crawlers are unattended processes that run indefinitely and, at set intervals, query the database for their next assignment. A database table functions as a roster for the crawlers. Crawlers perform well-defined tasks that are, except perhaps for sequencing, largely independent of other crawlers. Once a crawler is done with its current assignment, it updates the database roster table and gets its next assignment by querying the database. We have developed a library that enables one to quickly add crawlers. The library provides hooks to external (i.e., C-language) compiled codes, so that developers can work and contribute independently. Processes called ingesters inject data into the system. The bulk of the data are from a real-time feed using UCAR/Unidata's IDD/LDM software. An exciting recent development is the establishment of a Unidata HYDRO feed that delivers value-added metadata over the IDD/LDM. Ingesters grab the metadata and populate the PostgreSQL tables. These and other concepts we have developed have enabled us to efficiently manage a 70 TB (and growing) weather radar data set.
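A minimal sketch of this crawler pattern, assuming a hypothetical PostgreSQL roster table with task_id, status, created, and product_url columns (all names invented; the actual library is C-oriented and not shown in the abstract):

```python
import time
import psycopg2

conn = psycopg2.connect("dbname=radar user=crawler")  # illustrative DSN

def process(url):
    """Placeholder for one well-defined task (QC, georeferencing, ...)."""
    print("processing", url)

def next_assignment():
    """Claim the oldest pending task on the roster, or return None if idle."""
    with conn, conn.cursor() as cur:
        cur.execute("""
            UPDATE roster SET status = 'running'
            WHERE task_id = (SELECT task_id FROM roster
                             WHERE status = 'pending'
                             ORDER BY created LIMIT 1
                             FOR UPDATE SKIP LOCKED)
            RETURNING task_id, product_url""")
        return cur.fetchone()

while True:                       # crawlers run unattended, indefinitely
    task = next_assignment()
    if task is None:
        time.sleep(60)            # poll the roster at a set interval
        continue
    task_id, url = task
    process(url)
    with conn, conn.cursor() as cur:
        cur.execute("UPDATE roster SET status = 'done' WHERE task_id = %s",
                    (task_id,))
```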
Bandeira, Francisco; Griz, Luiz; Chaves, Narriane; Carvalho, Nara Crispim; Borges, Lívia Maria; Lazaretti-Castro, Marise; Borba, Victoria; Castro, Luiz Cláudio de; Borges, João Lindolfo; Bilezikian, John
2013-08-01
To conduct a literature review on the diagnosis and management of primary hyperparathyroidism, including the classical hypercalcemic form as well as the normocalcemic variant. This scientific statement was generated in response to a request from the Brazilian Medical Association (AMB) to the Brazilian Society for Endocrinology as part of its Clinical Practice Guidelines program. Articles were identified by searching the PubMed and Cochrane databases, as well as abstracts presented at the Endocrine Society and Brazilian Society for Endocrinology annual meetings and the American Society for Bone and Mineral Research annual meeting during the last 5 years. Grading of the quality of evidence and the strength of recommendations was adapted from the first report of the Oxford Centre for Evidence-Based Medicine. All grades of recommendation, including "D", are based on scientific evidence. The differences between A, B, C and D are due exclusively to the methods employed in generating the evidence. We present a scientific statement on primary hyperparathyroidism providing the level of evidence and the degree of recommendation regarding its causes, clinical presentation, and surgical and medical treatment.
Perryman, Sarah A. M.; Castells-Brooke, Nathalie I. D.; Glendining, Margaret J.; Goulding, Keith W. T.; Hawkesford, Malcolm J.; Macdonald, Andy J.; Ostler, Richard J.; Poulton, Paul R.; Rawlings, Christopher J.; Scott, Tony; Verrier, Paul J.
2018-01-01
The electronic Rothamsted Archive, e-RA (www.era.rothamsted.ac.uk) provides a permanent managed database to both securely store and disseminate data from Rothamsted Research's long-term field experiments (since 1843) and meteorological stations (since 1853). Both historical and contemporary data are made available via this online database, which provides the scientific community with access to a unique continuous record of agricultural experiments and weather measured since the mid-19th century. Qualitative information, such as treatment and management practices, plans and soil information, accompanies the data and is made available on the e-RA website. e-RA was released externally to the wider scientific community in 2013 and this paper describes its development, content, curation and the access process for data users. Case studies illustrate the diverse applications of the data, including its original intended purposes and recent unforeseen applications. Usage monitoring demonstrates that the data are of increasing interest. Future developments, including adopting FAIR data principles, are proposed as the resource is increasingly recognised as a unique archive of data relevant to sustainable agriculture, agroecology and the environment. PMID:29762552
From tsunami hazard assessment to risk management in Guadeloupe (F.W.I.)
NASA Astrophysics Data System (ADS)
Zahibo, Narcisse; Dudon, Bernard; Krien, Yann; Arnaud, Gaël; Mercado, Aurelio; Roger, Jean
2017-04-01
The Caribbean region is prone to numerous natural hazards such as earthquakes, landslides, storm surges, tsunamis, coastal erosion and hurricanes. All these threats may cause great human and economic losses and are thus of prime interest for applied research. One of the main challenges for the scientific community is to conduct state-of-the-art research to assess hazards and share the results with coastal planners and decision makers so that they can regulate land use and develop mitigation strategies. We present here the results of a collaborative scientific project between Guadeloupe and Puerto Rico which aimed at providing decision-making support to the authorities regarding tsunami hazards. This project led us to build a database of potential extreme events and to study their impacts on Guadeloupe in order to investigate storm surge and tsunami hazards. The results were used by local authorities to develop safeguarding and mitigation measures in coastal areas. This project is thus a good example of the benefit of inter-Caribbean scientific collaboration for natural risk management.
IceVal DatAssistant: An Interactive, Automated Icing Data Management System
NASA Technical Reports Server (NTRS)
Levinson, Laurie H.; Wright, William B.
2008-01-01
As with any scientific endeavor, the foundation of icing research at the NASA Glenn Research Center (GRC) is the data acquired during experimental testing. In the case of the GRC Icing Branch, an important part of this data consists of ice tracings taken following tests carried out in the GRC Icing Research Tunnel (IRT), as well as the associated operational and environmental conditions documented during these tests. Over the years, the large number of experimental runs completed has served to emphasize the need for a consistent strategy for managing this data. To address the situation, the Icing Branch has recently elected to implement the IceVal DatAssistant automated data management system. With the release of this system, all publicly available IRT-generated experimental ice shapes with complete and verifiable conditions have now been compiled into one electronically-searchable database. Simulation software results for the equivalent conditions, generated using the latest version of the LEWICE ice shape prediction code, are likewise included and are linked to the corresponding experimental runs. In addition to this comprehensive database, the IceVal system also includes a graphically-oriented database access utility, which provides reliable and easy access to all data contained in the database. In this paper, the issues surrounding historical icing data management practices are discussed, as well as the anticipated benefits to be achieved as a result of migrating to the new system. A detailed description of the software system features and database content is also provided; and, finally, known issues and plans for future work are presented.
NASA Astrophysics Data System (ADS)
Onodera, Natsuo; Mizukami, Masayuki
This paper estimates several quantitative indices of the production and distribution of scientific and technical databases based on various recent publications, and attempts to compare the indices internationally. Raw data used for the estimation are drawn mainly from the Database Directory (published by MITI) for database production and from some domestic and foreign study reports for database revenues. The ratios of the indices among Japan, the US and Europe for database usage are similar to those for general scientific and technical activities such as population and R&D expenditures. But Japanese contributions to the production, revenue and cross-country distribution of databases are still lower than those of the US and European countries. An international comparison of relative database activities between the public and private sectors is also discussed.
Costa, Raquel L; Gadelha, Luiz; Ribeiro-Alves, Marcelo; Porto, Fábio
2017-01-01
There are many steps in analyzing transcriptome data, from the acquisition of raw data to the selection of a subset of representative genes that explain a scientific hypothesis. The data produced can be represented as networks of interactions among genes, and these may additionally be integrated with other biological databases, such as protein-protein interactions, transcription factors and gene annotation. However, the results of these analyses remain fragmented, imposing difficulties either for later inspection of results or for meta-analysis by the incorporation of new related data. Integrating databases and tools into scientific workflows, orchestrating their execution, and managing the resulting data and its respective metadata are challenging tasks. Additionally, a great amount of effort is equally required to run in-silico experiments to structure and compose the information as needed for analysis. Different programs may need to be applied and different files are produced during the experiment cycle. In this context, the availability of a platform supporting experiment execution is paramount. We present GeNNet, an integrated transcriptome analysis platform that unifies scientific workflows with graph databases for selecting relevant genes according to the evaluated biological systems. It includes GeNNet-Wf, a scientific workflow that pre-loads biological data, pre-processes raw microarray data and conducts a series of analyses including normalization, differential expression inference, clusterization and gene set enrichment analysis. A user-friendly web interface, GeNNet-Web, allows for setting parameters, executing, and visualizing the results of GeNNet-Wf executions. To demonstrate the features of GeNNet, we performed case studies with data retrieved from GEO, particularly using a single-factor experiment in different analysis scenarios. As a result, we obtained differentially expressed genes for which biological functions were analyzed. The results are integrated into GeNNet-DB, a database of genes, clusters, experiments and their properties and relationships. The resulting graph database is explored with queries that demonstrate the expressiveness of this data model for reasoning about gene interaction networks. GeNNet is the first platform to integrate the analytical process of transcriptome data with graph databases. It provides a comprehensive set of tools that would otherwise be challenging for non-expert users to install and use. Developers can add new functionality to components of GeNNet. The derived data allow for testing previous hypotheses about an experiment and exploring new ones through the interactive graph database environment. It enables the analysis of data on humans, rhesus monkeys, mice and rats coming from Affymetrix platforms. GeNNet is available as an open source platform at https://github.com/raquele/GeNNet and can be retrieved as a software container with the command docker pull quelopes/gennet.
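GeNNet-DB's schema is not reproduced in the abstract; the sketch below, written against the official neo4j Python driver, only illustrates the kind of graph query the abstract alludes to. The node label Gene, the INTERACTS_WITH relationship, the property names and the connection settings are all assumptions.

```python
# Illustrative only: GeNNet-DB's real node labels, relationship types and
# properties are not given in the abstract; everything named here is invented.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

QUERY = """
MATCH (g:Gene {differentially_expressed: true})-[:INTERACTS_WITH]->(p:Gene)
RETURN g.symbol AS gene, collect(p.symbol) AS partners
ORDER BY size(partners) DESC
LIMIT 10
"""

with driver.session() as session:
    # Rank differentially expressed genes by how many interaction partners
    # they have, traversing edges directly rather than joining flat tables.
    for record in session.run(QUERY):
        print(record["gene"], record["partners"])
driver.close()
```

A property-graph query like this walks gene-interaction edges directly, which is the expressiveness argument the abstract makes for graph databases over fragmented result files.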
Databases on biotechnology and biosafety of GMOs.
Degrassi, Giuliano; Alexandrova, Nevena; Ripandelli, Decio
2003-01-01
Due to the involvement of scientific, industrial, commercial and public sectors of society, the complexity of the issues concerning the safety of genetically modified organisms (GMOs) for the environment, agriculture, and human and animal health calls for a wide coverage of information. Accordingly, development of the field of biotechnology, along with concerns related to the fate of released GMOs, has led to a rapid development of tools for disseminating such information. As a result, there is a growing number of databases aimed at collecting and storing information related to GMOs. Most of the sites deal with information on environmental releases, field trials, transgenes and related sequences, regulations and legislation, risk assessment documents, and literature. Databases are mainly established and managed by scientific, national or international authorities, and are addressed towards scientists, government officials, policy makers, consumers, farmers, environmental groups and civil society representatives. This complexity can lead to an overlapping of information. The purpose of the present review is to analyse the relevant databases currently available on the web, providing comments on their vastly different information and on the structure of the sites pertaining to different users. A preliminary overview on the development of these sites during the last decade, at both the national and international level, is also provided.
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Williams, Dean; Aloisio, Giovanni
2016-04-01
In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and eco-systems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background in high-performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases. With regard to interoperability, the talk will present the contributions provided both to the RDA Working Group on Array Databases and to the Earth System Grid Federation (ESGF) Compute Working Team. Also highlighted will be the results of large-scale climate model intercomparison data analysis experiments, for example: (1) defined in the context of the EU H2020 INDIGO-DataCloud project; (2) implemented in a real geographically distributed environment involving CMCC (Italy) and LLNL (US) sites; (3) exploiting Ophidia as a server-side, parallel analytics engine; and (4) applied to real CMIP5 data sets available through ESGF.
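Ophidia's own operator syntax is not shown in the abstract; purely to make the datacube idea concrete, here is a client-side sketch with xarray of the kind of multidimensional reduction such a framework executes declaratively and in parallel on the server side. The file name and the CMIP5-style variable name "tas" are assumptions.

```python
# Client-side illustration of a datacube-style reduction; a framework like
# Ophidia runs operations of this kind server-side and in parallel.
# File name and the variable name "tas" (near-surface air temperature,
# a common CMIP5 convention) are assumptions.
import xarray as xr

ds = xr.open_dataset("tas_Amon_model_historical_r1i1p1_185001-200512.nc")

# Reduce the (time, lat, lon) cube to a monthly climatology: one mean
# field per calendar month across the full time span.
climatology = ds["tas"].groupby("time.month").mean("time")
print(climatology.dims, climatology.shape)
```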
NCI at Frederick Scientific Library Reintroduces Scientific Publications Database | Poster
A 20-year-old database of scientific publications by NCI at Frederick, FNLCR, and affiliated employees has gotten a significant facelift. Maintained by the Scientific Library, the redesigned database—which is linked from each of the Scientific Library’s web pages—offers features that were not available in previous versions, such as additional search limits and non-traditional metrics for scholarly and scientific publishing known as altmetrics.
Fleet, Richard; Archambault, Patrick; Légaré, France; Chauny, Jean-Marc; Lévesque, Jean-Frédéric; Ouimet, Mathieu; Dupuis, Gilles; Haggerty, Jeannie; Poitras, Julien; Tanguay, Alain; Simard-Racine, Geneviève; Gauthier, Josée
2013-01-01
Introduction: Emergency departments are important safety nets for people who live in rural areas. Moreover, a serious problem in access to healthcare services has emerged in these regions. The challenges of providing access to quality rural emergency care include recruitment and retention issues, lack of advanced imagery technology, lack of specialist support and the heavy reliance on ambulance transport over great distances. The Quebec Ministry of Health and Social Services published a new version of the Emergency Department Management Guide, a document designed to improve emergency department management and to humanise emergency department care and services. In particular, the Guide recommends solutions to problems that plague rural emergency departments. Unfortunately, no studies have evaluated the implementation of the proposed recommendations. Methods and analysis: To develop a comprehensive portrait of all rural emergency departments in Quebec, data will be gathered from databases at the Quebec Ministry of Health and Social Services, the Quebec Trauma Registry, and from emergency department and ambulance services managers. Statistics Canada data will be used to describe populations and rural regions. To evaluate the use of the 2006 Emergency Department Management Guide and the implementation of its various recommendations, an online survey and a phone interview will be administered to emergency department managers. Two online surveys will evaluate quality of work life among physicians and nurses working at rural emergency departments. Quality-of-care indicators will be collected from databases and patient medical files. Data will be analysed using statistical (descriptive and inferential) procedures. Ethics and dissemination: This protocol has been approved by the CSSS Alphonse–Desjardins research ethics committee (Project MP-HDL-1213-011). The results will be published in peer-reviewed scientific journals and presented at one or more scientific conferences. PMID:23633423
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Authmann, Christian; Dreber, Niels; Hess, Bastian; Kellner, Klaus; Morgenthal, Theunis; Nauss, Thomas; Seeger, Bernhard; Tsvuura, Zivanai; Wiegand, Kerstin
2017-04-01
Bush encroachment is a syndrome of land degradation that occurs in many savannas, including those of southern Africa. The increase in density, cover or biomass of woody vegetation often has negative effects on a range of ecosystem functions and services, and these effects are hardly reversible. However, despite its importance, neither the causes of bush encroachment nor the consequences of different resource management strategies to combat or mitigate related shifts in savanna states are fully understood. The project "IDESSA" (An Integrative Decision Support System for Sustainable Rangeland Management in Southern African Savannas) aims to improve the understanding of the complex interplay between land use, climate patterns and vegetation dynamics, and to implement an integrative monitoring and decision-support system for the sustainable management of different savanna types. For this purpose, IDESSA follows an innovative approach that integrates local knowledge, botanical surveys, remote-sensing- and machine-learning-based time series of atmospheric and land-cover dynamics, spatially explicit simulation modeling, and analytical database management. The integration of the heterogeneous data will be implemented in a user-oriented database infrastructure and scientific workflow system. Accessible via web-based interfaces, this database and analysis system will allow scientists to manage and analyze monitoring data and scenario computations, and will allow stakeholders (e.g., land users, policy makers) to retrieve current ecosystem information and seasonal outlooks. We present the concept of the project and show preliminary results of the realization steps towards the integrative savanna management and decision-support system.
Using SIR (Scientific Information Retrieval System) for data management during a field program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tichler, J.L.
As part of the US Department of Energy's program, PRocessing of Emissions by Clouds and Precipitation (PRECP), a team of scientists from four laboratories conducted a study in north central New York State to characterize the chemical and physical processes occurring in winter storms. Sampling took place from three aircraft, two instrumented motor homes and a network of 26 surface precipitation sampling sites. Data management personnel were part of the field program, using a portable IBM PC-AT computer to enter information as it became available during the field study. Having the same database software on the field computer and on the cluster of VAX 11/785 computers in use aided database development and the transfer of data between machines. 2 refs., 3 figs., 5 tabs.
Introduction to the scientific application system of DAMPE (On behalf of DAMPE collaboration)
NASA Astrophysics Data System (ADS)
Zang, Jingjing
2016-07-01
The Dark Matter Particle Explorer (DAMPE) is a high-energy particle physics satellite experiment, launched on 17 December 2015. The science data processing and payload operation maintenance for DAMPE are provided by the DAMPE Scientific Application System (SAS) at the Purple Mountain Observatory (PMO) of the Chinese Academy of Sciences. SAS consists of three subsystems: the scientific operation subsystem, the science data and user management subsystem, and the science data processing subsystem. In cooperation with the Ground Support System (Beijing), the scientific operation subsystem is responsible for proposing observation plans, monitoring the health of the satellite, generating payload control commands and participating in all activities related to payload operation. Several databases developed by the science data and user management subsystem methodically manage all collected and reconstructed science data, downlinked housekeeping data, and payload configuration and calibration data. Under the leadership of the DAMPE Scientific Committee, this subsystem is also responsible for the publication of high-level science data and for supporting all science activities of the DAMPE collaboration. The science data processing subsystem has developed a series of physics analysis software tools to reconstruct basic information about detected cosmic-ray particles. This subsystem also maintains the high-performance computing system of SAS to process all downlinked science data, and automatically monitors the quality of all produced data. In this talk, we will describe the functionality of the whole DAMPE SAS system and present the main performance of its data processing capabilities.
Data Sharing in Astrobiology: the Astrobiology Habitable Environments Database (AHED)
NASA Astrophysics Data System (ADS)
Bristow, T.; Lafuente Valverde, B.; Keller, R.; Stone, N.; Downs, R. T.; Blake, D. F.; Fonda, M.; Pires, A.
2016-12-01
Astrobiology is a multidisciplinary area of scientific research focused on studying the origins of life on Earth and the conditions under which life might have emerged elsewhere in the universe. The understanding of complex questions in astrobiology requires integration and analysis of data spanning a range of disciplines including biology, chemistry, geology, astronomy and planetary science. However, the lack of a centralized repository makes it difficult for astrobiology teams to share data and benefit from resultant synergies. Moreover, in recent years, federal agencies have required that the results of any federally funded scientific research be available and useful to the public and the science community. Astrobiology, like any other scientific discipline, needs to respond to these mandates. The Astrobiology Habitable Environments Database (AHED) is a central, high quality, long-term searchable repository designed to help the community by promoting the integration and sharing of all the data generated by these diverse disciplines. AHED provides public, open access to astrobiology-related research data through a user-managed web portal implemented using the open-source software The Open Data Repository's (ODR) Data Publisher [1]. ODR-DP provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own databases or laboratory notebooks according to the characteristics of their data. AHED is then a collection of databases housed in the ODR framework that store information about samples, along with associated measurements, analyses, and contextual information about the field sites where samples were collected, the instruments or equipment used for analysis, and the people and institutions involved in their collection. Advanced graphics are implemented, together with advanced online tools for data analysis (e.g., R, MATLAB, Project Jupyter, http://jupyter.org). A permissions system will be put in place so that, while data are being actively collected and interpreted, they will remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by SERA and NASA NNX11AP82A, MSL. [1] Stone et al. (2016) AGU, submitted.
MEIMAN: Database exploring Medicinal and Edible insects of Manipur
Shantibala, Tourangbam; Lokeshwari, Rajkumari; Thingnam, Gourshyam; Somkuwar, Bharat Gopalrao
2012-01-01
We have developed MEIMAN, a unique database on the medicinal and edible insects of Manipur, which comprises 51 insect species collected through extensive surveys and questionnaires over two years. MEIMAN provides integrated access to insect species through a sophisticated web interface with the following capabilities: a) graphical display of seasonality, b) method of preparation, c) form of use (edible and medicinal), d) habitat, e) medicinal uses, f) commercial importance and g) economic status. This database will be useful for scientific validation and for updating traditional knowledge in bioprospecting. It will be useful in analyzing insect biodiversity for the development of untapped resources and their industrialization. Further, these features suit detailed investigation of potential medicinal and edible insects, making MEIMAN a powerful tool for sustainable management. Availability: The database is available for free at www.ibsd.gov.in/meiman PMID:22715305
Integrating Scientific Array Processing into Standard SQL
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter
2014-05-01
We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, however, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays (and, in an extension, also multidimensional arrays), but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.
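For reference, the rudimentary array support the poster refers to can be seen in stock PostgreSQL, which allows array-typed columns and slice queries but lacks the chunked storage and array indexing of a dedicated array DBMS. A small sketch follows; the table, data and connection details are invented.

```python
# PostgreSQL's built-in arrays, for comparison with dedicated array DBMSs.
# Table name, column names and connection parameters are invented.
import psycopg2

conn = psycopg2.connect(dbname="science", user="demo")
with conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS scan (
            id      serial PRIMARY KEY,
            station text,
            temps   double precision[][]   -- 2-D array, e.g. a (lat, lon) grid
        )
    """)
    # psycopg2 adapts nested Python lists to multidimensional SQL arrays.
    cur.execute(
        "INSERT INTO scan (station, temps) VALUES (%s, %s)",
        ("A1", [[280.1, 281.4], [279.8, 282.0]]),
    )
    # Slicing is standard PostgreSQL syntax, but it always materializes the
    # slice; there is no chunked storage or spatial indexing behind it.
    cur.execute("SELECT temps[1:2][1:1] FROM scan WHERE station = %s", ("A1",))
    print(cur.fetchone())
conn.commit()
```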
Chesapeake Bay Program Water Quality Database
The Chesapeake Information Management System (CIMS), designed in 1996, is an integrated, accessible information management system for the Chesapeake Bay Region. CIMS is an organized, distributed library of information and software tools designed to increase basin-wide public access to Chesapeake Bay information. The information delivered by CIMS includes technical and public information, educational material, environmental indicators, policy documents, and scientific data. Through the use of relational databases, web-based programming, and web-based GIS, a large number of Internet resources have been established. These resources include multiple distributed on-line databases, on-demand graphing and mapping of environmental data, and geographic searching tools for environmental information. Baseline monitoring data, summarized data and environmental indicators that document ecosystem status and trends, confirm linkages between water quality, habitat quality and abundance, and the distribution and integrity of biological populations are also available. One of the major features of the CIMS network is the Chesapeake Bay Program's Data Hub, providing users access to a suite of long-term water quality and living resources databases. Chesapeake Bay mainstem and tidal tributary water quality, benthic macroinvertebrates, toxics, plankton, and fluorescence data can be obtained for a network of over 800 monitoring stations.
A novel data storage logic in the cloud
Mátyás, Bence; Szarka, Máté; Járvás, Gábor; Kusper, Gábor; Argay, István; Fialowski, Alice
2016-01-01
Databases which store and manage long-term scientific information related to the life sciences are used to store huge amounts of quantitative attributes. Introduction of a new entity attribute requires modification of the existing data tables and of the programs that use them. The solution is to increase the number of virtual data tables while the number of screens remains the same. The main objective of the present study was to introduce a logic called Joker Tao (JT), which provides universal data storage for cloud-based databases: all types of input data can be interpreted as an entity and an attribute at the same time, in the same data table. PMID:29026521
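The Joker Tao schema itself is not reproduced in the abstract; the sqlite3 sketch below only illustrates the general single-table idea, in which a stored attribute can itself appear as an entity in the same table, so new attributes need no schema change. All names and values are invented.

```python
# Generic single-table storage in the spirit described above: every row is a
# (subject, attribute, value) triple, and an attribute in one row can appear
# as the subject of another. This illustrates the general idea only, not the
# actual Joker Tao schema, which the abstract does not reproduce.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE universal (
        subject   TEXT,
        attribute TEXT,
        value     TEXT
    )
""")
rows = [
    ("sample_42", "species", "E. coli"),
    ("sample_42", "od600", "0.73"),
    # "od600" is itself described as an entity in the same table:
    ("od600", "unit", "dimensionless"),
    ("od600", "instrument", "spectrophotometer"),
]
conn.executemany("INSERT INTO universal VALUES (?, ?, ?)", rows)

# Adding a brand-new attribute needs no ALTER TABLE and no program change:
conn.execute("INSERT INTO universal VALUES (?, ?, ?)",
             ("sample_42", "ph", "7.1"))

for row in conn.execute("SELECT * FROM universal WHERE subject = 'sample_42'"):
    print(row)
```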
NASA Astrophysics Data System (ADS)
Besara, Rachel
2015-03-01
For years, the cost of STEM databases has grown faster than the rate of inflation. Libraries have reallocated funds for years to continue to support their scientific communities, but at many institutions they are reaching a point where they are no longer able to provide access to many databases considered standard for supporting research. A possible or partial alleviation of this problem is the federal open access mandate. However, this shift challenges the current model of publishing and data management in the sciences. This talk will discuss these topics from the perspective of research libraries supporting physics and the STEM disciplines.
Unification - An international aerospace information issue
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.; Lahr, Thomas F.
1992-01-01
Scientific and Technical Information (STI) represents the results of large investments in research and development (R&D) and the expertise of a nation and is a valuable resource. For more than four decades, NASA and its predecessor organizations have developed and managed the preeminent aerospace information system. NASA obtains foreign materials through its international exchange relationships, continually increasing the comprehensiveness of the NASA Aerospace Database (NAD). The NAD is de facto the international aerospace database. This paper reviews current NASA goals and activities with a view toward maintaining compatibility among international aerospace information systems, eliminating duplication of effort, and sharing resources through international cooperation wherever possible.
Martone, Maryann E.; Tran, Joshua; Wong, Willy W.; Sargis, Joy; Fong, Lisa; Larson, Stephen; Lamont, Stephan P.; Gupta, Amarnath; Ellisman, Mark H.
2008-01-01
Databases have become integral parts of data management, dissemination and mining in biology. At the Second Annual Conference on Electron Tomography, held in Amsterdam in 2001, we proposed that electron tomography data should be shared in a manner analogous to structural data at the protein and sequence scales. At that time, we outlined our progress in creating a database to bring together cell level imaging data across scales, The Cell Centered Database (CCDB). The CCDB was formally launched in 2002 as an on-line repository of high-resolution 3D light and electron microscopic reconstructions of cells and subcellular structures. It contains 2D, 3D and 4D structural and protein distribution information from confocal, multiphoton and electron microscopy, including correlated light and electron microscopy. Many of the data sets are derived from electron tomography of cells and tissues. In the five years since its debut, we have moved the CCDB from a prototype to a stable resource and expanded the scope of the project to include data management and knowledge engineering. Here we provide an update on the CCDB and how it is used by the scientific community. We also describe our work in developing additional knowledge tools, e.g., ontologies, for annotation and query of electron microscopic data. PMID:18054501
Information And Data-Sharing Plan of IPY China Activity
NASA Astrophysics Data System (ADS)
Zhang, X.; Cheng, W.
2007-12-01
Polar data sharing is an effective approach to global-system and polar science problems and to interdisciplinary and sustainable study, as well as an important means of handling IPY scientific heritage and realizing IPY goals. In accordance with IPY data-sharing policies, the Information and Data-Sharing Plan was listed among the five sub-plans of the IPY Chinese Programme launched in March 2007: the scientific research program of the Prydz Bay, Amery Ice Shelf and Dome A transects (short title: 'PANDA'), the Arctic Scientific Research Expedition Plan, the International Cooperation Plan, the Information and Data-Sharing Plan, and Education and Outreach. Since the foundation of Antarctic Zhongshan Station in 1989, China has carried out systematic scientific expeditions and research in the Larsemann Hills, Prydz Bay and the neighbouring sea areas, organizing 14 Prydz Bay oceanographic investigations, 3 Amery Ice Shelf expeditions, 4 Grove Mountains expeditions and 5 inland ice cap scientific expeditions. Two comprehensive oceanographic investigations in the Arctic Ocean were conducted in 1999 and 2003, acquiring a large amount of data and samples along the PANDA section and in the Pacific sector of the Arctic Ocean. A mechanism for submitting, sharing and archiving basic data has been gradually set up since 2000. Presently, the Polar Science Database and the Polar Sample Resource Sharing Platform of China, aimed at sharing polar data and samples, have been established and have begun to provide sharing services to domestic and overseas users. Under the IPY Chinese Activity, 2 scientific expeditions in the Arctic Ocean, 3 in the Southern Ocean, 2 at the Amery Ice Shelf, 1 in the Grove Mountains and 2 inland ice cap expeditions to Dome A will be carried out during the IPY period. Drawing on past experience and the work ahead, the Information and Data-Sharing Plan will, during 2007-2010, save, archive, and provide exchange and sharing services for the data obtained by scientific expeditions under the IPY Chinese Programme. Meanwhile, focusing on the east Antarctic Dome A-Grove Mountains-Zhongshan Station-Amery Ice Shelf-Prydz Bay section and the Pacific sector of the Arctic Ocean, the Plan will also collect and integrate IPY data and historical data and establish databases for the PANDA section and the Arctic Ocean. The details are as follows: on the basis of integrating the observed data acquired during China's expeditions, the Plan will, adopting portal technology, develop 5 subject databases (English versions included): (1) a database of the Zhongshan Station-Dome A inland ice cap section; (2) a database of ocean-ice-atmosphere-ice shelf interaction in east Antarctica; (3) a database of geological and glaciological advance and retreat evolution in the Grove Mountains; (4) a database of solar-terrestrial physics at Zhongshan Station; and (5) an oceanographic database of the Pacific sector of the Arctic Ocean. CN-NADC of PRIC is the institute responsible for the Plan; specifically, it coordinates and organizes the operation of the Plan, including data management, development of the data and information sharing portal, and international exchanges. The specific assignments under the Plan will be carried out by research institutes under CAS (Chinese Academy of Sciences), SOA (State Oceanic Administration), the State Bureau of Surveying and Mapping and the Ministry of Education.
Research mapping in North Sumatra based on Scopus
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitepu, R.; Rosmayati; Bakti, D.; Hardi, S. M.
2018-02-01
Research is needed to improve the capacity of human resources to manage natural resources for human well-being. Research is done by institutions such as universities or research institutes, but a picture of the research related to human welfare is not easy to obtain. Since research can be evidenced through scientific publications, databases of scientific publications can be used to view research behaviour. Research mapping in North Sumatra needs to be done to see how well the research conducted matches development needs in North Sumatra; as an illustration, Universitas Sumatera Utara is presented, whose research shows 60% strength, especially in the exact sciences.
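As an illustration of this kind of publication-database mapping, a query against the Scopus Search API might look like the following. The endpoint and the X-ELS-APIKey header follow Elsevier's published interface as best as can be stated here, but the API key, affiliation string and response field names are assumptions and should be checked against current documentation.

```python
# Hypothetical Scopus affiliation query; key, query string and field names
# are placeholders to be verified against Elsevier's current API docs.
import requests

API_KEY = "YOUR-ELSEVIER-API-KEY"  # placeholder
resp = requests.get(
    "https://api.elsevier.com/content/search/scopus",
    params={"query": 'AFFIL("Universitas Sumatera Utara")', "count": 25},
    headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for entry in resp.json()["search-results"]["entry"]:
    print(entry.get("dc:title"), "|", entry.get("prism:publicationName"))
```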
Exploring Antarctic Land Surface Temperature Extremes Using Condensed Anomaly Databases
NASA Astrophysics Data System (ADS)
Grant, Glenn Edwin
Satellite observations have revolutionized the Earth Sciences and climate studies. However, data and imagery continue to accumulate at an accelerating rate, and efficient tools for data discovery, analysis, and quality checking lag behind. In particular, studies of long-term, continental-scale processes at high spatiotemporal resolutions are especially problematic. The traditional technique of downloading an entire dataset and using customized analysis code is often impractical or consumes too many resources. The Condensate Database Project was envisioned as an alternative method for data exploration and quality checking. The project's premise was that much of the data in any satellite dataset is unneeded and can be eliminated, compacting massive datasets into more manageable sizes. Dataset sizes are further reduced by retaining only anomalous data of high interest. Hosting the resulting "condensed" datasets in high-speed databases enables immediate availability for queries and exploration. Proof of the project's success relied on demonstrating that the anomaly database methods can enhance and accelerate scientific investigations. The hypothesis of this dissertation is that the condensed datasets are effective tools for exploring many scientific questions, spurring further investigations and revealing important information that might otherwise remain undetected. This dissertation uses condensed databases containing 17 years of Antarctic land surface temperature anomalies as its primary data. The study demonstrates the utility of the condensate database methods by discovering new information. In particular, the process revealed critical quality problems in the source satellite data. The results are used as the starting point for four case studies, investigating Antarctic temperature extremes, cloud detection errors, and the teleconnections between Antarctic temperature anomalies and climate indices. The results confirm the hypothesis that the condensate databases are a highly useful tool for Earth Science analyses. Moreover, the quality checking capabilities provide an important method for independent evaluation of dataset veracity.
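The dissertation's actual condensation criteria are not given in this abstract; the sketch below only illustrates the general approach, discarding unremarkable observations and keeping anomalous ones in a queryable store. The two-sigma threshold and the synthetic data are invented.

```python
# Toy version of the "condensate" idea: keep only anomalous observations in a
# database for fast querying. Threshold and synthetic data are invented.
import sqlite3
import numpy as np

rng = np.random.default_rng(0)
anom = rng.normal(0.0, 1.5, size=(365, 50, 50))  # (day, i, j) anomaly grids

threshold = 2.0 * anom.std()                      # invented 2-sigma cutoff
day, i, j = np.nonzero(np.abs(anom) > threshold)  # indices of extremes only
values = anom[day, i, j]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE extreme (day INTEGER, i INTEGER, j INTEGER, anom REAL)")
conn.executemany(
    "INSERT INTO extreme VALUES (?, ?, ?, ?)",
    zip(day.tolist(), i.tolist(), j.tolist(), values.tolist()),
)

# The condensed table answers queries immediately, without scanning full grids:
n, = conn.execute(
    "SELECT COUNT(*) FROM extreme WHERE anom > 0 AND day < 90"
).fetchone()
print(f"{n} warm extremes in the first 90 days")
```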
Internet Portal For A Distributed Management of Groundwater
NASA Astrophysics Data System (ADS)
Meissner, U. F.; Rueppel, U.; Gutzke, T.; Seewald, G.; Petersen, M.
The management of groundwater resources for the supply of German cities and suburban areas has become a matter of public interest during the last years. Negative headlines in the Rhein-Main-Area dealt with cracks in buildings as well as damaged woodlands and inundated agricultural areas as an effect of varying groundwater levels. Usually a holistic management of groundwater resources is not existent because of the complexity of the geological system, the large number of involved groups and their divergent interests, and a lack of essential information. The development of a network-based information system for an efficient groundwater management was the target of the project "Grundwasser-Online" [1]. The management of groundwater resources has to take into account various hydrogeological, climatic, water-economical, chemical and biological interrelations [2]. Thus, the traditional approaches to information retrieval, which are characterised by a high personnel and time expenditure, are not sufficient. Furthermore, the efficient control of groundwater cultivation requires direct communication between the different water supply companies, the consultant engineers, the scientists, the governmental agencies and the public, by using computer networks. The presented groundwater information system consists of different components, especially for the collection, storage, evaluation and visualisation of groundwater-relevant information. Network-based technologies are used [3]. For the collection of time-dependent groundwater-relevant information, modern technologies of Mobile Computing have been analysed in order to provide an integrated approach to the management of large groundwater systems. The aggregated information is stored within a distributed geo-scientific database system which enables a direct integration of simulation programs for the evaluation of interactions in groundwater systems. Thus, even a prognosis for the evolution of groundwater states can be given. In order to generate reports automatically, appropriate technologies are utilised. The visualisation of geo-scientific databases in the internet considering their geographic reference is performed with internet map servers. According to the communication of the map server with the underlying geo-scientific database, it is necessary that the demanded data can be filtered interactively in the internet browser using chronological and logical criteria. With regard to public use, the security aspects within the described distributed system are of major importance. Therefore, security methods for the modelling of access rights in combination with digital signatures have been analysed and implemented in order to provide a secure data exchange and communication between the different partners in the network.
Database Design and Management in Engineering Optimization.
1988-02-01
The abstract is badly garbled in the source; the recoverable fragments indicate a discussion of database management systems (DBMS) for scientific and engineering applications, which emerged alongside modern digital computers in the mid-1950s; the observation that engineering data are continuously redefined within an application program, so the data definition language (DDL) must accommodate the data types usually encountered in engineering applications; application software calling standard subroutines from the DBMS library to define data and perform database operations; and utility routines such as GFDGT, which computes the number of digits needed to display a value.
Exploration of options for publishing databases and supplemental material in society journals
USDA-ARS?s Scientific Manuscript database
As scientific information becomes increasingly more abundant, there is increasing interest among members of our societies in sharing databases. These databases have great value, for example, in providing long-term perspectives on various scientific problems and for use by modelers to extend the inform...
Environment Online: The Greening of Databases. Part 2. Scientific and Technical Databases.
ERIC Educational Resources Information Center
Alston, Patricia Gayle
1991-01-01
This second in a series of articles about online sources of environmental information describes scientific and technical databases that are useful for searching environmental data. Topics covered include chemicals and hazardous substances; agriculture; pesticides; water; forestry, oil, and energy resources; air; environmental and occupational…
[Presence of the biomedical periodicals of Hungarian editions in international databases].
Vasas, Lívia; Hercsel, Imréné
2006-01-15
The majority of Hungarian scientific results in medicine and related sciences are published in scientific periodicals of foreign edition with high impact factor (IF) values, and they appear in the international scientific literature in foreign languages. In this study the authors dealt only with the presence and registered citations in international databases of those periodicals published in Hungary and/or in cooperation with foreign publishing companies. The examination went back to 1980 and covered a 25-year period. 110 periodicals were selected for detailed examination. The authors analyzed the status of the current periodicals in the three most frequently used databases (MEDLINE, EMBASE, Web of Science) and found that the biomedical scientific periodicals of Hungarian interest were not represented with reasonable emphasis in the relevant international bibliographic databases. Because of the great volume of data, the scientific literature of medicine and related sciences could not be represented in its entirety; this publication, however, may give useful information to inquirers and call the attention of those concerned.
Geoinformatics in the public service: building a cyberinfrastructure across the geological surveys
Allison, M. Lee; Gundersen, Linda C.; Richard, Stephen M.; Keller, G. Randy; Baru, Chaitanya
2011-01-01
Advanced information technology infrastructure is increasingly being employed in the Earth sciences to provide researchers with efficient access to massive central databases and to integrate diversely formatted information from a variety of sources. These geoinformatics initiatives enable manipulation, modeling and visualization of data in a consistent way, and are helping to develop integrated Earth models at various scales, and from the near surface to the deep interior. This book uses a series of case studies to demonstrate computer and database use across the geosciences. Chapters are thematically grouped into sections that cover data collection and management; modeling and community computational codes; visualization and data representation; knowledge management and data integration; and web services and scientific workflows. Geoinformatics is a fascinating and accessible introduction to this emerging field for readers across the solid Earth sciences and an invaluable reference for researchers interested in initiating new cyberinfrastructure projects of their own.
Review of telehealth stuttering management.
Lowe, Robyn; O'Brian, Sue; Onslow, Mark
2013-01-01
Telehealth is the use of communication technology to provide health care services by means other than typical in-clinic attendance models. Telehealth is increasingly used for the management of speech, language and communication disorders. The aim of this article is to review telehealth applications in stuttering management. We conducted a search of the peer-reviewed literature for the past 20 years using the Institute for Scientific Information Web of Science database, PubMed: The Bibliographic Database, and a search for articles by hand. Outcomes for telehealth stuttering treatment were generally positive, but there may be a compromise of treatment efficiency with telehealth treatment of young children. Our search found no studies dealing with stuttering assessment procedures using telehealth models. No economic analyses of this delivery model have been reported. This review highlights the need for continued research on telehealth for stuttering management. Evidence from research is needed to establish the efficacy of assessment procedures using telehealth methods, as well as to guide the development of improved treatment procedures. Clinical and technical guidelines are urgently needed to ensure that the evolving and continued use of telehealth to manage stuttering does not compromise the standards of care afforded by standard in-clinic models.
The QuakeSim Project: Web Services for Managing Geophysical Data and Applications
NASA Astrophysics Data System (ADS)
Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet
2008-04-01
We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.
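QuakeSim's actual service addresses are not listed here; the sketch below shows only the shape of a standard OGC WFS GetFeature request, the kind the archival GIS services described would answer. The endpoint URL and the feature type name are placeholders.

```python
# Shape of a standard OGC Web Feature Service (WFS) GetFeature request.
# The endpoint URL and the feature type name are placeholders, not
# QuakeSim's actual service addresses.
import requests

params = {
    "service": "WFS",
    "version": "1.1.0",                     # WFS version common in that era
    "request": "GetFeature",
    "typeName": "quakesim:fault_segments",  # placeholder layer name
    "maxFeatures": 10,
}
resp = requests.get("https://example.org/wfs", params=params, timeout=30)
resp.raise_for_status()
print(resp.text[:500])  # beginning of the GML payload
```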
Specimens as records: scientific practice and recordkeeping in natural history research.
Ilerbaig, Juan
2010-01-01
For the past two decades, scholars in archival science have begun to question traditional assumptions about the nature of the record. Drawing on theories from fields such as sociology, organization theory, and science studies, and on their own ethnographic studies, they propose more inclusive definitions and widening contexts of analysis for record making and recordkeeping. This paper continues this critical consideration of the concept of the record by examining the nature of nonprototypical records in the scientific world. The paper focuses on the system of specimens and field notes established by biologist Joseph Grinnell at the Museum of Vertebrate Zoology (University of California, Berkeley) as a means of examining several aspects of the nature of the scientific record: materiality, representation, and the triad of evidence/memory/accountability. Focusing on the creation and management of these scientific records, the paper argues that further analyses of scientific record making and recordkeeping are bound to benefit both scientific work, which depends more and more on databases and archives, and archival science, which is becoming more relevant beyond its traditional realm of the legal/business/administrative world.
A Conceptual Model and Database to Integrate Data and Project Management
NASA Astrophysics Data System (ADS)
Guarinello, M. L.; Edsall, R.; Helbling, J.; Evaldt, E.; Glenn, N. F.; Delparte, D.; Sheneman, L.; Schumaker, R.
2015-12-01
Data management is critically foundational to doing effective science in our data-intensive research era; done well, it can enhance collaboration, increase the value of research data, and support requirements by funding agencies to make scientific data and other research products available through publicly accessible online repositories. However, there are few examples (but see the Long-term Ecological Research Network Data Portal) of these data being provided in such a manner that allows exploration within the context of the research process: What specific research questions do these data seek to answer? What data were used to answer these questions? What data would have been helpful to answer these questions but were not available? We propose an agile conceptual model and database design, as well as example results, that integrate data management with project management, not only to maximize the value of research data products but also to enhance collaboration during the project and the process of project management itself. In our project, which we call 'Data Map,' we used agile principles by adopting a user-focused approach and by designing our database to be simple, responsive, and expandable. We initially designed Data Map for the Idaho EPSCoR project "Managing Idaho's Landscapes for Ecosystem Services (MILES)" (see https://www.idahoecosystems.org//) and will present example results for this work. We consulted with our primary users (project managers, data managers, and researchers) to design the Data Map. Results will be useful to project managers and to funding agencies reviewing progress because they will readily provide answers to the questions "For which research projects/questions are data available and/or being generated by MILES researchers?" and "Which research projects/questions are associated with each of the three primary questions from the MILES proposal?" To be responsive to the needs of the project, we chose to streamline our design for the prototype database and build it in a way that is modular and can be changed or expanded to meet user needs. Our hope is that others, especially those managing large collaborative research grants, will be able to use our project model and database design to enhance the value of their project and data management both during and following the active research period.
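The abstract does not publish the Data Map schema, but the two reviewer questions it quotes suggest the shape of the underlying tables. A hedged sketch, using SQLite for self-containment, with invented table and column names:

```python
# Hypothetical question -> project -> dataset linkage, so that "for which
# research questions are data available?" becomes a simple join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE question (id INTEGER PRIMARY KEY, text TEXT);
CREATE TABLE project  (id INTEGER PRIMARY KEY, name TEXT,
                       question_id INTEGER REFERENCES question(id));
CREATE TABLE dataset  (id INTEGER PRIMARY KEY, title TEXT, status TEXT,
                       project_id INTEGER REFERENCES project(id));
""")
conn.execute("INSERT INTO question VALUES (1, 'How do land-use changes affect ecosystem services?')")
conn.execute("INSERT INTO project VALUES (1, 'Riparian vegetation mapping', 1)")
conn.execute("INSERT INTO dataset VALUES (1, 'LiDAR canopy heights 2014', 'available', 1)")

for row in conn.execute("""
    SELECT q.text, d.title FROM question q
    JOIN project p ON p.question_id = q.id
    JOIN dataset d ON d.project_id = p.id
    WHERE d.status = 'available'"""):
    print(row)
```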
Comparison Study of Overlap among 21 Scientific Databases in Searching Pesticide Information.
ERIC Educational Resources Information Center
Meyer, Daniel E.; And Others
1983-01-01
Evaluates overlapping coverage of 21 scientific databases used in 10 online pesticide searches in an attempt to identify minimum number of databases needed to generate 90 percent of unique, relevant citations for given search. Comparison of searches combined under given pesticide usage (herbicide, fungicide, insecticide) is discussed. Nine…
Informatics and data quality at collaborative multicenter Breast and Colon Cancer Family Registries.
McGarvey, Peter B; Ladwa, Sweta; Oberti, Mauricio; Dragomir, Anca Dana; Hedlund, Erin K; Tanenbaum, David Michael; Suzek, Baris E; Madhavan, Subha
2012-06-01
Quality control and harmonization of data are vital and challenging undertakings for any successful data coordination center, and a responsibility shared among the multiple sites that produce, integrate, and utilize the data. Here we describe a coordinated effort between scientists and data managers in the Cancer Family Registries to implement a data governance infrastructure consisting of both organizational and technical solutions. The technical solution uses a rule-based validation system that facilitates error detection and correction for data centers submitting data to a central informatics database. Validation rules comprise both standard checks on allowable values and crosschecks of related database elements for logical and scientific consistency. Evaluation over a 2-year timeframe showed a significant decrease in the number of errors in the database and a concurrent increase in data consistency and accuracy.
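The two rule types described (allowable-value checks and cross-field consistency checks) can be sketched as data-driven predicates; the field names below are hypothetical, not the registries' actual data dictionary:

```python
# Each rule is a (name, predicate) pair; a record fails validation if any
# predicate returns False. Field names are invented for illustration.
RULES = [
    ("sex in allowed set",
     lambda r: r["sex"] in {"M", "F", "U"}),
    ("diagnosis age within enrollment age",
     lambda r: r["age_at_diagnosis"] <= r["age_at_enrollment"]),
]

def validate(record: dict) -> list:
    """Return the names of all rules the record violates."""
    return [name for name, rule in RULES if not rule(record)]

print(validate({"sex": "F", "age_at_diagnosis": 52, "age_at_enrollment": 48}))
# -> ['diagnosis age within enrollment age']
```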
EXPOSURES AND INTERNAL DOSES OF ...
The National Center for Environmental Assessment (NCEA) has released a final report that presents and applies a method to estimate distributions of internal concentrations of trihalomethanes (THMs) in humans resulting from residential drinking water exposure. The report presents simulations of oral, dermal and inhalation exposures and demonstrates the feasibility of linking the US EPA's Information Collection Rule database with other databases on external exposure factors and physiologically based pharmacokinetic modeling to refine population-based estimates of exposure. The work supports the program goal of developing, by 2010, scientifically sound data and approaches to assess and manage risks to human health posed by exposure to specific regulated waterborne pathogens and chemicals, including those addressed by the Arsenic, M/DBP and Six-Year Review Rules.
Bannasch, Detlev; Mehrle, Alexander; Glatting, Karl-Heinz; Pepperkok, Rainer; Poustka, Annemarie; Wiemann, Stefan
2004-01-01
We have implemented LIFEdb (http://www.dkfz.de/LIFEdb) to link information regarding novel human full-length cDNAs generated and sequenced by the German cDNA Consortium with functional information on the encoded proteins produced in functional genomics and proteomics approaches. The database also serves as a sample-tracking system to manage the process from cDNA to experimental read-out and data interpretation. A web interface enables the scientific community to explore and visualize features of the annotated cDNAs and ORFs combined with experimental results, and thus helps to unravel new features of proteins with as yet unknown functions. PMID:14681468
High Performance Databases For Scientific Applications
NASA Technical Reports Server (NTRS)
French, James C.; Grimshaw, Andrew S.
1997-01-01
The goal of this task is to develop an Extensible File System (ELFS). ELFS attacks three problems: (1) providing high-bandwidth performance architectures; (2) reducing the cognitive burden faced by application programmers when they attempt to optimize; and (3) seamlessly managing the proliferation of data formats and architectural differences. The ELFS approach consists of language and run-time system support that permits the specification of a hierarchy of file classes.
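The hierarchy-of-file-classes idea, application code written once against a base class while subclasses encapsulate format and architecture differences, can be illustrated with a small class hierarchy; the class names are invented, and the original ELFS work was not in Python:

```python
# Hypothetical file-class hierarchy in the spirit of ELFS.
from abc import ABC, abstractmethod

class ScientificFile(ABC):
    """Common interface; each subclass encapsulates one on-disk format."""
    @abstractmethod
    def read_grid(self, name: str):
        ...

class LittleEndianBinaryFile(ScientificFile):
    def read_grid(self, name):
        return f"decoded {name} from little-endian binary"

class AsciiTableFile(ScientificFile):
    def read_grid(self, name):
        return f"parsed {name} from ASCII table"

def analyze(f: ScientificFile):
    # Application code is written once, against the base class.
    print(f.read_grid("temperature"))

analyze(LittleEndianBinaryFile())
analyze(AsciiTableFile())
```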
ERIC Educational Resources Information Center
Brown, Cecelia
2003-01-01
Discusses the growth in use and acceptance of Web-based genomic and proteomic databases (GPD) in scholarly communication. Confirms the role of GPD in the scientific literature cycle, suggests GPD are a storage and retrieval mechanism for molecular biology information, and recommends that existing models of scientific communication be updated to…
NONATObase: a database for Polychaeta (Annelida) from the Southwestern Atlantic Ocean.
Pagliosa, Paulo R; Doria, João G; Misturini, Dairana; Otegui, Mariana B P; Oortman, Mariana S; Weis, Wilson A; Faroni-Perez, Larisse; Alves, Alexandre P; Camargo, Maurício G; Amaral, A Cecília Z; Marques, Antonio C; Lana, Paulo C
2014-01-01
Networks can greatly advance data sharing attitudes by providing organized and useful data sets on marine biodiversity in a friendly and shared scientific environment. NONATObase, the interactive database on polychaetes presented herein, will provide new macroecological and taxonomic insights into the Southwestern Atlantic region. The database was developed by the NONATO network, a team of South American researchers, who integrated available information on polychaetes from between 5°N and 80°S in the Atlantic Ocean and near the Antarctic. The guiding principle of the database is to keep free and open access to data based on partnerships. Its architecture consists of a relational database integrated in the MySQL and PHP framework. Its web application allows access to the data from three different directions: species (qualitative data), abundance (quantitative data) and data set (reference data). The database has built-in functionality, such as filtering of data on user-defined taxonomic levels and on characteristics of site, sample, sampler, and mesh size used. Considering that there are still many taxonomic issues related to the poorly known regional fauna, a scientific committee was created to work out consistent solutions to current misidentifications and the equivocal taxonomic status of some species. Expertise from this committee will be incorporated into NONATObase continually. The use of quantitative data was made possible by standardizing the sample unit. All data, maps of distribution and references from a data set or a specified query can be visualized and exported to a data format commonly used in statistical analysis or reference manager software. With NONATObase, the NONATO network has launched a valuable resource for marine ecologists and taxonomists. The database is expected to grow in functionality as it proves useful, particularly regarding the challenges of dealing with molecular genetic data and tools to assess the effects of global environmental change. Database URL: http://nonatobase.ufsc.br/.
DCMS: A data analytics and management system for molecular simulation.
Kumar, Anand; Grupcev, Vladimir; Berrada, Meryem; Fogarty, Joseph C; Tu, Yi-Cheng; Zhu, Xingquan; Pandit, Sagar A; Xia, Yuni
Molecular Simulation (MS) is a powerful tool for studying physical/chemical features of large systems and has seen applications in many scientific and engineering domains. During the simulation process, experiments generate very large numbers of atoms whose spatial and temporal relationships must be observed for scientific analysis. The sheer data volumes and their intensive interactions impose significant challenges for data accessing, management, and analysis. To date, existing MS software systems fall short on storage and handling of MS data, mainly because they lack a platform to support applications that involve intensive data access and analytical processing. In this paper, we present the database-centric molecular simulation (DCMS) system our team has developed over the past few years. The main idea behind DCMS is to store MS data in a relational database management system (DBMS) to take advantage of the declarative query interface (i.e., SQL), data access methods, query processing, and optimization mechanisms of modern DBMSs. A unique challenge is handling the analytical queries, which are often compute-intensive. For that, we developed novel indexing and query processing strategies (including algorithms running on modern co-processors) as integrated components of the DBMS. As a result, researchers can upload and analyze their data using efficient functions implemented inside the DBMS. Index structures are generated to store analysis results that may be interesting to other users, so that the results are readily available without duplicating the analysis. We have developed a prototype of DCMS based on the PostgreSQL system, and experiments using real MS data and workloads show that DCMS significantly outperforms existing MS software systems. We also used it as a platform to test other data management issues such as security and compression.
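A sketch of the database-centric idea, assuming atoms are stored one row per frame so that a spatial predicate becomes declarative SQL; DCMS itself builds on PostgreSQL with custom indexes, and SQLite is used here only to keep the example self-contained:

```python
# Toy trajectory table: one row per (frame, atom) with coordinates.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE atom (frame INT, atom_id INT, x REAL, y REAL, z REAL)")
db.executemany("INSERT INTO atom VALUES (?,?,?,?,?)",
               [(0, 1, 0.0, 0.0, 0.0), (0, 2, 1.5, 0.2, 0.1), (1, 1, 0.1, 0.0, 0.0)])

# "Which atoms lie inside a query box at frame 0?" -- a typical spatial
# predicate a DBMS can answer with an index instead of a file scan.
for row in db.execute("""SELECT atom_id FROM atom
                         WHERE frame = 0 AND x BETWEEN -1 AND 1
                           AND y BETWEEN -1 AND 1 AND z BETWEEN -1 AND 1"""):
    print(row)
```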
Management and assimilation of diverse, distributed watershed datasets
NASA Astrophysics Data System (ADS)
Varadharajan, C.; Faybishenko, B.; Versteeg, R.; Agarwal, D.; Hubbard, S. S.; Hendrix, V.
2016-12-01
The U.S. Department of Energy's (DOE) Watershed Function Scientific Focus Area (SFA) seeks to determine how perturbations to mountainous watersheds (e.g., floods, drought, early snowmelt) impact the downstream delivery of water, nutrients, carbon, and metals over seasonal to decadal timescales. We are building a software platform that enables integration of diverse and disparate field, laboratory, and simulation datasets, of various types including hydrological, geological, meteorological, geophysical, geochemical, ecological and genomic datasets, across a range of spatial and temporal scales within the Rifle floodplain and the East River watershed, Colorado. We are using agile data management and assimilation approaches to enable web-based integration of heterogeneous, multi-scale data. Sensor-based observations of water level, vadose zone and groundwater temperature, water quality, and meteorology, as well as biogeochemical analyses of soil and groundwater samples, have been curated and archived in federated databases. Quality Assurance and Quality Control (QA/QC) are performed on priority datasets needed for ongoing scientific analyses and for hydrological and geochemical modeling. Automated QA/QC methods are used to identify and flag issues in the datasets. Data integration is achieved via a brokering service that dynamically integrates data from distributed databases via web services, based on user queries. The integrated results are presented to users in a portal that enables intuitive search, interactive visualization and download of integrated datasets. The concepts, approaches and codes being used are shared across the data science components of various large DOE-funded projects such as the Watershed Function SFA, Next Generation Ecosystem Experiment (NGEE) Tropics, Ameriflux/FLUXNET, and Advanced Simulation Capability for Environmental Management (ASCEM), and together contribute towards DOE's cyberinfrastructure for data management and model-data integration.
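The brokering pattern, a user query fanned out to federated sources and the results merged into one response, can be sketched as follows; the source functions are stand-ins for the project's actual web services:

```python
# Hypothetical broker that dynamically integrates results from registered sources.
def rifle_db(query: dict) -> list:
    return [{"site": "Rifle", "var": query["var"], "value": 8.1}]

def east_river_db(query: dict) -> list:
    return [{"site": "East River", "var": query["var"], "value": 6.4}]

def broker(query: dict, sources: list) -> list:
    """Fan the query out to every source and merge the results."""
    merged = []
    for source in sources:
        merged.extend(source(query))
    return merged

print(broker({"var": "groundwater_temp_C"}, [rifle_db, east_river_db]))
```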
Drug residues in urban water: A database for ecotoxicological risk management.
Destrieux, Doriane; Laurent, François; Budzinski, Hélène; Pedelucq, Julie; Vervier, Philippe; Gerino, Magali
2017-12-31
Human-use drug residues (DR) are only partially eliminated by waste water treatment plants (WWTPs), so residual amounts can reach natural waters and cause environmental hazards. In order to properly manage these hazards in the aquatic environment, a database is made available that integrates the concentration ranges at which DR cause adverse effects for aquatic organisms, and the temporal variations of the ecotoxicological risks. To implement this database for ecotoxicological risk assessment (the ERA database), the required information for each DR is the predicted no-effect concentration (PNEC), along with the predicted environmental concentration (PEC). The risk assessment is based on the ratio of PEC to PNEC. Adverse effect data or PNECs have been found in the publicly available literature for 45 substances. These ecotoxicity test data have been extracted from 125 different sources. The ERA database contains 1157 adverse effect data and 287 PNECs. The efficiency of the ERA database was tested with a data set coming from a simultaneous survey of WWTPs and the natural environment. In this data set, 26 DR were searched for in two WWTPs and in the river. On five sampling dates, river concentrations of 10 DR could pose environmental problems, of which 7 were measured only downstream of WWTP outlets. Integrating data from scientific literature and measurements, with unit homogenisation, in a single database facilitates the actual ecotoxicological risk assessment and may be useful for assessing risks arising from future field surveys. Moreover, the accumulation of a large ecotoxicity data set in a single database should not only improve knowledge of higher-risk molecules but also supply an objective tool to support the rapid and efficient evaluation of risk.
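The abstract bases the assessment on the PEC/PNEC relationship; the standard screening convention (an assumption here, not a detail given in the abstract) computes a risk quotient RQ = PEC/PNEC and flags potential risk when RQ ≥ 1:

```python
# Risk-quotient screening; the numbers are hypothetical, for illustration only.
def risk_quotient(pec_ng_l: float, pnec_ng_l: float) -> float:
    # RQ = PEC / PNEC; RQ >= 1 means exposure reaches the no-effect level.
    return pec_ng_l / pnec_ng_l

rq = risk_quotient(pec_ng_l=120.0, pnec_ng_l=40.0)
print(f"RQ = {rq:.1f} -> "
      f"{'potential risk' if rq >= 1 else 'no concern at screening level'}")
# -> RQ = 3.0 -> potential risk
```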
An international aerospace information system: A cooperative opportunity
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.; Blados, Walter R.
1992-01-01
Scientific and technical information (STI) is a valuable resource which represents the results of large investments in research and development (R&D), and the expertise of a nation. NASA and its predecessor organizations have developed and managed the preeminent aerospace information system. We see information and information systems changing and becoming more international in scope. In Europe, consistent with joint R&D programs and a view toward a united Europe, we have seen the emergence of a European Aerospace Database concept. In addition, the development of aeronautics and astronautics in individual nations has also led to initiatives for national aerospace databases. Considering recent technological developments in information science and technology, as well as the reality of scarce resources in all nations, it is time to reconsider the mutually beneficial possibilities offered by cooperation and international resource sharing. We consider the new possibilities offered by cooperation among the various aerospace database efforts, working toward an international aerospace database initiative that can optimize the cost/benefit equation for all participants.
NASA Astrophysics Data System (ADS)
Heather, David; Besse, Sebastien; Barbarisi, Isa; Arviset, Christophe; de Marchi, Guido; Barthelemy, Maud; Docasal, Ruben; Fraga, Diego; Grotheer, Emmanuel; Lim, Tanya; Macfarlane, Alan; Martinez, Santa; Rios, Carlos
2016-04-01
Introduction: The Planetary Science Archive (PSA) is the European Space Agency's (ESA) repository of science data from all planetary science and exploration missions. The PSA provides access to scientific datasets through various interfaces (e.g. FTP browser, Map based, Advanced search, and Machine interface): http://archives.esac.esa.int/psa All datasets are scientifically peer-reviewed by independent scientists, and are compliant with the Planetary Data System (PDS) standards. Updating the PSA: The PSA is currently implementing a number of significant changes, both to its web-based interface to the scientific community, and to its database structure. The new PSA will be up-to-date with versions 3 and 4 of the PDS standards, as PDS4 will be used for ESA's upcoming ExoMars and BepiColombo missions. The newly designed PSA homepage will provide direct access to scientific datasets via a text search for targets or missions. This will significantly reduce the complexity for users to find their data and will promote one-click access to the datasets. Additionally, the homepage will provide direct access to advanced views and searches of the datasets. Users will have direct access to documentation, information and tools that are relevant to the scientific use of the dataset, including ancillary datasets, Software Interface Specification (SIS) documents, and any tools/help that the PSA team can provide. A login mechanism will provide additional functionalities to users to aid and ease their searches (e.g. saving queries, managing default views). Queries to the PSA database will be possible either via the homepage (for simple searches of missions or targets), or through a filter menu for more tailored queries. The filter menu will offer multiple options to search for a particular dataset or product, and will manage queries for both in-situ and remote sensing instruments. Parameters such as start-time, phase angle, and heliocentric distance will be emphasized. A further advanced search function will allow users to query all the metadata present in the PSA database. Results will be displayed in 3 different ways: 1) a table listing all the corresponding data matching the criteria in the filter menu, 2) a projection of the products onto the surface of the object when applicable (i.e. planets, small bodies), and 3) a list of images for the relevant instruments to enjoy the beauty of our Solar System. These different ways of viewing the datasets will ensure that scientists and non-professionals alike will have access to the specific data they are looking for, regardless of their background. Conclusions: The new PSA will maintain the various interfaces and services it had in the past, and will include significant improvements designed to allow easier and more effective access to the scientific data and supporting materials. The new PSA is expected to be released by mid-2016. It will support the past, present and future missions, ancillary datasets, and will enhance the scientific output of ESA's missions. As such, the PSA will become a unique archive ensuring the long-term preservation and usage of scientific datasets together with user-friendly access.
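As a sketch of how a filter-menu query might map onto a machine interface, consider building a parameterized request; the endpoint path and parameter names below are hypothetical and are not the documented PSA API:

```python
# Hypothetical construction of a filter query (target, instrument, time,
# phase angle) as an HTTP request against an archive search endpoint.
from urllib.parse import urlencode

params = {
    "target": "67P",
    "instrument": "OSIRIS",
    "start_time": "2015-08-01T00:00:00",
    "phase_angle_max": 60,
}
url = "https://archives.esac.esa.int/psa/search?" + urlencode(params)
print(url)  # the portal would return matching products as a table, map, or image list
```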
The Danish Inguinal Hernia database.
Friis-Andersen, Hans; Bisgaard, Thue
2016-01-01
The aim of the database is to monitor and improve nationwide surgical outcomes after groin hernia repair, based on scientific evidence-based surgical strategies, for the national and international surgical community. It covers patients ≥18 years operated on for groin hernia, recording the type and size of hernia, whether the repair is primary or recurrent, the type of surgical repair procedure, and the mesh and mesh fixation methods. According to the Danish National Health Act, surgeons are obliged to register all hernia repairs immediately after surgery (3-minute registration time). All institutions have continuous access to their own data, stratified by individual surgeon. Registration is based on a closed, protected Internet system requiring personal codes that also identify the operating institution. A national steering committee consisting of 13 voluntary and dedicated surgeons, 11 of whom are unpaid, handles the medical management of the database. The Danish Inguinal Hernia Database comprises intraoperative data from >130,000 repairs (May 2015). A total of 49 peer-reviewed national and international publications have been based on the database (June 2015). The Danish Inguinal Hernia Database is fully active, monitoring surgical quality, and contributes to the national and international surgical community's efforts to improve outcomes after groin hernia repair.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Haeryong; Lee, Eunyong; Jeong, YiYeong
Korea Radioactive-waste Management Corporation (KRMC), established in 2009, has started a new project to collect information on the long-term stability of deep geological environments on the Korean Peninsula. The information has been built up in an integrated natural barrier database system available on the web (www.deepgeodisposal.kr). The database system also includes socially and economically important information, such as land use, mining areas, natural conservation areas, population density, and industrial complexes, because some of this information is used as exclusionary criteria during the site selection process for a deep geological repository for the safe and secure containment and isolation of spent nuclear fuel and other long-lived radioactive waste in Korea. Although the official site selection process has not yet started in Korea, it is believed that the integrated natural barrier database system and socio-economic database will be effectively utilized to narrow down the number of sites where future investigation is most promising and to enhance public acceptance by providing readily available, relevant scientific information on deep geological environments in Korea.
Antagonist molecules in the treatment of angina
Gupta, Ashish K.; Winchester, David; Pepine, Carl J.
2017-01-01
Introduction: Management of chronic angina has evolved dramatically in the last few decades, with several options for pharmacotherapy outlined in various evidence-based guidelines. Areas covered: There is a growing list of drugs currently being investigated for the treatment of chronic angina. These include several herbal medications, which are now being scientifically evaluated as potential alternative or even adjunctive therapies for angina. Gene- and cell-based therapies have opened yet another avenue for the management of chronic refractory angina in 'no-option' patients who are not candidates for either percutaneous or surgical revascularization and are on optimal medical therapy. An extensive review of the literature using PubMed, the Cochrane database, and the clinical trial databases of the USA and European Union was performed and is summarized here. This review discusses traditional as well as novel therapeutic agents for angina. Expert opinion: Several pharmacological and non-pharmacological therapeutic options are now available for the treatment and management of chronic refractory angina. Renewed interest in traditional therapies and cell- and gene-based modalities with targeted drug delivery systems will open the doors for personalized therapy for patients with chronic refractory angina. PMID:24047238
Progress in the Mallik 2002 Data and Information System
NASA Astrophysics Data System (ADS)
Loewner, R.; Conze, R.; Laframboise, R. R.; Working Group, M.
2002-12-01
Since December 2001, scientific investigations in a gas hydrate research well program have been undertaken in the Mackenzie Delta in the Canadian Arctic, supported by a new Data and Information System. The program comprised a main production well and two scientific observation wells. During the drilling period of the main Mallik well, we were able to develop an information system very close in time and space to the activities and operations at the drill site and in the laboratories of the Inuvik Research Center. Due to the particular conditions and characteristics of methane drilling projects, the technical realization and the structure of the data management required individually adapted solutions. On the one hand, the physical properties of methane and the Arctic climate forced work under extreme conditions, not only for the staff but also for the technical equipment. On the other hand, the sensitive data demanded a very high level of security. Considering these characteristics, a database structure was successfully set up on a server in Inuvik, supported by our Drilling Information System (DIS). The drilling period ended in March 2002 and the scientific evaluation phase began. A detailed database with all information gained on site and data from the succeeding analyses has since been made available in the ICDP information network (http://www.icdp-online.de/html/sites/mallik/index/index.html). Lithological descriptions, borehole measurements, monitoring data and an archive of all the core runs and samples are stored in the Mallik Data Warehouse. A request started from the Internet generates results dynamically to match the needs of the user. Users can even generate their own litho-logs, which enables them to compare all kinds of borehole information in their scientific work. All these functions and services are covered by a highly sophisticated security management scheme with defined areas of confidentiality within the Mallik Science Team.
Automatic labeling and characterization of objects using artificial neural networks
NASA Technical Reports Server (NTRS)
Campbell, William J.; Hill, Scott E.; Cromp, Robert F.
1989-01-01
Existing NASA-supported scientific databases are usually developed, managed and populated in a tedious, error-prone and self-limiting way, in terms of what can be described in a relational Database Management System (DBMS). The next generation of Earth remote sensing platforms, i.e., the Earth Observing System (EOS), will be capable of generating data at a rate of over 300 Mb per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize and catalog data, are manageable in a domain-specific context, and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data are then dynamically allocated to an object-oriented database where they can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven, knowledge-based scientific information system.
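The pipeline the abstract outlines, a classifier labels segmented image objects and the labeled objects enter an object catalog, can be sketched as follows; a nearest-centroid classifier stands in for the paper's artificial neural network, and all features and labels are invented:

```python
# Toy labeling pipeline: classify each segmented object, then register it
# in a catalog that stands in for the object-oriented database.
CENTROIDS = {"cloud": (0.9, 0.1), "water": (0.1, 0.8)}  # (brightness, wetness)

def classify(features):
    """Assign the label of the nearest class centroid (squared distance)."""
    return min(CENTROIDS, key=lambda label: sum(
        (f - c) ** 2 for f, c in zip(features, CENTROIDS[label])))

catalog = []
for obj_id, features in [(1, (0.85, 0.15)), (2, (0.2, 0.75))]:
    catalog.append({"id": obj_id, "label": classify(features), "features": features})

print(catalog)  # objects are now searchable by domain-specific label
```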
Database assessment of CMIP5 and hydrological models to determine flood risk areas
NASA Astrophysics Data System (ADS)
Limlahapun, Ponthip; Fukui, Hiromichi
2016-11-01
Solutions for water-related disasters may not come from a single scientific method. Based on this premise, we combined logical conceptions, sequential linkage of results among models, and database applications in an attempt to analyse historical and future flooding scenarios. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5), to derive precipitation; (2) the Integrated Flood Analysis System (IFAS), to extract the amount of discharge; and (3) the Hydrologic Engineering Center (HEC) model, to generate inundated areas. This research focused on integrating data regardless of system-design complexity; database approaches are flexible, manageable, and well supported for system data transfer, which makes them suitable for monitoring a flood. The resulting flood map, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.
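The model chain the study describes (CMIP5 precipitation feeding IFAS discharge feeding HEC inundation) can be sketched as staged functions sharing a data store; the stage functions and numbers below are toy stand-ins for the real models:

```python
# Hypothetical three-stage chain: each stage reads the previous stage's
# output from a shared store and writes its own result back.
store = {}

def cmip5_precipitation(scenario: str) -> list:
    return [12.0, 48.0, 95.0]  # daily rainfall, mm (illustrative values)

def ifas_discharge(rain_mm: list) -> list:
    return [r * 2.5 for r in rain_mm]  # m^3/s (toy rating curve)

def hec_inundation(discharge: list) -> list:
    return [q > 150.0 for q in discharge]  # True = flooding indicated

store["rain"] = cmip5_precipitation("RCP8.5")
store["discharge"] = ifas_discharge(store["rain"])
store["flooded"] = hec_inundation(store["discharge"])
print(store["flooded"])  # -> [False, False, True]
```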
Cryogenic Fluid Management: 2000-2004
NASA Technical Reports Server (NTRS)
2004-01-01
This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic includes cooling technologies for precision astronomical sensors and advanced spacecraft, as well as propellant storage and transfer in space. This area of focus is one of the enabling technologies as defined by NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.
NASA Astrophysics Data System (ADS)
Boichard, Jean-Luc; Brissebrat, Guillaume; Cloche, Sophie; Eymard, Laurence; Fleury, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim
2010-05-01
The AMMA project includes aircraft, ground-based and ocean measurements, an intensive use of satellite data and diverse modelling studies. Therefore, the AMMA database aims at storing a great amount and a large variety of data, and at providing the data as rapidly and safely as possible to the AMMA research community. In order to stimulate the exchange of information and collaboration between researchers from different disciplines or using different tools, the database provides a detailed description of the products and uses standardized formats. The AMMA database contains: - AMMA field campaign datasets; - historical data in West Africa from 1850 (operational networks and previous scientific programs); - satellite products from past and future satellites, (re-)mapped on a regular latitude/longitude grid and stored in NetCDF format (CF Convention); - model outputs from atmosphere or ocean operational (re-)analyses and forecasts, and from research simulations, processed in the same way as the satellite products. Before accessing the data, every user has to sign the AMMA data and publication policy. This charter covers only the use of data in the framework of scientific objectives, categorically excludes the redistribution of data to third parties and usage for commercial applications, and requires collaboration between data producers and users as well as mention of the AMMA project in any publication. The AMMA database and the associated on-line tools have been fully developed and are managed by two teams in France (IPSL Database Centre, Paris and OMP, Toulouse). Users can access data of both data centres through a single web portal. This website is composed of different modules: - Registration: forms to register, and to read and sign the data use charter on a first visit; - Data access interface: a user-friendly tool for building a data extraction request by selecting various criteria such as location, time and parameters; the request can concern local, satellite and model data; - Documentation: a catalogue of all the available data and their metadata. These tools have been developed using standard, free languages and software: - a Linux system with an Apache web server and a Tomcat application server; - J2EE tools: the JSF and Struts frameworks, Hibernate; - relational database management systems: PostgreSQL and MySQL; - an OpenLDAP directory. In order to facilitate access to the data by African scientists, the complete system has been mirrored at the AGRHYMET Regional Centre in Niamey and has been operational there since January 2009. Users can now access metadata and request data through either of two equivalent portals: http://database.amma-international.org or http://amma.agrhymet.ne/amma-data.
Securely and Flexibly Sharing a Biomedical Data Management System
Wang, Fusheng; Hussels, Phillip; Liu, Peiya
2011-01-01
Biomedical database systems need not only to address the issues of managing complex data, but also to provide data security and access control. These include not only system-level security, but also instance-level access control, such as access to documents, schemas, or aggregations of information. The latter is becoming more important as multiple users can share a single scientific data management system to conduct their research, while data have to be protected before they are published or IP-protected. This problem is challenging because users' needs for data security vary dramatically from one application to another, in terms of whom to share with, what resources are shared, and at what access level. We develop a comprehensive data access framework for the biomedical data management system SciPort. SciPort provides fine-grained, multi-level, space-based access control of resources not only at the object level (documents and schemas) but also at the space level (resource sets aggregated in a hierarchical way). Furthermore, to simplify the management of users and privileges, a customizable role-based user model is developed. The access control is implemented efficiently by integrating access privileges into the backend XML database, so that efficient queries are supported. The secure access approach we take makes it possible for multiple users to share the same biomedical data management system with flexible access management and high data security. PMID:21625285
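Space-based access control, where a grant on a space applies to everything beneath it in the hierarchy and narrower grants refine it, can be sketched as a walk up a path hierarchy; the paths, users and levels below are invented, not SciPort's actual model:

```python
# Hypothetical hierarchical permission check: the most specific grant wins.
GRANTS = {
    ("alice", "/lab1"): "write",          # space-level grant
    ("bob",   "/lab1/projectA"): "read",  # narrower space
}

def access_level(user: str, path: str) -> str:
    """Walk up the space hierarchy to find the most specific grant."""
    while path:
        level = GRANTS.get((user, path))
        if level:
            return level
        path = path.rsplit("/", 1)[0]  # move to the parent space
    return "none"

print(access_level("alice", "/lab1/projectA/doc42"))  # -> write (inherited)
print(access_level("bob", "/lab1/projectB/doc7"))     # -> none
```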
NASA's computer science research program
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1983-01-01
Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.
Management Practices and Tools: 2000-2004
NASA Technical Reports Server (NTRS)
2004-01-01
This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic is divided into four parts and covers the adoption of proven personnel and management reforms to implement the national space exploration vision, including the use of a "system-of-systems" approach; policies of spiral, evolutionary development; reliance upon lead systems integrators; and independent technical and cost assessments. This area of focus is one of the enabling technologies as defined by NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.
Farias, Diego Carlos; Araujo, Fernando Oliveira de
2017-06-01
Hospitals are complex organizations which, in addition to the technical assistance expected in the context of treatment and prevention of health hazards, also require good management practices aimed at improving efficiency in their core business. However, in administrative terms, recurrent conflicts arise between technical and managerial areas. Thus, this article sets out to conduct a review of the scientific literature pertaining to the themes of hospital management and project management as applied in the hospital context. In terms of methodology, the study adopts the webibliomining method of collection and systematic analysis of knowledge in indexed journal databases. The results show a growing interest on the part of researchers in a more vertically and horizontally dialogical administration, better definition of work processes, innovative technological tools to support the management process and, finally, the possibility of applying project management methodologies in collaboration with hospital management.
Biocuration at the Saccharomyces genome database.
Skrzypek, Marek S; Nash, Robert S
2015-08-01
Saccharomyces Genome Database is an online resource dedicated to managing information about the biology and genetics of the model organism, yeast (Saccharomyces cerevisiae). This information is derived primarily from scientific publications through a process of human curation that involves manual extraction of data and their organization into a comprehensive system of knowledge. This system provides a foundation for further analysis of experimental data coming from research on yeast as well as other organisms. In this review we will demonstrate how biocuration and biocurators add a key component, the biological context, to our understanding of how genes, proteins, genomes and cells function and interact. We will explain the role biocurators play in sifting through the wealth of biological data to incorporate and connect key information. We will also discuss the many ways we assist researchers with their various research needs. We hope to convince the reader that manual curation is vital in converting the flood of data into organized and interconnected knowledge, and that biocurators play an essential role in the integration of scientific information into a coherent model of the cell.
Belanger, M; Harris, P G; Nikolis, A; Danino, A M
2009-03-01
Our aim was to analyze media communications about three landmark medical reports. Was there any difference in the reporting of the three allografts? Was there a correlation between the media and the scientific world? The Internet sites of three major newspapers were used for the media database. Those results were compared with PubMed between 2005 and 2007 using these key words: "facial graft," "facial allograft," "composite tissue allograft," and the names of the surgeons who performed the grafts. We did a comparative analysis using a word processor and qualitative analysis software. We analyzed 51 articles from the media and six from the PubMed database. In PubMed, 100% of the articles were on the first graft and respected the privacy of the patient, compared with 67% of the media articles, which unveiled the patient's identity. The communication following a medical first depends on the team that performs the procedure. We observed major differences among the three cases. Ethical considerations are different for the media and for scientists. Managing communication around a medical first takes preparation and evaluation.
ERIC Educational Resources Information Center
Adams, James D.; Clemmons, J. Roger
2008-01-01
This article is a guide to the NBER-Rensselaer Scientific Papers Database, which includes more than 2.5 million scientific publications and over 21 million citations to those papers. The data cover an important sample of 110 top U.S. universities and 200 top U.S.-based R&D-performing firms during the period 1981-1999. This article describes the…
2000-05-31
Grey literature is defined by the Grey Literature Network Service (Farace, Dominic, 1997) as "that which is produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers." The stated vision was that all the world's formal scientific literature be available, on-line, to scientific workers throughout the world, as a world scientific database. These reports served as the base from which to begin building such a database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, J.R.; O'Neill, D.C.; Barker, B.W.
1994-10-01
The research described in this report is directed toward the development of a workstation-based data management, analysis and visualization system which can be used to improve the Air Force's capability to evaluate site-specific environmental hazards. The initial prototype system described in this report is directed toward a specific application to the Massachusetts Military Reservation (formerly Otis Air Force Base) on Cape Cod, Massachusetts. This system integrates a comprehensive, on-line environmental database for the site together with a map-based graphical user interface which facilitates analyst access to the databases and analysis tools needed to characterize the subsurface geologic and hydrologic environments at the site.
Design and development of a geo-referenced database to radionuclides in food
NASA Astrophysics Data System (ADS)
Nascimento, L. M. E.; Ferreira, A. C. M.; Gonzalez, S. A.
2018-03-01
The primary purpose of the range of activities concerning information management for environmental assessment is to provide the scientific community with improved access to environmental data, as well as to support the decision-making loop in case of contamination events due either to accidental or intentional causes. In recent years, geotechnologies have become a key reference in environmental research and monitoring, since they deliver efficient retrieval and subsequent processing of data about natural resources. This study aimed at the development of a georeferenced database (SIGLARA – SIstema Georeferenciado Latino Americano de Radionuclídeos em Alimentos), designed for the storage of data on radioactivity in food, available in three languages (Spanish, Portuguese and English) and employing free software.
The role of information system in multiple sclerosis management
Ajami, Sima; Ahmadi, Golchehreh; Etemadifar, Masoud
2014-01-01
Multiple sclerosis (MS) is a chronic disease of the central nervous system. The multiple sclerosis information system (MSIS), like any other information system (IS), depends on the identification, collection and processing of data to produce useful information. Lack of an integrated IS for collecting standard data has undesirable effects on exchanging, comparing, and managing information. The aim of this study was to recognize the role of the IS in MS management and to determine the advantages of, and barriers to, implementing the MSIS. The present study was a nonsystematic review conducted to recognize the role of the IS in MS management. Electronic scientific resources such as scientific journals, books and conference papers were used. We used the key words (IS, chronic disease management, and multiple sclerosis), their combinations or their synonyms in the titles, key words, abstracts, and text of English-language articles and reports published from 1980 until 2013, using search engines such as Google and Google Scholar and scientific databases and electronic journals such as PubMed, SID, Scopus, Medlib, and Magiran. More than 200 articles and reports were collected and assessed, and 139 of them were used. Findings showed that the MSIS can reduce disease expenses through continuously collecting correct, accurate, sufficient, and timely information on patients and the nature of the disease; recording; editing; processing; and exchanging and distributing it among different health care centers. Although the MSIS has many advantages, cultural, economic, technical, organizational, and managerial barriers cannot be ignored; therefore, studies are needed to prevent, reduce, and control them. One way is to recognize the advantages of the MSIS and use information technology to optimize disease management. PMID:25709660
In-Memory Graph Databases for Web-Scale Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Morari, Alessandro; Weaver, Jesse R.
RDF databases have emerged as one of the most relevant ways of organizing, integrating, and managing exponentially growing, often heterogeneous, and not rigidly structured data for a variety of scientific and commercial fields. In this paper we discuss the solutions integrated in GEMS (Graph database Engine for Multithreaded Systems), a software framework for implementing RDF databases on commodity, distributed-memory high-performance clusters. Unlike the majority of current RDF databases, GEMS has been designed from the ground up to primarily employ graph-based methods. This is reflected in all the layers of its stack. The GEMS framework is composed of: a SPARQL-to-C++ compiler, a library of data structures and related methods to access and modify them, and a custom runtime providing lightweight software multithreading, network message aggregation and a partitioned global address space. We provide an overview of the framework, detailing its components and how they have been closely designed and customized to address issues of graph methods applied to large-scale datasets on clusters. We discuss in detail the principles that enable automatic translation of queries (expressed in SPARQL, the query language of choice for RDF databases) to graph methods, and identify differences with respect to other RDF databases.
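To make the SPARQL-to-graph-methods translation concrete: a one-pattern SPARQL query compiles down to a scan-and-filter over edges, the kind of graph primitive GEMS generates C++ for. A toy illustration in Python (data and names invented):

```python
# A SPARQL pattern such as
#   SELECT ?paper WHERE { ?paper <cites> <doi:10.1000/xyz> }
# corresponds to matching the triple pattern (None, "cites", "doi:10.1000/xyz").
TRIPLES = [
    ("paperA", "cites", "doi:10.1000/xyz"),
    ("paperB", "cites", "doi:10.1000/abc"),
    ("paperA", "author", "Smith"),
]

def match(pattern):
    """Return all triples consistent with an (s, p, o) pattern; None = wildcard."""
    s, p, o = pattern
    return [t for t in TRIPLES
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match((None, "cites", "doi:10.1000/xyz")))
# -> [('paperA', 'cites', 'doi:10.1000/xyz')]
```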
The Astrobiology Habitable Environments Database (AHED)
NASA Astrophysics Data System (ADS)
Lafuente, B.; Stone, N.; Downs, R. T.; Blake, D. F.; Bristow, T.; Fonda, M.; Pires, A.
2015-12-01
The Astrobiology Habitable Environments Database (AHED) is a central, high-quality, long-term searchable repository for archiving and collaborative sharing of astrobiologically relevant data, including morphological, textural and contextual images and chemical, biochemical, isotopic, sequencing, and mineralogical information. The aim of AHED is to foster long-term innovative research by supporting integration and analysis of diverse datasets in order to: 1) help understand and interpret planetary geology; 2) identify and characterize habitable environments and pre-biotic/biotic processes; 3) interpret returned data from present and past missions; 4) provide a citable database of NASA-funded published and unpublished data (after an agreed-upon embargo period). AHED uses the online open-source software "The Open Data Repository's Data Publisher" (ODR - http://www.opendatarepository.org) [1], which provides a user-friendly interface that research teams or individual scientists can use to design, populate and manage their own database according to the characteristics of their data and the need to share data with collaborators or the broader scientific community. This platform can also be used as a laboratory notebook. The database will have the capability to import and export in a variety of standard formats. Advanced graphics will be implemented, including 3D graphing, multi-axis graphs, error bars, and similar scientific data functions, together with advanced online tools for data analysis (e.g., the statistical package R). A permissions system will be put in place so that data being actively collected and interpreted remain proprietary. A citation system will allow research data to be used and appropriately referenced by other researchers after the data are made public. This project is supported by the Science-Enabling Research Activity (SERA) and NASA NNX11AP82A, Mars Science Laboratory Investigations. [1] Nate et al. (2015) AGU, submitted.
Laurenne, Nina; Tuominen, Jouni; Saarenmaa, Hannu; Hyvönen, Eero
2014-01-01
The scientific names of plants and animals play a major role in the Life Sciences, as information is indexed, integrated, and searched using scientific names. The main problem with names is their ambiguous nature: more than one name may point to the same taxon, and multiple taxa may share the same name. In addition, scientific names change over time, which makes them open to various interpretations. Applying machine-understandable semantics to these names enables efficient processing of biological content in information systems. The first step is to use unique persistent identifiers instead of name strings when referring to taxa. The most commonly used identifiers are Life Science Identifiers (LSIDs), which are traditionally used in relational databases, and more recently HTTP URIs, which are applied on the Semantic Web by Linked Data applications. We introduce two models for expressing taxonomic information in the form of species checklists. First, we show how species checklists are presented in a relational database system using LSIDs. Then, in order to gain a more detailed representation of taxonomic information, we introduce the meta-ontology TaxMeOn to model the same content as Semantic Web ontologies, where taxa are identified using HTTP URIs. We also explore how changes in scientific names can be managed over time. The use of HTTP URIs is preferable for presenting the taxonomic information of species checklists: an HTTP URI identifies a taxon and, unlike an LSID, operates as a web address from which additional information about the taxon can be located. This enables the integration of biological data from different sources on the web using Linked Data principles and prevents the formation of information silos. The Linked Data approach allows a user to assemble information and evaluate the complexity of taxonomic data based on conflicting views of taxonomic classifications. Using HTTP URIs and Semantic Web technologies also facilitates the representation of the semantics of biological data and, in this way, the creation of more "intelligent" biological applications and services.
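As a hedged illustration of the Linked Data approach the authors advocate, the sketch below uses the rdflib package to attach two name strings (an accepted name and a historical synonym) to a single HTTP URI. The namespace and identifiers are placeholders, not TaxMeOn's actual URIs.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

TAXON = Namespace("http://example.org/taxon/")   # placeholder namespace

g = Graph()
lynx = TAXON["12345"]                            # one persistent HTTP URI
g.add((lynx, RDF.type, URIRef("http://example.org/schema/TaxonConcept")))
g.add((lynx, RDFS.label, Literal("Lynx lynx")))  # accepted name
g.add((lynx, RDFS.label, Literal("Felis lynx"))) # historical synonym

# Both name strings resolve to the same taxon URI, which also works as a
# web address from which more information could be dereferenced:
for label in ("Lynx lynx", "Felis lynx"):
    hits = [s for s, _, _ in g.triples((None, RDFS.label, Literal(label)))]
    print(label, "->", hits[0])
```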
Advances in Data Management in Remote Sensing and Climate Modeling
NASA Astrophysics Data System (ADS)
Brown, P. G.
2014-12-01
Recent commercial interest in "Big Data" information systems has yielded little more than a sense of deja vu among scientists whose work has always required getting their arms around extremely large databases and writing programs to explore and analyze them. On the flip side, some commercial DBMS startups are building "Big Data" platforms using techniques taken from earth science, astronomy, high energy physics, and high performance computing. In this talk, we will introduce one such platform: Paradigm4's SciDB, the first DBMS designed from the ground up to combine the kinds of quality-of-service guarantees made by SQL DBMS platforms (high-level data model, query languages, extensibility, transactions) with the kinds of functionality familiar to scientific users (arrays as structural building blocks, integrated linear algebra, and client language interfaces that minimize the learning curve). We will review how SciDB is used to manage and analyze earth science data by several teams of scientific users.
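To make the "arrays as structural building blocks" point concrete, here is a small numpy sketch of the array data model. This is plain numpy, not SciDB or its query languages; the grid dimensions and data are invented.

```python
import numpy as np

# temperature[lat, lon, time] on a coarse synthetic grid
rng = np.random.default_rng(0)
temperature = rng.normal(15.0, 10.0, size=(18, 36, 30))

# dimension-based selection, the array analogue of a WHERE clause:
window = temperature[4:10, 12:20, 0:7]   # lat band, lon band, first week

# aggregate along the time dimension, as an array DBMS would:
mean_map = window.mean(axis=2)
print(mean_map.shape)                    # (6, 8): one mean per grid cell
```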
Waller, P; Cassell, J A; Saunders, M H; Stevens, R
2017-03-01
In order to promote understanding of UK governance and assurance relating to electronic health records research, we present and discuss the role of the Independent Scientific Advisory Committee (ISAC) for MHRA database research in evaluating protocols proposing the use of the Clinical Practice Research Datalink. We describe the development of the Committee's activities between 2006 and 2015, alongside growth in data linkage and wider national electronic health records programmes, including the application and assessment processes, and our approach to undertaking this work. Our model can provide independence, challenge and support to data providers such as the Clinical Practice Research Datalink database which has been used for well over 1,000 medical research projects. ISAC's role in scientific oversight ensures feasible and scientifically acceptable plans are in place, while having both lay and professional membership addresses governance issues in order to protect the integrity of the database and ensure that public confidence is maintained.
Li, Hai-yan; Li, Yuan-hai; Yang, Yang; Liu, Fang-zhou; Wang, Jing; Tian, Ye; Yang, Ce; Liu, Yang; Li, Meng; Sun, Li-ying
2015-12-01
The aim of this study was to identify the present status of scientific and technological personnel in the field of traditional Chinese medicine (TCM) resource science. Based on data from Chinese scientific research papers, an investigation was conducted regarding the number of personnel, their distribution, their paper output, their scientific research teams, and high-yield and high-cited authors. The study covers seven subfields (traditional Chinese medicine identification, quality standards, Chinese medicine cultivation, harvest processing, market development, resource protection, and resource management) as well as 82 widely used Chinese medicine species, such as Ginseng and Radix Astragali. One hundred and fifteen domain authority experts were selected based on the data on high-yield and high-cited authors. The database system platform "Skilled Scientific and Technological Personnel in the Field of Traditional Chinese Medicine Resource Science-Chinese Papers" was established. This platform returns the personnel, their paper output, and their core research teams for a given study field, year, and Chinese medicine species. The investigation provides basic data on scientific and technological personnel in the field of TCM resource science for administrative agencies, as well as evidence for the selection of scientific and technological personnel and the construction of scientific research teams.
Monitoring of IaaS and scientific applications on the Cloud using the Elasticsearch ecosystem
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.
2015-05-01
The private Cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at the LHC, an interactive analysis facility for the same experiment, and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Besides keeping track of usage, the automation of dynamic resource allocation to tenants requires detailed monitoring and accounting of resource usage. As a first step in this direction, we set up a monitoring system to inspect the site activities both in terms of IaaS and of the applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For IaaS metering, we developed sensors for the OpenNebula API; the information gathered through the API is sent to the MySQL database through a purpose-built RESTful web service, which is also used for other accounting purposes. At the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now considering retiring the intermediate SQL layer and evaluating a NoSQL option as a single central database for all the monitoring information. We set up Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools.
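As a hedged sketch of the indexing step described above, the snippet below pushes one accounting record into Elasticsearch with the official Python client. The endpoint, index name, and record fields are assumptions for illustration only; the Torino site fed data through a custom Logstash plugin rather than a client script like this.

```python
from datetime import datetime, timezone
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")   # assumed local test node

record = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "tenant": "alice-tier2",                  # hypothetical tenant name
    "vcpus": 8,
    "ram_mb": 16384,
    "source": "opennebula-api",
}
# one index per information source, mirroring the per-case indexing above
# (client 8.x syntax; older 7.x clients used body= instead of document=)
es.index(index="iaas-accounting", document=record)
```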
Abstracting data warehousing issues in scientific research.
Tews, Cody; Bracio, Boris R
2002-01-01
This paper presents the design and implementation of the Idaho Biomedical Data Management System (IBDMS). The system preprocesses biomedical data from the IMPROVE (Improving Control of Patient Status in Critical Care) library via an Open Database Connectivity (ODBC) connection. The ODBC connection allows local and remote simulations to access filtered, joined, and sorted data using the Structured Query Language (SQL). The tool provides an overview of available data, in addition to user-defined data subsets for verification of models of the human respiratory system.
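The access pattern the paper describes, SQL over an ODBC connection supplying filtered, joined, and sorted subsets to simulations, might look like the following pyodbc sketch. The DSN, table, and column names are invented, since the IBDMS schema is not given in the abstract.

```python
import pyodbc

conn = pyodbc.connect("DSN=ibdms;UID=user;PWD=secret")  # hypothetical DSN
cur = conn.cursor()

# filtered, joined, and sorted subset for a respiratory-model simulation
cur.execute("""
    SELECT s.subject_id, m.t_seconds, m.airway_pressure
    FROM subjects s
    JOIN measurements m ON m.subject_id = s.subject_id
    WHERE s.study = ? AND m.t_seconds BETWEEN ? AND ?
    ORDER BY m.t_seconds
""", ("IMPROVE", 0, 600))

for row in cur.fetchmany(5):
    print(row)
conn.close()
```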
Perspectives in astrophysical databases
NASA Astrophysics Data System (ADS)
Frailis, Marco; de Angelis, Alessandro; Roberto, Vito
2004-07-01
Astrophysics has become a domain extremely rich in scientific data, and data mining tools are needed for information extraction from such large data sets. This calls for an approach to data management that emphasizes the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods, and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.
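As one concrete example of the multidimensional access methods mentioned above, a k-d tree over sky coordinates answers a cone-like range query without scanning the whole catalog. This scipy sketch uses synthetic positions and only illustrates the idea.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
coords = rng.uniform([0, -90], [360, 90], size=(100_000, 2))  # ra, dec

tree = cKDTree(coords)             # build the multidimensional index once
# all sources within ~1 degree of a target position, without a full scan:
idx = tree.query_ball_point([180.0, 0.0], r=1.0)
print(len(idx), "sources in the search radius")
```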
MAIN CONTROVERSIES IN THE NONOPERATIVE MANAGEMENT OF BLUNT SPLENIC INJURIES.
Carlotto, Jorge Roberto Marcante; Lopes-Filho, Gaspar de Jesus; Colleoni-Neto, Ramiro
2016-03-01
The nonoperative management of traumatic spleen injuries is the modality of choice in patients with blunt abdominal trauma and hemodynamic stability. However, there are still questions about the treatment indication in some groups of patients, as well as about its follow-up. The aim was to update knowledge about splenic injury. A review of the literature on the nonoperative management of blunt injuries of the spleen was performed in the Cochrane Library, Medline, and SciELO databases. Articles in English and Portuguese published between 1955 and 2014 were evaluated, using the headings "splenic injury, nonoperative management and blunt abdominal trauma". Thirty-five articles were selected, most of them of recommendation grade B or C. Traumatic spleen injuries are frequent, and their nonoperative management is a worldwide trend. The available literature does not address all aspects of treatment. The authors developed a systematization of care based on the best available scientific evidence to better treat this condition.
NASA Astrophysics Data System (ADS)
Dornback, M.; Hourigan, T.; Etnoyer, P.; McGuinn, R.; Cross, S. L.
2014-12-01
Research on deep-sea corals has expanded rapidly over the last two decades, as scientists began to realize their value as long-lived structural components of high-biodiversity habitats and archives of environmental information. The NOAA Deep Sea Coral Research and Technology Program's National Database for Deep-Sea Corals and Sponges is a comprehensive resource for georeferenced data on these organisms in U.S. waters. The National Database currently includes more than 220,000 deep-sea coral records representing approximately 880 unique species. Database records from museum archives, commercial and scientific bycatch, and journal publications provide baseline information with relatively coarse spatial resolution dating back as far as 1842. These data are complemented by modern, in-situ submersible observations with high spatial resolution from surveys conducted by NOAA and NOAA partners. Management of high volumes of modern high-resolution observational data can be challenging. NOAA is working with our data partners to incorporate this occurrence data into the National Database, along with images and associated information related to geoposition, time, biology, taxonomy, environment, provenance, and accuracy. NOAA is also working to link associated datasets collected by our program's research, to properly archive them at the NOAA National Data Centers, to build robust metadata records, and to establish a standard protocol to simplify the process. Access to the National Database is provided through an online mapping portal. The map displays point-based records from the database, which can be refined by taxon, region, time, and depth. The queries and extent used to view the map can also be used to download subsets of the database. The database, map, and website are already in use by NOAA, regional fishery management councils, and regional ocean planning bodies, but we envision them as a model that can expand to accommodate data on a global scale.
Liu, Qin; Tian, Li-Guang; Xiao, Shu-Hua; Qi, Zhen; Steinmann, Peter; Mak, Tippi K; Utzinger, Jürg; Zhou, Xiao-Nong
2008-01-01
The economy of China continues to boom, and so have its biomedical research and related publishing activities. Several so-called neglected tropical diseases that are most common in the developing world are still rampant or even emerging in some parts of China. The purpose of this article is to document the significant research potential of the Chinese biomedical bibliographic databases. The research contributions from China in the epidemiology and control of schistosomiasis provide an excellent illustration. We searched two widely used databases, namely China National Knowledge Infrastructure (CNKI) and VIP Information (VIP). Employing the keyword "Schistosoma" and covering the period 1990–2006, we obtained 10,244 hits in the CNKI database and 5,975 in VIP. We examined the 10 Chinese biomedical journals that published the highest number of original research articles on schistosomiasis for issues including language and open access. Although most of the journals are published in Chinese, English abstracts are usually available. Open access to full articles was available in China Tropical Medicine in 2005/2006 and has been granted by the Chinese Journal of Parasitology and Parasitic Diseases since 2003; none of the other journals examined offered open access. We reviewed (i) the discovery and development of antischistosomal drugs, (ii) the progress made with molluscicides, and (iii) environmental management for schistosomiasis control in China over the past 20 years. In conclusion, significant research is published in the Chinese literature that is relevant for local control measures and global scientific knowledge. Open access should be encouraged and language barriers removed so the wealth of Chinese research can be more fully appreciated by the scientific community. PMID:18826598
Basner, Jodi E; Theisz, Katrina I; Jensen, Unni S; Jones, C David; Ponomarev, Ilya; Sulima, Pawel; Jo, Karen; Eljanne, Mariam; Espey, Michael G; Franca-Koh, Jonathan; Hanlon, Sean E; Kuhn, Nastaran Z; Nagahara, Larry A; Schnell, Joshua D; Moore, Nicole M
2013-12-01
Development of effective quantitative indicators and methodologies to assess the outcomes of cross-disciplinary collaborative initiatives has the potential to improve scientific program management and scientific output. This article highlights an example of a prospective evaluation that has been developed to monitor and improve the progress of the National Cancer Institute Physical Sciences-Oncology Centers (PS-OC) program. Study data, including collaboration information, were captured through progress reports and compiled using the web-based analytic database Interdisciplinary Team Reporting, Analysis, and Query Resource. Analysis of collaborations was further supported by data from the Thomson Reuters Web of Science database, the MEDLINE database, and a web-based survey. Integration of novel and standard data sources was augmented by the development of automated methods to mine investigators' pre-award publications, assign investigator disciplines, and distinguish cross-disciplinary publication content. The results highlight increases in cross-disciplinary authorship collaborations from pre- to post-award years among the primary investigators and confirm that a majority of cross-disciplinary collaborations have resulted in publications with cross-disciplinary content that rank in the top third of their field. With these evaluation data, PS-OC program officials have provided ongoing feedback to participating investigators to improve center productivity and thereby facilitate a more successful initiative. Future analysis will continue to expand these methods and metrics to adapt to new advances in research evaluation and changes in the program.
A service-based framework for pharmacogenomics data integration
NASA Astrophysics Data System (ADS)
Wang, Kun; Bai, Xiaoying; Li, Jing; Ding, Cong
2010-08-01
Data are central to scientific research and practice. Advances in experimental methods and information retrieval technologies have led to explosive growth of scientific data and databases. However, due to heterogeneity in data formats, structures, and semantics, it is hard to integrate the diversified data that grow explosively and to analyse them comprehensively. As more and more public databases become accessible through standard protocols like programmable interfaces and Web portals, Web-based data integration has become a major trend for managing and synthesising data stored in distributed locations. Mashup, a Web 2.0 technique, presents a new way to compose content and software from multiple resources. The paper proposes a layered framework for integrating pharmacogenomics data in a service-oriented approach using mashup technology. The framework separates the integration concerns from three perspectives: data, process, and Web-based user interface. Each layer encapsulates the heterogeneity issues of one aspect. To facilitate the mapping and convergence of data, an ontology mechanism is introduced to provide consistent conceptual models across different databases and experiment platforms. To support user-interactive and iterative service orchestration, a context model is defined to capture information about users, tasks, and services, which can be used for service selection and recommendation during a dynamic service composition process. A prototype system is implemented, and case studies are presented to illustrate the promising capabilities of the proposed approach.
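A minimal sketch of the ontology-mediated data layer described above: records from two hypothetical sources are renamed onto one shared conceptual schema before being merged. The source names, field names, and values are all invented for illustration.

```python
CONCEPT_MAP = {                 # per-source field -> shared concept
    "dbA": {"gene_symbol": "gene", "drug_name": "drug", "pmid": "evidence"},
    "dbB": {"hgnc": "gene", "compound": "drug", "reference": "evidence"},
}

def to_common(source, record):
    """Map one source record onto the shared conceptual schema."""
    return {CONCEPT_MAP[source][k]: v for k, v in record.items()
            if k in CONCEPT_MAP[source]}

a = {"gene_symbol": "CYP2D6", "drug_name": "codeine", "pmid": "12345"}
b = {"hgnc": "CYP2D6", "compound": "tamoxifen", "reference": "67890"}

merged = [to_common("dbA", a), to_common("dbB", b)]
print(merged)   # both records now share the keys gene/drug/evidence
```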
NASA scientific and technical information for the 1990s
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.
1990-01-01
Projections for NASA scientific and technical information (STI) in the 1990s are outlined. NASA STI for the 1990s will maintain a quality bibliographic and full-text database, emphasizing electronic input and products supplemented by networked access to a wide variety of sources, particularly numeric databases.
The crustal dynamics intelligent user interface anthology
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Campbell, William J.; Roelofs, Larry H.; Wattawa, Scott L.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose of such a service is to support the large number of potential scientific and engineering users who need space- and land-related research and technical data but have little or no experience with query languages or understanding of the information content or architecture of the databases of interest. This document presents the design concepts, development approach, and performance evaluation of a prototype IUI system for the Crustal Dynamics Project Database, which was developed using a microcomputer-based expert system tool (M.1), the natural language query processor THEMIS, and the graphics software system GSS. The IUI design is based on a multiple-view representation of a database from both the user and database perspectives, with intelligent processes to translate between the views.
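A toy illustration of the IUI idea: mapping a constrained English question onto a structured query through a small vocabulary. The real system used the M.1 expert-system shell and the THEMIS natural language processor; the table and column names below are invented.

```python
FIELD_WORDS = {"baseline": "baseline_km", "station": "station_name"}
TABLE = "crustal_dynamics_obs"   # hypothetical table

def to_sql(question):
    """Translate a constrained English question into SQL via keywords."""
    q = question.lower()
    cols = [c for w, c in FIELD_WORDS.items() if w in q]
    where = ""
    if "since" in q:
        year = q.split("since")[1].strip(" ?")
        where = f" WHERE obs_year >= {int(year)}"
    return f"SELECT {', '.join(cols) or '*'} FROM {TABLE}{where}"

print(to_sql("Which station baselines were measured since 1984?"))
# SELECT baseline_km, station_name FROM crustal_dynamics_obs
#   WHERE obs_year >= 1984
```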
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson Khosah
2007-07-31
Advanced Technology Systems, Inc. (ATS) was contracted by the U.S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project was conducted in two phases. Phase One included the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two involved the development of a platform for on-line data analysis. Phase Two included the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now technically completed.
Judicious use of custom development in an open source component architecture
NASA Astrophysics Data System (ADS)
Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.
2014-12-01
Modern software engineering is not as much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.
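In this spirit, much of ScienceBase is exercised through its web API rather than custom clients. The sketch below queries a catalog-style REST endpoint with the requests library; the URL and parameters are assumptions from memory rather than documented guarantees, so verify them against the current ScienceBase API before relying on them.

```python
import requests

resp = requests.get(
    "https://www.sciencebase.gov/catalog/items",   # assumed endpoint
    params={"q": "water quality", "format": "json", "max": 5},
    timeout=30,
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item.get("id"), "-", item.get("title"))
```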
A geo-spatial data management system for potentially active volcanoes—GEOWARN project
NASA Astrophysics Data System (ADS)
Gogu, Radu C.; Dietrich, Volker J.; Jenny, Bernhard; Schwandner, Florian M.; Hurni, Lorenz
2006-02-01
Integrated studies of active volcanic systems for the purpose of long-term monitoring and forecast and short-term eruption prediction require large numbers of data-sets from various disciplines. A modern database concept has been developed for managing and analyzing multi-disciplinary volcanological data-sets. The GEOWARN project (choosing the "Kos-Yali-Nisyros-Tilos volcanic field, Greece" and the "Campi Flegrei, Italy" as test sites) is oriented toward potentially active volcanoes situated in regions of high geodynamic unrest. This article describes the volcanological database of the spatial and temporal data acquired within the GEOWARN project. As a first step, a spatial database embedded in a Geographic Information System (GIS) environment was created. Digital data of different spatial resolution, and time-series data collected at different intervals or periods, were unified in a common, four-dimensional representation of space and time. The database scheme comprises various information layers containing geographic data (e.g. seafloor and land digital elevation model, satellite imagery, anthropogenic structures, land-use), geophysical data (e.g. from active and passive seismicity, gravity, tomography, SAR interferometry, thermal imagery, differential GPS), geological data (e.g. lithology, structural geology, oceanography), and geochemical data (e.g. from hydrothermal fluid chemistry and diffuse degassing features). As a second step based on the presented database, spatial data analysis has been performed using custom-programmed interfaces that execute query scripts resulting in a graphical visualization of data. These query tools were designed and compiled following scenarios of known "behavior" patterns of dormant volcanoes and first candidate signs of potential unrest. The spatial database and query approach is intended to facilitate scientific research on volcanic processes and phenomena, and volcanic surveillance.
NASA Astrophysics Data System (ADS)
Thibault, K. M.
2013-12-01
As the construction of NEON and its transition to operations progresses, more and more data will become available to the scientific community, both from NEON directly and from the concomitant growth of existing data repositories. Many of these datasets include ecological observations of a diversity of taxa in both aquatic and terrestrial environments. Although observational data have been collected and used throughout the history of organismal biology, the field has not yet fully developed a culture of data management, documentation, standardization, sharing, and discoverability to facilitate the integration and synthesis of datasets. Moreover, the tools required to accomplish these goals, namely database design, implementation, and management, and the automation and parallelization of analytical tasks through computational techniques, have not historically been included in biology curricula at either the undergraduate or graduate level. To ensure the success of data-generating projects like NEON in advancing organismal ecology and to increase the transparency and reproducibility of scientific analyses, we need to accelerate the cultural shift to open science practices, develop and adopt data standards such as the Darwin Core standard for taxonomic data, and increase training in computational approaches for biologists. Here I highlight several initiatives that are intended to increase access to and discoverability of publicly available datasets and to equip biologists and other scientists with the skills needed to manage, integrate, and analyze data from multiple large-scale projects. The EcoData Retriever (ecodataretriever.org) is a tool that downloads publicly available datasets, re-formats the data into an efficient relational database structure, and then automatically imports the data tables onto a user's local drive into the database tool of the user's choice; this pattern is sketched below. The automation of these tasks results in nearly instantaneous execution of tasks that previously required hours to days of each data user's time, with decreased error rates and increased usability of the data. The Ecological Data wiki (ecologicaldata.org) provides a forum for users of ecological datasets to share relevant metadata and tips and tricks for using the data, in order to flatten learning curves and minimize redundancy of effort among users of the same datasets. Finally, Software Carpentry (software-carpentry.org) has developed curricula for scientific computing and provides both online training and low-cost short courses that can be tailored to the specific needs of the students. Demand for these courses has been increasing exponentially in recent years, and they represent a significant educational resource for biologists. I will conclude by linking these initiatives to the challenges facing ecologists related to the effective and efficient exploitation of NEON's diverse data streams.
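The EcoData Retriever portion of this workflow automates a pattern that is easy to show by hand: fetch a published table and load it into a relational database for immediate querying. The sketch below does this with the Python standard library only; the URL and column layout are placeholders, not a real dataset, and this is not the Retriever's own code.

```python
import csv
import io
import sqlite3
import urllib.request

url = "https://example.org/surveys.csv"   # hypothetical published dataset
rows = list(csv.DictReader(io.TextIOWrapper(
    urllib.request.urlopen(url), encoding="utf-8")))

# load into a local relational database for immediate querying
con = sqlite3.connect("ecology.db")
con.execute("CREATE TABLE IF NOT EXISTS surveys (species TEXT, year INT)")
con.executemany("INSERT INTO surveys VALUES (:species, :year)", rows)
con.commit()

for species, n in con.execute(
        "SELECT species, COUNT(*) FROM surveys GROUP BY species"):
    print(species, n)
```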
A data model and database for high-resolution pathology analytical image informatics.
Wang, Fusheng; Kong, Jun; Cooper, Lee; Pan, Tony; Kurc, Tahsin; Chen, Wenjin; Sharma, Ashish; Niedermayr, Cristobal; Oh, Tae W; Brat, Daniel; Farris, Alton B; Foran, David J; Saltz, Joel
2011-01-01
The systematic analysis of imaged pathology specimens often results in a vast amount of morphological information at both the cellular and sub-cellular scales. While microscopy scanners and computerized analysis are capable of capturing and analyzing data rapidly, microscopy image data remain underutilized in research and clinical settings. One major obstacle which tends to reduce wider adoption of these new technologies throughout the clinical and scientific communities is the challenge of managing, querying, and integrating the vast amounts of data resulting from the analysis of large digital pathology datasets. This paper presents a data model which addresses these challenges and demonstrates its implementation in a relational database system. The data model, referred to as Pathology Analytic Imaging Standards (PAIS), and its database implementation are designed to support the data management and query requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines on whole-slide images and tissue microarrays (TMAs). The work comprises: (1) development of a data model capable of efficiently representing and storing virtual slide related image, annotation, markup, and feature information; and (2) development of a database, based on the data model, capable of supporting queries for data retrieval based on analysis and image metadata, queries for comparison of results from different analyses, and spatial queries on segmented regions, features, and classified objects. The work described in this paper is motivated by the challenges associated with characterization of micro-scale features for comparative and correlative analyses involving whole-slide tissue images and TMAs. Technologies for digitizing tissues have advanced significantly in the past decade. Slide scanners are capable of producing high-magnification, high-resolution images from whole slides and TMAs within several minutes. Hence, it is becoming increasingly feasible for basic, clinical, and translational research studies to produce thousands of whole-slide images. Systematic analysis of these large datasets requires efficient data management support for representing and indexing results from hundreds of interrelated analyses generating very large volumes of quantifications, such as shape and texture, and of classifications of the quantified features. We have designed a data model and a database to address the data management requirements of detailed characterization of micro-anatomic morphology through many interrelated analysis pipelines. The data model represents virtual slide related image, annotation, markup, and feature information. The database supports a wide range of metadata and spatial queries on images, annotations, markups, and features. We currently have three databases running on a Dell PowerEdge T410 server with the CentOS 5.5 Linux operating system. The database server is IBM DB2 Enterprise Edition 9.7.2. The set of databases consists of: 1) a TMA database containing image analysis results from 4740 cases of breast cancer, with 641 MB storage size; 2) an algorithm validation database, which stores markups and annotations from two segmentation algorithms and two parameter sets on 18 selected slides, with 66 GB storage size; and 3) an in silico brain tumor study database comprising results from 307 TCGA slides, with 365 GB storage size. The latter two databases also contain human-generated annotations and markups for regions and nuclei.
Modeling and managing pathology image analysis results in a database provides immediate benefits for the value and usability of data in a research study. The database provides powerful query capabilities which are otherwise difficult or cumbersome to support with other approaches such as programming languages. Standardized, semantically annotated data representations and interfaces also make it possible to share image data and analysis results more efficiently.
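A greatly simplified sketch of the kind of spatial query such a database supports: segmented objects stored with bounding boxes and compared across two hypothetical segmentation runs inside a region of interest. The real implementation uses IBM DB2 with far richer geometry support; the schema and values below are invented to show the query shape.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE markup (
    id INTEGER PRIMARY KEY, slide TEXT, algorithm TEXT,
    xmin REAL, ymin REAL, xmax REAL, ymax REAL)""")
con.executemany(
    "INSERT INTO markup VALUES (NULL, ?, ?, ?, ?, ?, ?)",
    [("slide-001", "seg-v1", 10, 10, 40, 35),
     ("slide-001", "seg-v2", 12, 11, 41, 36),
     ("slide-001", "seg-v1", 300, 220, 330, 260)])

# nuclei whose boxes intersect the ROI (0,0)-(100,100), per algorithm:
roi = (0, 0, 100, 100)
q = """SELECT algorithm, COUNT(*) FROM markup
       WHERE xmin < ? AND ymin < ? AND xmax > ? AND ymax > ?
       GROUP BY algorithm"""
print(con.execute(q, (roi[2], roi[3], roi[0], roi[1])).fetchall())
# [('seg-v1', 1), ('seg-v2', 1)]
```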
Gene regulation knowledge commons: community action takes care of DNA binding transcription factors
Tripathi, Sushil; Vercruysse, Steven; Chawla, Konika; Christie, Karen R.; Blake, Judith A.; Huntley, Rachael P.; Orchard, Sandra; Hermjakob, Henning; Thommesen, Liv; Lægreid, Astrid; Kuiper, Martin
2016-01-01
A large gap remains between the amount of knowledge in the scientific literature and the fraction that gets curated into standardized databases, despite many curation initiatives. Yet the availability of comprehensive knowledge in databases is crucial for exploiting existing background knowledge, both for designing follow-up experiments and for interpreting new experimental data. Structured resources also underpin the computational integration and modeling of regulatory pathways, which further aids our understanding of regulatory dynamics. We argue how cooperation between the scientific community and professional curators can increase the capacity for capturing precise knowledge from the literature. We demonstrate this with a project in which we mobilized biological domain experts to curate large numbers of DNA-binding transcription factors, and show that they, although new to the field of curation, can make valuable contributions by harvesting reported knowledge from scientific papers. Such community curation can enhance the scientific epistemic process. Database URL: http://www.tfcheckpoint.org PMID:27270715
Analysis and interpretation of diffuse x-ray emission using data from the Einstein satellite
NASA Technical Reports Server (NTRS)
Helfand, David J.
1991-01-01
An ambitious program to create a powerful and accessible archive of the HEAO-2 Imaging Proportional Counter (IPC) database was outlined, and the scientific utility of that database for studies of diffuse x-ray emission was explored. Technical and scientific accomplishments are reviewed. Three papers were presented with major new scientific findings relevant to the global structure of the interstellar medium and the origin of the cosmic x-ray background. An all-sky map of diffuse x-ray emission was constructed.
Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W.; Levias, Sheldon
2017-01-01
This study examines how two kinds of authentic research experiences related to smoking behavior—genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)—influence students' perceptions and understanding of scientific research and related science concepts. The study used pre- and post-surveys and a focus group protocol to compare students who conducted the research experiences in one of two sequences: genotyping before database, and database before genotyping. Students rated the genotyping experiment as more like real science than the database experiment, in spite of the fact that they associated more scientific tasks with the database experience than with genotyping. Independent of the order of completing the labs, students showed gains in their understanding of science concepts after completing the two experiences. There was little change in students' attitudes toward science from pre to post, as measured by the Scientific Attitude Inventory II. However, on the basis of their responses during focus groups, students developed more sophisticated views about the practices and nature of science after they had completed both research experiences, independent of the order in which they experienced them. PMID:28572181
NASA Astrophysics Data System (ADS)
Staudigel, H.; Helly, M.; Helly, J.; Koppers, A.; Massel-Symons, C.; Miller, S.
2004-12-01
The ERESE (Enduring Resources in Earth Science Education) project involves a close collaboration between teachers, librarians, educators, data archive managers, and scientists in Earth sciences and information technology to create a digital library environment for Earth science education. We report here on an ongoing NSF-NSDL project involving teachers' professional development in the pedagogy of plate tectonics in middle and high schools. This work included efforts in scientific database development in terms of contents and search tools, the development of an inquiry-based learning approach, a two-week professional development workshop attended by 15 teachers from across the nation, classroom implementation of lesson plans developed by the teachers at the workshop, and an evaluation/validation process for the success of their pedagogic approaches. This ERESE project offers a novel path for both science teaching and professional outreach for scientists, and includes four key components: (1) a true, long-term research partnership between educators and scientists, guiding each other with respect to the authenticity of the science taught and the educational soundness of a scientist's elaborations on science concepts; (2) expansion of existing scientific databases through the use of metadata that tie scientific materials to a particular expert level and teaching goal; (3) the design of interfaces that make data accessible to the educational community; and (4) the use of an inquiry-based teaching approach that integrates the scientist-educator collaboration and the database developments. Our pedagogic approach begins with the student developing a central hypothesis in response to an initial general orientation and the teacher's presentation of a well-chosen, provocative central phenomenon. The student then develops a research plan devoted to addressing this hypothesis using the materials provided by a scientific database, allowing students to prove or disprove their hypotheses and to explore the limits of the current understanding of a particular science question. Our first experience with this ERESE project involved a steep learning curve, but the initial results are very promising, providing true professional development for educators as well as for the scientists, whereby the former learn about new ways of teaching science and the latter learn to communicate with teachers.
Henkel, Heather S.
2007-01-01
In March 2006, the U.S. Geological Survey (USGS) held the first Scientific Information Management (SIM) Workshop in Reston, Virginia. The workshop brought together more than 150 SIM professionals from across the organization to discuss the range and importance of SIM problems, identify common challenges and solutions, and investigate the use and value of "communities of practice" (CoP) as mechanisms to address these issues. The 3-day workshop began with presentations of SIM challenges faced by the Long Term Ecological Research (LTER) network and two USGS programs from geology and hydrology. These presentations were followed by a keynote address and discussion of CoP by Dr. Etienne Wenger, a pioneer and leading expert in CoP, who defined them as "groups of people who share a passion for something that they know how to do and who interact regularly to learn how to do it better." Wenger addressed the roles and characteristics of CoP, how they complement formal organizational structures, and how they can be fostered. Following this motivating overview, five panelists (including Dr. Wenger) with CoP experience in different institutional settings provided their perspectives and lessons learned. The first day closed with an open discussion on the potential intersection of SIM at the USGS with SIM challenges and the potential for CoP. The second session began the process of developing a common vocabulary for both scientific data management and CoP, and a list of eight guiding principles for information management was proposed for discussion and constructive criticism. Following this discussion, 20 live demonstrations and posters of SIM tools developed by various USGS programs and projects were presented. Two community-building sessions were held to explore the next steps in 12 specific areas: Archiving of Scientific Data and Information; Database Networks; Digital Libraries; Emerging Workforce; Field Data for Small Research Projects; Knowledge Capture; Knowledge Organization Systems and Controlled Vocabularies; Large Time Series Data Sets; Metadata; Portals and Frameworks; Preservation of Physical Collections; and Scientific Data from Monitoring Programs. In about two-thirds of these areas, initial steps toward forming CoP are now underway. The final afternoon included a panel in which information professionals, managers, program coordinators, and associate directors shared their perspectives on the workshop, on ways in which the USGS could better manage its scientific information, and on the use of CoP as informal mechanisms to complement formal organizational structures. The final session focused on developing the next steps, an action plan, and a communication strategy to ensure continued development.
Nuclear Energy Infrastructure Database Description and User’s Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heidrich, Brenden
In 2014, the Deputy Assistant Secretary for Science and Technology Innovation initiated the Nuclear Energy (NE)–Infrastructure Management Project by tasking the Nuclear Science User Facilities, formerly the Advanced Test Reactor National Scientific User Facility, to create a searchable and interactive database of all pertinent NE-supported and -related infrastructure. This database, known as the Nuclear Energy Infrastructure Database (NEID), is used for analyses to establish needs, redundancies, efficiencies, distributions, etc., to best understand the utility of NE's infrastructure and inform the content of infrastructure calls. The Nuclear Science User Facilities developed the database by utilizing data and policy direction from a variety of reports from the U.S. Department of Energy, the National Research Council, the International Atomic Energy Agency, and various other federal and civilian resources. The NEID currently contains data on 802 research and development instruments housed in 377 facilities at 84 institutions in the United States and abroad. The effort to maintain and expand the database is ongoing. Detailed information on many facilities must be gathered from associated institutions and added to complete the database. The data must be validated and kept current to capture facility and instrumentation status as well as to cover new acquisitions and retirements. This document provides a short tutorial on the navigation of the NEID web portal at NSUF-Infrastructure.INL.gov.
Considerations to improve functional annotations in biological databases.
Benítez-Páez, Alfonso
2009-12-01
Despite the great effort to design efficient systems for the electronic indexing of information concerning genes, proteins, structures, and interactions published daily in scientific journals, problems are still observed in specific tasks such as functional annotation. The annotation of function is a critical issue for bioinformatic routines, for instance in functional genomics and the prediction of unknown protein function, which are highly dependent on the quality of existing annotations. Some information management systems have evolved to efficiently incorporate information from large-scale projects, but annotation of single records from the literature is often difficult and slow. In this short report, functional characterizations of a representative sample of the entire set of uncharacterized proteins from Escherichia coli K12 were compiled from Swiss-Prot, PubMed, and EcoCyc, demonstrating a functional annotation deficit in biological databases. Some issues are postulated as causes of the lack of annotation, and different solutions are evaluated and proposed to avoid them. The hope is that, as a consequence of these observations, there will be new impetus to improve the speed and quality of functional annotation and ultimately provide updated, reliable information to the scientific community.
NASA Astrophysics Data System (ADS)
Shen, Shaoling; Li, Renjie; Shen, Dongdong; Tong, Chunyan; Fu, Xueqing
2007-06-01
"Gugong Date Garden" lies in Juguan Village, Qijiawu County, Huanghua City, China. It is the largest forest of winter dates in the world, the longest in history, largest in area, and best in quality, and it is included in the first group of major national protected units of botanic cultural relics. However, it lacks a uniform management platform and management modes. According to the specific characteristics of botanic cultural relic preservation, the authors set up the "Plant Treasure Management Information System" for "Gugong Date Garden", based on Geographic Information System (GIS), Internet, database, and virtual reality technologies, along with ideas from modern customer management systems. The system is designed for five types of users, namely system administrators, cultural relic supervisors, researchers, farmers, and tourists, with the aim of realizing integrated management of ancient tree protection, scientific research, tourism, and exploration, so as to achieve better management, protection, and utilization.
Learning from the Mars Rover Mission: Scientific Discovery, Learning and Memory
NASA Technical Reports Server (NTRS)
Linde, Charlotte
2005-01-01
Purpose: Knowledge management for space exploration is part of a multi-generational effort. Each mission builds on knowledge from prior missions, and learning is the first step in knowledge production. This paper uses the Mars Exploration Rover (MER) mission as a site to explore this process. Approach: Observational study and analysis of the work of the MER science and engineering team during rover operations, to investigate how learning occurs, how it is recorded, and how these representations might be made available for subsequent missions. Findings: Learning occurred in many areas: planning science strategy; using instruments within the constraints of the martian environment, the Deep Space Network, and the mission requirements; using software tools effectively; and running two teams on Mars time for three months. This learning is preserved in many ways. Primarily it resides in individuals' memories. It is also encoded in stories, procedures, programming sequences, published reports, and lessons-learned databases. Research implications: Shows the earliest stages of knowledge creation in a scientific mission, and demonstrates that knowledge management must begin with an understanding of knowledge creation. Practical implications: Shows that studying learning and knowledge creation suggests proactive ways to capture and use knowledge across multiple missions and generations. Value: This paper provides a unique analysis of the learning process of a scientific space mission, relevant for knowledge management researchers and designers, as well as demonstrating in detail how new learning occurs in a learning organization.
Parallel In Situ Indexing for Data-intensive Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinoh; Abbasi, Hasan; Chacon, Luis
2011-09-09
As computing power increases exponentially, vast amounts of data are created by many scientific research activities. However, the bandwidth for storing data to disks and reading it back has been improving at a much slower pace. These two trends produce an ever-widening data access gap. Our work brings together two distinct technologies to address this data access issue: indexing and in situ processing. From decades of database research literature, we know that indexing is an effective way to address the data access issue, particularly for accessing a relatively small fraction of data records. As data sets increase in size, more and more analysts need selective data access, which makes indexing even more important for improving data access. The challenge is that most implementations of indexing technology are embedded in large database management systems (DBMS), but most scientific datasets are not managed by any DBMS. In this work, we choose to include indexes with the scientific data instead of requiring the data to be loaded into a DBMS. We use compressed bitmap indexes from the FastBit software, which are known to be highly effective for query-intensive workloads common to scientific data analysis. To use the indexes, we need to build them first. The index building procedure needs to access the whole data set and may also require a significant amount of compute time. In this work, we adapt in situ processing technology to generate the indexes, thus removing the need to read data from disks and allowing the indexes to be built in parallel. The in situ data processing system used is ADIOS, a middleware for high-performance I/O. Our experimental results show that the indexes can improve data access time by up to 200 times, depending on the fraction of data selected, and that using the in situ data processing system can effectively reduce the time needed to create the indexes, by up to 10 times with our in situ technique when using identical parallel settings.
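The bitmap-index idea behind FastBit can be sketched in a few lines of Python: one bit-vector per distinct value, with selective queries answered by bitwise AND/OR instead of scanning the records. The sketch below omits the compression FastBit relies on (word-aligned compressed bitmaps), and all data are invented.

```python
from collections import defaultdict

def build_bitmap_index(values):
    """One bitmap per distinct value, packed into a Python int."""
    index = defaultdict(int)
    for row, v in enumerate(values):
        index[v] |= 1 << row
    return index

energy = ["low", "high", "low", "mid", "high", "high"]
region = ["A",   "A",    "B",   "B",   "A",    "B"]

ix_e = build_bitmap_index(energy)
ix_r = build_bitmap_index(region)

# query: energy == 'high' AND region == 'A' -> bitwise AND of two bitmaps
hits = ix_e["high"] & ix_r["A"]
rows = [r for r in range(len(energy)) if hits >> r & 1]
print(rows)   # [1, 4]
```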
NASA Technical Reports Server (NTRS)
Brodsky, Alexander; Segal, Victor E.
1999-01-01
The EOSCUBE constraint database system is designed to be a software productivity tool for high-level specification and efficient generation of EOSDIS and other scientific products. These products are typically derived from large volumes of multidimensional data which are collected via a range of scientific instruments.
Management of the extravasation of anti-neoplastic agents.
Boulanger, J; Ducharme, A; Dufour, A; Fortier, S; Almanric, K
2015-05-01
Extravasation is a potentially severe complication that can occur during the administration of chemotherapy. The scarcity of available evidence makes it difficult to develop an optimal management scheme. The purpose of this guideline is to review the relevant scientific literature on the prevention, management, and treatment of extravasation occurring during the administration of chemotherapy to cancer patients. A scientific literature review was conducted using the PubMed search tool. The period covered was from database inception to April 2014, inclusive. Since the literature on extravasation treatment is often empirical, anecdotal, and controversial, the review also identified clinical practice guidelines and expert consensuses published by relevant international organizations and cancer agencies. Identification of potential risk factors and preventive measures can reduce the risk of extravasation. Recognition and management of symptoms are crucial in patients with this complication. Providing adequate instruction on recognizing symptoms and on preventing and managing extravasation, both to personnel responsible for administering chemotherapy and to patients, is essential. Extravasation can be treated with dry warm or cold compresses and various antidotes such as dimethyl sulfoxide, dexrazoxane, hyaluronidase, or sodium thiosulfate, depending on the agent that has caused the extravasation. Patient monitoring to assess the progression or regression of symptoms, and thus to take the appropriate measures, is necessary. Several strategies must be established to ensure that extravasation is recognized and properly managed. Given the evidence available at this time, the Comité de l'évolution des pratiques en oncologie (CEPO) has made recommendations for clinical practice in Quebec.
Larsen, Peder Olesen; von Ins, Markus
2010-09-01
The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is, publication in peer-reviewed journals, is still increasing, although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time, publication using new channels, for example conference proceedings, open archives, and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases, which means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rates, including computer science and the engineering sciences. The role of conference proceedings, open access archives, and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of this declining coverage and this challenge, it is problematic that SCI has been, and still is, used as the dominant source for science indicators based on publication and citation numbers. The limited data available for the social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI, and the Arts and Humanities Citation Index (AHCI). The declining coverage of the citation databases therefore problematizes the use of this source.
Quantitative evaluation of Iranian radiology papers and its comparison with selected countries.
Ghafoori, Mahyar; Emami, Hasan; Sedaghat, Abdolrasoul; Ghiasi, Mohammad; Shakiba, Madjid; Alavi, Manijeh
2014-01-01
Recent technological developments in medicine, including modern radiology, have increased the impact of scientific research on social life. Scientific outputs such as articles and patents are products that reflect scientists' efforts to achieve these advances. In the current study, we evaluate the current situation of Iranian scientists in the field of radiology and compare it with selected countries in terms of scientific papers. For this purpose, we used scientometric tools to quantitatively assess scientific papers in the field of radiology. Radiology papers were evaluated in the context of a medical field audit using a retrospective model. We used the related biomedical databases to extract articles related to radiology. In the next step, the status of the country's radiology scientific products was determined with respect to the regional countries under study. Results of the current study showed a ratio of 0.19% for Iranian papers in the PubMed database published in 2009. In addition, in 2009, Iranian papers constituted 0.29% of the Scopus scientific database. The proportion of Iranian papers in the region under study was 7.6%. To diminish the gap between Iranian scientific radiology papers and other competitor countries in the region and to achieve the goals of the 2025 vision document, a multifold effort by the radiology community is necessary.
High-Performance Secure Database Access Technologies for HEP Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Vranicar; John Weicher
2006-04-17
The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc., for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture in which secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems' security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine, resulting in both improved performance and improved security. Phase I focused on the development of a proof-of-concept prototype using Argonne National Laboratory's (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project's current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
United States Army Medical Materiel Development Activity: 1997 Annual Report.
1997-01-01
business planning and execution information management system (Project Management Division Database (PMDD) and Product Management Database System (PMDS)...MANAGEMENT • Project Management Division Database (PMDD), Product Management Database System (PMDS), and Special Users Database System: The existing...System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was
EMERALD: Coping with the Explosion of Seismic Data
NASA Astrophysics Data System (ADS)
West, J. D.; Fouch, M. J.; Arrowsmith, R.
2009-12-01
The geosciences are currently generating an unparalleled quantity of new public broadband seismic data with the establishment of large-scale seismic arrays such as the EarthScope USArray, which are enabling new and transformative scientific discoveries of the structure and dynamics of the Earth’s interior. Much of this explosion of data is a direct result of the formation of the IRIS consortium, which has enabled an unparalleled level of open exchange of seismic instrumentation, data, and methods. The production of these massive volumes of data has generated new and serious data management challenges for the seismological community. A significant challenge is the maintenance and updating of seismic metadata, which includes information such as station location, sensor orientation, instrument response, and clock timing data. This key information changes at unknown intervals, and the changes are not generally communicated to data users who have already downloaded and processed data. Another basic challenge is the ability to handle massive seismic datasets when waveform file volumes exceed the fundamental limitations of a computer’s operating system. A third, long-standing challenge is the difficulty of exchanging seismic processing codes between researchers; each scientist typically develops his or her own unique directory structure and file naming convention, requiring that codes developed by another researcher be rewritten before they can be used. To address these challenges, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The overarching goal of the EMERALD project is to enable more efficient and effective use of seismic datasets ranging from just a few hundred to millions of waveforms with a complete database-driven system, leading to higher quality seismic datasets for scientific analysis and enabling faster, more efficient scientific research. We will present a preliminary (beta) version of EMERALD, an integrated, extensible, standalone database server system based on the open-source PostgreSQL database engine. The system is designed for fast and easy processing of seismic datasets, and provides the necessary tools to manage very large datasets and all associated metadata. EMERALD provides methods for efficient preprocessing of seismic records; large record sets can be easily and quickly searched, reviewed, revised, reprocessed, and exported. EMERALD can retrieve and store station metadata and alert the user to metadata changes. The system provides many methods for visualizing data, analyzing dataset statistics, and tracking the processing history of individual datasets. EMERALD allows development and sharing of visualization and processing methods using any of 12 programming languages. EMERALD is designed to integrate existing software tools; the system provides wrapper functionality for existing widely-used programs such as GMT, SOD, and TauP. Users can interact with EMERALD via a web browser interface, or they can directly access their data from a variety of database-enabled external tools. Data can be imported and exported from the system in a variety of file formats, or can be directly requested and downloaded from the IRIS DMC from within EMERALD.
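As a rough illustration of what a database-driven approach buys over loose waveform files, the sketch below runs a metadata-constrained waveform query against a PostgreSQL backend. The table and column names (waveforms, stations, metadata_updated, and so on) are hypothetical stand-ins, not EMERALD's actual data model.

```python
# Hypothetical schema: `waveforms` and `stations` are illustrative stand-ins.
import psycopg2

conn = psycopg2.connect(dbname="emerald", user="seismo", host="localhost")
cur = conn.cursor()

# Vertical-component records for deep events, skipping any waveform whose
# station metadata changed after the data were retrieved (a staleness check
# of the kind the abstract says EMERALD can alert users to).
cur.execute("""
    SELECT w.waveform_id, s.station_code, w.starttime
    FROM waveforms w
    JOIN stations s ON s.station_id = w.station_id
    WHERE w.channel LIKE '%Z'
      AND w.event_depth_km > 100
      AND s.metadata_updated <= w.retrieved_at
""")
for waveform_id, station_code, starttime in cur.fetchall():
    print(waveform_id, station_code, starttime)
conn.close()
```

Such a query touches only indexed metadata, which is what lets record sets of millions of waveforms be "searched, reviewed, revised" without scanning files on disk.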
High-performance metadata indexing and search in petascale data storage systems
NASA Astrophysics Data System (ADS)
Leung, A. W.; Shao, M.; Bisson, T.; Pasupathy, S.; Miller, E. L.
2008-07-01
Large-scale storage systems used for scientific applications can store petabytes of data and billions of files, making the organization and management of data in these systems a difficult, time-consuming task. The ability to search file metadata in a storage system can address this problem by allowing scientists to quickly navigate experiment data and code while allowing storage administrators to gather the information they need to properly manage the system. In this paper, we present Spyglass, a file metadata search system that achieves scalability by exploiting storage system properties, providing the scalability that existing file metadata search tools lack. In doing so, Spyglass can achieve search performance up to several thousand times faster than existing database solutions. We show that Spyglass enables important functionality that can aid data management for scientists and storage administrators.
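The central idea reported here, exploiting namespace locality so that scoped queries touch only a small part of the index, can be sketched in a few lines. The partitioning scheme below is a deliberately simplified stand-in for Spyglass's actual structures.

```python
# Toy sketch (not Spyglass's code): partition a file-metadata index by
# directory subtree so scoped queries consult only the relevant partitions.
from collections import defaultdict

class SubtreeIndex:
    """Per-subtree index: owner -> file paths (one of many possible attributes)."""
    def __init__(self):
        self.by_owner = defaultdict(list)
    def add(self, path, owner):
        self.by_owner[owner].append(path)

class PartitionedIndex:
    """Partition the namespace by its first `depth` path components."""
    def __init__(self, depth=2):
        self.depth = depth
        self.partitions = defaultdict(SubtreeIndex)
    def _key(self, path):
        return "/".join(path.strip("/").split("/")[: self.depth])
    def add(self, path, owner):
        self.partitions[self._key(path)].add(path, owner)
    def search(self, owner, scope="/"):
        key = self._key(scope)
        for part_key, part in self.partitions.items():
            if part_key.startswith(key):   # skip unrelated subtrees entirely
                yield from part.by_owner.get(owner, [])

idx = PartitionedIndex()
idx.add("/proj/seismo/run1/out.dat", owner="alice")
idx.add("/proj/bio/genome.fa", owner="bob")
print(list(idx.search("alice", scope="/proj/seismo")))  # ['/proj/seismo/run1/out.dat']
```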
The Ophidia Stack: Toward Large Scale, Big Data Analytics Experiments for Climate Change
NASA Astrophysics Data System (ADS)
Fiore, S.; Williams, D. N.; D'Anca, A.; Nassisi, P.; Aloisio, G.
2015-12-01
The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in multiple domains (e.g. climate change). It provides a "datacube-oriented" framework responsible for atomically processing and manipulating scientific datasets, by providing a common way to run distributive tasks on large sets of data fragments (chunks). Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes. The project relies on a strong background in high performance database management and On-Line Analytical Processing (OLAP) systems to manage large scientific datasets. The Ophidia analytics platform provides several data operators to manipulate datacubes (about 50), and array-based primitives (more than 100) to perform data analysis on large scientific data arrays. To address interoperability, Ophidia provides multiple server interfaces (e.g. OGC-WPS). From a client standpoint, a Python interface enables the exploitation of the framework in Python-based ecosystems/applications (e.g. IPython) and the straightforward adoption of a strong set of related libraries (e.g. SciPy, NumPy). The talk will highlight a key feature of the Ophidia framework stack: the "Analytics Workflow Management System" (AWfMS). The Ophidia AWfMS coordinates, orchestrates, optimises and monitors the execution of multiple scientific data analytics and visualization tasks, thus supporting "complex analytics experiments". Some real use cases related to the CMIP5 experiment will be discussed. In particular, with regard to the "Climate models intercomparison data analysis" case study proposed in the EU H2020 INDIGO-DataCloud project, workflows related to (i) anomalies, (ii) trend, and (iii) climate change signal analysis will be presented. Such workflows will be distributed across multiple sites - according to the datasets distribution - and will include intercomparison, ensemble, and outlier analysis. The two-level workflow solution envisioned in INDIGO (coarse grain for distributed tasks orchestration, and fine grain, at the level of a single data analytics cluster instance) will be presented and discussed.
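A toy NumPy sketch can make the fragment/primitive vocabulary concrete: a dataset is split into chunks and an array primitive (here a temporal mean) runs independently on each chunk before assembly. This only illustrates the datacube processing pattern; it is not the Ophidia API.

```python
# Illustration of fragment-wise processing, not Ophidia's implementation.
import numpy as np

# A 2-D field (time x grid point), split column-wise into 8 fragments.
field = np.random.rand(360, 10000)
fragments = np.array_split(field, 8, axis=1)

# One array primitive (temporal mean) per fragment: the distributable step.
partial = [frag.mean(axis=0) for frag in fragments]
result = np.concatenate(partial)
assert result.shape == (10000,)
```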
Schwartz, Yannick; Barbot, Alexis; Thyreau, Benjamin; Frouin, Vincent; Varoquaux, Gaël; Siram, Aditya; Marcus, Daniel S; Poline, Jean-Baptiste
2012-01-01
As neuroimaging databases grow in size and complexity, the time researchers spend investigating and managing the data increases to the expense of data analysis. As a result, investigators rely more and more heavily on scripting using high-level languages to automate data management and processing tasks. For this, a structured and programmatic access to the data store is necessary. Web services are a first step toward this goal. They however lack in functionality and ease of use because they provide only low-level interfaces to databases. We introduce here PyXNAT, a Python module that interacts with The Extensible Neuroimaging Archive Toolkit (XNAT) through native Python calls across multiple operating systems. The choice of Python enables PyXNAT to expose the XNAT Web Services and unify their features with a higher level and more expressive language. PyXNAT provides XNAT users direct access to all the scientific packages in Python. Finally PyXNAT aims to be efficient and easy to use, both as a back-end library to build XNAT clients and as an alternative front-end from the command line.
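A minimal session with the module, as described, might look like the following. The server URL and credentials are placeholders, the project identifier is hypothetical, and the exact call surface should be checked against the PyXNAT documentation.

```python
# Placeholders throughout: replace server, credentials, and project ID.
import pyxnat

central = pyxnat.Interface(server="https://central.xnat.org",
                           user="login", password="secret")

# Script a task that would otherwise be click-work in the web UI:
# list projects, then the subjects of one of them.
for project_id in central.select.projects().get():
    print(project_id)

subjects = central.select.project("MyProject").subjects().get()  # hypothetical ID
print(len(subjects), "subjects")
```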
Flores-Funes, Diego; Campillo-Soto, Álvaro; Pellicer-Franco, Enrique; Aguayo-Albasini, José Luis
2016-11-01
Postoperative ileus is one of the main complications in the postoperative period. New measures appeared with the introduction of "fast-track surgery" to accelerate recovery: coffee, chewing gum and Gastrografin. We performed a summary of current evidence, reviewing articles from the MEDLINE, Cochrane Database of Systematic Reviews, ISI Web of Science, and SCOPUS databases. The search terms employed were "postoperative ileus" AND ("definition" OR "epidemiology" OR "risk factors" OR "management"). We selected 44 articles: 9 systematic reviews, 11 narrative reviews, 13 randomized clinical trials, 6 observational studies, and the remaining 5 scientific letters, hypotheses, etc. There is little literature about this topic; studies are heterogeneous, with disparity in the results. In addition, they only focus on colorectal and gynecological surgery. New high-quality studies are needed, preferably randomized clinical trials, in order to clarify the usefulness of these measures. Copyright © 2016 AEC. Published by Elsevier España, S.L.U. All rights reserved.
Digital Earth system based river basin data integration
NASA Astrophysics Data System (ADS)
Zhang, Xin; Li, Wanqing; Lin, Chao
2014-12-01
Digital Earth is an integrated approach to building scientific infrastructure. Digital Earth systems provide a three-dimensional visualization and integration platform for river basin data, including management data, in situ observation data, remote sensing observation data, and model output data. This paper studies Digital Earth based river basin data integration technology. First, the construction of the Digital Earth based three-dimensional river basin data integration environment is discussed. Then the river basin management data integration technology is presented, realized through a general database access interface, web services, and an ActiveX control. Third, in situ data stored as database records are integrated with three-dimensional models of the corresponding observation apparatus displayed in the Digital Earth system via a shared ID code. The next two parts discuss the remote sensing data and model output data integration technologies in detail. The application in the Digital Zhang River Basin System of China shows that the method can effectively improve the usage efficiency and visualization of the data.
Scientific Framework for Stormwater Monitoring by the Washington State Department of Transportation
Sheibley, R.W.; Kelly, V.J.; Wagner, R.J.
2009-01-01
The Washington State Department of Transportation municipal stormwater monitoring program, in operation for about 8 years, never has received an external, objective assessment. In addition, the Washington State Department of Transportation would like to identify the standard operating procedures and quality assurance protocols that must be adopted so that their monitoring program will meet the requirements of the new National Pollutant Discharge Elimination System municipal stormwater permit. As a result, in March 2009, the Washington State Department of Transportation asked the U.S. Geological Survey to assess their pre-2009 municipal stormwater monitoring program. This report presents guidelines developed for the Washington State Department of Transportation to meet new permit requirements and regional/national stormwater monitoring standards to ensure that adequate processes and procedures are identified to collect high-quality, scientifically defensible municipal stormwater monitoring data. These include: (1) development of coherent vision and cooperation among all elements of the program; (2) a comprehensive approach for site selection; (3) an effective quality assurance program for field, laboratory, and data management; and (4) an adequate database and data management system.
Basner, Jodi E.; Theisz, Katrina I.; Jensen, Unni S.; Jones, C. David; Ponomarev, Ilya; Sulima, Pawel; Jo, Karen; Eljanne, Mariam; Espey, Michael G.; Franca-Koh, Jonathan; Hanlon, Sean E.; Kuhn, Nastaran Z.; Nagahara, Larry A.; Schnell, Joshua D.; Moore, Nicole M.
2013-01-01
Development of effective quantitative indicators and methodologies to assess the outcomes of cross-disciplinary collaborative initiatives has the potential to improve scientific program management and scientific output. This article highlights an example of a prospective evaluation that has been developed to monitor and improve progress of the National Cancer Institute Physical Sciences—Oncology Centers (PS-OC) program. Study data, including collaboration information, was captured through progress reports and compiled using the web-based analytic database: Interdisciplinary Team Reporting, Analysis, and Query Resource. Analysis of collaborations was further supported by data from the Thomson Reuters Web of Science database, MEDLINE database, and a web-based survey. Integration of novel and standard data sources was augmented by the development of automated methods to mine investigator pre-award publications, assign investigator disciplines, and distinguish cross-disciplinary publication content. The results highlight increases in cross-disciplinary authorship collaborations from pre- to post-award years among the primary investigators and confirm that a majority of cross-disciplinary collaborations have resulted in publications with cross-disciplinary content that rank in the top third of their field. With these evaluation data, PS-OC Program officials have provided ongoing feedback to participating investigators to improve center productivity and thereby facilitate a more successful initiative. Future analysis will continue to expand these methods and metrics to adapt to new advances in research evaluation and changes in the program. PMID:24808632
NASA Astrophysics Data System (ADS)
Williams, J. W.; Grimm, E. C.; Ashworth, A. C.; Blois, J.; Charles, D. F.; Crawford, S.; Davis, E.; Goring, S. J.; Graham, R. W.; Miller, D. A.; Smith, A. J.; Stryker, M.; Uhen, M. D.
2017-12-01
The Neotoma Paleoecology Database supports global change research at the intersection of geology and ecology by providing a high-quality, community-curated data repository for paleoecological data. These data are widely used to study biological responses and feedbacks to past environmental change at local to global scales. The Neotoma data model is flexible and can store multiple kinds of fossil, biogeochemical, or physical variables measured from sedimentary archives. Data additions to Neotoma are growing and include >3.5 million observations, >16,000 datasets, and >8,500 sites. Dataset types include fossil pollen, vertebrates, diatoms, ostracodes, macroinvertebrates, plant macrofossils, insects, testate amoebae, geochronological data, and the recently added organic biomarkers, stable isotopes, and specimen-level data. Neotoma data can be found and retrieved in multiple ways, including the Explorer map-based interface, a RESTful Application Programming Interface, the neotoma R package, and digital object identifiers. Neotoma has partnered with the Paleobiology Database to produce a common data portal for paleobiological data, called the Earth Life Consortium. A new embargo management system is designed to allow investigators to put their data into Neotoma and then make use of Neotoma's value-added services. Neotoma's distributed scientific governance model is flexible and scalable, with many open pathways for welcoming new members, data contributors, stewards, and research communities. As the volume and variety of scientific data grow, community-curated data resources such as Neotoma have become foundational infrastructure for big data science.
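A hedged sketch of pulling site records over the RESTful interface with Python's requests library follows; the v2.0 endpoint path, parameters, and response fields are taken from the public API documentation and should be treated as assumptions that may change.

```python
import requests

# Endpoint and parameter names follow the public v2.0 API docs (assumption).
resp = requests.get("https://api.neotomadb.org/v2.0/data/sites",
                    params={"sitename": "%Lake%", "limit": 5})
resp.raise_for_status()
for site in resp.json().get("data", []):
    print(site.get("siteid"), site.get("sitename"))
```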
NASA Astrophysics Data System (ADS)
Krumhansl, R. A.; Foster, J.; Peach, C. L.; Busey, A.; Baker, I.
2012-12-01
The practice of science and engineering is being revolutionized by the development of cyberinfrastructure for accessing near real-time and archived observatory data. Large cyberinfrastructure projects have the potential to transform the way science is taught in high school classrooms, making enormous quantities of scientific data available, giving students opportunities to analyze and draw conclusions from many kinds of complex data, and providing students with experiences using state-of-the-art resources and techniques for scientific investigations. However, online interfaces to scientific data are built by scientists for scientists, and their design can significantly impede broad use by novices. Knowledge relevant to the design of student interfaces to complex scientific databases is broadly dispersed among disciplines ranging from cognitive science to computer science and cartography and is not easily accessible to designers of educational interfaces. To inform efforts at bridging scientific cyberinfrastructure to the high school classroom, Education Development Center, Inc. and the Scripps Institution of Oceanography conducted an NSF-funded 2-year interdisciplinary review of literature and expert opinion pertinent to making interfaces to large scientific databases accessible to and usable by precollege learners and their teachers. Project findings are grounded in the fundamentals of Cognitive Load Theory, Visual Perception, Schemata formation and Universal Design for Learning. The Knowledge Status Report (KSR) presents cross-cutting and visualization-specific guidelines that highlight how interface design features can address or ameliorate challenges novice high school students face as they navigate complex databases to find data, and construct and look for patterns in maps, graphs, animations and other data visualizations. The guidelines present ways to make scientific databases more broadly accessible by: 1) adjusting the cognitive load imposed by the user interface and visualizations so that it does not exceed the amount of information the learner can actively process; 2) drawing attention to important features and patterns; and 3) enabling customization of visualizations and tools to meet the needs of diverse learners.
NASA Astrophysics Data System (ADS)
Kramer, K.; Shedd, W. W.
2017-12-01
In May, 2017, the U.S. Department of the Interior's Bureau of Ocean Energy Management (BOEM) published a high-resolution seafloor map of the northern Gulf of Mexico region. The new map, derived from 3-D seismic surveys, provides the scientific community with enhanced resolution and reveals previously undiscovered and poorly resolved geologic features of the continental slope, salt minibasin province, abyssal plain, Mississippi Fan, and the Florida Shelf and Escarpment. It becomes an even more powerful scientific tool when paired with BOEM's public database of 35,000 seafloor features, identifying natural hydrocarbon seeps, hard grounds, mud volcanoes, sediment flows, pockmarks, slumps, and many others. BOEM has mapped the Gulf of Mexico seafloor since 1998 in a regulatory mission to identify natural oil and gas seeps and protect the coral and chemosynthetic communities growing at those sites. The nineteen-year mapping effort, still ongoing, resulted in the creation of the 1.4-billion pixel map and the seafloor features database. With these tools and continual collaboration with academia, professional scientific institutions, and the offshore energy industry, BOEM will continue to incorporate new data to update and expand these two resources on a regular basis. They can be downloaded for free from BOEM's website at https://www.boem.gov/Gulf-of-Mexico-Deepwater-Bathymetry/ and https://www.boem.gov/Seismic-Water-Bottom-Anomalies-Map-Gallery/.
Annotated bibliography of scientific research on greater sage-grouse published since January 2015
Carter, Sarah K.; Manier, Daniel J.; Arkle, Robert S.; Johnston, Aaron; Phillips, Susan L.; Hanser, Steven E.; Bowen, Zachary H.
2018-02-14
The greater sage-grouse (Centrocercus urophasianus; hereafter GRSG) has been a focus of scientific investigation and management action for the past two decades. The 2015 U.S. Fish and Wildlife Service listing determination of “not warranted” was in part due to a large-scale collaborative effort to develop strategies to conserve GRSG populations and their habitat and to reduce threats to both. New scientific information augments existing knowledge and can help inform updates or modifications to existing plans for managing GRSG and sagebrush ecosystems. However, the sheer number of scientific publications can be a challenge for managers tasked with evaluating and determining the need for potential updates to existing planning documents. To assist in this process, the U.S. Geological Survey (USGS) has reviewed and summarized the scientific literature published since January 1, 2015. To identify articles and reports published about GRSG, we first conducted a structured search of three reference databases (Web of Science, Scopus, and Google Scholar) using the search term “greater sage-grouse.” We refined the initial list of products by (1) removing duplicates, (2) excluding products that were not published as research or scientific review articles in peer-reviewed journals or as formal government technical reports, and (3) retaining only those products for which GRSG or their habitat was a research focus. We summarized the contents of each product by using a consistent structure (background, objectives, methods, location, findings, and implications) and assessed the content of each product relevant to a list of 31 management topics. These topics include GRSG biology and habitat characteristics along with potential management actions, land uses, and environmental factors related to GRSG management and conservation. We also noted which articles/reports created new geospatial data. The final search was conducted on January 6, 2018, and application of our criteria resulted in the inclusion of 169 published products (2 of these products were published corrections to journal articles). The management topics most commonly addressed were GRSG behavior or demographics and GRSG habitat selection or habitat characteristics at broad or site scales. Few products addressed captive breeding, recreation, wild horses and burros, and range management structures (including fences). We include in this annotated bibliography the full citation, product summary, and management topics addressed by each product. The online version of this bibliography (https://apps.usgs.gov/gsgbib/index.php) is searchable by topic and location and includes links to the original publications. A substantial body of literature has been compiled based on research explicitly related to the conservation, management, monitoring, and assessment of GRSG. These studies may inform planning and management actions that seek to balance conservation, economic, and social objectives and manage diverse resource uses and values across the western United States. The review process for this product included requesting input on each summary from one or more authors of the original peer-reviewed article or report and a formal review of the entire document by three independent reviewers and, subsequently, the USGS Bureau Approving Official. This process is consistent with USGS Fundamental Science Practices.
NASA Astrophysics Data System (ADS)
Albeke, S. E.; Perkins, D. G.; Ewers, S. L.; Ewers, B. E.; Holbrook, W. S.; Miller, S. N.
2015-12-01
The sharing of data and results is paramount for advancing scientific research. The Wyoming Center for Environmental Hydrology and Geophysics (WyCEHG) is a multidisciplinary group that is driving scientific breakthroughs to help manage water resources in the Western United States. WyCEHG is mandated by the National Science Foundation (NSF) to share their data. However, the infrastructure from which to share such diverse, complex and massive amounts of data did not exist within the University of Wyoming. We developed an innovative framework to meet the data organization, sharing, and discovery requirements of WyCEHG by integrating both open and closed source software, embedded metadata tags, semantic web technologies, and a web-mapping application. The infrastructure uses a Relational Database Management System as the foundation, providing a versatile platform to store, organize, and query myriad datasets, taking advantage of both structured and unstructured formats. Detailed metadata are fundamental to the utility of datasets. We tag data with Uniform Resource Identifiers (URIs) to specify concepts with formal descriptions (i.e. semantic ontologies), thus allowing users the ability to search metadata based on the intended context rather than conventional keyword searches. Additionally, WyCEHG data are geographically referenced. Using the ArcGIS API for JavaScript, we developed a web mapping application leveraging database-linked spatial data services, providing a means to visualize and spatially query available data in an intuitive map environment. Using server-side scripting (PHP), the mapping application, in conjunction with semantic search modules, dynamically communicates with the database and file system, providing access to available datasets. Our approach provides a flexible, comprehensive infrastructure from which to store and serve WyCEHG's highly diverse research-based data. This framework has not only allowed WyCEHG to meet its data stewardship requirements, but can provide a template for others to follow.
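The semantic-tagging idea can be sketched independently of the actual WyCEHG stack: datasets carry concept URIs, and a toy broader-concept table lets a search match by meaning rather than by keyword. All URIs and dataset names below are placeholders, not WyCEHG's vocabulary.

```python
# All URIs are placeholders, not WyCEHG's actual vocabulary.
DATASETS = {
    "snotel_2014.csv":   {"http://example.org/ont#SnowWaterEquivalent"},
    "gpr_survey_07.dat": {"http://example.org/ont#GroundPenetratingRadar"},
    "sapflux_site3.csv": {"http://example.org/ont#Transpiration"},
}

# Toy ontology: concept -> broader concept, enabling semantic expansion.
BROADER = {
    "http://example.org/ont#SnowWaterEquivalent": "http://example.org/ont#Hydrology",
    "http://example.org/ont#Transpiration":       "http://example.org/ont#Ecohydrology",
}

def find(concept):
    """Match datasets tagged with `concept` or with a narrower concept."""
    for name, tags in DATASETS.items():
        if concept in tags or any(BROADER.get(t) == concept for t in tags):
            yield name

print(list(find("http://example.org/ont#Hydrology")))  # ['snotel_2014.csv']
```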
SciELO, Scientific Electronic Library Online, a Database of Open Access Journals
ERIC Educational Resources Information Center
Meneghini, Rogerio
2013-01-01
This essay discusses SciELO, a scientific journal database operating in 14 countries. It covers over 1000 journals providing open access to full text and table sets of scientometrics data. In Brazil it is responsible for a collection of nearly 300 journals, selected along 15 years as the best Brazilian periodicals in natural and social sciences.…
Instruments of scientific visual representation in atomic databases
NASA Astrophysics Data System (ADS)
Kazakov, V. V.; Kazakov, V. G.; Meshkov, O. I.
2017-10-01
Graphic tools for spectral data representation provided by operating information systems on atomic spectroscopy (ASD NIST, VAMDC, SPECTR-W3, and Electronic Structure of Atoms) in support of scientific research and human-resource development are presented. Tools for the visual representation of scientific data, such as spectrogram and Grotrian diagram plotting, are considered. The possibility of comparing experimentally obtained spectra against reference spectra of atomic systems generated from a resource's database is described. Techniques for accessing these graphic tools are presented.
Costello, Mark J; Bouchet, Philippe; Boxshall, Geoff; Fauchald, Kristian; Gordon, Dennis; Hoeksema, Bert W; Poore, Gary C B; van Soest, Rob W M; Stöhr, Sabine; Walter, T Chad; Vanhoorne, Bart; Decock, Wim; Appeltans, Ward
2013-01-01
The World Register of Marine Species is an over 90% complete open-access inventory of all marine species names. Here we illustrate the scale of the problems with species names, synonyms, and their classification, and describe how WoRMS publishes online quality assured information on marine species. Within WoRMS, over 100 global, 12 regional and 4 thematic species databases are integrated with a common taxonomy. Over 240 editors from 133 institutions and 31 countries manage the content. To avoid duplication of effort, content is exchanged with 10 external databases. At present WoRMS contains 460,000 taxonomic names (from Kingdom to subspecies), 368,000 species level combinations of which 215,000 are currently accepted marine species names, and 26,000 related but non-marine species. Associated information includes 150,000 literature sources, 20,000 images, and locations of 44,000 specimens. Usage has grown linearly since its launch in 2007, with about 600,000 unique visitors to the website in 2011, and at least 90 organisations from 12 countries using WoRMS for their data management. By providing easy access to expert-validated content, WoRMS improves quality control in the use of species names, with consequent benefits to taxonomy, ecology, conservation and marine biodiversity research and management. The service manages information on species names that would otherwise be overly costly for individuals, and thus minimises errors in the application of nomenclature standards. WoRMS' content is expanding to include host-parasite relationships, additional literature sources, locations of specimens, images, distribution range, ecological, and biological data. Species are being categorised as introduced (alien, invasive), of conservation importance, and on other attributes. These developments have a multiplier effect on its potential as a resource for biodiversity research and management. As a consequence of WoRMS, we are witnessing improved communication within the scientific community, and anticipate increased taxonomic efficiency and quality control in marine biodiversity research and management.
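Programmatic access of the kind described is available through the WoRMS REST service; the sketch below queries records by scientific name. The endpoint name and response fields are taken from the WoRMS web-service documentation and should be treated as assumptions subject to change.

```python
import requests
from urllib.parse import quote

name = "Gadus morhua"
# Endpoint per the WoRMS REST documentation (assumption).
url = "https://www.marinespecies.org/rest/AphiaRecordsByName/" + quote(name)
records = requests.get(url, params={"like": "false", "marine_only": "true"}).json()
for rec in records:
    print(rec.get("AphiaID"), rec.get("scientificname"), rec.get("status"))
```

A check like this is exactly the quality-control use case the abstract describes: resolving a name to its accepted AphiaID before it enters a downstream dataset.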
Simple re-instantiation of small databases using cloud computing.
Tan, Tin Wee; Xie, Chao; De Silva, Mark; Lim, Kuan Siong; Patro, C Pawan K; Lim, Shen Jean; Govindarajan, Kunde Ramamoorthy; Tong, Joo Chuan; Choo, Khar Heng; Ranganathan, Shoba; Khan, Asif M
2013-01-01
Small bioinformatics databases, unlike institutionally funded large databases, are vulnerable to discontinuation and many reported in publications are no longer accessible. This leads to irreproducible scientific work and redundant effort, impeding the pace of scientific progress. We describe a Web-accessible system, available online at http://biodb100.apbionet.org, for archival and future on demand re-instantiation of small databases within minutes. Depositors can rebuild their databases by downloading a Linux live operating system (http://www.bioslax.com), preinstalled with bioinformatics and UNIX tools. The database and its dependencies can be compressed into an ".lzm" file for deposition. End-users can search for archived databases and activate them on dynamically re-instantiated BioSlax instances, run as virtual machines over the two popular full virtualization standard cloud-computing platforms, Xen Hypervisor or vSphere. The system is adaptable to increasing demand for disk storage or computational load and allows database developers to use the re-instantiated databases for integration and development of new databases. Herein, we demonstrate that a relatively inexpensive solution can be implemented for archival of bioinformatics databases and their rapid re-instantiation should the live databases disappear.
Linking events, science and media for flood and drought management
NASA Astrophysics Data System (ADS)
Ding, M.; Wei, Y.; Zheng, H.; Zhao, Y.
2017-12-01
Throughout history, floods and droughts have been closely related to the development of human riparian civilization. The socio-economic damage caused by floods and droughts appears to be on the rise, and their frequency is increasing under global climate change. In this paper, we take a fresh perspective to examine the (dis)connection between events (floods and droughts), research papers and media reports in 42 river basins worldwide between 1990 and 2012, in search of better solutions for flood and drought management. We collected hydrological data from the NOAA/ESRL Physical Sciences Division (PSD) and CPC Merged Analysis of Precipitation (CMAP), all relevant scientific papers from the Web of Science (WOS), and media records from the Emergency Events Database (EM-DAT) for the study period; presented the temporal variability of these three groups of data at the annual level; and analysed the connections among them in typical river basins. We found that 1) flood-related reports in both media and research far outnumber those on droughts; 2) media reports focused only on certain topics (death, severity and damage) and certain catchments (the Mediterranean Sea and the Nile River); 3) scientific contributions on floods and droughts were limited to a few river basins, such as the Nile, Parana, Savannah and Murray-Darling; 4) scientific contributions on floods and droughts were limited to a few disciplines, such as Geology, Environmental Sciences & Ecology, Agriculture, Engineering and Forestry. It is recommended that multidisciplinary contribution and collaboration be promoted to achieve comprehensive flood/drought management, and that science and the media interactively play their valuable roles in flood/drought issues. Keywords: floods, droughts, events, science, media, flood and drought management
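A minimal sketch of the kind of alignment analysis described follows, with toy per-basin annual counts and a rank correlation between events, papers, and media reports; all numbers and column names are illustrative only, not data from the study.

```python
import pandas as pd

df = pd.DataFrame({
    "year":   [2008, 2009, 2010, 2011, 2012],
    "floods": [3, 1, 4, 2, 5],      # events per year in one basin (toy)
    "papers": [2, 2, 3, 3, 6],      # WOS records mentioning the basin (toy)
    "media":  [10, 2, 25, 4, 40],   # EM-DAT-linked media reports (toy)
})
# Rank correlation shows how closely science and media track the events.
print(df[["floods", "papers", "media"]].corr(method="spearman"))
```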
Lee, Hwan Young; Song, Injee; Ha, Eunho; Cho, Sung-Bae; Yang, Woo Ick; Shin, Kyoung-Jin
2008-01-01
Background For the past few years, scientific controversy has surrounded the large number of errors in forensic and literature mitochondrial DNA (mtDNA) data. However, recent research has shown that using mtDNA phylogeny and referring to known mtDNA haplotypes can be useful for checking the quality of sequence data. Results We developed a Web-based bioinformatics resource "mtDNAmanager" that offers a convenient interface supporting the management and quality analysis of mtDNA sequence data. The mtDNAmanager performs computations on mtDNA control-region sequences to estimate the most-probable mtDNA haplogroups and retrieves similar sequences from a selected database. By the phased designation of the most-probable haplogroups (both expected and estimated haplogroups), mtDNAmanager enables users to systematically detect errors whilst allowing for confirmation of the presence of clear key diagnostic mutations and accompanying mutations. The query tools of mtDNAmanager also facilitate database screening with two options of "match" and "include the queried nucleotide polymorphism". In addition, mtDNAmanager provides Web interfaces for users to manage and analyse their own data in batch mode. Conclusion The mtDNAmanager will provide systematic routines for mtDNA sequence data management and analysis via easily accessible Web interfaces, and thus should be very useful for population, medical and forensic studies that employ mtDNA analysis. mtDNAmanager can be accessed at . PMID:19014619
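The quality-control logic described, ranking candidate haplogroups by their key diagnostic mutations and flagging profiles that lack expected sites, can be sketched as follows. The haplogroup motifs are abbreviated, hypothetical examples, not a real reference table, and the scoring rule is a simple stand-in for mtDNAmanager's actual method.

```python
# Hypothetical, abbreviated haplogroup motifs; a real table would come from
# the mtDNA phylogeny the abstract refers to.
DIAGNOSTIC = {
    "H":  {"263G", "315.1C"},
    "L3": {"16223T", "263G", "315.1C"},
    "M":  {"16223T", "16362C", "263G"},
}

def rank_haplogroups(profile):
    scores = {hg: len(sites & profile) / len(sites)
              for hg, sites in DIAGNOSTIC.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

query = {"16223T", "16362C", "263G"}
print(rank_haplogroups(query)[0])   # ('M', 1.0): most-probable haplogroup

# Error check: are any key diagnostic sites of the estimated haplogroup absent?
missing = DIAGNOSTIC["M"] - query
print("missing diagnostic sites:", missing or "none")
```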
Metnitz, P G; Laback, P; Popow, C; Laback, O; Lenz, K; Hiesmayr, M
1995-01-01
Patient Data Management Systems (PDMS) for ICUs collect, present and store clinical data. Analysis of these digitally stored data is desirable for various purposes, such as quality control or scientific research. The aim of the Intensive Care Data Evaluation project (ICDEV) was to provide a database tool for the analysis of data recorded at various ICUs of the University Clinics of Vienna, General Hospital of Vienna, where two different PDMSs were used: CareVue 9000 (Hewlett Packard, Andover, USA) at two ICUs (one medical ICU and one neonatal ICU) and PICIS Chart+ (PICIS, Paris, France) at one cardiothoracic ICU. CONCEPT AND METHODS: Clinically oriented analysis of the data collected in a PDMS at an ICU was the starting point of the development. After defining the database structure we established a client-server based database system under Microsoft Windows NT and developed a user-friendly data querying application using Microsoft Visual C++ and Visual Basic. ICDEV was successfully installed at three different ICUs; adjustments to the different PDMS configurations were made within a few days. The database structure developed by us enables a powerful query concept representing an 'EXPERT QUESTION COMPILER' which may help to answer almost any clinical question. Several program modules facilitate queries at the patient, group and unit level. Results from ICDEV queries are automatically transferred to Microsoft Excel for display (in the form of configurable tables and graphs) and further processing. The ICDEV concept is configurable for adjustment to different intensive care information systems and can be used to support computerized quality control. However, as long as there exists no sufficient artifact recognition or data validation software for automatically recorded patient data, the reliability of these data and their usage for computer-assisted quality control remain unclear and should be further studied.
Levels of evidence in pelvic trauma: a bibliometric analysis of the top 50 cited papers.
White-Gibson, Ailbhe; O'Neill, Barry; Cooper, David; Leonard, Michael; O'Daly, Brendan
2018-05-12
Scientific research is an essential aspect of the ongoing development of medical education and improved patient care. Dissemination of findings is a pivotal goal of any health research study. The number of citations that a published article receives is reflective of the importance that paper has on clinical practice. To date, it is unknown which journals are most frequently cited as influencing the management of pelvic trauma. The aim of this study was to identify the top 50 publications relating to the management of pelvic trauma. The database of the Science Citation Index of the Institute for Scientific Information (1945 to 2016) was reviewed to identify the 50 papers most commonly cited. A total of 1535 papers were included. Of these, 31 papers were cited over 100 times, with the top 50 cited 69 times or more. The top 50 were subjected to further analysis to identify the authors and institutions involved. The majority of these publications originated in the USA, followed by Canada. The most cited paper is "Pelvic ring fractures: should they be fixed?", published by Tile in 1988. We have identified and analysed the publications that have contributed most to the assessment and management of pelvic trauma over the past 50 years. We have also identified the researchers and institutions which have most influenced the evidence-based approach currently employed in the management of pelvic trauma.
SAADA: Astronomical Databases Made Easier
NASA Astrophysics Data System (ADS)
Michel, L.; Nguyen, H. N.; Motch, C.
2005-12-01
Many astronomers wish to share datasets with their community but lack the manpower to develop databases with the functionality required for high-level scientific applications. The SAADA project aims at automating the creation and deployment process of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC and covered by a Java layer including a large amount of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with each other using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum to a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich Web interface or a Java API. We are currently developing an interoperability module implementing VO protocols.
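A toy relational sketch can illustrate what a qualified link buys: the link row itself carries the qualifier (here an angular distance), so a query can constrain the linked objects and the qualifier together. The schema is invented for illustration and is not SAADA's data model.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sources (id INTEGER PRIMARY KEY, name TEXT, mag REAL);
CREATE TABLE spectra (id INTEGER PRIMARY KEY, file TEXT, snr REAL);
-- The link row carries the qualifier: here, a cross-identification distance.
CREATE TABLE links (source_id INTEGER, spectrum_id INTEGER, distance REAL);
INSERT INTO sources VALUES (1, 'SRC-001', 14.2), (2, 'SRC-002', 17.9);
INSERT INTO spectra VALUES (10, 's10.fits', 25.0), (11, 's11.fits', 4.0);
INSERT INTO links VALUES (1, 10, 0.8), (2, 11, 3.5);
""")

# Bright sources with a good spectrum cross-identified within 1 arcsec:
# constraints on both linked objects and on the link qualifier itself.
rows = db.execute("""
    SELECT s.name, sp.file, l.distance
    FROM sources s
    JOIN links l    ON l.source_id = s.id
    JOIN spectra sp ON sp.id = l.spectrum_id
    WHERE s.mag < 15 AND sp.snr > 10 AND l.distance < 1.0
""").fetchall()
print(rows)  # [('SRC-001', 's10.fits', 0.8)]
```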
NASA Astrophysics Data System (ADS)
Pignol, C.; Arnaud, F.; Godinho, E.; Galabertier, B.; Caillo, A.; Billy, I.; Augustin, L.; Calzas, M.; Rousseau, D. D.; Crosta, X.
2016-12-01
Managing scientific data is probably one of the most challenging issues in modern science. In the paleosciences the question is made even more sensitive by the need to preserve and manage high-value, fragile geological samples: cores. Large international scientific programs, such as IODP and ICDP, have led intense efforts to solve this problem and proposed detailed, high-standard workflows and dataflows for core handling and curating. However, many paleoscience results derive from small-scale research programs in which data and sample management is too often handled only locally - when it is at all… In this paper we present a national effort led in France to develop an integrated system to curate ice and sediment cores. Under the umbrella of the national excellence equipment program CLIMCOR, we launched a reflection on core curating and the management of associated fieldwork data. Our aim was to conserve all fieldwork data in an integrated cyber-environment that will evolve toward laboratory-acquired data storage in the near future. To do so, we worked in close relationship with field operators as well as laboratory core curators in order to propose user-oriented solutions. The national core curating initiative offers a single web portal in which all teams can store their fieldwork data. This portal is used as a national hub to attribute IGSNs. For legacy samples, this requires the establishment of a dedicated core list with associated metadata. For forthcoming core data, however, we developed a mobile application to capture technical and scientific data directly in the field. This application is linked to a unique coring-tool library and is adapted to most coring devices (gravity, drilling, percussion, etc.), including multi-section and multi-hole coring operations. These field data can be uploaded automatically to the national portal, referenced through international standards (IGSN and INSPIRE), and displayed in international portals (currently, NOAA's IMLGS). In this paper, we present the architecture of the integrated system, future perspectives, and the approach we adopted to reach our goals. We will also present our mobile application through didactic examples.
IRIS Toxicological Review of Ethylene Glycol Mono-Butyl ...
EPA has conducted a peer review of the scientific basis supporting the human health hazard and dose-response assessment of ethylene glycol monobutyl ether that will appear on the Integrated Risk Information System (IRIS) database.
Gene annotation from scientific literature using mappings between keyword systems.
Pérez, Antonio J; Perez-Iratxeta, Carolina; Bork, Peer; Thode, Guillermo; Andrade, Miguel A
2004-09-01
The description of genes in databases by keywords helps the non-specialist to quickly grasp the properties of a gene and increases the efficiency of computational tools that are applied to gene data (e.g. searching a gene database for sequences related to a particular biological process). However, the association of keywords to genes or protein sequences is a difficult process that ultimately implies examination of the literature related to a gene. To support this task, we present a procedure to derive keywords from the set of scientific abstracts related to a gene. Our system is based on the automated extraction of mappings between related terms from different databases using a model of fuzzy associations that can be applied with all generality to any pair of linked databases. We tested the system by annotating genes of the SWISS-PROT database with keywords derived from the abstracts linked to their entries (stored in the MEDLINE database of scientific references). The performance of the annotation procedure was much better for SWISS-PROT keywords (recall of 47%, precision of 68%) than for Gene Ontology terms (recall of 8%, precision of 67%). The algorithm can be publicly accessed and used for the annotation of sequences through a web server at http://www.bork.embl.de/kat
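A minimal sketch, under assumed definitions, of the co-occurrence idea behind such mappings: count how often a database keyword and an abstract-derived term attach to the same gene, and score the association as a conditional frequency. The scoring rule below is a simple stand-in for the paper's actual fuzzy-association model, and the data are toy examples.

```python
from collections import Counter

# Toy data: each gene carries database keywords and abstract-derived terms.
genes = [
    ({"kinase"},             {"phosphorylation", "signal"}),
    ({"kinase", "membrane"}, {"phosphorylation", "receptor"}),
    ({"membrane"},           {"receptor", "transport"}),
]

pair_n, term_n = Counter(), Counter()
for keywords, terms in genes:
    for t in terms:
        term_n[t] += 1
        for k in keywords:
            pair_n[(k, t)] += 1

def association(keyword, term):
    """Fraction of term-bearing genes that also carry the keyword."""
    return pair_n[(keyword, term)] / term_n[term]

print(association("kinase", "phosphorylation"))  # 1.0
print(association("kinase", "receptor"))         # 0.5
```

Applied at scale to MEDLINE abstracts linked to SWISS-PROT entries, mappings of this kind are what let new sequences inherit candidate keywords from their literature.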
Optimal management of orthodontic pain.
Topolski, Francielle; Moro, Alexandre; Correr, Gisele Maria; Schimim, Sasha Cristina
2018-01-01
Pain is an undesirable side effect of orthodontic tooth movement, which causes many patients to give up orthodontic treatment or avoid it altogether. The aim of this study was to investigate, through an analysis of the scientific literature, the best method for managing orthodontic pain. The methodological aspects involved careful definition of keywords and diligent search in databases of scientific articles published in the English language, without any restriction of publication date. We recovered 1281 articles. After the filtering and classification of these articles, 56 randomized clinical trials were selected. Of these, 19 evaluated the effects of different types of drugs for the control of orthodontic pain, 16 evaluated the effects of low-level laser therapy on orthodontic pain, and 21 evaluated other methods of pain control. Drugs reported as effective in orthodontic pain control included ibuprofen, paracetamol, naproxen sodium, aspirin, etoricoxib, meloxicam, piroxicam, and tenoxicam. Most studies report favorable outcomes in terms of alleviation of orthodontic pain with the use of low-level laser therapy. Nevertheless, we noticed that there is no consensus, both for the drug and for laser therapy, on the doses and clinical protocols most appropriate for orthodontic pain management. Alternative methods for orthodontic pain control can also broaden the clinician's range of options in the search for better patient care.
Annual Review of Database Developments: 1993.
ERIC Educational Resources Information Center
Basch, Reva
1993-01-01
Reviews developments in the database industry for 1993. Topics addressed include scientific and technical information; environmental issues; social sciences; legal information; business and marketing; news services; documentation; databases and document delivery; electronic bulletin boards and the Internet; and information industry organizational…
Contraception supply chain challenges: a review of evidence from low- and middle-income countries.
Mukasa, Bakali; Ali, Moazzam; Farron, Madeline; Van de Weerdt, Renee
2017-10-01
To identify and assess factors determining the functioning of supply chain systems for modern contraception in low- and middle-income countries (LMICs), and to identify challenges contributing to contraception stockouts that may lead to unmet need. Scientific databases and grey literature were searched, including the Database of Abstracts of Reviews of Effectiveness (DARE), PubMed, MEDLINE, POPLINE, CINAHL, Academic Search Complete, Science Direct, Web of Science, Cochrane Central, Google Scholar, WHO databases and websites of key international organisations. Studies indicated that supply chain system inefficiencies significantly affect the availability of modern family planning and contraception commodities in LMICs, especially in rural public facilities where distribution barriers may be acute. Supply chain failures or bottlenecks may be attributed to: weak and poorly institutionalized logistics management information systems (LMIS), poor physical infrastructure in LMICs, lack of trained and dedicated staff for supply chain management, inadequate funding, and rigid government policies on task sharing. However, there is evidence that implementing effective LMISs and involving public and private providers in distribution channels resulted in reductions in medical commodities' stockout rates. Supply chain bottlenecks contribute significantly to persistently high stockout rates for modern contraceptives in LMICs. Interventions aimed at enhancing uptake of contraceptives to reduce the problem of unmet need in LMICs should make strong commitments towards strengthening these countries' health commodity supply chain management systems. Current evidence is limited, and additional well-designed implementation research on contraception supply chain systems is warranted to gain further understanding of and insights into the determinants of supply chain bottlenecks and their impact on stockouts of contraception commodities.
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
Creating a FIESTA (Framework for Integrated Earth Science and Technology Applications) with MagIC
NASA Astrophysics Data System (ADS)
Minnett, R.; Koppers, A. A. P.; Jarboe, N.; Tauxe, L.; Constable, C.
2017-12-01
The Magnetics Information Consortium (https://earthref.org/MagIC) has recently developed a containerized web application to considerably reduce the friction in contributing, exploring and combining valuable and complex datasets for the paleo-, geo- and rock magnetic scientific community. The data produced in this scientific domain are inherently hierarchical, and the community's evolving approaches to this scientific workflow, from sampling to taking measurements to multiple levels of interpretation, require a large and flexible data model to adequately annotate the results and ensure reproducibility. Historically, contributing such detail in a consistent format has been prohibitively time consuming and often resulted in only publishing the highly derived interpretations. The new open-source (https://github.com/earthref/MagIC) application provides a flexible upload tool integrated with the data model to easily create a validated contribution, and a powerful search interface for discovering datasets and combining them to enable transformative science. MagIC is hosted at EarthRef.org along with several interdisciplinary geoscience databases. A FIESTA (Framework for Integrated Earth Science and Technology Applications) is being created by generalizing MagIC's web application for reuse in other domains. The application relies on a single configuration document that describes the routing, data model, component settings and external service integrations. The container hosts an isomorphic Meteor JavaScript application, a MongoDB database and an ElasticSearch search engine. Multiple containers can be configured as microservices to serve portions of the application, or rely on externally hosted MongoDB, ElasticSearch, or third-party services to efficiently scale computational demands. FIESTA is particularly well suited to many Earth Science disciplines with its flexible data model, mapping, account management, upload tool to private workspaces, reference metadata, image galleries, full-text searches and detailed filters. EarthRef's Seamount Catalog of bathymetry and morphology data, EarthRef's Geochemical Earth Reference Model (GERM) databases, and Oregon State University's Marine and Geology Repository (http://osu-mgr.org) will benefit from custom adaptations of FIESTA.
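The abstract describes the application as driven by a single configuration document covering routing, data model, components and external services. The sketch below shows one plausible shape for such a document as a Python dict; every key and value is an assumption for illustration, not the actual FIESTA schema.

```python
# Hypothetical shape of the "single configuration document" a FIESTA-style
# container might consume; all keys and values here are illustrative
# assumptions, not the real MagIC/FIESTA schema.
config = {
    "name": "magic",
    "routes": {
        "/": "home",
        "/search": "search",
        "/upload": "private-workspace",
    },
    "data_model": {
        "contribution": {
            "tables": ["locations", "sites", "samples", "specimens", "measurements"],
            "validation": "data-model-v3.json",
        }
    },
    "components": {
        "map": {"enabled": True},
        "image_gallery": {"enabled": True},
    },
    "services": {
        "database": {"kind": "mongodb", "url": "mongodb://db:27017/magic"},
        "search": {"kind": "elasticsearch", "url": "http://search:9200"},
    },
}

# A deployment for another discipline would swap the data model and routes
# while reusing the same container image.
assert "contribution" in config["data_model"]
```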
Computer Databases as an Educational Tool in the Basic Sciences.
ERIC Educational Resources Information Center
Friedman, Charles P.; And Others
1990-01-01
The University of North Carolina School of Medicine developed a computer database, INQUIRER, containing scientific information in bacteriology, and then integrated the database into routine educational activities for first-year medical students in their microbiology course. (Author/MLW)
NASA Astrophysics Data System (ADS)
Bono, Andrea
2007-01-01
The recovery and preservation of the patrimony of instrumental recordings of historical earthquakes is without doubt a subject of great interest, and the interest is scientific as well as historical: the availability of a large amount of parametric information on the seismic activity of a given area is an undoubted help to the seismological researcher. This article presents the new database project of the Sismos group of the National Institute of Geophysics and Volcanology of Rome. The structure of the new scheme summarizes the experience gained in five years of activity, and we consider it useful for those approaching "recovery and reprocessing" computer-based facilities. In past years several attempts at cataloguing Italian seismicity have followed one another, but they were almost never real databases. Some succeeded because they were well conceived and organized; others were limited to supplying lists of events with their hypocentral parameters. What makes this project more interesting than previous work is the completeness and generality of the information it manages. For example, it will be possible to view the hypocentral information for a given historical earthquake, and to retrieve the seismograms in raster, digital or digitized format, the arrival times of the phases at the various stations, the instrumental parameters, and so on. The modern relational logic on which the archive is based allows all these operations to be carried out with little effort. The database described below will completely replace Sismos' current data bank. Some of its organizational principles are similar to those behind the databases used for real-time monitoring of seismicity in the principal international research centers: a modern design logic applied in a distinctly historical context. The various design phases are described, from the conceptual level to the physical implementation of the scheme, highlighting the principal instructions, rules and technical-scientific considerations that lead to the final result: a state-of-the-art relational scheme for historical data.
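To make the "modern relational logic" concrete, here is a minimal sketch, in no way the actual Sismos schema, of how earthquakes, stations, phase arrival times and multi-format seismograms could be related:

```python
# Minimal relational sketch (not the actual INGV/Sismos schema) of the
# entities the article describes: earthquakes, stations, phase arrival
# times, and scanned seismograms in several formats.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE earthquake (
    id INTEGER PRIMARY KEY,
    origin_time TEXT NOT NULL,
    latitude REAL, longitude REAL, depth_km REAL, magnitude REAL
);
CREATE TABLE station (
    code TEXT PRIMARY KEY, name TEXT, latitude REAL, longitude REAL
);
CREATE TABLE phase_arrival (
    earthquake_id INTEGER REFERENCES earthquake(id),
    station_code TEXT REFERENCES station(code),
    phase TEXT,                 -- e.g. 'P', 'S'
    arrival_time TEXT,
    PRIMARY KEY (earthquake_id, station_code, phase)
);
CREATE TABLE seismogram (
    id INTEGER PRIMARY KEY,
    earthquake_id INTEGER REFERENCES earthquake(id),
    station_code TEXT REFERENCES station(code),
    format TEXT CHECK (format IN ('raster', 'digital', 'digitized')),
    uri TEXT                    -- where the scan or trace is stored
);
""")

con.execute("INSERT INTO earthquake VALUES (1, '1908-12-28T04:20:27', 38.15, 15.68, 10.0, 7.1)")
con.execute("INSERT INTO station VALUES ('ROM', 'Roma', 41.9, 12.5)")
con.execute("INSERT INTO phase_arrival VALUES (1, 'ROM', 'P', '1908-12-28T04:21:40')")
# Typical query: all arrivals recorded for a given historical event.
for row in con.execute("SELECT station_code, phase, arrival_time FROM phase_arrival WHERE earthquake_id = 1"):
    print(row)
```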
Scientific Communication of Geochemical Data and the Use of Computer Databases.
ERIC Educational Resources Information Center
Le Bas, M. J.; Durham, J.
1989-01-01
Describes a scheme in the United Kingdom that coordinates geochemistry publications with a computerized geochemistry database. The database comprises not only data published in the journals but also the remainder of the pertinent data set. The discussion covers the database design; collection, storage and retrieval of data; and plans for future…
Smith, Jeffrey K
2013-04-01
Regulatory administrative database systems within the Food and Drug Administration's (FDA) Center for Biologics Evaluation and Research (CBER) are essential to supporting its core mission as a regulatory agency. Such systems are used within FDA to manage information and processes surrounding the processing, review, and tracking of investigational and marketed product submissions. This is an area of increasing interest in the pharmaceutical industry and has been a topic at trade association conferences (Buckley 2012). Such databases in CBER are complex, not because of the type or relevance of the data to any particular scientific discipline, but because of the variety of regulatory submission types and processes the systems support using the data. Commonalities among different data domains of CBER's regulatory administrative databases are discussed. These commonalities have evolved enough to constitute real database convergence and provide a valuable asset for business process intelligence. Balancing review workload across staff, exploring areas of risk in review capacity, process improvement, and presenting a clear and comprehensive landscape of review obligations are just some of the opportunities of such intelligence. This convergence has been occurring in the presence of the usual forces that tend to drive information technology (IT) systems development toward separate stovepipes and data silos. CBER has achieved a significant level of convergence through a gradual process, using a clear goal, agreed-upon development practices, and transparency of database objects, rather than through a single, discrete project or IT vendor solution. This approach offers a path forward for FDA systems toward a unified database.
filltex: Automatic queries to ADS and INSPIRE databases to fill LaTeX bibliography
NASA Astrophysics Data System (ADS)
Gerosa, Davide; Vallisneri, Michele
2017-05-01
filltex is a simple tool to fill LaTeX reference lists with records from the ADS and INSPIRE databases. ADS and INSPIRE are the most common databases used among the theoretical physics and astronomy scientific communities, respectively. filltex automatically looks for all citation labels present in a tex document and, by means of web scraping, downloads all the required citation records from either of the two databases. filltex significantly speeds up the LaTeX scientific writing workflow, as all required actions (compile the tex file, fill the bibliography, compile the bibliography, compile the tex file again) are automated in a single command. We also provide an integration of filltex for the macOS LaTeX editor TeXShop.
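The first step such a tool automates, collecting the citation keys a document uses, can be sketched in a few lines. This fragment is a simplification for illustration, not filltex's actual implementation.

```python
# Sketch of the first step a tool like filltex automates: collect the
# citation keys a .tex document uses, so that records can then be fetched
# from the ADS or INSPIRE web APIs.
import re

tex = r"""
Gravitational recoils \citep{Gerosa2016} can be large \cite{Peres1962,Bekenstein1973}.
"""

# Match \cite, \citep, \citet, ... with a comma-separated key list in braces.
keys = set()
for group in re.findall(r"\\cite[a-zA-Z]*\*?(?:\[[^\]]*\])*\{([^}]+)\}", tex):
    keys.update(k.strip() for k in group.split(","))

print(sorted(keys))  # ['Bekenstein1973', 'Gerosa2016', 'Peres1962']
# Each key would then drive one query to ADS or INSPIRE, and the returned
# BibTeX records would be appended to the .bib file before recompiling.
```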
Methods for structuring scientific knowledge from many areas related to aging research.
Zhavoronkov, Alex; Cantor, Charles R
2011-01-01
Aging and age-related disease represent a substantial share of current natural, social and behavioral science research efforts. Presently, no centralized system exists for tracking aging research projects across numerous research disciplines. The multidisciplinary nature of this research complicates the understanding of underlying project categories, the establishment of project relations, and the development of a unified project classification scheme. We have developed a highly visual database, the International Aging Research Portfolio (IARP), available at AgingPortfolio.org, to address this issue. The database integrates information on research grants, peer-reviewed publications, and issued patent applications from multiple sources. Additionally, the database uses flexible project classification mechanisms and tools for analyzing project associations and trends. This system enables scientists to search the centralized project database, to classify and categorize aging projects, and to analyze funding across multiple research disciplines. The IARP is designed to improve the allocation and prioritization of scarce research funding, to reduce project overlap and to improve scientific collaboration, thereby accelerating scientific and medical progress in a rapidly growing area of research. Grant applications often precede publications, and some grants do not result in publications; the system thus provides an earlier and broader view of research activity in many research disciplines. This project is a first attempt to provide a centralized database system for research grants and to categorize aging research projects into multiple subcategories, utilizing both advanced machine algorithms and a hierarchical environment for scientific collaboration.
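As a toy stand-in for the portfolio's classification mechanisms (whose actual algorithms are not described here), the sketch below assigns a grant abstract to a hierarchical category tree by keyword overlap; the categories and terms are invented.

```python
# Toy illustration of assigning research projects to a hierarchical category
# tree by keyword overlap -- a simple stand-in for the "advanced machine
# algorithms" the portfolio actually uses; categories and terms are invented.
CATEGORIES = {
    "aging/cellular/senescence": {"senescence", "telomere", "p16"},
    "aging/cellular/stem-cells": {"stem", "differentiation", "pluripotent"},
    "aging/social/caregiving":   {"caregiver", "burden", "family"},
}

def classify(abstract: str, top_n: int = 2):
    """Return the best-matching categories for a project abstract."""
    words = set(abstract.lower().split())
    scores = {cat: len(words & terms) for cat, terms in CATEGORIES.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [cat for cat, s in ranked[:top_n] if s > 0]

grant = "Telomere attrition and p16 expression in replicative senescence"
print(classify(grant))  # ['aging/cellular/senescence']
```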
Continual improvement: A bibliography with indexes, 1992-1993
NASA Technical Reports Server (NTRS)
1994-01-01
This bibliography lists 606 references to reports and journal articles entered into the NASA Scientific and Technical Information Database during 1992 to 1993. Topics cover the philosophy and history of Continual Improvement (CI), basic approaches and strategies for implementation, and lessons learned from public and private sector models. Entries are arranged according to the following categories: Leadership for Quality, Information and Analysis, Strategic Planning for CI, Human Resources Utilization, Management of Process Quality, Supplier Quality, Assessing Results, Customer Focus and Satisfaction, TQM Tools and Philosophies, and Applications. Indexes include subject, personal author, corporate source, contract number, report number, and accession number.
Pardo-Hernandez, Hector; Urrútia, Gerard; Barajas-Nava, Leticia A; Buitrago-Garcia, Diana; Garzón, Julieth Vanessa; Martínez-Zapata, María José; Bonfill, Xavier
2017-06-13
Systematic reviews provide the best evidence on the effect of health care interventions. They rely on comprehensive access to the available scientific literature. Electronic search strategies alone may not suffice, requiring the implementation of a handsearching approach. We have developed a database to provide an Internet-based platform from which handsearching activities can be coordinated, including a procedure to streamline the submission of these references into CENTRAL, the Cochrane Collaboration Central Register of Controlled Trials. We developed a database and a descriptive analysis. Through brainstorming and discussion among stakeholders involved in handsearching projects, we designed a database that met the needs identified as necessary to ensure the viability of handsearching activities. Three handsearching teams pilot tested the proposed database. Once the final version of the database was approved, we proceeded to train the staff involved in handsearching. The proposed database is called BADERI (Database of Iberoamerican Clinical Trials and Journals, by its initials in Spanish). BADERI was officially launched in October 2015, and it can be accessed at www.baderi.com/login.php free of cost. BADERI has an administration subsection, from which the roles of users are managed; a references subsection, where information associated with identified controlled clinical trials (CCTs) can be entered; a reports subsection, from which reports can be generated to track and analyse the results of handsearching activities; and a built-in free-text search engine. BADERI allows all references to be exported in ProCite files that can be directly uploaded into CENTRAL. To date, 6284 references to CCTs have been uploaded to BADERI and sent to CENTRAL. The identified CCTs were published in a total of 420 journals related to 46 medical specialties. The year of publication ranged between 1957 and 2016. BADERI allows the efficient management of handsearching activities across different countries and institutions. References to all CCTs available in BADERI can be readily submitted to CENTRAL for their potential inclusion in systematic reviews.
Autonomous mission planning and scheduling: Innovative, integrated, responsive
NASA Technical Reports Server (NTRS)
Sary, Charisse; Liu, Simon; Hull, Larry; Davis, Randy
1994-01-01
Autonomous mission scheduling, a new concept for NASA ground data systems, is a decentralized and distributed approach to scientific spacecraft planning, scheduling, and command management. Systems and services are provided that enable investigators to operate their own instruments. In autonomous mission scheduling, separate nodes exist for each instrument and one or more operations nodes exist for the spacecraft. Each node is responsible for its own operations which include planning, scheduling, and commanding; and for resolving conflicts with other nodes. One or more database servers accessible to all nodes enable each to share mission and science planning, scheduling, and commanding information. The architecture for autonomous mission scheduling is based upon a realistic mix of state-of-the-art and emerging technology and services, e.g., high performance individual workstations, high speed communications, client-server computing, and relational databases. The concept is particularly suited to the smaller, less complex missions of the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Frank T. Alex
2007-02-11
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-eighth month of development activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Charles G. Crawford
2006-02-11
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase One includes the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. Phase Two, which is currently underway, involves the development of a platform for on-line data analysis. Phase Two includes the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its forty-second month of development activities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson P. Khosah; Charles G. Crawford
Advanced Technology Systems, Inc. (ATS) was contracted by the U. S. Department of Energy's National Energy Technology Laboratory (DOE-NETL) to develop a state-of-the-art, scalable and robust web-accessible database application to manage the extensive data sets resulting from the DOE-NETL-sponsored ambient air monitoring programs in the upper Ohio River valley region. The data management system was designed to include a web-based user interface that will allow easy access to the data by the scientific community, policy- and decision-makers, and other interested stakeholders, while providing detailed information on sampling, analytical and quality control parameters. In addition, the system will provide graphical analytical tools for displaying, analyzing and interpreting the air quality data. The system will also provide multiple report generation capabilities and easy-to-understand visualization formats that can be utilized by the media and public outreach/educational institutions. The project is being conducted in two phases. Phase 1, which is currently in progress and will take twelve months to complete, will include the following tasks: (1) data inventory/benchmarking, including the establishment of an external stakeholder group; (2) development of a data management system; (3) population of the database; (4) development of a web-based data retrieval system, and (5) establishment of an internal quality assurance/quality control system on data management. In Phase 2, which will be completed in the second year of the project, a platform for on-line data analysis will be developed. Phase 2 will include the following tasks: (1) development of a sponsor and stakeholder/user website with extensive online analytical tools; (2) development of a public website; (3) incorporation of an extensive online help system into each website; and (4) incorporation of a graphical representation (mapping) system into each website. The project is now into its eleventh month of Phase 1 development activities.
Information System through ANIS at CeSAM
NASA Astrophysics Data System (ADS)
Moreau, C.; Agneray, F.; Gimenez, S.
2015-09-01
ANIS (AstroNomical Information System) is a generic web tool developed at CeSAM to facilitate and standardize the implementation of astronomical data of various kinds through private and/or public dedicated Information Systems. The architecture of ANIS is composed of a database server which contains the project data; a web user interface template which provides high-level services (search, extraction and display of imaging and spectroscopic data using a combination of criteria, an object list, an SQL query module or a cone-search interface); a framework composed of several packages; and a metadata database managed by a web administration entity. The process of implementing a new ANIS instance at CeSAM is easy and fast: the scientific project has to submit data or secure access to the data, the CeSAM team installs the new instance (web interface template and metadata database), and the project administrator can configure the instance with the web ANIS administration entity. Currently, CeSAM offers through ANIS web access to VO-compliant Information Systems for different projects (HeDaM, HST-COSMOS, CFHTLS-ZPhots, ExoDAT, ...).
Efficient data management tools for the heterogeneous big data warehouse
NASA Astrophysics Data System (ADS)
Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.
2016-09-01
The traditional RDBMS is built around normalized data structures and has served well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields like social networks, the oil and gas industry, experiments at the Large Hadron Collider, etc. Several challenges have recently been raised about the scalability of data-warehouse-like workloads against transactional schemas, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied and large amounts of data. The evaluation considers the performance, throughput and scalability of these technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as describing the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the organization of the NoSQL database, and how it could be useful for further data analytics.
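The core of such a migration, denormalizing joined relational rows into self-contained documents, can be sketched as follows. Table and field names are invented, and the final MongoDB insert (pymongo) is commented out since it assumes a reachable server.

```python
# Sketch of the core of an RDBMS-to-NoSQL migration: denormalize joined
# relational rows into self-contained documents suited to a document store.
import sqlite3, json

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE task  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE event (task_id INTEGER, ts TEXT, status TEXT);
INSERT INTO task  VALUES (1, 'transfer-dataset-42');
INSERT INTO event VALUES (1, '2016-01-01T10:00', 'queued'),
                         (1, '2016-01-01T10:05', 'done');
""")

documents = []
for task_id, name in con.execute("SELECT id, name FROM task"):
    # Fold each task's child rows into a nested list inside one document.
    events = [{"ts": ts, "status": st}
              for ts, st in con.execute(
                  "SELECT ts, status FROM event WHERE task_id = ?", (task_id,))]
    documents.append({"_id": task_id, "name": name, "events": events})

print(json.dumps(documents, indent=2))
# from pymongo import MongoClient
# MongoClient("mongodb://localhost:27017").warehouse.tasks.insert_many(documents)
```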
Lira-Noriega, Andrés; Soberón, Jorge
2015-09-01
At a global level, the relationship between biodiversity importance and the capacity to manage it is often assumed to be negative, without much differentiation among the more than 200 countries and territories of the world. We examine this relationship using a database including terrestrial biodiversity, wealth and governance indicators for most countries. From these, principal components analysis was used to construct aggregated indicators at global and regional scales. Wealth, governance, and scientific capacity represent different skills and abilities in relation to biodiversity importance. Our results show that the relationship between biodiversity and the different factors is not simple: in most regions wealth and capacity vary positively with biodiversity, while governance varies negatively with biodiversity. However, these trends are, to a certain extent, concentrated in certain groups of nations and outlier countries. We discuss our results in the context of collaboration and joint efforts among biodiversity-rich countries and foreign agencies.
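A minimal sketch of the aggregation step, building a single indicator from several standardized country variables with PCA, is shown below; the four variables and their values are invented for illustration.

```python
# Minimal sketch of building an aggregated country indicator with PCA, in the
# spirit of the study's method; the variables and values are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# rows: countries; columns: e.g. GDP per capita, governance score,
# publications per capita, protected-area share
X = np.array([
    [45.0,  1.5, 3.2, 0.12],
    [ 5.0, -0.7, 0.4, 0.30],
    [12.0,  0.1, 1.1, 0.22],
    [30.0,  1.0, 2.5, 0.15],
])

Z = StandardScaler().fit_transform(X)       # put variables on a common scale
pca = PCA(n_components=1)
indicator = pca.fit_transform(Z).ravel()    # first component as the aggregate

print("loadings:", np.round(pca.components_[0], 2))
print("indicator per country:", np.round(indicator, 2))
```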
Scientific evidence on perineal trauma during labor: Integrative review.
Vieira, Flaviana; Guimarães, Janaina V; Souza, Marcia C S; Sousa, Poliana M L; Santos, Rafaela F; Cavalcante, Agueda M R Z
2018-04-01
To assess the scientific evidence for management and preservation of perineal integrity during the expulsive stage of labor. Integrative review that employed the Population, Intervention, Comparison, Outcome strategy to formulate the research question: Which perineal measure(s) is(are) effective in maintaining perineal integrity during labor? The search was performed in the databases MEDLINE, LILACS, BDENF and SciELO. The ten selected studies were analyzed based on their level of evidence and grade of recommendation. Four categories of measures were located: antenatal perineal care, perineal massage during the expulsive phase of labor, manual perineal support during the expulsive phase of labor and perineal hyaluronidase injection. Based on its level of evidence, perineal massage with lubricants performed by the women or their partners at the end of pregnancy may be recommended as a measure favorable for perineal protection. Copyright © 2018 Elsevier B.V. All rights reserved.
Business management in the information age
NASA Astrophysics Data System (ADS)
Misawa, Chiyoji
This is the record of the Special Lecture at the 25th Annual Meeting on Information Science and Technology. In the first half, the author describes, based on his own experience, how managers should collect and utilize scientific and technical information. He says they should visit the source organization themselves when they find interesting information, and that to make good use of such information they need to be well acquainted with the state of their own facilities and technology. In the second half, the development of Japanese industry and technology over the past 40 years is reviewed from a historical point of view, and he expects databases to be utilized to promote research and development so that the country can secure new energy resources.
Mathematical Notation in Bibliographic Databases.
ERIC Educational Resources Information Center
Pasterczyk, Catherine E.
1990-01-01
Discusses ways in which using mathematical symbols to search online bibliographic databases in scientific and technical areas can improve search results. The representations used for Greek letters, relations, binary operators, arrows, and miscellaneous special symbols in the MathSci, Inspec, Compendex, and Chemical Abstracts databases are…
Marino, Bradley S; Lipkin, Paul H; Newburger, Jane W; Peacock, Georgina; Gerdes, Marsha; Gaynor, J William; Mussatto, Kathleen A; Uzark, Karen; Goldberg, Caren S; Johnson, Walter H; Li, Jennifer; Smith, Sabrina E; Bellinger, David C; Mahle, William T
2012-08-28
The goal of this statement was to review the available literature on surveillance, screening, evaluation, and management strategies and put forward a scientific statement that would comprehensively review the literature and create recommendations to optimize neurodevelopmental outcome in the pediatric congenital heart disease (CHD) population. A writing group appointed by the American Heart Association and American Academy of Pediatrics reviewed the available literature addressing developmental disorder and disability and developmental delay in the CHD population, with specific attention given to surveillance, screening, evaluation, and management strategies. MEDLINE and Google Scholar database searches from 1966 to 2011 were performed for English-language articles cross-referencing CHD with pertinent search terms. The reference lists of identified articles were also searched. The American College of Cardiology/American Heart Association classification of recommendations and levels of evidence for practice guidelines were used. A management algorithm was devised that stratified children with CHD on the basis of established risk factors. For those deemed to be at high risk for developmental disorder or disabilities or for developmental delay, formal, periodic developmental and medical evaluations are recommended. A CHD algorithm for surveillance, screening, evaluation, reevaluation, and management of developmental disorder or disability has been constructed to serve as a supplement to the 2006 American Academy of Pediatrics statement on developmental surveillance and screening. The proposed algorithm is designed to be carried out within the context of the medical home. This scientific statement is meant for medical providers within the medical home who care for patients with CHD. Children with CHD are at increased risk of developmental disorder or disabilities or developmental delay. Periodic developmental surveillance, screening, evaluation, and reevaluation throughout childhood may enhance identification of significant deficits, allowing for appropriate therapies and education to enhance later academic, behavioral, psychosocial, and adaptive functioning.
Applications of GIS and database technologies to manage a Karst Feature Database
Gao, Y.; Tipping, R.G.; Alexander, E.C.
2006-01-01
This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in both GIS and a Database Management System (DBMS), have been developed for the KFD of Minnesota. These applications were used to manage and to enhance the usability of the KFD. Structured Query Language (SQL) was used to manipulate transactions of the database and to facilitate the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology of designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst, and the long-term goal is to expand this database to manage and study karst features at national and global scales.
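The two practices the paper pairs, transactional SQL updates and data logs for consistency and recovery, can be sketched as below. This is illustrative only, not the Minnesota KFD code.

```python
# Illustrative sketch (not the Minnesota KFD code) of two practices the
# paper mentions: wrapping edits in transactions and keeping a data log so
# changes can be audited and the database recovered.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE karst_feature (id INTEGER PRIMARY KEY, kind TEXT, county TEXT);
CREATE TABLE change_log   (ts TEXT DEFAULT CURRENT_TIMESTAMP, action TEXT, detail TEXT);
""")

def add_feature(kind: str, county: str):
    try:
        with con:  # one transaction: both inserts commit, or neither does
            cur = con.execute(
                "INSERT INTO karst_feature (kind, county) VALUES (?, ?)", (kind, county))
            con.execute(
                "INSERT INTO change_log (action, detail) VALUES ('insert', ?)",
                (f"feature {cur.lastrowid}: {kind} in {county}",))
    except sqlite3.Error as exc:
        print("rolled back:", exc)

add_feature("sinkhole", "Fillmore")
print(con.execute("SELECT action, detail FROM change_log").fetchall())
```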
Alijani, Rahim
2015-01-01
In recent years emphasis has been placed on evaluation studies and the publication of scientific papers in national and international journals. In this regard the publication of scientific papers in journals indexed in the Institute for Scientific Information (ISI) database is highly recommended. The evaluation of scientific output via articles in journals indexed in the ISI database will enable the Iranian research authorities to allocate and organize research budgets and human resources in a way that maximises efficient science production. The purpose of the present paper is to provide a general and valid view of science production in the field of stem cells. In this research, outputs in the field of stem cell research are evaluated using the method of science assessment called scientometrics. A total of 1528 documents was extracted from the ISI database and analysed using descriptive statistics in Excel. The results of this research showed that 1528 papers in the stem cell field in the Web of Knowledge database were produced by Iranian researchers. The top ten Iranian researchers in this field produced 936 of these papers, equivalent to 61.3% of the total. Among the top ten, Soleimani M. occupies first place with 181 papers. Regarding international scientific participation, Iranian researchers have cooperated with researchers from 50 countries to publish papers. Nearly 32% (452 papers) of the total research output in this field has been published in the top 10 journals. These results show that a small number of researchers have published the majority of papers in the stem cell field. International participation in this field of research is unacceptably low. Such participation provides the opportunity to import modern science and international experience into Iran; this not only fosters scientific growth, but also improves research and enhances opportunities for employment and professional development. Iranian scientific output from stem cell research should not be limited to only a few specific journals.
Khorrami, F; Ahmadi, M; Alizadeh, A; Roozbeh, N; Mohseni, S
2015-01-01
Introduction: Given the ever-increasing importance and value of information, providing management with a reliable information system, which can facilitate decision-making regarding planning, organization and control, is vitally important. This study aimed to analyze and evaluate the information needs of medical equipment offices. Methods: This descriptive applied cross-sectional study was carried out in 2010. The population of the study included the managers of statistics and medical records at the offices of the vice-chancellor for treatment in 39 medical universities in Iran. Data were collected using structured questionnaires. With regard to the different kinds of information system design, sampling was done by two methods: BSP (based on processes of job description) and the CSF method (based on critical success factors). The data were analyzed with SPSS-16. Results: Our study showed that 41% of information needs were critical success factors of the office managers. The first priority of managers was "the number of beds and bed occupancy in hospitals". Of 29 identified information needs, 62% were initial information needs of managers (from the viewpoint of the managers). Of all, 4% of the information needs were obtained through forms, 14% through both forms and databases, 11% through the web site, and 71% had no sources (forms, databases, web site). Conclusion: Since 71% of the information needs of medical equipment office managers had no information source, the development of information systems in these offices seems necessary. Despite the important role of users in designing information systems (identifying 62% of information needs), other scientific methods also need to be utilized in designing these systems.
The ESIS query environment pilot project
NASA Technical Reports Server (NTRS)
Fuchs, Jens J.; Ciarlo, Alessandro; Benso, Stefano
1993-01-01
The European Space Information System (ESIS) was originally conceived to provide the European space science community with simple and efficient access to space data archives, facilities with which to examine and analyze the retrieved data, and general information services. To achieve that, ESIS will provide scientists with a discipline-specific environment for querying, in a uniform and transparent manner, data stored in geographically dispersed archives, as well as discipline-specific tools for displaying and analyzing the retrieved data. The central concept of ESIS is to achieve a more efficient and wider usage of space scientific data, while maintaining the physical archives at the institutions which created them and have the best background for ensuring and maintaining the scientific validity and interest of the data. In addition to coping with the physical distribution of data, ESIS must also manage the heterogeneity of the individual archives' data models, formats and database management systems. Thus the ESIS system shall appear to the user as a single database, while it in fact consists of a collection of dispersed and locally managed databases and data archives. The work reported in this paper is one of the results of the ESIS Pilot Project, which is to be completed in 1993. More specifically, it presents the pilot ESIS Query Environment (ESIS QE) system, which forms the data retrieval and data dissemination axis of the ESIS system; the other axes are formed by the ESIS Correlation Environment (ESIS CE) and the ESIS Information Services. The ESIS QE Pilot Project is carried out for the European Space Agency's research and information centre, ESRIN, by a consortium consisting of Computer Resources International, Denmark; CISET S.p.a., Italy; the University of Strasbourg, France; and the Rutherford Appleton Laboratories in the UK. Furthermore, numerous scientists within ESA and the European space science community have been involved in defining the core concepts of the ESIS system.
Negative Effects of Learning Spreadsheet Management on Learning Database Management
ERIC Educational Resources Information Center
Vágner, Anikó; Zsakó, László
2015-01-01
A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…
IRIS Toxicological Review of Methanol (Non-Cancer) ...
EPA is conducting a peer review and public comment of the scientific basis supporting the human health hazard and dose-response assessment of methanol (non-cancer) that, when finalized, will appear on the Integrated Risk Information System (IRIS) database.
A database application for wilderness character monitoring
Ashley Adams; Peter Landres; Simon Kingston
2012-01-01
The National Park Service (NPS) Wilderness Stewardship Division, in collaboration with the Aldo Leopold Wilderness Research Institute and the NPS Inventory and Monitoring Program, developed a database application to facilitate tracking and trend reporting in wilderness character. The Wilderness Character Monitoring Database allows consistent, scientifically based...
Monitoring outcomes with relational databases: does it improve quality of care?
Clemmer, Terry P
2004-12-01
There are 3 key ingredients in improving quality of medical care: 1) using a scientific process of improvement, 2) executing the process at the lowest possible level in the organization, and 3) measuring the results of any change reliably. Relational databases, when used within these guidelines, are of great value in these efforts if they contain reliable information that is pertinent to the project and are used in a scientific process of quality improvement by a front-line team. Unfortunately, the data are frequently unreliable and/or not pertinent to the local process, and are used by persons at very high levels in the organization without a scientific process and without reliable measurement of the outcome. Under these circumstances the effectiveness of relational databases in improving care is marginal at best, frequently wasteful, and potentially harmful. This article explores examples of these concepts.
Packer, Abel Laerte; Tardelli, Adalberto Otranto; Castro, Regina Célia Figueiredo
2007-01-01
This study explores the distribution of international, regional and national scientific output in health information and communication, indexed in the MEDLINE and LILACS databases, between 1996 and 2005. A selection of articles was based on the hierarchical structure of Information Science in MeSH vocabulary. Four specific domains were determined: health information, medical informatics, scientific communications on healthcare and healthcare communications. The variables analyzed were: most-covered subjects and journals, author affiliation and publication countries and languages, in both databases. The Information Science category is represented in nearly 5% of MEDLINE and LILACS articles. The four domains under analysis showed a relative annual increase in MEDLINE. The Medical Informatics domain showed the highest number of records in MEDLINE, representing about half of all indexed articles. The importance of Information Science as a whole is more visible in publications from developed countries and the findings indicate the predominance of the United States, with significant growth in scientific output from China and South Korea and, to a lesser extent, Brazil.
Salguero, E; González de Dios, J; García del Rio, M; Sánchez Díaz, F
2005-10-01
Congenital diaphragmatic hernia (CDH) is one of the high-risk diseases in neonatal surgery. The aim of this article is to provide an update on the controversies in the therapeutic management of CDH (timing of surgery and modalities of medical stabilization) by means of a systematic and critical review of the best scientific evidence in the literature. Systematic and structured review of the articles on the therapeutic management of CDH (surgery, mechanical ventilation, inhaled nitric oxide, extracorporeal membrane oxygenation, surfactant, etc.) published in secondary sources (TRIP database, systematic reviews of the Cochrane Collaboration, clinical practice guidelines, health technology assessment databases, etc.) and primary sources (bibliographic databases, biomedical journals, books, etc.), with critical appraisal following the methodology of the Evidence-Based Medicine Working Group. We selected the publications offering the strongest scientific evidence for therapeutic questions (clinical trials, systematic reviews, meta-analyses and clinical practice guidelines). The main secondary information is found in The Cochrane Library: 3 systematic reviews in the Neonatal Group (one specifically on the timing of surgery, and two on the use of nitric oxide and extracorporeal membrane oxygenation in severe neonatal respiratory failure). The most relevant articles, however, were found in the PubMed database, mainly published in the Journal of Pediatric Surgery and by a few clusters of investigation (the Congenital Diaphragmatic Hernia Study Group at Texas University and the Buffalo Institute of Fetal Therapy at New York University). From the evidence-based analysis, the results comparing immediate versus delayed surgery were unclear, but delayed surgery (with preoperative stabilization) has become the preferred approach in many centers, and foetal surgery is not better than neonatal surgery. Opinion regarding the timing of surgery has gradually shifted from early repair to a policy of stabilization and delayed repair. Because of the persistent pulmonary hypertension and/or pulmonary hypoplasia associated with CDH, medical therapy is focused on optimizing oxygenation while avoiding barotrauma, using gentle ventilation and permissive hypercarbia. High-frequency oscillatory ventilation, inhaled nitric oxide and extracorporeal membrane oxygenation are used in severe cases, but these treatments do not clearly improve the outcome in neonates with CDH. The usefulness of surfactant and partial liquid ventilation is based on animal-model experimentation, because the clinical trials in newborns are few and inconclusive. Challenges for the future in this area include the need for bigger and better trials of therapy in this field, with long-term outcomes among surviving children.
A National Virtual Specimen Database for Early Cancer Detection
NASA Technical Reports Server (NTRS)
Crichton, Daniel; Kincaid, Heather; Kelly, Sean; Thornquist, Mark; Johnsey, Donald; Winget, Marcy
2003-01-01
Access to biospecimens is essential for enabling cancer biomarker discovery. The National Cancer Institute's (NCI) Early Detection Research Network (EDRN) comprises and integrates a large number of laboratories into a network in order to establish a collaborative scientific environment to discover and validate disease markers. The diversity of both the institutions and the collaborative focus has created the need for cross-disciplinary teams that integrate expertise in biomedical research, computation and biostatistics, and computer science. Given the collaborative design of the network, the EDRN needed an informatics infrastructure. The Fred Hutchinson Cancer Research Center, the National Cancer Institute, and NASA's Jet Propulsion Laboratory (JPL) teamed up to build an informatics infrastructure creating a collaborative, science-driven research environment despite the geographic and structural differences of the information systems that existed within the diverse network. EDRN investigators identified the need to share biospecimen data captured across the country and managed in disparate databases. As a result, the informatics team initiated an effort to create a virtual tissue database whereby scientists could search and locate details about specimens located at collaborating laboratories. Each database, however, was locally implemented and integrated into collection processes and methods unique to each institution. This meant that efforts to integrate databases needed to be done in a manner that did not require redesign or re-implementation of existing systems.
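The "virtual database" pattern described here, one query fanned out to independently managed site databases with the answers merged into a single result set, can be sketched conceptually as follows; the site adapters are stubs standing in for real institutional systems.

```python
# Conceptual sketch of the "virtual database" pattern: fan a query out to
# independently managed site databases and merge the answers. The adapters
# are stubs; real ones would translate to each institution's local schema.
def site_a(query):   # stub for one collaborating laboratory
    data = [{"specimen": "A-001", "organ": "lung", "site": "Lab A"}]
    return [r for r in data if r["organ"] == query["organ"]]

def site_b(query):   # stub for another laboratory with its own storage
    data = [{"specimen": "B-117", "organ": "lung", "site": "Lab B"},
            {"specimen": "B-204", "organ": "colon", "site": "Lab B"}]
    return [r for r in data if r["organ"] == query["organ"]]

ADAPTERS = [site_a, site_b]

def virtual_search(query):
    """Merge per-site hits; local schemas stay local, only the mapping is shared."""
    hits = []
    for adapter in ADAPTERS:
        hits.extend(adapter(query))
    return hits

print(virtual_search({"organ": "lung"}))
# Two hits, one from each site, presented as a single result set.
```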
Barbara, Angela M; Dobbins, Maureen; Brian Haynes, R; Iorio, Alfonso; Lavis, John N; Raina, Parminder; Levinson, Anthony J
2017-07-11
The objective of this work was to provide easy access to reliable health information, based on good-quality research, that will help health care professionals learn what works best for seniors to stay as healthy as possible, manage health conditions and build supportive health systems. This will help meet the demands of our aging population: that clinicians provide high-quality care for older adults, that public health professionals deliver disease prevention and health promotion strategies across the life span, and that policymakers address the economic and social need to create a robust health system and a healthy society for all ages. The McMaster Optimal Aging Portal's (Portal) professional bibliographic database contains high-quality scientific evidence about optimal aging specifically targeted to clinicians, public health professionals and policymakers. The database content comes from three information services: McMaster Premium LiteratUre Service (MacPLUS™), Health Evidence™ and Health Systems Evidence. The Portal is continually updated, freely accessible online, easily searchable, and provides email-based alerts when new records are added. The database is being continually assessed for value, usability and use. A number of improvements are planned, including French-language translation of content, increased linkage between related records within the Portal database, and inclusion of additional types of content. While this article focuses on the professional database, the Portal also houses resources for patients, caregivers and the general public, which may also be of interest to geriatric practitioners and researchers.
Nuclear science abstracts (NSA) database 1948--1974 (on the Internet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Nuclear Science Abstracts (NSA) is a comprehensive abstract and index collection of the International Nuclear Science and Technology literature for the period 1948 through 1976. Included are scientific and technical reports of the US Atomic Energy Commission, US Energy Research and Development Administration and its contractors, other agencies, universities, and industrial and research organizations. Coverage of the literature since 1976 is provided by the Energy Science and Technology Database. Approximately 25% of the records in the file contain abstracts. These are from the following volumes of the print Nuclear Science Abstracts: Volumes 12--18, Volume 29, and Volume 33. The database contains over 900,000 bibliographic records. All aspects of nuclear science and technology are covered, including: Biomedical Sciences; Metals, Ceramics, and Other Materials; Chemistry; Nuclear Materials and Waste Management; Environmental and Earth Sciences; Particle Accelerators; Engineering; Physics; Fusion Energy; Radiation Effects; Instrumentation; Reactor Technology; Isotope and Radiation Source Technology. The database includes all records contained in Volume 1 (1948) through Volume 33 (1976) of the printed version of Nuclear Science Abstracts (NSA). This worldwide coverage includes books, conference proceedings, papers, patents, dissertations, engineering drawings, and journal literature. This database is now available for searching through the GOV. Research Center (GRC) service. GRC is a single online web-based search service to well known Government databases. Featuring powerful search and retrieval software, GRC is an important research tool. The GRC web site is at http://grc.ntis.gov.
Differences among Major Taxa in the Extent of Ecological Knowledge across Four Major Ecosystems
Fisher, Rebecca; Knowlton, Nancy; Brainard, Russell E.; Caley, M. Julian
2011-01-01
Existing knowledge shapes our understanding of ecosystems and is critical for ecosystem-based management of the world's natural resources. Typically this knowledge is biased among taxa, with some taxa far better studied than others, but the extent of this bias is poorly known. In conjunction with the publicly available World Register of Marine Species database (WoRMS) and one of the world's premier electronic scientific literature databases (Web of Science®), a text-mining approach is used to examine the distribution of existing ecological knowledge among taxa in coral reef, mangrove, seagrass and kelp bed ecosystems. We found that for each of these ecosystems, most research has been limited to a few groups of organisms. While this bias clearly reflects the perceived importance of some taxa as commercially or ecologically valuable, the relative lack of research on other taxonomic groups highlights the problem that some key taxa, and the ecosystem processes they affect, may be poorly understood or completely ignored. The approach outlined here could be applied to any type of ecosystem for analyzing previous research effort and identifying knowledge gaps in order to improve ecosystem-based conservation and management. PMID:22073172
Applications of Precipitation Feature Databases from GPM core and constellation Satellites
NASA Astrophysics Data System (ADS)
Liu, C.
2017-12-01
Using observations from the Global Precipitation Measurement (GPM) mission core and constellation satellites, global precipitation was quantitatively described from the perspective of precipitation systems and their properties. This presentation will introduce the development of precipitation feature databases and several scientific questions that have been tackled using this database, including the topics of global snow precipitation, extremely intense convection, hail storms, extreme precipitation, and microphysical properties derived with dual-frequency radars at the top of convective cores. As more and more observations from constellation satellites become available, it is anticipated that the precipitation feature approach will help to address a large variety of scientific questions in the future. For anyone who is interested, all the current precipitation feature databases are freely open to the public at: http://atmos.tamucc.edu/trmm/.
Indicators of healthy work environments--a systematic review.
Lindberg, Per; Vingård, Eva
2012-01-01
The purpose of this study was to systematically review the scientific literature in search of indicators of healthy work environments. A number of major national and international databases of scientific publications were searched for research addressing indicators of healthy work environments. Altogether 19,768 publications were found. After excluding duplicates, non-relevant publications, and publications that did not comply with the inclusion criteria, 24 peer-reviewed publications remained for inclusion in this systematic review. Only one study explicitly addressing indicators of healthy work environments was found. That study suggested that the presence of stress management programs in an organization might serve as an indicator of a 'good place to work', as these organizations were more likely to offer programs that encouraged employee well-being, safety and skill development than those without stress management programs. The other 23 studies either investigated employees' views of what constitutes a healthy workplace or were guidelines for how to create such a workplace. In summary, the nine most pronounced factors considered important for a healthy workplace that emerged from these studies were, in descending order: collaboration/teamwork; growth and development of the individual; recognition; employee involvement; a positive, accessible and fair leader; autonomy and empowerment; appropriate staffing; skilled communication; and safe physical work.
Surgery and pleuro-pulmonary tuberculosis: a scientific literature review
Subotic, Dragan; Yablonskiy, Piotr; Sulis, Giorgia; Cordos, Ioan; Petrov, Danail; Centis, Rosella; D’Ambrosio, Lia; Sotgiu, Giovanni
2016-01-01
Tuberculosis (TB) is still a major public health concern, mostly affecting resource-constrained settings and marginalized populations. The fight against the disease is hindered by the growing emergence of drug-resistant forms whose management can be rather challenging. Surgery may play an important role in supporting the diagnosis and treatment of the most complex cases and improving their therapeutic outcome. We conducted a non-systematic review of the literature based on relevant keywords in the PubMed database. Papers in English and Russian were included. The search was focused on five main areas of intervention, as follows: (I) diagnosis of complicated cases; (II) elimination of contagious persisting cavities, despite appropriate chemotherapy; (III) treatment of destroyed lung; (IV) resection of tuberculomas; (V) treatment of tuberculous pleural empyema. Although specific practical guidelines concerning surgical indications and approaches are currently unavailable, a summary of the evidence that emerged from the scientific literature was elaborated to help the clinician in the management of severely compromised TB patients. The decision to proceed to surgery is usually individualized, and a careful assessment of the patient's risk profile is always recommended before performing any procedure in addition to appropriate chemotherapy. PMID:27499980
Arens-Volland, Andreas G; Spassova, Lübomira; Bohn, Torsten
2015-12-01
The aim of this review was to analyze computer-based tools for dietary management (including web-based and mobile devices) from both scientific and applied perspectives, presenting advantages and disadvantages as well as the state of validation. For this cross-sectional analysis, scientific results from 41 articles retrieved via a MEDLINE search as well as 29 applications from online markets were identified and analyzed. Results show that many approaches computerize well-established existing nutritional concepts for dietary assessment, e.g., food frequency questionnaires (FFQ) or dietary recalls (DR). Both food records and barcode scanning are less prominent in research but are frequently offered by commercial applications. Integration with a personal health record (PHR) or a health care workflow is suggested in the literature but is rarely found in mobile applications. It is expected that food records will be increasingly employed for dietary assessment in research settings once simpler interfaces, e.g., barcode scanning techniques, and comprehensive food databases are applied; these can also support user adherence to dietary interventions and follow-up phases of nutritional studies. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Extending GIS Technology to Study Karst Features of Southeastern Minnesota
NASA Astrophysics Data System (ADS)
Gao, Y.; Tipping, R. G.; Alexander, E. C.; Alexander, S. C.
2001-12-01
This paper summarizes ongoing research on the karst feature distribution of southeastern Minnesota. The main goals of this interdisciplinary research are: 1) to look for large-scale patterns in the rate and distribution of sinkhole development; 2) to conduct statistical tests of hypotheses about the formation of sinkholes; 3) to create management tools for land-use managers and planners; and 4) to deliver geomorphic and hydrogeologic criteria for making scientifically valid land-use policies and ethical decisions in karst areas of southeastern Minnesota. Existing county and sub-county karst feature datasets of southeastern Minnesota have been assembled into a large GIS-based database capable of analyzing the entire data set. The central database management system (DBMS) is a relational GIS-based system interacting with three modules: GIS, statistical and hydrogeologic. ArcInfo and ArcView were used to generate a series of 2D and 3D maps depicting karst feature distributions in southeastern Minnesota. IRIS Explorer™ was used to produce high-quality 3D maps and animations using data exported from the GIS-based database. Nearest-neighbor analysis has been used to test sinkhole distributions in different topographic and geologic settings. All current nearest-neighbor analyses indicate that sinkholes in southeastern Minnesota are not evenly distributed in this area (i.e., they tend to be clustered). More detailed statistical methods such as cluster analysis, histograms, probability estimation, correlation and regression have been used to study the spatial distributions of some mapped karst features of southeastern Minnesota. A sinkhole probability map for Goodhue County has been constructed based on sinkhole distribution, bedrock geology, depth to bedrock, GIS buffer analysis and nearest-neighbor analysis. A series of karst features for Winona County, including sinkholes, springs, seeps, stream sinks and outcrops, has been mapped and entered into the Karst Feature Database of Southeastern Minnesota. The Karst Feature Database of Winona County is being expanded to include all the mapped karst features of southeastern Minnesota. Air photos from the 1930s to the 1990s of the Spring Valley Cavern Area in Fillmore County were scanned and geo-referenced into our GIS system. This technique has proven very useful for identifying sinkholes and studying the rate of sinkhole development.
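As a rough illustration of the nearest-neighbor test described above, the sketch below computes the Clark-Evans nearest-neighbor ratio for a set of sinkhole coordinates. The dataset, coordinate units and study-area value are hypothetical, and this is only one common nearest-neighbor statistic, not necessarily the authors' exact procedure.

```python
# A minimal sketch, assuming sinkhole locations as projected (x, y) coordinates
# in metres. A Clark-Evans ratio R < 1 suggests clustering; R ~ 1 suggests randomness.
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_ratio(points, area):
    """Return the Clark-Evans nearest-neighbor ratio for a point set."""
    tree = cKDTree(points)
    # k=2 because the nearest neighbor of each point is itself at distance 0.
    distances, _ = tree.query(points, k=2)
    observed = distances[:, 1].mean()
    density = len(points) / area
    expected = 0.5 / np.sqrt(density)  # mean NN distance under complete spatial randomness
    return observed / expected

# Hypothetical example: 500 sinkhole locations in a 10 km x 10 km county tile.
rng = np.random.default_rng(0)
sinkholes = rng.uniform(0, 10_000, size=(500, 2))
print(clark_evans_ratio(sinkholes, area=10_000 * 10_000))
```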
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-23
... Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, CT; Notice of Negative... Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, Connecticut (The Hartford, Corporate/EIT/CTO Database Management Division). The negative determination was issued on August 19, 2011...
The Vocational Guidance Research Database: A Scientometric Approach
ERIC Educational Resources Information Center
Flores-Buils, Raquel; Gil-Beltran, Jose Manuel; Caballer-Miedes, Antonio; Martinez-Martinez, Miguel Angel
2012-01-01
The scientometric study of scientific output through publications in specialized journals cannot be undertaken exclusively with the databases available today. For this reason, the objective of this article is to introduce the "Base de Datos de Investigacion en Orientacion Vocacional" [Vocational Guidance Research Database], based on the…
A design for a ground-based data management system
NASA Technical Reports Server (NTRS)
Lambird, Barbara A.; Lavine, David
1988-01-01
An initial design for a ground-based data management system which includes intelligent data abstraction and cataloging is described. The large quantity of data on some current and future NASA missions leads to significant problems in providing scientists with quick access to relevant data. Human screening of data for potential relevance to a particular study is time-consuming and costly. Intelligent databases can provide automatic screening when given relevant scientific parameters and constraints. The data management system would provide, at a minimum, information on the availability of the range of data, the type available, specific time periods covered together with data quality information, and related sources of data. The system would inform the user about the primary types of screening, analysis, and methods of presentation available. The system would then aid the user in performing the desired tasks, in such a way that the user need only specify the scientific parameters and objectives, and not worry about specific details for running a particular program. The design contains modules for data abstraction, catalog plan abstraction, a user-friendly interface, and expert systems for data handling, data evaluation, and application analysis. The emphasis is on developing general facilities for data representation, description, analysis, and presentation that will be easily used by scientists directly, thus bypassing the knowledge acquisition bottleneck. Expert system technology is used for many different aspects of the data management system, including the direct user interface, the interface to the data analysis routines, and the analysis of instrument status.
Unique DNA database has helped advance scientific discoveries worldwide. Since its origin 25 years ago, the database of nucleic acid sequences known as GenBank has ...
Pareja, Eduardo; Pareja-Tobes, Pablo; Manrique, Marina; Pareja-Tobes, Eduardo; Bonal, Javier; Tobes, Raquel
2006-01-01
Background: Transcriptional regulation processes are the principal mechanisms of adaptation in prokaryotes. In these processes, the regulatory proteins and the regulatory DNA signals located in extragenic regions are the key elements involved. As all extragenic spaces are putative regulatory regions, ExtraTrain covers all extragenic regions of available genomes and regulatory proteins from bacteria and archaea included in the UniProt database. Description: ExtraTrain provides integrated and easily manageable information for 679816 extragenic regions and for the genes delimiting each of them. In addition, ExtraTrain supplies a tool to explore extragenic regions, named Palinsight, oriented to detecting and searching for palindromic patterns. This interactive visual tool is totally integrated in the database, allowing the search for regulatory signals in user-defined sets of extragenic regions. The 26046 regulatory proteins included in ExtraTrain belong to the families AraC/XylS, ArsR, AsnC, Cold shock domain, CRP-FNR, DeoR, GntR, IclR, LacI, LuxR, LysR, MarR, MerR, NtrC/Fis, OmpR and TetR. The database follows the InterPro criteria to define these families. The information about regulators includes manually curated sets of references specifically associated with regulator entries. In order to achieve a sustainable and maintainable knowledge database, ExtraTrain is a platform open to the contribution of knowledge by the scientific community, providing a system for the incorporation of textual knowledge. Conclusion: ExtraTrain is a new database for exploring Extragenic regions and Transcriptional information in bacteria and archaea. The ExtraTrain database is available at . PMID:16539733
Roadblocks to Scientific Thinking in Educational Decision Making
ERIC Educational Resources Information Center
Yates, Gregory C. R.
2008-01-01
Principles of scientific data accumulation and evidence-based practices are vehicles of professional enhancement. In this article, the author argues that a scientific knowledge base exists descriptive of the relationship between teachers' activities and student learning. This database appears barely recognised however, for reasons including (a)…
Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W; Levias, Sheldon
2017-01-01
This study examines how two kinds of authentic research experiences related to smoking behavior, genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab), influence students' perceptions and understanding of scientific research and related science concepts. The study used pre- and post-surveys and a focus group protocol to compare students who conducted the research experiences in one of two sequences: genotyping before database, and database before genotyping. Students rated the genotyping experiment as more like real science than the database experiment, despite associating more scientific tasks with the database experience than with genotyping. Independent of the order in which the labs were completed, students showed gains in their understanding of science concepts after completing the two experiences. There was little change in students' attitudes toward science from pre to post, as measured by the Scientific Attitude Inventory II. However, on the basis of their responses during focus groups, students developed more sophisticated views about the practices and nature of science after they had completed both research experiences, independent of the order in which they experienced them. © 2017 M. Munn et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Auer, Jorg A; Goodship, Allen; Arnoczky, Steven; Pearce, Simon; Price, Jill; Claes, Lutz; von Rechenberg, Brigitte; Hofmann-Amtenbrinck, Margarethe; Schneider, Erich; Müller-Terpitz, R; Thiele, F; Rippe, Klaus-Peter; Grainger, David W
2007-08-01
In an attempt to establish some consensus on the proper use and design of experimental animal models in musculoskeletal research, AOVET (the veterinary specialty group of the AO Foundation), in concert with the AO Research Institute (ARI) and the European Academy for the Study of Scientific and Technological Advance, convened a group of musculoskeletal researchers, veterinarians, legal experts, and ethicists to discuss, in a frank and open forum, the use of animals in musculoskeletal research. The group narrowed the field to fracture research. The consensus opinion resulting from this workshop can be summarized as follows. Anaesthesia and pain management protocols for research animals should follow the standard protocols applied in clinical work for the species involved; this will improve morbidity and mortality outcomes. A database should be established to facilitate selection of anaesthesia and pain management protocols for specific experimental surgical procedures and adopted as an International Standard (IS) according to the animal species selected. A list of 10 golden rules and requirements for the conduct of animal experiments in musculoskeletal research was drawn up, comprising: 1) intelligent study designs to obtain appropriate answers; 2) minimal complication rates (5 to max. 10%); 3) defined end-points for both welfare and scientific outputs, analogous to the quality assessment (QA) audit of protocols in GLP studies; 4) sufficient detail on the materials and methods applied; 5) control of potentially confounding variables (genetic background, seasonal, hormonal, size, histological, and biomechanical differences); 6) post-operative management with emphasis on analgesia and follow-up examinations; 7) study protocols that satisfy the criteria established for a "justified animal study"; 8) surgical expertise to conduct surgery on animals; 9) pilot studies as a critical part of model validation and powering of the definitive study design; and 10) criteria for funding agencies to include requirements related to animal experiments as part of the overall scientific proposal review protocols. Such agencies are also encouraged to seriously consider and adopt the recommendations described here when awarding funds for specific projects. Specific new requirements and mandates, related both to improving the welfare and to the scientific rigour of animal-based research models, are urgently needed as part of the international harmonization of standards.
Patterson, David A.; Cooke, Steven J.; Hinch, Scott G.; Robinson, Kendra A.; Young, Nathan; Farrell, Anthony P.; Miller, Kristina M.
2016-01-01
The inability of physiologists to effect change in fisheries management has been the source of frustration for many decades. Close collaboration between fisheries managers and researchers has afforded our interdisciplinary team an unusual opportunity to evaluate the emerging impact that physiology can have in providing relevant and credible scientific advice to assist in management decisions. We categorize the quality of scientific advice given to management into five levels based on the type of scientific activity and resulting advice (notions, observations, descriptions, predictions and prescriptions). We argue that, ideally, both managers and researchers have concomitant but separate responsibilities for increasing the level of scientific advice provided. The responsibility of managers involves clear communication of management objectives to researchers, including exact descriptions of knowledge needs and researchable problems. The role of the researcher is to provide scientific advice based on the current state of scientific information and the level of integration with management. The examples of scientific advice discussed herein relate to physiological research on the impact of high discharge and water temperature, pathogens, sex and fisheries interactions on in-river migration success of adult Fraser River sockeye salmon (Oncorhynchus nerka) and the increased understanding and quality of scientific advice that emerges. We submit that success in increasing the quality of scientific advice is a function of political motivation linked to funding, legal clarity in management objectives, collaborative structures in government and academia, personal relationships, access to interdisciplinary experts and scientific peer acceptance. The major challenges with advancing scientific advice include uncertainty in results, lack of integration with management needs and institutional caution in adopting new research. We hope that conservation physiologists can learn from our experiences of providing scientific advice to management to increase the potential for this growing field of research to have a positive influence on resource management. PMID:27928508
Management of Usr-i-Tamth (Menstrual Pain) in Unani (Greco-Islamic) Medicine
Sultana, Arshiya; Lamatunoor, Syed; Begum, Mazherunnisa; Qhuddsia, Q. N.
2015-01-01
Usr-i-tamth in Unani (Greco-Arabic) medicine is pain associated with menstruation, and the classical manuscripts are rich in traditional knowledge for the management of usr-i-tamth (menstrual pain/dysmenorrhoea). Hence, a comprehensive search for classical manuscripts on the management of menstrual pain was undertaken. We also searched the Cochrane database, PubMed/Google Scholar, and other websites for articles on complementary and alternative medicine treatment and management of menstrual pain. The principal management, as per the Unani manuscripts, is to produce analgesia and to treat the causes of usr-i-tamth, such as abnormal temperament, menstrual irregularities/uterine diseases, and psychological and environmental factors. Furthermore, Unani medicines with emmenagogue, antispasmodic, anti-inflammatory, and analgesic properties are beneficial for the amelioration of usr-i-tamth. Herbs such as Apium graveolens, Cuminum cyminum, Foeniculum vulgare, Matricaria chamomilla and Nigella sativa possess the aforementioned properties and have been scientifically proven efficacious in usr-i-tamth. Thus, validation and conservation of this traditional knowledge is essential for prospective research and valuable for use in the contemporary era. PMID:26721552
SUMO: operation and maintenance management web tool for astronomical observatories
NASA Astrophysics Data System (ADS)
Mujica-Alvarez, Emma; Pérez-Calpena, Ana; García-Vargas, María. Luisa
2014-08-01
SUMO is an Operation and Maintenance Management web tool which allows managing the operation and maintenance activities and resources required for the exploitation of a complex facility. SUMO's main capabilities are: information repository, asset and stock control, task scheduling, an archive of executed tasks, configuration and anomaly control, notification, and user management. The information needed to operate and maintain the system must initially be stored in the tool's database. SUMO automatically schedules the periodic tasks and facilitates the searching and programming of the non-periodic tasks. Task planning can be visualized in different formats and dynamically edited to adjust to the available resources, anomalies, dates and other constraints that can arise during daily operation. SUMO provides warnings to the users, notifying potential conflicts related to the required personnel availability or the spare stock for the scheduled tasks. To conclude, SUMO has been designed as a tool to help during the operation management of a scientific facility, and in particular an astronomical observatory. This is done by controlling all operating parameters: personnel, assets, spare and supply stocks, tasks and time constraints.
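To make the scheduling idea concrete, here is a minimal sketch of expanding periodic maintenance tasks into a dated plan and warning about staffing conflicts. The class names, fields and example numbers are illustrative assumptions, not SUMO's actual data model.

```python
# A minimal sketch of periodic-task scheduling with a simple staffing check.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PeriodicTask:
    name: str
    period_days: int
    staff_needed: int

def schedule(tasks, start, end, staff_available):
    """Expand periodic tasks into dated occurrences and flag staffing conflicts."""
    plan = []
    for task in tasks:
        day = start
        while day <= end:
            plan.append((day, task))
            day += timedelta(days=task.period_days)
    plan.sort(key=lambda item: item[0])
    for day in sorted({d for d, _ in plan}):
        needed = sum(t.staff_needed for d, t in plan if d == day)
        if needed > staff_available:
            print(f"warning: {day} needs {needed} staff, only {staff_available} available")
    return plan

# Hypothetical observatory tasks:
tasks = [PeriodicTask("mirror recoating", 180, 3), PeriodicTask("cryostat refill", 7, 1)]
schedule(tasks, date(2014, 1, 1), date(2014, 3, 1), staff_available=2)
```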
Management of scientific information with Google Drive.
Kubaszewski, Łukasz; Kaczmarczyk, Jacek; Nowakowski, Andrzej
2013-09-20
The amount and diversity of scientific publications requires a modern management system. By "management" we mean the process of gathering interesting information for the purpose of reading and archiving for quick access in future clinical practice and research activity. In the past, such a system required the physical existence of a library, either institutional or private. Nowadays, in an era dominated by electronic information, it is natural to migrate entire systems to a digital form. In the following paper we describe the structure and functions of an individual electronic library system (IELiS) for the management of scientific publications based on the Google Drive service. System architecture: the system consists of a central element and peripheral devices. The central element is the virtual Google Drive storage provided by Google Inc. Physical elements of the system include a tablet with the Android operating system and a personal computer, both with internet access. Required software includes a program to view and edit files in PDF format for mobile devices and another to synchronize the files. Functioning of the system: the first step in creating the system is the collection of scientific papers in PDF format and their analysis. This step is performed most frequently on a tablet. At this stage, after being read, the papers are cataloged in a system of folders and subfolders according to individual demands. During this stage, but not exclusively, the PDF files are annotated by the reader. This allows the user to quickly track down interesting information in the review or research process. Modification of the document title is performed at this stage as well. The second element of the system is the creation of a mirror database in the Google Drive virtual memory. Modified and cataloged papers are synchronized with Google Drive. At this stage, a fully functional electronic library of scientific information becomes available online. The third element of the system is a periodic two-way synchronization of data between Google Drive and the tablet, as occasional modification of the files through annotation or recataloging may be performed at both locations. The system architecture is designed to gather, catalog and analyze scientific publications. All steps are electronic, eliminating paper forms. Indexed files are available for re-reading and modification. The system allows fast full-text search with additional features that make research easier. Team collaboration is also possible, with full control of user privileges. Particularly important is the safety of the collected data. In our opinion, the system exceeds many commercially available applications in terms of functionality and versatility.
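As an illustration of the local cataloguing step (the Google Drive client is assumed to handle synchronization of the folder tree), the following sketch files an annotated PDF under a topic/subtopic folder with a modified title. The folder layout and naming scheme are assumptions, not the authors' exact conventions.

```python
# A minimal sketch of filing a read/annotated PDF into the library folder tree.
from pathlib import Path
import shutil

# Assumed location of the locally synchronized Drive folder.
LIBRARY = Path.home() / "GoogleDrive" / "IELiS"

def catalogue(pdf_path: Path, topic: str, subtopic: str, new_title: str) -> Path:
    """Move a PDF into topic/subtopic and rename it with a descriptive title."""
    target_dir = LIBRARY / topic / subtopic
    target_dir.mkdir(parents=True, exist_ok=True)
    target = target_dir / f"{new_title}.pdf"
    shutil.move(str(pdf_path), target)
    return target

# Example (hypothetical paths and naming scheme):
# catalogue(Path("Downloads/fulltext.pdf"), "spine", "deformity",
#           "2013_scoliosis_outcomes_review")
```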
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Guarise, A.; Lusso, S.; Masera, M.; Vallero, S.
2015-12-01
The INFN computing centre in Torino hosts a private Cloud, which is managed with the OpenNebula cloud controller. The infrastructure offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated. This feature requires detailed monitoring and accounting of the resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and applications running on the hosted virtual instances. For this purpose we used the ElasticSearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to ensure the flexibility to choose a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the ElasticSearch engine via a custom Logstash plugin. Each use case is indexed separately in ElasticSearch, and we set up a set of Kibana dashboards with pre-defined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through an ad hoc RESTful web service. Moreover, we have developed a billing system for our private Cloud, which relies on the RabbitMQ message queue for asynchronous communication with the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which the Cloud tenants can easily configure to suit their specific needs.
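A rough Python equivalent of the MySQL-to-ElasticSearch transfer described above (the authors used a custom Logstash plugin; this sketch substitutes the pymysql and elasticsearch client libraries, and the table, column and index names are invented for illustration):

```python
# A minimal sketch, assuming an accounting table in MySQL and one
# Elasticsearch index per use case so dashboards can query them separately.
import pymysql
from elasticsearch import Elasticsearch, helpers

db = pymysql.connect(host="localhost", user="acct", password="***",
                     database="accounting", cursorclass=pymysql.cursors.DictCursor)
es = Elasticsearch("http://localhost:9200")

def actions():
    with db.cursor() as cur:
        cur.execute("SELECT id, tenant, vm_id, cpu_hours, ts FROM iaas_usage")
        for row in cur:
            # One Elasticsearch document per accounting record.
            yield {"_index": "iaas-accounting", "_id": row.pop("id"), "_source": row}

helpers.bulk(es, actions())
```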
TWRS technical baseline database manager definition document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acree, C.D.
1997-08-13
This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager.
[New bibliometric indicators for the scientific literature: an evolving panorama].
La Torre, G; Sciarra, I; Chiappetta, M; Monteduro, A
2017-01-01
Bibliometrics is a science that evaluates the impact of the scientific work of a journal or an author, using mathematical and statistical tools. The Impact Factor (IF) was the first bibliometric parameter created, and many others have since been conceived to overcome its limits. Currently, bibliometric indexes are used for academic purposes, among them evaluating the eligibility of a researcher to compete for the National Scientific Qualification, required to access competitive exams to become a professor. The aim of this study is to identify the most relevant bibliometric indexes and to summarize their characteristics. A review of bibliometric indexes has been conducted, starting from the classic ones and ending with the most recent. The two most used bibliometric indexes are the IF, which measures the scientific impact of a periodical and is based on the Web of Science citation database, and the h-index, which measures the impact of the scientific work of a researcher, based on the Scopus database. Besides these, other indexes have been created more recently, such as the SCImago Journal Rank indicator (SJR), the Source Normalized Impact per Paper (SNIP) and the CiteScore index. They are all based on the Scopus database and evaluate, in different ways, the citational impact of a periodical. The i10-index, instead, is provided by the Google Scholar database and allows evaluation of the impact of the scientific production of a researcher. Recently two software tools have been introduced: the first, Publish or Perish, allows evaluation of the scientific work of a researcher through the assessment of many indexes; the second, Altmetric, measures the use of academic papers on the Web by means of metrics alternative to traditional citation counts. Each analyzed index shows advantages but also critical issues. Therefore the combined use of several indexes, citational and not, should be preferred, in order to correctly evaluate the work of researchers and ultimately improve the quality and development of scientific research.
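As a concrete anchor for the h-index mentioned above, a short sketch computing it from a hypothetical list of per-paper citation counts:

```python
# A minimal sketch of the h-index: the largest h such that h papers
# have at least h citations each. The citation counts are hypothetical.
def h_index(citations):
    """Return the h-index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

assert h_index([10, 8, 5, 4, 3]) == 4   # four papers with at least 4 citations
assert h_index([25, 8, 5, 3, 3]) == 3
```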
[Quality management and participation into clinical database].
Okubo, Suguru; Miyata, Hiroaki; Tomotaki, Ai; Motomura, Noboru; Murakami, Arata; Ono, Minoru; Iwanaka, Tadashi
2013-07-01
Quality management is necessary for establishing a useful clinical database in cooperation with healthcare professionals and facilities. The main management activities are 1) progress management of data entry, 2) liaison with database participants (healthcare professionals), and 3) modification of data collection forms. In addition, healthcare facilities are expected to consider ethical issues and information security when joining clinical databases. Database participants should consult ethical review boards and consultation services for patients.
Science center capabilities to monitor and investigate Michigan’s water resources, 2016
Giesen, Julia A.; Givens, Carrie E.
2016-09-06
Michigan faces many challenges related to water resources, including flooding, drought, water-quality degradation and impairment, varying water availability, watershed-management issues, stormwater management, aquatic-ecosystem impairment, and invasive species. Michigan’s water resources include approximately 36,000 miles of streams, over 11,000 inland lakes, 3,000 miles of shoreline along the Great Lakes (MDEQ, 2016), and groundwater aquifers throughout the State.The U.S. Geological Survey (USGS) works in cooperation with local, State, and other Federal agencies, as well as tribes and universities, to provide scientific information used to manage the water resources of Michigan. To effectively assess water resources, the USGS uses standardized methods to operate streamgages, water-quality stations, and groundwater stations. The USGS also monitors water quality in lakes and reservoirs, makes periodic measurements along rivers and streams, and maintains all monitoring data in a national, quality-assured, hydrologic database.The USGS in Michigan investigates the occurrence, distribution, quantity, movement, and chemical and biological quality of surface water and groundwater statewide. Water-resource monitoring and scientific investigations are conducted statewide by USGS hydrologists, hydrologic technicians, biologists, and microbiologists who have expertise in data collection as well as various scientific specialties. A support staff consisting of computer-operations and administrative personnel provides the USGS the functionality to move science forward. Funding for USGS activities in Michigan comes from local and State agencies, other Federal agencies, direct Federal appropriations, and through the USGS Cooperative Matching Funds, which allows the USGS to partially match funding provided by local and State partners.This fact sheet provides an overview of the USGS current (2016) capabilities to monitor and study Michigan’s vast water resources. More information regarding projects by the Michigan Water Science Center (MI WSC) is available at http://mi.water.usgs.gov/.
Thakar, Sambhaji B; Ghorpade, Pradnya N; Kale, Manisha V; Sonawane, Kailas D
2015-01-01
Fern plants are known for their ethnomedicinal applications. A huge amount of information on medicinal ferns is scattered in textual form, so developing a database is an appropriate way to cope with the situation. Given the importance of medicinally useful fern plants, we developed a web-based database which contains information about several groups of ferns, their medicinal uses, chemical constituents, and protein/enzyme sequences isolated from different fern plants. The fern ethnomedicinal plant database is a comprehensive, content-managed, web-based database system used to retrieve a collection of factual knowledge related to ethnomedicinal fern species. Most of the protein/enzyme sequences have been extracted from the NCBI protein sequence database. Fern species, family name, identification, taxonomy ID from NCBI, geographical occurrence, trial for, plant parts used, ethnomedicinal importance and morphological characteristics were collected from various scientific literature and journals available in text form. Links to the NCBI BLAST, InterPro, phylogeny and Clustal W web resources have also been provided for future comparative studies, so users can find information related to fern plants and their medicinal applications in one place. This fern ethnomedicinal plant database includes information on 100 medicinal fern species. The database would be advantageous for deriving information for computational drug discovery, and useful to botanists and botanically interested persons, pharmacologists, researchers, biochemists, plant biotechnologists, ayurvedic practitioners, doctors/pharmacists, traditional medicine users, farmers, agricultural students and teachers from universities and colleges, and fern plant lovers in general. This effort should provide users with essential knowledge about applications for drug discovery, the conservation of fern species around the world, and, finally, creating social awareness.
A pilot GIS database of active faults of Mt. Etna (Sicily): A tool for integrated hazard evaluation
NASA Astrophysics Data System (ADS)
Barreca, Giovanni; Bonforte, Alessandro; Neri, Marco
2013-02-01
A pilot GIS-based system has been implemented for the assessment and analysis of hazard related to active faults affecting the eastern and southern flanks of Mt. Etna. The system structure was developed in the ArcGis® environment and consists of different thematic datasets that include spatially referenced arc features and an associated database. Arc-type features, georeferenced to the WGS84 ellipsoid in the UTM zone 33 projection, represent the five main fault systems that develop in the analysed region. The backbone of the GIS-based system is the large amount of information which was collected from the literature and then stored and properly geocoded in a digital database. This consists of thirty-five alpha-numeric fields which include all fault parameters available from the literature, such as location, kinematics, landform, slip rate, etc. Although the system has been implemented according to the most common procedures used by GIS developers, the architecture and content of the database represent a pilot backbone for the digital storage of fault parameters, providing a powerful tool for modelling hazard related to the active tectonics of Mt. Etna. The database collects, organises and shares all currently available scientific information about the active faults of the volcano. Furthermore, thanks to the strong effort spent on defining the fields of the database, the structure proposed in this paper is open to the collection of further data coming from future improvements in the knowledge of the fault systems. By layering additional user-specific geographic information and querying the proposed database (topological querying), a great diversity of hazard and vulnerability maps can be produced by the user. This is a proposal for a backbone for a comprehensive geographical database of fault systems, universally applicable to other sites.
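To illustrate the kind of attribute query such a fault database supports, here is a sketch assuming the arc features are exported as a shapefile and read with the geopandas library. The file and field names are illustrative assumptions, not the database's actual 35-field schema.

```python
# A minimal sketch of an attribute query over exported fault arcs.
import geopandas as gpd

faults = gpd.read_file("etna_active_faults.shp")  # hypothetical export
# Select normal faults slipping faster than 1 mm/yr for a hazard overlay.
fast_normal = faults[(faults["KINEMATIC"] == "normal") & (faults["SLIP_RATE"] > 1.0)]
fast_normal.to_file("fast_normal_faults.shp")
```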
Axiope tools for data management and data sharing.
Goddard, Nigel H; Cannon, Robert C; Howell, Fred W
2003-01-01
Many areas of biological research generate large volumes of very diverse data. Managing this data can be a difficult and time-consuming process, particularly in an academic environment where there are very limited resources for IT support staff such as database administrators. The most economical and efficient solutions are those that enable scientists with minimal IT expertise to control and operate their own desktop systems. Axiope provides one such solution, Catalyzer, which acts as a flexible cataloging system for creating structured records describing digital resources. The user is able to specify both the content and structure of the information included in the catalog. Information and resources can be shared by a variety of means, including automatically generated sets of web pages. Federation and integration of this information, where needed, is handled by Axiope's Mercat server. Where there is a need for standardization or compatibility of the structures used by different researchers, this can be achieved later by applying user-defined mappings in Mercat. In this way, large-scale data sharing can be achieved without imposing unnecessary constraints or interfering with the way in which individual scientists choose to record and catalog their work. We summarize the key technical issues involved in scientific data management and data sharing, describe the main features and functionality of Axiope Catalyzer and Axiope Mercat, and discuss future directions and requirements for an information infrastructure to support large-scale data sharing and scientific collaboration.
Countermeasure Evaluation and Validation Project (CEVP) Database Requirement Documentation
NASA Technical Reports Server (NTRS)
Shin, Sung Y.
2003-01-01
The initial focus of the project by the JSC laboratories will be to develop, test and implement a standardized complement of integrated physiological tests (Integrated Testing Regimen, ITR) that will examine both system and intersystem function and will be used to validate and certify candidate countermeasures. The ITR will consist of medical requirements (MRs) and non-MR core ITR tests, and countermeasure-specific testing. Non-MR and countermeasure-specific test data will be archived in a database specific to the CEVP. Development of a CEVP Database will be critical to documenting the progress of candidate countermeasures. The goal of this work is a fully functional software system that will integrate computer-based data collection and storage with secure, efficient, and practical distribution of that data over the Internet. This system will provide the foundation of a new level of interagency and international cooperation for scientific experimentation and research, providing intramural, international, and extramural collaboration through management and distribution of the CEVP data. The research performed this summer includes the first phase of the project, a requirements analysis. This analysis will identify the expected behavior of the system under normal conditions; the abnormal conditions that could affect the system's ability to produce this behavior; and the internal features in the system needed to reduce the risk of unexpected or unwanted behaviors. The second phase of the project was also performed this summer: the design of data entry and data retrieval screens for a working model of the Ground Data Database. The final report provides the requirements for the CEVP system in a variety of ways, so that both the development team and JSC technical management have a thorough understanding of how the system is expected to behave.
NASA Astrophysics Data System (ADS)
Fleury, Laurence; Brissebrat, Guillaume; Boichard, Jean-Luc; Cloché, Sophie; Eymard, Laurence; Mastrorillo, Laurence; Moulaye, Oumarou; Ramage, Karim; Favot, Florence; Roussot, Odile
2014-05-01
In the framework of the African Monsoon Multidisciplinary Analyses (AMMA) programme, several tools have been developed in order to facilitate and speed up data and information exchange between researchers from different disciplines. The AMMA information system includes (i) a multidisciplinary user-friendly data management and dissemination system, (ii) report and chart archives associated with display websites and (iii) a scientific paper exchange system. The AMMA information system is enriched by several previous projects (IMPETUS...) and follow-on projects (FENNEC, ESCAPE, QweCI, DACCIWA…) and is becoming a reference information system for the West African monsoon. (i) The AMMA project includes airborne, ground-based and ocean measurements, satellite data use, modelling studies and value-added product development. The AMMA database user interface therefore provides access to a great amount and variety of data: - 250 local observation datasets covering many geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health), collected by operational networks from 1850 to the present, long-term monitoring research networks (CATCH, IDAF, PIRATA...) and scientific campaigns; - 1350 outputs of a socio-economic questionnaire; - 60 operational satellite products and several research products; - 10 output sets of meteorological and ocean operational models and 15 of research simulations. All the data are documented in compliance with international metadata standards and delivered in standard formats. The data request user interface takes full advantage of the relational structure of the data and metadata bases and enables users to easily build multicriteria data requests (period, area, property, property value…). The AMMA data portal counts around 800 registered users and processes about 50 data requests every month. The AMMA databases and data portal have been developed and are operated jointly by SEDOO and ESPRI in France: http://database.amma-international.org. The complete system is fully duplicated and operated by CRA in Niger: http://amma.agrhymet.ne/amma-data. (ii) A day-to-day chart and report display application has been designed and operated in order to monitor meteorological and environmental information and to meet the observational teams' needs during the 2006 AMMA SOP (http://aoc.amma-international.org) and 2011 FENNEC campaigns (http://fenoc.sedoo.fr). At present the websites constitute a testimonial record of the campaigns and a preliminary investigation tool for researchers. Since 2011, the same application has enabled a group of French and Senegalese researchers and forecasters to share, in near real time, physical indices and diagnostics calculated from operational numerical weather forecasts, satellite products and in situ operational observations throughout the monsoon season, in order to better estimate, understand and anticipate the monsoon's intraseasonal variability (http://misva.sedoo.fr). (iii) A collaborative WIKINDX tool has also been set up online to gather scientific publications, theses and communications of interest to AMMA: http://biblio.amma-international.org. The bibliographic database now counts about 1200 references and is the most exhaustive document collection about the West African monsoon available to all. Every scientist is invited to make use of the different AMMA online tools and data.
Scientists or project leaders who have data management needs for existing or future datasets over West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
Building an R&D chemical registration system.
Martin, Elyette; Monge, Aurélien; Duret, Jacques-Antoine; Gualandi, Federico; Peitsch, Manuel C; Pospisil, Pavel
2012-05-31
Small molecule chemistry is of central importance to a number of R&D companies in diverse areas such as the pharmaceutical, nutraceutical, food flavoring, and cosmeceutical industries. In order to store and manage thousands of chemical compounds in such an environment, we have built a state-of-the-art master chemical database with unique structure identifiers. Here, we present the concept and methodology we used to build the system that we call the Unique Compound Database (UCD). In the UCD, each molecule is registered only once (uniqueness), structures with alternative representations are entered in a uniform way (normalization), and the chemical structure drawings are recognizable to chemists and to a cartridge. In brief, structural molecules are entered as neutral entities which can be associated with a salt. The salts are listed in a dictionary and bound to the molecule with the appropriate stoichiometric coefficient in an entity called "substance". The substances are associated with batches. Once a molecule is registered, some properties (e.g., ADMET prediction, IUPAC name, chemical properties) are calculated automatically. The UCD has both automated and manual data controls. Moreover, the UCD concept enables the management of user errors in the structure entry by reassigning or archiving the batches. It also allows updating of the records to include newly discovered properties of individual structures. As our research spans a wide variety of scientific fields, the database enables registration of mixtures of compounds, enantiomers, tautomers, and compounds with unknown stereochemistries.
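A minimal sketch of the uniqueness and normalization steps described above, using RDKit's canonical SMILES as a stand-in for the UCD's unique structure identifier. The UCD's actual cartridge, registry layout and salt dictionary are not public; everything named here is an assumption.

```python
# A minimal sketch: register each neutral parent structure once, and bind
# salts with a stoichiometric coefficient as a separate "substance" record.
from rdkit import Chem

registry = {}                                     # canonical SMILES -> molecule id
salt_dictionary = {"HCl": "Cl", "mesylate": "CS(=O)(=O)O"}  # hypothetical entries

def register(smiles, salt=None, salt_coeff=1):
    """Normalize the structure and register the neutral parent only once."""
    canonical = Chem.MolToSmiles(Chem.MolFromSmiles(smiles))  # normalization step
    mol_id = registry.setdefault(canonical, f"MOL-{len(registry) + 1:06d}")
    substance = (mol_id, salt, salt_coeff)        # parent + salt + stoichiometry
    return mol_id, substance

# The same structure drawn two ways resolves to one molecule record:
print(register("OC(=O)c1ccccc1"))                 # benzoic acid
print(register("c1ccccc1C(O)=O", salt="HCl"))     # same parent, as the hydrochloride
```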
A Database of Historical Information on Landslides and Floods in Italy
NASA Astrophysics Data System (ADS)
Guzzetti, F.; Tonelli, G.
2003-04-01
For the past 12 years we have maintained and updated a database of historical information on landslides and floods in Italy, known as the National Research Council's AVI (Damaged Urban Areas) Project archive. The database was originally designed to respond to a specific request of the Minister of Civil Protection, and was aimed at helping the regional assessment of landslide and flood risk in Italy. The database was first constructed in 1991-92 to cover the period 1917 to 1990. Information on damaging landslide and flood events was collected by searching archives, by screening thousands of newspaper issues, by reviewing the existing technical and scientific literature on landslides and floods in Italy, and by interviewing landslide and flood experts. The database was then updated chiefly through the analysis of hundreds of newspaper articles, and it now covers systematically the period 1900 to 1998, and non-systematically the periods 1900 to 1916 and 1999 to 2002. Non-systematic information on landslide and flood events older than the 20th century is also present in the database. The database currently contains information on more than 32,000 landslide events that occurred at more than 25,700 sites, and on more than 28,800 flood events that occurred at more than 15,600 sites. After a brief outline of the history and evolution of the AVI Project archive, we present and discuss: (a) the present structure of the database, including the hardware and software solutions adopted to maintain, manage, use and disseminate the information stored in the database; (b) the type and amount of information stored in the database, including an estimate of its completeness; and (c) examples of recent applications of the database, including a web-based GIS system to show the location of sites historically affected by landslides and floods, and an estimate of geo-hydrological (i.e., landslide and flood) risk in Italy based on the available historical information.
SPD-based Logistics Management Model of Medical Consumables in Hospitals.
Liu, Tongzhu; Shen, Aizong; Hu, Xiaojian; Tong, Guixian; Gu, Wei; Yang, Shanlin
2016-10-01
With the rapid development of health services, the progress of medical science and technology, and the improvement of materials research, the consumption of medical consumables (MCs) in medical activities has increased in recent years. However, owing to the lack of effective management methods and the complexity of MCs, there are several management problems, including MC waste, low management efficiency, high management difficulty, and frequent medical accidents. Therefore, there is an urgent need for an effective logistics management model to handle these problems and challenges in hospitals. We reviewed books and scientific literature (by searching articles published from 2010 to 2015 in the Engineering Village database) to understand supply-chain-related theories and methods, and performed field investigations in hospitals across many cities to determine the actual state of MC logistics management in Chinese hospitals. We describe the definition, physical model, construction, and logistics operation processes of the supply, processing, and distribution (SPD) model of MC logistics, which builds on the traditional SPD model. With the establishment of a supply-procurement platform and a logistics lean management system, we applied the model to the MC logistics management of Anhui Provincial Hospital with good results. The SPD model plays a critical role in optimizing the logistics procedures of MCs, improving the management efficiency of logistics, and reducing the costs of logistics in Chinese hospitals.
Scientific Evidence and Potential Barriers in the Management of Brazilian Protected Areas.
Giehl, Eduardo L H; Moretti, Marcela; Walsh, Jessica C; Batalha, Marco A; Cook, Carly N
2017-01-01
Protected areas are a crucial tool for halting the loss of biodiversity. Yet the management of protected areas is under-resourced, impacting the ability to achieve effective conservation actions. Effective management depends on the application of the best available knowledge, which can include both scientific evidence and the local knowledge of onsite managers. Despite the clear value of evidence-based conservation, little is still known about how much scientific evidence is used to guide the management of protected areas. This knowledge gap is especially evident in developing countries, where resource limitations and language barriers may create additional challenges for the use of scientific evidence in management. To assess the extent to which scientific evidence is used to inform management decisions in a developing country, we surveyed Brazilian protected area managers about the information they use to support their management decisions. We targeted on-ground managers who are responsible for management decisions made at the local protected area level. We asked managers about the sources of evidence they use, how frequently they assess the different sources of evidence and the scientific content of the different sources of evidence. We also considered a range of factors that might explain the use of scientific evidence to guide the management of protected areas, such as the language spoken by managers, the accessibility of evidence sources and the characteristics of the managers and the protected areas they manage. The managers who responded to our questionnaire reported that they most frequently made decisions based on their personal experience, with scientific evidence being used relatively infrequently. While managers in our study tended to value scientific evidence less highly than other sources, most managers still considered science important for management decisions. Managers reported that the accessibility of scientific evidence is low relative to other types of evidence, with key barriers being the low levels of open-access research and insufficient technical training to enable managers to interpret research findings. Based on our results, we suggest that managers in developing countries face all the same challenges as those in developed countries, along with additional language barriers that can prevent greater use of scientific evidence to support effective management of protected areas in Brazil.
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Preheim, Larry E.
1990-01-01
Data systems requirements in the Earth Observing System (EOS) and Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity, and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived as a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory, and collaboration services.
Short Fiction on Film: A Relational DataBase.
ERIC Educational Resources Information Center
May, Charles
Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2012 CFR
2012-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2013 CFR
2013-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2011 CFR
2011-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
Information management and analysis system for groundwater data in Thailand
NASA Astrophysics Data System (ADS)
Gill, D.; Luckananurung, P.
1992-01-01
The Ground Water Division of the Thai Department of Mineral Resources maintains a large archive of groundwater data with information on some 50,000 water wells. Each well file contains information on well location, well completion, borehole geology, water levels, water quality, and pumping tests. In order to enable efficient use of this information a computer-based system for information management and analysis was created. The project was sponsored by the United Nations Development Program and the Thai Department of Mineral Resources. The system was designed to serve users who lack prior training in automated data processing. Access is through a friendly user/system dialogue. Tasks are segmented into a number of logical steps, each of which is managed by a separate screen. Selective retrieval is possible by four different methods of area definition and by compliance with user-specified constraints on any combination of database variables. The main types of outputs are: (1) files of retrieved data, screened according to users' specifications; (2) an assortment of pre-formatted reports; (3) computed geochemical parameters and various diagrams of water chemistry derived therefrom; (4) bivariate scatter diagrams and linear regression analysis; (5) posting of data and computed results on maps; and (6) hydraulic aquifer characteristics as computed from pumping tests. Data are entered directly from formatted screens. Most records can be copied directly from hand-written documents. The database-management program performs data integrity checks in real time, enabling corrections at the time of input. The system software can be grouped into: (1) database administration and maintenance—these functions are carried out by the SIR/DBMS software package; (2) user communication interface for task definition and execution control—the interface is written in the operating system command language (VMS/DCL) and in FORTRAN 77; and (3) scientific data-processing programs, written in FORTRAN 77. The system was implemented on a DEC MicroVAX II computer.
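The selective-retrieval capability described above (screening wells against user-specified constraints on any combination of database variables) maps naturally onto a parameterized query builder. A minimal sketch in Python, using sqlite3 purely as a stand-in for the original SIR/DBMS package; the table and column names are hypothetical:

    import sqlite3

    def retrieve_wells(conn, constraints):
        """Screen wells against user-specified constraints.

        `constraints` maps a column name to an (operator, value) pair,
        e.g. {"depth_m": (">=", 100.0), "province": ("=", "Chiang Mai")}.
        """
        allowed = {"well_id", "province", "depth_m", "water_level_m", "tds_mg_l"}
        clauses, params = [], []
        for column, (op, value) in constraints.items():
            if column not in allowed or op not in {"=", "<", "<=", ">", ">="}:
                raise ValueError(f"unsupported constraint: {column} {op}")
            clauses.append(f"{column} {op} ?")  # parameterized, never interpolated
            params.append(value)
        sql = "SELECT * FROM wells"
        if clauses:
            sql += " WHERE " + " AND ".join(clauses)
        return conn.execute(sql, params).fetchall()

Restricting columns and operators to a whitelist, and binding values as parameters, gives the same kind of input checking the original system performed at data-entry time.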
Sanz-Valero, J; Gil, Á; Wanden-Berghe, C; Martínez de Victoria, E
2012-11-01
To evaluate, by bibliometric and thematic analysis, the scientific literature on omega-3 fatty acids indexed in international health sciences databases, and to establish a comparative baseline for future analyses. Searches were conducted with the descriptor (MeSH, as Major Topic) "Fatty Acids, Omega-3" from the first date available until December 31, 2010. Databases consulted: MEDLINE (via PubMed), EMBASE, ISI Web of Knowledge, CINAHL and LILACS. The most common type of document was the original article. Obsolescence was set at 5 years. The USA led the geographical distribution of first authors, and the articles were written predominantly in English. The study population was 90.98% (95% CI 89.25 to 92.71) adult humans. The documents were classified into 59 subject areas, and the most studied topic associated with omega-3, at 16.24% (95% CI 14.4 to 18.04), was cardiovascular disease. This study indicates that the scientific literature on omega-3 fatty acids is an area of knowledge in full force. Anglo-Saxon institutions dominate the scientific production, which is mainly oriented to the study of cardiovascular disease.
Network-based statistical comparison of citation topology of bibliographic databases
Šubelj, Lovro; Fiala, Dalibor; Bajec, Marko
2014-01-01
Modern bibliographic databases provide the basis for scientific research and its evaluation. While their content and structure differ substantially, there exist only informal notions on their reliability. Here we compare the topological consistency of citation networks extracted from six popular bibliographic databases including Web of Science, CiteSeer and arXiv.org. The networks are assessed through a rich set of local and global graph statistics. We first reveal statistically significant inconsistencies between some of the databases with respect to individual statistics. For example, the introduced field bow-tie decomposition of DBLP Computer Science Bibliography substantially differs from the rest due to the coverage of the database, while the citation information within arXiv.org is the most exhaustive. Finally, we compare the databases over multiple graph statistics using the critical difference diagram. The citation topology of DBLP Computer Science Bibliography is the least consistent with the rest, while, not surprisingly, Web of Science is significantly more reliable from the perspective of consistency. This work can serve either as a reference for scholars in bibliometrics and scientometrics or a scientific evaluation guideline for governments and research agencies. PMID:25263231
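As an illustration of the kind of local and global graph statistics such a comparison rests on, here is a sketch in Python using networkx; the statistics chosen and the toy edge lists are illustrative only, not the paper's actual tool chain:

    import networkx as nx

    def citation_stats(edges):
        """Summarize a citation network given as (citing, cited) pairs."""
        g = nx.DiGraph(edges)
        und = g.to_undirected()
        largest_wcc = max(nx.weakly_connected_components(g), key=len)
        return {
            "nodes": g.number_of_nodes(),
            "edges": g.number_of_edges(),
            "mean_out_degree": g.number_of_edges() / g.number_of_nodes(),
            "avg_clustering": nx.average_clustering(und),
            "largest_wcc_fraction": len(largest_wcc) / g.number_of_nodes(),
        }

    # The same statistics computed per database support a side-by-side comparison.
    print(citation_stats([(1, 2), (2, 3), (1, 3), (4, 3)]))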
USGS Information Technology Strategic Plan: Fiscal Years 2007-2011
,
2006-01-01
Introduction: The acquisition, management, communication, and long-term stewardship of natural science data, information, and knowledge are fundamental mission responsibilities of the U.S. Geological Survey (USGS). USGS scientists collect, maintain, and exchange raw scientific data and interpret and analyze it to produce a wide variety of science-based products. Managers throughout the Bureau access, summarize, and analyze administrative or business-related information to budget, plan, evaluate, and report on programs and projects. Information professionals manage the extensive and growing stores of irreplaceable scientific information and knowledge in numerous databases, archives, libraries, and other digital and nondigital holdings. Information is the primary currency of the USGS, and it flows to scientists, managers, partners, and a wide base of customers, including local, State, and Federal agencies, private sector organizations, and individual citizens. Supporting these information flows is an infrastructure of computer systems, telecommunications equipment, software applications, digital and nondigital data stores and archives, technical expertise, and information policies and procedures. This infrastructure has evolved over many years and consists of tools and technologies acquired or built to address the specific requirements of particular projects or programs. Developed independently, the elements of this infrastructure were typically not designed to facilitate the exchange of data and information across programs or disciplines, to allow for sharing of information resources or expertise, or to be combined into a Bureauwide and broader information infrastructure. The challenge to the Bureau is to wisely and effectively use its information resources to create a more Integrated Information Environment that can reduce costs, enhance the discovery and delivery of scientific products, and improve support for science. This Information Technology Strategic Plan for the USGS outlines key information technology (IT) strategic goals and objectives that will support the Bureau's science mission, while also aligning with the Department of the Interior (DOI) IT Strategic Plan and the DOI Government Performance and Results Act (GPRA) Strategic Plan.
Learmonth, Yvonne C; Motl, Robert W
2018-01-01
Much research has been undertaken to establish the important benefits of physical activity in persons with multiple sclerosis (MS). There is disagreement regarding the strength of this research, perhaps because the majority of studies on physical activity and its benefits have not undergone initial and systematic feasibility testing. We aim to address the feasibility processes that have been examined within the context of physical activity interventions in MS. A systematic scoping review was conducted based on a literature search of five databases to identify feasibility processes described in preliminary studies of physical activity in MS. We read and extracted methodology from each study based on the following feasibility metrics: process (e.g. recruitment), resource (e.g. monetary costs), management (e.g. personnel time requirements) and scientific outcomes (e.g. clinical/participant reported outcome measures). We illustrate the use of the four feasibility metrics within a randomised controlled trial of a home-based exercise intervention in persons with MS. Twenty-five studies were identified. Resource feasibility (e.g. time and resources) and scientific outcomes feasibility (e.g. clinical outcomes) methodologies were applied and described in many studies; however, these metrics have not been systematically addressed. Metrics related to process feasibility (e.g. recruitment) and management feasibility (e.g. human and data management) are not well described within the literature. Our case study successfully enabled us to address the four feasibility metrics, and we provide new information on management feasibility (i.e. estimate data completeness and estimate data entry) and scientific outcomes feasibility (i.e. determining data collection materials appropriateness). Our review highlights the existing research and provides a case study which assesses important metrics of study feasibility. This review serves as a clarion call for feasibility trials that will substantially strengthen the foundation of research on exercise in MS.
[A web-based integrated clinical database for laryngeal cancer].
E, Qimin; Liu, Jialin; Li, Yong; Liang, Chuanyu
2014-08-01
To establish an integrated database for laryngeal cancer that provides an information platform for clinical and fundamental research and meets the needs of both clinical and scientific use. Under the guidance of clinical experts, we constructed a web-based integrated clinical database for laryngeal carcinoma on the basis of clinical data standards and Apache+PHP+MySQL technology, incorporating laryngeal cancer specialist characteristics and tumor genetic information. A web-based integrated clinical database for laryngeal carcinoma was developed. The database has a user-friendly interface, and data can be entered and queried conveniently. In addition, the system uses clinical data standards and exchanges information with the existing electronic medical records system to avoid information silos. Furthermore, the database forms integrate laryngeal cancer specialist characteristics and tumor genetic information. The web-based integrated clinical database for laryngeal carcinoma offers comprehensive specialist information, strong expandability, and high technical feasibility, and it conforms to the clinical characteristics of the laryngeal cancer specialty. By using clinical data standards and structured handling of clinical data, the database can better meet the needs of scientific research and facilitate information exchange, and the information collected about tumor patients is highly informative. In addition, users can access and manipulate the database conveniently and swiftly over the Internet.
NASA Astrophysics Data System (ADS)
Dabiru, L.; O'Hara, C. G.; Shaw, D.; Katragadda, S.; Anderson, D.; Kim, S.; Shrestha, B.; Aanstoos, J.; Frisbie, T.; Policelli, F.; Keblawi, N.
2006-12-01
The Research Project Knowledge Base (RPKB) is currently being designed and will be implemented in a manner that is fully compatible and interoperable with enterprise architecture tools developed to support NASA's Applied Sciences Program. Through user needs assessment and collaboration with Stennis Space Center, Goddard Space Flight Center, and NASA's DEVELOP staff, insight into information needs for the RPKB was gathered from across NASA's scientific communities of practice. To enable efficient, consistent, standard, structured, and managed data entry and compilation of research results, a prototype RPKB has been designed and fully integrated with the existing NASA Earth Science Systems Components database. The RPKB will compile research project and keyword information of relevance to the six major science focus areas, the 12 national applications, and the Global Change Master Directory (GCMD). It will include information about projects awarded from NASA research solicitations, project investigator information, research publications, NASA data products employed, and model or decision support tools used or developed, as well as new data product information. The RPKB will be developed in a multi-tier architecture comprising a SQL Server relational database backend, middleware, and front-end client interfaces for data entry. The purpose of this project is to intelligently harvest the results of research sponsored by the NASA Applied Sciences Program and related research programs. We present various approaches for knowledge discovery of research results, publications, projects, etc. from the NASA Systems Components database and global information systems, and show how this is implemented in a SQL Server database. The application of knowledge discovery is useful for intelligent query answering and multiple-layered database construction. Using advanced EA tools such as the Earth Science Architecture Tool (ESAT), the RPKB will enable NASA and partner agencies to efficiently identify significant results for new experiment directions and will help principal investigators formulate experiment directions for new proposals.
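A minimal sketch of the relational backbone such a knowledge base implies: projects linked to investigators and publications. SQLite stands in here for the SQL Server backend described above, and every table and column name is an assumption for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE project (
        project_id   INTEGER PRIMARY KEY,
        title        TEXT NOT NULL,
        focus_area   TEXT,   -- one of the six science focus areas
        solicitation TEXT    -- NASA research solicitation the award came from
    );
    CREATE TABLE investigator (
        investigator_id INTEGER PRIMARY KEY,
        name            TEXT NOT NULL
    );
    CREATE TABLE project_investigator (
        project_id      INTEGER REFERENCES project(project_id),
        investigator_id INTEGER REFERENCES investigator(investigator_id),
        PRIMARY KEY (project_id, investigator_id)
    );
    CREATE TABLE publication (
        pub_id     INTEGER PRIMARY KEY,
        project_id INTEGER REFERENCES project(project_id),
        citation   TEXT NOT NULL
    );
    """)

Keyword tables tying projects to the GCMD vocabulary would hang off the project table in the same way.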
Microcomputer Database Management Systems for Bibliographic Data.
ERIC Educational Resources Information Center
Pollard, Richard
1986-01-01
Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)
The Data Base and Decision Making in Public Schools.
ERIC Educational Resources Information Center
Hedges, William D.
1984-01-01
Describes generic types of databases--file management systems, relational database management systems, and network/hierarchical database management systems--with their respective strengths and weaknesses; discusses factors to be considered in determining whether a database is desirable; and provides evaluative criteria for use in choosing…
PaperBLAST: Text Mining Papers for Information about Homologs.
Price, Morgan N; Arkin, Adam P
2017-01-01
Large-scale genome sequencing has identified millions of protein-coding genes whose function is unknown. Many of these proteins are similar to characterized proteins from other organisms, but much of this information is missing from annotation databases and is hidden in the scientific literature. To make this information accessible, PaperBLAST uses EuropePMC to search the full text of scientific articles for references to genes. PaperBLAST also takes advantage of curated resources (Swiss-Prot, GeneRIF, and EcoCyc) that link protein sequences to scientific articles. PaperBLAST's database includes over 700,000 scientific articles that mention over 400,000 different proteins. Given a protein of interest, PaperBLAST quickly finds similar proteins that are discussed in the literature and presents snippets of text from relevant articles or from the curators. PaperBLAST is available at http://papers.genomics.lbl.gov/. IMPORTANCE With the recent explosion of genome sequencing data, there are now millions of uncharacterized proteins. If a scientist becomes interested in one of these proteins, it can be very difficult to find information as to its likely function. Often a protein whose sequence is similar, and which is likely to have a similar function, has been studied already, but this information is not available in any database. To help find articles about similar proteins, PaperBLAST searches the full text of scientific articles for protein identifiers or gene identifiers, and it links these articles to protein sequences. Then, given a protein of interest, it can quickly find similar proteins in its database by using standard software (BLAST), and it can show snippets of text from relevant papers. We hope that PaperBLAST will make it easier for biologists to predict proteins' functions.
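The search PaperBLAST performs can be pictured as a two-stage pipeline: find similar protein sequences with standard BLAST, then look up the article snippets linked to each hit. A hedged sketch, assuming NCBI BLAST+ is installed and a snippet table exists; the schema is invented for illustration and is not the service's actual internals:

    import sqlite3
    import subprocess

    def similar_proteins(query_fasta, blast_db, max_hits=50):
        """Run blastp and return (subject_id, percent_identity) pairs."""
        out = subprocess.run(
            ["blastp", "-query", query_fasta, "-db", blast_db,
             "-outfmt", "6 sseqid pident", "-max_target_seqs", str(max_hits)],
            capture_output=True, text=True, check=True).stdout
        return [(sid, float(pident))
                for sid, pident in (line.split("\t") for line in out.splitlines())]

    def snippets_for(conn, protein_ids):
        """Fetch text snippets from articles linked to the similar proteins."""
        placeholders = ",".join("?" * len(protein_ids))
        return conn.execute(
            "SELECT protein_id, article_id, snippet FROM snippets "
            f"WHERE protein_id IN ({placeholders})", protein_ids).fetchall()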
Root resorption during orthodontic treatment.
Walker, Sally
2010-01-01
Medline, Embase, LILACS, The Cochrane Library (Cochrane Database of Systematic Reviews, CENTRAL, and Cochrane Oral Health Group Trials Register), Web of Science, EBM Reviews, Computer Retrieval of Information on Scientific Projects (CRISP, www.crisp.cit.nih.gov), On-Line Computer Library Center (www.oclc.org), Google, Index to Scientific and Technical Proceedings, PAHO (www.paho.org), WHOLis (www.who.int/library/databases/en), BBO (Brazilian Bibliography of Dentistry), CEPS (Chinese Electronic Periodical Services), conference materials (www.bl.uk/services/bsds/dsc/conference.html), ProQuest Dissertation Abstracts and Thesis database, TrialsCentral (www.trialscentral.org), National Research Register (www.controlled-trials.com), www.ClinicalTrials.gov and SIGLE (System for Information on Grey Literature in Europe) were searched. Randomised controlled trials, including split-mouth designs, recording the presence or absence of external apical root resorption (EARR) by treatment group at the end of the treatment period were included. Data were extracted independently by two reviewers using specially designed and piloted forms. Quality was also assessed independently by the same reviewers. After evaluating titles and abstracts, 144 full articles were obtained, of which 13 articles, describing 11 trials, fulfilled the criteria for inclusion. Differences in methodological approaches and in the reporting of results made quantitative statistical comparisons impossible. Evidence suggests that comprehensive orthodontic treatment causes increased incidence and severity of root resorption, and heavy forces might be particularly harmful. Orthodontically induced inflammatory root resorption is unaffected by archwire sequencing, bracket prescription, and self-ligation. Previous trauma and tooth morphology are unlikely causative factors. There is some evidence that a two- to three-month pause in treatment decreases total root resorption. The results were inconclusive regarding the clinical management of root resorption, but there is evidence to support the use of light forces, especially with incisor intrusion.
Scientific Use Cases for the Virtual Atomic and Molecular Data Center
NASA Astrophysics Data System (ADS)
Dubernet, M. L.; Aboudarham, J.; Ba, Y. A.; Boiziot, M.; Bottinelli, S.; Caux, E.; Endres, C.; Glorian, J. M.; Henry, F.; Lamy, L.; Le Sidaner, P.; Møller, T.; Moreau, N.; Rénié, C.; Roueff, E.; Schilke, P.; Vastel, C.; Zwoelf, C. M.
2014-12-01
The VAMDC Consortium is a worldwide consortium that federates interoperable Atomic and Molecular databases through an e-science infrastructure. The data it contains are of the highest scientific quality and are crucial for many applications: astrophysics, atmospheric physics, fusion, plasma and lighting technologies, health, etc. In this paper we present astrophysical scientific use cases for the VAMDC e-infrastructure. These cover very different applications, such as: (i) modeling the spectra of interstellar objects using the myXCLASS software tool implemented in the Common Astronomy Software Applications package (CASA), or using the CASSIS software tool, in its stand-alone version or implemented in the Herschel Interactive Processing Environment (HIPE); (ii) the use of Virtual Observatory tools accessing VAMDC databases; (iii) access to VAMDC from the Paris solar BASS2000 portal; (iv) the combination of tools and databases from the APIS service (Auroral Planetary Imaging and Spectroscopy); and (v) the combination of heterogeneous data for application to the interstellar medium with the SPECTCOL tool.
"Mr. Database" : Jim Gray and the History of Database Technologies.
Hanwahr, Nils C
2017-12-01
Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the development of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.
Health technology assessment in Iran: challenges and views
Olyaeemanesh, Alireza; Doaee, Shila; Mobinizadeh, Mohammadreza; Nedjati, Mina; Aboee, Parisa; Emami-Razavi, Seyed Hassan
2014-01-01
Background: Various decisions have been made on technology application at all levels of the health system in different countries around the world. Health technology assessment is considered one of the best scientific tools at the service of policy-makers. This study attempts to investigate the current challenges of Iran's health technology assessment and provide appropriate strategies to establish and institutionalize this program. Methods: This study was carried out in two independent phases. In the first, electronic databases such as Medline (via PubMed) and the Scientific Information Database (SID) were searched to provide a list of challenges of Iran's health technology assessment. The views and opinions of experts and practitioners on HTA challenges were studied through a questionnaire in the second phase, which was then analyzed with SPSS Software version 16. This has been an observational and analytical study with a thematic analysis. Results: In the first phase, seven papers were retrieved, from which twenty-two HTA challenges in Iran were extracted by the researchers; these were used as the base for designing the structured questionnaire of the second phase. The views of the experts on the challenges of health technology assessment were categorized as follows: organizational culture, stewardship, stakeholders, health system management, infrastructures and external pressures, which were mentioned in more than 60% of the cases and were also common in the views. Conclusion: The identification and prioritization of HTA challenges, which were approved by those experts involved in the strategic planning of the Department of Health Technology Assessment, will be a step forward in the promotion of evidence-based policy-making and in the production of comprehensive scientific evidence. PMID:25695015
Scientific production of medical sciences universities in north of Iran.
Siamian, Hasan; Firooz, Mousa Yamin; Vahedi, Mohammad; Aligolbandi, Kobra
2013-01-01
The study of scientific production as indexed by the world's major citation databases is one of the important indicators used to evaluate and rank universities. This study investigated the scientific production of the medical sciences universities of northern Iran in Scopus from 2005 through 2010. The survey used scientometric techniques. The samples under study were the scientific products of four northern Iranian medical universities. In terms of quantity of scientific products, Mazandaran University of Medical Sciences stands first and Babol University of Medical Sciences ranks last; in terms of quality, considering the H-index and the number of cited papers, Mazandaran University of Medical Sciences is also ahead of the other universities under study. By subject, the highest scientific production belonged to the Faculty of Pharmacy affiliated with Mazandaran University of Medical Sciences, while the three other universities produced most in genetics and biochemistry. Results showed that Mazandaran University of Medical Sciences ranked higher than the other universities under study in the number of articles, cited articles, number of productive authors, and H-index in the Scopus database from 2005 through 2010.
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2014 CFR
2014-10-01
... individual database managers; and to perform other functions as needed for the administration of the TV bands... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers...
GIS Application System Design Applied to Information Monitoring
NASA Astrophysics Data System (ADS)
Qun, Zhou; Yujin, Yuan; Yuena, Kang
Natural environment information management systems involve on-line instrument monitoring, data communications, database establishment, information management software development, and so on. Their core lies in collecting effective and reliable environmental information, increasing the utilization rate and sharing degree of environmental information through advanced information technology, and providing as timely and scientific a foundation as possible for environmental monitoring and management. This thesis adopts C# plug-in application development and uses a complete set of embedded GIS component libraries and tools provided by GIS Engine to implement the core of a plug-in GIS application framework, namely the design and implementation of the framework host program and each functional plug-in, as well as the design and implementation of the plug-in GIS application framework platform. The thesis takes advantage of dynamic plug-in loading configuration, quickly establishes GIS applications through visualized component collaborative modeling, and realizes GIS application integration. The developed platform is applicable to any application integration related to GIS applications (on the ESRI platform) and can serve as a base development platform for GIS application development.
A GIS-based modeling system for petroleum waste management. Geographical information system.
Chen, Z; Huang, G H; Li, J B
2003-01-01
With an urgent need for effective management of petroleum-contaminated sites, a GIS-aided simulation (GISSIM) system is presented in this study. The GISSIM contains two components: an advanced 3D numerical model and a geographical information system (GIS), which are integrated within a general framework. The modeling component undertakes simulation for the fate of contaminants in subsurface unsaturated and saturated zones. The GIS component is used in three areas throughout the system development and implementation process: (i) managing spatial and non-spatial databases; (ii) linking inputs, model, and outputs; and (iii) providing an interface between the GISSIM and its users. The developed system is applied to a North American case study. Concentrations of benzene, toluene, and xylenes in groundwater under a petroleum-contaminated site are dynamically simulated. Reasonable outputs have been obtained and presented graphically. They provide quantitative and scientific bases for further assessment of site-contamination impacts and risks, as well as decisions on practical remediation actions.
NASA Astrophysics Data System (ADS)
Kulchitsky, A.; Maurits, S.; Watkins, B.
2006-12-01
With the widespread availability of the Internet today, many people can monitor various scientific research activities. It is important to accommodate this interest by providing on-line access to dynamic and illustrative Web resources that demonstrate different aspects of ongoing research. It is especially important to explain these research activities to high school and undergraduate students, thereby providing more information for decisions concerning their future studies. Such Web resources are also important for clarifying scientific research for the general public, in order to achieve better awareness of research progress in various fields. Particularly rewarding is the dissemination of information about ongoing projects within universities and research centers to their local communities. The benefits of this type of scientific outreach are mutual, since the development of Web-based automatic systems is a prerequisite for many research projects targeting real-time monitoring and/or modeling of natural conditions; continuous operation of such systems provides ongoing research opportunities for statistically massive validation of the models as well. We have developed a Web-based system to run the University of Alaska Fairbanks Polar Ionospheric Model in real time. This model makes use of networking and computational resources at the Arctic Region Supercomputing Center. The system was designed to be portable among various operating systems and computational resources, and its components can be installed across different computers, separating Web servers and computational engines. The core of the system is a Real-Time Management module (RMM) written in Python, which coordinates remote input data transfers, the ionospheric model runs, MySQL database updates, and the PHP scripts that prepare the Web pages. The RMM downloads current geophysical inputs as soon as they become available at different on-line depositories. This information is processed to provide inputs for the next ionospheric model time step and then stored in a MySQL database as the first part of the time-specific record. The RMM then synchronizes the input times with the current model time, prepares a decision on initialization of the next model time step, and monitors its execution. As soon as the model completes computations for the next time step, the RMM renders the current model output into various short-term (about 1-2 hours) forecasting products and compares prior results with available ionospheric measurements. The RMM places the prepared images into the MySQL database, which can be located on a different computer node, and then proceeds to the next time interval, continuing the time loop. The upper-level interface of this real-time system is a PHP-based Web site (http://www.arsc.edu/SpaceWeather/new). This site provides general information about the Earth's polar and adjacent mid-latitude ionosphere, allows monitoring of current developments and short-term forecasts, and facilitates access to the comparison archive stored in the database.
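The abstract notes that the Real-Time Management module is written in Python. A heavily simplified skeleton of such a loop is sketched below; every function name is a placeholder, not the project's actual code:

    import time

    def run_rmm(fetch_inputs, run_model_step, store, make_products,
                step_seconds=900):
        """Skeleton real-time management loop: ingest geophysical inputs,
        advance the model one time step, archive results, render products."""
        while True:
            inputs = fetch_inputs()          # poll on-line depositories
            store("inputs", inputs)          # first part of the time-specific record
            result = run_model_step(inputs)  # advance the ionospheric model
            store("outputs", result)         # archive into the database
            make_products(result)            # 1-2 hour forecast images for the web
            time.sleep(step_seconds)         # synchronize with the next time step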
Coordinating Council. Fourth Meeting: NACA Documents Database Project
NASA Technical Reports Server (NTRS)
1991-01-01
This NASA Scientific and Technical Information Coordination Council meeting dealt with the topic 'NACA Documents Database Project'. The following presentations were made and reported on: NACA documents database project study plan, AIAA study, the Optimal NACA database, Deficiencies in online file, NACA documents: Availability and Preservation, the NARA Collection: What is in it? and What to do about it?, and NACA foreign documents and availability. Visuals are available for most presentations.
Oliveira, S R M; Almeida, G V; Souza, K R R; Rodrigues, D N; Kuser-Falcão, P R; Yamagishi, M E B; Santos, E H; Vieira, F D; Jardine, J G; Neshich, G
2007-10-05
An effective strategy for managing protein databases is to provide mechanisms to transform raw data into consistent, accurate and reliable information. Such mechanisms will greatly reduce operational inefficiencies and improve one's ability to better handle scientific objectives and interpret the research results. To achieve this challenging goal for the STING project, we introduce Sting_RDB, a relational database of structural parameters for protein analysis with support for data warehousing and data mining. In this article, we highlight the main features of Sting_RDB and show how a user can explore it for efficient and biologically relevant queries. Considering its importance for molecular biologists, effort has been made to advance Sting_RDB toward data quality assessment. To the best of our knowledge, Sting_RDB is one of the most comprehensive data repositories for protein analysis, now also capable of providing its users with a data quality indicator. This paper differs from our previous study in many aspects. First, we introduce Sting_RDB, a relational database with mechanisms for efficient and relevant queries using SQL. Sting_RDB evolved from the earlier, text (flat file)-based database, in which data consistency and integrity were not guaranteed. Second, we provide support for data warehousing and mining. Third, a data quality indicator was introduced. Finally, and probably most importantly, complex queries that could not be posed on a text-based database are now easily implemented. Further details are accessible at the Sting_RDB demo web page: http://www.cbi.cnptia.embrapa.br/StingRDB.
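The payoff of moving from a flat file to a relational store is exactly the class of queries the authors mention. A hedged example of one such structural-parameter query, written against an invented table; the real Sting_RDB schema is documented at the project's demo page:

    import sqlite3

    def buried_hydrophobic_residues(conn, pdb_id, max_accessibility=0.05):
        """A query a flat file cannot answer directly: list the buried
        hydrophobic residues of one structure, ordered along the chain."""
        return conn.execute(
            """SELECT residue_number, residue_name, accessibility
               FROM residue_parameters
               WHERE pdb_id = ?
                 AND accessibility <= ?
                 AND residue_name IN ('ALA','VAL','LEU','ILE','PHE','MET','TRP')
               ORDER BY residue_number""",
            (pdb_id, max_accessibility)).fetchall()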
NASA Astrophysics Data System (ADS)
Guiquan, Xi; Lin, Cong; Xuehui, Jin
2018-05-01
As important platforms for scientific and technological development, large-scale scientific facilities are the cornerstone of technological innovation and a guarantee of economic and social development. Research on the management of large-scale scientific facilities can play a key role in scientific research, sociology, and key national strategy. This paper reviews the characteristics of large-scale scientific facilities and summarizes the development status of China's large-scale scientific facilities. Finally, the construction, management, operation, and evaluation of large-scale scientific facilities are analyzed from the perspective of sustainable development.
Study of Scientific Production of Community Medicines’ Department Indexed in ISI Citation Databases
Khademloo, Mohammad; Khaseh, Ali Akbar; Siamian, Hasan; Aligolbandi, Kobra; Latifi, Mahsoomeh; Yaminfirooz, Mousa
2016-01-01
Background: In scientometrics, the main criterion for determining the scientific position and ranking of scientific centers, particularly universities, is the rate of scientific production and innovation and overall participation in global scientific development. Medical science is among the fields most involved in science and technology and most influential on the improvement of health. In this research, using scientometric and citation analysis, we studied the rate of scientific production in the field of community medicine, measured as the number of articles published and indexed in the ISI database from 2000 to 2010. Methods: This is a scientometric study using survey and citation analysis. The study samples included all articles in the ISI database from 2000 to 2010. For data collection, the advanced search method was used in the ISI database. ISI analysis software and descriptive statistics were used for data analysis. Results: Among the top five universities producing documents, Tehran University of Medical Sciences ranked first in scientific production with 88 documents (22.22%). M. Askarian, with 36 published documents (9.09%), was the most active author in community medicine in the international arena. In collaboration with other writers, Iranian departments of community medicine had the greatest participation with scholars from England, with 27 jointly published articles. Scientific output was at its lowest in the years 2000 to 2004, while 2009 accounted for the largest share of production. The Iranian Journal of Public Health and the Saudi Medical Journal, with 16 articles each, had the highest participation in publishing the output of community medicine departments. Regarding document type, community medicine departments presented most of their scientific production, 340 documents (85.86%), in article format. Among the outputs in community medicine, the article entitled "Iron loading and erythrophagocytosis increase ferroportin 1 (FPN1) expression in J774 macrophages" (1), with 81 citations, ranked first among cited articles. Occupational health, with 70 articles, and general medicine, with 69 articles, were the most active research areas in the production of community medicine departments. Conclusion: The data showed considerable growth in scientific production. Tehran University of Medical Sciences ranked first in publishing articles in community medicine, the greatest collaboration was with community medicine authors from England, and most authors presented their work in article format. PMID:28077896
Database Management Systems: New Homes for Migrating Bibliographic Records.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Bierbaum, Esther G.
1987-01-01
Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.
ERIC Educational Resources Information Center
Pieska, K. A. O.
1986-01-01
Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Construction of databases: advances and significance in clinical research.
Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian
2015-12-01
Widely used in clinical research, the database is a new type of data management automation technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.
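The data-verification step in that workflow is usually enforced at input time, before a record is committed. A minimal sketch of rule-based record validation; the field names and ranges are illustrative assumptions only:

    def validate_record(record, rules):
        """Check one record against per-field rules; return problems found."""
        errors = []
        for field, (required, check) in rules.items():
            value = record.get(field)
            if value is None:
                if required:
                    errors.append(f"{field}: missing required value")
            elif not check(value):
                errors.append(f"{field}: out-of-range value {value!r}")
        return errors

    rules = {
        "age":         (True,  lambda v: 0 <= v <= 120),
        "systolic_bp": (False, lambda v: 60 <= v <= 260),
    }
    assert validate_record({"age": 45, "systolic_bp": 130}, rules) == []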
Diway, Bibian; Khoo, Eyen
2017-01-01
The development of timber tracking methods based on genetic markers can provide scientific evidence to verify the origin of timber products and fulfill the growing requirement for sustainable forestry practices. In this study, the origin of an important Dark Red Meranti wood, Shorea platyclados, was studied by using a combination of seven chloroplast DNA markers and 15 short tandem repeat (STR) markers. A total of 27 natural populations of S. platyclados were sampled throughout Malaysia to establish population-level and individual-level identification databases. A haplotype map was generated from chloroplast DNA sequencing for population identification, resulting in 29 multilocus haplotypes based on 39 informative intraspecific variable sites. Subsequently, a DNA profiling database was developed from the 15 STRs, allowing for individual identification in Malaysia. Cluster analysis divided the 27 populations into two genetic clusters, corresponding to the regions of Eastern and Western Malaysia. The conservativeness tests showed that the Malaysia database is conservative after removal of bias from population subdivision and sampling effects. Independent self-assignment tests correctly assigned individuals to the database in 60.60–94.95% of cases overall for identified populations, and in 98.99–99.23% of cases for identified regions. Both the chloroplast DNA database and the STRs appear to be useful for tracking timber originating in Malaysia. Hence, this DNA-based method could serve as an effective additional tool in the existing forensic timber identification system for ensuring the sustainable management of this species into the future. PMID:28430826
Morphology-based Query for Galaxy Image Databases
NASA Astrophysics Data System (ADS)
Shamir, Lior
2017-02-01
Galaxies of rare morphology are of paramount scientific interest, as they carry important information about the past, present, and future Universe. Once a rare galaxy is identified, studying it more effectively requires a set of galaxies of similar morphology, allowing generalization and statistical analysis that cannot be done when N=1. Databases generated by digital sky surveys can contain a very large number of galaxy images, and therefore once a rare galaxy of interest is identified it is possible that more instances of the same morphology are also present in the database. However, when a researcher identifies a certain galaxy of rare morphology in the database, it is virtually impossible to mine the database manually in the search for galaxies of similar morphology. Here we propose a computer method that can automatically search databases of galaxy images and identify galaxies that are morphologically similar to a certain user-defined query galaxy. That is, the researcher provides an image of a galaxy of interest, and the pattern recognition system automatically returns a list of galaxies that are visually similar to the target galaxy. The algorithm uses a comprehensive set of descriptors, allowing it to support different types of galaxies, and it is not limited to a finite set of known morphologies. While the list of returned galaxies is neither clean nor complete, it contains a far higher frequency of galaxies of the morphology of interest, providing a substantial reduction of the data. Such algorithms can be integrated into data management systems of autonomous digital sky surveys such as the Large Synoptic Survey Telescope (LSST), where the number of galaxies in the database is extremely large. The source code of the method is available at http://vfacstaff.ltu.edu/lshamir/downloads/udat.
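Query-by-example of this kind reduces to nearest-neighbor search in a descriptor space. A compact sketch, assuming each galaxy image has already been reduced to a numeric feature vector; the real udat descriptor set is far richer than this toy example:

    import numpy as np

    def most_similar(query_features, database_features, k=10):
        """Return indices of the k galaxies whose descriptor vectors lie
        closest to the query (Euclidean distance after z-scoring)."""
        db = np.asarray(database_features, dtype=float)
        mu, sigma = db.mean(axis=0), db.std(axis=0) + 1e-9
        dbz = (db - mu) / sigma
        qz = (np.asarray(query_features, dtype=float) - mu) / sigma
        distances = np.linalg.norm(dbz - qz, axis=1)
        return np.argsort(distances)[:k]

    # Toy usage: 100 galaxies, 16 descriptors each; the query ranks itself first.
    rng = np.random.default_rng(0)
    db = rng.normal(size=(100, 16))
    print(most_similar(db[7], db, k=5))

As the abstract notes, the returned list is neither clean nor complete, but it concentrates galaxies of the target morphology strongly enough to make manual inspection tractable.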
[The future of clinical laboratory database management system].
Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y
1999-09-01
To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and the Clinical Laboratory System was explained in this study. Three kinds of database management systems (DBMS) were described: the relational, tree (hierarchical), and network models. The relational model was found to be the best DBMS for the clinical laboratory database, based on our experience developing several clinical laboratory expert systems. As future directions for clinical laboratory database management, an IC card system connected to an automatic chemical analyzer was proposed for personal health data management, and a microscope/video system was proposed for dynamic data management of leukocytes and bacteria.
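A minimal sketch of why the relational model suits laboratory data: results live in normalized tables and are retrieved with declarative queries, independent of physical storage. The schema, table names, and values below are illustrative, not taken from the paper.

```python
# A minimal sketch of a relational clinical-laboratory store (illustrative
# schema; not the paper's actual design).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE patient (patient_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE result (
    result_id   INTEGER PRIMARY KEY,
    patient_id  INTEGER REFERENCES patient(patient_id),
    test_code   TEXT,      -- e.g. 'GLU' for glucose
    value       REAL,
    measured_at TEXT
);
""")
con.execute("INSERT INTO patient VALUES (1, 'Sato')")
con.executemany("INSERT INTO result VALUES (?,?,?,?,?)",
                [(1, 1, 'GLU', 5.4, '1999-09-01'),
                 (2, 1, 'GLU', 6.1, '1999-09-08')])

# Declarative relational query: one patient's glucose history, newest first
for row in con.execute("""SELECT measured_at, value FROM result
                          WHERE patient_id = 1 AND test_code = 'GLU'
                          ORDER BY measured_at DESC"""):
    print(row)
```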
NASA Astrophysics Data System (ADS)
Slater, T. F.; Elfring, L.; Novodvorsky, I.; Talanquer, V.; Quintenz, J.
2007-12-01
Science education reform documents universally call for students to have authentic and meaningful experiences using real data in the context of their science education. The underlying philosophical position is that students analyzing data can have experiences that mimic actual research. In short, research experiences that reflect the scientific spirit of inquiry potentially can: prepare students to address real-world complex problems; develop students' ability to use scientific methods; prepare students to critically evaluate the validity of data or evidence and of the consequent interpretations or conclusions; teach quantitative skills, technical methods, and scientific concepts; increase verbal, written, and graphical communication skills; and train students in the values and ethics of working with scientific data. However, it is unclear what the broader pre-service teacher preparation community is doing to prepare future teachers to promote, manage, and successfully facilitate their own students in conducting authentic scientific inquiry. Surveys of undergraduates in secondary science education programs suggest that students have had almost no experience themselves conducting open scientific inquiry in which they develop researchable questions, design strategies to pursue evidence, and communicate data-based conclusions. In response, the College of Science Teacher Preparation Program at the University of Arizona requires all students enrolled in its various science teaching methods courses to complete an open inquiry research project and defend their findings at a specially designed inquiry science mini-conference at the end of the term. End-of-term surveys show that students enjoy their research experience and believe that this experience enhances their ability to facilitate their own future students in conducting open inquiry.
NASA Technical Reports Server (NTRS)
Berard, Peter R.
1993-01-01
Researchers in the Molecular Sciences Research Center (MSRC) of Pacific Northwest Laboratory (PNL) currently generate massive amounts of scientific data. The amount of data that will need to be managed by the turn of the century is expected to increase significantly. Automated tools that support the management, maintenance, and sharing of this data are minimal. Researchers typically manage their own data by physically moving datasets to and from long term storage devices and recording a dataset's historical information in a laboratory notebook. Even though it is not the most efficient use of resources, researchers have tolerated the process. The solution to this problem will evolve over the next three years in three phases. PNL plans to add sophistication to existing multilevel file system (MLFS) software by integrating it with an object database management system (ODBMS). The first phase in the evolution is currently underway. A prototype system of limited scale is being used to gather information that will feed into the next two phases. This paper describes the prototype system, identifies the successes and problems/complications experienced to date, and outlines PNL's long term goals and objectives in providing a permanent solution.
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
Towards an integrated set of surface meteorological observations for climate science and applications
NASA Astrophysics Data System (ADS)
Dunn, Robert; Thorne, Peter
2017-04-01
We cannot predict what is not observed, and we cannot analyse what is not archived. To meet current scientific and societal demands, as well as future requirements for climate services, it is vital that the management and curation of land-based meteorological data holdings are improved. A comprehensive global set of data holdings, of known provenance, integrated across both climate variable and timescale, is required to meet the wide range of user needs. Presently, the land-based holdings are highly fractured into global, regional, and national holdings for different variables and timescales, from a variety of sources, and in a mixture of formats. We present a high-level overview, based on broad community input, of the steps that are required to bring about this integration and progress towards such a database. Any long-term, international programme creating such an integrated database will transform our collective ability to provide societally relevant research, analysis, and predictions across the globe.
A survey of the current status of web-based databases indexing Iranian journals.
Merat, Shahin; Khatibzadeh, Shahab; Mesgarpour, Bita; Malekzadeh, Reza
2009-05-01
The scientific output of Iran has been increasing rapidly in recent years. Unfortunately, most papers are published in journals which are not indexed by popular indexing systems, and many of them are in Persian without English translation. This makes the results of Iranian scientific research unavailable to other researchers, including Iranians. The aim of this study was to evaluate the quality of current web-based databases indexing scientific articles published in Iran. We identified web-based databases which indexed scientific journals published in Iran using popular search engines. The sites were then subjected to a series of tests to evaluate their coverage, search capabilities, stability, accuracy of information, consistency, accessibility, ease of use, and other features. Results were compared with each other to identify the strengths and shortcomings of each site. Five web sites were identified. None had complete coverage of Iranian scientific journals. The search capabilities were less than optimal in most sites. English translations of research titles, author names, keywords, and abstracts of Persian-language articles did not follow standards. Some sites did not cover abstracts. Numerous typing errors make searches ineffective and citation indexing unreliable. None of the currently available indexing sites is capable of presenting Iranian research to the international scientific community. The government should intervene by enforcing policies designed to facilitate indexing through a systematic approach. The policies should address Iranian journals, authors, and indexing sites. Iranian journals should be required to provide their indexing data, including references, electronically; authors should provide correct indexing information to journals; and indexing sites should improve their software to meet standards set by the government.
LiverTox: Clinical and Research Information on Drug-Induced Liver Injury
Information presented in the LiverTox database is derived from the scientific literature and public ...
ERIC Educational Resources Information Center
Freeman, Carla; And Others
In order to understand how the database software or online database functioned in the overall curricula, the use of database management systems (DBMS) was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
This rule reflects the consolidation of the Central Contractor Registration (CCR), Online Representations and Certifications Application (ORCA), and Excluded Parties Listing System (EPLS) databases into the System for Award Management (SAM) database, which combined the functional capabilities of the three procurement systems; an entity's identification number and type of organization are now obtained from the System for Award Management database.
An Examination of Selected Software Testing Tools: 1992
1992-12-01
Figure 27-17. Metrics Manager Database Full Report. ... Using a historical test database, the test management and problem reporting tools were examined with the sample test database provided by each supplier. ... Metrics Manager is supported by an industry database that allows users to track the impact of new methods, organizational structures, and technologies.
Data management for community research projects: A JGOFS case study
NASA Technical Reports Server (NTRS)
Lowry, Roy K.
1992-01-01
Since the mid 1980s, much of the marine science research effort in the United Kingdom has been focused into large scale collaborative projects involving public sector laboratories and university departments, termed Community Research Projects. Two of these, the Biogeochemical Ocean Flux Study (BOFS) and the North Sea Project incorporated large scale data collection to underpin multidisciplinary modeling efforts. The challenge of providing project data sets to support the science was met by a small team within the British Oceanographic Data Centre (BODC) operating as a topical data center. The role of the data center was to both work up the data from the ship's sensors and to combine these data with sample measurements into online databases. The working up of the data was achieved by a unique symbiosis between data center staff and project scientists. The project management, programming and data processing skills of the data center were combined with the oceanographic experience of the project communities to develop a system which has produced quality controlled, calibrated data sets from 49 research cruises in 3.5 years of operation. The data center resources required to achieve this were modest and far outweighed by the time liberated in the scientific community by the removal of the data processing burden. Two online project databases have been assembled containing a very high proportion of the data collected. As these are under the control of BODC their long term availability as part of the UK national data archive is assured. The success of the topical data center model for UK Community Research Project data management has been founded upon the strong working relationships forged between the data center and project scientists. These can only be established by frequent personal contact and hence the relatively small size of the UK has been a critical factor. However, projects covering a larger, even international scale could be successfully supported by a network of topical data centers managing online databases which are interconnected by object oriented distributed data management systems over wide area networks.
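The "working up" step described above, converting raw shipboard sensor readings into calibrated, quality-controlled values before merging them with sample measurements, can be sketched as follows. The linear calibration, channel, and acceptance limits are hypothetical; BODC's actual procedures were considerably more elaborate.

```python
# A minimal sketch of sensor data work-up: linear calibration followed by
# a range-based quality flag. Coefficients and limits are invented.
def calibrate(raw_series, gain, offset):
    """Apply a linear calibration y = gain * x + offset to raw readings."""
    return [gain * x + offset for x in raw_series]

raw_fluorometer = [412, 430, 389, 455]        # raw instrument counts
chlorophyll = calibrate(raw_fluorometer, gain=0.0021, offset=-0.05)

# Quality control: flag values outside a plausible physical range
flags = ["good" if 0.0 <= v <= 50.0 else "suspect" for v in chlorophyll]
print(list(zip(chlorophyll, flags)))
```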
Monitoring Wildlife Interactions with Their Environment: An Interdisciplinary Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles-Smith, Lauren E.; Domínguez, Ignacio X.; Fornaro, Robert J.
In a rapidly changing world, wildlife ecologists strive to correctly model and predict complex relationships between animals and their environment, which facilitates management decisions impacting public policy to conserve and protect delicate ecosystems. Recent advances in monitoring systems span scientific domains, including animal and weather monitoring devices and landscape classification mapping techniques. The current challenge is how to combine and use detailed output from various sources to address questions spanning multiple disciplines. The WolfScout wildlife and weather tracking system is a software tool capable of filling this niche. WolfScout automates integration of the latest technological advances in wildlife GPS collars, weather stations, drought conditions, severe weather reports, and animal demographic information. The WolfScout database stores a variety of classified landscape maps including natural and manmade features. Additionally, WolfScout's spatial database management system allows users to calculate distances between animals' locations and landscape characteristics, which are linked to the best approximation of environmental conditions at the animal's location during the interaction. Through a secure website, data are exported in formats compatible with multiple software programs including R and ArcGIS. The WolfScout design promotes interoperability in data, between researchers, and software applications while standardizing analyses of animal interactions with their environment.
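One core spatial computation such a system automates is the distance from an animal's GPS fix to mapped landscape features. The sketch below uses the great-circle (haversine) formula with made-up coordinates and feature names; WolfScout itself performs such calculations within its spatial database management system.

```python
# A minimal sketch: haversine distance from an animal's GPS fix to each
# mapped landscape feature, keeping the nearest. Coordinates are made up.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    R = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dlmb/2)**2
    return 2 * R * math.asin(math.sqrt(a))

gps_fix = (35.78, -78.64)                      # one collar location
features = {"stream": (35.80, -78.60), "road": (35.70, -78.70)}
nearest = min(features.items(),
              key=lambda kv: haversine_km(*gps_fix, *kv[1]))
print(nearest[0], round(haversine_km(*gps_fix, *nearest[1]), 2), "km")
```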
NASA Astrophysics Data System (ADS)
Guernion, Muriel; Hoeffner, Kevin; Guillocheau, Sarah; Hotte, Hoël; Cylly, Daniel; Piron, Denis; Cluzeau, Daniel; Hervé, Morgane; Nicolai, Annegret; Pérès, Guénola
2017-04-01
Scientists have become more and more interested in earthworms because of their impact on soil functioning and their importance in the provision of many ecosystem services. To improve knowledge of soil biodiversity and integrate earthworms into soil quality diagnostics, it appeared necessary to gain a large amount of data on their distribution. Since 2011, the University of Rennes 1 has developed a collaborative science project called Observatoire Participatif des Vers de Terre (OPVT, participative earthworm observatory). It has several purposes: i) to offer a simple tool for soil biodiversity evaluation in natural and anthropic soils through earthworm assessment, ii) to offer training to farmers, territory managers, gardeners, and pupils on soil ecology, iii) to build a database of reference values on earthworms in different habitats, iv) to propose a website (https://ecobiosoil.univ-rennes1.fr/OPVT_accueil.php) providing, for example, general scientific background (earthworm ecology and impacts of soil management), sampling protocols, and online visualization of results (data processing and earthworm mapping). More than 5000 plots have been prospected since the opening of the project in 2011. Initially available to anyone on a voluntary basis, this project is also used by the French Ministry of Agriculture to carry out a scientific survey throughout the French territory.
Research resources: curating the new eagle-i discovery system
Vasilevsky, Nicole; Johnson, Tenille; Corday, Karen; Torniai, Carlo; Brush, Matthew; Segerdell, Erik; Wilson, Melanie; Shaffer, Chris; Robinson, David; Haendel, Melissa
2012-01-01
Development of biocuration processes and guidelines for new data types or projects is a challenging task. Each project finds its way toward defining annotation standards and ensuring data consistency with varying degrees of planning and different tools to support and/or report on consistency. Further, this process may be data type specific even within the context of a single project. This article describes our experiences with eagle-i, a 2-year pilot project to develop a federated network of data repositories in which unpublished, unshared or otherwise ‘invisible’ scientific resources could be inventoried and made accessible to the scientific community. During the course of eagle-i development, the main challenges we experienced related to the difficulty of collecting and curating data while the system and the data model were simultaneously built, and a deficiency and diversity of data management strategies in the laboratories from which the source data was obtained. We discuss our approach to biocuration and the importance of improving information management strategies to the research process, specifically with regard to the inventorying and usage of research resources. Finally, we highlight the commonalities and differences between eagle-i and similar efforts with the hope that our lessons learned will assist other biocuration endeavors. Database URL: www.eagle-i.net PMID:22434835
Demopoulos, Amanda W.J.; Foster, Ann M.; Jones, Michal L.; Gualtieri, Daniel J.
2011-01-01
The Geospatial Characteristics GeoPDF of Florida's Coastal and Offshore Environments is a comprehensive collection of geospatial data describing the political and natural resources of Florida. This interactive map provides spatial information on bathymetry, sand resources, military areas, marine protected areas, cultural resources, locations of submerged cables, and shipping routes. The map should be useful to coastal resource managers and others interested in the administrative and political boundaries of Florida's coastal and offshore region. In particular, as oil and gas explorations continue to expand, the map may be used to explore information regarding sensitive areas and resources in the State of Florida. Users of this geospatial database will find that they have access to synthesized information in a variety of scientific disciplines concerning Florida's coastal zone. This powerful tool provides a one-stop assembly of data that can be tailored to fit the needs of many natural resource managers.
The Chandra Source Catalog 2.0: Building The Catalog
NASA Astrophysics Data System (ADS)
Grier, John D.; Plummer, David A.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
To build release 2.0 of the Chandra Source Catalog (CSC2), we require scientific software tools and processing pipelines to evaluate and analyze the data. Additionally, software and hardware infrastructure is needed to coordinate and distribute pipeline execution, manage data I/O, and handle data for Quality Assurance (QA) intervention. We also provide data product staging for archive ingestion. Release 2 utilizes a database-driven system used for integration and production. Included are four distinct instances of the Automatic Processing (AP) system (Source Detection, Master Match, Source Properties and Convex Hulls) and a high performance computing (HPC) cluster that is managed to provide efficient catalog processing. In this poster we highlight the internal systems developed to meet the CSC2 challenge. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
Spatial Databases for CalVO Volcanoes: Current Status and Future Directions
NASA Astrophysics Data System (ADS)
Ramsey, D. W.
2013-12-01
The U.S. Geological Survey (USGS) California Volcano Observatory (CalVO) aims to advance scientific understanding of volcanic processes and to lessen harmful impacts of volcanic activity in California and Nevada. Within CalVO's area of responsibility, ten volcanoes or volcanic centers have been identified by a national volcanic threat assessment in support of developing the U.S. National Volcano Early Warning System (NVEWS) as posing moderate, high, or very high threats to surrounding communities based on their recent eruptive histories and their proximity to vulnerable people, property, and infrastructure. To better understand the extent of potential hazards at these and other volcanoes and volcanic centers, the USGS Volcano Science Center (VSC) is continually compiling spatial databases of volcano information, including: geologic mapping, hazards assessment maps, locations of geochemical and geochronological samples, and the distribution of volcanic vents. This digital mapping effort has been ongoing for over 15 years and early databases are being converted to match recent datasets compiled with new data models designed for use in: 1) generating hazard zones, 2) evaluating risk to population and infrastructure, 3) numerical hazard modeling, and 4) display and query on the CalVO as well as other VSC and USGS websites. In these capacities, spatial databases of CalVO volcanoes and their derivative map products provide an integrated and readily accessible framework of VSC hazards science to colleagues, emergency managers, and the general public.
Database Searching by Managers.
ERIC Educational Resources Information Center
Arnold, Stephen E.
Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…
Implementation of the CUAHSI information system for regional hydrological research and workflow
NASA Astrophysics Data System (ADS)
Bugaets, Andrey; Gartsman, Boris; Bugaets, Nadezhda; Krasnopeyev, Sergey; Krasnopeyeva, Tatyana; Sokolov, Oleg; Gonchukov, Leonid
2013-04-01
Environmental research and education have become increasingly data-intensive as a result of the proliferation of digital technologies, instrumentation, and pervasive networks through which data are collected, generated, shared, and analyzed. Over the next decade, it is likely that science and engineering research will produce more scientific data than has been created over the whole of human history (Cox et al., 2006). Successfully using these data to achieve new scientific breakthroughs depends on the ability to access, organize, integrate, and analyze these large datasets. The new project of PGI FEB RAS (http://tig.dvo.ru), FERHRI (www.ferhri.org) and Primgidromet (www.primgidromet.ru) is focused on the creation of an open, unified hydrological information system, conforming to international standards, to support hydrological investigation, water management, and forecast systems. Within the hydrologic science community, the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (http://his.cuahsi.org) has been developing a distributed network of data sources and functions that are integrated using web services and that provide access to data, tools, and models that enable synthesis, visualization, and evaluation of hydrologic system behavior. On top of the CUAHSI technologies, the first two template databases were developed for primary datasets of special observations on experimental basins in the Far East region of Russia. The first database contains data from special observations performed at the former (1957-1994) Primorskaya Water-Balance Station (1500 km2). Measurements were carried out at 20 hydrological and 40 rain gauging stations and were published as special series, but only as hardcopy books. The database provides raw data from loggers with hourly and daily time support. The second database, called «FarEastHydro», provides published standard daily measurements from the Roshydromet observation network (200 hydrological and meteorological stations) for the period 1930 through 1990. Both data resources are maintained in test mode at the project site http://gis.dvo.ru:81/, which is permanently updated. After this first success, the decision was made to use the CUAHSI technology as a basis for the development of a hydrological information system to support data publishing and the workflow of Primgidromet, the regional office of the Federal State Hydrometeorological Agency. At the moment, the Primgidromet observation network is equipped with 34 automatic SEBA hydrological pressure sensor pneumatic gauges PS-Light-2 and 36 automatic SEBA weather stations. Large datasets generated by sensor networks are organized and stored within a central ODM database, which allows the data to be unambiguously interpreted with sufficient metadata and provides a traceable heritage from raw measurements to usable information. Organization of the data within a central CUAHSI ODM database was the most critical step, with several important implications. This technology is widespread and well documented, and it ensures that all datasets are publicly available and readily used by other investigators and developers to support additional analyses and hydrological modeling. Implementation of ODM within a relational database management system eliminates potential data manipulation errors and intermediate data processing steps. Wrapping the CUAHSI WaterOneFlow web service into an OpenMI 2.0 linkable component (www.openmi.org) allows seamless integration with well-known hydrological modeling systems.
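A heavily simplified sketch of the ODM idea referenced above: every observation is a single row in a DataValues table carrying enough linked metadata (site, variable, time) to be unambiguous. The real CUAHSI ODM schema has many more tables and columns; the cut-down version and sample values below are for illustration only.

```python
# A minimal sketch of an ODM-style observations store (simplified subset
# of the CUAHSI ODM schema; values are invented).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Sites      (SiteID INTEGER PRIMARY KEY, SiteName TEXT,
                         Latitude REAL, Longitude REAL);
CREATE TABLE Variables  (VariableID INTEGER PRIMARY KEY,
                         VariableName TEXT, Units TEXT);
CREATE TABLE DataValues (ValueID INTEGER PRIMARY KEY, DataValue REAL,
                         LocalDateTime TEXT,
                         SiteID INTEGER REFERENCES Sites(SiteID),
                         VariableID INTEGER REFERENCES Variables(VariableID));
""")
con.execute("INSERT INTO Sites VALUES (1, 'Primorskaya WBS', 43.5, 132.2)")
con.execute("INSERT INTO Variables VALUES (1, 'Discharge', 'm3/s')")
con.execute("INSERT INTO DataValues VALUES (1, 12.7, '1965-06-01', 1, 1)")

# A multicriteria request: which variable, where, when, and what value
for row in con.execute("""SELECT s.SiteName, v.VariableName, d.LocalDateTime,
                                 d.DataValue, v.Units
                          FROM DataValues d
                          JOIN Sites s ON s.SiteID = d.SiteID
                          JOIN Variables v ON v.VariableID = d.VariableID"""):
    print(row)
```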
A design for the geoinformatics system
NASA Astrophysics Data System (ADS)
Allison, M. L.
2002-12-01
Informatics integrates and applies information technologies with scientific and technical disciplines. A geoinformatics system targets the spatially based sciences. The system is not a master database, but will collect pertinent information from disparate databases distributed around the world. Seamless interoperability of databases promises quantum leaps in productivity not only for scientific researchers but also for many areas of society including business and government. The system will incorporate: acquisition of analog and digital legacy data; efficient information and data retrieval mechanisms (via data mining and web services); accessibility to and application of visualization, analysis, and modeling capabilities; online workspace, software, and tutorials; GIS; integration with online scientific journal aggregates and digital libraries; access to real time data collection and dissemination; user-defined automatic notification and quality control filtering for selection of new resources; and application to field techniques such as mapping. In practical terms, such a system will provide the ability to gather data over the Web from a variety of distributed sources, regardless of computer operating systems, database formats, and servers. Search engines will gather data about any geographic location, above, on, or below ground, covering any geologic time, and at any scale or detail. A distributed network of digital geolibraries can archive permanent copies of databases at risk of being discontinued and those that continue to be maintained by the data authors. The geoinformatics system will generate results from widely distributed sources to function as a dynamic data network. Instead of posting a variety of pre-made tables, charts, or maps based on static databases, the interactive dynamic system creates these products on the fly, each time an inquiry is made, using the latest information in the appropriate databases. Thus, in the dynamic system, a map generated today may differ from one created yesterday and one to be created tomorrow, because the databases used to make it are constantly (and sometimes automatically) being updated.
"Hyperstat": an educational and working tool in epidemiology.
Nicolosi, A
1995-01-01
The work of a researcher in epidemiology is based on studying literature, planning studies, gathering data, analyzing data, and writing results. A researcher therefore needs to perform more or less simple calculations, consult or quote literature, consult textbooks about certain issues or procedures, and look up specific formulas. There are no programs conceived as a workstation to assist the different aspects of a researcher's work in an integrated fashion. A hypertextual system was developed which supports different stages of the epidemiologist's work. It combines database management, statistical analysis or planning, and literature searches. The software was developed on the Apple Macintosh, using Hypercard 2.1 as a database and HyperTalk as a programming language. The program is structured in 7 "stacks" or files: Procedures; Statistical Tables; Graphs; References; Text; Formulas; Help. Each stack has its own management system with an automated Table of Contents. Stacks contain "cards" which make up the databases and carry executable programs. The programs are of four kinds: association; statistical procedure; formatting (input/output); database management. The system performs general statistical procedures, procedures applicable to epidemiological studies only (follow-up and case-control), and procedures for clinical trials. All commands are given by clicking the mouse on self-explanatory "buttons". In order to perform calculations, the user only needs to enter the data into the appropriate cells and then click on the selected procedure's button. The system has a hypertextual structure. The user can go from a procedure to other cards following the preferred order of succession and according to built-in associations. The user can access different levels of knowledge or information from any stack being consulted or operated. From every card, the user can go to a selected procedure to perform statistical calculations, to the reference database management system, to the textbook in which all procedures and issues are discussed in detail, to the database of statistical formulas with an automated table of contents, to statistical tables with an automated table of contents, or to the help module. The program has a very user-friendly interface and leaves the user free to use the same format he or she would use on paper. The interface does not require special skills. It reflects the Macintosh philosophy of using windows, buttons, and the mouse. This allows the user to perform complicated calculations without losing the "feel" of the data, weigh alternatives, and run simulations. This program shares many features in common with hypertexts. It has an underlying network database where the nodes consist of text, graphics, executable procedures, and combinations of these; the nodes in the database correspond to windows on the screen; the links between the nodes in the database are visible as "active" text or icons in the windows; the text is read by following links and opening new windows. The program is especially useful as an educational tool, directed to medical and epidemiology students. The combination of computing capabilities with a textbook and databases of formulas and literature references makes the program versatile and attractive as a learning tool.
The program is also helpful in the work done at the desk, where the researcher examines results, consults literature, explores different analytic approaches, plans new studies, or writes grant proposals or scientific articles.
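One example of the kind of case-control procedure such a system bundles with its textbook and formula stacks is the odds ratio from a 2x2 table with a Woolf (log-based) confidence interval, sketched here in Python with hypothetical counts; the original ran as HyperTalk scripts inside HyperCard.

```python
# A minimal sketch of a case-control procedure: odds ratio with a Woolf
# 95% confidence interval. Counts are hypothetical.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a=exposed cases, b=unexposed cases,
    c=exposed controls, d=unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

print(odds_ratio_ci(30, 70, 10, 90))  # OR ~ 3.86, 95% CI ~ (1.77, 8.42)
```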
Generalized Database Management System Support for Numeric Database Environments.
ERIC Educational Resources Information Center
Dominick, Wayne D.; Weathers, Peggy G.
1982-01-01
This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…
Chandra monitoring, trends, and response
NASA Astrophysics Data System (ADS)
Spitzbart, Brad D.; Wolk, Scott J.; Isobe, Takashi
2002-12-01
The Chandra X-ray Observatory was launched in July, 1999 and has yielded extraordinary scientific results. Behind the scenes, our Monitoring and Trends Analysis (MTA) system has proven to be a valuable resource. With three years' worth of on-orbit data, we have available a vast array of both telescope diagnostic information and analysis of scientific data with which to assess Observatory performance. As part of Chandra's Science Operations Team (SOT), the primary goal of MTA is to provide tools for effective decision making leading to the most efficient production of quality science output from the Observatory. We occupy a middle ground between flight operations, chiefly concerned with the health and safety of the spacecraft, and validation and verification, concerned with the scientific validity of the data taken and whether or not they fulfill the observer's requirements. In that role we provide and receive support from systems engineers, instrument experts, operations managers, and scientific users. MTA tools, products, and services include real-time monitoring and alert generation for the most mission-critical components, long-term trending of all spacecraft systems, detailed analysis of various subsystems for life expectancy or anomaly resolution, and creating and maintaining a large SQL database of relevant information. This is accomplished through the use of a wide variety of input data sources and flexible, accessible programming and analysis techniques. This paper will discuss the overall design of the system, its evolution, and the resources available.
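The limit-checking core of such real-time monitoring can be sketched simply: compare each incoming telemetry sample against per-channel limits and emit alerts for excursions. The channel names and limit values below are invented for illustration and are not MTA's actual limits.

```python
# A minimal sketch of limit-based telemetry monitoring with alerting.
# Channel names and limits are hypothetical.
LIMITS = {"acis_fp_temp_C": (-121.0, -90.0),   # (low, high)
          "battery_volts":  (24.0, 32.0)}

def check(sample: dict) -> list:
    """Return alert strings for any channel outside its limits."""
    alerts = []
    for channel, value in sample.items():
        low, high = LIMITS[channel]
        if not (low <= value <= high):
            alerts.append(f"ALERT {channel}={value} outside [{low}, {high}]")
    return alerts

print(check({"acis_fp_temp_C": -119.7, "battery_volts": 33.1}))
```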
Wiley, Emily A.; Stover, Nicholas A.
2014-01-01
Use of inquiry-based research modules in the classroom has soared over recent years, largely in response to national calls for teaching that provides experience with scientific processes and methodologies. To increase the visibility of in-class studies among interested researchers and to strengthen their impact on student learning, we have extended the typical model of inquiry-based labs to include a means for targeted dissemination of student-generated discoveries. This initiative required: 1) creating a set of research-based lab activities with the potential to yield results that a particular scientific community would find useful and 2) developing a means for immediate sharing of student-generated results. Working toward these goals, we designed guides for course-based research aimed to fulfill the need for functional annotation of the Tetrahymena thermophila genome, and developed an interactive Web database that links directly to the official Tetrahymena Genome Database for immediate, targeted dissemination of student discoveries. This combination of research via the course modules and the opportunity for students to immediately “publish” their novel results on a Web database actively used by outside scientists culminated in a motivational tool that enhanced students’ efforts to engage the scientific process and pursue additional research opportunities beyond the course. PMID:24591511
Analysis of commercial and public bioactivity databases.
Tiikkainen, Pekka; Franke, Lutz
2012-02-27
Activity data for small molecules are invaluable in chemoinformatics. Various bioactivity databases exist containing detailed information on target proteins and quantitative binding data for small molecules extracted from journals and patents. In the current work, we have merged several public and commercial bioactivity databases into one bioactivity metabase. The molecular representation, target information, and activity data of the vendor databases were standardized. The main motivation of the work was to create a single relational database which allows fast and simple data retrieval by in-house scientists. Second, we wanted to know the amount of overlap between databases by commercial and public vendors to see whether the former contain data complementing the latter. Third, we quantified the degree of inconsistency between data sources by comparing data points derived from the same scientific article cited by more than one vendor. We found that each data source contains unique data, owing to the different scientific articles cited by the vendors. When comparing data derived from the same article, we found that inconsistencies between the vendors are common. In conclusion, using databases from different vendors is still useful, since the data overlap is not complete. It should be noted that this can be partially explained by the inconsistencies and errors in the source data.
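The inconsistency check described above can be sketched as follows: group activity data points by their shared citation, compound, and target, then flag groups where vendors report values differing beyond a tolerance. Vendor names, identifiers, and values are illustrative.

```python
# A minimal sketch of cross-vendor consistency checking: group data points
# by (article, compound, target) and flag disagreements beyond a tolerance.
from collections import defaultdict

records = [  # (vendor, article_doi, compound, target, pIC50) - invented
    ("vendorA", "10.1000/x1", "CHEMBL25", "COX-1", 6.2),
    ("vendorB", "10.1000/x1", "CHEMBL25", "COX-1", 6.2),
    ("vendorA", "10.1000/x2", "CHEMBL99", "EGFR", 7.8),
    ("vendorB", "10.1000/x2", "CHEMBL99", "EGFR", 8.4),  # inconsistent
]

by_key = defaultdict(list)
for vendor, doi, cpd, tgt, value in records:
    by_key[(doi, cpd, tgt)].append((vendor, value))

TOL = 0.3  # log units within which two reports count as "the same"
for key, vals in by_key.items():
    values = [v for _, v in vals]
    if len(vals) > 1 and max(values) - min(values) > TOL:
        print("inconsistent:", key, vals)
```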
Famulari, Stevie; Witz, Kyla
2015-01-01
Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database designed for ease of use by non-scientific users, as well as by students in an educational setting (http://www.steviefamulari.net/phytoremediation). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized by common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to aid those designing with phytoremediation in searching for potential plants that may be used to address their site's needs. The objective of the terminology section is to remove uncertainty for more inexperienced users and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.
Coad, Lauren; Leverington, Fiona; Knights, Kathryn; Geldmann, Jonas; Eassom, April; Kapos, Valerie; Kingston, Naomi; de Lima, Marcelo; Zamora, Camilo; Cuardros, Ivon; Nolte, Christoph; Burgess, Neil D.; Hockings, Marc
2015-01-01
Protected areas (PAs) are at the forefront of conservation efforts, and yet despite considerable progress towards the global target of having 17% of the world's land area within protected areas by 2020, biodiversity continues to decline. The discrepancy between increasing PA coverage and negative biodiversity trends has resulted in renewed efforts to enhance PA effectiveness. The global conservation community has conducted thousands of assessments of protected area management effectiveness (PAME), and interest in the use of these data to help measure the conservation impact of PA management interventions is high. Here, we summarize the status of PAME assessment, review the published evidence for a link between PAME assessment results and the conservation impacts of PAs, and discuss the limitations and future use of PAME data in measuring the impact of PA management interventions on conservation outcomes. We conclude that PAME data, while designed as a tool for local adaptive management, may also help to provide insights into the impact of PA management interventions from the local-to-global scale. However, the subjective and ordinal characteristics of the data present significant limitations for their application in rigorous scientific impact evaluations, a problem that should be recognized and mitigated where possible. PMID:26460133
USDA-ARS's Scientific Manuscript database
In the course of updating the scientific names of plant-associated fungi in the U.S. National Fungus Collections Databases to conform with the requirement of one scientific name for each fungal species, several scientific names currently in use were identified that should be changed to the oldest ep...
E&P data lifecycle: a case study in Petrobras Company
NASA Astrophysics Data System (ADS)
Mastella, Laura; Campinho, Vania; Alonso, João
2013-04-01
Petrobras, the biggest Brazilian petroleum company, has been studying and working on Brazilian sedimentary basins for nearly 60 years. The corporate database currently registers over 25000 wells and all their associated products (geophysical logs, cores, sidewall samples) and analyses. There are thousands of samples, descriptions, pictures, measures, and other scientific data resulting from petroleum exploration and production. These data constitute a huge scientific database which is applied to support Petrobras' economic strategy. Geological models built during the exploration phase continue to be refined during both the development and production phases: data must be continually manipulated, correlated, and integrated. As E&P assets reach maturity, a new cycle starts: data are re-analyzed and new hypotheses are made in order to increase hydrocarbon productivity. Initial geological models then evolve from the knowledge accumulated throughout all the E&P phases. Therefore, quality control must be performed in the first phases of data acquisition, i.e., during the exploration phase, to avoid reworking and loss of information. The last decade witnessed a great evolution in petroleum industry technology. As a consequence, the complexity and particulars of the information generated have increased accordingly. Current technology has also facilitated access to networks and databases, making it possible to store large amounts of information. This scenario makes available a large mass of information from different sources, which uses heterogeneous vocabulary as well as different scales and measurement units. In this context, knowledge might be diluted and the total amount of information cannot be applied in the E&P process. In order to provide adequate data governance, data input is controlled by rules, standards, and policies, implemented by corporate software systems. Petrobras' integrated E&P database is a centralized repository to which all E&P systems have access. The quality of the data that go into the database can be increased by means of information management practices: • data validation, • language internationalization, • dictionaries, patterns, metadata. Moreover, stored data must be kept consistent, and any changes to the data should be registered while maintaining, if possible, the original data, associating the modification with its author, timestamp, and reason. These practices lead to the creation of a database that serves and benefits the company's knowledge. Information retrieval and visualization is one of the main issues facing petroleum companies. In order to make significant information available to end-users, it is fundamental to have an efficient data integration strategy. The integration of E&P data, such as geological, geophysical, geographical, and operational data, is the end goal of the exploratory activities. Petrobras' corporate systems are evolving towards it so as to make available various data from diverse sources and to create a dashboard that can be easily accessed at any time by geoscientists and reservoir engineers. The main goal is to maintain the scientific integrity of information, from generators to consumers, during the entire E&P data life cycle.
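The change-registration practice described above, keeping the original value and recording author, timestamp, and reason for every modification, can be sketched as a simple audit trail; the field names and values here are hypothetical, not Petrobras' actual data model.

```python
# A minimal sketch of an audited attribute: every change preserves the
# previous value together with author, timestamp, and reason.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditedValue:
    value: object
    history: list = field(default_factory=list)

    def change(self, new_value, author, reason):
        # preserve the previous value before overwriting it
        self.history.append({
            "old_value": self.value,
            "author": author,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "reason": reason,
        })
        self.value = new_value

top_depth = AuditedValue(2413.0)  # hypothetical well-log attribute, metres
top_depth.change(2415.5, author="jalonso", reason="re-picked on new log run")
print(top_depth.value, top_depth.history)
```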
Training Packages: The Scientific Management of Education.
ERIC Educational Resources Information Center
Hunter, John
The theory of scientific management was established as a way to increase workers' productivity. The following are among the key principles underpinning scientific management: task simplification and division of labor boost productivity; management must control the planning of work down to its minutiae; and remuneration should be based on output.…
Yue, L Q; Pi, X Q; Fan, X G
2016-07-20
To analyze the current research status of evidence-based nursing of burns in mainland China, in order to provide a basis for improving the scientific rigor of burn nursing practice. Chinese scientific articles on evidence-based nursing of burns in mainland China published from January 1997 to December 2015 were retrieved from the Chinese Biology Medicine disc, Chinese Journals Full-text Database, Wanfang Database, and VIP Database. From the results retrieved, data with regard to publication year, region of affiliation of the first author, journal distribution, literature type, literature quality assessment, topic of evidence-based research, fund program support, implementation of evidence-based practice steps, and language and quantity of references were collected. Data were processed with Microsoft Excel software. A total of 50 articles conforming to the criteria were retrieved. (1) Articles about evidence-based nursing of burns first appeared in 2004. Compared with that in the previous year, there was no obvious increase in the number of relevant articles in each year from 2004 to 2011. The number of articles in 2012 was obviously higher than that in each year from 2004 to 2011, while the number of articles in each year from 2012 to 2015 was not obviously increased compared with that in the previous year. (2) The regions of affiliation of the first authors were distributed across 13 provinces, 3 minority autonomous regions, and 3 municipalities, with the largest share in East China, followed by Northwest China and Southwest China. (3) The articles were published in 32 domestic journals, comprising 9 (28.12%) nursing journals, 5 (15.62%) burn-related medical journals, and 18 (56.25%) other journals. Twenty (40%) articles were published in Source Journals for Chinese Scientific and Technical Papers. (4) Regarding literature type, 31 (62%) articles dealt with clinical experience, 17 (34%) with scientific research, and 2 (4%) with case reports. (5) There were 21 quantitative study articles and 29 narrative study articles, all of low quality. (6) The topics of evidence-based research in these articles were mainly burn rehabilitation, burn nursing technology, pediatric burns, inhalation injury and airway management, and complications of burn injury. Only one study was supported by a fund program. (7) Only one article described complete evidence-based practice steps. (8) The literature cited 57 English articles as references, an average of 1.14 per article, and 316 Chinese articles, an average of 6.32 per article. The concept of evidence-based nursing of burns has initially formed in mainland China. The number of relevant articles is on the rise, but their quality needs to be further improved. There is an urgent need to improve nurses' understanding of evidence-based nursing and their command of the methods of evidence-based practice through on-the-job training, so as to improve the scientific rigor and effectiveness of burn nursing.
Computer Science Research at Langley
NASA Technical Reports Server (NTRS)
Voigt, S. J. (Editor)
1982-01-01
A workshop was held at Langley Research Center, November 2-5, 1981, to highlight ongoing computer science research at Langley and to identify additional areas of research based upon the computer user requirements. A panel discussion was held in each of nine application areas, and these are summarized in the proceedings. Slides presented by the invited speakers are also included. A survey of scientific, business, data reduction, and microprocessor computer users helped identify areas of focus for the workshop. Several areas of computer science which are of most concern to the Langley computer users were identified during the workshop discussions. These include graphics, distributed processing, programmer support systems and tools, database management, and numerical methods.
SPD-based Logistics Management Model of Medical Consumables in Hospitals
LIU, Tongzhu; SHEN, Aizong; HU, Xiaojian; TONG, Guixian; GU, Wei; YANG, Shanlin
2016-01-01
Background: With the rapid development of health services, the progress of medical science and technology, and the improvement of materials research, the consumption of medical consumables (MCs) in medical activities has increased in recent years. However, owing to the lack of effective management methods and the complexity of MCs, there are several management problems, including MC waste, low management efficiency, high management difficulty, and frequent medical accidents. Therefore, there is urgent need for an effective logistics management model to handle these problems and challenges in hospitals. Methods: We reviewed books and scientific literature (by searching articles published from 2010 to 2015 in the Engineering Village database) to understand supply chain related theories and methods, and performed field investigations in hospitals across many cities to determine the actual state of MC logistics management in Chinese hospitals. Results: We describe the definition, physical model, construction, and logistics operation processes of the supply, processing, and distribution (SPD) model of MC logistics, building on the traditional SPD model. With the establishment of a supply-procurement platform and a logistics lean management system, we applied the model to the MC logistics management of Anhui Provincial Hospital with good results. Conclusion: The SPD model plays a critical role in optimizing the logistics procedures of MCs, improving the management efficiency of logistics, and reducing the costs of logistics of hospitals in China. PMID:27957435
NASA Astrophysics Data System (ADS)
Fleury, Laurence; Brissebrat, Guillaume; Boichard, Jean-Luc; Cloché, Sophie; Mière, Arnaud; Moulaye, Oumarou; Ramage, Karim; Favot, Florence; Boulanger, Damien
2015-04-01
In the framework of the African Monsoon Multidisciplinary Analyses (AMMA) programme, several tools have been developed in order to boost the data and information exchange between researchers from different disciplines. The AMMA information system includes (i) a user-friendly data management and dissemination system, (ii) quasi real-time display websites and (iii) a scientific paper exchange collaborative tool. The AMMA information system is enriched by past and ongoing projects (IMPETUS, FENNEC, ESCAPE, QweCI, ACASIS, DACCIWA...) addressing meteorology, atmospheric chemistry, extreme events, health, adaptation of human societies... It is becoming a reference information system on environmental issues in West Africa. (i) The projects include airborne, ground-based and ocean measurements, social science surveys, satellite data use, modelling studies and value-added product development. Therefore, the AMMA data portal enables to access a great amount and a large variety of data: - 250 local observation datasets, that cover many geophysical components (atmosphere, ocean, soil, vegetation) and human activities (agronomy, health). They have been collected by operational networks since 1850, long term monitoring research networks (CATCH, IDAF, PIRATA...) and intensive scientific campaigns; - 1350 outputs of a socio-economics questionnaire; - 60 operational satellite products and several research products; - 10 output sets of meteorological and ocean operational models and 15 of research simulations. Data documentation complies with metadata international standards, and data are delivered into standard formats. The data request interface takes full advantage of the database relational structure and enables users to elaborate multicriteria requests (period, area, property, property value…). The AMMA data portal counts about 900 registered users, and 50 data requests every month. The AMMA databases and data portal have been developed and are operated jointly by SEDOO and ESPRI in France: http://database.amma-international.org. The complete system is fully duplicated and operated by CRA in Niger: http://amma.agrhymet.ne/amma-data. (ii) A day-to-day chart display software has been designed and operated in order to monitor meteorological and environment information and to meet the observational team needs during the AMMA 2006 SOP (http://aoc.amma-international.org) and FENNEC 2011 campaign (http://fenoc.sedoo.fr). At present the websites constitute a synthetic view on the campaigns and a preliminary investigation tool for researchers. Since 2011, the same application enables a group of French and Senegalese researchers and forecasters to exchange in near real-time physical indices and diagnosis calculated from numerical weather operational forecasts, satellite products and in situ operational observations along the monsoon season, in order to better assess, understand and anticipate the monsoon intraseasonal variability (http://misva.sedoo.fr). Another similar website is dedicated to diagnosis and forecast of heat waves in West Africa (http://acasis.sedoo.fr). It aims at becoming an operational component for national early warning systems. (iii) A collaborative WIKINDX tool has been set on line in order to gather together scientific publications, theses and communications of interest: http://biblio.amma-international.org. At present the bibliographic database counts about 1200 references. It is the most exhaustive document collection about the West African monsoon available for all. 
Every scientist is invited to make use of the AMMA online tools and data. Scientists or project leaders who have management needs for existing or future datasets concerning West Africa are welcome to use the AMMA database framework and to contact ammaAdmin@sedoo.fr.
Outreach and online training services at the Saccharomyces Genome Database.
MacPherson, Kevin A; Starr, Barry; Wong, Edith D; Dalusag, Kyla S; Hellerstedt, Sage T; Lang, Olivia W; Nash, Robert S; Skrzypek, Marek S; Engel, Stacia R; Cherry, J Michael
2017-01-01
The Saccharomyces Genome Database (SGD; www.yeastgenome.org), the primary genetics and genomics resource for the budding yeast S. cerevisiae, provides free public access to expertly curated information about the yeast genome and its gene products. As the central hub for the yeast research community, SGD engages in a variety of social outreach efforts to inform our users about new developments, promote collaboration, increase public awareness of the importance of yeast to biomedical research, and facilitate scientific discovery. Here we describe these various outreach methods, from networking at scientific conferences to the use of online media such as blog posts and webinars, and include our perspectives on the benefits provided by outreach activities for model organism databases. http://www.yeastgenome.org
The use of the Hirsch index in benchmarking hepatic surgery research.
Cucchetti, Alessandro; Mazzotti, Federico; Pellegrini, Sara; Cescon, Matteo; Maroni, Lorenzo; Ercolani, Giorgio; Pinna, Antonio Daniele
2013-10-01
The Hirsch index (h-index) is recognized as an effective way to summarize an individual's scientific research output. However, a benchmark for evaluating surgeon scientists in the field of hepatic surgery is still not available. A total of 3,251 authors who published between 1949 and 2011 were identified using the Scopus identification number. The h-index, the total number of cited documents, the total number of citations, and the scientific age were calculated for each author using both Scopus and Google Scholar. The median h-index was 6 and the median scientific age, assessed with Google Scholar, was 19 years. The numbers of cited documents, numbers of citations, and h-indexes obtained from Scopus and Google Scholar showed good correlation with one another; however, the results from the 2 databases were modified in different ways by scientific age. By plotting scientific age against h-index percentiles, h-index growth charts for both the Scopus database and Google Scholar were produced. This analysis provides a first benchmark for assessing surgeon scientists' productivity in the field of liver surgery. Copyright © 2013 Elsevier Inc. All rights reserved.
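The h-index itself is straightforward to compute from a list of per-paper citation counts: it is the largest h such that the author has h papers with at least h citations each. A minimal sketch (illustrative only, not the authors' Scopus/Google Scholar pipeline):

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for rank, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers with these citation counts give an h-index of 3.
    print(h_index([10, 8, 5, 2, 1]))  # -> 3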
Building Databases for Education. ERIC Digest.
ERIC Educational Resources Information Center
Klausmeier, Jane A.
This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…
Auer, Jorg A; Goodship, Allen; Arnoczky, Steven; Pearce, Simon; Price, Jill; Claes, Lutz; von Rechenberg, Brigitte; Hofmann-Amtenbrinck, Margarethe; Schneider, Erich; Müller-Terpitz, R; Thiele, F; Rippe, Klaus-Peter; Grainger, David W
2007-01-01
Background: In an attempt to establish some consensus on the proper use and design of experimental animal models in musculoskeletal research, AOVET (the veterinary specialty group of the AO Foundation), in concert with the AO Research Institute (ARI) and the European Academy for the Study of Scientific and Technological Advance, convened a group of musculoskeletal researchers, veterinarians, legal experts, and ethicists to discuss, in a frank and open forum, the use of animals in musculoskeletal research. Methods: The group narrowed the field to fracture research. The consensus opinion resulting from this workshop can be summarized as follows. Results & Conclusion: Anaesthesia and pain management protocols for research animals should follow the standard protocols applied in clinical work for the species involved; this will improve morbidity and mortality outcomes. A database should be established to facilitate the selection of anaesthesia and pain management protocols for specific experimental surgical procedures and adopted as an International Standard (IS) according to the animal species selected. A list of 10 golden rules and requirements for the conduct of animal experiments in musculoskeletal research was drawn up, comprising: 1) intelligent study designs that yield appropriate answers; 2) minimal complication rates (5 to max. 10%); 3) defined end-points for both welfare and scientific outputs, analogous to quality assessment (QA) audits of protocols in GLP studies; 4) sufficient detail in the materials and methods applied; 5) attention to potentially confounding variables (genetic background, seasonal, hormonal, size, histological, and biomechanical differences); 6) post-operative management with emphasis on analgesia and follow-up examinations; 7) study protocols that satisfy the criteria established for a "justified animal study"; 8) surgical expertise to conduct surgery on animals; 9) pilot studies as a critical part of model validation and of powering the definitive study design; and 10) criteria for funding agencies to include requirements related to animal experiments in their overall scientific proposal review protocols. Such agencies are also encouraged to seriously consider and adopt the recommendations described here when awarding funds for specific projects. Specific new requirements and mandates, related both to improving the welfare and to the scientific rigour of animal-based research models, are urgently needed as part of the international harmonization of standards. PMID:17678534
JoVE: the Journal of Visualized Experiments.
Vardell, Emily
2015-01-01
The Journal of Visualized Experiments (JoVE) is the world's first scientific video journal and is designed to communicate research and scientific methods in an innovative, intuitive way. JoVE includes a wide range of biomedical videos, from biology to immunology and from bioengineering to clinical and translational medicine. This column describes the browsing and searching capabilities of JoVE, as well as its additional features (including the JoVE Scientific Education Database designed for students in scientific fields).
Development of USDA's expanded flavonoid database: A Tool for Epidemiological Research
USDA-ARS's Scientific Manuscript database
The scientific community continues to be interested in potential links between flavonoid intakes and beneficial health effects associated with certain chronic diseases such as cardiovascular diseases, some cancers and type 2 diabetes. Three separate flavonoid databases (Flavonoids (5 subclasses: fl...
A framework for integration of scientific applications into the OpenTopography workflow
NASA Astrophysics Data System (ADS)
Nandigam, V.; Crosby, C.; Baru, C.
2012-12-01
The NSF-funded OpenTopography facility provides online access to Earth science-oriented high-resolution LIDAR topography data, online processing tools, and derivative products. The underlying cyberinfrastructure employs a multi-tier service-oriented architecture comprising an infrastructure tier, a processing services tier, and an application tier. The infrastructure tier consists of storage and compute resources as well as supporting databases. The services tier consists of the set of processing routines, each deployed as a Web service. The applications tier provides client interfaces to the system (e.g., the portal). We propose a "pluggable" infrastructure design that will allow new scientific algorithms and processing routines developed and maintained by the community to be integrated into the OpenTopography system, so that the wider earth science community can benefit from their availability. All core components in OpenTopography are available as Web services using a customized open-source Opal toolkit. The Opal toolkit provides mechanisms to manage and track job submissions with the help of a back-end database, and allows monitoring of job and system status through charting tools. All core components in OpenTopography have been developed, maintained and wrapped as Web services using Opal by OpenTopography developers. However, as the scientific community develops new processing and analysis approaches, this integration approach does not scale efficiently. Most new scientific applications have their own active development teams performing regular updates, maintenance and other improvements. It would be optimal to have each application co-located where its developers can continue to work on it actively, while still making it accessible within the OpenTopography workflow for processing capabilities. We will utilize a software framework for remote integration of these scientific applications into the OpenTopography system. This will be accomplished by virtually extending the OpenTopography service over the various infrastructures running these scientific applications and processing routines. It involves packaging and distributing a customized instance of the Opal toolkit that wraps the software application as an Opal-based web service and integrates it into the OpenTopography framework. We plan to make this as automated as possible. A structured specification of service inputs and outputs, along with metadata annotations encoded in XML, can be utilized to automate the generation of user interfaces, with appropriate tooltips and user help features, and the generation of other internal software. The OpenTopography Opal toolkit will also include customizations that enable security authentication and authorization, and the ability to write application usage and job statistics back to the OpenTopography databases. This usage information can then be reported to the original service providers and used for auditing and performance improvements. This pluggable framework will enable application developers to continue enhancing their applications while making the latest iteration available in a timely manner to the earth sciences community. It will also help establish an overall framework that other scientific application providers can use going forward.
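As a sketch of how such an XML specification of service inputs and outputs could drive user-interface generation, the snippet below parses a small hypothetical spec and prints the form fields it implies. The element and attribute names are invented, not the actual Opal or OpenTopography schema.

    import xml.etree.ElementTree as ET

    # Hypothetical service specification for illustration only.
    SPEC = """
    <service name="ground_filter">
      <input name="resolution" type="float" help="Output grid resolution (m)"/>
      <input name="algorithm" type="string" help="Filtering algorithm to apply"/>
      <output name="dem" type="geotiff"/>
    </service>
    """

    def describe(spec_xml):
        """Generate a plain-text form description from the service spec."""
        root = ET.fromstring(spec_xml)
        print(f"Service: {root.get('name')}")
        for inp in root.findall("input"):
            print(f"  field {inp.get('name')} ({inp.get('type')}): {inp.get('help')}")
        for out in root.findall("output"):
            print(f"  produces {out.get('name')} as {out.get('type')}")

    describe(SPEC)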
Bibliometric analysis of theses and dissertations on prematurity in the Capes database.
Pizzani, Luciana; Lopes, Juliana de Fátima; Manzini, Mariana Gurian; Martinez, Claudia Maria Simões
2012-01-01
The objective was to perform a bibliometric analysis of theses and dissertations on prematurity in the Capes database from 1987 to 2009. This is a descriptive study that used the bibliometric approach to produce indicators of scientific production. Operationally, the methodology was developed in four steps: 1) construction of the theoretical framework; 2) collection of data from the abstracts of theses and dissertations available in the Capes Thesis Database that addressed prematurity in the period 1987 to 2009; 3) organization, processing and construction of bibliometric indicators; 4) analysis and interpretation of results. The results show an increase in the scientific literature on prematurity during the period 1987 to 2009; production is represented mostly by dissertations, and the most prominent institution was the Universidade de São Paulo. The studies are directed toward low birth weight and very low birth weight preterm newborns, encompassing the social, biological and multifactorial causes of prematurity. There is a qualified, diverse and substantial scientific literature on prematurity developed in various graduate programs of higher education institutions in Brazil.
Konnichi Wa, Nihon (Hello, Japan!): Best Databases for Business, Technology and News.
ERIC Educational Resources Information Center
Hoetker, Glenn
1994-01-01
Describes online information sources for Japanese business, scientific, and technical developments. Highlights include English language materials versus the need for translation from Japanese; government research; scientific and technical information; patent information; corporate financial information; business information from newswires and…
NASA Astrophysics Data System (ADS)
Sheldon, W.; Chamblee, J.; Cary, R. H.
2013-12-01
Environmental scientists are under increasing pressure from funding agencies and journal publishers to release quality-controlled data in a timely manner, as well as to produce comprehensive metadata for submitting data to long-term archives (e.g. DataONE, Dryad and BCO-DMO). At the same time, the volume of digital data that researchers collect and manage is increasing rapidly due to advances in high frequency electronic data collection from flux towers, instrumented moorings and sensor networks. However, few pre-built software tools are available to meet these data management needs, and those tools that do exist typically focus on part of the data management lifecycle or one class of data. The GCE Data Toolbox has proven to be both a generalized and effective software solution for environmental data management in the Long Term Ecological Research Network (LTER). This open source MATLAB software library, developed by the Georgia Coastal Ecosystems LTER program, integrates metadata capture, creation and management with data processing, quality control and analysis to support the entire data lifecycle. Raw data can be imported directly from common data logger formats (e.g. SeaBird, Campbell Scientific, YSI, Hobo), as well as delimited text files, MATLAB files and relational database queries. Basic metadata are derived from the data source itself (e.g. parsed from file headers) and by value inspection, and then augmented using editable metadata templates containing boilerplate documentation, attribute descriptors, code definitions and quality control rules. Data and metadata content, quality control rules and qualifier flags are then managed together in a robust data structure that supports database functionality and ensures data validity throughout processing. A growing suite of metadata-aware editing, quality control, analysis and synthesis tools are provided with the software to support managing data using graphical forms and command-line functions, as well as developing automated workflows for unattended processing. Finalized data and structured metadata can be exported in a wide variety of text and MATLAB formats or uploaded to a relational database for long-term archiving and distribution. The GCE Data Toolbox can be used as a complete, light-weight solution for environmental data and metadata management, but it can also be used in conjunction with other cyber infrastructure to provide a more comprehensive solution. For example, newly acquired data can be retrieved from a Data Turbine or Campbell LoggerNet Database server for quality control and processing, then transformed to CUAHSI Observations Data Model format and uploaded to a HydroServer for distribution through the CUAHSI Hydrologic Information System. The GCE Data Toolbox can also be leveraged in analytical workflows developed using Kepler or other systems that support MATLAB integration or tool chaining. This software can therefore be leveraged in many ways to help researchers manage, analyze and distribute the data they collect.
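To illustrate the pattern of template-driven quality control that such tools apply, here is a small generic sketch; it is not the GCE Data Toolbox's actual MATLAB API, and the column names, rules and flag codes are invented.

    import math

    # Hypothetical QC rules of the kind a metadata template might declare.
    qc_rules = {
        "water_temp_C": [("range", -2.0, 40.0), ("missing",)],
        "salinity_psu": [("range", 0.0, 42.0), ("missing",)],
    }

    def apply_qc(record):
        """Return {column: flag}: '' (ok), 'Q' (questionable) or 'M' (missing)."""
        flags = {}
        for col, rules in qc_rules.items():
            value = record.get(col)
            flag = ""
            for rule in rules:
                if rule[0] == "missing" and (value is None or
                        (isinstance(value, float) and math.isnan(value))):
                    flag = "M"
                elif rule[0] == "range" and value is not None and not (rule[1] <= value <= rule[2]):
                    flag = "Q"
            flags[col] = flag
        return flags

    print(apply_qc({"water_temp_C": 55.3, "salinity_psu": None}))
    # -> {'water_temp_C': 'Q', 'salinity_psu': 'M'}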
Adding Data Management Services to Parallel File Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandt, Scott
2015-03-04
The objective of this project, called DAMASC for "Data Management in Scientific Computing", is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit, and the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage, via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file-based ecosystem; (3) common optimizations, e.g., indexing and caching, are readily supported across several file formats, avoiding effort duplication; and (4) performance improves significantly, as data processing is integrated more tightly with data storage. Our key contributions are: SciHadoop, which explores changes to MapReduce's assumptions by taking advantage of the semantics of structured data while preserving MapReduce's failure and resource management; DataMods, which extends common abstractions of parallel file systems so that they become programmable, can be extended to natively support a variety of data models, and can be hooked into emerging distributed runtimes such as Stanford's Legion; and Miso, which combines Hadoop and relational data warehousing to minimize time to insight, taking into account the overhead of ingesting data into the data warehouse.
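As a loose analogy for the declarative queries and views Damasc provides natively in the file system, the sketch below loads a small structured "file" into SQLite and queries it through a view. Unlike Damasc, this copies the data out of its native format; it only illustrates the declarative interface the abstract describes.

    import csv, sqlite3, io

    raw = io.StringIO("time,sensor,value\n0,a,1.5\n1,a,2.5\n1,b,9.0\n")
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE readings (time INT, sensor TEXT, value REAL)")
    conn.executemany("INSERT INTO readings VALUES (?,?,?)",
                     [(int(t), s, float(v)) for t, s, v in list(csv.reader(raw))[1:]])

    # A view presents a structured subset without copying the data again.
    conn.execute("CREATE VIEW sensor_a AS SELECT time, value FROM readings WHERE sensor='a'")
    print(conn.execute("SELECT avg(value) FROM sensor_a").fetchone())  # (2.0,)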
ERIC Educational Resources Information Center
American Society for Information Science, Washington, DC.
This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…
Impact factor evolution of nursing research journals: 2009 to 2014.
Cáceres, Macarena C; Guerrero-Martín, Jorge; González-Morales, Borja; Pérez-Civantos, Demetrio V; Carreto-Lemus, Maria A; Durán-Gómez, Noelia
The use of bibliometric indicators (impact factor [IF], impact index, h-index, etc.) is now believed to be a fundamental measure of the quality of scientific research output. In this context, the presence of scientific nursing journals in international databases and the factors influencing their impact ratings is being widely analyzed. The aim of this study was to analyze the presence of scientific nursing journals in international databases and track the changes in their IF. A secondary analysis was carried out on data for the years 2009 to 2014 held in the JCR database (subject category: nursing). Additionally, the presence of scientific nursing journals in Medline, CINAHL, Scopus, and SJR was analyzed. During the period studied, the number of journals indexed in the JCR under the nursing subject category increased from 70 in 2009 (mean IF: 0.99, standard deviation: 0.53) to 115 in 2014 (mean IF: 1.04, standard deviation: 0.42), of which only 70 were listed for the full six years. Although mean IF showed an upward trend throughout this time, no statistically significant differences were found in the variations to this figure. Although IF and other bibliometric indicators have their limitations, it is nonetheless true that bibliometry is now the most widely used tool for evaluating scientific output in all disciplines, including nursing, highlighting the importance of being familiar with how they are calculated and their significance when deciding the journal or journals in which to publish the results of our research. That said, it is also necessary to consider other possible alternative ways of assessing the quality and impact of scientific contributions. Copyright © 2017 Elsevier Inc. All rights reserved.
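For reference, the two-year impact factor underlying these JCR figures is a simple ratio: citations received in year Y by items published in the two preceding years, divided by the number of citable items published in those years. The counts below are invented, chosen to land on the 2014 mean of 1.04 reported above.

    # IF(Y) = citations in Y to items from Y-1 and Y-2
    #         / citable items published in Y-1 and Y-2
    citations_in_2014_to_2012_2013 = 156   # invented count
    citable_items_2012_2013 = 150          # invented count
    impact_factor_2014 = citations_in_2014_to_2012_2013 / citable_items_2012_2013
    print(round(impact_factor_2014, 2))    # 1.04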
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object-relational database management system (DBMS). By using distributed processing, work is split between the database server and client application programs: the DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.
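A minimal sketch of the client side of this distributed model, using the python-oracledb driver; the credentials, DSN and query are placeholders, not part of the original report.

    import oracledb

    # Placeholder connection details; a real deployment supplies its own.
    conn = oracledb.connect(user="scott", password="tiger",
                            dsn="dbhost.example.com/orclpdb")
    cur = conn.cursor()
    # The server-side DBMS parses and executes the query; the client program
    # only fetches and displays the resulting rows.
    cur.execute("SELECT table_name FROM user_tables")
    for (name,) in cur:
        print(name)
    cur.close()
    conn.close()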
The EXOSAT database and archive
NASA Technical Reports Server (NTRS)
Reynolds, A. P.; Parmar, A. N.
1992-01-01
The EXOSAT database provides on-line access to the results and data products (spectra, images, and lightcurves) from the EXOSAT mission as well as access to data and logs from a number of other missions (such as EINSTEIN, COS-B, ROSAT, and IRAS). In addition, a number of familiar optical, infrared, and X-ray catalogs, including the Hubble Space Telescope (HST) guide star catalog, are available. The complete database is located at the EXOSAT observatory at ESTEC in the Netherlands and is accessible remotely via a captive account. The database management system was specifically developed to efficiently access the database and to allow the user to perform statistical studies on large samples of astronomical objects as well as to retrieve scientific and bibliographic information on single sources. The system was designed to be mission independent and includes timing, image processing, and spectral analysis packages as well as software to allow the easy transfer of analysis results and products to the user's own institute. The archive at ESTEC comprises a subset of the EXOSAT observations, stored on magnetic tape. Observations of particular interest were copied in compressed format to an optical jukebox, allowing users to retrieve and analyze selected raw data entirely from their terminals. Such analysis may be necessary if the user's needs are not accommodated by the products contained in the database (in terms of time resolution, spectral range, and the finesse of the background subtraction, for instance). Long-term archiving of the full final observation data is taking place at ESRIN in Italy as part of the ESIS program, again using optical media, and ESRIN have now assumed responsibility for distributing the data to the community. Tests showed that raw observational data (typically several tens of megabytes for a single target) can be transferred via the existing networks in reasonable time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bower, J.C.; Burford, M.J.; Downing, T.R.
The Integrated Baseline System (IBS) is an emergency management planning and analysis tool that is being developed under the direction of the US Army Nuclear and Chemical Agency (USANCA). The IBS Data Management Guide provides the background, as well as the operations and procedures needed to generate and maintain a site-specific map database. Data and system managers use this guide to manage the data files and database that support the administrative, user-environment, database management, and operational capabilities of the IBS. This document provides a description of the data files and structures necessary for running the IBS software and using the site map database.
An International Aerospace Information System: A Cooperative Opportunity.
ERIC Educational Resources Information Center
Blados, Walter R.; Cotter, Gladys A.
1992-01-01
Introduces and discusses ideas and issues relevant to the international unification of scientific and technical information (STI) through development of an international aerospace database (IAD). Specific recommendations for improving the National Aeronautics and Space Administration Aerospace Database (NAD) and for implementing IAD are given.…
ECOS E-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parisien, Lia
2016-01-31
This final scientific/technical report on the ECOS e-MATRIX Methane and Volatile Organic Carbon (VOC) Emissions Best Practices Database provides a disclaimer and acknowledgement, table of contents, executive summary, description of project activities, and briefing/technical presentation link.
[Selected aspects of computer-assisted literature management].
Reiss, M; Reiss, G
1998-01-01
We report our own experiences with a bibliographic database manager. Bibliographic database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager allows the user to enter a summary record for articles, book sections, books, dissertations, conference proceedings, and so on. Other common features include the ability to import references from different sources, such as MEDLINE. The word processing components generate reference lists and bibliographies in a variety of styles, directly from a word processor manuscript. The function and use of the software package EndNote 2 for Windows are described, and its advantages in fulfilling different requirements for citation style and the sort order of reference lists are emphasized.
Monge-Nájera, Julián; Nielsen-Muñoz, Vanessa; Azofeifa-Mora, Ana Beatriz
2013-06-01
BINABITROP is a bibliographical database of more than 38000 records about the ecosystems and organisms of Costa Rica. In contrast with commercial databases, such as Web of Knowledge and Scopus, which exclude most of the scientific journals published in tropical countries, BINABITROP is a comprehensive record of knowledge on the tropical ecosystems and organisms of Costa Rica. We analyzed its contents for three sites (La Selva, Palo Verde and Las Cruces) and recorded scientific field, taxonomic group and authorship. We found that most records dealt with ecology and systematics, and that most authors published only one article in the study period (1963-2011). Most research was published in four journals: Biotropica, Revista de Biología Tropical/International Journal of Tropical Biology and Conservation, Zootaxa and Brenesia. This may be the first study of such a comprehensive database for tropical biology literature.
Challenges in developing medicinal plant databases for sharing ethnopharmacological knowledge.
Ningthoujam, Sanjoy Singh; Talukdar, Anupam Das; Potsangbam, Kumar Singh; Choudhury, Manabendra Dutta
2012-05-07
Major research contributions in ethnopharmacology have generated vast amount of data associated with medicinal plants. Computerized databases facilitate data management and analysis making coherent information available to researchers, planners and other users. Web-based databases also facilitate knowledge transmission and feed the circle of information exchange between the ethnopharmacological studies and public audience. However, despite the development of many medicinal plant databases, a lack of uniformity is still discernible. Therefore, it calls for defining a common standard to achieve the common objectives of ethnopharmacology. The aim of the study is to review the diversity of approaches in storing ethnopharmacological information in databases and to provide some minimal standards for these databases. Survey for articles on medicinal plant databases was done on the Internet by using selective keywords. Grey literatures and printed materials were also searched for information. Listed resources were critically analyzed for their approaches in content type, focus area and software technology. Necessity for rapid incorporation of traditional knowledge by compiling primary data has been felt. While citation collection is common approach for information compilation, it could not fully assimilate local literatures which reflect traditional knowledge. Need for defining standards for systematic evaluation, checking quality and authenticity of the data is felt. Databases focussing on thematic areas, viz., traditional medicine system, regional aspect, disease and phytochemical information are analyzed. Issues pertaining to data standard, data linking and unique identification need to be addressed in addition to general issues like lack of update and sustainability. In the background of the present study, suggestions have been made on some minimum standards for development of medicinal plant database. In spite of variations in approaches, existence of many overlapping features indicates redundancy of resources and efforts. As the development of global data in a single database may not be possible in view of the culture-specific differences, efforts can be given to specific regional areas. Existing scenario calls for collaborative approach for defining a common standard in medicinal plant database for knowledge sharing and scientific advancement. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Zilaout, Hicham; Vlaanderen, Jelle; Houba, Remko; Kromhout, Hans
2017-07-01
In 2000, a prospective Dust Monitoring Program (DMP) was started in which measurements of workers' exposure to respirable dust and quartz are collected in member companies of the European Industrial Minerals Association (IMA-Europe). After 15 years, the resulting IMA-DMP database provides a detailed overview of exposure levels of respirable dust and quartz over time within this industrial sector. Our aim is to describe the IMA-DMP and the current state of the corresponding database, which is still growing as the IMA-DMP continues. The future use of the database is also highlighted, including its utility for the industrial minerals producing sector. Exposure data are obtained following a common protocol, including a standardized sampling strategy, standardized sampling and analytical methods, and a data management system. Following strict quality control procedures, exposure data are then added to a central database. The data comprise personal exposure measurements, including auxiliary information on work and other conditions during sampling. Currently, the IMA-DMP database consists of almost 28,000 personal measurements performed from 2000 until 2015, representing 29 half-yearly sampling campaigns. The exposure data have been collected from 160 different worksites owned by 35 industrial mineral companies in 23 European countries, covering approximately 5000 workers. The IMA-DMP database provides the European minerals sector with reliable data regarding workers' personal exposure to respirable dust and quartz. The database can be used as a powerful tool to address outstanding scientific issues on long-term exposure trends and exposure variability and, importantly, as a surveillance tool to evaluate exposure control measures. The database will be valuable for future epidemiological studies on respiratory health effects and will allow for the estimation of quantitative exposure-response relationships. Copyright © 2017 The Authors. Published by Elsevier GmbH. All rights reserved.
Contamination of sequence databases with adaptor sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoshikawa, Takeo; Sanders, A.R.; Detera-Wadleigh, S.D.
Because of the exponential increase in the amount of DNA sequences being added to the public databases on a daily basis, it has become imperative to identify sources of contamination rapidly. Previously, contaminations of sequence databases have been reported to alert the scientific community to the problem. These contaminations can be divided into two categories. The first category comprises host sequences that have been difficult for submitters to manage or control. Examples include anomalous sequences derived from Escherichia coli, which are inserted into the chromosomes (and plasmids) of the bacterial hosts. Insertion sequences are highly mobile and are capable of transposing themselves into plasmids during cloning manipulation. Another example of the first category is the infection with yeast genomic DNA or with bacterial DNA of some commercially available cDNA libraries from Clontech. The second category of database contamination is due to the inadvertent inclusion of nonhost sequences. This category includes the incorporation of cloning-vector sequences and multicloning sites in the database submission. M13-derived artifacts have been common, since M13-based vectors have been widely used for subcloning DNA fragments. Recognizing this problem, the National Center for Biotechnology Information (NCBI) started to screen, in April 1994, all sequences directly submitted to GenBank against a set of vector data retrieved from GenBank by use of keyword searches, such as "vector." In this report, we present evidence for another sequence artifact that is widespread but that, to our knowledge, has not yet been reported. 11 refs., 1 tab.
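A toy version of such screening is a scan of each submission against a library of known vector or adaptor sequences. The sequences below are invented, and real screens (e.g. NCBI's approach) rely on local alignment rather than the exact substring match used here.

    # Invented adaptor library for illustration only.
    ADAPTORS = {
        "demo_adaptor_1": "GATCGGAAGAGC",
        "demo_adaptor_2": "CTGTCTCTTATACACATCT",
    }

    def screen(sequence):
        """Report (adaptor_name, position) for every exact adaptor hit."""
        hits = []
        for name, adaptor in ADAPTORS.items():
            pos = sequence.upper().find(adaptor)
            if pos != -1:
                hits.append((name, pos))
        return hits

    print(screen("ttacgGATCGGAAGAGCcatgg"))  # [('demo_adaptor_1', 5)]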
Zhang, Xinghe; Guo, Taipin; Zhu, Bowen; Gao, Qing; Wang, Hourong; Tai, Xiantao; Jing, Fujie
2018-05-01
Preterm infants are babies born alive before 37 weeks of gestation. Many surviving infants suffer growth and developmental defects, and lifelong disability often follows when intervention is insufficient. In the early intervention of preterm infants, pediatric Tuina has shown good effects in many Chinese and some English-language clinical trials. This systematic review aims to evaluate the efficacy and safety of pediatric Tuina for promoting the growth and development of preterm infants. The electronic databases of the Cochrane Library, MEDLINE, EMBASE, Web of Science, Springer, the World Health Organization International Clinical Trials Registry Platform, China National Knowledge Infrastructure, the Chinese Biomedical Literature Database, the Wan-fang database, the Chinese Scientific Journal Database, and other databases will be searched from their establishment to April 1, 2018. All published randomized controlled trials (RCTs) on this topic will be included. Two independent researchers will perform article retrieval, screening, quality evaluation, and data analyses with Review Manager (V.5.3.5). Meta-analyses, subgroup analysis, and/or descriptive analysis will be performed depending on the included data. High-quality synthesis and/or descriptive analysis of current evidence will be provided for weight increase, motor development, neuropsychological development, length of stay, days to recovery of birthweight, days on supplemental oxygen, daily sleep duration, and side effects. This study will provide evidence on whether pediatric Tuina is an effective early intervention for preterm infants. Ethical approval and informed consent are not required, and the review will be disseminated in print or by electronic copies. This systematic review protocol has been registered in the PROSPERO network (No. CRD42018090563).
78 FR 57159 - Scientific Information Request on Medication Therapy Management
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-17
... Information Request on Medication Therapy Management AGENCY: Agency for Healthcare Research and Quality (AHRQ), HHS. ACTION: Request for scientific information submissions. SUMMARY: The Agency for Healthcare... therapy management Scientific information is being solicited to inform our review of Medication Therapy...
Advances in the Diagnosis and Management of Inflammatory Bowel Disease: Challenges and Uncertainties
Mosli, Mahmoud; Al Beshir, Mohammad; Al-Judaibi, Bandar; Al-Ameel, Turki; Saleem, Abdulaziz; Bessissow, Talat; Ghosh, Subrata; Almadi, Majid
2014-01-01
Over the past two decades, several advances have been made in the management of patients with inflammatory bowel disease (IBD) from both evaluative and therapeutic perspectives. This review discusses the medical advancements that have recently become the standard of care for managing patients with ulcerative colitis (UC) and Crohn's Disease (CD), and identifies the challenges associated with implementing their use in clinical practice. A comprehensive literature search of the major databases (PubMed and Embase) was conducted for all recent scientific papers (1990–2013) reporting updates on the management of IBD, and the data were extracted. The reported advancements in managing IBD range from diagnostic and evaluative tools, such as genetic tests, biochemical surrogate markers of activity, endoscopic techniques, and radiological modalities, to therapeutic advances, which encompass medical, endoscopic, and surgical interventions. There are limited studies addressing the cost-effectiveness and the impact that these advances have had on medical practice. The majority of the advances developed for managing IBD, while considered instrumental by some IBD experts in improving patient care, have questionable applications due to constraints of cost, lack of availability, and most importantly, insufficient evidence that supports their role in improving important long-term health-related outcomes. PMID:24705146
Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.
ERIC Educational Resources Information Center
Gutmann, Myron P.; And Others
1989-01-01
Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)
Content Independence in Multimedia Databases.
ERIC Educational Resources Information Center
de Vries, Arjen P.
2001-01-01
Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…
Development of expert systems for analyzing electronic documents
NASA Astrophysics Data System (ADS)
Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.
2018-05-01
The paper analyses a database management system (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment; it consists of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans and Alfresco, together with system components previously developed at the Faculty of Innovative Technology of Tomsk State University.
Prager, C M; Varga, A; Olmsted, P; Ingram, J C; Cattau, M; Freund, C; Wynn-Grant, R; Naeem, S
2016-08-01
Programs and projects employing payments for ecosystem service (PES) interventions achieve their objectives by linking buyers and sellers of ecosystem services. Although PES projects are popular conservation and development interventions, little is known about their adherence to basic ecological principles. We conducted a quantitative assessment of the degree to which a global set of PES projects adhered to four ecological principles that are basic scientific considerations for any project focused on ecosystem management: collection of baseline data, identification of threats to an ecosystem service, monitoring, and attention to ecosystem dynamics or the formation of an adaptive management plan. We evaluated 118 PES projects in three markets (biodiversity, carbon, and water), compiled using websites of major conservation organizations; ecology, economic, and climate-change databases; and three scholarly databases (ISI Web of Knowledge, Web of Science, and Google Scholar). To assess adherence to ecological principles, we constructed two scientific indices (one additive [ASI] and one multiplicative [MSI]) based on our four ecological criteria and analyzed index scores by relevant project characteristics (e.g., sector, buyer, seller). Carbon-sector projects had higher ASI values (P < 0.05) than water-sector projects and marginally higher ASI scores (P < 0.1) than biodiversity-sector projects, demonstrating their greater adherence to ecological principles. Projects financed by public-private partnerships had significantly higher ASI values than projects financed by governments (P < 0.05) and marginally higher ASI values than those funded by private entities (P < 0.1). We did not detect differences in adherence to ecological principles based on the inclusion of cobenefits, the spatial extent of a project, or the size of a project's budget. These findings suggest, at this critical phase in the rapid growth of PES projects, that fundamental ecological principles should be considered more carefully in PES project design and implementation in an effort to ensure PES project viability and sustainability. © 2015 Society for Conservation Biology.
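Under the simplest reading, each of the four criteria scores 0 or 1, and the two indices differ in how harshly they treat a missing criterion; the multiplicative index zeroes out whenever any criterion is absent. The sketch below makes that contrast concrete (the paper's exact scoring scheme is an assumption here).

    criteria = ["baseline_data", "threat_identified",
                "monitoring", "dynamics_or_adaptive_mgmt"]

    def asi(scores):
        """Additive scientific index: sum of the four 0/1 criterion scores."""
        return sum(scores[c] for c in criteria)

    def msi(scores):
        """Multiplicative index: product, so one missing criterion zeroes it."""
        result = 1
        for c in criteria:
            result *= scores[c]
        return result

    project = {"baseline_data": 1, "threat_identified": 1,
               "monitoring": 0, "dynamics_or_adaptive_mgmt": 1}
    print(asi(project), msi(project))  # 3 0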
NASA Technical Reports Server (NTRS)
Moroh, Marsha
1988-01-01
A methodology was developed for building interfaces from resident database management systems to DAVID, a heterogeneous distributed database management system under development at NASA. The feasibility of the methodology was demonstrated by constructing the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented, and the work performed and the results are summarized.
THE HUMAN EXPOSURE DATABASE SYSTEM (HEDS)-PUTTING THE NHEXAS DATA ON-LINE
The EPA's National Exposure Research Laboratory (NERL) has developed an Internet accessible Human Exposure Database System (HEDS) to provide the results of NERL human exposure studies to both the EPA and the external scientific communities. The first data sets that will be ava...
DAS: A Data Management System for Instrument Tests and Operations
NASA Astrophysics Data System (ADS)
Frailis, M.; Sartor, S.; Zacchei, A.; Lodi, M.; Cirami, R.; Pasian, F.; Trifoglio, M.; Bulgarelli, A.; Gianotti, F.; Franceschi, E.; Nicastro, L.; Conforti, V.; Zoli, A.; Smart, R.; Morbidelli, R.; Dadina, M.
2014-05-01
The Data Access System (DAS) is a data management software system that provides a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources during the instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing and quick-look analysis of the data acquired from scientific instruments. The DAS provides a data access layer mainly targeted at software applications: quick-look displays, pre-processing pipelines and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, intended for user-defined data types; an Application Programming Interface (DAS API), which automatically adds classes and methods supporting the DDL data types and provides an object-oriented query language; and a data management component, which maps the metadata of the DDL data types onto a relational Database Management System (DBMS) and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data-model-specific one in C, C++ and Python. The mapping of metadata onto the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle and PostgreSQL.
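The following sketch mimics the DDL-to-API idea: an XML data-type declaration is parsed and a class with matching attributes is generated on the fly. The element and attribute names are invented; the real DAS DDL schema may differ.

    import xml.etree.ElementTree as ET

    # Hypothetical DDL fragment in the spirit of the DAS DDL.
    DDL = """
    <dataType name="Image">
      <metadata name="obsDate" type="string"/>
      <metadata name="exposure" type="float"/>
      <data format="binaryTable" layout="row-major"/>
    </dataType>
    """

    root = ET.fromstring(DDL)
    # Build a class whose attributes mirror the declared metadata,
    # analogous to the API classes the DAS generates from the DDL.
    fields = [m.get("name") for m in root.findall("metadata")]
    Image = type(root.get("name"), (), {f: None for f in fields})

    img = Image()
    img.obsDate, img.exposure = "2014-05-01T00:00:00", 120.0
    print(type(img).__name__, img.obsDate, img.exposure)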
SistematX, an Online Web-Based Cheminformatics Tool for Data Management of Secondary Metabolites.
Scotti, Marcus Tullius; Herrera-Acevedo, Chonny; Oliveira, Tiago Branquinho; Costa, Renan Paiva Oliveira; Santos, Silas Yudi Konno de Oliveira; Rodrigues, Ricardo Pereira; Scotti, Luciana; Da-Costa, Fernando Batista
2018-01-03
The traditional work of a natural products researcher consists in large part of time-consuming experimental work, collecting biota to prepare and analyze extracts and to identify innovative metabolites. However, along this long scientific path, much information is lost or restricted to a specific niche. The large amounts of data already produced and the science of metabolomics reveal new questions: Are these compounds known or new? How fast can this information be obtained? To answer these and other relevant questions, an appropriate procedure to correctly store information on the data retrieved from the discovered metabolites is necessary. The SistematX (http://sistematx.ufpb.br) interface is implemented considering the following aspects: (a) the ability to search by structure, SMILES (Simplified Molecular-Input Line-Entry System) code, compound name and species; (b) the ability to save chemical structures found by searching; (c) compound data results include important characteristics for natural products chemistry; and (d) the user can find specific information for taxonomic rank (from family to species) of the plant from which the compound was isolated, the searched-for molecule, and the bibliographic reference and Global Positioning System (GPS) coordinates. The SistematX homepage allows the user to log into the data management area using a login name and password and gain access to administration pages. In this article, we introduced a modern and innovative web interface for the management of a secondary metabolite database. With its multiplatform design, it is able to be properly consulted via the internet and managed from any accredited computer. The interface provided by SistematX contains a wealth of useful information for the scientific community about natural products, highlighting the locations of species from which compounds are isolated.
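Structure- and SMILES-based search of the kind SistematX offers usually relies on normalizing input to a canonical SMILES key, so that equivalent notations match the same record. Below is a minimal sketch using RDKit; whether SistematX itself uses RDKit is an assumption.

    from rdkit import Chem

    def canonical_key(smiles):
        """Return a canonical SMILES string usable as a database lookup key."""
        mol = Chem.MolFromSmiles(smiles)
        return Chem.MolToSmiles(mol) if mol else None

    # Two different but equivalent notations for aspirin map to one key.
    print(canonical_key("CC(=O)Oc1ccccc1C(=O)O"))
    print(canonical_key("O=C(O)c1ccccc1OC(C)=O"))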
Computer Security Products Technology Overview
1988-10-01
The product areas this paper addresses fall into multi-user hosts, database management systems (DBMS), workstations, networks, and guards and gateways... provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a...
An Introduction to Database Management Systems.
ERIC Educational Resources Information Center
Warden, William H., III; Warden, Bette M.
1984-01-01
Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…
Ghojazadeh, Morteza; Naghavi-Behzad, Mohammad; Nasrolah-Zadeh, Raheleh; Bayat-Khajeh, Parvaneh; Piri, Reza; Mirnia, Keyvan; Azami-Aghdash, Saber
2014-01-01
Scientometrics is a useful method for the management of financial and human resources and has been applied many times in the medical sciences during recent years. The aim of this study was to investigate science production by Iranian scientists in the gastric cancer field based on the Medline database. In this descriptive cross-sectional study, Iranian science production concerning gastric cancer during 2000-2011 was investigated based on Medline. After two stages of searching, 121 articles were found; we then reviewed publication date, author names, journal title, impact factor (IF), and the cooperation coefficient between researchers. SPSS 19 was used for statistical analysis. There was a significant increase in published articles about gastric cancer by Iranian researchers in the Medline database during 2006-2011. The mean cooperation coefficient between researchers was 6.14±3.29 persons per article. Articles in this field were published in 19 countries and 56 journals. Journals based in Thailand, England, and America published the most Iranian articles. Tehran University of Medical Sciences and Mohammadreza Zali had the most outstanding roles in publishing scientific articles. According to the results of this study, improving cooperation among researchers in conducting research, and scientometric studies of other fields, may play an important role in increasing both the quality and quantity of published studies.
Alavi, Seyed Mohammad; Alavi, Leila
2016-01-01
Human toxoplasmosis is an important zoonotic infection worldwide, caused by the intracellular parasite Toxoplasma gondii (T. gondii). The aim of this study was to briefly review the general aspects of toxoplasma infection in the Iranian health system network. We searched published toxoplasmosis-related articles in English and Persian databases, including Science Direct, PubMed, Scopus, Google Scholar, Magiran, Iran Medex, Iran Doc and the Scientific Information Database (SID). Out of 1267 articles retrieved from the database search, 40 articles fit our research objectives and were selected for the study. It is estimated that at least a third of the world's human population is infected with T. gondii, making it one of the most common parasitic infections in the world. Maternal infection during pregnancy may cause dangerous outcomes for the fetus, or even intrauterine death. Reactivation of a previous infection in immunocompromised patients, such as those with drug-induced immunosuppression, AIDS or organ transplantation, can cause life-threatening central nervous system infection. Ocular toxoplasmosis is one of the most important causes of blindness, especially in individuals with a deficient immune system. Given the increasing burden of toxoplasmosis on human health, the findings of this study highlight appropriate preventive measures, diagnosis, and management of this disease.
Gilligan, Tony; Alamgir, Hasanat
2008-01-01
Healthcare workers are exposed to a variety of work-related hazards including biological, chemical, physical, ergonomic, psychological hazards; and workplace violence. The Occupational Health and Safety Agency for Healthcare in British Columbia (OHSAH), in conjunction with British Columbia (BC) health regions, developed and implemented a comprehensive surveillance system that tracks occupational exposures and stressors as well as injuries and illnesses among a defined population of healthcare workers. Workplace Health Indicator Tracking and Evaluation (WHITE) is a secure operational database, used for data entry and transaction reporting. It has five modules: Incident Investigation, Case Management, Employee Health, Health and Safety, and Early Intervention/Return to Work. Since the WHITE database was first introduced into BC in 2004, it has tracked the health of 84,318 healthcare workers (120,244 jobs), representing 35,927 recorded incidents, resulting in 18,322 workers' compensation claims. Currently, four of BC's six healthcare regions are tracking and analyzing incidents and the health of healthcare workers using WHITE, providing OHSAH and healthcare stakeholders with comparative performance indicators on workplace health and safety. A number of scientific manuscripts have also been published in peer-reviewed journals. The WHITE database has been very useful for descriptive epidemiological studies, monitoring health risk factors, benchmarking, and evaluating interventions.
The European Southern Observatory-MIDAS table file system
NASA Technical Reports Server (NTRS)
Peron, M.; Grosbol, P.
1992-01-01
The new and substantially upgraded version of the Table File System in MIDAS is presented as a scientific database system. MIDAS applications for performing database operations on tables are discussed, for instance, the exchange of the data to and from the TFS, the selection of objects, the uncertainty joins across tables, and the graphical representation of data. This upgraded version of the TFS is a full implementation of the binary table extension of the FITS format; in addition, it also supports arrays of strings. Different storage strategies for optimal access of very large data sets are implemented and are addressed in detail. As a simple relational database, the TFS may be used for the management of personal data files. This opens the way to intelligent pipeline processing of large amounts of data. One of the key features of the Table File System is to provide also an extensive set of tools for the analysis of the final results of a reduction process. Column operations using standard and special mathematical functions as well as statistical distributions can be carried out; commands for linear regression and model fitting using nonlinear least square methods and user-defined functions are available. Finally, statistical tests of hypothesis and multivariate methods can also operate on tables.
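The kind of column arithmetic and least-squares fitting the TFS exposes can be sketched generically. The snippet below uses NumPy on invented magnitude and distance columns rather than MIDAS commands.

    import numpy as np

    # Two invented "columns" of a table: apparent magnitude and log10 distance (pc).
    mag  = np.array([12.1, 13.4, 14.0, 15.2, 16.1])
    logd = np.array([1.02, 1.30, 1.41, 1.65, 1.83])

    # Column operation with a standard mathematical function:
    # absolute magnitude via the distance modulus M = m - 5*log10(d) + 5.
    abs_mag = mag - 5.0 * logd + 5.0

    # Linear regression between two columns by least squares.
    slope, intercept = np.polyfit(logd, mag, 1)
    print(abs_mag.round(2), slope.round(2), intercept.round(2))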
A new improved database to support spanish phenological observations
NASA Astrophysics Data System (ADS)
Romero-Fresneda, Ramiro; Martínez-Núñez, Lourdes; Botey-Fullat, Roser; Gallego-Abaroa, Teresa; De Cara-García, Juan Antonio; Rodríguez-Ballesteros, César
2017-04-01
Over the last 30 years, phenology has regained scientific interest as the most widely reported biological indicator of anthropogenic climate change. AEMET (the Spanish National Meteorological Agency) has long records of phenological observations, dating back to the 1940s. However, there is a large variety of paper records that still need to be digitized. It has also been necessary to adapt our methods to the World Meteorological Organization (WMO) guidelines (BBCH code, data documentation, metadata…) and to standardize phenological stages and species in order to provide information to PEP725 (the Pan European Phenology Database). Consequently, AEMET is developing a long-term, multi-taxa phenological database to support research and scientific studies on climate, its variability and its influence on natural ecosystems, agriculture, etc. This paper presents the steps that are being carried out to achieve this goal.
NASA Astrophysics Data System (ADS)
Veneranda, M.; Negro, J. I.; Medina, J.; Rull, F.; Lantz, C.; Poulet, F.; Cousin, A.; Dypvik, H.; Hellevang, H.; Werner, S. C.
2018-04-01
The PTAL website will store multispectral analyses of samples collected from several terrestrial analogue sites and aims to become a cornerstone tool for the scientific community interested in deepening knowledge of the geological processes of Mars.
Global and Local Collaborators: A Study of Scientific Collaboration.
ERIC Educational Resources Information Center
Pao, Miranda Lee
1992-01-01
Describes an empirical study that was conducted to examine the relationship among scientific co-authorship (i.e., collaboration), research funding, and productivity. Bibliographic records from the MEDLINE database that used the subject heading for schistosomiasis are analyzed, global and local collaborators are discussed, and scientific…
XML Based Scientific Data Management Facility
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Zubair, M.; Ziebartt, John (Technical Monitor)
2001-01-01
The World Wide Web consortium has developed an Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself and the transformation functions, and also to apply the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
XML Based Scientific Data Management Facility
NASA Technical Reports Server (NTRS)
Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The World Wide Web consortium has developed an Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself and the transformation functions, and also to apply the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
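The core XDMF idea, applying a declared XSLT transformation to scientific data, can be sketched in a few lines. This example uses Python's lxml rather than the Java/Xalan stack the authors describe, and the data and stylesheet are invented.

    from lxml import etree

    data = etree.XML("<run><temp unit='K'>300</temp><temp unit='K'>310</temp></run>")
    style = etree.XML("""
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="text"/>
      <xsl:template match="/run">
        <xsl:for-each select="temp">
          <xsl:value-of select=". - 273.15"/><xsl:text> </xsl:text>
        </xsl:for-each>
      </xsl:template>
    </xsl:stylesheet>""")

    # Compile the stylesheet once, then apply it as a transformation function.
    to_celsius = etree.XSLT(style)
    print(str(to_celsius(data)))  # "26.85 36.85 "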
Evaluation of scientific periodicals and the Brazilian production of nursing articles.
Erdmann, Alacoque Lorenzini; Marziale, Maria Helena Palucci; Pedreira, Mavilde da Luz Gonçalves; Lana, Francisco Carlos Félix; Pagliuca, Lorita Marlena Freitag; Padilha, Maria Itayra; Fernandes, Josicelia Dumêt
2009-01-01
This study aimed to identify nursing journals edited in Brazil that are indexed in the main bibliographic databases in the areas of health and nursing. It also aimed to classify the 2007 production of nursing graduate programs according to the QUALIS/CAPES criteria used to rank the scientific periodicals that disseminate the intellectual production of graduate programs in Brazil. This exploratory study used data from reports and documents available from CAPES to map scientific production, and searched the main international and national indexing databases. The findings from this research can help students, professors and coordinators of graduate programs in several ways: to understand the criteria for classifying periodicals; to be aware of the current production of graduate programs in the area of nursing; and to provide information that authors can use to select periodicals in which to publish their articles.
NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.
Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam
2014-01-01
Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data on a single sharing platform is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology and the focus of the study. Phytochemical and bioassay data are also available from many open sources in various standard and customised formats. The objective was to design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, one of the "Not only SQL" (NoSQL) databases. Although the database enforces no schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data formats from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supports a different set of fields for each document. It also allowed storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational and normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
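As a minimal sketch of the schema-less flexibility the abstract relies on, the following uses MongoDB via pymongo to store two documents with different field sets in one collection; the collection and field names are invented, since the paper does not publish its schema:

```python
# Sketch: MongoDB documents in one collection need not share fields,
# which is the property the model above exploits for heterogeneous
# ethnomedicinal records. Collection and field names are illustrative,
# and a MongoDB server is assumed to be running locally.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
plants = client["ethnomed_demo"]["plants"]

plants.insert_many([
    {   # record from a field survey
        "species": "Centella asiatica",
        "local_name": "manimuni",
        "uses": ["wound healing", "memory"],
        "survey": {"district": "Cachar", "year": 2012},
    },
    {   # record curated from a phytochemical source: different fields
        "species": "Centella asiatica",
        "compounds": [{"name": "asiaticoside", "class": "triterpenoid"}],
        "bioassay": {"target": "fibroblast proliferation", "active": True},
    },
])

# A single query still spans both document shapes.
for doc in plants.find({"species": "Centella asiatica"}):
    print(sorted(doc.keys()))
```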
MERINOVA: Meteorological risks as drivers of environmental innovation in agro-ecosystem management
NASA Astrophysics Data System (ADS)
Gobin, Anne; Oger, Robert; Marlier, Catherine; Van De Vijver, Hans; Vandermeulen, Valerie; Van Huylenbroeck, Guido; Zamani, Sepideh; Curnel, Yannick; Mettepenningen, Evi
2013-04-01
The BELSPO-funded project 'MERINOVA' deals with risks associated with extreme weather phenomena and with risks of biological origin such as pests and diseases. The major objectives of the project are to characterise extreme meteorological events, assess their impact on Belgian agro-ecosystems, characterise the vulnerability and resilience of these ecosystems, and explore innovative adaptation options for agricultural risk management. The project comprises five major parts that reflect the chain of risks: (i) Hazard: assessing the likely frequency and magnitude of extreme meteorological events by means of probability density functions; (ii) Impact: analysing the potential bio-physical and socio-economic impact of extreme weather events on agro-ecosystems in Belgium using process-based modelling techniques commensurate with the regional scale; (iii) Vulnerability: identifying the most vulnerable agro-ecosystems using fuzzy multi-criteria and spatial analysis; (iv) Risk management: uncovering innovative risk management and adaptation options using actor-network theory and fuzzy cognitive mapping techniques; and (v) Communication: communicating to research, policy and practitioner communities using web-based techniques. The different tasks of the MERINOVA project require expertise in several scientific disciplines: meteorology, statistics, spatial database management, agronomy, bio-physical impact modelling, socio-economic modelling, actor-network theory and fuzzy cognitive mapping. This expertise is shared among the four scientific partners, each of whom leads one work package. The MERINOVA project will concentrate on promoting a robust and flexible framework by demonstrating its performance across Belgian agro-ecosystems and by ensuring its relevance to policy makers and practitioners. Impacts derived from physically based models will not only provide information on the state of damage at any given time, but will also assist in understanding the links between the different factors causing damage and in determining bio-physical vulnerability. Socio-economic impacts will enlarge the basis for vulnerability mapping, risk management and adaptation options. A strong expert and end-user network will be established to help disseminate and exploit project results to meet user needs.
High-throughput neuroimaging-genetics computational infrastructure
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.
2014-01-01
Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and metadata. Data mining refers to the process of automatically extracting data features, characteristics and associations that are not readily visible through human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation of findings, and reproducibility. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database that stores the complete provenance of raw and derived data and metadata. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web services. These pipeline workflows are represented as portable XML objects, which transfer the execution instructions and user specifications from the client machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications of this infrastructure. PMID:24795619
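The LONI Pipeline's actual XML schema is not given in the abstract; the following is a loose, hypothetical illustration of the general idea of a workflow serialized as portable XML, with all element and attribute names invented:

```python
# Loose illustration of a workflow serialized to portable XML, in the
# spirit of the Pipeline environment described above. The element and
# attribute names are invented; LONI Pipeline's real schema differs.
import xml.etree.ElementTree as ET

workflow = ET.Element("workflow", name="demo-smoothing")
step1 = ET.SubElement(workflow, "step", tool="skull_strip", id="1")
ET.SubElement(step1, "input", path="subj01_T1.nii")
step2 = ET.SubElement(workflow, "step", tool="smooth", id="2", after="1")
ET.SubElement(step2, "param", name="fwhm_mm", value="6")

# The XML string is what would travel from client to pipeline server.
print(ET.tostring(workflow, encoding="unicode"))
```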
How to Search, Write, Prepare and Publish the Scientific Papers in the Biomedical Journals
Masic, Izet
2011-01-01
This article describes the methodology of preparing, writing and publishing scientific papers in biomedical journals. A concise overview is given of the concept and structure of the system of biomedical scientific and technical information and of how biomedical literature is retrieved from worldwide biomedical databases. The scientific and professional medical journals currently published in Bosnia and Herzegovina are described. Also given is a comparative review of the number and structure of papers published in indexed journals in Bosnia and Herzegovina that are listed in the MEDLINE database. Three B&H journals indexed in MEDLINE are analyzed for 2010: Medical Archives (Medicinski Arhiv), Bosnian Journal of Basic Medical Sciences and Medical Gazette (Medicinski Glasnik). The largest number of original papers was published in the Medical Archives. There is a statistically significant difference in the number of papers published by local authors relative to international journals, in favor of the Medical Archives. Admittedly, the Bosnian Journal of Basic Medical Sciences does not categorize its articles, so we could not make comparisons. The Medical Archives and the Bosnian Journal of Basic Medical Sciences published the largest percentage of articles by authors from Sarajevo and Tuzla, the two oldest and largest university medical centers in Bosnia and Herzegovina. The author believes that qualitative changes are necessary in the reception and reviewing of papers for publication in biomedical journals published in Bosnia and Herzegovina, and that this should be the responsibility of a separate scientific authority/committee composed of experts in the field of medicine at the state level. PMID:23572850
Is autoimmunology a discipline of its own? A big data-based bibliometric and scientometric analysis.
Watad, Abdulla; Bragazzi, Nicola Luigi; Adawi, Mohammad; Amital, Howard; Kivity, Shaye; Mahroum, Naim; Blank, Miri; Shoenfeld, Yehuda
2017-06-01
Autoimmunology is a super-specialty of immunology dealing specifically with autoimmune disorders. To assess the extant literature on autoimmune disorders, bibliometric and scientometric analyses (namely, research topic/keyword co-occurrence, journal co-citation, citations, crude and normalized scientific output trends, author networks, and leading authors, countries, and organizations) were carried out using the open-source software VOSviewer and SciCurve. A corpus of 169,519 articles containing the keyword "autoimmunity" was utilized, with PubMed/MEDLINE selected as the bibliographic thesaurus. Six journals were specifically devoted to autoimmune disorders, covering approximately 4.15% of the entire scientific production. Compared with the full corpus (from 1946 on), these specialized journals were established relatively few decades ago. The top countries were the United States, Japan, Germany, United Kingdom, Italy, China, France, Canada, Australia, and Israel. Trending topics are the role of microRNAs (miRNAs) in the etiopathogenesis of autoimmune disorders, the contributions of genetics and of epigenetic modifications, the role of vitamins, management during pregnancy, and the impact of gender. New subsets of immune cells have been extensively investigated, with a focus on interleukin production and release and on Th17 cells. Autoimmunology is emerging as a new discipline within immunology, with its own bibliometric properties, an identified scientific community and specifically devoted journals.
Li, Yuanfang; Zhou, Zhiwei
2016-02-01
Precision medicine is a new medical concept and medical model, based on personalized medicine, the rapid progress of genome sequencing technology, and the cross-application of bioinformatics and big data science. Precision medicine improves the diagnosis and treatment of gastric cancer through deeper analyses of its characteristics, pathogenesis and other core issues. A clinical cancer database is important for promoting the development of precision medicine, so close attention must be paid to its construction and management. The clinical database of the Sun Yat-sen University Cancer Center is composed of a medical record database, a blood specimen bank, a tissue bank and a medical imaging database. To ensure the good quality of the database, its design and management follow a strict standard operating procedure (SOP) model. Data sharing is an important way to improve medical research in the era of medical big data, and the construction and management of clinical databases must likewise be strengthened and innovated.
NASA Astrophysics Data System (ADS)
Shchepashchenko, D.; Chave, J.; Phillips, O. L.; Davies, S. J.; Lewis, S. L.; Perger, C.; Dresel, C.; Fritz, S.; Scipal, K.
2017-12-01
Forest monitoring is high on the scientific and political agenda. Global measurements of forest height, biomass and how they change with time are urgently needed as essential climate and ecosystem variables. The Forest Observation System (FOS, http://forest-observation-system.net/) is an international cooperation to establish a global in-situ forest biomass database to support earth observation and to encourage investment in relevant field-based observations and science. FOS aims to link the Remote Sensing (RS) community with ecologists who measure forest biomass and estimate biodiversity in the field, for common benefit. The benefit of FOS for the RS community is the partnering of the most established teams and networks that manage permanent forest plots globally, overcoming data-sharing issues and introducing a standard biomass data flow from tree-level measurement to plot-level aggregation, served in the form most suitable for the RS community. Ecologists benefit from FOS through improved access to global biomass information, data standards, gap identification and potentially improved funding opportunities to address the known gaps and deficiencies in the data. FOS collaborates closely with the Center for Tropical Forest Science (CTFS-ForestGEO), ForestPlots.net (incl. RAINFOR, AfriTRON and T-FORCES), AusCover, the Tropical managed Forests Observatory and the IIASA network. FOS is an open initiative, and other networks and teams are most welcome to join. The online database provides open access to both metadata (e.g. who conducted the measurements, where, and which parameters) and actual data for the subset of plots where the authors have granted access. The minimum set of database values includes: principal investigator and institution, plot coordinates, number of trees, forest type and tree species composition, wood density, canopy height and above-ground biomass of trees. Plot size is 0.25 ha or larger. The database will be essential for validating and calibrating satellite observations and various models.
NASA Astrophysics Data System (ADS)
Kingdon, Andrew; Nayembil, Martin L.; Richardson, Anne E.; Smith, A. Graham
2016-11-01
New requirements to understand geological properties in three dimensions have led to the development of PropBase, a data structure and set of delivery tools built for this purpose. At the BGS, relational database management systems (RDBMS) have facilitated effective data management using normalised, subject-based database designs with business rules in a centralised, vocabulary-controlled architecture. These have delivered effective data storage in a secure environment. However, isolated subject-oriented designs prevented efficient cross-domain querying of datasets, and the tools provided often did not enable effective data discovery: they struggled to resolve the complex underlying normalised structures, giving poor data access speeds. Users developed bespoke access tools to structures they did not fully understand, sometimes obtaining incorrect results. BGS has therefore developed PropBase, a generic denormalised data structure within an RDBMS for storing property data, to facilitate rapid, standardised data discovery and access, incorporating 2D and 3D physical and chemical property data with associated metadata. This includes scripts to populate and synchronise the layer with its data sources through structured input and transcription standards. A core component of the architecture is an optimised query object that delivers geoscience information from a structure equivalent to a data warehouse, enabling optimised query performance and delivery of data in multiple standardised formats through a web discovery tool. Semantic interoperability is enforced through vocabularies combined from all data sources, enabling searches across related terms. PropBase holds 28.1 million spatially enabled property data points from 10 source databases, incorporating over 50 property data types, with a vocabulary set that includes 557 property terms. By enabling property data searches across multiple databases, PropBase has facilitated new scientific research previously considered impractical. PropBase is easily extended to incorporate 4D (time series) data and is providing a baseline for new "big data" monitoring projects.
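PropBase's schema is not published in the abstract; the following toy sketch (in Python with SQLite, with invented table and column names) illustrates the denormalised one-row-per-observation pattern it describes, which is what lets a single query span several source databases:

```python
# Toy sketch of a denormalised property store: every observation from
# every source database becomes one row with a shared set of columns,
# so one query spans all domains. Schema and values are invented here.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE prop_obs (
        source_db  TEXT,    -- originating subject database
        x REAL, y REAL, z REAL,
        property   TEXT,    -- vocabulary-controlled property term
        value      REAL,
        units      TEXT
    )""")
con.executemany(
    "INSERT INTO prop_obs VALUES (?, ?, ?, ?, ?, ?)",
    [
        ("geochem",  445120.0, 289440.0, -12.5, "pH",        7.8, "pH"),
        ("geotech",  445130.0, 289450.0, -30.0, "density",   2.6, "g/cm3"),
        ("borehole", 445118.0, 289442.0, -12.0, "porosity", 14.2, "%"),
    ],
)

# One denormalised query crosses what were separate subject databases.
for row in con.execute(
    "SELECT source_db, property, value, units FROM prop_obs WHERE z > -20"
):
    print(row)
```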
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straume, T.; Ricker, Y.; Thut, M.
1988-08-29
This database was constructed to support research in radiation biological dosimetry and risk assessment. Relevant publications were identified through detailed searches of national and international electronic databases and through our personal knowledge of the subject. Publications were numbered and keyworded, and referenced in an electronic data-retrieval system that permits quick access through computerized searches on publication number, authors, key words, title, year, and journal name. Photocopies of all publications contained in the database are maintained in a file that is numerically arranged by citation number. This report of the database is provided as a useful reference and overview. It should be emphasized that the database will grow as new citations are added to it. With that in mind, we arranged this report in order of ascending citation number so that follow-up reports will simply extend this document. The database cites 1212 publications, drawn from 119 different scientific journals; 27 of these journals are cited at least 5 times. It also contains references to 42 books and published symposia, and 129 reports. Information relevant to radiation biological dosimetry and risk assessment is widely distributed across the scientific literature, although a few journals clearly dominate. The four journals publishing the largest number of relevant papers are Health Physics, Mutation Research, Radiation Research, and the International Journal of Radiation Biology. Publications in Health Physics make up almost 10% of the current database.
Utilizing the Web in the Classroom: Linking Student Scientists with Professional Data.
ERIC Educational Resources Information Center
Seitz, Kristine; Leake, Devin
1999-01-01
Describes how information gathered from a computer database can be used as a springboard to scientific discovery. Specifies directions for studying the homeobox gene PAX-6 using GenBank, a database maintained by the National Center for Biotechnology Information (NCBI). Contains 16 references. (WRM)
The BioMart community portal: an innovative alternative to large, centralized data repositories
USDA-ARS?s Scientific Manuscript database
The BioMart Community Portal (www.biomart.org) is a community-driven effort to provide a unified interface to biomedical databases that are distributed worldwide. The portal provides access to numerous database projects supported by 30 scientific organizations. It includes over 800 different biologi...
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, applications face the problem of how to organize and access large volumes of GML data effectively, and research on GML databases focuses on this problem. The effective storage of GML data is a hot topic in the GIS community today. A GML database management system (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages and management systems, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.
Determinants of quality management systems implementation in hospitals.
Wardhani, Viera; Utarini, Adi; van Dijk, Jitse Pieter; Post, Doeke; Groothoff, Johan Willem
2009-03-01
To identify the problems and facilitating factors in the implementation of quality management systems (QMS) in hospitals through a systematic review. A search strategy was performed on the Medline database for articles written in English published between 1992 and early 2006. Using the thesaurus terms 'Total Quality Management' and 'Quality Assurance Health Care', combined with the terms 'hospital' and 'implement*', we identified 533 publications. The screening process was based on empirical articles describing organization-wide QMS implementation. Fourteen empirical articles fulfilled the inclusion criteria and were reviewed in this paper. An organizational culture emphasizing standards and values associated with affiliation, teamwork and innovation, and an assumption of change and risk taking, emerges as the key success factor in QMS implementation. This culture needs to be supported by sufficient technical competence to apply a scientific problem-solving approach. A clear distribution of QMS functions within the organizational structure is more important than establishing a formal quality structure. In addition to management leadership, physician involvement also plays an important role in implementing QMS. Six supporting and limiting factors determining QMS implementation are identified in this review: organizational culture, design, leadership for quality, physician involvement, quality structure and technical competence.
NASA Astrophysics Data System (ADS)
Yatagai, A. I.; Iyemori, T.; Ritschel, B.; Koyama, Y.; Hori, T.; Abe, S.; Tanaka, Y.; Shinbori, A.; Umemura, N.; Sato, Y.; Yagi, M.; Ueno, S.; Hashiguchi, N. O.; Kaneda, N.; Belehaki, A.; Hapgood, M. A.
2013-12-01
IUGONET is a Japanese program to build a metadata database for ground-based observations of the upper atmosphere [1]. The project began in 2009 with five Japanese institutions which archive data observed by radars, magnetometers, photometers, radio telescopes, helioscopes, and so on, at various altitudes from the Earth's surface to the Sun. Systems have been developed to allow searching of this metadata, and we have been updating the system and adding new and updated metadata. The IUGONET development team adopted the SPASE metadata model [2] to describe the upper atmosphere data. This model is used as the common metadata format by the virtual observatories for solar-terrestrial physics. It includes metadata referring to each data file (called a 'Granule'), which enables a search for individual data files as well as data sets. Further details are described in [2] and [3]. Currently, three additional Japanese institutions are being incorporated into IUGONET. Furthermore, metadata from observations of the troposphere, taken at the observatories of the middle and upper atmosphere radar at Shigaraki and the meteor radar in Indonesia, have been incorporated. These additions will contribute to efficient interdisciplinary scientific research. At the beginning of 2013, the registration of the 'Observatory' and 'Instrument' metadata was completed, which makes it easy to obtain an overview of the metadata database. The number of registered metadata records totalled 8.8 million as of the end of July, including 793 observatories and 878 instruments. It is important to promote interoperability and metadata exchange between database development groups. A memorandum of agreement has been signed with the European Near-Earth Space Data Infrastructure for e-Science (ESPAS) project, which has objectives similar to IUGONET's, to provide a framework for formal collaboration. Furthermore, observations by satellites and the International Space Station are being incorporated with a view to creating and linking metadata databases. The development of effective data systems will contribute to the progress of scientific research on solar-terrestrial physics, climate and the geophysical environment. Any kind of cooperation, metadata input and feedback, especially regarding linkage of the databases, is welcome. References 1. Hayashi, H. et al., Inter-university Upper Atmosphere Global Observation Network (IUGONET), Data Sci. J., 12, WDS179-184, 2013. 2. King, T. et al., SPASE 2.0: A standard data model for space physics. Earth Sci. Inform. 3, 67-73, 2010, doi:10.1007/s12145-010-0053-4. 3. Hori, T., et al., Development of IUGONET metadata format and metadata management system. J. Space Sci. Info. Jpn., 105-111, 2012. (in Japanese)
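As a rough illustration of what granule-level metadata enables, the following sketch filters per-file records over a time interval; the fields are loosely modeled on the SPASE 'Granule' idea and are not the actual IUGONET schema:

```python
# Rough illustration of granule-level metadata search, loosely modeled
# on the SPASE idea of describing each data file as a "Granule" under a
# parent data set. Field names and values are illustrative only.
from dataclasses import dataclass

@dataclass
class Granule:
    dataset: str      # parent data set (e.g. an instrument product)
    observatory: str
    start: str        # ISO 8601, so string comparison orders correctly
    stop: str
    url: str

granules = [
    Granule("mu-radar-wind", "Shigaraki", "2013-01-01", "2013-01-02", "f1.nc"),
    Granule("mu-radar-wind", "Shigaraki", "2013-01-02", "2013-01-03", "f2.nc"),
    Granule("magnetometer",  "Kyoto",     "2013-01-01", "2013-01-02", "f3.nc"),
]

# Granule-level metadata lets a query return the individual files that
# overlap an interval, not merely the parent data sets.
t0, t1 = "2013-01-01", "2013-01-02"
hits = [g.url for g in granules
        if g.dataset == "mu-radar-wind" and g.start < t1 and g.stop > t0]
print(hits)  # -> ['f1.nc']
```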
A New Breed of Database System: Volcano Global Risk Identification and Analysis Project (VOGRIPA)
NASA Astrophysics Data System (ADS)
Crosweller, H. S.; Sparks, R. S.; Siebert, L.
2009-12-01
VOGRIPA originated as part of the Global Risk Identification Programme (GRIP), coordinated from the Earth Institute of Columbia University under the auspices of the United Nations and World Bank. GRIP is a five-year programme aimed at improving global knowledge about risk from natural hazards and is part of the international response to the catastrophic 2004 Asian tsunami. VOGRIPA is also a formal IAVCEI project. The objectives of VOGRIPA are to create a global database of volcanic activity, hazards and vulnerability information that can be analysed to identify locations at high risk from volcanism and gaps in knowledge about hazards and risk, and that will allow scientists and disaster managers at specific locations to analyse risk within a global context of systematic information. It is this added scope of risk and vulnerability, as well as hazard, that sets VOGRIPA apart from most previous databases. The University of Bristol is the central coordinating centre for the project, an international partnership including the Smithsonian Institution, the Geological Survey of Japan, the Earth Observatory of Singapore (Chris Newhall), the British Geological Survey, the University at Buffalo (SUNY) and Munich Re. The partnership is intended to grow, and any individuals or institutions able to contribute resources to VOGRIPA objectives are welcome to participate. Work has already begun (funded principally by Munich Re) on populating a database of large-magnitude explosive eruptions reaching back to the Quaternary, with extreme-value statistics being used to evaluate the magnitude-frequency relationship of such events and to assess how the quality of records affects the results. A further four years of funding from the European Research Council will be used to establish additional international collaborations to develop different aspects of the database, with the data being accessible online once it is sufficiently complete and analyses have been carried out. It is anticipated that such a resource will be of use to the scientific community, civil authorities responsible for mitigating and managing volcanic hazards, and the public.
NASA Astrophysics Data System (ADS)
Tanabe, T.
The CRD database, which has been accumulating financial data on SMEs over the ten years since its founding, has gathered approximately 12 million records for around 2 million SMEs and approximately 3 million records for around 900,000 sole proprietors, and has also collected default data on these companies and sole proprietors. The CRD database's weakness is anonymity. Going forward, therefore, the CRD Association faces questions concerning how it will enhance the attractiveness of its database and whether new knowledge should be sought through econophysics or other research approaches. We have already seen several examples of knowledge gained through econophysical analyses using the CRD database, and I would like to express my hope that we will eventually see greater application of the SME credit information database and econophysical analysis in the development of Japan's SME policies, which are scientific economic policies for avoiding moral hazard, and in elucidating risk scenarios for the global financial, natural disaster, and other shocks expected to occur with greater frequency. The role played by econophysics will therefore become increasingly important, and we have high expectations for the field.
García-Gómez, Francisco; Ramírez-Méndez, Fernando
2015-01-01
To analyze the articles of the Revista Médica del Instituto Mexicano del Seguro Social (Rev Med Inst Mex Seguro Soc) in the Scopus database and describe the principal quantitative bibliometric indicators of its scientific publications during the period 2005 to 2013. The Scopus database was used, limited to the period 2005 to 2013. The analysis mainly covers articles published under the title Revista Médica del Instituto Mexicano del Seguro Social and its possible variants. For the analysis, Scopus, Excel and Access were used. According to the Scopus database, 864 articles were published during the period 2005 to 2013. We identified the authors with the highest number of contributions, including the articles with the highest citation rates and the forms of the documents cited. We also classified articles by subject, type of document and other bibliometric indicators that characterize the publications. The use of Scopus makes it possible to analyze, with an external tool, the visibility of the scientific production published in the Revista Médica del IMSS. The use of this database also contributes to identifying the state of science in Mexico, as well as in developing countries.
A comprehensive view of the web-resources related to sericulture
Singh, Deepika; Chetia, Hasnahana; Kabiraj, Debajyoti; Sharma, Swagata; Kumar, Anil; Sharma, Pragya; Deka, Manab; Bora, Utpal
2016-01-01
Recent progress in the field of sequencing and analysis has led to a tremendous spike in data and the development of data science tools. One of the outcomes of this scientific progress is the development of numerous databases, which are gaining popularity in all disciplines of biology, including sericulture. As economically important organisms, silkworms are studied extensively for their numerous applications in the fields of textiles, biomaterials, biomimetics, etc. Similarly, host plants, pests, pathogens, etc. are also being probed to understand seri-resources more efficiently. These studies have led to the generation of numerous sericulture-related databases which are extremely helpful to the scientific community. In this article, we review all the available online resources on the silkworm and its related organisms, including databases as well as informative websites. We examine their basic features and their impact on research through citation count analysis, and finally discuss the role of emerging sequencing and analysis technologies in the field of seri-data science. As an outcome of this review, a web portal named SeriPort has been created, which will act as an index for the various sericulture-related databases and web resources available in cyberspace. Database URL: http://www.seriport.in/ PMID:27307138
Wain, Karen E; Riggs, Erin; Hanson, Karen; Savage, Melissa; Riethmaier, Darlene; Muirhead, Andrea; Mitchell, Elyse; Packard, Bethanny Smith; Faucett, W Andrew
2012-10-01
The International Standards for Cytogenomic Arrays (ISCA) Consortium is a worldwide collaborative effort dedicated to optimizing patient care by improving the quality of chromosomal microarray testing. The primary effort of the ISCA Consortium has been the development of a database of copy number variants (CNVs) identified during the course of clinical microarray testing. This database is a powerful resource for clinicians, laboratories, and researchers, and can be utilized for a variety of applications, such as facilitating standardized interpretations of certain CNVs across laboratories or providing phenotypic information for counseling purposes when published data are sparse. A recognized limitation to the clinical utility of this database, however, is the quality of clinical information available for each patient. Clinical genetic counselors are uniquely suited to facilitate the communication of this information to the laboratory by virtue of their existing clinical responsibilities, case management skills, and appreciation of the evolving nature of scientific knowledge. We intend to highlight the critical role that genetic counselors play in ensuring optimal patient care by contributing to the clinical utility of the ISCA Consortium's database, as well as to the quality of individual patient microarray reports provided by contributing laboratories. Current tools (paper and electronic forms) created to maximize this collaboration are shared. In addition to making a professional commitment to providing complete clinical information, genetic counselors are invited to become ISCA members and to become involved in the discussions and initiatives within the Consortium.
Database Systems. Course Three. Information Systems Curriculum.
ERIC Educational Resources Information Center
O'Neil, Sharon Lund; Everett, Donna R.
This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
23 CFR 972.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... to operate and maintain the management systems and their associated databases; and (5) A process for... systems will use databases with a geographical reference system that can be used to geolocate all database...
NASA Astrophysics Data System (ADS)
Manzella, Giuseppe M. R.; Bartolini, Andrea; Bustaffa, Franco; D'Angelo, Paolo; De Mattei, Maurizio; Frontini, Francesca; Maltese, Maurizio; Medone, Daniele; Monachini, Monica; Novellino, Antonio; Spada, Andrea
2016-04-01
The MAPS (Marine Planning and Service Platform) project aims to build a computer platform supporting a Marine Information and Knowledge System. One of the main objectives of the project is to develop a repository that gathers, classifies and structures marine scientific literature and data, thus guaranteeing their accessibility to researchers and institutions by means of standard protocols. In oceanography the cost of data collection is very high, and the new paradigm is based on the concept of collecting once and re-using many times (for re-analysis, marine environment assessment, studies on trends, etc.). This concept requires access to quality-controlled data and to information provided in reports (grey literature) and/or in the relevant scientific literature. Hence, new technology needs to be created by integrating several disciplines such as data management, information systems and knowledge management. One of the most important EC projects on data management, SeaDataNet (www.seadatanet.org), provides an initial example of knowledge management through the Common Data Index, which links to data and (eventually) to papers. There are efforts to develop search engines that find an author's contributions to the scientific literature; this implies the use of persistent identifiers (such as DOIs), as is done in ORCID. However, very few efforts are dedicated to linking publications to the data cited or used, or to data of importance for the published studies. This is the objective of MAPS. Full-text technologies are often unsuccessful since they assume the presence of specific keywords in the text; to fix this problem, the MAPS project uses several semantic technologies for retrieving text and data, thus producing far more relevant results. The main parts of our design of the search engine are: • Syntactic parser - This module is responsible for the extraction of "rich words" from the text: the whole document is parsed to extract the words that are most meaningful for the main argument of the document, and the extraction is applied in the form of N-grams (mono-grams, bi-grams, tri-grams). • MAPS database - This module is a simple database which contains all the N-grams used by MAPS (physical parameters from SeaDataNet vocabularies) to define our marine "ontology". • Relation identifier - This module performs the most important task of identifying relationships between the N-grams extracted from the text by the parser and the provided oceanographic terminology. It checks the N-grams supplied by the syntactic parser and matches them with the terms stored in the MAPS database. Found matches are returned to the parser with the inflected form appearing in the source text. • A "relaxed" extractor - This option can be activated when the search engine is launched. It was introduced to give the user a chance to create new N-grams by combining existing mono-grams and bi-grams in the database with rich words found within the source text. The innovation of a semantic engine lies in the fact that the process is not just the retrieval of already known documents by means of a simple term query, but rather the retrieval of a population of documents whose existence was unknown.
The system answers with a list of results ordered according to the following criteria: • Relevance - of the document with respect to the concept searched • Date - of publication of the paper • Source - data provider as defined in the SeaDataNet Common Data Index • Matrix - environmental matrices as defined in the oceanographic field • Geographic area - area specified in the text • Clustering - the process of organizing objects into groups whose members are similar; the clustering returns the related documents as its output. For each document the MAPS visualization provides: • Title, author, source/provider of data, web address • Tagging of key terms or concepts • Summary of the document • Visualization of the whole document. Work is currently under way on adding the number of citations of each document to the criteria of the advanced search; in this case the engine should be able to connect to any of the existing bibliographic citation systems (such as Google Scholar, Scopus, etc.).
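A stripped-down sketch of the syntactic parser and relation identifier described above, extracting 1-3-grams and matching them against a controlled vocabulary (the vocabulary and sample text are invented; the real MAPS database holds SeaDataNet parameter terms):

```python
# Stripped-down sketch of the "syntactic parser" / "relation identifier"
# pair: extract mono-, bi- and tri-grams from a text and match them
# against a controlled vocabulary. The vocabulary and text are invented.
import re

VOCABULARY = {
    "salinity",
    "sea surface temperature",
    "dissolved oxygen",
}

def ngrams(text: str, n_max: int = 3):
    """Yield all 1..n_max-grams of the lowercased text."""
    words = re.findall(r"[a-z]+", text.lower())
    for n in range(1, n_max + 1):
        for i in range(len(words) - n + 1):
            yield " ".join(words[i:i + n])

def match_terms(text: str):
    """Return vocabulary terms found in the text, as the relation
    identifier would hand them back to the parser."""
    return sorted(set(ngrams(text)) & VOCABULARY)

print(match_terms(
    "Profiles of dissolved oxygen and salinity were collected in 2014."
))  # -> ['dissolved oxygen', 'salinity']
```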
Husen, Peter; Tarasov, Kirill; Katafiasz, Maciej; Sokol, Elena; Vogt, Johannes; Baumgart, Jan; Nitsch, Robert; Ekroos, Kim; Ejsing, Christer S
2013-01-01
Global lipidomics analysis across large sample sizes produces high-content datasets that require dedicated software tools supporting lipid identification and quantification, efficient data management and lipidome visualization. Here we present a novel software-based platform for streamlined data processing, management and visualization of shotgun lipidomics data acquired using high-resolution Orbitrap mass spectrometry. The platform features the ALEX framework designed for automated identification and export of lipid species intensity directly from proprietary mass spectral data files, and an auxiliary workflow using database exploration tools for integration of sample information, computation of lipid abundance and lipidome visualization. A key feature of the platform is the organization of lipidomics data in "database table format" which provides the user with an unsurpassed flexibility for rapid lipidome navigation using selected features within the dataset. To demonstrate the efficacy of the platform, we present a comparative neurolipidomics study of cerebellum, hippocampus and somatosensory barrel cortex (S1BF) from wild-type and knockout mice devoid of the putative lipid phosphate phosphatase PRG-1 (plasticity related gene-1). The presented framework is generic, extendable to processing and integration of other lipidomic data structures, can be interfaced with post-processing protocols supporting statistical testing and multivariate analysis, and can serve as an avenue for disseminating lipidomics data within the scientific community. The ALEX software is available at www.msLipidomics.info.
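The "database table format" described above is essentially one long table keyed by sample and lipid feature; the following toy version (column names and values invented, using pandas) shows the kind of navigation this enables:

```python
# Toy version of "database table format" lipidomics data: one long
# table keyed by sample and lipid species, which makes slicing by any
# selected feature a one-liner. Column names and values are invented.
import pandas as pd

df = pd.DataFrame(
    {
        "sample":    ["S1BF_wt", "S1BF_ko", "hippo_wt", "hippo_ko"],
        "genotype":  ["wt", "ko", "wt", "ko"],
        "region":    ["S1BF", "S1BF", "hippocampus", "hippocampus"],
        "lipid":     ["PC 34:1", "PC 34:1", "PC 34:1", "PC 34:1"],
        "intensity": [1.00, 0.62, 0.95, 0.97],
    }
)

# Navigate by any selected feature, e.g. knockout effect per region:
print(df.pivot_table(index="region", columns="genotype",
                     values="intensity", aggfunc="mean"))
```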
Yazdani, Kamran; Rahimi-Movaghar, Afarin; Nedjat, Saharnaz; Ghalichi, Leila; Khalili, Malahat
2015-01-01
Since Tehran University of Medical Sciences (TUMS) has the oldest and the highest number of research centers among all Iranian medical universities, this study was conducted to evaluate the scientific output of research centers affiliated with TUMS using scientometric indices, together with the factors affecting it. Moreover, a number of scientometric indicators were introduced. This cross-sectional study evaluated the 5-year scientific performance of the research centers of TUMS. Data were collected through questionnaires, annual evaluation reports of the Ministry of Health, and the Scopus database. We used appropriate measures of central tendency and variation for descriptive analyses, and uni- and multi-variable linear regression to evaluate the effect of independent factors on the scientific output of the centers. The medians of the numbers of papers and books over the 5-year period were 150.5 and 2.5, respectively, and the median of "articles per researcher" was 19.1. Based on multiple linear regression, younger center age (p=0.001), having a separate budget line (p=0.016), and the number of research personnel (p<0.001) had a direct significant correlation with the number of articles, while real property had an inverse significant correlation with it (p=0.004). The results can help policy makers and research managers allocate sufficient resources to improve the current situation of the centers. Newly adopted and effective scientometric indices are suggested for evaluating the scientific output and functions of these centers.
Jeannerat, Damien
2017-01-01
The introduction of a universal data format to report the correlation data of 2D NMR spectra such as COSY, HSQC and HMBC spectra would have a large impact on the reliability of structure determination of small organic molecules. These lists of assigned cross peaks would bridge the signals found in 1D and 2D NMR spectra and the assigned chemical structure. The record could be very compact and both human- and computer-readable, so that it can be included in the supplementary material of publications and easily transferred into databases of scientific literature and chemical compounds. The records will allow authors, reviewers and future users to test the consistency and, in favorable situations, the uniqueness of the assignment of the correlation data to the associated chemical structures. Ideally, the data format for the correlation data should include direct links to the NMR spectra, making it possible to validate their reliability and allowing direct comparison of spectra. To reap the full benefits of their potential, the correlation data and the NMR spectra should therefore follow any manuscript in the review process and be stored in an open-access database after publication. Keeping all NMR spectra, correlation data and assigned structures together at all times will allow the future development of validation tools, increasing the reliability of past and future NMR data. This will also facilitate the development of artificial-intelligence analysis of NMR spectra by providing a source of data that can be used efficiently because it has been validated or can be validated by future users. Copyright © 2016 John Wiley & Sons, Ltd.
Vaccarino, Anthony L; Dharsee, Moyez; Strother, Stephen; Aldridge, Don; Arnott, Stephen R; Behan, Brendan; Dafnas, Costas; Dong, Fan; Edgecombe, Kenneth; El-Badrawi, Rachad; El-Emam, Khaled; Gee, Tom; Evans, Susan G; Javadi, Mojib; Jeanson, Francis; Lefaivre, Shannon; Lutz, Kristen; MacPhee, F Chris; Mikkelsen, Jordan; Mikkelsen, Tom; Mirotchnick, Nicholas; Schmah, Tanya; Studzinski, Christa M; Stuss, Donald T; Theriault, Elizabeth; Evans, Kenneth R
2018-01-01
Historically, research databases have existed in isolation with no practical avenue for sharing or pooling medical data into high dimensional datasets that can be efficiently compared across databases. To address this challenge, the Ontario Brain Institute's "Brain-CODE" is a large-scale neuroinformatics platform designed to support the collection, storage, federation, sharing and analysis of different data types across several brain disorders, as a means to understand common underlying causes of brain dysfunction and develop novel approaches to treatment. By providing researchers access to aggregated datasets that they otherwise could not obtain independently, Brain-CODE incentivizes data sharing and collaboration and facilitates analyses both within and across disorders and across a wide array of data types, including clinical, neuroimaging and molecular. The Brain-CODE system architecture provides the technical capabilities to support (1) consolidated data management to securely capture, monitor and curate data, (2) privacy and security best-practices, and (3) interoperable and extensible systems that support harmonization, integration, and query across diverse data modalities and linkages to external data sources. Brain-CODE currently supports collaborative research networks focused on various brain conditions, including neurodevelopmental disorders, cerebral palsy, neurodegenerative diseases, epilepsy and mood disorders. These programs are generating large volumes of data that are integrated within Brain-CODE to support scientific inquiry and analytics across multiple brain disorders and modalities. By providing access to very large datasets on patients with different brain disorders and enabling linkages to provincial, national and international databases, Brain-CODE will help to generate new hypotheses about the biological bases of brain disorders, and ultimately promote new discoveries to improve patient care.
Chen, Yu-Chun; Wu, Jau-Ching; Haschler, Ingo; Majeed, Azeem; Chen, Tzeng-Ji; Wetter, Thomas
2011-01-01
Studies that use electronic health databases as research material are becoming popular, but the influence of a single electronic health database has not been well investigated. The United Kingdom's General Practice Research Database (GPRD) is one of the few electronic health databases publicly available to academic researchers. This study analyzed studies that used the GPRD to demonstrate the scientific production and academic impact generated by a single public health database. A total of 749 studies published between 1995 and 2009 with 'General Practice Research Database' as their topic, defined as GPRD studies, were extracted from the Web of Science. By the end of 2009, the GPRD had attracted 1251 authors from 22 countries and been used extensively in 749 studies published in 193 journals across 58 study fields. Each GPRD study was cited 2.7 times on average by successive studies. Moreover, the total number of GPRD studies increased rapidly and is expected to reach 1500 by 2015, twice the number accumulated by the end of 2009. Since 17 of the most prolific authors (1.4% of all authors) contributed nearly half (47.9%) of GPRD studies, success in conducting GPRD studies appears to be cumulative. The GPRD was used mainly in, but not limited to, the three study fields of "Pharmacology and Pharmacy", "General and Internal Medicine", and "Public, Environmental and Occupational Health". The UK and the United States were the two most active regions of GPRD studies, and one-third of GPRD studies were internationally co-authored. A public electronic health database such as the GPRD can promote scientific production in many ways. Data owners of electronic health databases at a national level should consider how to reduce access barriers and make data more available for research.
DoSSiER: Database of scientific simulation and experimental results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Hans; Yarba, Julia; Genser, Krzystof
The Geant4, GeantV and GENIE collaborations regularly perform validation and regression tests for simulation results. DoSSiER (Database of Scientific Simulation and Experimental Results) is being developed as a central repository to store the simulation results as well as the experimental data used for validation. DoSSiER can be easily accessed via a web application; in addition, a web service allows programmatic access to the repository to extract records in JSON or XML exchange formats. In this paper, we describe the functionality and current status of the various components of DoSSiER, as well as the technology choices we made.
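The abstract mentions a web service returning records in JSON or XML but does not document an endpoint; purely as an illustration of that access pattern, the URL, parameters and response fields below are hypothetical:

```python
# Illustration of programmatic access to a validation-results web
# service of the kind described above. The endpoint URL, parameters
# and response fields are hypothetical, not DoSSiER's published API.
import requests

resp = requests.get(
    "https://example.org/dossier/api/records",   # hypothetical endpoint
    params={"experiment": "thin-target", "format": "json"},
    timeout=30,
)
resp.raise_for_status()

for record in resp.json():                        # hypothetical schema
    print(record.get("observable"), record.get("beam_energy_gev"))
```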
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
2005-01-01
This report describes a database routine called DB90 which is intended for use with scientific and engineering computer programs. The software is written in the Fortran 90/95 programming language standard with file input and output routines written in the C programming language. These routines should be completely portable to any computing platform and operating system that has Fortran 90/95 and C compilers. DB90 allows a program to supply relation names and up to 5 integer key values to uniquely identify each record of each relation. This permits the user to select records or retrieve data in any desired order.
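DB90 itself is Fortran 90/95; the following is a language-neutral sketch, in Python, of its addressing scheme of a relation name plus up to five integer keys (the function and variable names are invented):

```python
# Sketch of DB90's addressing scheme: each record is located by a
# relation name plus up to five integer keys. Names are invented; the
# real routine is Fortran 90/95 with C file I/O underneath.
store: dict[tuple, list[float]] = {}

def put(relation: str, keys: tuple[int, ...], record: list[float]) -> None:
    assert 1 <= len(keys) <= 5, "DB90 permits up to 5 integer keys"
    store[(relation,) + keys] = record

def get(relation: str, keys: tuple[int, ...]) -> list[float]:
    return store[(relation,) + keys]

put("loads", (3, 1), [12.5, 0.0, -4.2])   # e.g. load case 3, node 1
print(get("loads", (3, 1)))
```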
Terreri, Maria Teresa R A; Bernardo, Wanderley Marques; Len, Claudio Arnaldo; da Silva, Clovis Artur Almeida; de Magalhães, Cristina Medeiros Ribeiro; Sacchetti, Silvana B; Ferriani, Virgínia Paes Leme; Piotto, Daniela Gerent Petry; de Souza Cavalcanti, André; de Moraes, Ana Júlia Pantoja; Sztajnbok, Flavio Roberto; de Oliveira, Sheila Knupp Feitosa; Campos, Lucia Maria Arruda; Bandeira, Marcia; Santos, Flávia Patricia Sena Teixeira; Magalhães, Claudia Saad
2016-01-01
To establish guidelines based on scientific evidence for the management of familial Mediterranean fever (FMF). The guideline was prepared from 5 clinical questions structured using PICO (Patient, Intervention or indicator, Comparison and Outcome) to search key primary scientific information databases. After identifying the potential studies to support the recommendations, these were graded by strength of evidence and grade of recommendation. 10,341 articles were retrieved and evaluated by title and abstract; from these, 46 articles were selected to support the recommendations. 1. The diagnosis of FMF is based on clinical manifestations, characterized by recurrent febrile episodes associated with abdominal pain, chest pain or arthritis of large joints. 2. FMF is a genetic disease with an autosomal recessive trait, caused by mutation in the MEFV gene. 3. Laboratory tests are not specific, demonstrating high serum levels of inflammatory proteins in the acute phase of the disease, but often also showing high levels between attacks. SAA serum levels may be especially useful in monitoring the effectiveness of treatment. 4. The therapy of choice is colchicine, which has proven effectiveness in preventing acute inflammatory episodes and progression toward amyloidosis in adults. 5. Based on the available information, the use of biological drugs appears to be an alternative for patients with FMF who do not respond to or are intolerant of therapy with colchicine. Copyright © 2015 Elsevier Editora Ltda. All rights reserved.
Vascular knowledge in medieval times was the turning point for the humanistic trend.
Ducasse, E; Speziale, F; Baste, J C; Midy, D
2006-06-01
Knowledge of the history of our surgical specialty may broaden our viewpoint for everyday practice. We illustrate the scientific progress made in medieval times relevant to the vascular system and blood circulation, progress made despite prevailing religious and philosophical dogma. We located all articles concerning vascular knowledge and historical reviews in databases such as MEDLINE, EMBASE and the Database of Abstracts of Reviews (DARE). We also explored the database of the register of the French National Library, the French Medical Inter-University Library (BIUM), the Italian National Library, and the French and Italian Libraries in the Vatican. All data were collected and analysed in chronological order. Medieval vascular knowledge was inherited from Greek via Byzantine and Arabic writings, with the first challenges to the accepted vascular schema emanating from an Arab physician in the 13th century. Dissection was forbidden and clerical rules instilled a fear of blood. Major contributions to scientific progress in the vascular field in medieval times came from Ibn-al-Nafis and Harvey. Vascular specialists today may feel proud to recall that once religious dogma declined in early medieval times, vascular anatomic and physiological discoveries led the way to scientific progress.
Déjà vu: a database of highly similar citations in the scientific literature
Errami, Mounir; Sun, Zhaohui; Long, Tara C.; George, Angela C.; Garner, Harold R.
2009-01-01
In the scientific research community, plagiarism and covert multiple publications of the same data are considered unacceptable because they undermine public confidence in scientific integrity. Yet, little has been done to help authors and editors to identify highly similar citations, which sometimes may represent cases of unethical duplication. For this reason, we have made available Déjà vu, a publicly available database of highly similar Medline citations identified by the text similarity search engine eTBLAST. Following manual verification, highly similar citation pairs are classified into various categories ranging from duplicates with different authors to sanctioned duplicates. Déjà vu records also contain user-provided commentary and supporting information to substantiate each document's categorization. Déjà vu and eTBLAST are available to authors, editors, reviewers, ethicists and sociologists to study, intercept, annotate and deter questionable publication practices. These tools are part of a sustained effort to enhance the quality of Medline as ‘the’ biomedical corpus. The Déjà vu database is freely accessible at http://spore.swmed.edu/dejavu. The tool eTBLAST is also freely available at http://etblast.org. PMID:18757888
Déjà vu: a database of highly similar citations in the scientific literature.
Errami, Mounir; Sun, Zhaohui; Long, Tara C; George, Angela C; Garner, Harold R
2009-01-01
In the scientific research community, plagiarism and covert multiple publications of the same data are considered unacceptable because they undermine public confidence in scientific integrity. Yet, little has been done to help authors and editors to identify highly similar citations, which sometimes may represent cases of unethical duplication. For this reason, we have made available Déjà vu, a publicly available database of highly similar Medline citations identified by the text similarity search engine eTBLAST. Following manual verification, highly similar citation pairs are classified into various categories ranging from duplicates with different authors to sanctioned duplicates. Déjà vu records also contain user-provided commentary and supporting information to substantiate each document's categorization. Déjà vu and eTBLAST are available to authors, editors, reviewers, ethicists and sociologists to study, intercept, annotate and deter questionable publication practices. These tools are part of a sustained effort to enhance the quality of Medline as 'the' biomedical corpus. The Déjà vu database is freely accessible at http://spore.swmed.edu/dejavu. The tool eTBLAST is also freely available at http://etblast.org.
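As a rough illustration of the text-similarity screening Déjà vu depends on, the toy function below computes a cosine similarity over word counts; eTBLAST's actual engine is more sophisticated, and this is not its algorithm:

```python
# Illustrative only: a toy cosine similarity between two citation texts, in
# the spirit of the duplicate screening Déjà vu relies on. This is NOT the
# eTBLAST algorithm, just a minimal bag-of-words comparison.
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

print(cosine_similarity("plagiarism undermines public confidence",
                        "plagiarism undermines the public confidence"))
```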
Jones, Alan Wayne
2005-03-01
The importance and prestige of a scientific journal is increasingly being judged by the number of times the articles it publishes are cited or referenced in articles published in other scientific journals. Citation counting is also used to assess the merits of individual scientists when academic promotion and tenure are decided. With the help of the Thomson Institute for Scientific Information (Thomson ISI), a citation database was created for six leading forensic science and legal medicine journals. This database was used to determine the most highly cited articles, authors and journals, and the most prolific authors of articles in the forensic sciences. The forensic science and legal medicine journals evaluated were: Journal of Forensic Sciences (JFS), Forensic Science International (FSI), International Journal of Legal Medicine (IJLM), Medicine, Science and the Law (MSL), American Journal of Forensic Medicine and Pathology (AJFMP), and Science and Justice (S&J). The resulting forensics database contained 14,210 papers published between 1981 and 2003. This in-depth bibliometric analysis identified the crème de la crème in forensic science and legal medicine in a quantitative and objective way by citation analysis, with a focus on articles, authors and journals.
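The core counting step of such a bibliometric analysis can be sketched briefly; the citation pairs below are invented for illustration:

```python
# A minimal sketch of the citation-counting step behind a bibliometric
# analysis: tally citations per article and report the most cited.
# The (citing, cited) pairs are hypothetical sample data.
from collections import Counter

citations = [
    ("A1", "JFS-1990-12"), ("A2", "JFS-1990-12"), ("A3", "FSI-1995-07"),
]
counts = Counter(cited for _, cited in citations)
for article, n in counts.most_common(2):
    print(article, n)
```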
Zydziūnaite, Vilma; Suominen, Tarja; Astedt-Kurki, Päivi; Lepaite, Daiva
2010-01-01
The objective was to describe the research methods and research focuses on ethical dilemmas concerning decision-making within health care leadership. The search was conducted on the Medline and PubMed databases (1998-2008). The systematic review included 21 selected articles. The ethical dilemmas concerning decision-making within health care leadership are related to three levels: institutional (particular organization), political and local interface (local governmental structure), and national (professional expertise and system). The terms used as equivalent to "ethical dilemma" are the following: "continuous balancing," "result of resource allocation," "gap between professional obligations and possibilities," "ethically controversial situation," "concern about interactions," "ethical difficulty," "outcome of medical choices," "concern about society access to health care resources," "ethically difficult/challenging situation," "(the consequence of) ethical concern/ethical issue." In qualitative studies, a semi-structured interview and qualitative content analysis are the most commonly applied methods; in quantitative studies, questionnaire surveys are employed. The research literature lacks specification of ethical dilemmas in decision-making according to the professional qualification of the health care professionals involved in health care management/administration. Research on ethical dilemmas in health care leadership, management, and administration should integrate data about the levels at which ethical dilemmas occur and investigate ethical dilemmas as complex phenomena, because they are attached to decision-making and to the specific nuances of health care management/administration. The scientific problem presented in this article requires extensive scientific discussion and further research on ethical dilemmas concerning decision-making within health care leadership at various levels.
Burnham, J F; Shearer, B S; Wall, J C
1992-01-01
Librarians have used bibliometrics for many years to assess collections and to provide data for making selection and deselection decisions. With the advent of new technology--specifically, CD-ROM databases and reprint file database management programs--new cost-effective procedures can be developed. This paper describes a recent multidisciplinary study conducted by two library faculty members and one allied health faculty member to test a bibliometric method that used the MEDLINE and CINAHL databases on CD-ROM and the Papyrus database management program to produce a new collection development methodology. PMID:1600424
Creating databases for biological information: an introduction.
Stein, Lincoln
2013-06-01
The essence of bioinformatics is dealing with large quantities of information. Whether it be sequencing data, microarray data files, mass spectrometric data (e.g., fingerprints), the catalog of strains arising from an insertional mutagenesis project, or even large numbers of PDF files, there inevitably comes a time when the information can simply no longer be managed with files and directories. This is where databases come into play. This unit briefly reviews the characteristics of several database management systems, including flat file, indexed file, relational databases, and NoSQL databases. It compares their strengths and weaknesses and offers some general guidelines for selecting an appropriate database management system. Copyright 2013 by John Wiley & Sons, Inc.
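As a minimal example of the unit's guidance, the sketch below stores insertional-mutagenesis strains in a single-file relational database; the schema and values are invented for illustration:

```python
# A small, hedged illustration of the unit's point: once flat files become
# unmanageable, even a single-file relational database helps. The schema
# and sample values here are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for persistent storage
conn.execute("""CREATE TABLE strains (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    insertion_site TEXT
)""")
conn.execute("INSERT INTO strains (name, insertion_site) VALUES (?, ?)",
             ("mutant-17", "chr2:104233"))
for row in conn.execute("SELECT name, insertion_site FROM strains"):
    print(row)
```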
Knowledge-based assistance for science visualization and analysis using large distributed databases
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.
1993-01-01
Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) within a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.
Knowledge-based assistance for science visualization and analysis using large distributed databases
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.
1992-01-01
Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on the concept called the Data Hub. With the Data Hub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) within a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.
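A hedged sketch of the DataHub idea as described, in Python: every node programs against the same data-service interface (access, retrieval, update) regardless of where the data lives. The class and method names are ours, not the proposal's:

```python
# A hedged sketch of a uniform data-service interface in the spirit of the
# DataHub concept. Class and method names are our invention; a real system
# would add remote transports, catalogs, and access control.
from abc import ABC, abstractmethod
from typing import Dict

class DataService(ABC):
    """Uniform access/retrieval/update interface seen by every node."""
    @abstractmethod
    def retrieve(self, dataset: str) -> bytes: ...
    @abstractmethod
    def update(self, dataset: str, payload: bytes) -> None: ...

class InMemoryNode(DataService):
    def __init__(self) -> None:
        self._store: Dict[str, bytes] = {}
    def retrieve(self, dataset: str) -> bytes:
        return self._store[dataset]
    def update(self, dataset: str, payload: bytes) -> None:
        self._store[dataset] = payload

node = InMemoryNode()
node.update("spectra/run42", b"\x00\x01")
print(node.retrieve("spectra/run42"))
```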
Meetei, Potshangbam Angamba; Singh, Pankaj; Nongdam, Potshangbam; Prabhu, N Prakash; Rathore, RS; Vindal, Vaibhav
2012-01-01
The North-East region of India is one of the twelve mega-biodiversity regions, containing many rare and endangered species. A curated database of medicinal and aromatic plants from the region, called NeMedPlant, has been developed. The database contains traditional, scientific and medicinal information about plants and their active constituents, obtained from scholarly literature and local sources. The database is cross-linked with major biochemical databases and analytical tools. The integrated database provides a resource for investigations into hitherto unexplored medicinal plants and serves to speed up the discovery of natural products-based drugs. Availability: The database is available for free at http://bif.uohyd.ac.in/nemedplant/ or http://202.41.85.11/nemedplant/ PMID:22419844
Federated Web-accessible Clinical Data Management within an Extensible NeuroImaging Database
Keator, David B.; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R.; Bockholt, Jeremy; Grethe, Jeffrey S.
2010-01-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
Federated web-accessible clinical data management within an extensible neuroimaging database.
Ozyurt, I Burak; Keator, David B; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R; Bockholt, Jeremy; Grethe, Jeffrey S
2010-12-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site.
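The parallel query builder/result combiner the abstract mentions follows a fan-out/merge pattern, sketched below with stubbed connections; HID's actual query infrastructure is far richer:

```python
# A hedged sketch of the fan-out/merge pattern behind a federated query:
# the same query is issued to several databases concurrently and the row
# sets are merged. Site URLs and rows are stubs for illustration.
from concurrent.futures import ThreadPoolExecutor

SITES = ["https://site-a.example/hid", "https://site-b.example/hid"]

def query_site(site_url: str) -> list:
    # Stub: a real implementation would send the query to the database
    # behind `site_url` and return its rows.
    return [(site_url, "subject-001")]

with ThreadPoolExecutor() as pool:
    per_site = pool.map(query_site, SITES)           # parallel fan-out
combined = [row for rows in per_site for row in rows]  # result combiner
print(combined)
```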
Scientific challenges in shrubland ecosystems
William T. Sommers
2001-01-01
A primary goal in land management is to sustain the health, diversity, and productivity of the country's rangelands and shrublands for future generations. This type of sustainable management requires assuring the availability and appropriate use of scientific information for decisionmaking. Some of the most challenging scientific problems of shrubland ecosystem management are...
Highlighted scientific findings of the Interior Columbia Basin Ecosystem Management Project.
Thomas M. Quigley; Heidi Bigler Cole
1997-01-01
Decisions regarding 72 million acres of Forest Service- and Bureau of Land Management-administered lands will be based on scientific findings brought forth in the Interior Columbia Basin Ecosystem Management Project. Some highlights of the scientific findings are presented here. Project scientists drew three general conclusions: (1) Conditions and trends differ widely...
An annotated bibliography of scientific literature on managing forests for carbon benefits
Sarah J. Hines; Linda S. Heath; Richard A. Birdsey
2010-01-01
Managing forests for carbon benefits is a consideration for climate change, bioenergy, sustainability, and ecosystem services. A rapidly growing body of scientific literature on forest carbon management includes experimental, modeling, and synthesis approaches, at the stand- to landscape- to continental-level. We conducted a search of the scientific literature on the...
Morgese, Giorgia; Lombardo, Giovanni Pietro; Albani, Alessandra
2016-11-01
This article examines the areas of research conducted at the Laboratory of Experimental Psychology of the University of Rome from 1907 to 1947, directed first by Sante De Sanctis (1862-1935) and then, from 1931 on, by Mario Ponzo (1882-1960). The method used to distinguish the topics and areas of research that characterized the Roman School during this period is a textual analysis of the titles in the journal in which studies completed at the laboratory were published, namely, Contributi del Laboratorio di Psicologia sperimentale [Contributions of the Laboratory of Experimental Psychology]. This empirical analysis, which complements and supports the historiographical interpretation, shows which disciplines emerged under the two directors' management across the two periods in the pursuit of scientific psychology in Rome and in Italy. This analysis highlights the process of adjustment from a traditional, general approach to a more theoretical-technical application. This article is a new contribution to the Italian debate on the periodization of the "crisis" in Italian psychology. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Italian polar data center for capacity building associated with the IHY
NASA Astrophysics Data System (ADS)
Damiani, A.; Bendetti, E.; Storini, M.; Rafanelli, C.
The International Heliophysical Year (IHY) offers a good opportunity to develop and coordinate studies on the Sun-Earth system by using a large variety of simultaneous data obtained by satellite spacecraft and ground-based instruments. Among these data we recall the ones coming from solar and interplanetary-medium observations, and from auroral, neutron-monitor, geomagnetic-field, ionospheric, meteorological and other atmospheric observatories. In this context, an Information System for the Italian Research in Antarctica (SIRIA) was started during 2003, aiming to collect information on the scientific research projects funded by the National Antarctic Research Program (PNRA) of Italy since its birth (1985). It belongs to the Joint Committee on Antarctic Data Management (JCADM) of SCAR (Scientific Committee on Antarctic Research) as the Italian Antarctic Data Center. SIRIA, being the Italian Polar Database, also gathers information on research activities conducted in North Pole regions. This Information System can be a relevant resource for capacity building associated with the IHY, particularly for people involved in interdisciplinary research. We describe the present status of the Italian Polar Data Center and its potential use.
Jo, Junyoung; Leem, Jungtae; Lee, Jin Moo; Park, Kyoung Sun
2017-06-15
Primary dysmenorrhoea is menstrual pain without pelvic pathology and is the most common gynaecological condition in women. Xuefu Zhuyu decoction (XZD), or Hyeolbuchukeo-tang, a traditional herbal formula, has been used as a treatment for primary dysmenorrhoea. The purpose of this study is to assess the current published evidence regarding XZD as a treatment for primary dysmenorrhoea. The following databases will be searched from their inception until April 2017: MEDLINE (via PubMed), the Allied and Complementary Medicine Database (AMED), EMBASE, The Cochrane Library, six Korean medical databases (Korean Studies Information Service System, DBPia, Oriental Medicine Advanced Searching Integrated System, Research Information Service System, KoreaMed and the Korean Traditional Knowledge Portal), three Chinese medical databases (China National Knowledge Infrastructure (CNKI), Wan Fang Database and Chinese Scientific Journals Database (VIP)) and one Japanese medical database (CiNii). The randomised clinical trials (RCTs) included in this systematic review will comprise those that used XZD or modified XZD. The control groups in the RCTs include no treatment, placebo, conventional medication or other treatments. Trials testing XZD as an adjunct to other treatments, and studies where the control group received the same treatment as the intervention group, will also be included. Data extraction and risk-of-bias assessments will be performed by two independent reviewers. The risk of bias will be assessed with the Cochrane risk of bias tool. All statistical analyses will be conducted using Review Manager software (RevMan V.5.3.0). This systematic review will be published in a peer-reviewed journal. The review will also be disseminated electronically and in print. The review will benefit patients and practitioners in the fields of traditional and conventional medicine. PROSPERO registration number: CRD42016050447. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Implementation of a data management software system for SSME test history data
NASA Technical Reports Server (NTRS)
Abernethy, Kenneth
1986-01-01
The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM is discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for the initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could use INSIGHT2 as well, and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.
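A modern analogue of the two custom jobs described, bulk initial loading and a specialized statistical report, might look like the following sketch; the table and field names are invented, and RIM7 itself is a 1980s system with its own interfaces:

```python
# A hedged, modern analogue of the custom accessing programs described:
# bulk-load a large test-history data set, then produce a statistical
# summary the stock report generator could not. Schema and rows are
# invented sample data, not SSME records.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tests (engine_id INTEGER, duration_s REAL)")
rows = [(2004, 520.0), (2004, 300.5), (2011, 8.2)]   # sample data
conn.executemany("INSERT INTO tests VALUES (?, ?)", rows)  # bulk initial load
report = conn.execute(
    "SELECT engine_id, COUNT(*), AVG(duration_s) FROM tests GROUP BY engine_id"
).fetchall()
print(report)
```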
Horban', A Ie
2013-09-01
The implementation of state policy on technology transfer in the medical sector, pursuant to the Law of Ukraine of 02.10.2012 No. 5407-VI "On Amendments to the Law of Ukraine 'On State Regulation of Activity in the Field of Technology Transfer'", is considered, namely ensuring the formation of a branch database of technologies and intellectual property rights owned by scientific institutions, organizations, higher medical education institutions and enterprises of the healthcare sphere of Ukraine and created at budget expense. An analysis of international and domestic experience in processing information about intellectual property rights and of systems supporting the transfer of new technologies is presented. The main conceptual principles for creating this branch technology-transfer database and a branch technology-transfer network are defined.
NASA Astrophysics Data System (ADS)
Gay, D. A.; Schmeltz, D.; Prestbo, E.; Olson, M.; Sharac, T.; Tordon, R.
2013-04-01
The National Atmospheric Deposition Program (NADP) developed and operates a collaborative network of atmospheric mercury monitoring sites based in North America - the Atmospheric Mercury Network (AMNet). The justification for the network was growing interest and demand from many scientists and policy makers for a robust database of measurements to improve model development, assess policies and programs, and improve estimates of mercury dry deposition. Many different agencies and groups support the network, including federal, state, tribal, and international governments, academic institutions, and private companies. AMNet has added two high-elevation sites outside of continental North America in Hawaii and Taiwan because of new partnerships forged within NADP. Network sites measure concentrations of atmospheric mercury fractions using automated, continuous mercury speciation systems. The procedures that NADP developed for field operations, data management, and quality assurance ensure that the network makes scientifically valid and consistent measurements. AMNet reports concentrations of hourly gaseous elemental mercury (GEM), two-hour gaseous oxidized mercury (GOM), and two-hour particulate-bound mercury less than 2.5 microns in size (PBM2.5). As of January 2012, over 450 000 valid observations are available from 30 stations. The AMNet also collects ancillary meteorological data and information on land-use and vegetation, when available. We present atmospheric mercury data comparisons by time (3 yr) at 22 unique site locations. Highlighted are contrasting values for site locations across the network: urban versus rural, coastal versus high-elevation and the range of maximum observations. The data presented should catalyze the formation of many scientific questions that may be answered through further in-depth analysis and modeling studies of the AMNet database. All data and methods are publicly available through an online database on the NADP website (http://nadp.isws.illinois.edu/amn/). Future network directions are to foster new network partnerships and continue to collect, quality assure, and post data, including dry deposition estimates, for each fraction.
NASA Astrophysics Data System (ADS)
Gay, D. A.; Schmeltz, D.; Prestbo, E.; Olson, M.; Sharac, T.; Tordon, R.
2013-11-01
The National Atmospheric Deposition Program (NADP) developed and operates a collaborative network of atmospheric-mercury-monitoring sites based in North America - the Atmospheric Mercury Network (AMNet). The justification for the network was growing interest and demand from many scientists and policy makers for a robust database of measurements to improve model development, assess policies and programs, and improve estimates of mercury dry deposition. Many different agencies and groups support the network, including federal, state, tribal, and international governments, academic institutions, and private companies. AMNet has added two high-elevation sites outside of continental North America in Hawaii and Taiwan because of new partnerships forged within NADP. Network sites measure concentrations of atmospheric mercury fractions using automated, continuous mercury speciation systems. The procedures that NADP developed for field operations, data management, and quality assurance ensure that the network makes scientifically valid and consistent measurements. AMNet reports concentrations of hourly gaseous elemental mercury (GEM), two-hour gaseous oxidized mercury (GOM), and two-hour particulate-bound mercury less than 2.5 microns in size (PBM2.5). As of January 2012, over 450 000 valid observations are available from 30 stations. AMNet also collects ancillary meteorological data and information on land use and vegetation, when available. We present atmospheric mercury data comparisons by time (3 yr) at 21 individual sites and instruments. Highlighted are contrasting values for site locations across the network: urban versus rural, coastal versus high elevation and the range of maximum observations. The data presented should catalyze the formation of many scientific questions that may be answered through further in-depth analysis and modeling studies of the AMNet database. All data and methods are publicly available through an online database on the NADP website (http://nadp.sws.uiuc.edu/amn/). Future network directions are to foster new network partnerships and continue to collect, quality assure, and post data, including dry deposition estimates, for each fraction.
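A typical aggregation over such a database, monthly means of hourly GEM observations, can be sketched as follows; the records are fabricated illustrations, not AMNet data:

```python
# A hedged sketch of the kind of aggregation the AMNet database supports:
# averaging hourly gaseous elemental mercury (GEM) observations by month.
# The records below are fabricated for illustration, not AMNet data.
from collections import defaultdict
from statistics import mean

observations = [  # (timestamp "YYYY-MM-DD HH", GEM in ng/m^3), hypothetical
    ("2011-07-01 00", 1.42), ("2011-07-01 01", 1.38), ("2011-08-01 00", 1.51),
]
by_month = defaultdict(list)
for ts, gem in observations:
    by_month[ts[:7]].append(gem)   # group on the "YYYY-MM" prefix
for month, values in sorted(by_month.items()):
    print(month, round(mean(values), 2))
```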
Biomedical science journals in the Arab world.
Tadmouri, Ghazi O
2004-10-01
Medieval Arab scientists established the basis of medical practice and gave important attention to the publication of scientific results. At present, modern scientific publishing in the Arab world is in its developmental stage. There are fewer than 300 Arab biomedical journals, most of which are published in Egypt, Lebanon, and the Kingdom of Saudi Arabia. Yet, many of these journals do not have online access or are not indexed in major bibliographic databases. The majority of indexed journals, however, do not have a stable presence in the popular PubMed database, and their indexing has been discontinued since 2001. The exposure of Arab biomedical journals in international indices undoubtedly plays an important role in improving the scientific quality of these journals. The successful examples discussed in this review encourage us to call for the formation of a consortium of Arab biomedical journal publishers to assist in redressing the balance of the region from biomedical data consumption to data production.
[Over- or underestimated? Bibliographic survey of the biomedical periodicals published in Hungary].
Berhidi, Anna; Horváth, Katalin; Horváth, Gabriella; Vasas, Lívia
2013-06-30
This publication - based on an article published in 2006 - assesses the quality of current Hungarian biomedical periodicals. The aim of this study was to analyse how Hungarian journals meet the requirements of scientific quality and international visibility. The authors evaluated 93 Hungarian biomedical periodicals against 4 viewpoints for each of the two criteria mentioned above. 35% of the analysed journals meet the attributes of scientific quality, 5% those of international visibility, 6% fulfil all examined criteria, and 25% are indexed in international databases. The 6 Hungarian biomedical periodicals covered by all three main bibliographic databases (Medline, Scopus, Web of Science) have the best quality. The authors recommend improvements along the viewpoints of scientific quality and international visibility. The basis of qualitative adequacy is accurate author guidelines, together with the title, abstract and keywords of the articles in English, and the ability to publish on time.
2012-01-01
Background: Local and regional scientific journals are important factors in bridging gaps in health knowledge translation in low- and middle-income countries. We assessed indexing, citations and publishing standards of journals from the Eastern Mediterranean region. Methods: For journals from 22 countries in the collection of the Index Medicus for the Eastern Mediterranean Region (IMEMR), we analyzed indexing in bibliographical databases and citations during 2006–2009 to items published in 2006 in Web of Science (WoS) and SCOPUS. Adherence to editorial and publishing standards was assessed using a special checklist. Results: Out of 419 journals in IMEMR, 19 were indexed in MEDLINE, 23 in WoS and 46 in SCOPUS. Their impact factors ranged from 0.016 to 1.417. For a subset of 175 journals with available tables of contents from 2006, articles published in 2006 from 93 journals received 2068 citations in SCOPUS (23.5% self-citations) and articles in 86 journals received 1579 citations in WoS (24.3% self-citations) during 2006–2009. Citations to articles came mostly from outside of the Eastern Mediterranean region (76.8% in WoS and 75.4% in SCOPUS). Articles receiving the highest numbers of citations presented topics specific to the region. Many journals did not follow editorial and publishing standards, such as addressing requirements about patients' privacy rights (68.0% out of 244 analyzed), policy on managing conflicts of interest (66.4%), and ethical conduct in clinical and animal research (66.4%). Conclusion: Journals from the Eastern Mediterranean are visible in and have impact on the global scientific community. A coordinated effort of all stakeholders in journal publishing, including researchers, journal editors and owners, policy makers and citation databases, is needed to further promote local journals as windows to the research in the developing world and doors for valuable regional research to the global scientific community. PMID:22577965
Utrobičić, Ana; Chaudhry, Nauman; Ghaffar, Abdul; Marušić, Ana
2012-05-11
Local and regional scientific journals are important factors in bridging gaps in health knowledge translation in low- and middle-income countries. We assessed indexing, citations and publishing standards of journals from the Eastern Mediterranean region. For journals from 22 countries in the collection of the Index Medicus for the Eastern Mediterranean Region (IMEMR), we analyzed indexing in bibliographical databases and citations during 2006-2009 to items published in 2006 in Web of Science (WoS) and SCOPUS. Adherence to editorial and publishing standards was assessed using a special checklist. Out of 419 journals in IMEMR, 19 were indexed in MEDLINE, 23 in WoS and 46 in SCOPUS. Their impact factors ranged from 0.016 to 1.417. For a subset of 175 journals with available tables of contents from 2006, articles published in 2006 from 93 journals received 2068 citations in SCOPUS (23.5% self-citations) and articles in 86 journals received 1579 citations in WoS (24.3% self-citations) during 2006-2009. Citations to articles came mostly from outside of the Eastern Mediterranean region (76.8% in WoS and 75.4% in SCOPUS). Articles receiving the highest numbers of citations presented topics specific to the region. Many journals did not follow editorial and publishing standards, such as addressing requirements about patients' privacy rights (68.0% out of 244 analyzed), policy on managing conflicts of interest (66.4%), and ethical conduct in clinical and animal research (66.4%). Journals from the Eastern Mediterranean are visible in and have impact on the global scientific community. A coordinated effort of all stakeholders in journal publishing, including researchers, journal editors and owners, policy makers and citation databases, is needed to further promote local journals as windows to the research in the developing world and doors for valuable regional research to the global scientific community.
Towards a semantic web of paleoclimatology
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Eshleman, J. A.
2012-12-01
The paleoclimate record is information-rich, yet significant technical barriers currently exist before it can be used to automatically answer scientific questions. Here we make the case for a universal format to structure paleoclimate data. A simple example demonstrates the scientific utility of such a self-contained way of organizing coral data and meta-data in the Matlab language. This example is generalized to a universal ontology that may form the backbone of an open-source, open-access and crowd-sourced paleoclimate database. Its key attributes are: 1. Parsability: the format is self-contained (hence machine-readable), and would therefore enable a semantic web of paleoclimate information. 2. Universality: the format is platform-independent (readable on all computer and operating systems) and language-independent (readable in major programming languages). 3. Extensibility: the format requires a minimum set of fields to appropriately define a paleoclimate record, but allows the database to grow organically as more records are added, or, equally important, as more metadata are added to existing records. 4. Citability: the format enables the automatic citation of peer-reviewed articles as well as data citations whenever a data record is being used for analysis, making due recognition of scientific work an automatic part and foundational principle of paleoclimate data analysis. 5. Ergonomy: the format will be easy to use, update and manage. This structure is designed to enable semantic searches, and is expected to help accelerate discovery in all workflows where paleoclimate data are being used. Practical steps towards the implementation of such a system at the community level are then discussed. [Figure caption: Preliminary ontology describing relationships between the data and meta-data fields of the Nurhati et al. [2011] climate record. Several fields are viewed as instances of larger classes (ProxyClass, Site, Reference), which would allow computers to perform operations on all records within a specific class (e.g. if the measurement type is δ18O, or if the proxy class is 'Tree Ring Width', or if the resolution is less than 3 months, etc.). All records in such a database would be bound to each other by similar links, allowing machines to automatically process any form of query involving existing information. Such a design would also allow growth, by adding records and/or additional information about each record.]
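A self-contained record along the lines the abstract argues for might look like the following sketch; the field names echo the attributes listed (data plus citable metadata) but are our invention, not a published standard:

```python
# A hedged sketch of a self-contained, machine-readable paleoclimate record
# in the spirit of the proposed universal format. Field names and values
# are our invention for illustration, not a community standard.
import json

record = {
    "proxyClass": "Coral d18O",       # an instance of a larger ProxyClass
    "site": {"name": "Palmyra", "lat": 5.87, "lon": -162.13},
    "resolutionMonths": 1,
    "reference": "Nurhati et al. [2011]",  # enables automatic data citation
    "data": [[1998.04, -5.21], [1998.12, -5.35]],  # [year, value] pairs
}
print(json.dumps(record, indent=2))  # platform- and language-independent
```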
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yubin; Shankar, Mallikarjun; Park, Byung H.
Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real world graph queries such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility of users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
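As an illustration of the general idea (not the paper's 3EG algorithm), the sketch below builds a graph from normalized relational rows, after which a query such as "patients sharing a provider" becomes a neighborhood lookup; it assumes the networkx library:

```python
# An illustrative sketch, NOT the paper's 3EG transformation: turn
# normalized relational rows into a graph so that queries like "shared
# providers" become neighborhood lookups. Rows are hypothetical.
import networkx as nx

referrals = [("patientA", "dr_smith"), ("patientA", "dr_jones"),
             ("patientB", "dr_smith")]  # hypothetical rows from an RDBMS
G = nx.Graph()
G.add_edges_from(referrals)

# Patients sharing a provider with patientA: neighbors-of-neighbors.
shared = {p for dr in G.neighbors("patientA")
          for p in G.neighbors(dr)} - {"patientA"}
print(shared)  # {'patientB'}
```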
Systematically Retrieving Research: A Case Study Evaluating Seven Databases
ERIC Educational Resources Information Center
Taylor, Brian; Wylie, Emma; Dempster, Martin; Donnelly, Michael
2007-01-01
Objective: Developing the scientific underpinnings of social welfare requires effective and efficient methods of retrieving relevant items from the increasing volume of research. Method: We compared seven databases by running the nearest equivalent search on each. The search topic was chosen for relevance to social work practice with older people.…