Database Software for Social Studies. A MicroSIFT Quarterly Report.
ERIC Educational Resources Information Center
Weaver, Dave
The report describes and evaluates the use of a set of learning tools called database managers and the creation of databases to help teach problem-solving skills in social studies. Details include the design, building, and use of databases in a social studies setting, along with the advantages and disadvantages of using them. The three types of…
ERIC Educational Resources Information Center
Friedman, Debra; Hoffman, Phillip
2001-01-01
Describes creation of a relational database at the University of Washington supporting ongoing academic planning at several levels and affecting the culture of decision making. Addresses getting started; sharing the database; questions, worries, and issues; improving access to high-demand courses; the advising function; management of instructional…
Creation of Norms for the Purpose of Global Talent Management
ERIC Educational Resources Information Center
Hedricks, Cynthia A.; Robie, Chet; Harnisher, John V.
2008-01-01
Personality scores were used to construct three databases of global norms. The composition of the three databases varied according to percentage of cases by global region, occupational group, applicant status, and gender of the job candidate. Comparison of personality scores across the three norms databases revealed that the magnitude of the…
Slushie World: An In-Class Access Database Tutorial
ERIC Educational Resources Information Center
Wynn, Donald E., Jr.; Pratt, Renée M. E.
2015-01-01
The Slushie World case study is designed to teach the basics of Microsoft Access and database management over a series of three 75-minute class sessions. Students are asked to build a basic database to track sales and inventory for a small business. Skills to be learned include table creation, data entry and importing, form and report design,…
Security Management in a Multimedia System
ERIC Educational Resources Information Center
Rednic, Emanuil; Toma, Andrei
2009-01-01
In database security, the problem of providing an adequate level of security for multimedia information is becoming increasingly well known. At present, multimedia information is secured through the security of the database itself, in the same way as all classic and multimedia records. So what is the reason for the creation of a security…
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
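The NETMARK abstract describes keyword search over arbitrary XML/HTML hierarchies stored in an object-relational database. As a rough illustration of the idea (not NETMARK's actual Oracle 8i implementation), the sketch below shreds an XML tree into flat node rows in SQLite so a single query can match both tag names (context) and text (content); all table and column names are invented:

```python
# Hypothetical sketch of NETMARK-style schema-less storage: each XML node
# becomes a row keyed by a node id, so keyword search can span both
# context (tag names) and content (text). Not NETMARK's real schema.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER, tag TEXT, content TEXT)")

def shred(elem, parent=None):
    """Recursively store an element tree as flat node rows."""
    cur = conn.execute("INSERT INTO node (parent, tag, content) VALUES (?, ?, ?)",
                       (parent, elem.tag, (elem.text or "").strip()))
    for child in elem:
        shred(child, cur.lastrowid)

shred(ET.fromstring("<report><title>Flight test</title><body>rotor loads</body></report>"))

# One query matches the keyword against context (tag) and content (text) alike.
hits = conn.execute("SELECT tag, content FROM node WHERE tag LIKE ? OR content LIKE ?",
                    ("%title%", "%title%")).fetchall()
print(hits)
```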
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated, hybrid, cooperative approach that combines the best practices of the relational model, utilizing SQL queries, with the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
TRENDS: The aeronautical post-test database management system
NASA Technical Reports Server (NTRS)
Bjorkman, W. S.; Bondi, M. J.
1990-01-01
TRENDS, an engineering-test database operating system developed by NASA to support rotorcraft flight tests, is described. Capabilities and characteristics of the system are presented, with examples of its use in recalling and analyzing rotorcraft flight-test data from a TRENDS database. The importance of system user-friendliness in gaining users' acceptance is stressed, as is the importance of integrating supporting narrative data with numerical data in engineering-test databases. Considerations relevant to the creation and maintenance of flight-test databases are discussed, and TRENDS' solutions to database management problems are described. Requirements, constraints, and other considerations which led to the system's configuration are discussed, and some of the lessons learned during TRENDS' development are presented. Potential applications of TRENDS to a wide range of aeronautical and other engineering tests are identified.
PACSY, a relational database management system for protein structure and chemical shift analysis.
Lee, Woonghee; Yu, Wookyung; Kim, Suhkmann; Chang, Iksoo; Lee, Weontae; Markley, John L
2012-10-01
PACSY (Protein structure And Chemical Shift NMR spectroscopY) is a relational database management system that integrates information from the Protein Data Bank, the Biological Magnetic Resonance Data Bank, and the Structural Classification of Proteins database. PACSY provides three-dimensional coordinates and chemical shifts of atoms along with derived information such as torsion angles, solvent accessible surface areas, and hydrophobicity scales. PACSY consists of six relational table types linked to one another for coherence by key identification numbers. Database queries are enabled by advanced search functions supported by an RDBMS server such as MySQL or PostgreSQL. PACSY enables users to search for combinations of information from different database sources in support of their research. Two software packages, PACSY Maker for database creation and PACSY Analyzer for database analysis, are available from http://pacsy.nmrfam.wisc.edu.
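Since the PACSY entry above explains that six relational table types are linked by key identification numbers and queried through an RDBMS server, a toy example of that style of cross-source query may help. The tables, columns, and values below are hypothetical stand-ins for PACSY's real schema, with SQLite standing in for the MySQL/PostgreSQL server:

```python
# Minimal sketch of a PACSY-style combined query: a coordinate table and a
# chemical-shift table joined on shared key ids. Schema is illustrative only.
import sqlite3  # stand-in for a MySQL/PostgreSQL RDBMS server

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE coord (protein_id TEXT, atom TEXT, x REAL, y REAL, z REAL);
CREATE TABLE shift (protein_id TEXT, atom TEXT, cs REAL);
INSERT INTO coord VALUES ('1ABC', 'CA', 1.0, 2.0, 3.0);
INSERT INTO shift VALUES ('1ABC', 'CA', 56.2);
""")

# Combine structural and NMR information in a single query, the core idea
# behind linking the table types by key identification numbers.
rows = conn.execute("""
    SELECT c.protein_id, c.atom, c.x, c.y, c.z, s.cs
    FROM coord c JOIN shift s
      ON c.protein_id = s.protein_id AND c.atom = s.atom
""").fetchall()
print(rows)
```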
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
2013-04-15
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
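The BioSemantic abstract centers on SPARQL queries issued against RDF views of relational databases. A minimal sketch of consuming such a Semantic Web Service is shown below; the endpoint URL and vocabulary are hypothetical, and while the SPARQLWrapper calls are the library's standard API, nothing here reflects BioSemantic's actual generated queries:

```python
# Sketch of querying a BioSemantic-style RDF view of a relational plant
# database. Endpoint and predicate names are invented for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://example.org/biosemantic/sparql")  # assumed endpoint
sparql.setQuery("""
    PREFIX ex: <http://example.org/genomics#>
    SELECT ?gene ?chromosome
    WHERE { ?gene ex:locatedOn ?chromosome }
    LIMIT 10
""")
sparql.setReturnFormat(JSON)

# Each binding row maps variable names to {"value": ...} dicts.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["gene"]["value"], row["chromosome"]["value"])
```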
Expert systems identify fossils and manage large paleontological databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beightol, D.S.; Conrad, M.A.
EXPAL is a computer program permitting creation and maintenance of comprehensive databases in marine paleontology. It is designed to assist specialists and non-specialists. EXPAL includes a powerful expert system based on the morphological descriptors specific to a given group of fossils. The expert system may be used, for example, to describe and automatically identify an unknown specimen. EXPAL was first applied to Dasycladales (calcareous green algae). Projects are under way for corresponding expert systems and databases on planktonic foraminifers and calpionellids. EXPAL runs on an IBM XT or compatible microcomputer.
2017-09-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis: Database Creation and Statistical Analysis: Finding Connections Between Two or More Secondary… Approved for public release; distribution is unlimited. Introductory sections: 1.1 Problem and Motivation; 1.2 DOD Applicability; 1.3 Research…
Towards G2G: Systems of Technology Database Systems
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David
2005-01-01
We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams, and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing, and modifying components of technology database systems. G2G will interoperate information and knowledge that is distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include the creation of an integrated approach to sustain effective management of technology investments that supports the ability of various technology database systems to be independently managed. The integration technology will comply with emerging open standards. Applications can thus be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will use NASA's breakthrough in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database systems.
Converting the H. W. Wilson Company Indexes to an Automated System: A Functional Analysis.
ERIC Educational Resources Information Center
Regazzi, John J.
1984-01-01
Description of the computerized information system that supports the editorial and manufacturing processes involved in creation of Wilson's subject indexes and catalogs includes the major subsystems--online data entry, batch input processing, validation and release, file generation and database management, online and offline retrieval, publication…
Knowledge Creation through User-Guided Data Mining: A Database Case
ERIC Educational Resources Information Center
Steiger, David M.
2008-01-01
This case focuses on learning by applying the four integrating mechanisms of Nonaka's knowledge creation theory: socialization, externalization, combination and internalization. In general, such knowledge creation and internalization (i.e., learning) is critical to database students since they will be expected to apply their specialized database…
New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
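The ensemble-creation abstract's key design point is that grouping rules are stored as data rather than code, so rules can change without a software update. A small, hypothetical sketch of that table-driven pattern follows (field and rule names are invented, and plain Python lists stand in for the Informix tables and stored procedures):

```python
# Illustrative table-driven ensemble creation: rules live in data, so they
# can be inserted, updated, or deleted without modifying this code.
from itertools import groupby

rules = [  # one row per ensemble type: which metadata keys define a group
    {"ensemble_type": "dark_cal", "group_by": ("channel", "exptime")},
]

images = [  # stand-in for pipeline input image metadata rows
    {"id": 1, "channel": 1, "exptime": 30.0},
    {"id": 2, "channel": 1, "exptime": 30.0},
    {"id": 3, "channel": 2, "exptime": 30.0},
]

for rule in rules:
    keyfunc = lambda im: tuple(im[k] for k in rule["group_by"])
    # Images sharing the rule's key fields form one ensemble.
    for key, members in groupby(sorted(images, key=keyfunc), key=keyfunc):
        print(rule["ensemble_type"], key, [im["id"] for im in members])
```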
MAKER-P: a tool-kit for the creation, management, and quality control of plant genome annotations
USDA-ARS's Scientific Manuscript database
We have optimized and extended the widely used annotation-engine MAKER for use on plant genomes. We have benchmarked the resulting software, MAKER-P, using the A. thaliana genome and the TAIR10 gene models. Here we demonstrate the ability of the MAKER-P toolkit to generate de novo repeat databases, ...
The purpose of this SOP is to describe the flow and custody of laboratory data generated by NHEXAS Arizona through data processing and delivery to the project data manager for creation of the master database. This procedure was followed to ensure consistent data retrieval during...
10 CFR 719.44 - What categories of costs require advance approval?
Code of Federal Regulations, 2014 CFR
2014-01-01
... application software, or non-routine computerized databases, if they are specifically created for a particular matter. For costs associated with the creation and use of computerized databases, contractors and retained legal counsel must ensure that the creation and use of computerized databases is necessary and...
Palaeo sea-level and ice-sheet databases: problems, strategies and perspectives
NASA Astrophysics Data System (ADS)
Rovere, Alessio; Düsterhus, André; Carlson, Anders; Barlow, Natasha; Bradwell, Tom; Dutton, Andrea; Gehrels, Roland; Hibbert, Fiona; Hijma, Marc; Horton, Benjamin; Klemann, Volker; Kopp, Robert; Sivan, Dorit; Tarasov, Lev; Törnqvist, Torbjorn
2016-04-01
Databases of palaeoclimate data have driven many major developments in understanding the Earth system. The measurement and interpretation of palaeo sea-level and ice-sheet data that form such databases pose considerable challenges to the scientific communities that use them for further analyses. In this paper, we build on the experience of the PALSEA (PALeo constraints on SEA level rise) community, a working group within the PAGES (Past Global Changes) project, to describe the challenges and best strategies that can be adopted to build a self-consistent and standardised database of geological and geochemical data related to palaeo sea levels and ice sheets. Our aim in this paper is to identify key points that need attention and subsequent funding when undertaking the task of database creation. We conclude that any sea-level or ice-sheet database must be divided into three instances: i) measurement; ii) interpretation; iii) database creation. Measurement should include position, age, description of geological features, and quantification of uncertainties. All must be described as objectively as possible. Interpretation can be subjective, but it should always include uncertainties and all the possible interpretations, without unjustified a priori exclusions. We propose that, in the creation of a database, an approach based on Accessibility, Transparency, Trust, Availability, Continued updating, Completeness and Communication of content (ATTAC3) must be adopted. It is also essential to consider the community structure that creates and benefits from a database. We conclude that funding sources should consider addressing not only the creation of original data in specific research-question-oriented projects, but also the possibility of using part of the funding for IT-related and database creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.
The purpose of this SOP is to describe the flow and custody of laboratory data generated by the Arizona Border Study through data processing and delivery to the project data manager for creation of the master database. This procedure was followed to ensure consistent data retrie...
Palaeo-sea-level and palaeo-ice-sheet databases: problems, strategies, and perspectives
NASA Astrophysics Data System (ADS)
Düsterhus, André; Rovere, Alessio; Carlson, Anders E.; Horton, Benjamin P.; Klemann, Volker; Tarasov, Lev; Barlow, Natasha L. M.; Bradwell, Tom; Clark, Jorie; Dutton, Andrea; Gehrels, W. Roland; Hibbert, Fiona D.; Hijma, Marc P.; Khan, Nicole; Kopp, Robert E.; Sivan, Dorit; Törnqvist, Torbjörn E.
2016-04-01
Sea-level and ice-sheet databases have driven numerous advances in understanding the Earth system. We describe the challenges and offer best strategies that can be adopted to build self-consistent and standardised databases of geological and geochemical information used to archive palaeo-sea-levels and palaeo-ice-sheets. There are three phases in the development of a database: (i) measurement, (ii) interpretation, and (iii) database creation. Measurement should include the objective description of the position and age of a sample, description of associated geological features, and quantification of uncertainties. Interpretation of the sample may have a subjective component, but it should always include uncertainties and alternative or contrasting interpretations, with any exclusion of existing interpretations requiring a full justification. During the creation of a database, an approach based on accessibility, transparency, trust, availability, continuity, completeness, and communication of content (ATTAC3) must be adopted. It is essential to consider the community that creates and benefits from a database. We conclude that funding agencies should not only consider the creation of original data in specific research-question-oriented projects, but also include the possibility of using part of the funding for IT-related and database creation tasks, which are essential to guarantee accessibility and maintenance of the collected data.
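The three-phase structure the authors describe (measurement, interpretation, database creation) suggests a record layout in which objective observations and subjective interpretations are kept apart, each carrying explicit uncertainties. A minimal sketch under that assumption, with invented field names:

```python
# Hypothetical record layout separating objective measurement from
# subjective interpretation, as the abstract recommends. Field names are
# illustrative, not from any published database schema.
from dataclasses import dataclass, field

@dataclass
class Measurement:
    latitude: float
    longitude: float
    elevation_m: float
    elevation_err_m: float   # quantified uncertainty, required at this phase
    age_ka: float
    age_err_ka: float
    description: str         # objective description of geological features

@dataclass
class Interpretation:
    indicative_meaning: str  # e.g. "mean high water"
    rsl_m: float             # inferred relative sea level
    rsl_err_m: float
    justification: str       # exclusions of alternatives need full justification

@dataclass
class SampleRecord:
    measurement: Measurement
    # Keep alternative or contrasting interpretations side by side.
    interpretations: list[Interpretation] = field(default_factory=list)
```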
Issues in Real-Time Data Management.
1991-07-01
Multiversion concurrency control [5] interprets write operations as the creation of new versions of the items (in contrast to the update-in… features of optimistic (deferred writing, delayed selection of serialization order) and multiversion concurrency control. They do not present any… "Multiversion Concurrency Control - Theory and Algorithms". ACM Transactions on Database Systems 8, 4 (December 1983), 465-484. 6. Buchman, A. P…
NASA Astrophysics Data System (ADS)
Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.
2009-12-01
Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: 1) a Drupal6-powered front-end; 2) a standard SQL port (used to provide a Mapserver interface to the metadata and data); and 3) a SPARQL port (feeding a new faceted search capability being developed). Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public-domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.
NASA Interactive Forms Type Interface - NIFTI
NASA Technical Reports Server (NTRS)
Jain, Bobby; Morris, Bill
2005-01-01
A flexible database query, update, modify, and delete tool was developed that provides an easy interface to Oracle forms. This tool, the NASA interactive forms type interface, or NIFTI, features on-the-fly forms creation, forms sharing among users, the capability to query the database from user-entered criteria on forms, traversal of query results, an ability to generate tab-delimited reports, viewing and downloading of reports to the user's workstation, and a hypertext-based help system. NIFTI is a very powerful ad hoc query tool that was developed using C++ and X Windows with a Motif application framework. A unique tool, NIFTI's capabilities appear in no other known commercial-off-the-shelf (COTS) tool, because NIFTI, which can be launched from the user's desktop, is a simple yet very powerful tool with a highly intuitive, easy-to-use graphical user interface (GUI) that will expedite the creation of database query/update forms. NIFTI, therefore, can be used in NASA's International Space Station (ISS) as well as within government and industry, indeed by all users of the widely disseminated Oracle base. And it will provide significant cost savings in the areas of user training and scalability while advancing the art over current COTS browsers. No COTS browser performs all the functions NIFTI does, and NIFTI is easier to use. NIFTI's cost savings are very significant considering the very large database with which it is used and the large user community with varying data requirements it will support. Its ease of use means that personnel unfamiliar with databases (e.g., managers, supervisors, clerks, and others) can develop their own personal reports. For NASA, a tool such as NIFTI was needed to query, update, modify, and make deletions within the ISS vehicle master database (VMDB), a repository of engineering data that includes an indentured parts list and associated resource data (power, thermal, volume, weight, and the like). Since the VMDB is used both as a collection point for data and as a common repository for engineering, integration, and operations teams, a tool such as NIFTI had to be designed that could expedite the creation of database query/update forms which could then be shared among users.
PhamDB: a web-based application for building Phamerator databases.
Lamine, James G; DeJong, Randall J; Nelesen, Serita M
2016-07-01
PhamDB is a web application which creates databases of bacteriophage genes, grouped by gene similarity. It is backwards compatible with the existing Phamerator desktop software while providing an improved database creation workflow. Key features include a graphical user interface, validation of uploaded GenBank files, and abilities to import phages from existing databases, modify existing databases, and queue multiple jobs. Source code and installation instructions for Linux, Windows, and Mac OSX are freely available at https://github.com/jglamine/phage. PhamDB is also distributed as a docker image which can be managed via Kitematic. This docker image contains the application and all third-party software dependencies as a pre-configured system, and is freely available via the installation instructions provided. Contact: snelesen@calvin.edu.
Integration of Web-based and PC-based clinical research databases.
Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M
2004-01-01
We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.
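The GRIL abstract's integration step (mapping each source schema onto a common semantic model, with table- and attribute-level conflicts made explicit through a mapping schema) can be illustrated with a small sketch; all system, column, and concept names below are hypothetical:

```python
# Sketch of schema integration via a common semantic model: each source
# system declares how its column names map onto shared concepts, so name
# conflicts are explicit and resolvable. All names are illustrative.
COMMON_MODEL = {"instrument_name", "keyword", "domain"}

mappings = {
    "system_a": {"instr_title": "instrument_name", "kw": "keyword", "area": "domain"},
    "system_b": {"name": "instrument_name", "keywords": "keyword", "applicability": "domain"},
}

def to_common(source: str, record: dict) -> dict:
    """Translate a source-specific record into the common semantic model."""
    m = mappings[source]
    return {m[k]: v for k, v in record.items() if k in m}

print(to_common("system_a", {"instr_title": "MMSE", "kw": "cognition", "area": "geriatrics"}))
```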
Towards evidence-based management: creating an informative database of nursing-sensitive indicators.
Patrician, Patricia A; Loan, Lori; McCarthy, Mary; Brosch, Laura R; Davey, Kimberly S
2010-12-01
The purpose of this paper is to describe the creation, evolution, and implementation of a database of nursing-sensitive and potentially nursing-sensitive indicators, the Military Nursing Outcomes Database (MilNOD). It discusses data quality, utility, and lessons learned. Prospective data collected each shift include direct staff hours by levels (i.e., registered nurse, other licensed and unlicensed providers), staff categories (i.e., military, civilian, contract, and reservist), patient census, acuity, and admissions, discharges, and transfers. Retrospective adverse event data (falls, medication errors, and needle-stick injuries) were collected from existing records. Annual patient satisfaction, nurse work environment, and pressure ulcer and restraint prevalence surveys were conducted. The MilNOD contains shift level data from 56 units in 13 military hospitals and is used to target areas for managerial and clinical performance improvement. This methodology can be modified for use in other healthcare systems. As standard tools for evidence-based management, databases such as MilNOD allow nurse leaders to track the status of nursing and adverse events in their facilities.
Database technology and the management of multimedia data in the Mirror project
NASA Astrophysics Data System (ADS)
de Vries, Arjen P.; Blanken, H. M.
1998-10-01
Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representations of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participation through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization.
NASA Astrophysics Data System (ADS)
Ryżyński, Grzegorz; Nałęcz, Tomasz
2016-10-01
The efficient management of geological data in Poland is necessary to support multilevel decision processes of government and local authorities in spatial planning, mineral resources and groundwater supply, and the rational use of the subsurface. The vast amount of geological information gathered in the digital archives and databases of the Polish Geological Survey (PGS) is a basic resource for multi-scale national subsurface management. Data integration is the key factor in allowing the development of GIS and web tools for decision makers; however, the main barrier to efficient geological information management is the heterogeneity of data in the resources of the Polish Geological Survey. The engineering-geological database is the first PGS thematic domain addressed in the overall data integration plan. The solutions developed within this area will facilitate the creation of procedures and standards for multilevel data management in PGS. Twenty years of experience in delivering digital engineering-geological mapping at 1:10 000 scale, and in the acquisition and digitisation of archival geotechnical reports, allowed the gathering of a database of more than 300 thousand engineering-geological boreholes as well as a set of 10 thematic spatial layers (including a foundation conditions map, depth to the first groundwater level, bedrock level, and geohazards). Historically, the desktop approach was the source form of geological-engineering data storage, resulting in multiple non-correlated interbase datasets. The need to create a domain data model emerged, and an object-oriented modelling (UML) scheme has been developed. The aim of this development was to merge all datasets on one centralised Oracle server and prepare a unified spatial data structure for efficient web presentation and application development. The presented approach will be a milestone toward the creation of a Polish national standard for engineering-geological information management. The paper presents the approach and methodology of data unification, the harmonisation of thematic vocabularies, the assumptions and results of data modelling, and the process of integrating the domain model with the enterprise architecture implemented in PGS. Currently, there is no geological data standard in Poland. The lack of guidelines for borehole and spatial data management results in increasing data dispersion as well as a growing barrier to multilevel data management and the implementation of efficient decision-support tools. Building the national geological data standard makes geotechnical information accessible to multiple institutions, universities, administrative bodies, and research organisations, and gathers their data in the same unified digital form according to the presented data model. Such an approach is compliant with current digital trends and the idea of Spatial Data Infrastructure. Efficient geological data management is essential to support sustainable development and economic growth, as it allows the use of geological information to assist the idea of Smart Cities, deliver information for Building Information Modelling (BIM), and support modern spatial planning. The engineering-geological domain data model presented in the paper is a scalable solution. Future implementation of the developed procedures on other domains of PGS geological data is possible.
TEQUEL: The query language of SADDLE
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1984-01-01
A relational database management system is presented that is tailored for engineering applications. A wide variety of engineering data types are supported, and the data definition language (DDL) and data manipulation language (DML) are extended to handle matrices. The system can be used either in standalone mode or through a FORTRAN or PASCAL application program. The query language is of the relational calculus type and allows the user to store, retrieve, update, and delete tuples from relations. The relational operations, including union, intersect, and differ, facilitate the creation of temporary relations that can be used to manipulate information in a powerful manner. Sample applications are shown to illustrate the creation of data through a FORTRAN program and data manipulation using the TEQUEL DML.
Alekseyenko, Alexander V.; Kim, Namshin; Lee, Christopher J.
2007-01-01
Association of alternative splicing (AS) with accelerated rates of exon evolution in some organisms has recently aroused widespread interest in its role in the evolution of eukaryotic gene structure. Previous studies were limited to analysis of exon creation or loss events in mouse and/or human only. Our multigenome approach provides a way for (1) distinguishing creation and loss events on the large scale; (2) uncovering details of the evolutionary mechanisms involved; (3) estimating the corresponding rates over a wide range of evolutionary times and organisms; and (4) assessing the impact of AS on those evolutionary rates. We use previously unpublished independent analyses of alternative splicing in five species (human, mouse, dog, cow, and zebrafish) from the ASAP database combined with genomewide multiple alignment of 17 genomes to analyze exon creation and loss of both constitutively and alternatively spliced exons in mammals, fish, and birds. Our analysis provides a comprehensive database of exon creation and loss events over 360 million years of vertebrate evolution, including tens of thousands of alternative and constitutive exons. We find that exon inclusion level is inversely related to the rate of exon creation. In addition, we provide a detailed in-depth analysis of mechanisms of exon creation and loss, which suggests that a large fraction of nonrepetitive created exons are results of ab initio creation from purely intronic sequences. Our data indicate an important role for alternative splicing in creation of new exons and provide a useful novel database resource for future genome evolution research. PMID:17369312
Amick, G D
1999-01-01
A database containing the names of mass spectral data files generated in a forensic toxicology laboratory, and two Microsoft Visual Basic programs to maintain and search this database, are described. The data files (approximately 0.5 KB each) were collected from six mass spectrometers during routine casework. Data files were archived on 650 MB (74 min) recordable CD-ROMs. Each recordable CD-ROM was given a unique name, and its list of data file names was placed into the database. The present manuscript describes the use of the search and maintenance programs for searching and routine upkeep of the database and the creation of CD-ROMs for archiving of data files.
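The archive-search idea in this abstract (a database mapping each archived data-file name to the uniquely named CD-ROM that holds it) is straightforward to sketch. The original programs were Visual Basic, so the Python below is only an illustration, with invented file and disc names:

```python
# Hedged re-creation of the search idea: a table maps data-file names to
# uniquely named CD-ROMs, so a case file can be located without mounting
# discs. Names and layout are assumptions, not the original database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE archive (filename TEXT, cdrom TEXT)")
conn.executemany("INSERT INTO archive VALUES (?, ?)",
                 [("case1234.ms", "TOX_CD_017"), ("case1235.ms", "TOX_CD_017")])

def locate(name_fragment: str):
    """Return (filename, cdrom) pairs matching a partial data-file name."""
    return conn.execute("SELECT filename, cdrom FROM archive WHERE filename LIKE ?",
                        (f"%{name_fragment}%",)).fetchall()

print(locate("1234"))
```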
Knowledge management in a waste based biorefinery in the QbD paradigm.
Rathore, Anurag S; Chopda, Viki R; Gomes, James
2016-09-01
Shifting the resource base from fossil feedstock to renewable raw materials for the production of chemical products has opened up an area of novel applications of industrial biotechnology-based process tools. This review aims to provide a concise and focused discussion on recent advances in knowledge management to facilitate efficient and optimal operation of a biorefinery. Application of quality by design (QbD) and process analytical technology (PAT) as tools for knowledge creation and management at different levels has been highlighted. The role of process integration, government policies, knowledge exchange through collaboration, and the use of databases and computational tools has also been touched upon.
Comparison of hospital databases on antibiotic consumption in France, for a single management tool.
Henard, S; Boussat, S; Demoré, B; Clément, S; Lecompte, T; May, T; Rabaud, C
2014-07-01
The surveillance of antibiotic use in hospitals and of data on resistance is an essential measure for antibiotic stewardship. There are 3 national systems in France to collect data on antibiotic use: DREES, ICATB, and ATB RAISIN. We compared these databases and drafted recommendations for the creation of an optimized database of information on antibiotic use, available to all concerned personnel: healthcare authorities, healthcare facilities, and healthcare professionals. We processed and analyzed the 3 databases (2008 data), and surveyed users. The qualitative analysis demonstrated major discrepancies in terms of objectives, healthcare facilities, participation rate, units of consumption, conditions for collection, consolidation, and control of data, and delay before availability of results. The quantitative analysis revealed that the consumption data for a given healthcare facility differed from one database to another, challenging the reliability of data collection. We specified user expectations: to compare consumption and resistance data, to carry out benchmarking, to obtain data on the prescribing habits in healthcare units, or to help understand results. The study results demonstrated the need for a reliable, single, and automated tool to manage data on antibiotic consumption compared with resistance data on several levels (national, regional, healthcare facility, healthcare units), providing rapid local feedback and educational benchmarking.
Li, Qing-na; Huang, Xiu-ling; Gao, Rui; Lu, Fang
2012-08-01
Data management has a significant impact on the quality control of clinical studies. Every clinical study should have a data management plan to provide overall work instructions and ensure that all of these tasks are completed according to the Good Clinical Data Management Practice (GCDMP). Meanwhile, the data management plan (DMP) is an auditable document requested by regulatory inspectors and must be written in a manner that is realistic and of high quality. The significance of the DMP, the minimum standards and best practices provided by GCDMP, the main contents of a DMP based on electronic data capture (EDC), and some key factors of the DMP influencing the quality of a clinical study are elaborated in this paper. Specifically, the DMP generally consists of 15 parts, namely: the approval page; the protocol summary; roles and training; timelines; database design, creation, maintenance, and security; data entry; data validation; quality control and quality assurance; the management of external data; serious adverse event data reconciliation; coding; database lock; data management reports; the communication plan; and abbreviated terms. Among them, the following three parts are regarded as the key factors: designing a standardized database for the clinical study, entering data in a timely fashion, and cleansing data efficiently. In the last part of this article, the authors also analyze the problems in clinical research on traditional Chinese medicine using the EDC system and put forward some suggestions for improvement.
Tripal: a construction toolkit for online genome databases.
Ficklin, Stephen P; Sanderson, Lacey-Anne; Cheng, Chun-Huai; Staton, Margaret E; Lee, Taein; Cho, Il-Hyung; Jung, Sook; Bett, Kirstin E; Main, Doreen
2011-01-01
As the availability, affordability and magnitude of genomics and genetics research increases so does the need to provide online access to resulting data and analyses. Availability of a tailored online database is the desire for many investigators or research communities; however, managing the Information Technology infrastructure needed to create such a database can be an undesired distraction from primary research or potentially cost prohibitive. Tripal provides simplified site development by merging the power of Drupal, a popular web Content Management System with that of Chado, a community-derived database schema for storage of genomic, genetic and other related biological data. Tripal provides an interface that extends the content management features of Drupal to the data housed in Chado. Furthermore, Tripal provides a web-based Chado installer, genomic data loaders, web-based editing of data for organisms, genomic features, biological libraries, controlled vocabularies and stock collections. Also available are Tripal extensions that support loading and visualizations of NCBI BLAST, InterPro, Kyoto Encyclopedia of Genes and Genomes and Gene Ontology analyses, as well as an extension that provides integration of Tripal with GBrowse, a popular GMOD tool. An Application Programming Interface is available to allow creation of custom extensions by site developers, and the look-and-feel of the site is completely customizable through Drupal-based PHP template files. Addition of non-biological content and user-management is afforded through Drupal. Tripal is an open source and freely available software package found at http://tripal.sourceforge.net.
The Golosiiv on-line plate archive database, management and maintenance
NASA Astrophysics Data System (ADS)
Pakuliak, L.; Sergeeva, T.
2007-08-01
We intend to create an online version of the database of the MAO NASU plate archive as VO-compatible structures, in accordance with principles developed by the International Virtual Observatory Alliance, in order to make it available to the world astronomical community. The online version of the log-book database is constructed by means of MySQL+PHP. The data management system provides a user interface, supports detailed traditional form-filling radial search of plates and the retrieval of auxiliary samplings, lists each collection, and permits browsing of the detailed descriptions of collections. The administrative tool allows the database administrator to correct data, to enhance the database with new data sets, and to control the integrity and consistency of the database as a whole. The VO-compatible database is currently being constructed according to the demands and principles of international data archives, and it has to be strongly generalized in order to allow data mining by means of standard interfaces and to best fit the demands of the WFPDB Group for databases of plate catalogues. Ongoing enhancements of the database toward the WFPDB bring the problem of data verification to the forefront, as it demands a high degree of data reliability. The process of data verification is practically endless and inseparable from data management, owing to the diverse nature of data errors, and hence the variety of ways of identifying and fixing them. The current status of the MAO NASU glass archive forces activity in both directions simultaneously: the enhancement of the log-book database with new sets of observational data, as well as the creation of the generalized database and cross-identification between the two. The VO-compatible version of the database is supplied with digitized data of plates obtained with a MicroTek ScanMaker 9800 XL TMA. The scanning procedure is not total but is conducted selectively in the frames of special projects.
Learning from the Mars Rover Mission: Scientific Discovery, Learning and Memory
NASA Technical Reports Server (NTRS)
Linde, Charlotte
2005-01-01
Purpose: Knowledge management for space exploration is part of a multi-generational effort. Each mission builds on knowledge from prior missions, and learning is the first step in knowledge production. This paper uses the Mars Exploration Rover mission as a site to explore this process. Approach: Observational study and analysis of the work of the MER science and engineering team during rover operations, to investigate how learning occurs, how it is recorded, and how these representations might be made available for subsequent missions. Findings: Learning occurred in many areas: planning science strategy; using instruments within the constraints of the martian environment, the Deep Space Network, and the mission requirements; using software tools effectively; and running two teams on Mars time for three months. This learning is preserved in many ways. Primarily it resides in individuals' memories. It is also encoded in stories, procedures, programming sequences, published reports, and lessons-learned databases. Research implications: Shows the earliest stages of knowledge creation in a scientific mission, and demonstrates that knowledge management must begin with an understanding of knowledge creation. Practical implications: Shows that studying learning and knowledge creation suggests proactive ways to capture and use knowledge across multiple missions and generations. Value: This paper provides a unique analysis of the learning process of a scientific space mission, relevant for knowledge management researchers and designers, as well as demonstrating in detail how new learning occurs in a learning organization.
Configuring the Orion Guidance, Navigation, and Control Flight Software for Automated Sequencing
NASA Technical Reports Server (NTRS)
Odegard, Ryan G.; Siliwinski, Tomasz K.; King, Ellis T.; Hart, Jeremy J.
2010-01-01
The Orion Crew Exploration Vehicle is being designed with greater automation capabilities than any other crewed spacecraft in NASA's history. The Guidance, Navigation, and Control (GN&C) flight software architecture is designed to provide a flexible and evolvable framework that accommodates increasing levels of automation over time. Within the GN&C flight software, a data-driven approach is used to configure software. This approach allows data reconfiguration and updates to automated sequences without requiring recompilation of the software. Because of the great dependency of the automation and the flight software on the configuration data, the data management is a vital component of the processes for software certification, mission design, and flight operations. To enable the automated sequencing and data configuration of the GN&C subsystem on Orion, a desktop database configuration tool has been developed. The database tool allows the specification of the GN&C activity sequences, the automated transitions in the software, and the corresponding parameter reconfigurations. These aspects of the GN&C automation on Orion are all coordinated via data management, and the database tool provides the ability to test the automation capabilities during the development of the GN&C software. In addition to providing the infrastructure to manage the GN&C automation, the database tool has been designed with capabilities to import and export artifacts for simulation analysis and documentation purposes. Furthermore, the database configuration tool, currently used to manage simulation data, is envisioned to evolve into a mission planning tool for generating and testing GN&C software sequences and configurations. A key enabler of the GN&C automation design, the database tool allows both the creation and maintenance of the data artifacts, as well as serving the critical role of helping to manage, visualize, and understand the data-driven parameters both during software development and throughout the life of the Orion project.
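The core idea of the Orion abstract is that automated GN&C sequences and their parameter reconfigurations live in configuration data, editable without recompiling flight software. A highly simplified, hypothetical sketch of such a data-driven sequence follows (activity names and parameters are invented, and this in no way reflects the actual Orion tool or flight software):

```python
# Sketch of the data-driven idea: an automated sequence expressed as
# configuration data, so transitions and parameters can be updated without
# recompiling code. All names and values are illustrative.
import json

sequence = json.loads("""
[
  {"activity": "coast",  "next": "burn",   "params": {"attitude_deg": [0, 90, 0]}},
  {"activity": "burn",   "next": "coast2", "params": {"delta_v_mps": 12.5}},
  {"activity": "coast2", "next": null,     "params": {}}
]
""")

def run(seq):
    """Walk the configured activity chain, applying each step's parameters."""
    by_name = {s["activity"]: s for s in seq}
    step = seq[0]
    while step:
        print("activating", step["activity"], "with", step["params"])
        step = by_name.get(step["next"])  # next is None at end of sequence

run(sequence)
```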
Karp, Peter D; Paley, Suzanne; Romero, Pedro
2002-01-01
Bioinformatics requires reusable software tools for creating model-organism databases (MODs). The Pathway Tools is a reusable, production-quality software environment for creating a type of MOD called a Pathway/Genome Database (PGDB). A PGDB such as EcoCyc (see http://ecocyc.org) integrates our evolving understanding of the genes, proteins, metabolic network, and genetic network of an organism. This paper provides an overview of the four main components of the Pathway Tools: The PathoLogic component supports creation of new PGDBs from the annotated genome of an organism. The Pathway/Genome Navigator provides query, visualization, and Web-publishing services for PGDBs. The Pathway/Genome Editors support interactive updating of PGDBs. The Pathway Tools ontology defines the schema of PGDBs. The Pathway Tools makes use of the Ocelot object database system for data management services for PGDBs. The Pathway Tools has been used to build PGDBs for 13 organisms within SRI and by external users.
Creation of the First French Database in Primary Care Using the ICPC2: Feasibility Study.
Lacroix-Hugues, V; Darmon, D; Pradier, C; Staccini, P
2017-01-01
The objective of our study was to assess the feasibility of gathering data stored in primary care Electronic Health Records (EHRs) in order to create a research database (PRIMEGE PACA project). The EHR models of the software of two office patient-data management systems were analyzed; anonymized data were extracted and imported into a MySQL database. An ETL procedure to code free text into ICPC2 codes was implemented. Eleven general practitioners (GPs) were enrolled as "data producers," and data were extracted from 2012 to 2015. In this paper, we explain the ways to make this process feasible and illustrate its utility for estimating epidemiological indicators and for professional practice assessments. Other software is currently being analyzed for integration and expansion of this panel of GPs. This experimentation is recognized as a robust framework and is considered to be the technical foundation of the first regional observatory of primary care data.
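The PRIMEGE abstract mentions an ETL step that assigns ICPC2 codes to free text. A minimal sketch of one plausible approach (simple term lookup) is below; the mapping table is a tiny illustrative stand-in, not the project's actual procedure, although K86 and T90 are real ICPC-2 codes:

```python
# Hypothetical sketch of the ETL step that codes free-text clinical notes
# into ICPC2. The lookup table is illustrative; a real procedure would need
# normalization, synonym handling, and review by clinicians.
ICPC2_LOOKUP = {
    "hypertension": "K86",  # ICPC-2: hypertension, uncomplicated
    "diabetes": "T90",      # ICPC-2: diabetes, non-insulin-dependent
}

def code_text(free_text: str) -> list[str]:
    """Return ICPC2 codes whose trigger terms appear in the note."""
    text = free_text.lower()
    return [code for term, code in ICPC2_LOOKUP.items() if term in text]

print(code_text("Follow-up for hypertension and type 2 diabetes"))
```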
Thermodynamic properties for arsenic minerals and aqueous species
Nordstrom, D. Kirk; Majzlan, Juraj; Königsberger, Erich; Bowell, Robert J.; Alpers, Charles N.; Jamieson, Heather E.; Nordstrom, D. Kirk; Majzlan, Juraj
2014-01-01
Quantitative geochemical calculations are not possible without thermodynamic databases and considerable advances in the quantity and quality of these databases have been made since the early days of Lewis and Randall (1923), Latimer (1952), and Rossini et al. (1952). Oelkers et al. (2009) wrote, “The creation of thermodynamic databases may be one of the greatest advances in the field of geochemistry of the last century.” Thermodynamic data have been used for basic research needs and for a countless variety of applications in hazardous waste management and policy making (Zhu and Anderson 2002; Nordstrom and Archer 2003; Bethke 2008; Oelkers and Schott 2009). The challenge today is to evaluate thermodynamic data for internal consistency, to reach a better consensus of the most reliable properties, to determine the degree of certainty needed for geochemical modeling, and to agree on priorities for further measurements and evaluations.
CEO Sites Mission Management System (SMMS)
NASA Technical Reports Server (NTRS)
Trenchard, Mike
2014-01-01
Late in fiscal year 2011, the Crew Earth Observations (CEO) team was tasked to upgrade its science site database management tool, which at the time was integrated with the Automated Mission Planning System (AMPS) originally developed for Earth Observations mission planning in the 1980s. Although AMPS had been adapted and was reliably used by CEO for International Space Station (ISS) payload operations support, the database structure was dated, and the compiler required for modifications would not be supported in the Windows 7 64-bit operating system scheduled for implementation the following year. The Sites Mission Management System (SMMS) is now the tool used by CEO to manage a heritage Structured Query Language (SQL) database of more than 2,000 records for Earth science sites. SMMS is a carefully designed and crafted in-house software package with complete and detailed help files available for the user and meticulous internal documentation for future modifications. It was delivered in February 2012 for test and evaluation. Following acceptance, it was implemented for CEO mission operations support in April 2012. The database spans the period from the earliest systematic requests for astronaut photography during the shuttle era to current ISS mission support of the CEO science payload. Besides logging basic image information (site names, locations, broad application categories, and mission requests), the upgraded database management tool now tracks dates of creation, modification, and activation; imagery acquired in response to requests; the status and location of ancillary site information; and affiliations with studies, their sponsors, and collaborators. SMMS was designed to facilitate overall mission planning in terms of site selection and activation and provide the necessary site parameters for the Satellite Tool Kit (STK) Integrated Message Production List Editor (SIMPLE), which is used by CEO operations to perform daily ISS mission planning. The CEO team uses the SMMS for three general functions - database queries of content and status, individual site creation and updates, and mission planning. The CEO administrator of the science site database is able to create or modify the content of sites and activate or deactivate them based on the requirements of the sponsors. The administrator supports and implements ISS mission planning by assembling, reporting, and activating mission-specific site selections for management; deactivating sites as requirements are met; and creating new sites, such as International Charter sites for disasters, as circumstances warrant. In addition to the above CEO internal uses, when site planning for a specific ISS mission is complete and approved, the SMMS can produce and export those essential site database elements for the mission into XML format for use by onboard Earth-location systems, such as Worldmap. The design, development, and implementation of the SMMS resulted in a superior database management system for CEO science sites by focusing on the functions and applications of the database alone instead of integrating the database with the multipurpose configuration of the AMPS. Unlike the AMPS, it can function and be modified within the existing Windows 7 environment. The functions and applications of the SMMS were expanded to accommodate more database elements, report products, and a streamlined interface for data entry and review. 
A particularly elegant enhancement in data entry was the integration of the Google Earth application for the visual display and definition of site coordinates for site areas defined by multiple coordinates. Transfer between the SMMS and Google Earth is accomplished with a Keyhole Markup Language (KML) expression of geographic data (see figures 3 and 4). Site coordinates may be entered into the SMMS panel directly for display in Google Earth, or the coordinates may be defined on the Google Earth display as a mouse-controlled polygon and transferred back into the SMMS as KML input. This significantly reduces the possibility of errors in coordinate entries and provides visualization of the scale of the site being defined. CEO now has a powerful tool for managing and defining sites on the Earth's surface for targets of both astronaut photography and other onboard remote sensing systems. It can also record and track results by sponsor, collaborator, or type of study.
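For illustration, the snippet below sketches the kind of KML round-trip described: a site's corner coordinates are wrapped in a KML Polygon that Google Earth can display. It uses only Python's standard library; the site name and coordinates are invented, and this is not the SMMS code itself.

import xml.etree.ElementTree as ET

# invented example site: four corners of an area, as (lon, lat) pairs
site_name = "Example Delta Site"
corners = [(30.10, 31.20), (30.25, 31.20), (30.25, 31.35), (30.10, 31.35)]

kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
pm = ET.SubElement(kml, "Placemark")
ET.SubElement(pm, "name").text = site_name
ring = ET.SubElement(ET.SubElement(ET.SubElement(pm, "Polygon"),
                                   "outerBoundaryIs"), "LinearRing")
# KML wants "lon,lat,alt" triples; repeat the first corner to close the ring
coords = corners + [corners[0]]
ET.SubElement(ring, "coordinates").text = " ".join(
    f"{lon},{lat},0" for lon, lat in coords)

ET.ElementTree(kml).write("site.kml", xml_declaration=True, encoding="UTF-8")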
NASA Astrophysics Data System (ADS)
Michel, L.; Motch, C.; Nguyen Ngoc, H.; Pineau, F. X.
2009-09-01
Saada (http://amwdb.u-strasbg.fr/saada) is a tool for helping astronomers build local archives without writing any code (Michel et al. 2004). Databases created by Saada can host collections of heterogeneous data files. These data collections can also be published in the VO. An overview of the main Saada features is presented in this demo: creation of a basic database, creation of relationships, data searches using SaadaQL, metadata tagging, and use of VO services.
Enhancing GADRAS Source Term Inputs for Creation of Synthetic Spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, Steven M.; Harding, Lee
The Gamma Detector Response and Analysis Software (GADRAS) team has enhanced the source term input for the creation of synthetic spectra. These enhancements include the following: allowing users to programmatically provide source information to GADRAS through memory, rather than through a string limited to 256 characters; allowing users to provide their own source decay database information; and updating the default GADRAS decay database to fix errors and include coincident gamma information.
The utilization of neural nets in populating an object-oriented database
NASA Technical Reports Server (NTRS)
Campbell, William J.; Hill, Scott E.; Cromp, Robert F.
1989-01-01
Existing NASA-supported scientific databases are usually developed, managed and populated in a tedious, error-prone and self-limiting way in terms of what can be described in a relational Database Management System (DBMS). The next generation of Earth remote sensing platforms (i.e., the Earth Observing System (EOS)) will be capable of generating data at a rate of over 300 megabits per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, catalog and are manageable in a domain-specific context, and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data is then dynamically allocated to an object-oriented database where it can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven knowledge-based scientific information system.
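To make the pipeline concrete, here is a minimal, hypothetical sketch of the populate step: a stand-in classifier labels image tiles, and each labeled tile becomes an object in a persistent store. Python's shelve module stands in for the object-oriented DBMS, and the class names and confidence values are invented.

import shelve
from dataclasses import dataclass, asdict

@dataclass
class ImageObject:
    tile_id: str
    label: str
    confidence: float

def classify(tile_pixels):
    # stand-in for the trained neural net described in the paper
    return "water", 0.93

with shelve.open("scene_catalog.db") as db:
    for tile_id, pixels in [("scene42_tile007", b"...")]:
        label, conf = classify(pixels)
        db[tile_id] = asdict(ImageObject(tile_id, label, conf))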
The influence of the infrastructure characteristics in urban road accidents occurrence.
Vieira Gomes, Sandra
2013-11-01
This paper summarizes the results of a study regarding the creation of tools that can be used in intervention methods in the planning and management of urban road networks in Portugal. The first tool concerns the creation of a geocoded database of road accidents that occurred in Lisbon between 2004 and 2007, which allowed the definition of digital maps, with the possibility of a wide range of consultations and cross-referencing of information. The second tool concerns the development of models to estimate the frequency of accidents on urban networks, according to different disaggregations: road element (intersections and segments); type of accident (accidents with and without pedestrians); and inclusion of explanatory variables related to the road environment. Several methods were used to assess the goodness of fit of the developed models, allowing more robust conclusions. This work aims to contribute to the scientific knowledge of the accident phenomenon in Portugal, with detailed and accurate information on the factors affecting its occurrence. This makes it possible to explicitly include safety aspects in planning and road management tasks. Copyright © 2013 Elsevier Ltd. All rights reserved.
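A model of the general kind described, an accident-frequency (Poisson) regression, can be sketched as follows; the data, variable names and functional form are illustrative assumptions, not the paper's fitted models.

import numpy as np
import statsmodels.api as sm

# toy data: crashes per road segment over the study period
aadt    = np.array([12000, 8000, 25000, 15000])   # traffic volume
length  = np.array([0.8, 1.2, 0.5, 2.0])          # segment length, km
crashes = np.array([14, 9, 21, 30])

X = sm.add_constant(np.log(aadt))                 # log-linear in exposure
model = sm.GLM(crashes, X, family=sm.families.Poisson(),
               offset=np.log(length)).fit()       # length as an offset
print(model.summary())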
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible faults or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
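The sketch below illustrates the general idea of such a script-driven alternative store (it is not the SLAC script): Ganglia's gmond daemon publishes cluster state as XML on a TCP port, which can be parsed and written to MySQL. The table and column names are assumptions.

import socket
import xml.etree.ElementTree as ET
import mysql.connector

def fetch_gmond_xml(host="localhost", port=8649):
    """gmond serves the current cluster state as XML on its TCP port."""
    with socket.create_connection((host, port)) as s:
        return b"".join(iter(lambda: s.recv(4096), b""))

root = ET.fromstring(fetch_gmond_xml())
conn = mysql.connector.connect(user="ganglia", password="...",
                               database="monitoring")
cur = conn.cursor()
for host in root.iter("HOST"):
    for metric in host.iter("METRIC"):
        cur.execute("INSERT INTO samples (host, name, value)"
                    " VALUES (%s, %s, %s)",
                    (host.get("NAME"), metric.get("NAME"), metric.get("VAL")))
conn.commit()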
Patient handover in orthopaedics, improving safety using Information Technology.
Pearkes, Tim
2015-01-01
Good inpatient handover ensures patient safety and continuity of care. An adjunct to this is the patient list, which is routinely managed by junior doctors. These lists are routinely created and managed within Microsoft Excel or Word. Following the merger of two orthopaedic departments into a single service in a new hospital, it was felt that a number of safety issues within the handover process needed to be addressed. This quality improvement project addressed these issues through the creation and implementation of a new patient database which spanned the department, allowing trouble-free, safe, and comprehensive handover. Feedback demonstrated an improved user experience, greater reliability, continuity within the lists, and a subsequent improvement in patient safety.
CycADS: an annotation database system to ease the development and update of BioCyc databases
Vellozo, Augusto F.; Véron, Amélie S.; Baa-Puyoulet, Patrice; Huerta-Cepas, Jaime; Cottret, Ludovic; Febvay, Gérard; Calevro, Federica; Rahbé, Yvan; Douglas, Angela E.; Gabaldón, Toni; Sagot, Marie-France; Charles, Hubert; Colella, Stefano
2011-01-01
In recent years, genomes from an increasing number of organisms have been sequenced, but their annotation remains a time-consuming process. The BioCyc databases offer a framework for the integrated analysis of metabolic networks. The Pathway Tools software suite allows the automated construction of a database starting from an annotated genome, but it requires prior integration of all annotations into a specific summary file or into a GenBank file. To allow the easy creation and update of a BioCyc database starting from the multiple genome annotation resources available over time, we have developed an ad hoc data management system that we call the Cyc Annotation Database System (CycADS). CycADS is centred on a specific database model and on a set of Java programs to import, filter and export relevant information. Data from GenBank and other annotation sources (including, for example, KAAS, PRIAM, Blast2GO and PhylomeDB) are collected into a database to be subsequently filtered and extracted to generate a complete annotation file. This file is then used to build an enriched BioCyc database using the PathoLogic program of Pathway Tools. The CycADS pipeline for annotation management was used to build the AcypiCyc database for the pea aphid (Acyrthosiphon pisum), whose genome was recently sequenced. The AcypiCyc database webpage also includes, for comparative analyses, two other metabolic reconstruction BioCyc databases generated using CycADS: TricaCyc for Tribolium castaneum and DromeCyc for Drosophila melanogaster. Thanks to its flexible design, CycADS offers a powerful software tool for the generation and regular updating of enriched BioCyc databases. The CycADS system is particularly suited for metabolic gene annotation and network reconstruction in newly sequenced genomes. Because of the uniform annotation used for metabolic network reconstruction, CycADS is particularly useful for comparative analysis of the metabolism of different organisms. Database URL: http://www.cycadsys.org PMID:21474551
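The following toy sketch illustrates the collect-filter-export pattern CycADS implements in Java (here in Python for brevity): annotations from several sources are pooled per gene, a simple agreement filter is applied, and one summary file is written for PathoLogic. The file formats, source list and two-source rule are invented for illustration.

import csv
from collections import defaultdict

annotations = defaultdict(list)          # gene_id -> [(source, ec_number)]
for source, path in [("KAAS", "kaas.tsv"), ("PRIAM", "priam.tsv")]:
    with open(path) as fh:
        for gene_id, ec in csv.reader(fh, delimiter="\t"):
            annotations[gene_id].append((source, ec))

with open("pathologic_input.tsv", "w", newline="") as out:
    writer = csv.writer(out, delimiter="\t")
    for gene_id, hits in annotations.items():
        # keep only EC numbers supported by at least two sources
        for ec in {ec for _, ec in hits}:
            if sum(1 for _, e in hits if e == ec) >= 2:
                writer.writerow([gene_id, ec])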
BDVC (Bimodal Database of Violent Content): A database of violent audio and video
NASA Astrophysics Data System (ADS)
Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro
2017-09-01
Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications of a single type of content like text, voice or images; bimodal databases, by contrast, make it possible to semantically associate two different types of content, like audio-video or image-text, among others. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia content through the semantic relation that associates the actions of both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing allows an increase in semantic performance if and only if these applications process both types of content. This bimodal database contains 580 annotated audiovisual segments, with a total duration of 28 minutes, divided into 41 classes. Bimodal databases are a tool in the generation of applications for the semantic web.
A new data management system for the French National Registry of human alveolar echinococcosis cases
Charbonnier, Amandine; Knapp, Jenny; Demonmerot, Florent; Bresson-Hadni, Solange; Raoul, Francis; Grenouillet, Frédéric; Millon, Laurence; Vuitton, Dominique Angèle; Damy, Sylvie
2014-01-01
Alveolar echinococcosis (AE) is an endemic zoonosis in France due to the cestode Echinococcus multilocularis. The French National Reference Centre for Alveolar Echinococcosis (CNR-EA), connected to the FrancEchino network, is responsible for recording all AE cases diagnosed in France. Administrative, epidemiological and medical information on the French AE cases may currently be considered exhaustive only at the time of diagnosis. To constitute a reference data set, an information system (IS) was developed using a relational database management system (MySQL). The current data set will evolve towards a dynamic surveillance system, including follow-up data (e.g. imaging, serology), and will be connected to environmental and parasitological data relative to E. multilocularis to better understand the pathogen transmission pathway. A particularly important goal is the possible interoperability of the IS with similar European and other databases abroad; this new IS could play a supporting role in the creation of new AE registries. PMID:25526544
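A minimal relational sketch of such a registry follows, with one table of cases and one of follow-up events. The schema is an assumption for illustration, not the CNR-EA system; SQLite is used here so the sketch runs anywhere, whereas the actual IS uses MySQL.

import sqlite3

conn = sqlite3.connect("ae_registry.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS ae_case (
    case_id      INTEGER PRIMARY KEY,
    diagnosis_dt TEXT NOT NULL,
    region       TEXT
);
CREATE TABLE IF NOT EXISTS follow_up (
    event_id  INTEGER PRIMARY KEY,
    case_id   INTEGER REFERENCES ae_case(case_id),
    event_dt  TEXT,
    kind      TEXT,          -- e.g. 'imaging', 'serology'
    result    TEXT
);
""")
conn.commit()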
ERIC Educational Resources Information Center
Painter, Derrick
1996-01-01
Discussion of dictionaries as databases focuses on the digitizing of The Oxford English Dictionary (OED) and the use of Standard Generalized Markup Language (SGML). Topics include the creation of a consortium to digitize the OED, document structure, relational databases, text forms, sequence, and discourse. (LRW)
Fashion a Field Guide To Your School Nature Area.
ERIC Educational Resources Information Center
Dean, Bruce R.
1996-01-01
Outlines activities for creating a field guide by studying nature around a school. Includes instructions for creation of a database for recording information, for identification of various plants and animals, and for actual creation of a book. (AIM)
NASA Astrophysics Data System (ADS)
Pritykin, F. N.; Nebritov, V. I.
2017-06-01
The structure of a graphic database specifying the shape and the projected position of the work envelope of an android arm mechanism, for various positions of forbidden zones known in advance, is proposed. A technique for the analytical definition of the work envelope, based on the methods of analytical geometry and set theory, is presented. The results of these studies can be applied to the creation of knowledge bases for intelligent android control systems operating autonomously in complex environments.
Knowledge Discovery in Databases.
ERIC Educational Resources Information Center
Norton, M. Jay
1999-01-01
Knowledge discovery in databases (KDD) revolves around the investigation and creation of knowledge, processes, algorithms, and mechanisms for retrieving knowledge from data collections. The article is an introductory overview of KDD. The rationale and environment of its development and applications are discussed. Issues related to database design…
A Framework for Mapping User-Designed Forms to Relational Databases
ERIC Educational Resources Information Center
Khare, Ritu
2011-01-01
In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…
41 CFR 105-53.110 - Creation and authority.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false Creation and authority. 105-53.110 Section 105-53.110 Public Contracts and Property Management Federal Property Management... General § 105-53.110 Creation and authority. The General Services Administration was established by...
41 CFR 105-53.110 - Creation and authority.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false Creation and authority. 105-53.110 Section 105-53.110 Public Contracts and Property Management Federal Property Management... General § 105-53.110 Creation and authority. The General Services Administration was established by...
41 CFR 105-53.110 - Creation and authority.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false Creation and authority. 105-53.110 Section 105-53.110 Public Contracts and Property Management Federal Property Management... General § 105-53.110 Creation and authority. The General Services Administration was established by...
de Merich, D; Forte, Giulia
2011-01-01
Risk assessment is the fundamental process of an enterprise's prevention system and is the principal mandatory provision contained in the Health and Safety Law (Legislative Decree 81/2008), amended by Legislative Decree 106/2009. In order to properly comply with this obligation in small-sized enterprises as well, the appropriate regulatory bodies should provide enterprises with standardized tools and methods for identifying, assessing and managing risks. The aim is to assist small and micro-enterprises (SMEs) in particular with risk assessment, by providing a flexible tool, standardized in the form of a datasheet, that can be updated with more detailed information on the various work contexts in Italy. Official efforts to provide Italian SMEs with information may initially make use of the findings of research conducted by ISPESL over the past 20 years, thanks in part to cooperation with other institutions (Regions, INAIL - National Insurance Institute for Occupational Accidents and Diseases), which has led to the creation of an information system on prevention consisting of numerous databases, both statistical and documental ("National System of Surveillance on fatal and serious accidents", "National System of Surveillance on work-related diseases", "Sector hazard profiles" database, "Solutions and Best Practices" database, "Technical Guidelines" database, "Training packages for prevention professionals in enterprises" database). With regard to evaluation criteria applicable within the enterprise, combining traditional and uniform areas of assessment (by sector or by risk factor) with assessments by job/occupation has become possible thanks to the cooperation agreement made in 2009 by ISPESL, the ILO (International Labour Organisation) of Geneva and IIOSH (Israel Institute for Occupational Health and Hygiene) regarding the creation of an international database (HDODB) based on risk datasheets per occupation. The project sets out to assist small and micro-enterprises in particular with risk assessment, providing a flexible and standardized tool in the form of a datasheet that can be updated with more detailed information on the various work contexts in Italy. The model proposed by ISPESL selected the ILO's "Hazard Datasheet on Occupation" as the initial information tool to steer efforts to assess and manage hazards in small and micro-enterprises. In addition to being an internationally validated tool, the occupation datasheet has a very simple structure that is very effective for communicating and updating information in relation to the local context. Following a logic based on providing support to enterprises by means of a collaborative network among institutions, local supervisory services and social partners, standardised hazard assessment procedures should be, irrespective of any legal obligations, the preferred tools of an "updatable information system" capable of supporting the need to improve the process of assessing and managing hazards in enterprises.
The new geographic information system in ETVA VI.PE.
NASA Astrophysics Data System (ADS)
Xagoraris, Zafiris; Soulis, George
2016-08-01
ETVA VI.PE. S.A. is a member of the Piraeus Bank Group of Companies, and its activities include designing, developing, exploiting and managing Industrial Areas throughout Greece. Inside ETVA VI.PE.'s thirty-one Industrial Parks there are currently 2,500 manufacturing companies established, with 40,000 employees and € 2.5 billion of invested funds. In each of the industrial areas, ETVA VI.PE. provides companies with industrial lots of land (sites) with favourable building codes and complete infrastructure networks of water supply, sewerage, paved roads, power supply, communications, cleansing services, etc. The development of the Geographical Information System for ETVA VI.PE.'s Industrial Parks started at the beginning of 1992 and consists of three subsystems: Cadastre, which manages the information for the land acquisition of Industrial Areas; Street Layout - Sites, which manages the sites sold to manufacturing companies; and Networks, which manages the infrastructure networks (roads, water supply, sewerage, etc.). The mapping of each Industrial Park is made incorporating state-of-the-art photogrammetric, cartographic and surveying methods and techniques. Having passed through the phases of initial design (hybrid GIS) and system upgrade (integrated GIS solution with spatial database), the system is currently operating on a new upgrade (integrated GIS solution with spatial database) that includes redesigning and merging the system's database schemas, along with the creation of central security policies and the development of a new web GIS application for advanced data entry, highly customisable and standard reports, and dynamic interactive maps. The new GIS brings the company to advanced levels of productivity and introduces a new era for decision making and business management.
Gallagher, Sarah A; Smith, Angela B; Matthews, Jonathan E; Potter, Clarence W; Woods, Michael E; Raynor, Mathew; Wallen, Eric M; Rathmell, W Kimryn; Whang, Young E; Kim, William Y; Godley, Paul A; Chen, Ronald C; Wang, Andrew; You, Chaochen; Barocas, Daniel A; Pruthi, Raj S; Nielsen, Matthew E; Milowsky, Matthew I
2014-01-01
The management of genitourinary malignancies requires a multidisciplinary care team composed of urologists, medical oncologists, and radiation oncologists. A genitourinary (GU) oncology clinical database is an invaluable resource for patient care and research. Although electronic medical records provide a single web-based record used for clinical care, billing, and scheduling, information is typically stored in a discipline-specific manner and data extraction is often not applicable to a research setting. A GU oncology database may be used for the development of multidisciplinary treatment plans, analysis of disease-specific practice patterns, and identification of patients for research studies. Despite the potential utility, there are many important considerations that must be addressed when developing and implementing a discipline-specific database. The creation of the GU oncology database including prostate, bladder, and kidney cancers with the identification of necessary variables was facilitated by meetings of stakeholders in medical oncology, urology, and radiation oncology at the University of North Carolina (UNC) at Chapel Hill with a template data dictionary provided by the Department of Urologic Surgery at Vanderbilt University Medical Center. Utilizing Research Electronic Data Capture (REDCap, version 4.14.5), the UNC Genitourinary OncoLogy Database (UNC GOLD) was designed and implemented. The process of designing and implementing a discipline-specific clinical database requires many important considerations. The primary consideration is determining the relationship between the database and the Institutional Review Board (IRB) given the potential applications for both clinical and research uses. Several other necessary steps include ensuring information technology security and federal regulation compliance; determination of a core complete dataset; creation of standard operating procedures; standardizing entry of free text fields; use of data exports, queries, and de-identification strategies; inclusion of individual investigators' data; and strategies for prioritizing specific projects and data entry. A discipline-specific database requires a buy-in from all stakeholders, meticulous development, and data entry resources to generate a unique platform for housing information that may be used for clinical care and research with IRB approval. The steps and issues identified in the development of UNC GOLD provide a process map for others interested in developing a GU oncology database. Copyright © 2014 Elsevier Inc. All rights reserved.
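As one concrete illustration of the de-identification strategies the abstract mentions, the sketch below derives stable pseudonymous IDs with keyed hashing and truncates dates to the year. The field names and CSV layout are assumptions, not UNC GOLD's implementation.

import csv
import hashlib
import hmac

SECRET = b"site-specific-key"   # kept separate from the research dataset

def pseudonym(mrn):
    """Stable, non-reversible patient ID derived from the record number."""
    return hmac.new(SECRET, mrn.encode(), hashlib.sha256).hexdigest()[:16]

with open("export.csv") as src, open("deidentified.csv", "w", newline="") as dst:
    out = csv.DictWriter(dst, fieldnames=["pid", "dx_year", "stage"])
    out.writeheader()
    for row in csv.DictReader(src):
        out.writerow({"pid": pseudonym(row["mrn"]),
                      "dx_year": row["diagnosis_date"][:4],  # keep year only
                      "stage": row["stage"]})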
LactMed: Drugs and Lactation Database
Seventy-five years of vegetation treatments on public rangelands in the Great Basin of North America
Pilliod, David S.; Welty, Justin; Toevs, Gordon R.
2017-01-01
On the Ground: Land treatments occurring over millions of hectares of public rangelands in the Great Basin over the last 75 years represent one of the largest vegetation manipulation and restoration efforts in the world. The ability to use legacy data from land treatments in adaptive management and ecological research has improved with the creation of the Land Treatment Digital Library (LTDL), a spatially explicit database of land treatments conducted by the U.S. Bureau of Land Management. The LTDL contains information on over 9,000 confirmed land treatments in the Great Basin, composed of seedings (58%), vegetation control treatments (24%), and other types of vegetation or soil manipulations (18%). The potential application of land treatment legacy data for adaptive management, or as natural experiments for retrospective analyses of effects of land management actions on physical, hydrologic, and ecologic patterns and processes, is considerable and just beginning to be realized.
SWS: accessing SRS sites contents through Web Services.
Romano, Paolo; Marra, Domenico
2008-03-26
Web Services and Workflow Management Systems can support the creation and deployment of network systems able to automate data analysis and retrieval processes in biomedical research. Web Services have been implemented at bioinformatics centres, and workflow systems have been proposed for biological data analysis. New databanks are often developed with these technologies in mind, but many existing databases do not allow programmatic access. Only a fraction of available databanks can thus be queried through programmatic interfaces. SRS is a well-known indexing and search engine for biomedical databanks offering public access to many databanks and analysis tools. Unfortunately, these data are not easily and efficiently accessible through Web Services. We have developed 'SRS by WS' (SWS), a tool that makes information available in SRS sites accessible through Web Services. Information on known sites is maintained in a database, srsdb. SWS consists of a suite of Web Services that can query both srsdb, for information on sites and databases, and SRS sites. SWS returns results in a text-only format and can be accessed through a WSDL-compliant client. SWS enables interoperability between workflow systems and SRS implementations, by also managing access to alternative sites, in order to cope with network and maintenance problems, and selecting the most up-to-date among available systems. The development and implementation of Web Services allowing programmatic access to an exhaustive set of biomedical databases can significantly improve the automation of in-silico analysis. SWS supports this activity by making biological databanks that are managed in public SRS sites available through a programmatic interface.
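As an illustration of the kind of WSDL-compliant client the abstract mentions, here is a minimal Python sketch using the zeep SOAP library. The WSDL URL, the operation name (getEntry) and its parameters are hypothetical placeholders, not the published SWS interface.

from zeep import Client

# placeholder WSDL location, not the real SWS endpoint
client = Client("http://example.org/sws/services/query?wsdl")

# hypothetical operation: fetch one entry from a databank on an SRS site
result = client.service.getEntry(site="srs.example.org",
                                 databank="UNIPROT",
                                 entry_id="P01308")
print(result)  # SWS returns results in a text-only format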
Digital food photography: Dietary surveillance and beyond
USDA-ARS?s Scientific Manuscript database
The method used for creating a database of approximately 20,000 digital images of multiple portion sizes of foods linked to the USDA's Food and Nutrient Database for Dietary Studies (FNDDS) is presented. The creation of this database began in 2002, and its development has spanned 10 years. Initially...
48 CFR 504.602-71 - Federal Procurement Data System-Public access to data.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by Federal Acquisition...
48 CFR 504.602-71 - Federal Procurement Data System-Public access to data.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by Federal Acquisition...
Gastric cancer in India: epidemiology and standard of treatment.
Servarayan Murugesan, Chandramohan; Manickavasagam, Kanagavel; Chandramohan, Apsara; Jebaraj, Abishai; Jameel, Abdul Rehman Abdul; Jain, Mayank Shikar; Venkataraman, Jayanthi
2018-04-02
India has a low incidence of gastric cancer. It ranks among the top five most common cancers. Regional diversity of incidence is of importance. It is the second most common cause of cancer-related deaths among Indian men and women between the ages of 15 and 44. Helicobacter pylori carcinogenesis is low in India. Advanced stage at presentation is a cause of concern. Basic and clinical research in India reveals a globally comparable standard of care and outcome. The large population, sociodemographic profile and challenges in health expenditure, however, remain major challenges for health care policy managers. The recent formation of the National Cancer Grid, the integration of national databases, and the creation of the social identification database Aadhaar by the Unique Identification Authority of India are set to enhance health care provision and outcomes.
Updated Palaeotsunami Database for Aotearoa/New Zealand
NASA Astrophysics Data System (ADS)
Gadsby, M. R.; Goff, J. R.; King, D. N.; Robbins, J.; Duesing, U.; Franz, T.; Borrero, J. C.; Watkins, A.
2016-12-01
The updated configuration, design, and implementation of a national palaeotsunami (pre-historic tsunami) database for Aotearoa/New Zealand (A/NZ) are near completion. This tool enables correlation of events along different stretches of the NZ coastline, provides information on frequency and extent of local, regional and distant-source tsunamis, and delivers detailed information on the science and proxies used to identify the deposits. In A/NZ a plethora of data, scientific research and experience surrounds palaeotsunami deposits, but much of this information has been difficult to locate, has variable reporting standards, and lacked quality assurance. The original database was created by Professor James Goff while working at the National Institute of Water & Atmospheric Research in A/NZ, but has subsequently been updated during his tenure at the University of New South Wales. The updating and establishment of the national database was funded by the Ministry of Civil Defence and Emergency Management (MCDEM), led by Environment Canterbury Regional Council, and supported by all 16 regions of A/NZ's local government. Creation of a single database has consolidated a wide range of published and unpublished research contributions from many science providers on palaeotsunamis in A/NZ. The information is now easily accessible and quality assured and allows examination of frequency, extent and correlation of events. This provides authoritative scientific support for coastal-marine planning and risk management. The database will complement the GNS New Zealand Historical Database and contributes to a heightened public awareness of tsunami by being a "one-stop-shop" for information on past tsunami impacts. There is scope for this to become an international database, enabling Pacific-wide correlation of large events, as well as identifying smaller regional ones. The Australian research community has already expressed an interest, and the database is also compatible with a similar one currently under development in Japan. Expressions of interest in collaborating with the A/NZ team to expand the database are invited from other Pacific nations.
Development of Human Face Literature Database Using Text Mining Approach: Phase I.
Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K
2018-06-01
The face is an important part of the human body by which an individual communicates in society. Its importance can be highlighted by the fact that a person deprived of a face cannot sustain themselves in the living world. The number of experiments being performed and research papers being published in the domain of the human face has surged in the past few decades. Several scientific disciplines conduct research on the human face, including medical science, anthropology, information technology (biometrics, robotics, artificial intelligence, etc.), psychology, forensic science and neuroscience. This highlights the need to collect and manage data concerning the human face so that public and free access to it can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature concerning the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc. The collected research papers are stored in the form of a database. Hence, the database will benefit the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities, and many other parameters can be extracted from this database. The front end has been developed using Hypertext Markup Language (HTML) and Cascading Style Sheets (CSS). The back end has been developed using the hypertext preprocessor (PHP), with JavaScript used as the scripting language. MySQL (Structured Query Language) is used for database development, as it is the most widely used relational database management system. The XAMPP (cross-platform, Apache, MySQL, PHP, Perl) open-source web application stack has been used as the server. The database is still in the developmental phase, and this paper discusses the initial steps of its creation and the work done to date.
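A sketch of the kind of keyword/journal/author search the interface supports is shown below, in Python rather than PHP, and with an assumed table layout, since the actual schema is not published in the abstract.

import mysql.connector

conn = mysql.connector.connect(user="face_db", password="...",
                               database="face_literature")
cur = conn.cursor()
cur.execute("""
    SELECT title, journal, pub_date
    FROM papers
    WHERE author LIKE %s OR keywords LIKE %s
    ORDER BY pub_date DESC
""", ("%Krishan%", "%facial asymmetry%"))
for title, journal, pub_date in cur.fetchall():
    print(pub_date, journal, title)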
The Global Flows of Metals and Minerals
Rogich, Donald G.; Matos, Grecia R.
2008-01-01
This paper provides a preliminary review of the trends in worldwide metals and industrial minerals production and consumption based on newly developed global metals and minerals Material Flow Accounts (MFA). The MFA developed encompass data on extraction and consumption for 25 metal and mineral commodities, on a country-by-country and year-by-year basis, for the period 1970 to 2004. The database, jointly developed by the authors, resides with the U.S. Geological Survey (USGS) as individual commodity Excel workbooks and within a FileMaker data management system for use in analysis. Numerous national MFA have been developed to provide information on the industrial metabolism of individual countries. These MFA include material flows associated with the four commodity categories of goods that are inputs to a country's economy: agriculture, forestry, metals and minerals, and nonrenewable organic materials. In some cases, the material flows associated with the creation and maintenance of the built infrastructure (such as houses, buildings, roads, airports, dams, and so forth) were also examined. The creation of global metals and industrial minerals flows is viewed as a first step in the creation of comprehensive global MFA documenting the historical and current flows of all four categories of physical goods that support world economies. Metals and minerals represent a major category of nonrenewable resources that humans extract from and return to the natural ecosystem. As human populations and economies have increased, metals and industrial minerals use has increased concomitantly. This dramatic growth in metals and minerals use has serious implications for both the availability of future resources and the health of the environment, which is affected by the outputs associated with their use. This paper provides an overview of a number of the trends observed by examining the database and suggests areas for future study.
Blending Education and Polymer Science: Semiautomated Creation of a Thermodynamic Property Database
ERIC Educational Resources Information Center
Tchoua, Roselyne B.; Qin, Jian; Audus, Debra J.; Chard, Kyle; Foster, Ian T.; de Pablo, Juan
2016-01-01
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The…
48 CFR 504.605-70 - Federal Procurement Data System-Public access to data.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by FAR subpart 4.6. One of...
48 CFR 504.605-70 - Federal Procurement Data System-Public access to data.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by FAR subpart 4.6. One of...
48 CFR 504.605-70 - Federal Procurement Data System-Public access to data.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Procurement Data System—Public access to data. (a) The FPDS database. The General Services Administration awarded a contract for creation and operation of the Federal Procurement Data System (FPDS) database. That database includes information reported by departments and agencies as required by FAR subpart 4.6. One of...
Translation from the collaborative OSM database to cartography
NASA Astrophysics Data System (ADS)
Hayat, Flora
2018-05-01
The OpenStreetMap (OSM) database includes original items very useful for geographical analysis and for creating thematic maps. Contributors record in the open database various themes regarding amenities, leisure, transport, buildings and boundaries. The Michelin mapping department develops map prototypes to test the feasibility of mapping based on OSM. To translate the OSM database structure into a database structure fitted to Michelin graphic guidelines, a research project is under development. It aims at defining the right structure for Michelin's uses. The research project relies on the analysis of semantic and geometric heterogeneities in OSM data. To that end, Michelin implements methods to transform the input geographical database into a cartographic image dedicated to specific uses (routing and tourist maps). The paper focuses on the mapping tools available to produce a personalised spatial database. Based on the processed data, paper and Web maps can be displayed. Two prototypes are described in this article: a vector-tile web map and a mapping method to produce paper maps at a regional scale. The vector-tile mapping method offers easy navigation within the map and within graphic and thematic guidelines. Paper maps can be partly automatically drawn. Drawing automation and data management are part of the map creation process, as is the final hand-drawing phase. Both prototypes have been set up using the OSM technical ecosystem.
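As a sketch of such a structure translation, the snippet below maps a few OSM key/value pairs to invented target categories using the pyosmium parsing library; the mapping table is illustrative and is not Michelin's specification.

import osmium

TAG_MAP = {                      # hypothetical OSM-to-cartography mapping
    ("amenity", "hospital"): "POI_HEALTH",
    ("leisure", "park"):     "AREA_GREEN",
    ("highway", "motorway"): "ROAD_MAJOR",
}

class Translator(osmium.SimpleHandler):
    def __init__(self):
        super().__init__()
        self.features = []

    def way(self, w):
        # translate each way whose tags match the target schema
        for (key, value), category in TAG_MAP.items():
            if w.tags.get(key) == value:
                self.features.append((w.id, category))

handler = Translator()
handler.apply_file("extract.osm.pbf")
print(len(handler.features), "features translated")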
Ethics across the computer science curriculum: privacy modules in an introductory database course.
Appel, Florence
2005-10-01
This paper describes the author's experience of infusing an introductory database course with privacy content, and the on-going project entitled Integrating Ethics Into the Database Curriculum, that evolved from that experience. The project, which has received funding from the National Science Foundation, involves the creation of a set of privacy modules that can be implemented systematically by database educators throughout the database design thread of an undergraduate course.
Knowledge Value Creation Characteristics of Virtual Teams: A Case Study in the Construction Sector
NASA Astrophysics Data System (ADS)
Vorakulpipat, Chalee; Rezgui, Yacine
Any knowledge environment aimed at virtual teams should promote identification, access, capture and retrieval of relevant knowledge anytime / anywhere, while nurturing the social activities that underpin the knowledge sharing and creation process. In fact, socio-cultural issues play a critical role in the successful implementation of Knowledge Management (KM), and constitute a milestone towards value creation. The findings indicate that Knowledge Management Systems (KMS) promote value creation when they embed and nurture the social conditions that bind and bond team members together. Furthermore, technology assets, human networks, social capital, intellectual capital, and change management are identified as essential ingredients that have the potential to ensure effective knowledge value creation.
Coastal resource and sensitivity mapping of Vietnam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odin, L.M.
1997-08-01
This paper describes a project to establish a relationship between environmental sensitivity (primarily to oil pollution) and response planning and prevention priorities for Vietnamese coastal regions. An inventory of coastal environmental sensitivity was compiled and index mapping was created. Satellite and geographical information system data were integrated and used for database creation. The database was used to create a coastal resource map, a coastal sensitivity map, and a field inventory base map. The final coastal environment sensitivity classification showed that almost 40 percent of the 7,448 km of mapped shoreline has a high to medium-high sensitivity to oil pollution.
DES Science Portal: II- Creating Science-Ready Catalogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fausti Neto, Angelo; et al.
We present a novel approach for creating science-ready catalogs through a software infrastructure developed for the Dark Energy Survey (DES). We integrate the data products released by the DES Data Management and additional products created by the DES collaboration in an environment known as the DES Science Portal. Each step involved in the creation of a science-ready catalog is recorded in a relational database and can be recovered at any time. We describe how the DES Science Portal automates the creation and characterization of lightweight catalogs for the DES Year 1 Annual Release, and show its flexibility in creating multiple catalogs with different inputs and configurations. Finally, we discuss the advantages of this infrastructure for large surveys such as DES and the Large Synoptic Survey Telescope. The capability of creating science-ready catalogs efficiently and with full control of the inputs and configurations used is an important asset for supporting science analysis using data from large astronomical surveys.
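The provenance idea, recording each catalog-creation step in a relational database so it can be recovered later, can be sketched as follows; the table layout, run IDs and step names are assumptions, not the DES Science Portal schema.

import json
import sqlite3
import time

conn = sqlite3.connect("portal_provenance.db")
conn.execute("""CREATE TABLE IF NOT EXISTS pipeline_step (
    run_id   TEXT,
    step     TEXT,
    config   TEXT,
    started  REAL)""")

def record_step(run_id, step, config):
    """Log one configured step of a catalog-creation run."""
    conn.execute("INSERT INTO pipeline_step VALUES (?, ?, ?, ?)",
                 (run_id, step, json.dumps(config), time.time()))
    conn.commit()

record_step("y1-gold-v2", "photoz_cut", {"zmax": 1.4})
record_step("y1-gold-v2", "star_galaxy_sep", {"classifier": "modest"})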
Sharma, Parichit; Mantri, Shrikant S.
2014-01-01
The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design decisions, describe workflows and provide a detailed analysis. PMID:24979410
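For illustration, the script-creation step that such a web interface automates might look like the sketch below: it writes a Torque/PBS submission script for an mpiBLAST run. The queue name, resource request, database name and file paths are placeholders.

job = """#!/bin/bash
#PBS -N mpiblast_annot
#PBS -q batch
#PBS -l nodes=4:ppn=8,walltime=12:00:00
cd $PBS_O_WORKDIR
mpirun -np 32 mpiblast -p blastp -d nr_segmented \\
       -i queries.fasta -o results.txt
"""
with open("mpiblast.pbs", "w") as fh:
    fh.write(job)
# submission (normally done by the web interface via the resource manager):
#   qsub mpiblast.pbs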
Towards a collaborative, global infrastructure for biodiversity assessment
Guralnick, Robert P; Hill, Andrew W; Lane, Meredith
2007-01-01
Biodiversity data are rapidly becoming available over the Internet in common formats that promote sharing and exchange. Currently, these data are somewhat problematic, primarily with regard to geographic and taxonomic accuracy, for use in ecological research, natural resources management and conservation decision-making. However, web-based georeferencing tools that utilize best practices and gazetteer databases can be employed to improve geographic data. Taxonomic data quality can be improved through web-enabled valid taxon names databases and services, as well as more efficient mechanisms to return systematic research results and taxonomic misidentification rates back to the biodiversity community. Both of these are under construction. A separate but related challenge will be developing web-based visualization and analysis tools for tracking biodiversity change. Our aim was to discuss how such tools, combined with data of enhanced quality, will help transform today's portals to raw biodiversity data into nexuses of collaborative creation and sharing of biodiversity knowledge. PMID:17594421
Development of a functional, internet-accessible department of surgery outcomes database.
Newcomb, William L; Lincourt, Amy E; Gersin, Keith; Kercher, Kent; Iannitti, David; Kuwada, Tim; Lyons, Cynthia; Sing, Ronald F; Hadzikadic, Mirsad; Heniford, B Todd; Rucho, Susan
2008-06-01
The need for surgical outcomes data is increasing due to pressure from insurance companies and patients, and the need for surgeons to keep their own "report card". Current data management systems are limited by an inability to stratify outcomes based on patients, surgeons, and differences in surgical technique. Surgeons, along with research and informatics personnel from an academic, hospital-based Department of Surgery and a state university's Department of Information Technology, formed a partnership to develop a dynamic, internet-based clinical data warehouse. A five-component model was used: data dictionary development, web application creation, participating center education and management, statistics applications, and data interpretation. A data dictionary was developed from a list of data elements to address needs of research, quality assurance, industry, and centers of excellence. A user-friendly web interface was developed with menu-driven check boxes, multiple electronic data entry points, direct downloads from hospital billing information, and web-based patient portals. Data were collected on a Health Insurance Portability and Accountability Act-compliant server with a secure firewall. Protected health information was de-identified. Data management strategies included automated auditing, on-site training, a trouble-shooting hotline, and Institutional Review Board oversight. Real-time, daily, monthly, and quarterly data reports were generated. Fifty-eight publications and 109 abstracts have been generated from the database during its development and implementation. Seven national academic departments now use the database to track patient outcomes. The development of a robust surgical outcomes database requires a combination of clinical, informatics, and research expertise. Benefits of surgeon involvement in outcomes research include: tracking individual performance, patient safety, surgical research, legal defense, and the ability to provide accurate information to patients and payers.
Managing Research Output for Knowledge Creation in South-South Nigerian Universities
ERIC Educational Resources Information Center
Akuegwu, B. A.; Edet, A. O.; Uchendu, C. C.
2012-01-01
This ex-post facto designed study investigated the extent of Deans and Heads of Departments' effectiveness in managing research output for knowledge creation in South-South Nigerian universities. One research question and one hypothesis were drawn along the four dimensions of knowledge creation namely: socialization, combination, externalization…
Chatonnet, A; Hotelier, T; Cousin, X
1999-05-14
Cholinesterases are targets for organophosphorus compounds which are used as insecticides, chemical warfare agents and drugs for the treatment of diseases such as glaucoma or parasitic infections. The widespread use of these chemicals explains the growth of this area of research and the ever-increasing number of sequences, structures, and biochemical data available. Future advances will depend upon effective management of existing information as well as upon creation of new knowledge. The goal of the ESTHER database is to facilitate retrieval and comparison of data about the structure and function of proteins presenting the alpha/beta hydrolase fold. Protein engineering and in vitro production of enzymes allow direct comparison of biochemical parameters. Kinetic parameters of enzymatic reactions are now included in the database. These parameters can be searched and compared with a table construction tool. ESTHER can be reached through the Internet (http://www.ensam.inra.fr/cholinesterase). The full database or the specialised X-window client-server system can be downloaded from our ftp server (ftp://ftp.toulouse.inra.fr./pub/esther). Forms can be used to send updates or corrections directly from the web.
Knowlton, Michelle N; Li, Tongbin; Ren, Yongliang; Bill, Brent R; Ellis, Lynda Bm; Ekker, Stephen C
2008-01-07
The zebrafish is a powerful model vertebrate amenable to high-throughput in vivo genetic analyses. Examples include reverse genetic screens using morpholino knockdown, expression-based screening using enhancer trapping and forward genetic screening using transposon insertional mutagenesis. We have created a database to facilitate web-based distribution of data from such genetic studies. The MOrpholino DataBase is a MySQL relational database with an online PHP interface. Multiple quality control levels allow differential access to data in raw and finished formats. MODBv1 includes sequence information relating to almost 800 morpholinos and their targets, and phenotypic data regarding the dose effect of each morpholino (mortality, toxicity and defects). To improve the searchability of this database, we have incorporated a fixed-vocabulary defect ontology that allows for the organization of morpholino effects based on the anatomical structure affected and the defect produced. This also allows comparison between species utilizing Phenotypic Attribute Trait Ontology (PATO) designated terminology. MODB is also cross-linked with ZFIN, allowing full searches between the two databases. MODB offers users the ability to retrieve morpholino data by sequence of morpholino or target, name of target, anatomical structure affected and defect produced. MODB data can be used for functional genomic analysis of morpholino design to maximize efficacy and minimize toxicity. MODB also serves as a template for future sequence-based functional genetic screen databases, and it is currently being used as a model for the creation of a mutagenic insertional transposon database.
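To make the relational design concrete, the sketch below shows how dose-effect records might hang off a morpholino table. It is a minimal illustration in Python/sqlite3 with invented table and column names and a made-up sequence, not MODB's actual MySQL schema.

    import sqlite3

    # Hypothetical miniature of a morpholino database; MODB itself is
    # MySQL with a PHP front end, and its real schema is not shown here.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE morpholino (
        mo_id    INTEGER PRIMARY KEY,
        name     TEXT NOT NULL,
        sequence TEXT NOT NULL,   -- antisense oligo sequence (made up below)
        target   TEXT NOT NULL    -- target gene name
    );
    CREATE TABLE dose_effect (
        mo_id     INTEGER REFERENCES morpholino(mo_id),
        dose_ng   REAL,
        mortality REAL,           -- fraction of injected embryos
        defect    TEXT            -- fixed-vocabulary ontology term
    );
    """)
    conn.execute("INSERT INTO morpholino VALUES "
                 "(1, 'MO1-geneX', 'ACGTACGTACGTACGTACGTACGTA', 'geneX')")
    conn.execute("INSERT INTO dose_effect VALUES (1, 4.5, 0.10, 'notochord: absent')")
    # Retrieve dose-effect records by target name, one of the query routes
    # the abstract lists (sequence, target name, structure, defect).
    for row in conn.execute("""
        SELECT m.name, d.dose_ng, d.mortality, d.defect
        FROM morpholino m JOIN dose_effect d USING (mo_id)
        WHERE m.target = ?""", ("geneX",)):
        print(row)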
Technology and Microcomputers for an Information Centre/Special Library.
ERIC Educational Resources Information Center
Daehn, Ralph M.
1984-01-01
Discusses use of microcomputer hardware and software, telecommunications methods, and advanced library methods to create a specialized information center's database of literature relating to farm machinery and food processing. Systems and services (electronic messaging, serials control, database creation, cataloging, collections, circulation,…
Su, Xiaoquan; Xu, Jian; Ning, Kang
2012-10-01
Scientists have long been intrigued by the challenge of effectively comparing different microbial communities (also referred to as 'metagenomic samples' here) on a large scale: given a set of unknown samples, find similar metagenomic samples in a large repository and examine how similar these samples are. With the metagenomic samples accumulated to date, it is possible to build a database of metagenomic samples of interest; any metagenomic sample could then be searched against this database to find the most similar sample(s). However, on the one hand, current databases with large numbers of metagenomic samples mostly serve as data repositories that offer few functionalities for analysis; on the other hand, methods for measuring the similarity of metagenomic data work well only for small sets of samples compared pairwise. It is not yet clear how to efficiently search for metagenomic samples against a large metagenomic database. In this study, we propose a novel method, Meta-Storms, that can systematically and efficiently organize and search metagenomic data. It includes the following components: (i) creation of a database of metagenomic samples based on their taxonomic annotations, (ii) efficient indexing of samples in the database based on a hierarchical taxonomy indexing strategy, (iii) searching for a metagenomic sample against the database with a fast scoring function based on quantitative phylogeny, and (iv) database management through index export, index import, data insertion, data deletion and database merging. We have collected more than 1300 metagenomic datasets from the public domain and in-house facilities and tested the Meta-Storms method on them. Our experimental results show that Meta-Storms is capable of database creation and effective searching for a large number of metagenomic samples, and that it achieves accuracy similar to that of the current popular significance testing-based methods. Meta-Storms can serve as a suitable database management and search system to quickly identify similar metagenomic samples from a large pool of samples. Contact: ningkang@qibebt.ac.cn. Supplementary data are available at Bioinformatics online.
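The scoring idea, comparing samples through their taxonomic annotations rather than raw reads, can be sketched in a few lines. This toy Python version aggregates abundances up a taxonomy tree and scores shared nodes, weighting deeper matches more heavily; it only illustrates the flavor of such a scoring function and is not the published Meta-Storms algorithm.

    from collections import defaultdict

    def expand(sample):
        """Aggregate leaf abundances up the taxonomy: each annotation is a
        path such as ('Bacteria', 'Firmicutes', 'Clostridia')."""
        node_abundance = defaultdict(float)
        for path, abundance in sample.items():
            for depth in range(1, len(path) + 1):
                node_abundance[path[:depth]] += abundance
        return node_abundance

    def similarity(a, b):
        """Shared abundance summed over all taxonomy nodes; deeper (more
        specific) shared nodes contribute more to the score."""
        ea, eb = expand(a), expand(b)
        score = sum(min(ea[n], eb[n]) * len(n) for n in ea if n in eb)
        norm = max(sum(v * len(n) for n, v in e.items()) for e in (ea, eb))
        return score / norm if norm else 0.0

    s1 = {('Bacteria', 'Firmicutes', 'Clostridia'): 0.6,
          ('Bacteria', 'Bacteroidetes'): 0.4}
    s2 = {('Bacteria', 'Firmicutes', 'Bacilli'): 0.5,
          ('Bacteria', 'Bacteroidetes'): 0.5}
    print(similarity(s1, s2))   # ~0.58: siblings share the genus-level nodes

A database search then reduces to computing this score between the query and each indexed sample and returning the top-ranked hits.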
Launching the Greek forensic DNA database. The legal framework and arising ethical issues.
Voultsos, Polychronis; Njau, Samuel; Tairis, Nikolaos; Psaroulis, Dimitrios; Kovatsi, Leda
2011-11-01
Since the creation of the first national DNA database in Europe in 1995, many European countries have enacted laws to initiate and regulate their own databases. The Greek government enacted a law in 2008 by which the National DNA Database of Greece was founded and regulated. Under this law, only DNA profiles from convicted criminals were recorded. Nevertheless, a year later, in 2009, the law was amended to permit the creation of an expanded database including innocent people and children. Unfortunately, the new law is vague in many respects and does not respect the principle of proportionality; in our opinion, it will therefore soon need to be re-amended. Furthermore, prior to enacting the new law, there was no public debate to clarify what system would best suit Greece and what citizens would be willing to accept. We present the current legal framework in Greece, highlight issues that need clarification, and discuss ethical issues that may arise. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
41 CFR 102-193.10 - What are the goals of the Federal Records Management Program?
Code of Federal Regulations, 2014 CFR
2014-01-01
... ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102-193.10 What are the goals of the... maintenance of management controls that prevent the creation of unnecessary records and promote effective and... creation, maintenance, and use. (e) Judicious preservation and disposal of records. (f) Direction of...
Fragger: a protein fragment picker for structural queries.
Berenger, Francois; Simoncini, David; Voet, Arnout; Shrestha, Rojan; Zhang, Kam Y J
2017-01-01
Protein modeling and design activities often require querying the Protein Data Bank (PDB) with a structural fragment, possibly containing gaps. For some applications, it is preferable to work on a specific subset of the PDB or with unpublished structures. These requirements, along with specific user needs, motivated the creation of new software to manage and query 3D protein fragments. Fragger is a protein fragment picker that allows protein fragment databases to be created and queried. All fragment lengths are supported, and any set of PDB files can be used to create a database. Fragger can efficiently search a fragment database with a query fragment and a distance threshold; matching fragments are ranked by distance to the query. The query fragment can have structural gaps, and the allowed amino acid sequences matching a query can be constrained via a regular expression of one-letter amino acid codes. Fragger also incorporates a tool to compute the backbone RMSD of one versus many fragments in high throughput. Fragger should be useful for protein design, loop grafting and related structural bioinformatics tasks.
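A toy version of the two query constraints just described, sequence filtering by a one-letter-code regular expression and ranking by backbone RMSD under a distance threshold, is sketched below. It assumes the fragments are already superposed and uses invented data; Fragger's actual implementation is not described in the abstract.

    import math
    import re

    def rmsd(a, b):
        """Root-mean-square deviation of two equal-length coordinate lists.
        Assumes fragments are already superposed; Fragger's actual distance
        measure may differ."""
        assert len(a) == len(b)
        return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                             for (ax, ay, az), (bx, by, bz) in zip(a, b)) / len(a))

    def search(db, query_coords, seq_pattern, threshold):
        """Return (name, distance) pairs under the threshold, ranked by
        distance, keeping only fragments whose sequence matches the regex."""
        pat = re.compile(seq_pattern)
        hits = [(name, rmsd(coords, query_coords))
                for name, seq, coords in db if pat.fullmatch(seq)]
        return sorted((h for h in hits if h[1] <= threshold), key=lambda h: h[1])

    # Hypothetical three-residue fragments (one CA atom per residue).
    db = [("frag1", "GAV", [(0, 0, 0), (3.8, 0, 0), (7.6, 0, 0)]),
          ("frag2", "GPV", [(0, 0, 0), (3.8, 0.5, 0), (7.4, 1.0, 0)])]
    query = [(0, 0, 0), (3.8, 0.2, 0), (7.5, 0.4, 0)]
    print(search(db, query, "G[AP]V", threshold=1.0))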
Iavindrasana, Jimison; Depeursinge, Adrien; Ruch, Patrick; Spahni, Stéphane; Geissbuhler, Antoine; Müller, Henning
2007-01-01
The diagnostic and therapeutic processes, as well as the development of new treatments, are hindered by the fragmentation of the information that underlies them. In a multi-institutional research study database, the clinical information system (CIS) contains the primary data input. A substantial share of the budget of large-scale clinical studies is often spent on data creation and maintenance. The objective of this work is to design a decentralized, scalable, reusable database architecture with lower maintenance costs for managing and integrating the distributed heterogeneous data required as the basis for a large-scale research project. Technical and legal aspects are taken into account based on various use case scenarios. The architecture contains four layers: data storage and access decentralized at the production source, a connector acting as a proxy between the CIS and the external world, an information mediator serving as the data access point, and the client side. The proposed design will be implemented in six clinical centers participating in the @neurIST project as part of a larger system for data integration and reuse in aneurysm treatment.
Clinical records anonymisation and text extraction (CRATE): an open-source software system.
Cardinal, Rudolf N
2017-04-26
Electronic medical records contain information of value for research, but contain identifiable and often highly sensitive confidential information. Patient-identifiable information cannot in general be shared outside clinical care teams without explicit consent, but anonymisation/de-identification allows research uses of clinical data without explicit consent. This article presents CRATE (Clinical Records Anonymisation and Text Extraction), an open-source software system with separable functions: (1) it anonymises or de-identifies arbitrary relational databases, with sensitivity and precision similar to previous comparable systems; (2) it uses public secure cryptographic methods to map patient identifiers to research identifiers (pseudonyms); (3) it connects relational databases to external tools for natural language processing; (4) it provides a web front end for research and administrative functions; and (5) it supports a specific model through which patients may consent to be contacted about research. Creation and management of a research database from sensitive clinical records with secure pseudonym generation, full-text indexing, and a consent-to-contact process is possible and practical using entirely free and open-source software.
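The identifier-mapping function (2) can be illustrated with a keyed hash: a secret key turns a patient identifier into a stable, practically irreversible pseudonym. This is a minimal sketch of that general idea, with an invented key and identifier format; it is not CRATE's actual code.

    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-long-random-key"  # hypothetical; keep out of source control

    def research_id(patient_id: str) -> str:
        """Map a patient identifier to a stable pseudonym. Without the key,
        the mapping cannot feasibly be reversed or reproduced."""
        digest = hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256)
        return "RID_" + digest.hexdigest()[:16]

    print(research_id("PATIENT-1234567890"))
    print(research_id("PATIENT-1234567890"))  # deterministic: same input, same pseudonym

Determinism matters here: the same patient must always map to the same research identifier so that records can be linked within the research database without exposing the original identifier.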
Genome-wide association as a means to understanding the mammary gland
USDA-ARS?s Scientific Manuscript database
Next-generation sequencing and related technologies have facilitated the creation of enormous public databases that catalogue genomic variation. These databases have facilitated a variety of approaches to discover new genes that regulate normal biology as well as disease. Genome wide association (...
Generation and validation of a universal perinatal database and biospecimen repository: PeriBank.
Antony, K M; Hemarajata, P; Chen, J; Morris, J; Cook, C; Masalas, D; Gedminas, M; Brown, A; Versalovic, J; Aagaard, K
2016-11-01
There is a dearth of biospecimen repositories available to perinatal researchers. In order to address this need, here we describe the methodology used to establish such a resource. With the collaboration of MedSci.net, we generated an online perinatal database with 847 fields of clinical information. Simultaneously, we established a biospecimen repository of the same clinical participants. The demographic and clinical outcomes data are described for the first 10 000 participants enrolled. The demographic characteristics are consistent with the demographics of the delivery hospitals. Quality analysis of the biospecimens reveals variation in very few analytes. Furthermore, since the creation of PeriBank, we have demonstrated validity of the database and tissue integrity of the biospecimen repository. Here we establish that the creation of a universal perinatal database and biospecimen collection is not only possible, but allows for the performance of state-of-the-science translational perinatal research and is a potentially valuable resource to academic perinatal researchers.
Modernized Techniques for Dealing with Quality Data and Derived Products
NASA Astrophysics Data System (ADS)
Neiswender, C.; Miller, S. P.; Clark, D.
2008-12-01
"I just want a picture of the ocean floor in this area" is expressed all too often by researchers, educators, and students in the marine geosciences. As more sophisticated systems are developed to handle data collection and processing, the demand for quality data and standardized products continues to grow. Data management is an invisible bridge between science and researchers/educators. The SIOExplorer digital library presents more than 50 years of ocean-going research. Prior to publication, all data are checked for quality using standardized criteria developed for each data stream. Despite the evolution of data formats and processing systems, SIOExplorer continues to present derived products in well-established formats. Standardized products are published for each cruise, and include a cruise report, MGD77 merged data, a multi-beam flipbook, and underway profiles. Creation of these products is made possible by processing scripts, which continue to change with ever-evolving data formats. We continue to explore the potential of database-enabled creation of standardized products, such as the metadata-rich MGD77 header file. Database-enabled, automated processing produces standards-compliant metadata for each data and derived product. Metadata facilitate discovery and interpretation of published products; this descriptive information is stored both in an ASCII file and in a searchable digital library database. SIOExplorer's underlying technology allows focused search and retrieval of data and products. For example, users can initiate a search of only multi-beam data, which includes data-specific parameters. This customization is made possible with a synthesis of database, XML, and PHP technology. The combination of standardized products and digital library technology puts quality data and derived products in the hands of scientists. Interoperable systems enable distribution of these published resources using technology such as web services. By developing modernized strategies for dealing with data, Scripps Institution of Oceanography is able to produce and distribute well-formed, quality-tested derived products, which aid research, understanding, and education.
Creating a standardized watersheds database for the Lower Rio Grande/Río Bravo, Texas
Brown, J.R.; Ulery, Randy L.; Parcher, Jean W.
2000-01-01
This report describes the creation of a large-scale watershed database for the lower Rio Grande/Río Bravo Basin in Texas. The watershed database includes watersheds delineated to all 1:24,000-scale mapped stream confluences and other hydrologically significant points, selected watershed characteristics, and hydrologic derivative datasets. Computer technology allows generation of preliminary watershed boundaries in a fraction of the time needed for manual methods. This automated process reduces development time and results in quality improvements in watershed boundaries and characteristics. These data can then be compiled in a permanent database, eliminating the time-consuming step of data creation at the beginning of a project and providing a stable base dataset that can give users greater confidence when further subdividing watersheds. A standardized dataset of watershed characteristics is a valuable contribution to the understanding and management of natural resources. Vertical integration of the input datasets used to automatically generate watershed boundaries is crucial to the success of such an effort. The optimum situation would be to use the digital orthophoto quadrangles as the source of all the input datasets. While the hydrographic data from the digital line graphs can be revised to match the digital orthophoto quadrangles, hypsography data cannot be revised to match the digital orthophoto quadrangles. Revised hydrography from the digital orthophoto quadrangle should be used to create an updated digital elevation model that incorporates the stream channels as revised from the digital orthophoto quadrangle. Computer-generated, standardized watersheds that are vertically integrated with existing digital line graph hydrographic data will continue to be difficult to create until revisions can be made to existing source datasets. Until such time, manual editing will be necessary to make adjustments for man-made features and changes in the natural landscape that are not reflected in the digital elevation model data.
41 CFR 102-193.5 - What does this part cover?
Code of Federal Regulations, 2012 CFR
2012-01-01
... Regulations System (Continued) FEDERAL MANAGEMENT REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE... records management for the creation, maintenance and use of Federal agencies' records. The National...
Oyster reef restoration in the northern Gulf of Mexico: extent, methods and outcomes
LaPeyre, Megan K.; Furlong, Jessica N.; Brown, Laura A.; Piazza, Bryan P.; Brown, Ken
2014-01-01
Shellfish reef restoration to support ecological services has become more common in recent decades, driven by increasing awareness of the functional decline of shellfish systems. Efforts to maximize restoration benefits and increase the efficiency of shellfish restoration activities would benefit greatly from understanding and measuring system responses to management activities. This project (1) compiles a database of northern Gulf of Mexico inshore artificial oyster reefs created for restoration purposes, and (2) quantitatively assesses a subset of reefs to determine project outcomes. We documented 259 artificial inshore reefs created for ecological restoration. Information on reef material, reef design and monitoring was located for 94, 43 and 20% of the reefs identified, respectively. To quantify restoration success, we used diver surveys to sample oyster density and substrate volume on 11 created reefs across the coast (7 built with rock; 4 with shell), paired with 7 historic reefs. Reefs were defined as fully successful if live oysters were present, and partially successful if hard substrate remained. Of these created reefs, 73% were fully successful, while 82% were partially successful. These data highlight that critical information related to reef design, cost, and success remains difficult to find and is generally inaccessible or lost, ultimately hindering efforts to maximize restoration success rates. Maintenance of reef creation data, development of standard reef performance measures, and inclusion of material and reef design testing within reef creation projects would be highly beneficial in implementing adaptive management. Adaptive management protocols seek specifically to maximize short- and long-term restoration success, but are critically dependent on tracking and measuring system responses to management activities.
A Support Database System for Integrated System Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John; Figueroa, Jorge F.; Turowski, Mark; Morris, John
2007-01-01
The development, deployment, operation and maintenance of Integrated Systems Health Management (ISHM) applications require the storage and processing of tremendous amounts of low-level data. This data must be shared in a secure and cost-effective manner between developers, and processed within several heterogeneous architectures. Modern database technology allows this data to be organized efficiently, while ensuring the integrity and security of the data. The extensibility and interoperability of the current database technologies also allows for the creation of an associated support database system. A support database system provides additional capabilities by building applications on top of the database structure. These applications can then be used to support the various technologies in an ISHM architecture. This presentation and paper propose a detailed structure and application description for a support database system, called the Health Assessment Database System (HADS). The HADS provides a shared context for organizing and distributing data as well as a definition of the applications that provide the required data-driven support to ISHM. This approach provides another powerful tool for ISHM developers, while also enabling novel functionality. This functionality includes: automated firmware updating and deployment, algorithm development assistance and electronic datasheet generation. The architecture for the HADS has been developed as part of the ISHM toolset at Stennis Space Center for rocket engine testing. A detailed implementation has begun for the Methane Thruster Testbed Project (MTTP) in order to assist in developing health assessment and anomaly detection algorithms for ISHM. The structure of this implementation is shown in Figure 1. The database structure consists of three primary components: the system hierarchy model, the historical data archive and the firmware codebase. The system hierarchy model replicates the physical relationships between system elements to provide the logical context for the database. The historical data archive provides a common repository for sensor data that can be shared between developers and applications. The firmware codebase is used by the developer to organize the intelligent element firmware into atomic units which can be assembled into complete firmware for specific elements.
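The three components named at the end of the abstract suggest a simple relational layout. The sketch below is hypothetical (invented tables and sample rows), intended only to show how a self-referencing hierarchy can give archived sensor data its physical context.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- System hierarchy model: self-referencing table mirroring the
    -- physical relationships between system elements.
    CREATE TABLE element (
        element_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        parent_id  INTEGER REFERENCES element(element_id)
    );
    -- Historical data archive: shared repository of sensor readings.
    CREATE TABLE sensor_data (
        element_id INTEGER REFERENCES element(element_id),
        timestamp  TEXT,
        value      REAL
    );
    -- Firmware codebase: atomic units assembled into element firmware.
    CREATE TABLE firmware_unit (
        unit_id    INTEGER PRIMARY KEY,
        element_id INTEGER REFERENCES element(element_id),
        version    TEXT,
        payload    BLOB
    );
    """)
    conn.execute("INSERT INTO element VALUES (1, 'test stand', NULL)")
    conn.execute("INSERT INTO element VALUES (2, 'feed line', 1)")
    conn.execute("INSERT INTO sensor_data VALUES (2, '2007-01-01T00:00:00', 101.3)")
    # Walk the hierarchy to give each reading its physical context.
    for row in conn.execute("""
        SELECT p.name, c.name, d.timestamp, d.value
        FROM sensor_data d
        JOIN element c ON c.element_id = d.element_id
        LEFT JOIN element p ON p.element_id = c.parent_id"""):
        print(row)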
Kaduk, James A.
1996-01-01
The crystallographic databases are powerful and cost-effective tools for solving materials identification problems, both individually and in combination. Examples of the conventional and unconventional use of the databases in solving practical problems involving organic, coordination, and inorganic compounds are provided. The creation and use of fully-relational versions of the Powder Diffraction File and NIST Crystal Data are described. PMID:27805165
A Curriculum of Value Creation and Management in Engineering
ERIC Educational Resources Information Center
Yannou, Bernard; Bigand, Michel
2004-01-01
As teachers and researchers belonging to two sister French engineering schools, we are convinced that the processes of value creation and management are essential in today's training of industrial engineers and project managers. We believe that such processes may be embedded in a three-part curriculum composed of value management and innovation…
41 CFR 105-53.110 - Creation and authority.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Creation and authority. 105-53.110 Section 105-53.110 Public Contracts and Property Management Federal Property Management Regulations System (Continued) GENERAL SERVICES ADMINISTRATION 53-STATEMENT OF ORGANIZATION AND FUNCTIONS...
Horban', A Ie
2013-09-01
The implementation of state policy on technology transfer in the medical sector, pursuant to the law of Ukraine of 02.10.2012 No 5407-VI "On Amendments to the law of Ukraine 'On state regulation of activity in the field of technology transfers'", is considered, namely ensuring the formation of a branch database of technologies and intellectual property rights owned by scientific institutions, organizations, higher medical education institutions and enterprises of the healthcare sphere of Ukraine and created at budget expense. An analysis of international and domestic experience in processing information about intellectual property rights, and of systems supporting the transfer of new technologies, is presented. The main conceptual principles for the creation of this branch technology transfer database and a branch technology transfer network are defined.
Distributed cyberinfrastructure tools for automated data processing of structural monitoring data
NASA Astrophysics Data System (ADS)
Zhang, Yilan; Kurata, Masahiro; Lynch, Jerome P.; van der Linden, Gwendolyn; Sederat, Hassan; Prakash, Atul
2012-04-01
The emergence of cost-effective sensing technologies has enabled the use of dense arrays of sensors to monitor the behavior and condition of large-scale bridges. The continuous operation of dense networks of sensors presents a number of new challenges, including how to manage the massive amounts of data created by such a system. This paper reports on progress in the creation of cyberinfrastructure tools that hierarchically control networks of wireless sensors deployed on a long-span bridge. The internet-enabled cyberinfrastructure is centrally managed by a powerful database that controls the flow of data through the entire monitoring system architecture. A client-server model built upon the database provides both data providers and system end-users with secured access to various levels of information about a bridge. In the system, information on bridge behavior (e.g., acceleration, strain, displacement) and environmental conditions (e.g., wind speed, wind direction, temperature, humidity) is uploaded to the database from sensor networks installed on the bridge. Data interrogation services then interface with the database via client APIs to autonomously process data. The current research effort focuses on an assessment of the scalability and long-term robustness of the proposed cyberinfrastructure framework, which has been implemented along with a permanent wireless monitoring system on the New Carquinez (Alfred Zampa Memorial) Suspension Bridge in Vallejo, CA. Many data interrogation tools are under development using sensor data and bridge metadata (e.g., geometric details, material properties); sample data interrogation clients include those for the detection of faulty sensors and automated modal parameter extraction.
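One of the interrogation clients mentioned, faulty-sensor detection, can be reduced to a very simple rule: flag channels whose readings are flatlined or outside plausible limits. The sketch below is a toy illustration with invented channel names and thresholds, not the project's actual client.

    from statistics import pstdev

    def flag_faulty(channels, flatline_std=1e-6, limits=(-1e3, 1e3)):
        """Return channel names whose readings are constant (dead sensor)
        or outside plausible physical limits (miscalibrated/saturated)."""
        faulty = []
        lo, hi = limits
        for name, readings in channels.items():
            if pstdev(readings) < flatline_std:
                faulty.append((name, "flatline"))
            elif min(readings) < lo or max(readings) > hi:
                faulty.append((name, "out of range"))
        return faulty

    channels = {
        "accel_midspan_x": [0.01, -0.02, 0.03, 0.00],
        "accel_tower_y":   [0.5, 0.5, 0.5, 0.5],      # stuck value
        "strain_deck_3":   [12.0, 11.8, 1e6, 12.1],   # spike beyond limits
    }
    print(flag_faulty(channels))

In a deployed system such a check would run as a database-driven service over the most recent archive window, writing its verdicts back so downstream analyses can exclude suspect channels.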
Amann, Julia; Rubinelli, Sara
2017-10-10
The use of online communities to promote end-user involvement and co-creation in the product and service innovation process is well documented in the marketing and management literature. Whereas online communities are widely used for health care service provision and peer-to-peer support, little is known about how they could be integrated into the health care innovation process. The overall objective of this qualitative study was to explore community managers' views on and experiences with knowledge co-creation in online communities for people with disabilities. A descriptive qualitative research design was used. Data were collected through semi-structured interviews with nine community managers. To complement the interview data, additional information was retrieved from the communities in the form of structural information (number of registered users, number and names of topic areas covered by the forum) and administrative information (terms and conditions and privacy statements, forum rules). Data were analyzed using thematic analysis. Our results highlight two main aspects: peer-to-peer knowledge co-creation and types of collaboration with external actors. Although community managers strongly encouraged peer-to-peer knowledge co-creation, our findings indicated that these activities were not common practice in the communities under investigation. In fact, much of what related to co-creation, prototyping, and product development was still perceived to be directed by professionals and experts. Community managers described the role of their respective communities as informing this process rather than driving it. The role of community members as advisors to researchers, health care professionals, and businesses was discussed in the context of types of collaboration with external actors. According to the community managers, most of the external inquiries related to research projects of students or health care professionals in training, who often joined a community for the sole purpose of recruiting participants for their research. Despite this unilateral form of knowledge co-creation, community managers acknowledged the mere interest of these user groups as beneficial, as long as their interest was not purely financially motivated. Being able to contribute to advancing research, improving products, and informing the planning and design of health care services were described as some of the key motivations to engage with external stakeholders. This paper draws attention to the currently under-investigated role of online communities as platforms for collaboration and co-creation between patients, health care professionals, researchers, and businesses. It describes community managers' views on and experiences with knowledge co-creation and provides recommendations on how these activities can be leveraged to foster knowledge co-creation in health care. Engaging in knowledge co-creation with online health communities may ultimately help to inform the planning and design of products, services, and research activities that better meet the actual needs of those living with a disability. ©Julia Amann, Sara Rubinelli. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 10.10.2017.
Fokkema, Ivo F A C; den Dunnen, Johan T; Taschner, Peter E M
2005-08-01
The completion of the human genome project has initiated, as well as provided the basis for, the collection and study of all sequence variation between individuals. Direct access to up-to-date information on sequence variation is currently provided most efficiently through web-based, gene-centered, locus-specific databases (LSDBs). We have developed the Leiden Open (source) Variation Database (LOVD) software approaching the "LSDB-in-a-Box" idea for the easy creation and maintenance of a fully web-based gene sequence variation database. LOVD is platform-independent and uses PHP and MySQL open source software only. The basic gene-centered and modular design of the database follows the recommendations of the Human Genome Variation Society (HGVS) and focuses on the collection and display of DNA sequence variations. With minimal effort, the LOVD platform is extendable with clinical data. The open set-up should both facilitate and promote functional extension with scripts written by the community. The LOVD software is freely available from the Leiden Muscular Dystrophy pages (www.DMD.nl/LOVD/). To promote the use of LOVD, we currently offer curators the possibility to set up an LSDB on our Leiden server. (c) 2005 Wiley-Liss, Inc.
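LOVD itself is implemented in PHP with MySQL; purely to illustrate the gene-centered, variant-focused table design the abstract describes, here is a minimal analogue in Python/sqlite3 with invented column names.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    -- Gene-centered design: variants hang off a gene record, in the
    -- spirit of the HGVS recommendations the abstract cites.
    CREATE TABLE gene (
        symbol     TEXT PRIMARY KEY,   -- e.g. DMD
        chromosome TEXT
    );
    CREATE TABLE variant (
        variant_id     INTEGER PRIMARY KEY,
        symbol         TEXT REFERENCES gene(symbol),
        dna_change     TEXT,           -- HGVS-style description, e.g. c.123A>G
        classification TEXT            -- curator classification
    );
    """)
    conn.execute("INSERT INTO gene VALUES ('DMD', 'X')")
    conn.execute("INSERT INTO variant VALUES (1, 'DMD', 'c.123A>G', 'likely pathogenic')")
    for row in conn.execute(
            "SELECT dna_change, classification FROM variant WHERE symbol = ?",
            ("DMD",)):
        print(row)

The modular, per-gene layout is what makes the "LSDB-in-a-Box" idea work: a curator can stand up a new locus-specific database by adding a gene record rather than designing a new schema.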
Tchoua, Roselyne B; Qin, Jian; Audus, Debra J; Chard, Kyle; Foster, Ian T; de Pablo, Juan
2016-09-13
Structured databases of chemical and physical properties play a central role in the everyday research activities of scientists and engineers. In materials science, researchers and engineers turn to these databases to quickly query, compare, and aggregate various properties, thereby allowing for the development or application of new materials. The vast majority of these databases have been generated manually, through decades of labor-intensive harvesting of information from the literature; yet, while there are many examples of commonly used databases, a significant number of important properties remain locked within the tables, figures, and text of publications. The question addressed in our work is whether, and to what extent, the process of data collection can be automated. Students of the physical sciences and engineering are often confronted with the challenge of finding and applying property data from the literature, and a central aspect of their education is to develop the critical skills needed to identify such data and discern their meaning or validity. To address shortcomings associated with automated information extraction, while simultaneously preparing the next generation of scientists for their future endeavors, we developed a novel course-based approach in which students develop skills in polymer chemistry and physics and apply their knowledge by assisting with the semi-automated creation of a thermodynamic property database.
2014-01-01
Protein biomarkers offer major benefits for diagnosis and monitoring of disease processes. Recent advances in protein mass spectrometry make it feasible to use this very sensitive technology to detect and quantify proteins in blood. To explore the potential of blood biomarkers, we conducted a thorough review to evaluate the reliability of data in the literature and to determine the spectrum of proteins reported to exist in blood, with the goal of creating a Federated Database of Blood Proteins (FDBP). A unique feature of our approach is the use of a SQL database for all of the peptide data; the power of the SQL database combined with standard informatic algorithms such as BLAST and the statistical analysis system (SAS) allowed rapid annotation and analysis of the database without the need to create special programs to manage the data. Our mathematical analysis and review show that, in addition to the usual secreted proteins found in blood, there are many reports of intracellular proteins, and good agreement on transcription factors and DNA remodelling factors, in addition to cellular receptors and their signal transduction enzymes. Overall, we have catalogued about 12,130 proteins identified by at least one unique peptide, and of these 3858 have 3 or more peptide correlations. The FDBP, with annotations, should facilitate testing blood for specific disease biomarkers. PMID:24476026
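The "3 or more peptide correlations" cut-off maps directly onto a grouped SQL query. A minimal sketch with an invented table and sample rows (the FDBP's actual schema is not described in the abstract):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
    CREATE TABLE peptide_hit (
        protein_acc TEXT,   -- protein accession
        peptide_seq TEXT,   -- observed peptide sequence
        source_ref  TEXT    -- publication the observation came from
    )""")
    hits = [("P00001", "LVNEVTEFAK",   "study_A"),
            ("P00001", "QTALVELVK",    "study_A"),
            ("P00001", "AVMDDFAAFVEK", "study_B"),
            ("Q00002", "SSGSSYPSLLQK", "study_C")]
    conn.executemany("INSERT INTO peptide_hit VALUES (?, ?, ?)", hits)
    # Proteins supported by three or more distinct peptides, mirroring the
    # abstract's confidence cut-off.
    for row in conn.execute("""
        SELECT protein_acc, COUNT(DISTINCT peptide_seq) AS n_peptides
        FROM peptide_hit
        GROUP BY protein_acc
        HAVING COUNT(DISTINCT peptide_seq) >= 3"""):
        print(row)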
Alignment of high-throughput sequencing data inside in-memory databases.
Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias
2014-01-01
In the era of high-throughput DNA sequencing, high-performance analysis of DNA sequences is of great importance, yet computer-supported DNA analysis remains a computationally intensive, time-consuming task. In this paper we explore the potential of a new In-Memory database technology, SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures for exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance was compared between HANA and MySQL using the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which indicates substantial potential in the new In-Memory concepts and motivates further development of DNA analysis procedures in the future.
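For intuition about the exact-search subproblem, here is a minimal suffix-array-style exact matcher in Python. It is unrelated to the paper's stored procedures and to BWA's actual Burrows-Wheeler/FM-index machinery; it only shows how sorted suffixes turn exact read lookup into binary search.

    def build_suffix_array(ref):
        """All suffix start positions, sorted lexicographically.
        O(n^2 log n) toy construction; real aligners use BWT/FM-indexes."""
        return sorted(range(len(ref)), key=lambda i: ref[i:])

    def find_exact(ref, sa, read):
        """Every position where the read occurs exactly in ref, found by
        binary-searching the block of suffixes that start with the read."""
        def prefix(i):
            return ref[sa[i]:sa[i] + len(read)]
        lo, hi = 0, len(sa)
        while lo < hi:                     # lower bound
            mid = (lo + hi) // 2
            if prefix(mid) < read:
                lo = mid + 1
            else:
                hi = mid
        start, hi = lo, len(sa)
        while lo < hi:                     # upper bound
            mid = (lo + hi) // 2
            if prefix(mid) <= read:
                lo = mid + 1
            else:
                hi = mid
        return sorted(sa[start:lo])

    ref = "ACGTACGTGACG"
    sa = build_suffix_array(ref)
    print(find_exact(ref, sa, "ACG"))      # -> [0, 4, 9]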
Value Encounters - Modeling and Analyzing Co-creation of Value
NASA Astrophysics Data System (ADS)
Weigand, Hans
Recent marketing and management literature has introduced the concept of co-creation of value. Current value modeling approaches such as e3-value focus on the exchange of value rather than co-creation. In this paper, an extension to e3-value is proposed in the form of a “value encounter”. Value encounters are defined as interaction spaces where a group of actors meet and derive value by each one bringing in some of its own resources. They can be analyzed from multiple strategic perspectives, including knowledge management, social network management and operational management. Value encounter modeling can be instrumental in the context of service analysis and design.
Gamberini, R; Del Buono, D; Lolli, F; Rimini, B
2013-11-01
The definition and utilisation of engineering indexes in the field of Municipal Solid Waste Management (MSWM) is an issue of interest for technicians and scientists, and is widely discussed in the literature. Specifically, the availability of consolidated engineering indexes is useful when new waste collection services are designed, as well as when their performance is evaluated after a warm-up period. However, most published works in the field of MSWM conclude with an analysis of isolated case studies. Conversely, decision makers require tools for information collection and exchange in order to trace the trends of these engineering indexes across large experiments. In this paper, common engineering indexes are presented and their values analysed in virtuous Italian communities, with the aim of contributing to the creation of a useful database whose data could be used during experiments, by indicating examples of MSWM demand profiles and the costs required to manage them. Copyright © 2013 Elsevier Ltd. All rights reserved.
Pathobiology and management of laboratory rodents administered CDC category A agents.
He, Yongqun; Rush, Howard G; Liepman, Rachel S; Xiang, Zuoshuang; Colby, Lesley A
2007-02-01
The Centers for Disease Control and Prevention Category A infectious agents include Bacillus anthracis (anthrax), Clostridium botulinum toxin (botulism), Yersinia pestis (plague), variola major virus (smallpox), Francisella tularensis (tularemia), and the filoviruses and arenaviruses that induce viral hemorrhagic fevers. These agents are regarded as having the greatest potential for adverse impact on public health and are therefore a focus of renewed attention in infectious disease research. Rodent models are frequently used to study the pathobiology of these agents. Although much is known regarding naturally occurring infections in humans, less is documented about the sources of exposure and the potential risks of infection to researchers and animal care personnel after the administration of these hazardous substances to laboratory animals. Failure to appropriately manage the animals can result both in the creation of workplace hazards, if human exposures occur, and in disruption of the research, if unintended animal exposures occur. Here we review representative Category A agents, with a focus on comparing the biologic effects in naturally infected humans and rodent models and on considerations specific to the management of infected rodent subjects. The information reviewed for each agent has been curated manually and stored in an Internet-based database system called HazARD (Hazards in Animal Research Database, http://helab.bioinformatics.med.umich.edu/hazard/), designed to assist researchers, administrators, safety officials, Institutional Biosafety Committees, and veterinary personnel seeking information on the management of risks associated with animal studies involving hazardous substances.
Haddad, J; Kalbacher, E; Piccard, M; Aubry, S; Chaigneau, L; Pauchot, J
2017-02-01
A multidisciplinary meeting (RCP) dedicated to the treatment of sarcoma was established in Franche-Comté in 2010. The goals of the study are: (a) to evaluate the treatment of sarcomas against the existing literature; (b) to evaluate the influence of the multidisciplinary meeting on the management of sarcomas by hospitals at the regional level. This is a retrospective single-center study, from 2010 to 2015, of patients with soft tissue sarcoma of the limbs and trunk, drawn from the Netsarc database (national sarcoma network) and the communicating cancer record. A dedicated Cleanweb database was created. Forty-seven patients were included: ten sarcomas of the upper limbs, 26 of the lower limbs, and 11 of the trunk. Forty patients were operated on: ten outside the university hospital, 28 at the university hospital, and two in a coordinating center. Ninety percent of patients treated at the university hospital received care in accordance with the recommendations; none of the patients operated on outside the university hospital did. There has been an increase in the number of files sent by hospitals outside the university hospital for discussion in the multidisciplinary meeting before treatment. The creation of a dedicated sarcoma multidisciplinary meeting improves the medical management of these tumors and decreases inappropriate management, thanks to better education of regional physicians. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Mobile, Collaborative Situated Knowledge Creation for Urban Planning
Zurita, Gustavo; Baloian, Nelson
2012-01-01
Geo-collaboration is an emerging research area in computer science studying the way spatial, geographically referenced information and communication technologies can support collaborative activities. Scenarios in which information is associated with its physical location are often referred to as situated knowledge creation scenarios. To date there are few computer systems supporting knowledge creation that explicitly incorporate physical context as part of the knowledge being managed in mobile face-to-face scenarios. This work presents a collaborative software application supporting visually geo-referenced knowledge creation in mobile working scenarios while the users are interacting face-to-face. The system manages data and information associated with specific physical locations for knowledge creation processes in the field, such as urban planning, identification of specific physical locations, and territorial management, using Tablet-PCs and GPS to geo-reference data and information. It presents a model for developing mobile applications supporting situated knowledge creation in the field, introducing the requirements for such an application and the functionalities it should have in order to fulfill them. The paper also presents the results of utility and usability evaluations. PMID:22778639
Sagbo, S; Blochaou, F; Langlotz, F; Vangenot, C; Nolte, L-P; Zheng, G
2005-01-01
Computer-Assisted Orthopaedic Surgery (CAOS) has made much progress over the last 10 years. Navigation systems have been recognized as important tools that help surgeons, and various such systems have been developed. A disadvantage of these systems is that they use non-standard formalisms and techniques. As a result, there are no standard concepts for implant and tool management, nor standard data formats to store information for use in 3D planning and navigation. We addressed these limitations and developed a practical and generic solution that offers benefits for surgeons, implant manufacturers, and CAS application developers. We developed a virtual implant database containing geometrical as well as calibration information for orthopedic implants and instruments, with a focus on trauma. This database has been successfully tested for various applications in client/server mode. The implant information is not static, however, because manufacturers periodically revise their implants, resulting in the deletion of some implants and the introduction of new ones. Tracking these continuous changes and keeping CAS systems up to date is a tedious task if done manually; it adds costs to system development, and errors are inevitably generated due to the huge amount of information that has to be processed. To ease management of the implant life cycle, we developed a tool to assist end-users (surgeons, hospitals, CAS system providers, and implant manufacturers) in managing their implants. Our system can be used for pre-operative planning and intra-operative navigation, and also for any surgical simulation involving orthopedic implants. Currently, this tool allows addition of new implants, modification of existing ones, deletion of obsolete implants, export of a given implant, and creation of backups. Our implant management system has been successfully tested in the laboratory with very promising results. It makes it possible to fill the gap that currently exists between CAS systems and implant manufacturers, hospitals, and surgeons.
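The life-cycle operations listed (addition, modification, deletion, export, backup) map naturally onto a small data layer. The following is a hypothetical sketch of such a layer, not the authors' system; table and field names are invented.

    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
    CREATE TABLE implant (
        implant_id   INTEGER PRIMARY KEY,
        manufacturer TEXT,
        model        TEXT,
        geometry     TEXT,   -- serialized geometry payload
        calibration  TEXT,   -- serialized calibration payload
        obsolete     INTEGER DEFAULT 0
    )""")

    def add_implant(manufacturer, model, geometry, calibration):
        conn.execute("INSERT INTO implant (manufacturer, model, geometry, calibration) "
                     "VALUES (?, ?, ?, ?)", (manufacturer, model, geometry, calibration))

    def retire_implant(implant_id):
        # Mark obsolete rather than delete, so old navigation plans stay reproducible.
        conn.execute("UPDATE implant SET obsolete = 1 WHERE implant_id = ?", (implant_id,))

    def export_implant(implant_id):
        cur = conn.execute("SELECT * FROM implant WHERE implant_id = ?", (implant_id,))
        cols = [c[0] for c in cur.description]
        return json.dumps(dict(zip(cols, cur.fetchone())))

    add_implant("ExampleMed", "plate-4.5-narrow", "mesh-data", "calibration-data")
    print(export_implant(1))

Marking retired implants obsolete instead of deleting them is one way to track the manufacturer revisions the abstract describes without breaking previously saved plans.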
Web Audio/Video Streaming Tool
NASA Technical Reports Server (NTRS)
Guruvadoo, Eranna K.
2003-01-01
In order to promote a NASA-wide educational outreach program to educate and inform the public about space exploration, NASA, at Kennedy Space Center, is seeking efficient ways to add more content to the web by streaming audio/video files. This project proposes a high-level overview of a framework for the creation, management, and scheduling of audio/video assets over the web. To support short-term goals, a prototype of a web-based tool was designed and demonstrated to automate the process of streaming audio/video files. The tool provides web-enabled user interfaces to manage video assets, create publishable schedules of video assets for streaming, and schedule the streaming events. These operations are performed on user-defined and system-derived metadata of audio/video assets stored in a relational database, while the assets themselves reside in a separate repository. The prototype tool was designed using ColdFusion 5.0.
The Library Web Site: Collaborative Content Creation and Management
ERIC Educational Resources Information Center
Slater, Robert
2008-01-01
Oakland University's Kresge Library first launched its Web site in 1996. The initial design and subsequent contributions were originally managed by a single Webmaster. In 2002, the library restructured its Web content creation and management to a distributed, collaborative method with the goal of increasing the amount, accuracy, and timeliness of…
Mesocaval Shunt Creation for Jejunal Variceal Bleeding with Chronic Portal Vein Thrombosis.
Yoon, Ja Kyung; Kim, Man Deuk; Lee, Do Yun; Han, Seok Joo
2018-01-01
The creation of transjugular intrahepatic portosystemic shunt (TIPS) is a widely performed technique to relieve portal hypertension, and to manage recurrent variceal bleeding and refractory ascites in patients where medical and/or endoscopic treatments have failed. However, portosystemic shunt creation can be challenging in the presence of chronic portal vein occlusion. In this case report, we describe a minimally invasive endovascular mesocaval shunt creation with transsplenic approach for the management of recurrent variceal bleeding in a portal hypertension patient with intra- and extrahepatic portal vein occlusion. © Copyright: Yonsei University College of Medicine 2018.
Web Application Software for Ground Operations Planning Database (GOPDb) Management
NASA Technical Reports Server (NTRS)
Lanham, Clifton; Kallner, Shawn; Gernand, Jeffrey
2013-01-01
A Web application facilitates collaborative development of the ground operations planning document. This will reduce costs and development time for new programs by incorporating the data governance, access control, and revision tracking of the ground operations planning data. Ground Operations Planning requires the creation and maintenance of detailed timelines and documentation. The GOPDb Web application was created using state-of-the-art Web 2.0 technologies, and was deployed as SaaS (Software as a Service), with an emphasis on data governance and security needs. Application access is managed using two-factor authentication, with data write permissions tied to user roles and responsibilities. Multiple instances of the application can be deployed on a Web server to meet the robust needs for multiple, future programs with minimal additional cost. This innovation features high availability and scalability, with no additional software that needs to be bought or installed. For data governance and security (data quality, management, business process management, and risk management for data handling), the software uses NAMS. No local copy/cloning of data is permitted. Data change log/tracking is addressed, as well as collaboration, work flow, and process standardization. The software provides on-line documentation and detailed Web-based help. There are multiple ways that this software can be deployed on a Web server to meet ground operations planning needs for future programs. The software could be used to support commercial crew ground operations planning, as well as commercial payload/satellite ground operations planning. The application source code and database schema are owned by NASA.
Chaplin, Beth; Meloni, Seema; Eisen, Geoffrey; Jolayemi, Toyin; Banigbe, Bolanle; Adeola, Juliette; Wen, Craig; Reyes Nieva, Harry; Chang, Charlotte; Okonkwo, Prosper; Kanki, Phyllis
2015-01-01
The implementation of PEPFAR programs in resource-limited settings was accompanied by the need to document patient care on a scale unprecedented in environments where paper-based records were the norm. We describe the development of an electronic medical records system (EMRS) put in place at the beginning of a large HIV/AIDS care and treatment program in Nigeria. Databases were created to record laboratory results, medications prescribed and dispensed, and clinical assessments, using a relational database program. A collection of stand-alone files recorded different elements of patient care, linked together by utilities that aggregated data on national standard indicators, assessed patient care for quality improvement, tracked patients requiring follow-up, generated counts of ART regimens dispensed, and provided 'snapshots' of a patient's response to treatment. A secure server was used to store patient files for backup and transfer. By February 2012, when the program transitioned to local in-country management by APIN, the EMRS was in use in 33 hospitals across the country; 4,947,433 adult, pediatric and PMTCT records had been created and remained available for use in patient care. Ongoing training for data managers, along with an iterative process of implementing changes to the databases and forms based on user feedback, was needed. As the program scaled up and the volume of laboratory tests increased, results were produced in a digital format, wherever possible, that could be automatically transferred to the EMRS. Many larger clinics began to link some or all of the databases to local area networks, making them available to a larger group of staff members or providing the ability to enter information simultaneously where needed. The EMRS improved patient care, enabled efficient reporting to the Government of Nigeria and to U.S. funding agencies, and allowed program managers and staff to conduct quality control audits. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ghiorso, M. S.
2013-12-01
Internally consistent thermodynamic databases are critical resources that facilitate the calculation of heterogeneous phase equilibria and thereby support geochemical, petrological, and geodynamical modeling. These 'databases' are actually derived data/model systems that depend on a diverse suite of physical property measurements, calorimetric data, and experimental phase equilibrium brackets. In addition, such databases are calibrated with the adoption of various models for extrapolation of heat capacities and volumetric equations of state to elevated temperature and pressure conditions. Finally, these databases require specification of thermochemical models for the mixing properties of solid, liquid, and fluid solutions, which are often rooted in physical theory and, in turn, depend on additional experimental observations. The process of 'calibrating' a thermochemical database involves considerable effort and an extensive computational infrastructure. Because of these complexities, the community tends to rely on a small number of thermochemical databases generated by a few researchers; these databases often have limited longevity and are universally difficult to maintain. ThermoFit is a software framework and user interface whose aim is to provide a modeling environment that facilitates the creation, maintenance, and distribution of thermodynamic data/model collections. Underlying ThermoFit are data archives of fundamental physical property, calorimetric, crystallographic, and phase equilibrium constraints that provide the essential experimental information from which thermodynamic databases are traditionally calibrated. ThermoFit standardizes schema for accessing these data archives and provides web services for data mining these collections. Beyond simple data management and interoperability, ThermoFit provides a collection of visualization and software modeling tools that streamline the model/database generation process. Most notably, ThermoFit facilitates the rapid visualization of predicted model outcomes and permits the user to modify these outcomes using tactile- or mouse-based GUI interaction, permitting real-time updates that reflect users' choices, preferences, and priorities involving derived model results. This ability permits some resolution of the problem of correlated model parameters in the common situation where thermodynamic models must be calibrated from inadequate data resources. It also allows modeling constraints to be imposed using natural data and observations (i.e., petrologic or geochemical intuition). Once formulated, data/model collections are deployed by ThermoFit through the automated creation of web services, which users consume via Web, Excel, or desktop clients. ThermoFit is currently under active development and not yet generally available; a limited-capability prototype has been coded for Macintosh computers and used to construct thermochemical models for H2O-CO2 mixed-fluid saturation in silicate liquids. The longer-term goal is to release ThermoFit as a web portal application client with server-based cloud computations supporting the modeling environment.
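As one concrete example of the calibration step described above, the following sketch fits a Maier-Kelley heat-capacity expression Cp(T) = a + bT + c/T^2 to calorimetric data by linear least squares; the data values are synthetic and the model choice is an assumption for illustration, not ThermoFit's actual procedure.

```python
import numpy as np

# Synthetic calorimetric data (T in K, Cp in J/(mol K)) standing in for
# the archived measurements a real calibration would draw on.
T = np.array([300.0, 400.0, 500.0, 700.0, 900.0, 1100.0])
Cp = np.array([80.1, 86.0, 90.2, 95.9, 99.6, 102.3])

# Cp(T) = a + b*T + c/T**2 is linear in (a, b, c), so ordinary least
# squares suffices.
A = np.column_stack([np.ones_like(T), T, T**-2])
coeffs, *_ = np.linalg.lstsq(A, Cp, rcond=None)
a, b, c = coeffs
print(f"Cp(T) = {a:.2f} + {b:.4f} T + {c:.3e} / T^2")
```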
ERIC Educational Resources Information Center
Nemeth, Erik
2010-01-01
Discovery of academic literature through Web search engines challenges the traditional role of specialized research databases. Creation of literature outside academic presses and peer-reviewed publications expands the content for scholarly research within a particular field. The resulting body of literature raises the question of whether scholars…
Protocol for developing a Database of Zoonotic disease Research in India (DoZooRI).
Chatterjee, Pranab; Bhaumik, Soumyadeep; Chauhan, Abhimanyu Singh; Kakkar, Manish
2017-12-10
Zoonotic and emerging infectious diseases (EIDs) represent a public health threat that has been acknowledged only recently, although they have been on the rise for the past several decades. On average, one pathogen has emerged or re-emerged on a global scale every year since the Second World War. Low/middle-income countries such as India bear a significant burden of zoonotic diseases and EIDs. We propose that the creation of a database of published, peer-reviewed research will open up avenues for evidence-based policymaking for targeted prevention and control of zoonoses. A large-scale systematic mapping of the published peer-reviewed research conducted in India will be undertaken. All published research will be included in the database, without quality-based exclusion, to broaden the scope of included studies. Structured search strategies will be developed for priority zoonotic diseases (leptospirosis, rabies, anthrax, brucellosis, cysticercosis, salmonellosis, bovine tuberculosis, Japanese encephalitis and rickettsial infections), and multiple databases will be searched for studies conducted in India. The database will be managed and hosted on a cloud-based platform called Rayyan. Individual studies will be tagged based on key preidentified parameters (disease, study design, study type, location, randomisation status and interventions, host involvement and others, as applicable). The database will incorporate already published studies, obviating the need for additional ethical clearances. The database will be made available online, and in collaboration with multisectoral teams, domains of enquiry will be identified and subsequent research questions raised. The database will be queried for these, and the resulting evidence will be analysed and published in peer-reviewed journals. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
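A minimal sketch of the tagging scheme the protocol describes, with studies tagged by preidentified parameters and queried by tag; the records and field values are invented for illustration and do not come from the DoZooRI database.

```python
# Hypothetical tagged study records; keys mirror the protocol's parameters.
studies = [
    {"id": 1, "disease": "rabies", "design": "cross-sectional",
     "location": "Karnataka", "host": "canine"},
    {"id": 2, "disease": "brucellosis", "design": "cohort",
     "location": "Punjab", "host": "bovine"},
]

def query(records, **criteria):
    """Return records matching all supplied tag criteria."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

print(query(studies, disease="rabies"))
```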
NASA Astrophysics Data System (ADS)
Manukalo, V.
2012-12-01
Defining issue: River inundations are the most common and destructive natural hazards in Ukraine. Among non-structural flood management and protection measures, the creation of an Early Flood Warning System is extremely important for timely recognition of dangerous situations in flood-prone areas. Hydrometeorological information and forecasts are of core importance in this system. The primary factors affecting the reliability and lead time of forecasts include the accuracy, speed, and reliability with which real-time data are collected. The existing practice of separate monitoring and forecasting systems resulted in a need to reconsider the concept as an integrated monitoring and forecasting approach - from "sensors to database and forecasters". Result presentation: The project "Development of Flood Monitoring and Forecasting in the Ukrainian Part of the Dniester River Basin" is presented. The project is developed by the Ukrainian Hydrometeorological Service in conjunction with the Water Management Agency and the Energy Company "Ukrhydroenergo". The implementation of the project is funded by the Ukrainian Government and the World Bank. The author is the person responsible for coordinating the activity of the organizations involved in the project. The term of the project implementation is 2012-2014. The principal objectives of the project are: a) designing an integrated automatic hydrometeorological measurement network (including the use of remote sensing technologies); b) constructing a hydrometeorological GIS database coupled with electronic maps for flood risk assessment; c) constructing interfaces between the classic numerical database, GIS, satellite images, and radar data collection; d) providing real-time data dissemination from observation points to forecasting centers; e) developing hydrometeorological forecasting methods; f) providing flood hazard risk assessment at different temporal and spatial scales; g) providing automatic dissemination of current information, forecasts, and warnings to consumers. Besides scientific and technical issues, the implementation of these objectives requires the solution of a number of organizational issues. Thus, as a result of the increased complexity of the types of hydrometeorological data, and in order to develop forecasting methods, a reconsideration of the meteorological and hydrological measurement networks should be carried out. An "optimal density of measuring networks" is proposed, taking into account two principal terms: a) minimizing the uncertainty in characterizing the spatial distribution of hydrometeorological parameters; b) minimizing the total life cycle cost of creating and maintaining the measurement networks. Much attention will be given to training Ukrainian disaster management authorities from the Ministry of Emergencies and the Water Management Agency to identify flood hazard risk levels and to indicate the best protection measures on the basis of continuous monitoring and forecasting of the evolution of meteorological and hydrological conditions in the river basin.
Program for Generating Graphs and Charts
NASA Technical Reports Server (NTRS)
Ackerson, C. T.
1986-01-01
Office Automation Pilot (OAP) Graphics Database system offers IBM personal computer user assistance in producing wide variety of graphs and charts and convenient database system, called chart base, for creating and maintaining data associated with graphs and charts. Thirteen different graphics packages available. Access to graphics capabilities obtained in similar manner. User chooses creation, revision, or chartbase-maintenance options from initial menu, then enters or modifies data displayed on graphic chart. OAP graphics database system written in Microsoft PASCAL.
Knowledge Creation in Constructivist Learning
ERIC Educational Resources Information Center
Jaleel, Sajna; Verghis, Alie Molly
2015-01-01
In today's competitive global economy characterized by knowledge acquisition, the concept of knowledge management has become increasingly prevalent in academic and business practices. Knowledge creation is an important factor and remains a source of competitive advantage over knowledge management. Constructivism holds that learners learn actively…
Document creation, linking, and maintenance system
Claghorn, Ronald [Pasco, WA
2011-02-15
A document creation and citation system designed to maintain a database of reference documents. The content of a selected document may be automatically scanned and indexed by the system. Selected documents may also be manually indexed by a user prior to upload. The indexed documents may be uploaded and stored within a database for later use. The system allows a user to generate new documents by selecting content within the reference documents stored in the database and inserting the selected content into a new document. The system allows the user to customize and augment the content of the new document, and it generates citations to the selected content retrieved from the reference documents. The citations may be inserted into the new document in the appropriate location and format, as directed by the user. The new document may be uploaded into the database and included with the other reference documents. The system also maintains the database of reference documents so that when changes are made to a reference document, the authors of documents referencing the changed document are alerted to make appropriate changes to their own documents. The system also allows visual comparison of documents so that the user may see differences in their text.
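A minimal sketch of the citation-generation idea described in the abstract; the record fields and citation format are assumptions for illustration, not the patented system's actual design.

```python
# Hypothetical indexed reference documents; only the idea of generating a
# formatted citation from stored metadata comes from the abstract.
references = {
    "doc-42": {"author": "Claghorn, R.", "title": "Process Specification",
               "year": 2009, "section": "4.2"},
}

def cite(doc_id: str,
         fmt: str = "{author} ({year}), \"{title}\", sec. {section}") -> str:
    """Format a citation for the selected reference document."""
    return fmt.format(**references[doc_id])

print(cite("doc-42"))
```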
Rubinelli, Sara
2017-01-01
Background The use of online communities to promote end user involvement and co-creation in the product and service innovation process is well documented in the marketing and management literature. Whereas online communities are widely used for health care service provision and peer-to-peer support, only little is known about how they could be integrated into the health care innovation process. Objective The overall objective of this qualitative study was to explore community managers’ views on and experiences with knowledge co-creation in online communities for people with disabilities. Methods A descriptive qualitative research design was used. Data were collected through semi-structured interviews with nine community managers. To complement the interview data, additional information was retrieved from the communities in the form of structural information (number of registered users, number and names of topic areas covered by the forum) and administrative information (terms and conditions and privacy statements, forum rules). Data were analyzed using thematic analysis. Results Our results highlight two main aspects: peer-to-peer knowledge co-creation and types of collaboration with external actors. Although community managers strongly encouraged peer-to-peer knowledge co-creation, our findings indicated that these activities were not common practice in the communities under investigation. In fact, much of what related to co-creation, prototyping, and product development was still perceived to be directed by professionals and experts. Community managers described the role of their respective communities as informing this process rather than a driving force. The role of community members as advisors to researchers, health care professionals, and businesses was discussed in the context of types of collaboration with external actors. According to the community managers, most of the external inquiries related to research projects of students or health care professionals in training, who often joined a community for the sole purpose of recruiting participants for their research. Despite this unilateral form of knowledge co-creation, community managers acknowledged the mere interest of these user groups as beneficial, as long as their interest was not purely financially motivated. Being able to contribute to advancing research, improving products, and informing the planning and design of health care services were described as some of the key motivations to engage with external stakeholders. Conclusions This paper draws attention to the currently under-investigated role of online communities as platforms for collaboration and co-creation between patients, health care professionals, researchers, and businesses. It describes community managers’ views on and experiences with knowledge co-creation and provides recommendations on how these activities can be leveraged to foster knowledge co-creation in health care. Engaging in knowledge co-creation with online health communities may ultimately help to inform the planning and design of products, services, and research activities that better meet the actual needs of those living with a disability. PMID:29017993
The Framework of Knowledge Creation for Online Learning Environments
ERIC Educational Resources Information Center
Huang, Hsiu-Mei; Liaw, Shu-Sheng
2004-01-01
In today's competitive global economy characterized by knowledge acquisition, the concept of knowledge management has become increasingly prevalent in academic and business practices. Knowledge creation is an important factor and remains a source of competitive advantage over knowledge management. Information technology facilitates knowledge…
NASA Astrophysics Data System (ADS)
Gatto, Francesca; Katsanevakis, Stelios; Vandekerkhove, Jochen; Zenetos, Argyro; Cardoso, Ana Cristina
2013-06-01
Europe is severely affected by alien invasions, which impact biodiversity, ecosystem services, the economy, and human health. A large number of national, regional, and global online databases provide information on the distribution, pathways of introduction, and impacts of alien species. The sufficiency and efficiency of the current online information systems in assisting European policy on alien species was investigated by a comparative analysis of occurrence data across 43 online databases. Large differences among databases were found, which are partially explained by variations in their taxonomical, environmental, and geographical scopes, but also by variable efforts at continuous updating and by inconsistencies in the definition of "alien" or "invasive" species. No single database covered all European environments, countries, and taxonomic groups. In many European countries national databases do not exist, which greatly affects the quality of reported information. To be operational and useful to scientists, managers, and policy makers, online information systems need to be regularly updated through continuous monitoring at the country or regional level. We propose the creation of a network of online interoperable web services, through which information in distributed resources can be accessed, aggregated, and then used for reporting and further analysis at different geographical and political scales, as an efficient approach to increasing the accessibility of information. Harmonization, standardization, conformity with international standards for nomenclature, and agreement on common definitions of alien and invasive species are among the necessary prerequisites.
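As a rough sketch of the proposed aggregation across interoperable services, the snippet below merges and de-duplicates occurrence records from two hypothetical sources; real web services would return much richer records, so the layout here is purely illustrative.

```python
# Two hypothetical occurrence feeds, as interoperable services might return.
db_a = [{"species": "Dreissena polymorpha", "country": "DE"},
        {"species": "Caulerpa taxifolia", "country": "FR"}]
db_b = [{"species": "Dreissena polymorpha", "country": "DE"},  # duplicate
        {"species": "Procambarus clarkii", "country": "ES"}]

# Aggregate and de-duplicate on (species, country), as a reporting layer might.
merged = {(r["species"], r["country"]) for source in (db_a, db_b) for r in source}
for species, country in sorted(merged):
    print(f"{species}: {country}")
```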
Sailors, R. Matthew
1997-01-01
The Arden Syntax specification for sharable computerized medical knowledge bases has not been widely utilized in the medical informatics community because of a lack of tools for developing Arden Syntax knowledge bases (Medical Logic Modules). The MLM Builder is a Microsoft Windows-hosted CASE (Computer Aided Software Engineering) tool designed to aid in the development and maintenance of Arden Syntax Medical Logic Modules (MLMs). The MLM Builder consists of the MLM Writer (an MLM generation tool), OSCAR (an anagram of Object-oriented ARden Syntax Compiler), a test database, and the MLManager (an MLM management information system). Working together, these components form a self-contained, unified development environment for the creation, testing, and maintenance of Arden Syntax Medical Logic Modules.
[Big data, generalities and integration in radiotherapy].
Le Fèvre, C; Poty, L; Noël, G
2018-02-01
The many advances in data collection computing systems (data collection, databases, storage) and in diagnostic and therapeutic possibilities are responsible for an increase and a diversification of the available data. In the field of health, big data offers the capacity to accelerate discoveries and to optimize patient management by combining large volumes of data with the creation of therapeutic models. In radiotherapy, the development of big data is attractive because the data are very numerous and heterogeneous (demographics, radiomics, genomics, radiogenomics, etc.). The expectation is to predict the effectiveness and tolerance of radiation therapy. With these new concepts, still at a preliminary stage, it is possible to create a personalized medicine that is ever more secure and reliable. Copyright © 2017. Published by Elsevier SAS.
Classifying Australian PhD Theses: Linking Research and Library Practices
ERIC Educational Resources Information Center
Macauley, Peter; Evans, Terry; Pearson, Margot
2010-01-01
This article draws on the findings from, and the methods and approach used in the provision of a database of Australian PhD thesis records for the period 1987 to 2006, coded by Research Fields, Courses and Disciplines (RFCD) fields of study. Importantly, the project was not merely the creation of yet another database but something that constitutes…
The making of a pan-European organ transplant registry.
Smits, Jacqueline M; Niesing, Jan; Breidenbach, Thomas; Collett, Dave
2013-03-01
A European patient registry to track the outcomes of organ transplant recipients does not exist. As knowledge gleaned from large registries has already led to the creation of standards of care that gained widespread support from patients and healthcare providers, the European Union initiated a project to enable the creation of a European registry linking currently existing national databases. This report contains a description of all functional, technical, and legal prerequisites which, upon fulfillment, should allow the seamless sharing of national longitudinal data across temporal, geographical, and subspecialty boundaries. The crucial elements for creating a platform that can effortlessly link multiple databases while maintaining the integrity of the existing national databases were described during the project. These elements are: (i) use of a common dictionary, (ii) use of a common database and refined data-uploading technology, (iii) use of standard methodology to allow uniform, protocol-driven, and meaningful long-term follow-up analyses, (iv) use of a quality assurance mechanism to guarantee the completeness and accuracy of the data collected, and (v) establishment of a solid legal framework that allows for safe data exchange. © 2012 The Authors Transplant International © 2012 European Society for Organ Transplantation. Published by Blackwell Publishing Ltd.
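A minimal sketch of element (i), the common dictionary: national field codes are mapped onto shared terms before upload. All codes and field names below are invented for illustration.

```python
# Hypothetical mapping of national field codes to common-dictionary terms.
COMMON_DICTIONARY = {
    "NL:donor_typ": "donor_type",
    "UK:dnr_cat":   "donor_type",
    "DE:spender":   "donor_type",
}

def harmonize(record: dict, country: str) -> dict:
    """Rename national fields to common-dictionary terms before upload."""
    return {COMMON_DICTIONARY.get(f"{country}:{k}", k): v
            for k, v in record.items()}

print(harmonize({"dnr_cat": "DBD"}, "UK"))  # -> {'donor_type': 'DBD'}
```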
Terminological aspects of data elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.
1991-01-01
The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts, followed by the specification of other needed data element concept attributes. The attributes and values of a data element concept remain associated with it from its birth as a concept to the generic data element that serves as a template for final application. Terminology is, therefore, centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database design, and consequently it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered, and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schemata showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.
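A minimal sketch of a data element concept carrying the four terminological components named above; the attribute names and example values are illustrative assumptions.

```python
from dataclasses import dataclass

# A data element concept with the four terminological components the paper
# names: hierarchical type, functional dependencies, conceptual structure
# (here a simple parent link), and a definition statement.
@dataclass
class DataElementConcept:
    name: str                 # standardized term
    definition: str           # definition statement
    hierarchical_type: str    # kind of concept in the hierarchy
    parent: str = ""          # simple conceptual-structure link
    depends_on: tuple = ()    # functional dependencies

person_age = DataElementConcept(
    name="person_age",
    definition="Time elapsed since birth, in completed years.",
    hierarchical_type="quantitative property",
    parent="person",
    depends_on=("person_birth_date", "reference_date"),
)
print(person_age.name, "-", person_age.definition)
```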
wayGoo: a platform for geolocating and managing indoor and outdoor spaces
NASA Astrophysics Data System (ADS)
Thomopoulos, Stelios C. A.; Karafylli, Christina; Karafylli, Maria; Motos, Dionysis; Lampropoulos, Vassilis; Dimitros, Kostantinos; Margonis, Christos
2016-05-01
wayGoo is a platform for geolocating and managing indoor and outdoor spaces and content, with multidimensional indoor and outdoor navigation and guidance. Its main components are a Geographic Information System, a back-end server, front-end applications, and a web-based Content Management System (CMS). It constitutes a fully integrated 2D/3D space and content management system that creates a repository consisting of a database, content components, and administrative data. wayGoo can connect to any third-party database and event management data source. The platform is secure, as the data is only available through a RESTful web service using the HTTPS security protocol in conjunction with an API key used for authentication. To enhance the user experience, wayGoo makes the content available by extracting components out of the repository and constructing targeted applications. The wayGoo platform supports geo-referencing of indoor and outdoor information and the use of metadata. It also allows the use of existing information such as maps and databases. The platform enables planning through integration of content that is connected either spatially, temporally, or contextually, and provides immediate access to all spatial data through interfaces and interactive 2D and 3D representations. wayGoo constitutes a means to document and preserve assets through computerized techniques and, when combined with the wayGoo notification and alert system, enhances the protection of a space, its people, and its guests. It constitutes a strong marketing tool, providing staff and visitors with an immersive tool for navigation in indoor spaces and allowing users to organize their agenda and discover events through the wayGoo event scheduler and recommendation system. Furthermore, the wayGoo platform can be used in security applications and event management, e.g., CBRNE incidents, man-made and natural disasters, etc., to document and geolocate information and sensor data (offline and in real time) on one end, and to offer navigation capabilities in indoor and outdoor spaces on the other. Finally, the wayGoo platform can be used for the creation of immersive environments and experiences in conjunction with VR/AR (Virtual and Augmented Reality) technologies.
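A minimal sketch of the access pattern described above, a RESTful call over HTTPS authenticated with an API key; the endpoint URL, query parameters, and header convention are hypothetical, as the abstract does not document wayGoo's actual API.

```python
import requests

# Hypothetical endpoint and parameters; only the pattern (RESTful service
# over HTTPS plus an API key for authentication) comes from the abstract.
API_KEY = "your-api-key"
resp = requests.get(
    "https://waygoo.example.org/api/v1/events",
    params={"building": "B1", "floor": 2},
    headers={"Authorization": f"ApiKey {API_KEY}"},
    timeout=10,
)
resp.raise_for_status()
for event in resp.json():
    print(event)
```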
Patridge, Jeff; Namulanda, Gonza
2008-01-01
The Environmental Public Health Tracking (EPHT) Network provides an opportunity to bring together diverse environmental and health effects data by integrating local, state, and national databases of environmental hazards, environmental exposures, and health effects. To help users locate data on the EPHT Network, the network will utilize descriptive metadata that provide critical information as to the purpose, location, content, and source of these data. Since 2003, the Centers for Disease Control and Prevention's EPHT Metadata Subgroup has been working to initiate the creation and use of descriptive metadata. Efforts undertaken by the group include the adoption of a metadata standard, creation of an EPHT-specific metadata profile, development of an open-source metadata creation tool, and promotion of the creation of descriptive metadata by changing the perception of metadata in the public health culture.
Perspective: Interactive material property databases through aggregation of literature data
NASA Astrophysics Data System (ADS)
Seshadri, Ram; Sparks, Taylor D.
2016-05-01
Searchable, interactive databases of material properties, particularly those relating to functional materials (magnetics, thermoelectrics, photovoltaics, etc.), are curiously missing from discussions of machine learning and other data-driven methods for advancing new materials discovery. Here we discuss the manual aggregation of experimental data from the published literature for the creation of interactive databases that allow the original experimental data, as well as additional metadata, to be visualized in an interactive manner. The databases described involve materials for thermoelectric energy conversion and for the electrodes of Li-ion batteries. The data can be subjected to machine learning, accelerating the discovery of new materials.
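A minimal sketch of the manual-aggregation idea: literature values collected into a table that can then be queried or visualized interactively. The rows below are invented stand-ins for values transcribed from published papers.

```python
import pandas as pd

# Hypothetical rows aggregated from the literature; "source" tracks the
# provenance metadata the perspective emphasizes.
records = pd.DataFrame([
    {"material": "Bi2Te3", "T_K": 300, "zT": 0.9, "source": "paper A"},
    {"material": "Bi2Te3", "T_K": 400, "zT": 1.1, "source": "paper B"},
    {"material": "PbTe",   "T_K": 700, "zT": 1.4, "source": "paper C"},
])
# One simple "interactive" view: best reported zT per material.
print(records.groupby("material")["zT"].max())
```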
Biowep: a workflow enactment portal for bioinformatics applications.
Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano
2007-03-08
The huge amount of biological information, its distribution over the Internet, and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of unskilled researchers. A portal enabling these researchers to profit from new technologies was still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface, and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type, and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software, together with the creation of effective workflows, can significantly improve the automation of in-silico analysis. Biowep is available for interested researchers as a reference portal. They are invited to submit their workflows to the workflow repository. Biowep is being further developed in the sphere of the Laboratory of Interdisciplinary Technologies in Bioinformatics - LITBIO.
Documentation and Cultural Heritage Inventories - Case of the Historic City of Ahmadabad
NASA Astrophysics Data System (ADS)
Shah, K.
2015-08-01
Located in the western Indian state of Gujarat, the historic city of Ahmadabad is renowned for the unparalleled richness of its monumental architecture, traditional house form, community based settlement patterns, city structure, crafts and mercantile culture. This paper describes the process followed for documentation and development of comprehensive Heritage Inventories for the historic city with an aim of illustrating the Outstanding Universal Values of its Architectural and Urban Heritage. The exercise undertaken between 2011 & 2014 as part of the preparation of world heritage nomination dossier included thorough archival research, field surveys, mapping and preparation of inventories using a combination of traditional data procurement and presentation tools as well as creation of advanced digital database using GIS. The major challenges encountered were: need to adapt documentation methodology and survey formats to field conditions, changing and ever widening scope of work, corresponding changes in time frame, management of large quantities of data generated during the process along with difficulties in correlating existing databases procured from the local authority in varying formats. While the end result satisfied the primary aim, the full potential of Heritage Inventory as a protection and management tool will only be realised after its acceptance as the statutory list and its integration within the larger urban development plan to guide conservation, development and management strategy for the city. The rather detailed description of evolution of documentation process and the complexities involved is presented to understand the relevance of methods used in Ahmadabad and guide similar future efforts in the field.
Innovative technology for web-based data management during an outbreak
Mukhi, Shamir N; Chester, Tammy L Stuart; Klaver-Kibria, Justine DA; Nowicki, Deborah L; Whitlock, Mandy L; Mahmud, Salah M; Louie, Marie; Lee, Bonita E
2011-01-01
Lack of automated and integrated data collection and management, and poor linkage of clinical, epidemiological and laboratory data during an outbreak can inhibit effective and timely outbreak investigation and response. This paper describes an innovative web-based technology, referred to as Web Data, developed for the rapid set-up and provision of interactive and adaptive data management during outbreak situations. We also describe the benefits and limitations of the Web Data technology identified through a questionnaire that was developed to evaluate the use of Web Data implementation and application during the 2009 H1N1 pandemic by Winnipeg Regional Health Authority and Provincial Laboratory for Public Health of Alberta. Some of the main benefits include: improved and secure data access, increased efficiency and reduced error, enhanced electronic collection and transfer of data, rapid creation and modification of the database, conversion of specimen-level to case-level data, and user-defined data extraction and query capabilities. Areas requiring improvement include: better understanding of privacy policies, increased capability for data sharing and linkages between jurisdictions to alleviate data entry duplication. PMID:23569597
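A minimal sketch of one listed benefit, the conversion of specimen-level to case-level data; the aggregation rule (a case is positive if any of its specimens is positive) and the fields are illustrative assumptions.

```python
# Hypothetical specimen-level rows, as might be entered during an outbreak.
specimens = [
    {"case_id": "C1", "specimen": "NP swab", "result": "positive"},
    {"case_id": "C1", "specimen": "serum",   "result": "negative"},
    {"case_id": "C2", "specimen": "NP swab", "result": "negative"},
]

# Roll specimens up to one status per case.
cases = {}
for s in specimens:
    prior = cases.get(s["case_id"], "negative")
    cases[s["case_id"]] = "positive" if "positive" in (prior, s["result"]) else "negative"

print(cases)  # {'C1': 'positive', 'C2': 'negative'}
```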
Automatic labeling and characterization of objects using artificial neural networks
NASA Technical Reports Server (NTRS)
Campbell, William J.; Hill, Scott E.; Cromp, Robert F.
1989-01-01
Existing NASA-supported scientific databases are usually developed, managed, and populated in a tedious, error-prone, and self-limiting way in terms of what can be described in a relational Database Base Management System (DBMS). The next generation of Earth remote sensing platforms, i.e., the Earth Observing System (EOS), will be capable of generating data at a rate of over 300 Mb per second from a suite of instruments designed for different applications. What is needed is an innovative approach that creates object-oriented databases that segment, characterize, and catalog data, are manageable in a domain-specific context, and whose contents are available interactively and in near-real-time to the user community. Described here is work in progress that utilizes an artificial neural net approach to characterize satellite imagery of undefined objects into high-level data objects. The characterized data is then dynamically allocated to an object-oriented database where it can be reviewed and assessed by a user. The definition, development, and evolution of the overall data system model are steps in the creation of an application-driven, knowledge-based scientific information system.
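A minimal sketch of the underlying idea, a small neural network assigning high-level labels to feature vectors derived from imagery; scikit-learn's MLPClassifier stands in for the paper's network, and the features and labels are synthetic stand-ins, not the paper's actual data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic 4-dimensional "pixel features" for two classes of surface.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.05, (50, 4)),   # "water"-like features
               rng.normal(0.8, 0.05, (50, 4))])  # "land"-like features
y = np.array(["water"] * 50 + ["land"] * 50)

# A small network learns to label feature vectors with high-level classes.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict([[0.22, 0.18, 0.25, 0.20]]))  # -> ['water']
```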
A multimedia perioperative record keeper for clinical research.
Perrino, A C; Luther, M A; Phillips, D B; Levin, F L
1996-05-01
To develop a multimedia perioperative record keeper that provides: 1. synchronous, real-time acquisition of multimedia data, 2. on-line access to the patient's chart data, and 3. advanced data analysis capabilities through integrated multimedia database and analysis applications. To minimize cost and development time, the system design utilized industry-standard hardware components and graphical software development tools. The system was configured to use a Pentium PC complemented with a variety of hardware interfaces to external data sources. These sources included physiologic monitors with data in digital, analog, video, and audio as well as paper-based formats. The development process was guided by trials in over 80 clinical cases and by the critiques of numerous users. As a result of this process, a suite of custom software applications was created to meet the design goals. The Perioperative Data Acquisition application manages data collection from a variety of physiological monitors. The Charter application provides for rapid creation of an electronic medical record from the patient's paper-based chart and investigator's notes. The Multimedia Medical Database application provides a relational database for the organization and management of multimedia data. The Triscreen application provides an integrated data analysis environment with simultaneous, full-motion data display. With recent technological advances in PC power, data acquisition hardware, and software development tools, the clinical researcher now has the ability to collect and examine a more complete perioperative record. It is hoped that the description of the MPR and its development process will assist and encourage others to advance these tools for perioperative research.
From print to digital (1985-2015): APA's evolving role in psychological publishing.
VandenBos, Gary R
2017-11-01
Knowledge dissemination plays an important role in all scientific fields. The American Psychological Association's (APA) journal publication program was established in 1927. During the 1960s, the Psychological Abstracts publication was computerized. In the mid-1980s, a reenergizing of APA Publishing began, with the establishment of the APA Books Program, as well as the movement of abstracts to CD-ROMs. This article describes the 30-year program of expansion of APA Publishing, covering the period from 1985 through 2015. This period saw the journals program grow from 15 journals to 89 journals, the abstract program grow into an Internet-based delivery system, the creation of APA's own PsycNET delivery platform, the creation of 6 additional databases, and the establishment of dictionaries and handbooks of psychology. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
A SQL-Database Based Meta-CASE System and its Query Subsystem
NASA Astrophysics Data System (ADS)
Eessaar, Erki; Sgirka, Rünno
Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database system (ORDBMS) as its basis. The use of ORDBMSs allows us to integrate different parts of the system and simplifies the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate values of software measures. We illustrate the use of the system with the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system by using the PostgreSQL™ ORDBMS and the PHP scripting language.
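In the spirit of the artifact queries discussed above, the sketch below computes a simple software measure (elements per diagram) with SQL; SQLite stands in for the prototype's PostgreSQL™ basis, and the schema and measure are hypothetical illustrations.

```python
import sqlite3

# Hypothetical artifact schema: diagrams contain modeling elements.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE diagram (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE element (id INTEGER PRIMARY KEY, diagram_id INTEGER, kind TEXT);
    INSERT INTO diagram VALUES (1, 'OrderFlow');
    INSERT INTO element VALUES (1, 1, 'state'), (2, 1, 'state'), (3, 1, 'transition');
""")
# A software-measure query over artifacts: element count per diagram.
for name, n in con.execute("""
    SELECT d.name, COUNT(e.id)
    FROM diagram d LEFT JOIN element e ON e.diagram_id = d.id
    GROUP BY d.id
"""):
    print(f"{name}: {n} elements")
```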
Environment: General; Grammar & Usage; Money Management; Music History; Web Page Creation & Design.
ERIC Educational Resources Information Center
Web Feet, 2001
2001-01-01
Describes Web site resources for elementary and secondary education in the topics of: environment, grammar, money management, music history, and Web page creation and design. Each entry includes an illustration of a sample page on the site and an indication of the grade levels for which it is appropriate. (AEF)
BAO Plate Archive digitization, creation of electronic database and its scientific usage
NASA Astrophysics Data System (ADS)
Mickaelian, Areg M.
2015-08-01
Astronomical plate archives created on the basis of numerous observations at many observatories are an important part of the astronomical heritage. The Byurakan Astrophysical Observatory (BAO) plate archive consists of 37,500 photographic plates and films obtained at the 2.6m telescope, the 1m and 0.5m Schmidt telescopes, and other smaller ones during 1947-1991. In 2002-2005, the 2000 plates of the famous Markarian Survey (First Byurakan Survey, FBS) were digitized and the Digitized FBS (DFBS, http://www.aras.am/Dfbs/dfbs.html) was created. New science projects have been conducted based on this low-dispersion spectroscopic material. In 2015, we started a project on the digitization of the whole BAO Plate Archive, the creation of an electronic database, and its scientific usage. A Science Program Board has been created to evaluate the observing material, to investigate new possibilities, and to propose new projects based on the combined usage of these observations together with other world databases. The Executing Team consists of 9 astronomers and 3 computer scientists and will use 2 EPSON Perfection V750 Pro scanners for the digitization, as well as the Armenian Virtual Observatory (ArVO) database to accommodate all new data. The project will run during 3 years in 2015-2017, and the final result will be an electronic database and an online interactive sky map to be used for further research projects.
Silumbwe, Adam; Zulu, Joseph Mumba; Halwindi, Hikabasa; Jacobs, Choolwe; Zgambo, Jessy; Dambe, Rosalia; Chola, Mumbi; Chongwe, Gershom; Michelo, Charles
2017-05-22
Understanding factors surrounding the implementation process of mass drug administration for lymphatic filariasis (MDA for LF) elimination programmes is critical for successful implementation of similar interventions. The sub-Saharan Africa (SSA) region records the second highest prevalence of the disease and subsequently several countries have initiated and implemented MDA for LF. Systematic reviews have largely focused on factors that affect coverage and compliance, with less attention on the implementation of MDA for LF activities. This review therefore seeks to document facilitators and barriers to implementation of MDA for LF in sub-Saharan Africa. A systematic search of databases PubMed, Science Direct and Google Scholar was conducted. English peer-reviewed publications focusing on implementation of MDA for LF from 2000 to 2016 were considered for analysis. Using thematic analysis, we synthesized the final 18 articles to identify key facilitators and barriers to MDA for LF programme implementation. The main factors facilitating implementation of MDA for LF programmes were awareness creation through innovative community health education programmes, creation of partnerships and collaborations, integration with existing programmes, creation of morbidity management programmes, motivation of community drug distributors (CDDs) through incentives and training, and management of adverse effects. Barriers to implementation included the lack of geographical demarcations and unregistered migrations into rapidly urbanizing areas, major disease outbreaks like the Ebola virus disease in West Africa, delayed drug deliveries at both country and community levels, inappropriate drug delivery strategies, limited number of drug distributors and the large number of households allocated for drug distribution. Mass drug administration for lymphatic filariasis elimination programmes should design their implementation strategies differently based on specific contextual factors to improve implementation outcomes. Successfully achieving this requires undertaking formative research on the possible constraining and inhibiting factors, and incorporating the findings in the design and implementation of MDA for LF.
The creation, management, and use of data quality information for life cycle assessment.
Edelen, Ashley; Ingwersen, Wesley W
2018-04-01
Despite growing access to data, questions of "best fit" data and the appropriate use of results in supporting decision making still plague the life cycle assessment (LCA) community. This discussion paper addresses revisions to assessing data quality captured in a new US Environmental Protection Agency guidance document as well as additional recommendations on data quality creation, management, and use in LCA databases and studies. Existing data quality systems and approaches in LCA were reviewed and tested. The evaluations resulted in a revision to a commonly used pedigree matrix, for which flow and process level data quality indicators are described, more clarity for scoring criteria, and further guidance on interpretation are given. Increased training for practitioners on data quality application and its limits are recommended. A multi-faceted approach to data quality assessment utilizing the pedigree method alongside uncertainty analysis in result interpretation is recommended. A method of data quality score aggregation is proposed and recommendations for usage of data quality scores in existing data are made to enable improved use of data quality scores in LCA results interpretation. Roles for data generators, data repositories, and data users are described in LCA data quality management. Guidance is provided on using data with data quality scores from other systems alongside data with scores from the new system. The new pedigree matrix and recommended data quality aggregation procedure can now be implemented in openLCA software. Additional ways in which data quality assessment might be improved and expanded are described. Interoperability efforts in LCA data should focus on descriptors to enable user scoring of data quality rather than translation of existing scores. Developing and using data quality indicators for additional dimensions of LCA data, and automation of data quality scoring through metadata extraction and comparison to goal and scope are needed.
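A minimal sketch of one possible data quality score aggregation: flow-level pedigree scores combined into a process-level score per indicator by weighted averaging. The weights and the averaging rule are illustrative assumptions, not the guidance document's prescribed method.

```python
# Hypothetical pedigree scores per flow, one score per indicator
# (reliability, completeness, temporal, geographical, technological).
flow_scores = {
    "electricity, grid": (1, 2, 3, 2, 1),
    "diesel, burned":    (2, 2, 4, 3, 2),
}
# Hypothetical weights, e.g. each flow's contribution to the result.
flow_weights = {"electricity, grid": 0.7, "diesel, burned": 0.3}

aggregated = [
    round(sum(flow_weights[f] * s[i] for f, s in flow_scores.items()), 2)
    for i in range(5)
]
print(aggregated)  # weighted score per indicator across flows
```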
NASA Astrophysics Data System (ADS)
Ferrari, F.; Medici, M.
2017-02-01
Since 2005, the DIAPReM Centre of the Department of Architecture of the University of Ferrara, in collaboration with the "Centro Studi Leon Battista Alberti" Foundation and the Consorzio Futuro in Ricerca, has been carrying out a research project for the creation of 3D databases that could allow the development of a critical interpretation of Alberti's architectural work. The project is primarily based on a common three-dimensional integrated survey methodology for the creation of a navigable multilayered database. The research allows reiterative metrical analysis, thanks to the use of coherent data, so that researchers, art historians, and scholars can check and validate hypotheses on Alberti's architectural work. Coherently with this methodological framework, two case studies are explained in this paper: the church of San Sebastiano in Mantua and the church of the Santissima Annunziata in Florence. Furthermore, thanks to a brief introduction of further developments of the project, a short graphical analysis of preliminary results on the Tempio Malatestiano in Rimini opens new perspectives of research.
The HST/WFC3 Quicklook Project: A User Interface to Hubble Space Telescope Wide Field Camera 3 Data
NASA Astrophysics Data System (ADS)
Bourque, Matthew; Bajaj, Varun; Bowers, Ariel; Dulude, Michael; Durbin, Meredith; Gosmeyer, Catherine; Gunning, Heather; Khandrika, Harish; Martlin, Catherine; Sunnquist, Ben; Viana, Alex
2017-06-01
The Hubble Space Telescope's Wide Field Camera 3 (WFC3) instrument, comprised of two detectors, UVIS (Ultraviolet-Visible) and IR (Infrared), has been acquiring ~ 50-100 images daily since its installation in 2009. The WFC3 Quicklook project provides a means for instrument analysts to store, calibrate, monitor, and interact with these data through the various Quicklook systems: (1) a ~ 175 TB filesystem, which stores the entire WFC3 archive on disk, (2) a MySQL database, which stores image header data, (3) a Python-based automation platform, which currently executes 22 unique calibration/monitoring scripts, (4) a Python-based code library, which provides system functionality such as logging, downloading tools, database connection objects, and filesystem management, and (5) a Python/Flask-based web interface to the Quicklook system. The Quicklook project has enabled large-scale WFC3 analyses and calibrations, such as the monitoring of the health and stability of the WFC3 instrument, the measurement of ~ 20 million WFC3/UVIS Point Spread Functions (PSFs), the creation of WFC3/IR persistence calibration products, and many others.
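A minimal sketch of the header-database idea, image header keywords stored in a database and queried for monitoring; SQLite stands in for Quicklook's MySQL database, and the columns and values are illustrative.

```python
import sqlite3

# Hypothetical header table; Quicklook stores image header data in MySQL,
# but the query pattern is the same.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE headers (rootname TEXT, detector TEXT, exptime REAL)")
con.executemany("INSERT INTO headers VALUES (?, ?, ?)",
                [("icab01xyq", "UVIS", 350.0),
                 ("icab02abq", "IR", 702.9)])
# Pull all UVIS exposures, e.g. for a detector-monitoring script.
for row in con.execute(
        "SELECT rootname, exptime FROM headers WHERE detector = 'UVIS'"):
    print(row)
```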
Yang, Ya-Ting; Iqbal, Usman; Chen, Ya-Mei; Su, Shyi; Chang, Yao-Mao; Handa, Yujiro; Lin, Neng-Pai; Hsu, Yi-Hsin Elsa
2016-09-01
With global population aging, great business opportunities are driven by the various needs that the elderly face in everyday living. Internet development makes information spread faster and also allows the elderly and their caregivers to more easily access information and actively participate in value co-creation in services. This study aims to investigate the designs of value co-creation by the supply and demand sides of the senior industry. This study investigated the senior industry in Taiwan and analyzed the business models of 33 selected successful senior enterprises in 2013. We adopted serial field observations, reviews of documentation, analysis of meeting records, and in-depth interviews with 65 CEOs and managers. Setting: thirty-three quality enterprises in the senior industry. Participants: sixty-five CEOs and managers in 33 senior enterprises. Intervention: none. Main outcome measures: value co-creation design and the value co-creating process. We constructed a conceptual model that comprehensively describes essential aspects of value co-creation and categorized the value co-creation designs into four types applying to different business models: (i) interaction in experience spaces co-creation design, (ii) on-site interacting co-creation design, (iii) social networking platform co-creation design and (iv) empowering customers co-creation design. Through value co-creation platform design, the senior enterprises have converted the originally passive roles of the elderly and caregivers into active participants in the value co-creation process. The new paradigm of value co-creation designs not only promotes innovative development during the interactive process and leads enterprises to reveal and meet customers' needs, but also increases markets and profits. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Database systems for knowledge-based discovery.
Jagarlapudi, Sarma A R P; Kishan, K V Radha
2009-01-01
Several database systems have been developed to provide valuable information, in a structured format, to users ranging from the bench chemist and biologist to the medical practitioner and pharmaceutical scientist. The advent of information technology and computational power has enhanced the ability to access large volumes of data in the form of a database, where one can do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching, and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity, so that the structured databases containing vast data can be used in several areas of research. These databases are classified as reference-centric or compound-centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance their value toward better drug design and discovery.
ERIC Educational Resources Information Center
Harrington, Denis; Kearney, Arthur
2011-01-01
Purpose: This paper aims to consider the extent to which business school transition has created new opportunities in management development, knowledge transfer and knowledge creation. Design/methodology/approach: The paper is a critical review of knowledge exchange in a business school context with a particular focus on the "translation or…
Description of the MHS Health Level 7 Chemistry Laboratory for Public Health Surveillance
2012-09-01
This technical document provides a history of the HL7 chemistry database and its contents, explains the creation of chemistry/serology records, describes the pathway … in health surveillance activities, and discusses the data source for its usefulness in public health surveillance. While HL7 data also includes radiology, anatomic pathology reports and pharmacy transactions…
Kosseim, Patricia; Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton
2013-01-01
Objective To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. Materials and methods This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. Discussion A newly adopted legal regimen in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. Conclusion The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research. PMID:22859644
Value-driven ERM: making ERM an engine for simultaneous value creation and value protection.
Celona, John; Driver, Jeffrey; Hall, Edward
2011-01-01
Enterprise risk management (ERM) began as an effort to integrate the historically disparate silos of risk management in organizations. More recently, as recognition has grown of the need to cover the upside risks in value creation (financial and otherwise), organizations and practitioners have been searching for the means to do this. Existing tools such as heat maps and risk registers are not adequate for this task. Instead, a conceptually new value-driven framework is needed to realize the promise of enterprise-wide coverage of all risks, for both value protection and value creation. The methodology of decision analysis provides the means of capturing systemic, correlated, and value-creation risks on the same basis as value protection risks and has been integrated into the value-driven approach to ERM described in this article. Stanford Hospital and Clinics Risk Consulting and Strategic Decisions Group have been working to apply this value-driven ERM at Stanford University Medical Center. © 2011 American Society for Healthcare Risk Management of the American Hospital Association.
22 CFR 8.5 - Creation of a committee.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Creation of a committee. 8.5 Section 8.5 Foreign Relations DEPARTMENT OF STATE GENERAL ADVISORY COMMITTEE MANAGEMENT § 8.5 Creation of a committee. (a) A bureau or an office designated or desiring to sponsor an advisory committee will prepare a...
22 CFR 8.5 - Creation of a committee.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Creation of a committee. 8.5 Section 8.5 Foreign Relations DEPARTMENT OF STATE GENERAL ADVISORY COMMITTEE MANAGEMENT § 8.5 Creation of a committee. (a) A bureau or an office designated or desiring to sponsor an advisory committee will prepare a...
The Kiel data management infrastructure - arising from a generic data model
NASA Astrophysics Data System (ADS)
Fleischer, D.; Mehrtens, H.; Schirnick, C.; Springer, P.
2010-12-01
The Kiel Data Management Infrastructure (KDMI) started from a cooperation of three large-scale projects (SFB574, SFB754 and the Cluster of Excellence The Future Ocean) and the Leibniz Institute of Marine Sciences (IFM-GEOMAR). The common strategy for project data management had been a single person collecting and transforming data according to the requirements of the targeted data center(s). The intention of the KDMI cooperation is to avoid redundant and potentially incompatible data management efforts for scientists and data managers and to create a single sustainable infrastructure. An increased level of complexity in the conceptual planning arose from the diversity of marine disciplines and the approximately 1000 scientists involved. KDMI key features focus on data provenance, which we consider to comprise the entire workflow from field sampling through lab work to data calculation and evaluation. Managing the data of each individual project participant in this way yields the data management for the entire project and ensures the reusability of (meta)data. Accordingly, scientists provide a workflow definition of their data creation procedures resulting in their target variables. The central idea in the development of the KDMI presented here is based on the object-oriented programming concept, which allows one object definition (workflow) and an infinite number of object instances (data). Each definition is created through a graphical user interface and produces XML output stored in a database using a generic data model. On creation of a data instance, the KDMI translates the definition into web forms for the scientist, and the generic data model then accepts all information input following the given data provenance definition. An important aspect of the implementation phase is the possibility of a successive transition from daily measurement routines resulting in single spreadsheet files, with well-known points of failure and limited reusability, to a central infrastructure serving as a single point of truth. The data provenance approach has the following positive side effects: (1) scientists design the extent and timing of data and metadata prompts themselves through workflow definitions, while (2) consistency and completeness (mandatory information) of metadata in the resulting XML document can be checked by XML validation; (3) storage of the entire data creation process (including raw data and processing steps) provides a multidimensional quality history accessible by all researchers, in addition to the commonly applied one-dimensional quality-flag system; and (4) the KDMI can be extended to other scientific disciplines by adding new workflows and domain-specific outputs, assisted by the KDMI team. The KDMI is a social-network-inspired system, but instead of sharing private content it is a platform for sharing daily scientific work, data, and their provenance.
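The definition/instance split described above can be illustrated compactly. Below is a minimal Python sketch of the idea, with invented workflow and variable names; the real KDMI stores definitions as XML in a generic relational model, which this sketch does not reproduce:

```python
# One workflow *definition* (the "class") is stored once; every dataset is
# an *instance* validated against it. Step and variable names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class WorkflowDefinition:
    name: str
    steps: dict = field(default_factory=dict)  # step -> [mandatory variables]

    def blank_instance(self):
        """Generate an empty data-entry form from the definition."""
        return {step: {var: None for var in vs} for step, vs in self.steps.items()}

    def validate(self, instance):
        """Completeness check: every mandatory variable must be filled."""
        return [(s, v) for s, vs in self.steps.items()
                for v in vs if instance.get(s, {}).get(v) is None]

# One definition, many instances, analogous to class vs. object.
ctd_workflow = WorkflowDefinition(
    name="CTD cast",
    steps={"field_sampling": ["station", "cast_time"],
           "lab_work": ["salinity_bottle_check"],
           "calculation": ["derived_density"]})

record = ctd_workflow.blank_instance()
record["field_sampling"]["station"] = "PS-07"
print(ctd_workflow.validate(record))  # remaining mandatory fields
```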
Developing Vocabularies to Improve Understanding and Use of NOAA Observing Systems
NASA Astrophysics Data System (ADS)
Austin, M.
2014-12-01
The NOAA Observing System Integrated Analysis project (NOSIA II) is an attempt to capture and tell the story of how valuable observing systems are in producing the products and services required to fulfill NOAA's diverse mission. NOAA's goals and mission areas cover a broad range of environmental data, and considerable complexity exists in the terms and vocabulary applied to the creation of observing-system-derived products. The NOSIA data collection focused first on decomposing NOAA's goals through the creation and acceptance of Mission Service Areas (MSAs) by NOAA senior leadership. Products and services that supported the MSAs were then identified by interviewing product producers across the NOAA organization. Product data inputs, including models, databases and observing systems, were also identified. The NOSIA model contains over 20,000 nodes, each representing a level in a network connecting products, data sources, users and desired outcomes. It quickly became apparent that the complexity and variety of the data collected required data management to mature the quality and content of the NOSIA model. The NOSIA Analysis Database (ADB) was developed initially to improve the consistency of terms and data types and to allow linkage of observing systems, products, and NOAA's goals and mission. The ADB also allowed prototyping of reports and product generation in an easily accessible and comprehensive format for the first time. Web-based visualizations of relationships between products, data sources, users and producers were generated to make the information easily understood. This includes developing ontologies/vocabularies that are used for the development of user-type-specific products for NOAA leadership, Observing System Portfolio managers and the users of NOAA data.
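As a rough illustration of a NOSIA-style value network, the sketch below builds a tiny directed graph linking observing systems to products and mission outcomes; the node names are invented, and the real model holds over 20,000 nodes:

```python
# Toy value-chain network: edges record which inputs feed which products,
# and which products support which mission goals.
import networkx as nx

g = nx.DiGraph()
g.add_edge("GOES-R imager", "Hurricane track forecast")
g.add_edge("Buoy network", "Hurricane track forecast")
g.add_edge("Hurricane track forecast", "Weather-Ready Nation goal")

# Trace every observing system and product that ultimately supports a goal.
goal = "Weather-Ready Nation goal"
print(nx.ancestors(g, goal))
```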
MetPetDB: A database for metamorphic geochemistry
NASA Astrophysics Data System (ADS)
Spear, Frank S.; Hallett, Benjamin; Pyle, Joseph M.; Adalı, Sibel; Szymanski, Boleslaw K.; Waters, Anthony; Linder, Zak; Pearce, Shawn O.; Fyffe, Matthew; Goldfarb, Dennis; Glickenhouse, Nickolas; Buletti, Heather
2009-12-01
We present a data model for the initial implementation of MetPetDB, a geochemical database specific to metamorphic rock samples. The database is designed around the concept of preservation of spatial relationships, at all scales, of chemical analyses and their textural setting. Objects in the database (samples) represent physical rock samples; each sample may contain one or more subsamples with associated geochemical and image data. Samples, subsamples, geochemical data, and images are described with attributes (some required, some optional); these attributes also serve as search delimiters. All data in the database are classified as published (i.e., archived or published data), public or private. Public and published data may be freely searched and downloaded. All private data is owned; permission to view, edit, download and otherwise manipulate private data may be granted only by the data owner; all such editing operations are recorded by the database to create a data version log. The sharing of data permissions among a group of collaborators researching a common sample is done by the sample owner through the project manager. User interaction with MetPetDB is hosted by a web-based platform based upon the Java servlet application programming interface, with the PostgreSQL relational database. The database web portal includes modules that allow the user to interact with the database: registered users may save and download public and published data, upload private data, create projects, and assign permission levels to project collaborators. An Image Viewer module provides for spatial integration of image and geochemical data. A toolkit consisting of plotting and geochemical calculation software for data analysis and a mobile application for viewing the public and published data is being developed. Future issues to address include population of the database, integration with other geochemical databases, development of the analysis toolkit, creation of data models for derivative data, and building a community-wide user base. It is believed that this and other geochemical databases will enable more productive collaborations, generate more efficient research efforts, and foster new developments in basic research in the field of solid earth geochemistry.
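The ownership and permission rules described for MetPetDB can be sketched as follows; SQLite stands in here for the production PostgreSQL back end, and all table and column names are hypothetical simplifications of the actual schema:

```python
# Minimal sketch: samples are private, public, or published; the owner may
# grant per-user permissions on private samples.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sample (
    sample_id INTEGER PRIMARY KEY,
    owner     TEXT NOT NULL,
    status    TEXT NOT NULL CHECK (status IN ('private','public','published'))
);
CREATE TABLE sample_permission (      -- grants issued by the sample owner
    sample_id INTEGER REFERENCES sample(sample_id),
    grantee   TEXT NOT NULL,
    may_edit  INTEGER NOT NULL DEFAULT 0
);
""")
db.execute("INSERT INTO sample VALUES (1, 'alice', 'private')")
db.execute("INSERT INTO sample_permission VALUES (1, 'bob', 1)")

def can_view(user, sample_id):
    owner, status = db.execute(
        "SELECT owner, status FROM sample WHERE sample_id=?", (sample_id,)).fetchone()
    if status in ("public", "published") or user == owner:
        return True
    return db.execute(
        "SELECT 1 FROM sample_permission WHERE sample_id=? AND grantee=?",
        (sample_id, user)).fetchone() is not None

print(can_view("bob", 1), can_view("carol", 1))  # True False
```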
Mining collections of compounds with Screening Assistant 2
Guilloux, Vincent Le; Arrault, Alban; Colliandre, Lionel; Bourg, Stéphane; Vayer, Philippe; Morin-Allory, Luc
2012-08-31
Background High-throughput screening assays have become the starting point of many drug discovery programs for large pharmaceutical companies as well as academic organisations. Despite the increasing throughput of screening technologies, the almost infinite chemical space remains out of reach, calling for tools dedicated to the analysis and selection of the compound collections intended to be screened. Results We present Screening Assistant 2 (SA2), an open-source Java software dedicated to the storage and analysis of small to very large chemical libraries. SA2 stores unique molecules in a MySQL database and encapsulates several chemoinformatics methods, including providers management, interactive visualisation, scaffold analysis, diverse subset creation, descriptors calculation, substructure/SMARTS search, similarity search and filtering. We illustrate the use of SA2 by analysing the composition of a database of 15 million compounds collected from 73 providers, in terms of scaffolds, frameworks, and undesired properties as defined by recently proposed HTS SMARTS filters. We also show how the software can be used to create diverse libraries based on existing ones. Conclusions Screening Assistant 2 is a user-friendly, open-source software that can be used to manage collections of compounds and perform simple to advanced chemoinformatics analyses. Its modular design and growing documentation facilitate the addition of new functionalities, calling for contributions from the community. The software can be downloaded at http://sa2.sourceforge.net/. PMID:23327565
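A SMARTS-based structural filter of the kind SA2 encapsulates can be reproduced in a few lines with RDKit; the single nitro-group filter below is illustrative only and is not one of the published HTS filter sets:

```python
# Flag molecules matching an "undesired" substructure pattern.
from rdkit import Chem

nitro = Chem.MolFromSmarts("[N+](=O)[O-]")

def is_undesired(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return mol is not None and mol.HasSubstructMatch(nitro)

print(is_undesired("c1ccccc1[N+](=O)[O-]"))  # nitrobenzene -> True
print(is_undesired("CCO"))                   # ethanol -> False
```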
Reiner, Bruce I
2017-10-01
Conventional peer review practice is compromised by a number of well-documented biases, which in turn limit the standard-of-care analysis that is fundamental to the determination of medical malpractice. Beyond these intrinsic biases, current peer review has other deficiencies, including a lack of standardization and objectivity, its retrospective practice, and limited automation. An alternative model to address these deficiencies would be one which is completely blinded to the peer reviewer, requires independent reporting from both parties, utilizes automated data mining techniques for neutral and objective report analysis, and provides data reconciliation for resolution of finding-specific report differences. If properly implemented, this peer review model could result in the creation of a standardized, referenceable peer review database which could further assist in customizable education, technology refinement, and implementation of real-time context- and user-specific decision support.
Teaching Knowledge Management by Combining Wikis and Screen Capture Videos
ERIC Educational Resources Information Center
Makkonen, Pekka; Siakas, Kerstin; Vaidya, Shakespeare
2011-01-01
Purpose: This paper aims to report on the design and creation of a knowledge management course aimed at facilitating student creation and use of social interactive learning tools for enhanced learning. Design/methodology/approach: The era of social media and web 2.0 has enabled a bottom-up collaborative approach and new ways to publish work on the…
ERIC Educational Resources Information Center
Hosseini, Seyede Mehrnoush
2011-01-01
The research aims to define SECI model of knowledge creation (socialization, externalization, combination, and internalization) as a framework of Virtual class management which can lead to better online teaching-learning mechanisms as well as knowledge creation. It has used qualitative research methodology including researcher's close observation…
Virtual Teaching on the Tundra.
ERIC Educational Resources Information Center
McAuley, Alexander
1998-01-01
Describes how a teacher and a distance-learning consultant collaborate in using the Internet and the Computer Supported Intentional Learning Environment (CSILE) to connect multicultural students on harsh Baffin Island (Canada). Discusses the creation of the class's database and future implications. (AEF)
48 CFR 1509.170-3 - Applicability.
Code of Federal Regulations, 2011 CFR
2011-10-01
... PLANNING CONTRACTOR QUALIFICATIONS Contractor Performance Evaluations 1509.170-3 Applicability. (a) This....604 provides detailed instructions for architect-engineer contractor performance evaluations. (b) The... simplified acquisition procedures do not require the creation or existence of a formal database for past...
Managing Written Directives: A Software Solution to Streamline Workflow.
Wagner, Robert H; Savir-Baruch, Bital; Gabriel, Medhat S; Halama, James R; Bova, Davide
2017-06-01
A written directive is required by the U.S. Nuclear Regulatory Commission for any use of ¹³¹I above 1.11 MBq (30 μCi) and for patients receiving radiopharmaceutical therapy. This requirement has also been adopted and must be enforced by the agreement states. As the introduction of new radiopharmaceuticals increases therapeutic options in nuclear medicine, time spent on regulatory paperwork also increases. The pressure of managing these time-consuming regulatory requirements may heighten the potential for inaccurate or incomplete directive data and subsequent regulatory violations. To improve on the paper-trail method of directive management, we created a software tool using a Health Insurance Portability and Accountability Act (HIPAA)-compliant database. This software allows for secure data-sharing among physicians, technologists, and managers while saving time, reducing errors, and eliminating the possibility of loss and duplication. Methods: The software tool was developed using Visual Basic, which is part of the Visual Studio development environment for the Windows platform. Patient data are deposited in an Access database on a local HIPAA-compliant secure server or hard disk. Once a working version had been developed, it was installed at our institution and used to manage directives. Updates and modifications of the software were released regularly until no more significant problems were found with its operation. Results: The software has been used at our institution for over 2 y and has reliably kept track of all directives. All physicians and technologists use the software daily and find it superior to paper directives. They can retrieve active directives at any stage of completion, as well as completed directives. Conclusion: We have developed a software solution for the management of written directives that streamlines and structures the departmental workflow. This solution saves time, centralizes the information for all staff to share, and decreases confusion about the creation, completion, filing, and retrieval of directives. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
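The core record-keeping idea can be sketched briefly; here SQLite stands in for the authors' Access back end, and the fields are assumptions rather than their actual schema:

```python
# Track each written directive through its workflow states.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE directive (
    id            INTEGER PRIMARY KEY,
    patient_mrn   TEXT NOT NULL,
    radionuclide  TEXT NOT NULL,
    activity_mbq  REAL NOT NULL,
    status        TEXT NOT NULL DEFAULT 'created'  -- created/signed/completed
)""")
db.execute("INSERT INTO directive (patient_mrn, radionuclide, activity_mbq) "
           "VALUES ('12345', 'I-131', 5550.0)")

# Retrieve all directives still awaiting completion.
print(db.execute(
    "SELECT id, status FROM directive WHERE status != 'completed'").fetchall())
```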
The Monitoring Erosion of Agricultural Land and spatial database of erosion events
NASA Astrophysics Data System (ADS)
Kapicka, Jiri; Zizala, Daniel
2013-04-01
The Monitoring of Erosion of Agricultural Land project originated in the Czech Republic in 2011 as a joint project of the State Land Office (SLO) and the Research Institute for Soil and Water Conservation (RISWC). The aim of the project is to collect and record information about erosion events on agricultural land and to evaluate them. The main idea is the creation of a spatial database that will serve as a source of data and information for evaluating and modeling erosion processes, for proposing preventive measures, and for reducing the negative impacts of erosion events. The subject of monitoring is manifestations of water erosion, wind erosion and slope deformation that damage agricultural land. A website, available at http://me.vumop.cz, is used as a tool for recording and browsing information about monitored events. SLO employees carry out the record keeping. RISWC is the specialist institute in the project: it maintains the spatial database, runs the website, manages the event records, analyses the causes of events, and statistically evaluates recorded events and proposed measures. Records are inserted into the database through the user interface of the website, which includes a map server as a component. The website is built on PostgreSQL with the PostGIS extension and UMN MapServer. Each record is spatially localized in the database by a drawing and contains descriptive information about the character of the event (date, description of the situation, etc.) as well as information about land cover and the crops grown. Part of the database is photodocumentation taken during field reconnaissance, which is performed within two days of notification of an event. Another part of the database is precipitation information from accessible precipitation gauges. The website allows simple spatial analyses such as area calculation, slope calculation, and percentage representation of GAEC. The database structure was designed on the basis of an analysis of the inputs needed by mathematical models, which are used for detailed analysis of chosen erosion events, including soil analysis. By the end of 2012 the database contained 135 events. Its content continues to accrue, giving rise to an extensive source of data usable for testing mathematical models.
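A spatial query of the kind the website offers, for example computing the area of recorded erosion events, might look like the following psycopg2/PostGIS sketch; the table and column names and the connection string are placeholders, not the project's actual schema:

```python
# Query event areas from a PostGIS table of erosion events.
import psycopg2

conn = psycopg2.connect("dbname=erosion user=me")  # placeholder DSN
cur = conn.cursor()
# ST_Area returns square units of the geometry's projected CRS.
cur.execute("""
    SELECT event_id, ST_Area(geom) AS area_m2
    FROM erosion_event
    WHERE event_date >= %s
""", ("2012-01-01",))
for event_id, area in cur.fetchall():
    print(event_id, round(area, 1))
```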
Code of Federal Regulations, 2010 CFR
2010-07-01
... Management and Budget apportionment or reapportionment action. The creation of an obligation in excess of an... overall VA budget or plan of expenditure. The creation of an obligation in excess of an allowance is not a...
Databases for rRNA gene profiling of microbial communities
Ashby, Matthew
2013-07-02
The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.
[Current status of DNA databases in the forensic field: new progress, new legal needs].
Baeta, Miriam; Martínez-Jarreta, Begoña
2009-01-01
One of the most contentious issues regarding the use of deoxyribonucleic acid (DNA) in the legal sphere is the creation of DNA databases. Until relatively recently, Spain did not have a law to support the establishment of a national DNA profile bank for forensic purposes and to preserve the fundamental rights of subjects whose data are archived therein. The law regulating police databases of DNA-derived identifiers, approved in 2007, fills this void in Spanish legislation and responds to the ongoing need to adapt laws to continuous scientific and technological progress.
Implementation of customized health information technology in diabetes self management programs.
Alexander, Susan; Frith, Karen H; O'Keefe, Louise; Hennigan, Michael A
2011-01-01
The project was a nurse-led implementation of a software application, designed to combine clinical and demographic records for a diabetes education program, which would result in secure, long-term record storage. Clinical information systems may be prohibitively expensive for small practices and require extensive training for implementation. A review of the literature suggests that the use of simple, practice-based registries offer an economical method of monitoring the outcomes of diabetic patients. The database was designed using a common software application, Microsoft Access. The theory used to guide implementation and staff training was Rogers' Diffusion of Innovations theory (1995). Outcomes after a 3-month period included incorporation of 100% of new clinical and demographic patient records into the database and positive changes in staff attitudes regarding software applications used in diabetes self-management training. These objectives were met while keeping project costs under budgeted amounts. As a function of the clinical nurse specialist (CNS) researcher role, there is a need for CNSs to identify innovative and economical methods of data collection. The success of this nurse-led project reinforces suggestions in the literature for less costly methods of data maintenance in small practice settings. Ongoing utilization and enhancement have resulted in the creation of a robust database that could aid in the research of multiple clinical issues. Clinical nurse specialists can use existing evidence to guide and improve both their own practice and outcomes for patients and organizations. Further research regarding specific factors that predict efficient transition of informatics applications, how these factors vary according to practice settings, and the role of the CNS in implementation of such applications is needed.
An offline-online Web-GIS Android application for fast data acquisition of landslide hazard and risk
NASA Astrophysics Data System (ADS)
Olyazadeh, Roya; Sudmeier-Rieux, Karen; Jaboyedoff, Michel; Derron, Marc-Henri; Devkota, Sanjaya
2017-04-01
Regional landslide assessment and mapping have been pursued by research institutions, national and local governments, non-governmental organizations (NGOs), and other stakeholders for some time, and a wide range of methodologies and technologies have consequently been proposed. Land-use maps and hazard event inventories are mostly created from remote-sensing data, which is subject to difficulties, such as accessibility and terrain, that need to be overcome. Landslide data acquired in the field can therefore improve the accuracy of databases and analyses. Open-source Web and mobile GIS tools can be used for improved ground-truthing of critical areas to improve the analysis of hazard patterns and triggering factors. This paper reviews the implementation and selected results of a secure mobile mapping application called ROOMA (Rapid Offline-Online Mapping Application) for the rapid collection of landslide hazard and risk data. This prototype assists the quick creation of landslide inventory maps (LIMs) by collecting information on the type, feature, volume, date, and pattern of landslides using open-source Web-GIS technologies such as Leaflet maps, Cordova, GeoServer, PostgreSQL as the DBMS (database management system), and PostGIS as its plug-in for spatial database management. The application comprises Leaflet maps coupled with satellite images as a base layer, drawing tools, geolocation (using GPS and the Internet), photo mapping, and event clustering. All features and information are recorded to a GeoJSON text file in the offline version (Android) and subsequently uploaded in the online mode (usable from any browser) when an Internet connection is available. Finally, events can be accessed and edited after approval by an administrator and then visualized by the general public.
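The offline record format lends itself to a short illustration: one landslide event serialized as a GeoJSON Feature for later upload. The property names here are illustrative, not ROOMA's exact schema:

```python
# Write one landslide event to a local GeoJSON file for later sync.
import json

event = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [85.3240, 27.7172]},
    "properties": {
        "landslide_type": "debris flow",
        "volume_m3": 1200,
        "date": "2016-07-28",
        "photo": "IMG_0042.jpg",
    },
}
with open("events.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": [event]}, f)
```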
ERIC Educational Resources Information Center
Eshlaghy, Abbas Toloie; Kaveh, Haydeh
2009-01-01
The purpose of this study was to determine the most suitable ICT-based education and define the most suitable e-content creation tools for quantitative courses in the IT-management Master's program. ICT-based tools and technologies are divided into three categories: the creation of e-content, the offering of e-content, and access to e-content. In…
The MELISSA food data base: space food preparation and process optimization
NASA Astrophysics Data System (ADS)
Creuly, Catherine; Poughon, Laurent; Pons, A.; Farges, Berangere; Dussap, Claude-Gilles
Life support systems have to deal with the air, water and food requirements of a crew, with waste management, and with the crew's habitability and safety constraints. Food can be provided from stocks (open loops) or produced during the space flight or on an extraterrestrial base (which usually implies a closed-loop system). It is generally accepted that only biological processes can fulfil the food requirement of a life support system. Today, only a strictly vegetarian source range is considered, and this is limited to a very small number of crops compared to the variety available on Earth. Despite these constraints, a successful diet should have enough variety in terms of ingredients and recipes, sufficiently high acceptability in terms of acceptance ratings for individual dishes to remain interesting and palatable over a period of several months, and an adequate level of nutrients commensurate with space nutritional requirements. In addition to the nutritional aspects, other parameters have to be considered for the pertinent selection of dishes, such as energy consumption (for food production and transformation), quantity of generated waste, preparation time, and food processes. This work concerns a global approach, called the MELISSA Food Database, to facilitate the creation and management of these menus subject to the nutritional, mass, energy and time constraints. The MELISSA Food Database is composed of a MySQL database containing, among other information, crew composition, menus, dishes, recipes, and plant and nutritional data, and a PHP-based web interface to interactively access the database and manage its content. In its current version, a crew is defined and a 10-day menu scenario can be created using dishes that could be cooked from a limited set of fresh plants assumed to be produced in the life support system. The nutritional coverage, waste produced, and mass, time and energy requirements are calculated, allowing evaluation of the menu scenario and its interactions with the life support system, and the database is filled with information on food processes and equipment suitable for use in an advanced life support system. The MELISSA Food Database is available on the server of Université Blaise Pascal (Clermont Université), with authorized access, at http://marseating.univ-bpclermont.fr. In the future, the challenge is to complete this database with specific data related to the MELISSA project. Plant chambers in the pilot plant located at the Universitat Autònoma de Barcelona will provide nutritional and process data on crop cultivation.
Mulrane, Laoighse; Rexhepaj, Elton; Smart, Valerie; Callanan, John J; Orhan, Diclehan; Eldem, Türkan; Mally, Angela; Schroeder, Susanne; Meyer, Kirstin; Wendt, Maria; O'Shea, Donal; Gallagher, William M
2008-08-01
The widespread use of digital slides has only recently come to the fore with the development of high-throughput scanners and high performance viewing software. This development, along with the optimisation of compression standards and image transfer techniques, has allowed the technology to be used in wide reaching applications including integration of images into hospital information systems and histopathological training, as well as the development of automated image analysis algorithms for prediction of histological aberrations and quantification of immunohistochemical stains. Here, the use of this technology in the creation of a comprehensive library of images of preclinical toxicological relevance is demonstrated. The images, acquired using the Aperio ScanScope CS and XT slide acquisition systems, form part of the ongoing EU FP6 Integrated Project, Innovative Medicines for Europe (InnoMed). In more detail, PredTox (abbreviation for Predictive Toxicology) is a subproject of InnoMed and comprises a consortium of 15 industrial (13 large pharma, 1 technology provider and 1 SME) and three academic partners. The primary aim of this consortium is to assess the value of combining data generated from 'omics technologies (proteomics, transcriptomics, metabolomics) with the results from more conventional toxicology methods, to facilitate further informed decision making in preclinical safety evaluation. A library of 1709 scanned images was created of full-face sections of liver and kidney tissue specimens from male Wistar rats treated with 16 proprietary and reference compounds of known toxicity; additional biological materials from these treated animals were separately used to create 'omics data, that will ultimately be used to populate an integrated toxicological database. In respect to assessment of the digital slides, a web-enabled digital slide management system, Digital SlideServer (DSS), was employed to enable integration of the digital slide content into the 'omics database and to facilitate remote viewing by pathologists connected with the project. DSS also facilitated manual annotation of digital slides by the pathologists, specifically in relation to marking particular lesions of interest. Tissue microarrays (TMAs) were constructed from the specimens for the purpose of creating a repository of tissue from animals used in the study with a view to later-stage biomarker assessment. As the PredTox consortium itself aims to identify new biomarkers of toxicity, these TMAs will be a valuable means of validation. In summary, a large repository of histological images was created enabling the subsequent pathological analysis of samples through remote viewing and, along with the utilisation of TMA technology, will allow the validation of biomarkers identified by the PredTox consortium. The population of the PredTox database with these digitised images represents the creation of the first toxicological database integrating 'omics and preclinical data with histological images.
36 CFR 1222.22 - What records are required to provide for adequate documentation of agency business?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF..., agencies must prescribe the creation and maintenance of records that: (a) Document the persons, places...
36 CFR 1222.22 - What records are required to provide for adequate documentation of agency business?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF..., agencies must prescribe the creation and maintenance of records that: (a) Document the persons, places...
36 CFR 1222.22 - What records are required to provide for adequate documentation of agency business?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF..., agencies must prescribe the creation and maintenance of records that: (a) Document the persons, places...
36 CFR 1222.22 - What records are required to provide for adequate documentation of agency business?
Code of Federal Regulations, 2010 CFR
2010-07-01
... Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF..., agencies must prescribe the creation and maintenance of records that: (a) Document the persons, places...
NASA Technical Reports Server (NTRS)
Shearer, Scott C. (Inventor); Proferes, John Nicholas (Inventor); Baker, Sr., Mitchell D. (Inventor); Reilly, Kenneth B. (Inventor); Tiwari, Vijai K. (Inventor)
2013-01-01
Systems, computer program products, and methods are disclosed for tracking an improvement event. An embodiment includes an event interface configured to receive a plurality of entries related to each of a plurality of improvement events. The plurality of entries includes a project identifier for the improvement event, a creation date, an objective, an action related to reaching the objective, and a first deadline related to the improvement event. A database interface is configured to store the plurality of entries in an event database.
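The disclosed entry set maps naturally onto a small data structure and an event-database insert; the sketch below follows the fields named in the abstract, with all other details (types, storage engine) assumed for illustration:

```python
# Store improvement-event entries in a simple event database.
import sqlite3
from dataclasses import dataclass

@dataclass
class ImprovementEvent:
    project_id: str
    creation_date: str
    objective: str
    action: str
    first_deadline: str

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE improvement_event (
    project_id TEXT, creation_date TEXT, objective TEXT,
    action TEXT, first_deadline TEXT)""")

ev = ImprovementEvent("PRJ-001", "2013-01-01",
                      "Reduce rework", "Audit weld process", "2013-03-01")
db.execute("INSERT INTO improvement_event VALUES (?,?,?,?,?)",
           (ev.project_id, ev.creation_date, ev.objective,
            ev.action, ev.first_deadline))
print(db.execute("SELECT * FROM improvement_event").fetchall())
```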
Huttin, Christine C; Liebman, Michael N
2013-01-01
This paper aims to discuss the economics of biobanking. Among the critical issues in evaluating the potential ROI for creation of a biobank are: scale (e.g. local, national, international), centralized versus virtual/distributed operation, degree of sample annotation and QC procedures, targeted end-users and uses, types of samples, and the potential characterization of both samples and annotations. The paper presents a review of cost models for an economic analysis of biobanking across its different steps: data collection (e.g. biospecimens at different types of sites), storage, transport and distribution, and information management for the different types of information involved (e.g. biological information such as cell, gene, and protein data). It also provides additional concepts for processing biospecimens from the laboratory to clinical practice and will help to identify how changing paradigms in translational medicine affect the economic modeling.
A PC-Based Free Text DSS for Health Care
NASA Technical Reports Server (NTRS)
Grams, Ralph R.; Buchanan, Paul; Massey, James K.; Jin, Ming
1987-01-01
A free-text Decision Support System (DSS) has been constructed for health care professionals that allows the analysis of complex medical cases and the creation of diagnostic lists of potential diseases for clinical evaluation. The system uses a PC-based text management system specifically designed for desktop operation. The texts employed in the decision support package include the Merck Manual (published by Merck Sharp & Dohme) and Control of Communicable Diseases in Man (published by the American Public Health Association). The background and design of the database are discussed along with a structured analysis procedure for handling a free-text DSS. A case study is presented to show the application of this technology, and conclusions are drawn in the summary that point to expanded areas of professional attention and new frontiers yet to be explored in this rapidly progressing field.
GIS Methodic and New Database for Magmatic Rocks. Application for Atlantic Oceanic Magmatism.
NASA Astrophysics Data System (ADS)
Asavin, A. M.
2001-12-01
Several geochemical databases are now available on the Internet. One of the main peculiarities of the stored geochemical information is the geographical coordinates of each sample. As a rule, the database software uses this spatial information only in user-interface search procedures. On the other hand, GIS (Geographical Information System) software, for example ARC/INFO, which is used for creating and analyzing special geological, geochemical and geophysical e-maps, is deeply involved with the geographical coordinates of samples. We have joined the capabilities of GIS systems and a relational geochemical database in special software. Our geochemical information system was created at the Vernadsky State Geological Museum and the Institute of Geochemistry and Analytical Chemistry in Moscow. We have tested the system with geochemical data on oceanic rocks from the Atlantic and Pacific oceans, about 10,000 chemical analyses. The GIS information content consists of e-map covers of the globe. The Atlantic Ocean covers include a gravity map (with a 2'' grid), ocean-bottom heat flow, altimetric maps, seismic activity, a tectonic map and a geological map. Combining this information content makes it possible to create new geochemical maps and to combine spatial analysis with numerical geochemical modeling of volcanic processes in an ocean segment. We have tested the information system using thick-client technology. The interface between the ArcView GIS system and the database resides in a special sequence of multiple SQL queries. The result of these queries is a simple DBF file with geographical coordinates. This file acts at the instant of creation of geochemical and other special e-maps of an oceanic region. We used a more complex method for geophysical data: from ArcView we created a grid cover for polygonal spatial geophysical information.
Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases
Sanderson, Lacey-Anne; Ficklin, Stephen P.; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A.; Bett, Kirstin E.; Main, Dorrie
2013-01-01
Tripal is an open-source, freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including 'Feature Map', 'Genetic', 'Publication', 'Project', 'Contact' and the 'Natural Diversity' modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. Database URL: http://tripal.info/ PMID:24163125
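A custom tab-delimited bulk load of the kind Tripal v1.1 adds amounts to mapping file columns onto Chado fields; the sketch below shows only the parsing step, with an invented column mapping (Tripal's own loader is configured through its web interface, not a script like this):

```python
# Map columns of a tab-delimited file onto Chado-style feature fields.
import csv

# Create a tiny sample file so the sketch is self-contained.
with open("features.tsv", "w") as f:
    f.write("geneA\tgeneA-1.0\tgene\ngeneB\tgeneB-1.0\tgene\n")

def parse_bulk_file(path, column_map):
    """Yield dicts of field -> value for each data row."""
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            yield {field: row[idx] for field, idx in column_map.items()}

# e.g. column 0 -> name, column 1 -> uniquename, column 2 -> type
mapping = {"name": 0, "uniquename": 1, "type": 2}
for record in parse_bulk_file("features.tsv", mapping):
    print(record)  # in practice: INSERT into the Chado feature table
```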
Code of Federal Regulations, 2013 CFR
2013-07-01
... Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE... documentation, agencies must prescribe the creation and maintenance of records that: (a) Document the persons...
NASA Astrophysics Data System (ADS)
Sheldon, W.; Chamblee, J.; Cary, R. H.
2013-12-01
Environmental scientists are under increasing pressure from funding agencies and journal publishers to release quality-controlled data in a timely manner, as well as to produce comprehensive metadata for submitting data to long-term archives (e.g. DataONE, Dryad and BCO-DMO). At the same time, the volume of digital data that researchers collect and manage is increasing rapidly due to advances in high frequency electronic data collection from flux towers, instrumented moorings and sensor networks. However, few pre-built software tools are available to meet these data management needs, and those tools that do exist typically focus on part of the data management lifecycle or one class of data. The GCE Data Toolbox has proven to be both a generalized and effective software solution for environmental data management in the Long Term Ecological Research Network (LTER). This open source MATLAB software library, developed by the Georgia Coastal Ecosystems LTER program, integrates metadata capture, creation and management with data processing, quality control and analysis to support the entire data lifecycle. Raw data can be imported directly from common data logger formats (e.g. SeaBird, Campbell Scientific, YSI, Hobo), as well as delimited text files, MATLAB files and relational database queries. Basic metadata are derived from the data source itself (e.g. parsed from file headers) and by value inspection, and then augmented using editable metadata templates containing boilerplate documentation, attribute descriptors, code definitions and quality control rules. Data and metadata content, quality control rules and qualifier flags are then managed together in a robust data structure that supports database functionality and ensures data validity throughout processing. A growing suite of metadata-aware editing, quality control, analysis and synthesis tools are provided with the software to support managing data using graphical forms and command-line functions, as well as developing automated workflows for unattended processing. Finalized data and structured metadata can be exported in a wide variety of text and MATLAB formats or uploaded to a relational database for long-term archiving and distribution. The GCE Data Toolbox can be used as a complete, light-weight solution for environmental data and metadata management, but it can also be used in conjunction with other cyber infrastructure to provide a more comprehensive solution. For example, newly acquired data can be retrieved from a Data Turbine or Campbell LoggerNet Database server for quality control and processing, then transformed to CUAHSI Observations Data Model format and uploaded to a HydroServer for distribution through the CUAHSI Hydrologic Information System. The GCE Data Toolbox can also be leveraged in analytical workflows developed using Kepler or other systems that support MATLAB integration or tool chaining. This software can therefore be leveraged in many ways to help researchers manage, analyze and distribute the data they collect.
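A single quality-control rule of the sort the toolbox applies can be expressed compactly; the sketch below restates the idea in Python/pandas (the toolbox itself is MATLAB), with invented thresholds and column names:

```python
# Rule-based QC: assign a qualifier flag per value from a range check.
import pandas as pd

data = pd.DataFrame({"salinity": [35.1, 34.9, -9.0, 36.2]})

def qc_flags(series, lo, hi):
    """Return a flag per value: '' = ok, 'Q' = range-rule violation."""
    return series.apply(lambda v: "" if lo <= v <= hi else "Q")

data["salinity_flag"] = qc_flags(data["salinity"], lo=0.0, hi=42.0)
print(data)
```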
The Mayo Clinic Value Creation System.
Swensen, Stephen J; Dilling, James A; Harper, C Michel; Noseworthy, John H
2012-01-01
The authors present Mayo Clinic's Value Creation System, a coherent systems engineering approach to delivering a single high-value practice. There are 4 tightly linked, interdependent phases of the system: alignment, discovery, managed diffusion, and measurement. The methodology is described and examples of the results to date are presented. The Value Creation System has been demonstrated to improve the quality of patient care while reducing costs and increasing productivity.
Hopkins, Mark E; Summers-Ables, Joy E; Clifton, Shari C; Coffman, Michael A
2011-06-01
To make electronic resources available to library users while effectively harnessing intellectual capital within the library, ultimately fostering the library's use of technology to interact asynchronously with its patrons (users). The methods used in the project included: (1) developing a new library website to facilitate the creation, management, accessibility, maintenance and dissemination of library resources; and (2) establishing ownership by those who participated in the project, while creating effective work allocation strategies through the implementation of a content management system that allowed the library to manage cost, complexity and interoperability. Preliminary results indicate that contributors to the system benefit from an increased understanding of the library's resources and add content valuable to library patrons. These strategies have helped promote the manageable creation and maintenance of electronic content in accomplishing the library's goal of interacting with its patrons. Establishment of a contributive system for adding to the library's electronic resources and electronic content has been successful. Further work will look at improving asynchronous interaction, particularly highlighting accessibility of electronic content and resources. © 2010 The authors. Health Information and Libraries Journal © 2010 Health Libraries Group.
Salati, Michele; Pompili, Cecilia; Refai, Majed; Xiumè, Francesco; Sabbatini, Armando; Brunelli, Alessandro
2014-06-01
The aim of the present study was to verify whether the implementation of an electronic health record (EHR) in our thoracic surgery unit allows the creation of a high-quality clinical database while saving time and costs. Before August 2011, multiple individuals compiled the on-paper documents/records and a single data manager inputted selected data into the database (traditional database, tDB). Since the adoption of an EHR in August 2011, multiple individuals have been responsible for compiling the EHR, which automatically generates a real-time database (EHR-based database, eDB) without the need for a data manager. During the initial period of implementation of the EHR, periodic meetings were held with all physicians involved in its use in order to monitor and standardize the data registration process. Data quality of the first 100 anatomical lung resections recorded in the eDB was assessed by measuring the total number of missing values (MVs: existing but non-reported values) and inaccurate values (wrong data) occurring in 95 core variables. The average MV rate of the eDB was compared with that occurring in the same variables of the last 100 records registered in the tDB. A learning curve was constructed by plotting the number of MVs in the eDB and tDB with the patients arranged by date of registration. The tDB and eDB had similar MVs (0.74 vs 1, P = 0.13). The learning curve showed an initial phase, including about 35 records, where the MV rate in the eDB was higher than that in the tDB (1.9 vs 0.74, P = 0.03), and a subsequent phase, where the MV rate was similar in the two databases (0.7 vs 0.74, P = 0.6). The inaccuracy rate across these two phases in the eDB was stable (0.5 vs 0.3, P = 0.3). Using the EHR saved an average of 9 min per patient, totalling 15 h saved for obtaining a dataset of 100 patients with respect to the tDB. The implementation of the EHR streamlined the process of clinical data recording. It saved time and human resource costs without compromising the quality of data. © The Author 2014. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Information technology and public health management of disasters--a model for South Asian countries.
Mathew, Dolly
2005-01-01
This paper highlights the use of information technology (IT) in disaster management and the public health management of disasters. Effective health response to disasters depends on three important lines of action: (1) disaster preparedness; (2) emergency relief; and (3) management of disasters. This is facilitated by the presence of modern communication and space technology, especially the Internet and remote sensing satellites, which have made possible the use of databases, knowledge bases, geographic information systems (GIS), management information systems (MIS), information transfer, and online connectivity in the area of disaster management and medicine. This paper suggests a conceptual model called "The Model for Public Health Management of Disasters for South Asia". This Model visualizes the use of IT in the public health management of disasters by setting up the Health and Disaster Information Network and Internet Community Centers, which will facilitate cooperation among all those working in the areas of disaster management and emergency medicine. The suggested infrastructure would benefit governments, non-government organizations, institutions working in the areas of disaster and emergency medicine, professionals, the community, and all others associated with disaster management and emergency medicine. The creation of such an infrastructure will enable the rapid transfer of information, data, knowledge, and online connectivity from top officials to grassroots organizations, and also among these countries regionally. This Model may be debated, modified, and tested further in the field to suit national and local conditions. It is hoped that this exercise will result in a viable and practical model for use in the public health management of disasters by South Asian countries.
Flash Foods' Job Creation and Petroleum Independence with E85
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walk, Steve
The Protec Fuel Management project's objectives are to help design, build, provide, promote, and supply biofuels for greater energy independence, national security, and domestic economic growth through job creation, infrastructure projects, and supply-chain business stimulus.
NASA Astrophysics Data System (ADS)
Hussain, M.; Chen, D.
2014-11-01
Buildings, the basic units of an urban landscape, host most of its socio-economic activities and play an important role in the creation of urban land-use patterns. The spatial arrangement of different building types creates varied urban land-use clusters which can provide insight into the relationships between social, economic, and living spaces. The classification of such urban clusters can help in policy-making and resource management. In many countries, including the UK, no national-level cadastral database containing information on individual building types exists in the public domain. In this paper, we present a framework for inferring the functional types of buildings based on the analysis of their form (e.g. geometrical properties such as area and perimeter, and layout) and spatial relationships, using a large topographic and address-based GIS database. Machine learning algorithms along with exploratory spatial analysis techniques are used to create the classification rules. The classification is extended to two further levels based on the functions (use) of buildings derived from address-based data. The developed methodology was applied to the Manchester metropolitan area using the Ordnance Survey's MasterMap®, a large-scale topographic and address-based dataset available for the UK.
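A toy version of the form-based inference step might use a decision tree over simple geometric attributes, as below; the features, labels, and training values are invented, and the study derives its rules from OS MasterMap data at a far larger scale:

```python
# Classify building type from geometric form features.
from sklearn.tree import DecisionTreeClassifier

# Feature vectors: [area_m2, perimeter_m, shared_walls]
X = [[80, 38, 2], [95, 42, 1], [450, 110, 0], [1200, 180, 0]]
y = ["terraced house", "semi-detached house", "commercial", "industrial"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[90, 40, 2]]))  # likely a terraced house
```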
United States Army Medical Materiel Development Activity: 1997 Annual Report.
1997-01-01
business planning and execution information management system (Project Management Division Database (PMDD) and Product Management Database System (PMDS)...MANAGEMENT • Project Management Division Database (PMDD), Product Management Database System (PMDS), and Special Users Database System: The existing...System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was
NASA Technical Reports Server (NTRS)
Fromm, Michael; Pitts, Michael; Alfred, Jerome
2000-01-01
This report summarizes the project team's activity and accomplishments during the period 12 February 1999 - 12 February 2000. The primary objective of this project was to create and test a generic algorithm for detecting polar stratospheric clouds (PSCs), an algorithm that would permit creation of a unified, long-term PSC database from a variety of solar occultation instruments that measure aerosol extinction near 1000 nm. The second objective was to make a database of PSC observations and certain relevant related datasets. In this report we describe the algorithm, the data we are making available, and user access options. The remainder of this document provides the details of the algorithm and the database offering.
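The report does not spell out the detection rule here, but a deliberately simple stand-in conveys the flavor: flag altitude bins whose 1000 nm aerosol extinction exceeds a background envelope. The threshold logic below is an assumption for illustration only, not the project's published algorithm:

```python
# Flag PSC-like enhancements in an occultation extinction profile.
import numpy as np

def detect_psc(extinction, background, factor=2.0):
    """Return a boolean mask of altitude bins flagged as PSC."""
    return np.asarray(extinction) > factor * np.asarray(background)

ext = [1e-4, 5e-4, 2.5e-3, 8e-4]   # km^-1, one value per altitude bin
bkg = [2e-4, 2e-4, 2e-4, 2e-4]
print(detect_psc(ext, bkg))        # [False  True  True  True]
```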
Automated knowledge base development from CAD/CAE databases
NASA Technical Reports Server (NTRS)
Wright, R. Glenn; Blanchard, Mary
1988-01-01
Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge-based systems that are inherently free of error.
ERIC Educational Resources Information Center
de Freitas Guilhermino Trindade, Daniela; Guimaraes, Cayley; Antunes, Diego Roberto; Garcia, Laura Sanchez; Lopes da Silva, Rafaella Aline; Fernandes, Sueli
2012-01-01
This study analysed the role of knowledge management (KM) tools used to cultivate a community of practice (CP) in its knowledge creation (KC), transfer, and learning processes. The goal of these observations was to determine the requirements that KM tools should address for the specific CP formed by Deaf and non-Deaf members. The CP studied is a…
Trezza, Alfonso; Bernini, Andrea; Langella, Andrea; Ascher, David B; Pires, Douglas E V; Sodi, Andrea; Passerini, Ilaria; Pelo, Elisabetta; Rizzo, Stanislao; Niccolai, Neri; Spiga, Ottavia
2017-10-01
The aim of this article is to report the investigation of the structural features of ABCA4, a protein associated with a genetic retinal disease. A new database collecting knowledge of ABCA4 structure may facilitate predictions about the possible functional consequences of gene mutations observed in clinical practice. In order to correlate structural and functional effects of the observed mutations, the structure of mouse P-glycoprotein was used as a template for homology modeling. The obtained structural information and genetic data are the basis of our relational database (ABCA4Database). Sequence variability among all ABCA4-deposited entries was calculated and reported as Shannon entropy score at the residue level. The three-dimensional model of ABCA4 structure was used to locate the spatial distribution of the observed variable regions. Our predictions from structural in silico tools were able to accurately link the functional effects of mutations to phenotype. The development of the ABCA4Database gathers all the available genetic and structural information, yielding a global view of the molecular basis of some retinal diseases. ABCA4 modeled structure provides a molecular basis on which to analyze protein sequence mutations related to genetic retinal disease in order to predict the risk of retinal disease across all possible ABCA4 mutations. Additionally, our ABCA4 predicted structure is a good starting point for the creation of a new data analysis model, appropriate for precision medicine, in order to develop a deeper knowledge network of the disease and to improve the management of patients.
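The per-residue Shannon entropy score mentioned above is straightforward to reproduce; a small sketch over a toy alignment (the real input would be the ABCA4-deposited sequence entries, which are not shown here):

```python
# Sketch of the per-residue Shannon entropy score over a toy alignment; the
# real input would be all ABCA4-deposited sequence entries.
import math
from collections import Counter

def column_entropy(column):
    """Shannon entropy H = -sum(p * log2(p)) of one alignment column."""
    counts = Counter(column)
    total = len(column)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

alignment = ["MKLV", "MKIV", "MRLV"]  # hypothetical pre-aligned sequences
for i, col in enumerate(zip(*alignment), start=1):
    print(f"residue {i}: H = {column_entropy(col):.3f} bits")
```

Fully conserved columns score 0 bits; variable columns score higher, which is what lets the authors map variability onto the modeled structure.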
Towards the Interoperability of Web, Database, and Mass Storage Technologies for Petabyte Archives
NASA Technical Reports Server (NTRS)
Moore, Reagan; Marciano, Richard; Wan, Michael; Sherwin, Tom; Frost, Richard
1996-01-01
At the San Diego Supercomputer Center, a massive data analysis system (MDAS) is being developed to support data-intensive applications that manipulate terabyte-sized data sets. The objective is to support scientific application access to data whether it is located at a Web site, stored as an object in a database, and/or stored in an archival storage system. We are developing a suite of demonstration programs which illustrate how Web, database (DBMS), and archival storage (mass storage) technologies can be integrated. An application presentation interface is being designed that integrates data access to all of these sources. We have developed a data movement interface between the Illustra object-relational database and the NSL UniTree archival storage system running in a production mode at the San Diego Supercomputer Center. With this interface, an Illustra client can transparently access data on UniTree under the control of the Illustra DBMS server. The current implementation is based on the creation of a new DBMS storage manager class, and a set of library functions that allow the manipulation and migration of data stored as Illustra 'large objects'. We have extended this interface to allow a Web client application to control data movement between its local disk, the Web server, the DBMS Illustra server, and the UniTree mass storage environment. This paper describes some of the current approaches to successfully integrating these technologies. This framework is measured against a representative sample of environmental data extracted from the San Diego Bay Environmental Data Repository. Practical lessons are drawn and critical research areas are highlighted.
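The storage pattern described, DBMS-controlled migration of 'large objects' to an archive tier, can be sketched generically; in this sketch sqlite3 and a local directory stand in for the legacy Illustra and UniTree systems, and the schema is invented:

```python
# Generic sketch of DBMS-controlled migration of 'large objects' to an archive
# tier. sqlite3 and a local directory stand in for the legacy Illustra server
# and the UniTree mass storage system; the schema and paths are invented.
import sqlite3, pathlib

ARCHIVE = pathlib.Path("archive")
ARCHIVE.mkdir(exist_ok=True)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE large_objects (id INTEGER PRIMARY KEY, payload BLOB, archive_path TEXT)")

def migrate(obj_id: int) -> None:
    """Move one object's bytes out of the database into the archive tier,
    leaving a path stub behind so the DBMS can still resolve the object."""
    (payload,) = db.execute(
        "SELECT payload FROM large_objects WHERE id = ?", (obj_id,)).fetchone()
    path = ARCHIVE / f"lob_{obj_id}.bin"
    path.write_bytes(payload)
    db.execute("UPDATE large_objects SET payload = NULL, archive_path = ? WHERE id = ?",
               (str(path), obj_id))
    db.commit()

db.execute("INSERT INTO large_objects (id, payload) VALUES (1, ?)", (b"\x00" * 1024,))
migrate(1)
print(db.execute("SELECT archive_path FROM large_objects WHERE id = 1").fetchone())
```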
48 CFR 1509.170-3 - Applicability.
Code of Federal Regulations, 2010 CFR
2010-10-01
... simplified acquisition procedures do not require the creation or existence of a formal database for past... officers shall complete all contractor performance evaluations by use of the National Institutes of Health...) Construction acquisitions shall be completed by use of the NIH construction module. Performance evaluations for...
41 CFR 102-193.20 - What are the specific agency responsibilities for records management?
Code of Federal Regulations, 2011 CFR
2011-01-01
... REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102-193.20 What are the..., irrespective of the medium (e.g., paper, electronic, or other). (e) Control the creation, maintenance, and use...
41 CFR 102-193.20 - What are the specific agency responsibilities for records management?
Code of Federal Regulations, 2012 CFR
2012-01-01
... REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102-193.20 What are the..., irrespective of the medium (e.g., paper, electronic, or other). (e) Control the creation, maintenance, and use...
41 CFR 102-193.20 - What are the specific agency responsibilities for records management?
Code of Federal Regulations, 2013 CFR
2013-07-01
... REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102-193.20 What are the..., irrespective of the medium (e.g., paper, electronic, or other). (e) Control the creation, maintenance, and use...
41 CFR 102-193.20 - What are the specific agency responsibilities for records management?
Code of Federal Regulations, 2014 CFR
2014-01-01
... REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102-193.20 What are the..., irrespective of the medium (e.g., paper, electronic, or other). (e) Control the creation, maintenance, and use...
Factors shaping the evolution of electronic documentation systems
NASA Technical Reports Server (NTRS)
Dede, Christopher J.; Sullivan, Tim R.; Scace, Jacque R.
1990-01-01
The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System emerges when one focuses on how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information intensive environments.
NASA Astrophysics Data System (ADS)
Kim, Duk-hyun; Lee, Hyoung-Jin
2018-04-01
A study of an efficient aerodynamic database modeling method was conducted. Creation of a database using the periodicity and symmetry characteristics of missile aerodynamic coefficients was investigated to minimize the number of wind tunnel test cases. In addition, studies were carried out on how to generate the aerodynamic database when the periodicity changes due to installation of a protuberance, and on how to conduct a zero calibration. Depending on the missile configuration, the required number of test cases changes, and some tests can be omitted. A database of aerodynamic coefficients for control-surface deflection angles can be constructed using phase shift. The validity of the modeling method was demonstrated by confirming that aerodynamic coefficients calculated with it agreed with wind tunnel test results.
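A small sketch of the periodicity idea: for a cruciform airframe, a coefficient that repeats with a 90-degree roll period can be evaluated at untested angles by phase-shifting into the measured interval. The period, test angles, and coefficient values below are illustrative assumptions, not the paper's data:

```python
# Sketch of reusing measured data via periodicity: angles outside the tested
# quadrant are folded back into it before interpolation. The 90-degree period
# and the toy coefficient table are assumptions for illustration only.
import numpy as np

phi_tested = np.array([0.0, 22.5, 45.0, 67.5, 90.0])   # tested roll angles (deg)
cn_tested  = np.array([0.00, 0.12, 0.17, 0.12, 0.00])  # measured coefficient

def cn(phi_deg, period=90.0):
    """Evaluate the coefficient at any roll angle by phase-shifting into the
    measured interval and interpolating between wind-tunnel points."""
    phi_folded = phi_deg % period
    return np.interp(phi_folded, phi_tested, cn_tested)

for phi in (10.0, 135.0, 290.0):
    print(f"phi={phi:6.1f} deg -> Cn={cn(phi):.3f}")
```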
NASA Astrophysics Data System (ADS)
Archetti, Renata; Vacchi, Matteo; Carniel, Sandro; Benetazzo, Alvise
2013-04-01
Measuring the location of the shoreline and monitoring foreshore changes through time represent a fundamental task for correct coastal management at many sites around the world. Several authors have demonstrated video systems to be an essential tool for increasing the amount of data available for coastline management. These systems typically sample at least once per hour and can provide long-term datasets showing variations over days, events, months, seasons, and years. In the past few years, owing to the wide availability of video cameras at relatively low prices, the use of video cameras and video image analysis for environmental monitoring has increased significantly. Although video monitoring systems have often been used in research, they are most often applied for practical purposes, including: i) identification and quantification of shoreline erosion; ii) assessment of coastal protection structure and/or beach nourishment performance; iii) basic input to engineering design in the coastal zone; and iv) support for integrated numerical model validation. Here we present the guidelines for the creation of a new video monitoring network near the Jesolo beach (NW Adriatic Sea, Italy). Within this 10 km-long tourist district several engineering structures have been built in recent years with the aim of solving urgent local erosion problems; as a result, almost all types of protection structures are present at this site: groynes and detached breakwaters. The area investigated experienced severe problems of coastal erosion in the past decades, including a major one in November 2012. The activity is planned within the framework of the RITMARE project, which also includes other monitoring and scientific activities (bathymetry surveys, wave and current measurements, hydrodynamic and morphodynamic modeling). This contribution focuses on best practices to be adopted in the creation of the video monitoring system, and briefly describes the architectural design of the network, the creation of a database of images, the information extracted by the video monitoring, and its integration with other data.
Smol, Marzena; Kulczycka, Joanna; Kowalski, Zygmunt
2016-12-15
The aim of this research is to present the possibility of using the sewage sludge ash (SSA) generated in incineration plants as a secondary source of phosphorus (P). The importance of issues related to P recovery from waste materials stems from European Union (EU) legislation, which designates phosphorus as a critical raw material (CRM). Due to the risk of a supply shortage and its impact on the economy, which is greater than for other raw materials, proper management of phosphorus resources is required in order to achieve global P security. Based on available databases and literature, an analysis of the potential use of SSA for P recovery in Poland was conducted. Currently, approx. 43,000 Mg/year of SSA is produced in large and small incineration plants, and according to the Polish National Waste Management Plan 2014 (NWMP) further steady growth is predicted. This indicates a great potential to recycle phosphorus from SSA and to reintroduce it into the value chain as a component of fertilisers that can be applied directly to fields. The amount of SSA generated varies between installations, both large and small, and this contributes to the fact that new and different P-recovery technology solutions must be developed and put into use in the years to come (e.g. mobile/stationary P-recovery installations). The creation of a database focused on the collection and sharing of data about the amount of P recovered in EU and Polish installations is identified as a helpful tool in the development of an efficient P management model for Poland. Copyright © 2016 Elsevier Ltd. All rights reserved.
Back to Basics: The Effect of Healthy Diet and Exercise on Chronic Disease Management.
Allison, Robert L
2017-01-01
The increase in obesity rates in the U.S. and other less developed industrial countries has led to a worldwide epidemic of chronic disease states. Increased obesity rates are implicated in treatment failures for illnesses such as coronary artery disease, diabetes, heart failure, hypertension, and cancer. Effective prevention of obesity through diet and exercise contributes to the successful medical management of multiple chronic disease states. We review the last 10 years of literature (2006-2016) on the effects of diet and exercise as they relate to the prevention of chronic disease, drawing on the Cochrane Database of Systematic Reviews and other original articles retrieved through the National Center for Biotechnology Information database. Success in the management of chronic disease lies in a physician's ability to educate patients and in effective utilization of the resources available to that provider. Patient accountability for individual chronic disease states is a problem related to patient education, patient participation, access to care, and payment resources. Financial, racial, and socioeconomic barriers must be addressed in the creation of an effective plan. Teaching on the importance of diet and exercise needs to occur early in life and be continually reinforced for successful outcomes. In the last 10 years, no significant study has suggested a single successful model of diet and exercise that can control chronic diseases. Cardiac, diabetic, and cancer patients have, however, shown reduced hospital admissions, improved diabetic control, and improved quality-of-life scores related to coordinated diet and exercise programs. Patients may be unwilling or unable to be accountable for health care coordination. The development of exercise and obesity prevention policies and the adjustment of financial rewards to health care organizations will have a major impact on implementing these programs over the next 10 years.
NASA Technical Reports Server (NTRS)
Orcutt, John M.; Brenton, James C.
2016-01-01
An accurate database of meteorological data is essential for designing any aerospace vehicle and for preparing launch commit criteria. Meteorological instrumentation was recently placed on the three Lightning Protection System (LPS) towers at Kennedy Space Center (KSC) Launch Complex 39B (LC-39B), providing a unique meteorological dataset at the launch complex over an extensive altitude range. Data records of temperature, dew point, relative humidity, wind speed, and wind direction are produced at 40, 78, 116, and 139 m at each tower. The Marshall Space Flight Center Natural Environments Branch (EV44) received an archive that consists of one-minute averaged measurements for the period of record of January 2011 - April 2015. However, before the received database could be used, EV44 needed to remove erroneous data through a comprehensive quality control (QC) process. The QC process applied to the LPS towers' meteorological data is similar to other QC processes developed by EV44, which were used in the creation of meteorological databases for other towers at KSC; it has been modified specifically for use with the LPS tower database. The QC process first includes a check of each individual sensor. This check includes removing any unrealistic data and checking the temporal consistency of each variable. Next, data from all three sensors at each height are checked against each other, checked against climatology, and checked for sensors that erroneously report a constant value. Then, a vertical consistency check of each variable at each tower is completed. Last, the upwind sensor at each level is selected to minimize the influence of the towers and other structures at LC-39B on the measurements; the selection process implemented a study of tower-induced turbulence. This paper describes in detail the QC process, QC results, and the attributes of the LPS towers meteorological database.
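A minimal sketch of the first two QC stages (gross-limit and temporal-consistency checks) on one-minute data; the limits and step threshold are placeholders, not EV44's actual criteria:

```python
# Sketch of the first two QC stages described above, applied to one-minute
# temperature averages: a gross-limit check and a temporal-consistency (step)
# check. The limits and the step threshold are placeholders, not EV44's values.
import numpy as np

def qc_flags(series, lo=-20.0, hi=40.0, max_step=5.0):
    """Boolean mask of suspect samples: outside the physically plausible range,
    or jumping more than `max_step` between consecutive one-minute values.
    In practice such checks run iteratively, since a spike also inflates the
    step of its neighbours."""
    series = np.asarray(series, dtype=float)
    out_of_range = (series < lo) | (series > hi)
    steps = np.abs(np.diff(series, prepend=series[0]))
    return out_of_range | (steps > max_step)

temps = [22.1, 22.3, 22.2, 45.0, 22.4, -99.0, 22.5]  # deg C; spike and sentinel
print(qc_flags(temps))  # the spike, the -99.0 sentinel, and their neighbours flag
```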
ASIS '99 Knowledge: Creation, Organization and Use, Part III: Plenary Sessions.
ERIC Educational Resources Information Center
Proceedings of the ASIS Annual Meeting, 1999
1999-01-01
Describes the following sessions: "Knowledge Management: A Celebration of Humans Connected with Quality Information Objects" (Plenary Session 1); "Intellectual Property Rights and the Emerging Information Infrastructure" (Plenary Session 2); and "Knowledge: Creation, Organization and Use" (Conference Wrap-up Session). (AEF)
The state of the art of medical imaging technology: from creation to archive and back.
Gao, Xiaohong W; Qian, Yu; Hui, Rui
2011-01-01
Medical imaging has lent itself well to modern medicine and revolutionized the medical industry in the last 30 years. Stemming from the discovery of the X-ray by Nobel laureate Wilhelm Roentgen, radiology was born, leading to the creation of large quantities of digital images as opposed to film-based media. While this rich supply of images provides immeasurable information that would otherwise not be possible to obtain, medical images pose great challenges: archiving them safe from corruption, loss, and misuse; retrieving them from databases of huge size with varying forms of metadata; and reusing them when new tools for data mining and new media for data storage become available. This paper provides a summative account of the creation of medical imaging tomography, the development of image archiving systems, and innovation from the existing pools of acquired image data. The focus of this paper is on content-based image retrieval (CBIR), in particular for 3D images, exemplified by our online e-learning system, MIRAGE, home to a repository of medical images from a variety of domains and of different dimensions. In terms of novelties, facilities for CBIR of 3D images, coupled with fully automatic image annotation, have been developed and implemented in the system, pointing towards future versatile, flexible, and sustainable medical image databases that can reap new innovations.
Western Europe--A Trading Game.
ERIC Educational Resources Information Center
Cox, Ann Curtis
1991-01-01
Presents a geography program to show students why the European Community was formed. Involves student research of economic data, creation of a computer database on the European Community, and simulation of trading. Emphasizes geographic themes of movement, region formation, and change in response to economic forces. Includes game rules, sample…
Converting Student Support Services to Online Delivery.
ERIC Educational Resources Information Center
Brigham, David E.
2001-01-01
Uses a systems framework to analyze the creation of student support services for distance education at Regents College: electronic advising, electronic peer network, online course database, online bookstore, virtual library, and alumni services website. Addresses the issues involved in converting distance education programs from print-based and…
PROFESS: a PROtein Function, Evolution, Structure and Sequence database
Triplet, Thomas; Shortridge, Matthew D.; Griep, Mark A.; Stark, Jaime L.; Powers, Robert; Revesz, Peter
2010-01-01
The proliferation of biological databases and the easy access enabled by the Internet are having a beneficial impact on the biological sciences and transforming the way research is conducted. There are ∼1100 molecular biology databases dispersed throughout the Internet. To assist in the functional, structural and evolutionary analysis of the large number of novel proteins continually identified from whole-genome sequencing, we introduce the PROFESS (PROtein Function, Evolution, Structure and Sequence) database. Our database is designed to be versatile and expandable and will not confine analysis to a pre-existing set of data relationships. A fundamental component of this approach is the development of an intuitive query system that incorporates a variety of similarity functions capable of generating data relationships not conceived during the creation of the database. The utility of PROFESS is demonstrated by the analysis of the structural drift of homologous proteins and the identification of potential pancreatic cancer therapeutic targets based on the observation of protein–protein interaction networks. Database URL: http://cse.unl.edu/∼profess/ PMID:20624718
Vascular Access Creation before Hemodialysis Initiation and Use: A Population-Based Cohort Study
Al-Jaishi, Ahmed A.; Lok, Charmaine E.; Garg, Amit X.; Zhang, Joyce C.
2015-01-01
Background and objectives: In Canada, approximately 17% of patients use an arteriovenous access (fistula or arteriovenous graft) at commencement of hemodialysis, despite guideline recommendations promoting its timely creation and use. It is unclear if this low pattern of use is attributable to the lack of surgical creation or a high nonuse rate. Design, setting, participants, & measurements: Using large health care databases in Ontario, Canada, a population-based cohort of adult patients (≥18 years old) who initiated hemodialysis as their first form of RRT between 2001 and 2010 was studied. The aims were to (1) estimate the proportion of patients who had an arteriovenous access created before starting hemodialysis and the proportion who successfully used it at hemodialysis start, (2) test for secular trends in arteriovenous access creation, and (3) estimate the effect of late nephrology referral and patient characteristics on arteriovenous access creation. Results: There were 17,183 patients on incident hemodialysis. The mean age was 65.8 years, 60% were men, and 40% were referred late to a nephrologist; 27% of patients (4556 of 17,183) had one or more arteriovenous accesses created, and the median time between arteriovenous access creation and hemodialysis start was 184 days. When late referrals were excluded, 39% of patients (4007 of 10,291) had one or more arteriovenous accesses created, and 27% of patients (2724 of 10,291) used the arteriovenous access. Since 2001, there has been a decline in arteriovenous access creation before hemodialysis initiation. Women, higher numbers of comorbidities, and rural residence were consistently associated with lower rates of arteriovenous access creation. These results persisted even after removing patients with <6 months nephrology care or who had AKI 6 months before starting hemodialysis. Conclusions: In Canada, arteriovenous access creation before hemodialysis initiation is low, even among patients followed by a nephrologist. Better understanding of the barriers and influencers of arteriovenous access creation is needed to inform both clinical care and guidelines. PMID:25568219
Vascular access creation before hemodialysis initiation and use: a population-based cohort study.
Al-Jaishi, Ahmed A; Lok, Charmaine E; Garg, Amit X; Zhang, Joyce C; Moist, Louise M
2015-03-06
In Canada, approximately 17% of patients use an arteriovenous access (fistula or arteriovenous graft) at commencement of hemodialysis, despite guideline recommendations promoting its timely creation and use. It is unclear if this low pattern of use is attributable to the lack of surgical creation or a high nonuse rate. Using large health care databases in Ontario, Canada, a population-based cohort of adult patients (≥18 years old) who initiated hemodialysis as their first form of RRT between 2001 and 2010 was studied. The aims were to (1) estimate the proportion of patients who had an arteriovenous access created before starting hemodialysis and the proportion who successfully used it at hemodialysis start, (2) test for secular trends in arteriovenous access creation, and (3) estimate the effect of late nephrology referral and patient characteristics on arteriovenous access creation. There were 17,183 patients on incident hemodialysis. The mean age was 65.8 years, 60% were men, and 40% were referred late to a nephrologist; 27% of patients (4556 of 17,183) had one or more arteriovenous accesses created, and the median time between arteriovenous access creation and hemodialysis start was 184 days. When late referrals were excluded, 39% of patients (4007 of 10,291) had one or more arteriovenous accesses created, and 27% of patients (2724 of 10,291) used the arteriovenous access. Since 2001, there has been a decline in arteriovenous access creation before hemodialysis initiation. Women, higher numbers of comorbidities, and rural residence were consistently associated with lower rates of arteriovenous access creation. These results persisted even after removing patients with <6 months nephrology care or who had AKI 6 months before starting hemodialysis. In Canada, arteriovenous access creation before hemodialysis initiation is low, even among patients followed by a nephrologist. Better understanding of the barriers and influencers of arteriovenous access creation is needed to inform both clinical care and guidelines. Copyright © 2015 by the American Society of Nephrology.
Creation of the NaSCoRD Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Denman, Matthew R.; Jankovsky, Zachary Kyle; Stuart, William
This report was written as part of a United States Department of Energy (DOE), Office of Nuclear Energy, Advanced Reactor Technologies program funded project to re-create the capabilities of the legacy Centralized Reliability Database Organization (CREDO) database. The CREDO database provided a record of component design and performance documentation across various systems that used sodium as a working fluid. Regaining this capability will allow the DOE complex and the domestic sodium reactor industry to better understand how previous systems were designed and built, for use in improving the design and operations of future loops. The contents of this report include: an overview of the current state of domestic sodium reliability databases; a summary of the ongoing effort to improve, understand, and process the CREDO information; a summary of the initial efforts to develop a unified sodium reliability database called the Sodium System Component Reliability Database (NaSCoRD); and an explanation of how potential users can access the domestic sodium reliability databases and of the type of information that can be accessed from them.
Design and Implementation of a Perioperative Surgical Home at a Veterans Affairs Hospital.
Walters, Tessa L; Howard, Steven K; Kou, Alex; Bertaccini, Edward J; Harrison, T Kyle; Kim, T Edward; Shafer, Audrey; Brun, Carlos; Funck, Natasha; Siegel, Lawrence C; Stary, Erica; Mariano, Edward R
2016-06-01
The innovative Perioperative Surgical Home model aims to optimize the outcomes of surgical patients by leveraging the expertise and leadership of physician anesthesiologists, but there is a paucity of practical examples to follow. Veterans Affairs health care, the largest integrated system in the United States, may be the ideal environment in which to explore this model. We present our experience implementing Perioperative Surgical Home at one tertiary care university-affiliated Veterans Affairs hospital. This process involved initiating consistent postoperative patient follow-up beyond the postanesthesia care unit, a focus on improving in-hospital acute pain management, creation of an accessible database to track outcomes, developing new clinical pathways, and recruiting additional staff. Today, our Perioperative Surgical Home facilitates communication between various services involved in the care of surgical patients, monitoring of patient outcomes, and continuous process improvement. © The Author(s) 2015.
New Structures for the Effective Dissemination of Knowledge in an Enterprise.
ERIC Educational Resources Information Center
Kok, J. Andrew
2000-01-01
Discusses the creation of knowledge enterprises. Highlights include knowledge creation and sharing; networked organizational structures; structures of knowledge organization; competitive strategies; new structures to manage knowledge; boundary crossing; multi-skilled teams; communities of interest or practice; and dissemination of knowledge in an…
Pattern-based information portal for business plan co-creation
NASA Astrophysics Data System (ADS)
Bontchev, Boyan; Ruskov, Petko; Tanev, Stoyan
2011-03-01
Creating business plans helps entrepreneurs identify business opportunities and commit the necessary resources for process evolution. Applying patterns in business plan creation facilitates the identification of effective solutions that were adopted in the past and may provide a basis for adopting similar solutions in the future within a given business context. The article presents the system design of an information portal for business plan co-creation based on patterns. The portal will provide start-ups and entrepreneurs with ready-to-modify business plan patterns in order to help them develop effective and efficient business plans, and will facilitate co-experimenting and co-learning more frequently and faster. Moreover, the paper focuses on the software architecture of the pattern-based portal and explains the functionality of its modules, namely the pattern designer, pattern repository services, and agent-based pattern implementers. It explains their role in business process co-creation, in storing and managing formally described patterns, and in selecting the patterns best suited for a specific business case. Thus, innovative entrepreneurs will be guided by the portal in co-writing winning business plans and staying competitive in the present-day dynamic globalized environment.
Cloudbus Toolkit for Market-Oriented Cloud Computing
NASA Astrophysics Data System (ADS)
Buyya, Rajkumar; Pandey, Suraj; Vecchiola, Christian
This keynote paper: (1) presents the 21st century vision of computing and identifies various IT paradigms promising to deliver computing as a utility; (2) defines the architecture for creating market-oriented Clouds and computing atmosphere by leveraging technologies such as virtual machines; (3) provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; (4) presents the work carried out as part of our new Cloud Computing initiative, called Cloudbus: (i) Aneka, a Platform as a Service software system containing SDK (Software Development Kit) for construction of Cloud applications and deployment on private or public Clouds, in addition to supporting market-oriented resource management; (ii) internetworking of Clouds for dynamic creation of federated computing environments for scaling of elastic applications; (iii) creation of 3rd party Cloud brokering services for building content delivery networks and e-Science applications and their deployment on capabilities of IaaS providers such as Amazon along with Grid mashups; (iv) CloudSim supporting modelling and simulation of Clouds for performance studies; (v) Energy Efficient Resource Allocation Mechanisms and Techniques for creation and management of Green Clouds; and (vi) pathways for future research.
Development of a Regional U.S. MARKAL Database for Energy and Emissions Modeling
The U.S. Climate Change Science Program (CCSP) is a collaborative effort among 13 agencies of the U.S. federal government. From the CCSP's 2003 strategic plan, its mission is to: "facilitate the creation and application of knowledge of the earth's global environment through resea...
The O*Net Jobs Classification System: A Primer for Family Researchers
ERIC Educational Resources Information Center
Crouter, Ann C.; Lanza, Stephanie T.; Pirretti, Amy; Goodman, W. Benjamin; Neebe, Eloise
2006-01-01
We introduce family researchers to the Occupational Information Network, or O*Net, an electronic database on the work characteristics of over 950 occupations. This paper is a practical primer that covers data collection, selecting occupational characteristics, coding occupations, scale creation, and construct validity, with empirical…
EPA’s National Emission Inventory has been incorporated into the Emission Database for Global Atmospheric Research-Hemispheric Transport of Air Pollutants (EDGAR-HTAP) version 2. This work involves the creation of a detailed mapping of EPA Source Classification Codes (SCC) to the...
Optical Scanning for Retrospective Conversion of Information.
ERIC Educational Resources Information Center
Hein, Morten
1986-01-01
This discussion of the use of optical scanning and computer formatting for retrospective conversion focuses on a series of applications known as Optical Scanning for Creation of Information Databases (OSCID). Prior research in this area and the usefulness of OSCID for creating low-priced machine-readable data representing older materials are…
Automated Agent Ontology Creation for Distributed Databases
2004-03-01
relationships between themselves if one exists. For example, if one agent's ontology was 'NBA' and the second agent's ontology was 'College Hoops'... the two agents should discover their relationship 'basketball' [28]. The authors' agents use supervised inductive learning to learn their individual...
Database Creation for Information Processing Methods, Metrics, and Models (DCIPM3)
2009-05-01
who is in the Student Government Association (SGA), attends a meeting that addresses the lineup of events to have at the pep rally, with other... documented events had some level of importance to the development of similar sequential events of a notional subject; many dead ends and random events...
An Introduction to Database Structure and Database Machines.
ERIC Educational Resources Information Center
Detweiler, Karen
1984-01-01
Enumerates principal management objectives of database management systems (data independence, quality, security, multiuser access, central control) and criteria for comparison (response time, size, flexibility, other features). Conventional database management systems, relational databases, and database machines used for backend processing are…
NASA Astrophysics Data System (ADS)
Mendela-Anzlik, Małgorzata; Borkowski, Andrzej
2017-06-01
Airborne laser scanning (ALS) data are used mainly for the creation of precise digital elevation models. However, the informative potential stored in ALS data can also be used for updating spatial databases, including the Database of Topographic Objects (BDOT10k). Typically, geometric representations of buildings in the BDOT10k are equal to their entries in the Land and Property Register (EGiB). In this study ALS is considered as a supporting data source. The thresholding method for original ALS data with the use of the alpha shape algorithm, proposed in this paper, allows for extraction of points that represent the horizontal cross-section of building walls, leading to the creation of vector geometric models of buildings that can then be used for updating the BDOT10k. This method also makes it easy to verify how up to date the geometric building information is in both the BDOT10k and the district EGiB databases. For verification of the proposed methodology, classified ALS data acquired with a density of 4 points/m2 were used. The accuracy assessment of the identified building outlines was carried out by comparing them to the corresponding EGiB objects. The RMSE values for 78 buildings range from a few to tens of centimetres, with an average value of about 0.5 m. At the same time, several objects revealed large geometric discrepancies. Further analyses showed that these discrepancies could result from incorrect representations of the buildings in the EGiB database.
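The alpha-shape step can be sketched with a standard Delaunay-based formulation (keep triangles whose circumradius is below 1/alpha, then take the boundary edges); this is a generic construction, not the authors' implementation or their parameter choices:

```python
# Generic sketch of a 2-D alpha shape via Delaunay triangulation: keep the
# triangles whose circumradius is below 1/alpha, then return the edges that
# belong to exactly one kept triangle (the boundary of the shape).
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_edges(points, alpha):
    """Boundary edges (as index pairs) of the alpha shape of 2-D `points`."""
    tri = Delaunay(points)
    edge_count = {}
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        s = (la + lb + lc) / 2.0                       # Heron's formula for area
        area = max(np.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0)), 1e-12)
        if la * lb * lc / (4.0 * area) < 1.0 / alpha:  # circumradius test
            for e in ((ia, ib), (ib, ic), (ia, ic)):
                key = tuple(sorted(e))
                edge_count[key] = edge_count.get(key, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]

pts = np.random.default_rng(0).random((200, 2))  # stand-in for ALS wall returns
print(len(alpha_shape_edges(pts, alpha=5.0)))
```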
Application of real-time database to LAMOST control system
NASA Astrophysics Data System (ADS)
Xu, Lingzhe; Xu, Xinqi
2004-09-01
The QNX-based real-time database is one of the main features of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) control system, serving as a storage platform for the data flow and recording and updating in a timely manner the various statuses of moving components in the telescope structure as well as the environmental parameters around it. The database fits harmoniously into the administration of the Telescope Control System (TCS). The paper presents the methodology and technical tips used in designing the EMPRESS database GUI software package, such as the dynamic creation of control widgets, dynamic queries, and shared memory. A seamless connection between EMPRESS and QNX's graphical development tool, Photon Application Builder (PhAB), has been realized, as has a Windows look and feel under a Unix-like operating system. In particular, the real-time features of the database are analyzed, which satisfy the needs of the control system.
36 CFR 1220.30 - What are an agency's records management responsibilities?
Code of Federal Regulations, 2011 CFR
2011-07-01
... management programs must provide for: (1) Effective controls over the creation, maintenance, and use of... management responsibilities? 1220.30 Section 1220.30 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT FEDERAL RECORDS; GENERAL Agency Records Management...
NASA Astrophysics Data System (ADS)
van Aalst, Jan; Sioux Truong, Mya
2011-03-01
The phrase 'knowledge creation' refers to the practices by which a community advances its collective knowledge. Experience with a model of knowledge creation could help students to learn about the nature of science. This research examined how much progress a teacher and 16 Primary Five (Grade 4) students in the International Baccalaureate Primary Years Programme could make towards the discourse needed for Bereiter and Scardamalia's model of knowledge creation. The study consisted of two phases: a five-month period focusing on the development of the classroom ethos and skills needed for this model (Phase 1), followed by a two-month inquiry into life cycles (Phase 2). In Phase 1, we examined the classroom practices that are thought to support knowledge creation and the early experiences of the students with a web-based inquiry environment, Knowledge Forum®. In Phase 2, we conducted a summative evaluation of the students' work in Knowledge Forum in the light of the model. The data sources included classroom video recordings, artefacts of the in-class work, the Knowledge Forum database, a science content test, questionnaires, and interviews. The findings indicate that the students made substantial progress towards the knowledge creation discourse, particularly regarding the social structure of this kind of discourse and, to a lesser extent, its idea-centred nature. They also made acceptable advances in scientific knowledge and appeared to enjoy this way of learning. The study provides one of the first accounts in the literature of how a teacher new to the knowledge creation model enacted it in an Asian primary classroom.
University Governance in Uncertain Times: Refocusing on Knowledge Creation and Innovation
ERIC Educational Resources Information Center
Blackman, Deborah; Kennedy, Monica; Swansson, James; Richardson, Alice
2008-01-01
Knowledge, its creation, development, dispersion and institutionalisation in organisations is a complex topic and one that attracts much attention in both academic and management literatures (Choo & Bontis, 2002; Davenport & De Long, 1998; Davenport & Prusak, 1998; Spender, 1996). The relationship between knowledge and organisational…
Durack, Jeremy C.; Chao, Chih-Chien; Stevenson, Derek; Andriole, Katherine P.; Dev, Parvati
2002-01-01
Medical media collections are growing at a pace that exceeds the value they currently provide as research and educational resources. To address this issue, the Stanford MediaServer was designed to promote innovative multimedia-based application development. The nucleus of the MediaServer platform is a digital media database strategically designed to meet the information needs of many biomedical disciplines. Key features include an intuitive web-based interface for collaboratively populating the media database, flexible creation of media collections for diverse and specialized purposes, and the ability to construct a variety of end-user applications from the same database to support biomedical education and research. PMID:12463820
'Ethos' Enabling Organisational Knowledge Creation
NASA Astrophysics Data System (ADS)
Matsudaira, Yoshito
This paper examines knowledge creation in relation to improvements on the production line in the manufacturing department of Nissan Motor Company and aims to clarify the embodied knowledge observed in the actions of organisational members who enable knowledge creation. For that purpose, this study adopts an approach that adds first-, second-, and third-person viewpoints to the theory of knowledge creation. The embodied knowledge observed in the actions of organisational members who enable knowledge creation is the continued practice of 'ethos' (in Greek), founded in the Nissan Production Way as an ethical basis. Ethos is a knowledge (intangible) asset for knowledge-creating companies. Substantiated analysis classifies ethos into three categories: the individual, the team, and the organisation. This indicates the precise actions of the organisational members in each category during the knowledge creation process. This research succeeds in showing the indispensability of ethos, a new concept of knowledge assets that enables knowledge creation, for future knowledge-based management in the knowledge society.
DNA-based methods of geochemical prospecting
Ashby, Matthew [Mill Valley, CA]
2011-12-06
The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.
Transaction Processing Performance Council (TPC): State of the Council 2010
NASA Astrophysics Data System (ADS)
Nambiar, Raghunath; Wakou, Nicholas; Carman, Forrest; Majdalany, Michael
The Transaction Processing Performance Council (TPC) is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable performance data to the industry. Established in August 1988, the TPC has been integral in shaping the landscape of modern transaction processing and database benchmarks over the past twenty-two years. This paper provides an overview of the TPC's existing benchmark standards and specifications, introduces two new TPC benchmarks under development, and examines the TPC's active involvement in the early creation of additional future benchmarks.
Washington, Donna L; Sun, Su; Canning, Mark
2010-01-01
Most veteran research is conducted in Department of Veterans Affairs (VA) healthcare settings, although most veterans obtain healthcare outside the VA. Our objective was to determine the adequacy and relative contributions of Veterans Health Administration (VHA), Veterans Benefits Administration (VBA), and Department of Defense (DOD) administrative databases for representing the U.S. veteran population, using as an example the creation of a sampling frame for the National Survey of Women Veterans. In 2008, we merged the VHA, VBA, and DOD databases. We identified the number of unique records both overall and from each database. The combined databases yielded 925,946 unique records, representing 51% of the 1,802,000 U.S. women veteran population. The DOD database included 30% of the population (with 8% overlap with other databases). The VHA enrollment database contributed an additional 20% unique women veterans (with 6% overlap with VBA databases). VBA databases contributed an additional 2% unique women veterans (beyond 10% overlap with other databases). Use of VBA and DOD databases substantially expands access to the population of veterans beyond those in VHA databases, regardless of VA use. Adoption of these additional databases would enhance the value and generalizability of a wide range of studies of both male and female veterans.
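The sampling-frame arithmetic above reduces to set operations on person identifiers; a toy sketch (identifiers invented, since the real linkage used protected VA/DOD records):

```python
# Toy sketch of the sampling-frame arithmetic: take the union of person
# identifiers across the three source databases and count unique contributions
# and overlaps. Identifiers are invented; the real linkage used protected data.
vha = {"a", "b", "c", "d"}          # VHA enrollment records
vba = {"c", "d", "e"}               # VBA benefits records
dod = {"d", "e", "f", "g"}          # DOD personnel records

frame = vha | vba | dod             # combined sampling frame
print("combined unique records:", len(frame))
print("DOD-only contribution:  ", len(dod - vha - vba))
print("VHA beyond DOD:         ", len(vha - dod))
print("VBA beyond the others:  ", len(vba - vha - dod))
```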
NASA Astrophysics Data System (ADS)
Carbone, Gianluca; Cosentino, Giuseppe; Pennica, Francesco; Moscatelli, Massimiliano; Stigliano, Francesco
2017-04-01
After the strong earthquakes that hit central Italy in recent months, the Center for Seismic Microzonation and its applications (CentroMS) was commissioned by the Italian Department of Civil Protection to conduct the study of seismic microzonation of the territories affected by the earthquake of August 24, 2016. As part of the microzonation activities, IGAG CNR has created WebEQ, a tool for managing the data acquired by all participants (i.e., more than twenty research institutes and university departments). The data collection was organized and divided into sub-areas, assigned to working groups with multidisciplinary expertise in geology, geophysics, and engineering. WebEQ is a web-GIS system that helps all the subjects involved in the data collection activities, through tools aimed at data uploading and validation, and with a simple GIS interface to display, query, and download geographic data. WebEQ is contributing to the creation of a large database containing geographic data, both vector and raster, from various sources and of various types: the Regional Technical Map; geological and geomorphological maps; data location maps; maps of microzones homogeneous in seismic perspective and seismic microzonation maps; and national strong motion network locations. Data loading is done through simple input masks that ensure consistency with the database structure, avoiding possible errors and helping users interact with the map through user-friendly tools. All the data are thematized with standardized symbologies and colors (Gruppo di lavoro MS 2008), to allow easy interpretation by all users. The data download tools allow data exchange between working groups, so that the scientific community can benefit from the activities. The seismic microzonation activities are still ongoing. WebEQ is enabling easy management of large amounts of data and will form a basis for the development of tools for the management of upcoming seismic emergencies.
Curiosity and Its Role in Cross-Cultural Knowledge Creation
ERIC Educational Resources Information Center
Mikhaylov, Natalie S.
2016-01-01
This paper explores the role of curiosity in promoting cross-cultural knowledge creation and competence development. It is based on a study with four international higher educational institutions, all of which offer management and business education for local and international students. The reality of multicultural and intercultural relationships…
Improving Information Products for System 2 Decision Support
ERIC Educational Resources Information Center
Gibson, Neal
2010-01-01
The creation, maintenance, and management of Information Product (IP) systems that organizations use for complex decisions represent a unique set of challenges. These challenges are compounded when such systems are also intended for knowledge creation and dissemination. Information quality research to date has focused mainly upon…
Benefits, barriers, and limitations on the use of Hospital Incident Command System.
Shooshtari, Shahin; Tofighi, Shahram; Abbasi, Shirin
2017-01-01
Hospital Incident Command System (HICS) has been established with the mission of prevention, response, and recovery in hazards. Given the key role of hospitals in the medical management of events, the present study is aimed at investigating the benefits, barriers, and limitations of applying HICS in hospitals. Employing a review design, articles related to the subject published from 1995 to 2016 were extracted from accredited websites and databases such as PubMed, Google Scholar, Elsevier, and SID by searching keywords such as HICS, benefits, barriers, and limitations; those articles were then summarized and reported. Use of HICS can create preparedness for facing disasters and constructive management strategies for controlling events and disasters. However, experience indicates some limitations of the system, such as failure to assess the strength and severity of hospital vulnerabilities, non-observance of disaster management standards in the design, construction, and equipping of hospitals, and the absence of a model for evaluating the system. Accordingly, the conducted studies were examined to probe the performance of HICS. Given the role of the health sector in disaster management, advanced international methods for facing disasters are required. Accurate models for assessing hospital preparedness in pre-crisis conditions, based on components such as command, communications, security, safety, development of action plans, changes in staff attitudes through effective operational training and exercises, and creation of the required drills, appear necessary.
On feasibility of a closed nuclear power fuel cycle with minimum radioactivity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrianova, E. A.; Davidenko, V. D.; Tsibulskiy, V. F., E-mail: Tsibulskiy-VF@nrcki.ru
2015-12-15
Practical implementation of a closed nuclear fuel cycle implies the solution of two main tasks. The first task is the creation of environmentally acceptable operating conditions for the nuclear fuel cycle, considering, first of all, the high radioactivity of the materials involved. The second task is the creation of effective and economically appropriate conditions for involving fertile isotopes in the fuel cycle. Creating technologies for managing the high-level radioactivity of spent fuel that are reliable in terms of radiological protection seems to be the hardest problem.
Applications of GIS and database technologies to manage a Karst Feature Database
Gao, Y.; Tipping, R.G.; Alexander, E.C.
2006-01-01
This paper describes the management of a Karst Feature Database (KFD) in Minnesota. Two sets of applications, in GIS and in a Database Management System (DBMS), have been developed for the KFD of Minnesota. These applications were used to manage the KFD and to enhance its usability. Structured Query Language (SQL) was used to manipulate transactions of the database and to support the functionality of the user interfaces. The Database Administrator (DBA) authorized users with different access permissions to enhance the security of the database. Database consistency and recovery are accomplished by creating data logs and maintaining backups on a regular basis. The working database provides guidelines and management tools for future studies of karst features in Minnesota. The methodology used in designing this DBMS is applicable to developing GIS-based databases for analyzing and managing geomorphic and hydrologic datasets at both regional and local scales. The short-term goal of this research is to develop a regional KFD for the Upper Mississippi Valley Karst; the long-term goal is to expand this database to manage and study karst features at national and global scales.
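The transaction-handling side is easy to illustrate; in this sketch sqlite3 stands in for the production DBMS, and the table layout is an assumption:

```python
# Sketch of SQL transaction handling in the spirit of the KFD tooling; sqlite3
# stands in for the production DBMS, and the schema is invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE karst_features (
    id INTEGER PRIMARY KEY, type TEXT NOT NULL,
    lat REAL NOT NULL, lon REAL NOT NULL)""")

def add_feature(feature_type, lat, lon):
    """Insert one feature inside an explicit transaction so a failed
    constraint rolls back cleanly instead of half-committing."""
    try:
        with db:  # the connection as context manager = one transaction
            db.execute("INSERT INTO karst_features (type, lat, lon) VALUES (?, ?, ?)",
                       (feature_type, lat, lon))
    except sqlite3.IntegrityError as err:
        print(f"rolled back: {err}")

add_feature("sinkhole", 43.98, -92.46)
add_feature(None, 0.0, 0.0)  # violates NOT NULL -> rolled back
print(db.execute("SELECT COUNT(*) FROM karst_features").fetchone())  # (1,)
```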
36 CFR 1222.32 - How do agencies manage records created or received by contractors?
Code of Federal Regulations, 2010 CFR
2010-07-01
... NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS...) (48 CFR parts 200-299). (2) Records management oversight of contract records is necessary to ensure...
Introducing the GRACEnet/REAP Data Contribution, Discovery, and Retrieval System.
Del Grosso, S J; White, J W; Wilson, G; Vandenberg, B; Karlen, D L; Follett, R F; Johnson, J M F; Franzluebbers, A J; Archer, D W; Gollany, H T; Liebig, M A; Ascough, J; Reyes-Fox, M; Pellack, L; Starr, J; Barbour, N; Polumsky, R W; Gutwein, M; James, D
2013-07-01
Difficulties in accessing high-quality data on trace gas fluxes and on the performance of bioenergy/bioproduct feedstocks limit the ability of researchers and others to address the environmental impacts of agriculture and the potential to produce feedstocks. To address those needs, the GRACEnet (Greenhouse gas Reduction through Agricultural Carbon Enhancement network) and REAP (Renewable Energy Assessment Project) research programs were initiated by the USDA Agricultural Research Service (ARS). A major product of these programs is the creation of a database with greenhouse gas fluxes; soil carbon stocks; biomass yield, nutrient, and energy characteristics; and input data for modeling cropped and grazed systems. The data include site descriptors (e.g., weather, soil class, spatial attributes), experimental design (e.g., factors manipulated, measurements performed, plot layouts), management information (e.g., planting and harvesting schedules, fertilizer types and amounts, biomass harvested, grazing intensity), and measurements (e.g., soil C and N stocks, plant biomass amount and chemical composition). To promote standardization of data and ensure that experiments were fully described, sampling protocols and a spreadsheet-based data-entry template were developed. Data were first uploaded to a temporary database for checking and then uploaded to the central database. A Web-accessible application allows registered users to query and download data, including measurement protocols. Separate portals have been provided for each project (GRACEnet and REAP) at nrrc.ars.usda.gov/slgracenet/#/Home and nrrc.ars.usda.gov/slreap/#/Home. The database architecture and data entry template have proven flexible and robust for describing a wide range of field experiments and thus appear suitable for other natural resource research projects. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
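The "check before upload" step can be sketched as template validation; the column names and rules below are invented stand-ins for the project's actual spreadsheet template:

```python
# Sketch of validating a submitted data-entry sheet before it moves from the
# temporary database to the central one. Column names and rules are invented.
import csv, io

REQUIRED = {"site_id", "date", "n2o_flux_g_ha_d", "soil_c_mg_ha"}

def validate(csv_text):
    """Return a list of problems found in one submitted sheet (empty = OK)."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    problems = []
    missing = REQUIRED - set(rows[0].keys() if rows else [])
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    for i, row in enumerate(rows, start=2):  # header is line 1
        if row.get("soil_c_mg_ha") and float(row["soil_c_mg_ha"]) < 0:
            problems.append(f"line {i}: negative soil C stock")
    return problems

sheet = "site_id,date,n2o_flux_g_ha_d,soil_c_mg_ha\nA1,2010-06-01,3.2,-5\n"
print(validate(sheet))  # -> ['line 2: negative soil C stock']
```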
Negative Effects of Learning Spreadsheet Management on Learning Database Management
ERIC Educational Resources Information Center
Vágner, Anikó; Zsakó, László
2015-01-01
A lot of students learn spreadsheet management before database management. The similarities between the two can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…
Development of a database system for operational use in the selection of titanium alloys
NASA Astrophysics Data System (ADS)
Han, Yuan-Fei; Zeng, Wei-Dong; Sun, Yu; Zhao, Yong-Qing
2011-08-01
The selection of titanium alloys has become a complex decision-making task due to the growing number of titanium alloys being created and utilized, each with its own characteristics, advantages, and limitations. In choosing the most appropriate titanium alloy, it is essential to offer a reasonable and intelligent service to technical engineers. One possible solution to this problem is to develop a database system (DS) that helps retrieve rational proposals from different databases and information sources and analyze them to provide useful and explicit information. For this purpose, a design strategy based on fuzzy set theory is proposed, and a distributed database system is developed. Through ranking of the candidate titanium alloys, the most suitable material is determined. It is found that the selection results are in good agreement with the practical situation.
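The paper's fuzzy formulation is not given in the abstract, but a common shape for such a selector is a weighted fuzzy-membership score over normalized properties; the following sketch uses invented property values and weights:

```python
# Hypothetical fuzzy-set ranking of candidate titanium alloys: each
# property is mapped to a [0, 1] membership ("degree of suitability"),
# then aggregated with requirement weights. All values are illustrative.
alloys = {
    "Ti-6Al-4V":  {"strength": 950, "density": 4.43, "cost": 3},
    "Ti-5553":    {"strength": 1200, "density": 4.65, "cost": 5},
    "CP-Ti Gr.2": {"strength": 345, "density": 4.51, "cost": 1},
}
weights = {"strength": 0.5, "density": 0.2, "cost": 0.3}
maximize = {"strength"}  # higher is better; the others: lower is better

def membership(prop, value):
    """Linear membership over the observed range of one property."""
    vals = [a[prop] for a in alloys.values()]
    lo, hi = min(vals), max(vals)
    m = (value - lo) / (hi - lo) if hi > lo else 1.0
    return m if prop in maximize else 1.0 - m

scores = {
    name: sum(weights[p] * membership(p, props[p]) for p in weights)
    for name, props in alloys.items()
}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {s:.2f}")
```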
Schnick, Rosalie A.; Morton, John M.; Mochalski, Jeffrey C.; Beall, Jonathan T.
1982-01-01
Extensive information is provided on techniques that can reduce or eliminate the negative impact of man's activities (particularly those related to navigation) on large river systems, with special reference to the Upper Mississippi River. These techniques should help resource managers who are concerned with such river systems to establish sound environmental programs. Discussions of each technique or group of techniques include (1) the situation to be mitigated or enhanced; (2) a description of the technique; (3) impacts on the environment; (4) costs; and (5) an evaluation for use on the Upper Mississippi River System. The techniques are divided into four primary categories: Bank Stabilization Techniques, Dredging and Disposal of Dredged Material, Fishery Management Techniques, and Wildlife Management Techniques. Because techniques have been grouped by function, rather than by structure, some structures are discussed in several contexts. For example, gabions are discussed for use in revetments, river training structures, and breakwaters. The measures covered under Bank Stabilization Techniques include the use of riprap revetments, other revetments, bulkheads, river training structures, breakwater structures, chemical soil stabilizers, erosion-control mattings, and filter fabrics; the planting of vegetation; the creation of islands; the creation of berms or enrichment of beaches; and the control of water level and boat traffic. The discussions of Dredging and the Disposal of Dredged Material consider dredges, dredging methods, and disposal of dredged material. The following subjects are considered under Fishery Management Techniques: fish attractors; spawning structures; nursery ponds, coves, and marshes; fish screens and barriers; fish passage; water control structures; management of water levels and flows; wing dam modification; side channel modification; aeration techniques; control of nuisance aquatic plants; and manipulation of fish populations. Wildlife Management Techniques include treatments of artificial nest structures, island creation or development, marsh creation or development, greentree reservoirs and mast management, vegetation control, water level control, and revegetation.
Raising the Bar: Book Vendors and the New Realities of Service.
ERIC Educational Resources Information Center
Alessi, Dana L.
1999-01-01
Library book vendors are facing new realities in the 21st century. Changes in continuations, firm order placement, value-added services, approval plans, retrospective collection development, and database creation and maintenance are being effected in an effort to keep current customers and attract new ones. Those changes and the subsequent shift…
Relationships between Computer Skills and Technostress: How Does This Affect Me?
ERIC Educational Resources Information Center
Shepherd, Sonya S. Gaither
2004-01-01
The creation of computer software and hardware, telecommunications, databases, and the Internet has affected society as a whole, and particularly higher education by giving people new productivity options and changing the way they work (Hulbert, 1998). In the so-called "Information Age" the increasing use of technology has become the driving force…
Creating a New Definition of Library Cooperation: Past, Present, and Future Models.
ERIC Educational Resources Information Center
Lenzini, Rebecca T.; Shaw, Ward
1991-01-01
Describes the creation and purpose of the Colorado Alliance of Research Libraries (CARL), the subsequent development of CARL Systems, and its current research projects. Topics discussed include online catalogs; UnCover, a journal article database; full text data; document delivery; visual images in computer systems; networks; and implications for…
Academic Oral History: Life Review in Spite of Itself.
ERIC Educational Resources Information Center
Ryant, Carl
The process and content of the life review should not be separated from the creation of an oral history. Several projects, undertaken at the University of Louisville Oral History Center, support the therapeutic aspects of reminiscence. The dichotomy between oral history, as an historical database, and life review, as a therapeutic exercise, breaks…
Integrated and Applied Curricula Discussion Group and Data Base Project. Final Report.
ERIC Educational Resources Information Center
Wisconsin Univ. - Stout, Menomonie. Center for Vocational, Technical and Adult Education.
A project was conducted to compile integrated and applied curriculum resources, develop databases on the World Wide Web, and encourage networking for high school and technical college educators through an Internet discussion group. Activities conducted during the project include the creation of a web page to guide users to resource banks…
Burkhardt, John C; DesJardins, Stephen L; Teener, Carol A; Gay, Steven E; Santen, Sally A
2016-11-01
In higher education, enrollment management has been developed to accurately predict the likelihood of enrollment of admitted students. This allows evidence to dictate the number of interviews scheduled, offers of admission, and the distribution of financial aid packages. The applicability of enrollment management techniques to medical education was tested through the creation of a predictive enrollment model at the University of Michigan Medical School (U-M). U-M and American Medical College Application Service data (2006-2014) were combined to create a database including applicant demographics, academic application scores, institutional financial aid offer, and choice of school attended. Binomial logistic regression and multinomial logistic regression models were estimated in order to study factors related to enrollment at the local institution versus elsewhere and to groupings of competing peer institutions. A predictive analytic "dashboard" was created for practical use. Both models were significant at P < .001 and had similar predictive performance. In the binomial model, female gender, underrepresented minority status, grade point average, Medical College Admission Test score, admissions committee desirability score, and most individual financial aid offers were significant (P < .05). The significant covariates were similar in the multinomial model (excluding female gender), which provided separate likelihoods of students enrolling at different institutional types. An enrollment-management-based approach would allow medical schools to better manage the number of students they admit and to target recruitment efforts to improve their likelihood of success. It also performs a key institutional research function for understanding failed recruitment of highly desirable candidates.
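A minimal sketch of the binomial model, using synthetic data and hypothetical covariates (the U-M admissions database is not public), might look like this with scikit-learn:

```python
# Sketch of the binomial enrollment model: predict whether an admitted
# applicant enrolls from application covariates. Feature names and the
# synthetic data-generating process are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(3.7, 0.2, n),      # grade point average
    rng.normal(33, 3, n),         # MCAT score
    rng.integers(0, 2, n),        # financial aid offer (0/1)
])
# Synthetic outcome: aid offers raise the enrollment probability.
logit = -8 + 1.5 * X[:, 0] + 0.05 * X[:, 1] + 1.2 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_)   # direction and size of effects
print("P(enroll) for one applicant:",
      model.predict_proba([[3.8, 34, 1]])[0, 1])
```

A "dashboard" of the kind the abstract mentions would essentially expose `predict_proba` over the current applicant pool.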
NASA Astrophysics Data System (ADS)
Buravlev, V.; Sereshnikov, S. V.; Mayorov, A. A.; Vila, J. J.
At every level of state and municipal management, the information resources that support administrative decision making are usually scattered across a number of disparate, unconnected electronic data sources, such as databases, geoinformation projects, and electronic archives of documents. These sources are located in various organizations, run under various programs, and are updated according to various rules. Building, on top of such isolated sources, unified information systems that allow any of the stored information to be viewed and analyzed in real time helps improve the adequacy of the administrative decisions taken. The Distributed Data Service technology (TrisoftDDS), developed by Trisoft, Ltd, supports the construction of horizontal, territorially distributed, heterogeneous information systems (TeRGIS). TrisoftDDS allows the rapid creation, support, and easy modification of systems whose data sources are already-existing information complexes, without disrupting the operation of those complexes, and provides remote, regulated, multi-user access to different types of data sources over the Internet/intranet. Relational databases, GIS projects, and files of various types (MS Office documents, images, HTML documents, etc.) can serve as data sources in a TeRGIS. A TeRGIS is built as an Internet/intranet application with a three-level client-server architecture. Access to the information in existing data sources is provided by the distributed data service (DDS), whose nucleus, the distributed data service server (DSServer), resides on the intermediate level. TrisoftDDS technology includes the following components. The client, DSBrowser (Data Service Browser), is a client application that connects over the Internet/intranet to the DSServer and supports both the selection and the viewing of documents; database tables, database queries, queries to geoinformation projects, and files of various types (MS Office documents, images, HTML files, etc.) can all act as documents, and for complex data sources the DSBrowser lets the user build queries and view and filter data. The server of the distributed data service, DSServer (Data Service Server), is a web application that provides access to the data sources and executes clients' requests for the chosen documents. The toolkit, Toolkit DDS, comprises the catalogue manager, DCMan (Data Catalog Manager), a client-server application for organizing and administering the data catalogue, and the documentor, DSDoc (Data Source Documentor), a client-server application for documenting the procedure that produces a required document from a data source. The documentation created by the DSDoc takes the form of metadata tables, which are added to the data catalogue with the help of the catalogue manager, DCMan. The operating logic of a territorially distributed heterogeneous information system based on DDS technology is as follows. The client application, DSBrowser, contacts the DSServer at a specified Internet address; in reply, the DSServer sends the client the catalogue of the system's information resources. The catalogue is an XML document that the client browser processes and displays as a tree structure in a special window.
The user browses the list and selects the required documents, and the DSBrowser sends the corresponding request to the DSServer. The DSServer, in turn, consults the metadata tables describing the document chosen by the user, forwards the request to the corresponding data source, and then returns the result to the client application. The data service catalogue contains the full Internet address of each document, which makes it possible to build catalogues of distributed information resources whose separate parts (documents) are located on different servers in various places on the Internet. Catalogues themselves can be hosted by any Internet provider that supports the necessary software. The lists of documents in the catalogue are gathered into thematic blocks, providing user-friendly navigation through the system's information sources. The strength of the TrisoftDDS technology lies, first of all, in the organization and functionality of the distributed data service that processes requests for documents. The distributed data service hides the complex and, in most cases, unnecessary details of the structure of complex data sources, and of the ways of connecting to them, from the external user. Instead, the user receives pseudonyms for connections and file directories whose real parameters are stored in the registry of the web server hosting the DSServer. This scheme also offers wide opportunities for data protection and for differentiating access rights to the information. The paper also presents the application of this technology to building horizontal, territorially distributed geoinformation systems for classifying the level of territorial social and economic development of the Quindio department (Colombia). This application includes the creation of thematic maps based on the ESRI software products ArcView and Erdas, and it shows and offers some ways of analyzing regional social and economic development conditions in order to compare the optimality of decisions. The parameters considered include: dynamics of demographic processes; education; health and nutrition; infrastructure; political and social stability; culture, social and family values; the condition of the environment; political and civil institutions; population income; unemployment and use of labour; and poverty and inequality. The methodology allows other parameters to be included with the help of expert estimation methods and optimization theory, and a module is provided for verifying the forecasts against field checks in the district.
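The client side of this flow, reduced to a sketch: fetch the XML catalogue, walk its thematic blocks, and request a document by its full address. The URL and element names below are hypothetical, since the TrisoftDDS wire format is not specified in the abstract:

```python
# Sketch of the DSBrowser side of the DDS flow: fetch the XML resource
# catalogue from the DSServer, list the documents it offers, and
# request one of them by its full address.
import urllib.request
import xml.etree.ElementTree as ET

CATALOG_URL = "http://dsserver.example.org/dds/catalog"  # hypothetical

with urllib.request.urlopen(CATALOG_URL) as resp:
    catalog = ET.parse(resp).getroot()

# The catalogue arrives as an XML tree of thematic blocks and documents.
for block in catalog.findall("block"):
    print("theme:", block.get("title"))
    for doc in block.findall("document"):
        print("  document:", doc.get("name"), "->", doc.get("href"))

# Choosing a document sends a second request to its full address; the
# DSServer resolves the pseudonymous connection and returns the data.
first = catalog.find("block/document")
if first is not None:
    with urllib.request.urlopen(first.get("href")) as resp:
        print(resp.read()[:200])
```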
MADGE: scalable distributed data management software for cDNA microarrays.
McIndoe, Richard A; Lanzen, Aaron; Hurtz, Kimberly
2003-01-01
The human genome project and the development of new high-throughput technologies have created unparalleled opportunities to study the mechanisms of disease, monitor disease progression and evaluate effective therapies. Gene expression profiling is a critical tool to accomplish these goals. The use of nucleic acid microarrays to assess the gene expression of thousands of genes simultaneously has seen phenomenal growth over the past five years. Although commercial sources of microarrays exist, investigators wanting more flexibility in the genes represented on the array will turn to in-house production. The creation and use of cDNA microarrays is a complicated process that generates an enormous amount of information. Effective management of this information is essential to efficiently access, analyze, troubleshoot and evaluate the microarray experiments. We have developed a distributable software package designed to track and store the various pieces of data generated by a cDNA microarray facility. This includes the clone collection storage data, annotation data, workflow queues, microarray data, data repositories, sample submission information, and project/investigator information. This application was designed using a 3-tier client-server model. The data access layer (1st tier) contains the relational database system, tuned to support a large number of transactions. The data services layer (2nd tier) is a distributed COM server with full database transaction support. The application layer (3rd tier) is an Internet-based user interface that contains both client- and server-side code for dynamic interactions with the user. This software is freely available to academic institutions and non-profit organizations at http://www.genomics.mcg.edu/niddkbtc.
Zhou, Jindan; Rudd, Kenneth E.
2013-01-01
EcoGene (http://ecogene.org) is a database and website devoted to continuously improving the structural and functional annotation of Escherichia coli K-12, one of the most well understood model organisms, represented by the MG1655(Seq) genome sequence and annotations. Major improvements to EcoGene in the past decade include (i) graphic presentations of genome map features; (ii) ability to design Boolean queries and Venn diagrams from EcoArray, EcoTopics or user-provided GeneSets; (iii) the genome-wide clone and deletion primer design tool, PrimerPairs; (iv) sequence searches using a customized EcoBLAST; (v) a Cross Reference table of synonymous gene and protein identifiers; (vi) proteome-wide indexing with GO terms; (vii) EcoTools access to >2000 complete bacterial genomes in EcoGene-RefSeq; (viii) establishment of a MySql relational database; and (ix) use of web content management systems. The biomedical literature is surveyed daily to provide citation and gene function updates. As of September 2012, the review of 37 397 abstracts and articles led to creation of 98 425 PubMed-Gene links and 5415 PubMed-Topic links. Annotation updates to Genbank U00096 are transmitted from EcoGene to NCBI. Experimental verifications include confirmation of a CTG start codon, pseudogene restoration and quality assurance of the Keio strain collection. PMID:23197660
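The Boolean GeneSet queries and Venn diagrams mentioned in item (ii) amount to set algebra over gene lists; a toy version with invented gene sets:

```python
# Toy illustration of the Boolean/Venn operations EcoGene offers over
# GeneSets; the gene lists here are arbitrary E. coli gene names, not
# actual EcoArray or EcoTopics results.
heat_shock = {"dnaK", "dnaJ", "groL", "groS", "ibpA"}
membrane   = {"ompA", "ompC", "dnaK", "lamB"}

print("intersection:", heat_shock & membrane)    # genes in both sets
print("union size:", len(heat_shock | membrane))
print("heat-shock only:", heat_shock - membrane)
```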
Content Management: If You Build It Right, They Will Come
ERIC Educational Resources Information Center
Starkman, Neal
2006-01-01
According to James Robertson, the managing director of Step Two Designs, a content management consultancy, "A content management system supports the creation, management, distribution, publishing, and discovery of corporate information. It covers the complete lifecycle of the pages of one's site, from providing simple tools to create the…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-13
... DEPARTMENT OF LABOR Office of the Secretary Agency Information Collection Activities; Submission for OMB Review; Comment Request; Middle Class Tax Relief and Job Creation Act of 2012 State Monitoring... Creation Act of 2012 State Monitoring,'' to the Office of Management and Budget (OMB) for review and...
A Co-Creation Blended KM Model for Cultivating Critical-Thinking Skills
ERIC Educational Resources Information Center
Yeh, Yu-chu
2012-01-01
Both critical thinking (CT) and knowledge management (KM) skills are necessary elements for a university student's success. Therefore, this study developed a co-creation blended KM model to cultivate university students' CT skills and to explore the underlying mechanisms for achieving success. Thirty-one university students participated in this…
A Prototype HTML Training System for Graphic Communication Majors
ERIC Educational Resources Information Center
Runquist, Roger L.
2010-01-01
This design research demonstrates a prototype content management system capable of training graphic communication students in the creation of basic HTML web pages. The prototype serves as a method of helping students learn basic HTML structure and commands earlier in their academic careers. Exposure to the concepts of web page creation early in…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-23
... Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, CT; Notice of Negative... Services, Inc., Corporate/EIT/CTO Database Management Division, Hartford, Connecticut (The Hartford, Corporate/EIT/CTO Database Management Division). The negative determination was issued on August 19, 2011...
Doud, Andrea N; Levine, Edward A; Fino, Nora F; Stewart, John H; Shen, Perry; Votanopoulos, Konstantinos I
2016-02-01
Cytoreductive surgery with heated intraperitoneal chemotherapy (CRS/HIPEC) often includes stoma creation. We evaluated the indications, morbidity, and mortality associated with stoma creation and reversal after CRS/HIPEC. Retrospective analysis of a prospective database of 1149 CRS-HIPEC procedures was performed. Patient demographics, type of malignancy, comorbidities, Clavien-graded morbidity, mortality, indications for stoma creation, and outcomes of subsequent reversal were abstracted. Sixteen percent (186/1149) of CRS/HIPEC procedures included stoma creation, whereas 1.1 % (11/963) of patients without initial stoma creation developed anastomotic leaks requiring stoma. Patients who required a stoma had worse preoperative performance status (ECOG 0/1: 77.2 vs. 86.1 %, p = 0.002), greater burden of disease (PCI 17.6 vs. 12.9, p < 0.0001), and were more likely to have R2 resections (74.5 vs. 48.8 %, p < 0.0001) than those without stoma creation. Stomas were intended to be permanent in 17.5 % (35/199). Of 164 patients with potentially reversible ostomies, only 26.2 % (43/164) underwent reversal. Disease progression (43/164, 26.2 %) and death (40/164, 24.3 %) most commonly precluded reversal. After reversal, 27.9 % (12/43) suffered a Clavien I/II morbidity, 27.9 % (12/43) suffered Clavien III/IV morbidity, and 30-day mortality was 4.7 % (2/43). Anastomotic leak occurred after 9 % (3/33) of ileostomy and 10 % (1/10) of colostomy reversals. Stomas are more common among CRS/HIPEC patients with a high burden of disease and poor functional status. Reversal is uncommon and is associated with significant major morbidity. Preoperative counseling for those with high disease burden and poor functional status should include the risk of permanent stoma.
41 CFR 105-53.143 - Information Resources Management Service.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Information Resources... FUNCTIONS Central Offices § 105-53.143 Information Resources Management Service. (a) Creation and authority. The Information Resources Management Service (IRMS), headed by the Commissioner, Information Resources...
41 CFR 105-53.143 - Information Resources Management Service.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false Information Resources... FUNCTIONS Central Offices § 105-53.143 Information Resources Management Service. (a) Creation and authority. The Information Resources Management Service (IRMS), headed by the Commissioner, Information Resources...
41 CFR 105-53.143 - Information Resources Management Service.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false Information Resources... FUNCTIONS Central Offices § 105-53.143 Information Resources Management Service. (a) Creation and authority. The Information Resources Management Service (IRMS), headed by the Commissioner, Information Resources...
41 CFR 105-53.143 - Information Resources Management Service.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false Information Resources... FUNCTIONS Central Offices § 105-53.143 Information Resources Management Service. (a) Creation and authority. The Information Resources Management Service (IRMS), headed by the Commissioner, Information Resources...
41 CFR 105-53.143 - Information Resources Management Service.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false Information Resources... FUNCTIONS Central Offices § 105-53.143 Information Resources Management Service. (a) Creation and authority. The Information Resources Management Service (IRMS), headed by the Commissioner, Information Resources...
Importance of databases of nucleic acids for bioinformatic analysis focused to genomics
NASA Astrophysics Data System (ADS)
Jimenez-Gutierrez, L. R.; Barrios-Hernández, C. J.; Pedraza-Ferreira, G. R.; Vera-Cala, L.; Martinez-Perez, F.
2016-08-01
Recently, bioinformatics has become a new field of science, indispensable in the analysis of the millions of nucleic acid sequences currently deposited in international databases (public or private); these databases contain information on genes, RNAs, ORFs, proteins, and intergenic regions, including entire genomes of some species. The analysis of this information requires computer programs, which have been renewed through the use of new mathematical methods and the introduction of artificial intelligence, in addition to the constant creation of supercomputing units built to withstand the heavy workload of sequence analysis. However, innovation is still needed on platforms that allow genomic analyses to be performed faster and more effectively, with a technological understanding of all the biological processes involved.
Stewart, Moira; Thind, Amardeep; Terry, Amanda L; Chevendra, Vijaya; Marshall, J Neil
2009-11-01
Electronic medical records (EMRs) are posited as a tool for improving practice, policy and research in primary healthcare. This paper describes the Deliver Primary Healthcare Information (DELPHI) Project at the Department of Family Medicine at the University of Western Ontario, focusing on its development, current status and research potential in order to share experiences with researchers in similar contexts. The project progressed through four stages: (a) participant recruitment, (b) EMR software modification and implementation, (c) database creation and (d) data quality assessment. Currently, the DELPHI database holds more than two years of high-quality, de-identified data from 10 practices, with 30,000 patients and nearly a quarter of a million encounters.
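One plausible (hypothetical) form of the de-identification applied during database creation is a salted one-way hash of the patient identifier, so encounters remain linkable within the research database without exposing identity; the field names and salting scheme below are illustrative only:

```python
# Hypothetical de-identification step: direct identifiers are dropped
# and the patient ID is replaced by a salted one-way hash so records
# can still be linked across encounters.
import hashlib

SALT = b"project-secret-salt"  # kept outside the research database

def deidentify(record):
    pid = record["patient_id"].encode()
    return {
        "patient_key": hashlib.sha256(SALT + pid).hexdigest()[:16],
        "encounter_date": record["encounter_date"],
        "diagnosis_code": record["diagnosis_code"],
        # name, address, health-card number are deliberately not copied
    }

raw = {"patient_id": "ON-12345", "name": "Jane Doe",
       "encounter_date": "2008-03-14", "diagnosis_code": "J45"}
print(deidentify(raw))
```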
Li, Mingze; Zhuang, Xiaoli; Liu, Wenxing; Zhang, Pengcheng
2017-01-01
This study aims to explore the influence of co-author networks on team knowledge creation. Integrating the two traditional perspectives of network relationship and network structure, we examine the direct and interactive effects of tie stability and structural holes on team knowledge creation. Tracking scientific articles published by 111 scholars in the research field of human resource management from the top 8 American universities, we analyze the scholars' scientific co-author networks. The results indicate that tie stability changes the teams' information processing modes and, when graphed, yields an inverted U-shaped relationship between tie stability and team knowledge creation. Moreover, structural holes in the co-author network prove harmful to team knowledge sharing and diffusion, thereby impeding team knowledge creation. Tie stability and structural holes also interactively influence team knowledge creation: when the number of structural holes in the co-author network is low, the relationship between tie stability and team knowledge creation tends toward a more distinct U-shape. PMID:28993744
TWRS technical baseline database manager definition document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acree, C.D.
1997-08-13
This document serves as a guide for using the TWRS Technical Baseline Database Management Systems Engineering (SE) support tool in performing SE activities for the Tank Waste Remediation System (TWRS). This document will provide a consistent interpretation of the relationships between the TWRS Technical Baseline Database Management software and the present TWRS SE practices. The Database Manager currently utilized is the RDD-1000 System manufactured by the Ascent Logic Corporation. In other documents, the term RDD-1000 may be used interchangeably with TWRS Technical Baseline Database Manager.
NASA Astrophysics Data System (ADS)
Smith, James F., III; Blank, Joseph A.
2003-03-01
An approach is being explored that involves embedding a fuzzy logic based resource manager in an electronic game environment. Game agents can function under their own autonomous logic or human control. This approach automates the data mining process. The game automatically creates a cleansed database reflecting the domain expert's knowledge; it calls a data mining function, a genetic algorithm, to mine that database as required, and allows easy evaluation of the information extracted. The co-evolutionary fitness functions, chromosomes, and stopping criteria for ending the game are discussed. Genetic algorithm and genetic program based data mining procedures are discussed that automatically discover new fuzzy rules and strategies. The strategy tree concept and its relationship to co-evolutionary data mining are examined, as well as the associated phase space representation of fuzzy concepts. The overlap of fuzzy concepts in phase space reduces the effective strategies available to adversaries. Co-evolutionary data mining alters the geometric properties of the overlap region, known as the admissible region of phase space, significantly enhancing the performance of the resource manager. Procedures for validating the data-mined information are discussed and significant experimental results provided.
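A bare-bones version of the genetic-algorithm data-mining loop, with a stand-in fitness function (the real system scores fuzzy rules against the cleansed game database):

```python
# Bare-bones genetic algorithm of the kind used to mine fuzzy rule
# parameters: each chromosome holds the center and width of a fuzzy
# membership function, and fitness measures agreement with the expert
# database. The fitness function here is a hypothetical stand-in.
import random

random.seed(1)

def fitness(chromosome):
    # Stand-in for scoring a fuzzy rule against the game database:
    # best when (center, width) lies near a hypothetical optimum.
    c, w = chromosome
    return -((c - 0.6) ** 2 + (w - 0.2) ** 2)

pop = [(random.random(), random.random()) for _ in range(30)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                   # selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)   # crossover
        child = (min(1, max(0, child[0] + random.gauss(0, 0.05))),
                 min(1, max(0, child[1] + random.gauss(0, 0.05))))  # mutation
        children.append(child)
    pop = parents + children

print("best rule parameters:", max(pop, key=fitness))
```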
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bercu, Zachary L., E-mail: zachary.bercu@mountsinai.org; Sheth, Sachin B., E-mail: sachinsheth@gmail.com; Noor, Amir, E-mail: amir.noor@gmail.com
The creation of a transjugular intrahepatic portosystemic shunt (TIPS) is a critical procedure for the treatment of recurrent variceal bleeding and refractory ascites in the setting of portal hypertension. Chronic portal vein thrombosis remains a relative contraindication to conventional TIPS and options are limited in this scenario. Presented is a novel technique for management of refractory ascites in a patient with hepatitis C cirrhosis and chronic portal and superior mesenteric vein thrombosis secondary to schistosomiasis and lupus anticoagulant utilizing fluoroscopically guided percutaneous mesocaval shunt creation.
[Quality management and participation into clinical database].
Okubo, Suguru; Miyata, Hiroaki; Tomotaki, Ai; Motomura, Noboru; Murakami, Arata; Ono, Minoru; Iwanaka, Tadashi
2013-07-01
Quality management is necessary for establishing a useful clinical database in cooperation with healthcare professionals and facilities. The main management activities are 1) progress management of data entry, 2) liaison with database participants (healthcare professionals), and 3) modification of the data collection form. In addition, healthcare facilities joining clinical databases are expected to consider ethical issues and information security. Database participants should consult ethical review boards and provide a consultation service for patients.
36 CFR 1222.20 - How are personal files defined and managed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal... “personal” does not affect the status of documentary materials in a Federal agency. ...
Methods for the survey and genetic analysis of populations
Ashby, Matthew
2003-09-02
The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.
Strategy for a transparent, accessible, and sustainable national claims database.
Gelburd, Robin
2015-03-01
The article outlines the strategy employed by FAIR Health, Inc, an independent nonprofit, to maintain a national database of over 18 billion private health insurance claims to support consumer education, payer and provider operations, policy makers, and researchers with standard and customized data sets on an economically self-sufficient basis. It explains how FAIR Health conducts all operations in-house, including data collection, security, validation, information organization, product creation, and transmission, with a commitment to objectivity and reliability in data and data products. It also describes the data elements available to researchers and the diverse studies that FAIR Health data facilitate.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What are the records... Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false What are the records... Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false What are the records... Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102...
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false What are the records... Public Contracts and Property Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102...
A French national research project to the creation of an auscultation's school: the ASAP project.
Andrès, Emmanuel; Reichert, Sandra; Gass, Raymond; Brandt, Christian
2009-05-01
Auscultation of pulmonary sounds provides valuable clinical information but has been regarded as a tool of low diagnostic value due to the inherent subjectivity in the evaluation of these sounds. This paper describes an ambitious study, the so-called ASAP project or "Analyse de Sons Auscultatoires et Pathologiques". ASAP is a 3-year French collaborative project developed in the context of the New Technologies of Information and Communication. ASAP aims to advance auscultation techniques through: 1) the development of objective tools for the analysis of auscultation sounds, namely electronic stethoscopes paired with computing devices; 2) the creation of a database of auscultation sounds in order to compare and identify the acoustic and visual signatures of pathologies; and 3) the capitalization on these new auscultation techniques through the creation of a teaching unit, the "Ecole de l'Auscultation". This auscultation school is intended for the initial and continuing training of medical practitioners.
Mechanisms for Job Creation. Lessons from the United States.
ERIC Educational Resources Information Center
Organisation for Economic Cooperation and Development, Paris (France).
This document contains papers presented at a seminar to explore how the U.S. economy has created 30 million jobs since the early 1970s, while most European countries have barely managed to keep their labor force employed. The following papers are included: "Job Creation in the United States: Some Facts and Figures" (Sibille); "Unanswered…
Effects of Colony Creation Method and Beekeeper Education on Honeybee ("Apis mellifera") Mortality
ERIC Educational Resources Information Center
Findlay, J. Reed; Eborn, Benjamin; Jones, Wayne
2015-01-01
The two-part study reported here analyzed the effects of beekeeper education and colony creation methods on colony mortality. The first study examined the difference in hive mortality between hives managed by beekeepers who had received formal training in beekeeping with beekeepers who had not. The second study examined the effect on hive…
The Implications of Self-Creation and Self-Care in Higher Education: A Transdisciplinary Inquiry
ERIC Educational Resources Information Center
Jackson, Lesley A.
2017-01-01
This dissertation explores and connects the concepts of self-creation and self-care as a means to better address the evolving needs of students seeking to actualize themselves in and beyond higher education. These needs include helping students manage change, and other issues such as stress, anxiety, substance abuse, and physical health…
78 FR 60929 - Guinness Atkinson Asset Management, Inc., et al.; Notice of Application
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-02
... receive securities from, the series in connection with the purchase and redemption of Creation Units; (e... $25, and that Creation Units will consist of at least 10,000 Shares. All orders to purchase and redeem... Distributor will transmit all purchase orders to the relevant Fund. 6. The Shares will be purchased and...
Computer-Based Testing in the Medical Curriculum: A Decade of Experiences at One School
ERIC Educational Resources Information Center
McNulty, John; Chandrasekhar, Arcot; Hoyt, Amy; Gruener, Gregory; Espiritu, Baltazar; Price, Ron, Jr.
2011-01-01
This report summarizes more than a decade of experiences with implementing computer-based testing across a 4-year medical curriculum. Practical considerations are given to the fields incorporated within an item database and their use in the creation and analysis of examinations, security issues in the delivery and integrity of examinations,…
Studying the Impact of Federal and State Changes in Student Aid Policy at the Campus Level.
ERIC Educational Resources Information Center
Fenske, Robert H.; Dillon, Kathryn A.; Porter, John D.
1997-01-01
Argues that shifts in government policies can produce unintended consequences for needy students and the institutions they attend, and illustrates how campus units can cooperate to examine the impact of these changes through creation of longitudinal databases and data warehousing techniques. Describes the approach used and results of a study at…
The purpose of this SOP is to describe how lab results are organized and processed into the official database known as the Complete Dataset (CDS); to describe the structure and creation of the Analysis-ready Dataset (ADS); and to describe the structure and process of creating the...
Flip-J: Development of the System for Flipped Jigsaw Supported Language Learning
ERIC Educational Resources Information Center
Yamada, Masanori; Goda, Yoshiko; Hata, Kojiro; Matsukawa, Hideya; Yasunami, Seisuke
2016-01-01
This study aims to develop and evaluate a language learning system supported by the "flipped jigsaw" technique, called "Flip-J". This system mainly consists of three functions: (1) the creation of a learning material database, (2) allocation of learning materials, and (3) formation of an expert and jigsaw group. Flip-J was…
Study of Italian Renaissance sculptures using an external beam nuclear microprobe
NASA Astrophysics Data System (ADS)
Zucchiatti, A.; Bouquillon, A.; Moignard, B.; Salomon, J.; Gaborit, J. R.
2000-03-01
The use of an extracted proton micro-beam for the PIXE analysis of glazes is discussed in the context of the growing interest in the creation of an analytical database on Italian Renaissance glazed terracotta sculptures. Some results concerning the frieze of an altarpiece of the Louvre museum, featuring white angels and cherubs heads, are presented.
DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation
NASA Astrophysics Data System (ADS)
Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh
2014-10-01
The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different nontribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images are captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. This database also includes some new features: soft biometric traits such as moles, freckles, and scars, and facial anthropometric variations that may be helpful to researchers in biometric recognition. It also provides a comparative study of the existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, whose results may serve other researchers as control performance scores.
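The two baselines can be sketched as standard scikit-learn pipelines; since the DeitY-TU images are not bundled here, random vectors stand in for face images, so the accuracies are meaningless except as a smoke test:

```python
# Sketch of PCA ("eigenfaces") and LDA baselines with a nearest-
# neighbor classifier, on synthetic stand-in data in place of the
# DeitY-TU face images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_subjects, per_subject, dim = 20, 10, 1024   # 32x32 "images"
X = np.vstack([rng.normal(loc=i, size=(per_subject, dim))
               for i in range(n_subjects)])
y = np.repeat(np.arange(n_subjects), per_subject)

for name, reducer in [("PCA", PCA(n_components=40)),
                      ("LDA", LinearDiscriminantAnalysis(n_components=19))]:
    clf = make_pipeline(reducer, KNeighborsClassifier(n_neighbors=1))
    clf.fit(X, y)
    print(name, "training accuracy:", clf.score(X, y))
```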
Burstyn, I; Kromhout, H; Cruise, P J; Brennan, P
2000-01-01
The objective of this project was to construct a database of exposure measurements which would be used to retrospectively assess the intensity of various exposures in an epidemiological study of cancer risk among asphalt workers. The database was developed as a stand-alone Microsoft Access 2.0 application, which could work in each of the national centres. Exposure data included in the database comprised measurements of exposure levels, plus supplementary information on production characteristics which was analogous to that used to describe companies enrolled in the study. The database has been successfully implemented in eight countries, demonstrating the flexibility and data security features adequate to the task. The database allowed retrieval and consistent coding of 38 data sets of which 34 have never been described in peer-reviewed scientific literature. We were able to collect most of the data intended. As of February 1999 the database consisted of 2007 sets of measurements from persons or locations. The measurements appeared to be free from any obvious bias. The methodology embodied in the creation of the database can be usefully employed to develop exposure assessment tools in epidemiological studies.
Drill hole data for coal beds in the Powder River Basin, Montana and Wyoming
Haacke, Jon E.; Scott, David C.
2013-01-01
This U.S. Geological Survey (USGS) report on the Powder River Basin (PRB) of Montana and Wyoming is part of the U.S. Coal Resources and Reserves Assessment Project. Essential to that project was the creation of a comprehensive drill hole database that was used for coal bed correlation and for coal resource and reserve assessments in the PRB. This drill hole database was assembled using data from the USGS National Coal Resources Data System, several other Federal and State agencies, and selected mining companies. Additionally, USGS personnel manually entered lithologic picks into the database from geophysical logs of coalbed methane, oil, and gas wells. Of the 29,928 drill holes processed, records of 21,393 are in the public domain and are included in this report. The database contains location information, lithology, and coal bed names for each drill hole.
The future of medical diagnostics: large digitized databases.
Kerr, Wesley T; Lau, Edward P; Owens, Gwen E; Trefler, Aaron
2012-09-01
The electronic health record mandate within the American Recovery and Reinvestment Act of 2009 will have a far-reaching effect on medicine. In this article, we provide an in-depth analysis of how this mandate is expected to stimulate the production of large-scale, digitized databases of patient information. There is evidence to suggest that millions of patients and the National Institutes of Health will fully support the mining of such databases to better understand the process of diagnosing patients. This data mining likely will reaffirm and quantify known risk factors for many diagnoses. This quantification may be leveraged to further develop computer-aided diagnostic tools that weigh risk factors and provide decision support for health care providers. We expect that creation of these databases will stimulate the development of computer-aided diagnostic support tools that will become an integral part of modern medicine.
Assessment on the Creation of an Under Secretary of Defense for Business Management & Information; Best Practices for Real Property Management. There is now an opportunity to modernize the management of the Department's real property.
A Conceptual Framework for Examining Knowledge Management in Higher Education Contexts
ERIC Educational Resources Information Center
Lee, Hae-Young; Roth, Gene L.
2009-01-01
Knowledge management is an on-going process that involves varied activities: diagnosis, design, and implementation of knowledge creation, knowledge transfer, and knowledge sharing. The primary goal of knowledge management, like other management theories or models, is to identify and leverage organizational and individual knowledge for the…
Hopping out of the Swamp: Management of Change in a Downsizing Environment.
ERIC Educational Resources Information Center
Burton, Jennus L.
1993-01-01
Arizona State University has developed a model for managing declining resources in administrative service functions. A variant of Total Quality Management, it involves clarification of administrative unit functions, unit self-examination, establishment of program priorities, environmental scanning, creation of an infrastructure to manage change,…
Science, state, and spirituality: Stories of four creationists in South Korea.
Park, Hyung Wook; Cho, Kyuhoon
2018-03-01
This paper presents an analysis of the birth and growth of scientific creationism in South Korea by focusing on the lives of four major contributors. After creationism arrived in Korea in 1980 through the global campaign of leading American creationists, including Henry Morris and Duane Gish, it steadily grew in the country, reflecting its historical and social conditions, and especially its developmental state with its structured mode of managing science and appropriating religion. We argue that while South Korea's creationism started with the state-centered conservative Christianity under the government that also vigilantly managed scientists, it subsequently constituted some technical experts' efforts to move away from the state and its religion and science through their negotiation of a new identity as Christian intellectuals ( chisigin). Our historical study will thus explain why South Korea became what Ronald Numbers has called "the creationist capital of the world."
Complete Imageless solution for overlay front-end manufacturing
NASA Astrophysics Data System (ADS)
Herisson, David; LeCacheux, Virginie; Touchet, Mathieu; Vachellerie, Vincent; Lecarpentier, Laurent; Felten, Franck; Polli, Marco
2005-09-01
The Imageless option of the KLA-Tencor RDM (Recipe Data Management) system is a new method of recipe creation that uses only the mask design to define the alignment target and measurement parameters. This technique is potentially the easiest tool for improving recipe management across the large number of products in a logic fab. Overlay recipes are created without a wafer, using a synthetic image (a copy of the gds mask file) for the alignment pattern, together with the target design, such as shape (frame in frame) and size, for the measurement. A complete gauge study on a critical CMOS 90 nm gate level has been conducted to evaluate the reliability and robustness of the imageless recipe. We show that Imageless drastically limits the number of templates used for recipe creation, and improves or maintains measurement capability compared to manual (operator-dependent) recipe creation. Imageless appears to be a suitable solution for high-volume manufacturing, as shown by the results obtained on production lots.
[Creation and management of organizational knowledge].
Shinyashiki, Gilberto Tadeu; Trevizan, Maria Auxiliadora; Mendes, Isabel Amélia
2003-01-01
With a view to creating and establishing a sustainable position of competitive advantage, the best organizations are increasingly investing in the application of concepts such as learning, knowledge and competency. The organization's creation or acquisition of knowledge about its actions represents an intangible resource that is capable of conferring a competitive advantage upon it. This knowledge derives from interactions developed in learning processes that occur in the organizational environment. The more specific the characteristics this knowledge demonstrates in relation to the organization, the more it will become the foundation of its core competencies and, consequently, an important strategic asset. This article emphasizes nurses' role in the process of knowledge management, placing them at the intersection between horizontal and vertical information levels as well as in the creation of a sustainable competitive advantage. The authors believe that this contribution may represent an opportunity for reflection about its implications for the scenarios of health and nursing practices.
Total Quality Management and Cost of Quality
NASA Astrophysics Data System (ADS)
Hadjicostas, Evsevios
Before we start analysing the philosophy of Total Quality Management it is worthwhile going back to the early days of quality and the quality movement. In fact, the quality concept dates back to the creation of Adam and Eve: “And God saw every thing that he had made, and, behold, it was very good”. (Genesis A 31). It is remarkable that at the end of each day, looking at his creations God was saying, “This is good”. However, at the end of the sixth day, after he finished the creation of human beings, he said, “This is very good”. It is amazing that he did not say, “This is excellent”. This is because excellence is something that we gain after tireless effort. God left room for improvement in order to challenge us and make our life more attractive, which has really happened!
Short Fiction on Film: A Relational DataBase.
ERIC Educational Resources Information Center
May, Charles
Short Fiction on Film is a database that was created and will run on DataRelator, a relational database manager created by Bill Finzer for the California State Department of Education in 1986. DataRelator was designed for use in teaching students database management skills and to provide teachers with examples of how a database manager might be…
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2012 CFR
2012-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2013 CFR
2013-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2011 CFR
2011-10-01
... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers; to develop procedures that these database managers will use to ensure compliance with the...
Approaching Academic Digital Content Management.
ERIC Educational Resources Information Center
Acker, Stephen R.
2002-01-01
Discusses digital content management in higher education. Highlights include learning objects that make content more modular so it can be used in other courses or by other institutions; and a system at Ohio State University for content management that includes the creation of learner profiles. (LRW)
Modernization and multiscale databases at the U.S. geological survey
Morrison, J.L.
1992-01-01
The U.S. Geological Survey (USGS) has begun a digital cartographic modernization program. Keys to that program are the creation of a multiscale database, a feature-based file structure that is derived from a spatial data model, and a series of "templates" or rules that specify the relationships between instances of entities in reality and features in the database. The database will initially hold data collected from the USGS standard map products at scales of 1:24,000, 1:100,000, and 1:2,000,000. The spatial data model is called the digital line graph-enhanced model, and the comprehensive rule set consists of collection rules, product generation rules, and conflict resolution rules. This modernization program will affect the USGS mapmaking process because both digital and graphic products will be created from the database. In addition, non-USGS map users will have more flexibility in uses of the databases. These remarks are those of the session discussant made in response to the six papers and the keynote address given in the session. © 1992.
Microcomputer Database Management Systems for Bibliographic Data.
ERIC Educational Resources Information Center
Pollard, Richard
1986-01-01
Discusses criteria for evaluating microcomputer database management systems (DBMS) used for storage and retrieval of bibliographic data. Two popular types of microcomputer DBMS--file management systems and relational database management systems--are evaluated with respect to these criteria. (Author/MBR)
The Data Base and Decision Making in Public Schools.
ERIC Educational Resources Information Center
Hedges, William D.
1984-01-01
Describes generic types of databases--file management systems, relational database management systems, and network/hierarchical database management systems--with their respective strengths and weaknesses; discusses factors to be considered in determining whether a database is desirable; and provides evaluative criteria for use in choosing…
[Exposed workers to lung cancer risk. An estimation using the ISPESL database of enterprises].
Scarselli, Alberto; Marinaccio, Alessandro; Nesti, Massimo
2007-01-01
Lung cancer is the leading cause of death among males in industrialized countries and is increasing among females. In 2001 a uniform and standardised list of occupations or jobs known or suspected to be associated with lung cancer was prepared. The aim of this study is to set up a database of Italian enterprises whose activities correspond to this list and to assess the number of potentially exposed workers. A detailed and unique list of codes, referring to the Ateco91 ISTAT classification and excluding the State Railways and public administration sectors, was developed. The list is divided into two categories: occupations or jobs definitely entailing carcinogenic risk, and those which probably/possibly entail a risk. Firms were selected from the ISPESL database of enterprises and the number of workers was estimated on the basis of this list. Setting: Italy. Outcomes: assessment of the number of workers potentially exposed to lung cancer risk and creation of a register of the firms involved. The number of potentially exposed blue-collar workers in the industrial and services sectors is 650,886, in 117,006 firms censused in Italy. Corresponding figures in the agriculture sector are 163,340 workers and 84,839 firms. This type of evaluation, being based on administrative sources rather than on direct measures of exposure, certainly overestimates the number of exposed workers. The lists created, being based on a standard classification, allow for the creation of databases which can be used to control occupational exposure to carcinogens and to increase comparability between epidemiologic studies based on job-exposure matrices.
NASA Astrophysics Data System (ADS)
Weatherill, G. A.; Pagani, M.; Garcia, J.
2016-09-01
The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
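As a toy illustration of the empirical-model step described above, and not the authors' released toolkit, the following Python sketch fits a linear conversion from a network's native body-wave magnitude (mb) to moment magnitude (Mw) using events common to two bulletins, then applies it to harmonize further events; all numbers are invented.

    import numpy as np

    # Magnitudes of events identified as common to two bulletins
    # (invented values; a real catalogue would supply thousands of pairs).
    mb = np.array([4.2, 4.8, 5.1, 5.6, 6.0, 6.3])
    mw = np.array([4.4, 5.0, 5.4, 5.9, 6.4, 6.8])

    # Fit a simple least-squares linear model Mw = a * mb + b.
    a, b = np.polyfit(mb, mw, deg=1)

    def to_moment_magnitude(m):
        """Convert a native-scale magnitude into the target Mw scale."""
        return a * m + b

    # Harmonize a set of events recorded only in mb.
    print(to_moment_magnitude(np.array([4.5, 5.2, 5.8])))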
ERIC Educational Resources Information Center
New York State Education Dept., Albany. State Archives and Records Administration.
This report provides local governments with guidelines and suggestions for selecting a Records Management Officer to develop, organize, and direct a records management program. Such a program is described as an over-arching, continuing, administrative effort that manages recorded information from its initial creation to its final disposition.…
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 52.101 - General definitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Center (“NASC”). The entity that provides user support for the Service Management System database and administers the Service Management System database on a day-to-day basis. (b) Responsible Organization (“Resp... regional databases in the toll free network. (d) Service Management System Database (“SMS Database”). The...
47 CFR 0.241 - Authority delegated.
Code of Federal Regulations, 2014 CFR
2014-10-01
... individual database managers; and to perform other functions as needed for the administration of the TV bands... database functions for unlicensed devices operating in the television broadcast bands (TV bands) as set... methods that will be used to designate TV bands database managers, to designate these database managers...
Practice management for academic dermatology departments.
Eaglstein, W H
2000-09-01
Practice management in the academic medical center (AMC) is different than in other environments. Practice is only a part of the practitioner's mission within an AMC. Practice revenue will be subject to a tax or overhead by both the school and the department. Contract and practice guidelines cannot be tailored to the needs of the dermatology practice, because contracts and guidelines are negotiated globally for all of the practices within the AMC. Personnel, on which the practice depends, may report to hospitals and clinics rather than to the practice's management. Even control of the practice's manager may be diluted by a dual or "dotted line" reporting relationship between the department manager and the school practice manager. Although more constraints exist within the AMC, there are some strategic and operational choices that affect a practice's success. Among these are: (1) selection of services offered; (2) creation of satellites; (3) stimulation of faculty effort; (4) enhancement of faculty billing knowledge; and (5) creation of a "tie" between staff and the practice.
Lafond, Valentine; Cordonnier, Thomas; Courbaud, Benoît
2015-11-01
Mixed uneven-aged forests are considered favorable to the provision of multiple ecosystem services and to reconciling timber production with biodiversity conservation. However, some forest managers now plan to increase the intensity of thinning and harvesting operations in these forests. Retention measures or gap creation are considered to compensate for potential negative impacts on biodiversity. Our objectives were to assess the effect of these management practices on timber production and biodiversity conservation and to identify potential compensating effects between these practices, using the concept of ecological intensification as a framework. We performed a simulation study coupling Samsara2, a simulation model designed for spruce-fir uneven-aged mountain forests, an uneven-aged silviculture algorithm, and biodiversity models. We analyzed the effect of parameters related to uneven-aged management practices on timber production, biodiversity, and sustainability indicators. Our study confirmed that the indicators responded differently to management practices, leading to trade-off situations. Increasing management intensity had negative impacts on several biodiversity indicators, which could be partly compensated by the positive effect of retention measures targeting large trees, non-dominant species, and deadwood. The impact of gap creation was more mixed, with a positive effect on the diversity of tree sizes and deadwood but a negative impact on the spruce-fir mixing balance and on the diversity of the understory layer. Through the analysis of compensating effects, we finally revealed the existence of possible ecological intensification pathways, i.e., the possibility of increasing management intensity while maintaining biodiversity through the promotion of nature-based management principles (gap creation and retention measures).
ERIC Educational Resources Information Center
Woods, Larry, Ed.
The 1999 American Society for Information Science (ASIS) conference explored current knowledge creation, acquisition, navigation, correlation, retrieval, management, and dissemination practicalities and potentialities, their implementation and impact, and the theories behind the developments. Speakers reviewed processes, technologies, and tools,…
Relationship between organizational structure and creativity in teaching hospitals.
Rezaee, Rita; Marhamati, Saadat; Nabeiei, Parisa; Marhamati, Raheleh
2014-07-01
Organization structure and manpower constitute two basic components of an organization, and both are necessary for establishing an organization. The aim of this survey was to investigate the type of organization structure (mechanistic or organic) from the viewpoint of senior and junior managers in Shiraz teaching hospitals, and the creativity in each of these two structures. In this cross-sectional, descriptive-analytic study, organization structure and organizational creativity questionnaires were filled out by hospital managers. Following statistical consultation, and due to the limited target population, the entire study population was considered as the sample; thus, the sample size in this study was 84 (12 hospitals, with 7 respondents per hospital). For data analysis, SPSS 14 was used, with Spearman correlation coefficients and t-tests. Results showed that there is a negative association between centralization and complexity and organizational creativity and its dimensions. There was also a negative association between formalization and four dimensions of organizational creativity: acceptance of change, tolerance of ambiguity, encouragement of new views, and reduced external control (p=0.001). The results of this study showed that creativity in hospitals with an organic structure is greater than in hospitals with a mechanistic structure.
Relationship between organizational structure and creativity in teaching hospitals
REZAEE, RITA; MARHAMATI, SAADAT; NABEIEI, PARISA; MARHAMATI, RAHELEH
2014-01-01
Introduction: Organization structure and manpower constitute two basic components of an organization, and both are necessary for establishing an organization. The aim of this survey was to investigate the type of organization structure (mechanistic or organic) from the viewpoint of senior and junior managers in Shiraz teaching hospitals, and the creativity in each of these two structures. Methods: In this cross-sectional, descriptive-analytic study, organization structure and organizational creativity questionnaires were filled out by hospital managers. Following statistical consultation, and due to the limited target population, the entire study population was considered as the sample; thus, the sample size in this study was 84 (12 hospitals, with 7 respondents per hospital). For data analysis, SPSS 14 was used, with Spearman correlation coefficients and t-tests. Results: Results showed that there is a negative association between centralization and complexity and organizational creativity and its dimensions. There was also a negative association between formalization and four dimensions of organizational creativity: acceptance of change, tolerance of ambiguity, encouragement of new views, and reduced external control (p=0.001). Conclusion: The results of this study showed that creativity in hospitals with an organic structure is greater than in hospitals with a mechanistic structure. PMID:25512934
Mynodbcsv: lightweight zero-config database solution for handling very large CSV files.
Adaszewski, Stanisław
2014-01-01
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle. The lack of standardized ways of exploring different data layouts requires an effort each time to solve the problem from scratch. The possibility of accessing data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) are one of the most common data storage formats. Despite the format's simplicity, handling becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns; therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach--data stay mostly in the CSV files; "zero configuration"--no need to specify a database schema; written in C++, with boost [1], SQLite [2] and Qt [3], it doesn't require installation and has a very small size; query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files ensure efficient plan execution; effortless support for millions of columns; due to per-value typing, using mixed text/number data is easy; a very simple network protocol provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It doesn't need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
Mynodbcsv: Lightweight Zero-Config Database Solution for Handling Very Large CSV Files
Adaszewski, Stanisław
2014-01-01
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle. The lack of standardized ways of exploring different data layouts requires an effort each time to solve the problem from scratch. The possibility of accessing data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) are one of the most common data storage formats. Despite the format's simplicity, handling becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns; therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a “no copy” approach – data stay mostly in the CSV files; “zero configuration” – no need to specify a database schema; written in C++, with boost [1], SQLite [2] and Qt [3], it doesn't require installation and has a very small size; query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files ensure efficient plan execution; effortless support for millions of columns; due to per-value typing, using mixed text/number data is easy; a very simple network protocol provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It doesn't need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results. PMID:25068261
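Mynodbcsv itself is a C++ tool; purely to illustrate the idea of querying CSV data through SQL without hand-written schema creation, here is a Python sketch using only the standard library. Unlike the "no copy" design described above, it copies the file into an in-memory SQLite table, deriving the schema from the header row; the file name and query are placeholders.

    import csv
    import sqlite3

    def query_csv(path, sql, table="data"):
        """Load a CSV into an in-memory SQLite table and run one SQL query.
        The schema is derived automatically from the CSV header row."""
        with open(path, newline="") as fh:
            reader = csv.reader(fh)
            header = next(reader)
            columns = ", ".join(f'"{name}"' for name in header)
            placeholders = ", ".join("?" for _ in header)
            con = sqlite3.connect(":memory:")
            con.execute(f"CREATE TABLE {table} ({columns})")
            con.executemany(f"INSERT INTO {table} VALUES ({placeholders})", reader)
        return con.execute(sql).fetchall()

    # Hypothetical usage:
    # rows = query_csv("measurements.csv", "SELECT COUNT(*) FROM data")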
Total Quality Management in Information Services. Information Services Management Series.
ERIC Educational Resources Information Center
St. Clair, Guy
Information services managers have a responsibility to provide the best information delivery possible. The basic principles of total quality management can be used by information professionals to help justify library funding through the creation of an environment where customer-patron satisfaction is paramount. This book reveals how to apply the…
Code of Federal Regulations, 2010 CFR
2010-07-01
...) FEDERAL MANAGEMENT REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What are the records management responsibilities of the Administrator of General Services (the Administrator), the Archivist of...
ERIC Educational Resources Information Center
LaLonde, Courtney C.
2017-01-01
Effective classroom management is critical in the creation of learning environments that foster academic success for all students. Preservice teachers must develop an awareness and understanding of all aspects of classroom management and their relation to the two main classroom management approaches: the discipline based approach and the…
Database Management Systems: New Homes for Migrating Bibliographic Records.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Bierbaum, Esther G.
1987-01-01
Assesses bibliographic databases as part of visionary text systems such as hypertext and scholars' workstations. Downloading is discussed in terms of the capability to search records and to maintain unique bibliographic descriptions, and relational database management systems, file managers, and text databases are reviewed as possible hosts for…
blend4php: a PHP API for galaxy
Wytko, Connor; Soto, Brian; Ficklin, Stephen P.
2017-01-01
Galaxy is a popular framework for the execution of complex analytical pipelines, typically for large data sets, and is commonly used for (but not limited to) genomic, genetic, and related biological analyses. It provides a web front-end and integrates with high-performance computing resources. Here we report the development of the blend4php library, which wraps Galaxy's RESTful API in a PHP-based library. PHP-based web applications can use blend4php to automate the execution, monitoring, and management of a remote Galaxy server, including its users, workflows, jobs, and more. The blend4php library was specifically developed for the integration of Galaxy with Tripal, the open-source toolkit for the creation of online genomic and genetic web sites. However, it was designed as an independent library for use by any application, and is freely available under version 3 of the GNU Lesser General Public License (LGPL v3.0) at https://github.com/galaxyproject/blend4php. Database URL: https://github.com/galaxyproject/blend4php PMID:28077564
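blend4php is a PHP library; for consistency with the other sketches in this report, here is the same idea in Python: a thin client wrapping a remote Galaxy server's RESTful API. Galaxy accepts the API key as a query parameter; the server URL and key below are placeholders.

    import requests

    class GalaxyClient:
        """Minimal sketch of a client wrapping a Galaxy server's REST API."""

        def __init__(self, base_url, api_key):
            self.base_url = base_url.rstrip("/")
            self.api_key = api_key

        def _get(self, path):
            # Galaxy accepts the API key as a 'key' query parameter.
            response = requests.get(f"{self.base_url}/api/{path}",
                                    params={"key": self.api_key})
            response.raise_for_status()
            return response.json()

        def workflows(self):
            """List the workflows visible to this user."""
            return self._get("workflows")

    # Hypothetical usage:
    # client = GalaxyClient("https://galaxy.example.org", "MY_API_KEY")
    # for wf in client.workflows():
    #     print(wf["id"], wf["name"])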
Gurinović, Mirjana; Milešević, Jelena; Novaković, Romana; Kadvan, Agnes; Djekić-Ivanković, Marija; Šatalić, Zvonimir; Korošec, Mojca; Spiroski, Igor; Ranić, Marija; Dupouy, Eleonora; Oshaug, Arne; Finglas, Paul; Glibetić, Maria
2016-02-15
The objective of this paper is to share experience and provide updated information on capacity development in the Central and Eastern Europe/Balkan Countries (CEE/BC) region relevant to public health nutrition, particularly in the creation of food composition databases (FCDBs), the application of dietary intake assessment and monitoring tools, and the harmonization of methodology for nutrition surveillance. The Balkan Food Platform was established by a Memorandum of Understanding among EuroFIR AISBL, the Institute for Medical Research, Belgrade, the Capacity Development Network in Nutrition in CEE (CAPNUTRA), and institutions from nine countries in the region. An inventory of FCDB status identified a lack of harmonized and standardized research tools. To strengthen harmonization in CEE/BC in line with European research trends, the Network members collaborated in the development of a regional FCDB, using web-based food composition database management software following EuroFIR standards. The comprehensive nutrition assessment and planning tool DIET ASSESS & PLAN could enable synchronization of nutrition surveillance across countries. Copyright © 2015 Elsevier Ltd. All rights reserved.
Digital divide, biometeorological data infrastructures and human vulnerability definition
NASA Astrophysics Data System (ADS)
Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko
2018-05-01
The design and implementation of any climate-related health service today must avoid the digital divide, since such services require access to, and the ability to use, complex technological devices, massive meteorological data, users' geographic locations, and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets, and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for defining biometeorological profiles of persons and for the development of customized climate services for users in the near future.
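The "partial density of atmospheric oxygen" that the system maps daily can be estimated from surface pressure and temperature with the ideal gas law; the Python sketch below assumes the standard 20.95% oxygen volume fraction of dry air, and the operational formulation used by Pronbiomet may well differ.

    # Partial density of atmospheric oxygen via the ideal gas law:
    #   rho_O2 = x_O2 * p * M_O2 / (R * T)
    # The 20.95% volume fraction is the standard dry-air value; the
    # formulation actually used by Pronbiomet may differ.
    R = 8.314462618    # J / (mol K), universal gas constant
    M_O2 = 0.0319988   # kg / mol, molar mass of O2
    X_O2 = 0.2095      # mole (volume) fraction of O2 in dry air

    def o2_partial_density(pressure_pa: float, temperature_k: float) -> float:
        """Return the oxygen partial density in grams per cubic metre."""
        return X_O2 * pressure_pa * M_O2 / (R * temperature_k) * 1000.0

    # Standard sea-level atmosphere gives roughly 284 g/m^3.
    print(o2_partial_density(101325.0, 288.15))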
From BIM to GIS at the Smithsonian Institution
NASA Astrophysics Data System (ADS)
Günther-Diringer, Detlef
2018-05-01
BIM files (Building Information Models) are a basic prerequisite in modern architecture and building management for the successful creation of construction engineering projects. The facilities department of the Smithsonian Institution maintains more than six hundred buildings. All facilities are digitally available in an ESRI ArcGIS environment, with database links to room-level usage and maintenance information. These data are available organization-wide through an intranet viewer, but only in a two-dimensional representation. The goal of the project was to develop a workflow from the available BIM models to the existing GIS structure. The test environment comprised the BIM models of the Smithsonian museum buildings along the Washington Mall. Based on new releases of Autodesk Revit, FME, and ArcGIS Pro, the workflow from BIM to the Smithsonian's GIS data structure was successfully developed and may be applied in setting up the future 3D intranet viewer.
Digital divide, biometeorological data infrastructures and human vulnerability definition.
Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko
2018-05-01
The design and implementation of any climate-related health service today must avoid the digital divide, since such services require access to, and the ability to use, complex technological devices, massive meteorological data, users' geographic locations, and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets, and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for defining biometeorological profiles of persons and for the development of customized climate services for users in the near future.
Digital divide, biometeorological data infrastructures and human vulnerability definition
NASA Astrophysics Data System (ADS)
Fdez-Arroyabe, Pablo; Lecha Estela, Luis; Schimt, Falko
2017-06-01
The design and implementation of any climate-related health service today must avoid the digital divide, since such services require access to, and the ability to use, complex technological devices, massive meteorological data, users' geographic locations, and biophysical information. This article presents in detail the co-creation of a biometeorological data infrastructure, a complex platform formed by multiple components: a mainframe, a biometeorological model called Pronbiomet, a relational database management system, data procedures, communication protocols, different software packages, users, datasets, and a mobile application. The system produces four daily world maps of the partial density of atmospheric oxygen and collects user feedback on their health condition. The infrastructure is shown to be a useful tool for delineating individual vulnerability to meteorological changes, a key factor in the definition of any biometeorological risk. This technological approach to studying weather-related health impacts is the initial seed for defining biometeorological profiles of persons and for the development of customized climate services for users in the near future.
A component-based, distributed object services architecture for a clinical workstation.
Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O
1996-01-01
Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.
A component-based, distributed object services architecture for a clinical workstation.
Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.
1996-01-01
Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744
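The middle tier described above maps naturally onto interface-based programming. The Python sketch below shows the general pattern only; the service names follow the paper, but the code is illustrative and stands in for the authors' CORBA/OLE implementations.

    from abc import ABC, abstractmethod

    class VocabularyService(ABC):
        """One of the core middle-tier service interfaces named in the paper."""
        @abstractmethod
        def lookup(self, term: str) -> list:
            ...

    class ServiceRegistry:
        """User interface components locate services by name, never by
        concrete class, so an interface can be backed by CORBA, OLE,
        or a plain local object without the UI changing."""
        def __init__(self):
            self._services = {}

        def register(self, name, service):
            self._services[name] = service

        def get(self, name):
            return self._services[name]

    class InMemoryVocabulary(VocabularyService):
        """Hypothetical local implementation standing in for a networked one."""
        def lookup(self, term):
            return [t for t in ("hypertension", "hyperlipidemia") if term in t]

    registry = ServiceRegistry()
    registry.register("vocabulary", InMemoryVocabulary())
    print(registry.get("vocabulary").lookup("hyper"))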
Utilizing ORACLE tools within Unix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferguson, R.
1995-07-01
Large databases, by their very nature, often serve as repositories of data which may be needed by other systems. The transmission of this data to other systems has in the past involved several layers of human intervention. The Integrated Cargo Data Base (ICDB) developed by Martin Marietta Energy Systems for the Military Traffic Management Command as part of the Worldwide Port System provides data integration and worldwide tracking of cargo that passes through common-user ocean cargo ports. One of the key functions of ICDB is data distribution of a variety of data files to a number of other systems. Development of automated data distribution procedures had to deal with the following constraints: (1) variable generation time for data files, (2) use of only current data for data files, (3) use of a minimum number of select statements, (4) creation of unique data files for multiple recipients, (5) automatic transmission of data files to recipients, and (6) avoidance of extensive and long-term data storage.
Partial polygon pruning of hydrographic features in automated generalization
Stum, Alexander K.; Buttenfield, Barbara P.; Stanislawski, Larry V.
2017-01-01
This paper demonstrates a working method to automatically detect and prune portions of waterbody polygons to support creation of a multi-scale hydrographic database. Water features are known to be sensitive to scale change; and thus multiple representations are required to maintain visual and geographic logic at smaller scales. Partial pruning of polygonal features—such as long and sinuous reservoir arms, stream channels that are too narrow at the target scale, and islands that begin to coalesce—entails concurrent management of the length and width of polygonal features as well as integrating pruned polygons with other generalized point and linear hydrographic features to maintain stream network connectivity. The implementation follows data representation standards developed by the U.S. Geological Survey (USGS) for the National Hydrography Dataset (NHD). Portions of polygonal rivers, streams, and canals are automatically characterized for width, length, and connectivity. This paper describes an algorithm for automatic detection and subsequent processing, and shows results for a sample of NHD subbasins in different landscape conditions in the United States.
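A rough sketch of the width test at the heart of partial pruning, using Shapely's buffer erosion (an assumption of this illustration, not necessarily the USGS algorithm): parts of a waterbody narrower than the minimum width at the target scale vanish under a negative buffer, and the difference between input and result marks candidates for pruning or collapse to a centerline. The network-connectivity handling described above is omitted.

    from shapely.geometry import Polygon

    def prune_narrow_parts(poly, min_width):
        """Erode by half the minimum width, then dilate back. Regions
        narrower than min_width (e.g., a thin reservoir arm) disappear."""
        return poly.buffer(-min_width / 2.0).buffer(min_width / 2.0)

    # A waterbody with a thin appendage narrower than the 2.0-unit threshold.
    water = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]).union(
        Polygon([(10, 4.5), (20, 4.5), (20, 5.5), (10, 5.5)]))
    kept = prune_narrow_parts(water, 2.0)
    print(round(water.area, 1), round(kept.area, 1))  # the appendage is gone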
Today's challenges in pharmacovigilance: what can we learn from epoetins?
Ebbers, Hans C; Mantel-Teeuwisse, Aukje K; Moors, Ellen H M; Schellekens, Huub; Leufkens, Hubert G
2011-04-01
Highly publicized safety issues of medicinal products in recent years and the accompanying political pressure have forced both the US FDA and the European Medicines Agency (EMA) to implement stronger regulations concerning pharmacovigilance. These legislative changes demand more proactive risk management strategies of both pharmaceutical companies and regulators to characterize and minimize known and potential safety concerns. Concurrently, comprehensive surveillance systems are implemented, intended to identify and confirm adverse drug reactions, including the creation of large pharmacovigilance databases and the cooperation with epidemiological centres. Although the ambitions are high, not much is known about how effective all these measures are, or will be. In this review we analyse how the pharmacovigilance community has acted upon two adverse events associated with the use of erythropoiesis-stimulating agents: the sudden increase in pure red cell aplasia and the possible risk of tumour progression associated with these products. These incidents provide important insight for improving pharmacovigilance, but also pose new challenges for regulatory decision making.
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Selecting Data-Base Management Software for Microcomputers in Libraries and Information Units.
ERIC Educational Resources Information Center
Pieska, K. A. O.
1986-01-01
Presents a model for the evaluation of database management systems software from the viewpoint of librarians and information specialists. The properties of data management systems, database management systems, and text retrieval systems are outlined and compared. (10 references) (CLB)
23 CFR 970.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... the management systems and their associated databases; and (5) A process for data collection, processing, analysis and updating for each management system. (d) All management systems will use databases with a geographical reference system that can be used to geolocate all database information. (e...
Construction of databases: advances and significance in clinical research.
Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian
2015-12-01
Widely used in clinical research, databases are an automated data management technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.
Cargo Data Management Demonstration System
DOT National Transportation Integrated Search
1974-02-01
Delays in receipt and creation of cargo documents are a problem in international trade. The work described demonstrates some of the advantages and capabilities of a computer-based cargo data management system. A demonstration system for data manageme...
[The future of clinical laboratory database management system].
Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y
1999-09-01
To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.
LANDFIRE Remap: A New National Baseline Product Suite
NASA Astrophysics Data System (ADS)
Dockter, D.; Peterson, B.; Picotte, J. J.; Long, J.; Tolk, B.; Callahan, K.; Davidson, A.; Earnhardt, T.
2017-12-01
LANDFIRE, also known as the Landscape Fire and Resource Management Planning Tools Program, is a vegetation, fire, and fuel characteristic data creation program managed by both the U.S. Department of Agriculture Forest Service and the U.S. Department of the Interior with involvement from The Nature Conservancy. LANDFIRE represents the first and only complete, nationally consistent collection of over 20 geo-spatial layers (e.g., vegetation type and structure, fuels, fire regimes), databases, and ecological models that can be used across multiple disciplines to support cross-boundary planning, management, and operations across all lands of the United States and insular areas. Since 2004, LANDFIRE has produced comprehensive, consistent, and scientifically based suites of mapped products and associated databases for the United States and affiliated territories. These products depict the nation's major ecosystems and wildlife habitats. Over a decade has passed since the development of the first LANDFIRE base map, and an overhaul of the data products, i.e., a "Remap", is needed to maintain their functionality and relevance. To prepare for Remap production LANDFIRE has invested in a prototyping phase that focused on exploring various input data sources and new modeling and mapping techniques. While still grounded in a solid base consisting of Landsat imagery and high-quality field observations, the prototyping efforts explored different image compositing techniques, the integration of lidar data, modeling approaches as well as other factors that will inform Remap production. Several of these various research efforts are highlighted here and are currently being integrated into an end-to-end data processing flow that will drive the Remap production. The current Remap prototype effort has focused on several study areas throughout CONUS, with additional studies anticipated for Alaska, Hawaii and the territories. The LANDFIRE Remap effort is expected to take three to four years, with production commencing in northwestern CONUS.
European settlement-era vegetation of the Monongahela National Forest, West Virginia
Melissa A. Thomas-Van Gundy; Michael P. Strager
2012-01-01
Forest restoration would be greatly helped by understanding just what forests looked like a century or more ago. One source of information on early forests is found in old deeds or surveys, where boundary corners were described by noting nearby trees known as witness trees. This paper describes the creation and analysis of a database of witness trees from original...
ERIC Educational Resources Information Center
Henthorne, Eileen
1995-01-01
Describes a project at the Princeton University libraries that converted the pre-1981 public card catalog, using digital imaging and optical character recognition technology, to fully tagged and indexed records of text in MARC format that are available on an online database and will be added to the online catalog. (LRW)
Graphics interfaces and numerical simulations: Mexican Virtual Solar Observatory
NASA Astrophysics Data System (ADS)
Hernández, L.; González, A.; Salas, G.; Santillán, A.
2007-08-01
Preliminary results associated with the computational development and creation of the Mexican Virtual Solar Observatory (MVSO) are presented. Basically, the MVSO prototype consists of two parts: the first is related to observations made during the past ten years at the Solar Observation Station (EOS) and at the Carl Sagan Observatory (OCS) of the Universidad de Sonora in Mexico. The second part is associated with the creation and manipulation of a database produced by numerical simulations of solar phenomena, for which we use the MHD ZEUS-3D code. The prototype was developed using MySQL, Apache, Java, and VSO 1.2, following the GNU and open-source philosophy. A graphical user interface (GUI) was created in order to run web-based, remote numerical simulations. For this purpose, Mono was used, because it provides the necessary software to develop and run .NET client and server applications on Linux. Although this project is still under development, we hope through this portal to gain access to other virtual solar observatories, to draw on the database created through numerical simulations, or, as the case may be, to perform simulations associated with solar phenomena.
Open Access to Geophysical Data
NASA Astrophysics Data System (ADS)
Sergeyeva, Nataliya A.; Zabarinskaya, Ludmila P.
2017-04-01
Russian World Data Centers for Solar-Terrestrial Physics & Solid Earth Physics, hosted by the Geophysical Center of the Russian Academy of Sciences, are Regular Members of the ICSU World Data System. Guided by the principles of the WDS Constitution and the WDS Data Sharing Principles, the WDCs provide full and open access to data, long-term data stewardship, compliance with agreed-upon data standards and conventions, and mechanisms to facilitate and improve access to data. Historical and current geophysical data on different media, in the form of digital data sets, analog records, collections of maps, and descriptions, are collected and stored in the Centers. The WDCs regularly add new data to their repositories and databases and keep them up to date. The WDCs now focus on four new projects: increasing the data available online through retrospective data collection and digital preservation; creating a modern system for registering and publishing data with digital object identifier (DOI) assignment, and promoting a culture of data citation; creating databases in place of a file system for more convenient access to data; and participating in the WDS Metadata Catalogue and Data Portal by creating metadata for WDC information resources.
2016-01-01
Implementing a new technical process demands complex preparation. In cardiac surgery this preparation is often reduced to visiting a surgeon who is familiar with the technique. The science of learning has identified several steps needed for a successful implementation. The first step is the creation of a complete conceptual approach; this demands setting down in writing the actions and reactions of every party involved in the new approach. By definition, a successful implementation starts with the creation of a group of involved individuals willing to collaborate towards a new goal. Then every teachable component described in this concept needs to be worked out in simulation training, from the smallest manual step to complete scenario training for complex situations. Finally, optimal organisational learning requires an existing database of the previous situation, a clear goal and objective, and a new database in which every new approach is restudied against the previous one, using appropriate methods of correction for variability. A complete implementation will always be more successful than a partial one, because partial implementations tend to revert to the previous routines. PMID:27942400
Sánchez Cuervo, Marina; Muñoz García, María; Gómez de Salazar López de Silanes, María Esther; Bermejo Vicedo, Teresa
2015-03-01
To describe the features of a computer program for the management of drugs in special situations (off-label and compassionate use) in a Department of Hospital Pharmacy (PD); to describe the methodology followed for its implementation in the Medical Services; and to evaluate its use after 2 years of practice. The design was carried out by pharmacists of the PD. The stages of the process were: selection of a software development company, establishment of a working group, selection of a development platform, design of an interactive viewer, definition of functionality and data processing, creation of databases, connection, installation and configuration, application testing, and development of improvements. A directed sequential strategy was used for implementation in the Medical Services. The program's utility and the experience of its use were evaluated after 2 years. A multidisciplinary working group was formed and developed Pk_Usos®. The program works in a web environment with a common viewer for all users, enabling real-time checking of the status of request files, and adapts to the procedure for managing medications in special situations. Pk_Usos® was introduced first in the Oncology Department, with 15 oncologists as users of the program. Over two years, 384 treatment requests for 343 patients were managed, of which 363 were authorized. Pk_Usos® is the first software designed for the management of drugs in special situations in the PD. It is a dynamic and efficient tool that saves time for all the professionals involved in the process. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
23 CFR 971.204 - Management systems requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... maintain the management systems and their associated databases; and (5) A process for data collection, processing, analysis, and updating for each management system. (c) All management systems will use databases with a common or coordinated reference system, that can be used to geolocate all database information...
Distributed Practicum Supervision in a Managed Learning Environment (MLE)
ERIC Educational Resources Information Center
Carter, David
2005-01-01
This evaluation-research feasibility study piloted the creation of a technology-mediated managed learning environment (MLE) involving the implementation of one of a new generation of instructionally driven management information systems (IMISs). The system, and supporting information and communications technology (ICT) was employed to support…
The Roles of Knowledge Professionals for Knowledge Management.
ERIC Educational Resources Information Center
Kim, Seonghee
This paper starts by exploring the definition of knowledge and knowledge management; examples of acquisition, creation, packaging, application, and reuse of knowledge are provided. It then considers the partnership for knowledge management and especially how librarians as knowledge professionals, users, and technology experts can contribute to…
A Computer Program for the Management of Prescription-Based Problems.
ERIC Educational Resources Information Center
Cotter, Patricia M.; Gumtow, Robert H.
1991-01-01
The Prescription Management Program, a software program using Apple's HyperCard on a Macintosh, was developed to simplify the creation, storage, modification, and general management of prescription-based problems. Pharmacy instructors may customize the program to serve their individual teaching needs. (Author/DB)
Magnetic Pair Creation Transparency in Pulsars
NASA Astrophysics Data System (ADS)
Story, Sarah; Baring, M. G.
2013-04-01
The Fermi gamma-ray pulsar database now exceeds 115 sources and has defined an important part of Fermi's science legacy, providing rich information for the interpretation of young energetic pulsars and old millisecond pulsars. Among the well-established population characteristics is the common occurrence of exponential turnovers in the 1-10 GeV range. These turnovers are too gradual to arise from magnetic pair creation in the strong magnetic fields of pulsar inner magnetospheres, so their energy can be used to provide lower bounds to the typical altitude of GeV-band emission. We explore such constraints due to single-photon pair creation transparency below the turnover energy. We adopt a semi-analytic approach, spanning both domains where general relativistic influences are important and locales where flat-spacetime photon propagation is modified by rotational aberration effects. Our work clearly demonstrates that including near-threshold physics in the pair creation rate is essential to deriving accurate attenuation lengths. The altitude bounds, typically in the range of 2-6 neutron star radii, provide key information on the emission altitude in radio-quiet pulsars that do not possess double-peaked pulse profiles. For the Crab pulsar, which emits pulsed radiation up to energies of 120 GeV, we obtain a lower bound of around 15 neutron star radii for its emission altitude.
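For orientation, the altitude bound rests on the standard kinematics of one-photon pair creation; in outline (textbook relations, not the paper's full near-threshold treatment):

    % Pair creation gamma -> e+ e- in a strong field is allowed only when the
    % photon energy perpendicular to the local field exceeds threshold:
    \varepsilon \,\sin\theta_{kB} \;\ge\; 2 m_e c^2 ,
    % where \theta_{kB} is the angle between the photon momentum and B.
    % For a dipole, the field strength falls off with emission radius r as
    B(r) \;\simeq\; B_p \left( \frac{R_{\rm NS}}{r} \right)^{3} ,
    % so photons up to a given energy escape unattenuated only above some
    % minimum altitude: an observed turnover energy bounds r from below.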
ERIC Educational Resources Information Center
Freeman, Carla; And Others
In order to understand how database software or online databases functioned in the overall curricula, the use of database management systems (DBMS) was studied at eight elementary and middle schools through classroom observation and interviews with teachers and administrators, librarians, and students. Three overall areas were addressed:…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... Excluded Parties Listing System (EPLS) databases into the System for Award Management (SAM) database. DATES... combined the functional capabilities of the CCR, ORCA, and EPLS procurement systems into the SAM database... identification number and the type of organization from the System for Award Management database. 0 3. Revise the...
An Examination of Selected Software Testing Tools: 1992
1992-12-01
Figure 27-17. Metrics Manager Database Full Report. …historical test database, the test management and problem reporting tools were examined using the sample test database provided by each supplier. …track the impact of new methods, organizational structures, and technologies. Metrics Manager is supported by an industry database that allows…
Campsite impact management: A survey of National Park Service backcountry managers
Marion, J.L.; Stubbs, C.J.; Vander Stoep, Gail A.
1993-01-01
Though visitation is a central purpose of the creation and management of parks, it inevitably affects parks' natural resources. This is particularly true at campsites, where visitation and its effects are concentrated. This paper presents partial results from a survey of National Park Service managers regarding general strategies and specific actions implemented by park managers to address campsite impact problems.
2010-09-01
Report sections include 2.0 Road Geometry Creation and 3.0 Node and Element Creation.
Database Searching by Managers.
ERIC Educational Resources Information Center
Arnold, Stephen E.
Managers and executives need the easy and quick access to business and management information that online databases can provide, but many have difficulty articulating their search needs to an intermediary. One possible solution would be to encourage managers and their immediate support staff members to search textual databases directly as they now…
Corporate Entrepreneurship: Teaching Managers To Be Entrepreneurs.
ERIC Educational Resources Information Center
Thornberry, Neal E.
2003-01-01
Examined training programs in four large companies designed to turn managers into corporate entrepreneurs. Results indicated that many managers can be trained to act like entrepreneurs and that their actions can lead to new value creation. Problems may arise when newly trained entrepreneurs reenter the corporation. (Contains 16 references.) (JOW)
Leveraging Metadata to Create Better Web Services
ERIC Educational Resources Information Center
Mitchell, Erik
2012-01-01
Libraries have been increasingly concerned with data creation, management, and publication. This increase is partly driven by shifting metadata standards in libraries and partly by the growth of data and metadata repositories being managed by libraries. In order to manage these data sets, libraries are looking for new preservation and discovery…
36 CFR 1222.16 - How are nonrecord materials managed?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false How are nonrecord materials managed? 1222.16 Section 1222.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222...
36 CFR 1222.16 - How are nonrecord materials managed?
Code of Federal Regulations, 2011 CFR
2011-07-01
... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false How are nonrecord materials managed? 1222.16 Section 1222.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222...
36 CFR 1222.20 - How are personal files defined and managed?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false How are personal files defined and managed? 1222.20 Section 1222.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal...
36 CFR 1222.16 - How are nonrecord materials managed?
Code of Federal Regulations, 2014 CFR
2014-07-01
... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false How are nonrecord materials managed? 1222.16 Section 1222.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222...
36 CFR 1222.20 - How are personal files defined and managed?
Code of Federal Regulations, 2012 CFR
2012-07-01
... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false How are personal files defined and managed? 1222.20 Section 1222.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal...
36 CFR 1222.16 - How are nonrecord materials managed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false How are nonrecord materials managed? 1222.16 Section 1222.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222...
36 CFR 1222.3 - What standards are used as guidance for this part?
Code of Federal Regulations, 2010 CFR
2010-07-01
... RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal... guidance provided in ISO 15489-1:2001, Information and documentation—Records management. Paragraphs 7.1 (Principles of records management programmes), 7.2 (Characteristics of a record), 8.3.5 (Conversion and...
Trees of Our National Forests.
ERIC Educational Resources Information Center
Forest Service (USDA), Washington, DC.
Presented is a description of the creation of the National Forests system, how trees grow, managing the National Forests, types of management systems, and managing for multiple use, including wildlife, water, recreation and other uses. Included are: (1) photographs; (2) line drawings of typical leaves, cones, flowers, and seeds; and (3)…
Creation of a Book Order Management System Using a Microcomputer and a DBMS.
ERIC Educational Resources Information Center
Neill, Charlotte; And Others
1985-01-01
Describes management decisions and the resultant technology-based system that allowed a medical library to meet increasing workloads without accompanying increases in available resources. Discussion covers system analysis; capabilities of the book-order management system, "BOOKDIRT"; software and training; hardware; data files; data entry;…
Situated Action and the Management of Impression.
ERIC Educational Resources Information Center
Ginsburg, G. P.
Studies of the creation and management of impressions have advanced rapidly in recent years. However, relatively little empirical information has been provided about the processes by which impressions are created and managed in routine interaction and about the range of matters about which impressions are created. The excessive use of internal…
From Discipline to Dynamic Pedagogy: A Re-Conceptualization of Classroom Management
ERIC Educational Resources Information Center
Davis, Jonathan Ryan
2017-01-01
The purpose of this article is to re-conceptualize the definition of classroom management, moving away from its traditional definition rooted in discipline and control toward a definition that focuses on the creation of a positive learning environment. Integrating innovative, culturally responsive classroom management theories, frameworks, and…
Overview of EVE - the event visualization environment of ROOT
NASA Astrophysics Data System (ADS)
Tadel, Matevž
2010-04-01
EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management, offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. On the other hand, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The database can be retrieved from a file, edited during framework operation and stored to file. The EVE prototype was developed within the ALICE collaboration and was included into ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of the AliEve visualization framework in ALICE, the Fireworks physics-oriented event display in CMS, and as the visualization engine of FairRoot at FAIR.
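To make the element-management pattern described above concrete, the following is a minimal PyROOT sketch, assuming a ROOT build with EVE and OpenGL support; the element name and point coordinates are illustrative, not taken from any experiment.

    # Minimal EVE usage via PyROOT; requires a ROOT build with EVE/OpenGL.
    import ROOT

    ROOT.TEveManager.Create()                 # initializes the global gEve state

    hits = ROOT.TEvePointSet("sample_hits")   # hypothetical element name
    for x, y, z in [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0), (-4.0, 8.0, 1.5)]:
        hits.SetNextPoint(x, y, z)            # one 3D hit per call
    hits.SetMarkerColor(ROOT.kRed)

    ROOT.gEve.AddElement(hits)                # attach to the scene hierarchy
    ROOT.gEve.Redraw3D(ROOT.kTRUE)            # request a full 3D redraw

Once added to the hierarchy, the element participates in the selection, highlighting and projection machinery the abstract describes.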
When global environmentalism meets local livelihoods: policy and management lessons
John Schelhas; Max J. Pfeffer
2009-01-01
Creation of national parks often imposes immediate livelihood costs on local people, and tensions between park managers and local people are common. Park managers have tried different approaches to managing relationships with local people, but nearly all include efforts to promote environmental values and behaviors. These efforts have had uneven results, and there is a...
41 CFR 102-193.10 - What are the goals of the Federal Records Management Program?
Code of Federal Regulations, 2010 CFR
2010-07-01
... maintenance of management controls that prevent the creation of unnecessary records and promote effective and... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What are the goals of the Federal Records Management Program? 102-193.10 Section 102-193.10 Public Contracts and Property...
Generalized Database Management System Support for Numeric Database Environments.
ERIC Educational Resources Information Center
Dominick, Wayne D.; Weathers, Peggy G.
1982-01-01
This overview of potential for utilizing database management systems (DBMS) within numeric database environments highlights: (1) major features, functions, and characteristics of DBMS; (2) applicability to numeric database environment needs and user needs; (3) current applications of DBMS technology; and (4) research-oriented and…
Geyer, John; Myers, Kathleen; Vander Stoep, Ann; McCarty, Carolyn; Palmer, Nancy; DeSalvo, Amy
2011-10-01
Clinical trials with multiple intervention locations and a single research coordinating center can be logistically difficult to implement. Increasingly, web-based systems are used to provide clinical trial support, with many commercial, open source, and proprietary systems in use. New web-based tools are available that can be customized without programming expertise to deliver web-based clinical trial management and data collection functions. The objective was to demonstrate the feasibility of utilizing low-cost configurable applications to create a customized web-based data collection and study management system for a five-intervention-site randomized clinical trial establishing the efficacy of providing evidence-based treatment via teleconferencing to children with attention-deficit hyperactivity disorder. The sites are small communities that would not usually be included in traditional randomized trials. A major goal was to develop a database that participants could access from computers in their home communities for direct data entry. Discussed is the selection process leading to the identification and utilization of a cost-effective and user-friendly set of tools capable of customization for data collection and study management tasks. An online assessment collection application, a template-based web portal creation application, and a web-accessible Access 2007 database were selected and customized to provide the following features: schedule appointments, administer and monitor online secure assessments, issue subject incentives, and securely transmit electronic documents between sites. Each tool was configured by users with limited programming expertise. As of June 2011, the system has successfully been used with 125 participants in 5 communities (who have completed 536 sets of assessment questionnaires), 8 community therapists, and 11 research staff at the research coordinating center. Total automation of processes is not possible with the current set of tools, as each is only loosely affiliated with the others, creating some inefficiency. This system is best suited to investigations with a single data source, e.g., psychosocial questionnaires. New web-based applications can be used by investigators with limited programming experience to implement user-friendly, efficient, and cost-effective tools for multi-site clinical trials with small distant communities. Such systems allow the inclusion in research of populations that are not usually involved in clinical trials.
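As a rough illustration of the study-management data model such a system needs, here is a minimal sketch in SQLite rather than the Access 2007 database the study actually used; every table and column name below is hypothetical.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE participant (id INTEGER PRIMARY KEY, community TEXT);
    CREATE TABLE appointment (
        id INTEGER PRIMARY KEY,
        participant_id INTEGER REFERENCES participant(id),
        scheduled_on TEXT);
    CREATE TABLE assessment (
        id INTEGER PRIMARY KEY,
        participant_id INTEGER REFERENCES participant(id),
        completed_on TEXT);
    """)
    con.execute("INSERT INTO participant VALUES (1, 'Community A')")
    con.execute("INSERT INTO assessment VALUES (1, 1, '2011-06-01')")

    # Monitoring query: completed assessments per community.
    for row in con.execute("""
        SELECT p.community, COUNT(a.id)
        FROM participant p LEFT JOIN assessment a ON a.participant_id = p.id
        GROUP BY p.community"""):
        print(row)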
Herrera-Hernandez, Maria C; Lai-Yuen, Susana K; Piegl, Les A; Zhang, Xiao
2016-10-26
This article presents the design of a web-based knowledge management system as a training and research tool for exploring key relationships between Western and Traditional Chinese Medicine, in order to facilitate relational medical diagnosis integrating these mainstream healing modalities. The main goal of this system is to facilitate decision-making processes while developing skills and creating new medical knowledge. Traditional Chinese Medicine can be considered an ancient relational knowledge-based approach, focusing on balancing interrelated human functions to reach a healthy state. Western Medicine focuses on specialties and body systems and has achieved advanced methods to evaluate the impact of a health disorder on the body functions. Identifying key relationships between Traditional Chinese and Western Medicine opens new approaches for health care practices and can increase the understanding of human medical conditions. Our knowledge management system was designed from initial datasets of symptoms, known diagnoses and treatments, collected from both medicines. The datasets were subjected to process-oriented analysis, hierarchical knowledge representation and relational database interconnection. Web technology was implemented to develop a user-friendly interface for easy navigation, training and research. Our system was prototyped with a case study on chronic prostatitis. This trial demonstrated the system's capability for users to learn the correlation approach, connecting knowledge in Western and Traditional Chinese Medicine by querying the database, mapping validated medical information, accessing complementary information from official sites, and creating new knowledge as part of the learning process. By addressing the challenging tasks of data acquisition and modeling, organization, storage and transfer, the proposed web-based knowledge management system is presented as a tool for users in medical training and research to explore, learn and update relational information for the practice of integrated medical diagnosis. This proposal has the potential to enable further creation of medical knowledge from both Traditional Chinese and Western Medicine for improved care provision. The presented system improves information visualization, the learning process, and knowledge sharing, supporting the training and development of new skills for diagnosis and treatment and a better understanding of medical diseases. © IMechE 2016.
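The relational interconnection the system relies on can be pictured with a toy sketch that ranks Traditional Chinese Medicine patterns by symptom overlap with a Western diagnosis; the entries below are illustrative placeholders, not validated medical mappings.

    # Toy cross-referencing of two knowledge bases by shared symptoms.
    western = {"chronic prostatitis": {"pelvic pain", "dysuria"}}
    tcm_patterns = {
        "pattern A": {"pelvic pain", "dysuria", "fatigue"},
        "pattern B": {"cough", "fever"},
    }

    def related_patterns(western_dx):
        """Rank TCM patterns by symptom overlap with a Western diagnosis."""
        target = western.get(western_dx, set())
        scored = [(len(target & s), name) for name, s in tcm_patterns.items()]
        return [name for overlap, name in sorted(scored, reverse=True) if overlap]

    print(related_patterns("chronic prostatitis"))   # -> ['pattern A']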
Technical Support for Contaminated Sites
In 1987, the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development (ORD), Office of Land and Emergency Management, and EPA Regional waste management offices established the Technical Support Project. The creation of the Technical Support Project enabled...
Experiences with a Science Hotline.
ERIC Educational Resources Information Center
Evans, Laura J.; Frazier, Donald T.
1993-01-01
Describes the orientation and management of a science hotline managed by the University of Kentucky for the benefit of teachers. Results include a more positive public image of science and the creation of links between academic scientists and precollege teachers. (DDR)
Experiences in the creation of an electromyography database to help hand amputated persons.
Atzori, Manfredo; Gijsberts, Arjan; Heynen, Simone; Hager, Anne-Gabrielle Mittaz; Castellini, Claudio; Caputo, Barbara; Müller, Henning
2012-01-01
Currently, trans-radial amputees can only perform a few simple movements with prosthetic hands. This is mainly due to low control capabilities and the long training time required to learn to control them with surface electromyography (sEMG). This is in contrast with recent advances in mechatronics, thanks to which mechanical hands have multiple degrees of freedom and in some cases force control. To help improve the situation, we are building the NinaPro (Non-Invasive Adaptive Prosthetics) database, a database of about 50 hand and wrist movements recorded from several healthy persons and, currently, very few amputated persons, which will help the community test and improve sEMG-based natural control systems for prosthetic hands. In this paper we describe the experimental experiences and practical aspects related to the data acquisition.
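As an illustration of the kind of preprocessing sEMG control research applies to recordings like these, here is a minimal windowed-RMS feature sketch; the window and step sizes are illustrative, not NinaPro's actual acquisition settings.

    import numpy as np

    def rms_features(emg, window=200, step=100):
        """emg: (n_samples, n_channels); returns (n_windows, n_channels) RMS."""
        feats = []
        for start in range(0, emg.shape[0] - window + 1, step):
            seg = emg[start:start + window]
            feats.append(np.sqrt(np.mean(seg ** 2, axis=0)))
        return np.array(feats)

    rng = np.random.default_rng(0)
    fake_emg = rng.standard_normal((1000, 10))   # synthetic 10-electrode signal
    print(rms_features(fake_emg).shape)          # -> (9, 10)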
Unconstrained and contactless hand geometry biometrics.
de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; Del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier
2011-01-01
This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less, publicly available hand databases and a comparison of its performance to two competitive pattern recognition techniques existing in the literature: namely support vector machines (SVM) and k-nearest neighbour (k-NN). Results highlight that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples required for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution for devices with limited hardware requirements like mobile devices.
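The SVM versus k-NN baseline comparison reported above can be reproduced in miniature with scikit-learn on synthetic "hand geometry" feature vectors; the subject counts and feature dimensions below are invented for illustration.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_subjects, per_subject, n_features = 20, 10, 15
    X = np.vstack([rng.normal(loc=i, scale=0.5, size=(per_subject, n_features))
                   for i in range(n_subjects)])
    y = np.repeat(np.arange(n_subjects), per_subject)

    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0, stratify=y)
    for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                      ("k-NN", KNeighborsClassifier(n_neighbors=3))]:
        clf.fit(X_tr, y_tr)
        print(name, "identification accuracy:", clf.score(X_te, y_te))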
Automated rule-base creation via CLIPS-Induce
NASA Technical Reports Server (NTRS)
Murphy, Patrick M.
1994-01-01
Many CLIPS rule-bases contain one or more rule groups that perform classification. In this paper we describe CLIPS-Induce, an automated system for the creation of a CLIPS classification rule-base from a set of test cases. CLIPS-Induce consists of two components, a decision tree induction component and a CLIPS production extraction component. ID3, a popular decision tree induction algorithm, is used to induce a decision tree from the test cases. CLIPS production extraction is accomplished through a top-down traversal of the decision tree. Nodes of the tree are used to construct query rules, and branches of the tree are used to construct classification rules. The learned CLIPS productions may easily be incorporated into a large CLIPS system that performs tasks such as accessing a database or displaying information.
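The top-down traversal described above is easy to sketch. In the following, scikit-learn's CART induction stands in for ID3, and printed IF/THEN rules stand in for CLIPS productions; both substitutions are for illustration only.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    data = load_iris()
    clf = DecisionTreeClassifier(max_depth=2, random_state=0)
    t = clf.fit(data.data, data.target).tree_

    def emit_rules(node=0, conditions=()):
        """Branches become conditions, leaves become classification rules."""
        if t.children_left[node] == -1:              # leaf node
            label = data.target_names[t.value[node].argmax()]
            print("IF " + " AND ".join(conditions or ("TRUE",)) + " THEN " + label)
            return
        feat = data.feature_names[t.feature[node]]
        thr = t.threshold[node]
        emit_rules(t.children_left[node], conditions + ("%s <= %.2f" % (feat, thr),))
        emit_rules(t.children_right[node], conditions + ("%s > %.2f" % (feat, thr),))

    emit_rules()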
US Army Research Laboratory Joint Interagency Field Experimentation 15-2 Final Report
2015-12-01
February 2015, at Alameda Island, California. Advanced text analytics capabilities were demonstrated in a logically coherent workflow pipeline that... text processing capabilities allowed the targeted use of a persistent imagery sensor for rapid detection of mission-critical events. The creation of...a very large text database from open source data provides a relevant and unclassified foundation for continued development of text-processing
BioModels Database: a repository of mathematical models of biological processes.
Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas
2013-01-01
BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed, quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML, and available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are available in BioModels Database at regular releases, about every 4 months.
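Models retrieved from the database are plain SBML files, so downstream tooling can parse them directly; here is a minimal sketch with the python-libsbml package, assuming a locally downloaded file named model.xml.

    import libsbml

    doc = libsbml.readSBML("model.xml")       # parse the SBML document
    if doc.getNumErrors() > 0:
        doc.printErrors()
    model = doc.getModel()
    print("species:", model.getNumSpecies(), "reactions:", model.getNumReactions())
    for species in model.getListOfSpecies():
        print(species.getId(), species.getName())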
The Partners in Recovery program: mental health commissioning using value co-creation.
Cheverton, Jeff; Janamian, Tina
2016-04-18
The Australian Government's Partners in Recovery (PIR) program established a new form of mental health intervention which required multiple sectors, services and consumers to work in a more collaborative way. Brisbane North Primary Health Network applied a value co-creation approach with partners and end users, engaging more than 100 organisations in the development of a funding submission to PIR. Engagement platforms were established and continue to provide opportunities for new co-creation experiences. Initially, seven provider agencies - later expanded to eight to include an Aboriginal and Torres Strait Islander provider organisation - worked collaboratively as a Consortium Management Committee. The co-creation development process has been part of achieving the co-created outcomes, which include new initiatives, changes to existing interventions and referral practices, and an increased understanding and awareness of end users' needs.
Building Databases for Education. ERIC Digest.
ERIC Educational Resources Information Center
Klausmeier, Jane A.
This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…
ERIC Educational Resources Information Center
London, Manuel, Ed.
The 13 chapters in this volume detail how industrial and organizational psychologists, human resource professionals, and consultants have created innovative human resource development and training programs. "Employee Development and Job Creation" (Jennifer Jarratt, Joseph F. Coates) looks at several trends that have important consequences for…
The value of trauma registries.
Moore, Lynne; Clark, David E
2008-06-01
Trauma registries are databases that document acute care delivered to patients hospitalised with injuries. They are designed to provide information that can be used to improve the efficiency and quality of trauma care. Indeed, the combination of trauma registry data at regional or national levels can produce very large databases that allow unprecedented opportunities for the evaluation of patient outcomes and inter-hospital comparisons. However, the creation and upkeep of trauma registries requires a substantial investment of money, time and effort, data quality is an important challenge and aggregated trauma data sets rarely represent a population-based sample of trauma. In addition, trauma hospitalisations are already routinely documented in administrative hospital discharge databases. The present review aims to provide evidence that trauma registry data can be used to improve the care dispensed to victims of injury in ways that could not be achieved with information from administrative databases alone. In addition, we will define the structure and purpose of contemporary trauma registries, acknowledge their limitations, and discuss possible ways to make them more useful.
The Molecular Signatures Database (MSigDB) hallmark gene set collection.
Liberzon, Arthur; Birger, Chet; Thorvaldsdóttir, Helga; Ghandi, Mahmoud; Mesirov, Jill P; Tamayo, Pablo
2015-12-23
The Molecular Signatures Database (MSigDB) is one of the most widely used and comprehensive databases of gene sets for performing gene set enrichment analysis. Since its creation, MSigDB has grown beyond its roots in metabolic disease and cancer to include >10,000 gene sets. These better represent a wider range of biological processes and diseases, but the utility of the database is reduced by increased redundancy across, and heterogeneity within, gene sets. To address this challenge, here we use a combination of automated approaches and expert curation to develop a collection of "hallmark" gene sets as part of MSigDB. Each hallmark in this collection consists of a "refined" gene set, derived from multiple "founder" sets, that conveys a specific biological state or process and displays coherent expression. The hallmarks effectively summarize most of the relevant information of the original founder sets and, by reducing both variation and redundancy, provide more refined and concise inputs for gene set enrichment analysis.
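The redundancy the hallmark collection reduces can be quantified with a simple set-overlap measure; the gene sets below are placeholders, not real MSigDB content.

    # Jaccard overlap between founder gene sets (illustrative data).
    def jaccard(a, b):
        return len(a & b) / len(a | b)

    founder_sets = {
        "SET_A": {"TP53", "MYC", "EGFR", "KRAS"},
        "SET_B": {"TP53", "MYC", "EGFR", "BRAF"},
        "SET_C": {"INS", "GCG", "PDX1"},
    }
    names = sorted(founder_sets)
    for i, n1 in enumerate(names):
        for n2 in names[i + 1:]:
            print(n1, n2, round(jaccard(founder_sets[n1], founder_sets[n2]), 2))

High pairwise overlap (SET_A versus SET_B here) is the kind of signal that suggests founder sets be merged into a single refined hallmark.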
Image query and indexing for digital x rays
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Thoma, George R.
1998-12-01
The web-based medical information retrieval system (WebMIRS) allows Internet access to databases containing 17,000 digitized x-ray spine images and associated text data from National Health and Nutrition Examination Surveys (NHANES). WebMIRS allows SQL query of the text, and viewing of the returned text records and images using a standard browser. We are now working (1) to determine the utility of data directly derived from the images in our databases, and (2) to investigate the feasibility of computer-assisted or automated indexing of the images to support retrieval of images of interest to biomedical researchers in the field of osteoarthritis. To build an initial database based on image data, we are manually segmenting a subset of the vertebrae, using techniques from vertebral morphometry. From this, we will derive and add to the database vertebral features. This image-derived data will enhance the user's data access capability by enabling the creation of combined SQL/image-content queries.
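A combined SQL/image-content query of the kind described can be sketched with SQLite; the schema, and the anterior_height morphometry column in particular, are hypothetical illustrations rather than the actual WebMIRS layout.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE survey_record (id INTEGER PRIMARY KEY, age INTEGER, sex TEXT);
    CREATE TABLE vertebra (
        record_id INTEGER REFERENCES survey_record(id),
        level TEXT,               -- e.g. 'L3'
        anterior_height REAL);    -- feature segmented from the x-ray image
    """)
    con.execute("INSERT INTO survey_record VALUES (1, 62, 'F')")
    con.execute("INSERT INTO vertebra VALUES (1, 'L3', 21.4)")

    # Combined query: text attributes (age) plus image-derived features (height).
    rows = con.execute("""
        SELECT r.id, r.age, v.level, v.anterior_height
        FROM survey_record r JOIN vertebra v ON v.record_id = r.id
        WHERE r.age > 60 AND v.anterior_height < 25.0""").fetchall()
    print(rows)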
Practical Implementation of Semi-Automated As-Built Bim Creation for Complex Indoor Environments
NASA Astrophysics Data System (ADS)
Yoon, S.; Jung, J.; Heo, J.
2015-05-01
In recent years, for efficient management and operation of existing buildings, the importance of as-built BIM has been emphasized in the AEC/FM domain. However, fully automated as-built BIM creation remains a tough issue since newly constructed buildings are becoming more complex. To address this problem, our research group has developed a semi-automated approach, focusing on productive 3D as-built BIM creation for complex indoor environments. In order to test its feasibility for a variety of complex indoor environments, we applied the developed approach to model the 'Charlotte stairs' in Lotte World Mall, Korea. The approach includes 4 main phases: data acquisition, data pre-processing, geometric drawing, and as-built BIM creation. In the data acquisition phase, due to the site's complex structure, we moved the scanner location several times to obtain the entire point cloud of the test site. Next, a data pre-processing phase entailing point-cloud registration, noise removal, and coordinate transformation was performed. The 3D geometric drawing was created using RANSAC-based plane detection and boundary tracing methods. Finally, in order to create a semantically rich BIM, the geometric drawing was imported into commercial BIM software. The final as-built BIM confirmed the feasibility of the proposed approach in complex indoor environments.
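The RANSAC-based plane detection step can be sketched in a few lines of NumPy; the iteration count and inlier threshold below are illustrative, not the paper's tuned values.

    import numpy as np

    def ransac_plane(points, n_iters=200, threshold=0.02, seed=0):
        """Return (normal, d) of the plane n.x + d = 0 with the most inliers."""
        rng = np.random.default_rng(seed)
        best, best_count = None, 0
        for _ in range(n_iters):
            p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(p2 - p1, p3 - p1)
            norm = np.linalg.norm(normal)
            if norm < 1e-12:
                continue                    # degenerate (collinear) sample
            normal /= norm
            d = -normal.dot(p1)
            count = np.sum(np.abs(points @ normal + d) < threshold)
            if count > best_count:
                best, best_count = (normal, d), count
        return best

    rng = np.random.default_rng(1)
    floor = np.column_stack([rng.uniform(0, 5, 500), rng.uniform(0, 5, 500),
                             rng.normal(0.0, 0.005, 500)])   # noisy z ~ 0 plane
    print(ransac_plane(floor))

In a typical multi-plane pipeline, the inliers of each detected plane are removed from the cloud and the search is repeated before boundary tracing turns the supporting points into drawing geometry.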
ERIC Educational Resources Information Center
American Society for Information Science, Washington, DC.
This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…
DOT National Transportation Integrated Search
1994-01-01
The Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991 required that states develop systems for managing highway pavement, bridges, safety, congestion, public transportation, and intermodal transportation. This document is Virginia's wo...
Forsander, Gun; Pellinat, Martin; Volk, Michael; Muller, Markus; Pinelli, Leonardo; Magnen, Agnes; Danne, Thomas; Aschemeier, Bärbel; de Beaufort, Carine
2012-09-01
One of the most important tasks of the SWEET study is benchmarking the data collected. Information on the occurrence of diabetes, its treatment, and outcomes in children from the different member states of the European Union (EU) is crucial. How the collection of data is realized is essential, concerning both the technical issues and the results. The creation of SWEET Centers of Reference (CoR) all over Europe will be facilitated by access to safe data collection, where legal aspects and privacy are ascertained. The aim is to describe the rationale for, and the technical procedure of, the data collection implementation in the SWEET study. Selected data on all patients treated at SWEET CoR are collected. The SWEET project data collection and management system consists of modular components for data collection, online data interchange, and a database for statistical analysis. The SWEET study and the organization of CoR aim to offer an updated, secure, and continuous evaluation of diabetes treatment regimens for all children with diabetes in Europe. To support this goal, an appropriate and secure data management system as described in this paper has been created. © 2012 John Wiley & Sons A/S.
[The role of workplace health promotion in the concept of corporate social responsibility].
Wojtaszczyk, Patrycja
2008-01-01
Workplace health promotion (WHP) is an idea that was conceived over 25 years ago. At its very core is the wellbeing of employees. The development and dissemination of this notion, as well as the implementation of its basic principles, have always been challenged by various theories and practices derived from the field of human resources management. Corporate social responsibility (CSR) is one such new concept promulgated within the European Union. Based on a literature review, especially European Commission documents, articles retrieved from the EBSCO database, guidelines and guidebooks published by the CSR Forum and other NGOs active in the field, and the publications of the Nofer Institute of Occupational Medicine, the author attempts to compare these two ideas and discuss the coherence between their assumptions. The primary hypothesis was that WHP is an element of CSR. The comparison between the CSR and WHP concepts confirms this hypothesis, which means that activities aimed at taking care of the health and well-being of employees contribute to the creation of a socially responsible company. It indicates that the implementation of both ideas requires a multidisciplinary and holistic approach. In addition, the roles of social dialog and workers' participation in company management are strongly emphasized.
Affective Learning and Personal Information Management: Essential Components of Information Literacy
ERIC Educational Resources Information Center
Cahoy, Ellysa Stern
2013-01-01
"Affective competence," managing the feelings and emotions that students encounter throughout the content creation/research process, is essential to academic success. Just as it is crucial for students to acquire core literacies, it is essential that they learn how to manage the anxieties and emotions that will emerge throughout all…
Strategies for job creation through national forest management
Susan Charnley
2014-01-01
This chapter explores the ways in which national forest managers may contribute to community well-being by designing projects that accomplish forest management in ways that not only meet their ecological goals, but also create economic opportunities for nearby communities. The chapter summarizes a number of strategies for enhancing the economic benefits to communities...
DOT National Transportation Integrated Search
2009-10-01
The goal of this research is to mitigate the risk of highway accidents (crashes) and fatalities in work zones. The approach of this research has been to address the mitigation of work zone crashes through the creation of a formal risk management mode...
Project Management in Real Time: A Service-Learning Project
ERIC Educational Resources Information Center
Larson, Erik; Drexler, John A., Jr.
2010-01-01
This article describes a service-learning assignment for a project management course. It is designed to facilitate hands-on student learning of both the technical and the interpersonal aspects of project management, and it involves student engagement with real customers and real stakeholders in the creation of real events with real outcomes. As…
36 CFR § 1222.16 - How are nonrecord materials managed?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true How are nonrecord materials managed? § 1222.16 Section § 1222.16 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222...
36 CFR § 1222.20 - How are personal files defined and managed?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true How are personal files defined and managed? § 1222.20 Section § 1222.20 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal...
Preparing Students for Front-Line Management: Non-Routine Day-to-Day Decisions
ERIC Educational Resources Information Center
Clydesdale, Greg; Tan, John
2009-01-01
Purpose: This paper attempts to reduce the gap between management education and practice. It emphasises day-to-day decisions that middle and lower level managers make. The purpose is to provide an education framework embodying a flexible approach to interpretation and solution creation, suitable for situations of ambiguity and uncertainty.…
41 CFR 102-193.25 - What type of records management business process improvements should my agency strive to achieve?
Code of Federal Regulations, 2010 CFR
2010-07-01
...) FEDERAL MANAGEMENT REGULATION ADMINISTRATIVE PROGRAMS 193-CREATION, MAINTENANCE, AND USE OF RECORDS § 102... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What type of records management business process improvements should my agency strive to achieve? 102-193.25 Section 102-193.25...
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
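A minimal sketch of the client side of that distributed split, assuming the cx_Oracle driver and placeholder connection details (the classic scott/tiger demo schema is used purely for illustration):

    import cx_Oracle

    # The server process runs the DBMS; this client only sends SQL and shows rows.
    conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb")  # placeholders
    cur = conn.cursor()
    cur.execute("SELECT ename, job FROM emp WHERE deptno = :dept", dept=10)
    for ename, job in cur:
        print(ename, job)   # interpretation and display happen on the workstation
    conn.close()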
Creating of Central Geospatial Database of the Slovak Republic and Procedures of its Revision
NASA Astrophysics Data System (ADS)
Miškolci, M.; Šafář, V.; Šrámková, R.
2016-06-01
The article describes the creation of an initial three-dimensional geodatabase, from planning and design through the determination of technological and manufacturing processes to the practical use of the Central Geospatial Database (CGD; the official name in Slovak is Centrálna Priestorová Databáza, CPD), and briefly describes the procedures for its revision. CGD ensures proper collection, processing, storing, transferring and displaying of digital geospatial information. CGD is used by the Ministry of Defense (MoD) for defense and crisis management tasks and by the Integrated Rescue System. For military personnel, CGD runs on the MoD intranet; for other users outside the MoD it is transformed into ZbGIS (the Primary Geodatabase of the Slovak Republic) and runs on a public web site. CGD is a global set of geospatial information: a vector computer model which completely covers the entire territory of Slovakia. The seamless CGD is created by digitizing the real world using photogrammetric stereoscopic methods and measurements of object properties. The basic vector model of CGD (from photogrammetric processing) is then taken to the field for inspection and additional gathering of object properties across the whole mapping area. Finally, real-world objects are spatially modeled as entities of a three-dimensional database. CGD gives us the opportunity to know the territory comprehensively in all three spatial dimensions. Every entity in CGD records its time of collection, which allows users to assess the timeliness of the information. CGD can be utilized for geographical analysis, geo-referencing, and cartographic purposes, as well as various special-purpose mapping, and it has the ambition to cover the needs of not only the MoD but also to become a reference model for the national geographical infrastructure.
NASA Astrophysics Data System (ADS)
LeBauer, D.
2015-12-01
Humans need a secure and sustainable food supply, and science can help. We have an opportunity to transform agriculture by combining knowledge of organisms and ecosystems to engineer ecosystems that sustainably produce food, fuel, and other services. The challenge is that the information we have is difficult to combine: measurements, theories, and laws are scattered across publications, notebooks, software, and human brains. We homogenize, encode, and automate the synthesis of data and mechanistic understanding in a way that links understanding at different scales and across domains. This allows extrapolation, prediction, and assessment. Reusable components allow automated construction of new knowledge that can be used to assess, predict, and optimize agro-ecosystems. Developing reusable software and open-access databases is hard, and examples will illustrate how we use the Predictive Ecosystem Analyzer (PEcAn, pecanproject.org), the Biofuel Ecophysiological Traits and Yields database (BETYdb, betydb.org), and ecophysiological crop models to predict crop yield, decide which crops to plant, and determine which traits can be selected for the next generation of data-driven crop improvement. A next step is to automate the use of sensors mounted on robots, drones, and tractors to assess plants in the field. The TERRA Reference Phenotyping Platform (TERRA-Ref, terraref.github.io) will provide an open-access database and computing platform on which researchers can use and develop tools that use sensor data to assess and manage agricultural and other terrestrial ecosystems. TERRA-Ref will adopt existing standards and develop modular software components and common interfaces, in collaboration with researchers from iPlant, NEON, AgMIP, USDA, rOpenSci, ARPA-E, and many scientists and industry partners. Our goal is to advance science by enabling efficient use, reuse, exchange, and creation of knowledge.
Report of the Second Asian Prostate Cancer (A-CaP) Study Meeting.
Kim, Choung-Soo; Lee, Ji Youl; Chung, Byung Ha; Kim, Wun-Jae; Fai, Ng Chi; Hakim, Lukman; Umbas, Rainy; Ong, Teng Aik; Lim, Jasmine; Letran, Jason L; Chiong, Edmund; Wu, Tong-Lin; Lojanapiwat, Bannakij; Türkeri, Levent; Murphy, Declan G; Gardiner, Robert A; Moretti, Kim; Cooperberg, Matthew; Carroll, Peter; Mun, Seong Ki; Hinotsu, Shiro; Hirao, Yoshihiko; Ozono, Seiichiro; Horie, Shigeo; Onozawa, Mizuki; Kitagawa, Yasuhide; Kitamura, Tadaichi; Namiki, Mikio; Akaza, Hideyuki
2017-09-01
The Asian Prostate Cancer (A-CaP) Study is an Asia-wide initiative that has been developed over the course of 2 years. The study was launched in December 2015 in Tokyo, Japan, and the participating countries and regions engaged in preparations for the study during the course of 2016, including patient registration and creation of databases for the purpose of the study. The Second A-CaP Meeting was held on September 8, 2016 in Seoul, Korea, with the participation of members and collaborators from 12 countries and regions. Under the study, each participating country or region will begin registration of newly diagnosed prostate cancer patients and conduct prognostic investigations. From the data gathered, common research themes will be identified, such as comparisons among Asian countries of background factors in newly diagnosed prostate cancer patients. This is the first Asia-wide study of prostate cancer and has developed from single country research efforts in this field, including in Japan and Korea. At the Second Meeting, participating countries and regions discussed the status of preparations and discussed various issues that are being faced. These issues include technical challenges in creating databases, promoting participation in each country or region, clarifying issues relating to data input, addressing institutional issues such as institutional review board requirements, and the need for dedicated data managers. The meeting was positioned as an opportunity to share information and address outstanding issues prior to the initiation of the study. In addition to A-CaP-specific discussions, a series of special lectures was also delivered as a means of providing international perspectives on the latest developments in prostate cancer and the use of databases and registration studies around the world.
Traumatic colorectal injuries in children: The National Trauma Database experience.
Choi, Pamela M; Wallendorf, Michael; Keller, Martin S; Vogel, Adam M
2017-10-01
We sought to utilize a nationwide database to characterize colorectal injuries in pediatric trauma. The National Trauma Database (NTDB) was queried for all patients (age ≤14 years) with colorectal injuries from 2013 to 2014. We stratified patients by demographics and measured outcomes. We analyzed groups based on mechanism, colon vs rectal injury, as well as colostomy creation. Statistical analysis was conducted using t-test and ANOVA for continuous variables and chi-square for categorical variables. There were 534 pediatric patients who sustained colorectal trauma. The mean ISS was 15.6±0.6 with an average LOS of 8.5±0.5 days. 435 (81.5%) were injured by a blunt mechanism while 99 (18.5%) were injured by a penetrating mechanism. There were no differences in age, ISS, complications, mortality, LOS, ICU LOS, and ventilator days between the blunt and penetrating groups. Significantly more patients in the penetrating group had associated small intestine and hepatic injuries and underwent colostomies. Patients with rectal injuries (25.7%) were more likely to undergo colonic diversion (p<0.0001), but also had decreased mortality (p=0.001) and decreased LOS (p=0.01). Patients with colostomies (9.9%) had no differences in age, ISS, GCS, transfusion of blood products, and complications compared to patients who did not receive a colostomy. Despite this, colostomy patients had significantly increased hospital LOS (12.1±1.8 vs 8.2±0.5 days, p=0.02) and ICU LOS (9.0±1.7 vs 5.4±0.3 days, p=0.02). Although infrequent, colorectal injuries in children are associated with considerable morbidity regardless of mechanism and may be managed without fecal diversion. III. Epidemiology. Copyright © 2017 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bower, J.C.; Burford, M.J.; Downing, T.R.
The Integrated Baseline System (IBS) is an emergency management planning and analysis tool that is being developed under the direction of the US Army Nuclear and Chemical Agency (USANCA). The IBS Data Management Guide provides the background, as well as the operations and procedures, needed to generate and maintain a site-specific map database. Data and system managers use this guide to manage the data files and database that support the administrative, user-environment, database management, and operational capabilities of the IBS. This document provides a description of the data files and structures necessary for running the IBS software and using the site map database.
Efficiency improvements of offline metrology job creation
NASA Astrophysics Data System (ADS)
Zuniga, Victor J.; Carlson, Alan; Podlesny, John C.; Knutrud, Paul C.
1999-06-01
Progress of the first lot of a new design through the production line is watched very closely. All performance metrics, cycle time, in-line measurement results and final electrical performance are critical. Rapid movement of this lot through the line has serious time-to-market implications. Having this material waiting at a metrology operation for an engineer to create a measurement job plan wastes valuable turnaround time. Further, efficient use of a metrology system is compromised by the time required to create and maintain these measurement job plans. Thus, having a method to develop metrology job plans prior to the actual running of the material through the manufacturing area can significantly improve both cycle time and overall equipment efficiency. Motorola and Schlumberger have worked together to develop and test such a system. The Remote Job Generator (RJG) creates job plans for new devices in a manufacturing process from an NT host or workstation, offline. This increases the system time available for making production measurements, decreases turnaround time on job plan creation and editing, and improves consistency across job plans. Most importantly, this allows job plans for new devices to be available before the first wafers of the device arrive at the tool for measurement. The software also includes a database manager which allows updates of existing job plans to incorporate measurement changes required by process changes or measurement optimization. This paper will review the results of productivity enhancements through increased metrology utilization and decreased cycle time associated with the use of RJG. Finally, improvements in process control through better control of job plans across different devices and layers will be discussed.
Risk factors for colostomy in military colorectal trauma: a review of 867 patients.
Watson, J Devin B; Aden, James K; Engel, Julie E; Rasmussen, Todd E; Glasgow, Sean C
2014-06-01
Limited data exist examining the use of fecal diversion in combatants from modern armed conflicts. Characterization of factors leading to colostomy creation is an initial step toward optimizing and individualizing combat casualty care. A retrospective review of the US Department of Defense Trauma Registry database was performed for all US and coalition troops with colorectal injuries sustained during combat operations in Iraq and Afghanistan over 8 years. Colostomy rate, anatomic injury location, mechanism of injury, demographic data, and initial physiologic parameters were examined. Univariate and multivariate analyses were conducted. We identified 867 coalition military personnel with colorectal injuries. The overall colostomy rate was 37%. Rectal injuries had the highest diversion rate (56%), followed by left-sided (41%) and right-sided (20%) locations (P < .0001). Those with gunshot wounds (GSW) underwent diversion more often than those with blast injuries (43% vs 31%, respectively; P < .0008). Injury Severity Score ≥16 (41% vs 30%; P = .0018) and damage control surgery (DCS; 48.2% vs 31.4%; P < .0001) were associated with higher diversion rates. On multivariate analysis, significant predictors for colostomy creation were injury location: rectal versus left colon (odds ratio [OR], 2.2), rectal versus right colon (OR, 7.5), left versus right colon (OR, 3.4), GSW (OR, 2.0), ISS ≥16 (OR, 1.7), and DCS (OR, 1.6). In this exploratory study of 320 combat-related colostomies, distal colon and rectal injuries continue to be diverted at higher rates independent of other comorbidities. Additional outcomes-directed research is needed to determine whether such operative management is beneficial in all patients. Published by Mosby, Inc.
Heller, Richard E
2014-01-01
As a result of macroeconomic forces necessitating fundamental changes in health care delivery systems, value has become a popular term in the medical industry. Much has been written recently about the idea of value as it relates to health care services in general and the practice of radiology in particular. Of course, cost, value, and cost-effectiveness are not new topics of conversation in radiology. Not only is value one of the most frequently used and complex words in management, entire classes in business school are taught around the concept of understanding and maximizing value. But what is value, and when speaking of value creation strategies, what is it exactly that is meant? For the leader of a radiology department, either private or academic, value creation is a core function. This article provides a deeper examination of what value is, what drives value creation, and how practices and departments can evaluate their own value creation efficiencies. An equation, referred to as the Total Value Equation, is presented as a framework to assess value creation activities and strategies. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Bertrand, Jane T; Njeuhmeli, Emmanuel; Forsythe, Steven; Mattison, Sarah K; Mahler, Hally; Hankins, Catherine A
2011-01-01
This paper proposes an approach to estimating the costs of demand creation for voluntary medical male circumcision (VMMC) scale-up in 13 countries of eastern and southern Africa. It addresses two key questions: (1) what are the elements of a standardized package for demand creation? And (2) what challenges exist and must be taken into account in estimating the costs of demand creation? We conducted a key informant study on VMMC demand creation using purposive sampling to recruit seven people who provide technical assistance to government programs and manage budgets for VMMC demand creation. Key informants provided their views on the important elements of VMMC demand creation and the most effective funding allocations across different types of communication approaches (e.g., mass media, small media, outreach/mobilization). The key finding was the wide range of views, suggesting that a standard package of core demand creation elements would not be universally applicable. This underscored the importance of tailoring demand creation strategies and estimates to specific country contexts before estimating costs. The key informant interviews, supplemented by the researchers' field experience, identified these issues to be addressed in future costing exercises: variations in the cost of VMMC demand creation activities by country and program, decisions about the quality and comprehensiveness of programming, and lack of data on critical elements needed to "trigger the decision" among eligible men. Based on this study's findings, we propose a seven-step methodological approach to estimate the cost of VMMC scale-up in a priority country, based on our key assumptions. However, further work is needed to better understand core components of a demand creation package and how to cost them. Notwithstanding the methodological challenges, estimating the cost of demand creation remains an essential element in deriving estimates of the total costs for VMMC scale-up in eastern and southern Africa.
[Selected aspects of computer-assisted literature management].
Reiss, M; Reiss, G
1998-01-01
We report on our own experiences with a database manager. Bibliography database managers are used to manage information resources: specifically, to maintain a database of references and to create bibliographies and reference lists for written works. A database manager allows the user to enter summary information (a record) for articles, book sections, books, dissertations, conference proceedings, and so on. Other features may include the ability to import references from different sources, such as MEDLINE. The word-processing components can generate reference lists and bibliographies in a variety of different styles, directly from a word-processor manuscript. The function and use of the software package EndNote 2 for Windows are described. Its advantages in fulfilling different requirements for the citation style and the sort order of reference lists are emphasized.
Famulari, Stevie; Witz, Kyla
2015-01-01
Designers, students, teachers, gardeners, farmers, landscape architects, architects, engineers, homeowners, and others have uses for the practice of phytoremediation. This research looks at the creation of a phytoremediation database designed for ease of use by a non-scientific user, as well as by students in an educational setting ( http://www.steviefamulari.net/phytoremediation ). During 2012, Environmental Artist & Professor of Landscape Architecture Stevie Famulari, with assistance from Kyla Witz, a landscape architecture student, created an online searchable database designed for high public accessibility. The database is a record of research on plant species that aid in the uptake of contaminants, including metals, organic materials, biodiesels & oils, and radionuclides. The database consists of multiple interconnected indexes categorized by common and scientific plant name, contaminant name, and contaminant type. It includes photographs, hardiness zones, specific plant qualities, full citations to the original research, and other relevant information intended to help those designing with phytoremediation search for plants that may be used to address their site's needs. The objective of the terminology section is to remove uncertainty for inexperienced users and to clarify terms for a more user-friendly experience. Implications of the work, including education and ease of browsing, as well as use of the database in teaching, are discussed.
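The interconnected indexes the abstract describes amount to the same records being reachable by several keys; a toy sketch follows, with illustrative entries rather than actual database content.

    from collections import defaultdict

    records = [
        {"plant": "Helianthus annuus", "common": "sunflower",
         "contaminant": "lead", "type": "metal"},
        {"plant": "Populus deltoides", "common": "eastern cottonwood",
         "contaminant": "trichloroethylene", "type": "organic"},
    ]

    by_common = defaultdict(list)
    by_contaminant = defaultdict(list)
    by_type = defaultdict(list)
    for rec in records:
        by_common[rec["common"]].append(rec)
        by_contaminant[rec["contaminant"]].append(rec)
        by_type[rec["type"]].append(rec)

    # A designer's query: which plants here address metal contamination?
    print([r["plant"] for r in by_type["metal"]])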
A spatiotemporal data model for incorporating time in geographic information systems (GEN-STGIS)
NASA Astrophysics Data System (ADS)
Narciso, Flor Eugenia
Temporal Geographic Information Systems (TGIS) is a new technology, which is being developed to work with Geographic Information Systems (GIS) that deal with geographic phenomena that change over time. The capabilities of TGIS depend on the underlying data model. However, a literature review of current spatiotemporal GIS data models has shown that they are not adequate for managing time when representing temporal data. In addition, the majority of these data models have been designed to support the requirements of specific-purpose applications. In an effort to resolve this problem, the related literature has been explored. A comparative investigation of the current spatiotemporal GIS data models has been made to identify their characteristics, advantages and disadvantages, similarities and differences, and to determine why they do not work adequately. A new object-oriented General-purpose Spatiotemporal GIS (GEN-STGIS) data model is proposed here. This model provides better representation, storage and management of data related to geographic phenomena that change over time and overcomes some of the problems detected in the reviewed data models. The proposed data model has four key benefits. First, it provides the capabilities of a standard vector-based GIS embedded in the 2-D Euclidean space. Second, it includes the two temporal dimensions, valid time and transaction time, supported by temporal databases. Third, it inherits, from the object oriented approach, the flexibility, modularity and ability to handle the complexities introduced by spatial and temporal dimensions. Fourth, it improves the geographic query capabilities of current TGIS with the introduction of the concept of bounding box while providing temporal and spatiotemporal query capabilities. The data model is then evaluated in order to assess its strengths and weaknesses as a spatiotemporal GIS data model, and to determine how well the model satisfies the requirements imposed by TGIS applications. The practicality of the data model is demonstrated by the creation of a TGIS example and the partial implementation of the model using the POET Java software for developing the object-oriented database.
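The model's two temporal dimensions plus its bounding-box query can be pictured with a small sketch; the class and field names are illustrative, not the dissertation's own notation.

    from dataclasses import dataclass

    @dataclass
    class STFeature:
        name: str
        bbox: tuple           # (xmin, ymin, xmax, ymax)
        valid: tuple          # (start, end): when true in the real world
        transaction: tuple    # (start, end): when stored in the database

    def overlaps(a, b):
        return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

    features = [STFeature("parcel-17", (0, 0, 10, 10), (1990, 2000), (1991, 2001))]

    def query(window, year):
        """Features valid in `year` whose extent intersects `window`."""
        return [f.name for f in features
                if overlaps(f.bbox, window) and f.valid[0] <= year < f.valid[1]]

    print(query((5, 5, 20, 20), 1995))   # -> ['parcel-17']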
Fendri, Jihene; Palcau, Laura; Cameliere, Lucie; Coffin, Olivier; Felisaz, Aurelien; Gouicem, Djelloul; Dufranc, Julie; Laneelle, Damien; Berger, Ludovic
2017-02-01
The donor artery after a long-standing arteriovenous fistula (AVF) for hemodialysis only exceptionally evolves toward true aneurysmal degeneration (AD). The purpose of this article was to describe true brachial artery AD in end-stage renal disease patients after AVF creation, as well as its influencing factors and treatment strategies. We present a retrospective, observational, single-center study conducted in Caen University Hospital's Vascular Surgery Department from May 1996 to November 2015. The inclusion criterion was true AD of the brachial artery after a vascular access for hemodialysis. A literature search, using the same criterion, was performed on articles published between 1994 and 2015. The databases used included MEDLINE (via PubMed), EMBASE (via OVID), the Cochrane Library Database, and ResearchGate. Our series includes 5 patients. Twenty-one articles were found in the literature: 17 case reports, 3 series, and 1 review. The same triggering factors for AD (high flow and immunosuppressive treatment) were found. The mean ages at the time of AVF creation, first renal transplantation, and AD diagnosis were, respectively, 26 (range 15-49), 29.2, and 48.6 years (range 37-76) in our series, versus 34 (range 27-39), 40.4 (range 28-55), and 55.5 years (range 35-75) in the cases found in the literature. The interval between AVF creation and aneurysmal diagnosis was about 20.6 years (range 18-25) in our study versus 20.5 years (range 9-29) in the case reports. Our surgical approach corresponds principally to that described in the literature. Nevertheless, we describe for the first time one case of arterial transposition using the superficial femoral artery to exclude the brachial aneurysm. Arterial aneurysm is a rare but significant complication arising long after the creation of hemodialysis access. High flow and immunosuppression may accelerate this process. The patients' young age may act as a beneficial factor and delay AD. Arterial transposition could be an option in the absence of any venous conduit, if the anatomy does not permit the use of prosthetic grafts. Copyright © 2016 Elsevier Inc. All rights reserved.
Keeping Track of Our Treasures: Managing Historical Data with Relational Database Software.
ERIC Educational Resources Information Center
Gutmann, Myron P.; And Others
1989-01-01
Describes the way a relational database management system manages a large historical data collection project. Shows that such databases are practical to construct. States that the programming tasks involved are not for beginners, but the rewards of having data organized are worthwhile. (GG)
Content Independence in Multimedia Databases.
ERIC Educational Resources Information Center
de Vries, Arjen P.
2001-01-01
Investigates the role of data management in multimedia digital libraries, and its implications for the design of database management systems. Introduces the notions of content abstraction and content independence. Proposes a blueprint of a new class of database technology, which supports the basic functionality for the management of both content…
A Survey on Distributed Mobile Database and Data Mining
NASA Astrophysics Data System (ADS)
Goel, Ajay Mohan; Mangla, Neeraj; Patel, R. B.
2010-11-01
The anticipated increase in popular use of the Internet has created more opportunities in information dissemination, e-commerce, and multimedia communication. It has also created more challenges in organizing information and facilitating its efficient retrieval. In response, new techniques have evolved that facilitate the creation of such applications. Certainly the most promising among the new paradigms is the use of mobile agents. In this paper, mobile agent and distributed database technologies are applied to the banking system. Many approaches have been proposed to schedule data items for broadcasting in a mobile environment. This paper proposes an efficient strategy for accessing multiple data items in mobile environments and addresses the bottleneck of current banking.
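Broadcast-scheduling approaches of the kind the survey refers to typically allot air time to data items according to their access probabilities. The sketch below illustrates one classic heuristic of this family, the square-root rule (broadcast frequency roughly proportional to the square root of demand); it is offered as a generic example, not as the specific strategy proposed in the paper, and the item names are invented.

```python
import math

def broadcast_schedule(items, cycle_length):
    """Build one broadcast cycle in which each data item appears with
    frequency roughly proportional to sqrt(access probability) -- the
    classic square-root rule for reducing mean client wait time."""
    weights = {k: math.sqrt(p) for k, p in items.items()}
    total = sum(weights.values())
    slots = {k: max(1, round(cycle_length * w / total)) for k, w in weights.items()}
    n = sum(slots.values())
    # Interleave items evenly: always emit the item whose next
    # scheduled position (tracked in `due`) is earliest.
    schedule, due = [], {k: 0.0 for k in items}
    for _ in range(n):
        k = min(due, key=lambda key: due[key])
        schedule.append(k)
        due[k] += n / slots[k]
    return schedule

# Hypothetical banking data: 'balances' is requested far more often.
print(broadcast_schedule({"balances": 0.7, "rates": 0.2, "news": 0.1}, 10))
```

Even spacing within the cycle matters as much as frequency: a client tuning in at a random moment waits, on average, half the gap between consecutive broadcasts of the item it needs.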
Technological Entrepreneurship. Research in Entrepreneurship and Management Series.
ERIC Educational Resources Information Center
Phan, Philip H., Ed.
This document contains 11 papers on technological entrepreneurship, with particular focus on the following topics: the context of technological entrepreneurship; value creation and opportunity recognition in turbulent environments; venture capital in technological entrepreneurship; and managing in turbulent environments. The following papers are…
Records Legislation for Local Governments.
ERIC Educational Resources Information Center
New York State Education Dept., Albany. State Archives and Records Administration.
This information leaflet provides local governments with guidelines and suggestions for writing an ordinance, resolution, or local law to establish a records management program. Such a program is an over-arching, continuing, administrative effort which manages recorded information from initial creation to final disposition. It includes…
Auto-Versioning Systems Image Manager
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pezzaglia, Larry
2013-08-01
The av_sys_image_mgr utility provides an interface for the creation, manipulation, and analysis of system boot images for computer systems. It is primarily intended to provide a convenient method for managing the introduction of changes to boot images for long-lived production HPC systems.
Development of expert systems for analyzing electronic documents
NASA Astrophysics Data System (ADS)
Abeer Yassin, Al-Azzawi; Shidlovskiy, S.; Jamal, A. A.
2018-05-01
The paper analyses a Database Management System (DBMS). Expert systems, databases, and database technology have become an essential component of everyday life in modern society. As databases are widely used in every organization with a computer system, data resource control and data management are very important [1]. A DBMS is the most significant tool developed to serve multiple users in a database environment; it consists of programs that enable users to create and maintain a database. This paper focuses on the development of a database management system for the General Directorate for Education of Diyala in Iraq (GDED) using CLIPS, Java NetBeans, and Alfresco, together with system components previously developed at Tomsk State University's Faculty of Innovative Technology.
NASA Technical Reports Server (NTRS)
Moroh, Marsha
1988-01-01
A methodology for building interfaces of resident database management systems to a heterogeneous distributed database management system under development at NASA, the DAVID system, was developed. The feasibility of that methodology was demonstrated by construction of the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented. The work performed and the results are summarized.
Review of systematic reviews on acute procedural pain in children in the hospital setting
Stinson, Jennifer; Yamada, Janet; Dickson, Alison; Lamba, Jasmine; Stevens, Bonnie
2008-01-01
BACKGROUND: Acute pain is a common experience for hospitalized children. Despite mounting research on treatments for acute procedure-related pain, it remains inadequately treated. OBJECTIVE: To critically appraise all systematic reviews on the effectiveness of acute procedure-related pain management in hospitalized children. METHODS: Published systematic reviews and meta-analyses on pharmacological and nonpharmacological management of acute procedure-related pain in hospitalized children aged one to 18 years were evaluated. Electronic searches were conducted in the Cochrane Database of Systematic Reviews, Medline, EMBASE, the Cumulative Index to Nursing and Allied Health Literature and PsycINFO. Two reviewers independently selected articles for review and assessed their quality using a validated seven-point quality assessment measure. Any disagreements were resolved by a third reviewer. RESULTS: Of 1469 published articles on interventions for acute pain in hospitalized children, eight systematic reviews met the inclusion criteria and were included in the analysis. However, only five of these reviews were of high quality. Critical appraisal of pharmacological pain interventions indicated that amethocaine was superior to EMLA (AstraZeneca Canada Inc) for reducing needle pain. Distraction and hypnosis were nonpharmacological interventions effective for management of acute procedure-related pain in hospitalized children. CONCLUSIONS: There is growing evidence of rigorous evaluations of both pharmacological and nonpharmacological strategies for acute procedure-related pain in children; however, the evidence underlying some commonly used strategies is limited. The present review will enable the creation of a future research plan to facilitate clinical decision making and to develop clinical policy for managing acute procedure-related pain in children. PMID:18301816
The successes and challenges of open-source biopharmaceutical innovation.
Allarakhia, Minna
2014-05-01
Increasingly, open-source-based alliances seek to provide broad access to data, research-based tools, preclinical samples and downstream compounds. The challenge is how to create value from open-source biopharmaceutical innovation. This value creation may occur via transparency and usage of data across the biopharmaceutical value chain as stakeholders move dynamically between open source and open innovation. In this article, several examples are used to trace the evolution of biopharmaceutical open-source initiatives. The article specifically discusses the technological challenges associated with the integration and standardization of big data; the human capacity development challenges associated with skill development around big data usage; and the data-material access challenge associated with data and material access and usage rights, particularly as the boundary between open source and open innovation becomes more fluid. It is the author's opinion that assessing when and how value creation will occur through open-source biopharmaceutical innovation is paramount. The key is to determine the metrics of value creation and the necessary technological, educational and legal frameworks to support the downstream outcomes of now big-data-based open-source initiatives. A continued focus on early-stage value creation alone is not advisable. Instead, it would be more advisable to adopt an approach in which stakeholders transform open-source initiatives into open-source discovery, crowdsourcing and open product development partnerships on the same platform.
The Portfolio Creation Model Developed for the Capital Investment Program Plan Review (CIPPR)
2014-11-12
To inform senior management about CIPPR decision support, this scientific letter has been prepared upon request [1] to clarify some of the key concepts about the delivery process as laid out in the Defence Project Approval Directive (PAD).
Computer Security Products Technology Overview
1988-10-01
...this paper addresses fall into the areas of multi-user hosts, database management systems (DBMS), workstations, networks, guards and gateways, and... provide a portion of that protection, for example, a password scheme, a file protection mechanism, a secure database management system, or even a...
An Introduction to Database Management Systems.
ERIC Educational Resources Information Center
Warden, William H., III; Warden, Bette M.
1984-01-01
Description of database management systems for microcomputers highlights system features and factors to consider in microcomputer system selection. A method for ranking database management systems is explained and applied to a defined need, i.e., software support for indexing a weekly newspaper. A glossary of terms and 32-item bibliography are…
NASA Astrophysics Data System (ADS)
Thoemel, J.; Cosson, E.; Chazot, O.
2009-01-01
In the framework of the creation of an aerothermodynamic database for the design of the Intermediate Experimental Vehicle, the surface properties of heat shield materials that represent the boundary conditions are reviewed. Catalytic and radiative characteristics available in the literature are critically analyzed and summarized. It turns out that large uncertainties exist in these parameters. Finally, simple and conservative values are proposed.
Syllabus Computer in Astronomy
NASA Astrophysics Data System (ADS)
Hojaev, Alisher S.
2015-08-01
One of the most important and relevant subjects and training courses in the curriculum for undergraduate students at the National University of Uzbekistan is ‘Computer Methods in Astronomy’. It covers two semesters and includes both lecture and practice classes. Based on long-term experience, we prepared a tutorial for students that describes modern computer applications in astronomy. Briefly, the main directions of computer application in the field of astronomy are as follows:
1) Automating the process of observation, data acquisition and processing.
2) Creating and storing databases (the results of observations, experiments and theoretical calculations), their generalization, classification and cataloging, and working with large databases.
3) Solving theoretical problems (physical modeling, mathematical modeling of astronomical objects and phenomena, derivation of model parameters to obtain a solution of the corresponding equations, numerical simulations) and creating the appropriate software.
4) Use in the educational process (e-textbooks, presentations, virtual labs, remote education, testing), amateur astronomy and popularization of the science.
5) Use as a means of communication and data transfer, presenting and disseminating research results (web journals), and creating a virtual information system (local and global computer networks).
During the classes, special attention is paid to practical training and to individual work by students, including independent work.
AlReshidi, Nahar; Long, Tony; Darvill, Angela
2018-03-01
Despite extensive research in the international arena into pain and its management, there is, as yet, little research on the topic of pain in children in Saudi Arabia and in the Gulf countries generally. A systematic review was conducted to explore the impact of education programs on factors affecting paediatric nurses' postoperative pain management practice, in order to inform the creation of an educational program for nurses in Saudi Arabia. Knowledge about pain, attitudes towards pain, beliefs about children's pain, perceptions of children's reports of pain, self-efficacy with regard to pain management, and perceptions of barriers to optimal practice were all considered relevant factors. The review was restricted to randomized controlled trials and quasi-experimental designs, excluding studies focused on chronic pain or on populations other than children. Studies published in English between 2000 and 2016 were identified using the CINAHL, MEDLINE, Ovid SP, The Cochrane Library, ProQuest, and Google Scholar databases. Of 499 published studies identified by the search, 14 met the inclusion criteria and were included in the review. There was evidence of educational programs exerting a positive impact on enhancing paediatric nurses' knowledge of pain and modifying their attitudes towards it, but only limited evidence was available about the impact on nurses' beliefs and perceptions of children's reports of pain, nurses' self-efficacy, or barriers to optimal practice. None of the studies was conducted in Saudi Arabia. Further studies are needed to address additional aspects of preparedness for effective postoperative pain management. Details of educational programs used as experimental interventions must be included in reports.
Development of Performance Dashboards in Healthcare Sector: Key Practical Issues.
Ghazisaeidi, Marjan; Safdari, Reza; Torabi, Mashallah; Mirzaee, Mahboobeh; Farzi, Jebraeil; Goodini, Azadeh
2015-10-01
The static nature of performance reporting systems in the healthcare sector has resulted in inconsistent, incomparable, time-consuming, and static performance reports that cannot transparently reflect a rounded picture of performance or effectively support healthcare managers' decision making. The healthcare sector therefore needs interactive performance management tools, such as performance dashboards, to measure, monitor, and manage performance more effectively. The aim of this article was to identify key issues that need to be addressed for developing high-quality performance dashboards in the healthcare sector. A literature review was conducted across electronic research databases, e-journal collections, and printed journals, books, dissertations, and theses. The search strategy interchangeably used the terms "dashboard", "performance measurement system", and "executive information system" combined with the term "design" using the operator "AND". Search results (n=250) were adjusted for duplications, screened based on abstract relevancy and full-text availability (n=147), and then assessed for eligibility (n=40). Articles were eligible if they explicitly focused on the design of dashboards, performance measurement systems, or executive information systems. Finally, 28 relevant articles were included in the study. Creating high-quality performance dashboards requires addressing both performance measurement and executive information system design issues. Covering these two fields, the identified contents were categorized into four main domains: KPI development; data sources and data generation; integration of dashboards with source systems; and information presentation. This study outlines the main steps in developing dashboards for the purpose of performance management. Performance dashboards built on performance measurement and executive information system principles, and supported by a proper back-end infrastructure, will produce dynamic reports that help healthcare managers consistently measure performance, continuously detect outliers, deeply analyze the causes of poor performance, and effectively plan for the future.
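As a concrete illustration of the "KPI development" and "information presentation" domains, the sketch below aggregates a simple healthcare KPI from raw source-system records and flags departments that miss a target, the kind of back-end step a dashboard tile would refresh dynamically. The record layout, KPI, and threshold are hypothetical.

```python
from statistics import mean

# Hypothetical raw records from a source system: (department, wait_minutes)
visits = [("ER", 35), ("ER", 80), ("ER", 42), ("Radiology", 20), ("Radiology", 95)]

TARGET_WAIT = 45  # assumed KPI target, in minutes

def kpi_by_department(records):
    """Aggregate a 'mean wait time' KPI per department and flag
    departments exceeding the target, as an alerting dashboard would."""
    by_dept = {}
    for dept, wait in records:
        by_dept.setdefault(dept, []).append(wait)
    return {
        dept: {"mean_wait": round(mean(waits), 1), "alert": mean(waits) > TARGET_WAIT}
        for dept, waits in by_dept.items()
    }

print(kpi_by_department(visits))
# e.g. {'ER': {'mean_wait': 52.3, 'alert': True}, 'Radiology': {...}}
```

The point of such a pipeline is the shift the article describes: from a static report produced once, to a computation re-run against live source systems whenever the dashboard is viewed.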
Bittorf, A.; Diepgen, T. L.
1996-01-01
The World Wide Web (WWW) is becoming the major way of acquiring information in all scientific disciplines as well as in business. It is well suited for the fast distribution and exchange of up-to-date teaching resources. However, to date most teaching applications on the Web do not use its full power, lacking interactive components. We have set up a computer-based training (CBT) framework for dermatology, which consists of dynamic lecture scripts, case reports, an atlas, and a quiz system. All these components rely heavily on an underlying image database that permits the creation of dynamic documents. To achieve better performance and avoid the overhead of starting CGI processes, we used a daemon process that keeps the database open and can be accessed via HTTP. The results of our evaluation were very encouraging. PMID:8947625
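The performance idea here, one long-lived process holding the database connection open instead of a new CGI process per request, can be sketched in a few lines. The following is a modern, hypothetical reconstruction of the pattern, not the authors' 1996 implementation; the table and query-parameter names are invented.

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# The connection is opened once, when the daemon starts, and reused for
# every request -- avoiding per-request process-spawn and connect costs.
db = sqlite3.connect("images.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS images (filename TEXT, diagnosis TEXT)")

class ImageQueryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /?diagnosis=psoriasis
        qs = parse_qs(urlparse(self.path).query)
        diagnosis = qs.get("diagnosis", [""])[0]
        rows = db.execute(
            "SELECT filename FROM images WHERE diagnosis = ?", (diagnosis,)
        ).fetchall()
        body = "\n".join(r[0] for r in rows).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The daemon stays resident, so the database stays open across requests.
    HTTPServer(("", 8080), ImageQueryHandler).serve_forever()
```

Under CGI, every request would pay the cost of forking an interpreter and reopening the database; the resident daemon pays both costs exactly once.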
The frequency and distribution of high-velocity gas in the Galaxy
NASA Technical Reports Server (NTRS)
Nichols, Joy S.
1995-01-01
The purpose of this study was to estimate the frequency and distribution of high-velocity gas in the Galaxy using UV absorption line measurements from archival high-dispersion IUE spectra, and to identify particularly interesting regions for future study. Approximately 500 spectra have been examined. The study began with the creation of a database of all O and B stars with b ≤ 30 deg observed with IUE at high dispersion over its 18-year lifetime. The original database of 2500 unique objects was reduced to 1200 objects that had optimal exposures available. The next task was to determine the distances of these stars so the high-velocity structures could be mapped in the Galaxy. Spectroscopic distances were calculated for each star for which photometry was available. The photometry was acquired for each star using the SIMBAD database. Preference was given to the uvby system where available; otherwise the UBV system was used.
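Spectroscopic distances of the kind computed here follow from the extinction-corrected distance modulus: with apparent magnitude V, the absolute magnitude M_V implied by the spectral type, and extinction A_V, the distance is d = 10^((V - M_V - A_V + 5)/5) parsecs. The sketch below is a generic illustration of that calculation under the standard assumption A_V ≈ 3.1 E(B-V), not the paper's exact procedure; the example star values are invented.

```python
def spectroscopic_distance_pc(V, M_V, EBV, R_V=3.1):
    """Distance in parsecs from the extinction-corrected distance
    modulus: V - M_V = 5*log10(d) - 5 + A_V, with A_V = R_V * E(B-V)."""
    A_V = R_V * EBV
    return 10 ** ((V - M_V - A_V + 5) / 5)

# Example: a B-type star with V = 9.0, an assumed M_V = -4.0 from its
# spectral type, and reddening E(B-V) = 0.3 -> roughly 2.6 kpc.
print(f"{spectroscopic_distance_pc(9.0, -4.0, 0.3):.0f} pc")
```

Neglecting the extinction term would overestimate the distance, which is why the photometric colors (uvby or UBV) matter as much as the magnitudes themselves.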
The risk management professional and medication safety.
Cohen, Hedy; Tuohy, Nancy; Carroll, Roberta
2009-01-01
ASHRM is committed to the future development of the healthcare risk management profession. A key contribution to this commitment is the creation of a student version of ASHRM's best-selling Risk Management Handbook for Healthcare Organizations. The Student Edition was released this spring. It is now being made available to universities and colleges to incorporate into their degree programs.
Enhance Learning on Software Project Management through a Role-Play Game in a Virtual World
ERIC Educational Resources Information Center
Maratou, Vicky; Chatzidaki, Eleni; Xenos, Michalis
2016-01-01
This article presents a role-play game for software project management (SPM) in a three-dimensional online multiuser virtual world. The Opensimulator platform is used for the creation of an immersive virtual environment that facilitates students' collaboration and realistic interaction, in order to manage unexpected events occurring during the…
ERIC Educational Resources Information Center
Aquila, Frank D.
Educational managers may benefit greatly from adoption or adaptation of Japanese managerial practices, such as "Theory Z," involving developing staff potential and the creation of new incentives. There are at least 17 things administrators can do to utilize the key tenets of Japanese management. These include allowing teachers to…
ERIC Educational Resources Information Center
Humphrey, Keith
2008-01-01
In this article, the author discusses how the creation of enrollment management organizations separate from traditional student affairs and academic affairs divisions has caused negative impacts on student learning. A relatively new organizational structure that first appeared in the mid-1970s, enrollment management has its roots in…
Genetic management of plants in the California state parks: A primer
Deborah L. Rogers; Connie (editor) Millar
1992-01-01
Conservation biology, of which genetics is a major component, is receiving unprecedented attention in the management of natural resources. National fora on biological diversity, adoption by the World Bank of a strong wildland policy, and creation of new management strategies for species protection in our national parks all attest to the growing acceptance and...
Edmonstone, J; Chisnell, C
1992-01-01
The creation of clinical directorates in acute hospital services has directed attention towards the clinical director role. The two "support" roles of clinical nurse manager and business manager have received less attention. Reports on an action research study into these roles, examining recruitment and selection, monitoring of performance, training needs and succession planning. Deals with the clinical nurse manager role.
Li, Yuanfang; Zhou, Zhiwei
2016-02-01
Precision medicine is a new medical concept and model based on personalized medicine, the rapid progress of genome sequencing technology, and the cross-application of bioinformatics and big data science. Precision medicine improves the diagnosis and treatment of gastric cancer through deeper analyses of its characteristics, pathogenesis, and other core issues. A clinical cancer database is important for promoting the development of precision medicine, so close attention must be paid to its construction and management. The clinical database of Sun Yat-sen University Cancer Center is composed of a medical record database, a blood specimen bank, a tissue bank, and a medical imaging database. To ensure the good quality of the database, its design and management should follow a strict standard operating procedure (SOP) model. Data sharing is an important way to advance medical research in the era of medical big data. The construction and management of clinical databases must likewise be strengthened and innovated.
Taverna, Constanza Giselle; Mazza, Mariana; Bueno, Nadia Soledad; Alvarez, Christian; Amigot, Susana; Andreani, Mariana; Azula, Natalia; Barrios, Rubén; Fernández, Norma; Fox, Barbara; Guelfand, Liliana; Maldonado, Ivana; Murisengo, Omar Alejandro; Relloso, Silvia; Vivot, Matias; Davel, Graciela
2018-05-11
Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) has revolutionized the identification of microorganisms in clinical laboratories because it is rapid, relatively simple to use, accurate, and applicable to a wide range of microorganisms. Several studies have demonstrated the utility of this technique in the identification of yeasts; however, its performance is usually improved by extending the database. Here we developed an in-house database of 143 strains belonging to 42 yeast species on the MALDI Biotyper platform, and we validated the extended database with 388 regional strains and 15 reference strains belonging to 55 yeast species. We also performed an intra- and interlaboratory study to assess reproducibility, and analyzed the use of the cutoff values of 1.700 and 2.000 for correct identification at the species level. The creation of an in-house database extending the manufacturer's database was successful, given that no incorrect identifications were introduced. The best performance was observed using the extended database and a cutoff value of 1.700, with a sensitivity of 0.94 and a specificity of 0.96. The reproducibility study proved useful for detecting deviations and could be used for external quality control. The extended database was able to differentiate closely related species, and it has potential for distinguishing the molecular genotypes of Cryptococcus neoformans and Cryptococcus gattii.
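The cutoff comparison operates on the Biotyper's log-score for the best database match: an identification is accepted at species level only when that score reaches the chosen cutoff. A minimal sketch of that decision rule, and of how the choice of cutoff trades sensitivity against stringency, is given below; the species names and scores are invented for illustration.

```python
def species_id(best_match_species, score, cutoff=1.700):
    """Accept the top database hit at species level only if its
    log-score reaches the cutoff; otherwise report no reliable ID."""
    return best_match_species if score >= cutoff else None

# (true_species, top_database_hit, log-score) -- hypothetical runs
runs = [
    ("C. albicans", "C. albicans", 2.31),
    ("C. glabrata", "C. glabrata", 1.82),   # accepted at 1.700, rejected at 2.000
    ("C. parapsilosis", "C. orthopsilosis", 1.45),
]

for cutoff in (1.700, 2.000):
    correct = sum(1 for true, hit, s in runs if species_id(hit, s, cutoff) == true)
    print(f"cutoff {cutoff}: {correct}/{len(runs)} correct species-level IDs")
```

Lowering the cutoff from 2.000 to 1.700 accepts more true matches (raising sensitivity), and the study's finding is that the extended database keeps specificity high enough for the lower cutoff to be the better operating point.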
Potential Knowledge Management Contributions to Human Performance Technology Research and Practice.
ERIC Educational Resources Information Center
Schwen, Thomas M.; Kalman, Howard K.; Hara, Noriko; Kisling, Eric L.
1998-01-01
Considers aspects of knowledge management that have the potential to enhance human-performance-technology research and practice. Topics include intellectual capital; learning organization versus organizational learning; the importance of epistemology; the relationship of knowledge, learning, and performance; knowledge creation; socio-technical…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-02-09
...-modal freight and intelligent supply chain management, provides significant business..., including port development, airport development, freight rail systems and technologies, supply chain systems... for U.S. engineers, program management firms, and manufacturers to contribute to the creation of new...
36 CFR 1222.14 - What are nonrecord materials?
Code of Federal Regulations, 2011 CFR
2011-07-01
... ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222... from the statutory definition of records (see 44 U.S.C. 3301). An agency's records management program... publications and of processed documents. Catalogs, trade journals, and other publications that are received...
36 CFR 1222.14 - What are nonrecord materials?
Code of Federal Regulations, 2014 CFR
2014-07-01
... ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222... from the statutory definition of records (see 44 U.S.C. 3301). An agency's records management program... publications and of processed documents. Catalogs, trade journals, and other publications that are received...
36 CFR 1222.14 - What are nonrecord materials?
Code of Federal Regulations, 2012 CFR
2012-07-01
... ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222... from the statutory definition of records (see 44 U.S.C. 3301). An agency's records management program... publications and of processed documents. Catalogs, trade journals, and other publications that are received...
SELECTION, WITH MINIMAL BIAS, OF AN EXPERIMENTAL CONTROL FROM NATURAL WETLAND ENVIRONMENTS
This report is from the National Network for Environmental Management Studies, conducted under the auspices of the Office of Cooperative Environmental Management, U.S. Environmental Protection Agency. The goal of wetland restoration and creation projects is to replicate the native w...
36 CFR 1222.14 - What are nonrecord materials?
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMINISTRATION RECORDS MANAGEMENT CREATION AND MAINTENANCE OF FEDERAL RECORDS Identifying Federal Records § 1222.14 What are nonrecord materials? Nonrecord materials are U.S. Government-owned documentary materials... from the statutory definition of records (see 44 U.S.C. 3301). An agency's records management program...