Managing Heterogeneous Information Systems through Discovery and Retrieval of Generic Concepts.
ERIC Educational Resources Information Center
Srinivasan, Uma; Ngu, Anne H. H.; Gedeon, Tom
2000-01-01
Introduces a conceptual integration approach to heterogeneous databases or information systems that exploits the similarity in metalevel information and performs metadata mining on database objects to discover a set of concepts that serve as a domain abstraction and provide a conceptual layer above existing legacy systems. Presents results of…
Database Systems. Course Three. Information Systems Curriculum.
ERIC Educational Resources Information Center
O'Neil, Sharon Lund; Everett, Donna R.
This course is the third of seven in the Information Systems curriculum. The purpose of the course is to familiarize students with database management concepts and standard database management software. Databases and their roles, advantages, and limitations are explained. An overview of the course sets forth the condition and performance standard…
Advanced transportation system studies. Alternate propulsion subsystem concepts: Propulsion database
NASA Technical Reports Server (NTRS)
Levack, Daniel
1993-01-01
The Advanced Transportation System Studies alternate propulsion subsystem concepts propulsion database interim report is presented. The objective of the database development task is to produce a propulsion database which is easy to use and modify while also being comprehensive in the level of detail available. The database is to be available on the Macintosh computer system. The task is to extend across all three years of the contract. Consequently, a significant fraction of the effort in this first year of the task was devoted to the development of the database structure to ensure a robust base for the following years' efforts. Nonetheless, significant point design propulsion system descriptions and parametric models were also produced. Each of the two propulsion databases, the parametric propulsion database and the propulsion system database, is described. The descriptions include a user's guide to each code, write-ups for models used, and sample output. The parametric database has models for LOX/H2 and LOX/RP liquid engines, solid rocket boosters using three different propellants, a hybrid rocket booster, and a NERVA-derived nuclear thermal rocket engine.
Teaching Database Management System Use in a Library School Curriculum.
ERIC Educational Resources Information Center
Cooper, Michael D.
1985-01-01
Description of database management systems course being taught to students at School of Library and Information Studies, University of California, Berkeley, notes course structure, assignments, and course evaluation. Approaches to teaching concepts of three types of database systems are discussed and systems used by students in the course are…
ARACHNID: A prototype object-oriented database tool for distributed systems
NASA Technical Reports Server (NTRS)
Younger, Herbert; Oreilly, John; Frogner, Bjorn
1994-01-01
This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors, with only weak data linkage required between the processors. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
Multi-Sensor Scene Synthesis and Analysis
1981-09-01
Table-of-contents excerpt: Quad Trees for Image Representation and Processing; 2.6.2 Databases; 2.6.2.1 Definitions and Basic Concepts; 2.6.3 Use of Databases in Hierarchical Scene Analysis; 2.6.4 Use of Relational Tables; Multisensor Image Database Systems (MIDAS); 2.7.2 Relational Database System for Pictures; 2.7.3 Relational Pictorial Database
Timely Diagnostic Feedback for Database Concept Learning
ERIC Educational Resources Information Center
Lin, Jian-Wei; Lai, Yuan-Cheng; Chuang, Yuh-Shy
2013-01-01
To efficiently learn database concepts, this work adopts association rules to provide diagnostic feedback for drawing an Entity-Relationship Diagram (ERD). Using association rules and Asynchronous JavaScript and XML (AJAX) techniques, this work implements a novel Web-based Timely Diagnosis System (WTDS), which provides timely diagnostic feedback…
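The rule-mining step behind this kind of diagnostic feedback can be sketched in a few lines. The mistake codes, data, and thresholds below are invented for illustration; this is not the WTDS implementation:

```python
from itertools import combinations

# Each "transaction" lists the mistake codes observed in one student's ERD.
transactions = [
    {"missing_pk", "wrong_cardinality"},
    {"missing_pk", "wrong_cardinality", "dangling_entity"},
    {"missing_pk"},
    {"wrong_cardinality", "dangling_entity"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def rules(transactions, min_support=0.5, min_confidence=0.6):
    """Derive association rules A -> B between single mistake codes."""
    items = set().union(*transactions)
    found = []
    for a, b in combinations(sorted(items), 2):
        for lhs, rhs in ((a, b), (b, a)):
            s = support({lhs, rhs}, transactions)
            if s < min_support:
                continue
            conf = s / support({lhs}, transactions)
            if conf >= min_confidence:
                found.append((lhs, rhs, round(s, 2), round(conf, 2)))
    return found

for lhs, rhs, s, c in rules(transactions):
    print(f"students who make '{lhs}' also tend to make '{rhs}' "
          f"(support={s}, confidence={c})")
```

A tutoring system could use such rules to warn a student who exhibits the left-hand-side mistake about the likely co-occurring one.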
Using decision-tree classifier systems to extract knowledge from databases
NASA Technical Reports Server (NTRS)
St.clair, D. C.; Sabharwal, C. L.; Hacke, Keith; Bond, W. E.
1990-01-01
One difficulty in applying artificial intelligence techniques to the solution of real world problems is that the development and maintenance of many AI systems, such as those used in diagnostics, require large amounts of human resources. At the same time, databases frequently exist which contain information about the process(es) of interest. Recently, efforts to reduce development and maintenance costs of AI systems have focused on using machine learning techniques to extract knowledge from existing databases. Research is described in the area of knowledge extraction using a class of machine learning techniques called decision-tree classifier systems. Results of this research suggest ways of performing knowledge extraction which may be applied in numerous situations. In addition, a measurement called the concept strength metric (CSM) is described which can be used to determine how well the resulting decision tree can differentiate between the concepts it has learned. The CSM can be used to determine whether or not additional knowledge needs to be extracted from the database. An experiment involving real world data is presented to illustrate the concepts described.
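The split-selection step at the heart of decision-tree classifiers can be illustrated with a minimal information-gain computation. The diagnostic attributes and records below are hypothetical, not the authors' data, and this is not their CSM code:

```python
import math
from collections import Counter

# Hypothetical diagnostic records extracted from a process database.
records = [
    {"temp": "high", "vibration": "yes", "fault": "bearing"},
    {"temp": "high", "vibration": "no",  "fault": "bearing"},
    {"temp": "low",  "vibration": "yes", "fault": "none"},
    {"temp": "low",  "vibration": "no",  "fault": "none"},
    {"temp": "high", "vibration": "yes", "fault": "bearing"},
]

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def information_gain(records, attribute, target="fault"):
    """Entropy reduction achieved by splitting the records on one attribute."""
    base = entropy([r[target] for r in records])
    remainder = 0.0
    for value in {r[attribute] for r in records}:
        subset = [r[target] for r in records if r[attribute] == value]
        remainder += len(subset) / len(records) * entropy(subset)
    return base - remainder

# The tree-growing loop would pick the highest-gain attribute at each node.
best = max(("temp", "vibration"), key=lambda a: information_gain(records, a))
print("best splitting attribute:", best)
```

On this toy data, "temp" separates the two fault classes perfectly, so it wins the split.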
Safeguarding Databases Basic Concepts Revisited.
ERIC Educational Resources Information Center
Cardinali, Richard
1995-01-01
Discusses issues of database security and integrity, including computer crime and vandalism, human error, computer viruses, employee and user access, and personnel policies. Suggests some precautions to minimize system vulnerability such as careful personnel screening, audit systems, passwords, and building and software security systems. (JKP)
James Webb Space Telescope XML Database: From the Beginning to Today
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Fatig, Curtis C.
2005-01-01
The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using an XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard for the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission-specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has successfully exchanged the XML database with the Eclipse, EPOCH, and ASIST ground systems, the Portable Spacecraft Simulator (PSS), a front-end system, and the Integrated Trending and Plotting System (ITPS).
This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.
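The benefit of a shared, non-proprietary XML format can be sketched with a toy ingest routine: one generic parser serves every ground system that reads the format, instead of one custom script per system. The element names below are simplified illustrations, not the actual XTCE schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified telemetry definition; real XTCE is far richer.
xml_doc = """
<TelemetryDatabase>
  <Parameter name="BATT_VOLT" type="float" units="V"/>
  <Parameter name="MODE" type="enum" units=""/>
</TelemetryDatabase>
"""

def load_parameters(text):
    """Generic ingest: any consumer of the shared format reuses this routine."""
    root = ET.fromstring(text)
    return {
        p.get("name"): {"type": p.get("type"), "units": p.get("units")}
        for p in root.findall("Parameter")
    }

params = load_parameters(xml_doc)
print(params["BATT_VOLT"])  # {'type': 'float', 'units': 'V'}
```

A simulator, an archive, and a trending tool could all call the same loader, which is the cost argument the abstract makes.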
A Systems Development Life Cycle Project for the AIS Class
ERIC Educational Resources Information Center
Wang, Ting J.; Saemann, Georgia; Du, Hui
2007-01-01
The Systems Development Life Cycle (SDLC) project was designed for use by an accounting information systems (AIS) class. Along the tasks in the SDLC, this project integrates students' knowledge of transaction and business processes, systems documentation techniques, relational database concepts, and hands-on skills in relational database use.…
Combining computational models, semantic annotations and simulation experiments in a graph database
Henkel, Ron; Wolkenhauer, Olaf; Waltemath, Dagmar
2015-01-01
Model repositories such as the BioModels Database, the CellML Model Repository or JWS Online are frequently accessed to retrieve computational models of biological systems. However, their storage concepts support only restricted types of queries, and not all data inside the repositories can be retrieved. In this article we present a storage concept that meets this challenge. It is grounded in a graph database, reflects the models' structure, incorporates semantic annotations and simulation descriptions, and ultimately connects different types of model-related data. The connections between heterogeneous model-related data and bio-ontologies enable efficient search via biological facts and grant access to new model features. The introduced concept notably improves access to computational models and associated simulations in a model repository. This has positive effects on tasks such as model search, retrieval, ranking, matching and filtering. Furthermore, our work for the first time enables CellML- and Systems Biology Markup Language-encoded models to be effectively maintained in one database. We show how these models can be linked via annotations and queried. Database URL: https://sems.uni-rostock.de/projects/masymos/ PMID:25754863
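The core idea, connecting models, annotations and simulations through explicit edges so a single query spans both encoding formats, can be approximated with a plain in-memory property graph. Node identifiers, edge types and the ontology term are invented for illustration; the actual system runs on a graph database:

```python
# Minimal in-memory property graph: nodes carry labels, edges carry types.
nodes = {
    "m1": {"label": "Model", "format": "SBML"},
    "m2": {"label": "Model", "format": "CellML"},
    "a1": {"label": "Annotation", "term": "GO:0008152"},  # a bio-ontology term
    "s1": {"label": "Simulation", "desc": "time course"},
}
edges = [
    ("m1", "ANNOTATED_WITH", "a1"),
    ("m2", "ANNOTATED_WITH", "a1"),
    ("s1", "SIMULATES", "m1"),
]

def models_annotated_with(term):
    """Cross-format search: every model linked to the ontology term,
    regardless of whether it is SBML- or CellML-encoded."""
    ann_ids = {n for n, props in nodes.items() if props.get("term") == term}
    return sorted(src for src, rel, dst in edges
                  if rel == "ANNOTATED_WITH" and dst in ann_ids)

print(models_annotated_with("GO:0008152"))  # ['m1', 'm2']
```

The same traversal pattern extends to simulation descriptions, which is what makes ranking and filtering by biological facts possible.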
First Database Course--Keeping It All Organized
ERIC Educational Resources Information Center
Baugh, Jeanne M.
2015-01-01
All Computer Information Systems programs require a database course for their majors. This paper describes an approach to such a course in which real-world examples, both design projects and actual database application projects, are incorporated throughout the semester. Students are expected to apply the traditional database concepts to actual…
Semi-Automated Annotation of Biobank Data Using Standard Medical Terminologies in a Graph Database.
Hofer, Philipp; Neururer, Sabrina; Goebel, Georg
2016-01-01
Data describing biobank resources frequently contains unstructured free-text information or insufficient coding standards. (Bio-) medical ontologies like the Orphanet Rare Diseases Ontology (ORDO) or the Human Disease Ontology (DOID) provide a large number of concepts, synonyms and entity relationship properties. Such standard terminologies increase quality and granularity of input data by adding comprehensive semantic background knowledge from validated entity relationships. Moreover, cross-references between terminology concepts facilitate data integration across databases using different coding standards. In order to encourage the use of standard terminologies, our aim is to identify and link relevant concepts with free-text diagnosis inputs within a biobank registry. Relevant concepts are selected automatically by lexical matching and SPARQL queries against an RDF triplestore. To ensure correctness of annotations, proposed concepts have to be confirmed by medical data administration experts before they are entered into the registry database. Relevant (bio-) medical terminologies describing diseases and phenotypes were identified and stored in a graph database which was tied to a local biobank registry. Concept recommendations during data input trigger a structured description of medical data and facilitate data linkage between heterogeneous systems.
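A minimal sketch of the lexical-matching step, assuming a tiny hand-made concept index rather than the real ORDO/DOID terminologies and SPARQL queries; in the described workflow, every proposal would still be confirmed by an expert before storage:

```python
import re

# Illustrative slice of a concept index: concept ID -> label and synonyms.
concepts = {
    "ORDO:324": {"label": "Fabry disease",
                 "synonyms": ["Anderson-Fabry disease"]},
    "DOID:9352": {"label": "type 2 diabetes mellitus",
                  "synonyms": ["adult-onset diabetes"]},
}

def normalize(text):
    """Lowercase and strip punctuation so lexical comparison is robust."""
    return re.sub(r"[^a-z0-9 ]", " ", text.lower()).split()

def propose_concepts(free_text):
    """Propose every concept whose label or synonym tokens all occur in the
    free-text diagnosis; a human expert confirms before registry entry."""
    tokens = set(normalize(free_text))
    proposals = []
    for concept_id, entry in concepts.items():
        for name in [entry["label"], *entry["synonyms"]]:
            if set(normalize(name)) <= tokens:
                proposals.append(concept_id)
                break
    return proposals

print(propose_concepts("Pat. has Fabry disease, suspected since 2012"))
```

Matching on synonyms as well as preferred labels is what lets heterogeneous free-text inputs converge on the same standardized concept.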
Application of new type of distributed multimedia databases to networked electronic museum
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1999-01-01
Recently, various kinds of multimedia application systems have been actively developed, building on advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager' which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and performs a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval; retrieval can be performed effectively through cooperative processing among multiple domains. A communication language and protocols are also defined in the system and are used in every communication action. A language interpreter in each machine translates the communication language into the internal language used in that machine; through the interpreter, internal modules such as the DBMS and user interface modules can be freely selected. A concept of 'content-set' is also introduced: a content-set is defined as a package of mutually related contents, which the system handles as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built.
The results of this experiment indicate that the proposed system can effectively retrieve the desired contents under the control of a number of distributed domains. The results also indicate that the system can work effectively even as it becomes large.
A Strategy for Reusing the Data of Electronic Medical Record Systems for Clinical Research.
Matsumura, Yasushi; Hattori, Atsushi; Manabe, Shiro; Tsuda, Tsutomu; Takeda, Toshihiro; Okada, Katsuki; Murata, Taizo; Mihara, Naoki
2016-01-01
There is a great need to reuse data stored in electronic medical record (EMR) databases for clinical research. We previously reported the development of a system in which progress notes and case report forms (CRFs) were simultaneously recorded using a template in the EMR in order to exclude redundant data entry. To make the data collection process more efficient, we are developing a system in which the data originally stored in the EMR database can be populated within a frame in a template. We developed interface plugin modules that retrieve data from the databases of other EMR applications. A universal keyword written in a template master is converted to a local code using a data conversion table, and then the objective data is retrieved from the corresponding database. The template element data, which are entered by a template, are stored in the template element database. To retrieve the data entered by other templates, the objective data is designated by the template element code together with the template code, or by the concept code if one is written for the element. When the application systems in the EMR generate documents, they also generate a PDF file and a corresponding document profile XML, which includes important data, and send them to the document archive server and the data sharing server, respectively. In the data sharing server, each data element is represented as an item with an item code, a document class code, and its value. By linking a concept code to an item identifier, the objective data can be retrieved by designating a concept code. We employed a flexible strategy in which a unique identifier for a hospital is initially attached to all of the data that the hospital generates. The identifier is secondarily linked with concept codes. The data that are not linked with a concept code can also be retrieved using the unique identifier of the hospital. This strategy makes it possible to reuse any of a hospital's data.
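The keyword-to-local-code indirection described above can be sketched as follows. Hospital names, local codes, and values are invented for illustration; the point is that one universal keyword resolves to different local codes per site:

```python
# Hypothetical conversion tables; each hospital EMR keeps its own local codes.
conversion_tables = {
    "hospital_a": {"BODY_WEIGHT": "LAB0042", "SYS_BP": "VIT0007"},
    "hospital_b": {"BODY_WEIGHT": "W-001",   "SYS_BP": "BP-SYS"},
}

# Hypothetical local EMR stores keyed by each site's own codes.
local_databases = {
    "hospital_a": {"LAB0042": 68.5, "VIT0007": 122},
    "hospital_b": {"W-001": 70.1, "BP-SYS": 118},
}

def resolve(hospital, universal_keyword):
    """Template frame population: universal keyword -> local code -> value."""
    local_code = conversion_tables[hospital][universal_keyword]
    return local_databases[hospital][local_code]

# The same template keyword retrieves data from structurally different EMRs.
print(resolve("hospital_a", "BODY_WEIGHT"))  # 68.5
print(resolve("hospital_b", "BODY_WEIGHT"))  # 70.1
```

The secondary linking of hospital-unique identifiers to concept codes works the same way, just with the concept code as the universal key.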
Concept-oriented indexing of video databases: toward semantic sensitive retrieval and browsing.
Fan, Jianping; Luo, Hangzai; Elmagarmid, Ahmed K
2004-07-01
Digital video now plays an important role in medical education, health care, telemedicine and other medical applications. Several content-based video retrieval (CBVR) systems have been proposed in the past, but they still suffer from the following challenging problems: semantic gap, semantic video concept modeling, semantic video classification, and concept-oriented video database indexing and access. In this paper, we propose a novel framework to make some advances toward the final goal to solve these problems. Specifically, the framework includes: 1) a semantic-sensitive video content representation framework by using principal video shots to enhance the quality of features; 2) semantic video concept interpretation by using flexible mixture model to bridge the semantic gap; 3) a novel semantic video-classifier training framework by integrating feature selection, parameter estimation, and model selection seamlessly in a single algorithm; and 4) a concept-oriented video database organization technique through a certain domain-dependent concept hierarchy to enable semantic-sensitive video retrieval and browsing.
Terminological aspects of data elements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehlow, R.A.; Kenworthey, W.H. Jr.; Schuldt, R.E.
1991-01-01
The creation and display of data comprise a process that involves a sequence of steps requiring both semantic and systems analysis. An essential early step in this process is the choice, definition, and naming of data element concepts, followed by the specification of other needed data element concept attributes. The attributes and values of a data element concept remain associated with it from its birth as a concept to a generic data element that serves as a template for final application. Terminology is therefore centrally important to the entire data creation process. Smooth mapping from natural language to a database is a critical aspect of database design, and consequently it requires terminology standardization from the outset of database work. In this paper the semantic aspects of data elements are analyzed and discussed. Seven kinds of data element concept information are considered, and those that require terminological development and standardization are identified. The four terminological components of a data element are the hierarchical type of a concept, functional dependencies, schemata showing conceptual structures, and definition statements. These constitute the conventional role of terminology in database design. 12 refs., 8 figs., 1 tab.
Linking Multiple Databases: Term Project Using "Sentences" DBMS.
ERIC Educational Resources Information Center
King, Ronald S.; Rainwater, Stephen B.
This paper describes a methodology for use in teaching an introductory Database Management System (DBMS) course. Students master basic database concepts through the use of a multiple component project implemented in both relational and associative data models. The associative data model is a new approach for designing multi-user, Web-enabled…
Aquatic information and retrieval (AQUIRE) database system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, R.; Niemi, G.; Pilli, A.
The AQUIRE database system is one of the foremost international resources for finding aquatic toxicity information. Information in the system is organized around the concept of an 'aquatic toxicity test.' A toxicity test record contains information about the chemical, species, endpoint, endpoint concentrations, and test conditions under which the toxicity test was conducted. For the past 10 years, the aquatic literature has been reviewed and entered into the system. Currently, the AQUIRE database system contains data on more than 2,400 species, 160 endpoints, 5,000 chemicals, 6,000 references, and 104,000 toxicity tests.
A web-based system architecture for ontology-based data integration in the domain of IT benchmarking
NASA Astrophysics Data System (ADS)
Pfaff, Matthias; Krcmar, Helmut
2018-03-01
In the domain of IT benchmarking (ITBM), a variety of data and information are collected. Although these data serve as the basis for business analyses, no unified semantic representation of such data yet exists. Consequently, data analysis across different distributed data sets and different benchmarks is almost impossible. This paper presents a system architecture and prototypical implementation for an integrated data management of distributed databases based on a domain-specific ontology. To preserve the semantic meaning of the data, the ITBM ontology is linked to data sources and functions as the central concept for database access. Thus, additional databases can be integrated by linking them to this domain-specific ontology and are directly available for further business analyses. Moreover, the web-based system supports the process of mapping ontology concepts to external databases by introducing a semi-automatic mapping recommender and by visualizing possible mapping candidates. The system also provides a natural language interface to easily query linked databases. The expected result of this ontology-based approach of knowledge representation and data access is an increase in knowledge and data sharing in this domain, which will enhance existing business analysis methods.
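The semi-automatic mapping recommender can be approximated with simple string similarity over concept and column names. The names below are invented, and this is not the authors' implementation, only a sketch of the recommend-then-confirm pattern:

```python
import difflib

# Hypothetical ontology concepts and column names from a newly added database.
ontology_concepts = ["server_cost", "license_cost", "incident_count"]
external_columns = ["srv_costs", "lic_cost_total", "num_incidents", "location"]

def recommend_mappings(concepts, columns, cutoff=0.4):
    """Rank candidate columns per ontology concept by string similarity;
    a human confirms or rejects each suggestion before linking."""
    suggestions = {}
    for concept in concepts:
        scored = sorted(
            ((difflib.SequenceMatcher(None, concept, col).ratio(), col)
             for col in columns),
            reverse=True,
        )
        suggestions[concept] = [col for score, col in scored if score >= cutoff]
    return suggestions

print(recommend_mappings(ontology_concepts, external_columns)["server_cost"])
```

Once a candidate is confirmed, the column becomes reachable through the ontology concept, which is what makes newly integrated databases immediately available to existing analyses.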
Ontology to relational database transformation for web application development and maintenance
NASA Astrophysics Data System (ADS)
Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful
2018-03-01
Ontology is used as knowledge representation while a database is used as a facts recorder in a KMS (Knowledge Management System). In most applications, data are managed in a database system and updated through the application, and are then transformed to knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, the application and database can be generated from the ontology. Most existing frameworks generate the application from its database; in this research, the ontology is used for generating the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without depending on a software developer. In many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment. A case study was conducted to prove the concept.
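The ontology-to-relational transformation can be sketched as one table per ontology class and one column per datatype property. The classes, properties, and type mapping below are invented for illustration, not the paper's framework:

```python
# Hypothetical mini-ontology: class name -> datatype properties.
ontology = {
    "Employee": {"name": "string", "salary": "float"},
    "Department": {"title": "string"},
}

# Illustrative mapping from ontology datatypes to SQL column types.
SQL_TYPES = {"string": "VARCHAR(255)", "float": "REAL", "int": "INTEGER"}

def class_to_ddl(class_name, properties):
    """One table per class, one column per datatype property, plus a
    surrogate key; object properties would become foreign keys."""
    columns = ["  id INTEGER PRIMARY KEY"]
    columns += [f"  {prop} {SQL_TYPES[ptype]}"
                for prop, ptype in properties.items()]
    return f"CREATE TABLE {class_name.lower()} (\n" + ",\n".join(columns) + "\n);"

for cls, props in ontology.items():
    print(class_to_ddl(cls, props))
```

Regenerating the DDL whenever the ontology changes is the mechanism that frees the knowledge engineer from waiting on a developer.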
Archetype relational mapping - a practical openEHR persistence solution.
Wang, Li; Min, Lingtong; Wang, Rui; Lu, Xudong; Duan, Huilong
2015-11-05
One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for the archetype-based EHR systems to support healthcare delivery in the clinical environment. First, the data requirements of the EHR systems are analysed and organized into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database. A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests. Additionally, two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two tests. The time difference of query executions conducted by the ARM database and the conventional database is less than 130 %. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database requires far more time than the other two databases to execute both the data-retrieving and the patient-searching tests. 
The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally-designed EHR systems, and can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
NASA Technical Reports Server (NTRS)
Brodsky, Alexander; Segal, Victor E.
1999-01-01
The EOSCUBE constraint database system is designed to be a software productivity tool for high-level specification and efficient generation of EOSDIS and other scientific products. These products are typically derived from large volumes of multidimensional data which are collected via a range of scientific instruments.
ERIC Educational Resources Information Center
Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.
2000-01-01
These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)
In-vehicle signing concepts: An analytical precursor to an in-vehicle information system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Tufano, D.R.; Knee, H.E.
The purpose of the project described in this report is to develop alternative In-Vehicle Signing (IVS) system concepts based on allocation of the functions associated with driving a road vehicle. In the driving milieu, tasks can be assigned to one of three agents: the driver, the vehicle, or the infrastructure. Assignment of tasks is based on a philosophy of function allocation which can emphasize any of several philosophical approaches. In this project, function allocations were made according to current practice in vehicle design and signage as well as a human-centered strategy. Several IVS system concepts are presented based on differing functional allocation outcomes. A design space for IVS systems is described, and technical analyses of a map-based and several beacon-based IVS systems are presented. Because of problems associated with both map-based and beacon-based concepts, a hybrid IVS concept is proposed. The hybrid system uses on-board map-based databases to serve those areas in which signage can be anticipated to be relatively static, such as large metropolitan areas where few if any new roads will be built. For areas where sign density is low, and/or where population growth causes changes in traffic flow, beacon-based concepts function best; in this situation, changes need only occur in the central database from which sign information is transmitted. This report presents system concepts which enable progress from the concept-independent functional requirements to a more specific set of system concepts which facilitate analysis and selection of hardware and software to perform the functions of IVS. As such, this phase of the project represents a major step toward the design and development of a prototype IVS system. Once such a system is developed, a program of testing, evaluation, and revision will be undertaken. Ultimately, such a system can become part of the road vehicle of the future.
A Novel Concept for the Search and Retrieval of the Derwent Markush Resource Database.
Barth, Andreas; Stengel, Thomas; Litterst, Edwin; Kraut, Hans; Matuszczyk, Henry; Ailer, Franz; Hajkowski, Steve
2016-05-23
The representation of and search for generic chemical structures (Markush) remains a continuing challenge. Several research groups have addressed this problem, and over time a limited number of practical solutions have been proposed. Today there are two large commercial providers of Markush databases: Chemical Abstracts Service (CAS) and Thomson Reuters. The Thomson Reuters "Derwent" Markush database is currently offered via the online services Questel and STN and as a data feed for in-house use. The aim of this paper is to briefly review the existing Markush systems (databases plus search engines) and to describe our new approach for the implementation of the Derwent Markush Resource on STN. Our new approach demonstrates the integration of the Derwent Markush Resource database into the existing chemistry-focused STN platform without loss of detail. This provides compatibility with other structure and Markush databases on STN and at the same time makes it possible to deploy the specific features and functions of the Derwent approach. It is shown that the different Markush languages developed by CAS and Derwent can be combined into a single general Markush description. In this concept the generic nodes are grouped together in a unique hierarchy where all chemical elements and fragments can be integrated. As a consequence, both systems are searchable using a single structure query. Moreover, the presented concept could serve as a promising starting point for a common generalized description of Markush structures.
NASA Technical Reports Server (NTRS)
Kelley, Steve; Roussopoulos, Nick; Sellis, Timos
1992-01-01
The goal of the Universal Index System (UIS) is to provide an easy-to-use and reliable interface to many different kinds of database systems. The impetus for this system was to simplify database index management for users, thus encouraging the use of indexes. As the idea grew into an actual system design, the concept of increasing database performance by facilitating the use of time-saving techniques at the user level became a theme of the project. This final report describes the design and implementation of UIS and its language interfaces, and includes the User's Guide and the Reference Manual.
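The index-management idea above can be illustrated with a minimal, hypothetical sketch. UIS itself fronted several different database systems; SQLite merely stands in here, and the table and index names are invented:

```python
import sqlite3

# Illustrative sketch only: SQLite stands in for the many back ends a
# uniform index-management interface like UIS would hide from the user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parts (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO parts VALUES (?, ?)",
                 [(i, f"part-{i}") for i in range(1000)])

# Before indexing: the planner falls back to a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM parts WHERE name = 'part-500'").fetchall()
print(plan[0][-1])  # e.g. 'SCAN parts'

# What a hypothetical UIS-style request would reduce to in plain DDL:
conn.execute("CREATE INDEX idx_parts_name ON parts (name)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM parts WHERE name = 'part-500'").fetchall()
print(plan[0][-1])  # now a search using idx_parts_name
```

The two EXPLAIN QUERY PLAN calls show what such an interface buys the user: the same query switches from a full table scan to an index search once the index exists.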
NASA Astrophysics Data System (ADS)
Linsebarth, A.; Moscicka, A.
2010-01-01
The article describes the influence of the peculiarities of Bible geographic objects on a spatiotemporal geoinformation system of Bible events. In the proposed concept of this system, special attention is given to Bible geographic objects and the interrelations between the names of these objects and their locations in geospace. Both the Old and New Testaments contain hundreds of geographical names, but selecting these names from the Bible text is not straightforward, because the same names are applied to persons and to geographic objects. A further problem is the classification of geographical objects, because in several cases the same name is used for towns, mountains, hills, valleys, etc. Another serious problem relates to changes of names over time. The interrelation between an object's name and its location is likewise complicated: geographic objects with the same name are located in various places, which must be properly correlated with the Bible text. These peculiarities of Bible geographic objects influenced the concept of the proposed system, which consists of three databases: reference, geographic object, and subject/thematic. The crucial component of this system is the proper architecture of the geographic object database, which the paper describes in detail. The interrelations between the databases allow Bible readers to connect the Bible text with the geography of the terrain on which the Bible events occurred and, additionally, to access other geographical and historical information related to the geographic objects.
Flight Testing an Integrated Synthetic Vision System
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Arthur, Jarvis J., III; Bailey, Randall E.; Prinzel, Lawrence J., III
2005-01-01
NASA's Synthetic Vision Systems (SVS) project is developing technologies with practical applications to eliminate low visibility conditions as a causal factor to civil aircraft accidents while replicating the operational benefits of clear day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain database integrity monitoring equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved Weather Radar for real-time object detection and database integrity monitoring. A flight test evaluation was jointly conducted (in July and August 2004) by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security, Synthetic Vision System project. A Gulfstream GV aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS and DIME) were integrated into the larger SVS concept design. This paper presents experimental methods and the high level results of this flight test.
Current Status of NASDA Terminology Database
NASA Astrophysics Data System (ADS)
Kato, Akira
2002-01-01
NASDA Terminology Database System provides the English and Japanese terms, abbreviations, definition and reference documents. Recent progress includes a service to provide abbreviation data from the NASDA Home Page, and publishing a revised NASDA bilingual dictionary. Our next efforts to improve the system are (1) to combine our data with the data of NASA THESAURUS, (2) to add terms from new academic and engineering fields that have begun to have relations with space activities, and (3) to revise the NASDA Definition List. To combine our data with the NASA THESAURUS database we must consider the difference between the database concepts. Further effort to select adequate terms is thus required. Terms must be added from other fields to deal with microgravity experiments, human factors and so on. Some examples of new terms to be added have been collected. To revise the NASDA terms definition list, NASA and ESA definition lists were surveyed and a general concept to revise the NASDA definition list was proposed. I expect these activities will contribute to the IAA dictionary.
Copyright, Licensing Agreements and Gateways.
ERIC Educational Resources Information Center
Elias, Arthur W.
1990-01-01
Discusses technological developments in information distribution and management in relation to concepts of ownership. A historical overview of the concept of copyright is presented; licensing elements for databases are examined; and implications for gateway systems are explored, including ownership, identification of users, and allowable uses of…
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having robust detection and recognition algorithms is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the database of images in the training and testing steps of the algorithm. This directly affects recognition performance, since training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways to design an ATA / ATR database. The first and easier way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the territory. Designing an image database using a scene generator is inexpensive, and it allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and more difficult way is to design the database using real-world images. Designing an image database with real-world images is far more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspectives of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared with regard to their pros and cons.
GIS and RDBMS Used with Offline FAA Airspace Databases
NASA Technical Reports Server (NTRS)
Clark, J.; Simmons, J.; Scofield, E.; Talbott, B.
1994-01-01
A geographic information system (GIS) and relational database management system (RDBMS) were used in a Macintosh environment to access, manipulate, and display off-line FAA databases of airport and navigational aid locations, airways, and airspace boundaries. This proof-of-concept effort used data available from the Adaptation Controlled Environment System (ACES) and Digital Aeronautical Chart Supplement (DACS) databases to allow FAA cartographers and others to create computer-assisted charts and overlays as reference material for air traffic controllers. These products were created on an engineering model of the future GRASP (GRaphics Adaptation Support Position) workstation that will be used to make graphics and text products for the Advanced Automation System (AAS), which will upgrade and replace the current air traffic control system. Techniques developed during the prototyping effort have shown the viability of using databases to create graphical products without the need for an intervening data entry step.
The MeSH translation maintenance system: structure, interface design, and implementation.
Nelson, Stuart J; Schopen, Michael; Savage, Allan G; Schulman, Jacque-Lynne; Arluk, Natalie
2004-01-01
The National Library of Medicine (NLM) produces annual editions of the Medical Subject Headings (MeSH). Translations of MeSH are often done to make the vocabulary useful for non-English users. However, MeSH translators have encountered difficulties with entry vocabulary as they maintain and update their translation. Tracking MeSH changes and updating their translations in a reasonable time frame is cumbersome. NLM has developed and implemented a concept-centered vocabulary maintenance system for MeSH. This system has been extended to create an interlingual database of translations, the MeSH Translation Maintenance System (MTMS). This database allows continual updating of the translations, as well as facilitating tracking of the changes within MeSH from one year to another. The MTMS interface uses a Web-based design with multiple colors and fonts to indicate concepts needing translation or review. Concepts for which there is no exact English equivalent can be added. The system software encourages compliance with the Unicode standard in order to ensure that character sets with native alphabets and full orthography are used consistently.
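As a rough illustration of the concept-centred design described above, the following sketch keeps translations keyed to a language-independent concept record, so that an annual change to the English term automatically flags translations for review. The schema, table names, and identifiers are invented for illustration, not NLM's actual MTMS schema:

```python
import sqlite3

# A minimal sketch of a concept-centred translation store in the spirit of
# MTMS; the schema and identifiers here are invented, not NLM's.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE concept (cui TEXT PRIMARY KEY, english TEXT);
CREATE TABLE translation (
    cui TEXT REFERENCES concept, lang TEXT, term TEXT,
    status TEXT DEFAULT 'current');  -- 'current' or 'needs_review'
""")
db.execute("INSERT INTO concept VALUES ('M0001', 'Myocardial Infarction')")
db.execute("INSERT INTO translation (cui, lang, term) "
           "VALUES ('M0001', 'de', 'Herzinfarkt')")

def update_english(cui, new_term):
    """Annual MeSH update: changing the English term flags every translation
    of that concept for review, instead of letting translations drift."""
    db.execute("UPDATE concept SET english = ? WHERE cui = ?", (new_term, cui))
    db.execute("UPDATE translation SET status = 'needs_review' WHERE cui = ?",
               (cui,))

update_english("M0001", "Myocardial Infarction, Acute")
status = db.execute(
    "SELECT status FROM translation WHERE cui = 'M0001'").fetchone()[0]
print(status)  # needs_review
```

Keeping the concept, rather than any one language's term, as the primary key is what makes the year-to-year change tracking tractable.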
Knowledge Based Engineering for Spatial Database Management and Use
NASA Technical Reports Server (NTRS)
Peuquet, D. (Principal Investigator)
1984-01-01
The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.
Aerodynamic Characteristics and Glide-Back Performance of Langley Glide-Back Booster
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Covell, Peter F.; Tartabini, Paul V.; Murphy, Kelly J.
2004-01-01
NASA Langley Research Center is conducting system-level studies on an in-house concept of a small launch vehicle to address NASA's needs for rapid deployment of small payloads to Low Earth Orbit. The vehicle concept is a three-stage system with a reusable first stage and expendable upper stages. The reusable first-stage booster, which glides back to the launch site after staging at around Mach 3, is named the Langley Glide-Back Booster (LGBB). This paper discusses the aerodynamic characteristics of the LGBB from subsonic to supersonic speeds, the development of the aerodynamic database, and the application of this database to evaluate the glide-back performance of the LGBB. The aerodynamic database was assembled using a combination of wind tunnel test data and engineering-level analysis. The glide-back performance of the LGBB was evaluated using a trajectory optimization code, subject to constraints on angle of attack, dynamic pressure and normal acceleration.
An engineering database management system for spacecraft operations
NASA Technical Reports Server (NTRS)
Cipollone, Gregorio; Mckay, Michael H.; Paris, Joseph
1993-01-01
Studies at ESOC have demonstrated the feasibility of a flexible and powerful Engineering Database Management System (EDMS) in support of spacecraft operations documentation. The objectives set out were threefold: first, an analysis of the problems encountered by the operations team in obtaining and managing operations documents; secondly, the definition of a concept for operations documentation and the implementation of a prototype to prove the feasibility of the concept; and thirdly, the definition of standards and protocols required for the exchange of data between the top-level partners in a satellite project. The EDMS prototype was populated with ERS-1 satellite design data and has been used by the operations team at ESOC to gather operational experience. An operational EDMS would be implemented at the satellite prime contractor's site as a common database for all technical information surrounding a project and would be accessible by the co-contractors' and ESA teams.
NASA Astrophysics Data System (ADS)
Friedrich, Axel; Raabe, Helmut; Schiefele, Jens; Doerr, Kai Uwe
1999-07-01
In future aircraft cockpit designs, SVS (Synthetic Vision System) databases will be used to display 3D physical and virtual information to pilots. In contrast to pure warning systems (TAWS, MSAW, EGPWS), SVS serves to enhance pilot spatial awareness through 3-dimensional perspective views of the objects in the environment. Therefore all kinds of aeronautically relevant data have to be integrated into the SVS database: navigation data, terrain data, obstacles and airport data. For the integration of all these data, the concept of a GIS (Geographical Information System) based HQDB (High-Quality Database) has been created at the TUD (Technical University Darmstadt). To enable database certification, quality-assessment procedures according to ICAO Annexes 4, 11, 14 and 15 and RTCA DO-200A/EUROCAE ED-76 were established in the concept. They can be differentiated into object-related quality-assessment methods (keywords: accuracy, resolution, timeliness, traceability, assurance level, completeness, format) and GIS-related quality-assessment methods (keywords: system tolerances, logical consistency and visual quality assessment). An airport database is integrated in the concept as part of the High-Quality Database. The contents of the HQDB are chosen so that they support both flight-guidance SVS and other aeronautical applications such as SMGCS (Surface Movement Guidance and Control Systems) and flight simulation. Most airport data are not yet available: even though data for runways, thresholds, taxilines and parking positions were to be generated by the end of 1997 (ICAO Annexes 11 and 15), only a few countries fulfilled these requirements. For that reason, methods of creating and certifying airport data have to be found. Remote sensing and digital photogrammetry serve as means to acquire large numbers of airport objects with high spatial resolution and accuracy in much shorter time than with classical surveying methods.
Remotely sensed images can be acquired from satellite platforms or aircraft platforms. To achieve the highest horizontal accuracy requirement stated in ICAO Annex 14 for runway centerlines (0.50 meters), at present only images acquired from aircraft-based sensors can be used as source data. Still, ground reference by GCPs (Ground Control Points) is obligatory. A DEM (Digital Elevation Model) can be created automatically in the photogrammetric process and used as a highly accurate elevation model for the airport area. The final verification of airport data is accomplished by independently surveyed runway and taxiway control points. The concept of generating airport data by means of remote sensing and photogrammetry was tested with the Stuttgart/Germany airport. The results proved that the final accuracy was within the specification defined by ICAO Annex 14.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.
An object-oriented approach to the management of meteorological and hydrological data
NASA Technical Reports Server (NTRS)
Graves, S. J.; Williams, S. F.; Criswell, E. A.
1990-01-01
An interface to several meteorological and hydrological databases has been developed that enables researchers to access and interrelate data efficiently through a customized menu system. By extending a relational database system with object-oriented concepts, each user or group of users may have different 'views' of the data, allowing access in customized ways without altering the organization of the database. Applications to COHMEX and WetNet, two earth science projects within NASA Marshall Space Flight Center's Earth Science and Applications Division, are described.
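The idea of per-group 'views' over a single store, without reorganising the underlying data, can be sketched with plain relational views; SQLite stands in for the original system, and the table, view and field names below are invented:

```python
import sqlite3

# Sketch: per-group "views" over one relational store, so each research team
# sees the data its own way without reorganising the base table. Names are
# invented; the original NASA system used its own menu-driven interface.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE obs (site TEXT, kind TEXT, value REAL)")
db.executemany("INSERT INTO obs VALUES (?, ?, ?)", [
    ("A", "rainfall", 12.5), ("A", "temp", 28.1), ("B", "rainfall", 3.0)])

# Hydrologists' view: only rainfall, relabelled in their own vocabulary;
# the base table is untouched and other groups keep their own views.
db.execute("CREATE VIEW hydro AS "
           "SELECT site, value AS rain_mm FROM obs WHERE kind = 'rainfall'")
rows = db.execute("SELECT * FROM hydro ORDER BY site").fetchall()
print(rows)  # [('A', 12.5), ('B', 3.0)]
```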
Performance Evaluation of a Database System in a Multiple Backend Configurations,
1984-10-01
leaving a system process, the internal performance measurements of MMSD have been carried out. Methodologies for constructing test databases... access directory data via the AT, EDIT, and CDT. In designing the test database, one of the key concepts is the choice of the directory attributes in... internal timing. These requests are selected since they retrieve the smallest portion of the test database and the processing time for each request is
Standardization of Terminology in Laboratory Medicine II
Lee, Kap No; Yoon, Jong-Hyun; Min, Won Ki; Lim, Hwan Sub; Song, Junghan; Chae, Seok Lae; Jang, Seongsoo; Ki, Chang-Seok; Bae, Sook Young; Kim, Jang Su; Kwon, Jung-Ah; Lee, Chang Kyu
2008-01-01
Standardization of medical terminology is essential in data transmission between health care institutes and in maximizing the benefits of information technology. The purpose of this study was to standardize medical terms for laboratory observations. During the second year of the study, a standard database of concept names for laboratory terms was developed that covered those used in tertiary health care institutes and reference laboratories. The laboratory terms in the Logical Observation Identifier Names and Codes (LOINC) database were adopted and matched with the electronic data interchange (EDI) codes in Korea. A public hearing and a workshop for clinical pathologists were held to collect the opinions of experts. The Korean standard laboratory terminology database, containing six axial concept names (component, property, time aspect, system/specimen, scale type, and method type), was established for 29,340 test observations. Short names and mapping tables for EDI codes and UMLS were added. Synonym tables were prepared to help match concept names to common terms used in the field. We herein describe the Korean standard laboratory terminology database for test names, result description terms, and result units, encompassing most of the laboratory tests in Korea. PMID:18756062
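A six-axis laboratory concept name of the kind described above might be modelled as follows; this is an illustrative sketch, and the example values are invented rather than taken from the Korean standard or the LOINC table:

```python
from dataclasses import dataclass

# Sketch of a six-axis, LOINC-style fully specified name as adopted by the
# Korean standard database; the field values below are invented examples.
@dataclass(frozen=True)
class LabConceptName:
    component: str    # analyte, e.g. "Glucose"
    property: str     # kind of quantity, e.g. "MCnc" (mass concentration)
    time_aspect: str  # e.g. "Pt" (point in time)
    system: str       # specimen, e.g. "Ser/Plas"
    scale: str        # e.g. "Qn" (quantitative)
    method: str       # may be empty when unspecified

    def fully_specified(self):
        """Join the six axes into a single colon-delimited concept name."""
        return ":".join([self.component, self.property, self.time_aspect,
                         self.system, self.scale, self.method])

name = LabConceptName("Glucose", "MCnc", "Pt", "Ser/Plas", "Qn", "")
print(name.fully_specified())  # Glucose:MCnc:Pt:Ser/Plas:Qn:
```

Making the six axes explicit fields, rather than storing one opaque string, is what lets synonym tables and EDI-code mappings attach to individual axes.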
Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale
NASA Astrophysics Data System (ADS)
Canali, L.; Baranowski, Z.; Kothuri, P.
2017-10-01
This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services, and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint of some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of the CERN accelerator controls and logging databases. The tested solution allows reports to be run on the controls data offloaded into Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system and a scalable analytics engine.
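The offloading pattern can be sketched in miniature; here two SQLite databases stand in for the Oracle production system and the Hadoop-based offline copy, and the table name, columns and cutoff are invented:

```python
import sqlite3

# Conceptual sketch of the offloading pattern: old rows move to an offline
# copy, and reports run there instead of on the production system. SQLite
# stands in for both Oracle and Hadoop; all names here are invented.
prod = sqlite3.connect(":memory:")
archive = sqlite3.connect(":memory:")
for db in (prod, archive):
    db.execute("CREATE TABLE log (ts INTEGER, signal TEXT, value REAL)")
prod.executemany("INSERT INTO log VALUES (?, ?, ?)",
                 [(t, "beam_current", float(t)) for t in range(10)])

CUTOFF = 7  # rows with ts older than this move to the offline copy

# Offload: copy old rows to the archive, then trim the production table.
old = prod.execute("SELECT * FROM log WHERE ts < ?", (CUTOFF,)).fetchall()
archive.executemany("INSERT INTO log VALUES (?, ?, ?)", old)
prod.execute("DELETE FROM log WHERE ts < ?", (CUTOFF,))

# Reports now run against the archive without touching production.
n_archive = archive.execute("SELECT COUNT(*) FROM log").fetchone()[0]
n_prod = prod.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(n_archive, n_prod)  # 7 3
```

The production system keeps only recent data, so the critical database stays small while historical reporting scales on the cheaper analytics side.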
An Integrated Nursing Management Information System: From Concept to Reality
Pinkley, Connie L.; Sommer, Patricia K.
1988-01-01
This paper addresses the transition from the conceptualization of a Nursing Management Information System (NMIS) integrated and interdependent with the Hospital Information System (HIS) to its realization. Concepts of input, throughput, and output are presented to illustrate developmental strategies used to achieve nursing information products. Essential processing capabilities include: 1) the ability to interact with multiple data sources; 2) database management, statistical, and graphics software packages; 3) online and batch reporting; and 4) interactive data analysis. Challenges encountered in system construction are examined.
Validation Database Based Thermal Analysis of an Advanced RPS Concept
NASA Technical Reports Server (NTRS)
Balint, Tibor S.; Emis, Nickolas D.
2006-01-01
Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a single GPHS module enabled design, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.
Frankewitsch, T; Prokosch, H U
2000-01-01
Knowledge in the environment of information technologies is bound to structured vocabularies. Medical data dictionaries are necessary for uniquely describing findings such as diagnoses, procedures or functions. We therefore decided to install locally a version of the Unified Medical Language System (UMLS) of the U.S. National Library of Medicine as a repository for defining entries of a medical multimedia database. Because of the requirement to extend the vocabulary with new concepts and with relations between existing concepts, a graphical tool for appending new items to the database has been developed. Although the database is an instance of a semantic network, focusing on single entries offers the opportunity of reducing the net to a tree within this detail. Based on graph theory, there are definitions of nodes of concepts and nodes of knowledge. The UMLS additionally offers the specification of sub-relations, which can be represented too. Using this view it is possible to manage these 1:n relations in a simple tree view. Against this background, an explorer-like graphical user interface has been realised to add new concepts and to define new relationships between those and existing entries, adapting the UMLS for specific purposes such as describing medical multimedia objects.
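The reduction of the semantic network to a tree around a focused concept can be sketched as follows; the concepts and relation names are invented examples, not UMLS content:

```python
# Toy sketch of the "network reduced to a tree" idea: focusing on one
# concept turns its incident relations into one explorer-style tree level.
# The concepts and relation names below are invented, not UMLS data.
relations = [
    ("Liver", "part_of", "Abdomen"),
    ("Hepatitis", "location_of", "Liver"),
    ("Liver Biopsy", "site", "Liver"),
]

def tree_view(focus):
    """Group every relation touching `focus` under its relation label,
    yielding the 1:n branches an explorer-like GUI would display."""
    branches = {}
    for src, rel, dst in relations:
        if src == focus:
            branches.setdefault(rel, []).append(dst)
        elif dst == focus:
            # Incoming edges appear as inverse branches under the focus node.
            branches.setdefault("inverse:" + rel, []).append(src)
    return branches

view = tree_view("Liver")
print(view)
```

Each call re-roots the view, so the full semantic network is never flattened; only the neighbourhood of the current entry is ever shown as a tree.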
Library Automation in the Netherlands and Pica.
ERIC Educational Resources Information Center
Bossers, Anton; Van Muyen, Martin
1984-01-01
Describes the Pica Library Automation Network (originally the Project for Integrated Catalogue Automation), which is based on a centralized bibliographic database. Highlights include the Pica conception of library automation, online shared cataloging system, circulation control system, acquisition system, and online Dutch union catalog with…
Metnitz, P G; Laback, P; Popow, C; Laback, O; Lenz, K; Hiesmayr, M
1995-01-01
Patient Data Management Systems (PDMS) for ICUs collect, present and store clinical data. Various intentions, such as quality control or scientific purposes, make analysis of those digitally stored data desirable. The aim of the Intensive Care Data Evaluation project (ICDEV) was to provide a database tool for the analysis of data recorded at various ICUs of the University Clinics of Vienna (General Hospital of Vienna), where two different PDMSs are used: CareVue 9000 (Hewlett Packard, Andover, USA) at two ICUs (one medical ICU and one neonatal ICU) and PICIS Chart+ (PICIS, Paris, France) at one cardiothoracic ICU. CONCEPT AND METHODS: Clinically oriented analysis of the data collected in a PDMS at an ICU was the starting point of the development. After defining the database structure we established a client-server based database system under Microsoft Windows NT and developed a user-friendly data querying application using Microsoft Visual C++ and Visual Basic. ICDEV was successfully installed at three different ICUs; adjustments to the different PDMS configurations were made within a few days. The database structure we developed enables a powerful query concept, an 'EXPERT QUESTION COMPILER', which may help to answer almost any clinical question. Several program modules facilitate queries at the patient, group and unit level. Results of ICDEV queries are automatically transferred to Microsoft Excel for display (in the form of configurable tables and graphs) and further processing. The ICDEV concept is configurable for adjustment to different intensive care information systems and can be used to support computerized quality control. However, as long as there exists no sufficient artifact recognition or data validation software for automatically recorded patient data, the reliability of these data and their usage for computer-assisted quality control remain unclear and should be studied further.
Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems
NASA Astrophysics Data System (ADS)
Rossiter, B. N.; Heather, M. A.
2004-08-01
Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world, but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data onto idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods like sketches, which arise from graph theory as only non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to be natural as well. Natural computing methods, including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation, have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.
Pilot Aircraft Interface Objectives/Rationale
NASA Technical Reports Server (NTRS)
Shively, Jay
2010-01-01
Objective: Database and proof of concept for guidelines for GCS compliance a) Rationale: 1) Provide research test-bed to develop guidelines. 2) Modify GCS for NAS Compliance to provide proof of concept. b) Approach: 1) Assess current state of GCS technology. 2) Information Requirements Definition. 3) SME Workshop. 4) Modify an Existing GCS for NAS Compliance. 5) Define exemplar UAS (choose system to develop prototype). 6) Define Candidate Displays & Controls. 7) Evaluate/ refine in Simulations. 8) Demonstrate in flight. c) Deliverables: 1) Information Requirements Report. 2) Workshop Proceedings. 3) Technical Reports/ papers on Simulations & Flight Demo. 4) Database for guidelines.
Solar probe shield developmental testing
NASA Technical Reports Server (NTRS)
Miyake, Robert N.
1991-01-01
The objectives of the Solar Probe mission and the current status of the Solar Probe thermal shield subsystem development are described. In particular, the discussion includes a brief description of the mission concepts, spacecraft configuration and shield concept, material selection criteria, and the required material testing to provide a database to support the development of the shield system.
NASA Technical Reports Server (NTRS)
Young, Steven D.; Harrah, Steven D.; deHaag, Maarten Uijt
2002-01-01
Terrain Awareness and Warning Systems (TAWS) and Synthetic Vision Systems (SVS) provide pilots with displays of stored geo-spatial data (e.g. terrain, obstacles, and/or features). As comprehensive validation is impractical, these databases typically have no quantifiable level of integrity. This lack of a quantifiable integrity level is one of the constraints that has limited certification and operational approval of TAWS/SVS to "advisory-only" systems for civil aviation. Previous work demonstrated the feasibility of using a real-time monitor to bound database integrity by using downward-looking remote sensing technology (i.e. radar altimeters). This paper describes an extension of the integrity monitor concept to include a forward-looking sensor to cover additional classes of terrain database faults and to reduce the exposure time associated with integrity threats. An operational concept is presented that combines established feature extraction techniques with a statistical assessment of similarity measures between the sensed and stored features using principles from classical detection theory. Finally, an implementation is presented that uses existing commercial-off-the-shelf weather radar sensor technology.
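The monitor's decision step, testing a similarity measure between sensed and stored features against a detection threshold, might look like the following sketch; the similarity function, threshold and elevation values are invented for illustration and are not the paper's actual measures:

```python
import math

# Illustrative sketch of the integrity monitor's decision step: a similarity
# measure between sensed and stored terrain profiles is compared against a
# threshold, in the spirit of classical detection theory. All numbers and
# the particular similarity function are invented for this example.
def similarity(sensed, stored):
    """Normalised inverse RMS disparity: 1.0 means a perfect match."""
    rms = math.sqrt(sum((a - b) ** 2 for a, b in zip(sensed, stored))
                    / len(sensed))
    return 1.0 / (1.0 + rms)

THRESHOLD = 0.5  # in practice chosen from a required false-alarm probability

stored_profile = [100.0, 120.0, 150.0, 140.0]  # database elevations (m)
sensed_ok      = [101.0, 119.0, 151.0, 139.5]  # sensor data, healthy database
sensed_faulty  = [140.0, 180.0, 90.0, 200.0]   # sensor data, corrupted tile

print(similarity(sensed_ok, stored_profile) >= THRESHOLD)      # True
print(similarity(sensed_faulty, stored_profile) >= THRESHOLD)  # False
```

In the detection-theory framing, the threshold trades missed detections of database faults against false alarms raised by ordinary sensor noise.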
Basic elements and concepts of information systems are presented: definition of the term "information", main elements of data and database structure. The report also deals with the information system and its underlying theory and design. Examples of the application of information ...
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which databases storing MPEG-2 videos and/or super-high-definition images are connected through B-ISDNs, and refers to an example of networking museums on the basis of the proposed database system. The proposed database system introduces the new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a retrieval request to the retrieval manager located nearest to the terminal on the network; the retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server that stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment. The generated retrieval parameters are then used to select the most suitable data transfer path on the network, so that the best combination of these parameters fits the distributed multimedia database system.
Big data and ophthalmic research.
Clark, Antony; Ng, Jonathon Q; Morlet, Nigel; Semmens, James B
2016-01-01
Large population-based health administrative databases, clinical registries, and data linkage systems are a rapidly expanding resource for health research. Ophthalmic research has benefited from the use of these databases in expanding the breadth of knowledge in areas such as disease surveillance, disease etiology, health services utilization, and health outcomes. Furthermore, the quantity of data available for research has increased exponentially in recent times, particularly as e-health initiatives come online in health systems across the globe. We review some big data concepts, the databases and data linkage systems used in eye research-including their advantages and limitations, the types of studies previously undertaken, and the future direction for big data in eye research. Copyright © 2016 Elsevier Inc. All rights reserved.
Asynchronous Data Retrieval from an Object-Oriented Database
NASA Astrophysics Data System (ADS)
Gilbert, Jonathan P.; Bic, Lubomir
We present an object-oriented semantic database model which, similar to other object-oriented systems, combines the virtues of four concepts: the functional data model, a property inheritance hierarchy, abstract data types and message-driven computation. The main emphasis is on the last of these four concepts. We describe generic procedures that permit queries to be processed in a purely message-driven manner. A database is represented as a network of nodes and directed arcs, in which each node is a logical processing element, capable of communicating with other nodes by exchanging messages. This eliminates the need for shared memory and for centralized control during query processing. Hence, the model is suitable for implementation on a multiprocessor computer architecture, consisting of large numbers of loosely coupled processing elements.
Development of an Integrated Biospecimen Database among the Regional Biobanks in Korea.
Park, Hyun Sang; Cho, Hune; Kim, Hwa Sun
2016-04-01
This study developed an integrated database for 15 regional biobanks that provides large quantities of high-quality bio-data to researchers to be used for the prevention of disease, for the development of personalized medicines, and in genetics studies. We collected raw data, managed independently by the 15 regional biobanks, for database modeling and analyzed and defined the metadata of the items. We also built a three-step (high, middle, and low) classification system for classifying the item concepts based on the metadata. To give the items clear meanings, clinical items were defined using the Systematized Nomenclature of Medicine Clinical Terms, and specimen items were defined using the Logical Observation Identifiers Names and Codes. To optimize database performance, we set up a multi-column index based on the classification system and the international standard codes. As a result of subdividing the 7,197,252 raw data items collected, we refined the metadata into 1,796 clinical items and 1,792 specimen items. The classification system consists of 15 high, 163 middle, and 3,588 low class items. International standard codes were linked to 69.9% of the clinical items and 71.7% of the specimen items. The database consists of 18 tables implemented on MySQL Server 5.6. In the performance evaluation, the multi-column index shortened query time by as much as a factor of nine. The database was built on international standard terminology systems, providing an infrastructure that can integrate the 7,197,252 raw data items managed by the 15 regional biobanks. In particular, it resolved the interoperability issues that inevitably arise in the exchange of information among the biobanks, and provided a solution to the synonym problem, which arises when the same concept is expressed in a variety of ways.
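The performance effect of a multi-column index of the kind described above can be illustrated with a minimal sketch. The table, column names, and codes below are invented for illustration, and SQLite stands in for the MySQL server used in the study:

```python
import sqlite3

# Toy version of an integrated item table with a three-step classification.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE item (high_cls TEXT, mid_cls TEXT, low_cls TEXT, code TEXT, value TEXT)")
rows = [("clinical", "chemistry", f"test{i % 50}", f"LOINC:{i}", str(i)) for i in range(2000)]
cur.executemany("INSERT INTO item VALUES (?,?,?,?,?)", rows)

# The multi-column index mirrors the high/middle/low classification plus the standard code.
cur.execute("CREATE INDEX idx_cls_code ON item (high_cls, mid_cls, low_cls, code)")

plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT value FROM item "
    "WHERE high_cls='clinical' AND mid_cls='chemistry' "
    "AND low_cls='test7' AND code='LOINC:7'").fetchall()
print(plan)  # the plan reports a SEARCH ... USING INDEX idx_cls_code
```

With equality predicates on all leading index columns, the optimizer replaces a full table scan with an index search, which is the mechanism behind the reported query speed-up.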
USDA-ARS?s Scientific Manuscript database
The Prototype Food and Nutrient Database for Dietary Studies (Prototype FNDDS) Branded Food Products Database for Public Health is a proof of concept database. The database contains a small selection of food products which is being used to exhibit the approach for incorporation of the Branded Food ...
NASA Astrophysics Data System (ADS)
Shahzad, Muhammad A.
1999-02-01
With the emergence of data warehousing, decision support systems have evolved considerably. At the core of these warehousing systems lies a good database management system. The database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing, and integration with other servers. Oracle, a pioneer among warehousing servers, provides a wide range of features for facilitating data warehousing. This paper reviews the features of data warehousing, introducing the concept itself and, lastly, the features of Oracle servers for implementing a data warehouse.
Application of a Database System for Korean Military Personnel Management.
1987-03-01
The extracted front matter is fragmentary; the recoverable table-of-contents entries cover database concepts, relationships between relational and data-processing concepts, and tree (hierarchical) relationships.
The Application of Security Concepts to the Personnel Database for the Indonesian Navy.
1983-09-01
Naval Postgraduate School, Monterey, California, June 1982. Since 1977, the Indonesian Navy Data Center (DISPULAHTAL) has collected and processed personnel data for personnel data processing in the Indonesian Navy. The present database system supports concurrent multi-user, multi-level processing with secret, classified, and unclassified security levels.
HUC--A User Designed System for All Recorded Knowledge and Information.
ERIC Educational Resources Information Center
Hilton, Howard J.
This paper proposes a user designed system, HUC, intended to provide a single index and retrieval system covering all recorded knowledge and information capable of being retrieved from all modes of storage, from manual to the most sophisticated retrieval system. The concept integrates terminal hardware, software, and database structure to allow…
[Standardization of terminology in laboratory medicine I].
Yoon, Soo Young; Yoon, Jong Hyun; Min, Won Ki; Lim, Hwan Sub; Song, Junghan; Chae, Seok Lae; Lee, Chang Kyu; Kwon, Jung Ah; Lee, Kap No
2007-04-01
Standardization of medical terminology is essential for data transmission between health-care institutions or clinical laboratories and for maximizing the benefits of information technology. The purpose of our study was to standardize the medical terms used in the clinical laboratory, such as test names, units, and terms used in result descriptions. During the first year of the study, we developed a standard database of concept names for laboratory terms, which covered the terms used in government health care centers, their branch offices, and primary health care units. Laboratory terms were collected from the electronic data interchange (EDI) codes of the National Health Insurance Corporation (NHIC), the Logical Observation Identifier Names and Codes (LOINC) database, community health centers and their branch offices, and the clinical laboratories of representative university medical centers. For standard expression, we referred to the English-Korean/Korean-English medical dictionary of the Korean Medical Association and the rules for foreign-language translation. Programs for mapping between the LOINC database and EDI codes and for translating English to Korean were developed. A Korean standard laboratory terminology database containing six axial concept names (component, property, time aspect, system/specimen, scale type, and method type) was established for 7,508 test observations. Short names and a mapping table for EDI codes and the Unified Medical Language System (UMLS) were added. Synonym tables for concept names, words used in the database, and the six axial terms were prepared to make it easier to find the standard terminology from the common terms used in the field of laboratory medicine. Here we report for the first time a Korean standard laboratory terminology database of test names, result description terms, and result units covering most laboratory tests in primary healthcare centers.
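A synonym-table lookup of the kind described above can be sketched in a few lines. All entries below are invented examples, not taken from the actual Korean standard database, and the six-axis tuple only mimics the LOINC-style axes:

```python
# Toy synonym table: common field terms -> standard concept name (illustrative only).
SYNONYMS = {
    "blood sugar": "Glucose",
    "FBS": "Glucose",
    "glucose": "Glucose",
}

STANDARD_TERMS = {
    # (component, property, time aspect, system/specimen, scale type, method type)
    "Glucose": ("Glucose", "MCnc", "Pt", "Ser/Plas", "Qn", None),
}

def lookup(common_term: str):
    """Resolve a common field term to its standard six-axis concept record."""
    concept = SYNONYMS.get(common_term.strip().lower()) \
        or SYNONYMS.get(common_term.strip())
    return STANDARD_TERMS.get(concept) if concept else None

print(lookup("FBS"))  # ('Glucose', 'MCnc', 'Pt', 'Ser/Plas', 'Qn', None)
```

The point of the design is that field staff can keep using familiar names while every stored observation resolves to one standard concept.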
The Design and Implement of Tourism Information System Based on GIS
NASA Astrophysics Data System (ADS)
Chunchang, Fu; Nan, Zhang
Starting from the concept of a geographic information system (GIS), this paper discusses the main contents of geographic information systems and the key technologies of a GIS-based tourism information system, presents the specific requirements and goals for applying a tourism information system, and analyzes a relational database model for the tourism information system and the methods of realizing it in a GIS.
Software engineering aspects of real-time programming concepts
NASA Astrophysics Data System (ADS)
Schoitsch, Erwin
1986-08-01
Real-time programming is a discipline of great importance not only in process control, but also in fields like communication, office automation, interactive databases, interactive graphics and operating systems development. General concepts of concurrent programming and constructs for process-synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions or on real-time languages like Concurrent PASCAL, MODULA, CHILL and ADA are explained and compared with each other. The second part deals with structuring and modularization of technical processes to build reliable and maintainable real time systems. Software-quality and software engineering aspects are considered throughout the paper.
Integrating Scientific Array Processing into Standard SQL
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Bachhuber, Johannes; Baumann, Peter
2014-05-01
We live in a time that is dominated by data. Data storage is cheap and more applications than ever accrue vast amounts of data. Storing the emerging multidimensional data sets efficiently, and allowing them to be queried by their inherent structure, is a challenge many databases have to face today. Despite the fact that multidimensional array data is almost always linked to additional, non-array information, array databases have mostly developed separately from relational systems, resulting in a disparity between the two database categories. The current SQL standard and SQL DBMSs support arrays, and in an extension also multidimensional arrays, but do so in a very rudimentary and inefficient way. This poster demonstrates the practicality of an SQL extension for array processing, implemented in a proof-of-concept multi-faceted system that manages a federation of array and relational database systems, providing transparent, efficient and scalable access to the heterogeneous data in them.
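The kind of combined relational/array query such an extension targets can be mocked up in a few lines. Plain Python stands in for the SQL syntax, and the dataset names and values are invented; a real system would push the array subsetting into the array engine rather than slicing in the client:

```python
# Tiny federation mock-up: relational metadata rows plus an array store keyed by id.
metadata = [
    {"id": 1, "sensor": "temperature", "region": "north"},
    {"id": 2, "sensor": "humidity",    "region": "north"},
]
arrays = {
    1: [[row * 10 + col for col in range(4)] for row in range(4)],
    2: [[0] * 4 for _ in range(4)],
}

def query(sensor, r0, r1, c0, c1):
    """Roughly: SELECT id, array[r0:r1, c0:c1] FROM datasets WHERE sensor = ?"""
    out = {}
    for meta in metadata:
        if meta["sensor"] == sensor:
            # Relational predicate selects the dataset; array slicing subsets it.
            out[meta["id"]] = [r[c0:c1] for r in arrays[meta["id"]][r0:r1]]
    return out

print(query("temperature", 1, 3, 0, 2))  # {1: [[10, 11], [20, 21]]}
```

The disparity the poster addresses is exactly this split: the predicate belongs to the relational world, the subsetting to the array world, and an integrated SQL extension lets one statement express both.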
Graphical user interfaces for symbol-oriented database visualization and interaction
NASA Astrophysics Data System (ADS)
Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger
1997-04-01
In this approach, two basic services designed for the engineering of computer-based systems are combined: a symbol-oriented man-machine service and a high-speed database service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI builder and a GUI manager for the database service, based upon the man-machine service, using the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI builder and GUI manager, users can build and operate their own graphical user interface for a given database according to their needs without writing a single line of code.
Hawthorne L. Beyer; Jeff Jenness; Samuel A. Cushman
2010-01-01
Spatial information systems (SIS) is a term that describes a wide diversity of concepts, techniques, and technologies related to the capture, management, display and analysis of spatial information. It encompasses technologies such as geographic information systems (GIS), global positioning systems (GPS), remote sensing, and relational database management systems (...
Insect barcode information system.
Pratheepa, Maria; Jalali, Sushil Kumar; Arokiaraj, Robinson Silvester; Venkatesan, Thiruvengadam; Nagesh, Mandadi; Panda, Madhusmita; Pattar, Sharath
2014-01-01
The Insect Barcode Information System, called Insect Barcode Informática (IBIn), is an online database resource developed by the National Bureau of Agriculturally Important Insects, Bangalore. This database provides acquisition, storage, analysis and publication of DNA barcode records of agriculturally important insects, for researchers specifically in India and other countries. It bridges a gap in bioinformatics by integrating molecular, morphological and distribution details of agriculturally important insects. IBIn was developed in PHP/MySQL using relational database management concepts. The database is based on a client-server architecture, where many clients can access data simultaneously. IBIn is freely available online and is user-friendly. It allows registered users to input new information and to search and view information related to DNA barcodes of agriculturally important insects. This paper provides the current status of insect barcoding in India and a brief introduction to the IBIn database. http://www.nabg-nbaii.res.in/barcode.
Knowledge acquisition to qualify Unified Medical Language System interconceptual relationships.
Le Duff, F.; Burgun, A.; Cleret, M.; Pouliquen, B.; Barac'h, V.; Le Beux, P.
2000-01-01
Automatically adding relations between concepts from a database to a knowledge base such as the Unified Medical Language System (UMLS) can be very useful for increasing the consistency of the latter, and the transfer of qualified relationships is even more valuable. The most important benefit of these new acquisitions is that the UMLS becomes more consistent and medically pertinent for use in different medical applications. This paper describes how medical inter-conceptual relationship qualifiers can be automatically inherited from a disease description included in a database and integrated into the UMLS knowledge base. The paper focuses on the transmission of knowledge from a French medical database to an English one. PMID:11079930
The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition
NASA Astrophysics Data System (ADS)
Fong, Joseph; Cheung, San Kuen
In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics, and system analysts have had no toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of the database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is then translated by XSD-Translator, whose output is an XSD, our target, called the object language.
Development of a pseudo/anonymised primary care research database: Proof-of-concept study.
MacRury, Sandra; Finlayson, Jim; Hussey-Wilson, Susan; Holden, Samantha
2016-06-01
General practice records present a comprehensive source of data that could form a variety of anonymised or pseudonymised research databases to aid identification of potential research participants regardless of location. A proof-of-concept study was undertaken to extract data from general practice systems in 15 practices across the region to form pseudo and anonymised research data sets. Two feasibility studies and a disease surveillance study compared numbers of potential study participants and accuracy of disease prevalence, respectively. There was a marked reduction in screening time and increase in numbers of potential study participants identified with the research repository compared with conventional methods. Accurate disease prevalence was established and enhanced with the addition of selective text mining. This study confirms the potential for development of national anonymised research database from general practice records in addition to improving data collection for local or national audits and epidemiological projects. © The Author(s) 2014.
Automated Database Mediation Using Ontological Metadata Mappings
Marenco, Luis; Wang, Rixin; Nadkarni, Prakash
2009-01-01
Objective To devise an automated approach for integrating federated database information using database ontologies constructed from their extended metadata. Background One challenge of database federation is that the granularity of representation of equivalent data varies across systems. Dealing effectively with this problem is analogous to dealing with precoordinated vs. postcoordinated concepts in biomedical ontologies. Model Description The authors describe an approach based on ontological metadata mapping rules defined with elements of a global vocabulary, which allows a query specified at one granularity level to fetch data, where possible, from databases within the federation that use different granularities. This is implemented in OntoMediator, a newly developed production component of our previously described Query Integrator System. OntoMediator's operation is illustrated with a query that accesses three geographically separate, interoperating databases. An example based on SNOMED also illustrates the applicability of high-level rules to support the enforcement of constraints that can prevent inappropriate curator or power-user actions. Summary A rule-based framework simplifies the design and maintenance of systems where categories of data must be mapped to each other, for the purpose of either cross-database query or for curation of the contents of compositional controlled vocabularies. PMID:19567801
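The granularity problem described in the abstract above can be illustrated with a toy mapping rule. The vocabulary, database names, and rule format below are invented for illustration and do not reflect OntoMediator's actual rule syntax:

```python
# A global-vocabulary term maps to local terms at different granularities:
# db_a stores the precoordinated concept, db_b stores finer postcoordinated ones.
MAPPINGS = {
    "myocardial_infarction": {
        "db_a": ["myocardial_infarction"],                      # same granularity
        "db_b": ["anterior_mi", "inferior_mi", "lateral_mi"],   # finer granularity
    },
}

def expand(global_term, database):
    """Rewrite a global-vocabulary query term into the local terms of one database."""
    return MAPPINGS.get(global_term, {}).get(database, [])

# A query phrased at the coarse level fetches all finer local concepts in db_b.
print(expand("myocardial_infarction", "db_b"))  # ['anterior_mi', 'inferior_mi', 'lateral_mi']
```

A rule table of this shape is what lets one query, specified at a single granularity, fetch equivalent data from federation members that represent it at different granularities.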
CALS Database Usage and Analysis Tool Study
1991-09-01
The study treats inference aggregation and cardinality aggregation as two distinct aspects of the aggregation problem and develops the concept of semantic aggregation.
An alternative database approach for management of SNOMED CT and improved patient data queries.
Campbell, W Scott; Pedersen, Jay; McClay, James C; Rao, Praveen; Bastola, Dhundy; Campbell, James R
2015-10-01
SNOMED CT is the international lingua franca of terminologies for human health. Based in Description Logics (DL), the terminology enables data queries that incorporate inferences between data elements, as well as, those relationships that are explicitly stated. However, the ontologic and polyhierarchical nature of the SNOMED CT concept model make it difficult to implement in its entirety within electronic health record systems that largely employ object oriented or relational database architectures. The result is a reduction of data richness, limitations of query capability and increased systems overhead. The hypothesis of this research was that a graph database (graph DB) architecture using SNOMED CT as the basis for the data model and subsequently modeling patient data upon the semantic core of SNOMED CT could exploit the full value of the terminology to enrich and support advanced data querying capability of patient data sets. The hypothesis was tested by instantiating a graph DB with the fully classified SNOMED CT concept model. The graph DB instance was tested for integrity by calculating the transitive closure table for the SNOMED CT hierarchy and comparing the results with transitive closure tables created using current, validated methods. The graph DB was then populated with 461,171 anonymized patient record fragments and over 2.1 million associated SNOMED CT clinical findings. Queries, including concept negation and disjunction, were then run against the graph database and an enterprise Oracle relational database (RDBMS) of the same patient data sets. The graph DB was then populated with laboratory data encoded using LOINC, as well as, medication data encoded with RxNorm and complex queries performed using LOINC, RxNorm and SNOMED CT to identify uniquely described patient populations. A graph database instance was successfully created for two international releases of SNOMED CT and two US SNOMED CT editions. 
Transitive closure tables and descriptive statistics generated using the graph database were identical to those using validated methods. Patient queries produced identical patient count results to the Oracle RDBMS with comparable times. Database queries involving defining attributes of SNOMED CT concepts were possible with the graph DB. The same queries could not be directly performed with the Oracle RDBMS representation of the patient data and required the creation and use of external terminology services. Further, queries of undefined depth were successful in identifying unknown relationships between patient cohorts. The results of this study supported the hypothesis that a patient database built upon and around the semantic model of SNOMED CT was possible. The model supported queries that leveraged all aspects of the SNOMED CT logical model to produce clinically relevant query results. Logical disjunction and negation queries were possible using the data model, as well as, queries that extended beyond the structural IS_A hierarchy of SNOMED CT to include queries that employed defining attribute-values of SNOMED CT concepts as search parameters. As medical terminologies, such as SNOMED CT, continue to expand, they will become more complex and model consistency will be more difficult to assure. Simultaneously, consumers of data will increasingly demand improvements to query functionality to accommodate additional granularity of clinical concepts without sacrificing speed. This new line of research provides an alternative approach to instantiating and querying patient data represented using advanced computable clinical terminologies. Copyright © 2015 Elsevier Inc. All rights reserved.
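The transitive-closure check used above to validate the graph instance can be sketched directly. The IS_A edges below are invented toy content, kept single-parent for brevity (SNOMED CT itself is polyhierarchical, so each concept may have several parents):

```python
# Toy IS_A hierarchy: child -> parent edges (invented, single-parent for brevity).
IS_A = {
    "bacterial_pneumonia": "pneumonia",
    "viral_pneumonia": "pneumonia",
    "pneumonia": "lung_disease",
    "lung_disease": "disease",
}

def ancestors(concept):
    """Walk IS_A edges upward; collecting all (concept, ancestor) pairs
    over every concept yields the transitive closure of the hierarchy."""
    result = set()
    while concept in IS_A:
        concept = IS_A[concept]
        result.add(concept)
    return result

closure = {(c, a) for c in IS_A for a in ancestors(c)}
print(sorted(closure))
```

Comparing a closure table computed this way against one produced by an established method is a simple integrity test: identical pair sets imply the graph instance preserved the hierarchy.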
Schiotis, Ruxandra; Font, Pilar; Zarco, Pedro; Almodovar, Raquel; Gratacós, Jordi; Mulero, Juan; Juanola, Xavier; Montilla, Carlos; Moreno, Estefanía; Ariza Ariza, Rafael; Collantes-Estevez, Eduardo
2011-01-01
Objective. To present the usefulness of a centralized system of data collection for the development of an international multicentre registry of SpA. Method. The originality of this registry consists in the creation of a virtual network of researchers in a computerized Internet database. From its conception, the registry was meant to be a dynamic acquiring system. Results. REGISPONSER has two developing phases (Conception and Universalization) and gathers several evolving secondary projects (REGISPONSER-EARLY, REGISPONSER-AS, ESPERANZA and RESPONDIA). Each sub-project answered the necessity of having more specific and complete data of the patients even from the onset of the disease so, in the end, obtaining a well-defined picture of SpAs spectrum in the Spanish population. Conclusion. REGISPONSER is the first dynamic SpA database composed of cohorts with a significant number of patients distributed by specific diagnosis, which provides basic specific information of the sub-cohorts useful for patients’ evaluation in rheumatology ambulatory consulting. PMID:20823095
Autonomous mission planning and scheduling: Innovative, integrated, responsive
NASA Technical Reports Server (NTRS)
Sary, Charisse; Liu, Simon; Hull, Larry; Davis, Randy
1994-01-01
Autonomous mission scheduling, a new concept for NASA ground data systems, is a decentralized and distributed approach to scientific spacecraft planning, scheduling, and command management. Systems and services are provided that enable investigators to operate their own instruments. In autonomous mission scheduling, separate nodes exist for each instrument and one or more operations nodes exist for the spacecraft. Each node is responsible for its own operations which include planning, scheduling, and commanding; and for resolving conflicts with other nodes. One or more database servers accessible to all nodes enable each to share mission and science planning, scheduling, and commanding information. The architecture for autonomous mission scheduling is based upon a realistic mix of state-of-the-art and emerging technology and services, e.g., high performance individual workstations, high speed communications, client-server computing, and relational databases. The concept is particularly suited to the smaller, less complex missions of the future.
NASA Technical Reports Server (NTRS)
Levack, Daniel J. H.
2000-01-01
The Alternate Propulsion Subsystem Concepts contract had seven tasks defined that are reported under this contract deliverable. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, SSME Upper Stage Use, CERs for Liquid Propellant Rocket Engines, Advanced Low Cost Engines, and Tripropellant Comparison Study. The two restart studies, F-1A and J-2S, generated program plans for restarting production of each engine. Special emphasis was placed on determining changes to individual parts due to obsolete materials, changes in OSHA and environmental concerns, new processes available, and any configuration changes to the engines. The Propulsion Database Development task developed a database structure and format which is easy to use and modify while also being comprehensive in the level of detail available. The database structure included extensive engine information and allows for parametric data generation for conceptual engine concepts. The SSME Upper Stage Use task examined the changes needed or desirable to use the SSME as an upper stage engine, both in a second stage and in a translunar injection stage. The CERs for Liquid Engines task developed qualitative parametric cost estimating relationships at the engine and major subassembly level for estimating development and production costs of chemical propulsion liquid rocket engines. The Advanced Low Cost Engines task examined propulsion systems for SSTO applications, including engine concept definition, mission analysis, trade studies, operating point selection, turbomachinery alternatives, life cycle cost, weight definition, and point design conceptual drawings and component design. The task concentrated on bipropellant engines, but also examined tripropellant engines.
The Tripropellant Comparison Study task provided an unambiguous comparison among various tripropellant implementation approaches and cycle choices, and then compared them to similarly designed bipropellant engines in the SSTO mission. This volume overviews each of the tasks, giving its objectives, main results, and conclusions. More detailed final task reports are available on each individual task.
NASA Astrophysics Data System (ADS)
WANG, Qingrong; ZHU, Changfeng
2017-06-01
Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed to produce a local ontology by building the variable precision concept lattice for each subsystem. A distributed generation algorithm for variable precision concept lattices over an ontology-based heterogeneous database is then proposed, drawing on the special relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous database as the standard, a case study was carried out to test the feasibility and validity of the algorithm, and the differences between the main concept lattice and the standard concept lattice were compared. The analysis results show that the algorithm can automatically carry out the construction of distributed concept lattices over heterogeneous data sources.
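The concepts of an ordinary (crisp) concept lattice can be enumerated with the standard closure operators of formal concept analysis; a minimal sketch over an invented toy context follows. The paper's variable-precision extension relaxes this closure with a precision threshold, which is not shown here:

```python
# Toy formal context: objects -> attribute sets (invented data).
CONTEXT = {
    "o1": {"a", "b"},
    "o2": {"b", "c"},
    "o3": {"a", "b", "c"},
}

def extent(attrs):
    """Objects having every attribute in attrs."""
    return frozenset(o for o, has in CONTEXT.items() if attrs <= has)

def intent(objs):
    """Attributes shared by every object in objs."""
    sets = [CONTEXT[o] for o in objs]
    return frozenset(set.intersection(*sets)) if sets else frozenset("abc")

# A formal concept is a pair (extent, intent) closed under both operators;
# enumerating attribute subsets and closing them finds every concept.
concepts = set()
for attrs in ({"a"}, {"b"}, {"c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}, set()):
    e = extent(set(attrs))
    concepts.add((e, intent(e)))
print(len(concepts))  # 4 concepts in this context
```

Building one such lattice per subsystem and then merging them along shared concepts is, at a high level, the shape of the distributed construction the paper proposes.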
Scherer, A; Kröpil, P; Heusch, P; Buchbender, C; Sewerin, P; Blondin, D; Lanzman, R S; Miese, F; Ostendorf, B; Bölke, E; Mödder, U; Antoch, G
2011-11-01
Medical curricula are currently being reformed in order to establish superordinated learning objectives, including, e.g., diagnostic, therapeutic and preventive competences. This requires a shifting from traditional teaching methods towards interactive and case-based teaching concepts. Conceptions, initial experiences and student evaluations of a novel radiological course Co-operative Learning In Clinical Radiology (CLICR) are presented in this article. A novel radiological teaching course (CLICR course), which combines different innovative teaching elements, was established and integrated into the medical curriculum. Radiological case vignettes were created for three clinical teaching modules. By using a PC with PACS (Picture Archiving and Communication System) access, web-based databases and the CASUS platform, a problem-oriented, case-based and independent way of learning was supported as an adjunct to the well established radiological courses and lectures. Student evaluations of the novel CLICR course and the radiological block course were compared. Student evaluations of the novel CLICR course were significantly better compared to the conventional radiological block course. Of the participating students 52% gave the highest rating for the novel CLICR course concerning the endpoint overall satisfaction as compared to 3% of students for the conventional block course. The innovative interactive concept of the course and the opportunity to use a web-based database were favorably accepted by the students. Of the students 95% rated the novel course concept as a substantial gain for the medical curriculum and 95% also commented that interactive working with the PACS and a web-based database (82%) promoted learning and understanding. Interactive, case-based teaching concepts such as the presented CLICR course are considered by both students and teachers as useful extensions to the radiological course program. These concepts fit well into competence-oriented curricula.
ARIANE: integration of information databases within a hospital intranet.
Joubert, M; Aymard, S; Fieschi, D; Volot, F; Staccini, P; Robert, J J; Fieschi, M
1998-05-01
Large information systems handle massive volumes of data stored in heterogeneous sources. Each server has its own model for representing concepts with regard to its aims. One of the main problems end-users encounter when accessing different servers is matching their own viewpoint on biomedical concepts with the various representations made in the database servers. The aim of the ARIANE project is to provide end-users with easy-to-use and natural means to access and query heterogeneous information databases. The objectives of this research work consist of building a conceptual interface by means of Internet technology inside an enterprise Intranet and proposing a method to realize it. This method is based on the knowledge sources provided by the Unified Medical Language System (UMLS) project of the US National Library of Medicine. Experiments concern queries to three different information servers: PubMed, a Medline server of the NLM; Thériaque, a French database on drugs implemented in the hospital Intranet; and a Web site dedicated to Internet resources in gastroenterology and nutrition, located at the Faculty of Medicine of Nice (France). Access to each of these servers differs according to the kind of information delivered and the technology used to query it. With the health-care professional workstation in mind, the authors introduced quality criteria into the ARIANE project in order to achieve a homogeneous and efficient way of building a query system that can be integrated into existing information systems and can integrate existing and new information sources.
Extending the data dictionary for data/knowledge management
NASA Technical Reports Server (NTRS)
Hydrick, Cecile L.; Graves, Sara J.
1988-01-01
Current relational database technology provides the means for efficiently storing and retrieving large amounts of data. By combining techniques learned from the field of artificial intelligence with this technology, it is possible to expand the capabilities of such systems. This paper suggests using the expanded domain concept, an object-oriented organization, and the storing of knowledge rules within the relational database as a solution to the unique problems associated with CAD/CAM and engineering data.
Development of a Multidisciplinary and Telemedicine Focused System Database.
Paštěka, Richard; Forjan, Mathias; Sauermann, Stefan
2017-01-01
Tele-rehabilitation at home is one of the promising approaches to increasing rehabilitative success while simultaneously decreasing the financial burden on the healthcare system. Novel and mostly mobile devices are already in use and shall in the future be used to a greater extent to allow high-quality rehabilitation processes at home. The combination of exercises, assessments and available equipment is the basic objective of the presented database. The database has been structured to allow easy and fast access for the three main user groups: therapists, looking for exercise and equipment combinations; patients, rechecking their tasks for home exercises; and manufacturers, entering their equipment for specific use cases. The database has been evaluated by a proof-of-concept study and shows a high degree of applicability for the field of rehabilitative medicine. It currently contains 110 exercises/assessments and 111 equipment/systems. The foundations of the presented database are already established in the rehabilitative field of application, but its functionality can and will be enhanced to be usable for a greater variety of medical fields and specifications.
Experiments and Analysis on a Computer Interface to an Information-Retrieval Network.
ERIC Educational Resources Information Center
Marcus, Richard S.; Reintjes, J. Francis
A primary goal of this project was to develop an interface that would provide direct access for inexperienced users to existing online bibliographic information retrieval networks. The experiment tested the concept of a virtual-system mode of access to a network of heterogeneous interactive retrieval systems and databases. An experimental…
Strabo: An App and Database for Structural Geology and Tectonics Data
NASA Astrophysics Data System (ADS)
Newman, J.; Williams, R. T.; Tikoff, B.; Walker, J. D.; Good, J.; Michels, Z. D.; Ash, J.
2016-12-01
Strabo is a data system designed to facilitate digital storage and sharing of structural geology and tectonics data. The data system allows researchers to store and share field and laboratory data as well as construct new multi-disciplinary data sets. Strabo is built on graph database technology, as opposed to a relational database, which provides the flexibility to define relationships between objects of any type. This framework allows observations to be linked in a complex and hierarchical manner that is not possible in traditional database topologies. Thus, the advantage of the Strabo data structure is the ability of graph databases to link objects in both numerous and complex ways, in a manner that more accurately reflects how geological data sets are actually collected and organized. The data system is accessible via a mobile interface (iOS and Android devices) that allows these data to be stored, visualized, and shared during primary collection in the field or the laboratory. The Strabo Data System is underlain by the concept of a "Spot," which we define as any observation that characterizes a specific area. This can be anything from a strike and dip measurement of bedding to cross-cutting relationships between faults in complex dissected terrains. Each of these Spots can then contain other Spots and/or measurements (e.g. lithology, slickenlines, displacement magnitude). Hence, the Spot concept is applicable to all relationships and observation sets. Strabo is therefore capable of quantifying and digitally storing large spatial variations and complex geometries of naturally deformed rocks within hierarchically related maps and images. These approaches provide an observational fidelity comparable to a traditional field book, but with the added benefits of digital data storage, processing, and ease of sharing. This approach allows Strabo to integrate seamlessly into the workflow of most geologists.
Future efforts will focus on extending Strabo to other sub-disciplines as well as developing a desktop system for the enhanced collection and organization of microstructural data.
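The "Spot" idea above, observations that may contain other observations, maps naturally onto a small nesting structure. The following is a toy Python sketch under that reading; the class and field names are invented for illustration, not Strabo's schema.

```python
# Toy sketch of the "Spot" concept: every observation is a Spot, and Spots may
# contain other Spots, giving the hierarchy that a graph model stores naturally.
# Class and field names are invented, not the Strabo schema.

class Spot:
    def __init__(self, name, **measurements):
        self.name = name
        self.measurements = measurements  # e.g. strike/dip, lithology
        self.children = []                # nested Spots

    def add(self, child):
        self.children.append(child)
        return child

    def all_spots(self):
        """Walk the hierarchy depth-first, yielding every contained Spot."""
        yield self
        for c in self.children:
            yield from c.all_spots()

outcrop = Spot("outcrop-1", lithology="granite")
fault = outcrop.add(Spot("fault-A", strike=120, dip=65))
fault.add(Spot("slickenline-1", trend=150, plunge=40))

print([s.name for s in outcrop.all_spots()])
```

A graph database generalizes this sketch by also allowing non-hierarchical links (e.g. cross-cutting relationships) between arbitrary Spots.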
Development of a full-text information retrieval system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keizo Oyama; Akira Miyazawa; Atsuhiro Takasu; Kouji Shibano
The authors have executed a project to realize a full-text information retrieval system. The system is designed to deal with a document database comprising the full text of a large number of documents, such as academic papers. The document structures are utilized in searching and extracting appropriate information. The concept of structure handling and the configuration of the system are described in this paper.
Annotations of Mexican bullfighting videos for semantic index
NASA Astrophysics Data System (ADS)
Montoya Obeso, Abraham; Oropesa Morales, Lester Arturo; Fernando Vázquez, Luis; Cocolán Almeda, Sara Ivonne; Stoian, Andrei; García Vázquez, Mireya Saraí; Zamudio Fuentes, Luis Miguel; Montiel Perez, Jesús Yalja; de la O Torres, Saul; Ramírez Acosta, Alejandro Alvaro
2015-09-01
Video annotation is important for web indexing and browsing systems. Indeed, in order to evaluate the performance of video query and mining techniques, databases with concept annotations are required. It is therefore necessary to generate a database with semantic indexing that represents the digital content of the Mexican bullfighting atmosphere. This paper proposes a scheme to make complex annotations in a video within the frame of a multimedia search engine project. Each video is partitioned using our segmentation algorithm, which creates shots of different lengths and different numbers of frames. In order to make complex annotations about the video, we use the ELAN software. The annotations are done in two steps: first, we take notes about the whole content of each shot; second, we describe the actions through camera parameters such as direction, position and depth. As a consequence, we obtain a more complete descriptor of every action. In both cases we use the concepts of the TRECVid 2014 dataset, and we also propose new concepts. This methodology allows us to generate a database with the information necessary to create descriptors and algorithms capable of detecting actions in order to automatically index and classify new bullfighting multimedia content.
An international aerospace information system: A cooperative opportunity
NASA Technical Reports Server (NTRS)
Cotter, Gladys A.; Blados, Walter R.
1992-01-01
Scientific and technical information (STI) is a valuable resource which represents the results of large investments in research and development (R&D) and the expertise of a nation. NASA and its predecessor organizations have developed and managed the preeminent aerospace information system. We see information and information systems changing and becoming more international in scope. In Europe, consistent with joint R&D programs and a view toward a united Europe, we have seen the emergence of a European Aerospace Database concept. In addition, the development of aeronautics and astronautics in individual nations has also led to initiatives for national aerospace databases. Considering recent technological developments in information science and technology, as well as the reality of scarce resources in all nations, it is time to reconsider the mutually beneficial possibilities offered by cooperation and international resource sharing. We consider the new possibilities offered by cooperation among the various aerospace database efforts toward an international aerospace database initiative that can optimize the cost/benefit equation for all participants.
NASA Technical Reports Server (NTRS)
Campbell, William J.
1985-01-01
Intelligent data management is the concept of interfacing a user to a database management system with a value-added service that allows a full range of data management operations at a high level of abstraction using written human language. The development of such a system will be based on expert systems and related artificial intelligence technologies, and will allow the capturing of procedural and relational knowledge about data management operations and the support of a user with such knowledge in an on-line, interactive manner. Such a system will have the following capabilities: (1) the ability to construct a model of the user's view of the database, based on the query syntax; (2) the ability to transform English queries and commands into database instructions and processes; (3) the ability to use heuristic knowledge to rapidly prune the data space in search processes; and (4) the ability to use an on-line explanation system to allow the user to understand what the system is doing and why. Additional information is given in outline form.
Towards linked open gene mutations data
2012-01-01
Background With the advent of high-throughput technologies, a great wealth of variation data is being produced. Such information may constitute the basis for correlation analyses between genotypes and phenotypes and, in the future, for personalized medicine. Several databases on gene variation exist, but this kind of information is still scarce in the Semantic Web framework. In this paper, we discuss issues related to the integration of mutation data in the Linked Open Data infrastructure, part of the Semantic Web framework. We present the development of a mapping from the IARC TP53 Mutation database to RDF and the implementation of servers publishing this data. Methods A version of the IARC TP53 Mutation database implemented in a relational database was used as the first test set. Automatic mappings to RDF were first created by using D2RQ and later manually refined by introducing concepts and properties from domain vocabularies and ontologies, as well as links to Linked Open Data implementations of various systems of biomedical interest. Since D2RQ query performance is lower than what can be achieved by using an RDF archive, the generated data was also loaded into a dedicated system based on tools from the Jena software suite. Results We have implemented a D2RQ server for TP53 mutation data, providing data on a subset of the IARC database, including gene variations, somatic mutations, and bibliographic references. The server allows browsing the RDF graph by using links both between classes and to external systems. An alternative interface offers improved performance for SPARQL queries. The resulting data can be explored by using any Semantic Web browser or application. Conclusions This has been the first case of a mutation database exposed as Linked Data. A revised version of our prototype, including further concepts and IARC TP53 Mutation database data sets, is under development.
The publication of variation information as Linked Data opens new perspectives: the exploitation of SPARQL searches on mutation data and other biological databases may support data retrieval which is presently not possible. Moreover, reasoning on integrated variation data may support discoveries towards personalized medicine. PMID:22536974
Towards linked open gene mutations data.
Zappa, Achille; Splendiani, Andrea; Romano, Paolo
2012-03-28
With the advent of high-throughput technologies, a great wealth of variation data is being produced. Such information may constitute the basis for correlation analyses between genotypes and phenotypes and, in the future, for personalized medicine. Several databases on gene variation exist, but this kind of information is still scarce in the Semantic Web framework. In this paper, we discuss issues related to the integration of mutation data in the Linked Open Data infrastructure, part of the Semantic Web framework. We present the development of a mapping from the IARC TP53 Mutation database to RDF and the implementation of servers publishing this data. A version of the IARC TP53 Mutation database implemented in a relational database was used as the first test set. Automatic mappings to RDF were first created by using D2RQ and later manually refined by introducing concepts and properties from domain vocabularies and ontologies, as well as links to Linked Open Data implementations of various systems of biomedical interest. Since D2RQ query performance is lower than what can be achieved by using an RDF archive, the generated data was also loaded into a dedicated system based on tools from the Jena software suite. We have implemented a D2RQ server for TP53 mutation data, providing data on a subset of the IARC database, including gene variations, somatic mutations, and bibliographic references. The server allows browsing the RDF graph by using links both between classes and to external systems. An alternative interface offers improved performance for SPARQL queries. The resulting data can be explored by using any Semantic Web browser or application. This has been the first case of a mutation database exposed as Linked Data.
A revised version of our prototype, including further concepts and IARC TP53 Mutation database data sets, is under development. The publication of variation information as Linked Data opens new perspectives: the exploitation of SPARQL searches on mutation data and other biological databases may support data retrieval which is presently not possible. Moreover, reasoning on integrated variation data may support discoveries towards personalized medicine.
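The triple-based publishing both records describe can be illustrated with a minimal in-memory triple store and a wildcard pattern match, which is essentially what a SPARQL basic graph pattern evaluates. All URIs and predicates below are invented stand-ins, not the actual IARC TP53 RDF vocabulary.

```python
# Minimal pure-Python illustration of the Linked Data idea: mutation records
# become subject-predicate-object triples, and a query pattern walks the graph.
# All identifiers below are invented, not the IARC TP53 schema.

triples = {
    ("mut:R175H", "ex:inGene", "gene:TP53"),
    ("mut:R175H", "ex:effect", "missense"),
    ("mut:R273H", "ex:inGene", "gene:TP53"),
    ("gene:TP53", "ex:seeAlso", "uniprot:P04637"),  # link to an external system
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# "Which mutations are in TP53?" -- the SPARQL pattern ?m ex:inGene gene:TP53
muts = sorted(ts for ts, _, _ in match(p="ex:inGene", o="gene:TP53"))
print(muts)  # ['mut:R175H', 'mut:R273H']
```

The `ex:seeAlso` triple shows the Linked Data payoff: a query can hop from a local mutation record to an external resource with no schema change.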
1981-05-01
Factors that cause damage are discussed below. a. Architectural elements. Damage to architectural elements can result in both significant dollar losses... The goals of the hazard priority-ranking procedure are: 1. To produce meaningful results which are as simple as possible, considering the existing databases. 2. To... minimize the amount of data required for meaningful results, i.e., the database should contain only the most fundamental building characteristics. 3. To...
2009-07-01
...data were recognized as being largely geospatial and thus a GIS was considered the most reasonable way to proceed. The Postgre suite of software also... for the ESRI (2009) geodatabase environment but is applicable for this Postgre-based system. We then introduce and discuss spatial reference... PostgreSQL database using a Postgre ODBC connection. This procedure identified 100 tables with 737 columns. This is after the removal of two...
Self-concept of left-behind children in China: a systematic review of the literature.
Wang, X; Ling, L; Su, H; Cheng, J; Jin, L; Sun, Y-H
2015-05-01
The aim of our study was to systematically review studies which had compared self-concept in left-behind children with that in the general population of children in China. Relevant studies about the self-concept of left-behind children in China published from 2004 to 2014 were sought by searching online databases, including the Chinese Biological Medicine Database (CBM), Chinese National Knowledge Infrastructure (CNKI), Wanfang Database, Vip Database, PubMed, Google Scholar and Web of Science. The methodological quality of the articles was assessed by using the Newcastle-Ottawa Scale (NOS). Pooled effect size and the associated 95% confidence interval (CI) were calculated using the random-effects model. Cochran's Q was used to test for heterogeneity and the I(2) index was used to determine the degree of heterogeneity. Nineteen studies involving 7758 left-behind children met the inclusion criteria and 15 studies were included in a meta-analysis. The results indicated that the left-behind group had a lower self-concept score and more psychological problems than the control group. The factors associated with self-concept in left-behind children were gender, age, grade and the relationships with parents, guardians and teachers. Left-behind children had lower self-concept and more mental health problems compared with the general population of children. The development of self-concept may be an important channel for promoting the mental health of left-behind children. © 2014 John Wiley & Sons Ltd.
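The pooling steps the abstract names (inverse-variance weighting, Cochran's Q, the I(2) index, and a random-effects pooled estimate) can be sketched as follows. The DerSimonian-Laird between-study variance estimator is an assumption on my part, and the effect sizes are made up rather than taken from the review.

```python
# Hedged sketch of random-effects meta-analysis: inverse-variance weights,
# Cochran's Q for heterogeneity, the I^2 index, and a DerSimonian-Laird
# pooled estimate with a 95% CI. Effect sizes below are invented.
import math

def random_effects(effects, variances):
    w = [1 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0          # I^2 in %
    tau2 = max(0.0, (q - df) /
               (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))     # DL tau^2
    w_star = [1 / (v + tau2) for v in variances]       # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), q, i2

pooled, ci, q, i2 = random_effects([-0.30, -0.45, -0.10], [0.04, 0.09, 0.05])
```

When Q is at or below its degrees of freedom, tau-squared and I(2) collapse to zero and the random-effects result coincides with the fixed-effect one.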
The crustal dynamics intelligent user interface anthology
NASA Technical Reports Server (NTRS)
Short, Nicholas M., Jr.; Campbell, William J.; Roelofs, Larry H.; Wattawa, Scott L.
1987-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has, as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI is to develop a friendly and intelligent user interface service based on expert systems and natural language processing technologies. The purpose of such a service is to support the large number of potential scientific and engineering users that have need of space and land-related research and technical data, but have little or no experience in query languages or understanding of the information content or architecture of the databases of interest. This document presents the design concepts, development approach and evaluation of the performance of a prototype IUI system for the Crustal Dynamics Project Database, which was developed using a microcomputer-based expert system tool (M. 1), the natural language query processor THEMIS, and the graphics software system GSS. The IUI design is based on a multiple view representation of a database from both the user and database perspective, with intelligent processes to translate between the views.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, Upendra S.
Nuclear reactor codes require validation with appropriate data representing the plant for specific scenarios. The thermal-hydraulic data is scattered in different locations and in different formats. Some of the data is in danger of being lost. A relational database is being developed to organize the international thermal-hydraulic test data for various reactor concepts and different scenarios. At the reactor system level, the data is organized to include separate effect tests and integral effect tests for specific scenarios and corresponding phenomena. The database relies on the phenomena identification sections of expert-developed PIRTs. The database will provide a summary of appropriate data, a review of facility information, test descriptions, instrumentation, references for the experimental data and some examples of application of the data for validation. The current database platform includes scenarios for PWR, BWR, VVER, and specific benchmarks for CFD modelling data, and is to be expanded to include references for molten salt reactors. There are placeholders for high temperature gas cooled reactors, CANDU and liquid metal reactors. This relational database is called The International Experimental Thermal Hydraulic Systems (TIETHYS) database; it currently resides at the Nuclear Energy Agency (NEA) of the OECD and is freely open to public access. Going forward the database will be extended to include additional links and data as they become available. https://www.oecd-nea.org/tiethysweb/
Spatial cyberinfrastructures, ontologies, and the humanities.
Sieber, Renee E; Wellen, Christopher C; Jin, Yuan
2011-04-05
We report on research into building a cyberinfrastructure for Chinese biographical and geographic data. Our cyberinfrastructure contains (i) the McGill-Harvard-Yenching Library Ming Qing Women's Writings database (MQWW), the only online database on historical Chinese women's writings, (ii) the China Biographical Database, the authority for Chinese historical people, and (iii) the China Historical Geographical Information System, one of the first historical geographic information systems. Key to this integration is that linked databases retain separate identities as bases of knowledge, while they possess sufficient semantic interoperability to allow for multidatabase concepts and to support cross-database queries on an ad hoc basis. Computational ontologies create underlying semantics for database access. This paper focuses on the spatial component in a humanities cyberinfrastructure, which includes issues of conflicting data, heterogeneous data models, disambiguation, and geographic scale. First, we describe the methodology for integrating the databases. Then we detail the system architecture, which includes a tier of ontologies and schema. We describe the user interface and applications that allow for cross-database queries. For instance, users should be able to analyze the data, examine hypotheses on spatial and temporal relationships, and generate historical maps with datasets from MQWW for research, teaching, and publication on Chinese women writers, their familial relations, publishing venues, and the literary and social communities. Last, we discuss the social side of cyberinfrastructure development, as people are considered to be as critical as the technical components for its success.
American Association of University Women: Branch Operations Data Modeling Case
ERIC Educational Resources Information Center
Harris, Ranida B.; Wedel, Thomas L.
2015-01-01
A nationally prominent woman's advocacy organization is featured in this case study. The scenario may be used as a teaching case, an assignment, or a project in systems analysis and design as well as database design classes. Students are required to document the system operations and requirements, apply logical data modeling concepts, and design…
Datacube Services in Action, Using Open Source and Open Standards
NASA Astrophysics Data System (ADS)
Baumann, P.; Misev, D.
2016-12-01
Array databases comprise novel, promising technology for massive spatio-temporal datacubes, extending the SQL paradigm of "any query, anytime" to n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. The rasdaman ("raster data manager") system, which has pioneered array databases, is available in open source on www.rasdaman.org. Its declarative query language extends SQL with array operators which are optimized and parallelized on the server side. The rasdaman engine, which is part of OSGeo Live, is mature and in operational use on databases individually holding dozens of terabytes. Further, the rasdaman concepts have strongly impacted international Big Data standards in the field, including the forthcoming MDA ("Multi-Dimensional Array") extension to ISO SQL, the OGC Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) standards, and the forthcoming INSPIRE WCS/WCPS; in both OGC and INSPIRE, rasdaman serves as the WCS Core Reference Implementation. In our talk we present concepts, architecture, operational services, and standardization impact of open-source rasdaman, as well as experiences made.
Technology and the Online Catalog.
ERIC Educational Resources Information Center
Graham, Peter S.
1983-01-01
Discusses trends in computer technology and their use for library catalogs, noting the concept of bandwidth (describes quantity of information transmitted per given unit of time); computer hardware differences (micros, minis, maxis); distributed processing systems and databases; optical disk storage; networks; transmission media; and terminals.…
Intelligence Fusion for Combined Operations
1994-06-03
...Database; ISE - Intelligence Support Element; JASMIN - Joint Analysis System for Military Intelligence; JIC - Joint Intelligence Center; JDISS - Joint Defense... has made accessible otherwise inaccessible networks, such as connectivity to the German Joint Analysis System for Military Intelligence (JASMIN) and the... successfully any mission in the Battlespace is the essence of the C4I for the Warrior concept." It recognizes that the current C4I systems do not...
A Concept for Continuous Monitoring that Reduces Redundancy in Information Assurance Processes
2011-09-01
String url = "jdbc:postgresql://localhost/IAcontrols";
String user = "postgres";
String pwd = "postgres";
Connection DB_mobile_conn = DriverManager.getConnection(url, user, pwd);
System.out.println("Database Connect ok");
Leveraging Cognitive Context for Object Recognition
2014-06-01
...and accuracy of object recognition. Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich dynamic context provided by... context that people rely upon as they perceive the world. Context in ACT-R/E takes the form of associations between related concepts that are learned...
Database assessment of CMIP5 and hydrological models to determine flood risk areas
NASA Astrophysics Data System (ADS)
Limlahapun, Ponthip; Fukui, Hiromichi
2016-11-01
Solutions for water-related disasters may not be found with a single scientific method. Based on this premise, we combined logical conceptions, associated sequential results amongst models, and database applications in an attempt to analyse historical and future scenarios in the context of flooding. The three main models used in this study are (1) the fifth phase of the Coupled Model Intercomparison Project (CMIP5), to derive precipitation; (2) the Integrated Flood Analysis System (IFAS), to extract the amount of discharge; and (3) the Hydrologic Engineering Center (HEC) model, to generate inundated areas. This research notably focused on integrating data regardless of system-design complexity; database approaches are significantly flexible, manageable, and well supported for system data transfer, which makes them suitable for monitoring a flood. The resulting flood map, together with real-time stream data, can help local communities identify areas at risk of flooding in advance.
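The model chain the study describes, precipitation in, discharge next, inundation last, can be sketched as three composed functions. The runoff-coefficient formula and the capacity threshold below are placeholder assumptions for illustration, not the IFAS or HEC models.

```python
# Hedged sketch of the chaining the study describes: CMIP5 precipitation feeds
# a rainfall-runoff step (IFAS's role), whose discharge feeds an inundation
# check (HEC's role). The formulas are placeholders, not the real models.

def runoff(precip_mm, coeff=0.6, area_km2=100):
    """Toy daily-mean discharge (m^3/s) via a simple runoff coefficient."""
    return coeff * precip_mm / 1000 * area_km2 * 1e6 / 86400

def inundated(discharge_m3s, channel_capacity=50.0):
    """Flag flood risk when discharge exceeds an assumed channel capacity."""
    return discharge_m3s > channel_capacity

daily_precip = [5, 20, 120, 10]  # mm/day, invented scenario
risk_days = [i for i, p in enumerate(daily_precip) if inundated(runoff(p))]
```

The database's role in the study is exactly the glue between such steps: each model's output is stored in a shared schema that the next model reads.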
NASA Astrophysics Data System (ADS)
Wiacek, Daniel; Kudla, Ignacy M.; Pozniak, Krzysztof T.; Bunkowski, Karol
2005-02-01
The main task of the RPC (Resistive Plate Chamber) Muon Trigger monitoring system designed for the CMS (Compact Muon Solenoid) experiment (at the LHC at CERN, Geneva) is the visualization of data describing the structure of the electronic trigger system (e.g. geometry and imagery) and the way it operates, and the automatic generation of files with VHDL source code used for programming the FPGA matrices. In the near future, the system will enable the analysis of the condition, operation and efficiency of individual Muon Trigger elements, the registration of information about Muon Trigger devices, and the presentation of previously obtained results in an interactive presentation layer. A broad variety of database and programming concepts for the design of the Muon Trigger monitoring system is presented in this article. The structure and architecture of the system and its principle of operation are described. One of the ideas behind building this system is to use object-oriented programming and design techniques to describe real electronic systems through abstract object models stored in a database, and to implement these models in the Java language.
A Database-Based and Web-Based Meta-CASE System
NASA Astrophysics Data System (ADS)
Eessaar, Erki; Sgirka, Rünno
Each Computer Aided Software Engineering (CASE) system provides support to a software process or to specific tasks or activities that are part of a software process. Each meta-CASE system allows us to create new CASE systems. The creators of a new CASE system have to specify the abstract syntax of the language that is used in the system, and the functionality as well as the non-functional properties of the new system. Many meta-CASE systems record their data directly in files. In this paper, we introduce a meta-CASE system whose enabling technology is an object-relational database management system (ORDBMS). The system allows users to manage specifications of languages and to create models by using these languages. The system has a web-based, form-based user interface. We have created a proof-of-concept prototype of the system by using the PostgreSQL ORDBMS and the PHP scripting language.
The development of an intelligent user interface for NASA's scientific databases
NASA Technical Reports Server (NTRS)
Campbell, William J.; Roelofs, Larry H.
1986-01-01
The National Space Science Data Center (NSSDC) has initiated an Intelligent Data Management (IDM) research effort which has as one of its components, the development of an Intelligent User Interface (IUI). The intent of the IUI effort is to develop a friendly and intelligent user interface service that is based on expert systems and natural language processing technologies. This paper presents the design concepts, development approach and evaluation of performance of a prototype Intelligent User Interface Subsystem (IUIS) supporting an operational database.
A concept for routine emergency-care data-based syndromic surveillance in Europe.
Ziemann, A; Rosenkötter, N; Garcia-Castrillo Riesgo, L; Schrell, S; Kauhl, B; Vergeiner, G; Fischer, M; Lippert, F K; Krämer, A; Brand, H; Krafft, T
2014-11-01
We developed a syndromic surveillance (SyS) concept using emergency dispatch, ambulance and emergency-department data from different European countries. Based on an inventory of sub-national emergency data availability in 12 countries, we propose framework definitions for specific syndromes and a SyS system design. We tested the concept by retrospectively applying cumulative sum and spatio-temporal cluster analyses for the detection of local gastrointestinal outbreaks in four countries and comparing the results with notifiable disease reporting. Routine emergency data were available daily and electronically in 11 regions, following a common structure. We identified two gastrointestinal outbreaks in two countries; one was confirmed as a norovirus outbreak. We detected 1 of 147 notified outbreaks. Emergency-care data-based SyS can supplement local surveillance with near real-time information on gastrointestinal patients, especially in special circumstances, e.g. among foreign tourists. It most likely cannot detect the majority of local gastrointestinal outbreaks with few, mild or dispersed cases.
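The cumulative sum analysis the concept applies for outbreak detection can be sketched as a one-sided CUSUM over daily syndrome counts. The baseline, allowance k and decision threshold h below are invented for illustration, not the study's calibration.

```python
# Illustrative one-sided CUSUM detector of the kind used in syndromic
# surveillance: accumulate excess over baseline and flag threshold crossings.
# Baseline, allowance k and threshold h are invented for illustration.

def cusum_alarms(counts, baseline, k=1.0, h=4.0):
    """Return indices of days where the CUSUM statistic exceeds h."""
    s, alarms = 0.0, []
    for i, c in enumerate(counts):
        s = max(0.0, s + (c - baseline - k))  # accumulate excess over baseline
        if s > h:
            alarms.append(i)
            s = 0.0  # reset after signalling
    return alarms

daily = [2, 3, 2, 2, 9, 10, 8, 3, 2]  # toy counts with a spike on days 4-6
print(cusum_alarms(daily, baseline=2.5))  # [4, 5, 6]
```

Because the statistic accumulates, CUSUM can also catch sustained small excesses that a single-day threshold would miss.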
NASA Technical Reports Server (NTRS)
Campbell, William J.; Short, Nicholas M., Jr.; Roelofs, Larry H.; Dorfman, Erik
1991-01-01
A methodology for optimizing organization of data obtained by NASA earth and space missions is discussed. The methodology uses a concept based on semantic data modeling techniques implemented in a hierarchical storage model. The modeling is used to organize objects in mass storage devices, relational database systems, and object-oriented databases. The semantic data modeling at the metadata record level is examined, including the simulation of a knowledge base and semantic metadata storage issues. The semantic data model hierarchy and its application for efficient data storage is addressed, as is the mapping of the application structure to the mass storage.
Applied Educational Computing: Putting Skills to Practice.
ERIC Educational Resources Information Center
Thomerson, J. D.
The College of Education at Valdosta State University (Georgia) developed a follow-up course to its required entry-level educational computing course. The introductory course covers word processing, spreadsheet, database, presentation, Internet, electronic mail, and operating system software and basic computer concepts. Students expressed a need…
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement because of the contradictions between large-scale spatial data and limited network bandwidth, and between short sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; then the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server and a spatial data server. The design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (session beans and entity beans) are explained. Experiments on the relation between spatial data volume and response time under different conditions were conducted, demonstrating that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database on the Internet is presented.
Analyzing a multimodal biometric system using real and virtual users
NASA Astrophysics Data System (ADS)
Scheidat, Tobias; Vielhauer, Claus
2007-02-01
Three main topics of recent research on multimodal biometric systems are addressed in this article: the lack of sufficiently large multimodal test data sets, the influence of cultural aspects, and data protection issues of multimodal biometric data. In this contribution, different possibilities are presented to extend multimodal databases by generating so-called virtual users, which are created by combining single biometric modality data from different users. Comparative tests on databases containing real and virtual users, based on a multimodal system using handwriting and speech, are presented to study the degree to which virtual multimodal databases allow conclusions about recognition accuracy in comparison to real multimodal data. All tests were carried out on databases created from donations by three different nationality groups, allowing the experimental results to be reviewed both in general and in the context of cultural origin. The results show that in most cases the use of virtual persons leads to lower accuracy than the use of real users in terms of the measurement applied, the Equal Error Rate. Finally, this article addresses the general question of how the concept of virtual users may influence the data protection requirements for multimodal evaluation databases in the future.
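The virtual-user construction described in the abstract, pairing one real user's handwriting data with a different real user's speech data, can be sketched as follows; the feature data and identifiers are placeholders:

```python
import itertools

# Sketch: build "virtual users" by pairing the handwriting sample of one
# real user with the speech sample of a *different* real user, the idea
# the article describes for extending small multimodal test sets.
def virtual_users(handwriting, speech):
    # handwriting, speech: dicts mapping user id -> feature data
    return [
        {"handwriting": handwriting[a], "speech": speech[b], "ids": (a, b)}
        for a, b in itertools.product(handwriting, speech)
        if a != b  # same-user pairs are the real users, not virtual ones
    ]

hw = {"u1": "hw1", "u2": "hw2", "u3": "hw3"}
sp = {"u1": "sp1", "u2": "sp2", "u3": "sp3"}
print(len(virtual_users(hw, sp)))  # 3*3 - 3 = 6 virtual users
```

The quadratic growth in pairs is what makes the approach attractive for enlarging evaluation sets, at the cost of losing any within-user correlation between modalities.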
Inoue, J
1991-12-01
When occupational health personnel, especially occupational physicians, search bibliographies, they usually have to do so by themselves. If a library is not available because of the location of their workplace, they may have to rely on online databases. Although there are many commercial databases in the world, people who seldom use them have problems with online searching, such as the user-computer interface, keywords, and so on. The present study surveyed, by questionnaire, the best bibliographic searching system in the field of occupational medicine, using DIALOG OnDisc MEDLINE as the commercial database. To ascertain the problems involved in determining the best bibliographic searching system, a prototype system was constructed and then evaluated. Finally, solutions for the problems were discussed. These led to the following conclusions: to construct the best bibliographic searching system at the present time, 1) the concept of micro-to-mainframe links (MML) is needed for the computer hardware network; 2) multilingual font standards and an excellent common user-computer interface are needed for the computer software; and 3) a short course on database management systems, together with support for personal information processing of retrieved data, is necessary for practical use of the system.
NASA Technical Reports Server (NTRS)
Jefferson, David; Beckman, Brian
1986-01-01
This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
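The rollback rule at the heart of virtual time can be illustrated with a minimal toy; this is an assumption-laden sketch of the paradigm, not the Time Warp Operating System implementation:

```python
# Toy illustration of Time Warp's rollback rule: a process executes
# timestamped messages optimistically, checkpointing its state; a
# "straggler" whose timestamp lies in the process's past forces a
# rollback to the last checkpoint before that virtual time, followed
# by re-execution of the affected messages in timestamp order.
class TimeWarpProcess:
    def __init__(self):
        self.lvt = 0                   # local virtual time
        self.state = 0
        self.checkpoints = [(0, 0)]    # (virtual time, state) pairs
        self.processed = []            # (timestamp, payload) replay log

    def _execute(self, ts, payload):
        self.lvt = ts
        self.state += payload          # stand-in for real event logic
        self.checkpoints.append((ts, self.state))

    def receive(self, ts, payload):
        self.processed.append((ts, payload))
        if ts < self.lvt:              # straggler: roll back, re-execute
            self.processed.sort()
            keep = [(t, s) for t, s in self.checkpoints if t < ts]
            self.lvt, self.state = keep[-1]
            self.checkpoints = keep
            for t, p in self.processed:
                if t >= ts:
                    self._execute(t, p)
        else:
            self._execute(ts, payload)

p = TimeWarpProcess()
p.receive(10, 1)
p.receive(30, 5)
p.receive(20, 2)   # straggler: rolls back past t=30, replays 20 then 30
print(p.lvt, p.state)  # 30 8
```

A real implementation also sends "anti-messages" to cancel outputs produced during the rolled-back computation; that machinery is omitted here.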
[Towards understanding human ecology in nursing practice: a concept analysis].
Huynh, Truc; Alderson, Marie
2010-06-01
Human ecology is an umbrella concept encompassing several social, physical, and cultural elements existing in the individual's external environment. The pragmatic utility method was used to analyze the "human ecology" concept in order to ascertain its conceptual fit with nursing epistemology and to promote its use by nurses in clinical practice. Relevant articles for the review were retrieved from the MEDLINE, CINAHL, PsycINFO, and CSA databases using the terms "human ecology," "environment," "nursing," and "ecology." Data analysis revealed that human ecology is perceived as a theoretical perspective designating a complex, multilayered, and multidimensional system, one that comprises individuals and their reciprocal interactions with their global environments and the subsequent impact of these interactions upon their health. Preconditions of human ecology include the individuals, their environments, and their transactions. Attributes of this concept encompass the characteristics of an open system (e.g., interdependence, reciprocity).
Collaborative and Multilingual Approach to Learn Database Topics Using Concept Maps
Calvo, Iñaki
2014-01-01
The authors report on a study using the concept mapping technique in computer engineering education for learning theoretical introductory database topics. In addition, the learning of multilingual technical terminology by means of the collaborative drawing of a concept map is also pursued in this experiment. The main characteristics of a study carried out in the database subject at the University of the Basque Country during the 2011/2012 academic year are described. This study contributes to the field of concept mapping, as these kinds of cognitive tools have proved valid for supporting learning in computer engineering education. It also contributes to the field of computer engineering education by providing a technique that can be incorporated for several educational purposes within the discipline. Results reveal the potential that a collaborative concept map editor offers to fulfil the above-mentioned objectives. PMID:25538957
Spatial cyberinfrastructures, ontologies, and the humanities
Sieber, Renee E.; Wellen, Christopher C.; Jin, Yuan
2011-01-01
We report on research into building a cyberinfrastructure for Chinese biographical and geographic data. Our cyberinfrastructure contains (i) the McGill-Harvard-Yenching Library Ming Qing Women's Writings database (MQWW), the only online database on historical Chinese women's writings, (ii) the China Biographical Database, the authority for Chinese historical people, and (iii) the China Historical Geographical Information System, one of the first historical geographic information systems. Key to this integration is that linked databases retain separate identities as bases of knowledge, while they possess sufficient semantic interoperability to allow for multidatabase concepts and to support cross-database queries on an ad hoc basis. Computational ontologies create underlying semantics for database access. This paper focuses on the spatial component in a humanities cyberinfrastructure, which includes issues of conflicting data, heterogeneous data models, disambiguation, and geographic scale. First, we describe the methodology for integrating the databases. Then we detail the system architecture, which includes a tier of ontologies and schema. We describe the user interface and applications that allow for cross-database queries. For instance, users should be able to analyze the data, examine hypotheses on spatial and temporal relationships, and generate historical maps with datasets from MQWW for research, teaching, and publication on Chinese women writers, their familial relations, publishing venues, and the literary and social communities. Last, we discuss the social side of cyberinfrastructure development, as people are considered to be as critical as the technical components for its success. PMID:21444819
Flight Test Evaluation of Synthetic Vision Concepts at a Terrain Challenged Airport
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prince, Lawrence J., III; Bailey, Randell E.; Arthur, Jarvis J., III; Parrish, Russell V.
2004-01-01
NASA's Synthetic Vision Systems (SVS) Project is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft through the display of computer generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation/Terrain Awareness and Warning System displays. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the tunnel guidance display concept used within the SVS concepts achieved required navigation performance (RNP) criteria.
PRO-Elicere: A Study for Create a New Process of Dependability Analysis of Space Computer Systems
NASA Astrophysics Data System (ADS)
da Silva, Glauco; Netto Lahoz, Carlos Henrique
2013-09-01
This paper presents a new approach to computer system dependability analysis, called PRO-ELICERE, which introduces data mining concepts and intelligent decision-support mechanisms to analyze the potential hazards and failures of a critical computer system. Some techniques and tools that support traditional dependability analysis are also presented, and the concept of knowledge discovery and intelligent databases for critical computer systems is briefly discussed. The paper then introduces the PRO-ELICERE process, an intelligent approach to automating ELICERE, a process created to extract non-functional requirements for critical computer systems. PRO-ELICERE can be used in V&V activities in projects of the Institute of Aeronautics and Space, such as the Brazilian Satellite Launcher (VLS-1).
XML: James Webb Space Telescope Database Issues, Lessons, and Status
NASA Technical Reports Server (NTRS)
Detter, Ryan; Mooney, Michael; Fatig, Curtis
2003-01-01
This paper presents the current concept of using the Extensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. Testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will therefore be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of JWST development, 24 distinct, geographically dispersed laboratories will have local database tools with an XML database. Each laboratory's database tools will be used for exporting and importing data both locally and to a central database system, for inputting data to the database certification process, and for providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI) in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow individual items to be upgraded, added, or changed without affecting the entire ground system. Using XML should also allow the import and export formats needed by the various elements to be altered, the verification/validation of each database item to be tracked, many organizations to provide database inputs, and the many existing database processes to be merged into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology.
Often this results in a greater reliance on Commercial-Off-The-Shelf (COTS) software, which can be limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the cost of developing and supporting custom code over the 19-year mission period was forecast to be higher than the total licensing costs. A group also looked at reusing existing database tools and formats. If the JWST database were already in a mature state, reuse would make sense; but with the database still needing to handle the addition of different types of command and telemetry structures, to define new spacecraft systems, and to accept input from and export to systems that have not yet been defined, XML provided the desired flexibility. It remains to be determined whether the XML database will reduce the overall cost of the JWST mission.
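A sketch of the kind of ground-system-independent XML record the paper envisions; the element and attribute names below are invented for illustration and are not the actual JWST schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical telemetry-item record: local laboratory tools can parse,
# validate, or transform such an item without knowing anything about the
# central database implementation, then serialize it back for export.
record = """
<telemetry_item mnemonic="TEMP_01" version="2">
  <description>Primary mirror temperature</description>
  <units>K</units>
  <conversion poly="0.5,0.01"/>
</telemetry_item>
"""

item = ET.fromstring(record)
print(item.get("mnemonic"), item.findtext("units"))  # TEMP_01 K

# Export for the central repository: serialize back to XML text.
exported = ET.tostring(item, encoding="unicode")
```

Because the record carries its own structure, adding a new element for one laboratory's tool does not break importers elsewhere, which is the flexibility argument made above.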
Using Clustering Strategies for Creating Authority Files.
ERIC Educational Resources Information Center
French, James C.; Powell, Allison L.; Schulman, Eric
2000-01-01
Discussion of quality control of data in online bibliographic databases focuses on authority files. Describes approximate string matching, introduces the concept of approximate word matching and clustering, and presents a case study using the Astrophysics Data System (ADS) that shows how to reduce human effort involved in authority work. (LRW)
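The approximate-word-matching-and-clustering idea for authority files can be sketched as follows; the similarity measure, threshold, and greedy assignment are illustrative stand-ins, not the ADS algorithm:

```python
from difflib import SequenceMatcher

# Sketch of clustering near-duplicate author strings for authority work:
# greedily assign each name to the first cluster whose representative is
# within a similarity threshold; unmatched names start new clusters.
def cluster_names(names, threshold=0.8):
    clusters = []  # each cluster is a list; its first element is the representative
    for name in names:
        for cluster in clusters:
            ratio = SequenceMatcher(None, name.lower(), cluster[0].lower()).ratio()
            if ratio >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

names = ["Schulman, E.", "Schulman, Eric", "Shulman, E.", "French, J. C."]
for group in cluster_names(names):
    print(group)
```

A human cataloguer then confirms each proposed cluster, which is where the reduction in authority-work effort comes from: reviewing a handful of candidate groups instead of every pairwise comparison.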
Are Computer Science Students Ready for the Real World.
ERIC Educational Resources Information Center
Elliot, Noreen
The typical undergraduate program in computer science includes an introduction to hardware and operating systems, file processing and database organization, data communication and networking, and programming. However, many graduates may lack the ability to integrate the concepts "learned" into a skill set and pattern of approaching problems that…
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Wong, Clement; Johnson, David; Bhushan, Vikas; Rivera, Monica; Huang, Lu J.; Aberle, Denise R.; Cardenas, Alfonso F.; Chu, Wesley W.
1995-05-01
With the increase in the volume and distribution of images and text available in PACS and medical electronic health-care environments, it becomes increasingly important to maintain indexes that summarize the content of these multimedia documents. Such indexes are necessary to quickly locate relevant patient cases for research, patient management, and teaching. The goal of this project is to develop an intelligent document retrieval system that allows researchers to request patient cases based on document content. We wish to retrieve patient cases from electronic information archives using a combined specification of patient demographics, low-level radiologic findings (size, shape, number), intermediate-level radiologic findings (e.g., atelectasis, infiltrates), and/or high-level pathology constraints (e.g., well-differentiated small cell carcinoma). The cases may be distributed among multiple heterogeneous databases such as PACS, RIS, and HIS. Content-based retrieval systems go beyond the capabilities of simple keyword or string-based matching systems: they require a knowledge base to comprehend the generality/specificity of a concept (thus knowing the subclasses or related concepts of a given concept) and knowledge of the various string representations of each concept (i.e., synonyms, lexical variants, etc.). We have previously reported on a data integration mediation layer that allows transparent access to multiple heterogeneous distributed medical databases (HIS, RIS, and PACS). The data access layer of our architecture currently has limited query processing capabilities: given a patient hospital identification number, the access mediation layer collects all documents in RIS and HIS and returns this information to a specified workstation location.
In this paper we report on our efforts to extend the query processing capabilities of the system through the creation of custom query interfaces, an intelligent query processing engine, and a document-content index that can be generated automatically (i.e., with no manual authoring or changes to the normal clinical protocols).
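The knowledge-base requirement described above, knowing a concept's subclasses and synonyms, can be sketched as a small query-expansion routine; the hierarchy and terms below are invented for illustration:

```python
# Sketch of concept-based query expansion: a tiny is-a hierarchy plus a
# synonym table lets a retrieval engine match documents that never use
# the query's exact string. Terms here are illustrative only.
SUBCLASSES = {
    "lung neoplasm": ["small cell carcinoma", "adenocarcinoma"],
}
SYNONYMS = {
    "lung neoplasm": ["lung cancer", "pulmonary neoplasm"],
    "small cell carcinoma": ["SCLC"],
}

def expand(concept):
    terms = {concept}
    terms.update(SYNONYMS.get(concept, []))
    for child in SUBCLASSES.get(concept, []):
        terms.update(expand(child))   # recurse down the hierarchy
    return terms

def matches(query_concept, document_text):
    text = document_text.lower()
    return any(t.lower() in text for t in expand(query_concept))

print(matches("lung neoplasm", "Findings consistent with SCLC."))  # True
```

This is exactly why string matching alone fails: the report never mentions "lung neoplasm", yet the expanded query still retrieves it.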
A Concept Analysis of Systems Thinking.
Stalter, Ann M; Phillips, Janet M; Ruggiero, Jeanne S; Scardaville, Debra L; Merriam, Deborah; Dolansky, Mary A; Goldschmidt, Karen A; Wiggs, Carol M; Winegardner, Sherri
2017-10-01
This concept analysis, written by the National Quality and Safety Education for Nurses (QSEN) RN-BSN Task Force, defines systems thinking in relation to healthcare delivery. A review of the literature was conducted using five databases with the keywords "systems thinking" as well as "nursing education," "nursing curriculum," "online," "capstone," "practicum," "RN-BSN/RN to BSN," "healthcare organizations," "hospitals," and "clinical agencies." Only articles that focused on systems thinking in health care were used. The authors identified defining attributes, antecedents, consequences, and empirical referents of systems thinking. Systems thinking was defined as a process applied to individuals, teams, and organizations to impact cause and effect where solutions to complex problems are accomplished through collaborative effort according to personal ability with respect to improving components and the greater whole. Four primary attributes characterized systems thinking: dynamic system, holistic perspective, pattern identification, and transformation. Using the platform provided in this concept analysis, interprofessional practice has the ability to embrace planned efforts to improve critically needed quality and safety initiatives across patients' lifespans and all healthcare settings. © 2016 Wiley Periodicals, Inc.
Evaluation of HardSys/HardDraw, An Expert System for Electromagnetic Interactions Modelling
1993-05-01
interactions in complex systems. This report gives a description of HardSys/HardDraw and reviews the main concepts used in its design. Various aspects of its ... HardDraw, an expert system for the modelling of electromagnetic interactions in complex systems. It consists of two main components: HardSys and HardDraw. ... HardSys is the advisor part of the expert system. It is knowledge-based; that is, it contains a database of models and properties for various types of
Machado, Helena; Silva, Susana
2015-01-01
The ethical aspects of biobanks and forensic DNA databases are often treated as separate issues. As a reflection of this, public participation, or the involvement of citizens in genetic databases, has been approached differently in the fields of forensics and medicine. This paper aims to cross the boundaries between medicine and forensics by exploring the flows between the ethical issues presented in the two domains and the subsequent conceptualisation of public trust and legitimisation. We propose to introduce the concept of ‘solidarity’, traditionally applied only to medical and research biobanks, into a consideration of public engagement in medicine and forensics. Inclusion of a solidarity-based framework, in both medical biobanks and forensic DNA databases, raises new questions that should be included in the ethical debate, in relation to both health services/medical research and activities associated with the criminal justice system. PMID:26139851
Validating a strategy for psychosocial phenotyping using a large corpus of clinical text.
Gundlapalli, Adi V; Redd, Andrew; Carter, Marjorie; Divita, Guy; Shen, Shuying; Palmer, Miland; Samore, Matthew H
2013-12-01
To develop algorithms to improve the efficiency of patient phenotyping using natural language processing (NLP) on text data. Of the large number of note titles available in our database, we sought to determine those with the highest yield and precision for psychosocial concepts. From a database of over 1 billion documents from US Department of Veterans Affairs medical facilities, a random sample of 1500 documents from each of 218 enterprise note titles was chosen. Psychosocial concepts were extracted using a UIMA-AS-based NLP pipeline (v3NLP), using a lexicon of relevant concepts with negation and template format annotators. Human reviewers evaluated a subset of documents for false positives and sensitivity. High-yield documents were identified by hit rate and precision. Reasons for false positivity were characterized. A total of 58 707 psychosocial concepts were identified from 316 355 documents, for an overall hit rate of 0.2 concepts per document (median 0.1, range 0-1.6). Of 6031 concepts reviewed from a high-yield set of note titles, the overall precision for all concept categories was 80%, with variability among note titles and concept categories. Reasons for false positivity included templating, negation, context, and alternate meanings of words. The sensitivity of the NLP system was 49% (95% CI 43% to 55%). Phenotyping using NLP need not involve the entire document corpus. Our methods offer a generalizable strategy for scaling NLP pipelines to large free-text corpora with complex linguistic annotations in an attempt to identify patients of a certain phenotype.
NASA Astrophysics Data System (ADS)
Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro
2018-02-01
Management of water resources based on a Geographic Information System (GIS) can provide substantial benefits for water availability planning. Monitoring of potential water levels is needed in the development, agriculture, and energy sectors, among others. In this research, a web-based water resource information system for monitoring an area's potential water level is developed using a real-time GIS concept and a rule-based system method. The GIS consists of hardware, software, and a database. Following a web-based GIS architecture, this study uses a set of networked computers running the Apache web server, the PHP programming language, and a MySQL database. An ultrasonic wireless sensor system is used as the water level data input; each reading also includes time and geographic location information. The GIS maps the five sensor locations. Sensor data are processed through a rule-based system to determine the area's potential water level. Water level monitoring results can be displayed on thematic maps by overlaying more than one layer, as tables generated from the database, and as graphs based on event time and water level values.
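The rule-based step that maps a sensor reading to a potential-water-level category might look like the following sketch; the thresholds and labels are invented for illustration, not the authors' rules:

```python
# Sketch of the rule-based classification step: each rule pairs a
# condition on the water level (in cm, hypothetical units) with the
# category label drawn on the thematic map.
RULES = [
    (lambda level: level < 50, "low potential"),
    (lambda level: 50 <= level < 150, "medium potential"),
    (lambda level: level >= 150, "high potential"),
]

def classify(level_cm):
    for condition, label in RULES:
        if condition(level_cm):
            return label

readings = {"sensor-1": 30, "sensor-2": 120, "sensor-3": 200}
for sensor, level in readings.items():
    print(sensor, classify(level))
```

Keeping the rules in a data structure rather than hard-coded branches lets the thresholds be edited without touching the rest of the GIS pipeline.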
Systems design and comparative analysis of large antenna concepts
NASA Technical Reports Server (NTRS)
Garrett, L. B.; Ferebee, M. J., Jr.
1983-01-01
Conceptual designs are evaluated and comparative analyses conducted for several large antenna spacecraft for Land Mobile Satellite System (LMSS) communications missions. Structural configurations include trusses, hoop-and-column, and radial-rib designs. The study was conducted using the Interactive Design and Evaluation of Advanced Spacecraft (IDEAS) system, whose current capabilities, development status, and near-term plans are reviewed. IDEAS is an integrated system of computer-aided design and analysis software used to rapidly evaluate system concepts and technology needs for future advanced spacecraft such as large antennas, platforms, and space stations. The system was developed at Langley to meet a need for rapid, cost-effective, labor-saving approaches to the design and analysis of the numerous missions and total spacecraft system options under consideration. IDEAS consists of about 40 technical modules, an efficient executive, database and file management software, and interactive graphics display capabilities.
Rotating Rake Turbofan Duct Mode Measurement System
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.
2005-01-01
An experimental measurement system was developed and implemented by the NASA Glenn Research Center in the 1990s to measure turbofan duct acoustic modes. The system is a continuously rotating radial microphone rake that is inserted into the duct. This Rotating Rake provides a complete map of the acoustic duct modes present in a ducted fan and has been used on a variety of test articles, from a low-speed concept test rig to a full-scale production turbofan engine. The Rotating Rake has been critical in developing and evaluating a number of noise reduction concepts, as well as providing experimental databases for the verification of several aeroacoustic codes. A more detailed derivation of the unique Rotating Rake equations is presented in the appendix.
Basic level scene understanding: categories, attributes and structures
Xiao, Jianxiong; Hays, James; Russell, Bryan C.; Patterson, Genevieve; Ehinger, Krista A.; Torralba, Antonio; Oliva, Aude
2013-01-01
A longstanding goal of computer vision is to build a system that can automatically understand a 3D scene from a single image. This requires extracting semantic concepts and 3D information from 2D images which can depict an enormous variety of environments that comprise our visual world. This paper summarizes our recent efforts toward these goals. First, we describe the richly annotated SUN database which is a collection of annotated images spanning 908 different scene categories with object, attribute, and geometric labels for many scenes. This database allows us to systematically study the space of scenes and to establish a benchmark for scene and object recognition. We augment the categorical SUN database with 102 scene attributes for every image and explore attribute recognition. Finally, we present an integrated system to extract the 3D structure of the scene and objects depicted in an image. PMID:24009590
Information Extraction for Clinical Data Mining: A Mammography Case Study
Nassif, Houssam; Woods, Ryan; Burnside, Elizabeth; Ayvaci, Mehmet; Shavlik, Jude; Page, David
2013-01-01
Breast cancer is the leading cause of cancer mortality in women between the ages of 15 and 54. During mammography screening, radiologists use a strict lexicon (BI-RADS) to describe and report their findings. Mammography records are then stored in a well-defined database format (NMD). Lately, researchers have applied data mining and machine learning techniques to these databases. They successfully built breast cancer classifiers that can help in early detection of malignancy. However, the validity of these models depends on the quality of the underlying databases. Unfortunately, most databases suffer from inconsistencies, missing data, inter-observer variability and inappropriate term usage. In addition, many databases are not compliant with the NMD format and/or solely consist of text reports. BI-RADS feature extraction from free text and consistency checks between recorded predictive variables and text reports are crucial to addressing this problem. We describe a general scheme for concept information retrieval from free text given a lexicon, and present a BI-RADS features extraction algorithm for clinical data mining. It consists of a syntax analyzer, a concept finder and a negation detector. The syntax analyzer preprocesses the input into individual sentences. The concept finder uses a semantic grammar based on the BI-RADS lexicon and the experts’ input. It parses sentences detecting BI-RADS concepts. Once a concept is located, a lexical scanner checks for negation. Our method can handle multiple latent concepts within the text, filtering out ultrasound concepts. On our dataset, our algorithm achieves 97.7% precision, 95.5% recall and an F1-score of 0.97. It outperforms manual feature extraction at the 5% statistical significance level. PMID:23765123
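The three-stage extraction pipeline (syntax analyzer, concept finder, negation detector) can be sketched minimally as follows; the lexicon and negation triggers shown are illustrative placeholders, not the full BI-RADS lexicon or the authors' grammar:

```python
import re

# Minimal sketch of the pipeline described above: split report text into
# sentences (syntax analyzer), find lexicon concepts (concept finder),
# and check the text preceding each hit for negation (negation detector).
LEXICON = {"mass", "calcification", "architectural distortion"}
NEGATION_TRIGGERS = {"no", "without", "absence of"}

def extract(report):
    findings = []
    for sentence in re.split(r"(?<=[.!?])\s+", report.lower()):
        for concept in sorted(LEXICON):
            idx = sentence.find(concept)
            if idx == -1:
                continue
            window = sentence[:idx]        # text preceding the concept
            tokens = window.split()
            negated = any(
                (t in window) if " " in t else (t in tokens)
                for t in NEGATION_TRIGGERS
            )
            findings.append((concept, negated))
    return findings

report = "There is a spiculated mass in the left breast. No calcification."
print(extract(report))  # [('calcification', True), ('mass', False)]
```

A production system would use the semantic grammar and expert-curated lexicon the paper describes; the point here is only the division of labor among the three stages.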
BIO-Plex Information System Concept
NASA Technical Reports Server (NTRS)
Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)
1999-01-01
This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers that perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate, and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed, and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.
Problem reporting management system performance simulation
NASA Technical Reports Server (NTRS)
Vannatta, David S.
1993-01-01
This paper proposes the Problem Reporting Management System (PRMS) model as an effective discrete simulation tool for determining the risks involved during the development phase of a Trouble Tracking Reporting Data Base replacement system. The model considers the type of equipment and networks that will be used in the replacement system as well as varying user loads, database size, and expected operational availability. The paper discusses the dynamics, stability, and application of the PRMS and addresses concepts suggested to enhance and enrich service performance.
Enhanced/Synthetic Vision Systems - Human factors research and implications for future systems
NASA Technical Reports Server (NTRS)
Foyle, David C.; Ahumada, Albert J.; Larimer, James; Sweet, Barbara T.
1992-01-01
This paper reviews recent human factors research studies conducted in the Aerospace Human Factors Research Division at NASA Ames Research Center related to the development and usage of Enhanced or Synthetic Vision Systems. Research discussed includes studies of field of view (FOV), representational differences of infrared (IR) imagery, head-up display (HUD) symbology, HUD advanced concept designs, sensor fusion, and sensor/database fusion and evaluation. Implications for the design and usage of Enhanced or Synthetic Vision Systems are discussed.
Standardization of databases for AMDB taxi routing functions
NASA Astrophysics Data System (ADS)
Pschierer, C.; Sindlinger, A.; Schiefele, J.
2010-04-01
Input, management, and display of taxi routes on airport moving map displays (AMM) have been covered in various studies in the past. The demonstrated applications are typically based on Aerodrome Mapping Databases (AMDB). Taxi routing functions require specific enhancements, typically in the form of a graph network with nodes and edges modeling all connectivities within an airport, which are not supported by the current AMDB standards. Therefore, the data schemas and data content have been defined specifically for the purpose and test scenarios of these studies. A standardization of the data format for taxi routing information is a prerequisite for turning taxi routing functions into production. The joint RTCA/EUROCAE special committee SC-217, responsible for updating and enhancing the AMDB standards DO-272 [1] and DO-291 [2], is currently studying different alternatives and defining reasonable formats. Requirements for taxi routing data are primarily driven by depiction concepts for assigned and cleared taxi routes, but also by database size and economic feasibility. The studied concepts are similar to those described in the GDF (geographic data files) specification [3], which is used in most car navigation systems today. They include: a highly aggregated graph network of complex features; a modestly aggregated graph network of simple features; and a non-explicit topology of plain AMDB taxi guidance line elements. This paper introduces the different concepts and their advantages and disadvantages.
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Busquets, Anthony M.; Williams, Steven P.; Nold, Dean E.
2003-01-01
A simulation study was conducted in 1994 at Langley Research Center that used 12 commercial airline pilots repeatedly flying complex Microwave Landing System (MLS)-type approaches to parallel runways under Category IIIc weather conditions. Two sensor insert concepts of 'Synthetic Vision Systems' (SVS) were used in the simulated flights, with a more conventional electro-optical display (similar to a Head-Up Display with raster capability for sensor imagery), flown under less restrictive visibility conditions, used as a control condition. The SVS concepts combined the sensor imagery with a computer-generated image (CGI) of an out-the-window scene based on an onboard airport database. Various scenarios involving runway traffic incursions (taxiing aircraft and parked fuel trucks) and navigational system position errors (both static and dynamic) were used to assess the pilots' ability to manage the approach task with the display concepts. The two SVS sensor insert concepts contrasted the simple overlay of sensor imagery on the CGI scene without additional image processing (the SV display) to the complex integration (the AV display) of the CGI scene with pilot-decision aiding using both object and edge detection techniques for detection of obstacle conflicts and runway alignment errors.
ERIC Educational Resources Information Center
Rzepa, Henry S.
2016-01-01
Three new examples are presented illustrating three-dimensional chemical information searches of the Cambridge structure database (CSD) from which basic core concepts in organic and inorganic chemistry emerge. These include connecting the regiochemistry of aromatic electrophilic substitution with the geometrical properties of hydrogen bonding…
Addressing the English Language Arts Technology Standard in a Secondary Reading Methodology Course.
ERIC Educational Resources Information Center
Merkley, Donna J.; Schmidt, Denise A.; Allen, Gayle
2001-01-01
Describes efforts to integrate technology into a reading methodology course for secondary English majors. Discusses the use of e-mail, multimedia, distance education for videoconferences, online discussion technology, subject-specific software, desktop publishing, a database management system, a concept mapping program, and the use of the World…
Constructing a Graph Database for Semantic Literature-Based Discovery.
Hristovski, Dimitar; Kastrin, Andrej; Dinevski, Dejan; Rindflesch, Thomas C
2015-01-01
Literature-based discovery (LBD) generates discoveries, or hypotheses, by combining what is already known in the literature. Potential discoveries have the form of relations between biomedical concepts; for example, a drug may be determined to treat a disease other than the one for which it was intended. LBD views the knowledge in a domain as a network; a set of concepts along with the relations between them. As a starting point, we used SemMedDB, a database of semantic relations between biomedical concepts extracted with SemRep from Medline. SemMedDB is distributed as a MySQL relational database, which has some problems when dealing with network data. We transformed and uploaded SemMedDB into the Neo4j graph database, and implemented the basic LBD discovery algorithms with the Cypher query language. We conclude that storing the data needed for semantic LBD is more natural in a graph database. Also, implementing LBD discovery algorithms is conceptually simpler with a graph query language when compared with standard SQL.
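The basic open-discovery step of LBD (find a concept C linked to a known concept A only indirectly, through some B) is naturally a graph traversal. A minimal sketch on an in-memory set of SemMedDB-style triples, using Swanson's classic fish-oil/Raynaud example as mock data:

```python
# Open-discovery (A -> B -> C) step of literature-based discovery on a
# tiny mocked-up relation set; the system described above stores such
# triples in Neo4j and runs the traversal in Cypher, roughly:
#   MATCH (a {name:'fish_oil'})-->(b)-->(c) WHERE NOT (a)-->(c) RETURN a,b,c
RELATIONS = {
    ("fish_oil", "TREATS", "blood_viscosity"),
    ("blood_viscosity", "ASSOCIATED_WITH", "raynaud_disease"),
    ("aspirin", "TREATS", "headache"),
}

def open_discovery(a: str):
    """Hypotheses (a, b, c): a relates to b, b to c, but a not directly to c."""
    direct = {o for s, _, o in RELATIONS if s == a}
    hypotheses = set()
    for s, _, b in RELATIONS:
        if s == a:
            for s2, _, c in RELATIONS:
                if s2 == b and c not in direct and c != a:
                    hypotheses.add((a, b, c))
    return hypotheses

print(open_discovery("fish_oil"))
# → {('fish_oil', 'blood_viscosity', 'raynaud_disease')}
```

As the paper argues, expressing this two-hop pattern with an exclusion is one query in a graph language, whereas in SQL it requires self-joins plus a NOT EXISTS subquery.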
The Application of Lidar to Synthetic Vision System Integrity
NASA Technical Reports Server (NTRS)
Campbell, Jacob L.; UijtdeHaag, Maarten; Vadlamani, Ananth; Young, Steve
2003-01-01
One goal in the development of a Synthetic Vision System (SVS) is to create a system that can be certified by the Federal Aviation Administration (FAA) for use at various flight criticality levels. As part of NASA s Aviation Safety Program, Ohio University and NASA Langley have been involved in the research and development of real-time terrain database integrity monitors for SVS. Integrity monitors based on a consistency check with onboard sensors may be required if the inherent terrain database integrity is not sufficient for a particular operation. Sensors such as the radar altimeter and weather radar, which are available on most commercial aircraft, are currently being investigated for use in a real-time terrain database integrity monitor. This paper introduces the concept of using a Light Detection And Ranging (LiDAR) sensor as part of a real-time terrain database integrity monitor. A LiDAR system consists of a scanning laser ranger, an inertial measurement unit (IMU), and a Global Positioning System (GPS) receiver. Information from these three sensors can be combined to generate synthesized terrain models (profiles), which can then be compared to the stored SVS terrain model. This paper discusses an initial performance evaluation of the LiDAR-based terrain database integrity monitor using LiDAR data collected over Reno, Nevada. The paper will address the consistency checking mechanism and test statistic, sensitivity to position errors, and a comparison of the LiDAR-based integrity monitor to a radar altimeter-based integrity monitor.
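The consistency-checking idea can be sketched as a disparity statistic between a sensor-synthesized terrain profile and the stored SVS terrain model, with an alert when the statistic exceeds a threshold. The statistic, profiles, and threshold below are illustrative assumptions, not the test statistic used in the paper.

```python
# Hedged sketch of a terrain-database consistency check: compare sensed
# terrain heights (e.g. synthesized from LiDAR or radar-altimeter data)
# against the stored database profile. Values and threshold are toy numbers.
def disparity_statistic(sensed, stored):
    """Mean absolute disparity between sensed and stored profiles (meters)."""
    assert len(sensed) == len(stored)
    return sum(abs(s - d) for s, d in zip(sensed, stored)) / len(sensed)

def database_consistent(sensed, stored, threshold_m=15.0):
    """Flag the database as suspect when disparity exceeds the threshold."""
    return disparity_statistic(sensed, stored) <= threshold_m

sensed = [1402.0, 1411.5, 1425.0, 1440.2]   # synthesized profile (m)
stored = [1400.0, 1410.0, 1426.5, 1439.0]   # SVS terrain database (m)
print(database_consistent(sensed, stored))  # small disparity -> consistent
```

A real monitor must also account for navigation-position error, since a horizontal offset shifts the whole sensed profile relative to the database.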
Social media based NLP system to find and retrieve ARM data: Concept paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devarakonda, Ranjeet; Giansiracusa, Michael T.; Kumar, Jitendra
Information connectivity and retrieval play a role in our daily lives. The most pervasive source of online information is databases. The amount of data is growing at a rapid rate, and database technology is improving and having a profound effect. Almost all online applications store and retrieve information from databases. One challenge in supplying the public with wider access to informational databases is the need for knowledge of database languages like Structured Query Language (SQL). Although the SQL language has been published in many forms, not everybody is able to write SQL queries. Another challenge is that it may not be practical to make the public aware of the structure of the database. There is a need for novice users to query relational databases using their natural language. To solve this problem, many natural language interfaces to structured databases have been developed. The goal is to provide a more intuitive method for generating database queries and delivering responses. Social media makes it possible to interact with a wide section of the population. Through this medium, and with the help of Natural Language Processing (NLP), we can make the data of the Atmospheric Radiation Measurement Data Center (ADC) more accessible to the public. We propose an architecture for using Apache Lucene/Solr [1], OpenML [2,3], and Kafka [4] to generate an automated query/response system with inputs from Twitter [5], our Cassandra DB, and our log database. Using the Twitter API and NLP, we can give the public the ability to ask questions of our database and get automated responses.
GIS Toolsets for Planetary Geomorphology and Landing-Site Analysis
NASA Astrophysics Data System (ADS)
Nass, Andrea; van Gasselt, Stephan
2015-04-01
Modern Geographic Information Systems (GIS) allow expert and lay users alike to load and position geographic data and perform simple to highly complex surface analyses. For many applications, dedicated and ready-to-use GIS tools are available in standard software systems, while other applications require the modular combination of available basic tools to answer more specific questions. This also applies to analyses in modern planetary geomorphology, where many such (basic) tools can be used to build complex analysis tools, e.g. in image and terrain-model analysis. Apart from the simple application of sets of different tools, many complex tasks require a more sophisticated design for storing and accessing data using databases (e.g. ArcHydro for hydrological data analysis). In planetary sciences, complex database-driven models are often required to efficiently analyse potential landing sites or store rover data, but geologic mapping data can also be efficiently stored and accessed using database models rather than stand-alone shapefiles. For landing-site analyses, relief and surface roughness estimates are two common concepts of particular interest, and for both, a number of different definitions co-exist. We here present an advanced toolset for the analysis of image and terrain-model data with an emphasis on the extraction of landing site characteristics using established criteria. We provide working examples and particularly focus on the concept of terrain roughness as it is interpreted in geomorphology and engineering studies.
On Hunting Animals of the Biometric Menagerie for Online Signature.
Houmani, Nesma; Garcia-Salicetti, Sonia
2016-01-01
Individuals behave differently with respect to biometric authentication systems. This fact was formalized in the literature as the concept of the Biometric Menagerie, which defines and labels user groups with animal names in order to reflect their characteristics with respect to biometric systems. This concept has been illustrated for the face, fingerprint, iris, and speech modalities. The present study extends the Biometric Menagerie to online signatures by proposing a novel methodology that ties specific quality measures for signatures to categories of the Biometric Menagerie. Such measures are combined to automatically retrieve writer categories of the extended version of the Biometric Menagerie. Performance analysis with different types of classifiers shows the pertinence of our approach on the well-known MCYT-100 database.
Using an image-extended relational database to support content-based image retrieval in a PACS.
Traina, Caetano; Traina, Agma J M; Araújo, Myrian R B; Bueno, Josiane M; Chino, Fabio J T; Razente, Humberto; Azevedo-Marques, Paulo M
2005-12-01
This paper presents a new Picture Archiving and Communication System (PACS), called cbPACS, which has content-based image retrieval capabilities. The cbPACS answers range and k-nearest-neighbor similarity queries, employing a relational database manager extended to support images. Images are compared through their features, which are extracted by an image-processing module and stored in the extended relational database. The database extensions were developed to answer similarity queries efficiently by taking advantage of specialized indexing methods. The main concept supporting the extensions is the definition, inside the relational manager, of distance functions based on features extracted from the images. An extension to the SQL language enables the construction of an interpreter that intercepts the extended commands and translates them to standard SQL, allowing any relational database server to be used. Currently, the system works on features based on the color distribution of images, through normalized histograms as well as metric histograms. Metric histograms are invariant to scale, translation, and rotation of images, and also to brightness transformations. The cbPACS is prepared to integrate new image features based on the texture and shape of the main objects in the image.
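The core retrieval idea (reduce each image to a normalized color histogram, then rank images by a distance function for a k-nearest-neighbor query) can be sketched as follows. The L1 distance, toy histograms, and image names are illustrative assumptions, not the cbPACS feature set.

```python
# Illustrative k-nearest-neighbor similarity query over normalized color
# histograms, the kind of feature-distance search cbPACS pushes into the
# extended relational database. Histograms below are toy data.
def l1_distance(h1, h2):
    """L1 (city-block) distance between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def knn_query(query_hist, database, k=2):
    """database: {image_id: histogram}. Return the k closest image ids."""
    ranked = sorted(database, key=lambda img: l1_distance(query_hist, database[img]))
    return ranked[:k]

db = {
    "img_a": [0.5, 0.3, 0.2],
    "img_b": [0.1, 0.1, 0.8],
    "img_c": [0.45, 0.35, 0.2],
}
print(knn_query([0.5, 0.3, 0.2], db, k=2))
# → ['img_a', 'img_c']
```

In the system described above, this linear scan would be replaced by a specialized metric index inside the database server; the distance function is the same concept in either case.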
Computer-aided system for detecting runway incursions
NASA Astrophysics Data System (ADS)
Sridhar, Banavar; Chatterji, Gano B.
1994-07-01
A synthetic vision system for enhancing the pilot's ability to navigate and control the aircraft on the ground is described. The system uses the onboard airport database and images acquired by external sensors. Additional navigation information needed by the system is provided by the Inertial Navigation System and the Global Positioning System. The various functions of the system, such as image enhancement, map generation, obstacle detection, collision avoidance, guidance, etc., are identified. The available technologies, some of which were developed at NASA, that are applicable to the aircraft ground navigation problem are noted. Example images of a truck crossing the runway while the aircraft flies close to the runway centerline are described. These images are from a sequence of images acquired during one of the several flight experiments conducted by NASA to acquire data to be used for the development and verification of the synthetic vision concepts. These experiments provide a realistic database including video and infrared images, motion states from the Inertial Navigation System and the Global Positioning System, and camera parameters.
Design of a Multi Dimensional Database for the Archimed DataWarehouse.
Bréant, Claudine; Thurler, Gérald; Borst, François; Geissbuhler, Antoine
2005-01-01
The Archimed data warehouse project started in 1993 at the Geneva University Hospital. It has progressively integrated seven data marts (or domains of activity) archiving medical data such as Admission/Discharge/Transfer (ADT) data, laboratory results, radiology exams, diagnoses, and procedure codes. The objective of the Archimed data warehouse is to facilitate access to an integrated and coherent view of patient medical data in order to support analytical activities such as medical statistics, clinical studies, retrieval of similar cases, and data mining processes. This paper discusses three principal design aspects relative to the conception of the data warehouse database: 1) the granularity of the database, which refers to the level of detail or summarization of data; 2) the database model and architecture, describing how data will be presented to end users and how new data is integrated; 3) the life cycle of the database, in order to ensure long-term scalability of the environment. Both the organization of patient medical data using a standardized elementary fact representation and the use of the multidimensional model have proved to be powerful design tools for integrating data coming from the multiple heterogeneous database systems that are part of the transactional Hospital Information System (HIS). Concurrently, building the data warehouse in an incremental way has helped to control the evolution of the data content. These three design aspects bring clarity and performance regarding data access. They also provide long-term scalability to the system and resilience to further changes that may occur in the source systems feeding the data warehouse.
NASA Astrophysics Data System (ADS)
Schoitsch, Erwin
1988-07-01
Our society depends more and more on the reliability of embedded (real-time) computer systems, even in everyday life. Considering the complexity of the real world, this might become a severe threat. Real-time programming is a discipline important not only in process control and data acquisition systems, but also in fields like communication, office automation, interactive databases, interactive graphics, and operating systems development. General concepts of concurrent programming and constructs for process synchronization are discussed in detail. Tasking and synchronization concepts, methods of process communication, and interrupt and timeout handling in systems based on semaphores, signals, conditional critical regions, or real-time languages like Concurrent PASCAL, MODULA, CHILL, and ADA are explained and compared with each other and with respect to their potential for quality and safety.
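The semaphore and critical-region constructs surveyed above can be illustrated with a minimal producer/consumer sketch, here using Python threads as stand-ins for real-time tasks.

```python
# Producer/consumer synchronization with a counting semaphore (signals
# item availability) and a lock (critical region around the shared buffer).
import threading
from collections import deque

buffer, items = deque(), threading.Semaphore(0)
lock = threading.Lock()
results = []

def producer():
    for i in range(3):
        with lock:               # critical region: mutate shared buffer
            buffer.append(i)
        items.release()          # signal: one more item available

def consumer():
    for _ in range(3):
        items.acquire()          # wait until an item has been produced
        with lock:
            results.append(buffer.popleft())

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t2.start(); t1.start()
t1.join(); t2.join()
print(results)
# → [0, 1, 2]
```

Languages such as ADA or CHILL express the same pattern with built-in tasking and rendezvous constructs rather than explicit semaphores.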
Competitive-Cooperative Automated Reasoning from Distributed and Multiple Source of Data
NASA Astrophysics Data System (ADS)
Fard, Amin Milani
Knowledge extraction from distributed database systems has been investigated during the past decade in order to analyze billions of information records. In this work, a competitive deduction approach in a heterogeneous data grid environment is proposed using classic data mining and statistical methods. By applying a game-theory concept in a multi-agent model, we design a policy for hierarchical knowledge discovery and inference fusion. To demonstrate the system, a sample multi-expert system has also been developed.
Investigating Evolutionary Questions Using Online Molecular Databases.
ERIC Educational Resources Information Center
Puterbaugh, Mary N.; Burleigh, J. Gordon
2001-01-01
Recommends using online molecular databases as teaching tools to illustrate evolutionary questions and concepts while introducing students to public molecular databases. Provides activities in which students make molecular comparisons between species. (YDS)
ERIC Educational Resources Information Center
Simon, Hans-Reiner; Thormann, K.-D.
This report discusses the use of the Science Citation Index produced by the Institute for Scientific Information (ISI) as a given "expert system" in the experimental study of different search levels. The inquiry has two objectives: (1) to test whether a "traditional" information system will also produce the rudiments of a…
NASA Technical Reports Server (NTRS)
Chwalowski, Pawel; Samareh, Jamshid A.; Horta, Lucas G.; Piatak, David J.; McGowan, Anna-Maria R.
2009-01-01
The conceptual and preliminary design processes for aircraft with large shape changes are generally difficult and time-consuming, and the processes are often customized for a specific shape change concept to streamline the vehicle design effort. Accordingly, several existing reports show excellent results of assessing a particular shape change concept or perturbations of a concept. The goal of the current effort was to develop a multidisciplinary analysis tool and process that would enable an aircraft designer to assess several very different morphing concepts early in the design phase and yet obtain second-order performance results so that design decisions can be made with better confidence. The approach uses an efficient parametric model formulation that allows automatic model generation for systems undergoing radical shape changes as a function of aerodynamic parameters, geometry parameters, and shape change parameters. In contrast to other more self-contained approaches, this approach utilizes off-the-shelf analysis modules to reduce development time and to make it accessible to many users. Because the analysis is loosely coupled, discipline modules like a multibody code can be easily swapped for other modules with similar capabilities. One of the advantages of this loosely coupled system is the ability to use medium- to high-fidelity tools early in the design stages, when the information can significantly influence and improve overall vehicle design. Data transfer among the analysis modules is based on an accurate and automated general-purpose data transfer tool. In general, setup time for the integrated system presented in this paper is 2-4 days for simple shape change concepts and 1-2 weeks for more mechanically complicated concepts.
Some of the key elements briefly described in the paper include parametric model development, aerodynamic database generation, multibody analysis, and the required software modules, as well as examples for a telescoping wing, a folding wing, and a bat-like wing. The paper also includes the verification of a medium-fidelity aerodynamic tool used for the aerodynamic database generation against a steady and unsteady high-fidelity CFD analysis tool for a folding wing example.
Automated Aerial Refueling Concept of Operations
2017-06-09
and their associated contractors. This document is suitable for release in the public domain; it may be included in DOD and NATO databases. Covered topics include tension disconnect (boom/receptacle only) and fuel leakage. The concept relies on Inertial Navigation System (INS) technology to provide high-availability, high-integrity, four-dimensional guidance. A robust datalink will be needed with the
Scheil-Gulliver Constituent Diagrams
NASA Astrophysics Data System (ADS)
Pelton, Arthur D.; Eriksson, Gunnar; Bale, Christopher W.
2017-06-01
During solidification of alloys, conditions often approach those of Scheil-Gulliver cooling, in which it is assumed that solid phases, once precipitated, remain unchanged; that is, they no longer react with the liquid or with each other. In the case of equilibrium solidification, equilibrium phase diagrams provide a valuable means of visualizing the effects of composition changes upon the final microstructure. In the present study, we propose for the first time the concept of Scheil-Gulliver constituent diagrams, which play the same role for Scheil-Gulliver cooling that equilibrium phase diagrams play for equilibrium solidification. It is shown how these diagrams can be calculated and plotted by the currently available thermodynamic database computing systems that combine Gibbs energy minimization software with large databases of optimized thermodynamic properties of solutions and compounds. Examples calculated using the FactSage system are presented for the Al-Li and Al-Mg-Zn systems, and for the Au-Bi-Sb-Pb system and its binary and ternary subsystems.
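For a binary alloy with a constant partition coefficient k, the no-back-reaction assumption leads to the classical Scheil equation, which a few lines of code can evaluate. The alloy values below are illustrative, not taken from the FactSage databases used in the paper.

```python
# Scheil-Gulliver cooling of a binary alloy with constant partition
# coefficient k: once solid forms it no longer reacts, so the liquid
# composition follows the Scheil equation
#   C_L = C_0 * (1 - f_s)**(k - 1)
# and the solid freezing at solid fraction f_s has composition C_s = k * C_L.
# c0 and k below are illustrative toy values.
def scheil_liquid_composition(c0: float, k: float, fs: float) -> float:
    """Liquid composition after solid fraction fs has frozen (0 <= fs < 1)."""
    return c0 * (1.0 - fs) ** (k - 1.0)

c0, k = 5.0, 0.2          # wt% solute, partition coefficient
for fs in (0.0, 0.5, 0.9):
    cl = scheil_liquid_composition(c0, k, fs)
    print(f"fs={fs:.1f}: C_L={cl:.2f} wt%, C_s={k * cl:.2f} wt%")
```

For k < 1 the liquid enriches continuously as solidification proceeds, which is why Scheil cooling predicts low-melting constituents persisting to the end of freezing; the constituent diagrams proposed above map out which such constituents appear as a function of overall composition.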
National Institute of Standards and Technology Data Gateway
SRD 103a NIST ThermoData Engine Database (PC database for purchase) ThermoData Engine is the first product fully implementing all major principles of the concept of dynamic data evaluation formulated at NIST/TRC.
A Computational Chemistry Database for Semiconductor Processing
NASA Technical Reports Server (NTRS)
Jaffe, R.; Meyyappan, M.; Arnold, J. O. (Technical Monitor)
1998-01-01
The concept of a 'virtual reactor' or 'virtual prototyping' has received much attention recently in the semiconductor industry. Commercial codes to simulate thermal CVD and plasma processes have become available to aid in equipment and process design efforts. The virtual prototyping effort would go nowhere if codes did not come with a reliable database of the chemical and physical properties of the gases involved in semiconductor processing. Commercial code vendors have no capability to generate such a database, leaving it to the user to find whatever is needed. While individual investigations of interesting chemical systems continue at universities, there has not been any large-scale effort to create a database. In this presentation, we outline our efforts in this area, which focus on the following five areas: 1. thermal CVD reaction mechanisms and rate constants; 2. thermochemical properties; 3. transport properties; 4. electron-molecule collision cross sections; and 5. gas-surface interactions.
Technical Challenges in the Development of a NASA Synthetic Vision System Concept
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Parrish, Russell V.; Kramer, Lynda J.; Harrah, Steve; Arthur, J. J., III
2002-01-01
Within NASA's Aviation Safety Program, the Synthetic Vision Systems Project is developing display system concepts to improve pilot terrain/situation awareness by providing a perspective synthetic view of the outside world through an on-board database, driven by precise aircraft positioning information updated via Global Positioning System-based data. This work is aimed at eliminating visibility-induced errors and low-visibility conditions as a causal factor in civil aircraft accidents, as well as replicating the operational benefits of clear-day flight operations regardless of the actual outside visibility condition. Synthetic vision research and development activities at NASA Langley Research Center are focused on a series of ground simulation and flight test experiments designed to evaluate, investigate, and assess the technology, which can lead to operational and certified synthetic vision systems. The technical challenges that have been encountered and that are anticipated in this research and development activity are summarized.
Pan, Xuequn; Cimino, James J
2014-01-01
Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
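The two gaps described above (concept-to-representation and representation-to-location) can be bridged by a knowledge structure that maps a concept of interest to the codes that instantiate it and to the EHR fields where those codes live. A minimal sketch, in which the table and column names are hypothetical (the ICD-10 codes E10/E11 are the real codes for type 1 and type 2 diabetes):

```python
# Knowledge-based query expansion: a small knowledge structure links a
# clinical concept of interest to narrower concepts, their codes, and
# the EHR locations to search. Table/column names are hypothetical.
KNOWLEDGE = {
    "diabetes_mellitus": {
        "narrower": ["type_1_diabetes", "type_2_diabetes"],
        "codes": {"type_1_diabetes": ["ICD:E10"], "type_2_diabetes": ["ICD:E11"]},
        "locations": ["diagnosis_table.icd_code"],
    }
}

def expand_query(concept: str):
    """Expand a concept of interest into codes plus the EHR fields to search."""
    entry = KNOWLEDGE[concept]
    codes = [c for sub in entry["narrower"] for c in entry["codes"][sub]]
    return {"codes": codes, "search_in": entry["locations"]}

print(expand_query("diabetes_mellitus"))
# → {'codes': ['ICD:E10', 'ICD:E11'], 'search_in': ['diagnosis_table.icd_code']}
```

The point of the paper's reproducible process is that such structures are built automatically from existing knowledge resources rather than hand-curated as in this toy.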
Analyzing high energy physics data using database computing: Preliminary report
NASA Technical Reports Server (NTRS)
Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry
1991-01-01
A proof-of-concept system is described for analyzing high energy physics (HEP) data using database computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting Super Collider (SSC) laboratory. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year during proton colliding-beam collisions. Each 'event' consists of a set of vectors with a total length of approximately one megabyte. This represents an increase of approximately 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is complete and can produce analyses of HEP experimental data approximately an order of magnitude faster than current production software on data sets of approximately 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.
The Impact of Online Bibliographic Databases on Teaching and Research in Political Science.
ERIC Educational Resources Information Center
Reichel, Mary
The availability of online bibliographic databases greatly facilitates literature searching in political science. The advantages to searching databases online include combination of concepts, comprehensiveness, multiple database searching, free-text searching, currency, current awareness services, document delivery service, and convenience.…
Using the Cambridge Structural Database to Teach Molecular Geometry Concepts in Organic Chemistry
ERIC Educational Resources Information Center
Wackerly, Jay Wm.; Janowicz, Philip A.; Ritchey, Joshua A.; Caruso, Mary M.; Elliott, Erin L.; Moore, Jeffrey S.
2009-01-01
This article reports a set of two homework assignments that can be used in a second-year undergraduate organic chemistry class. These assignments were designed to help reinforce concepts of molecular geometry and to give students the opportunity to use a technological database and data mining to analyze experimentally determined chemical…
ERIC Educational Resources Information Center
Hauge, Sharon K.
While functions and relations are important concepts in the teaching of mathematics, research suggests that many students lack an understanding and appreciation of these concepts. The present paper discusses an approach for teaching functions and relations that draws on the use of illustrations from database management. This approach has the…
Lessons Learned from Deploying an Analytical Task Management Database
NASA Technical Reports Server (NTRS)
O'Neil, Daniel A.; Welch, Clara; Arceneaux, Joshua; Bulgatz, Dennis; Hunt, Mitch; Young, Stephen
2007-01-01
Defining requirements, missions, technologies, and concepts for space exploration involves multiple levels of organizations, teams of people with complementary skills, and analytical models and simulations. Analytical activities range from filling in a To-Be-Determined (TBD) in a requirement to creating animations and simulations of exploration missions. In a program as large as returning to the Moon, there are hundreds of simultaneous analysis activities. A way to manage and integrate efforts of this magnitude is to deploy a centralized database that provides the capability to define tasks, identify resources, describe products, schedule deliveries, and generate a variety of reports. This paper describes a web-accessible task management system and explains the lessons learned during the development and deployment of the database. Through the database, managers and team leaders can define tasks, establish review schedules, assign teams, link tasks to specific requirements, identify products, and link the task data records to external repositories that contain the products. Data filters and spreadsheet export utilities provide a powerful capability to create custom reports. Import utilities provide a means to populate the database from previously filled form files. Within a four-month period, a small team analyzed requirements, developed a prototype, conducted multiple system demonstrations, and deployed a working system supporting hundreds of users across the aerospace community. Open-source technologies and agile software development techniques, applied by a skilled team, enabled this impressive achievement. Topics in the paper cover the web application technologies, agile software development, an overview of the system's functions and features, dealing with increasing scope, and deploying new versions of the system.
Scalable and expressive medical terminologies.
Mays, E; Weida, R; Dionne, R; Laker, M; White, B; Liang, C; Oles, F J
1996-01-01
The K-Rep system, based on description logic, is used to represent and reason with large and expressive controlled medical terminologies. Expressive concept descriptions incorporate semantically precise definitions composed using logical operators, together with important non-semantic information such as synonyms and codes. Examples are drawn from our experience with K-Rep in modeling the InterMed laboratory terminology and also developing a large clinical terminology now in production use at Kaiser-Permanente. System-level scalability of performance is achieved through an object-oriented database system which efficiently maps persistent memory to virtual memory. Equally important is conceptual scalability: the ability to support collaborative development, organization, and visualization of a substantial terminology as it evolves over time. K-Rep addresses this need by logically completing concept definitions and automatically classifying concepts in a taxonomy via subsumption inferences. The K-Rep system includes a general-purpose GUI environment for terminology development and browsing, a custom interface for formulary term maintenance, a C++ application program interface, and a distributed client-server mode which provides lightweight clients with efficient run-time access to K-Rep by means of a scripting language.
Matney, Susan; Bakken, Suzanne; Huff, Stanley M
2003-01-01
In recent years, the Logical Observation Identifiers, Names, and Codes (LOINC) Database has been expanded to include assessment items of relevance to nursing and in 2002 met the criteria for "recognition" by the American Nurses Association. Assessment measures in LOINC include those related to vital signs, obstetric measurements, clinical assessment scales, assessments from standardized nursing terminologies, and research instruments. In order for LOINC to be of greater use in implementing information systems that support nursing practice, additional content is needed. Moreover, those implementing systems for nursing practice must be aware of the manner in which LOINC codes for assessments can be appropriately linked with other aspects of the nursing process such as diagnoses and interventions. Such linkages are necessary to document nursing contributions to healthcare outcomes within the context of a multidisciplinary care environment and to facilitate building of nursing knowledge from clinical practice. The purposes of this paper are to provide an overview of the LOINC database, to describe examples of assessments of relevance to nursing contained in LOINC, and to illustrate linkages of LOINC assessments with other nursing concepts.
Field-Based Experiential Learning Using Mobile Devices
NASA Astrophysics Data System (ADS)
Hilley, G. E.
2015-12-01
Technologies such as GPS and cellular triangulation allow location-specific content to be delivered by mobile devices, but no mechanism currently exists to associate content shared between locations in a way that guarantees the delivery of coherent and non-redundant information at every location. Thus, experiential learning via mobile devices must currently take place along a predefined path, as in the case of a self-guided tour. I developed a mobile-device-based system that allows a person to move through a space along a path of their choosing, while receiving information in a way that guarantees delivery of appropriate background and location-specific information without producing redundancy of content between locations. This is accomplished by coupling content to knowledge-concept tags that are noted as fulfilled when users take prescribed actions. Similarly, the presentation of the content is related to the fulfillment of these knowledge-concept tags through logic statements that control the presentation. Content delivery is triggered by mobile-device geolocation including GPS/cellular navigation, and sensing of low-power Bluetooth proximity beacons. Together, these features implement a process that guarantees a coherent, non-redundant educational experience throughout a space, regardless of a learner's chosen path. The app that runs on the mobile device works in tandem with a server-side database and file-serving system that can be configured through a web-based GUI, and so content creators can easily populate and configure content with the system. Once the database has been updated, the new content is immediately available to the mobile devices when they arrive at the location at which content is required. Such a system serves as a platform for the development of field-based geoscience educational experiences, in which students can organically learn about core concepts at particular locations while individually exploring a space.
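The gating mechanism described above, in which content delivery depends on logic statements over fulfilled knowledge-concept tags, can be sketched in a few lines. This is a minimal illustration, not the actual system: the tag names, content items, and prerequisite/exclusion fields are hypothetical.

```python
# Sketch of location-triggered content gated by knowledge-concept tags.
# Tag names and content items are hypothetical illustrations.

class ConceptTracker:
    def __init__(self):
        self.fulfilled = set()

    def fulfill(self, tag):
        self.fulfilled.add(tag)

    def satisfies(self, requires, excludes=()):
        # Deliver content only if prerequisites are met and it would
        # not repeat concepts the learner has already covered.
        return set(requires) <= self.fulfilled and not (set(excludes) & self.fulfilled)

content = [
    {"id": "plate-boundary-intro", "requires": [], "excludes": ["plate-boundary-intro"]},
    {"id": "fault-slip-detail", "requires": ["plate-boundary-intro"],
     "excludes": ["fault-slip-detail"]},
]

def deliver(tracker, location_content):
    shown = []
    for item in location_content:
        if tracker.satisfies(item["requires"], item["excludes"]):
            shown.append(item["id"])
            tracker.fulfill(item["id"])
    return shown

tracker = ConceptTracker()
print(deliver(tracker, content))  # both items, background before detail
print(deliver(tracker, content))  # [] -- no redundant redelivery
```

Whatever path the learner takes, background content is delivered before detail that depends on it, and nothing is repeated, which is the coherence/non-redundancy guarantee the abstract describes.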
Bookey-Bassett, Sue; Markle-Reid, Maureen; Mckey, Colleen A; Akhtar-Danesh, Noori
2017-01-01
To report a concept analysis of interprofessional collaboration in the context of chronic disease management, for older adults living in communities. Increasing prevalence of chronic disease among older adults is creating significant burden for patients, families and healthcare systems. Managing chronic disease for older adults living in the community requires interprofessional collaboration across different health and other care providers, organizations and sectors. However, there is a lack of consensus about the definition and use of interprofessional collaboration for community-based chronic disease management. Concept analysis. Electronic databases CINAHL, Medline, HealthStar, EMBASE, PsychINFO, Ageline and Cochrane Database were searched from 2000 - 2013. Rodgers' evolutionary method for concept analysis. The most common surrogate term was interdisciplinary collaboration. Related terms were interprofessional team, multidisciplinary team and teamwork. Attributes included: an evolving interpersonal process; shared goals, decision-making and care planning; interdependence; effective and frequent communication; evaluation of team processes; involving older adults and family members in the team; and diverse and flexible team membership. Antecedents comprised: role awareness; interprofessional education; trust between team members; belief that interprofessional collaboration improves care; and organizational support. Consequences included impacts on team composition and function, care planning processes and providers' knowledge, confidence and job satisfaction. Interprofessional collaboration is a complex evolving concept. Key components of interprofessional collaboration in chronic disease management for community-living older adults are identified. Implications for nursing practice, education and research are proposed. © 2016 John Wiley & Sons Ltd.
The Methods of Cognitive Visualization for the Astronomical Databases Analyzing Tools Development
NASA Astrophysics Data System (ADS)
Vitkovskiy, V.; Gorohov, V.
2008-08-01
There are two kinds of computer graphics: illustrative and cognitive. Well-designed cognitive pictures not only make the sense of complex and difficult scientific concepts evident and clear, but also, not infrequently, promote the birth of new knowledge. On the basis of the cognitive-graphics concept, we developed a software system for visualization and analysis. It allows researchers to train and sharpen their intuition, raises their interest in and motivation for creative scientific cognition, and realizes a process of direct dialogue with the problem itself.
[The concept "a case in outpatient treatment" in military policlinic activity].
Vinogradov, S N; Vorob'ev, E G; Shklovskiĭ, B L
2014-04-01
The paper substantiates the need for military polyclinics to transition to a system of accounting and evaluating their activity based on completed cases of outpatient treatment. Only the automation of medical-statistical processes can solve this problem. Based on an analysis of the literature, the requirements of guidance documents, and observational results, the authors conclude that the existing concepts of medical statistics must first be revised (formalized) from the perspective of the information environment now in use, namely electronic databases. In this light, the main features of the outpatient treatment case as a unit of medical-statistical record are specified and its definition is formulated.
On Hunting Animals of the Biometric Menagerie for Online Signature
Houmani, Nesma; Garcia-Salicetti, Sonia
2016-01-01
Individuals behave differently with respect to biometric authentication systems. This fact was formalized in the literature by the concept of the Biometric Menagerie, which defines and labels user groups with animal names to reflect their characteristics with respect to biometric systems. This concept has been illustrated for the face, fingerprint, iris, and speech modalities. The present study extends the Biometric Menagerie to online signatures by proposing a novel methodology that ties specific quality measures for signatures to categories of the Biometric Menagerie. Such measures are combined to automatically retrieve writer categories of the extended version of the Biometric Menagerie. Performance analysis with different types of classifiers shows the pertinence of our approach on the well-known MCYT-100 database. PMID:27054836
PGSB/MIPS Plant Genome Information Resources and Concepts for the Analysis of Complex Grass Genomes.
Spannagl, Manuel; Bader, Kai; Pfeifer, Matthias; Nussbaumer, Thomas; Mayer, Klaus F X
2016-01-01
PGSB (Plant Genome and Systems Biology; formerly MIPS-Munich Institute for Protein Sequences) has been involved in developing, implementing and maintaining plant genome databases for more than a decade. Genome databases and analysis resources have focused on individual genomes and aim to provide flexible and maintainable datasets for model plant genomes as a backbone against which experimental data, e.g., from high-throughput functional genomics, can be organized and analyzed. In addition, genomes from both model and crop plants form a scaffold for comparative genomics, assisted by specialized tools such as the CrowsNest viewer to explore conserved gene order (synteny) between related species on macro- and micro-levels. The genomes of many economically important Triticeae plants such as wheat, barley, and rye present a great challenge for sequence assembly and bioinformatic analysis due to their enormous complexity and large genome size. Novel concepts and strategies have been developed to deal with these difficulties and have been applied to the genomes of wheat, barley, rye, and other cereals. This includes the GenomeZipper concept, reference-guided exome assembly, and "chromosome genomics" based on flow cytometry sorted chromosomes.
Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution
NASA Astrophysics Data System (ADS)
Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.
2017-10-01
Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.
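The incidence-graph idea mentioned above can be illustrated with a tiny consistency check over a 2D cell complex. This is a minimal sketch under simplifying assumptions (named cells, triangular faces), not DB4GeO's actual data model.

```python
# Minimal incidence-graph sketch for checking topological consistency
# of a 2D cell complex (vertices, edges, triangular faces).
# The structures and checks are illustrative, not DB4GeO's model.

vertices = {"v1", "v2", "v3"}
edges = {"e1": ("v1", "v2"), "e2": ("v2", "v3"), "e3": ("v3", "v1")}
faces = {"f1": ("e1", "e2", "e3")}

def consistent(vertices, edges, faces):
    # Each edge must be incident to two distinct known vertices.
    for va, vb in edges.values():
        if va == vb or va not in vertices or vb not in vertices:
            return False
    # Each face must be bounded by known edges whose vertices
    # close a cycle: every boundary vertex appears exactly twice.
    for boundary in faces.values():
        if any(e not in edges for e in boundary):
            return False
        incident = [v for e in boundary for v in edges[e]]
        if any(incident.count(v) != 2 for v in set(incident)):
            return False  # open boundary: some vertex used only once
    return True

print(consistent(vertices, edges, faces))  # True for the closed triangle
```

Storing only the incidence relations (which lower-dimensional cells bound each cell) lets such checks run locally per cell, which is what makes the approach attractive for big, partitioned geo-spatial data.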
NASA Technical Reports Server (NTRS)
Weber, Gary A.
1991-01-01
During the 90-day study, support was provided to NASA in defining a point-of-departure space transfer vehicle (STV). The resulting STV concept was performance optimized with a two-stage LTV/LEV configuration. Appendix A reports on the effort during this period of the study. From the end of the 90-day study until the March Interim Review, effort was placed on optimizing the two-stage vehicle approach identified in the 90-day effort. After the March Interim Review, the effort was expanded to perform a full architectural trade study with the intent of developing a decision database to support STV system decisions in response to changing SEI infrastructure concepts. Several of the architecture trade studies were combined in a System Architecture Trade Study. In addition to this trade, system optimization/definition trades and analyses were completed and some special topics were addressed. Program- and system-level trade study and analyses methodologies and results are presented in this section. Trades and analyses covered in this section are: (1) a system architecture trade study; (2) evolution; (3) safety and abort considerations; (4) STV as a launch vehicle upper stage; and (5) optimum crew and cargo split.
Simple Logic for Big Problems: An Inside Look at Relational Databases.
ERIC Educational Resources Information Center
Seba, Douglas B.; Smith, Pat
1982-01-01
Discusses database design concept termed "normalization" (process replacing associations between data with associations in two-dimensional tabular form) which results in formation of relational databases (they are to computers what dictionaries are to spoken languages). Applications of the database in serials control and complex systems…
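The normalization idea summarized above can be made concrete with a small example in the serials-control spirit of the abstract. This is an illustrative sketch: the table and column names are hypothetical, and SQLite stands in for any relational system.

```python
# Illustrative normalization: a flat serials-control record with a
# repeating group of (issue, received) pairs is split into two
# relations joined on a key. Names are hypothetical.

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Normalized form: one journal table, one issue table (first normal form),
# replacing the repeating group with a one-to-many association.
cur.execute("CREATE TABLE journal (id INTEGER PRIMARY KEY, title TEXT)")
cur.execute("""CREATE TABLE issue (
    journal_id INTEGER REFERENCES journal(id),
    number TEXT, received TEXT)""")

cur.execute("INSERT INTO journal VALUES (1, 'Online Review')")
cur.executemany("INSERT INTO issue VALUES (1, ?, ?)",
                [("6(1)", "1982-02-01"), ("6(2)", "1982-04-03")])

rows = cur.execute("""SELECT j.title, i.number FROM journal j
                      JOIN issue i ON i.journal_id = j.id
                      ORDER BY i.number""").fetchall()
print(rows)
```

The association between a journal and its issues now lives entirely in two-dimensional tables recombined by a join, which is the essence of the relational model the abstract describes.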
Volcanic observation data and simulation database at NIED, Japan (Invited)
NASA Astrophysics Data System (ADS)
Fujita, E.; Ueda, H.; Kozono, T.
2009-12-01
NIED (Nat’l Res. Inst. for Earth Sci. & Disast. Prev.) has a project to develop two volcanic database systems: (1) a volcanic observation database; (2) a volcanic simulation database. The volcanic observation database is the archive center for data obtained by the geophysical observation networks at Mt. Fuji, Miyake, Izu-Oshima, Iwo-jima and Nasu volcanoes, central Japan. The data consist of seismic records (both high-sensitivity and broadband), ground deformation (tiltmeter, GPS) and other sensors (e.g., rain gauge, gravimeter, magnetometer, pressure gauge). These data are originally stored in “WIN format,” the Japanese standard format, which is also used by Hi-net (High sensitivity seismic network Japan, http://www.hinet.bosai.go.jp/). NIED has joined WOVOdat and we have prepared to upload our data via an XML format. Our concept for the XML format is that it serves as: 1) a common format for intermediate files uploaded into the WOVOdat DB; 2) a format for data files downloaded from the WOVOdat DB; 3) a format for data exchange between observatories without the WOVOdat DB; 4) a common data-file format within each observatory; 5) a format for data communication between systems and software; and 6) a format for software tools. NIED is now preparing (2) the volcanic simulation database. The objective of this project is to support the development of a “real-time” hazard map, i.e., a system that is effective for evaluating volcanic hazard in an emergency, incorporating up-to-date conditions. Our system will include lava flow simulation (LavaSIM) and pyroclastic flow simulation (grvcrt). The database will hold many precomputed simulation cases, so that the most probable case can be selected as a first evaluation when an eruption starts. The final goal of both databases is to realize volcanic eruption prediction and forecasting in real time through the combination of monitoring data and numerical simulations.
The Computerized Laboratory Notebook concept for genetic toxicology experimentation and testing.
Strauss, G H; Stanford, W L; Berkowitz, S J
1989-03-01
We describe a microcomputer system utilizing the Computerized Laboratory Notebook (CLN) concept developed in our laboratory for the purpose of automating the Battery of Leukocyte Tests (BLT). The BLT was designed to evaluate blood specimens for toxic, immunotoxic, and genotoxic effects after in vivo exposure to putative mutagens. A system was developed with the advantages of low cost, limited spatial requirements, ease of use for personnel inexperienced with computers, and applicability to specific testing yet flexibility for experimentation. This system eliminates cumbersome record keeping and repetitive analysis inherent in genetic toxicology bioassays. Statistical analysis of the vast quantity of data produced by the BLT would not be feasible without a central database. Our central database is maintained by an integrated package which we have adapted to develop the CLN. The clonal assay of lymphocyte mutagenesis (CALM) section of the CLN is demonstrated. PC-Slaves expand the microcomputer to multiple workstations so that our computerized notebook can be used next to a hood while other work is done in an office and instrument room simultaneously. Communication with peripheral instruments is an indispensable part of many laboratory operations, and we present a representative program, written to acquire and analyze CALM data, for communicating with both a liquid scintillation counter and an ELISA plate reader. In conclusion we discuss how our computer system could easily be adapted to the needs of other laboratories.
New tools and methods for direct programmatic access to the dbSNP relational database.
Saccone, Scott F; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A; Rice, John P
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale.
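The task-oriented table idea described above (small custom tables derived from the full database to serve common queries) can be sketched as follows. SQLite stands in for the MySQL implementation here, and the table and column names are simplified placeholders, not the real dbSNP schema.

```python
# Sketch of a task-oriented custom table: derive a compact table
# from a larger SNP table to speed a common lookup. SQLite stands
# in for MySQL; names are hypothetical, not the dbSNP schema.

import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE snp (rs_id TEXT, chrom TEXT, pos INTEGER, gene TEXT)")
cur.executemany("INSERT INTO snp VALUES (?, ?, ?, ?)", [
    ("rs123", "1", 1000, "GENE_A"),
    ("rs456", "2", 2000, "GENE_B"),
    ("rs789", "1", 1500, "GENE_A"),
])

# Task table: SNP-to-gene mapping for annotating association results,
# indexed on the column the task always filters by.
cur.execute("CREATE TABLE task_snp_gene AS SELECT rs_id, gene FROM snp")
cur.execute("CREATE INDEX idx_task_rs ON task_snp_gene(rs_id)")

gene = cur.execute(
    "SELECT gene FROM task_snp_gene WHERE rs_id = 'rs456'").fetchone()[0]
print(gene)
```

The point of the design is that each recurring task gets a small, indexed relation with a simple entity-relationship shape, rather than forcing every query through the full multi-table schema.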
Investigation of DBMS for Use in a Research Environment. Rand Paper Series 7002.
ERIC Educational Resources Information Center
Rosenfeld, Pilar N.
This investigation of the use of database management systems (DBMS) in a research environment used the Rand Corporation as a case study. After a general introduction in section 1, eight sections present the major components of the study. Section 2 contains an overview of DBMS terminology and concepts, followed in section 3 by a general description…
Building an R&D chemical registration system.
Martin, Elyette; Monge, Aurélien; Duret, Jacques-Antoine; Gualandi, Federico; Peitsch, Manuel C; Pospisil, Pavel
2012-05-31
Small molecule chemistry is of central importance to a number of R&D companies in diverse areas such as the pharmaceutical, nutraceutical, food flavoring, and cosmeceutical industries. In order to store and manage thousands of chemical compounds in such an environment, we have built a state-of-the-art master chemical database with unique structure identifiers. Here, we present the concept and methodology we used to build the system that we call the Unique Compound Database (UCD). In the UCD, each molecule is registered only once (uniqueness), structures with alternative representations are entered in a uniform way (normalization), and the chemical structure drawings are recognizable to chemists and to a cartridge. In brief, structural molecules are entered as neutral entities which can be associated with a salt. The salts are listed in a dictionary and bound to the molecule with the appropriate stoichiometric coefficient in an entity called "substance". The substances are associated with batches. Once a molecule is registered, some properties (e.g., ADMET prediction, IUPAC name, chemical properties) are calculated automatically. The UCD has both automated and manual data controls. Moreover, the UCD concept enables the management of user errors in the structure entry by reassigning or archiving the batches. It also allows updating of the records to include newly discovered properties of individual structures. As our research spans a wide variety of scientific fields, the database enables registration of mixtures of compounds, enantiomers, tautomers, and compounds with unknown stereochemistries.
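The registration logic described above (one molecule per unique normalized structure, salts from a dictionary, substances binding a molecule to a salt with a stoichiometric coefficient) can be sketched minimally. The canonicalization here is a crude stand-in (a sorted formula string) for the structure-cartridge normalization the UCD actually performs, and all names are hypothetical.

```python
# Sketch of UCD-style registration: a molecule registers once under a
# canonical key; a substance binds it to a salt with a stoichiometric
# coefficient. The canonical() stand-in is NOT real chemical
# normalization, just enough to illustrate uniqueness.

salts = {"HCl": 1.0, "H2SO4": 0.5}   # hypothetical salt dictionary

class Registry:
    def __init__(self):
        self.molecules = {}    # canonical key -> molecule id
        self.substances = []   # (molecule id, salt, coefficient)

    def canonical(self, structure):
        return "".join(sorted(structure.replace(" ", "")))

    def register_molecule(self, structure):
        key = self.canonical(structure)
        # Uniqueness: equivalent entries map to one molecule id.
        if key not in self.molecules:
            self.molecules[key] = len(self.molecules) + 1
        return self.molecules[key]

    def register_substance(self, structure, salt=None, coeff=1):
        if salt is not None and salt not in salts:
            raise ValueError("salt not in dictionary")
        mol_id = self.register_molecule(structure)
        self.substances.append((mol_id, salt, coeff))
        return mol_id

reg = Registry()
a = reg.register_substance("C6H6")          # neutral entity
b = reg.register_substance("6H6C", "HCl")   # same molecule, as a salt
print(a == b, len(reg.molecules))  # True 1
```

Registering the salt form creates a second substance but no second molecule, which mirrors the separation of molecule, salt, and substance in the abstract.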
WMC Database Evaluation. Case Study Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palounek, Andrea P. T
The WMC Database is ultimately envisioned to hold a collection of experimental data, design information, and information from computational models. This project was a first attempt at using the Database to access experimental data and extract information from it. This evaluation shows that the Database concept is sound and robust, and that the Database, once fully populated, should remain eminently usable for future researchers.
BNDB - the Biochemical Network Database.
Küntzer, Jan; Backes, Christina; Blum, Torsten; Gerasch, Andreas; Kaufmann, Michael; Kohlbacher, Oliver; Lenhof, Hans-Peter
2007-10-02
Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. We present the Biochemical Network Database (BNDB), a powerful relational database platform, allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.
Propellant Mass Gauging: Database of Vehicle Applications and Research and Development Studies
NASA Technical Reports Server (NTRS)
Dodge, Franklin T.
2008-01-01
Gauging the mass of propellants in a tank in low gravity is not a straightforward task because of the uncertainty of the liquid configuration in the tank and the possibility of there being more than one ullage bubble. Several concepts for such a low-gravity gauging system have been proposed, and breadboard or flight-like versions have been tested in normal gravity or even in low gravity, but at present, a flight-proven reliable gauging system is not available. NASA desired a database of the gauging techniques used in current and past vehicles during ascent or under settled conditions, and during short coasting (unpowered) periods, for both cryogenic and storable propellants. Past and current research and development efforts on gauging systems that are believed to be applicable in low-gravity conditions were also desired. This report documents the results of that survey.
Negative Effects of Learning Spreadsheet Management on Learning Database Management
ERIC Educational Resources Information Center
Vágner, Anikó; Zsakó, László
2015-01-01
A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…
Studies of Big Data metadata segmentation between relational and non-relational databases
NASA Astrophysics Data System (ADS)
Golosova, M. V.; Grigorieva, M. A.; Klimentov, A. A.; Ryabinkin, E. A.; Dimitrov, G.; Potekhin, M.
2015-12-01
In recent years the concepts of Big Data have become well established in IT. Systems managing large data volumes produce metadata that describe data and workflows. These metadata are used to obtain information about the current system state and for statistical and trend analysis of the processes these systems drive. Over time, the amount of stored metadata can grow dramatically. In this article we present our studies demonstrating how metadata storage scalability and performance can be improved by using a hybrid RDBMS/NoSQL architecture.
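One common form of the hybrid segmentation described above keeps frequently queried, structured fields in the relational store while pushing rarely queried, schema-flexible attributes to a document store. The sketch below is an illustration under assumed names: SQLite stands in for the RDBMS and a plain dict for the NoSQL side; the fields are not the actual system's schema.

```python
# Sketch of hybrid metadata segmentation: hot, structured fields in
# an RDBMS (SQLite stand-in); cold, schema-flexible attributes in a
# NoSQL-style document store (a dict stand-in). Names are illustrative.

import sqlite3
import json

rdbms = sqlite3.connect(":memory:")
rdbms.execute("CREATE TABLE task (id INTEGER PRIMARY KEY, status TEXT, owner TEXT)")
docstore = {}   # task id -> JSON document of rarely-used metadata

def store(task_id, status, owner, extra):
    rdbms.execute("INSERT INTO task VALUES (?, ?, ?)", (task_id, status, owner))
    docstore[task_id] = json.dumps(extra)

store(1, "running", "alice",
      {"attempt_params": {"retries": 3}, "notes": "free-form payload"})

# A trend query touches only the compact relational side...
running = rdbms.execute(
    "SELECT COUNT(*) FROM task WHERE status='running'").fetchone()[0]
# ...while a drill-down fetches the flexible document on demand.
detail = json.loads(docstore[1])
print(running, detail["attempt_params"]["retries"])  # 1 3
```

Keeping the relational table narrow is what preserves query performance as the bulky, variable part of the metadata grows on the NoSQL side.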
Symbolic rule-based classification of lung cancer stages from free-text pathology reports.
Nguyen, Anthony N; Lawley, Michael J; Hansen, David P; Bowman, Rayleen V; Clarke, Belinda E; Duhig, Edwina E; Colquist, Shoni
2010-01-01
To automatically classify lung tumor-node-metastases (TNM) cancer stages from free-text pathology reports using symbolic rule-based classification. By exploiting report substructure and the symbolic manipulation of systematized nomenclature of medicine-clinical terms (SNOMED CT) concepts in reports, statements in free text can be evaluated for relevance against factors relating to the staging guidelines. Post-coordinated SNOMED CT expressions based on templates were defined and populated by concepts in reports, and tested for subsumption by staging factors. The subsumption results were used to build logic according to the staging guidelines to calculate the TNM stage. The accuracy measure and confusion matrices were used to evaluate the TNM stages classified by the symbolic rule-based system. The system was evaluated against a database of multidisciplinary team staging decisions and a machine learning-based text classification system using support vector machines. Overall accuracies on a corpus of pathology reports for 718 lung cancer patients against a database of pathological TNM staging decisions were 72%, 78%, and 94% for T, N, and M staging, respectively. The system's performance was also comparable to support vector machine classification approaches. A system to classify lung TNM stages from free-text pathology reports was developed, and it was verified that the symbolic rule-based approach using SNOMED CT can be used for the extraction of key lung cancer characteristics from free-text reports. Future work will investigate the applicability of using the proposed methodology for extracting other cancer characteristics and types.
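The final step of the pipeline above, combining extracted staging factors into a stage via guideline logic, can be sketched as simple rules. The thresholds and factor names below are illustrative placeholders, not the real TNM guidelines or the authors' rule set.

```python
# Simplified sketch of rule-based staging: report-derived factors are
# tested against rules and combined into a (T, N, M) stage. The
# thresholds are illustrative, NOT the actual TNM guidelines.

def t_stage(factors):
    if factors.get("tumor_size_mm", 0) <= 30 and not factors.get("invades_pleura"):
        return "T1"
    if factors.get("tumor_size_mm", 0) <= 50:
        return "T2"
    return "T3"

def n_stage(factors):
    return "N1" if factors.get("ipsilateral_nodes") else "N0"

def m_stage(factors):
    return "M1" if factors.get("distant_metastasis") else "M0"

# Factors as they might arrive from concept extraction on one report.
report_factors = {"tumor_size_mm": 25, "invades_pleura": False,
                  "ipsilateral_nodes": True, "distant_metastasis": False}

stage = (t_stage(report_factors), n_stage(report_factors), m_stage(report_factors))
print(stage)  # ('T1', 'N1', 'M0')
```

In the actual system the boolean factors are not hand-coded fields but the results of SNOMED CT subsumption tests on post-coordinated expressions; the sketch only shows how those results feed the staging logic.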
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velsko, S. P.
The microbial DNA Index System (MiDIS) is a concept for a microbial forensic database and investigative decision support system that can be used to help investigators identify the sources of microbial agents that have been used in a criminal or terrorist incident. The heart of the proposed system is a rigorous method for calculating source probabilities by using certain fundamental sampling distributions associated with the propagation and mutation of microbes on disease transmission networks. This formalism has a close relationship to mitochondrial and Y-chromosomal human DNA forensics, and the proposed decision support system is somewhat analogous to the CODIS and SWGDAM mtDNA databases. The MiDIS concept does not involve the use of opportunistic collections of microbial isolates and phylogenetic tree building as a basis for inference. A staged approach can be used to build MiDIS as an enduring capability, beginning with a pilot demonstration program that must meet user expectations for performance and validation before evolving into a continuing effort. Because MiDIS requires input from a broad array of expertise including outbreak surveillance, field microbial isolate collection, microbial genome sequencing, disease transmission networks, and laboratory mutation rate studies, it will be necessary to assemble a national multi-laboratory team to develop such a system. The MiDIS effort would lend direction and focus to the national microbial genetics research program for microbial forensics, and would provide an appropriate forensic framework for interfacing to future national and international disease surveillance efforts.
The method of abstraction in the design of databases and interoperability
NASA Astrophysics Data System (ADS)
Yakovlev, Nikolay
2018-03-01
When designing a database structure oriented to the indicators contained in the documents and communications of a subject area, the method of abstraction can be applied in two ways. First, the set of indicators can be extended with new, artificially constructed abstract concepts. The use of abstract concepts makes it possible to avoid registering many-to-many relations; for this reason, structures built using abstract concepts demonstrate greater stability as the subject area evolves. An example of such an abstract concept in addressing is the unique house number. Second, the method of abstraction can be used to transform concepts by omitting attributes that are unnecessary for solving certain classes of problems. Data processing associated with the amended concepts is simpler, without losing the possibility of solving the classes of problems under consideration. For example, the concept "street" loses its binding to the land: the content of the modified concept "street" is only the relation of houses to the declared name, which is sufficient for most accounting and communication tasks.
Text Mining the Biomedical Literature
2007-11-05
activities, and repeating past mistakes, or 3) agencies not participating in joint efforts that would fully exploit each agency’s strengths...research and joint projects (multi- department, multi-agency, multi-national, and government-industry) appropriate? • Is the balance among single...overall database taxonomy, i.e., are there any concepts missing from any of the databases, and even if not, do all the concepts bear the same
ERIC Educational Resources Information Center
Bell, Steven J.
2003-01-01
Discusses full-text databases and whether existing aggregator databases are meeting user needs. Topics include the need for better search interfaces; concepts of quality research and information retrieval; information overload; full text in electronic journal collections versus aggregator databases; underrepresentation of certain disciplines; and…
Kaas, Quentin; Ruiz, Manuel; Lefranc, Marie-Paule
2004-01-01
IMGT/3Dstructure-DB and IMGT/StructuralQuery are a novel 3D structure database and a new tool for immunological proteins. They are part of IMGT, the international ImMunoGenetics information system®, a high-quality integrated knowledge resource specializing in immunoglobulins (IG), T cell receptors (TR), major histocompatibility complex (MHC) and related proteins of the immune system (RPI) of human and other vertebrate species, which consists of databases, Web resources and interactive on-line tools. IMGT/3Dstructure-DB data are described according to the IMGT Scientific chart rules based on the IMGT-ONTOLOGY concepts. IMGT/3Dstructure-DB provides IMGT gene and allele identification of IG, TR and MHC proteins with known 3D structures, domain delimitations, amino acid positions according to the IMGT unique numbering and renumbered coordinate flat files. Moreover, IMGT/3Dstructure-DB provides 2D graphical representations (or Collier de Perles) and results of contact analysis. The IMGT/StructuralQuery tool allows search of this database based on specific structural characteristics. IMGT/3Dstructure-DB and IMGT/StructuralQuery are freely available at http://imgt.cines.fr. PMID:14681396
Safety climate and culture: Integrating psychological and systems perspectives.
Casey, Tristan; Griffin, Mark A; Flatau Harrison, Huw; Neal, Andrew
2017-07-01
Safety climate research has reached a mature stage of development, with a number of meta-analyses demonstrating the link between safety climate and safety outcomes. More recently, there has been interest from systems theorists in integrating the concepts of safety culture and, to a lesser extent, safety climate into systems-based models of organizational safety. Such models represent a theoretical and practical development of the safety climate concept by positioning climate as part of a dynamic work system in which perceptions of safety act to constrain and shape employee behavior. We propose that safety climate and safety culture constitute part of the enabling capitals through which organizations build safety capability. We discuss how organizations can deploy different configurations of enabling capital to exert control over work systems and maintain safe and productive performance. We outline 4 key strategies through which organizations can reconcile the system control problems of promotion versus prevention, and stability versus flexibility. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
High-Performance Secure Database Access Technologies for HEP Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthew Vranicar; John Weicher
2006-04-17
The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist’s computer used for analysis. Very few efforts are on-going in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications.” There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing.
We believe that an innovative database architecture where the secure authorization is pushed into the database engine will eliminate inefficient data transfer bottlenecks. Furthermore, traditionally separated database and security layers provide an extra vulnerability, leaving a weak clear-text password authorization as the only protection on the database core systems. Due to the legacy limitations of the systems’ security models, the allowed passwords often cannot even comply with the DOE password guideline requirements. We see an opportunity for the tight integration of the secure authorization layer with the database server engine resulting in both improved performance and improved security. Phase I has focused on the development of a proof-of-concept prototype using Argonne National Laboratory’s (ANL) Argonne Tandem-Linac Accelerator System (ATLAS) project as a test scenario. By developing a grid-security enabled version of the ATLAS project’s current relational database solution, MySQL, PIOCON Technologies aims to offer a more efficient solution to secure database access.
Techniques for Efficiently Managing Large Geosciences Data Sets
NASA Astrophysics Data System (ADS)
Kruger, A.; Krajewski, W. F.; Bradley, A. A.; Smith, J. A.; Baeck, M. L.; Steiner, M.; Lawrence, R. E.; Ramamurthy, M. K.; Weber, J.; Delgreco, S. A.; Domaszczynski, P.; Seo, B.; Gunyon, C. A.
2007-12-01
We have developed techniques and software tools for efficiently managing large geosciences data sets. While the techniques were developed as part of an NSF-Funded ITR project that focuses on making NEXRAD weather data and rainfall products available to hydrologists and other scientists, they are relevant to other geosciences disciplines that deal with large data sets. Metadata, relational databases, data compression, and networking are central to our methodology. Data and derived products are stored on file servers in a compressed format. URLs to, and metadata about the data and derived products are managed in a PostgreSQL database. Virtually all access to the data and products is through this database. Geosciences data normally require a number of processing steps to transform the raw data into useful products: data quality assurance, coordinate transformations and georeferencing, applying calibration information, and many more. We have developed the concept of crawlers that manage this scientific workflow. Crawlers are unattended processes that run indefinitely, and at set intervals query the database for their next assignment. A database table functions as a roster for the crawlers. Crawlers perform well-defined tasks that are, except for perhaps sequencing, largely independent from other crawlers. Once a crawler is done with its current assignment, it updates the database roster table, and gets its next assignment by querying the database. We have developed a library that enables one to quickly add crawlers. The library provides hooks to external (i.e., C-language) compiled codes, so that developers can work and contribute independently. Processes called ingesters inject data into the system. The bulk of the data are from a real-time feed using UCAR/Unidata's IDD/LDM software. An exciting recent development is the establishment of a Unidata HYDRO feed that feeds value-added metadata over the IDD/LDM. 
Ingesters grab the metadata and populate the PostgreSQL tables. These and other concepts we have developed have enabled us to efficiently manage a 70 TB (and growing) weather radar data set.
Concepts to Support HRP Integration Using Publications and Modeling
NASA Technical Reports Server (NTRS)
Mindock, J.; Lumpkins, S.; Shelhamer, M.
2014-01-01
Initial efforts are underway to enhance the Human Research Program (HRP)'s identification and support of potential cross-disciplinary scientific collaborations. To increase the emphasis on integration in HRP's science portfolio management, concepts are being explored through the development of a set of tools. These tools are intended to enable modeling, analysis, and visualization of the state of the human system in the spaceflight environment; HRP's current understanding of that state with an indication of uncertainties; and how that state changes due to HRP programmatic progress and design reference mission definitions. In this talk, we will discuss proof-of-concept work performed using a subset of publications captured in the HRP publications database. The publications were tagged in the database with words representing factors influencing health and performance in spaceflight, as well as with words representing the risks HRP research is reducing. Analysis was performed on the publication tag data to identify relationships between factors and between risks. Network representations were then created as one type of visualization of these relationships. This enables future analyses of the structure of the networks based on results from network theory. Such analyses can provide insights into HRP's current human system knowledge state as informed by the publication data. The network structure analyses can also elucidate potential improvements by identifying network connections to establish or strengthen for maximized information flow. The relationships identified in the publication data were subsequently used as inputs to a model captured in the Systems Modeling Language (SysML), which functions as a repository for relationship information to be gleaned from multiple sources. Example network visualization outputs from a simple SysML model were then also created to compare to the visualizations based on the publication data only. 
We will also discuss ideas for building upon this proof-of-concept work to further support an integrated approach to human spaceflight risk reduction.
Development and validation of a Database Forensic Metamodel (DBFM)
Al-dhaqm, Arafat; Razak, Shukor; Othman, Siti Hajar; Ngadi, Asri; Ahmed, Mohammed Nazir; Ali Mohammed, Abdulalem
2017-01-01
Database Forensics (DBF) is a widespread area of knowledge. It has many complex features and is well known amongst database investigators and practitioners. Several models and frameworks have been created specifically to allow knowledge-sharing and effective DBF activities. However, these are often narrow in focus and address specific database incident types. We have analysed 60 such models in an attempt to uncover how many DBF activities are shared in common even when their descriptions vary, and we then generated a unified abstract view of DBF in the form of a metamodel. We identified and extracted common concepts and reconciled their definitions to propose the metamodel. We applied a metamodelling process to guarantee that this metamodel is comprehensive and consistent. PMID:28146585
Bio-psycho-social factors affecting sexual self-concept: A systematic review.
Potki, Robabeh; Ziaei, Tayebe; Faramarzi, Mahbobeh; Moosazadeh, Mahmood; Shahhosseini, Zohreh
2017-09-01
Nowadays, it is believed that the mental and emotional aspects of sexual well-being are important aspects of sexual health. Sexual self-concept is a major component of sexual health and the core of sexuality. It is defined as the cognitive perspective concerning the sexual aspects of 'self' and refers to the individual's self-perception as a sexual creature. The aim of this study was to assess the different factors affecting sexual self-concept. English electronic databases including PubMed, Scopus, Web of Science and Google Scholar as well as two Iranian databases including Scientific Information Database and Iranmedex were searched for English and Persian-language articles published between 1996 and 2016. Of 281 retrieved articles, 37 articles were finally included for writing this review article. Factors affecting sexual self-concept were categorized into biological, psychological and social factors. In the category of biological factors, age, gender, marital status, race, disability and sexually transmitted infections are described. In the psychological category, the impact of body image, sexual abuse in childhood and mental health history is presented. Lastly, in the social category, the roles of parents, peers and the media are discussed. As the development of sexual self-concept is influenced by multiple events in individuals' lives, an integrated implementation of health policies is recommended to promote sexual self-concept.
CRAVE: a database, middleware and visualization system for phenotype ontologies.
Gkoutos, Georgios V; Green, Eain C J; Greenaway, Simon; Blake, Andrew; Mallon, Ann-Marie; Hancock, John M
2005-04-01
A major challenge in modern biology is to link genome sequence information to organismal function. In many organisms this is being done by characterizing phenotypes resulting from mutations. Efficiently expressing phenotypic information requires combinatorial use of ontologies. However tools are not currently available to visualize combinations of ontologies. Here we describe CRAVE (Concept Relation Assay Value Explorer), a package allowing storage, active updating and visualization of multiple ontologies. CRAVE is a web-accessible JAVA application that accesses an underlying MySQL database of ontologies via a JAVA persistent middleware layer (Chameleon). This maps the database tables into discrete JAVA classes and creates memory resident, interlinked objects corresponding to the ontology data. These JAVA objects are accessed via calls through the middleware's application programming interface. CRAVE allows simultaneous display and linking of multiple ontologies and searching using Boolean and advanced searches.
An object-oriented, knowledge-based system for cardiovascular rehabilitation--phase II.
Ryder, R. M.; Inamdar, B.
1995-01-01
The Heart Monitor is an object-oriented, knowledge-based system designed to support the clinical activities of cardiovascular (CV) rehabilitation. The original concept was developed as part of graduate research completed in 1992. This paper describes the second generation system which is being implemented in collaboration with a local heart rehabilitation program. The PC UNIX-based system supports an extensive patient database organized by clinical areas. In addition, a knowledge base is employed to monitor patient status. Rule-based automated reasoning is employed to assess risk factors contraindicative to exercise therapy and to monitor administrative and statutory requirements. PMID:8563285
NASA Technical Reports Server (NTRS)
Prinzel, Lawrence J., III; Kramer, Lynda J.; Arthur, Jarvis J., III
2005-01-01
Research was conducted onboard a Gulfstream G-V aircraft to evaluate integrated Synthetic Vision System concepts during flight tests over a 6-week period at the Wallops Flight Facility and Reno/Tahoe International Airport. The NASA Synthetic Vision System incorporates database integrity monitoring, runway incursion prevention alerting, surface maps, enhanced vision sensors, and advanced pathway guidance and synthetic terrain presentation. The paper details the goals and objectives of the flight test with a focus on the situation awareness benefits of integrating synthetic vision system enabling technologies for commercial aircraft.
A topological multilayer model of the human body.
Barbeito, Antonio; Painho, Marco; Cabral, Pedro; O'Neill, João
2015-11-04
Geographical information systems deal with spatial databases in which topological models are described with alphanumeric information. Their graphical interfaces implement the multilayer concept and provide powerful interaction tools. In this study, we apply these concepts to the human body, creating a representation that allows an interactive, precise, and detailed anatomical study. A vector surface component of the human body is built using a three-dimensional (3-D) reconstruction methodology. The multilayer concept is implemented by associating raster components with the corresponding vector surfaces, which include neighbourhood topology enabling spatial analysis. A root mean square error of 0.18 mm validated the three-dimensional reconstruction technique for internal anatomical structures. Expanded structure identification and a new neighbourhood analysis function are the tools provided in this model.
Recommender systems in knowledge-mining
NASA Astrophysics Data System (ADS)
Volna, Eva
2017-07-01
The subject of the paper is to analyse the possibilities of applying recommender systems in the field of data mining. The work focuses on three basic types of recommender systems (collaborative, content-based and hybrid). The goal of the article is to evaluate which of these three concepts of recommender systems provides forecasts with the lowest error rate in the domain of recommending movies. This goal is addressed in the practical part of the work: first, our own recommender system was designed and implemented, capable of producing movie recommendations from the database based on the user's preferences. Next, we verified experimentally which recommender system produces the more accurate results.
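The error-rate comparison described above can be illustrated with a toy hold-out evaluation of two trivial prediction baselines. The ratings matrix, the baselines, and the held-out rating below are invented for illustration; the paper compares full collaborative, content-based, and hybrid recommenders.

```python
# Tiny invented ratings matrix: user -> {movie: rating on a 1-5 scale}.
ratings = {
    "u1": {"m1": 5, "m2": 3, "m3": 4},
    "u2": {"m1": 4, "m2": 2, "m3": 4},
    "u3": {"m1": 5, "m2": 3},
}
held_out = ("u3", "m3", 5)  # the true rating we try to predict

def user_mean(user):
    """Predict a user's missing rating as that user's average rating."""
    vals = ratings[user].values()
    return sum(vals) / len(vals)

def item_mean(movie):
    """Predict a missing rating as the movie's average rating over all users."""
    vals = [r[movie] for r in ratings.values() if movie in r]
    return sum(vals) / len(vals)

user, movie, truth = held_out
print("user-mean absolute error:", abs(user_mean(user) - truth))
print("item-mean absolute error:", abs(item_mean(movie) - truth))
```

Repeating this over many held-out ratings and averaging the absolute errors yields the kind of error-rate comparison the paper performs between recommender types.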
Machado, Helena; Silva, Susana
2015-10-01
The ethical aspects of biobanks and forensic DNA databases are often treated as separate issues. As a reflection of this, public participation, or the involvement of citizens in genetic databases, has been approached differently in the fields of forensics and medicine. This paper aims to cross the boundaries between medicine and forensics by exploring the flows between the ethical issues presented in the two domains and the subsequent conceptualisation of public trust and legitimisation. We propose to introduce the concept of 'solidarity', traditionally applied only to medical and research biobanks, into a consideration of public engagement in medicine and forensics. Inclusion of a solidarity-based framework, in both medical biobanks and forensic DNA databases, raises new questions that should be included in the ethical debate, in relation to both health services/medical research and activities associated with the criminal justice system. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
ERIC Educational Resources Information Center
Liou, Pey-Yan
2014-01-01
The purpose of this study is to examine the relationship between student self-concept and achievement in science in Taiwan based on the big-fish-little-pond effect (BFLPE) model using the Trends in International Mathematics and Science Study (TIMSS) 2003 and 2007 databases. Hierarchical linear modeling was used to examine the effects of the…
Bezgin, Gleb; Reid, Andrew T; Schubert, Dirk; Kötter, Rolf
2009-01-01
Brain atlases are widely used in experimental neuroscience as tools for locating and targeting specific brain structures. Delineated structures in a given atlas, however, are often difficult to interpret and to interface with database systems that supply additional information using hierarchically organized vocabularies (ontologies). Here we discuss the concept of volume-to-ontology mapping in the context of macroscopical brain structures. We present Java tools with which we have implemented this concept for retrieval of mapping and connectivity data on the macaque brain from the CoCoMac database in connection with an electronic version of "The Rhesus Monkey Brain in Stereotaxic Coordinates" authored by George Paxinos and colleagues. The software, including our manually drawn monkey brain template, can be downloaded freely under the GNU General Public License. It adds value to the printed atlas and has a wider (neuro-)informatics application since it can read appropriately annotated data from delineated sections of other species and organs, and turn them into 3D registered stacks. The tools provide additional features, including visualization and analysis of connectivity data, volume and centre-of-mass estimates, and graphical manipulation of entire structures, which are potentially useful for a range of research and teaching applications.
Two-Phase chief complaint mapping to the UMLS metathesaurus in Korean electronic medical records.
Kang, Bo-Yeong; Kim, Dae-Won; Kim, Hong-Gee
2009-01-01
The task of automatically determining the concepts referred to in chief complaint (CC) data from electronic medical records (EMRs) is an essential component of many EMR applications aimed at biosurveillance for disease outbreaks. Previous approaches that have been used for this concept mapping have mainly relied on term-level matching, whereby the medical terms in the raw text and their synonyms are matched with concepts in a terminology database. These previous approaches, however, have shortcomings that limit their efficacy in CC concept mapping, where the concepts for CC data are often represented by associative terms rather than by synonyms. Therefore, herein we propose a concept mapping scheme based on a two-phase matching approach, especially for application to Korean CCs, which uses term-level complete matching in the first phase and concept-level matching based on concept learning in the second phase. The proposed concept-level matching suggests the method to learn all the terms (associative terms as well as synonyms) that represent the concept and predict the most probable concept for a CC based on the learned terms. Experiments on 1204 CCs extracted from 15,618 discharge summaries of Korean EMRs showed that the proposed method gave significantly improved F-measure values compared to the baseline system, with improvements of up to 73.57%.
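The two-phase matching strategy described above can be sketched as follows. The synonym table, the learned associative terms, and the concept identifiers are all invented stand-ins; the actual system maps Korean chief complaints to UMLS Metathesaurus concepts and learns associative terms from training data.

```python
# Phase 1 resource: exact-term synonyms (invented, standing in for a
# terminology database). Phase 2 resource: terms learned to be associated
# with each concept, including non-synonym associative terms.
synonyms = {"chest pain": "C-PAIN", "fever": "C-FEVER"}
associative = {
    "C-PAIN": {"pressure", "tightness", "chest"},
    "C-FEVER": {"hot", "chills", "temperature"},
}

def map_cc(text):
    """Map a chief-complaint string to a concept in two phases."""
    text = text.lower().strip()
    # Phase 1: term-level complete matching against the terminology.
    if text in synonyms:
        return synonyms[text]
    # Phase 2: concept-level matching -- score each concept by how many
    # of its learned terms appear in the chief complaint.
    words = set(text.split())
    best = max(associative, key=lambda c: len(words & associative[c]))
    return best if words & associative[best] else None

print(map_cc("fever"))               # resolved in phase 1
print(map_cc("tightness in chest"))  # no exact match; phase 2 scores concepts
```

The second phase is what handles chief complaints expressed through associative wording rather than synonyms, which is where term-level matching alone falls short.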
Toward intelligent information system
NASA Astrophysics Data System (ADS)
Onodera, Natsuo
"Hypertext" refers to the concept of a novel computer-assisted tool for the storage and retrieval of text information based on human association. The structure of knowledge in our idea processing is generally complicated and networked, but traditional paper documents merely express it in essentially linear and sequential forms. However, recent advances in workstation technology have allowed us to easily process electronic documents containing non-linear structures such as references or hierarchies. This paper describes the concept, history and basic organization of hypertext, and shows the outline and features of the main existing hypertext systems. In particular, use of the hypertext database is illustrated by the example of Intermedia, developed at Brown University.
NASA Technical Reports Server (NTRS)
Levack, Daniel
1993-01-01
The Alternate Propulsion Subsystem Concepts contract had five tasks defined for the first year. The tasks were: F-1A Restart Study, J-2S Restart Study, Propulsion Database Development, Space Shuttle Main Engine (SSME) Upper Stage Use, and CER's for Liquid Propellant Rocket Engines. The detailed study results, with the data to support the conclusions from various analyses, are being reported as a series of five separate Final Task Reports. Consequently, this volume only reports the required programmatic information concerning Computer Aided Design Documentation, and New Technology Reports. A detailed Executive Summary, covering all the tasks, is also available as Volume 1.
The EBI SRS server-new features.
Zdobnov, Evgeny M; Lopez, Rodrigo; Apweiler, Rolf; Etzold, Thure
2002-08-01
Here we report on recent developments at the EBI SRS server (http://srs.ebi.ac.uk). SRS has become an integration system for both data retrieval and sequence analysis applications. The EBI SRS server is a primary gateway to major databases in the field of molecular biology produced and supported at EBI as well as European public access point to the MEDLINE database provided by US National Library of Medicine (NLM). It is a reference server for latest developments in data and application integration. The new additions include: concept of virtual databases, integration of XML databases like the Integrated Resource of Protein Domains and Functional Sites (InterPro), Gene Ontology (GO), MEDLINE, Metabolic pathways, etc., user friendly data representation in 'Nice views', SRSQuickSearch bookmarklets. SRS6 is a licensed product of LION Bioscience AG freely available for academics. The EBI SRS server (http://srs.ebi.ac.uk) is a free central resource for molecular biology data as well as a reference server for the latest developments in data integration.
"Mr. Database" : Jim Gray and the History of Database Technologies.
Hanwahr, Nils C
2017-12-01
Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e. g. leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.
Zhang, Lanlan; Hub, Martina; Mang, Sarah; Thieke, Christian; Nix, Oliver; Karger, Christian P; Floca, Ralf O
2013-06-01
Radiotherapy is a fast-developing discipline which plays a major role in cancer care. Quantitative analysis of radiotherapy data can improve the success of the treatment and support the prediction of outcome. In this paper, we first identify functional, conceptual and general requirements on a software system for quantitative analysis of radiotherapy. Further, we present an overview of existing radiotherapy analysis software tools and check them against the stated requirements. As none of them could meet all of the demands presented herein, we analyzed possible conceptual problems and present software design solutions and recommendations to meet the stated requirements (e.g. algorithmic decoupling via dose iterator pattern; analysis database design). As a proof of concept we developed a software library "RTToolbox" following the presented design principles. The RTToolbox is available as open source library and has already been tested in a larger-scale software system for different use cases. These examples demonstrate the benefit of the presented design principles. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
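The "dose iterator" decoupling mentioned in the abstract can be sketched briefly. The class and function names below are illustrative, not the RTToolbox API (which is a C++ library): the point is only that analysis code depends on an iteration interface rather than on how the dose grid is stored.

```python
class DoseIterator:
    """Yield dose values one at a time, hiding the underlying storage.
    A real implementation might read a voxel grid or a DICOM dose file."""
    def __init__(self, values):
        self._values = values

    def __iter__(self):
        return iter(self._values)

def mean_dose(dose_it):
    """Analysis algorithm that depends only on the iterator protocol."""
    total = count = 0
    for dose in dose_it:
        total += dose
        count += 1
    return total / count

print(mean_dose(DoseIterator([1.0, 2.0, 3.0])))  # 2.0
```

Because `mean_dose` never touches the storage layer, the same algorithm works unchanged whether doses come from memory, a file, or a database, which is the decoupling benefit the paper's design recommendation targets.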
Mere exposure to money increases endorsement of free-market systems and social inequality.
Caruso, Eugene M; Vohs, Kathleen D; Baxter, Brittani; Waytz, Adam
2013-05-01
The present research tested whether incidental exposure to money affects people's endorsement of social systems that legitimize social inequality. We found that subtle reminders of the concept of money, relative to nonmoney concepts, led participants to endorse more strongly the existing social system in the United States in general (Experiment 1) and free-market capitalism in particular (Experiment 4), to assert more strongly that victims deserve their fate (Experiment 2), and to believe more strongly that socially advantaged groups should dominate socially disadvantaged groups (Experiment 3). We further found that reminders of money increased preference for a free-market system of organ transplants that benefited the wealthy at the expense of the poor even though this was not the prevailing system (Experiment 5) and that this effect was moderated by participants' nationality. These results demonstrate how merely thinking about money can influence beliefs about the social order and the extent to which people deserve their station in life. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Extracting semantics from audio-visual content: the final frontier in multimedia retrieval.
Naphade, M R; Huang, T S
2002-01-01
Multimedia understanding is a fast emerging interdisciplinary research area. There is tremendous potential for effective use of multimedia content through intelligent analysis. Diverse application areas are increasingly relying on multimedia understanding systems. Advances in multimedia understanding are related directly to advances in signal processing, computer vision, pattern recognition, multimedia databases, and smart sensors. We review the state-of-the-art techniques in multimedia retrieval. In particular, we discuss how multimedia retrieval can be viewed as a pattern recognition problem. We discuss how reliance on powerful pattern recognition and machine learning techniques is increasing in the field of multimedia retrieval. We review the state-of-the-art multimedia understanding systems with particular emphasis on a system for semantic video indexing centered around multijects and multinets. We discuss how semantic retrieval is centered around concepts and context and the various mechanisms for modeling concepts and context.
Preliminary assessment of rover power systems for the Mars Rover Sample Return Mission
NASA Technical Reports Server (NTRS)
Bents, D. J.
1989-01-01
Four isotope power system concepts were presented and compared on a common basis for application to on-board electrical prime power for an autonomous planetary rover vehicle. A representative design point corresponding to the Mars Rover Sample Return (MRSR) preliminary mission requirements (500 W) was selected for comparison purposes. All system concepts utilize the General Purpose Heat Source (GPHS) isotope heat source developed by DOE. Two of the concepts employ thermoelectric (TE) conversion: one using the GPHS Radioisotope Thermoelectric Generator (RTG) as a reference case, the other using an advanced RTG with improved thermoelectric materials. The other two concepts are dynamic isotope power systems (DIPS): one using a closed Brayton cycle (CBC) turboalternator, and the other using a free-piston Stirling cycle engine/linear alternator (FPSE) with an integrated heat source/heater head. Near-term technology levels have been assumed for concept characterization, using component technology figure-of-merit values taken from the published literature. For example, the CBC characterization draws from the historical test database accumulated from space Brayton cycle subsystems and components, from the NASA B engine through the mini-Brayton rotating unit. TE system performance is estimated from Voyager/multihundred-Watt (MHW)-RTG flight experience through Mod-RTG performance estimates, considering recent advances in TE materials under the DOD/DOE/NASA SP-100 and NASA Committee on Scientific and Technological Information programs. The Stirling DIPS system is characterized from scaled-down Space Power Demonstrator Engine (SPDE) data using the GPHS directly incorporated into the heater head. The characterization/comparison results presented here differ from previous comparisons of isotope power systems (made for LEO applications) because of the elevated background temperature on the Martian surface compared to LEO, and the higher sensitivity of dynamic systems to elevated s
Collaboration systems for classroom instruction
NASA Astrophysics Data System (ADS)
Chen, C. Y. Roger; Meliksetian, Dikran S.; Chang, Martin C.
1996-01-01
In this paper we discuss how classroom instruction can benefit from state-of-the-art technologies in networks, worldwide web access through Internet, multimedia, databases, and computing. Functional requirements for establishing such a high-tech classroom are identified, followed by descriptions of our current experimental implementations. The focus of the paper is on the capabilities of distributed collaboration, which supports both synchronous multimedia information sharing as well as a shared work environment for distributed teamwork and group decision making. Our ultimate goal is to achieve the concept of 'living world in a classroom' such that live and dynamic up-to-date information and material from all over the world can be integrated into classroom instruction on a real-time basis. We describe how we incorporate application developments in a geography study tool, worldwide web information retrievals, databases, and programming environments into the collaborative system.
Semantically Interoperable XML Data
Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel
2013-01-01
XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models, using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation, and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789
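The semantic-validation idea described above can be sketched in a few lines: each annotated XML element carries a concept identifier, and validation checks that the identifier resolves to a known ontology concept. This is a minimal illustration only; the element name, attribute name, and concept codes below are invented for the sketch, not taken from the paper's actual schema or ontology.

```python
import xml.etree.ElementTree as ET

# Toy "ontology": concept IDs accepted for an imagingModality element.
# These UMLS-style codes are placeholders, not real annotations from the paper.
VALID_MODALITY_CONCEPTS = {"C0024485", "C0040405"}

doc = ET.fromstring(
    '<annotation>'
    '<imagingModality conceptId="C0024485">MRI</imagingModality>'
    '</annotation>'
)

def semantically_valid(root):
    """Return True if every annotated element references a known concept."""
    for elem in root.iter("imagingModality"):
        if elem.get("conceptId") not in VALID_MODALITY_CONCEPTS:
            return False
    return True

print(semantically_valid(doc))  # True: the concept ID is in the ontology set
```

A real implementation would resolve concept IDs against an ontology service rather than a hard-coded set, but the validation step has the same shape.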
Performance Confirmation Data Acquisition System
DOE Office of Scientific and Technical Information (OSTI.GOV)
D.W. Markman
2000-10-27
The purpose of this analysis is to identify and analyze concepts for the acquisition of data in support of the Performance Confirmation (PC) program at the potential subsurface nuclear waste repository at Yucca Mountain. The scope and primary objectives of this analysis are to: (1) Review the criteria for design as presented in the Performance Confirmation Data Acquisition/Monitoring System Description Document, by way of the Input Transmittal, Performance Confirmation Input Criteria (CRWMS M&O 1999c). (2) Identify and describe existing and potential new trends in data acquisition system software and hardware that would support the PC plan. The data acquisition software and hardware will support the field instruments and equipment that will be installed for the observation and perimeter drift borehole monitoring, and in-situ monitoring within the emplacement drifts. The exhaust air monitoring requirements will be supported by a data communication network interface with the ventilation monitoring system database. (3) Identify the concepts and features that a data acquisition system should have in order to support the PC process and its activities. (4) Based on PC monitoring needs and available technologies, further develop concepts of a potential data acquisition system network in support of the PC program and the Site Recommendation and License Application.
Flight Test Comparison Between Enhanced Vision (FLIR) and Synthetic Vision Systems
NASA Technical Reports Server (NTRS)
Arthur, Jarvis J., III; Kramer, Lynda J.; Bailey, Randall E.
2005-01-01
Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low-visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development/demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept which included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between EVS and SVS display concepts.
New tools and methods for direct programmatic access to the dbSNP relational database
Saccone, Scott F.; Quan, Jiaxi; Mehta, Gaurang; Bolze, Raphael; Thomas, Prasanth; Deelman, Ewa; Tischfield, Jay A.; Rice, John P.
2011-01-01
Genome-wide association studies often incorporate information from public biological databases in order to provide a biological reference for interpreting the results. The dbSNP database is an extensive source of information on single nucleotide polymorphisms (SNPs) for many different organisms, including humans. We have developed free software that will download and install a local MySQL implementation of the dbSNP relational database for a specified organism. We have also designed a system for classifying dbSNP tables in terms of common tasks we wish to accomplish using the database. For each task we have designed a small set of custom tables that facilitate task-related queries and provide entity-relationship diagrams for each task composed from the relevant dbSNP tables. In order to expose these concepts and methods to a wider audience we have developed web tools for querying the database and browsing documentation on the tables and columns to clarify the relevant relational structure. All web tools and software are freely available to the public at http://cgsmd.isi.edu/dbsnpq. Resources such as these for programmatically querying biological databases are essential for viably integrating biological information into genetic association experiments on a genome-wide scale. PMID:21037260
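The abstract above describes materializing small task-specific tables from the full relational schema so that common queries need no repeated complex joins. The sketch below illustrates that pattern with SQLite and a toy SNP-to-gene mapping; the table names, columns, and coordinate values are invented for illustration and are not the actual dbSNP schema.

```python
import sqlite3

# Illustrative only: table/column names and positions are made up for this
# sketch and do not reflect the real dbSNP relational schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE snp (rs_id INTEGER PRIMARY KEY, chrom TEXT, pos INTEGER)")
cur.execute("CREATE TABLE gene (gene_id TEXT, chrom TEXT, start INTEGER, stop INTEGER)")
cur.executemany("INSERT INTO snp VALUES (?, ?, ?)",
                [(6311, "11", 113400000), (16969968, "15", 78590583)])
cur.executemany("INSERT INTO gene VALUES (?, ?, ?, ?)",
                [("CHRNA5", "15", 78565520, 78595269)])

# A task-specific SNP-to-gene table, materialized once so that downstream
# association analyses can join on it directly instead of re-running the
# range join each time.
cur.execute("""
    CREATE TABLE snp_gene AS
    SELECT s.rs_id, g.gene_id
    FROM snp s JOIN gene g
      ON s.chrom = g.chrom AND s.pos BETWEEN g.start AND g.stop
""")
print(cur.execute("SELECT rs_id, gene_id FROM snp_gene").fetchall())
```

The authors' system applies the same idea against a local MySQL mirror of dbSNP, with entity-relationship diagrams documenting which source tables feed each task table.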
An incremental database access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, Nicholas; Sellis, Timos
1994-01-01
We investigated a number of design and performance issues of interoperable database management systems (DBMS's). The major results of our investigation were obtained in the areas of client-server database architectures for heterogeneous DBMS's, incremental computation models, buffer management techniques, and query optimization. We finished a prototype of an advanced client-server workstation-based DBMS which allows access to multiple heterogeneous commercial DBMS's. Experiments and simulations were then run to compare its performance with the standard client-server architectures. The focus of this research was on adaptive optimization methods for heterogeneous database systems. Adaptive buffer management accounts for the random and object-oriented access methods for which no known characterization of the access patterns exists. Adaptive query optimization means that value distributions and selectivities, which play the most significant role in query plan evaluation, are continuously refined to reflect the actual values as opposed to static ones that are computed off-line. Query feedback is a concept that was first introduced to the literature by our group. We employed query feedback both for adaptive buffer management and for computing value distributions and selectivities. For adaptive buffer management, we use the page faults of prior executions to achieve more 'informed' management decisions. For the estimation of the distributions and the selectivities, we use curve-fitting techniques, such as least squares and splines, for regressing on these values.
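The query-feedback idea above can be sketched concretely: observed selectivities from executed queries are treated as data points, and a least-squares fit over them replaces a static estimate. The observations below are synthetic and the linear model is a simplification (the original work also used splines); this is a sketch of the technique, not the authors' implementation.

```python
# Synthetic query feedback: (predicate constant, observed selectivity) pairs
# collected from queries that have actually executed.
feedback = [(10.0, 0.05), (20.0, 0.11), (30.0, 0.19), (40.0, 0.32), (50.0, 0.48)]

def fit_least_squares(points):
    """Ordinary least-squares line y = a*x + b through the feedback points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_least_squares(feedback)

def estimated_selectivity(x):
    # Clamp to [0, 1]: a selectivity is always a fraction of the relation.
    return min(1.0, max(0.0, a * x + b))

# Estimate for a predicate constant not yet seen in any executed query.
print(estimated_selectivity(35.0))
```

As more queries execute, new feedback points are appended and the fit is refreshed, so the optimizer's estimates track the actual data distribution instead of stale off-line statistics.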
NASA Astrophysics Data System (ADS)
Hermanns, R. L.; Zentel, K.-O.; Wenzel, F.; Hövel, M.; Hesse, A.
In order to benefit from synergies and to avoid replication in the field of disaster reduction programs and related scientific projects, it is important to create an overview of the state of the art, the fields of activity, and their key aspects. Therefore, the German Committee for Disaster Reduction intends to document projects and institutions related to natural disaster prevention in three databases. One database is designed to document scientific programs and projects related to natural hazards. In a first step, data acquisition concentrated on projects carried out by German institutions. In a second step, projects from all other European countries will be archived. The second database focuses on projects on early-warning systems and has no regional limit. Data mining started in November 2001 and will be finished soon. The third database documents operational projects dealing with disaster prevention and concentrates on international projects or internationally funded projects. These databases will be available on the Internet at the end of spring 2002 (http://www.dkkv.org) and will be updated continuously. They will allow rapid and concise information on various international projects, provide up-to-date descriptions, and facilitate exchange, as all relevant information including contact addresses is available to the public. The aim of this contribution is to present concepts and the work done so far, to invite participation, and to contact other organizations with similar objectives.
A geo-spatial data management system for potentially active volcanoes—GEOWARN project
NASA Astrophysics Data System (ADS)
Gogu, Radu C.; Dietrich, Volker J.; Jenny, Bernhard; Schwandner, Florian M.; Hurni, Lorenz
2006-02-01
Integrated studies of active volcanic systems for the purpose of long-term monitoring and forecast and short-term eruption prediction require large numbers of data-sets from various disciplines. A modern database concept has been developed for managing and analyzing multi-disciplinary volcanological data-sets. The GEOWARN project (choosing the "Kos-Yali-Nisyros-Tilos volcanic field, Greece" and the "Campi Flegrei, Italy" as test sites) is oriented toward potentially active volcanoes situated in regions of high geodynamic unrest. This article describes the volcanological database of the spatial and temporal data acquired within the GEOWARN project. As a first step, a spatial database embedded in a Geographic Information System (GIS) environment was created. Digital data of different spatial resolution, and time-series data collected at different intervals or periods, were unified in a common, four-dimensional representation of space and time. The database scheme comprises various information layers containing geographic data (e.g. seafloor and land digital elevation model, satellite imagery, anthropogenic structures, land-use), geophysical data (e.g. from active and passive seismicity, gravity, tomography, SAR interferometry, thermal imagery, differential GPS), geological data (e.g. lithology, structural geology, oceanography), and geochemical data (e.g. from hydrothermal fluid chemistry and diffuse degassing features). As a second step based on the presented database, spatial data analysis has been performed using custom-programmed interfaces that execute query scripts resulting in a graphical visualization of data. These query tools were designed and compiled following scenarios of known "behavior" patterns of dormant volcanoes and first candidate signs of potential unrest. The spatial database and query approach is intended to facilitate scientific research on volcanic processes and phenomena, and volcanic surveillance.
NASA Astrophysics Data System (ADS)
Liu, Xiufeng; McKeough, Anne
2005-05-01
The aim of this study was to develop a model of students' energy concept development. Applying Case's (1985, 1992) structural theory of cognitive development, we hypothesized that students' concept of energy undergoes a series of transitions, corresponding to systematic increases in working memory capacity. The US national sample from the Third International Mathematics and Science Study (TIMSS) database was used to test our hypothesis. Items relevant to the energy concept in the TIMSS test booklets for three populations were identified. Item difficulty from Rasch modeling was used to test the hypothesized developmental sequence, and percentage of students' correct responses was used to test the correspondence between students' age/grade level and level of the energy concepts. The analysis supported our hypothesized sequence of energy concept development and suggested mixed effects of maturation and schooling on energy concept development. Further, the results suggest that curriculum and instruction design take into consideration the developmental progression of students' concept of energy.
Gupta, Amarnath; Bug, William; Marenco, Luis; Qian, Xufei; Condit, Christopher; Rangarajan, Arun; Müller, Hans Michael; Miller, Perry L.; Sanders, Brian; Grethe, Jeffrey S.; Astakhov, Vadim; Shepherd, Gordon; Sternberg, Paul W.; Martone, Maryann E.
2009-01-01
The overarching goal of the NIF (Neuroscience Information Framework) project is to be a one-stop shop for neuroscience. This paper provides a technical overview of how the system is designed. The technical goal of the first version of the NIF system was to develop an information system that a neuroscientist can use to locate relevant information from a wide variety of information sources by simple keyword queries. Although the user would provide only keywords to retrieve information, the NIF system is designed to treat them as concepts whose meanings are interpreted by the system. Thus, a search for a term should find records containing synonyms of the term. The system is targeted to find information from web pages, publications, databases, web sites built upon databases, XML documents, and any other modality in which such information may be published. We have designed a system to achieve this functionality. A central element in the system is an ontology called NIFSTD (for NIF Standard), constructed by amalgamating a number of known and newly developed ontologies. NIFSTD is used by our ontology management module, called OntoQuest, to perform ontology-based search over data sources. The NIF architecture currently provides three different mechanisms for searching heterogeneous data sources, including relational databases, web sites, XML documents, and the full text of publications. Version 1.0 of the NIF system is currently in beta test and may be accessed through http://nif.nih.gov. PMID:18958629
Gupta, Amarnath; Bug, William; Marenco, Luis; Qian, Xufei; Condit, Christopher; Rangarajan, Arun; Müller, Hans Michael; Miller, Perry L; Sanders, Brian; Grethe, Jeffrey S; Astakhov, Vadim; Shepherd, Gordon; Sternberg, Paul W; Martone, Maryann E
2008-09-01
The overarching goal of the NIF (Neuroscience Information Framework) project is to be a one-stop shop for neuroscience. This paper provides a technical overview of how the system is designed. The technical goal of the first version of the NIF system was to develop an information system that a neuroscientist can use to locate relevant information from a wide variety of information sources by simple keyword queries. Although the user would provide only keywords to retrieve information, the NIF system is designed to treat them as concepts whose meanings are interpreted by the system. Thus, a search for a term should find records containing synonyms of the term. The system is targeted to find information from web pages, publications, databases, web sites built upon databases, XML documents, and any other modality in which such information may be published. We have designed a system to achieve this functionality. A central element in the system is an ontology called NIFSTD (for NIF Standard), constructed by amalgamating a number of known and newly developed ontologies. NIFSTD is used by our ontology management module, called OntoQuest, to perform ontology-based search over data sources. The NIF architecture currently provides three different mechanisms for searching heterogeneous data sources, including relational databases, web sites, XML documents, and the full text of publications. Version 1.0 of the NIF system is currently in beta test and may be accessed through http://nif.nih.gov.
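The keyword-as-concept behavior described above, where a search for a term also finds records that only mention its synonyms, can be illustrated with a toy synonym expansion. The synonym sets and records below are invented placeholders, not NIFSTD content or the OntoQuest API.

```python
# Toy ontology lookup: each term maps to the synonym set of its concept.
# These entries are illustrative stand-ins, not real NIFSTD data.
SYNONYMS = {
    "neuron": {"neuron", "nerve cell"},
    "glia": {"glia", "glial cell", "neuroglia"},
}

RECORDS = [
    "Nerve cell morphology in the hippocampus",
    "Gene expression in glial cell cultures",
]

def concept_search(term):
    """Expand the query term to its concept's synonyms, then match records."""
    terms = SYNONYMS.get(term.lower(), {term.lower()})
    return [r for r in RECORDS if any(t in r.lower() for t in terms)]

print(concept_search("neuron"))  # matches via the synonym "nerve cell"
```

A production system would resolve terms against the full ontology and query many heterogeneous sources, but the expansion step is the key difference from plain string matching.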
Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W.; Levias, Sheldon
2017-01-01
This study examines how two kinds of authentic research experiences related to smoking behavior—genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)—influence students’ perceptions and understanding of scientific research and related science concepts. The study used pre and post surveys and a focus group protocol to compare students who conducted the research experiences in one of two sequences: genotyping before database and database before genotyping. Students rated the genotyping experiment to be more like real science than the database experiment, in spite of the fact that they associated more scientific tasks with the database experience than genotyping. Independent of the order of completing the labs, students showed gains in their understanding of science concepts after completion of the two experiences. There was little change in students’ attitudes toward science pre to post, as measured by the Scientific Attitude Inventory II. However, on the basis of their responses during focus groups, students developed more sophisticated views about the practices and nature of science after they had completed both research experiences, independent of the order in which they experienced them. PMID:28572181
An Animated Introduction to Relational Databases for Many Majors
ERIC Educational Resources Information Center
Dietrich, Suzanne W.; Goelman, Don; Borror, Connie M.; Crook, Sharon M.
2015-01-01
Database technology affects many disciplines beyond computer science and business. This paper describes two animations developed with images and color that visually and dynamically introduce fundamental relational database concepts and querying to students of many majors. The goal is for educators in diverse academic disciplines to incorporate the…
Leveraging Semantic Knowledge in IRB Databases to Improve Translation Science
Hurdle, John F.; Botkin, Jeffery; Rindflesch, Thomas C.
2007-01-01
We introduce the notion that research administrative databases (RADs), such as those increasingly used to manage information flow in the Institutional Review Board (IRB), offer a novel, useful, and mine-able data source overlooked by informaticists. As a proof of concept, using an IRB database we extracted all titles and abstracts from system startup through January 2007 (n=1,876); formatted these in a pseudo-MEDLINE format; and processed them through the SemRep semantic knowledge extraction system. Even though SemRep is tuned to find semantic relations in MEDLINE citations, we found that it performed comparably well on the IRB texts. When adjusted to eliminate non-healthcare IRB submissions (e.g., economic and education studies), SemRep extracted an average of 7.3 semantic relations per IRB abstract (compared to an average of 11.1 for MEDLINE citations) with a precision of 70% (compared to 78% for MEDLINE). We conclude that RADs, as represented by IRB data, are mine-able with existing tools, but that performance will improve as these tools are tuned for RAD structures. PMID:18693856
Singh, Ranjit; Pace, Wilson; Singh, Sonjoy; Singh, Ashok; Singh, Gurdev
2007-01-01
Evidence suggests that the quality of care delivered by the healthcare industry currently falls far short of its capabilities. Whilst most patient safety and quality improvement work to date has focused on inpatient settings, some estimates suggest that outpatient settings are equally important, with up to 200,000 avoidable deaths annually in the United States of America (USA) alone. There is currently a need for improved error reporting and taxonomy systems that are useful at the point of care. This provides an opportunity to harness the benefits of computer visualisation to help structure and illustrate the 'stories' behind errors. In this paper we present a concept for a visual taxonomy of errors, based on visual models of the healthcare system at both macrosystem and microsystem levels (previously published in this journal), and describe how this could be used to create a visual database of errors. In an alpha test in a US context, we were able to code a sample of 20 errors from an existing error database using the visual taxonomy. The approach is designed to capture and disseminate patient safety information in an unambiguous format that is useful to all members of the healthcare team (including the patient) at the point of care as well as at the policy-making level.
Bio-psycho-social factors affecting sexual self-concept: A systematic review
Potki, Robabeh; Ziaei, Tayebe; Faramarzi, Mahbobeh; Moosazadeh, Mahmood; Shahhosseini, Zohreh
2017-01-01
Background Nowadays, it is believed that the mental and emotional aspects of sexual well-being are important aspects of sexual health. Sexual self-concept is a major component of sexual health and the core of sexuality. It is defined as the cognitive perspective concerning the sexual aspects of ‘self’ and refers to the individual’s self-perception as a sexual creature. Objective The aim of this study was to assess the different factors affecting sexual self-concept. Methods English electronic databases including PubMed, Scopus, Web of Science and Google Scholar, as well as two Iranian databases, the Scientific Information Database and Iranmedex, were searched for English- and Persian-language articles published between 1996 and 2016. Of 281 retrieved articles, 37 were finally included in this review. Results Factors affecting sexual self-concept were categorized into biological, psychological and social factors. In the category of biological factors, age, gender, marital status, race, disability and sexually transmitted infections are described. In the psychological category, the impact of body image, sexual abuse in childhood and mental health history is presented. Lastly, in the social category, the roles of parents, peers and the media are discussed. Conclusion As the development of sexual self-concept is influenced by multiple events in individuals’ lives, an integrated implementation of health policies is recommended to promote sexual self-concept. PMID:29038693
On data modeling for neurological application
NASA Astrophysics Data System (ADS)
Woźniak, Karol; Mulawka, Jan
The aim of this paper is to design and implement an information system containing a large database dedicated to supporting neurological-psychiatric examinations focused on the human brain after stroke. This approach encompasses the following steps: analysis of software requirements, presentation of the problem-solving concept, and design and implementation of the final information system. Certain experiments were performed in order to verify the correctness of the project ideas. The approach can be considered an interdisciplinary venture. Elaboration of the system architecture, the data model, and the tools supporting medical examinations is provided. The achievement of the design goals is demonstrated in the final conclusion.
The Wettzell System Monitoring Concept and First Realizations
NASA Technical Reports Server (NTRS)
Ettl, Martin; Neidhardt, Alexander; Muehlbauer, Matthias; Ploetz, Christian; Beaudoin, Christopher
2010-01-01
Automated monitoring of operational system parameters for the geodetic space techniques is becoming more important in order to improve the geodetic data and to ensure the safety and stability of automatic and remote-controlled observations. Therefore, the Wettzell group has developed the system monitoring software, SysMon, which is based on a reliable, remotely-controllable hardware/software realization. A multi-layered data logging system based on a fanless, robust industrial PC with an internal database system is used to collect data from several external, serial, bus, or PCI-based sensors. The internal communication is realized with Remote Procedure Calls (RPC) and uses generative programming with the interface software generator idl2rpc.pl developed at Wettzell. Each data monitoring stream can be configured individually via configuration files to define the logging rates or analog-digital-conversion parameters. First realizations are currently installed at the new laser ranging system at Wettzell to address safety issues and at the VLBI station O'Higgins as a meteorological data logger. The system monitoring concept should be realized for the Wettzell radio telescope in the near future.
Space transfer vehicle concepts and requirements study, phase 2
NASA Technical Reports Server (NTRS)
Cannon, Jeffrey H.; Vinopal, Tim; Andrews, Dana; Richards, Bill; Weber, Gary; Paddock, Greg; Maricich, Peter; Bouton, Bruce; Hagen, Jim; Kolesar, Richard
1992-01-01
This final report is a compilation of the Phase 1 and Phase 2 study findings and is intended as a Space Transfer Vehicle (STV) 'users guide' rather than an exhaustive explanation of STV design details. It provides a database for design choices in the general areas of basing, reusability, propulsion, and staging, with selection criteria based on cost, performance, available infrastructure, risk, and technology. The report is organized into the following three parts: (1) design guide; (2) STV Phase 1 Concepts and Requirements Study Summary; and (3) STV Phase 2 Concepts and Requirements Study Summary. The overall objectives of the STV study were to: (1) define preferred STV concepts capable of accommodating future exploration missions in a cost-effective manner; (2) determine the level of technology development required to perform these missions in the most cost-effective manner; and (3) develop a decision database of programmatic approaches for the development of an STV concept.
Construction of databases: advances and significance in clinical research.
Long, Erping; Huang, Bingjie; Wang, Liming; Lin, Xiaoyu; Lin, Haotian
2015-12-01
Widely used in clinical research, the database is a new type of data management automation technology and the most efficient tool for data management. In this article, we first explain some basic concepts, such as the definition, classification, and establishment of databases. Afterward, the workflow for establishing databases, inputting data, verifying data, and managing databases is presented. Meanwhile, by discussing the application of databases in clinical research, we illuminate the important role of databases in clinical research practice. Lastly, we introduce the reanalysis of randomized controlled trials (RCTs) and cloud computing techniques, showing the most recent advancements of databases in clinical research.
Assembling proteomics data as a prerequisite for the analysis of large scale experiments
Schmidt, Frank; Schmid, Monika; Thiede, Bernd; Pleißner, Klaus-Peter; Böhme, Martina; Jungblut, Peter R
2009-01-01
Background Despite the complete determination of the genome sequence of a huge number of bacteria, their proteomes remain relatively poorly defined. Besides new methods to increase the number of identified proteins, new database applications are necessary to store and present results of large-scale proteomics experiments. Results In the present study, a database concept has been developed to address these issues and to offer complete information via a web interface. In our concept, the Oracle-based data repository system SQL-LIMS plays the central role in the proteomics workflow and was applied to the proteomes of Mycobacterium tuberculosis, Helicobacter pylori, Salmonella typhimurium and protein complexes such as the 20S proteasome. Technical operations of our proteomics labs were used as the standard for SQL-LIMS template creation. By means of a Java-based data parser, post-processed data of different approaches, such as LC/ESI-MS, MALDI-MS and 2-D gel electrophoresis (2-DE), were stored in SQL-LIMS. A minimum set of the proteomics data was transferred into our public 2D-PAGE database using a Java-based interface (Data Transfer Tool) meeting the requirements of the PEDRo standardization. Furthermore, the stored proteomics data were extractable from SQL-LIMS via XML. Conclusion The Oracle-based data repository system SQL-LIMS played the central role in the proteomics workflow concept. Technical operations of our proteomics labs were used as standards for SQL-LIMS templates. Using a Java-based parser, post-processed data of different approaches such as LC/ESI-MS, MALDI-MS, 1-DE and 2-DE were stored in SQL-LIMS. Thus, unique data formats of different instruments were unified and stored in SQL-LIMS tables. Moreover, a unique submission identifier allowed fast access to all experimental data. This was the main advantage compared to multi-software solutions, especially if personnel fluctuations are high.
Moreover, large-scale and high-throughput experiments must be managed in a comprehensive repository system such as SQL-LIMS to query results in a systematic manner. On the other hand, these database systems are expensive and require at least one full-time administrator and a specialized lab manager. Moreover, the high technical dynamics in proteomics may make it difficult to adapt to new data formats. To summarize, SQL-LIMS met the requirements of proteomics data handling, especially in skilled processes such as gel electrophoresis or mass spectrometry, and fulfilled the PSI standardization criteria. The data transfer into a public domain via DTT facilitated validation of proteomics data. Additionally, evaluation of mass spectra by post-processing using MS-Screener improved the reliability of mass analysis and prevented storage of junk data. PMID:19166578
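The XML extraction path mentioned above can be sketched as follows. The record fields and element names here (`submission_id`, `sample`, `method`, `protein`) are invented placeholders, not the actual SQL-LIMS schema:

```python
import xml.etree.ElementTree as ET

def export_records_to_xml(records):
    """Serialize unified proteomics records (dicts) to an XML string.

    The field names are illustrative placeholders only; the real
    SQL-LIMS table layout is not reproduced here.
    """
    root = ET.Element("proteomics_export")
    for rec in records:
        entry = ET.SubElement(root, "record", id=str(rec["submission_id"]))
        for field in ("sample", "method", "protein"):
            ET.SubElement(entry, field).text = rec[field]
    return ET.tostring(root, encoding="unicode")

records = [
    {"submission_id": 1, "sample": "M. tuberculosis lysate",
     "method": "LC/ESI-MS", "protein": "GroEL2"},
    {"submission_id": 2, "sample": "H. pylori membrane fraction",
     "method": "2-DE", "protein": "UreB"},
]
xml_text = export_records_to_xml(records)
```

Because every record carries its submission identifier as an attribute, a downstream consumer can locate all experimental data for one submission in a single lookup, which is the access pattern the abstract credits to SQL-LIMS.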
CVcat: An interactive database on cataclysmic variables
NASA Astrophysics Data System (ADS)
Kube, J.; Gänsicke, B. T.; Euchner, F.; Hoffmann, B.
2003-06-01
CVcat is a database that contains published data on cataclysmic variables and related objects. Unlike existing online sources, it allows users to add data to the catalogue. The concept of an "open catalogue" approach is reviewed together with the experience from one year of public usage of CVcat. New concepts to be included in the upcoming AstroCat framework and the next CVcat implementation are presented. CVcat can be found at http://www.cvcat.org.
FACET: Future ATM Concepts Evaluation Tool
NASA Technical Reports Server (NTRS)
Bilimoria, Karl D.; Sridhar, Banavar; Chatterji, Gano B.; Sheth, Kapil S.; Grabbe, Shon
2000-01-01
FACET (Future ATM Concepts Evaluation Tool) is an Air Traffic Management research tool being developed at the NASA Ames Research Center. This paper describes the design, architecture and functionalities of FACET. The purpose of FACET is to provide a simulation environment for exploration, development and evaluation of advanced ATM concepts. Examples of these concepts include new ATM paradigms such as Distributed Air-Ground Traffic Management, airspace redesign and new Decision Support Tools (DSTs) for controllers working within the operational procedures of the existing air traffic control system. FACET is currently capable of modeling system-wide en route airspace operations over the contiguous United States. Airspace models (e.g., Center/sector boundaries, airways, locations of navigation aids and airports) are available from databases. A core capability of FACET is the modeling of aircraft trajectories. Using round-earth kinematic equations, aircraft can be flown along flight plan routes or great circle routes as they climb, cruise and descend according to their individual aircraft-type performance models. Performance parameters (e.g., climb/descent rates and speeds, cruise speeds) are obtained from data table lookups. Heading, airspeed and altitude-rate dynamics are also modeled. Additional functionalities will be added as necessary for specific applications. FACET software is written in the Java and C programming languages. It is platform-independent, and can be run on a variety of computers. FACET has been designed with a modular software architecture to enable rapid integration of research prototype implementations of new ATM concepts. There are several advanced ATM concepts that are currently being implemented in FACET: airborne separation assurance, dynamic density predictions, airspace redesign (re-sectorization), benefits of a controller DST for direct-routing, and the integration of commercial space transportation system operations into the U.S. 
National Airspace System (NAS).
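The round-earth trajectory propagation described in the FACET abstract can be sketched with spherical kinematics. This is a generic illustration of the approach, not FACET's actual implementation; the constant-heading Euler update, time step, and speed are assumptions:

```python
import math

R_EARTH = 6371000.0  # mean Earth radius, m

def propagate(lat, lon, heading_deg, speed_ms, dt, steps):
    """Euler-integrate round-earth kinematics for a constant-heading leg.

    lat/lon in degrees; heading measured clockwise from north.
    dlat/dt = (V/R) cos(hdg),  dlon/dt = (V/R) sin(hdg) / cos(lat).
    """
    lat_r, lon_r = math.radians(lat), math.radians(lon)
    hdg = math.radians(heading_deg)
    for _ in range(steps):
        lat_r += (speed_ms / R_EARTH) * math.cos(hdg) * dt
        lon_r += (speed_ms / R_EARTH) * math.sin(hdg) * dt / math.cos(lat_r)
    return math.degrees(lat_r), math.degrees(lon_r)

# Eastbound at the equator for one hour at 250 m/s: latitude stays
# fixed while longitude advances roughly 900 km along the equator.
lat, lon = propagate(0.0, 0.0, 90.0, 250.0, 1.0, 3600)
```

A production tool would add climb/descent profiles, performance-table lookups, and heading dynamics on top of this kernel, as the abstract describes.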
Advanced Noise Control Fan: A 20-Year Retrospective
NASA Technical Reports Server (NTRS)
Sutliff, Dan
2016-01-01
The ANCF test bed is used for evaluating fan noise reduction concepts, developing noise measurement technologies, and providing a database for aero-acoustic code development. Rig capabilities: 4-foot, 16-bladed rotor @ 2500 rpm; auxiliary air delivery system (3 lbm/sec @ 6/12 psi); variable configuration (rotor pitch angle, stator count/position, duct length); synthetic acoustic noise generation (tone/broadband). Measurement capabilities: 112-channel dynamic data system; unique rotating rake mode measurement; farfield (variable radius); duct wall microphones; stator vane microphones; two-component CTA with traversing; ESP for static pressures.
Operator Performance Support System (OPSS)
NASA Technical Reports Server (NTRS)
Conklin, Marlen Z.
1993-01-01
In the complex and fast-reaction world of military operations, present technologies, combined with tactical situations, have flooded the operator with assorted information that he is expected to process instantly. As technologies progress, this flow of data and information has both guided and overwhelmed the operator. However, the technologies that have confounded many operators today can also be used to assist him -- thus the Operator Performance Support System. In this paper we propose an operator support station that incorporates the elements of video and image databases, productivity software, interactive computer-based training, hypertext/hypermedia databases, expert programs, and human factors engineering. The Operator Performance Support System will provide the operator with an integrated on-line information/knowledge system that will guide the expert or novice to correct systems operations. Although the OPSS is being developed for the Navy, the performance of the workforce in today's competitive industry is of major concern. The concepts presented in this paper, which address ASW systems software design issues, are also directly applicable to industry. The OPSS will propose practical applications in how to more closely align the relationships between technical knowledge and equipment operator performance.
Bridging the Gap between the Data Base and User in a Distributed Environment.
ERIC Educational Resources Information Center
Howard, Richard D.; And Others
1989-01-01
The distribution of databases physically separates the users of data from the administrators who perform database administration. By drawing on the work of social scientists in reliability and validity, a set of concepts and a list of questions to ensure data quality were developed. (Author/MLW)
NASA Technical Reports Server (NTRS)
Henderson, Brenda S.; Doty, Mike
2012-01-01
Acoustic and flow-field experiments were conducted on exhaust concepts for the next generation supersonic, commercial aircraft. The concepts were developed by Lockheed Martin (LM), Rolls-Royce Liberty Works (RRLW), and General Electric Global Research (GEGR) as part of an N+2 (next generation forward) aircraft system study initiated by the Supersonics Project in NASA's Fundamental Aeronautics Program. The experiments were conducted in the Aero-Acoustic Propulsion Laboratory at the NASA Glenn Research Center. The exhaust concepts presented here utilized lobed-mixers and ejectors. A powered third-stream was implemented to improve ejector acoustic performance. One concept was found to produce stagnant flow within the ejector and the other produced discrete-frequency tones (due to flow separations within the model) that degraded the acoustic performance of the exhaust concept. NASA's Environmentally Responsible Aviation (ERA) Project has been investigating a Hybrid Wing Body (HWB) aircraft as a possible configuration for meeting N+2 system level goals for noise, emissions, and fuel burn. A recently completed NRA led by Boeing Research and Technology resulted in a full-scale aircraft design and wind tunnel model. This model will be tested acoustically in NASA Langley's 14- by 22-Foot Subsonic Tunnel and will include dual jet engine simulators and broadband engine noise simulators as part of the test campaign. The objectives of the test are to characterize the system level noise, quantify the effects of shielding, and generate a valuable database for prediction method development. Further details of the test and various component preparations are described.
NADM Conceptual Model 1.0 -- A Conceptual Model for Geologic Map Information
2004-01-01
Executive Summary -- The NADM Data Model Design Team was established in 1999 by the North American Geologic Map Data Model Steering Committee (NADMSC) with the purpose of drafting a geologic map data model for consideration as a standard for developing interoperable geologic map-centered databases by state, provincial, and federal geological surveys. The model is designed to be a technology-neutral conceptual model that can form the basis for a web-based interchange format using evolving information technology (e.g., XML, RDF, OWL), and guide implementation of geoscience databases in a common conceptual framework. The intended purpose is to allow geologic information sharing between geologic map data providers and users, independent of local information system implementation. The model emphasizes geoscience concepts and relationships related to information presented on geologic maps. Design has been guided by an informal requirements analysis, documentation of existing databases, technology developments, and other standardization efforts in the geoscience and computer-science communities. A key aspect of the model is the notion that representation of the conceptual framework (ontology) that underlies geologic map data must be part of the model, because this framework changes with time and understanding, and varies between information providers. The top level of the model distinguishes geologic concepts, geologic representation concepts, and metadata. The geologic representation part of the model provides a framework for representing the ontology that underlies geologic map data through a controlled vocabulary, and for establishing the relationships between this vocabulary and a geologic map visualization or portrayal. Top-level geologic classes in the model are Earth material (substance), geologic unit (parts of the Earth), geologic age, geologic structure, fossil, geologic process, geologic relation, and geologic event.
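The separation that the NADM model emphasizes between a concept vocabulary (with "broader-term" relationships) and map portrayal can be sketched as follows. The terms, links, and colors below are invented for illustration; they are not NADM's actual vocabulary or schema:

```python
# A miniature controlled vocabulary of geologic concepts, each linked
# to a broader term, modeling the ontology layer of a geologic map
# database. All entries are illustrative.
GEOLOGIC_UNITS = {
    "granite": {"kind": "Earth material", "broader": "igneous rock"},
    "igneous rock": {"kind": "Earth material", "broader": "rock"},
    "rock": {"kind": "Earth material", "broader": None},
}

# Portrayal is held separately, so map symbology can change without
# touching the concept framework.
PORTRAYAL = {"granite": {"color": "pink", "symbol": "gr"}}

def ancestors(term, vocab):
    """Walk 'broader' links to recover a concept's lineage."""
    chain = []
    while term is not None:
        chain.append(term)
        term = vocab[term]["broader"]
    return chain

lineage = ancestors("granite", GEOLOGIC_UNITS)
```

Keeping the two structures distinct is what lets providers share the same vocabulary while rendering maps differently, the interoperability goal the executive summary describes.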
Remote sensing information sciences research group: Browse in the EOS era
NASA Technical Reports Server (NTRS)
Estes, John E.; Star, Jeffrey L.
1989-01-01
The problem of science data browse was examined. Given the tremendous data volumes that are planned for future space missions, particularly the Earth Observing System in the late 1990's, the need for access to large spatial databases must be understood. Work was continued to refine the concept of data browse. Further, software was developed to provide a testbed of the concepts, both to locate possibly interesting data, as well as view a small portion of the data. Build II was placed on a minicomputer and a PC in the laboratory, and provided accounts for use in the testbed. Consideration of the testbed software as an element of in-house data management plans was begun.
Permutation coding technique for image recognition systems.
Kussul, Ernst M; Baidyk, Tatiana N; Wunsch, Donald C; Makeyev, Oleksandr; Martín, Anabel
2006-11-01
A feature extractor and neural classifier for image recognition systems are proposed. The proposed feature extractor is based on the concept of random local descriptors (RLDs). It is followed by an encoder based on the permutation coding technique, which makes it possible to take into account not only the detected features but also the position of each feature in the image, and to make the recognition process invariant to small displacements. The combination of RLDs and permutation coding permits us to obtain a sufficiently general description of the image to be recognized. The code generated by the encoder is used as input data for the neural classifier. Different types of images were used to test the proposed image recognition system. It was tested on the handwritten digit recognition problem, the face recognition problem, and the microobject shape recognition problem. The results of testing are very promising. The error rate for the Modified National Institute of Standards and Technology (MNIST) database is 0.44% and for the Olivetti Research Laboratory (ORL) database it is 0.1%.
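The position-dependent coding idea can be sketched as follows. This toy version quantizes position and permutes a feature's base code once per cell, so displacements smaller than one cell leave the code unchanged. It is only a caricature of the paper's permutation coding (which uses distributed binary codes and partial shifts); all names and parameters here are invented:

```python
import random

def make_permutation(n, seed=0):
    """Fixed pseudorandom permutation of n code positions."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def apply_perm(code, perm, times):
    """Apply the permutation repeatedly to a code vector."""
    for _ in range(times):
        code = [code[i] for i in perm]
    return code

def encode(feature_code, x, cell=8):
    """Shift a feature's base code by permuting it once per quantized
    position cell: small displacements (within a cell) yield the same
    code, larger ones a permuted, hence different, code."""
    perm = make_permutation(len(feature_code), seed=42)
    return apply_perm(feature_code, perm, x // cell)

base = list(range(16))
```

The displacement invariance claimed in the abstract corresponds here to the fact that `encode` is constant within a cell and changes only across cell boundaries.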
The Ecological Model Web Concept: A Consultative Infrastructure for Decision Makers and Researchers
NASA Astrophysics Data System (ADS)
Geller, G.; Nativi, S.
2011-12-01
Rapid climate and socioeconomic changes may be outrunning society's ability to understand, predict, and respond to change effectively. Decision makers want better information about what these changes will be and how various resources will be affected, while researchers want better understanding of the components and processes of ecological systems, how they interact, and how they respond to change. Although there are many excellent models in ecology and related disciplines, there is only limited coordination among them, and accessible, openly shared models or model systems that can be consulted to gain insight on important ecological questions or assist with decision-making are rare. A "consultative infrastructure" that increased access to and sharing of models and model outputs would benefit decision makers and researchers as well as modelers. Of course, envisioning such an ambitious system is much easier than building it, but several complementary approaches exist that could contribute. The one discussed here is called the Model Web. This is a concept for an open-ended system of interoperable computer models and databases based on making models and their outputs available as services ("model as a service"). Initially, it might consist of a core of several models, from which it could grow gradually as new models or databases were added. However, a model web would not be a monolithic, rigidly planned and built system--instead, like the World Wide Web, it would grow largely organically, with limited central control, within a framework of broad goals and data exchange standards. One difference from the WWW is that a model web is much harder to create and has more pitfalls, and is thus a long-term vision. However, technology, science, observations, and models have advanced enough that parts of an ecological model web can be built and utilized now, forming a framework for gradual growth as well as a broadly accessible infrastructure. 
Ultimately, the value of a model web lies in the increase in access to and sharing of both models and model outputs. By lowering access barriers to models and their outputs there is less reinvention, more efficient use of resources, greater interaction among researchers and across disciplines, as well as other benefits. The growth of such a system of models fits well with the concept and architecture of the Global Earth Observing System of Systems (GEOSS) as well as the Semantic Web. And, while framed here in the context of ecological forecasting, the same concept can be applied to any discipline utilizing models.
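The "model as a service" idea can be sketched as a registry of models behind a common call signature, so one model's output can feed another. Everything here (service names, the toy models, the threshold) is invented to illustrate the pattern, not taken from the Model Web design:

```python
# A toy "model web": each model registers under a name and accepts a
# dict of inputs, so services can be discovered and chained uniformly.
SERVICES = {}

def register(name):
    def deco(fn):
        SERVICES[name] = fn
        return fn
    return deco

@register("ndvi_forecast")
def ndvi_forecast(inputs):
    # Pretend greenness forecast: a trivial linear response to rainfall.
    return {"ndvi": 0.1 + 0.002 * inputs["rain_mm"]}

@register("habitat_suitability")
def habitat_suitability(inputs):
    # Consumes the forecast service's output, illustrating chaining.
    ndvi = SERVICES["ndvi_forecast"]({"rain_mm": inputs["rain_mm"]})["ndvi"]
    return {"suitable": ndvi > 0.3}

result = SERVICES["habitat_suitability"]({"rain_mm": 150})
```

In a real model web the registry would be distributed and the calls would go over standard web-service interfaces rather than an in-process dictionary, but the uniform signature is what makes consultation and chaining possible.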
Biocuration workflows and text mining: overview of the BioCreative 2012 Workshop Track II.
Lu, Zhiyong; Hirschman, Lynette
2012-01-01
Manual curation of data from the biomedical literature is a rate-limiting factor for many expert curated databases. Despite the continuing advances in biomedical text mining and the pressing needs of biocurators for better tools, few existing text-mining tools have been successfully integrated into production literature curation systems such as those used by the expert curated databases. To close this gap and better understand all aspects of literature curation, we invited submissions of written descriptions of curation workflows from expert curated databases for the BioCreative 2012 Workshop Track II. We received seven qualified contributions, primarily from model organism databases. Based on these descriptions, we identified commonalities and differences across the workflows, the common ontologies and controlled vocabularies used and the current and desired uses of text mining for biocuration. Compared to a survey done in 2009, our 2012 results show that many more databases are now using text mining in parts of their curation workflows. In addition, the workshop participants identified text-mining aids for finding gene names and symbols (gene indexing), prioritization of documents for curation (document triage) and ontology concept assignment as those most desired by the biocurators. DATABASE URL: http://www.biocreative.org/tasks/bc-workshop-2012/workflow/.
Space transfer concepts and analysis for exploration missions
NASA Technical Reports Server (NTRS)
1990-01-01
The progress and results are summarized for mission/system requirements database; mission analysis; GN and C (Guidance, Navigation, and Control), aeroheating, Mars landing; radiation protection; aerobrake mass analysis; Shuttle-Z, TMIS (Trans-Mars Injection Stage); Long Duration Habitat Trade Study; evolutionary lunar and Mars options; NTR (Nuclear Thermal Rocket); NEP (Nuclear Electric Propulsion) update; SEP (Solar Electric Propulsion) update; orbital and space-based requirements; technology; piloted rover; programmatic task; and evolutionary and innovative architecture.
Intimate partner violence, technology, and stalking.
Southworth, Cynthia; Finn, Jerry; Dawson, Shawndell; Fraser, Cynthia; Tucker, Sarah
2007-08-01
This research note describes the use of a broad range of technologies in intimate partner stalking, including cordless and cellular telephones, fax machines, e-mail, Internet-based harassment, global positioning systems, spyware, video cameras, and online databases. The concept of "stalking with technology" is reviewed, and the need for an expanded definition of cyberstalking is presented. Legal issues and advocacy-centered responses, including training, legal remedies, public policy issues, and technology industry practices, are discussed.
Research on the ITOC based scheduling system for ship piping production
NASA Astrophysics Data System (ADS)
Li, Rui; Liu, Yu-Jun; Hamada, Kunihiro
2010-12-01
Manufacturing of ship piping systems is one of the major production activities in shipbuilding. The schedule of pipe production has an important impact on the master schedule of shipbuilding. In this research, the ITOC concept was introduced to solve the scheduling problems of a piping factory, and an intelligent scheduling system was developed. The system, in which a product model, an operation model, a factory model, and a knowledge database of piping production were integrated, automated the planning process and production scheduling. Details of the above points were discussed. Moreover, an application of the system in a piping factory, which achieved a higher level of performance as measured by tardiness, lead time, and inventory, was demonstrated.
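The abstract measures the system by tardiness, lead time, and inventory but does not detail its scheduling logic. As a hedged illustration of tardiness-oriented dispatching for pipe jobs, here is the classic earliest-due-date (EDD) rule, which minimizes maximum lateness on a single machine; the job names and numbers are invented, and the paper's ITOC system is knowledge-based rather than a simple dispatch rule:

```python
def edd_schedule(jobs):
    """Sequence jobs by earliest due date (EDD) and report the
    maximum tardiness of the resulting schedule.

    jobs: list of (name, processing_time, due_date) tuples.
    Returns (sequence_of_names, max_tardiness).
    """
    ordered = sorted(jobs, key=lambda j: j[2])
    t, max_tard = 0, 0
    for _, proc, due in ordered:
        t += proc                      # completion time of this job
        max_tard = max(max_tard, t - due)
    return [name for name, _, _ in ordered], max_tard

jobs = [("spool-A", 4, 10), ("spool-B", 2, 4), ("spool-C", 3, 12)]
seq, tard = edd_schedule(jobs)
```

For this data set EDD meets every due date, so maximum tardiness is zero; an intelligent scheduler like the one described would layer factory, product, and knowledge models on top of such baseline rules.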
Ventilator-Related Adverse Events: A Taxonomy and Findings From 3 Incident Reporting Systems.
Pham, Julius Cuong; Williams, Tamara L; Sparnon, Erin M; Cillie, Tam K; Scharen, Hilda F; Marella, William M
2016-05-01
In 2009, researchers from Johns Hopkins University's Armstrong Institute for Patient Safety and Quality; public agencies, including the FDA; and private partners, including the Emergency Care Research Institute and the University HealthSystem Consortium (UHC) Safety Intelligence Patient Safety Organization, sought to form a public-private partnership for the promotion of patient safety (P5S) to advance patient safety through voluntary partnerships. The study objective was to test the concept of the P5S to advance our understanding of safety issues related to ventilator events, to develop a common classification system for categorizing adverse events related to mechanical ventilators, and to perform a comparison of adverse events across different adverse event reporting systems. We performed a cross-sectional analysis of ventilator-related adverse events reported in 2012 from the following incident reporting systems: the Pennsylvania Patient Safety Authority's Patient Safety Reporting System, UHC's Safety Intelligence Patient Safety Organization database, and the FDA's Manufacturer and User Facility Device Experience database. Once each organization had its dataset of ventilator-related adverse events, reviewers read the narrative descriptions of each event and classified it according to the developed common taxonomy. A Pennsylvania Patient Safety Authority, FDA, and UHC search provided 252, 274, and 700 relevant reports, respectively. The 3 event types most commonly reported to the UHC and the Pennsylvania Patient Safety Authority's Patient Safety Reporting System databases were airway/breathing circuit issue, human factor issues, and ventilator malfunction events. The top 3 event types reported to the FDA were ventilator malfunction, power source issue, and alarm failure. 
Overall, we found that (1) through the development of a common taxonomy, adverse events from 3 reporting systems can be evaluated, (2) the types of events reported in each database were related to the purpose of the database and the source of the reports, resulting in significant differences in reported event categories across the 3 systems, and (3) a public-private collaboration for investigating ventilator-related adverse events under the P5S model is feasible. Copyright © 2016 by Daedalus Enterprises.
Exploring molecular networks using MONET ontology.
Silva, João Paulo Müller da; Lemke, Ney; Mombach, José Carlos; Souza, José Guilherme Camargo de; Sinigaglia, Marialva; Vieira, Renata
2006-03-31
The description of the complex molecular network responsible for cell behavior requires new tools to integrate large quantities of experimental data in the design of biological information systems. These tools could be used in the characterization of these networks and in the formulation of relevant biological hypotheses. The building of an ontology is a crucial step because it integrates in a coherent framework the concepts necessary to accomplish such a task. We present MONET (molecular network), an extensible ontology and an architecture designed to facilitate the integration of data originating from different public databases into a single, well-documented relational database that is compatible with the MONET formal definition. We also present an example of an application that can easily be implemented using these tools.
[The evolution of the concept of self-care in the healthcare system: a narrative literature review].
Lommi, Marzia; Matarese, Maria; Alvaro, Rosaria; Piredda, Michela; De Marinis, Maria Grazia
2015-01-01
To identify and analyze the definitions of self-care in the healthcare system in their evolution over time, as well as to identify the key actors of self-care. We conducted a narrative review of the literature on the definition of self-care in the CINAHL, PubMed and ILISI databases. The searches ranged from the first year included in each database until May 2013. In addition, a secondary analysis was performed on the references of the selected articles to identify additional data sources. The self-care definitions were grouped by decade and examined in their evolution. The first self-care definitions date back to the seventies, but only in the eighties was self-care seen as a key resource for healthcare systems. In the nineties, care activities previously considered the exclusive domain of the health professions were included in this concept; finally, over the 2000s the role of health professionals in self-care was highlighted, extending self-care activities to the psychological, social and spiritual dimensions. Self-care activities can be carried out directly by the person upon himself, delegated to others, or performed on others. The review has shown the large body of literature published in different disciplinary and cultural fields, which has led to the proliferation of definitions and interpretations of self-care. Italy has taken part only marginally in the international debate. It would be useful for Italian nurses also to conduct research to describe and understand self-care.
Patient Safety Leadership WalkRounds.
Frankel, Allan; Graydon-Baker, Erin; Neppl, Camilla; Simmonds, Terri; Gustafson, Michael; Gandhi, Tejal K
2003-01-01
In the WalkRounds concept, a core group, which includes the senior executives and/or vice presidents, conducts weekly visits to different areas of the hospital. The group, joined by one or two nurses in the area and other available staff, asks specific questions about adverse events or near misses and about the factors or systems issues that led to these events. ANALYSIS OF EVENTS: Events in the WalkRounds are entered into a database and classified according to the contributing factors. The data are aggregated by contributing factors and priority scores to highlight the root issues. The priority scores are used to determine QI pilots and make best use of limited resources. Executives are surveyed quarterly about actions they have taken as a direct result of WalkRounds and are asked what they have learned from the rounds. As of September 2002, 47 Patient Safety Leadership WalkRounds had been conducted, visiting a total of 48 different areas of the hospital and generating 432 individual comments. The WalkRounds require not only knowledgeable and invested senior leadership but also a well-organized support structure. Quality and safety personnel are needed to collect data and maintain a database of confidential information, evaluate the data from a systems approach, and delineate systems-based actions to improve care delivery. Comments of frontline clinicians and executives suggested that WalkRounds helps educate leadership and frontline staff in patient safety concepts and will lead to cultural changes, as manifested in more open discussion of adverse events and an improved rate of safety-based changes.
Teaching Children to Use Databases through Direct Instruction.
ERIC Educational Resources Information Center
Rooze, Gene E.
1988-01-01
Provides a direct instruction strategy for teaching skills and concepts required for database use. Creates an interactive environment which motivates, provides a model, imparts information, allows active student participation, gives knowledge of results, and presents guidance. (LS)
Teaching Database Modeling and Design: Areas of Confusion and Helpful Hints
ERIC Educational Resources Information Center
Philip, George C.
2007-01-01
This paper identifies several areas of database modeling and design that have been problematic for students and are likely to confuse even faculty. Major contributing factors are the lack of clarity and inaccuracies that persist in the presentation of some basic database concepts in textbooks. The paper analyzes the problems and discusses ways to…
Globe Teachers Guide and Photographic Data on the Web
NASA Technical Reports Server (NTRS)
Kowal, Dan
2004-01-01
The task of managing the GLOBE Online Teacher's Guide during this time period focused on transforming the technology behind the delivery system of this document. The web application was transformed from a flat-file retrieval system to a dynamic database access approach. The new methodology utilizes Java Server Pages (JSP) on the front end and an Oracle relational database on the back end. This new approach allows users of the web site, mainly teachers, to access content efficiently by grade level and/or by investigation or educational concept area. Moreover, teachers can gain easier access to data sheets and lab and field guides. The new online guide also included updated content for all GLOBE protocols. The GLOBE web management team was given documentation for maintaining the new application. Instructions for modifying the JSP templates and managing database content were included in this document. It was delivered to the team by the end of October 2003. The National Geophysical Data Center (NGDC) continued to manage the school study site photos on the GLOBE website. 333 study site photo images were added to the GLOBE database and posted on the web during this same time period for 64 schools. Documentation for processing study site photos was also delivered to the new GLOBE web management team. Lastly, assistance was provided in transferring reference applications such as the Cloud and LandSat quizzes and the Earth Systems Online Poster from NGDC servers to GLOBE servers, along with documentation for maintaining these applications.
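The shift from flat-file retrieval to dynamic database access can be sketched with SQLite standing in for the Oracle back end. The table layout, column names, and sample rows below are invented for illustration; they are not the GLOBE schema:

```python
import sqlite3

# A miniature relational stand-in for the guide's back end: content
# tagged by grade level and concept area, queried on demand instead
# of served as fixed flat files.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE guide_pages (
    title TEXT, grade_level TEXT, concept_area TEXT)""")
conn.executemany(
    "INSERT INTO guide_pages VALUES (?, ?, ?)",
    [("Cloud Protocols", "K-4", "Atmosphere"),
     ("Soil Moisture Lab", "5-8", "Hydrology"),
     ("LandSat Quiz", "9-12", "Remote Sensing")])

def pages_for(grade):
    """Return guide page titles for a grade level, as a JSP template
    would before rendering the page."""
    rows = conn.execute(
        "SELECT title FROM guide_pages WHERE grade_level = ?", (grade,))
    return [r[0] for r in rows]

hits = pages_for("5-8")
```

The same table supports lookup by concept area with a different WHERE clause, which is exactly the flexibility the flat-file system lacked.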
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Authmann, Christian; Dreber, Niels; Hess, Bastian; Kellner, Klaus; Morgenthal, Theunis; Nauss, Thomas; Seeger, Bernhard; Tsvuura, Zivanai; Wiegand, Kerstin
2017-04-01
Bush encroachment is a syndrome of land degradation that occurs in many savannas, including those of southern Africa. The increase in density, cover or biomass of woody vegetation often has negative effects on a range of ecosystem functions and services, which are hardly reversible. However, despite its importance, neither the causes of bush encroachment nor the consequences of different resource management strategies to combat or mitigate related shifts in savanna states are fully understood. The project "IDESSA" (An Integrative Decision Support System for Sustainable Rangeland Management in Southern African Savannas) aims to improve the understanding of the complex interplays between land use, climate patterns and vegetation dynamics, and to implement an integrative monitoring and decision-support system for the sustainable management of different savanna types. For this purpose, IDESSA follows an innovative approach that integrates local knowledge, botanical surveys, remote-sensing and machine-learning based time-series of atmospheric and land-cover dynamics, spatially explicit simulation modeling, and analytical database management. The integration of the heterogeneous data will be implemented in a user-oriented database infrastructure and scientific workflow system. Accessible via web-based interfaces, this database and analysis system will allow scientists to manage and analyze monitoring data and scenario computations, as well as allow stakeholders (e.g., land users, policy makers) to retrieve current ecosystem information and seasonal outlooks. We present the concept of the project and show preliminary results of the realization steps towards the integrative savanna management and decision-support system.
Preliminary assessment of rover power systems for the Mars Rover Sample Return Mission
NASA Technical Reports Server (NTRS)
Bents, David J.
1989-01-01
Four isotope power system concepts are presented and compared on a common basis for application to on-board electrical prime power for an autonomous planetary rover vehicle. A representative design point corresponding to the Mars Rover Sample Return (MRSR) preliminary mission requirements (500 W) was selected for comparison purposes. All system concepts utilize the General Purpose Heat Source (GPHS) isotope heat source developed by DOE. Two of the concepts employ thermoelectric (TE) conversion: one using the GPHS Radioisotope Thermoelectric Generator (RTG) as a reference case, the other using an advanced RTG with improved thermoelectric materials. The other two concepts are dynamic isotope power systems (DIPS): one using a closed Brayton cycle (CBC) turboalternator, and the other using a free-piston Stirling cycle engine/linear alternator (FPSE) with integrated heat source/heater head. Near-term technology levels have been assumed for concept characterization, using component technology figure-of-merit values taken from the published literature. For example, the CBC characterization draws from the historical test database accumulated from space Brayton cycle subsystems and components, from the NASA B engine through the mini-Brayton rotating unit. TE system performance is estimated from Voyager/multihundred-Watt (MHW)-RTG flight experience through Mod-RTG performance estimates, considering recent advances in TE materials under the DOD/DOE/NASA SP-100 and NASA CSTI programs. The Stirling DIPS system is characterized from scaled-down Space Power Demonstrator Engine (SPDE) data, using the GPHS directly incorporated into the heater head. 
The characterization/comparison results presented here differ from previous comparisons of isotope power systems (made for low Earth orbit (LEO) applications) because of the elevated background temperature on the Martian surface compared to LEO, and the higher sensitivity of dynamic systems to elevated sink temperature. The mass advantage of dynamic systems is significantly reduced for this application due to Mars' elevated background temperature.
Linking Publications to Instruments, Field Campaigns, Sites and Working Groups: The ARM Experience
NASA Astrophysics Data System (ADS)
Lehnert, K.; Parsons, M. A.; Ramachandran, R.; Fils, D.; Narock, T.; Fox, P. A.; Troyan, D.; Cialella, A. T.; Gregory, L.; Lazar, K.; Liang, M.; Ma, L.; Tilp, A.; Wagener, R.
2017-12-01
For the past 25 years, the ARM Climate Research Facility - a US Department of Energy scientific user facility - has been collecting atmospheric data in different climatic regimes using both in situ and remote instrumentation. The configuration of the facility's components has been designed to improve the understanding and representation, in climate and earth system models, of clouds and aerosols. Placing a premium on long-term continuous data collection has resulted in terabytes of data being collected, stored, and made accessible to any interested person. All data are accessible via the ARM.gov website and the ARM Data Discovery Tool. A team of metadata professionals assigns appropriate tags to help facilitate searching the databases for desired data. The knowledge organization tools and concepts used to create connections between data, instruments, field campaigns, sites, and measurements are familiar to informatics professionals. Ontology, taxonomy, classification, and thesauri are among the customized concepts put into practice for ARM's purposes. In addition to the multitude of data available, there have been approximately 3,000 journal articles that utilize ARM data. These have been linked to specific ARM web pages. Searches of the complete ARM publication database can be done using a separate interface. This presentation describes how ARM data are linked to instruments, sites, field campaigns, and publications through the application of standard knowledge organization tools and concepts.
A thermal shield concept for the Solar Probe mission
NASA Technical Reports Server (NTRS)
Miyake, Robert N.; Millard, Jerry M.; Randolph, James E.
1991-01-01
The Solar Probe spacecraft will travel to within 4 solar radii of the sun's center while performing a variety of fundamental experiments in space physics. Exposure to 2900 earth suns (400 W/sq cm) at perihelion imposes severe thermal and material demands on a solar shield system designed to protect the payload that will reside within the shield's shadow envelope, or umbra. The design of the shield subsystem is a thermal/materials challenge requiring new technology development. The mission is currently in the pre-project study phase, anticipating a 1995 project start, and preliminary design efforts for the shield are underway. This paper documents the current status of the mission concept, the materials issues, the configuration concept for the shield subsystem, the configuration studies performed to date, and the material testing required to provide a database supporting the design effort for the shield subsystem.
An Ancient Relation between Units of Length and Volume Based on a Sphere
Zapassky, Elena; Gadot, Yuval; Finkelstein, Israel; Benenson, Itzhak
2012-01-01
The modern metric system defines units of volume based on the cube. We propose that the ancient Egyptian system of measuring capacity employed a similar concept, but used the sphere instead. When considered in ancient Egyptian units, the volume of a sphere, whose circumference is one royal cubit, equals half a hekat. Using the measurements of large sets of ancient containers as a database, the article demonstrates that this formula was characteristic of Egyptian and Egyptian-related pottery vessels but not of the ceramics of Mesopotamia, which had a different system of measuring length and volume units. PMID:22470489
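The stated relation is easy to check numerically. The sketch below is a minimal verification, assuming commonly cited modern equivalents for the royal cubit (~52.5 cm) and the hekat (~4.8 L); both values are approximations introduced here, not figures from the article.

```python
import math

# Numerical check of the claim above, using assumed modern equivalents
# for the ancient units (both constants are approximate).
ROYAL_CUBIT_CM = 52.5   # common estimate of the royal cubit
HEKAT_CM3 = 4800.0      # ~4.8 litres

def sphere_volume_from_circumference(c):
    # r = c / (2*pi), so V = (4/3)*pi*r**3 = c**3 / (6*pi**2)
    return c ** 3 / (6 * math.pi ** 2)

volume = sphere_volume_from_circumference(ROYAL_CUBIT_CM)
print(f"{volume:.0f} cm^3 = {volume / HEKAT_CM3:.2f} hekat")  # close to 0.5
```

The computed volume comes out near 2,440 cm^3, i.e. roughly half a hekat, consistent with the article's claim.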
Hernandez, Penni; Podchiyska, Tanya; Weber, Susan; Ferris, Todd; Lowe, Henry
2009-11-14
The Stanford Translational Research Integrated Database Environment (STRIDE) clinical data warehouse integrates medication information from two Stanford hospitals that use different drug representation systems. To merge this pharmacy data into a single, standards-based model supporting research we developed an algorithm to map HL7 pharmacy orders to RxNorm concepts. A formal evaluation of this algorithm on 1.5 million pharmacy orders showed that the system could accurately assign pharmacy orders in over 96% of cases. This paper describes the algorithm and discusses some of the causes of failures in mapping to RxNorm.
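As an illustration only (this is not the STRIDE algorithm, whose details are in the paper), the sketch below shows the generic idea of normalizing a free-text pharmacy order before lookup against a local table of RxNorm concept names; the miniature lookup table and RxCUI placeholder values are hypothetical.

```python
import re

# Hypothetical miniature lookup table of RxNorm concept names -> RxCUIs.
RXNORM_INDEX = {
    "acetaminophen 325 mg oral tablet": "RXCUI-A",
    "lisinopril 10 mg oral tablet": "RXCUI-B",
}

def normalize(order_text):
    text = order_text.lower()
    text = re.sub(r"[^a-z0-9 ]", " ", text)        # drop punctuation noise
    text = re.sub(r"(\d)([a-z])", r"\1 \2", text)  # "325mg" -> "325 mg"
    return re.sub(r"\s+", " ", text).strip()

def map_to_rxnorm(order_text):
    """Return the RxCUI for a normalized order, or None if unmapped."""
    return RXNORM_INDEX.get(normalize(order_text))

print(map_to_rxnorm("ACETAMINOPHEN  325mg Oral Tablet"))  # -> RXCUI-A
```

Real mapping systems layer many more strategies (component parsing, dose-form tables, partial matches) on top of this kind of normalization, which is why the paper's failure analysis is interesting.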
Maguire, Martin C
2013-11-01
The EU EuroClim project developed a system to monitor and record climate change indicator data based on satellite observations of snow cover, sea ice and glaciers in Northern Europe and the Arctic. It also contained projection data for temperature, rainfall and average wind speed for Europe. These were all stored as data sets in a GIS database for users to download. The process of gathering requirements for a user population including scientists, researchers, policy makers, educationalists and the general public is described. Using an iterative design methodology, a user survey was administered to obtain initial feedback on the system concept, followed by panel sessions where users were presented with the system concept and a demonstrator to interact with it. The requirements of both specialist and non-specialist users are summarised together with strategies for the effective communication of geographic climate change information. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Ontology for heart rate turbulence domain from the conceptual model of SNOMED-CT.
Soguero-Ruiz, Cristina; Lechuga-Suárez, Luis; Mora-Jiménez, Inmaculada; Ramos-López, Javier; Barquero-Pérez, Óscar; García-Alberola, Arcadi; Rojo-Álvarez, José L
2013-07-01
The electronic health record (EHR) automates the clinician workflow, allowing evidence-based decision support and quality management. We aimed to start a framework for domain standardization of cardiovascular risk stratification into the EHR, including risk indices whose calculation involves ECG signal processing. We propose the use of biomedical ontologies based entirely on the conceptual model of SNOMED-CT, which allows us to implement our domain in the EHR. In this setting, the present study focused on the heart rate turbulence (HRT) domain, given its concise guidelines and clear procedures for parameter calculations. We used 289 concepts from SNOMED-CT, and generated 19 local extensions (new concepts) for the HRT-specific concepts not present in the current version of SNOMED-CT. New concepts included averaged and individual ventricular premature complex tachograms, initial sinus acceleration for turbulence onset, and sinus oscillation for turbulence slope. Two representative use studies were implemented: first, a prototype was inserted in the hospital information system to support HRT recordings and their simple follow-up by medical societies; second, advanced support for prospective scientific research, involving standard and emergent signal processing algorithms for the HRT indices, was generated and then tested on an example database of 27 Holter patients. Concepts of the proposed HRT ontology are publicly available through a terminology server, hence their use in any information system will be straightforward due to the interoperability provided by SNOMED-CT.
Health Information Exchange as a Complex and Adaptive Construct: Scoping Review.
Akhlaq, Ather; Sheikh, Aziz; Pagliari, Claudia
2017-01-25
To understand how the concept of Health Information Exchange (HIE) has evolved over time. Supplementary analysis of data from a systematic scoping review of definitions of HIE from 1900 to 2014, involving temporal analysis of underpinning themes. The search identified 268 unique definitions of HIE dating from 1957 onwards; 103 in scientific databases and 165 in Google. These contained consistent themes, representing the core concept of exchanging health information electronically, as well as fluid themes, reflecting the evolving policy, business, organisational and technological context of HIE (including the emergence of HIE as an organisational 'entity'). These are summarised graphically to show how the concept has evolved around the world with the passage of time. The term HIE emerged in 1957 with the establishment of Occupational HIE, evolving through the 1990s with concepts such as electronic data interchange and mobile computing technology; then from 2006-10 largely aligning with the US Government's health information technology strategy and the creation of HIEs as organisational entities, alongside the broader interoperability imperative, and continuing to evolve today as part of a broader international agenda for sustainable, information-driven health systems. The concept of HIE is an evolving and adaptive one, reflecting the ongoing quest for integrated and interoperable information to improve the efficiency and effectiveness of health systems, in a changing technological and policy environment.
The concept of shared mental models in healthcare collaboration.
McComb, Sara; Simpson, Vicki
2014-07-01
To report an analysis of the concept of shared mental models in health care. Shared mental models have been described as facilitators of effective teamwork. The complexity and criticality of the current healthcare system require shared mental models to enhance safe and effective patient/client care. Yet, the current concept definition in the healthcare literature is vague and, therefore, difficult to apply consistently in research and practice. Concept analysis. Literature for this concept analysis was retrieved from several databases, including CINAHL, PubMed and MEDLINE (EBSCO Interface), for the years 1997-2013. Walker and Avant's approach to concept analysis was employed and, following Paley's guidance, embedded in extant theory from the team literature. Although teamwork and collaboration are discussed frequently in healthcare literature, the concept of shared mental models in that context is not as commonly found but is increasing in appearance. Our concept analysis defines shared mental models as individually held knowledge structures that help team members function collaboratively in their environments and that comprise the attributes of content, similarity, accuracy and dynamics. This theoretically grounded concept analysis provides a foundation for a middle-range descriptive theory of shared mental models in nursing and health care. Further research concerning the impact of shared mental models in the healthcare setting can result in development and refinement of shared mental models to support effective teamwork and collaboration. © 2013 John Wiley & Sons Ltd.
Variant terminology [for aerospace information systems]
NASA Technical Reports Server (NTRS)
Buchan, Ronald L.
1991-01-01
A system called Variant Terminology Switching (VTS) is set forth that is intended to provide computer-assisted spellings for terms that have American and British versions. VTS is based on the use of brackets, parentheses, and other symbols in conjunction with letters that distinguish American and British spellings. The symbols are used in the systems as indicators of actions such as deleting, adding, and replacing letters as well as replacing entire words and concepts. The system is shown to be useful for the intended purpose and also for the recognition of misspellings and for the standardization of computerized input/output. The VTS system is of interest to the development of international retrieval systems for aerospace and other technical databases that enhance the use by the global scientific community.
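The general idea can be sketched in a few lines, though the rule table below is invented for illustration and does not reproduce Buchan's actual bracket-and-symbol notation for letter deletions, additions and replacements:

```python
# Illustrative rule table mapping American stems to British surface forms.
# VTS itself encodes such transformations with symbols; this sketch only
# shows the switching behavior the abstract describes.
RULES = [
    ("analyz", "analys"),   # analyze / analyse
    ("color", "colour"),
    ("center", "centre"),
]

def to_british(text):
    for us, uk in RULES:
        text = text.replace(us, uk)
    return text

def to_american(text):
    for us, uk in RULES:
        text = text.replace(uk, us)
    return text

print(to_british("The center analyzes color data"))
```

Because the rules are symmetric, the same table also supports recognition of "misspellings" that are merely the other variant, which is the standardization use the abstract mentions.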
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Joshua M.
Manufacturing tasks that are deemed too hazardous for workers require the use of automation, robotics, and/or other remote handling tools. The associated hazards may be radiological or nonradiological, and based on the characteristics of the environment and processing, a design may necessitate robotic labor, human labor, or both. Other factors such as cost, ergonomics, maintenance, and efficiency also affect task allocation and other design choices. Handling the tradeoffs of these factors can be complex, and lack of experience can be an issue when trying to determine whether feasible automation/robotics options exist and, if so, what they are. To address this problem, we utilize common engineering design approaches adapted for manufacturing system design in hazardous environments. We limit our scope to the conceptual and embodiment design stages, specifically a computational algorithm for concept generation and early design evaluation. For concept generation, we first develop the functional model, or function structure, for the process, using the common 'verb-noun' format for describing function. A common language, or functional basis, for manufacturing was developed and utilized to formalize function descriptions and to guide rules for function decomposition. Potential components for embodiment are also grouped in terms of this functional language and are stored in a database. The properties of each component are given as quantitative and qualitative criteria. Operators are also rated on task-relevant criteria, which are used to address task compatibility. Through the gathering of process requirements/constraints, construction of the component database, and development of the manufacturing basis and rule set, design knowledge is stored and available for computer use.
Thus, once the higher level process functions are defined, the computer can automate the synthesis of new design concepts through alternating steps of embodiment and function structure updates/decomposition. In the process, criteria guide function allocation of components/operators and help ensure compatibility and feasibility. Through multiple function assignment options and varied function structures, multiple design concepts are created. All of the generated designs are then evaluated based on a number of relevant evaluation criteria: cost, dose, ergonomics, hazards, efficiency, etc. These criteria are computed using physical properties/parameters of each system based on the qualities an engineer would use to make evaluations. Nuclear processes such as oxide conversion and electrorefining are utilized to aid algorithm development and provide test cases for the completed program. Through our approach, we capture design knowledge related to manufacturing and other operations in hazardous environments to enable a computational program to automatically generate and evaluate system design concepts.
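The generate-and-evaluate loop described above can be caricatured in a few lines: each process function is assigned to every compatible component or operator, and the cross-product of assignments yields candidate design concepts. The component database here is invented for illustration, not taken from the report.

```python
from itertools import product

# Hypothetical component database: process function -> candidates able to
# perform it (including a human-operator option where task-compatible).
COMPONENT_DB = {
    "transport material": ["conveyor", "robot arm"],
    "inspect part": ["vision system", "human operator"],
}

def generate_concepts(functions):
    """Enumerate every allocation of one candidate per function."""
    options = [COMPONENT_DB[f] for f in functions]
    return [dict(zip(functions, combo)) for combo in product(*options)]

for concept in generate_concepts(["transport material", "inspect part"]):
    print(concept)  # 2 x 2 = 4 candidate design concepts
```

In the actual algorithm, compatibility rules prune this enumeration and each surviving concept is scored on criteria such as cost, dose and ergonomics.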
Knowledge representation in metabolic pathway databases.
Stobbe, Miranda D; Jansen, Gerbert A; Moerland, Perry D; van Kampen, Antoine H C
2014-05-01
The accurate representation of all aspects of a metabolic network in a structured format, such that it can be used for a wide variety of computational analyses, is a challenge faced by a growing number of researchers. Analysis of five major metabolic pathway databases reveals that each database has made widely different choices to address this challenge, including how to deal with knowledge that is uncertain or missing. In concise overviews, we show how concepts such as compartments, enzymatic complexes and the direction of reactions are represented in each database. Importantly, concepts that a database does not represent are also described. Which aspects of the metabolic network need to be available in a structured format, and in what detail, differs per application. For example, for in silico phenotype prediction, a detailed representation of gene-protein-reaction relations and the compartmentalization of the network is essential. Our analysis also shows that current databases are still limited in capturing all details of the biology of the metabolic network, further illustrated with a detailed analysis of three metabolic processes. Finally, we conclude that the conceptual differences between the databases, which make knowledge exchange and integration a challenge, have not so far been resolved by the exchange formats in which knowledge representation is standardized.
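As a concrete, purely illustrative example of the representational choices the review compares, the sketch below encodes one reaction with a compartment, a direction flag and a gene-protein-reaction rule; the metabolite identifiers and gene names are invented examples, and the schema does not follow any specific database.

```python
from dataclasses import dataclass

@dataclass
class Reaction:
    rid: str
    substrates: dict      # metabolite id -> stoichiometric coefficient
    products: dict
    compartment: str
    reversible: bool
    gpr: str = ""         # gene-protein-reaction rule, e.g. "(g1 and g2) or g3"

# Example data only: a hexokinase-like cytosolic reaction.
hexokinase = Reaction(
    rid="HEX1",
    substrates={"glc__D_c": 1, "atp_c": 1},
    products={"g6p_c": 1, "adp_c": 1, "h_c": 1},
    compartment="cytosol",
    reversible=False,
    gpr="HK1 or HK2",
)
print(hexokinase.rid, hexokinase.compartment, hexokinase.reversible)
```

Whether fields like `gpr` or `compartment` exist at all, and how reversibility and uncertainty are encoded, is exactly where the five databases diverge.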
Application of 3D Spatio-Temporal Data Modeling, Management, and Analysis in DB4GEO
NASA Astrophysics Data System (ADS)
Kuper, P. V.; Breunig, M.; Al-Doori, M.; Thomsen, A.
2016-10-01
Many of today's worldwide challenges, such as climate change, water supply and transport systems in cities, or movements of crowds, need spatio-temporal data to be examined in detail. Thus the number of examinations in 3D space dealing with geospatial objects moving in space and time, or even changing their shapes in time, will rapidly increase in the future. Prominent spatio-temporal applications are subsurface reservoir modeling, water supply after seawater desalination, and the development of transport systems in mega cities. All of these applications generate large spatio-temporal data sets. However, the modeling, management and analysis of 3D geo-objects with changing shape and attributes in time is still a challenge for geospatial database architectures. In this article we describe the application of concepts for the modeling, management and analysis of 2.5D and 3D spatial plus 1D temporal objects implemented in DB4GeO, our service-oriented geospatial database architecture. An example application with spatio-temporal data of a landfill near the city of Osnabrück, Germany, demonstrates the usage of the concepts. Finally, an outlook on our future research, focusing on new applications with big data analysis in three spatial plus one temporal dimension in the United Arab Emirates, especially the Dubai area, is given.
Staff nurse clinical leadership: a concept analysis.
Chávez, Eduardo C; Yoder, Linda H
2015-01-01
The purpose of this article is to provide a concept analysis of staff nurse clinical leadership (SNCL). A clear delineation of SNCL will promote understanding and encourage communication of the phenomenon. Clarification of the concept will establish a common understanding and advance the practice, education, and research of this phenomenon. A review of the literature was conducted using several databases. The databases were searched using the following keywords: clinical leadership, nursing, bedside, staff nurse, front-line, front line, and leadership. The search yielded several sources; however, only those that focused on clinical leadership demonstrated by staff nurses in acute care hospital settings were selected for review. SNCL is defined as the process by which staff nurses exert significant influence over other individuals in the healthcare team and, although no formal authority has been vested in them, facilitate individual and collective efforts to accomplish shared clinical objectives. The theoretical definition of SNCL within the team context will provide a common understanding of this concept and differentiate it from other types of leadership in the nursing profession. This clarification and conceptualization of the concept will assist further research of the concept and advance its practical application in acute care hospital settings. © 2014 Wiley Periodicals, Inc.
Knowledge-Based Motion Control of AN Intelligent Mobile Autonomous System
NASA Astrophysics Data System (ADS)
Isik, Can
An Intelligent Mobile Autonomous System (IMAS), which is equipped with vision and low level sensors to cope with unknown obstacles, is modeled as a hierarchy of path planning and motion control. This dissertation concentrates on the lower level of this hierarchy (Pilot) with a knowledge-based controller. The basis of a theory of knowledge-based controllers is established, using the example of the Pilot level motion control of IMAS. In this context, the knowledge-based controller with a linguistic world concept is shown to be adequate for the minimum time control of an autonomous mobile robot motion. The Pilot level motion control of IMAS is approached in the framework of production systems. The three major components of the knowledge-based control that are included here are the hierarchies of the database, the rule base and the rule evaluator. The database, which is the representation of the state of the world, is organized as a semantic network, using a concept of minimal admissible vocabulary. The hierarchy of rule base is derived from the analytical formulation of minimum-time control of IMAS motion. The procedure introduced for rule derivation, which is called analytical model verbalization, utilizes the concept of causalities to describe the system behavior. A realistic analytical system model is developed and the minimum-time motion control in an obstacle strewn environment is decomposed to a hierarchy of motion planning and control. The conditions for the validity of the hierarchical problem decomposition are established, and the consistency of operation is maintained by detecting the long term conflicting decisions of the levels of the hierarchy. The imprecision in the world description is modeled using the theory of fuzzy sets. The method developed for the choice of the rule that prescribes the minimum-time motion control among the redundant set of applicable rules is explained and the usage of fuzzy set operators is justified. 
Also included in the dissertation are the description of the computer simulation of Pilot within the hierarchy of IMAS control and the simulated experiments that demonstrate the theoretical work.
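The use of fuzzy sets to choose one rule among several applicable ones can be roughly sketched as follows; the membership functions, input and rule consequents are invented for illustration and are not taken from the dissertation.

```python
def trimf(x, a, b, c):
    """Triangular fuzzy membership function over [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

distance = 1.2  # metres to nearest obstacle (example input)

# Firing strength of each (invented) rule's antecedent; the rule whose
# antecedent best matches the imprecise world description wins.
rules = {
    "turn sharply": trimf(distance, 0.0, 0.5, 2.0),   # obstacle NEAR
    "go straight": trimf(distance, 1.5, 5.0, 10.0),   # obstacle FAR
}
chosen = max(rules, key=rules.get)
print(chosen, round(rules[chosen], 3))
```

This max-membership selection is only one simple resolution strategy; the dissertation develops and justifies its own choice of fuzzy set operators for the redundant-rule case.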
An evolutionary concept analysis of futility in health care.
Morata, Lauren
2018-06-01
To report a concept analysis of futility in health care. Each member of the healthcare team: the physician, the nurse, the patient, the family and all others involved perceive futility differently. The current evidence and knowledge in regard to futility in health care manifest a plethora of definitions, meanings and interpretations without consensus. Concept analysis. Databases searched included Medline, Cumulative Index of Nursing and Allied Health Literature, Academic Search Premier, Cochrane Database of Systematic Reviews and PsycINFO. Search terms included "futil*," "concept analysis," "concept," "inefficacious," "non-beneficial," "ineffective" and "fruitless" from 1935-2016 to ensure a historical perspective of the concept. A total of 106 articles were retained to develop the concept. Rogers' evolutionary concept analysis was used to evaluate the concept of futility from ancient medicine to the present. Seven antecedents (the patient/family autonomy, surrogate decision-making movement, the patient-family/physician relationship, physician authority, legislation and court rulings, catastrophic events and advancing medical technology) lead to four major attributes (quantitative, physiologic, qualitative, and disease-specific). Ultimately, futile care could lead to consequences such as litigation, advancing technology, increasing healthcare costs, rationing, moral distress and ethical dilemmas. Futility in health care demonstrates components of a cyclical process and a consensus definition is proposed. A framework is developed to clarify the concept and articulate relationships among attributes, antecedents and consequences. Further testing of the proposed definition and framework are needed. © 2018 John Wiley & Sons Ltd.
Research on Ajax and Hibernate technology in the development of E-shop system
NASA Astrophysics Data System (ADS)
Yin, Luo
2011-12-01
Hibernate is an open-source object-relational mapping framework that provides a lightweight object encapsulation of JDBC, allowing Java programmers to manipulate the database using object-oriented concepts. The emergence of Ajax (asynchronous JavaScript and XML) opened the era of partial page refresh, so that developers can build web applications with richer interaction. The paper illustrates the concrete application of Ajax and Hibernate to the development of an e-shop in detail, and uses them to divide the entire program into relatively independent parts that nonetheless cooperate with one another. In this way, the program is easier to maintain and extend.
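A hand-rolled miniature of the object-relational mapping idea is sketched below in Python (Hibernate itself is a Java framework, so this is an analogy only): objects are saved to and loaded from a relational table through mapping functions, and calling code never touches SQL. The schema and data are invented.

```python
import sqlite3

# A plain object class plus two mapping functions stand in for the ORM layer.
class Product:
    def __init__(self, pid, name, price):
        self.pid, self.name, self.price = pid, name, price

def save(conn, p):
    conn.execute("INSERT INTO product VALUES (?, ?, ?)", (p.pid, p.name, p.price))

def load(conn, pid):
    row = conn.execute(
        "SELECT pid, name, price FROM product WHERE pid = ?", (pid,)
    ).fetchone()
    return Product(*row) if row else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (pid INTEGER, name TEXT, price REAL)")
save(conn, Product(1, "keyboard", 19.9))
print(load(conn, 1).name)  # -> keyboard
```

A full ORM adds what this sketch omits: declarative mappings, caching, lazy loading and transaction management.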
Relevance feedback-based building recognition
NASA Astrophysics Data System (ADS)
Li, Jing; Allinson, Nigel M.
2010-07-01
Building recognition is a nontrivial task in computer vision research which can be utilized in robot localization, mobile navigation, etc. However, existing building recognition systems usually encounter two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) they usually involve high-dimensional data which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme which integrates RF and subspace learning algorithms. Experimental results on our own building database show that the newly proposed scheme appreciably enhances recognition accuracy.
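Rocchio's update is the classic formulation of relevance feedback; the paper's exact RF scheme may differ, so the sketch below shows only the textbook version: the query vector is moved toward the mean of relevant examples and away from non-relevant ones.

```python
# Textbook Rocchio relevance feedback (illustrative, not the paper's scheme).
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    dim = len(query)

    def centroid(vecs):
        if not vecs:
            return [0.0] * dim
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

    r, n = centroid(relevant), centroid(nonrelevant)
    return [alpha * query[i] + beta * r[i] - gamma * n[i] for i in range(dim)]

q = [1.0, 0.0, 0.0]                       # initial query in feature space
updated = rocchio(q, relevant=[[0.0, 1.0, 0.0]], nonrelevant=[[0.0, 0.0, 1.0]])
print(updated)  # -> [1.0, 0.75, -0.15]
```

Iterating this update over user-labeled results is what lets RF pull low-level feature queries toward the semantic concept the user actually has in mind.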
Drozda, Joseph P; Roach, James; Forsyth, Thomas; Helmering, Paul; Dummitt, Benjamin; Tcheng, James E
2018-02-01
The US Food and Drug Administration (FDA) has recognized the need to improve the tracking of medical device safety and performance, with implementation of Unique Device Identifiers (UDIs) in electronic health information as a key strategy. The FDA funded a demonstration by Mercy Health wherein prototype UDIs were incorporated into its electronic information systems. This report describes the demonstration's informatics architecture. Prototype UDIs for coronary stents were created and implemented across a series of information systems, resulting in UDI-associated data flow from manufacture through point of use to long-term follow-up, with barcode scanning linking clinical data with UDI-associated device attributes. A reference database containing device attributes and the UDI Research and Surveillance Database (UDIR) containing the linked clinical and device information were created, enabling longitudinal assessment of device performance. The demonstration included many stakeholders: multiple Mercy departments, manufacturers, health system partners, the FDA, professional societies, the National Cardiovascular Data Registry, and information system vendors. The resulting system of systems is described in detail, including entities, functions, linkage between the UDIR and proprietary systems using UDIs as the index key, data flow, roles and responsibilities of actors, and the UDIR data model. The demonstration provided proof of concept that UDIs can be incorporated into provider and enterprise electronic information systems and used as the index key to combine device and clinical data in a database useful for device evaluation. Keys to success and challenges to achieving this goal were identified. Fundamental informatics principles were central to accomplishing the system of systems model. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. 
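The core informatics idea, using the UDI as the index key that joins point-of-use clinical events to a reference table of device attributes, can be sketched as follows; the records, identifier values and attribute names are invented, not Mercy's actual schema.

```python
# Invented reference table: UDI device identifier -> device attributes.
DEVICE_REFERENCE = {
    "00812345678901": {"model": "Stent-X 3.0mm", "manufacturer": "Acme"},
}

# Invented clinical events, captured via barcode scan at the point of use.
CLINICAL_EVENTS = [
    {"patient": "P001", "udi": "00812345678901", "procedure": "PCI"},
]

def link(events, reference):
    """Merge device attributes into each event, keyed on the UDI."""
    return [dict(event, **reference.get(event["udi"], {})) for event in events]

for row in link(CLINICAL_EVENTS, DEVICE_REFERENCE):
    print(row)
```

The resulting joined rows are the kind of record a research and surveillance database can accumulate for longitudinal device evaluation.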
NASA Astrophysics Data System (ADS)
Lyapin, Sergey; Kukovyakin, Alexey
Within the framework of the research program "Textaurus", an operational prototype of the multifunctional library T-Libra v4.1 has been created, which makes it possible to carry out flexible, parametrizable search within a full-text database. The information system is realized in a Web-browser / Web-server / SQL-server architecture. This allows an optimal combination of universality and efficiency of text processing on the one hand, and convenience and minimal costs for the end user (thanks to a standard Web-browser serving as the client application) on the other. The following principles underlie the information system: a) multifunctionality, b) intelligence, c) multilingual primary texts and full-text searching, d) development of the digital library (DL) by a user ("administrative client"), e) multi-platform operation. A "library of concepts", i.e. a block of functional models of semantic (concept-oriented) searching, together with the closely connected subsystem of parametrizable queries to the full-text database, serves as the conceptual basis of the multifunctionality and "intelligence" of the DL T-Libra v4.1. The author's paragraph is the unit of full-text searching in the suggested technology, and the "logic" of an educational or scientific topic or problem can be built into the multilevel, flexible structure of a query and the "library of concepts", replenishable by developers and experts. About 10 queries of various levels of complexity and conceptuality are realized in this version of the information system: from simple terminological searching (taking into account the lexical and grammatical paradigms of Russian) to several kinds of explication of terminological fields and adjustable two-parameter thematic searching (the parameters being a set of terms and the maximum distance between terms within an author's paragraph).
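The two-parameter thematic search described above can be approximated in a few lines: a paragraph matches when one occurrence of each term falls within a given word-distance window. This is an illustrative re-implementation of the idea, not T-Libra's actual query engine, and prefix matching here only gestures at real morphological paradigm handling.

```python
from itertools import product

def matches(paragraph, terms, max_distance):
    """True if one occurrence of every term fits in a word-distance window."""
    words = paragraph.lower().split()
    positions = []
    for term in terms:
        pos = [i for i, w in enumerate(words) if w.startswith(term)]
        if not pos:
            return False
        positions.append(pos)
    return any(max(combo) - min(combo) <= max_distance
               for combo in product(*positions))

p = "The library supports flexible full-text search over primary texts"
print(matches(p, ["library", "search"], max_distance=6))  # -> True
```

Tightening `max_distance` makes the query more "thematic" (the terms must co-occur closely), which is exactly the knob the two-parameter search exposes.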
A standards-based clinical information system for HIV/AIDS.
Stitt, F W
1995-01-01
To create a clinical data repository to interface the Veterans Administration (VA) Decentralized Hospital Computer Program (DHCP) and a departmental clinical information system for the management of HIV patients. This system supports record-keeping, decision-making, reporting, and analysis. The database development was designed to overcome two impediments to successful implementations of clinical databases: (i) lack of a standard reference data model, and (ii) lack of a universal standard for medical concept representation. Health Level Seven (HL7) is a standard protocol that specifies the implementation of interfaces between two computer applications (sender and receiver) from different vendors for electronic data exchange in the health care environment. This eliminates or substantially reduces the custom interface programming and program maintenance that would otherwise be required. HL7 defines the data to be exchanged, the timing of the interchange, and the communication of errors to the application. The formats are generic in nature and must be configured to meet the needs of the two applications involved. The standard conceptually operates at the seventh level of the ISO model for Open Systems Interconnection (OSI). It simply defines the data elements that are exchanged as abstract messages, and does not prescribe the exact bit stream of the messages that flow over the network. Lower level network software developed according to the OSI model may be used to encode and decode the actual bit stream. The OSI protocols are not universally implemented and, therefore, a set of encoding rules for defining the exact representation of a message must be specified. The VA has created an HL7 module to assist DHCP applications in exchanging health care information with other applications using the HL7 protocol. The DHCP HL7 module consists of a set of utility routines and files that provide a generic interface to the HL7 protocol for all DHCP applications.
The VA's DHCP core modules are in standard use at 169 hospitals, and the role of the VA system in health care delivery has been discussed elsewhere. This development was performed at the Miami VA Medical Center Special Immunology Unit, where a database was created for an HIV patient registry in 1987. Over 2,300 patients have been entered into a database that supports a problem-oriented summary of the patient's clinical record. The interface to the VA DHCP was designed and implemented to capture information from the patient treatment file, pharmacy, laboratory, radiology, and other modules. We obtained a suite of programs for implementing the HL7 encoding rules from Columbia-Presbyterian Medical Center in New York, written in ANSI C. This toolkit isolates our application programs from the details of the HL7 encoding rules, and allows them to deal with abstract messages at the programming level. While HL7 has become a standard for healthcare message exchange, SQL (Structured Query Language) is the standard for database definition, data manipulation, and query. The target database (Stitt F.W. The Problem-Oriented Medical Synopsis: a patient-centered clinical information system. Proc 17 SCAMC. 1993:88-93) provides clinical workstation functionality. Medical concepts are encoded using a preferred terminology derived from over 15 sources that include the Unified Medical Language System and SNOMed International (Stitt F.W. The Problem-Oriented Medical Synopsis: coding, indexing, and classification sub-model. Proc 18 SCAMC, 1994: in press). The databases were modeled using the Information Engineering CASE tools, and were written using relational database utilities, including embedded SQL in C (ESQL/C). We linked ESQL/C programs to the HL7 toolkit to allow data to be inserted, deleted, or updated, under transaction control. A graphical format will be used to display the entity-rel
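HL7 v2 abstract messages are conventionally encoded as pipe-delimited segments, with '^' separating components within a field. The sketch below parses one such segment; the sample pharmacy-order content and code values are invented, and real HL7 parsing must also honor the separator characters declared in the MSH segment.

```python
# Minimal parse of one pipe-delimited HL7 v2 segment (invented content).
def parse_segment(segment, field_sep="|", comp_sep="^"):
    fields = segment.split(field_sep)
    seg_id, rest = fields[0], fields[1:]
    return seg_id, [f.split(comp_sep) for f in rest]

seg = "RXE|^^^20240101|0000-0000^EXAMPLEDRUG 10MG TAB^NDC|10||mg"
seg_id, fields = parse_segment(seg)
print(seg_id, fields[1][1])  # segment id and the drug name component
```

Toolkits like the one obtained from Columbia-Presbyterian wrap this low-level tokenization so that application code works with abstract message fields instead of raw delimiter handling.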
Beadle, Elizabeth Jane; Ownsworth, Tamara; Fleming, Jennifer; Shum, David
2016-01-01
This review systematically appraised the evidence for changes to self-identity after traumatic brain injury (TBI) in adults and investigated associations between self-concept changes and neurocognitive and psychosocial functioning. Systematic searches of 4 databases (PsycINFO, PubMed, CINAHL, and Cochrane Systematic Review Database) were undertaken from January 1983 to July 2014. Empirical studies were included if they used a quantitative measure of pre-/postinjury changes in self-concept after TBI or compared levels of self-concept between TBI and control participants. Fifteen studies met the review criteria and, despite methodological differences, provided mostly evidence of negative changes to self-concept. However, stability in self-concept and positive changes to sense of self were also reported in some studies. Furthermore, levels of self-esteem and personality characteristics did not significantly differ between participants with TBI and orthopedic/trauma controls. Negative self-concept changes were associated with emotional distress in 3 studies. People with TBI most commonly experience negative changes in self-identity; however, such changes are also reported after other traumatic events or injuries. Greater consistency in measurement of self-identity change and use of longitudinal designs is recommended to improve understanding of factors contributing to self-concept changes after TBI and to guide clinical interventions.
Publishing Linked Open Data for Physical Samples - Lessons Learned
NASA Astrophysics Data System (ADS)
Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.
2016-12-01
Most data and information about physical samples and associated sampling features currently reside in relational databases. Integrating common concepts from various databases has motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), RDF Query Language (SPARQL), and Web Ontology Language (OWL). The goal of our work is threefold: To evaluate and select ontologies at different granularities for common concepts; to establish best practices and develop a generic methodology for publishing physical sample data stored in relational databases as Linked Open Data; and to reuse standard community vocabularies from the International Commission on Stratigraphy (ICS), Global Volcanism Program (GVP), General Bathymetric Chart of the Oceans (GEBCO), and others. Our work leverages developments in the EarthCube GeoLink project and the Interdisciplinary Earth Data Alliance (IEDA) facility for modeling and extracting physical sample data stored in relational databases. Reusing ontologies developed by GeoLink and IEDA has facilitated discovery and integration of data and information across multiple collections including the USGS National Geochemical Database (NGDB), System for Earth Sample Registration (SESAR), and Index to Marine & Lacustrine Geological Samples (IMLGS). We have evaluated, tested, and deployed Linked Open Data tools including Morph, Virtuoso Server, LodView, LodLive, and YASGUI for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Using persistent identifiers such as Open Researcher & Contributor IDs (ORCIDs) and International Geo Sample Numbers (IGSNs) at the record level makes it possible for other repositories to link related resources such as persons, datasets, documents, expeditions, awards, etc. to samples, features, and collections.
This work is supported by the EarthCube "GeoLink" project (NSF# ICER14-40221 and others) and the "USGS-IEDA Partnership to Support a Data Lifecycle Framework and Tools" project (USGS# G13AC00381).
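The core step of the workflow above, turning a relational sample record into RDF statements keyed by a persistent identifier, can be sketched in a few lines. This is an illustrative Python sketch using plain N-Triples strings; the predicate URIs and the sample row are assumptions, and a real mapping would reuse the GeoLink/IEDA ontology terms and a tool such as Morph.

```python
# Hypothetical sample row as it might come from a relational database.
IGSN_BASE = "http://igsn.org/"
rows = [{"igsn": "IEJEN0001",
         "name": "Basalt dredge sample",
         "collector_orcid": "0000-0002-1825-0097"}]

def row_to_ntriples(row: dict) -> list:
    """Map one relational row to N-Triples lines, subject = IGSN URI."""
    s = "<{}{}>".format(IGSN_BASE, row["igsn"])
    label = '{} <http://www.w3.org/2000/01/rdf-schema#label> "{}" .'.format(s, row["name"])
    creator = "{} <http://schema.org/creator> <https://orcid.org/{}> .".format(
        s, row["collector_orcid"])
    return [label, creator]

triples = [t for row in rows for t in row_to_ntriples(row)]
```

Because the subject is a resolvable IGSN URI and the creator an ORCID URI, other repositories can link to the same sample and person without any coordination beyond the shared identifiers.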
Propagation from the Start: The Spread of a Concept-Based Instructional Tool
ERIC Educational Resources Information Center
Friedrichsen, Debra M.; Smith, Christina; Koretsky, Milo D.
2017-01-01
We describe the propagation of a technology-based educational innovation through its first 3 years of public use. The innovation studied is the Concept Warehouse (CW), a database-driven website developed to support the use of concept-based pedagogies. This tool was initially developed for instructors in undergraduate chemical engineering courses,…
ERIC Educational Resources Information Center
Liu, Xiufeng; McKeough, Anne
2005-01-01
The aim of this study was to develop a model of students' energy concept development. Applying Case's (1985, 1992) structural theory of cognitive development, we hypothesized that students' concept of energy undergoes a series of transitions, corresponding to systematic increases in working memory capacity. The US national sample from the Third…
NASA Astrophysics Data System (ADS)
Henze, F.; Magdalinski, N.; Schwarzbach, F.; Schulze, A.; Gerth, Ph.; Schäfer, F.
2013-07-01
Information systems play an important role in historical research as well as in heritage documentation. As part of a joint research project of the German Archaeological Institute, the Brandenburg University of Technology Cottbus and the Dresden University of Applied Sciences a web-based documentation system is currently being developed, which can easily be adapted to the needs of different projects with individual scientific concepts, methods and questions. Based on open source and standardized technologies it will focus on open and well-documented interfaces to ease the dissemination and re-use of its content via web-services and to communicate with desktop applications for further evaluation and analysis. The core of the system is a generic data model that represents a wide range of topics and methods of archaeological work. By the provision of a concerted amount of initial themes and attributes a cross-project analysis of research data will be possible. The development of enhanced search and retrieval functionalities will simplify the processing and handling of large heterogeneous data sets. To achieve a high degree of interoperability with existing external data, systems and applications, standardized interfaces will be integrated. The analysis of spatial data shall be possible through the integration of web-based GIS functions. As an extension to this, customized functions for storage, processing and provision of 3D geo data are being developed. As part of the contribution, system requirements and concepts will be presented and discussed. A particular focus will be on introducing the generic data model and the derived database schema. The research work on enhanced search and retrieval capabilities will be illustrated by prototypical developments, as well as concepts and first implementations for an integrated 2D/3D Web-GIS.
Lack, N
2001-08-01
The introduction of the modified data set for quality assurance in obstetrics (formerly the perinatal survey) in Lower Saxony and Bavaria as early as 1999 created an urgent requirement for a correspondingly new statistical analysis of the revised data. The general outline of a new data reporting concept was originally presented by the Bavarian Commission for Perinatology and Neonatology at the Munich Perinatal Conference in November 1997. These ideas are germinal to the content and layout of the new quality report for obstetrics, currently in its nationwide harmonisation phase coordinated by the federal office for quality assurance in hospital care. A flexible and modular database-oriented analysis tool developed in Bavaria is now in its second year of successful operation. The functionalities of this system are described in detail.
ERIC Educational Resources Information Center
Nehm, Ross H.; Budd, Ann F.
2006-01-01
NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …
A system for intelligent teleoperation research
NASA Technical Reports Server (NTRS)
Orlando, N. E.
1983-01-01
The Automation Technology Branch of NASA Langley Research Center is developing a research capability in the field of artificial intelligence, particularly as applicable to teleoperator/robotics development for remote space operations. As a testbed for experimentation in these areas, a system concept has been developed and is being implemented. This system, termed DAISIE (Distributed Artificially Intelligent System for Interacting with the Environment), interfaces the key processes of perception, reasoning, and manipulation by linking hardware sensors and manipulators to a modular artificial intelligence (AI) software system in a hierarchical control structure. Verification experiments have been performed: one experiment used a blocksworld database and planner embedded in the DAISIE system to intelligently manipulate a simple physical environment; the other implemented a joint-space collision-avoidance algorithm. Continued system development is planned.
Bhinder, Prabhjot; Oberoi, Mandeep Singh
2009-01-01
Hospitals require better information connectivity because the timing and content of the information to be exchanged are critical. Past successes have generated renewed expectations for, and scrutiny of, current enterprise resource planning (ERP) applications in health care. The desire to bring improved connectivity and to match it with critical timing remains the ultimate goal. Currently, the majority of ERP system integrators are unable to meet these requirements of the healthcare industry. The concept of ERP is perceived to have made the process of segregating bills and patient records much easier. The industry is thus able to save more lives, but at the cost of individual privacy, since the common database shared by hospitals enables quicker access to patient records and medical histories. Businesses such as health care providers, pharmaceutical manufacturers, and distributors have already implemented rapid ERPs. The new concept of "Smart Pharmacies" will link the process all the way from drug delivery, patient care, demand management, and drug repositories to pharmaceutical manufacturers, while maintaining regulatory compliance and making the vital connections through which these businesses talk to each other electronically.
The One Universal Graph — a free and open graph database
NASA Astrophysics Data System (ADS)
Ng, Liang S.; Champion, Corbin
2016-02-01
Recent developments in graph databases are mostly huge projects involving big organizations, big operations and big capital, as the name Big Data attests. We propose the concept of the One Universal Graph (OUG), which states that all observable and known objects and concepts (physical, conceptual or digitally represented) can be connected in one single graph; furthermore, the OUG can be implemented with a very simple text file format and free software, capable of being executed on Android or smaller devices. As such, the One Universal Graph Data Exchange (GOUDEX) modules can potentially be installed on the hundreds of millions of Android devices and Intel-compatible computers shipped annually. Coupled with its open nature and its ability to connect to the leading search engines and databases currently in operation, GOUDEX has the potential to become the largest graph and a better interface for users and programmers to interact with data on the Internet. With a Web user interface for users to use and program in a native Linux environment, Free Crowdware implemented on GOUDEX can help inexperienced users learn programming with better organized documentation for free software, and can manage a programmer's contributions down to a single line of code or a single variable in a software project. It can become the first practically realizable "Internet brain" on which a global artificial intelligence system can be implemented. Being practically free and open, the One Universal Graph can have significant applications in robotics and artificial intelligence as well as social networks.
Mashup of Geo and Space Science Data Provided via Relational Databases in the Semantic Web
NASA Astrophysics Data System (ADS)
Ritschel, B.; Seelus, C.; Neher, G.; Iyemori, T.; Koyama, Y.; Yatagai, A. I.; Murayama, Y.; King, T. A.; Hughes, J. S.; Fung, S. F.; Galkin, I. A.; Hapgood, M. A.; Belehaki, A.
2014-12-01
The use of RDBMSs for the storage and management of geo and space science data and/or metadata is very common. Although the information stored in tables is based on a data model and is therefore well organized and structured, a direct mashup with RDF-based data stored in triple stores is not possible. One solution to the problem consists of transforming the whole content into RDF structures and storing it in triple stores. Another interesting way is the use of a specific system/service, such as D2RQ, for access to relational database content as virtual, read-only RDF graphs. The Semantic Web based proof-of-concept GFZ ISDC uses the triple store Virtuoso for the storage of general context information/metadata about geo and space science satellite and ground station data. Information about projects, platforms, instruments, persons, product types, etc. is available, but no detailed metadata about the data granules themselves. Such important information, e.g. the start or end time or the detailed spatial coverage of a single measurement, is stored only in RDBMS tables of the ISDC catalog system. In order to provide seamless access to all available information about the granules/data products, a mashup of the different data resources (triple store and RDBMS) is necessary. This paper describes the use of D2RQ for a Semantic Web/SPARQL based mashup of the relational databases used for the ISDC data server, but also for access to IUGONET and/or ESPAS and further geo and space science data resources.
RDBMS: Relational Database Management System
RDF: Resource Description Framework
SPARQL: SPARQL Protocol And RDF Query Language
D2RQ: Accessing Relational Databases as Virtual RDF Graphs
GFZ ISDC: German Research Centre for Geosciences Information System and Data Center
IUGONET: Inter-university Upper Atmosphere Global Observation Network (Japanese project)
ESPAS: Near earth space data infrastructure for e-science (European Union funded project)
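The mashup idea above, context metadata in a triple store plus granule metadata in an RDBMS, exposed together through a D2RQ-style virtual-graph view, can be sketched as follows. This is a toy Python illustration: the table, predicate, and product names are assumptions, not the actual ISDC schema, and a real deployment would use D2RQ and SPARQL rather than Python sets.

```python
import sqlite3

# Context metadata, as it would live in the triple store (Virtuoso).
context_triples = {("CHAMP_ORBIT", "hasPlatform", "CHAMP")}

# Granule metadata, as it lives only in the relational catalog.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE granules (product TEXT, start_time TEXT)")
conn.execute("INSERT INTO granules VALUES ('CHAMP_ORBIT', '2002-07-01T00:00:00Z')")

def virtual_triples(conn):
    """D2RQ-style read-only view of relational rows as triples."""
    for product, start in conn.execute("SELECT product, start_time FROM granules"):
        yield (product, "startTime", start)

# The mashup: one query surface over both sources.
merged = context_triples | set(virtual_triples(conn))
about_orbit = {(p, o) for s, p, o in merged if s == "CHAMP_ORBIT"}
```

The point of the virtual view is that the relational content is never copied into the triple store; it is translated on demand at query time.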
Acquisition-Management Program
NASA Technical Reports Server (NTRS)
Avery, Don E.; Vann, A. Vernon; Jones, Richard H.; Rew, William E.
1987-01-01
NASA Acquisition Management Subsystem (AMS) program is integrated, NASA-wide, standard automated-procurement-system program developed in 1985. Designed to provide each NASA installation with procurement data-base concept with on-line terminals for managing, tracking, reporting, and controlling contractual actions and associated procurement data. Subsystem provides control, status, and reporting for various procurement areas. Purpose of standardization is to decrease costs of procurement and operation of automatic data processing, increase procurement productivity, furnish accurate, on-line management information, and improve customer support. Written in ADABAS NATURAL.
The Strabo digital data system for Structural Geology and Tectonics
NASA Astrophysics Data System (ADS)
Tikoff, Basil; Newman, Julie; Walker, J. Doug; Williams, Randy; Michels, Zach; Andrews, Joseph; Bunse, Emily; Ash, Jason; Good, Jessica
2017-04-01
We are developing the Strabo data system for the structural geology and tectonics community. The data system will allow researchers to share primary data, apply new types of analytical procedures (e.g., statistical analysis), facilitate interaction with other geology communities, and allow new types of science to be done. The data system is based on a graph database, rather than relational database approach, to increase flexibility and allow geologically realistic relationships between observations and measurements. Development is occurring on: 1) A field-based application that runs on iOS and Android mobile devices and can function in either internet connected or disconnected environments; and 2) A desktop system that runs only in connected settings and directly addresses the back-end database. The field application also makes extensive use of images, such as photos or sketches, which can be hierarchically arranged with encapsulated field measurements/observations across all scales. The system also accepts Shapefile, GEOJSON, KML formats made in ArcGIS and QGIS, and will allow export to these formats as well. Strabo uses two main concepts to organize the data: Spots and Tags. A Spot is any observation that characterizes a specific area. Below GPS resolution, a Spot can be tied to an image (outcrop photo, thin section, etc.). Spots are related in a purely spatial manner (one spot encloses another spot, which encloses another, etc.). Tags provide a linkage between conceptually related spots. Together, this organization works seamlessly with the workflow of most geologists. We are expanding this effort to include microstructural data, as well as to the disciplines of sedimentology and petrology.
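The Spot/Tag organization described above can be sketched with a tiny in-memory structure: Spots nest spatially (enclosure), while Tags group conceptually related Spots across that hierarchy. The field names and example data are illustrative assumptions; the real Strabo system is a full graph database.

```python
# Spots nest spatially; each carries its own observations at its own scale.
spots = {
    "outcrop_1": {"encloses": ["station_2"], "data": {"type": "outcrop"}},
    "station_2": {"encloses": ["thin_section_3"], "data": {"strike": 40, "dip": 20}},
    "thin_section_3": {"encloses": [], "data": {"type": "photomicrograph"}},
}
# Tags link conceptually related Spots regardless of spatial nesting.
tags = {"shear_zone_A": ["station_2", "thin_section_3"]}

def all_enclosed(spot_id: str) -> list:
    """Walk the spatial hierarchy below one Spot, across all scales."""
    out = []
    for child in spots[spot_id]["encloses"]:
        out.append(child)
        out.extend(all_enclosed(child))
    return out
```

A graph model makes this natural: enclosure edges and tag edges coexist on the same nodes, which a fixed relational schema would handle less flexibly.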
Lunar base Controlled Ecological Life Support System (LCELSS): Preliminary conceptual design study
NASA Technical Reports Server (NTRS)
Schwartzkopf, Steven H.
1991-01-01
The objective of this study was to develop a conceptual design for a self-sufficient LCELSS. The mission need is for a CELSS with a capacity to supply the life support needs for a nominal crew of 30, and a capability for accommodating a range of crew sizes from 4 to 100 people. The work performed in this study was nominally divided into two parts. In the first part, relevant literature was assembled and reviewed. This review identified LCELSS performance requirements and the constraints and advantages confronting the design. It also collected information on the environment of the lunar surface and identified candidate technologies for the life support subsystems and the systems with which the LCELSS interfaced. Information on the operation and performance of these technologies was collected, along with concepts of how they might be incorporated into the LCELSS conceptual design. The data collected on these technologies was stored for incorporation into the study database. Also during part one, the study database structure was formulated and implemented, and an overall systems engineering methodology was developed for carrying out the study.
Ontology-based data integration between clinical and research systems.
Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich; Bürkle, Thomas; Ganslandt, Thomas
2015-01-01
Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it.
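The abstraction described above, a declarative rule instead of a hand-written ETL job, can be sketched minimally: a rule names a source concept and a target concept, and the SQL is generated from the rule. The table and column names and the rule structure are illustrative assumptions, not the paper's actual ontology constructs.

```python
# A declarative mapping rule: where a concept lives in the source EMR,
# and what it should become in the research target system.
rule = {
    "source": {"table": "emr_obs", "column": "code", "value": "BP_SYS"},
    "target": {"table": "research_facts", "concept": "LOINC:8480-6"},
}

def rule_to_sql(rule: dict) -> str:
    """Generate the ETL statement from the rule instead of writing it by hand."""
    src, tgt = rule["source"], rule["target"]
    return (
        "INSERT INTO {} (concept, value) ".format(tgt["table"])
        + "SELECT '{}', value FROM {} ".format(tgt["concept"], src["table"])
        + "WHERE {} = '{}'".format(src["column"], src["value"])
    )
```

Because the rule, not the SQL, is the maintained artifact, changing a source coding or a target terminology means editing one declarative entry rather than auditing hand-written ETL jobs.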
Relational databases for rare disease study: application to vascular anomalies.
Perkins, Jonathan A; Coltrera, Marc D
2008-01-01
To design a relational database integrating clinical and basic science data needed for multidisciplinary treatment and research in the field of vascular anomalies. Based on data points agreed on by the American Society of Pediatric Otolaryngology (ASPO) Vascular Anomalies Task Force. The database design enables sharing of data subsets in a Health Insurance Portability and Accountability Act (HIPAA)-compliant manner for multisite collaborative trials. Vascular anomalies pose diagnostic and therapeutic challenges. Our understanding of these lesions and treatment improvement is limited by nonstandard terminology, severity assessment, and measures of treatment efficacy. The rarity of these lesions places a premium on coordinated studies among multiple participant sites. The relational database design is conceptually centered on subjects having 1 or more lesions. Each anomaly can be tracked individually along with its treatment outcomes. This design allows for differentiation between treatment responses and untreated lesions' natural course. The relational database design eliminates data entry redundancy and results in extremely flexible search and data export functionality. Vascular anomaly programs in the United States. A relational database correlating clinical findings and photographic, radiologic, histologic, and treatment data for vascular anomalies was created for stand-alone and multiuser networked systems. Proof of concept for independent site data gathering and HIPAA-compliant sharing of data subsets was demonstrated. The collaborative effort by the ASPO Vascular Anomalies Task Force to create the database helped define a common vascular anomaly data set. The resulting relational database software is a powerful tool to further the study of vascular anomalies and the development of evidence-based treatment innovation.
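The subject-centered design described above, one subject with one or more lesions, each lesion with its own treatment history, can be sketched as a small relational schema. The column names and example rows are illustrative assumptions, not the ASPO data set.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE subjects  (id INTEGER PRIMARY KEY, dob TEXT);
CREATE TABLE lesions   (id INTEGER PRIMARY KEY,
                        subject_id INTEGER REFERENCES subjects(id), site TEXT);
CREATE TABLE treatments(id INTEGER PRIMARY KEY,
                        lesion_id INTEGER REFERENCES lesions(id), modality TEXT);
""")
conn.execute("INSERT INTO subjects VALUES (1, '2005-03-01')")
conn.execute("INSERT INTO lesions VALUES (10, 1, 'parotid'), (11, 1, 'orbit')")
conn.execute("INSERT INTO treatments VALUES (100, 10, 'sclerotherapy')")

# Untreated lesions follow their natural course; find them per subject.
untreated = conn.execute("""
    SELECT l.site FROM lesions l
    LEFT JOIN treatments t ON t.lesion_id = l.id
    WHERE l.subject_id = 1 AND t.id IS NULL
""").fetchall()
```

Hanging treatments off the lesion rather than the subject is what lets the query above separate treated lesions from untreated ones within the same patient.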
A lightweight approach for biometric template protection
NASA Astrophysics Data System (ADS)
Al-Assam, Hisham; Sellahewa, Harin; Jassim, Sabah
2009-05-01
Privacy and security are vital concerns for practical biometric systems. The concept of cancelable or revocable biometrics has been proposed as a solution for biometric template security. Revocable biometric means that biometric templates are no longer fixed over time and could be revoked in the same way as lost or stolen credit cards are. In this paper, we describe a novel and efficient approach to biometric template protection that meets the revocability property. This scheme can be incorporated into any biometric verification scheme while maintaining, if not improving, the accuracy of the original biometric system. We shall demonstrate the result of applying such transforms on face biometric templates and compare the efficiency of our approach with that of the well-known random projection techniques. We shall also present the results of experimental work on recognition accuracy before and after applying the proposed transform on feature vectors that are generated by wavelet transforms. These results are based on experiments conducted on a number of well-known face image databases, e.g. Yale and ORL databases.
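The revocability idea above can be sketched with the random projection baseline the paper compares against: project the feature vector with a matrix derived from a user-held key, and revoke a compromised template by issuing a new key. The dimensions, seeding scheme, and feature values here are illustrative assumptions, not the paper's actual transform.

```python
import random

def random_projection(features, key, out_dim):
    """Project a feature vector with a key-seeded random Gaussian matrix."""
    rng = random.Random(key)  # the key is the revocable secret
    matrix = [[rng.gauss(0, 1) for _ in features] for _ in range(out_dim)]
    return [sum(m * f for m, f in zip(row, features)) for row in matrix]

features = [0.4, 1.2, -0.7, 0.1]          # e.g. wavelet-domain face features
template_v1 = random_projection(features, key=1234, out_dim=3)
template_v2 = random_projection(features, key=9999, out_dim=3)  # after revocation
```

The same biometric yields unrelated templates under different keys, so a stolen template can be cancelled without the user's underlying biometric ever changing.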
Web interfaces to relational databases
NASA Technical Reports Server (NTRS)
Carlisle, W. H.
1996-01-01
This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC: a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory contains the major hardware platforms (SUN, Intel, and Motorola processors) and their most common operating systems: UNIX, Windows NT, Windows for Workgroups, and Macintosh. The SPARC 20 runs SUN Solaris 2.4; an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory; a Pentium PC runs Windows for Workgroups; two Intel 386 machines run Windows 3.1; and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.
Albin, Aaron; Ji, Xiaonan; Borlawsky, Tara B; Ye, Zhan; Lin, Simon; Payne, Philip Ro; Huang, Kun; Xiang, Yang
2014-10-07
The Unified Medical Language System (UMLS) contains many important ontologies in which terms are connected by semantic relations. For many studies on the relationships between biomedical concepts, the use of transitively associated information from ontologies and the UMLS has been shown to be effective. Although there are a few tools and methods available for extracting transitive relationships from the UMLS, they usually have major restrictions on the length of transitive relations or on the number of data sources. Our goal was to design an online platform that enables efficient studies on the conceptual relationships between any medical terms. To overcome the restrictions of available methods and to facilitate studies on the conceptual relationships between medical terms, we developed a Web platform, onGrid, that supports efficient transitive queries and conceptual relationship studies using the UMLS. This framework uses the latest technique in converting natural language queries into UMLS concepts, performs efficient transitive queries, and visualizes the result paths. It also dynamically builds a relationship matrix for two sets of input biomedical terms. We are thus able to perform effective studies on conceptual relationships between medical terms based on their relationship matrix. The advantage of onGrid is that it can be applied to study any two sets of biomedical concept relations and the relations within one set of biomedical concepts. We use onGrid to study the disease-disease relationships in the Online Mendelian Inheritance in Man (OMIM). By cross-validating our results with an external database, the Comparative Toxicogenomics Database (CTD), we demonstrated that onGrid is effective for the study of conceptual relationships between medical terms. onGrid is an efficient tool for querying the UMLS for transitive relations, studying the relationship between medical terms, and generating hypotheses.
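A transitive query of the kind onGrid performs over UMLS semantic relations amounts to path search over a directed graph of (concept, relation, concept) edges. The sketch below uses breadth-first search over a tiny hand-made relation set; the concepts and relations are illustrative, not UMLS data, and onGrid's actual algorithms are not described here.

```python
from collections import deque

# Toy semantic relations standing in for UMLS edges.
relations = {
    ("aspirin", "treats", "inflammation"),
    ("inflammation", "associated_with", "arthritis"),
    ("arthritis", "is_a", "joint_disease"),
}

def transitive_path(start, goal):
    """Breadth-first search returning one shortest relation path, or None."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for s, p, o in relations:
            if s == node and o not in seen:
                seen.add(o)
                queue.append((o, path + [(s, p, o)]))
    return None

path = transitive_path("aspirin", "joint_disease")
```

Returning the full edge path, not just reachability, is what allows a tool like onGrid to visualize how two terms are transitively connected.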
Extraction of events and rules of land use/cover change from the policy text
NASA Astrophysics Data System (ADS)
Lin, Guangfa; Xia, Beicheng; Huang, Wangli; Jiang, Huixian; Chen, Youfei
2007-06-01
A database recording snapshots of land-parcel history is the foundation of most models that simulate land use/cover change (LUCC) processes. But sequences of temporal snapshots are not sufficient to deduce and describe the mechanism of the LUCC process. The temporal relationships between recorded LUCC scenarios cannot categorically be converted into causal relationships, which are regarded as a key factor in spatial-temporal reasoning. The proprietors of land parcels adapted themselves to government policies and to changes in the production market, and then made decisions one way or another. When investigated at the local scale, with high resolution of the background scene, each change of a land parcel in an urban area was often related to one or more decision texts. These decision texts may come from different sections of a hierarchical government system at different levels, such as villages or communities, towns or counties, cities, provinces, or even the paramount level. All these texts were the result of balancing the advantages and disadvantages of different interest groups; they are the essential forces of LUCC in the human dimension. Up to now, a methodology has still been wanting for expressing these forces in a simulation system using GIS as a language. The present paper is part of our initial research on this topic. The term "event" is a very important concept in the framework of object-oriented theory in computer science, while in the domain of temporal GIS the concept of an event has developed in another direction. Definitions of events and their transformation relationships are discussed in this paper at three modeling levels: the real-world level, the conceptual level, and the programming level. In this context, with a case study of LUCC over the past 30 years in Xiamen city, Fujian province, P. R. China, the paper focuses on how to extract information about events and rules from the collected policy files and integrate that information into the LUCC temporal database. The paper concludes by listing the main steps of extracting events and rules from files and building an event database, and by indicating directions for future work on developing a spatial-temporal reasoning system over the event-oriented LUCC database.
Ji, Yanqing; Ying, Hao; Tran, John; Dews, Peter; Massanari, R Michael
2016-07-19
Finding highly relevant articles from biomedical databases is challenging not only because it is often difficult to accurately express a user's underlying intention through keywords but also because a keyword-based query normally returns a long list of hits with many citations being unwanted by the user. This paper proposes a novel biomedical literature search system, called BiomedSearch, which supports complex queries and relevance feedback. The system employed association mining techniques to build a k-profile representing a user's relevance feedback. More specifically, we developed a weighted interest measure and an association mining algorithm to find the strength of association between a query and each concept in the article(s) selected by the user as feedback. The top concepts were utilized to form a k-profile used for the next-round search. BiomedSearch relies on Unified Medical Language System (UMLS) knowledge sources to map text files to standard biomedical concepts. It was designed to support queries with any levels of complexity. A prototype of BiomedSearch software was made and it was preliminarily evaluated using the Genomics data from TREC (Text Retrieval Conference) 2006 Genomics Track. Initial experiment results indicated that BiomedSearch increased the mean average precision (MAP) for a set of queries. With UMLS and association mining techniques, BiomedSearch can effectively utilize users' relevance feedback to improve the performance of biomedical literature search.
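The k-profile construction described above can be sketched as follows: score each concept appearing in the user's feedback articles by how strongly it co-occurs with the query concept, and keep the top k for the next-round search. The interest measure used here, a plain co-occurrence fraction, is an illustrative stand-in for the paper's weighted measure, and the article concept sets are invented.

```python
from collections import Counter

def k_profile(query_concept, feedback_articles, k=2):
    """Build a k-profile: top-k concepts co-occurring with the query concept."""
    counts = Counter()
    support = 0
    for concepts in feedback_articles:
        if query_concept in concepts:
            support += 1
            counts.update(c for c in concepts if c != query_concept)
    # Simple co-occurrence fraction; assumes at least one matching article.
    scores = {c: n / support for c, n in counts.items()}
    return [c for c, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

articles = [
    {"BRCA1", "breast cancer", "DNA repair"},
    {"BRCA1", "breast cancer", "ovarian cancer"},
    {"BRCA1", "DNA repair"},
]
profile = k_profile("BRCA1", articles, k=2)
```

The profile, not the raw keywords, drives the next query, which is how the system folds the user's relevance feedback into the search.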
Kaiser Permanente Northern California pregnancy database: Description and proof of concept study.
Zerbo, Ousseny; Chan, Berwick; Goddard, Kristin; Lewis, Ned; Bok, Karin; Klein, Nicola P; Baxter, Roger
2016-11-04
We describe the establishment of a dynamic database linking mothers to newborns with the goal of studying vaccine safety in both pregnant women and their children and provide results of a study utilizing this database as a proof of concept. All Kaiser Permanente Northern California (KPNC) live births and their mothers were eligible for inclusion in the pregnancy database. We used the medical record number (MRN), a unique identifier, to retrieve information about events that occurred during the pregnancy and at delivery and linked this same MRN to newborns for post-partum follow up. We conducted a retrospective cohort study to evaluate the association between receipt of tetanus, diphtheria and acellular pertussis (Tdap) vaccine during pregnancy and fever 0-3 days after the first dose of diphtheria tetanus and acellular pertussis (DTaP) vaccine in the infant. The study included infants who were born at ⩾37 weeks gestation from January 1, 2009 - October 1, 2015 and who received their first DTaP vaccine between 6 and 10 weeks of age. We utilized diagnostic codes from inpatient, emergency department, outpatient clinics, and telephone calls. We identified fever using ICD-9 code 780.6, a recorded temperature ⩾101 degrees Fahrenheit, or parental report. The database contained the starting and ending date of each pregnancy and basic demographic characteristics of mothers and infants. There were 859,699 women and 873,753 children in the database as of January 2016. The proof of concept study included 148,699 infants. In a multivariable logistic regression analysis, Tdap vaccination during pregnancy was not associated with infant fever 0-3 days after the first dose of DTaP (adjusted odds ratio=0.92, 95% CI 0.82-1.04). The KPNC pregnancy database can be used for studies investigating exposure during pregnancy and outcomes in mothers and/or infants, particularly monitoring vaccine safety and effectiveness. Copyright © 2016 Elsevier Ltd. All rights reserved.
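The MRN-based linkage at the heart of the database above, joining a pregnancy record to its infant so exposures during pregnancy can be tied to infant outcomes, can be sketched in a few lines. The records and field names are illustrative assumptions, not the actual KPNC schema.

```python
# Toy records keyed by medical record number (MRN).
pregnancies = [{"mother_mrn": "M001", "start": "2014-01-10", "end": "2014-10-05"}]
births = [{"infant_mrn": "I500", "mother_mrn": "M001",
           "dob": "2014-10-05", "gest_weeks": 39}]

def link_mother_infant(pregnancies, births):
    """Join on the mother's MRN so pregnancy exposures can be tied to infant outcomes."""
    by_mother = {p["mother_mrn"]: p for p in pregnancies}
    return [{**b, "pregnancy": by_mother.get(b["mother_mrn"])} for b in births]

cohort = link_mother_infant(pregnancies, births)
```

Once each birth record carries its linked pregnancy record, infant follow-up events (e.g. fever after DTaP) can be analyzed against maternal exposures (e.g. Tdap) in the same row.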
Concepts of soil mapping as a basis for the assessment of soil functions
NASA Astrophysics Data System (ADS)
Baumgarten, Andreas
2014-05-01
Soil mapping systems in Europe have been designed mainly as tools for the description of soil characteristics from a morphogenetic viewpoint. In contrast to the American or FAO systems, European systems have placed soil development in the main focus. Nevertheless, recent developments in soil science stress the importance of the functions of soils with respect to ecosystems. As soil mapping systems usually offer a sound and extensive database, the deduction of soil functions from "classic" mapping parameters can be used for local and regional assessments. Depending on the pedo-transfer functions and mapping systems used, tailored approaches can be chosen for different applications. In Austria, a system mainly for spatial planning purposes has been developed; it will be presented and illustrated by means of best-practice examples.
Sensory overload: A concept analysis.
Scheydt, Stefan; Müller Staub, Maria; Frauenfelder, Fritz; Nielsen, Gunnar H; Behrens, Johann; Needham, Ian
2017-04-01
In the context of mental disorders, sensory overload is a widely described phenomenon used in conjunction with psychiatric interventions such as removal from stimuli. However, the theoretical foundation of sensory overload as addressed in the literature can be described as insufficient and fragmentary. To date, the concept of sensory overload has not yet been sufficiently specified or analyzed. The aim of the study was to analyze the concept of sensory overload in mental health care. A literature search was undertaken using specific electronic databases, specific journals and websites, hand searches, specific library catalogues, and electronic publishing databases. Walker and Avant's method of concept analysis was used to analyze the sources included in the analysis. All aspects of the method of Walker and Avant were covered in this concept analysis. The conceptual understanding has become more focused; the defining attributes, influencing factors and consequences are described and empirical referents identified. The concept analysis is a first step in the development of a middle-range descriptive theory of sensory overload based on social scientific and stress-theoretical approaches. This specification may serve as a foundation for further research, for the development of a nursing diagnosis or for guidelines. © 2017 Australian College of Mental Health Nurses Inc.
Goulet, Marie-Hélène; Larue, Caroline; Alderson, Marie
2016-04-01
This paper reports on an analysis of the concept of reflective practice. Reflective practice, a concept borrowed from the field of education, is widely used in nursing. However, to date, no study has explored whether this appropriation has resulted in a definition of the concept specific to the nursing discipline. A sample comprised of 42 articles in the field of nursing drawn from the CINAHL database and 35 articles in education from the ERIC database (1989-2013) was analyzed. A concept analysis using the method proposed by Bowers and Schatzman was conducted to explore the differing meanings of reflective practice in nursing and education. In nursing, the dimensions of the concept differ depending on context. In the clinical context, the dimensions may be summarized as theory-practice gap, development, and caring; in training, as learning, guided process, and development; and in research, as knowledge, method, and social change. In education, the concept is also used in the contexts of training (the dimensions being development, deliberate review, emotions, and evaluation) and research (knowledge, temporal distance, and method). The humanist dimension in nursing thus reflects a use of the concept more specific to the discipline. The concept analysis helped clarify the meaning of reflective practice in nursing and its specific use in the discipline. This observation leads to a consideration of how the concept has developed since its appropriation by nursing; the adoption of a terminology particular to nursing may well be worth contemplating. © 2015 Wiley Periodicals, Inc.
IMGT, the international ImMunoGeneTics information system®
Lefranc, Marie-Paule; Giudicelli, Véronique; Kaas, Quentin; Duprat, Elodie; Jabado-Michaloud, Joumana; Scaviner, Dominique; Ginestoux, Chantal; Clément, Oliver; Chaume, Denys; Lefranc, Gérard
2005-01-01
The international ImMunoGeneTics information system® (IMGT) (http://imgt.cines.fr), created in 1989 by the Laboratoire d'ImmunoGénétique Moléculaire LIGM (Université Montpellier II and CNRS) at Montpellier, France, is a high-quality integrated knowledge resource specializing in the immunoglobulins (IGs), T cell receptors (TRs), major histocompatibility complex (MHC) of human and other vertebrates, and related proteins of the immune systems (RPI) that belong to the immunoglobulin superfamily (IgSF) and to the MHC superfamily (MhcSF). IMGT includes several sequence databases (IMGT/LIGM-DB, IMGT/PRIMER-DB, IMGT/PROTEIN-DB and IMGT/MHC-DB), one genome database (IMGT/GENE-DB) and one three-dimensional (3D) structure database (IMGT/3Dstructure-DB), Web resources comprising 8000 HTML pages (IMGT Marie-Paule page), and interactive tools. IMGT data are expertly annotated according to the rules of the IMGT Scientific chart, based on the IMGT-ONTOLOGY concepts. IMGT tools are particularly useful for the analysis of the IG and TR repertoires in normal physiological and pathological situations. IMGT is used in medical research (autoimmune diseases, infectious diseases, AIDS, leukemias, lymphomas, myelomas), veterinary research, biotechnology related to antibody engineering (phage displays, combinatorial libraries, chimeric, humanized and human antibodies), diagnostics (clonalities, detection and follow-up of residual diseases) and therapeutic approaches (graft, immunotherapy and vaccinology). IMGT is freely available at http://imgt.cines.fr. PMID:15608269
Sadygov, Rovshan G; Cociorva, Daniel; Yates, John R
2004-12-01
Database searching is an essential element of large-scale proteomics. Because these methods are widely used, it is important to understand the rationale of the algorithms. Most algorithms are based on concepts first developed in SEQUEST and PeptideSearch. Four basic approaches are used to determine a match between a spectrum and sequence: descriptive, interpretative, stochastic and probability-based matching. We review the basic concepts used by most search algorithms, the computational modeling of peptide identification and current challenges and limitations of this approach for protein identification.
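As a toy illustration of the descriptive matching approach mentioned above (not SEQUEST or PeptideSearch themselves), one can generate the singly charged b- and y-ion masses of a candidate peptide and count how many observed peaks fall within a tolerance. The residue masses are standard monoisotopic values; the peak list is invented:

```python
# Monoisotopic residue masses (Da) for a few amino acids
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "K": 128.09496,
           "L": 113.08406, "E": 129.04259}
PROTON, WATER = 1.00728, 18.01056

def fragment_ions(peptide):
    """Singly charged b- and y-ion m/z values for a peptide string."""
    masses = [RESIDUE[aa] for aa in peptide]
    b = [sum(masses[:i]) + PROTON for i in range(1, len(masses))]
    y = [sum(masses[i:]) + WATER + PROTON for i in range(1, len(masses))]
    return b + y

def count_matches(observed_peaks, peptide, tol=0.5):
    """Descriptive score: observed peaks within tol Da of any predicted ion."""
    ions = fragment_ions(peptide)
    return sum(any(abs(peak - ion) <= tol for ion in ions)
               for peak in observed_peaks)

matched = count_matches([58.03, 90.05, 412.7], "GA")  # invented peak list
```

Real search engines build on this shared-peak count with interpretative, stochastic, or probability-based scoring, as the review describes.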
ERIC Educational Resources Information Center
Battle, Gary M.; Allen, Frank H.; Ferrence, Gregory M.
2010-01-01
A series of online interactive teaching units have been developed that illustrate the use of experimentally measured three-dimensional (3D) structures to teach fundamental chemistry concepts. The units integrate a 500-structure subset of the Cambridge Structural Database specially chosen for their pedagogical value. The units span a number of key…
High-Order Methods for Computational Physics
1999-03-01
…computation is running in parallel. Instead we use the concept of a voxel database (VDB) of geometric positions in the mesh [85]… Fig. 4.19: Connectivity and communications are established by building a voxel database (VDB) of positions. A VDB maps each position to a… studies such as the highly accurate stability computations considered help expand the database for this benchmark problem. The two-dimensional linear…
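The excerpt describes a voxel database (VDB) that maps geometric positions to shared indices so mesh connectivity can be established across processors. A minimal single-process sketch of the idea (the quantization tolerance and API here are assumptions, not the report's code) might look like:

```python
class VoxelDatabase:
    """Map floating-point positions to shared global ids by quantizing
    coordinates to a voxel grid, so coincident mesh points compare equal."""

    def __init__(self, tol=1e-8):
        self.tol = tol
        self._ids = {}  # voxel key -> global id

    def _key(self, pos):
        # Quantize each coordinate so nearly identical points share a key
        return tuple(round(c / self.tol) for c in pos)

    def index(self, pos):
        """Return the global id for a position, assigning a new one if unseen."""
        return self._ids.setdefault(self._key(pos), len(self._ids))

vdb = VoxelDatabase()
a = vdb.index((0.0, 1.0))
b = vdb.index((0.0, 1.0 + 1e-12))  # numerically coincident point
```

Because both positions quantize to the same voxel key, they share one global id, which is exactly the property needed to stitch element faces together across processor boundaries.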
NASA Technical Reports Server (NTRS)
Hielkema, J. U.; Howard, J. A.; Tucker, C. J.; Van Ingen Schenau, H. A.
1987-01-01
The African real time environmental monitoring using imaging satellites (Artemis) system, which should monitor precipitation and vegetation conditions on a continental scale, is presented. The hardware and software characteristics of the system are illustrated and the Artemis databases are outlined. Plans for the system include the use of hourly digital Meteosat data and daily NOAA/AVHRR data to study environmental conditions. Planned mapping activities include monthly rainfall anomaly maps, normalized difference vegetation index maps for ten day and monthly periods with a spatial resolution of 7.6 km, ten day crop/rangeland moisture availability maps, and desert locust potential breeding activity factor maps for a plague prevention program.
Medical image informatics infrastructure design and applications.
Huang, H K; Wong, S T; Pietka, E
1997-01-01
A picture archiving and communication system (PACS) is an integration of multimodality images and health information systems designed to improve the operation of a radiology department. As it evolves, PACS becomes a hospital image document management system with a voluminous repository of images and related data files. A medical image informatics infrastructure can be designed to take advantage of existing data, providing PACS with added value for health care service, research, and education. A medical image informatics infrastructure (MIII) consists of the following components: medical images and associated data (including the PACS database), image processing, data/knowledge base management, visualization, graphical user interface, communication networking, and application-oriented software. This paper describes these components and their logical connections, and illustrates some applications based on the concept of the MIII.
DOT National Transportation Integrated Search
1995-05-01
FAA Air Traffic Control Operations Concepts Volume VII.- TRACON Controllers (1989) developed by CTA, Inc., a technical description of the duties of a TRACON air traffic control specialist (ATCS), formatted in User Interface Language, was restructured...
Scoping literature review on the Learning Organisation concept as applied to the health system.
Akhnif, E; Macq, J; Idrissi Fakhreddine, M O; Meessen, B
2017-03-01
There is growing interest in the use of the management concept of a 'learning organisation'. The objective of this review is to explore work undertaken towards the application of this concept to the health sector in general and to reaching the goal of universal health coverage in particular. Of particular interest are evaluation frameworks and their application in health. We used a scoping literature review based on the York methodology. We conducted an online search using selected keywords in some of the main health science databases, selected websites and main reference books on learning organisations. We restricted our search to English-language sources. Inclusion and exclusion criteria were applied to arrive at a final list of articles, from which information was extracted, selected and charted. We identified 263 articles and other documents from our search. From these, 50 articles were selected for full analysis and 27 articles were used for the summary. The majority of the articles concerned hospital settings (15 articles, 55%). Seven articles (25%) related to the application of the concept to the health centre setting. Four articles discussed the application of the concept to the health system (14%). Most of the applications involved high-income countries (21 articles, 78%), with only one article relating to a low-income country. We found 13 different frameworks that were applied to different health organisations. The scoping review allowed us to assess applications of the learning organisation concept to the health sector to date. Such applications are still rare, but are increasingly being used. There is no uniform framework thus far, but convergence on the dimensions that matter is increasing. Many methodological questions remain unanswered. We also identified a gap in the use of this concept in low- and middle-income countries and in the health system as a whole.
Accounting for the Benefits of Database Normalization
ERIC Educational Resources Information Center
Wang, Ting J.; Du, Hui; Lehmann, Constance M.
2010-01-01
This paper proposes a teaching approach to reinforce accounting students' understanding of the concept of database normalization. Unlike a conceptual approach shown in most of the AIS textbooks, this approach involves with calculations and reconciliations with which accounting students are familiar because the methods are frequently used in…
2012-01-01
Background To establish a common database on particle therapy for the evaluation of clinical studies integrating a large variety of voluminous datasets, different documentation styles, and various information systems, especially in the field of radiation oncology. Methods We developed a web-based documentation system for transnational and multicenter clinical studies in particle therapy. 560 patients were treated from November 2009 to September 2011. Protons, carbon ions or a combination of both, as well as a combination with photons, were applied. To date, 12 studies have been initiated and more are in preparation. Results It is possible to immediately access all patient information and to exchange, store, process, and visualize text data, any DICOM images and multimedia data. Accessing the system and submitting clinical data is possible for internal and external users. Integrated into the hospital environment, data is imported both manually and automatically. Security and privacy protection, as well as data validation and verification, are ensured. Studies can be designed to fit individual needs. Conclusions The described database provides a basis for the documentation of large patient groups with specific and specialized questions to be answered. Since electronic documentation began recently, it has become apparent that the benefits lie in the user-friendly and timely documentation workflow. The ultimate goal is a simplification of research work, better quality of study analyses and, eventually, the improvement of treatment concepts by evaluating the effectiveness of particle therapy. PMID:22828013
ASEAN Mineral Database and Information System (AMDIS)
NASA Astrophysics Data System (ADS)
Okubo, Y.; Ohno, T.; Bandibas, J. C.; Wakita, K.; Oki, Y.; Takahashi, Y.
2014-12-01
AMDIS was launched officially at the Fourth ASEAN Ministerial Meeting on Minerals on 28 November 2013. In cooperation with the Geological Survey of Japan, the web-based GIS was developed using Free and Open Source Software (FOSS) and the Open Geospatial Consortium (OGC) standards. The system is composed of the local databases and the centralized GIS. The local databases created and updated using the centralized GIS are accessible from the portal site. The system introduces distinct advantages over traditional GIS: a global reach, a large number of users, better cross-platform capability, no charge for users, no charge for providers, ease of use, and unified updates. By raising the transparency of mineral information to mining companies and to the public, AMDIS shows that mineral resources are abundant throughout the ASEAN region; however, there are many data gaps. We understand that such problems occur because of insufficient governance of mineral resources. Mineral governance, as we use the term, is a concept that strengthens and maximizes the capacity and systems of the government institutions that manage the minerals sector. The elements of mineral governance include a) strengthening of information infrastructure facilities, b) technological and legal capacities of state-owned mining companies to fully engage with mining sponsors, c) government-led management of mining projects by supporting the project implementation units, d) government capacity in mineral management, such as the control and monitoring of mining operations, and e) facilitation of regional and local development plans and their implementation with the private sector.
Applying the archetype approach to the database of a biobank information management system.
Späth, Melanie Bettina; Grimson, Jane
2011-03-01
The purpose of this study is to investigate the feasibility of applying the openEHR archetype approach to modelling the data in the database of an existing proprietary biobank information management system. A biobank information management system stores the clinical/phenotypic data of the sample donor and sample related information. The clinical/phenotypic data is potentially sourced from the donor's electronic health record (EHR). The study evaluates the reuse of openEHR archetypes that have been developed for the creation of an interoperable EHR in the context of biobanking, and proposes a new set of archetypes specifically for biobanks. The ultimate goal of the research is the development of an interoperable electronic biomedical research record (eBMRR) to support biomedical knowledge discovery. The database of the prostate cancer biobank of the Irish Prostate Cancer Research Consortium (PCRC), which supports the identification of novel biomarkers for prostate cancer, was taken as the basis for the modelling effort. First the database schema of the biobank was analyzed and reorganized into archetype-friendly concepts. Then, archetype repositories were searched for matching archetypes. Some existing archetypes were reused without change, some were modified or specialized, and new archetypes were developed where needed. The fields of the biobank database schema were then mapped to the elements in the archetypes. Finally, the archetypes were arranged into templates specifically to meet the requirements of the PCRC biobank. A set of 47 archetypes was found to cover all the concepts used in the biobank. Of these, 29 (62%) were reused without change, 6 were modified and/or extended, 1 was specialized, and 11 were newly defined. These archetypes were arranged into 8 templates specifically required for this biobank. A number of issues were encountered in this research. 
Some arose from the immaturity of the archetype approach, such as immature modelling support tools, difficulties in defining high-quality archetypes and the problem of overlapping archetypes. In addition, the identification of suitable existing archetypes was time-consuming and many semantic conflicts were encountered during the process of mapping the PCRC BIMS database to existing archetypes. These include differences in the granularity of documentation, in metadata-level versus data-level modelling, in terminologies and vocabularies used, and in the amount of structure imposed on the information to be recorded. Furthermore, the current way of modelling the sample entity was found to be cumbersome in the sample-centric activity of biobanking. The archetype approach is a promising approach to create a shareable eBMRR based on the study participant/donor for biobanks. Many archetypes originally developed for the EHR domain can be reused to model the clinical/phenotypic and sample information in the biobank context, which validates the genericity of these archetypes and their potential for reuse in the context of biomedical research. However, finding suitable archetypes in the repositories and establishing an exact mapping between the fields in the PCRC BIMS database and the elements of existing archetypes that have been designed for clinical practice can be challenging and time-consuming and involves resolving many common system integration conflicts. These may be attributable to differences in the requirements for information documentation between clinical practice and biobanking. This research also recognized the need for better support tools, modelling guidelines and best practice rules and reconfirmed the need for better domain knowledge governance. Furthermore, the authors propose that the establishment of an independent sample record with the sample as record subject should be investigated. 
The research presented in this paper is limited by the fact that the new archetypes developed during this research are based on a single biobank instance. These new archetypes may not be complete, representing only those subsets of items required by this particular database. Nevertheless, this exercise exposes some of the gaps that exist in the archetype modelling landscape and highlights the concepts that need to be modelled with archetypes to enable the development of an eBMRR. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
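The mapping step described in this study, from biobank database fields to archetype elements, can be pictured as a simple lookup table. The column names and archetype paths below are hypothetical, chosen only to illustrate the shape of such a mapping, and are not the PCRC schema:

```python
# Hypothetical mapping from biobank database columns to openEHR-style
# archetype element paths (illustrative only; not the PCRC schema).
FIELD_TO_ARCHETYPE = {
    "donor_dob":       "openEHR-EHR-EVALUATION.birth_summary.v1/data/date_of_birth",
    "sample_type":     "openEHR-EHR-CLUSTER.specimen.v1/items/specimen_type",
    "collection_date": "openEHR-EHR-CLUSTER.specimen.v1/items/datetime_collected",
}

def to_archetype_record(row):
    """Re-key one database row onto archetype element paths,
    dropping columns that have no archetype mapping."""
    return {FIELD_TO_ARCHETYPE[col]: val
            for col, val in row.items() if col in FIELD_TO_ARCHETYPE}

record = to_archetype_record({"donor_dob": "1950-04-02",
                              "sample_type": "biopsy",
                              "internal_audit_flag": 1})
```

In practice, as the study notes, establishing such mappings involves resolving granularity, terminology, and structure conflicts that a one-to-one lookup cannot capture.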
Synthetic Vision Enhances Situation Awareness and RNP Capabilities for Terrain-Challenged Approaches
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Prinzel, Lawrence J., III; Bailey, Randall E.; Arthur, Jarvis J., III
2003-01-01
The Synthetic Vision Systems (SVS) Project of the Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, as well as to enhance the operational capabilities of all aircraft, through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (Head-Up Display, Head-Down Size A, Head-Down Size X) and two terrain texture methods (photo-realistic, generic) in comparison to the simulated Baseline Boeing-757 Electronic Attitude Direction Indicator and Navigation / Terrain Awareness and Warning System displays. These independent variables were evaluated for situation awareness, path error, and workload while making approaches to Runways 25 and 07 and during simulated engine-out Cottonwood 2 and KREMM departures. The results of the experiment showed significantly improved situation awareness, performance, and workload for SVS concepts compared to the Baseline displays and confirmed the retrofit capability of the Head-Up Display and Size A SVS concepts. The research also demonstrated that the pathway and pursuit guidance used within the SVS concepts achieved required navigation performance (RNP) criteria.
Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W; Levias, Sheldon
2017-01-01
This study examines how two kinds of authentic research experiences related to smoking behavior-genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)-influence students' perceptions and understanding of scientific research and related science concepts. The study used pre and post surveys and a focus group protocol to compare students who conducted the research experiences in one of two sequences: genotyping before database and database before genotyping. Students rated the genotyping experiment to be more like real science than the database experiment, in spite of the fact that they associated more scientific tasks with the database experience than genotyping. Independent of the order of completing the labs, students showed gains in their understanding of science concepts after completion of the two experiences. There was little change in students' attitudes toward science pre to post, as measured by the Scientific Attitude Inventory II. However, on the basis of their responses during focus groups, students developed more sophisticated views about the practices and nature of science after they had completed both research experiences, independent of the order in which they experienced them. © 2017 M. Munn et al. CBE—Life Sciences Education © 2017 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Kreimeyer, Kory; Foster, Matthew; Pandey, Abhishek; Arya, Nina; Halford, Gwendolyn; Jones, Sandra F; Forshee, Richard; Walderhaug, Mark; Botsis, Taxiarchis
2017-09-01
We followed a systematic approach based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses to identify existing clinical natural language processing (NLP) systems that generate structured information from unstructured free text. Seven literature databases were searched with a query combining the concepts of natural language processing and structured data capture. Two reviewers screened all records for relevance during two screening phases, and information about clinical NLP systems was collected from the final set of papers. A total of 7149 records (after removing duplicates) were retrieved and screened, and 86 were determined to fit the review criteria. These papers contained information about 71 different clinical NLP systems, which were then analyzed. The NLP systems address a wide variety of important clinical and research tasks. Certain tasks are well addressed by the existing systems, while others remain as open challenges that only a small number of systems attempt, such as extraction of temporal information or normalization of concepts to standard terminologies. This review has identified many NLP systems capable of processing clinical free text and generating structured output, and the information collected and evaluated here will be important for prioritizing development of new approaches for clinical NLP. Copyright © 2017 Elsevier Inc. All rights reserved.
Benigni, Romualdo; Bossa, Cecilia; Richard, Ann M; Yang, Chihae
2008-01-01
Mutagenicity and carcinogenicity databases are crucial resources for toxicologists and regulators involved in chemicals risk assessment. Until recently, existing public toxicity databases have been constructed primarily as "look-up-tables" of existing data, and most often did not contain chemical structures. Concepts and technologies originated from the structure-activity relationships science have provided powerful tools to create new types of databases, where the effective linkage of chemical toxicity with chemical structure can facilitate and greatly enhance data gathering and hypothesis generation, by permitting: a) exploration across both chemical and biological domains; and b) structure-searchability through the data. This paper reviews the main public databases, together with the progress in the field of chemical relational databases, and presents the ISSCAN database on experimental chemical carcinogens.
Jandee, Kasemsak; Kaewkungwal, Jaranit; Khamsiriwatchara, Amnat; Lawpoolsri, Saranath; Wongwit, Waranya; Wansatid, Peerawat
2015-07-20
Entering data onto paper-based forms, then digitizing them, is a traditional data-management method that might result in poor data quality, especially when the secondary data are incomplete, illegible, or missing. Transcription errors from source documents to case report forms (CRFs) are common, and subsequently the errors pass from the CRFs to the electronic database. This study aimed to demonstrate the usefulness and to evaluate the effectiveness of mobile phone camera applications in capturing health-related data, with the aim of improving data quality and completeness compared to current routine practices exercised by government officials. In this study, the concept of "data entry via phone image capture" (DEPIC) was introduced and developed to capture data directly from source documents. This case study was based on immunization history data recorded in a mother and child health (MCH) logbook. The MCH logbooks (kept by parents) were updated whenever parents brought their children to health care facilities for immunization. Traditionally, health providers are supposed to key in duplicate records of the immunization history of each child: both in the MCH logbook, which is returned to the parents, and on the individual immunization history card, which is kept at the health care unit to be subsequently entered into the electronic health care information system (HCIS). In this study, DEPIC utilized the photographic functionality of mobile phones to capture images of all immunization-history records on logbook pages and to transcribe these records directly into the database using a data-entry screen corresponding to logbook data records. DEPIC data were then compared with HCIS data-points for quality, completeness, and consistency. As a proof of concept, DEPIC captured the immunization history records of 363 ethnic children living in remote areas from their MCH logbooks.
Comparison of the 2 databases, DEPIC versus HCIS, revealed differences in the percentage of completeness and consistency of immunization history records. Comparing the records of each logbook in the DEPIC and HCIS databases, 17.3% (63/363) of children had complete immunization history records in the DEPIC database, but no complete records were reported in the HCIS database. Regarding the individual's actual vaccination dates, comparison of records taken from MCH logbook and those in the HCIS found that 24.2% (88/363) of the children's records were absolutely inconsistent. In addition, statistics derived from the DEPIC records showed a higher immunization coverage and much more compliance to immunization schedule by age group when compared to records derived from the HCIS database. DEPIC, or the concept of collecting data via image capture directly from their primary sources, has proven to be a useful data collection method in terms of completeness and consistency. In this study, DEPIC was implemented in data collection of a single survey. The DEPIC concept, however, can be easily applied in other types of survey research, for example, collecting data on changes or trends based on image evidence over time. With its image evidence and audit trail features, DEPIC has the potential for being used even in clinical studies since it could generate improved data integrity and more reliable statistics for use in both health care and research settings.
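A completeness-and-consistency comparison like the one reported (17.3% complete in DEPIC, 24.2% with fully inconsistent vaccination dates) can be sketched as set operations over per-child immunization records. The record structure and toy data below are assumptions for illustration, not the study's data:

```python
def compare_records(depic, hcis, schedule):
    """Count children whose DEPIC record holds every scheduled dose
    (completeness) and children whose recorded dates all disagree with
    HCIS (absolute inconsistency). Records: child id -> {dose: date}."""
    complete = sum(1 for child in depic
                   if set(schedule) <= set(depic[child]))
    inconsistent = sum(
        1 for child in depic
        if child in hcis and all(depic[child].get(d) != hcis[child].get(d)
                                 for d in depic[child]))
    return complete, inconsistent

schedule = ["BCG", "DTP1", "DTP2"]
depic = {"c1": {"BCG": "2014-01-05", "DTP1": "2014-03-02", "DTP2": "2014-05-01"},
         "c2": {"BCG": "2014-02-01"}}
hcis  = {"c1": {"BCG": "2014-01-05", "DTP1": "2014-03-02"},
         "c2": {"BCG": "2014-02-10"}}  # c2's only recorded date differs
complete, inconsistent = compare_records(depic, hcis, schedule)
```

Dividing such counts by the cohort size yields the percentages quoted in the abstract (e.g., 63/363 = 17.3%).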
End-User Use of Data Base Query Language: Pros and Cons.
ERIC Educational Resources Information Center
Nicholes, Walter
1988-01-01
Man-machine interface, the concept of a computer "query," a review of database technology, and a description of the use of query languages at Brigham Young University are discussed. The pros and cons of end-user use of database query languages are explored. (Author/MLW)
Imprecision and Uncertainty in the UFO Database Model.
ERIC Educational Resources Information Center
Van Gyseghem, Nancy; De Caluwe, Rita
1998-01-01
Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…
ERIC Educational Resources Information Center
Williamson, Ben
2015-01-01
This article examines the emergence of "digital governance" in public education in England. Drawing on and combining concepts from software studies, policy and political studies, it identifies some specific approaches to digital governance facilitated by network-based communications and database-driven information processing software…
Software for Simulating Air Traffic
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Bilimoria, Karl; Grabbe, Shon; Chatterji, Gano; Sheth, Kapil; Mulfinger, Daniel
2006-01-01
Future Air Traffic Management Concepts Evaluation Tool (FACET) is a system of software for performing computational simulations to evaluate advanced concepts of air-traffic management. FACET includes a program that generates a graphical user interface plus programs and databases that implement computational models of weather, airspace, airports, navigation aids, aircraft performance, and aircraft trajectories. Examples of concepts studied by use of FACET include aircraft self-separation for free flight; prediction of air-traffic-controller workload; decision support for direct routing; integration of spacecraft-launch operations into the U.S. national airspace system; and traffic-flow management using rerouting, metering, and ground delays. Aircraft can be modeled as flying along either flight-plan routes or great-circle routes as they climb, cruise, and descend according to their individual performance models. The FACET software is modular and is written in the Java and C programming languages. The architecture of FACET strikes a balance between flexibility and fidelity; as a consequence, FACET can be used to model systemwide airspace operations over the contiguous U.S., involving as many as 10,000 aircraft, all on a single desktop or laptop computer running any of a variety of operating systems. Two notable applications of FACET include: (1) reroute conformance monitoring algorithms that have been implemented in one of the Federal Aviation Administration's nationally deployed, real-time, operational systems; and (2) the licensing and integration of FACET with the commercially available Flight Explorer, which is an Internet-based, real-time flight-tracking system.
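Great-circle routing of the kind FACET models rests on standard spherical geometry. A minimal haversine sketch (the Earth radius, function name, and waypoints are assumptions for illustration, not FACET code) for the distance between two waypoints:

```python
import math

def great_circle_nm(lat1, lon1, lat2, lon2, radius_nm=3440.065):
    """Great-circle distance in nautical miles between two lat/lon points
    (in degrees) on a spherical Earth, via the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * radius_nm * math.asin(math.sqrt(a))

# e.g. roughly JFK (40.64, -73.78) to LAX (33.94, -118.41)
d = great_circle_nm(40.64, -73.78, 33.94, -118.41)
```

A trajectory model would sample intermediate points along this arc and apply the aircraft's climb, cruise, and descent performance to each segment.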
Adverse drug event reporting systems: a systematic review
Peddie, David; Wickham, Maeve E.; Badke, Katherin; Small, Serena S.; Doyle‐Waters, Mary M.; Balka, Ellen; Hohl, Corinne M.
2016-01-01
Aim Adverse drug events (ADEs) are harmful and unintended consequences of medications. Their reporting is essential for drug safety monitoring and research, but it has not been standardized internationally. Our aim was to synthesize information about the type and variety of data collected within ADE reporting systems. Methods We developed a systematic search strategy, applied it to four electronic databases, and completed an electronic grey literature search. Two authors reviewed titles and abstracts, and all eligible full‐texts. We extracted data using a standardized form, and discussed disagreements until reaching consensus. We synthesized data by collapsing data elements, eliminating duplicate fields and identifying relationships between reporting concepts and data fields using visual analysis software. Results We identified 108 ADE reporting systems containing 1782 unique data fields. We mapped them to 33 reporting concepts describing patient information, the ADE, concomitant and suspect drugs, and the reporter. While reporting concepts were fairly consistent, we found variability in data fields and corresponding response options. Few systems clarified the terminology used, and many used multiple drug and disease dictionaries such as the Medical Dictionary for Regulatory Activities (MedDRA). Conclusion We found substantial variability in the data fields used to report ADEs, limiting the comparability of ADE data collected using different reporting systems, and undermining efforts to aggregate data across cohorts. The development of a common standardized data set that can be evaluated with regard to data quality, comparability and reporting rates is likely to optimize ADE data and drug safety surveillance. PMID:27016266
Adverse drug event reporting systems: a systematic review.
Bailey, Chantelle; Peddie, David; Wickham, Maeve E; Badke, Katherin; Small, Serena S; Doyle-Waters, Mary M; Balka, Ellen; Hohl, Corinne M
2016-07-01
Adverse drug events (ADEs) are harmful and unintended consequences of medications. Their reporting is essential for drug safety monitoring and research, but it has not been standardized internationally. Our aim was to synthesize information about the type and variety of data collected within ADE reporting systems. We developed a systematic search strategy, applied it to four electronic databases, and completed an electronic grey literature search. Two authors reviewed titles and abstracts, and all eligible full-texts. We extracted data using a standardized form, and discussed disagreements until reaching consensus. We synthesized data by collapsing data elements, eliminating duplicate fields and identifying relationships between reporting concepts and data fields using visual analysis software. We identified 108 ADE reporting systems containing 1782 unique data fields. We mapped them to 33 reporting concepts describing patient information, the ADE, concomitant and suspect drugs, and the reporter. While reporting concepts were fairly consistent, we found variability in data fields and corresponding response options. Few systems clarified the terminology used, and many used multiple drug and disease dictionaries such as the Medical Dictionary for Regulatory Activities (MedDRA). We found substantial variability in the data fields used to report ADEs, limiting the comparability of ADE data collected using different reporting systems, and undermining efforts to aggregate data across cohorts. The development of a common standardized data set that can be evaluated with regard to data quality, comparability and reporting rates is likely to optimize ADE data and drug safety surveillance. © 2016 The British Pharmacological Society.
PANDA asymmetric-configuration passive decay heat removal test results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, O.; Dreier, J.; Aubert, C.
1997-12-01
PANDA is a large-scale, low-pressure test facility for investigating passive decay heat removal systems for the next generation of LWRs. In the first series of experiments, PANDA was used to examine the long-term LOCA response of the Passive Containment Cooling System (PCCS) for the General Electric (GE) Simplified Boiling Water Reactor (SBWR). The test objectives include concept demonstration and extension of the database available for qualification of containment codes. Also included is the study of the effects of nonuniform distributions of steam and noncondensable gases in the Dry-well (DW) and in the Suppression Chamber (SC). 3 refs., 9 figs.
Proposal for a CLIPS software library
NASA Technical Reports Server (NTRS)
Porter, Ken
1991-01-01
This paper is a proposal to create a software library for the C Language Integrated Production System (CLIPS) expert system shell developed by NASA. Many innovative ideas for extending CLIPS were presented at the First CLIPS Users Conference, including useful user and database interfaces. CLIPS developers would benefit from a software library of reusable code. The CLIPS Users Group should establish a software library; a course of action to make that happen is proposed. Open discussion to revise this library concept is essential, since only a group effort is likely to succeed. A response form intended to solicit opinions and support from the CLIPS community is included.
A Data Model Framework for the Characterization of a Satellite Data Handling Software
NASA Astrophysics Data System (ADS)
Camatto, Gianluigi; Tipaldi, Massimo; Bothmer, Wolfgang; Ferraguto, Massimo; Bruenjes, Bernhard
2014-08-01
This paper describes an approach for modelling the characterization and configuration data yielded when developing Satellite Data Handling Software (DHSW). The model can then be used as an input for the preparation of the logical and physical representation of the Satellite Reference Database (SRDB) contents and related SW suite, an essential product that not only allows transferring information between the different system stakeholders but also supports producing part of the DHSW documentation and artefacts. Special attention is given to the shaping of the general Parameter concept, which is shared by a number of different entities within a Space System.
Private database queries based on counterfactual quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Jia-Li; Guo, Fen-Zhuo; Gao, Fei; Liu, Bin; Wen, Qiao-Yan
2013-08-01
Based on the fundamental concept of quantum counterfactuality, we propose a protocol to achieve quantum private database queries, which is a theoretical study of how counterfactuality can be employed beyond counterfactual quantum key distribution (QKD). By adding crucial detecting apparatus to the device of QKD, the privacy of both the distrustful user and the database owner can be guaranteed. Furthermore, the proposed private-database-query protocol makes full use of the low efficiency in the counterfactual QKD, and by adjusting the relevant parameters, the protocol obtains excellent flexibility and extensibility.
Training system for digital mammographic diagnoses of breast cancer
NASA Astrophysics Data System (ADS)
Thomaz, R. L.; Nirschl Crozara, M. G.; Patrocinio, A. C.
2013-03-01
As technology evolves, analog mammography systems are being replaced by digital systems. Digital systems use video monitors to display mammographic images instead of the screen-film and negatoscope previously used for analog images. This change in the way mammographic images are visualized may require a different approach to training health care professionals in diagnosing breast cancer with digital mammography. Thus, this paper presents a computational approach to training health care professionals that provides a smooth transition between analog and digital technology while also teaching them to use the advantages of digital image processing tools to diagnose breast cancer. The approach consists of software in which it is possible to open, process, and diagnose a full mammogram case from a database containing the digital images of each of the mammographic views. The software communicates with a gold-standard digital mammogram case database. This database contains the digital images in Tagged Image File Format (TIFF) and the respective diagnoses according to BI-RADS™; these files are read by the software and shown to the user as needed. There are also digital image processing tools that can be used to provide better visualization of each single image. The software was built on a minimalist, user-friendly interface concept intended to help in the smooth transition. It also has an interface for inputting diagnoses from the professional being trained, providing result feedback. The system has been completed, but has not yet been applied to any professional training.
The ESIS query environment pilot project
NASA Technical Reports Server (NTRS)
Fuchs, Jens J.; Ciarlo, Alessandro; Benso, Stefano
1993-01-01
The European Space Information System (ESIS) was originally conceived to provide the European space science community with simple and efficient access to space data archives, facilities with which to examine and analyze the retrieved data, and general information services. To achieve this, ESIS will provide scientists with a discipline-specific environment for querying, in a uniform and transparent manner, data stored in geographically dispersed archives. Furthermore, it will provide discipline-specific tools for displaying and analyzing the retrieved data. The central concept of ESIS is to achieve more efficient and wider usage of space scientific data while maintaining the physical archives at the institutions which created them, since those institutions are best placed to ensure and maintain the scientific validity and interest of the data. In addition to coping with the physical distribution of data, ESIS must also manage the heterogeneity of the individual archives' data models, formats, and database management systems. Thus the ESIS system shall appear to the user as a single database, while in fact it consists of a collection of dispersed and locally managed databases and data archives. The work reported in this paper is one of the results of the ESIS Pilot Project, which is to be completed in 1993. More specifically, it presents the pilot ESIS Query Environment (ESIS QE) system, which forms the data retrieval and data dissemination axis of the ESIS system; the others are formed by the ESIS Correlation Environment (ESIS CE) and the ESIS Information Services. The ESIS QE Pilot Project is carried out for the European Space Agency's research and information centre, ESRIN, by a consortium consisting of Computer Resources International, Denmark; CISET S.p.A., Italy; the University of Strasbourg, France; and the Rutherford Appleton Laboratory in the U.K.
Furthermore, numerous scientists, both within ESA and in the wider European space science community, have been involved in defining the core concepts of the ESIS system.
Joy and happiness: a simultaneous and evolutionary concept analysis.
Cottrell, Laura
2016-07-01
To report a simultaneous and evolutionary analysis of the concepts of joy and long-term happiness. Joy and happiness are underrepresented in the nursing literature, though negative concepts are well represented. When mentioned in the literature, neither joy nor happiness is adequately defined, explained, or clearly understood. To promote further investigation of these concepts in nursing and to explore their relationship with health and healing, conceptual clarity is an essential first step. Concept analysis. The following databases were searched, without time restrictions, for articles in English: Academic Search Complete, Anthropology Plus; ATLA Religious Database with ATLASerials; Cumulative Index of Nursing and Allied Health Literature (CINAHL); Education Research Complete; Humanities International Complete; Psych EXTRA; and SocINDEX with Full Text. The final sample size consists of 61 articles and one book, published between 1978-2014. An adapted combination of Rodgers' Evolutionary Model and Haase et al.'s Simultaneous Concept Analysis (SCA) method. Though both are positive concepts, joy and happiness have significant differences. Attributes of joy describe a spontaneous, sudden and transient concept associated with connection, awareness, and freedom. Attributes of happiness describe a pursued, long-lasting, stable mental state associated with virtue and self-control. Further exploration of joy and happiness is necessary to ascertain their relationship with health and their value to nursing practice and theory development. Nurses are encouraged to consider the value of positive concepts to all areas of nursing. © 2016 John Wiley & Sons Ltd.
Research on image evidence in land supervision and GIS management
NASA Astrophysics Data System (ADS)
Li, Qiu; Wu, Lixin
2006-10-01
Land resource development and utilization brings many problems: the number, scale, and volume of illegal land-use cases are increasing. Because the territory is vast and land violations are easily concealed, effective land supervision and management is difficult. In this paper, the concepts of evidence and preservation of evidence are described first. The concepts of image evidence (IE), natural evidence (NE), natural preservation of evidence (NPE), and general preservation of evidence (GPE) are then proposed, based on the characteristics of remote sensing images (RSI): objectiveness, truthfulness, high spatial resolution, and rich information content. Using MapObjects and Visual Basic 6.0, with Access managing the link between the spatial vector database and the attribute data table, taking RSI as the data source and background layer, and combining the powerful spatial data management and visual analysis capabilities of a geographic information system (GIS), a land supervision and GIS management system was designed and implemented based on NPE. Practical use in Beijing shows that the system runs well and has solved some problems in land supervision and management.
Privacy Preserving Facial and Fingerprint Multi-biometric Authentication
NASA Astrophysics Data System (ADS)
Anzaku, Esla Timothy; Sohn, Hosik; Ro, Yong Man
Cases of identity theft can be mitigated by the adoption of secure authentication methods. Biohashing and its variants, which utilize secret keys and biometrics, are promising methods for secure authentication; however, their shortcoming is degraded performance under the assumption that secret keys are compromised. In this paper, we extend the concept of Biohashing to multi-biometrics, using facial and fingerprint traits. We chose these traits because they are widely used; however, little research attention has been given to designing privacy-preserving multi-biometric systems using them. Instead of using a single modality (facial or fingerprint), we present a framework for using both modalities. The improved performance of the proposed method using face and fingerprint, compared with either trait used in isolation, is evaluated using two chimerical bimodal databases formed from publicly available facial and fingerprint databases.
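The core BioHashing step — projecting a biometric feature vector onto key-seeded random directions and thresholding to produce a bit string — can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the function name and parameters are hypothetical.

```python
import numpy as np

def biohash(feature_vec, secret_key, n_bits=32):
    """BioHashing sketch: key-seeded random projection + thresholding.

    The same (feature_vec, secret_key) pair always yields the same code;
    a different key yields an unrelated code, which is what makes the
    template revocable. Assumes n_bits <= len(feature_vec).
    """
    rng = np.random.default_rng(secret_key)
    # Random projection matrix, orthonormalized via QR as is common
    # in BioHashing variants to decorrelate the projected components.
    proj = rng.standard_normal((len(feature_vec), n_bits))
    q, _ = np.linalg.qr(proj)
    # Threshold each projection at zero to obtain one bit per direction.
    return (np.asarray(feature_vec) @ q[:, :n_bits] > 0).astype(np.uint8)
```

A multi-biometric extension along the paper's lines would hash the face and fingerprint feature vectors separately (each with its own key) and fuse the resulting codes at matching time.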
Application of connectivity mapping in predictive toxicology based on gene-expression similarity.
Smalley, Joshua L; Gant, Timothy W; Zhang, Shu-Dong
2010-02-09
Connectivity mapping is the process of establishing connections between different biological states using gene-expression profiles or signatures. There are a number of applications, but in toxicology the most pertinent is understanding mechanisms of toxicity. In essence, the process involves comparing a query gene signature, generated as a result of exposure of a biological system to a chemical, to signatures in a database that have been previously derived. In the ideal situation, the query gene-expression signature is characteristic of the event and will be matched to similar events in the database. Key criteria are therefore the means of choosing the signature to be matched and the means by which the match is made. In this article we explore these concepts with examples applicable to toxicology. (c) 2009 Elsevier Ireland Ltd. All rights reserved.
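The signature-matching step at the heart of connectivity mapping can be sketched with a simple similarity ranking. This is a minimal stand-in: the original Connectivity Map work uses a Kolmogorov-Smirnov-style enrichment score, whereas this sketch uses centered cosine similarity (equivalent to Pearson correlation), and all names are hypothetical.

```python
import numpy as np

def connectivity_scores(query, database):
    """Rank reference expression signatures by similarity to a query.

    `query` and every value in `database` are 1-D arrays of expression
    values for the same genes in the same fixed order. Returns
    (name, score) pairs sorted from most to least similar.
    """
    q = query - query.mean()
    scores = {}
    for name, sig in database.items():
        s = sig - sig.mean()
        # Centered cosine similarity == Pearson correlation coefficient
        scores[name] = float(q @ s / (np.linalg.norm(q) * np.linalg.norm(s)))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

In a toxicology setting, a high positive score suggests the query exposure perturbs the system similarly to the matched reference compound; a strong negative score suggests an opposing effect.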
James Webb Space Telescope: Supporting Multiple Ground System Transitions in One Year
NASA Technical Reports Server (NTRS)
Detter, Ryan; Fatig, Curtis; Steck, Jane
2004-01-01
Ideas, requirements, and concepts developed during the very early phases of the mission design often conflict with the reality of a situation once the prime contractors are awarded. This happened for the James Webb Space Telescope (JWST) as well. The high level requirement of a common real-time ground system for both the Integration and Test (I&T), as well as the Operation phase of the mission is meant to reduce the cost and time needed later in the mission development for re-certification of databases, command and control systems, scripts, display pages, etc. In the case of JWST, the early Phase A flight software development needed a real-time ground system and database prior to the spacecraft prime contractor being selected. To compound the situation, the very low level requirements for the real-time ground system were not well defined. These two situations caused the initial real-time ground system to be switched out for a system that was previously used by the flight software development team. To meet the high-level requirement, a third ground system was selected based on the prime spacecraft contractor needs and JWST Project decisions. The JWST ground system team has responded to each of these changes successfully. The lessons learned from each transition have not only made each transition smoother, but have also resolved issues earlier in the mission development than what would normally occur.
Park, Jeong Eun; Kim, Hwa Sun; Chang, Min Jung; Hong, Hae Sook
2014-06-01
The influence of dietary composition on blood pressure is an important subject in healthcare. Interactions between antihypertensive drugs and diet (IBADD) is the most important factor in the management of hypertension. It is therefore essential to support healthcare providers' decision making role in active and continuous interaction control in hypertension management. The aim of this study was to implement an ontology-based clinical decision support system (CDSS) for IBADD management (IBADDM). We considered the concepts of antihypertensive drugs and foods, and focused on the interchangeability between the database and the CDSS when providing tailored information. An ontology-based CDSS for IBADDM was implemented in eight phases: (1) determining the domain and scope of ontology, (2) reviewing existing ontology, (3) extracting and defining the concepts, (4) assigning relationships between concepts, (5) creating a conceptual map with CmapTools, (6) selecting upper ontology, (7) formally representing the ontology with Protégé (ver.4.3), (8) implementing an ontology-based CDSS as a JAVA prototype application. We extracted 5,926 concepts, 15 properties, and formally represented them using Protégé. An ontology-based CDSS for IBADDM was implemented and the evaluation score was 4.60 out of 5. We endeavored to map functions of a CDSS and implement an ontology-based CDSS for IBADDM.
NASA Astrophysics Data System (ADS)
Monterde Rey, Ana Maria
In the area of terminology, one can find very little literature about the relationships and dependencies between linguistic and non-linguistic forms of concept representation. Furthermore, a large gap exists in the studies of non-linguistic forms. All of this constitutes the central problem that our thesis attempts to solve. Following an onomasiologic process of creating a terminological database, we have analysed and related, at three levels of specialisation (expert, student, and general public), the various linguistic forms (term, definition, and explanation) and a non-linguistic form (illustration) of concept representation in the area of aeronautical fuel-system installations. Specifically, for the aforementioned forms of conceptual representation, we have studied the adaptation of the level of knowledge of the material to those to whom the texts are addressed. Additionally, we have examined the formation, origin, etymology, foreign words, polysemy, synonymy, and typology of each term. We have also described in detail the following characteristics of each type of illustration isolated in our corpus: the relationship to the object or to the concept, the existence of text and terms (linguistic media) within the illustrations, the degree of abstraction, the a priori knowledge necessary to interpret the illustrations, and the existence of graphic symbols. Finally, we have related all linguistic and non-linguistic forms of conceptual representation.
Uncertainty in georeferencing current and historic plant locations
McEachern, K.; Niessen, K.
2009-01-01
With shrinking habitats, weed invasions, and climate change, repeated surveys are becoming increasingly important for rare plant conservation and ecological restoration. We often need to relocate historical sites or provide locations for newly restored sites. Georeferencing is the technique of giving geographic coordinates to the location of a site. Georeferencing has been done historically using verbal descriptions or field maps that accompany voucher collections. New digital technology gives us more exact techniques for mapping and storing location information. Error still exists, however, and even georeferenced locations can be uncertain, especially if error information is not included with the observation. We review the concept of uncertainty in georeferencing and compare several institutional database systems for cataloging error and uncertainty with georeferenced locations. These concepts are widely discussed among geographers, but ecologists and restorationists need to become more aware of issues related to uncertainty to improve our use of spatial information in field studies. © 2009 by the Board of Regents of the University of Wisconsin System.
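A common way to express georeferencing uncertainty is the point-radius approach, in which the error sources attached to a record are combined into a single radius around the coordinate. The sketch below combines sources in quadrature under an independence assumption; note that some georeferencing protocols simply sum the terms instead, and the function name and defaults here are illustrative, not taken from any particular institutional database.

```python
import math

def georeference_uncertainty_m(gps_error_m=0.0, datum_shift_m=0.0,
                               map_scale_denominator=0, locality_extent_m=0.0):
    """Combine independent georeferencing error sources into one radius (m).

    Map-reading error is approximated as ~1 mm of positional error at map
    scale (e.g. a 1:24,000 map contributes about 24 m). Sources assumed
    independent, hence the root-sum-of-squares combination.
    """
    map_error_m = 0.001 * map_scale_denominator
    parts = [gps_error_m, datum_shift_m, map_error_m, locality_extent_m]
    return math.sqrt(sum(p * p for p in parts))
```

Storing this radius (and which sources fed into it) alongside the coordinates is what lets a later survey judge whether a historical site can actually be relocated.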
Flight Simulator Evaluation of Display Media Devices for Synthetic Vision Concepts
NASA Technical Reports Server (NTRS)
Arthur, J. J., III; Williams, Steven P.; Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.
2004-01-01
The Synthetic Vision Systems (SVS) Project of the National Aeronautics and Space Administration's (NASA) Aviation Safety Program (AvSP) is striving to eliminate poor visibility as a causal factor in aircraft accidents as well as enhance operational capabilities of all aircraft. To accomplish these safety and capacity improvements, the SVS concept is designed to provide a clear view of the world around the aircraft through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. Display media devices with which to implement SVS technology that have been evaluated so far within the Project include fixed field of view head up displays and head down Primary Flight Displays with pilot-selectable field of view. A simulation experiment was conducted comparing these display devices to a fixed field of view, unlimited field of regard, full color Helmet-Mounted Display system. Subject pilots flew a visual circling maneuver in IMC at a terrain-challenged airport. The data collected for this experiment is compared to past SVS research studies.
Kurbasic, Izeta; Pandza, Haris; Masic, Izet; Huseinagic, Senad; Tandir, Salih; Alicajic, Fredi; Toromanovic, Selim
2008-01-01
Conflict of interest: none declared. Introduction: The International Classification of Diseases (ICD) is the most important classification in medicine and is used by all medical professionals. Concept: The basic concept of ICD is founded on the standardization of the nomenclature for the names of diseases and their systematization in a hierarchically structured category scheme. Advantages and disadvantages: Health care provider institutions such as hospitals should facilitate the implementation of medical applications that track the patient's medical condition and the facts connected with it. A definitive diagnosis that can be coded using ICD is usually reached only after several patient visits, and rarely during the first visit. Conclusion: The ICD classification is one of the oldest and most important classifications in medicine; its scope covers all fields of medicine. It is used for statistical purposes and as a coding system in medical databases. PMID:24109155
Expanding on Successful Concepts, Models, and Organization
If the goal of the AEP framework were to replace existing exposure models or databases for organizing exposure data with a concept, we would share Dr. von Göetz's concerns. Instead, the outcome we promote is broader use of an organizational framework for exposure science. The f...
Opportunity to Learn and Conceptions of Educational Equality.
ERIC Educational Resources Information Center
Guiton, Gretchen; Oakes, Jeannie
1995-01-01
Conceptual issues in developing and using opportunity-to-learn (OTL) standards to inform policy questions about equal educational opportunity are discussed. Using two national databases, OTL measures are developed according to Libertarian, Liberal, and Democratic Liberal conceptualizations, and the influence of these concepts on the information…
Specialized microbial databases for inductive exploration of microbial genome sequences
Fang, Gang; Ho, Christine; Qiu, Yaowu; Cubas, Virginie; Yu, Zhou; Cabau, Cédric; Cheung, Frankie; Moszer, Ivan; Danchin, Antoine
2005-01-01
Background The enormous amount of genome sequence data calls for user-oriented databases to manage sequences and annotations. Queries must include search tools permitting function identification through exploration of related objects. Methods The GenoList package for collecting and mining microbial genome databases has been rewritten using MySQL as the database management system. Functions that were not available in MySQL, such as nested subqueries, have been implemented. Results Inductive reasoning in the study of genomes starts from "islands of knowledge", centered around genes with some known background. With this concept of "neighborhood" in mind, a modified version of the GenoList structure has been used for organizing sequence data from prokaryotic genomes of particular interest in China. GenoChore, a set of 17 specialized end-user-oriented microbial databases (including one instance of Microsporidia, Encephalitozoon cuniculi, a member of Eukarya) has been made publicly available. These databases allow the user to browse genome sequence and annotation data using standard queries. In addition, they provide a weekly update of searches against the worldwide protein sequence data libraries, allowing one to monitor annotation updates on genes of interest. Finally, they allow users to search for patterns in DNA or protein sequences, taking into account a clustering of genes into formal operons, as well as providing extra facilities to query sequences using predefined sequence patterns. Conclusion This growing set of specialized microbial databases organizes data created by the first Chinese bacterial genome programs (ThermaList, Thermoanaerobacter tengcongensis; LeptoList, with two different genomes of Leptospira interrogans; and SepiList, Staphylococcus epidermidis), associated with related organisms for comparison. PMID:15698474
Peng, Jinye; Babaguchi, Noboru; Luo, Hangzai; Gao, Yuli; Fan, Jianping
2010-07-01
Digital video now plays an important role in supporting more profitable online patient training and counseling, and integration of patient training videos from multiple competitive organizations in the health care network will result in better offerings for patients. However, privacy concerns often prevent multiple competitive organizations from sharing and integrating their patient training videos. In addition, patients with infectious or chronic diseases may not want the online patient training organizations to identify who they are or even which video clips they are interested in. Thus, there is an urgent need to develop more effective techniques to protect both video content privacy and access privacy. In this paper, we have developed a new approach to construct a distributed Hippocratic video database system for supporting more profitable online patient training and counseling. First, a new database modeling approach is developed to support concept-oriented video database organization and to assign a degree of privacy to the video content at each database level automatically. Second, a new algorithm is developed to protect video content privacy at the level of individual video clips by filtering out privacy-sensitive human objects automatically. In order to integrate the patient training videos from multiple competitive organizations to construct a centralized video database indexing structure, a privacy-preserving video sharing scheme is developed to support privacy-preserving distributed classifier training and to prevent statistical inferences from the videos that are shared for cross-validation of video classifiers. Our experiments on large-scale video databases have also provided very convincing results.
NASA Astrophysics Data System (ADS)
Olszewski, R.; Pillich-Kolipińska, A.; Fiedukowicz, A.
2013-12-01
Implementation of the INSPIRE Directive in Poland requires not only legal transposition but also the development of a number of technological solutions. One such task, associated with the creation of the Spatial Information Infrastructure in Poland, is developing a complex model of a georeference database. Significant funding for the GBDOT project enables development of the national basic topographic database as a multiresolution database (MRDB). Effective implementation of this type of database requires developing procedures for the generalization of geographic information (generalization of the digital landscape model, DLM) which, treating the TOPO10 component as the only source for creation of the TOPO250 component, will keep conceptual and classification consistency between those database elements. Carrying out this task requires implementing the system concept prepared previously for the Head Office of Geodesy and Cartography. Such a system executes the generalization process using constraint-based modeling and preserves topological relationships between objects as well as between object classes. Full implementation of the designed generalization system requires comprehensive tests to calibrate it and to parameterize the generalization procedures in relation to the character of the generalized area. Parameterization of this process determines the criteria for selecting specific objects, the simplification algorithms, and the order of operations. Tests using generalization parameters differentiated by the character of the area are now the priority issue. Parameters are delivered to the system as XML files which, with the help of a dedicated tool, are generated from spreadsheet (XLS) files filled in by the user. Using XLS files makes entering and modifying the parameters easier.
Among the other elements defined by the external parameter files are the criteria of object selection, the metric parameters of generalization algorithms (e.g., simplification or aggregation), and the sequence of operations. Testing on trial areas of diverse character will allow rules to be developed for carrying out and parameterizing the generalization process, using the proposed tool, within the multiresolution reference database. The authors have attempted to develop a parameterization of the generalization process for a number of different trial areas. Generalizing the results will contribute to the development of a holistic system of generalized reference data stored in the national geodetic and cartographic resources.
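The simplification algorithms whose metric parameters are delivered in those files are often of the Douglas-Peucker family in cartographic generalization. The sketch below is a generic textbook implementation, not the system described in this abstract; its `tolerance` argument plays the role of a metric parameter that would vary with the character of the area.

```python
def douglas_peucker(points, tolerance):
    """Recursive Douglas-Peucker line simplification.

    Keeps the endpoints, finds the interior point farthest from the
    chord joining them, and recurses if that distance exceeds the
    tolerance; otherwise collapses the run to the chord.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # guard degenerate chord
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        # Perpendicular distance from interior point to the chord
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right  # drop duplicated split point
```

In an MRDB setting, a coarser component such as TOPO250 would be derived with a larger tolerance than the source TOPO10 geometry, alongside selection and aggregation rules.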
The LAILAPS search engine: a feature model for relevance ranking in life science databases.
Lange, Matthias; Spies, Karl; Colmsee, Christian; Flemming, Steffen; Klapperstück, Matthias; Scholz, Uwe
2010-03-25
Efficient and effective information retrieval in the life sciences is one of the most pressing challenges in bioinformatics. The enormous growth of life science databases into a vast network of interconnected information systems is both a major challenge and a great opportunity for life science research. The knowledge found on the Web, and in life-science databases in particular, is a major resource. To bring it to the scientist's desktop, well-performing search engines are essential. Here, neither the response time nor the number of results is decisive; for millions of query results, the most crucial factor is relevance ranking. In this paper, we present a feature model for relevance ranking in life science databases and its implementation in the LAILAPS search engine. Motivated by observing how users inspect search engine results, we condensed a set of nine relevance-discriminating features. These features are intuitively used by scientists who briefly screen database entries for potential relevance. The features are both sufficient to estimate potential relevance and efficiently quantifiable. Deriving a relevance prediction function that computes relevance from these features constitutes a regression problem. To solve it, we used artificial neural networks trained with a reference set of relevant database entries for 19 protein queries. Supporting a flexible text index and a simple data import format, these concepts are implemented in the LAILAPS search engine. It can easily be used both as a search engine for comprehensive integrated life science databases and for small in-house project databases. LAILAPS is publicly available for SWISSPROT data at http://lailaps.ipk-gatersleben.de.
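The relevance prediction described above, a regression from nine features to a relevance score learned by an artificial neural network, can be sketched as a tiny feed-forward network. The actual LAILAPS features and trained weights are not reproduced here; the weights below are random placeholders.

```python
import numpy as np

def relevance_score(features, W1, b1, w2, b2):
    """Feed-forward regression from a 9-dimensional feature vector to a
    relevance score in [0, 1]; tanh hidden layer, sigmoid output."""
    h = np.tanh(features @ W1 + b1)                      # hidden layer
    return float(1.0 / (1.0 + np.exp(-(h @ w2 + b2))))   # sigmoid output

# Random placeholder weights; in LAILAPS these would come from training on
# a reference set of relevant entries (19 protein queries in the paper).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(9, 4)), np.zeros(4)
w2, b2 = rng.normal(size=4), 0.0

entry_features = rng.random(9)   # 9 quantified features for one database entry
score = relevance_score(entry_features, W1, b1, w2, b2)
assert 0.0 <= score <= 1.0       # query results would be ranked by this score
```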
Chen, X; Zhou, H; Liu, Y B; Wang, J F; Li, H; Ung, C Y; Han, L Y; Cao, Z W; Chen, Y Z
2006-12-01
Traditional Chinese Medicine (TCM) is widely practised and is viewed as an attractive alternative to conventional medicine. Quantitative information about TCM prescriptions, constituent herbs, and herbal ingredients is necessary for studying and exploring TCM. We manually collected information on TCM from books and other printed sources indexed in Medline. The Traditional Chinese Medicine Information Database, TCM-ID, at http://tcm.cz3.nus.edu.sg/group/tcm-id/tcmid.asp, was introduced to provide comprehensive information about all aspects of TCM, including prescriptions, constituent herbs, herbal ingredients, molecular structures and functional properties of active ingredients, therapeutic and side effects, and clinical indications and applications. TCM-ID currently contains information for 1,588 prescriptions, 1,313 herbs, and 5,669 herbal ingredients, as well as the 3D structures of 3,725 herbal ingredients. The value of the data in TCM-ID was illustrated by using some of it for an in-silico study of the molecular mechanism of the therapeutic effects of herbal ingredients and for developing a computer program to validate TCM multi-herb preparations. The development of systems biology has led to a new design principle for therapeutic intervention strategy, the concept of 'magic shrapnel' (rather than the 'magic bullet'), involving many drugs against multiple targets, administered in a single treatment. TCM offers an extensive source of examples of this concept, in which several active ingredients in one prescription are aimed at numerous targets and work together to provide therapeutic benefit. The database and its mining applications described here represent early efforts toward exploring TCM for new theories in drug discovery.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-13
... collected on members of the general public, health professionals, faculty of academic institutions, students... peers on healthy living and pre-conception care. 5. Organizational Databases: Business contact... to work. 3. Organizational Databases: Name of organization and key contact person, business address...
DOT National Transportation Integrated Search
1993-01-01
FAA Air Traffic Control Operations Concepts Volume VI: ARTCC-Host En Route Controllers (1990) developed by CTA, Inc., a technical description of the duties of an En Route air traffic control specialist (ATCS), formatted in User Interface Language, wa...
Re-examination of service-sire conception rates in the United States
USDA-ARS?s Scientific Manuscript database
Until recently sire conception rates (SCRs) in the United States had been published only for bulls from artificial-insemination (AI) organizations that paid dairy records processing centers a fee for editing the data and forwarding it to the national dairy database of the Council on Dairy Cattle Bre...
Changing the Latitudes and Attitudes about Content Analysis Research
ERIC Educational Resources Information Center
Brank, Eve M.; Fox, Kathleen A.; Youstin, Tasha J.; Boeppler, Lee C.
2008-01-01
The current research employs the use of content analysis to teach research methods concepts among students enrolled in an upper division research methods course. Students coded and analyzed Jimmy Buffett song lyrics rather than using a downloadable database or collecting survey data. Students' knowledge of content analysis concepts increased after…
Mindfulness in nursing: an evolutionary concept analysis.
White, Lacie
2014-02-01
To report an analysis of the concept of mindfulness. Mindfulness is an emerging concept in health care that has significant implications for a variety of clinical populations. Nursing uses this concept in limited ways and subsequently requires conceptual clarity to further identify its significance, use, and applications in nursing. Mindfulness was explored using Rodgers' evolutionary method of concept analysis. For this analysis, a sample of 59 English-language theoretical and research-based articles from the Cumulative Index to Nursing and Allied Health Literature database was obtained. The search was conducted across all inclusive years of the database, 1981-2012. Data were analysed with particular focus on the attributes, antecedents, consequences, references and related terms that arose in relation to mindfulness in the nursing literature. The analysis found five intricately connected attributes: mindfulness is a transformative process in which one develops an increasing ability to 'experience being present', with 'acceptance', 'attention' and 'awareness'. Antecedents, attributes and consequences appeared to inform and strengthen one another over time. Mindfulness is a significant concept for the discipline of nursing, with practical applications for nurse well-being, the development and sustainability of therapeutic nursing qualities, and holistic health promotion. It is imperative that nurse well-being and self-care become a more prominent focus in nursing research and education. Further development of the concept of mindfulness could support this focus, particularly through rigorous qualitative methodologies. © 2013 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Lancaster, Jeff; Dillard, Michael; Alves, Erin; Olofinboba, Olu
2014-01-01
The User Guide details the Access Database provided with the Flight Deck Interval Management (FIM) Display Elements, Information, & Annunciations program. The goal of this User Guide is to support ease of use and the ability to quickly retrieve and select items of interest from the Database. The Database includes FIM Concepts identified in a literature review preceding the publication of this document. Only items that are directly related to FIM (e.g., spacing indicators), which change or enable FIM (e.g., menu with control buttons), or which are affected by FIM (e.g., altitude reading) are included in the database. The guide has been expanded from previous versions to cover database structure, content, and search features with voiced explanations.
Reid, David W; Doell, Faye K; Dalton, E Jane; Ahmad, Saunia
2008-12-01
The systemic-constructivist approach to studying and benefiting couples was derived from qualitative and quantitative research on distressed couples over the past 10 years. Systemic-constructivist couple therapy (SCCT) is the clinical intervention that accompanies the approach. SCCT guides the therapist to work with both the intrapersonal and the interpersonal aspects of marriage while also integrating the social-environmental context of the couple. The theory that underlies SCCT is explained, including concepts such as we-ness and interpersonal processing. The primary components of the therapy are described. Findings described previously in an inaugural monograph containing extensive research demonstrating the long-term utility of SCCT are reviewed. (PsycINFO Database Record (c) 2010 APA, all rights reserved).
Semi-automatic Data Integration using Karma
NASA Astrophysics Data System (ADS)
Garijo, D.; Kejriwal, M.; Pierce, S. A.; Houser, P. I. Q.; Peckham, S. D.; Stanko, Z.; Hardesty Lewis, D.; Gil, Y.; Pennington, D. D.; Knoblock, C.
2017-12-01
Data integration applications are ubiquitous in scientific disciplines. A state-of-the-art data integration system accepts both a set of data sources and a target ontology as input, and semi-automatically maps the data sources in terms of concepts and relationships in the target ontology. Mappings can be both complex and highly domain-specific. Once such a semantic model, expressing the mapping using community-wide standard, is acquired, the source data can be stored in a single repository or database using the semantics of the target ontology. However, acquiring the mapping is a labor-prone process, and state-of-the-art artificial intelligence systems are unable to fully automate the process using heuristics and algorithms alone. Instead, a more realistic goal is to develop adaptive tools that minimize user feedback (e.g., by offering good mapping recommendations), while at the same time making it intuitive and easy for the user to both correct errors and to define complex mappings. We present Karma, a data integration system that has been developed over multiple years in the information integration group at the Information Sciences Institute, a research institute at the University of Southern California's Viterbi School of Engineering. Karma is a state-of-the-art data integration tool that supports an interactive graphical user interface, and has been featured in multiple domains over the last five years, including geospatial, biological, humanities and bibliographic applications. Karma allows a user to import their own ontology and datasets using widely used formats such as RDF, XML, CSV and JSON, can be set up either locally or on a server, supports a native backend database for prototyping queries, and can even be seamlessly integrated into external computational pipelines, including those ingesting data via streaming data sources, Web APIs and SQL databases. 
We illustrate a Karma workflow at a conceptual level, along with a live demo, and show use cases of Karma specifically for the geosciences. In particular, we show how Karma can be used intuitively to obtain the mapping model between case study data sources and a publicly available and expressive target ontology that has been designed to capture a broad set of concepts in geoscience with standardized, easily searchable names.
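A minimal sketch of the mapping step that a tool like Karma semi-automates: a semantic model assigns each source column a property from the target ontology, and rows are emitted as triples. The ontology terms and station data below are invented for illustration.

```python
def apply_semantic_model(rows, model):
    """Emit RDF-style triples from tabular rows, given a semantic model that
    maps each source column to a target-ontology property."""
    triples = []
    for i, row in enumerate(rows):
        subject = f"ex:record/{i}"                    # minted subject URI
        for column, predicate in model.items():
            triples.append((subject, predicate, row[column]))
    return triples

# Hypothetical gauge-station data and ontology terms.
rows = [{"name": "Rio Grande gauge", "lat": 34.9, "lon": -106.7}]
model = {"name": "geo:stationName", "lat": "geo:latitude", "lon": "geo:longitude"}
for triple in apply_semantic_model(rows, model):
    print(triple)
```

The hard part Karma addresses is acquiring `model` itself, which is why it recommends mappings interactively rather than requiring the user to write them from scratch.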
Aspects of Synthetic Vision Display Systems and the Best Practices of the NASA's SVS Project
NASA Technical Reports Server (NTRS)
Bailey, Randall E.; Kramer, Lynda J.; Jones, Denise R.; Young, Steven D.; Arthur, Jarvis J.; Prinzel, Lawrence J.; Glaab, Louis J.; Harrah, Steven D.; Parrish, Russell V.
2008-01-01
NASA's Synthetic Vision Systems (SVS) Project conducted research aimed at eliminating visibility-induced errors and low-visibility conditions as causal factors in civil aircraft accidents while enabling the operational benefits of clear-day flight operations regardless of actual outside visibility. SVS takes advantage of many enabling technologies to achieve this capability including, for example, the Global Positioning System (GPS), data links, radar, imaging sensors, geospatial databases, advanced display media, and three-dimensional video graphics processors. Integration of these technologies to achieve the SVS concept provides pilots with high-integrity information that improves situational awareness with respect to terrain, obstacles, traffic, and flight path. This paper attempts to emphasize the system aspects of SVS, true systems rather than just terrain on a flight display, and to document from an historical viewpoint many of the best practices that evolved during the SVS Project from the perspective of some of the NASA researchers most heavily involved in its execution. The Integrated SVS Concepts are envisagements of what production-grade Synthetic Vision systems might, or perhaps should, be in order to provide the desired functional capabilities that eliminate low visibility as a causal factor in accidents and enable clear-day operational benefits regardless of visibility conditions.
Dziadkowiec, Oliwier; Callahan, Tiffany; Ozkaynak, Mustafa; Reeder, Blaine; Welton, John
2016-01-01
Objectives: We examine (1) the appropriateness of using a data quality (DQ) framework developed for relational databases as a data-cleaning tool for a data set extracted from two EPIC databases, and (2) the differences in statistical parameter estimates between a data set cleaned with the DQ framework and one not cleaned with it. Background: The use of data contained within electronic health records (EHRs) has the potential to open doors for a new wave of innovative research. Without adequate preparation of such large data sets for analysis, the results might be erroneous, which might affect clinical decision-making or the results of Comparative Effectiveness Research studies. Methods: Two emergency department (ED) data sets extracted from EPIC databases (adult ED and children's ED) were used as examples for examining the five concepts of DQ based on a DQ assessment framework designed for EHR databases. The first data set contained 70,061 visits; the second contained 2,815,550 visits. SPSS syntax examples, as well as step-by-step instructions on how to apply the five key DQ concepts to these EHR database extracts, are provided. Conclusions: SPSS syntax to address each of the DQ concepts proposed by Kahn et al. (2012) was developed. The data set cleaned using Kahn's framework yielded more accurate results than the data set cleaned without it. Future plans involve creating functions in the R language for cleaning data extracted from the EHR, as well as an R package that combines DQ checks with missing-data analysis functions. PMID:27429992
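A rough Python analogue of the screening the abstract implements in SPSS syntax, in the spirit of Kahn et al.'s DQ concepts (completeness, value conformance, plausibility); the field names and thresholds below are hypothetical, not those of the EPIC extracts.

```python
from datetime import datetime

def dq_checks(visits):
    """Flag data-quality issues in visit records: completeness (required
    fields present), conformance (expected types), and plausibility
    (values within believable ranges). Returns (row_index, issue) pairs."""
    issues = []
    for i, v in enumerate(visits):
        if v.get("patient_id") in (None, ""):              # completeness
            issues.append((i, "missing patient_id"))
        if not isinstance(v.get("age"), (int, float)):     # conformance
            issues.append((i, "non-numeric age"))
        elif not 0 <= v["age"] <= 120:                     # plausibility
            issues.append((i, "implausible age"))
        try:                                               # temporal plausibility
            datetime.fromisoformat(v.get("visit_date", ""))
        except ValueError:
            issues.append((i, "bad visit_date"))
    return issues

sample = [
    {"patient_id": "A1", "age": 34, "visit_date": "2015-06-01"},
    {"patient_id": "", "age": 207, "visit_date": "2015-13-40"},
]
print(dq_checks(sample))
```

Flagged rows would then be reviewed or excluded before parameter estimation, which is the step the study shows changes the statistical results.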
KaBOB: ontology-based semantic integration of biomedical databases.
Livingston, Kevin M; Bada, Michael; Baumgartner, William A; Hunter, Lawrence E
2015-04-23
The ability to query many independent biological databases using a common ontology-based semantic model would facilitate deeper integration and more effective utilization of these diverse and rapidly growing resources. Despite ongoing work moving toward shared data formats and linked identifiers, significant problems persist in semantic data integration, in establishing shared identity and shared meaning across heterogeneous biomedical data sources. We present five processes for semantic data integration that, when applied collectively, solve seven key problems. These processes include making explicit the differences between biomedical concepts and database records, aggregating sets of identifiers denoting the same biomedical concepts across data sources, and using declaratively represented forward-chaining rules to take information that is variably represented in source databases and integrate it into a consistent biomedical representation. We demonstrate these processes and solutions by presenting KaBOB (the Knowledge Base Of Biomedicine), a knowledge base of semantically integrated data from 18 prominent biomedical databases using common representations grounded in Open Biomedical Ontologies. An instance of KaBOB with data about humans and seven major model organisms can be built using on the order of 500 million RDF triples. All source code for building KaBOB is available under an open-source license. KaBOB is an integrated knowledge base of biomedical data representationally based in prominent, actively maintained Open Biomedical Ontologies, thus enabling queries of the underlying data in terms of biomedical concepts (e.g., genes and gene products, interactions and processes) rather than features of source-specific data schemas or file formats. 
KaBOB resolves many of the issues that routinely plague biomedical researchers intending to work with data from multiple data sources and provides a platform for ongoing data integration and development and for formal reasoning over a wealth of integrated biomedical data.
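One of the KaBOB processes named above, aggregating sets of identifiers that denote the same biomedical concept across data sources, amounts to computing connected components over asserted equivalences. A union-find sketch (the identifiers are illustrative, not taken from KaBOB's sources):

```python
def aggregate_identifiers(equiv_pairs):
    """Group identifiers into equivalence classes via union-find, given
    pairwise assertions that two identifiers denote the same concept."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in equiv_pairs:
        parent[find(a)] = find(b)           # union the two classes

    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())

# Three identifiers for the same gene across hypothetical source databases.
pairs = [("UniProt:P04637", "HGNC:11998"), ("HGNC:11998", "NCBIGene:7157")]
print(aggregate_identifiers(pairs))
```

Each resulting group can then be represented by a single concept node, so queries run against concepts rather than source-specific records.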
Advanced instrumentation for next-generation aerospace propulsion control systems
NASA Technical Reports Server (NTRS)
Barkhoudarian, S.; Cross, G. S.; Lorenzo, Carl F.
1993-01-01
New control concepts for the next generation of advanced air-breathing and rocket engines and hypersonic combined-cycle propulsion systems are analyzed. The analysis provides a database on the instrumentation technologies for advanced control systems and cross-matches the available technologies for each type of engine to the control needs and applications of the other two types of engines. Measurement technologies considered ready for implementation include optical surface temperature sensors, an isotope wear detector, a brushless torquemeter, a fiberoptic deflectometer, an optical absorption leak detector, the nonintrusive speed sensor, and an ultrasonic triducer. It is concluded that all 30 advanced instrumentation technologies considered can be recommended for further development to meet the needs of the next generation of jet-, rocket-, and hypersonic-engine control systems.
Automation of Shuttle Tile Inspection - Engineering methodology for Space Station
NASA Technical Reports Server (NTRS)
Wiskerchen, M. J.; Mollakarimi, C.
1987-01-01
The Space Systems Integration and Operations Research Applications (SIORA) Program was initiated in late 1986 as a cooperative applications research effort between Stanford University, NASA Kennedy Space Center, and Lockheed Space Operations Company. One of the major initial SIORA tasks was the application of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. This effort has adopted a systems engineering approach consisting of an integrated set of rapid prototyping testbeds in which a government/university/industry team of users, technologists, and engineers test and evaluate new concepts and technologies within the operational world of Shuttle. These integrated testbeds include speech recognition and synthesis, laser imaging inspection systems, distributed Ada programming environments, distributed relational database architectures, distributed computer network architectures, multimedia workbenches, and human factors considerations.
Concept analysis of nurses' happiness.
Ozkara San, Eda
2015-01-01
The purpose of this analysis is to examine and clarify the concept of nurses' happiness (NH), understand the different uses of the concept, explore the conditions that foster it, and consider the consequences of NH, including the phenomena that emerge as a result of its occurrence. The author utilizes Walker and Avant's eight-stage concept analysis. Computer and manual searches were conducted of English-language articles addressing NH from 1990 to the present. EBSCO and PubMed are the electronic databases used to access literature for this paper. For both databases, the researcher examined this new term by splitting nurses' happiness into its two root words, nurses and happiness. An inductive analysis of articles produced descriptive themes. Definitions of happiness and NH are analyzed. Antecedents, attributes, and consequences of NH are described. Model, borderline, contrary, and related cases for NH are also identified. This concept analysis aids understanding of the definition of NH, the attributes that contribute to the occurrence of NH in clinical practice, the consequences of NH, and how it should be measured from a nursing perspective. © 2014 Wiley Periodicals, Inc.
De-implementation: A concept analysis.
Upvall, Michele J; Bourgault, Annette M
2018-04-25
The purpose of this concept analysis is to explore the meaning of de-implementation and provide a definition that can be used by researchers and clinicians to facilitate evidence-based practice. De-implementation is a relatively unknown process overshadowed by the novelty of introducing new ideas and techniques into practice. Few studies have addressed the challenge of de-implementation and the cognitive processes involved when terminating harmful or unnecessary practices. Also, confusion exists regarding the myriad of terms used to describe de-implementation processes. Walker and Avant's method (2011) for describing concepts was used to clarify de-implementation. A database search limited to academic journals yielded 281 publications representing basic research, study protocols, and editorials/commentaries from implementation science experts. After applying exclusion criterion of English language only and eliminating overlap between databases, 41 articles were selected for review. Literature review and synthesis provided a concept analysis and a distinct definition of de-implementation. De-implementation was defined as the process of identifying and removing harmful, non-cost-effective, or ineffective practices based on tradition and without adequate scientific support. The analysis provided further refinement of de-implementation as a significant concept for ongoing theory development in implementation science and clinical practice. © 2018 Wiley Periodicals, Inc.
Failure Modes and Effects Analysis (FMEA): A Bibliography
NASA Technical Reports Server (NTRS)
2000-01-01
Failure modes and effects analysis (FMEA) is a bottom-up analytical process that identifies process hazards, which helps managers understand vulnerabilities of systems, as well as assess and mitigate risk. It is one of several engineering tools and techniques available to program and project managers aimed at increasing the likelihood of safe and successful NASA programs and missions. This bibliography references 465 documents in the NASA STI Database that contain the major concepts, failure modes or failure analysis, in either the basic index of the major subject terms.
A Relational/Object-Oriented Database Management System: R/OODBMS
1992-09-01
Concepts: In 1968, Dr. Edgar F. Codd had the idea that "predicate logic could be applied to maintaining the logical integrity of the data" in a DBMS [CD90, p. ...] ...Hall, Inc., Englewood Cliffs, NJ, 1990. [Co70] Codd, E. F., "A Relational Model of Data for Large Shared Data Banks," Communications of the ACM, v. 13, no. 6, pp. 377-387, Jun 1970. [CD90] Interview between E. F. Codd and DBMS, "Relational philosopher: the creator of the relational model talks about his...
Dimai, Hans P
2017-11-01
Dual-energy X-ray absorptiometry (DXA) is a two-dimensional imaging technology developed to assess bone mineral density (BMD) of the entire human skeleton, and specifically of skeletal sites known to be most vulnerable to fracture. To simplify the interpretation of BMD measurement results and allow comparability among different DXA devices, the T-score concept was introduced. An individual's BMD is compared with the mean value of a young healthy reference population, with the difference expressed in standard deviations (SD). Since the early 1990s, the diagnostic categories "normal, osteopenia, and osteoporosis", as recommended by a WHO Working Group, have been based on this concept. DXA thus remains the globally accepted gold-standard method for the noninvasive diagnosis of osteoporosis. Another score obtained from DXA measurement, the Z-score, describes the number of SDs by which an individual's BMD differs from the mean value expected for their age and sex. Although not intended for diagnosis of osteoporosis in adults, it nevertheless provides information about an individual's fracture risk compared to peers. DXA measurement can either be used as a stand-alone means of assessing an individual's fracture risk, or be incorporated into one of the available fracture risk assessment tools such as FRAX® or Garvan, improving the predictive power of such tools. The question of which reference databases DXA-device manufacturers should use for T-score reference standards has recently been addressed by an expert group, who recommended using the National Health and Nutrition Examination Survey III (NHANES III) database for the hip reference standard but manufacturers' own databases for the lumbar spine. Furthermore, in men it is recommended to use female reference databases for calculation of the T-score and male reference databases for calculation of the Z-score. Copyright © 2017 Elsevier Inc. All rights reserved.
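The T-score and Z-score described above are simple standardizations against two different reference means; a sketch with illustrative numbers (not NHANES III values):

```python
def t_score(bmd, young_mean, young_sd):
    """SDs by which a measured BMD differs from the young-healthy reference
    mean; WHO categories: normal T >= -1, osteopenia -2.5 < T < -1,
    osteoporosis T <= -2.5."""
    return (bmd - young_mean) / young_sd

def z_score(bmd, age_sex_mean, age_sex_sd):
    """Same standardization, but against an age- and sex-matched mean."""
    return (bmd - age_sex_mean) / age_sex_sd

# Illustrative measurement and reference values only.
t = t_score(bmd=0.70, young_mean=0.94, young_sd=0.12)
print(round(t, 1))   # -2.0 -> osteopenia range under the WHO categories
```

The two functions differ only in which reference population supplies the mean and SD, which is exactly why the choice of reference database matters for diagnosis.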
Integrated Aerodynamic/Structural/Dynamic Analyses of Aircraft with Large Shape Changes
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Chwalowski, Pawel; Horta, Lucas G.; Piatak, David J.; McGowan, Anna-Maria R.
2007-01-01
The conceptual and preliminary design processes for aircraft with large shape changes are generally difficult and time-consuming, and the processes are often customized for a specific shape change concept to streamline the vehicle design effort. Accordingly, several existing reports show excellent results of assessing a particular shape change concept or perturbations of a concept. The goal of the current effort was to develop a multidisciplinary analysis tool and process that would enable an aircraft designer to assess several very different morphing concepts early in the design phase and yet obtain second-order performance results so that design decisions can be made with better confidence. The approach uses an efficient parametric model formulation that allows automatic model generation for systems undergoing radical shape changes as a function of aerodynamic parameters, geometry parameters, and shape change parameters. In contrast to other more self-contained approaches, the approach utilizes off-the-shelf analysis modules to reduce development time and to make it accessible to many users. Because the analysis is loosely coupled, discipline modules like a multibody code can be easily swapped for other modules with similar capabilities. One of the advantages of this loosely coupled system is the ability to use the medium- to high-fidelity tools early in the design stages, when the information can significantly influence and improve overall vehicle design. Data transfer among the analysis modules is based on an accurate and automated general-purpose data transfer tool. In general, setup time for the integrated system presented in this paper is 2-4 days for simple shape change concepts and 1-2 weeks for more mechanically complicated concepts. 
Some of the key elements briefly described in the paper include parametric model development, aerodynamic database generation, multibody analysis, and the required software modules as well as examples for a telescoping wing, a folding wing, and a bat-like wing.
The World Karst Aquifer Mapping project: concept, mapping procedure and map of Europe
NASA Astrophysics Data System (ADS)
Chen, Zhao; Auler, Augusto S.; Bakalowicz, Michel; Drew, David; Griger, Franziska; Hartmann, Jens; Jiang, Guanghui; Moosdorf, Nils; Richts, Andrea; Stevanovic, Zoran; Veni, George; Goldscheider, Nico
2017-05-01
Karst aquifers contribute substantially to freshwater supplies in many regions of the world, but are vulnerable to contamination and difficult to manage because of their unique hydrogeological characteristics. Many karst systems are hydraulically connected over wide areas and require transboundary exploration, protection and management. In order to obtain a better global overview of karst aquifers, to create a basis for sustainable international water-resources management, and to increase the awareness in the public and among decision makers, the World Karst Aquifer Mapping (WOKAM) project was established. The goal is to create a world map and database of karst aquifers, as a further development of earlier maps. This paper presents the basic concepts and the detailed mapping procedure, using France as an example to illustrate the step-by-step workflow, which includes generalization, differentiation of continuous and discontinuous carbonate and evaporite rock areas, and the identification of non-exposed karst aquifers. The map also shows selected caves and karst springs, which are collected in an associated global database. The draft karst aquifer map of Europe shows that 21.6% of the European land surface is characterized by the presence of (continuous or discontinuous) carbonate rocks; about 13.8% of the land surface is carbonate rock outcrop.
NASA Astrophysics Data System (ADS)
Abdellatif, Dehni; Mourad, Lounis
2017-07-01
Soil salinity is a complex problem that affects groundwater aquifers and agricultural lands in semiarid regions. Remote sensing and spectroscopy database systems provide accurate automatic detection and dynamic delineation of salinity. Salinity detection techniques that rely on polychromatic wavebands, field geocomputation, and experimental data are time consuming and expensive. This paper presents automated spectral detection and identification of salt minerals using a monochromatic waveband concept applied to multispectral bands of Landsat 8 Operational Land Imager (OLI) and Thermal InfraRed Sensor (TIRS) data and the United States Geological Survey spectroscopy database. For detecting mineral salts related to electrolytes, through electronic and vibrational transitions, an integrated salinity-detection approach based on the optical monochromatic concept has been adopted. The purpose of this paper is to discriminate waveband intrinsic spectral similarity using the Beer-Lambert and Van 't Hoff laws for spectral curve extraction of transmittance, reflectance, absorbance, land surface temperature, molar concentration, and osmotic pressure. These parameters are essential for hydrodynamic salinity modeling and continuity identification using chemical and physical approaches. The fitted regression models were assessed against salt spectroscopy data for suitable calibration and validation. Furthermore, the analytical tool supports better decision making through spectral salinity detection and identification in the Oran watershed, Algeria.
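The spectral quantities named above follow directly from the cited laws: Beer-Lambert relates absorbance to molar concentration (A = εlc, with transmittance T = 10^(-A)), and Van 't Hoff gives osmotic pressure (π = iMRT). A sketch with illustrative values, not measurements from the Oran study area:

```python
def absorbance(epsilon, path_cm, molar_conc):
    """Beer-Lambert law: A = epsilon * l * c
    (epsilon in L/(mol*cm), path length in cm, concentration in mol/L)."""
    return epsilon * path_cm * molar_conc

def transmittance(a):
    """Fraction of light transmitted: T = 10**(-A)."""
    return 10.0 ** (-a)

def osmotic_pressure_atm(i, molarity, temp_k, r=0.082057):
    """Van 't Hoff relation: pi = i * M * R * T, in atm
    (R in L*atm/(mol*K); i is the van 't Hoff dissociation factor)."""
    return i * molarity * r * temp_k

a = absorbance(epsilon=120.0, path_cm=1.0, molar_conc=0.01)   # A = 1.2
print(round(transmittance(a), 4))                             # 0.0631
# A dissociating 1:1 salt (i = 2) at 0.1 M and 25 degrees C:
print(round(osmotic_pressure_atm(i=2, molarity=0.1, temp_k=298.15), 2))  # 4.89
```

Inverting the Beer-Lambert relation is what lets a measured spectral curve be converted into a molar concentration, from which the Van 't Hoff law yields osmotic pressure.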
What does it mean to be an oncology nurse? Reexamining the life cycle concepts.
Cohen, Marlene Z; Ferrell, Betty R; Vrabel, Mark; Visovsky, Constance; Schaefer, Brandi
2010-09-01
To summarize the current research pertaining to the concepts initially examined by the Oncology Nursing Society Life Cycle of the Oncology Nurse Task Force and related projects completed in 1994. Published articles on the 21 concepts from the Oncology Nursing Society Life Cycle of the Oncology Nurse Task Force work. Research published in English from 1995-2009 was obtained from PubMed, CINAHL(R), PsycINFO, ISI Science, and EBSCO Health Source(R): Nursing/Academic Edition databases. Most of the concepts identified from the Oncology Nursing Society Life Cycle of the Oncology Nurse Task Force have been examined in the literature. Relationships and witnessing suffering were common concepts among studies of the meaning of oncology nursing. Nurses provide holistic care, and not surprisingly, holistic interventions have been found useful to support nurses. Interventions included storytelling, clinical support of nurses, workshops to find balance in lives, and dream work. Additional support comes from mentoring. The research identified was primarily descriptive, with very few interventions reported. Findings have been consistent over time in diverse countries. This review indicates that although the healthcare system has changed significantly in 15 years, nurses' experiences of providing care to patients with cancer have remained consistent. The need for interventions to support nurses remains.
StarView: The object oriented design of the ST DADS user interface
NASA Technical Reports Server (NTRS)
Williams, J. D.; Pollizzi, J. A.
1992-01-01
StarView is the user interface being developed for the Hubble Space Telescope Data Archive and Distribution Service (ST DADS). ST DADS is the data archive for HST observations and a relational database catalog describing the archived data. Users will use StarView to query the catalog and select appropriate datasets for study. StarView sends requests for archived datasets to ST DADS, which processes the requests and returns the data to the user. StarView is designed to be a powerful and extensible user interface. Unique features include an internal relational database to navigate query results, a form definition language that will work with both CRT and X interfaces, a data definition language that will allow StarView to work with any relational database, and the ability to generate ad hoc queries without requiring the user to understand the structure of the ST DADS catalog. Ultimately, StarView will allow the user to refine queries in the local database for improved performance and merge in data from external sources for correlation with other query results. The user will be able to create a query from single or multiple forms, merging the selected attributes into a single query. Arbitrary selection of attributes for querying is supported. The user will be able to select how query results are viewed. A standard form or table-row format may be used. Navigation capabilities are provided to aid the user in viewing query results. Object-oriented analysis and design techniques were used in the design of StarView to support the mechanisms and concepts required to implement these features. One such mechanism is the Model-View-Controller (MVC) paradigm. The MVC allows the user to have multiple views of the underlying database, while providing a consistent mechanism for interaction regardless of the view. This approach supports both CRT and X interfaces while providing a common mode of user interaction. Another powerful abstraction is the concept of a Query Model.
This concept allows a single query to be built from single or multiple forms before it is submitted to ST DADS. Supporting this concept is the ad hoc query generator, which allows the user to select and qualify an arbitrary number of attributes from the database. The user does not need any knowledge of how the joins across the various tables are to be resolved; the ad hoc generator calculates the joins automatically and generates the correct SQL query.
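The automatic join calculation described above can be sketched as a breadth-first search over a foreign-key graph; the table names and join conditions below are hypothetical, not the actual ST DADS catalog schema:

```python
from collections import deque

# Hypothetical foreign-key graph for an archive catalog: each edge maps a
# pair of tables to the join condition linking them.
FK_EDGES = {
    ("observation", "proposal"): "observation.proposal_id = proposal.id",
    ("observation", "dataset"):  "dataset.observation_id = observation.id",
}

def _neighbors(table):
    for (a, b), cond in FK_EDGES.items():
        if a == table:
            yield b, cond
        elif b == table:
            yield a, cond

def join_path(start, goal):
    """BFS over the FK graph; returns the join conditions linking two tables."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        table, conds = queue.popleft()
        if table == goal:
            return conds
        for nxt, cond in _neighbors(table):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, conds + [cond]))
    raise ValueError(f"no join path from {start} to {goal}")

def build_query(attrs, tables):
    conds = join_path(tables[0], tables[-1])
    return (f"SELECT {', '.join(attrs)} FROM {', '.join(tables)} "
            f"WHERE {' AND '.join(conds)}")

sql = build_query(["proposal.pi_name", "dataset.file_name"],
                  ["proposal", "observation", "dataset"])
```

The point of the sketch is the user-facing property the abstract claims: the caller names only attributes and tables, never join conditions.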
Examining the margins: a concept analysis of marginalization.
Vasas, Elyssa B
2005-01-01
The aim of this analysis is to explore the concept of social marginalization for the purpose of concept development. Specifically, the article intends to clarify the relationship between health disparities and marginalization and generate knowledge about working with people who are socially marginalized. Concept development evolved from the critical analysis of relevant literature generated through searches of nursing and social science databases. Literature was organized thematically and themes related to marginalization as a social process were included and analyzed. The article explores the challenges of using marginalization as an independent concept and suggests areas for future inquiry and research.
Inroads to predict in vivo toxicology-an introduction to the eTOX Project.
Briggs, Katharine; Cases, Montserrat; Heard, David J; Pastor, Manuel; Pognan, François; Sanz, Ferran; Schwab, Christof H; Steger-Hartmann, Thomas; Sutter, Andreas; Watson, David K; Wichard, Jörg D
2012-01-01
There is a widespread awareness that the wealth of preclinical toxicity data that the pharmaceutical industry has generated in recent decades is not exploited as efficiently as it could be. Enhanced data availability for compound comparison ("read-across"), or for data mining to build predictive tools, should lead to a more efficient drug development process and contribute to the reduction of animal use (3Rs principle). In order to achieve these goals, a consortium approach, grouping a number of relevant partners, is required. The eTOX ("electronic toxicity") consortium represents such a project and is a public-private partnership within the framework of the European Innovative Medicines Initiative (IMI). The project aims at the development of in silico prediction systems for organ and in vivo toxicity. The backbone of the project will be a database consisting of preclinical toxicity data for drug compounds or candidates extracted from previously unpublished legacy reports from thirteen European and Europe-based pharmaceutical companies. The database will be enhanced by incorporation of publicly available, high-quality toxicology data. Seven academic institutes and five small-to-medium size enterprises (SMEs) contribute their expertise in data gathering, database curation, data mining, chemoinformatics and predictive systems development. The outcome of the project will be a predictive system contributing to early potential hazard identification and risk assessment during the drug development process. The concept and strategy of the eTOX project are described here, together with current achievements and future deliverables.
NASA Astrophysics Data System (ADS)
Seo, Yongwon; Hwang, Junsik; Choi, Hyun Il
2017-04-01
The concept of directly connected impervious area (DCIA), also called effective impervious area (EIA), refers to the subset of impervious cover that is directly connected to a drainage system or a water body via continuous impervious surfaces. The concept of DCIA is important in that it is regarded as a better predictor of stream ecosystem health than total impervious area (TIA). DCIA is also a key concept for better assessment of green infrastructure introduced in urban catchments. Green infrastructure can help restore the water cycle: it improves water quality, manages stormwater, and provides a recreational environment, often at lower cost than conventional alternatives. In this study, we evaluated several methods to obtain the DCIA based on a GIS database and showed the importance of accurate measurement of DCIA in terms of the resulting hydrographs. We also evaluated several potential green infrastructure scenarios and showed how the spatial planning of green infrastructure affects the shape of hydrographs and the reduction of peak flows. These results imply that well-planned green infrastructure can be introduced to urban catchments for flood risk management, and that quantitative assessment of the spatial distribution of DCIA is crucial for sustainable development in urban environments.
Ontology-Based Data Integration between Clinical and Research Systems
Mate, Sebastian; Köpcke, Felix; Toddenroth, Dennis; Martin, Marcus; Prokosch, Hans-Ulrich
2015-01-01
Data from the electronic medical record comprise numerous structured but uncoded elements, which are not linked to standard terminologies. Reuse of such data for secondary research purposes has gained in importance recently. However, the identification of relevant data elements and the creation of database jobs for extraction, transformation and loading (ETL) are challenging: With current methods such as data warehousing, it is not feasible to efficiently maintain and reuse semantically complex data extraction and transformation routines. We present an ontology-supported approach to overcome this challenge by making use of abstraction: Instead of defining ETL procedures at the database level, we use ontologies to organize and describe the medical concepts of both the source system and the target system. Instead of using unique, specifically developed SQL statements or ETL jobs, we define declarative transformation rules within ontologies and illustrate how these constructs can then be used to automatically generate SQL code to perform the desired ETL procedures. This demonstrates how a suitable level of abstraction may not only aid the interpretation of clinical data, but can also foster the reutilization of methods for unlocking it. PMID:25588043
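The rule-to-SQL generation described above can be made concrete with a minimal sketch; the rule format, concept names, and table and column names here are invented for illustration and are not the paper's actual ontology constructs:

```python
# Declarative mapping rules, one per medical concept: where the data lives
# in the source system, where it goes in the target, and how columns map.
RULES = {
    "DiagnosisConcept": {
        "source": "emr.diagnosis_raw",
        "target": "warehouse.observation_fact",
        "columns": {"patient_num": "patient_id",   # target col -> source col
                    "concept_cd":  "icd_code"},
        "filter": "icd_code IS NOT NULL",
    },
}

def generate_etl_sql(concept):
    """Turn a declarative rule into an INSERT ... SELECT ETL statement."""
    rule = RULES[concept]
    targets = ", ".join(rule["columns"])
    sources = ", ".join(rule["columns"].values())
    return (f"INSERT INTO {rule['target']} ({targets}) "
            f"SELECT {sources} FROM {rule['source']} "
            f"WHERE {rule['filter']}")

sql = generate_etl_sql("DiagnosisConcept")
```

The design point is the one the abstract makes: the rule, not hand-written SQL, is the maintained artifact, and the SQL is regenerated whenever the rule changes.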
Alignment of high-throughput sequencing data inside in-memory databases.
Firnkorn, Daniel; Knaup-Gregori, Petra; Lorenzo Bermejo, Justo; Ganzinger, Matthias
2014-01-01
In times of high-throughput DNA sequencing techniques, performance-capable analysis of DNA sequences is of high importance. Computer-supported DNA analysis is still a time-intensive task. In this paper we explore the potential of a new in-memory database technology, SAP's High Performance Analytic Appliance (HANA). We focus on read alignment as one of the first steps in DNA sequence analysis. In particular, we examined the widely used Burrows-Wheeler Aligner (BWA) and implemented stored procedures in both HANA and the free database system MySQL to compare execution time and memory management. To ensure that the results are comparable, MySQL was run in memory as well, utilizing its integrated memory engine for database table creation. We implemented stored procedures for exact and inexact searching of DNA reads within the reference genome GRCh37. Due to technical restrictions in SAP HANA concerning recursion, the inexact matching problem could not be implemented on this platform. Hence, performance analysis between HANA and MySQL was made by comparing the execution time of the exact search procedures. Here, HANA was approximately 27 times faster than MySQL, which means that there is high potential within the new in-memory concepts, leading to further developments of DNA analysis procedures in the future.
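The exact-search step being benchmarked can be illustrated with a toy substring index; note that BWA itself uses a Burrows-Wheeler/FM-index, which is not reproduced here, and the sequences are invented:

```python
# Illustrative exact-match read lookup against a reference sequence.
def build_index(reference, k):
    """Map every k-mer of the reference to its start positions."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def exact_match(read, index):
    """Return all positions where the read occurs exactly, [] if none."""
    return index.get(read, [])

reference = "ACGTACGTGACG"
idx = build_index(reference, 4)   # index reads of length 4
hits = exact_match("ACGT", idx)
```

Exact matching like this is iteration-free and maps naturally onto a stored procedure; it is the inexact (mismatch-tolerant) case that needs the recursion HANA restricted.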
ERIC Educational Resources Information Center
Lowe, M. Sara; Maxson, Bronwen K.; Stone, Sean M.; Miller, Willie; Snajdr, Eric; Hanna, Kathleen
2018-01-01
Boolean logic can be a difficult concept for first-year, introductory students to grasp. This paper compares the results of Boolean and natural language searching across several databases with searches created from student research questions. Performance differences between databases varied. Overall, natural search language is at least as good as…
The present report describes a strategy to refine the current Cramer classification of the TTC concept using a broad database (DB) termed TTC RepDose. Cramer classes 1-3 overlap to some extent, indicating a need for a better separation of structural classes likely to be toxic, mo...
ERIC Educational Resources Information Center
Scott, Rachel Elizabeth
2016-01-01
Librarians are frequently asked to teach several databases in a one-shot session, despite findings suggesting that such database demonstrations do not lead to optimal student outcomes. The "ACRL Framework for Information Literacy for Higher Education" highlights the concepts of metaliteracy and metacognition. This paper investigates ways…
Navigation system for autonomous mapper robots
NASA Astrophysics Data System (ADS)
Halbach, Marc; Baudoin, Yvan
1993-05-01
This paper describes the conception and realization of a fast, robust, and general navigation system for a mobile (wheeled or legged) robot. A database representing a high-level map of the environment is generated and continuously updated. The first part describes the legged target vehicle and the hexapod robot being developed. The second section deals with spatial and temporal sensor fusion for dynamic environment modeling within an obstacle/free-space probabilistic classification grid. Ultrasonic sensors are used, others are expected to be integrated, and a priori knowledge is taken into account. The ultrasonic sensors are controlled by the path planning module. The third part concerns path planning; a simulation of a wheeled robot is also presented.
Decision fatigue: A conceptual analysis.
Pignatiello, Grant A; Martin, Richard J; Hickman, Ronald L
2018-03-01
Decision fatigue is an applicable concept to healthcare psychology. Due to a lack of conceptual clarity, we present a concept analysis of decision fatigue. A search of the term "decision fatigue" was conducted across seven research databases, which yielded 17 relevant articles. The authors identified three antecedent themes (decisional, self-regulatory, and situational) and three attributional themes (behavioral, cognitive, and physiological) of decision fatigue. However, the extant literature failed to adequately describe consequences of decision fatigue. This concept analysis provides needed conceptual clarity for decision fatigue, a concept possessing relevance to nursing and allied health sciences.
Mysql Data-Base Applications for Dst-Like Physics Analysis
NASA Astrophysics Data System (ADS)
Tsenov, Roumen
2004-07-01
The data and analysis model developed and being used in the HARP experiment for studying hadron production at CERN Proton Synchrotron is discussed. Emphasis is put on usage of data-base (DB) back-ends for persistent storing and retrieving "alive" C++ objects encapsulating raw and reconstructed data. Concepts of "Data Summary Tape" (DST) as a logical collection of DB-persistent data of different types, and of "intermediate DST" (iDST) as a physical "tag" of DST, are introduced. iDST level of persistency allows a powerful, DST-level of analysis to be performed by applications running on an isolated machine (even laptop) with no connection to the experiment's main data storage. Implementation of these concepts is considered.
OntoMate: a text-mining tool aiding curation at the Rat Genome Database
Liu, Weisong; Laulederkind, Stanley J. F.; Hayman, G. Thomas; Wang, Shur-Jen; Nigam, Rajni; Smith, Jennifer R.; De Pons, Jeff; Dwinell, Melinda R.; Shimoyama, Mary
2015-01-01
The Rat Genome Database (RGD) is the premier repository of rat genomic, genetic and physiologic data. Converting data from free text in the scientific literature to a structured format is one of the main tasks of all model organism databases. RGD spends considerable effort manually curating gene, Quantitative Trait Locus (QTL) and strain information. The rapidly growing volume of biomedical literature and the active research in the biological natural language processing (bioNLP) community have given RGD the impetus to adopt text-mining tools to improve curation efficiency. Recently, RGD has initiated a project to use OntoMate, an ontology-driven, concept-based literature search engine developed at RGD, as a replacement for the PubMed (http://www.ncbi.nlm.nih.gov/pubmed) search engine in the gene curation workflow. OntoMate tags abstracts with gene names, gene mutations, organism name and most of the 16 ontologies/vocabularies used at RGD. All terms/entities tagged to an abstract are listed with the abstract in the search results. All listed terms are linked both to data entry boxes and a term browser in the curation tool. OntoMate also provides user-activated filters for species, date and other parameters relevant to the literature search. Using the system for literature search and import has streamlined the process compared to using PubMed. The system was built with a scalable and open architecture, including features specifically designed to accelerate the RGD gene curation process. With the use of bioNLP tools, RGD has added more automation to its curation workflow. Database URL: http://rgd.mcw.edu PMID:25619558
Sagbo, S; Blochaou, F; Langlotz, F; Vangenot, C; Nolte, L-P; Zheng, G
2005-01-01
Computer-Assisted Orthopaedic Surgery (CAOS) has made much progress over the last 10 years. Navigation systems have been recognized as important tools that help surgeons, and various such systems have been developed. A disadvantage of these systems is that they use non-standard formalisms and techniques. As a result, there are no standard concepts for implant and tool management or data formats to store information for use in 3D planning and navigation. We addressed these limitations and developed a practical and generic solution that offers benefits for surgeons, implant manufacturers, and CAS application developers. We developed a virtual implant database containing geometrical as well as calibration information for orthopedic implants and instruments, with a focus on trauma. This database has been successfully tested for various applications in the client/server mode. The implant information is not static, however, because manufacturers periodically revise their implants, resulting in the deletion of some implants and the introduction of new ones. Tracking these continuous changes and keeping CAS systems up to date is a tedious task if done manually. This leads to additional costs for system development, and some errors are inevitably generated due to the huge amount of information that has to be processed. To ease management with respect to implant life cycle, we developed a tool to assist end-users (surgeons, hospitals, CAS system providers, and implant manufacturers) in managing their implants. Our system can be used for pre-operative planning and intra-operative navigation, and also for any surgical simulation involving orthopedic implants. Currently, this tool allows addition of new implants, modification of existing ones, deletion of obsolete implants, export of a given implant, and also creation of backups. Our implant management system has been successfully tested in the laboratory with very promising results. 
It makes it possible to fill the current gap that exists between the CAS system and implant manufacturers, hospitals, and surgeons.
Resources for comparing the speed and performance of medical autocoders.
Berman, Jules J
2004-06-15
Concept indexing is a popular method for characterizing medical text, and is one of the most important early steps in many data mining efforts. Concept indexing differs from simple word or phrase indexing because concepts are typically represented by a nomenclature code that binds a medical concept to all equivalent representations. A concept search on the term renal cell carcinoma would be expected to find occurrences of hypernephroma and renal carcinoma (concept equivalents). The purpose of this study is to provide freely available resources to compare speed and performance among different autocoders. These tools consist of: 1) a public domain autocoder written in Perl (a free and open source programming language that installs on any operating system); 2) a nomenclature database derived from the unencumbered subset of the publicly available Unified Medical Language System; 3) a large corpus of autocoded output derived from a publicly available medical text. A simple lexical autocoder was written that parses plain text into a listing of all 1-, 2-, 3-, and 4-word strings contained in the text, assigning a nomenclature code to text strings that match terms in the nomenclature. The nomenclature used is the unencumbered subset of the 2003 Unified Medical Language System (UMLS). The unencumbered subset of UMLS was reduced to exclude homonymous one-word terms and proper names, resulting in a term/code data dictionary containing about a half million medical terms. The Online Mendelian Inheritance in Man (OMIM), a 92+ Megabyte publicly available medical opus, was used as sample medical text for the autocoder. The autocoding Perl script is remarkably short, consisting of just 38 command lines. The 92+ Megabyte OMIM file was completely autocoded in 869 seconds on a 2.4 GHz processor (less than 10 seconds per Megabyte of text). The autocoded output file (9,540,442 bytes) contains 367,963 coded terms from OMIM and is distributed with this manuscript.
A public domain Perl script is provided that can parse through plain-text files of any length, matching concepts against an external nomenclature. The script and associated files can be used freely to compare the speed and performance of autocoding software.
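The parsing strategy described above, extracting every 1- to 4-word string and matching it against a term/code dictionary, can be sketched in a few lines. The sketch below is in Python rather than the authors' Perl, and the nomenclature entries are illustrative, not the UMLS subset itself:

```python
# Tiny term -> concept-code dictionary; note that two surface forms map to
# the same code, which is what makes this concept indexing rather than
# plain phrase indexing.
NOMENCLATURE = {
    "renal cell carcinoma": "C0007134",
    "hypernephroma":        "C0007134",   # concept equivalent
}

def autocode(text):
    """Return (phrase, code) pairs for every 1- to 4-word match in text."""
    words = text.lower().split()
    hits = []
    for n in range(1, 5):                      # 1- to 4-word strings
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            if phrase in NOMENCLATURE:
                hits.append((phrase, NOMENCLATURE[phrase]))
    return hits

coded = autocode("biopsy confirmed renal cell carcinoma in the left kidney")
```

With a hash-backed dictionary the lookup per phrase is constant time, which is consistent with the near-linear throughput the study reports.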
The dimensions of nursing surveillance: a concept analysis.
Kelly, Lesly; Vincent, Deborah
2011-03-01
This paper is a report of an analysis of the concept of nursing surveillance. Nursing surveillance, a primary function of acute care nurses, is critical to patient safety and outcomes. Although it has been associated with patient outcomes and organizational context of care, little knowledge has been generated about the conceptual and operational process of surveillance. A search using the CINAHL, Medline and PubMed databases was used to compile an international data set of 18 papers and 4 book chapters published from 1985 to 2009. Rodger's evolutionary concept analysis techniques were used to analyse surveillance in a systems framework. This method focused the search to nursing surveillance (as opposed to other medical uses of the term) and used a theoretical framework to guide the analysis. The examination of the literature clarifies the multifaceted nature of nursing surveillance in the acute care setting. Surveillance involves purposeful and ongoing acquisition, interpretation and synthesis of patient data for clinical decision-making. Behavioural activities and multiple cognitive processes are used in surveillance in order for the nurse to make decisions for patient safety and health maintenance. A systems approach to the analysis also demonstrates how organizational characteristics and contextual factors influence the process in the acute care environment. This conceptual analysis describes the nature of the surveillance process and clarifies the concept for effective communication and future use in health services research. © 2010 The Authors. Journal of Advanced Nursing © 2010 Blackwell Publishing Ltd.
Leading change: a concept analysis.
Nelson-Brantley, Heather V; Ford, Debra J
2017-04-01
To report an analysis of the concept of leading change. Nurses have been called to lead change to advance the health of individuals, populations, and systems. Conceptual clarity about leading change in the context of nursing and healthcare systems provides an empirical direction for future research and theory development that can advance the science of leadership studies in nursing. Concept analysis. CINAHL, PubMed, PsycINFO, Psychology and Behavioral Sciences Collection, Health Business Elite and Business Source Premier databases were searched using the terms: leading change, transformation, reform, leadership and change. Literature published in English from 2001 - 2015 in the fields of nursing, medicine, organizational studies, business, education, psychology or sociology were included. Walker and Avant's method was used to identify descriptions, antecedents, consequences and empirical referents of the concept. Model, related and contrary cases were developed. Five defining attributes of leading change were identified: (a) individual and collective leadership; (b) operational support; (c) fostering relationships; (d) organizational learning; and (e) balance. Antecedents were external or internal driving forces and organizational readiness. The consequences of leading change included improved organizational performance and outcomes and new organizational culture and values. A theoretical definition and conceptual model of leading change were developed. Future studies that use and test the model may contribute to the refinement of a middle-range theory to advance nursing leadership research and education. From this, empirically derived interventions that prepare and enable nurses to lead change to advance health may be realized. © 2016 John Wiley & Sons Ltd.
Health Recommender Systems: Concepts, Requirements, Technical Basics and Challenges
Wiesner, Martin; Pfeifer, Daniel
2014-01-01
During the last decades huge amounts of data have been collected in clinical databases representing patients' health states (e.g., as laboratory results, treatment plans, medical reports). Hence, digital information available for patient-oriented decision making has increased drastically but is often scattered across different sites. As a solution, personal health record systems (PHRS) are meant to centralize an individual's health data and to allow access for the owner as well as for authorized health professionals. Yet, expert-oriented language, complex interrelations of medical facts and information overload in general pose major obstacles for patients to understand their own record and to draw adequate conclusions. In this context, recommender systems may supply patients with additional laymen-friendly information helping to better comprehend their health status as represented by their record. However, such systems must be adapted to cope with the specific requirements in the health domain in order to deliver highly relevant information for patients. They are referred to as health recommender systems (HRS). In this article we give an introduction to health recommender systems and explain why they are a useful enhancement to PHR solutions. Basic concepts and scenarios are discussed and a first implementation is presented. In addition, we outline an evaluation approach for such a system, which is supported by medical experts. The construction of a test collection for case-related recommendations is described. Finally, challenges and open issues are discussed. PMID:24595212
Prototype and Evaluation of AutoHelp: A Case-based, Web-accessible Help Desk System for EOSDIS
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.; Thurman, David A.
1999-01-01
AutoHelp is a case-based, Web-accessible help desk for users of the EOSDIS. It uses a combination of advanced computer and Web technologies, knowledge-based systems tools, and cognitive engineering to offload the current, person-intensive help desk facilities at the DAACs. As a case-based system, AutoHelp starts with an organized database of previous help requests (questions and answers) indexed by a hierarchical category structure that facilitates recognition by persons seeking assistance. As an initial proof-of-concept demonstration, a month of email help requests to the Goddard DAAC were analyzed and partially organized into help request cases. These cases were then categorized to create a preliminary case indexing system, or category structure. This category structure allows potential users to identify or recognize categories of questions, responses, and sample cases similar to their needs. Year one of this research project focused on the development of a technology demonstration. User assistance 'cases' are stored in an Oracle database in a combination of tables linking prototypical questions with responses and detailed examples from the email help requests analyzed to date. When a potential user accesses the AutoHelp system, a Web server provides a Java applet that displays the category structure of the help case base organized by the needs of previous users. When the user identifies or requests a particular type of assistance, the applet uses Java database connectivity (JDBC) software to access the database and extract the relevant cases. The demonstration will include an on-line presentation of how AutoHelp is currently structured. We will show how a user might request assistance via the Web interface and how the AutoHelp case base provides assistance. The presentation will describe the DAAC data collection, case definition, and organization to date, as well as the AutoHelp architecture.
It will conclude with the year 2 proposal to more fully develop the case base, the user interface (including the category structure), interface with the current DAAC Help System, the development of tools to add new cases, and user testing and evaluation at (perhaps) the Goddard DAAC.
Toward Phase IV, Populating the WOVOdat Database
NASA Astrophysics Data System (ADS)
Ratdomopurbo, A.; Newhall, C. G.; Schwandner, F. M.; Selva, J.; Ueda, H.
2009-12-01
One of the challenges for volcanologists is the fact that more and more people are likely to live on volcanic slopes. Information about volcanic activity during unrest should be accurate and rapidly distributed. As unrest may lead to eruption, evacuation may be necessary to minimize damage and casualties. The decision to evacuate people is usually based on the interpretation of monitoring data. Over the past several decades, volcano monitoring has used increasingly sophisticated instruments, and a huge volume of data is collected in order to understand the state of activity and behaviour of a volcano. WOVOdat, the World Organization of Volcano Observatories (WOVO) Database of Volcanic Unrest, will provide context within which scientists can interpret the state of their own volcano, during and between crises. After a decision during the 2000 IAVCEI General Assembly to create WOVOdat, development has passed through several phases: Concept Development (Phase I, 2000-2002), Database Design (Phase II, 2003-2006) and Pilot Testing (Phase III, 2007-2008). For WOVOdat to be operational, two steps remain: Database Population (Phase IV) and Enhancement and Maintenance (Phase V). Since January 2009, the WOVOdat project has been hosted by the Earth Observatory of Singapore for at least a five-year period. According to the original 2002 plan, this five-year period will be used to complete Phase IV. As the WOVOdat design is not yet tested for all types of data, 2009 is still reserved for building the back-end relational database management system (RDBMS) of WOVOdat and testing it with more complex data. Fine-tuning of the WOVOdat RDBMS design is being done with each new upload of observatory data. The next and main phase of WOVOdat development will be data population, managing data transfer from multiple observatory formats to the WOVOdat format.
Data population will depend on two important things, the availability of SQL database in volcano observatories and their data sharing policy. Hence, a strong collaboration with every WOVO observatory is important. For some volcanoes where the data are not in an SQL system, the WOVOdat project will help scientists working on the volcano to start building an SQL database.
NASA Astrophysics Data System (ADS)
Piasecki, M.; Beran, B.
2007-12-01
Search engines have changed the way we see the Internet. The ability to find information by just typing in keywords was a big contribution to the overall web experience. While conventional search engine methodology works well for textual documents, locating scientific data remains a problem, since such data are stored in databases not readily accessible to search engine bots. Considering the different temporal, spatial, and thematic coverage of different databases, especially for interdisciplinary research, it is typically necessary to work with multiple data sources. These sources can be federal agencies, which generally offer national coverage, or regional sources, which cover a smaller area in higher detail. However, for a given geographic area of interest there often exists more than one database with relevant data. Thus, being able to query multiple databases simultaneously is a desirable feature for scientists. Development of such a search engine requires dealing with various heterogeneity issues. Scientific database systems often impose controlled vocabularies, which ensure that they are generally homogeneous within themselves but semantically heterogeneous with respect to one another. This bounds the possible semantics-related problems, making them easier to solve than in conventional search engines that deal with free text. We have developed a search engine that enables querying multiple data sources simultaneously and returns data in a standardized output despite the aforementioned heterogeneity between the underlying systems. This application relies mainly on metadata catalogs or indexing databases, ontologies, and web services, with virtual-globe and AJAX technologies for the graphical user interface. Users can trigger a search of dozens of different parameters over hundreds of thousands of stations from multiple agencies by providing a keyword, a spatial extent, i.e.
a bounding box, and a temporal bracket. As part of this development we have also added an environment that allows users to do some of the semantic tagging, i.e. the linkage of a variable name (which can be anything they desire) to defined concepts in the ontology structure which in turn provides the backbone of the search engine.
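The federated query described above, a keyword plus a bounding box plus a temporal bracket fanned out over several catalogs and merged into one standardized result list, can be sketched as follows. The catalog layout, field names and agency names here are illustrative assumptions, not the system's actual schema or services.

```python
from dataclasses import dataclass

@dataclass
class Query:
    keyword: str   # concept term, ideally resolved against the ontology
    bbox: tuple    # (min_lon, min_lat, max_lon, max_lat)
    period: tuple  # (start_year, end_year)

def matches(record, q):
    """True if a station record passes the keyword, spatial and temporal filters."""
    lon, lat = record["location"]
    min_lon, min_lat, max_lon, max_lat = q.bbox
    return (q.keyword in record["concepts"]
            and min_lon <= lon <= max_lon
            and min_lat <= lat <= max_lat
            and record["years"][0] <= q.period[1]
            and record["years"][1] >= q.period[0])

def federated_search(catalogs, q):
    """Query every catalog and merge hits into one standardized output list."""
    hits = []
    for agency, records in catalogs.items():
        for r in records:
            if matches(r, q):
                hits.append({"agency": agency, "station": r["station"]})
    return hits
```

In the real system the per-agency filtering happens behind web services and metadata catalogs rather than in-memory dictionaries, but the merge-into-one-schema step is the same idea.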
Use of HSM with Relational Databases
NASA Technical Reports Server (NTRS)
Breeden, Randall; Burgess, John; Higdon, Dan
1996-01-01
Hierarchical storage management (HSM) systems have evolved to become a critical component of large information storage operations. They are built on the concept of using a hierarchy of storage technologies to provide a balance in performance and cost. In general, they migrate data from expensive high performance storage to inexpensive low performance storage based on frequency of use. The predominant usage characteristic is that frequency of use is reduced with age and in most cases quite rapidly. The result is that HSM provides an economical means for managing and storing massive volumes of data. Inherent in HSM systems is system managed storage, where the system performs most of the work with minimum operations personnel involvement. This automation is generally extended to include: backup and recovery, data duplexing to provide high availability, and catastrophic recovery through use of off-site storage.
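The age-based migration policy described above can be sketched as a simple tiering function. The tier names, thresholds and day-based clock below are illustrative assumptions, not details from the paper.

```python
def plan_migration(files, now, hot_days=30, warm_days=365):
    """Assign each file to a storage tier by days since last access.

    files: {name: last_access_day}; now: current day number.
    Recently used data stays on fast storage; rarely used data
    migrates down the hierarchy.
    """
    plan = {}
    for name, last_access in files.items():
        age = now - last_access
        if age <= hot_days:
            plan[name] = "disk"     # expensive, high performance
        elif age <= warm_days:
            plan[name] = "optical"  # middle tier
        else:
            plan[name] = "tape"     # inexpensive, low performance
    return plan
```

A production HSM also tracks size, duplexing and backup state, but frequency/recency of use is the dominant input, as the abstract notes.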
A Model of Object-Identities and Values
1990-02-23
integrity constraints in its construct, which provides the natural integration of the logical database model and the object-oriented database model. ... portions are integrated by a simple commutative diagram of modeling functions. The formalism includes the expression of integrity constraints in its ... 5.2.2 The Concept Model and Its Semantics ... 5.2.3 Two Kinds of Predicates
NASA Astrophysics Data System (ADS)
Kacprzyk, Janusz; Zadrożny, Sławomir
2010-05-01
We present how the conceptually and numerically simple concept of a fuzzy linguistic database summary can be a very powerful tool for gaining much insight into the very essence of data. The use of linguistic summaries provides tools for the verbalisation of data analysis (mining) results which, in addition to the more commonly used visualisation, e.g. via a graphical user interface, can contribute to increased human consistency and ease of use, notably for supporting decision makers via the data-driven decision support system paradigm. Two new relevant aspects of the analysis, both first initiated by the authors, are also outlined. First, following Kacprzyk and Zadrożny, it is further considered how linguistic data summarisation is closely related to some types of solutions used in natural language generation (NLG). This can make it possible to use the ever more effective and efficient tools and techniques developed in NLG. Second, similar remarks are given on relations to systemic functional linguistics. Moreover, following Kacprzyk and Zadrożny, comments are given on an extremely relevant aspect of linguistic summarisation of data, its scalability, using the new concept of conceptual scalability.
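A minimal sketch of how the truth degree of a linguistic summary such as "most records are young" can be computed, in the spirit of the Yager/Kacprzyk-Zadrożny approach. The membership functions and quantifier thresholds below are common textbook choices, not the authors' exact definitions.

```python
def most(p):
    """Piecewise-linear fuzzy quantifier 'most' over a proportion p in [0, 1]."""
    if p <= 0.3:
        return 0.0
    if p >= 0.8:
        return 1.0
    return (p - 0.3) / 0.5

def young(age):
    """Trapezoidal membership for 'young': fully young up to 25, not young from 40."""
    if age <= 25:
        return 1.0
    if age >= 40:
        return 0.0
    return (40 - age) / 15

def summary_truth(ages):
    """Truth degree of the summary 'most records are young'."""
    p = sum(young(a) for a in ages) / len(ages)
    return most(p)
```

The returned value in [0, 1] is what gets verbalised for the decision maker, e.g. a truth of 0.73 supports reporting "most records are young" with moderate confidence.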
Database Administration: Concepts, Tools, Experiences, and Problems.
ERIC Educational Resources Information Center
Leong-Hong, Belkis; Marron, Beatrice
The concepts of data base administration, the role of the data base administrator (DBA), and computer software tools useful in data base administration are described in order to assist data base technologists and managers. A study of DBA's in the Federal Government is detailed in terms of the functions they perform, the software tools they use,…
INDIAM--an e-learning system for the interpretation of mammograms.
Guliato, Denise; Bôaventura, Ricardo S; Maia, Marcelo A; Rangayyan, Rangaraj M; Simedo, Mariângela S; Macedo, Túlio A A
2009-08-01
We propose the design of a teaching system named Interpretation and Diagnosis of Mammograms (INDIAM) for training students in the interpretation of mammograms and diagnosis of breast cancer. The proposed system integrates an illustrated tutorial on radiology of the breast, that is, mammography, which uses education techniques to guide the user (doctors, students, or researchers) through various concepts related to the diagnosis of breast cancer. The user can obtain informative text about specific subjects, access a library of bibliographic references, and retrieve cases from a mammographic database that are similar to a query case on hand. The information of each case stored in the mammographic database includes the radiological findings, the clinical history, the lifestyle of the patient, and complementary exams. The breast cancer tutorial is linked to a module that simulates the analysis and diagnosis of a mammogram. The tutorial incorporates tools for helping the user to evaluate his or her knowledge about a specific subject by using the education system or by simulating a diagnosis with appropriate feedback in case of error. The system also makes available digital image processing tools that allow the user to draw the contour of a lesion, the contour of the breast, or identify a cluster of calcifications in a given mammogram. The contours provided by the user are submitted to the system for evaluation. The teaching system is integrated with AMDI-An Indexed Atlas of Digital Mammograms-that includes case studies, e-learning, and research systems. All the resources are accessible via the Web.
Tone as a health concept: An analysis.
McDowall, Donald; Emmanuel, Elizabeth; Grace, Sandra; Chaseling, Marilyn
2017-11-01
Concept analysis. This paper is a report on the analysis of the concept of tone in chiropractic. The purpose of this paper is to clarify the concept of tone as originally understood by Daniel David Palmer from 1895 to 1914 and to monitor its evolution over time. Data were sourced from Palmer's original work, published between 1895 and 1914. A literature search from 1980 to 2016 was also performed on the online databases CINAHL, PubMed and Scopus with key terms including 'tone', 'chiropractic', 'Palmer', 'vitalism', 'health', 'homeostasis', 'holism' and 'wellness'. Finally, hand-searches were conducted through chiropractic books and professional literature from 1906 to 1980 for any references to 'tone'. Rodgers' evolutionary method of analysis was used to categorise the data in relation to the surrogates, attributes, references, antecedents and consequences of tone. A total of 49 references were found: five from publications by Palmer; three from the database searches; and the remaining 41 from professional books, trade journals and websites. There is no clear interpretation of tone in the contemporary chiropractic literature. Tone is closely aligned with functional neurology and can be understood as an interface between the metaphysical and the biomedical. Using the concept of tone as a foundation for practice could strengthen the identity of the chiropractic profession. Copyright © 2017 Elsevier Ltd. All rights reserved.
NRA8-21 Cycle 2 RBCC Turbopump Risk Reduction
NASA Technical Reports Server (NTRS)
Ferguson, Thomas V.; Williams, Morgan; Marcu, Bogdan
2004-01-01
This project was composed of three sub-tasks. The objective of the first task was to use the CFD code INS3D to generate both on- and off-design predictions for the consortium-optimized impeller flowfield. The results of the flow simulations are given in the first section. The objective of the second task was to construct a turbomachinery testing database comprising measurements made on several different impellers, an inducer and a diffuser. The data were in the form of static pressure measurements as well as laser velocimeter measurements of velocities and flow angles within the stated components. Several databases with this information were created for these components. The third subtask objective was two-fold: first, to validate the Enigma CFD code for pump diffuser analysis, and second, to perform steady and unsteady analyses on some wide flow range diffuser concepts using Enigma. The code was validated using the consortium-optimized impeller database and then applied to two different concepts for wide flow diffusers.
NASA Astrophysics Data System (ADS)
Bhattacharya, D.; Forbes, C.; Roehrig, G.; Chandler, M. A.
2017-12-01
Promoting climate literacy among in-service science teachers necessitates an understanding of fundamental concepts about the Earth's climate system (USGCRP, 2009). Very few teachers report having any formal instruction in climate science (Plutzer et al., 2016); therefore, rather simple conceptions of climate systems and their variability exist, which has implications for students' science learning (Francies et al., 1993; Libarkin, 2005; Rebich, 2005). This study uses the inferences from a NASA Innovations in Climate Education (NICE) teacher professional development program (CYCLES) to establish the necessity of developing an epistemological perspective among teachers. In CYCLES, 19 middle and high school teachers (male=8, female=11) were assessed for their understanding of global climate change (GCC). A qualitative analysis of their concept maps and an alignment of their conceptions with the Essential Principles of Climate Literacy (NOAA, 2009) demonstrated that participants emphasized EPCL 1, 3, 6, and 7, focusing on the Earth system and the atmospheric, social and ecological impacts of GCC. However, EPCL 4 (variability in climate) and 5 (data-based observations and modeling) were least represented and emphasized. Thus, participants' descriptions of global climatic patterns were often factual rather than incorporating causation (why temperatures are increasing) and/or correlation (what other factors might influence global temperatures). Therefore, engaging with epistemic dimensions of climate science to understand the processes, tools, and norms through which climate scientists study the Earth's climate system (Huxter et al., 2013) is critical for developing an in-depth conceptual understanding of climate. 
CLiMES (Climate Modeling and Epistemology of Science), an NSF initiative, proposes to use EzGCM (EzGlobal Climate Model) to engage students and teachers in designing and running simulations, performing data processing activities, and analyzing computational models to develop their own evidence-based claims about the Earth's climate system. We describe how epistemological investigations can be conducted using EzGCM to bring the scientific process and authentic climate science practice to middle and high school classrooms.
Design and implementation of a portal for the medical equipment market: MEDICOM.
Palamas, S; Kalivas, D; Panou-Diamandi, O; Zeelenberg, C; van Nimwegen, C
2001-01-01
The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. To establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design. 
Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, Javascript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126 Information technology - Software product evaluation - Quality characteristics and guidelines for their use. The overall user evaluation of the MEDICOM system was very positive. The MEDICOM system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support.
Design and Implementation of a Portal for the Medical Equipment Market: MEDICOM
Kalivas, Dimitris; Panou-Diamandi, Ourania; Zeelenberg, Cees; van Nimwegen, Chris
2001-01-01
Background The MEDICOM (Medical Products Electronic Commerce) Portal provides the electronic means for medical-equipment manufacturers to communicate online with their customers while supporting the Purchasing Process and Post Market Surveillance. The Portal offers a powerful Internet-based search tool for finding medical products and manufacturers. Its main advantage is the fast, reliable and up-to-date retrieval of information while eliminating all unrelated content that a general-purpose search engine would retrieve. The Universal Medical Device Nomenclature System (UMDNS) registers all products. The Portal accepts end-user requests and generates a list of results containing text descriptions of devices, UMDNS attribute values, and links to manufacturer Web pages and online catalogues for access to more-detailed information. Device short descriptions are provided by the corresponding manufacturer. The Portal offers technical support for integration of the manufacturers' Web sites with itself. The network of the Portal and the connected manufacturers' sites is called the MEDICOM system. Objective To establish an environment hosting all the interactions of consumers (health care organizations and professionals) and providers (manufacturers, distributors, and resellers of medical devices). Methods The Portal provides the end-user interface, implements system management, and supports database compatibility. The Portal hosts information about the whole MEDICOM system (Common Database) and summarized descriptions of medical devices (Short Description Database); the manufacturers' servers present extended descriptions. The Portal provides end-user profiling and registration, an efficient product-searching mechanism, bulletin boards, links to on-line libraries and standards, on-line information for the MEDICOM system, and special messages or advertisements from manufacturers. Platform independence and interoperability characterize the system design. 
Relational Database Management Systems are used for the system's databases. The end-user interface is implemented using HTML, Javascript, Java applets, and XML documents. Communication between the Portal and the manufacturers' servers is implemented using a CORBA interface. Remote administration of the Portal is enabled by dynamically-generated HTML interfaces based on XML documents. A representative group of users evaluated the system. The aim of the evaluation was validation of the usability of all of MEDICOM's functionality. The evaluation procedure was based on ISO/IEC 9126 Information technology - Software product evaluation - Quality characteristics and guidelines for their use. Results The overall user evaluation of the MEDICOM system was very positive. The MEDICOM system was characterized as an innovative concept that brings significant added value to medical-equipment commerce. Conclusions The eventual benefits of the MEDICOM system are (a) establishment of a worldwide-accessible marketplace between manufacturers and health care professionals that provides up-to-date and high-quality product information in an easy and friendly way and (b) enhancement of the efficiency of marketing procedures and after-sales support. PMID:11772547
Barros, Débora Gomes; Chiesa, Anna Maria
2007-12-01
Given recent changes in the organization of primary health care in Brazil, it is necessary to reflect on the contributions of nursing care. This article aims to review the concepts of autonomy and health needs and their applications in different proposals for the systematization of nursing care. It is a literature review on the systematization of nursing assistance, autonomy and health needs in the LILACS and BDENF databases. The most relevant results indicate that autonomy incorporates both professional and patient aspects, sustained by their respective categories. Regarding needs, the literature tracks biological and social needs, which intersect with psychological needs to cover biopsychosocial needs. It was found that the application of these concepts was not present in nursing classification systems. However, they were more related to the International Classification of Nursing Practice (ICNP) and International Classification of Nursing Practice in Collective Health (ICNPCH) projects.
Structure at every scale: A semantic network account of the similarities between unrelated concepts.
De Deyne, Simon; Navarro, Daniel J; Perfors, Amy; Storms, Gert
2016-09-01
Similarity plays an important role in organizing the semantic system. However, given that similarity cannot be defined on purely logical grounds, it is important to understand how people perceive similarities between different entities. Despite this, the vast majority of studies focus on measuring similarity between very closely related items; little is known about concepts that are only very weakly related. In this article, we present 4 experiments showing that there are reliable and systematic patterns in how people evaluate the similarities between very dissimilar entities. We present a semantic network account of these similarities showing that a spreading activation mechanism defined over a word association network naturally makes correct predictions about weak similarities, whereas simpler models based on direct neighbors between word pairs in the same network cannot. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
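A toy version of the spreading-activation mechanism can be sketched over a small weighted word-association graph. The graph, decay factor and step count below are illustrative assumptions, not the article's actual network or parameters.

```python
def spread_activation(graph, source, steps=2, decay=0.5):
    """Spread activation from `source` over a weighted association graph.

    graph: {word: {neighbour: association_strength}}.
    Each step, every active node passes decayed activation to its neighbours.
    """
    activation = {source: 1.0}
    for _ in range(steps):
        new = dict(activation)
        for word, act in activation.items():
            for nb, w in graph.get(word, {}).items():
                new[nb] = new.get(nb, 0.0) + decay * act * w
        activation = new
    return activation

def similarity(graph, a, b, steps=2):
    """Weak similarity of b to a: activation reaching b when spreading from a."""
    return spread_activation(graph, a, steps).get(b, 0.0)
```

The contrast the article draws falls out directly: a direct-neighbor model gives "lemon" and "vinegar" zero similarity if there is no direct association, while spreading activation finds the indirect path through "sour".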
Frietze, Seth; Leatherman, Judith
2014-03-01
New genes that arise from modification of the noncoding portion of a genome rather than being duplicated from parent genes are called de novo genes. These genes, identified by their brief evolution and lack of parent genes, provide an opportunity to study the timeframe in which emerging genes integrate into cellular networks, and how the characteristics of these genes change as they mature into bona fide genes. An article by G. Abrusán provides an opportunity to introduce students to fundamental concepts in evolutionary and comparative genetics and to provide a technical background by which to discuss systems biology approaches when studying the evolutionary process of gene birth. Basic background needed to understand the Abrusán study and details on comparative genomic concepts tailored for a classroom discussion are provided, including discussion questions and a supplemental exercise on navigating a genome database.
Building Airport Surface HITL Simulation Capability
NASA Technical Reports Server (NTRS)
Chinn, Fay Cherie
2016-01-01
FutureFlight Central (FFC) is a high-fidelity, real-time simulator designed to study surface operations and automation. As an air traffic control tower simulator, FFC allows stakeholders such as the FAA, controllers, pilots, airports, and airlines to develop and test advanced surface and terminal-area concepts and automation, including NextGen and beyond automation concepts and tools. These technologies will address the safety, capacity and environmental issues facing the National Airspace System. FFC also has extensive video streaming capabilities, which, combined with the 3-D database capability, make the facility ideal for any research needing an immersive virtual and/or video environment. FutureFlight Central allows human-in-the-loop testing, which accommodates human interactions and errors, giving a more complete picture than fast-time simulations. This presentation describes FFC's capabilities and the components necessary to build an airport surface human-in-the-loop simulation capability.
Clothing Matching for Visually Impaired Persons
Yuan, Shuai; Tian, YingLi; Arditi, Aries
2012-01-01
Matching clothes is a challenging task for many blind people. In this paper, we present a proof of concept system to solve this problem. The system consists of 1) a camera connected to a computer to perform pattern and color matching process; 2) speech commands for system control and configuration; and 3) audio feedback to provide matching results for both color and patterns of clothes. This system can handle clothes in deficient color without any pattern, as well as clothing with multiple colors and complex patterns to aid both blind and color deficient people. Furthermore, our method is robust to variations of illumination, clothing rotation and wrinkling. To evaluate the proposed prototype, we collect two challenging databases including clothes without any pattern, or with multiple colors and different patterns under different conditions of lighting and rotation. Results reported here demonstrate the robustness and effectiveness of the proposed clothing matching system. PMID:22523465
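One standard ingredient of color matching, coarse histogram comparison, can be sketched as follows. This is a generic technique, not the authors' actual matching algorithm, and the bin count is an arbitrary choice.

```python
def color_histogram(pixels, bins=4):
    """Coarse normalized RGB histogram; pixels are (r, g, b) tuples in 0..255."""
    hist = {}
    for r, g, b in pixels:
        # Quantize each channel into `bins` buckets to tolerate small
        # illumination changes.
        key = (r * bins // 256, g * bins // 256, b * bins // 256)
        hist[key] = hist.get(key, 0) + 1
    n = len(pixels)
    return {k: v / n for k, v in hist.items()}

def histogram_match(h1, h2):
    """Histogram intersection in [0, 1]; 1.0 means identical color distributions."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))
```

The coarse quantization is what gives robustness to mild lighting variation; pattern matching (texture) would require a separate descriptor on top of this.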
Cell illustrator 4.0: a computational platform for systems biology.
Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru
2011-01-01
Cell Illustrator is a software platform for Systems Biology that uses the concept of Petri net for modeling and simulating biopathways. It is intended for biological scientists working at bench. The latest version of Cell Illustrator 4.0 uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; parameter search module; high-performance simulation module; CSML database management system; conversion from CSML model to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and, export to SVG and HTML. Cell Illustrator employs an extension of hybrid Petri net in an object-oriented style so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, as well as biochemical reactions.
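The discrete core of Petri-net biopathway modeling can be sketched in a few lines. Cell Illustrator's hybrid Petri nets are far richer (continuous places, delays, attached objects), so this shows only basic token-firing semantics, with an invented enzyme-catalysis example.

```python
def fire(places, transitions):
    """Run a discrete Petri net until no transition is enabled.

    places: {place_name: token_count}.
    transitions: list of (inputs, outputs) pairs, each a {place: tokens} dict.
    """
    places = dict(places)
    fired = True
    while fired:
        fired = False
        for inputs, outputs in transitions:
            if all(places.get(p, 0) >= n for p, n in inputs.items()):
                for p, n in inputs.items():
                    places[p] -= n
                for p, n in outputs.items():
                    places[p] = places.get(p, 0) + n
                fired = True
    return places
```

An enzyme-catalyzed reaction maps naturally onto this: the transition consumes a substrate token and the enzyme token, then returns the enzyme along with a product token, so the enzyme count is conserved while substrate converts to product.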
Human factors of intelligent computer aided display design
NASA Technical Reports Server (NTRS)
Hunt, R. M.
1985-01-01
Design concepts for a decision support system being studied at NASA Langley as an aid to visual display unit (VDU) designers are described. Ideally, human factors should be taken into account by VDU designers. In reality, although the human factors database on VDUs is small, such systems must be constantly developed, so human factors are often a secondary consideration. An expert system will thus serve mainly in an advisory capacity. Functions can include facilitating the design process by shortening the time to generate and alter drawings, enhancing the capability of breaking design requirements down into simpler functions, and providing visual displays equivalent to the final product. The VDU system could also discriminate, and display the difference, between designer decisions and machine inferences. The system could also aid in analyzing the effects of designer choices on future options and in enunciating when there are data available on design selections.
Development of Rural Emergency Medical System (REMS) with Geospatial Technology in Malaysia
NASA Astrophysics Data System (ADS)
Ooi, W. H.; Shahrizal, I. M.; Noordin, A.; Nurulain, M. I.; Norhan, M. Y.
2014-02-01
Emergency medical services are dedicated services providing out-of-hospital transport to definitive care for patients with illnesses and injuries. In this service, the response time and the preparedness of medical services are of prime importance. The application of space and geospatial technology, such as satellite navigation systems and Geographical Information Systems (GIS), has been proven to improve emergency operations in many developed countries. In collaboration with a medical service NGO, the National Space Agency (ANGKASA) has developed a prototype Rural Emergency Medical System (REMS), focusing on providing medical services to rural areas and incorporating a satellite-based tracking module integrated with GIS and a patient database to improve the response time of the paramedic team during emergencies. With the aim of benefiting the grassroots community by exploiting space technology, the project was able to prove the system concept, which is addressed in this paper.
An Informatics Blueprint for Healthcare Quality Information Systems
Niland, Joyce C.; Rouse, Layla; Stahl, Douglas C.
2006-01-01
There is a critical gap in our nation's ability to accurately measure and manage the quality of medical care. A robust healthcare quality information system (HQIS) has the potential to address this deficiency through the capture, codification, and analysis of information about patient treatments and related outcomes. Because non-technical issues often present the greatest challenges, this paper provides an overview of the socio-technical issues in building a successful HQIS, including the human, organizational, and knowledge management (KM) perspectives. Through an extensive literature review and direct experience in building a practical HQIS (the National Comprehensive Cancer Network Outcomes Research Database system), we have formulated an “informatics blueprint” to guide the development of such systems. While the blueprint was developed to facilitate healthcare quality information collection, management, analysis, and reporting, the concepts and advice provided may be extensible to the development of other types of clinical research information systems. PMID:16622161
Cell Illustrator 4.0: a computational platform for systems biology.
Nagasaki, Masao; Saito, Ayumu; Jeong, Euna; Li, Chen; Kojima, Kaname; Ikeda, Emi; Miyano, Satoru
2010-01-01
Cell Illustrator is a software platform for Systems Biology that uses the concept of Petri net for modeling and simulating biopathways. It is intended for biological scientists working at bench. The latest version of Cell Illustrator 4.0 uses Java Web Start technology and is enhanced with new capabilities, including: automatic graph grid layout algorithms using ontology information; tools using Cell System Markup Language (CSML) 3.0 and Cell System Ontology 3.0; parameter search module; high-performance simulation module; CSML database management system; conversion from CSML model to programming languages (FORTRAN, C, C++, Java, Python and Perl); import from SBML, CellML, and BioPAX; and, export to SVG and HTML. Cell Illustrator employs an extension of hybrid Petri net in an object-oriented style so that biopathway models can include objects such as DNA sequence, molecular density, 3D localization information, transcription with frame-shift, translation with codon table, as well as biochemical reactions.
Clothing Matching for Visually Impaired Persons.
Yuan, Shuai; Tian, Yingli; Arditi, Aries
2011-01-01
Matching clothes is a challenging task for many blind people. In this paper, we present a proof of concept system to solve this problem. The system consists of 1) a camera connected to a computer to perform pattern and color matching process; 2) speech commands for system control and configuration; and 3) audio feedback to provide matching results for both color and patterns of clothes. This system can handle clothes in deficient color without any pattern, as well as clothing with multiple colors and complex patterns to aid both blind and color deficient people. Furthermore, our method is robust to variations of illumination, clothing rotation and wrinkling. To evaluate the proposed prototype, we collect two challenging databases including clothes without any pattern, or with multiple colors and different patterns under different conditions of lighting and rotation. Results reported here demonstrate the robustness and effectiveness of the proposed clothing matching system.
NASA Technical Reports Server (NTRS)
McGlynn, T.; Santisteban, M.
2007-01-01
This chapter provides a very brief introduction to the Structured Query Language (SQL) for getting information from relational databases. We make no pretense that this is a complete or comprehensive discussion of SQL; there are many aspects of the language that will be completely ignored in the presentation. The goal here is to provide enough background so that users understand the basic concepts involved in building and using relational databases. We also go through the steps involved in building a particular astronomical database used in some of the other presentations in this volume.
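The basic CREATE/INSERT/SELECT pattern such an introduction builds toward can be tried with Python's built-in sqlite3 module. The star catalog below is a made-up stand-in for the astronomical database constructed in the chapter.

```python
import sqlite3

# In-memory database with a toy astronomical catalog; the table and
# column names are illustrative, not the chapter's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stars (name TEXT, ra REAL, dec REAL, mag REAL)")
conn.executemany("INSERT INTO stars VALUES (?, ?, ?, ?)", [
    ("Sirius",  101.287, -16.716, -1.46),
    ("Vega",    279.235,  38.784,  0.03),
    ("Polaris",  37.954,  89.264,  1.98),
])
conn.commit()

# The core SQL pattern: SELECT columns FROM table WHERE condition ORDER BY key.
bright = conn.execute(
    "SELECT name FROM stars WHERE mag < 1.0 ORDER BY mag"
).fetchall()
```

Lower magnitude means brighter, so the WHERE clause keeps the naked-eye-brightest stars and ORDER BY returns them brightest first.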
United States Air Force Summer Research Program -- 1993. Volume 4. Rome Laboratory
1993-12-01
H., eds., Object-Oriented Concepts, Databases, and Applications, Addison-Wesley, Reading, MA, 1989. [Lano91] Lano, K., "Z++, An Object-Orientated ..." The Target Filter Manager responds to requests for data and accesses the target database. (Figure 12. Contour plot of antenna pattern, QC2 algorithm.) UPDATING PROBABILISTIC DATABASES, Michael A...
Assessment of indexing trends with specific and general terms for herbal medicine.
Bartol, Tomaz
2012-12-01
Concepts for medicinal plants are represented by a variety of associated general terms with specific indexing patterns in databases, which may not consistently reflect the growth of records. The objectives of this study are to assess the development in databases by identifying general terms that describe herbal medicine with optimal retrieval recall and to identify possible special trends in the co-occurrence of specific and general concepts. Different search strategies are tested in CAB Abstracts, MEDLINE and Web of Science. Specific terms (Origanum and Salvia) are employed. Relevant general terms (e.g. 'Plants, Medicinal', Phytotherapy, Herbal drugs) are identified, along with indexing trends and co-occurrences. Growth trends in specific (narrower) terms are similar among databases. General terms, however, exhibit dissimilar trends, sometimes almost opposing one another. Co-occurrence of specific and general terms is changing over time. General terms may not denote definite development of trends, as the use of terms differs amongst databases, making it difficult to correctly assess the possible number of relevant records. A perceived increase can sometimes be attributed to an increased occurrence of a more general term alongside the specific one. Thesaurus-controlled databases may yield more hits because of 'up-posted' (broader) terms. Use of broader terms is helpful as it enhances retrieval of relevant documents. © 2012 The authors. Health Information and Libraries Journal © 2012 Health Libraries Group.
Adapting a Clinical Data Repository to ICD-10-CM through the use of a Terminology Repository
Cimino, James J.; Remennick, Lyubov
2014-01-01
Clinical data repositories frequently contain patient diagnoses coded with the International Classification of Diseases, Ninth Revision (ICD-9-CM). These repositories now need to accommodate data coded with the Tenth Revision (ICD-10-CM). Database users wish to retrieve relevant data regardless of the system by which they are coded. We demonstrate how a terminology repository (the Research Entities Dictionary or RED) serves as an ontology relating terms of both ICD versions to each other to support seamless version-independent retrieval from the Biomedical Translational Research Information System (BTRIS) at the National Institutes of Health. We make use of the Centers for Medicare and Medicaid Services' General Equivalence Mappings (GEMs) to reduce the modeling effort required to determine whether ICD-10-CM terms should be added to the RED as new concepts or as synonyms of existing concepts. A divide-and-conquer approach is used to develop integration heuristics that offer a satisfactory interim solution and facilitate additional refinement of the integration as time and resources allow. PMID:25954344
Loneliness: a concept analysis.
Bekhet, Abir K; Zauszniewski, Jaclene A; Nakhla, Wagdy E
2008-01-01
Loneliness is a universal human experience recognized since the dawn of time, yet it is unique to every individual. Loneliness can lead to both depression and low self-esteem. This article explicates the concept of loneliness through the examination of its conceptual definition and uses, defining attributes, related concepts, and empirical referents. A literature review using hand searches and databases served as the source of information. Because loneliness is commonly encountered in nursing situations, the information provided will serve as a framework for assessment, planning, intervention, and evaluation of clients.
Social justice: a concept analysis.
Buettner-Schmidt, Kelly; Lobo, Marie L
2012-04-01
This article is a report of an analysis of the concept of social justice. Nursing's involvement in social justice has waned in the recent past. A resurgence of interest in nurses' roles in social justice requires a clear understanding of the concept. Literature for this concept analysis included English-language articles from CINAHL, PubMed, and broad multidisciplinary literature databases, within and outside of health-related literature, for the years 1968-2010. Two books and appropriate websites were also reviewed. The reference lists of the identified sources were reviewed for additional sources. The authors used Wilsonian methods of concept analysis as a guide. An efficient, synthesized definition of social justice, based on the identification of its attributes, antecedents, and consequences, was developed that provides clarification of the concept. Social justice was defined as full participation in society and the balancing of benefits and burdens by all citizens, resulting in equitable living and a just ordering of society. Its attributes included: (1) fairness; (2) equity in the distribution of power, resources, and processes that affect the sufficiency of the social determinants of health; (3) just institutions, systems, structures, policies, and processes; (4) equity in human development, rights, and sustainability; and (5) sufficiency of well-being. Nurses can have an important influence on the health of people globally by reinvesting in social justice. Implications for research, education, practice, and policy, such as the development of a social justice framework and educational competencies, are presented. © 2011 The Authors. Journal of Advanced Nursing © 2011 Blackwell Publishing Ltd.
Tracking the Evolution of the Internet of Things Concept Across Different Application Domains.
Ibarra-Esquer, Jorge E; González-Navarro, Félix F; Flores-Rios, Brenda L; Burtseva, Larysa; Astorga-Vargas, María A
2017-06-14
Both the idea and the technology for connecting sensors and actuators to a network to remotely monitor and control physical systems have been known for many years and developed accordingly. However, a little more than a decade ago the concept of the Internet of Things (IoT) was coined and used to integrate such approaches into a common framework. Technology has been constantly evolving, and so has the concept of the Internet of Things, incorporating new terminology appropriate to technological advances and different application domains. This paper presents the changes that the IoT has undergone since its conception and examines how technological advances have shaped it and fostered the emergence of derived names suited to specific domains. A two-step literature review through major publishers and indexing databases was conducted: first by searching for proposals on the Internet of Things concept and analyzing them to find similarities, differences, and technological features, allowing us to create a timeline showing its development; in the second step, the names most often given to the IoT for specific domains, as well as closely related concepts, were identified and briefly analyzed. The study confirms the claim that a consensus on the IoT definition has not yet been reached, as enabling technology keeps evolving and new application domains are being proposed. However, recent changes have been relatively moderate, and its variations across application domains are clearly differentiated, with data and data technologies playing an important role in the IoT landscape.
The biometric-based module of smart grid system
NASA Astrophysics Data System (ADS)
Engel, E.; Kovalev, I. V.; Ermoshkina, A.
2015-10-01
Within the Smart Grid concept, a flexible biometric-based module built on Principal Component Analysis (PCA) and a selective neural network is developed. To form the selective neural network, the biometric-based module uses a method comprising three main stages: preliminary processing of the image, face localization, and face recognition. Experiments on the Yale face database show that (i) the selective neural network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared to some existing subspace-based methods.
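The PCA stage of such a face recognition pipeline can be sketched with an eigenfaces-style projection. This is a minimal illustration, not the authors' implementation: the data below are random stand-ins for the Yale face database, and the image size and component count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a face dataset: 40 images of 32x32 pixels, flattened.
# (The paper uses the Yale face database; random data here is illustrative.)
faces = rng.random((40, 32 * 32))

# PCA via SVD: center the data, then project onto the top-k components.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

k = 10                      # number of principal components ("eigenfaces")
eigenfaces = Vt[:k]         # (k, 1024) basis spanning the face subspace

def project(image):
    """Map a flattened image to its k-dimensional PCA feature vector."""
    return (image - mean_face) @ eigenfaces.T

features = project(faces)   # (40, k) features fed to the classifier stage
print(features.shape)
```

In a pipeline like the one described, these low-dimensional feature vectors, rather than raw pixels, would be the input to the selective neural network classifier.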
Open web system of Virtual labs for nuclear and applied physics
NASA Astrophysics Data System (ADS)
Saldikov, I. S.; Afanasyev, V. V.; Petrov, V. I.; Ternovykh, M. Yu
2017-01-01
An example of a virtual lab based on unique experimental equipment is presented. The virtual lab is software based on a model of the real equipment. Virtual labs can be used in the educational process in the nuclear safety and analysis field. As an example, the system includes the virtual lab "Experimental determination of the material parameter depending on the pitch of a uranium-water lattice". This paper includes a general description of this lab. A database supporting laboratory work on the unique experimental equipment included in this work, and the development of its concept, are also presented.
Heavy Lift Launch Capability with a New Hydrocarbon Engine (NHE)
NASA Technical Reports Server (NTRS)
Threet, Grady E., Jr.; Holt, James B.; Philips, Alan D.; Garcia, Jessica A.
2011-01-01
The Advanced Concepts Office (ACO) at NASA Marshall Space Flight Center has analyzed over 2000 Ares V and other heavy lift concepts in the last 3 years. These concepts were analyzed for Lunar Exploration Missions, heavy lift capability to Low Earth Orbit (LEO) as well as exploratory missions to other near earth objects in our solar system. With the pending retirement of the Shuttle fleet, our nation will be without a civil heavy lift launch capability, so the future development of a new heavy lift capability is imperative for the exploration and large science missions our Agency has been tasked to deliver. The majority of the heavy lift concepts analyzed by ACO during the last 3 years have been based on liquid oxygen / liquid hydrogen (LOX/LH2) core stage and solids booster stage propulsion technologies (Ares V / Shuttle Derived and their variants). These concepts were driven by the decisions made from the results of the Exploration Systems Architecture Study (ESAS), which in turn, led to the Ares V launch vehicle that has been baselined in the Constellation Program. Now that the decision has been made at the Agency level to cancel Constellation, other propulsion options such as liquid hydrocarbon fuels are back in the exploration trade space. NASA is still planning exploration missions with the eventual destination of Mars and a new heavy lift launch vehicle is still required and will serve as the centerpiece of our nation's next exploration architecture's infrastructure. With an extensive launch vehicle database already developed on LOX/LH2 based heavy lift launch vehicles, ACO initiated a study to look at using a new high thrust (> 1.0 Mlb vacuum thrust) hydrocarbon engine as the primary main stage propulsion in such a launch vehicle.
Space transfer vehicle concepts and requirements study. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Weber, Gary A.
1991-01-01
A description of the study in terms of background, objectives, and issues is provided. NASA is currently studying new initiatives of space exploration involving both piloted and unpiloted missions to destinations throughout the solar system. Many of these missions require substantial improvements in launch vehicle and upper stage capabilities. This study provides a focused examination of the Space Transfer Vehicles (STV) required to perform these missions using the emerging national launch vehicle definition, the Space Station Freedom (SSF) definition, and the latest mission scenario requirements. The study objectives are to define preferred STV concepts capable of accommodating future exploration missions in a cost-effective manner, determine the technology development (if any) required to perform these missions, and develop a decision database of various programmatic approaches for the development of the STV family of vehicles. Special emphasis was given to examining space basing (stationing reusable vehicles at a space station), examining the piloted lunar mission as a primary design mission, and restricting trade studies to the high-performance, near-term cryogenics (LO2/LH2) as vehicle propellant. The study progressed through three distinct 6-month phases. The first phase concentrated on supporting a NASA 3 month definition of exploration requirements (the '90-day study') and during this phase developed and optimized the space-based point-of-departure (POD) 2.5-stage lunar vehicle. The second phase developed a broad decision database of 95 different vehicle options and transportation architectures. The final phase chose the three most cost-effective architectures and developed point designs to carry to the end of the study. These reference vehicle designs are mutually exclusive and correspond to different national choices about launch vehicles and in-space reusability. There is, however, potential for evolution between concepts.
Song, Peipei; He, Jiangjiang; Li, Fen; Jin, Chunlin
2017-02-01
China is facing the great challenge of treating the world's largest rare disease population, an estimated 16 million patients with rare diseases. One effort offering promise has been a pilot national project that was launched in 2013 and that focused on 20 representative rare diseases. Another government-supported special research program on rare diseases - the "Rare Diseases Clinical Cohort Study" - was launched in December 2016. According to the plan for this research project, the unified National Rare Diseases Registry System of China will be established as of 2020, and a large-scale cohort study will be conducted from 2016 to 2020. The project plans to develop 109 technical standards, to establish and improve 2 national databases of rare diseases - a multi-center clinical database and a biological sample library, and to conduct studies on more than 50,000 registered cases of 50 different rare diseases. More importantly, this study will be combined with the concept of precision medicine. Chinese population-specific basic information on rare diseases, clinical information, and genomic information will be integrated to create a comprehensive predictive model with a follow-up database system and a model to evaluate prognosis. This will provide the evidence for accurate classification, diagnosis, treatment, and estimation of prognosis for rare diseases in China. Numerous challenges including data standardization, protecting patient privacy, big data processing, and interpretation of genetic information still need to be overcome, but research prospects offer great promise.
Frameworks to assess health systems governance: a systematic review.
Pyone, Thidar; Smith, Helen; van den Broek, Nynke
2017-06-01
Governance of the health system is a relatively new concept and there are gaps in understanding what health system governance is and how it could be assessed. We conducted a systematic review of the literature to describe the concept of governance and the theories underpinning it as applied to health systems, and to identify which frameworks are available and have been applied to assess health systems governance. Frameworks were reviewed to understand how the principles of governance might be operationalized at different levels of a health system. Electronic databases and web portals of international institutions concerned with governance were searched for publications in English for the period January 1994 to February 2016. Sixteen frameworks developed to assess governance in the health system were identified and are described. Of these, six frameworks were developed based on theories from new institutional economics; three are primarily informed by political science and public management disciplines; three arise from the development literature and four use multidisciplinary approaches. Only five of the identified frameworks have been applied. These used the principal-agent theory, the theory of common pool resources, North's institutional analysis and the cybernetics theory. Governance is a practice, dependent on arrangements set at political or national level, but which needs to be operationalized by individuals at lower levels in the health system; multi-level frameworks acknowledge this. Three frameworks were used to assess governance at all levels of the health system. Health system governance is complex and difficult to assess; the concept of governance originates from different disciplines and is multidimensional. There is a need to validate and apply existing frameworks and share lessons learnt regarding which frameworks work well in which settings.
A comprehensive assessment of governance could enable policy makers to prioritize solutions for problems identified as well as replicate and scale-up examples of good practice. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine.
Web application and database modeling of traffic impact analysis using Google Maps
NASA Astrophysics Data System (ADS)
Yulianto, Budi; Setiono
2017-06-01
Traffic impact analysis (TIA) is a traffic study that aims at identifying the impact of traffic generated by development or change in land use. In addition to identifying the traffic impact, TIA is also equipped with mitigation measures to minimize the arising traffic impact. TIA has become increasingly important since it was defined in the act as one of the requirements in the proposal for a Building Permit. The act encourages a number of TIA studies in various cities in Indonesia, including Surakarta. For that reason, it is necessary to study the development of TIA by adopting the concept of Transportation Impact Control (TIC) in the implementation of the TIA standard document and multimodal modeling. This includes TIA standardization for technical guidelines, a database, and inspection by providing TIA checklists, monitoring, and evaluation. The research was undertaken by collecting historical data on junctions, modeling the data as a relational database, and building a user interface for CRUD (Create, Read, Update, and Delete) operations on the TIA data as a web application using the Google Maps libraries. The result of the research is a system that provides information supporting the improvement and repair of existing TIA documents, making them more transparent, reliable, and credible.
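The relational storage and CRUD layer described above can be sketched with SQLite. This is a hypothetical schema, not the authors': the table layout, field names, and junction record are illustrative assumptions.

```python
import sqlite3

# Hypothetical junction table for TIA historical data; the real schema
# in the paper is not specified, so these fields are illustrative.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE junction (
        id      INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        lat     REAL,      -- latitude passed to the Google Maps layer
        lon     REAL,      -- longitude passed to the Google Maps layer
        volume  INTEGER    -- observed peak-hour traffic volume
    )
""")

# Create
db.execute("INSERT INTO junction (name, lat, lon, volume) VALUES (?, ?, ?, ?)",
           ("Junction A", -7.5655, 110.8167, 3200))
# Read
row = db.execute("SELECT name, volume FROM junction WHERE id = 1").fetchone()
# Update
db.execute("UPDATE junction SET volume = ? WHERE id = 1", (3500,))
updated = db.execute("SELECT volume FROM junction WHERE id = 1").fetchone()[0]
# Delete would be: db.execute("DELETE FROM junction WHERE id = 1")
print(row, updated)
```

In the web application described, each of these four operations would sit behind an HTTP endpoint, with the stored coordinates used to place junction markers on the Google Maps layer.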
Intelligent tutoring using HyperCLIPS
NASA Technical Reports Server (NTRS)
Hill, Randall W., Jr.; Pickering, Brad
1990-01-01
HyperCard is a popular hypertext-like system used for building user interfaces to databases and other applications, and CLIPS is a highly portable government-owned expert system shell. We developed HyperCLIPS in order to fill a gap in the U.S. Army's computer-based instruction tool set; it was conceived as a development environment for building adaptive practical exercises for subject-matter problem-solving, though it is not limited to this approach to tutoring. Once HyperCLIPS was developed, we set out to implement a practical exercise prototype using HyperCLIPS in order to demonstrate the following concepts: learning can be facilitated by doing; student performance evaluation can be done in real-time; and the problems in a practical exercise can be adapted to the individual student's knowledge.
Reach for Reference. BrainPOP--A Teaching Tool Library Media Specialists Should Know
ERIC Educational Resources Information Center
Safford, Barbara Ripp
2005-01-01
This column describes a new teaching tool, BrainPOP, which is a database that blurs the distinction between classroom and library media center. This collection of more than 300 short, concept-based, animated movies is intended primarily for use by teachers in classroom instruction. It is reminiscent of the single-concept film cartridges that used…
ERIC Educational Resources Information Center
Wang, Jianjun
2004-01-01
Located at a meeting place between the West and the East, Hong Kong has been chosen in this comparative investigation to reconfirm a theoretical model of "reciprocal relationship" between mathematics achievement and self-concept using the 8th grade databases from TIMSS and TIMSS-R. During the time between these two projects, Hong Kong…
ERIC Educational Resources Information Center
Codding, Robin S.; Mercer, Sterett; Connell, James; Fiorello, Catherine; Kleinert, Whitney
2016-01-01
There is a paucity of evidence supporting the use of curriculum-based mathematics measures (M-CBMs) at the middle school level, which makes data-based decisions challenging for school professionals. The purpose of this study was to examine the relationships among three existing M-CBM indices: (a) basic facts, (b) concepts/application, and (c)…
Learning concepts of cinenurducation: an integrative review.
Oh, Jina; Kang, Jeongae; De Gagne, Jennie C
2012-11-01
Cinenurducation is the use of films in both didactic and clinical nursing education. Although films are already used as instructional aids in nursing education, few studies demonstrate the learning concepts that can be attributed to this particular teaching strategy. The purpose of this paper is to describe the learning concepts of cinenurducation and its conceptual metaphor based on a review of the literature. The CINAHL, MEDLINE, PsycINFO, ERIC, EBSCO, ProQuest Library Journal, and Scopus databases were searched for articles. Fifteen peer-reviewed articles were selected through title and abstract screening from "films in nursing" related articles published internationally in English within the past 20 years. Four common concepts emerged that relate to cinenurducation: (a) student-centered, (b) experiential, (c) reflective, and (d) problem-solving learning. Current literature corroborates cinenurducation as an effective teaching strategy with its learning activities in nursing education. Future studies may include instructional guides of sample films that could be practically used in various domains to teach nursing competencies, as well as the development of evaluation criteria and standards to assess students' learning outcomes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Information systems in food safety management.
McMeekin, T A; Baranyi, J; Bowman, J; Dalgaard, P; Kirk, M; Ross, T; Schmid, S; Zwietering, M H
2006-12-01
Information systems are concerned with data capture, storage, analysis and retrieval. In the context of food safety management they are vital to assist decision making in a short time frame, potentially allowing decisions to be made and practices to be actioned in real time. Databases with information on microorganisms pertinent to the identification of foodborne pathogens, response of microbial populations to the environment and characteristics of foods and processing conditions are the cornerstone of food safety management systems. Such databases find application in:
- Identifying pathogens in food at the genus or species level using applied systematics in automated ways.
- Identifying pathogens below the species level by molecular subtyping, an approach successfully applied in epidemiological investigations of foodborne disease and the basis for national surveillance programs.
- Predictive modelling software, such as the Pathogen Modeling Program and Growth Predictor (which took over the main functions of Food Micromodel), the raw data of which were combined as the genesis of an international web-based searchable database (ComBase).
- Expert systems combining databases on microbial characteristics, food composition and processing information, with the resulting "pattern match" indicating problems that may arise from changes in product formulation or processing conditions.
- Computer software packages to aid the practical application of HACCP and risk assessment, and decision trees to bring logical sequences to establishing and modifying food safety management practices.
In addition there are many other uses of information systems that benefit food safety more globally, including:
- Rapid dissemination of information on foodborne disease outbreaks via websites or list servers carrying commentary from many sources, including the press and interest groups, on the reasons for and consequences of foodborne disease incidents.
- Active surveillance networks allowing rapid dissemination of molecular subtyping information between public health agencies to detect foodborne outbreaks and limit the spread of human disease.
- Traceability of individual animals or crops from (or before) conception or germination to the consumer as an integral part of food supply chain management.
- Provision of high quality, online educational packages to food industry personnel otherwise precluded from access to such courses.
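The kind of calculation behind predictive modelling tools such as those named above can be illustrated with the classic Ratkowsky square-root model, sqrt(mu) = b(T - Tmin). This is a generic sketch: the parameter values below are illustrative, not taken from the Pathogen Modeling Program, Growth Predictor, or ComBase.

```python
def growth_rate(temp_c, b=0.023, t_min=-2.0):
    """Ratkowsky square-root model: sqrt(mu) = b * (T - Tmin).

    Returns the specific growth rate mu (1/h) at temperature temp_c.
    b and t_min are illustrative parameters, not fitted values.
    """
    if temp_c <= t_min:
        return 0.0  # no growth at or below the theoretical minimum
    return (b * (temp_c - t_min)) ** 2

# A typical food safety question: how much faster does an organism grow
# at 12 C (temperature abuse) than at 4 C (proper refrigeration)?
mu_fridge = growth_rate(4.0)
mu_abuse = growth_rate(12.0)
print(round(mu_abuse / mu_fridge, 2))
```

Real predictive microbiology packages layer secondary models like this one over large databases of fitted growth curves; the point here is only the shape of the computation.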
Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval
NASA Astrophysics Data System (ADS)
Chen, Yi-Chen; Lin, Chao-Hung
2016-06-01
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts in an airborne LiDAR point cloud, an image-based approach is proposed to encode both the point clouds from input queries and the 3D models in databases. The main goal of the encoding is that the models in the database and the input point clouds are encoded consistently. First, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Second, geometric features are extracted from the depth images based on the height, edges, and planes of the building. Finally, descriptors are extracted by spatial histograms and used in the 3D model retrieval system. For retrieval, the models are matched via the encoding coefficients of the point clouds and building models. In experiments, a database of about 900,000 3D models collected from the Internet is used to evaluate retrieval. The results of the proposed method show a clear superiority over related methods.
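The encoding steps described above (top-view depth image, then a histogram descriptor) can be sketched as follows. This is a simplified illustration under assumed parameters: the grid size, bin count, and synthetic "roof" point cloud are not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def topview_depth_image(points, grid=16):
    """Rasterize an (N, 3) point cloud into a top-view depth image
    holding the maximum height per cell (the visible roof surface)."""
    xy = points[:, :2]
    z = points[:, 2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    cells = np.floor((xy - lo) / (hi - lo + 1e-9) * grid).astype(int)
    img = np.zeros((grid, grid))
    for (cx, cy), h in zip(cells, z):
        img[cy, cx] = max(img[cy, cx], h)
    return img

def descriptor(img, bins=8):
    """Histogram of cell heights as a compact shape descriptor."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, img.max() + 1e-9))
    return hist / hist.sum()   # normalize so clouds of any density compare

# A fake building roof: a flat slab with one raised section.
pts = rng.random((2000, 3)) * [10.0, 10.0, 0.5]
pts[:500, 2] += 3.0            # raised section of the roof

img = topview_depth_image(pts)
d = descriptor(img)
print(img.shape, d.shape)
```

Retrieval then reduces to comparing such descriptor vectors (e.g. by Euclidean or histogram distance) between the query point cloud and each encoded model in the database.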
Knowledge-based assistance for science visualization and analysis using large distributed databases
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.
1993-01-01
Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) within a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.
Knowledge-based assistance for science visualization and analysis using large distributed databases
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.
1992-01-01
Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on the concept called the Data Hub. With the Data Hub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.
Therapeutic communication in nursing students: A Walker & Avant concept analysis
Abdolrahimi, Mahbobeh; Ghiyasvandian, Shahrzad; Zakerimoghadam, Masoumeh; Ebadi, Abbas
2017-01-01
Background and aim: Therapeutic communication, a fundamental component of nursing, is a complex concept. Furthermore, poor encounters between nursing students and patients demonstrate the need for instruction in therapeutic communication. The aim of this study was to define and clarify this important concept so that it can be included in the nursing curriculum with more emphasis. Methods: A literature search was conducted using keywords such as "nursing student", "patient" and "therapeutic communication", and their Persian equivalents, in Persian databases (including Magiran and Medlib) and English databases (including PubMed, ScienceDirect, Scopus and ProQuest) without time limitation. After extracting concept definitions and determining characteristic features, therapeutic communication in nursing students was defined. Then, sample cases, antecedents, consequences and empirical referents of the concept were determined. Results: After assessing 30 articles, the defining attributes of therapeutic communication were as follows: "an important means of building interpersonal relationships", "a process of information transmission", "an important clinical competency", "a structure with two different sections" and "a significant tool in patient-centered care". Furthermore, theoretical and clinical education and receiving educators' feedback regarding therapeutic communication were considered antecedents of the concept. Improving the physical and psychological health status of patients, as well as the professional development of nursing students, were identified as consequences of the concept. Conclusion: Nursing instructors can use these results to teach and evaluate therapeutic communication in nursing students and train qualified nurses. Also, nursing students may apply the results to improve the quality of their interactions with patients, perform their various duties and meet patients' diverse needs. PMID:28979730
[A relational database to store Poison Centers calls].
Barelli, Alessandro; Biondi, Immacolata; Tafani, Chiara; Pellegrini, Aristide; Soave, Maurizio; Gaspari, Rita; Annetta, Maria Giuseppina
2006-01-01
Italian Poison Centers answer approximately 100,000 calls per year. Potentially, this activity is a huge source of data for toxicovigilance and for syndromic surveillance. During the last decade, surveillance systems for early detection of outbreaks have drawn the attention of public health institutions due to the threat of terrorism and high-profile disease outbreaks. Poisoning surveillance needs the ongoing, systematic collection, analysis, interpretation, and dissemination of harmonised data about poisonings from all Poison Centers for use in public health action to reduce morbidity and mortality and to improve health. The entity-relationship model for a Poison Center relational database is extremely complex and has not been studied in detail. For this reason, data collection is not harmonised across Italian Poison Centers. Entities are recognizable concepts, either concrete or abstract, such as patients and poisons, or events which have relevance to the database, such as calls. Connectivity and cardinality of relationships are complex as well. A one-to-many relationship exists between calls and patients: for one instance of the entity calls, there are zero, one, or many instances of the entity patients. At the same time, a one-to-many relationship exists between patients and poisons: for one instance of the entity patients, there are zero, one, or many instances of the entity poisons. This paper presents a relational model for a Poison Center database which allows harmonised data collection of Poison Center calls.
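The one-to-many chain described in the abstract (one call, many patients; one patient, many poisons) can be sketched as a minimal schema. Table and column names below are illustrative, not the paper's actual model.

```python
# Hypothetical minimal schema for the call/patient/poison model described above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE calls (
    call_id   INTEGER PRIMARY KEY,
    received  TEXT NOT NULL          -- timestamp of the call
);
CREATE TABLE patients (
    patient_id INTEGER PRIMARY KEY,
    call_id    INTEGER NOT NULL REFERENCES calls(call_id),         -- one call, many patients
    age_years  INTEGER
);
CREATE TABLE poisons (
    poison_id  INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patients(patient_id),   -- one patient, many poisons
    substance  TEXT NOT NULL
);
""")

# One call involving two patients, one of whom ingested two substances.
conn.execute("INSERT INTO calls VALUES (1, '2006-01-01T10:00')")
conn.executemany("INSERT INTO patients VALUES (?, ?, ?)",
                 [(1, 1, 4), (2, 1, 30)])
conn.executemany("INSERT INTO poisons VALUES (?, ?, ?)",
                 [(1, 1, 'paracetamol'), (2, 1, 'ibuprofen')])

rows = conn.execute("""
    SELECT c.call_id, COUNT(DISTINCT p.patient_id), COUNT(x.poison_id)
    FROM calls c
    LEFT JOIN patients p ON p.call_id = c.call_id
    LEFT JOIN poisons  x ON x.patient_id = p.patient_id
    GROUP BY c.call_id
""").fetchall()
print(rows)  # each call with its patient and poison counts
```

A shared schema of this shape is what would let several centers pool their call records into one harmonised dataset.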
NASA Astrophysics Data System (ADS)
Fletcher, Alex; Yoo, Terry S.
2004-04-01
Public databases today can be constructed with a wide variety of authoring and management structures. The widespread appeal of Internet search engines suggests that public information be made open and available to common search strategies, making accessible information that would otherwise be hidden by the infrastructure and software interfaces of a traditional database management system. We present the construction and organizational details for managing NOVA, the National Online Volumetric Archive. As an archival effort of the Visible Human Project for supporting medical visualization research, archiving 3D multimodal radiological teaching files, and enhancing medical education with volumetric data, our overall database structure is simplified; archives grow by accruing information, but seldom have to modify, delete, or overwrite stored records. NOVA is being constructed and populated so that it is transparent to the Internet; that is, much of its internal structure is mirrored in HTML allowing internet search engines to investigate, catalog, and link directly to the deep relational structure of the collection index. The key organizational concept for NOVA is the Image Content Group (ICG), an indexing strategy for cataloging incoming data as a set structure rather than by keyword management. These groups are managed through a series of XML files and authoring scripts. We cover the motivation for Image Content Groups, their overall construction, authorship, and management in XML, and the pilot results for creating public data repositories using this strategy.
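As a rough illustration of the XML-managed index entries described above, an ICG record and the kind of HTML mirror that makes it visible to crawlers might look as follows; element, attribute, and file names are invented, not NOVA's actual schema.

```python
# Illustrative sketch of an Image Content Group (ICG) index entry kept as XML.
import xml.etree.ElementTree as ET

icg = ET.Element("icg", id="ICG-042", title="Thoracic CT, multimodal")
for member in ("ct_chest_001.vol", "mr_chest_001.vol"):
    ET.SubElement(icg, "dataset", file=member)

# Mirror the entry as plain HTML links so search engines can index the
# collection structure directly, without going through a database front end.
html_links = "\n".join(
    f'<a href="{d.get("file")}">{d.get("file")}</a>' for d in icg.findall("dataset")
)
print(ET.tostring(icg, encoding="unicode"))
print(html_links)
```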
Valued social roles and measuring mental health recovery: examining the structure of the tapestry.
Hunt, Marcia G; Stein, Catherine H
2012-12-01
The complexity of the concept of mental health recovery often makes it difficult to systematically examine recovery processes and outcomes. The concept of social role is inherent within many acknowledged dimensions of recovery such as community integration, family relationships, and peer support and can deepen our understanding of these dimensions when social roles are operationalized in ways that directly relate to recovery research and practice. This paper reviews seminal social role theories and operationalizes aspects of social roles: role investment, role perception, role loss, and role gain. The paper provides a critical analysis of the ability of social role concepts to inform mental health recovery research and practice. PubMed and PsychInfo databases were used for the literature review. A more thorough examination of social role aspects allows for a richer picture of recovery domains that are structured by the concept social roles. Increasing understanding of consumers' investment and changes in particular roles, perceptions of consumers' role performance relative to peers, and consumers' hopes for the future with regards to the different roles that they occupy could generate tangible, pragmatic approaches in addressing complex recovery domains. This deeper understanding allows a more nuanced approach to recovery-related movements in mental health system transformation.
G-Bean: an ontology-graph based web tool for biomedical literature retrieval.
Wang, James Z; Zhang, Yuanyuan; Dong, Liang; Li, Lin; Srimani, Pradip K; Yu, Philip S
2014-01-01
Currently, most people use NCBI's PubMed to search the MEDLINE database, an important bibliographical information source for life science and biomedical information. However, PubMed has some drawbacks that make it difficult to find relevant publications pertaining to users' individual intentions, especially for non-expert users. To ameliorate the disadvantages of PubMed, we developed G-Bean, a graph based biomedical search engine, to search biomedical articles in MEDLINE database more efficiently. G-Bean addresses PubMed's limitations with three innovations: (1) Parallel document index creation: a multithreaded index creation strategy is employed to generate the document index for G-Bean in parallel; (2) Ontology-graph based query expansion: an ontology graph is constructed by merging four major UMLS (Version 2013AA) vocabularies, MeSH, SNOMEDCT, CSP and AOD, to cover all concepts in National Library of Medicine (NLM) database; a Personalized PageRank algorithm is used to compute concept relevance in this ontology graph and the Term Frequency - Inverse Document Frequency (TF-IDF) weighting scheme is used to re-rank the concepts. The top 500 ranked concepts are selected for expanding the initial query to retrieve more accurate and relevant information; (3) Retrieval and re-ranking of documents based on user's search intention: after the user selects any article from the existing search results, G-Bean analyzes user's selections to determine his/her true search intention and then uses more relevant and more specific terms to retrieve additional related articles. The new articles are presented to the user in the order of their relevance to the already selected articles. Performance evaluation with 106 OHSUMED benchmark queries shows that G-Bean returns more relevant results than PubMed does when using these queries to search the MEDLINE database. 
PubMed could not even return any search result for some OHSUMED queries because it failed to form the appropriate Boolean query statement automatically from the natural language query strings. G-Bean is available at http://bioinformatics.clemson.edu/G-Bean/index.php. G-Bean addresses PubMed's limitations with ontology-graph based query expansion, automatic document indexing, and user search intention discovery. It shows significant advantages in finding relevant articles from the MEDLINE database to meet the information need of the user.
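The ontology-graph expansion step can be illustrated with a toy personalized PageRank over an invented four-concept graph. This is a sketch of the general technique, not G-Bean's implementation (which ranks UMLS concepts, re-weights them with TF-IDF, and keeps the top 500).

```python
# Toy personalized PageRank: random walks restart at the query's seed concepts,
# so concepts well connected to the seeds rank highest. Graph and damping
# factor are made up for the example.

def personalized_pagerank(edges, seeds, d=0.85, iters=50):
    nodes = sorted({n for e in edges for n in e})
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    p = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}  # teleport vector
    r = dict(p)
    for _ in range(iters):
        nxt = {n: (1 - d) * p[n] for n in nodes}
        for n in nodes:
            if out[n]:
                share = d * r[n] / len(out[n])
                for m in out[n]:
                    nxt[m] += share
            else:  # dangling node: redistribute its mass via the teleport vector
                for m in nodes:
                    nxt[m] += d * r[n] * p[m]
        r = nxt
    return r

edges = [("fever", "infection"), ("infection", "antibiotic"),
         ("fever", "antipyretic"), ("antibiotic", "infection")]
scores = personalized_pagerank(edges, seeds={"fever"})
top = sorted(scores, key=scores.get, reverse=True)
print(top)
```

In an expansion setting, the top-ranked concepts (here the ones most strongly linked to the seed "fever") would be appended to the original query.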
Light Detection and Ranging-Based Terrain Navigation: A Concept Exploration
NASA Technical Reports Server (NTRS)
Campbell, Jacob; UijtdeHaag, Maarten; vanGraas, Frank; Young, Steve
2003-01-01
This paper discusses the use of Airborne Light Detection And Ranging (LiDAR) equipment for terrain navigation. Airborne LiDAR is a relatively new technology used primarily by the geo-spatial mapping community to produce highly accurate and dense terrain elevation maps. In this paper, the term LiDAR refers to a scanning laser ranger rigidly mounted to an aircraft, as opposed to an integrated sensor system that consists of a scanning laser ranger integrated with Global Positioning System (GPS) and Inertial Measurement Unit (IMU) data. Data from the laser range scanner and IMU will be integrated with a terrain database to estimate the aircraft position and data from the laser range scanner will be integrated with GPS to estimate the aircraft attitude. LiDAR data was collected using NASA Dryden's DC-8 flying laboratory in Reno, NV and was used to test the proposed terrain navigation system. The results of LiDAR-based terrain navigation shown in this paper indicate that airborne LiDAR is a viable technology enabler for fully autonomous aircraft navigation. The navigation performance is highly dependent on the quality of the terrain databases used for positioning and therefore high-resolution (2 m post-spacing) data was used as the terrain reference.
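The database-matching idea behind terrain-referenced positioning can be sketched in one dimension: slide the measured elevation profile along the stored terrain posts and take the offset with the smallest mean squared difference as the position fix. The numbers below are invented; a real system works in two dimensions and fuses the fix with IMU data.

```python
# 1-D toy of terrain-referenced positioning against a stored elevation database.
import numpy as np

terrain_db = np.array([10., 12., 15., 20., 18., 14., 11., 9., 8., 12.])  # stored posts (m)
true_offset = 3
# Simulated LiDAR profile: the database slice at the true position plus noise.
measured = terrain_db[true_offset:true_offset + 4] \
    + np.random.default_rng(0).normal(0, 0.1, 4)

# Correlate the measurement against every candidate offset in the database.
errors = [np.mean((terrain_db[k:k + 4] - measured) ** 2)
          for k in range(len(terrain_db) - 4 + 1)]
best = int(np.argmin(errors))
print(best)  # estimated along-track offset
```

The sketch also shows why database quality dominates performance: the fix is only as distinctive as the stored terrain posts allow.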
Responsibility among bachelor degree nursing students: A concept analysis.
Ghasemi, Saeed; Ahmadi, Fazlollah; Kazemnejad, Anoshirvan
2018-01-01
Responsibility is an important component of the professional values and core competencies of bachelor degree nursing students and is related to nursing education and professionalization. It is important for providing safe, high-quality care to clients and for the present and future performance of students. However, there is no clear, operational definition of this concept for bachelor degree nursing students, despite extensive content and debate about the definitions, attributes, domains and boundaries of responsibility in the nursing and non-nursing literature. The aim was to examine the concept of responsibility among bachelor degree nursing students using the evolutionary approach to concept analysis. A total of 75 articles published between 1990 and 2016 and related to the concept of responsibility were selected from seven databases and considered for concept analysis based on Rogers' evolutionary approach. Ethical considerations: Throughout all stages of data collection, analysis and reporting, accuracy and bailment were respected. Responsibility is a procedural, spectral, dynamic and complex concept. The attributes of the concept are smart thinking, appropriate managerial behaviours, appropriate communicational behaviours, and situational self-mandatory and task-orientation behaviours. Personal, educational and professional factors lead to the emergence of responsible behaviours among bachelor degree nursing students. The emergence of such behaviours facilitates the learning and education process, sustains the life of the nursing profession and promotes client and community health. Responsibility has some effects on nursing students, and the concept changed over the period 1990-2016. There are similarities and differences in the elements of this concept between nursing and other educational disciplines.
Conclusion: The analysis of this concept can help to develop educational or managerial theories, design instruments for better identification and evaluation of responsible behaviours among bachelor degree nursing students, develop strategies for enhancing responsibility, and improve the safety and quality of nursing care in the community and healthcare system.
BDVC (Bimodal Database of Violent Content): A database of violent audio and video
NASA Astrophysics Data System (ADS)
Rivera Martínez, Jose Luis; Mijes Cruz, Mario Humberto; Rodríguez Vázqu, Manuel Antonio; Rodríguez Espejo, Luis; Montoya Obeso, Abraham; García Vázquez, Mireya Saraí; Ramírez Acosta, Alejandro Álvaro
2017-09-01
Nowadays there is a trend towards the use of unimodal databases for multimedia content description, organization and retrieval applications of a single type of content like text, voice or images; bimodal databases, by contrast, allow two different types of content, such as audio-video or image-text, to be associated semantically. The generation of a bimodal audio-video database implies the creation of a connection between the multimedia contents through the semantic relation that associates the actions in both types of information. This paper describes in detail the characteristics and methodology used for the creation of the bimodal database of violent content; the semantic relationship is established by the proposed concepts that describe the audiovisual information. The use of bimodal databases in applications related to audiovisual content processing increases semantic performance if and only if those applications process both types of content. This bimodal database contains 580 annotated audiovisual segments totalling 28 minutes, divided into 41 classes. Bimodal databases are a tool in the generation of applications for the semantic web.
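A minimal sketch of what one annotated record in such a bimodal database might hold; the field names and sample values are invented for illustration, but the shared concept label is what semantically ties the audio track to the video track.

```python
# Illustrative record for one annotated bimodal (audio + video) segment.
from dataclasses import dataclass

@dataclass
class BimodalSegment:
    audio_file: str
    video_file: str
    concept: str      # the shared semantic label, one of the annotation classes
    start_s: float
    end_s: float

seg = BimodalSegment("clip007.wav", "clip007.mp4", "scream", 12.0, 14.9)
print(seg.concept, seg.end_s - seg.start_s)
```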
Li, Yuanfang; Zhou, Zhiwei
2016-02-01
Precision medicine is a new medical concept and model based on personalized medicine, the rapid progress of genome sequencing technology, and the cross-application of bioinformatics and big-data science. Precision medicine improves the diagnosis and treatment of gastric cancer through deeper analysis of its characteristics, pathogenesis and other core issues. Clinical cancer databases are important for promoting the development of precision medicine, so close attention must be paid to their construction and management. The clinical database of Sun Yat-sen University Cancer Center is composed of a medical record database, a blood specimen bank, a tissue bank and a medical imaging database. To ensure the good quality of the database, its design and management follow a strict standard operating procedure (SOP) model. Data sharing is an important way to advance medical research in the era of medical big data. The construction and management of clinical databases must likewise be strengthened and innovated.
Physical Samples Linked Data in Action
NASA Astrophysics Data System (ADS)
Ji, P.; Arko, R. A.; Lehnert, K.; Bristol, S.
2017-12-01
Most data and metadata related to physical samples currently reside in isolated relational databases driven by diverse data models. The challenge of sharing, interchanging and integrating data across these different relational databases motivated us to publish Linked Open Data for collections of physical samples, using Semantic Web technologies including the Resource Description Framework (RDF), the SPARQL query language, and the Web Ontology Language (OWL). In the last few years, we have released four knowledge graphs centred on physical samples: the System for Earth Sample Registration (SESAR), the USGS National Geochemical Database (NGDC), the Ocean Biogeographic Information System (OBIS), and the EarthChem Database. Currently the four knowledge graphs contain over 12 million facts (triples) about objects of interest to the geoscience domain. Choosing appropriate domain ontologies for representing the context of the data is at the core of the work. The GeoLink ontology, developed by the EarthCube GeoLink project, was used as the top level to represent common concepts like person, organization, and cruise. The physical sample ontology developed by the Interdisciplinary Earth Data Alliance (IEDA) and the Darwin Core vocabulary were used as the second level to describe details of geological samples and biological diversity. We also focused on finding and building the best tool chains to support the whole life cycle of publishing our linked data, including information retrieval, linked data browsing and data visualization. Currently, Morph, Virtuoso Server, LodView, LodLive, and YASGUI are employed for converting, storing, representing, and querying data in a knowledge base (RDF triplestore). Persistent digital identifiers are another main point of focus.
Open Researcher & Contributor IDs (ORCIDs), International Geo Sample Numbers (IGSNs), Global Research Identifier Database (GRID) and other persistent identifiers were used to link different resources from various graphs with person, sample, organization, cruise, etc. This work is supported by the EarthCube "GeoLink" project (NSF# ICER14-40221 and others) and the "USGS-IEDA Partnership to Support a Data Lifecycle Framework and Tools" project (USGS# G13AC00381).
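A conceptual sketch of how persistent identifiers knit the graphs together, using plain Python tuples in place of an actual RDF triplestore; all URIs and predicate names below are invented examples, not identifiers from the real graphs.

```python
# Triples as plain (subject, predicate, object) tuples; persistent IDs serve
# as the shared subjects/objects that let separate graphs be joined.
ORCID = "https://orcid.org/0000-0000-0000-0000"      # a researcher (hypothetical)
IGSN  = "https://igsn.org/XYZ123"                    # a physical sample (hypothetical)
GRID  = "https://grid.ac/institutes/grid.000000.0"   # an organization (hypothetical)

triples = {
    (IGSN,  "collectedBy",    ORCID),
    (ORCID, "affiliatedWith", GRID),
    (IGSN,  "registeredIn",   "SESAR"),
}

# "Which organizations are linked to this sample through its collector?"
# (the tuple-comprehension analogue of a two-hop SPARQL pattern)
orgs = {o for s, p, o in triples if p == "affiliatedWith"
        and (IGSN, "collectedBy", s) in triples}
print(orgs)
```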
The photo-colorimetric space as a medium for the representation of spatial data
NASA Technical Reports Server (NTRS)
Kraiss, K. Friedrich; Widdel, Heino
1989-01-01
Spatial displays and instruments are usually used in the context of vehicle guidance, but it is hard to find applicable spatial formats in information retrieval and interaction systems. Human interaction with spatial data structures and the applicability of the CIE color space to improve dialogue transparency are discussed. A proposal is made to use the color space to code spatially represented data. The semantic distances of the categories of dialogue structures or, more generally, of database structures, are determined empirically. Subsequently the distances are transformed and mapped into the color space. The concept is demonstrated for a car diagnosis system, where the category cooling system could, e.g., be coded in blue and the category ignition system in red, thereby achieving a correspondence between color distances and semantic distances. Subcategories can be coded as luminance differences within the color space.
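The coding idea can be sketched as a mapping from semantic positions (here a made-up one-dimensional embedding of the empirically determined distances) to hues, so that color distances mirror semantic distances; as in the abstract's example, the cooling system lands in blue and the ignition system toward red.

```python
# Map semantic positions to hues so that nearby meanings get nearby colors.
# The categories and their positions are invented for illustration.
import colorsys

semantic_pos = {"cooling system": 0.0, "radiator": 0.1, "ignition system": 0.8}

def to_rgb(pos):
    # Spread positions over part of the hue circle, from blue (pos 0) toward red.
    return colorsys.hsv_to_rgb(0.66 * (1 - pos), 1.0, 1.0)

colors = {k: to_rgb(v) for k, v in semantic_pos.items()}
print(colors["cooling system"], colors["ignition system"])
```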
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2014-01-01
We present a form of automaton, referred to as data automata, suited for monitoring sequences of data-carrying events, for example emitted by an executing software system. This form of automata allows states to be parameterized with data, forming named records, which are stored in an efficiently indexed data structure, a form of database. This very explicit approach differs from other automaton-based monitoring approaches. Data automata are also characterized by allowing transition conditions to refer to other parameterized states, and by allowing transition sequences. The presented automaton concept is inspired by rule-based systems, especially the Rete algorithm, which is one of the well-established algorithms for executing rule-based systems. We present an optimized external DSL for data automata, as well as a comparable unoptimized internal DSL (API) in the Scala programming language, in order to compare the two solutions. An evaluation compares these two solutions to several other monitoring systems.
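A rough sketch of the data-automaton idea in Python rather than the paper's Scala DSLs: parameterized states stored as indexed named records, with transition conditions that consult other parameterized states. The lock-usage property monitored here is a made-up example, not one from the paper.

```python
# Minimal event monitor in the spirit of data automata: the fact store plays
# the role of the indexed database of parameterized states.
from collections import defaultdict

class Monitor:
    def __init__(self):
        self.facts = defaultdict(set)   # state name -> set of data parameters
        self.errors = []

    def event(self, name, datum):
        if name == "acquire":
            if datum in self.facts["Held"]:        # condition on another state
                self.errors.append(f"double acquire of {datum}")
            self.facts["Held"].add(datum)          # record parameterized state
        elif name == "release":
            if datum not in self.facts["Held"]:
                self.errors.append(f"release without acquire of {datum}")
            self.facts["Held"].discard(datum)

m = Monitor()
for ev in [("acquire", "l1"), ("acquire", "l1"), ("release", "l2")]:
    m.event(*ev)
print(m.errors)
```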
NASA Technical Reports Server (NTRS)
Salikuddin, M.; Martens, S.; Shin, H.; Majjigi, R. K.; Krejsa, Gene (Technical Monitor)
2002-01-01
The objective of this task was to develop a design methodology and noise reduction concepts for high bypass exhaust systems which could be applied to both existing production and new advanced engine designs. Special emphasis was given to engine cycles with bypass ratios in the range of 4:1 to 7:1, where jet mixing noise was a primary noise source at full power takeoff conditions. The goal of this effort was to develop the design methodology for mixed-flow exhaust systems and other novel noise reduction concepts that would yield 3 EPNdB noise reduction relative to 1992 baseline technology. Two multi-lobed mixers were designed: a 22-lobed axisymmetric mixer and a 21-lobed mixer with a unique lobe. These mixers, along with a confluent mixer, were tested with several fan nozzles of different lengths, with and without acoustic treatment, in GEAE's Cell 41 under the current subtask (Subtask C). In addition to the acoustic and LDA tests for the model mixer exhaust systems, a semi-empirical noise prediction method for mixer exhaust systems was developed. Effort was also made to use flowfield data for noise prediction by utilizing the MGB code. In general, this study established an aero and acoustic diagnostic database to calibrate and refine current aero and acoustic prediction tools.
NASA Astrophysics Data System (ADS)
Ivanov, Stanislav; Kamzolkin, Vladimir; Konilov, Aleksandr; Aleshin, Igor
2014-05-01
There are many methods of assessing the conditions of rock formation based on determining the composition of the constituent minerals. Our objective was to create a universal tool for processing the results of mineral chemical analyses and solving geothermobarometry problems by creating a database of existing sensors and providing a user-friendly standard interface. Similar computer-assisted tools based upon large collections of sensors (geothermometers and geobarometers) are known; one example is the TPF project (Konilov A.N., 1999), a text-based sensor collection tool written in PASCAL. That application contained more than 350 different sensors and has been used widely in petrochemical studies (see A.N. Konilov, A.A. Grafchikov, V.I. Fonarev 2010 for a review). Our prototype uses the TPF project concept and is designed with modern application development techniques, which allows better flexibility. The main components of the designed system are three connected datasets: the sensor collection (geothermometers, geobarometers, oxygen geobarometers, etc.), petrochemical data, and modeling results. All data are maintained by special management and visualization tools and reside in an SQL database. System utilities allow the user to import and export data in various file formats, edit records and plot graphs. The sensor database contains an up-to-date collection of known methods, and new sensors may be added by the user; the measurement database is filled in by the researcher. The user-friendly interface provides access to all available data and sensors, automates routine work, reduces the risk of common user mistakes and simplifies information exchange between research groups. We used the prototype to evaluate peak pressure during the formation of garnet-amphibolite apoeclogites, gneisses and schists of the Blybsky metamorphic complex of the Front Range of the Northern Caucasus. In particular, our estimate of the formation pressure range (18 ± 4 kbar) agrees with independent research results.
The reported study was partially supported by RFBR, research project No. 14-05-00615.
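Many exchange geothermometers in such sensor collections share the generic form T = (A + B·P)/(C − R·ln K). The calibration constants below are entirely hypothetical, chosen only to show how one sensor entry in such a database might be evaluated against a measured distribution coefficient.

```python
# Generic exchange-thermometer shape with hypothetical calibration constants;
# a real sensor entry would carry its own calibration and applicability range.
R = 8.314  # gas constant, J/(mol*K)

def exchange_thermometer(lnK, P_bar, A=3000.0, B=0.01, C=1.5):
    """Return temperature in kelvin for distribution coefficient K at pressure P (bar)."""
    return (A + B * P_bar) / (C - R * lnK)

T = exchange_thermometer(lnK=-0.2, P_bar=10000.0)
print(round(T, 1))  # estimated temperature, K
```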
Type-Based Access Control in Data-Centric Systems
NASA Astrophysics Data System (ADS)
Caires, Luís; Pérez, Jorge A.; Seco, João Costa; Vieira, Hugo Torres; Ferrão, Lúcio
Data-centric multi-user systems, such as web applications, require flexible yet fine-grained data security mechanisms. Such mechanisms are usually enforced by a specially crafted security layer, which adds extra complexity and often leads to error prone coding, easily causing severe security breaches. In this paper, we introduce a programming language approach for enforcing access control policies to data in data-centric programs by static typing. Our development is based on the general concept of refinement type, but extended so as to address realistic and challenging scenarios of permission-based data security, in which policies dynamically depend on the database state, and flexible combinations of column- and row-level protection of data are necessary. We state and prove soundness and safety of our type system, stating that well-typed programs never break the declared data access control policies.
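The flavor of state-dependent, row- and column-level protection can be sketched as follows. The policy, table, and names are invented, and this runtime check is only an analogy for what the paper enforces statically through refinement types.

```python
# Toy access-control check: the decision depends on the current database state
# (the assignments relation) and combines row- and column-level protection.
def visible_columns(row, user, db):
    # Row-level rule: the full record is visible to its owner, or to any
    # clinician currently assigned to that owner in the database state.
    if user == row["owner"] or (user, row["owner"]) in db["assignments"]:
        return dict(row)
    # Column-level fallback: everyone else sees only non-sensitive columns.
    return {k: v for k, v in row.items() if k in {"owner", "ward"}}

db = {"assignments": {("dr_a", "alice")}}
row = {"owner": "alice", "ward": "W1", "diagnosis": "flu"}
full = visible_columns(row, "dr_a", db)
restricted = visible_columns(row, "dr_b", db)
print(sorted(full), sorted(restricted))
```

The point of the paper is precisely to move such checks out of a hand-written security layer and have the type system reject programs that could ever reach a disallowed access.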
Knowledge bases built on web languages from the point of view of predicate logics
NASA Astrophysics Data System (ADS)
Vajgl, Marek; Lukasová, Alena; Žáček, Martin
2017-06-01
The article evaluates formal systems created on the basis of web (ontology/concept) languages that simplify the usual approach to knowledge representation within FOPL while sharing its expressiveness, semantic correctness, completeness and decidability. Evaluation of two of them, one based on description logic and one built on RDF model principles, identifies some shortcomings of these formal systems and, where possible, presents corrections. The possibility of building an inference system capable of obtaining further new knowledge over given knowledge bases, including those describing domains through giant linked domain databases, has been taken into account. Moreover, the directions towards simplifying the FOPL language discussed here have been evaluated from the point of view of the possibility of becoming a web language fulfilling the idea of the semantic web.
Real-time speech gisting for ATC applications
NASA Astrophysics Data System (ADS)
Dunkelberger, Kirk A.
1995-06-01
Command and control within the ATC environment remains primarily voice-based. Hence, automatic real time, speaker independent, continuous speech recognition (CSR) has many obvious applications and implied benefits to the ATC community: automated target tagging, aircraft compliance monitoring, controller training, automatic alarm disabling, display management, and many others. However, while current state-of-the-art CSR systems provide upwards of 98% word accuracy in laboratory environments, recent low-intrusion experiments in ATCT environments demonstrated less than 70% word accuracy in spite of significant investments in recognizer tuning. Acoustic channel irregularities and controller/pilot grammar varieties impact current CSR algorithms at their weakest points. It will be shown herein, however, that real time context- and environment-sensitive gisting can provide key command phrase recognition rates of greater than 95% using the same low-intrusion approach. The combination of real time inexact syntactic pattern recognition techniques and a tight integration of CSR, gisting, and ATC database accessor system components is the key to these high phrase recognition rates. A system concept for real time gisting in the ATC context is presented herein. After establishing an application context, the discussion presents a minimal CSR technology context, then focuses on the gisting mechanism, desirable interfaces into the ATCT database environment, and data and control flow within the prototype system. Results of recent tests for a subset of the functionality are presented together with suggestions for further research.
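The inexact, grammar-driven spotting of key command phrases can be caricatured with a small pattern applied to recognizer output; the grammar fragment and callsign format below are invented for illustration and are far simpler than an operational ATC phraseology grammar.

```python
# Toy command-phrase spotter: extract the gist (callsign, verb, value) from a
# noisy recognizer transcript instead of requiring a full word-perfect parse.
import re

CMD = re.compile(
    r"(?P<callsign>[a-z]+ \d+)\s+"
    r"(?P<verb>climb|descend|turn (?:left|right))\s+"
    r"(?:and maintain |heading )?(?P<value>\d+)")

def gist(utterance):
    m = CMD.search(utterance.lower())
    return m.groupdict() if m else None

g = gist("Delta 123 climb and maintain 250")
print(g)
```

Because only the command-bearing fragment must match, such a spotter can tolerate word errors elsewhere in the utterance, which is the practical point of gisting over full transcription.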
NASA Astrophysics Data System (ADS)
Fuchs, Christian; Poulenard, Sylvain; Perlot, Nicolas; Riedi, Jerome; Perdigues, Josep
2017-02-01
Optical satellite communications play an increasingly important role in a number of space applications. However, if the system concept includes optical links to the surface of the Earth, the limited availability due to clouds and other atmospheric impacts needs to be considered to give a reliable estimate of the system performance. An OGS network is required to increase the availability to acceptable figures. In order to realistically estimate the performance and achievable throughput in various scenarios, a simulation tool has been developed under ESA contract. The tool is based on a database of 5 years of cloud data with global coverage and can thus easily simulate different optical ground station network topologies for LEO- and GEO-to-ground links. Further parameters, such as limited availability due to sun blinding and atmospheric turbulence, are considered as well. This paper gives an overview of the simulation tool, the cloud database, and the modelling behind the simulation scheme. Several scenarios have been investigated: LEO-to-ground links, GEO feeder links, and GEO relay links. The key results of the optical ground station network optimization and throughput estimations will be presented. The implications of key technical parameters, such as the memory size aboard the satellite, will be discussed. Finally, potential system designs for LEO- and GEO-systems will be presented.
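The benefit of an OGS network can be seen in a back-of-envelope availability model: the link is lost only when every site is clouded out. The per-site cloud fractions below are invented, and the independence assumption is exactly what real, spatially correlated weather violates, which is why the tool uses five years of actual cloud data instead.

```python
# Naive OGS-network availability under the (unrealistic) assumption of
# independent cloud cover at each site.
from math import prod

cloud_prob = [0.5, 0.6, 0.4]          # per-site cloud cover fraction (invented)
availability = 1 - prod(cloud_prob)   # link works if at least one site is clear
print(round(availability, 2))
```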
NASA Technical Reports Server (NTRS)
McMillin, Naomi; Allen, Jerry; Erickson, Gary; Campbell, Jim; Mann, Mike; Kubiatko, Paul; Yingling, David; Mason, Charlie
1999-01-01
The objective was to experimentally evaluate the longitudinal and lateral-directional stability and control characteristics of the Reference H configuration at supersonic and transonic speeds. A series of conventional and alternate control devices were also evaluated at supersonic and transonic speeds. A database on the conventional and alternate control devices was to be created for use in the HSR program.
Reconsidering the conceptualization of nursing workload: literature review.
Morris, Roisin; MacNeela, Padraig; Scott, Anne; Treacy, Pearl; Hyde, Abbey
2007-03-01
This paper reports a literature review that aimed to analyse the way in which nursing intensity and patient dependency have been considered to be conceptually similar to nursing workload, and to propose a model to show how these concepts actually differ in both theoretical and practical terms. The literature on nursing workload considers the concepts of patient 'dependency' and nursing 'intensity' in the realm of nursing workload. These concepts differ by definition but are used to measure the same phenomenon, i.e. nursing workload. The literature search was undertaken in 2004 using electronic databases, reference lists and other available literature. Papers were sourced from the Medline, Psychlit, CINAHL and Cochrane databases and through the general search engine Google. The keywords focussed on nursing workload, nursing intensity and patient dependency. Nursing work and workload concepts and labels are defined and measured in different and often contradictory ways. It is vitally important to understand these differences when using such conceptualizations to measure nursing workload. A preliminary model is put forward to clarify the relationships between nursing workload concepts. In presenting a preliminary model of nursing workload, it is hoped that nursing workload might be better understood so that it becomes more visible and recognizable. Increasing the visibility of nursing workload should have a positive impact on nursing workload management and on the provision of patient care.
Burchill, C; Roos, L L; Fergusson, P; Jebamani, L; Turner, K; Dueck, S
2000-01-01
Comprehensive data available in the Canadian province of Manitoba since 1970 have aided study of the interaction between population health, health care utilization, and structural features of the health care system. Given a complex linked database and many ongoing projects, better organization of available epidemiological, institutional, and technical information was needed. The Manitoba Centre for Health Policy and Evaluation wished to develop a knowledge repository to handle data, document research methods, and facilitate both internal communication and collaboration with other sites. This evolving knowledge repository consists of both public and internal (restricted access) pages on the World Wide Web (WWW). Information can be accessed using an indexed logical format or queried to allow entry at user-defined points. The main topics are: Concept Dictionary, Research Definitions, Meta-Index, and Glossary. The Concept Dictionary operationalizes concepts used in health research using administrative data, outlining the creation of complex variables. Research Definitions specify the codes for common surgical procedures, tests, and diagnoses. The Meta-Index organizes concepts and definitions according to the Medical Sub-Heading (MeSH) system developed by the National Library of Medicine. The Glossary facilitates navigation through the research terms and abbreviations in the knowledge repository. An Education Resources heading presents a web-based graduate course using substantial amounts of material in the Concept Dictionary, a lecture in the Epidemiology Supercourse, and material for Manitoba's Regional Health Authorities. Confidential information (including Data Dictionaries) is available on the Centre's internal website. Use of the public pages has increased dramatically since January 1998, with almost 6,000 page hits from 250 different hosts in May 1999.
More recently, the number of page hits has averaged around 4,000 per month, while the number of unique hosts has climbed to around 400. This knowledge repository promotes standardization and increases efficiency by placing concepts and associated programming in the Centre's collective memory. Collaboration and project management are facilitated.
Burchill, Charles; Fergusson, Patricia; Jebamani, Laurel; Turner, Ken; Dueck, Stephen
2000-01-01
Background Comprehensive data available in the Canadian province of Manitoba since 1970 have aided study of the interaction between population health, health care utilization, and structural features of the health care system. Given a complex linked database and many ongoing projects, better organization of available epidemiological, institutional, and technical information was needed. Objective The Manitoba Centre for Health Policy and Evaluation wished to develop a knowledge repository to handle data, document research methods, and facilitate both internal communication and collaboration with other sites. Methods This evolving knowledge repository consists of both public and internal (restricted access) pages on the World Wide Web (WWW). Information can be accessed using an indexed logical format or queried to allow entry at user-defined points. The main topics are: Concept Dictionary, Research Definitions, Meta-Index, and Glossary. The Concept Dictionary operationalizes concepts used in health research using administrative data, outlining the creation of complex variables. Research Definitions specify the codes for common surgical procedures, tests, and diagnoses. The Meta-Index organizes concepts and definitions according to the Medical Sub-Heading (MeSH) system developed by the National Library of Medicine. The Glossary facilitates navigation through the research terms and abbreviations in the knowledge repository. An Education Resources heading presents a web-based graduate course using substantial amounts of material in the Concept Dictionary, a lecture in the Epidemiology Supercourse, and material for Manitoba's Regional Health Authorities. Confidential information (including Data Dictionaries) is available on the Centre's internal website. Results Use of the public pages has increased dramatically since January 1998, with almost 6,000 page hits from 250 different hosts in May 1999. 
More recently, the number of page hits has averaged around 4,000 per month, while the number of unique hosts has climbed to around 400. Conclusions This knowledge repository promotes standardization and increases efficiency by placing concepts and associated programming in the Centre's collective memory. Collaboration and project management are facilitated. PMID:11720929
Nadkarni, P M
1997-08-01
Concept Locator (CL) is a client-server application that accesses a Sybase relational database server containing a subset of the UMLS Metathesaurus for the purpose of retrieval of concepts corresponding to one or more query expressions supplied to it. CL's query grammar permits complex Boolean expressions, wildcard patterns, and parenthesized (nested) subexpressions. CL translates the query expressions supplied to it into one or more SQL statements that actually perform the retrieval. The generated SQL is optimized by the client to take advantage of the strengths of the server's query optimizer, and sidesteps its weaknesses, so that execution is reasonably efficient.
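The query-to-SQL translation step can be sketched as follows. The table and column names (`concept_names`, `cui`, `name`) are hypothetical placeholders, since the abstract does not describe CL's actual Sybase schema; splitting a Boolean AND into an INTERSECT of per-term SELECTs is one plausible way to hand each pattern to the server's query optimizer separately, in the spirit of the optimization the abstract mentions.

```python
# Sketch: translating a simple concept-retrieval expression into SQL, in the
# spirit of Concept Locator. Table/column names are invented placeholders.

def term_to_sql(term):
    """One query term -> a SELECT over a hypothetical concept-name table.
    A trailing '*' wildcard becomes a SQL LIKE pattern."""
    if term.endswith("*"):
        cond = "name LIKE '%s%%'" % term[:-1]
    else:
        cond = "name = '%s'" % term
    return "SELECT cui FROM concept_names WHERE " + cond

def and_query(terms):
    """Boolean AND of terms, expressed as an INTERSECT of per-term SELECTs,
    so each pattern can be optimized independently by the server."""
    return "\nINTERSECT\n".join(term_to_sql(t) for t in terms)

print(and_query(["myocardial*", "infarction"]))
```

A production translator would of course also handle OR, NOT, nested parentheses, and parameter escaping; this shows only the core term-to-predicate mapping.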
Perception: a concept analysis.
McDonald, Susan M
2012-02-01
Concept analysis methodology by Walker and Avant (2005) was used to define, describe, and delimit the concept of perception. Nursing literature in the Medline database was searched for definitions of "perception." Definitions, uses, and defining attributes of perception were identified; model and contrary cases were developed; and antecedents, consequences, and empirical referents were determined. An operational definition for the concept was developed. Nurses need to be cognizant of how perceptual differences impact the delivery of nursing care. In research, a mixed methodology approach may yield a richer description of the phenomenon and provide useful information for clinical practice. © 2011, The Author. International Journal of Nursing Knowledge © 2011, NANDA International.
Foley, Michele; Beckley, Jacqueline; Ashman, Hollis; Moskowitz, Howard R
2009-06-01
We introduce a new type of study that combines self-profile of behaviors and attitudes regarding food together with responses to structured, systematically varied concepts about the food. We deal here with the responses of teens, for 28 different foods and beverages. The study creates a database that reveals how a person responds to different types of messaging about the food. We show how to develop the database for many different foods, from which one can compare foods to each other, or compare the performance of messages within a specific food.
NASA Technical Reports Server (NTRS)
Young, Steve; UijtdeHaag, Maarten; Sayre, Jonathon
2003-01-01
Synthetic Vision Systems (SVS) provide pilots with displays of stored geo-spatial data representing terrain, obstacles, and cultural features. As comprehensive validation is impractical, these databases typically have no quantifiable level of integrity. Further, updates to the databases may not be provided as changes occur. These issues limit the certification level and constrain the operational context of SVS for civil aviation. Previous work demonstrated the feasibility of using a real-time monitor to bound the integrity of Digital Elevation Models (DEMs) by using radar altimeter measurements during flight. This paper describes an extension of this concept to include X-band Weather Radar (WxR) measurements. This enables the monitor to detect additional classes of DEM errors and to reduce the exposure time associated with integrity threats. Feature extraction techniques are used along with a statistical assessment of similarity measures between the sensed and stored features that are detected. Recent flight-testing in the area around the Juneau, Alaska Airport (JNU) has resulted in a comprehensive set of sensor data that is being used to assess the feasibility of the proposed monitor technology. Initial results of this assessment are presented.
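The similarity-assessment step can be sketched: compare a sensed terrain profile (e.g. derived from altimeter returns along the flight track) against the profile predicted from the stored DEM, and alert when agreement drops. Pearson correlation is used here as one plausible similarity statistic; the paper's actual measures for WxR feature matching are not specified in the abstract, and the elevation values below are invented.

```python
# Sketch: one plausible sensed-vs-stored similarity statistic for a DEM
# integrity monitor (Pearson correlation along a short track segment).
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

stored = [120.0, 135.0, 150.0, 180.0, 160.0]   # DEM elevations along track (m)
sensed = [122.0, 133.0, 155.0, 178.0, 161.0]   # sensor-derived elevations (m)
print(pearson(stored, sensed) > 0.95)  # high agreement -> no alert here
```

A real monitor would combine several such statistics and account for sensor noise models before declaring an integrity threat.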
Supporting reputation based trust management enhancing security layer for cloud service models
NASA Astrophysics Data System (ADS)
Karthiga, R.; Vanitha, M.; Sumaiya Thaseen, I.; Mangaiyarkarasi, R.
2017-11-01
In the existing system, the trust between cloud providers and consumers is inadequate for establishing service level agreements, even though consumer feedback is a sound basis for assessing the overall reliability of cloud services. Investigators have recognized that trust can be managed, and security provided, on the basis of feedback collected from participants. This work presents a face recognition system that identifies users effectively: the user's face is captured at registration time and stored in a database, and an image comparison algorithm then compares this stored original image with a sample image presented at login; if the two images match, the user is authenticated. When confidential data are outsourced to the cloud, data owners become concerned about the confidentiality of their data there. Encrypting the data before outsourcing is regarded as the principal means of keeping user data private from the cloud server, so the AES algorithm is used to keep the data secure. Symmetric-key algorithms use a shared key, so keeping the data secret requires keeping this key secret: only a user holding the key can decrypt the data.
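The shared-key idea can be illustrated with a stdlib-only toy. To be clear: this is NOT AES and is not secure; a real deployment would use AES through a vetted library. The sketch only shows the property the abstract relies on, namely that the same secret both encrypts and decrypts, so secrecy of the data reduces to secrecy of the key.

```python
# Toy illustration of the symmetric-key concept: the same shared secret both
# encrypts and decrypts. NOT AES, not secure -- a hash-derived keystream XOR
# is used only to keep this sketch dependency-free.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key, nonce = b"shared-secret", b"n0"
ct = xor_crypt(key, nonce, b"patient record 42")
assert xor_crypt(key, nonce, ct) == b"patient record 42"       # same key decrypts
assert xor_crypt(b"wrong key", nonce, ct) != b"patient record 42"
```

Because encryption and decryption are the same XOR operation, anyone holding the key recovers the plaintext and anyone without it gets noise, which is exactly the trust boundary the abstract describes between the data owner and the cloud server.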
Semantic memory: a feature-based analysis and new norms for Italian.
Montefinese, Maria; Ambrosini, Ettore; Fairfield, Beth; Mammarella, Nicola
2013-06-01
Semantic norms for properties produced by native speakers are valuable tools for researchers interested in the structure of semantic memory and in category-specific semantic deficits in individuals following brain damage. The aims of this study were threefold. First, we sought to extend existing semantic norms by adopting an empirical approach to category (Exp. 1) and concept (Exp. 2) selection, in order to obtain a more representative set of semantic memory features. Second, we extensively outlined a new set of semantic production norms collected from Italian native speakers for 120 artifactual and natural basic-level concepts, using numerous measures and statistics following a feature-listing task (Exp. 3b). Finally, we aimed to create a new publicly accessible database, since only a few existing databases are publicly available online.
Benefits of an Object-oriented Database Representation for Controlled Medical Terminologies
Gu, Huanying; Halper, Michael; Geller, James; Perl, Yehoshua
1999-01-01
Objective: Controlled medical terminologies (CMTs) have been recognized as important tools in a variety of medical informatics applications, ranging from patient-record systems to decision-support systems. Controlled medical terminologies are typically organized in semantic network structures consisting of tens to hundreds of thousands of concepts. This overwhelming size and complexity can be a serious barrier to their maintenance and widespread utilization. The authors propose the use of object-oriented databases to address the problems posed by the extensive scope and high complexity of most CMTs for maintenance personnel and general users alike. Design: The authors present a methodology that allows an existing CMT, modeled as a semantic network, to be represented as an equivalent object-oriented database. Such a representation is called an object-oriented health care terminology repository (OOHTR). Results: The major benefit of an OOHTR is its schema, which provides an important layer of structural abstraction. Using the high-level view of a CMT afforded by the schema, one can gain insight into the CMT's overarching organization and begin to better comprehend it. The authors' methodology is applied to the Medical Entities Dictionary (MED), a large CMT developed at Columbia-Presbyterian Medical Center. Examples of how the OOHTR schema facilitated updating, correcting, and improving the design of the MED are presented. Conclusion: The OOHTR schema can serve as an important abstraction mechanism for enhancing comprehension of a large CMT, and thus promotes its usability. PMID:10428002
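The structural abstraction the schema provides can be sketched in object-oriented terms: concepts that share the same attributes and relationship types are grouped into one class, so a terminology of tens of thousands of concepts is summarized by a schema of a few dozen classes. The class and attribute names below are invented for illustration and are not taken from the MED.

```python
# Sketch of the OOHTR idea: schema-level classes summarize many concept
# instances. Class/attribute names are hypothetical, not actual MED content.

class DrugConcept:
    """Every drug concept shares these slots (the schema-level view)."""
    def __init__(self, name, allergy_class, dose_form):
        self.name = name
        self.allergy_class = allergy_class   # relationship to another class
        self.dose_form = dose_form

class LabTestConcept:
    def __init__(self, name, specimen, units):
        self.name = name
        self.specimen = specimen
        self.units = units

# Two of many thousands of instances; the two classes, not the instances,
# give the high-level view of the terminology's organization.
aspirin = DrugConcept("aspirin", allergy_class="salicylates", dose_form="tablet")
glucose = LabTestConcept("serum glucose", specimen="blood", units="mg/dL")
print(type(aspirin).__name__, type(glucose).__name__)
```

Reading the schema alone (here, two class definitions) tells a maintainer what kinds of concepts exist and how they relate, without traversing the full semantic network.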
Consensus and conflict cards for metabolic pathway databases
2013-01-01
Background The metabolic network of H. sapiens and many other organisms is described in multiple pathway databases. The level of agreement between these descriptions, however, has proven to be low. We can use these different descriptions to our advantage by identifying conflicting information and combining their knowledge into a single, more accurate, and more complete description. This task is, however, far from trivial. Results We introduce the concept of Consensus and Conflict Cards (C2Cards) to provide concise overviews of what the databases do or do not agree on. Each card is centered at a single gene, EC number or reaction. These three complementary perspectives make it possible to distinguish disagreements on the underlying biology of a metabolic process from differences that can be explained by different decisions on how and in what detail to represent knowledge. As a proof-of-concept, we implemented C2CardsHuman, as a web application http://www.molgenis.org/c2cards, covering five human pathway databases. Conclusions C2Cards can contribute to ongoing reconciliation efforts by simplifying the identification of consensus and conflicts between pathway databases and lowering the threshold for experts to contribute. Several case studies illustrate the potential of the C2Cards in identifying disagreements on the underlying biology of a metabolic process. The overviews may also point out controversial biological knowledge that should be subject of further research. Finally, the examples provided emphasize the importance of manual curation and the need for a broad community involvement. PMID:23803311
Consensus and conflict cards for metabolic pathway databases.
Stobbe, Miranda D; Swertz, Morris A; Thiele, Ines; Rengaw, Trebor; van Kampen, Antoine H C; Moerland, Perry D
2013-06-26
The metabolic network of H. sapiens and many other organisms is described in multiple pathway databases. The level of agreement between these descriptions, however, has proven to be low. We can use these different descriptions to our advantage by identifying conflicting information and combining their knowledge into a single, more accurate, and more complete description. This task is, however, far from trivial. We introduce the concept of Consensus and Conflict Cards (C₂Cards) to provide concise overviews of what the databases do or do not agree on. Each card is centered at a single gene, EC number or reaction. These three complementary perspectives make it possible to distinguish disagreements on the underlying biology of a metabolic process from differences that can be explained by different decisions on how and in what detail to represent knowledge. As a proof-of-concept, we implemented C₂Cards(Human), as a web application http://www.molgenis.org/c2cards, covering five human pathway databases. C₂Cards can contribute to ongoing reconciliation efforts by simplifying the identification of consensus and conflicts between pathway databases and lowering the threshold for experts to contribute. Several case studies illustrate the potential of the C₂Cards in identifying disagreements on the underlying biology of a metabolic process. The overviews may also point out controversial biological knowledge that should be subject of further research. Finally, the examples provided emphasize the importance of manual curation and the need for a broad community involvement.
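The core of a card can be sketched as a set comparison: for one gene, EC number, or reaction, collect what each pathway database records and classify each item as consensus (present in all databases) or conflict (missing from at least one). The database names and reaction identifiers below are invented placeholders, not actual C₂Cards content.

```python
# Sketch of the consensus/conflict classification behind a C2Card.
# Inputs are invented placeholders, not actual pathway-database content.

def c2card(entries):
    """entries: {db_name: set_of_reactions} -> (consensus, conflicts)."""
    dbs = list(entries.values())
    consensus = set.intersection(*dbs)
    conflicts = set.union(*dbs) - consensus
    return consensus, conflicts

entries = {"dbA": {"r1", "r2"}, "dbB": {"r1", "r3"}, "dbC": {"r1", "r2"}}
cons, conf = c2card(entries)
print(sorted(cons), sorted(conf))  # ['r1'] ['r2', 'r3']
```

The hard part the paper addresses is upstream of this step: deciding when two differently represented entries actually describe the same reaction, which is why manual curation remains essential.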
Eronen, Lauri; Toivonen, Hannu
2012-06-06
Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. 
In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available. The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching for and visualizing connections between given biological entities.
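A proximity measure of the kind described can be sketched as follows. Edges carry weights in (0, 1] set by edge type (the weights below are illustrative, not Biomine's calibrated values), and the proximity of two nodes is taken here as the best-path probability: the maximum over paths of the product of edge weights. Maximizing a product of weights is equivalent to minimizing the sum of −log(weight), so ordinary Dijkstra search applies.

```python
# Sketch of a best-path-probability proximity on a typed, weighted graph,
# in the spirit of Biomine. Edge-type weights are illustrative only.
import heapq, math

TYPE_WEIGHT = {"interacts": 0.8, "annotated": 0.6, "associated": 0.5}

def proximity(edges, src, dst):
    graph = {}
    for u, v, etype in edges:
        w = TYPE_WEIGHT[etype]
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    dist = {src: 0.0}                      # accumulated -log(probability)
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return math.exp(-d)            # back to a probability
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            nd = d - math.log(w)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0

edges = [("geneA", "protA", "annotated"),
         ("protA", "protB", "interacts"),
         ("protB", "disease1", "associated")]
print(round(proximity(edges, "geneA", "disease1"), 3))  # 0.6*0.8*0.5 = 0.24
```

Ranking candidate genes by such a proximity to a disease node is one way to pose the prioritization task the abstract describes; Biomine's actual measures also fold in edge reliability and informativeness.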
An algorithm to identify functional groups in organic molecules.
Ertl, Peter
2017-06-07
The concept of functional groups forms a basis of organic chemistry, medicinal chemistry, toxicity assessment, spectroscopy and also chemical nomenclature. All current software systems to identify functional groups are based on a predefined list of substructures. We are not aware of any program that can identify all functional groups in a molecule automatically. The algorithm presented in this article is an attempt to solve this scientific challenge. An algorithm to identify functional groups in a molecule based on iterative marching through its atoms is described. The procedure is illustrated by extracting functional groups from the bioactive portion of the ChEMBL database, resulting in identification of 3080 unique functional groups. A new algorithm to identify all functional groups in organic molecules is presented. The algorithm is relatively simple and full details with examples are provided; therefore, implementation in any cheminformatics toolkit should be relatively easy. The new method allows the analysis of functional groups in large chemical databases in a way that was not possible using previous approaches.
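The atom-marking idea can be sketched on a toy molecular graph: mark every heteroatom and every carbon multiply bonded to a heteroatom, then read off connected components of marked atoms as functional groups. This is a heavily reduced stand-in for the published algorithm (which also handles acetal carbons, aromatic systems, and other special cases), using a hand-built graph rather than a cheminformatics toolkit.

```python
# Much-reduced sketch of functional-group extraction by atom marking.
# Real implementations handle many more marking rules than shown here.

def functional_groups(atoms, bonds):
    """atoms: {id: element}; bonds: [(a, b, order)] -> list of atom-id sets."""
    marked = {a for a, el in atoms.items() if el not in ("C", "H")}
    for a, b, order in bonds:
        if order > 1:                  # carbon multiply bonded to a heteroatom
            if atoms[a] == "C" and b in marked:
                marked.add(a)
            if atoms[b] == "C" and a in marked:
                marked.add(b)
    adj = {}                           # adjacency restricted to marked atoms
    for a, b, _ in bonds:
        if a in marked and b in marked:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
    groups, seen = [], set()
    for a in marked:                   # connected components = functional groups
        if a in seen:
            continue
        comp, stack = set(), [a]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj.get(x, ()))
        seen |= comp
        groups.append(comp)
    return groups

# Acetic acid (hydrogens omitted): CH3-C(=O)-OH
atoms = {"C1": "C", "C2": "C", "O1": "O", "O2": "O"}
bonds = [("C1", "C2", 1), ("C2", "O1", 2), ("C2", "O2", 1)]
print(functional_groups(atoms, bonds))  # one group: the -C(=O)OH atoms
```

On this toy input the two oxygens and the carboxyl carbon are marked and form a single component, recovering the carboxylic acid group while leaving the methyl carbon unmarked.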
Applications of the Cambridge Structural Database in chemical education
Battle, Gary M.; Ferrence, Gregory M.; Allen, Frank H.
2010-01-01
The Cambridge Structural Database (CSD) is a vast and ever growing compendium of accurate three-dimensional structures that has massive chemical diversity across organic and metal–organic compounds. For these reasons, the CSD is finding significant uses in chemical education, and these applications are reviewed. As part of the teaching initiative of the Cambridge Crystallographic Data Centre (CCDC), a teaching subset of more than 500 CSD structures has been created that illustrate key chemical concepts, and a number of teaching modules have been devised that make use of this subset in a teaching environment. All of this material is freely available from the CCDC website, and the subset can be freely viewed and interrogated using WebCSD, an internet application for searching and displaying CSD information content. In some cases, however, the complete CSD System is required for specific educational applications, and some examples of these more extensive teaching modules are also discussed. The educational value of visualizing real three-dimensional structures, and of handling real experimental results, is stressed throughout. PMID:20877495
Applications of the Cambridge Structural Database in chemical education.
Battle, Gary M; Ferrence, Gregory M; Allen, Frank H
2010-10-01
The Cambridge Structural Database (CSD) is a vast and ever growing compendium of accurate three-dimensional structures that has massive chemical diversity across organic and metal-organic compounds. For these reasons, the CSD is finding significant uses in chemical education, and these applications are reviewed. As part of the teaching initiative of the Cambridge Crystallographic Data Centre (CCDC), a teaching subset of more than 500 CSD structures has been created that illustrate key chemical concepts, and a number of teaching modules have been devised that make use of this subset in a teaching environment. All of this material is freely available from the CCDC website, and the subset can be freely viewed and interrogated using WebCSD, an internet application for searching and displaying CSD information content. In some cases, however, the complete CSD System is required for specific educational applications, and some examples of these more extensive teaching modules are also discussed. The educational value of visualizing real three-dimensional structures, and of handling real experimental results, is stressed throughout.
Addressing unwarranted clinical variation: A rapid review of current evidence.
Harrison, Reema; Manias, Elizabeth; Mears, Stephen; Heslop, David; Hinchcliff, Reece; Hay, Liz
2018-05-15
Unwarranted clinical variation (UCV) can be described as variation that can only be explained by differences in health system performance. There is a lack of clarity regarding how to define and identify UCV and, once identified, to determine whether it is sufficiently problematic to warrant action. As such, the implementation of systemic approaches to reducing UCV is challenging. A review of approaches to understand, identify, and address UCV was undertaken to determine how conceptual and theoretical frameworks currently attempt to define UCV, the approaches used to identify UCV, and the evidence of their effectiveness. Rapid evidence assessment (REA) methodology was used. A range of text words, synonyms, and subject headings were developed for the major concepts of unwarranted clinical variation, standards (and deviation from these standards), and health care environment. Two electronic databases (Medline and Pubmed) were searched from January 2006 to April 2017, in addition to hand searching of relevant journals, reference lists, and grey literature. Results were merged using reference-management software (Endnote) and duplicates removed. Inclusion criteria were independently applied to potentially relevant articles by 3 reviewers. Findings were presented in a narrative synthesis to highlight key concepts addressed in the published literature. A total of 48 relevant publications were included in the review; 21 articles were identified as eligible from the database search, 4 from hand searching published work and 23 from the grey literature. The search process highlighted the voluminous literature reporting clinical variation internationally; yet, there is a dearth of evidence regarding systematic approaches to identifying or addressing UCV. Wennberg's classification framework is commonly cited in relation to classifying variation, but no single approach is agreed upon to systematically explore and address UCV. 
The instances of UCV that warrant investigation and action are largely determined at a systems level currently, and stakeholder engagement in this process is limited. Lack of consensus on an evidence-based definition for UCV remains a substantial barrier to progress in this field. © 2018 John Wiley & Sons, Ltd.
English semantic word-pair norms and a searchable Web portal for experimental stimulus creation.
Buchanan, Erin M; Holmes, Jessica L; Teasley, Marilee L; Hutchison, Keith A
2013-09-01
As researchers explore the complexity of memory and language hierarchies, the need to expand normed stimulus databases is growing. Therefore, we present 1,808 words, paired with their features and concept-concept information, that were collected using previously established norming methods (McRae, Cree, Seidenberg, & McNorgan Behavior Research Methods 37:547-559, 2005). This database supplements existing stimuli and complements the Semantic Priming Project (Hutchison, Balota, Cortese, Neely, Niemeyer, Bengson, & Cohen-Shikora 2010). The data set includes many types of words (including nouns, verbs, adjectives, etc.), expanding on previous collections of nouns and verbs (Vinson & Vigliocco Journal of Neurolinguistics 15:317-351, 2008). We describe the relation between our and other semantic norms, as well as giving a short review of word-pair norms. The stimuli are provided in conjunction with a searchable Web portal that allows researchers to create a set of experimental stimuli without prior programming knowledge. When researchers use this new database in tandem with previous norming efforts, precise stimuli sets can be created for future research endeavors.
Internet-based distributed collaborative environment for engineering education and design
NASA Astrophysics Data System (ADS)
Sun, Qiuli
2001-07-01
This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented.
Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.
Tracking the Evolution of the Internet of Things Concept Across Different Application Domains
Ibarra-Esquer, Jorge E.; González-Navarro, Félix F.; Flores-Rios, Brenda L.; Burtseva, Larysa; Astorga-Vargas, María A.
2017-01-01
Both the idea and technology for connecting sensors and actuators to a network to remotely monitor and control physical systems have been known for many years and developed accordingly. However, a little more than a decade ago the concept of the Internet of Things (IoT) was coined and used to integrate such approaches into a common framework. Technology has been constantly evolving and so has the concept of the Internet of Things, incorporating new terminology appropriate to technological advances and different application domains. This paper presents the changes that the IoT has undertaken since its conception and research on how technological advances have shaped it and fostered the arising of derived names suitable to specific domains. A two-step literature review through major publishers and indexing databases was conducted; first by searching for proposals on the Internet of Things concept and analyzing them to find similarities, differences, and technological features that allow us to create a timeline showing its development; in the second step the most mentioned names given to the IoT for specific domains, as well as closely related concepts were identified and briefly analyzed. The study confirms the claim that a consensus on the IoT definition has not yet been reached, as enabling technology keeps evolving and new application domains are being proposed. However, recent changes have been relatively moderated, and its variations on application domains are clearly differentiated, with data and data technologies playing an important role in the IoT landscape. PMID:28613238
Probing concept of critical thinking in nursing education in Iran: a concept analysis.
Tajvidi, Mansooreh; Ghiyasvandian, Shahrzad; Salsali, Mahvash
2014-06-01
Given the wide disagreement over the definition of critical thinking in different disciplines, defining and standardizing the concept for the discipline of nursing is essential. Moreover, there is limited scientific evidence regarding critical thinking in the context of nursing in Iran. The aim of this study was to analyze and clarify the concept of critical thinking in nursing education in Iran. We employed the hybrid model to define the concept of critical thinking. The hybrid model has three interconnected phases: the theoretical phase, the fieldwork phase, and the final analytic phase. In the theoretical phase, we searched online scientific databases (such as Elsevier, Wiley, CINAHL, ProQuest, Ovid, and Springer, as well as Iranian databases such as SID, Magiran, and Iranmedex). In the fieldwork phase, a purposive sample of 17 nursing faculty members, PhD students, clinical instructors, and clinical nurses was recruited. Participants were interviewed using an interview guide. In the analytic phase, we compared the data from the theoretical and fieldwork phases. The concept of critical thinking had many different antecedents, attributes, and consequences. The antecedents, attributes, and consequences identified in the theoretical phase were in some ways different from and in some ways similar to those identified in the fieldwork phase. Finally, critical thinking in nursing education in Iran was clarified: critical thinking is a logical, situational, purposive, and outcome-oriented thinking process. It is an acquired and evolving ability which develops individually. Such a thinking process can lead to professional accountability, personal development, God's consent, conscience appeasement, and personality development. Copyright © 2014. Published by Elsevier B.V.
Novel approaches for an enhanced geothermal development of residential sites
NASA Astrophysics Data System (ADS)
Schelenz, Sophie; Firmbach, Linda; Shao, Haibing; Dietrich, Peter; Vienken, Thomas
2015-04-01
Ongoing technological enhancement drives an increasing use of shallow geothermal systems for heating and cooling applications. However, even in areas with intensive shallow geothermal use, the planning of geothermal systems is in many cases based solely on geological maps, drilling databases, and literature references. Thus, relevant heat transport parameters are approximated rather than measured for the specific site. To increase planning safety and promote the use of renewable energies in the domestic sector, this study investigates a novel concept for the enhanced geothermal development of residential neighbourhoods. This concept is based on a site-specific characterization of subsurface conditions and the implementation of demand-oriented geothermal usage options. To that end, an investigation approach was tested that combines non-invasive with minimally invasive exploration methods. While electrical resistivity tomography was applied to characterize the geological subsurface structure, Direct Push soundings enable a detailed, vertically high-resolution characterization of the subsurface surrounding the borehole heat exchangers. The benefit of this site-specific subsurface investigation is highlighted for 1) a more precise design of shallow geothermal systems and 2) a reliable prediction of induced long-term changes in groundwater temperatures. To ensure the financial feasibility and practicability of the novel geothermal development, three different options for its implementation in residential neighbourhoods were then derived.
Bell, Erica; Campbell, Steve; Goldberg, Lynette R
2015-01-22
The most important and contested element of nursing identity may be the patient-centredness of nursing, though this concept is not well treated in the nursing identity literature. More conceptually based mapping of nursing identity constructs is needed to help nurses shape their identity. The field of computational text analytics offers new opportunities to scrutinise how growing disciplines such as health services research construct nursing identity. This paper maps the conceptual content of scholarly health services research in PubMed as it relates to the patient-centredness of nursing. Computational text analytics software was used to analyse all health services abstracts in the database PubMed since 1986, with abstracts treated as indicative of the content of health services research. The database PubMed was searched for all research papers using the term "service" or "services" in the abstract or keywords for the period 01/01/1986 to 30/06/2013. A total of 234,926 abstracts were obtained. Leximancer software was used for 1) mapping of 4,144,458 instances of 107 concepts; 2) analysis of 106 paired concept co-occurrences for the nursing concept; and 3) sentiment analysis of the nursing concept versus patient, family and community concepts, and clinical concepts. Nursing is constructed within quality assurance, service implementation, or workforce development concepts. It is relatively disconnected from patient, family, or community care concepts. For those who agree that patient-centredness should be a part of nursing identity in practice, this study suggests a need for health services research into both the nature of the caring construct in nursing identity and its expression in practice. More fundamentally, the study raises questions about whether health services research cultures even value the politically popular idea of nurses as patient-centred caregivers, and whether they should.
[Complexity of care: meanings and interpretation].
Cologna, Marina; Zanolli, Daniela; Saiani, Luisa
2010-01-01
Although the concept of complexity of care is widely used and discussed, its meaning is blurred and its characteristics are not well defined. The aim was to identify the terms used in the literature to define the concept of complexity and their meanings. A literature search was performed on the following databases: PubMed, Medline, Ebsco, CINAHL and Cochrane. No temporal limits were set; publications written in English and Italian were included. Several terms are used to define the concept of complexity, often interchangeably notwithstanding their different meanings. Three main concepts were identified: nursing intensity, which includes the concepts of dependency, severity and complexity of patient care; nursing workload, which comprises the concept of nursing intensity plus all activities that are not patient-related; and patient acuity, which includes the severity of illness and the caring intensity. A common definition is needed in order to use the concept of complexity of care to allocate nursing resources.
NASA Astrophysics Data System (ADS)
Lee, Kangwon
Intelligent vehicle systems, such as Adaptive Cruise Control (ACC) or Collision Warning/Collision Avoidance (CW/CA), are currently under development, and several companies have already offered ACC on selected models. Control and decision-making algorithms of these systems are commonly evaluated through extensive computer simulations and well-defined scenarios on test tracks. However, they have rarely been validated with large quantities of naturalistic human driving data. This dissertation utilized two University of Michigan Transportation Research Institute databases (Intelligent Cruise Control Field Operational Test and System for Assessment of Vehicle Motion Environment) in the development and evaluation of longitudinal driver models and CW/CA algorithms. First, to examine how drivers normally follow other vehicles, the vehicle motion data from the databases were processed using a Kalman smoother. The processed data were then used to fit and evaluate existing longitudinal driver models (e.g., the linear follow-the-leader model, Newell's special model, the nonlinear follow-the-leader model, the linear optimal control model, the Gipps model, and the optimal velocity model). A modified version of the Gipps model was proposed and found to be accurate in both the microscopic (vehicle) and macroscopic (traffic) senses. Second, to examine emergency braking behavior and to evaluate CW/CA algorithms, the concepts of signal detection theory and a performance index suitable for unbalanced situations (few threatening data points vs. many safe data points) were introduced. Selected existing CW/CA algorithms were found to have a performance index (geometric mean of true-positive rate and precision) not exceeding 20%. To optimize the parameters of the CW/CA algorithms, a new numerical optimization scheme was developed that replaces the original data points with their representative statistics.
A new CW/CA algorithm was proposed and found to score higher than 55% on the performance index. This dissertation provides a model of how drivers follow lead vehicles that is much more accurate than other models in the literature. Furthermore, the data-based approach was used to confirm that a CW/CA algorithm utilizing lead-vehicle braking was substantially more effective than existing algorithms, leading to collision warning systems that are much more likely to contribute to driver safety.
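The performance index described in the dissertation abstract above, the geometric mean of true-positive rate and precision, can be sketched as follows. The function is a straightforward reading of that definition; the alert/threat counts in the usage example are hypothetical, not taken from the study.

```python
import math

def performance_index(tp, fp, fn):
    """Geometric mean of true-positive rate (recall) and precision.

    Suited to unbalanced evaluation sets (few threatening data points
    vs. many safe data points), as used to score CW/CA algorithms.
    """
    tpr = tp / (tp + fn)        # fraction of real threats that triggered an alert
    precision = tp / (tp + fp)  # fraction of alerts that were real threats
    return math.sqrt(tpr * precision)

# Hypothetical counts for a warning algorithm:
# 40 correctly flagged threats, 120 false alarms, 10 missed threats.
score = performance_index(tp=40, fp=120, fn=10)
print(round(score, 3))
```

The geometric mean punishes an algorithm that buys recall with a flood of false alarms, which is why it is preferable to raw accuracy when safe events vastly outnumber threats.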
Hsieh, Sheng-Hsun; Li, Yung-Hui; Wang, Wei; Tien, Chung-Hao
2018-03-06
In this study, we employed a dual-band spectral imaging system to capture iridal images from subjects wearing cosmetic contact lenses. By using independent component analysis to separate individual spectral primitives, we successfully distinguished the natural iris texture from the cosmetic contact lens (CCL) pattern and restored the genuine iris patterns from the CCL-polluted image. Based on a database containing 200 test image pairs from 20 CCL-wearing subjects as a proof of concept, the recognition accuracy was improved from a False Rejection Rate (FRR) of 10.52% to an FRR of 0.57% with the proposed ICA anti-spoofing scheme.
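The separation step in the abstract above rests on a standard property of independent component analysis: given two linear mixtures of two statistically independent sources, ICA can recover the sources without knowing the mixing. The sketch below illustrates this with synthetic 1-D signals standing in for the two spectral primitives (iris texture and lens print); the signals, the mixing matrix, and the use of scikit-learn's FastICA are all illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Stand-ins for two spectral "primitives": a slowly varying iris-texture
# signal and a higher-frequency contact-lens print pattern.
t = np.linspace(0, 1, 2000)
iris = np.sin(2 * np.pi * 3 * t)
lens = np.sign(np.sin(2 * np.pi * 40 * t))
sources = np.c_[iris, lens]                      # shape (2000, 2)

# Each spectral band observes a different linear mixture of the two
# sources, loosely analogous to the two bands of a dual-band imager.
mixing = np.array([[0.7, 0.3],
                   [0.4, 0.6]])
observed = sources @ mixing.T + 0.01 * rng.standard_normal((2000, 2))

# FastICA recovers statistically independent components; each recovered
# component should correlate strongly with one of the original sources.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)

corr = np.abs(np.corrcoef(recovered.T, sources.T))[:2, 2:]
print(corr.max(axis=1))  # best-match correlation for each component
```

ICA recovers sources only up to permutation, sign, and scale, which is why the check above uses absolute correlation against each source rather than comparing values directly.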
Conceptualisation of patient satisfaction: a systematic narrative literature review.
Batbaatar, Enkhjargal; Dorjdagva, Javkhlanbayar; Luvsannyam, Ariunbat; Amenta, Pietro
2015-09-01
The concept of patient satisfaction is widely measured because of its relevance to health services; however, evidence suggests that it is a poorly developed concept. This article is the first of a two-part series of research whose goal is to review the current conceptual framework of patient satisfaction and prepare the concept for further operationalisation. The current article aimed to review the theoretical framework, which the next article uses to review the determinants of patient satisfaction for designing a measurement system. The study used a systematic review method, meta-narrative review, based on the RAMESES guideline, with phases of screening evidence, appraising evidence, data extraction, and synthesis. Theoretical articles on patient satisfaction were searched in two databases, MEDLINE and CINAHL. Inclusion criteria were articles published between 1980 and 2014, in English only. Thirty-six articles were selected for the synthesis. Results showed that most patient satisfaction theories and formulations are based on marketing theories and define satisfaction as how well a health service fulfils patient expectations. However, the review demonstrated that the relationship between expectation and satisfaction is unclear, and the concept of expectation itself is not distinctly theorised either. Researchers brought satisfaction theories from other fields into the healthcare literature without much adaptation. Thus, there is a need to define the patient satisfaction concept from other perspectives, or to learn how patients evaluate care, rather than struggling to describe it through consumerist theories. © Royal Society for Public Health 2015.
Women's toileting behaviour related to urinary elimination: concept analysis.
Wang, Kefang; Palmer, Mary H
2010-08-01
This paper is a report of an analysis of the concept of women's toileting behaviour related to urinary elimination. Behaviours related to emptying urine from the bladder can contribute to bladder health problems. Evidence exists that clinical interventions focusing on specific behaviours that promote urine storage and controlled emptying are effective in reducing lower urinary tract symptoms. The concept of women's toileting behaviour related to urinary elimination has not been well developed to guide nursing research and intervention. The CINAHL, Medline, PsycInfo and ISI Citation databases were searched for publications between January 1960 and May 2009, using combinations of keywords related to women's toileting behaviour. Additional publications were identified by examining the reference lists of the papers identified. Johnson's behavioural system model provided the conceptual framework for identifying the concept, and Walker and Avant's method was used for the concept analysis. Women's toileting behaviour related to urinary elimination can be defined as voluntary actions related to the physiological event of emptying the bladder, comprising specific attributes including voiding place, voiding time, voiding position and voiding style. This behaviour is also influenced by the physical and social environments. An explicit definition of women's toileting behaviour can offer a basis for nurses to understand the factors involved in women's toileting behaviour. It also facilitates the development of an instrument to better assess women's toileting behaviour, and the development of behavioural interventions designed to prevent, eliminate, reduce and manage female lower urinary tract symptoms.
Patient satisfaction with nursing care: a concept analysis within a nursing framework.
Wagner, Debra; Bear, Mary
2009-03-01
This paper is a report of a concept analysis of patient satisfaction with nursing care. Patient satisfaction is an important indicator of quality of care, and healthcare facilities are interested in maintaining high levels of satisfaction in order to stay competitive in the healthcare market. Nursing care has a prominent role in patient satisfaction. Using a nursing model to measure patient satisfaction with nursing care helps define and clarify this concept. Rodgers' evolutionary method of concept analysis provided the framework for this analysis. Data were retrieved from the Cumulative Index of Nursing and Allied Health Literature and MEDLINE databases and the ABI/INFORM global business database. The literature search used the keywords patient satisfaction, nursing care and hospital. The sample included 44 papers published in English, between 1998 and 2007. Cox's Interaction Model of Client Health Behavior was used to analyse the concept of patient satisfaction with nursing care. The attributes leading to the health outcome of patient satisfaction with nursing care were categorized as affective support, health information, decisional control and professional/technical competencies. Antecedents embodied the uniqueness of the patient in terms of demographic data, social influence, previous healthcare experiences, environmental resources, intrinsic motivation, cognitive appraisal and affective response. Consequences of achieving patient satisfaction with nursing care included greater market share of healthcare finances, compliance with healthcare regimens and better health outcomes. The meaning of patient satisfaction continues to evolve. Using a nursing model to measure patient satisfaction with nursing care delineates the concept from other measures of patient satisfaction.
Automated container transportation using self-guided vehicles: Fernald site requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hazen, F.B.
1993-09-01
A new opportunity to improve the safety and efficiency of environmental restoration operations using robotics has emerged from advances in industry, academia, and government labs. Self-Guided Vehicles (SGVs) have recently been developed in industry, and early systems have already demonstrated much, though not all, of the functionality necessary to support driverless transportation of waste within and between processing facilities. Improved materials databases are being developed by at least two DOE remediation sites, the Fernald Environmental Management Project (FEMP) in the State of Ohio and the Hanford Complex in the State of Washington. SGVs can be developed that take advantage of the information in these databases and yield improved dispatch, waste tracking, and report and shipment documentation. In addition, they will reduce the radiation hazard to workers and the risk of damaging containers through accidental collision. In this document, features of remediation sites that dictate the design of both individual SGVs and the collective system of SGVs are presented through the example of the site requirements at Fernald. Some concepts borrowed from the world of manufacturing are explained and then used to develop an integrated, holistic view of the remediation site as a pseudo-factory. Transportation methods at Fernald and anticipated growth in transport demand are analyzed. The new site-wide database under development at Fernald is presented so that advantageous and synergistic links between SGVs and information systems can be analyzed. Details of the proposed SGV development are submitted, and some results of a recently completed state-of-the-art survey for SGV use in this application are also presented.
United States Army Medical Materiel Development Activity: 1997 Annual Report.
1997-01-01
business planning and execution information management system (Project Management Division Database (PMDD) and Product Management Database System (PMDS)... MANAGEMENT • Project Management Division Database (PMDD), Product Management Database System (PMDS), and Special Users Database System: The existing... System (FMS), were investigated. New Product Managers and Project Managers were added into PMDS and PMDD. A separate division, Support, was
ERIC Educational Resources Information Center
Munn, Maureen; Knuth, Randy; Van Horne, Katie; Shouse, Andrew W.; Levias, Sheldon
2017-01-01
This study examines how two kinds of authentic research experiences related to smoking behavior--genotyping human DNA (wet lab) and using a database to test hypotheses about factors that affect smoking behavior (dry lab)--influence students' perceptions and understanding of scientific research and related science concepts. The study used pre and…
Collaborative Interactive Visualization Exploratory Concept
2015-06-01
the FIAC concepts. It consists of various DRDC-RDDC-2015-N004 intelligence analysis web services built on top of big data technologies exploited...sits on the UDS where validated common knowledge is stored. Based on the Lumify software, this important component exploits big data technologies such...interfaces. Above this database resides the Big Data Manager, responsible for transparent data transmission between the UDS and the rest of the S3
Innovative railroad information displays : executive summary
DOT National Transportation Integrated Search
1998-01-01
The objectives of this study were to explore the potential of advanced digital technology, novel concepts of information management, geographic information databases, and display capabilities in order to enhance the planning and decision-making process...
Failsafe automation of Phase II clinical trial interim monitoring for stopping rules.
Day, Roger S
2010-02-01
In Phase II clinical trials in cancer, preventing the treatment of patients on a study when current data demonstrate that the treatment is insufficiently active or too toxic has obvious benefits, both in protecting patients and in reducing sponsor costs. Considerable efforts have gone into experimental designs for Phase II clinical trials with flexible sample size, usually implemented by early stopping rules. The intended benefits will not ensue, however, if the design is not followed. Despite the best intentions, failures can occur for many reasons. The main goal is to develop an automated system for interim monitoring, as a backup system supplementing the protocol team, to ensure that patients are protected. A secondary goal is to stimulate timely recording of patient assessments. We developed key concepts and performance needs, then designed, implemented, and deployed a software solution embedded in the clinical trials database system. The system has been in place since October 2007. One clinical trial tripped the automated monitor, resulting in e-mails that initiated statistician/investigator review in timely fashion. Several essential contributing activities still require human intervention, institutional policy decisions, and institutional commitment of resources. We believe that implementing the concepts presented here will provide greater assurance that interim monitoring plans are followed and that patients are protected from inadequate response or excessive toxicity. This approach may also facilitate wider acceptance and quicker implementation of new interim monitoring algorithms.
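The automated interim monitor described in the abstract above can be pictured as a rule check run whenever the trial database is updated: at each scheduled enrolment milestone, compare observed responses and toxicities against design thresholds. The sketch below is a generic illustration of that idea; the function name, the table structure, and all thresholds are hypothetical, not the system or design used in the paper.

```python
def should_stop(responses, toxicities, n_enrolled, stopping_table):
    """Evaluate interim-monitoring stopping rules for a Phase II trial.

    stopping_table maps an enrolment count (a scheduled interim look)
    to (minimum responses required, maximum toxicities tolerated).
    Returns None when no look is scheduled at this enrolment count.
    Thresholds are illustrative, not from any specific design.
    """
    if n_enrolled not in stopping_table:
        return None  # no interim look scheduled at this enrolment
    min_responses, max_toxicities = stopping_table[n_enrolled]
    if responses < min_responses:
        return "stop: insufficient activity"
    if toxicities > max_toxicities:
        return "stop: excessive toxicity"
    return "continue"

# Hypothetical two-look design: at 15 patients require >= 2 responses
# and tolerate <= 4 toxicities; at 30 patients, >= 6 and <= 8.
table = {15: (2, 4), 30: (6, 8)}
print(should_stop(responses=1, toxicities=2, n_enrolled=15, stopping_table=table))
```

In a deployed system such a check would be triggered by database writes and, as in the paper, escalate to e-mail alerts for statistician and investigator review rather than acting on its own.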
Performance assessment of EMR systems based on post-relational database.
Yu, Hai-Yan; Li, Jing-Song; Zhang, Xiao-Guang; Tian, Yu; Suzuki, Muneou; Araki, Kenji
2012-08-01
Post-relational databases provide high performance and are currently widely used in American hospitals. As few hospital information systems (HIS) in either China or Japan are based on post-relational databases, here we introduce a new-generation electronic medical records (EMR) system called Hygeia, which was developed with the post-relational database Caché and the latest platform Ensemble. Utilizing the benefits of a post-relational database, Hygeia is equipped with an "integration" feature that allows all the system users to access data-with a fast response time-anywhere and at anytime. Performance tests of databases in EMR systems were implemented in both China and Japan. First, a comparison test was conducted between a post-relational database, Caché, and a relational database, Oracle, embedded in the EMR systems of a medium-sized first-class hospital in China. Second, a user terminal test was done on the EMR system Izanami, which is based on the identical database Caché and operates efficiently at the Miyazaki University Hospital in Japan. The results proved that the post-relational database Caché works faster than the relational database Oracle and showed perfect performance in the real-time EMR system.
Supersonics Project - Airport Noise Tech Challenge
NASA Technical Reports Server (NTRS)
Bridges, James
2010-01-01
The Airport Noise Tech Challenge research effort under the Supersonics Project is reviewed. While the goal of "improved supersonic jet noise models validated on innovative nozzle concepts" remains the same, the success of the research effort has caused its thrust to be modified going forward. The main activities from FY06-10 focused on the development and validation of jet noise prediction codes. This required innovative diagnostic techniques to be developed and deployed, extensive jet noise and flow databases to be created, and computational tools to be developed and validated. Furthermore, in FY09-10, systems studies commissioned by the Supersonics Project showed that viable supersonic aircraft were within reach using variable cycle engine architectures if exhaust nozzle technology could provide 3-5 dB of suppression. The Project then began to focus on integrating the technologies being developed in its Tech Challenge areas to bring about successful system designs. Consequently, the Airport Noise Tech Challenge area has shifted its efforts from developing jet noise prediction codes to using them to develop low-noise nozzle concepts for integration into supersonic aircraft. The new plan of research is briefly presented by technology and timeline.
Zaslansky, R; Chapman, C R; Rothaug, J; Bäckström, R; Brill, S; Davidson, E; Elessi, K; Fletcher, D; Fodor, L; Karanja, E; Konrad, C; Kopf, A; Leykin, Y; Lipman, A; Puig, M; Rawal, N; Schug, S; Ullrich, K; Volk, T; Meissner, W
2012-03-01
Post-operative pain exacts a high toll from patients, families, healthcare professionals and healthcare systems worldwide. PAIN-OUT is a research project funded by the European Union's 7th Framework Program designed to develop effective, evidence-based approaches to improve pain management after surgery, including creating a registry for feedback, benchmarking and decision support. In preparation for PAIN-OUT, we conducted a pilot study to evaluate the feasibility of international data collection with feedback to participating sites. Adult orthopaedic or general surgery patients consented to participate between May and October 2008 at 14 collaborating hospitals in 13 countries. Project staff collected patient-reported outcomes and process data from 688 patients and entered the data into an online database. Project staff in 10 institutions met the enrolment criteria of collecting data from at least 50 patients. The completeness and quality of the data, as assessed by rate of missing data, were acceptable; only 2% of process data and 0.06% of patient-reported outcome data were missing. Participating institutions received access to select items as Web-based feedback comparing their outcomes to those of the other sites, presented anonymously. We achieved proof of concept because staff and patients in all 14 sites cooperated well despite marked differences in cultures, nationalities and languages, and a central database management team was able to provide valuable feedback to all. © 2011 European Federation of International Association for the Study of Pain Chapters.
Teaching about Human Geography.
ERIC Educational Resources Information Center
Schlene, Vickie J.
1991-01-01
Presents a sampling of items from the ERIC database concerning the teaching of human geography. Includes documents dealing with Africa, Asia, the United States, Canada, Antarctica, and geographic concepts. Explains how to obtain ERIC documents. (SG)