Schema Versioning for Multitemporal Relational Databases.
ERIC Educational Resources Information Center
De Castro, Cristina; Grandi, Fabio; Scalas, Maria Rita
1997-01-01
Investigates new design options for extended schema versioning support for multitemporal relational databases. Discusses the improved functionalities they may provide. Outlines options and basic motivations for the new design solutions, as well as techniques for the management of proposed schema versioning solutions, includes algorithms and…
Automated Database Schema Design Using Mined Data Dependencies.
ERIC Educational Resources Information Center
Wong, S. K. M.; Butz, C. J.; Xiang, Y.
1998-01-01
Describes a bottom-up procedure for discovering multivalued dependencies in observed data without knowing a priori the relationships among the attributes. The proposed algorithm is an application of a technique designed for learning conditional independencies in probabilistic reasoning; a prototype system for automated database schema design has…
A Chado case study: an ontology-based modular schema for representing genome-associated biological information.
Mungall, Christopher J; Emmert, David B
2007-07-01
A few years ago, FlyBase undertook to design a new database schema to store Drosophila data. It would fully integrate genomic sequence and annotation data with bibliographic, genetic, phenotypic and molecular data from the literature representing a distillation of the first 100 years of research on this major animal model system. In developing this new integrated schema, FlyBase also made a commitment to ensure that its design was generic, extensible and available as open source, so that it could be employed as the core schema of any model organism data repository, thereby avoiding redundant software development and potentially increasing interoperability. Our question was whether we could create a relational database schema that would be successfully reused. Chado is a relational database schema now being used to manage biological knowledge for a wide variety of organisms, from human to pathogens, especially the classes of information that directly or indirectly can be associated with genome sequences or the primary RNA and protein products encoded by a genome. Biological databases that conform to this schema can interoperate with one another, and with application software from the Generic Model Organism Database (GMOD) toolkit. Chado is distinctive because its design is driven by ontologies. The use of ontologies (or controlled vocabularies) is ubiquitous across the schema, as they are used as a means of typing entities. The Chado schema is partitioned into integrated subschemas (modules), each encapsulating a different biological domain, and each described using representations in appropriate ontologies. To illustrate this methodology, we describe here the Chado modules used for describing genomic sequences. GMOD is a collaboration of several model organism database groups, including FlyBase, to develop a set of open-source software for managing model organism data. The Chado schema is freely distributed under the terms of the Artistic License (http://www.opensource.org/licenses/artistic-license.php) from GMOD (www.gmod.org).
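To make the ontology-driven design concrete, here is a minimal SQL sketch of the typing pattern described above. The table and column names are illustrative and heavily simplified; they are not the actual Chado DDL.

    -- Controlled-vocabulary terms double as the schema's type system.
    CREATE TABLE cvterm (
        cvterm_id  INTEGER PRIMARY KEY,
        name       VARCHAR(255) NOT NULL   -- e.g. 'gene', 'mRNA', 'exon' from the Sequence Ontology
    );

    -- One generic table holds all genomic features, each typed by an ontology term.
    CREATE TABLE feature (
        feature_id INTEGER PRIMARY KEY,
        uniquename VARCHAR(255) NOT NULL,
        type_id    INTEGER NOT NULL REFERENCES cvterm (cvterm_id)
    );

Typing rows through a cvterm reference, rather than hard-coding one table per biological entity type, is what lets a single module store genes, transcripts, and exons without schema changes.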
Rapid HIS, RIS, PACS Integration Using Graphical CASE Tools
NASA Astrophysics Data System (ADS)
Taira, Ricky K.; Breant, Claudine M.; Stepczyk, Frank M.; Kho, Hwa T.; Valentino, Daniel J.; Tashima, Gregory H.; Materna, Anthony T.
1994-05-01
We describe the clinical requirements of the integrated federation of databases and present our client-mediator-server design. The main body of the paper describes five important aspects of integrating information systems: (1) global schema design, (2) establishing sessions with remote database servers, (3) development of schema translators, (4) integration of global system triggers, and (5) development of job workflow scripts.
Parsing GML data based on integrative GML syntactic and semantic schemas database
NASA Astrophysics Data System (ADS)
Miao, Lizhi; Zhang, Shuliang; Lu, Guonian; Gao, Xiaoli; Jiao, Donglai; Gan, Jiayan
2007-06-01
This paper proposes a new method to parse the various application schemas of Geography Markup Language (GML), capturing the syntax and semantics of their elements and types so that the same GML instance data can be interpreted uniformly by diverse users. The proposed method generates an Integrative GML Syntactic and Semantic Schemas Database (IGSSSDB) from the GML 3.1 core schemas and the corresponding application schemas. GML data are then parsed against IGSSSDB, which holds the syntactic and semantic information, nesting information, and mapping rules of the GML core and application schemas. Three kinds of relational tables are designed for storing schema information when constructing IGSSSDB: info tables for the schemas included and namespaces imported by application schemas, tables for information related to the schemas themselves, and catalog tables for the core schemas. Within these tables, we propose using homologous regular expressions to describe the models of elements and complex types in schemas, which keeps the models complete and readable. Based on IGSSSDB, we design and develop APIs that implement GML data parsing and can process the syntactic and semantic information of GML data from diverse fields and users. In the latter part of the paper, a test study shows that the proposed method is feasible and appropriate for parsing GML data, and it lays a good foundation for future GML data studies such as storage, indexing, and querying.
Prototype integrated design (Pride) system reference manual. Volume 2: Schema definition
NASA Technical Reports Server (NTRS)
Fishwick, P. A.; Sutter, T. R.; Blackburn, C. L.
1983-01-01
An initial description of an evolving relational database schema is presented for the management of finite element model design and analysis data. The report presents a description of each relation including attribute names, data types, and definitions. The format of this report is such that future modifications and enhancements may be easily incorporated.
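Purely as an illustration of the kind of relation such a manual documents (a hypothetical example, not one of the report's actual relations), a finite element node relation might be defined as:

    -- Hypothetical finite element node relation: the manual catalogs
    -- attribute names, data types, and definitions at this level of detail.
    CREATE TABLE node (
        node_id INTEGER PRIMARY KEY,   -- node identifier
        x       REAL NOT NULL,         -- nodal coordinates
        y       REAL NOT NULL,
        z       REAL NOT NULL
    );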
García-Remesal, M; Maojo, V; Billhardt, H; Crespo, J
2010-01-01
Bringing together structured and text-based sources is an exciting challenge for biomedical informaticians, since most relevant biomedical sources belong to one of these categories. In this paper we evaluate the feasibility of integrating relational and text-based biomedical sources using: i) an original logical schema acquisition method for textual databases developed by the authors, and ii) OntoFusion, a system originally designed by the authors for the integration of relational sources. We conducted an integration experiment involving a test set of seven differently structured sources covering the domain of genetic diseases. We used our logical schema acquisition method to generate schemas for all textual sources. The sources were integrated using the methods and tools provided by OntoFusion. The integration was validated using a test set of 500 queries. A panel of experts answered a questionnaire to evaluate i) the quality of the extracted schemas, ii) the query processing performance of the integrated set of sources, and iii) the relevance of the retrieved results. The results of the survey show that our method extracts coherent and representative logical schemas. Experts' feedback on the performance of the integrated system and the relevance of the retrieved results was also positive. Regarding the validation of the integration, the system successfully provided correct results for all queries in the test set. The results of the experiment suggest that text-based sources including a logical schema can be regarded as equivalent to structured databases. Using our method, previous research and existing tools designed for the integration of structured databases can be reused - possibly subject to minor modifications - to integrate differently structured sources.
Corwin, John; Silberschatz, Avi; Miller, Perry L; Marenco, Luis
2007-01-01
Data sparsity and schema evolution issues affecting clinical informatics and bioinformatics communities have led to the adoption of vertical or object-attribute-value-based database schemas to overcome limitations posed when using conventional relational database technology. This paper explores these issues and discusses why biomedical data are difficult to model using conventional relational techniques. The authors propose a solution to these obstacles based on a relational database engine using a sparse, column-store architecture. The authors provide benchmarks comparing the performance of queries and schema-modification operations using three different strategies: (1) the standard conventional relational design; (2) past approaches used by biomedical informatics researchers; and (3) their sparse, column-store architecture. The performance results show that their architecture is a promising technique for storing and processing many types of data that are not handled well by the other two semantic data models.
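The schema-evolution contrast behind these benchmarks can be sketched in SQL; the table and column names below are hypothetical:

    -- Conventional relational design: adding a new attribute is a DDL change.
    CREATE TABLE patient (
        patient_id INTEGER PRIMARY KEY,
        name       VARCHAR(100)
    );
    ALTER TABLE patient ADD COLUMN serum_glucose NUMERIC;

    -- Vertical (object-attribute-value) design: the same addition is ordinary data.
    CREATE TABLE observation (
        patient_id INTEGER      NOT NULL,
        attribute  VARCHAR(64)  NOT NULL,
        value      VARCHAR(255)
    );
    INSERT INTO observation (patient_id, attribute, value)
    VALUES (42, 'serum_glucose', '5.4');

Sparse data also favor the vertical form: only attributes that actually apply to a given patient occupy storage, at the cost of pushing type checking out of the DBMS.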
Integration of Schemas on the Pre-Design Level Using the KCPM-Approach
NASA Astrophysics Data System (ADS)
Vöhringer, Jürgen; Mayr, Heinrich C.
Integration is a central research and operational issue in information system design and development. It can be conducted on the system, schema, view, or data level. On the system level, integration deals with the progressive linking and testing of system components to merge their functional and technical characteristics and behavior into a comprehensive, interoperable system. Schema integration comprises the comparison and merging of two or more schemas, usually conceptual database schemas. Data integration deals with merging the contents of multiple sources of related data. View integration is similar to schema integration but focuses on views, and queries over them, rather than schemas. All these types of integration have in common that two or more sources are first compared, in order to identify matches and mismatches as well as conflicts and inconsistencies, and then merged. The sources may stem from heterogeneous companies, organizational units, or projects. Integration enables the reuse and combined use of source components.
A Flexible Online Metadata Editing and Management System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilar, Raul; Pan, Jerry Yun; Gries, Corinna
2010-01-01
A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility of adding more functionality at a later stage. The system consists of a desktop design tool, or schema walker, used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema and provides the designer with options to combine input fields into online forms and give the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time; and second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. The customization information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page are saved into a native XML database whenever the user moves to the next screen or pushes the save button, independently of validity. Currently the system uses the open-source XML database eXist for storage and management, which comes with third-party online and desktop management tools. However, access to metadata files in the application introduced here is managed in a custom online module, using a MySQL backend accessed by a simple JavaServer Faces front end. A flexible system with three grouping options - organization, group, and single editing access - is provided. Three levels were chosen to distribute administrative responsibilities and handle the common situation of an information manager entering the bulk of the metadata while leaving specifics to the actual data provider.
Heterogeneous database integration in a physician workstation.
Annevelink, J; Young, C Y; Tang, P C
1991-01-01
We discuss the integration of a variety of data and information sources in a Physician Workstation (PWS), focusing on the integration of data from DHCP, the Veterans Administration's Decentralized Hospital Computer Program. We designed a logically centralized, object-oriented data schema, used by end users and applications to explore the data accessible through an object-oriented database using a declarative query language. We emphasize the use of procedural abstraction to transparently integrate a variety of information sources into the data schema. PMID:1807624
Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases
Dinu, Valentin; Nadkarni, Prakash
2007-01-01
Purpose To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
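The trade-off of "a modest data sub-schema for a complex metadata sub-schema" can be sketched as follows; the names are hypothetical, and a production metadata sub-schema would carry far more detail:

    -- Metadata sub-schema: one row describes each attribute the system may record.
    CREATE TABLE attribute_def (
        attribute_id INTEGER PRIMARY KEY,
        name         VARCHAR(64) NOT NULL,   -- e.g. 'serum_glucose'
        datatype     VARCHAR(16) NOT NULL,   -- e.g. 'numeric', 'text', 'date'
        units        VARCHAR(16)             -- e.g. 'mmol/L'
    );

    -- Data sub-schema: a single narrow table holds all sparse values.
    CREATE TABLE eav_value (
        entity_id    INTEGER NOT NULL,
        attribute_id INTEGER NOT NULL REFERENCES attribute_def (attribute_id),
        value        VARCHAR(255)
    );

Because the DBMS no longer enforces types, units, or presentation, those duties fall to the metadata tables and application code, which is what makes EAV design potentially more challenging than conventional design.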
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report describes two code-generating tools used to speed design and implementation of relational databases and user interfaces: CREATE-SCHEMA and BUILD-SCREEN. CREATE-SCHEMA produces the SQL commands that actually create and define the database. BUILD-SCREEN takes templates for data entry screens and generates the screen management system routine calls to display the desired screen. Both tools also generate the related FORTRAN declaration statements and precompiled SQL calls. Included with this report is the source code for a number of FORTRAN routines and functions used by the user interface. This code is broadly applicable to a number of different databases.
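For illustration, the output of a generator like CREATE-SCHEMA is ordinary SQL DDL of roughly this shape; the table below is a hypothetical example, not taken from the report:

    -- DDL of the kind a code generator emits from a higher-level template.
    CREATE TABLE sample_log (
        sample_id  INTEGER NOT NULL,
        entered_by CHAR(20),
        entered_on DATE,
        PRIMARY KEY (sample_id)
    );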
rCAD: A Novel Database Schema for the Comparative Analysis of RNA.
Ozer, Stuart; Doshi, Kishore J; Xu, Weijia; Gutell, Robin R
2011-12-31
Beyond its direct involvement in protein synthesis with mRNA, tRNA, and rRNA, RNA is now being appreciated for its significance in the overall metabolism and regulation of the cell. Comparative analysis has been very effective in the identification and characterization of RNA molecules, including the accurate prediction of their secondary structure. We are developing an integrative scalable data management and analysis system, the RNA Comparative Analysis Database (rCAD), implemented with SQL Server to support RNA comparative analysis. The platform-agnostic database schema of rCAD captures the essential relationships between the different dimensions of information for RNA comparative analysis datasets. The rCAD implementation enables a variety of comparative analysis manipulations with multiple integrated data dimensions for advanced RNA comparative analysis workflows. In this paper, we describe details of the rCAD schema design and illustrate its usefulness with two usage scenarios. PMID:24772454
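A greatly simplified sketch, in the spirit of (but not copied from) the rCAD design, of how alignment data can be held relationally; all names are hypothetical:

    -- Sequences, each linked to a taxonomy dimension.
    CREATE TABLE sequence (
        seq_id    INTEGER PRIMARY KEY,
        accession VARCHAR(32) NOT NULL,
        taxon_id  INTEGER                  -- link into a taxonomy table
    );

    -- One row per aligned position, so structure and covariation queries
    -- become joins and aggregates instead of custom file parsing.
    CREATE TABLE alignment_cell (
        aln_id INTEGER NOT NULL,           -- which alignment
        seq_id INTEGER NOT NULL REFERENCES sequence (seq_id),
        col_no INTEGER NOT NULL,           -- column within the alignment
        base   CHAR(1)                     -- residue or gap
    );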
Design of Knowledge Bases for Plant Gene Regulatory Networks.
Mukundi, Eric; Gomez-Cano, Fabio; Ouma, Wilberforce Zachary; Grotewold, Erich
2017-01-01
Developing a knowledge base that contains all the information necessary for a researcher studying gene regulation in a particular organism can be accomplished in four stages. The first is defining the data scope; we describe here the necessary information and resources, and outline the methods for obtaining data. The second stage consists of designing the schema, which involves defining the entire arrangement of the database in a systematic plan. The third stage is implementation, in which the database is actualized using software according to the predefined schema. The final stage is deployment, where the database is made available to users in a web-accessible system. The result is a knowledge base that integrates all the information pertaining to gene regulation and is easily expandable and transferable.
Benefits of an Object-oriented Database Representation for Controlled Medical Terminologies
Gu, Huanying; Halper, Michael; Geller, James; Perl, Yehoshua
1999-01-01
Objective: Controlled medical terminologies (CMTs) have been recognized as important tools in a variety of medical informatics applications, ranging from patient-record systems to decision-support systems. Controlled medical terminologies are typically organized in semantic network structures consisting of tens to hundreds of thousands of concepts. This overwhelming size and complexity can be a serious barrier to their maintenance and widespread utilization. The authors propose the use of object-oriented databases to address the problems posed by the extensive scope and high complexity of most CMTs for maintenance personnel and general users alike. Design: The authors present a methodology that allows an existing CMT, modeled as a semantic network, to be represented as an equivalent object-oriented database. Such a representation is called an object-oriented health care terminology repository (OOHTR). Results: The major benefit of an OOHTR is its schema, which provides an important layer of structural abstraction. Using the high-level view of a CMT afforded by the schema, one can gain insight into the CMT's overarching organization and begin to better comprehend it. The authors' methodology is applied to the Medical Entities Dictionary (MED), a large CMT developed at Columbia-Presbyterian Medical Center. Examples of how the OOHTR schema facilitated updating, correcting, and improving the design of the MED are presented. Conclusion: The OOHTR schema can serve as an important abstraction mechanism for enhancing comprehension of a large CMT, and thus promotes its usability. PMID:10428002
The BiolAD-DB system : an informatics system for clinical and genetic data.
Nielsen, David A; Leidner, Marty; Haynes, Chad; Krauthammer, Michael; Kreek, Mary Jeanne
2007-01-01
The Biology of Addictive Diseases-Database (BiolAD-DB) system is a research bioinformatics system for archiving, analyzing, and processing complex clinical and genetic data. The database schema employs design principles for handling complex clinical information, such as response items in genetic questionnaires. Data access and validation are provided by the BiolAD-DB client application, which features a data validation engine tightly coupled to a graphical user interface. Data integrity is provided by the password-protected BiolAD-DB SQL-compliant server and database. BiolAD-DB tools further provide functionalities for generating customized reports and views. The BiolAD-DB system schema, client, and installation instructions are freely available at http://www.rockefeller.edu/biolad-db/.
Predicting Host Level Reachability via Static Analysis of Routing Protocol Configuration
2007-09-01
SET check_function_bodies = false;
SET client_min_messages = warning;
-- Name: SCHEMA public; Type: COMMENT; Schema: -; Owner: postgres
COMMENT...public; Owner: mcmanst
-- Name: public; Type: ACL; Schema: -; Owner: postgres
REVOKE ALL ON SCHEMA public FROM PUBLIC;
REVOKE ALL ON SCHEMA public FROM postgres;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO PUBLIC;
-- PostgreSQL database
The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition
NASA Astrophysics Data System (ADS)
Fong, Joseph; Cheung, San Kuen
In the present database market, the XML database model is a main structure for the forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics, and system analysts have no toolset for modeling and analyzing an XML system. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is then translated by the XSD-Translator, whose output is an XSD, our target, called the object language.
Nonparametric Bayesian Modeling for Automated Database Schema Matching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferragut, Erik M; Laska, Jason A
2015-01-01
The problem of merging databases arises in many government and commercial applications. Schema matching, a common first step, identifies equivalent fields between databases. We introduce a schema matching framework that builds nonparametric Bayesian models for each field and compares them by computing the probability that a single model could have generated both fields. Our experiments show that our method is more accurate and faster than the existing instance-based matching algorithms in part because of the use of nonparametric Bayesian models.
Naval Ship Database: Database Design, Implementation, and Schema
2013-09-01
incoming data. The solution allows database users to store and analyze data collected by navy ships in the Royal Canadian Navy (RCN). The data...understanding RCN jargon and common practices on a typical RCN vessel. This experience led to the development of several error detection methods to...data to be stored in the database. Mr. Massel has also collected data pertaining to day-to-day activities on RCN vessels that has been imported into
Design of Integrated Database on Mobile Information System: A Study of Yogyakarta Smart City App
NASA Astrophysics Data System (ADS)
Nurnawati, E. K.; Ermawati, E.
2018-02-01
An integration database is a database which acts as the data store for multiple applications and thus integrates data across these applications (in contrast to an application database). An integration database needs a schema that takes all its client applications into account. The benefit of such a schema is that sharing data among applications does not require an extra layer of integration services on the applications. Any changes to data made in a single application are made available to all applications at the time of database commit, thus keeping the applications' data use better synchronized. This study aims to design and build an integrated database that can be used by various applications on a mobile-device-based platform, built on a smart city system. The resulting database can be used by various applications, whether together or separately. The design and development of the database emphasize flexibility, security, and completeness of the attributes to be shared by the various applications to be built. The method used in this study is to choose an appropriate logical database structure (patterns of data) and to build relational database models (database design). The resulting design is tested with prototype apps, and system performance is analyzed with test data. The integrated database can be utilized by both the admin and the user in an integral and comprehensive platform. This system can help admins, managers, and operators manage the application easily and efficiently. This Android-based app is built on a dynamic client-server model where data are extracted from an external MySQL database, so if data change in the database, the data in the Android application also change. The app assists users in searching for information related to Yogyakarta (as a smart city), especially on culture, government, hotels, and transportation.
Toward a view-oriented approach for aligning RDF-based biomedical repositories.
Anguita, A; García-Remesal, M; de la Iglesia, D; Graf, N; Maojo, V
2015-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". The need for complementary access to multiple RDF databases has fostered new lines of research, but also entailed new challenges due to data representation disparities. While several approaches for RDF-based database integration have been proposed, those focused on schema alignment have become the most widely adopted. All state-of-the-art solutions for aligning RDF-based sources resort to a simple technique inherited from legacy relational database integration methods. This technique - known as element-to-element (e2e) mappings - is based on establishing 1:1 mappings between single primitive elements - e.g. concepts, attributes, relationships, etc. - belonging to the source and target schemas. However, due to the intrinsic nature of RDF - a representation language based on defining tuples <subject, predicate, object> - one may find RDF elements whose semantics vary dramatically when combined into a view involving other RDF elements, i.e. they depend on their context. The latter cannot be adequately represented in the target schema by resorting to the traditional e2e approach. These approaches fail to properly address this issue without explicitly modifying the target ontology, thus lacking the expressiveness required to properly reflect the intended semantics in the alignment information. Our objective is to enhance existing RDF schema alignment techniques by providing a mechanism to properly represent elements with context-dependent semantics, thus enabling users to perform more expressive alignments, including scenarios that cannot be adequately addressed by the existing approaches. Instead of establishing 1:1 correspondences between single primitive elements of the schemas, we propose adopting a view-based approach. The latter is targeted at establishing mapping relationships between RDF subgraphs - which can be regarded as the equivalent of views in traditional databases - rather than between single schema elements. This approach enables users to represent scenarios defined by context-dependent RDF elements that cannot be properly represented when adopting the currently existing approaches. We developed a software tool implementing our view-based strategy. Our tool is currently being used in the context of the European Commission funded p-medicine project, targeted at creating a technological framework to integrate clinical and genomic data to facilitate the development of personalized drugs and therapies for cancer, based on the genetic profile of the patient. We used our tool to integrate different RDF-based databases - including different repositories of clinical trials and DICOM images - using the Health Data Ontology Trunk (HDOT) ontology as the target schema. The importance of database integration methods and tools in the context of biomedical research has been widely recognized. Modern research in this area - e.g. identification of disease biomarkers, or design of personalized therapies - heavily relies on the availability of a technical framework to enable researchers to uniformly access disparate repositories. We present a method and a tool implementing a novel alignment approach specifically designed to support and enhance the integration of RDF-based data sources at the schema (metadata) level. This approach provides an increased level of expressiveness compared to other existing solutions, and allows solving heterogeneity scenarios that cannot be properly represented using other state-of-the-art techniques.
Software Application for Supporting the Education of Database Systems
ERIC Educational Resources Information Center
Vágner, Anikó
2015-01-01
The article introduces an application which supports the education of database systems, particularly the teaching of SQL and PL/SQL in Oracle Database Management System environment. The application has two parts, one is the database schema and its content, and the other is a C# application. The schema is to administrate and store the tasks and the…
A natural language interface plug-in for cooperative query answering in biological databases.
Jamil, Hasan M
2012-06-11
One of the many unique features of biological databases is that the mere existence of a ground data item is not always a precondition for a query response. It may be argued that from a biologist's standpoint, queries are not always best posed using a structured language. By this we mean that approximate and flexible responses to natural-language-like queries are well suited for this domain. This is partly due to biologists' tendency to seek simpler interfaces and partly due to the fact that questions in biology involve high-level concepts that are open to interpretations computed using sophisticated tools. In such highly interpretive environments, rigidly structured databases do not always perform well. In this paper, our goal is to propose a semantic correspondence plug-in to aid natural language query processing over arbitrary biological database schemas, with an aim to providing cooperative responses to queries tailored to users' interpretations. Natural language interfaces for databases are generally effective when they are tuned to the underlying database schema and its semantics. Therefore, changes in the database schema become impossible to support, or a substantial reorganization cost must be absorbed to reflect any change. We leverage developments in natural language parsing, rule languages and ontologies, and data integration technologies to assemble a prototype query processor that is able to transform a natural language query into a semantically equivalent structured query over the database. We allow knowledge rules and their frequent modifications as part of the underlying database schema. The approach we adopt in our plug-in overcomes some of the serious limitations of many contemporary natural language interfaces, including support for schema modifications and independence from the underlying database schema. The plug-in introduced in this paper is generic and facilitates connecting user-selected natural language interfaces to arbitrary databases using a semantic description of the intended application. We demonstrate the feasibility of our approach with a practical example.
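The kind of transformation the plug-in performs can be illustrated with a hedged example; the schema, table names, and mapping below are hypothetical stand-ins for whatever the semantic description of the application supplies:

    -- Natural language: "Which genes on chromosome 7 are associated with asthma?"
    -- One plausible structured rendering over a hypothetical schema:
    SELECT g.symbol
    FROM gene g
    JOIN gene_disease gd ON gd.gene_id = g.gene_id
    JOIN disease d       ON d.disease_id = gd.disease_id
    WHERE g.chromosome = '7'
      AND d.name = 'asthma';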
An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.
Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S
1996-02-01
In the University of Tokyo Hospital, the improved client-server HIS has been applied to clinical practice, and physicians can directly order prescriptions, laboratory examinations, ECG examinations, radiographic examinations, etc. by themselves and read the results of these examinations - except medical signal waves, schemas and images - on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client-server HIS utilizing an object-oriented database, to take the first step in dealing with digitized signal, schema and image data and to show waves, graphics, and images directly to physicians through the client-server HIS. The system was developed based on object-oriented analysis and design, and implemented with an object-oriented database management system (OODMS) and the C++ programming language. In this paper, we describe the ECG data model, the functions of the storage and retrieval system, the features of the user interface, and the results of its implementation in the HIS.
2012-01-01
Background In the scientific biodiversity community, the need to build a bridge between molecular and traditional biodiversity studies is increasingly perceived. We believe that information technology could have a preeminent role in integrating the information generated by these studies with the large amount of molecular data found in public bioinformatics databases. This work is primarily aimed at building a bioinformatic infrastructure for the integration of public and private biodiversity data through the development of GIDL, an Intelligent Data Loader coupled with the Molecular Biodiversity Database. The system presented here organizes in an ontological way and locally stores the sequence and annotation data contained in the GenBank primary database. Methods The GIDL architecture consists of a relational database and an intelligent data loader software. The relational database schema is designed to manage biodiversity information (Molecular Biodiversity Database) and is organized in four areas: MolecularData, Experiment, Collection and Taxonomy. The MolecularData area is inspired by an established standard in Generic Model Organism Databases, the Chado relational schema. The peculiarity of Chado, and also its strength, is the adoption of an ontological schema which makes use of the Sequence Ontology. The Intelligent Data Loader (IDL) component of GIDL is an Extract, Transform and Load software able to parse data, to discover hidden information in the GenBank entries and to populate the Molecular Biodiversity Database. The IDL is composed of three main modules: the Parser, able to parse GenBank flat files; the Reasoner, which automatically builds CLIPS facts mapping the biological knowledge expressed by the Sequence Ontology; and the DBFiller, which translates the CLIPS facts into ordered SQL statements used to populate the database. In GIDL, Semantic Web technologies have been adopted due to their advantages in data representation, integration and processing. Results and conclusions Entries coming from the Virus (814,122), Plant (1,365,360) and Invertebrate (959,065) divisions of GenBank rel. 180 have been loaded into the Molecular Biodiversity Database by GIDL. Our system, combining the Sequence Ontology and the Chado schema, allows a more powerful query expressiveness compared with the most commonly used sequence retrieval systems like Entrez or SRS. PMID:22536971
Automated database design from natural language input
NASA Technical Reports Server (NTRS)
Gomez, Fernando; Segami, Carlos; Delaune, Carl
1995-01-01
Users and programmers of small systems typically do not have the skills needed to design a database schema from an English description of a problem. This paper describes a system that automatically designs databases for such small applications from English descriptions provided by end-users. Although the system has been motivated by the space applications at Kennedy Space Center, and portions of it have been designed with that idea in mind, it can be applied to different situations. The system consists of two major components: a natural language understander and a problem-solver. The paper describes briefly the knowledge representation structures constructed by the natural language understander, and, then, explains the problem-solver in detail.
NASA Technical Reports Server (NTRS)
Muniz, R.; Martinez, E.; Szafran, J.; Dalton, A.
2011-01-01
The Function Point Analysis (FPA) Depot is a web application originally designed by one of the NE-C3 branch's engineers, Jamie Szafran, and created specifically for the Software Development team of the Launch Control Systems (LCS) project. The application evaluates the work of each developer in order to obtain a realistic estimate of the hours to be assigned to a specific development task. The Architect Team had made design change requests for the depot to change the schema of the application's information; that information, changed in the database, needed to be changed in the graphical user interface (GUI), written in Ruby on Rails (RoR), and in the web service/server side, written in Java, to match the database changes. These changes were made by two interns from NE-C: Ricardo Muniz from NE-C3, who made all the schema changes for the GUI in RoR, and Edwin Martinez, from NE-C2, who made all the changes on the Java side.
SQL is Dead; Long-live SQL: Relational Database Technology in Science Contexts
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.
2014-12-01
Relational databases are often perceived as a poor fit in science contexts: rigid schemas, poor support for complex analytics, unpredictable performance, significant maintenance and tuning requirements --- these idiosyncrasies often make databases unattractive in science contexts characterized by heterogeneous data sources, complex analysis tasks, rapidly changing requirements, and limited IT budgets. In this talk, I'll argue that although the value proposition of typical relational database systems is weak in science, the core ideas that power relational databases have become incredibly prolific in open-source science software, and are emerging as a universal abstraction for both big data and small data. In addition, I'll talk about two open-source systems we are building to "jailbreak" the core technology of relational databases and adapt it for use in science. The first is SQLShare, a Database-as-a-Service system supporting collaborative data analysis and exchange by reducing database use to an Upload-Query-Share workflow with no installation, schema design, or configuration required. The second is Myria, a service that supports much larger-scale data and complex analytics, and supports multiple back-end systems. Finally, I'll describe some of the ways our collaborators in oceanography, astronomy, biology, fisheries science, and more are using these systems to replace script-based workflows for reasons of performance, flexibility, and convenience.
A Codasyl-Type Schema for Natural Language Medical Records
Sager, N.; Tick, L.; Story, G.; Hirschman, L.
1980-01-01
This paper describes a CODASYL (network) database schema for information derived from narrative clinical reports. The goal of this work is to create an automated process that accepts natural language documents as input and maps this information into a database of a type managed by existing database management systems. The schema described here represents the medical events and facts identified through the natural language processing. This processing decomposes each narrative into a set of elementary assertions, represented as MEDFACT records in the database. Each assertion in turn consists of a subject and a predicate classed according to a limited number of medical event types, e.g., signs/symptoms, laboratory tests, etc. The subject and predicate are represented by EVENT records which are owned by the MEDFACT record associated with the assertion. The CODASYL-type network structure was found to be suitable for expressing most of the relations needed to represent the natural language information. However, special mechanisms were developed for storing the time relations between EVENT records and for recording connections (such as causality) between certain MEDFACT records. This schema has been implemented using the UNIVAC DMS-1100 DBMS.
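For readers more at home with relational terms, a CODASYL set in which one record type owns another corresponds roughly to a foreign key. The sketch below is such a translation for illustration only, not the paper's DMS-1100 schema:

    -- Each elementary assertion (MEDFACT) owns its subject and predicate EVENTs.
    CREATE TABLE medfact (
        medfact_id INTEGER PRIMARY KEY,
        event_type VARCHAR(32) NOT NULL    -- e.g. 'sign/symptom', 'laboratory test'
    );

    CREATE TABLE event (
        event_id   INTEGER PRIMARY KEY,
        medfact_id INTEGER NOT NULL REFERENCES medfact (medfact_id),  -- the owner set
        role       VARCHAR(9) NOT NULL CHECK (role IN ('subject', 'predicate'))
    );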
A rudimentary database for three-dimensional objects using structural representation
NASA Technical Reports Server (NTRS)
Sowers, James P.
1987-01-01
A database which enables users to store and share the description of three-dimensional objects in a research environment is presented. The main objective of the design is to make it a compact structure that holds sufficient information to reconstruct the object. The database design is based on an object representation scheme which is information preserving, reasonably efficient, and yet economical in terms of the storage requirement. The determination of the needed data for the reconstruction process is guided by the belief that it is faster to do simple computations to generate needed data/information for construction than to retrieve everything from memory. Some recent techniques of three-dimensional representation that influenced the design of the database are discussed. The schema for the database and the structural definition used to define an object are given. The user manual for the software developed to create and maintain the contents of the database is included.
The tissue microarray OWL schema: An open-source tool for sharing tissue microarray data
Kang, Hyunseok P.; Borromeo, Charles D.; Berman, Jules J.; Becich, Michael J.
2010-01-01
Background: Tissue microarrays (TMAs) are enormously useful tools for translational research, but incompatibilities in database systems between various researchers and institutions prevent the efficient sharing of data that could help realize their full potential. Resource Description Framework (RDF) provides a flexible method to represent knowledge in triples, which take the form Subject-Predicate-Object. All data resources are described using Uniform Resource Identifiers (URIs), which are global in scope. We present an OWL (Web Ontology Language) schema that expands upon the TMA data exchange specification to address this issue and assist in data sharing and integration. Methods: A minimal OWL schema was designed containing only concepts specific to TMA experiments. More general data elements were incorporated from predefined ontologies such as the NCI thesaurus. URIs were assigned using the Linked Data format. Results: We present examples of files utilizing the schema and conversion of XML data (similar to the TMA DES) to OWL. Conclusion: By utilizing predefined ontologies and global unique identifiers, this OWL schema provides a solution to the limitations of XML, which represents concepts defined in a localized setting. This will help increase the utilization of tissue resources, facilitating collaborative translational research efforts. PMID:20805954
SORTEZ: a relational translator for NCBI's ASN.1 database.
Hart, K W; Searls, D B; Overton, G C
1994-07-01
The National Center for Biotechnology Information (NCBI) has created a database collection that includes several protein and nucleic acid sequence databases, a biosequence-specific subset of MEDLINE, as well as value-added information such as links between similar sequences. Information in the NCBI database is modeled in Abstract Syntax Notation 1 (ASN.1), an Open Systems Interconnection protocol designed for the purpose of exchanging structured data between software applications rather than as a data model for database systems. While the NCBI database is distributed with an easy-to-use information retrieval system, ENTREZ, the ASN.1 data model currently lacks an ad hoc query language for general-purpose data access. For that reason, we have developed a software package, SORTEZ, that transforms the ASN.1 database (or other databases with nested data structures) to a relational data model and subsequently to a relational database management system (Sybase), where information can be accessed through the relational query language SQL. Because the need to transform data from one data model and schema to another arises naturally in several important contexts, including efficient execution of specific applications, access to multiple databases, and adaptation to database evolution, this work also serves as a practical study of the issues involved in the various stages of database transformation. We show that transformation from the ASN.1 data model to a relational data model can be largely automated, but that schema transformation and data conversion require considerable domain expertise and would greatly benefit from additional support tools.
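The core structural mapping - flattening ASN.1's nested types into relations - can be sketched as follows; the names are hypothetical and far simpler than NCBI's actual ASN.1 types:

    -- An ASN.1 SEQUENCE that contains a SEQUENCE OF child records becomes
    -- a parent table plus a child table carrying a foreign key.
    CREATE TABLE bioseq (
        bioseq_id INTEGER PRIMARY KEY,
        title     VARCHAR(255)
    );

    CREATE TABLE seq_annot (
        annot_id  INTEGER PRIMARY KEY,
        bioseq_id INTEGER NOT NULL REFERENCES bioseq (bioseq_id),
        annot     VARCHAR(255)
    );

As the abstract notes, this structural step can be largely automated, while deciding which nestings deserve their own relations still requires domain expertise.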
Development of the Global Earthquake Model’s neotectonic fault database
Christophersen, Annemarie; Litchfield, Nicola; Berryman, Kelvin; Thomas, Richard; Basili, Roberto; Wallace, Laura; Ries, William; Hayes, Gavin P.; Haller, Kathleen M.; Yoshioka, Toshikazu; Koehler, Richard D.; Clark, Dan; Wolfson-Schwehr, Monica; Boettcher, Margaret S.; Villamor, Pilar; Horspool, Nick; Ornthammarath, Teraphan; Zuñiga, Ramon; Langridge, Robert M.; Stirling, Mark W.; Goded, Tatiana; Costa, Carlos; Yeats, Robert
2015-01-01
The Global Earthquake Model (GEM) aims to develop uniform, openly available, standards, datasets and tools for worldwide seismic risk assessment through global collaboration, transparent communication and adapting state-of-the-art science. GEM Faulted Earth (GFE) is one of GEM’s global hazard module projects. This paper describes GFE’s development of a modern neotectonic fault database and a unique graphical interface for the compilation of new fault data. A key design principle is that of an electronic field notebook for capturing observations a geologist would make about a fault. The database is designed to accommodate abundant as well as sparse fault observations. It features two layers, one for capturing neotectonic faults and fold observations, and the other to calculate potential earthquake fault sources from the observations. In order to test the flexibility of the database structure and to start a global compilation, five preexisting databases have been uploaded to the first layer and two to the second. In addition, the GFE project has characterised the world’s approximately 55,000 km of subduction interfaces in a globally consistent manner as a basis for generating earthquake event sets for inclusion in earthquake hazard and risk modelling. Following the subduction interface fault schema and including the trace attributes of the GFE database schema, the 2500-km-long frontal thrust fault system of the Himalaya has also been characterised. We propose the database structure to be used widely, so that neotectonic fault data can make a more complete and beneficial contribution to seismic hazard and risk characterisation globally.
NASA Astrophysics Data System (ADS)
Jones, A. S.; Horsburgh, J. S.; Matos, M.; Caraballo, J.
2015-12-01
Networks conducting long-term monitoring using in situ sensors need the functionality to track physical equipment as well as deployments, calibrations, and other actions related to site and equipment maintenance. The observational data being generated by sensors are enhanced if direct linkages to equipment details and actions can be made. This type of information is typically recorded in field notebooks or in static files, which are rarely linked to observations in a way that could be used to interpret results. However, the record of field activities is often relevant to analysis or post-processing of the observational data. We have developed an underlying database schema and deployed a web interface for recording and retrieving information on physical infrastructure and related actions for observational networks. The database schema for equipment was designed as an extension to the Observations Data Model 2 (ODM2), a community-developed information model for spatially discrete, feature-based earth observations. The core entities of ODM2 describe location, observed variable, and timing of observations, and the equipment extension contains entities to provide additional metadata specific to the inventory of physical infrastructure and associated actions. The schema is implemented in a relational database system for storage and management with an associated web interface. We designed the web-based tools for technicians to enter and query information on the physical equipment and actions such as site visits, equipment deployments, maintenance, and calibrations. These tools were implemented for the iUTAH (innovative Urban Transitions and Aridregion Hydrosustainability) ecohydrologic observatory, and we anticipate that they will be useful for similar large-scale monitoring networks desiring to link observing infrastructure to observational data to increase the quality of sensor-based data products.
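In the spirit of the extension described (a hypothetical, heavily simplified fragment; the actual ODM2 equipment extension differs):

    -- Physical infrastructure, and the actions performed on it, linked so that
    -- observations can later be traced to the equipment that produced them.
    CREATE TABLE equipment (
        equipment_id INTEGER PRIMARY KEY,
        serial_no    VARCHAR(64) NOT NULL,
        model        VARCHAR(64)
    );

    CREATE TABLE equipment_action (
        action_id    INTEGER PRIMARY KEY,
        equipment_id INTEGER NOT NULL REFERENCES equipment (equipment_id),
        action_type  VARCHAR(32) NOT NULL,   -- e.g. 'deployment', 'calibration'
        begin_time   TIMESTAMP NOT NULL,
        end_time     TIMESTAMP                -- open-ended while still deployed
    );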
Ologs: a categorical framework for knowledge representation.
Spivak, David I; Kent, Robert E
2012-01-01
In this paper we introduce the olog, or ontology log, a category-theoretic model for knowledge representation (KR). Grounded in formal mathematics, ologs can be rigorously formulated and cross-compared in ways that other KR models (such as semantic networks) cannot. An olog is similar to a relational database schema; in fact an olog can serve as a data repository if desired. Unlike database schemas, which are generally difficult to create or modify, ologs are designed to be user-friendly enough that authoring or reconfiguring an olog is a matter of course rather than a difficult chore. It is hoped that learning to author ologs is much simpler than learning a database definition language, despite their similarity. We describe ologs carefully and illustrate with many examples. As an application we show that any primitive recursive function can be described by an olog. We also show that ologs can be aligned or connected together into a larger network using functors. The various methods of information flow and institutions can then be used to integrate local and global world-views. We finish by providing several different avenues for future research. PMID:22303434
A complete history of everything
NASA Astrophysics Data System (ADS)
Lanclos, Kyle; Deich, William T. S.
2012-09-01
This paper discusses Lick Observatory's local solution for retaining a complete history of everything. Leveraging our existing deployment of a publish/subscribe communications model that is used to broadcast the state of all systems at Lick Observatory, a monitoring daemon runs on a dedicated server that subscribes to and records all published messages. Our success with this system is a testament to the power of simple, straightforward approaches to complex problems. The solution itself is written in Python, and the initial version required about a week of development time; the data are stored in PostgreSQL database tables using a distinctly simple schema. Over time, we addressed scaling issues as the data set grew, which involved reworking the PostgreSQL database schema on the back-end. We also duplicate the data in flat files to enable recovery or migration of the data from one server to another. This paper will cover both the initial design as well as the solutions to the subsequent deployment issues, the trade-offs that motivated those choices, and the integration of this history database with existing client applications.
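A "distinctly simple schema" for this style of publish/subscribe history capture might look like the following PostgreSQL sketch; the paper does not publish its DDL, so the names here are assumptions:

    -- One row per published keyword/value message, appended as it arrives.
    CREATE TABLE keyword_history (
        recorded_at TIMESTAMPTZ NOT NULL,   -- when the message was published
        service     TEXT        NOT NULL,   -- publishing subsystem
        keyword     TEXT        NOT NULL,   -- monitored quantity
        value       TEXT                    -- stored uninterpreted
    );
    CREATE INDEX keyword_history_idx
        ON keyword_history (service, keyword, recorded_at);

Appending every message to one narrow, well-indexed table keeps the writer trivial, which fits the week-of-development simplicity the authors describe; the scaling rework they mention could then focus on partitioning or archiving such a table.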
Development of a land-cover characteristics database for the conterminous U.S.
Loveland, Thomas R.; Merchant, J.W.; Ohlen, D.O.; Brown, Jesslyn F.
1991-01-01
Information regarding the characteristics and spatial distribution of the Earth's land cover is critical to global environmental research. A prototype land-cover database for the conterminous United States designed for use in a variety of global modelling, monitoring, mapping, and analytical endeavors has been created. The resultant database contains multiple layers, including the source AVHRR data, the ancillary data layers, the land-cover regions defined by the research, and translation tables linking the regions to other land classification schema (for example, UNESCO, USGS Anderson System). The land-cover characteristics database can be analyzed, transformed, or aggregated by users to meet a broad spectrum of requirements. -from Authors
Designing for Peta-Scale in the LSST Database
NASA Astrophysics Data System (ADS)
Kantor, J.; Axelrod, T.; Becla, J.; Cook, K.; Nikolaev, S.; Gray, J.; Plante, R.; Nieto-Santisteban, M.; Szalay, A.; Thakar, A.
2007-10-01
The Large Synoptic Survey Telescope (LSST), a proposed ground-based 8.4 m telescope with a 10 deg^2 field of view, will generate 15 TB of raw images every observing night. When calibration and processed data are added, the image archive, catalogs, and meta-data will grow 15 PB yr^{-1} on average. The LSST Data Management System (DMS) must capture, process, store, index, replicate, and provide open access to this data. Alerts must be triggered within 30 s of data acquisition. To do this in real-time at these data volumes will require advances in data management, database, and file system techniques. This paper describes the design of the LSST DMS and emphasizes features for peta-scale data. The LSST DMS will employ a combination of distributed database and file systems, with schema, partitioning, and indexing oriented for parallel operations. Image files are stored in a distributed file system with references to, and meta-data from, each file stored in the databases. The schema design supports pipeline processing, rapid ingest, and efficient query. Vertical partitioning reduces disk input/output requirements, horizontal partitioning allows parallel data access using arrays of servers and disks. Indexing is extensive, utilizing both conventional RAM-resident indexes and column-narrow, row-deep tag tables/covering indices that are extracted from tables that contain many more attributes. The DMS Data Access Framework is encapsulated in a middleware framework to provide a uniform service interface to all framework capabilities. This framework will provide the automated work-flow, replication, and data analysis capabilities necessary to make data processing and data quality analysis feasible at this scale.
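Horizontal partitioning of the kind described can be sketched in modern PostgreSQL syntax; this is illustrative only, as the LSST DMS design is not tied to this engine or syntax:

    -- A detections table partitioned by observation time, so arrays of
    -- servers and disks can scan partitions in parallel.
    CREATE TABLE source (
        source_id BIGINT           NOT NULL,
        obs_time  TIMESTAMPTZ      NOT NULL,
        ra        DOUBLE PRECISION,
        decl      DOUBLE PRECISION
    ) PARTITION BY RANGE (obs_time);

    CREATE TABLE source_2007_10 PARTITION OF source
        FOR VALUES FROM ('2007-10-01') TO ('2007-11-01');

Vertical partitioning, by contrast, splits rarely used attribute columns into side tables, and the "column-narrow, row-deep" tag tables are small extracted copies of the hottest columns kept cheap to scan and index.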
Modeling biology using relational databases.
Peitzsch, Robert M
2003-02-01
There are several different methodologies that can be used for designing a database schema; no one is the best for all occasions. This unit demonstrates two different techniques for designing relational tables and discusses when each should be used. These two techniques presented are (1) traditional Entity-Relationship (E-R) modeling and (2) a hybrid method that combines aspects of data warehousing and E-R modeling. The method of choice depends on (1) how well the information and all its inherent relationships are understood, (2) what types of questions will be asked, (3) how many different types of data will be included, and (4) how much data exists.
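A minimal sketch of the two styles, using invented gene-expression tables (SQLite is used only to make the DDL executable):

    import sqlite3

    db = sqlite3.connect(":memory:")

    # Style 1: traditional E-R modeling, fully normalized entities and
    # relationships; suits well-understood data with stable relationships.
    db.executescript("""
    CREATE TABLE gene   (gene_id   INTEGER PRIMARY KEY, symbol TEXT);
    CREATE TABLE sample (sample_id INTEGER PRIMARY KEY, tissue TEXT);
    CREATE TABLE expression (
        gene_id   INTEGER REFERENCES gene,
        sample_id INTEGER REFERENCES sample,
        level     REAL);
    """)

    # Style 2: a warehouse-style hybrid, one wide fact table ringed by
    # dimension tables; suits ad hoc analytical questions over large data.
    db.executescript("""
    CREATE TABLE dim_gene   (gene_key   INTEGER PRIMARY KEY, symbol TEXT, chromosome TEXT);
    CREATE TABLE dim_sample (sample_key INTEGER PRIMARY KEY, tissue TEXT, disease TEXT);
    CREATE TABLE fact_expression (
        gene_key   INTEGER REFERENCES dim_gene,
        sample_key INTEGER REFERENCES dim_sample,
        level      REAL,
        batch      TEXT);
    """)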
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrickson, K; Phillips, M; Fishburn, M
Purpose: To implement a common database structure and user-friendly web-browser based data collection tools across several medical institutions to better support evidence-based clinical decision making and comparative effectiveness research through shared outcomes data. Methods: A consortium of four academic medical centers agreed to implement a federated database, known as Oncospace. Initial implementation has addressed issues of differences between institutions in workflow and types and breadth of structured information captured. This requires coordination of data collection from departmental oncology information systems (OIS), treatment planning systems, and hospital electronic medical records in order to include as much as possible the multi-disciplinary clinical data associated with a patient's care. Results: The original database schema was well-designed and required only minor changes to meet institution-specific data requirements. Mobile browser interfaces for data entry and review for both the OIS and the Oncospace database were tailored for the workflow of individual institutions. Federation of database queries, the ultimate goal of the project, was tested using artificial patient data. The tests serve as proof-of-principle that the system as a whole, from data collection and entry to providing responses to research queries of the federated database, was viable. The resolution of inter-institutional use of patient data for research is still not completed. Conclusions: The migration from unstructured data, mainly in the form of notes and documents, to searchable, structured data is difficult. Making the transition requires cooperation of many groups within the department and can be greatly facilitated by using the structured data to improve clinical processes and workflow. The original database schema design is critical to providing enough flexibility for multi-institutional use to improve each institution's ability to study outcomes, determine best practices, and support research. The project has demonstrated the feasibility of deploying a federated database environment for research purposes to multiple institutions.
Milc, Justyna; Sala, Antonio; Bergamaschi, Sonia; Pecchioni, Nicola
2011-01-01
The CEREALAB database aims to store genotypic and phenotypic data obtained by the CEREALAB project and to integrate them with already existing data sources in order to create a tool for plant breeders and geneticists. The database can help them in unravelling the genetics of economically important phenotypic traits; in identifying and choosing molecular markers associated to key traits; and in choosing the desired parentals for breeding programs. The database is divided into three sub-schemas corresponding to the species of interest: wheat, barley and rice; each sub-schema is then divided into two sub-ontologies, regarding genotypic and phenotypic data, respectively. Database URL: http://www.cerealab.unimore.it/jws/cerealab.jnlp PMID:21247929
EasyKSORD: A Platform of Keyword Search Over Relational Databases
NASA Astrophysics Data System (ADS)
Peng, Zhaohui; Li, Jing; Wang, Shan
Keyword Search Over Relational Databases (KSORD) enables casual users to use keyword queries (a set of keywords) to search relational databases just like searching the Web, without any knowledge of the database schema or any need of writing SQL queries. Based on our previous work, we design and implement a novel KSORD platform named EasyKSORD for users and system administrators to use and manage different KSORD systems in a novel and simple manner. EasyKSORD supports advanced queries, efficient data-graph-based search engines, multiform result presentations, and system logging and analysis. Through EasyKSORD, users can search relational databases easily and read search results conveniently, and system administrators can easily monitor and analyze the operations of KSORD and manage KSORD systems much better.
MGIS: managing banana (Musa spp.) genetic resources information and high-throughput genotyping data
Guignon, V.; Sempere, G.; Sardos, J.; Hueber, Y.; Duvergey, H.; Andrieu, A.; Chase, R.; Jenny, C.; Hazekamp, T.; Irish, B.; Jelali, K.; Adeka, J.; Ayala-Silva, T.; Chao, C.P.; Daniells, J.; Dowiya, B.; Effa effa, B.; Gueco, L.; Herradura, L.; Ibobondji, L.; Kempenaers, E.; Kilangi, J.; Muhangi, S.; Ngo Xuan, P.; Paofa, J.; Pavis, C.; Thiemele, D.; Tossou, C.; Sandoval, J.; Sutanto, A.; Vangu Paka, G.; Yi, G.; Van den houwe, I.; Roux, N.
2017-01-01
Abstract Unraveling the genetic diversity held in genebanks on a large scale is underway, driven by advances in next-generation sequencing (NGS)-based technologies that produce high-density genetic markers for a large number of samples at low cost. Genebank users should be in a position to identify and select germplasm from the global genepool based on a combination of passport, genotypic and phenotypic data. To facilitate this, a new generation of information systems is being designed to efficiently handle data and link it with other external resources such as genome or breeding databases. The Musa Germplasm Information System (MGIS), the database for global ex situ-held banana genetic resources, has been developed to address those needs in a user-friendly way. In developing MGIS, we selected a generic database schema (Chado), the robust content management system Drupal for the user interface, and Tripal, a set of Drupal modules which links the Chado schema to Drupal. MGIS allows germplasm collection examination, accession browsing, advanced search functions, and germplasm orders. Additionally, we developed unique graphical interfaces to compare accessions and to explore them based on their taxonomic information. Accession-based data has been enriched with publications, genotyping studies and associated genotyping datasets reporting on germplasm use. Finally, an interoperability layer has been implemented to facilitate the link with complementary databases like the Banana Genome Hub and the MusaBase breeding database. Database URL: https://www.crop-diversity.org/mgis/ PMID:29220435
Development of the Plate Tectonics and Seismology markup languages with XML
NASA Astrophysics Data System (ADS)
Babaie, H.; Babaei, A.
2003-04-01
The Extensible Markup Language (XML) and its specifications, such as the XSD Schema, allow geologists to design discipline-specific vocabularies such as Seismology Markup Language (SeismML) or Plate Tectonics Markup Language (TectML). These languages make it possible to store and interchange structured geological information over the Web. Development of a geological markup language requires mapping geological concepts, such as "Earthquake" or "Plate", into a UML object model, applying a modeling and design environment. We have selected four inter-related geological concepts: earthquake, fault, plate, and orogeny, and developed four XML Schema Definitions (XSD) that define the relationships, cardinalities, hierarchies, and semantics of these concepts. In such a geological concept model, the UML object "Earthquake" is related to one or more "Wave" objects, each arriving at a seismic station at a specific "DateTime", and relating to a specific "Epicenter" object that lies at a unique "Location". The "Earthquake" object occurs along a "Segment" of a "Fault" object, which is related to a specific "Plate" object. The "Fault" has its own associations with such things as "Bend", "Step", and "Segment", and could be of any kind (e.g., "Thrust", "Transform"). The "Plate" is related to many other objects such as "MOR", "Subduction", and "Forearc", and is associated with an "Orogeny" object that relates to "Deformation" and "Strain" and several other objects. These UML objects were mapped into XML Metadata Interchange (XMI) formats, which were then converted into four XSD Schemas. The schemas were used to create and validate the XML instance documents, and to create a relational database hosting the plate tectonics and seismological data in the Microsoft Access format. The SeismML and TectML allow seismologists and structural geologists, among others, to submit and retrieve structured geological data on the Internet. A seismologist, for example, can submit peer-reviewed and reliable data about a specific earthquake to a Java Server Page on our web site hosting the XML application. Other geologists can readily retrieve the submitted data, saved in files or special tables of the designed database, through a search engine designed with J2EE (JSP, servlet, Java Bean) and XML specifications such as XPath, XPointer, and XSLT. When extended to include all the important concepts of seismology and plate tectonics, the two markup languages will make global interchange of geological data a reality.
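As a hypothetical illustration of the approach (the element names below are invented in the spirit of SeismML, not taken from the actual schemas), a small XSD can validate an instance document, here using the third-party lxml library:

    from lxml import etree

    # A drastically simplified, hypothetical schema in the spirit of SeismML;
    # the real SeismML/TectML define far richer types and relationships.
    xsd = etree.XMLSchema(etree.fromstring(b"""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="Earthquake">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="Epicenter">
              <xs:complexType>
                <xs:attribute name="lat" type="xs:double"/>
                <xs:attribute name="lon" type="xs:double"/>
              </xs:complexType>
            </xs:element>
            <xs:element name="Wave" maxOccurs="unbounded">
              <xs:complexType>
                <xs:attribute name="station" type="xs:string"/>
                <xs:attribute name="arrival" type="xs:dateTime"/>
              </xs:complexType>
            </xs:element>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>"""))

    doc = etree.fromstring(b"""
    <Earthquake>
      <Epicenter lat="36.1" lon="-120.7"/>
      <Wave station="PKD" arrival="2003-04-01T10:15:00Z"/>
    </Earthquake>""")

    print(xsd.validate(doc))   # True: the instance conforms to the schema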
Conceptual and logical level of database modeling
NASA Astrophysics Data System (ADS)
Hunka, Frantisek; Matula, Jiri
2016-06-01
Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of a database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities for value modeling to other business modeling approaches.
Evaluation Methodology for UML and GML Application Schemas Quality
NASA Astrophysics Data System (ADS)
Chojka, Agnieszka
2014-05-01
INSPIRE Directive implementation in Poland has caused a significant increase of interest in making spatial data and services available, particularly among public administration and private institutions. This has entailed a series of initiatives that aim to harmonise different spatial data sets so as to ensure their internal logical and semantic coherence. Harmonisation achieves interoperability of spatial databases, which among other things enables joining them together. The process of harmonisation requires either working out new data structures or adjusting the existing data structures of spatial databases to INSPIRE guidelines and recommendations. Data structures are described with the use of UML and GML application schemas. Working out accurate and correct application schemas is not an easy task, however. Many issues must be considered, for instance the recommendations of the ISO 19100 series of Geographic Information Standards, the regulations appropriate to a given problem or topic, and production opportunities and limitations (software, tools). In addition, a GML application schema is deeply connected with its UML application schema: it should be its translation. Not everything that can be expressed in UML can be directly expressed in GML, though, and this can have a significant influence on the interoperability of spatial data sets, and thereby on the ability to exchange valid data. For these reasons, the capability to examine and estimate the quality of UML and GML application schemas, including the capability to explore their entropy, would be very important. The principal subject of this research is to propose an evaluation methodology for the quality of UML and GML application schemas prepared in the Head Office of Geodesy and Cartography in Poland within the INSPIRE Directive implementation works.
Federated Web-accessible Clinical Data Management within an Extensible NeuroImaging Database
Keator, David B.; Wei, Dingying; Fennema-Notestine, Christine; Pease, Karen R.; Bockholt, Jeremy; Grethe, Jeffrey S.
2010-01-01
Managing vast datasets collected throughout multiple clinical imaging communities has become critical with the ever increasing and diverse nature of datasets. Development of data management infrastructure is further complicated by technical and experimental advances that drive modifications to existing protocols and acquisition of new types of research data to be incorporated into existing data management systems. In this paper, an extensible data management system for clinical neuroimaging studies is introduced: The Human Clinical Imaging Database (HID) and Toolkit. The database schema is constructed to support the storage of new data types without changes to the underlying schema. The complex infrastructure allows management of experiment data, such as image protocol and behavioral task parameters, as well as subject-specific data, including demographics, clinical assessments, and behavioral task performance metrics. Of significant interest, embedded clinical data entry and management tools enhance both consistency of data reporting and automatic entry of data into the database. The Clinical Assessment Layout Manager (CALM) allows users to create on-line data entry forms for use within and across sites, through which data is pulled into the underlying database via the generic clinical assessment management engine (GAME). Importantly, the system is designed to operate in a distributed environment, serving both human users and client applications in a service-oriented manner. Querying capabilities use a built-in multi-database parallel query builder/result combiner, allowing web-accessible queries within and across multiple federated databases. The system along with its documentation is open-source and available from the Neuroimaging Informatics Tools and Resource Clearinghouse (NITRC) site. PMID:20567938
Heterogeneous database integration in biomedicine.
Sujansky, W
2001-08-01
The rapid expansion of biomedical knowledge, reduction in computing costs, and spread of internet access have created an ocean of electronic data. The decentralized nature of our scientific community and healthcare system, however, has resulted in a patchwork of diverse, or heterogeneous, database implementations, making access to and aggregation of data across databases very difficult. The database heterogeneity problem applies equally to clinical data describing individual patients and biological data characterizing our genome. Specifically, databases are highly heterogeneous with respect to the data models they employ, the data schemas they specify, the query languages they support, and the terminologies they recognize. Heterogeneous database systems attempt to unify disparate databases by providing uniform conceptual schemas that resolve representational heterogeneities, and by providing querying capabilities that aggregate and integrate distributed data. Research in this area has applied a variety of database and knowledge-based techniques, including semantic data modeling, ontology definition, query translation, query optimization, and terminology mapping. Existing systems have addressed heterogeneous database integration in the realms of molecular biology, hospital information systems, and application portability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowers, M; Robertson, S; Moore, J
Purpose: Advancement in Radiation Oncology (RO) practice develops through evidence-based medicine and clinical trials. Knowledge usable for treatment planning, decision support, and research is contained in our clinical data, stored in an Oncospace database. This data store and the tools for populating and analyzing it are compatible with standard RO practice and are shared with collaborating institutions. The question is: what protocol should govern system development and data sharing within an Oncospace Consortium? We focus our example on the technology and data meaning necessary to share across the Consortium. Methods: Oncospace consists of a database schema, planning and outcome data import, and web-based analysis tools. 1) Database: The Consortium implements a federated data store; each member collects and maintains its own data within an Oncospace schema. For privacy, PHI is contained within a single table, accessible to the database owner. 2) Import: Spatial dose data from treatment plans (Pinnacle or DICOM) is imported via Oncolink. Treatment outcomes are imported from an OIS (MOSAIQ). 3) Analysis: JHU has built a number of web pages to answer analysis questions. Oncospace data can also be analyzed via MATLAB or SAS queries. These materials are available to Consortium members, who contribute enhancements and improvements. Results: 1) The Oncospace Consortium now consists of RO centers at JHU, UVA, UW, and the University of Toronto. These members have successfully installed and populated Oncospace databases with over 1000 patients collectively. 2) Members contribute code and receive updates via an SVN repository. Errors are reported and tracked via Redmine. Teleconferences include strategizing design and code reviews. 3) Federated databases were successfully queried remotely to combine multiple institutions' DVH data for dose-toxicity analysis (data combined from JHU and UW Oncospace). Conclusion: RO data sharing can be and has been effected according to the Oncospace Consortium model: http://oncospace.radonc.jhmi.edu/ . John Wong - SRA from Elekta; Todd McNutt - SRA from Elekta; Michael Bowers - funded by Elekta.
Spatial Designation of Critical Habitats for Endangered and Threatened Species in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuttle, Mark A; Singh, Nagendra; Sabesan, Aarthy
Establishing biological reserves or "hot spots" for endangered and threatened species is critical to support real-world species regulatory and management problems. Geographic data on the distribution of endangered and threatened species can be used to improve ongoing efforts for species conservation in the United States. At present, no spatial database exists which maps out the locations of endangered species for the US. However, spatial descriptions do exist for the habitat associated with all endangered species, but in a form not readily suitable for use in a geographic information system (GIS). In our study, the principal challenge was extracting spatial data describing these critical habitats for 472 species from over 1000 pages of the federal register. In addition, an appropriate database schema was designed to accommodate the different tiers of information associated with the species along with the confidence of designation; the interpreted location data was geo-referenced to the county enumeration unit, producing a spatial database of endangered species for the whole of the US. The significance of these critical habitat designations, database schema, and methodologies will be discussed.
Wollbrett, Julien; Larmande, Pierre; de Lamotte, Frédéric; Ruiz, Manuel
2013-04-15
In recent years, a large amount of "-omics" data have been produced. However, these data are stored in many different species-specific databases that are managed by different institutes and laboratories. Biologists often need to find and assemble data from disparate sources to perform certain analyses. Searching for these data and assembling them is a time-consuming task. The Semantic Web helps to facilitate interoperability across databases. A common approach involves the development of wrapper systems that map a relational database schema onto existing domain ontologies. However, few attempts have been made to automate the creation of such wrappers. We developed a framework, named BioSemantic, for the creation of Semantic Web Services that are applicable to relational biological databases. This framework makes use of both Semantic Web and Web Services technologies and can be divided into two main parts: (i) the generation and semi-automatic annotation of an RDF view; and (ii) the automatic generation of SPARQL queries and their integration into Semantic Web Services backbones. We have used our framework to integrate genomic data from different plant databases. BioSemantic is a framework that was designed to speed integration of relational databases. We present how it can be used to speed the development of Semantic Web Services for existing relational biological databases. Currently, it creates and annotates RDF views that enable the automatic generation of SPARQL queries. Web Services are also created and deployed automatically, and the semantic annotations of our Web Services are added automatically using SAWSDL attributes. BioSemantic is downloadable at http://southgreen.cirad.fr/?q=content/Biosemantic.
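A hand-sized illustration of the end product, assuming entirely hypothetical URIs and using the rdflib library: a small RDF view over a relational gene table, queried with the kind of SPARQL that BioSemantic generates automatically.

    from rdflib import Graph

    # A tiny hand-written RDF "view" of a relational gene table; in
    # BioSemantic both the view and the query below would be generated
    # automatically. All URIs here are invented.
    g = Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/plant#> .
    ex:gene42 a ex:Gene ;
        ex:symbol "WAXY" ;
        ex:locatedOn ex:chr6 .
    """, format="turtle")

    results = g.query("""
    PREFIX ex: <http://example.org/plant#>
    SELECT ?symbol WHERE {
        ?g a ex:Gene ;
           ex:symbol ?symbol ;
           ex:locatedOn ex:chr6 .
    }""")
    for row in results:
        print(row.symbol)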
Implementation of a Distributed Object-Oriented Database Management System
1989-03-01
and heuristic algorithms. A method for determining unit allocation by splitting relations in the conceptual schema based on queries and updates is... level frameworks can provide to the user the appearance of many tools to be closely integrated. In particular, the KBSA tools use many high level... development process should begin first with conceptual design of the system. Approximately one month should be used to decide how the new projects
User-oriented views in health care information systems.
Portoni, Luisa; Combi, Carlo; Pinciroli, Francesco
2002-12-01
In this paper, we present the methodology we adopted in designing and developing an object-oriented database system for the management of medical records. The designed system provides technical solutions to important requirements of most clinical information systems, such as 1) the support of tools to create and manage views on data and view schemas, offering to different users specific perspectives on data tailored to their needs; 2) the capability to handle in a suitable way the temporal aspects related to clinical information; and 3) the effective integration of multimedia data. Remote data access for authorized users is also considered. As clinical application, we describe here the prototype of a user-oriented clinical information system for the archiving and the management of multimedia and temporally oriented clinical data related to percutaneous transluminal coronary angioplasty (PTCA) patients. Suitable view schemas for various user roles (cath-lab physician, ward nurse, general practitioner) have been modeled and implemented on the basis of a detailed analysis of the considered clinical environment, carried out by an object-oriented approach.
Goldacre, Ben; Gray, Jonathan
2016-04-08
OpenTrials is a collaborative and open database for all available structured data and documents on all clinical trials, threaded together by individual trial. With a versatile and expandable data schema, it is initially designed to host and match the following documents and data for each trial: registry entries; links, abstracts, or texts of academic journal papers; portions of regulatory documents describing individual trials; structured data on methods and results extracted by systematic reviewers or other researchers; clinical study reports; and additional documents such as blank consent forms, blank case report forms, and protocols. The intention is to create an open, freely re-usable index of all such information and to increase discoverability, facilitate research, identify inconsistent data, enable audits on the availability and completeness of this information, support advocacy for better data and drive up standards around open data in evidence-based medicine. The project has phase I funding. This will allow us to create a practical data schema and populate the database initially through web-scraping, basic record linkage techniques, crowd-sourced curation around selected drug areas, and import of existing sources of structured data and documents. It will also allow us to create user-friendly web interfaces onto the data and conduct user engagement workshops to optimise the database and interface designs. Where other projects have set out to manually and perfectly curate a narrow range of information on a smaller number of trials, we aim to use a broader range of techniques and attempt to match a very large quantity of information on all trials. We are currently seeking feedback and additional sources of structured data.
SGDB: a database of synthetic genes re-designed for optimizing protein over-expression.
Wu, Gang; Zheng, Yuanpu; Qureshi, Imran; Zin, Htar Thant; Beck, Tyler; Bulka, Blazej; Freeland, Stephen J
2007-01-01
Here we present the Synthetic Gene Database (SGDB): a relational database that houses sequences and associated experimental information on synthetic (artificially engineered) genes from all peer-reviewed studies published to date. At present, the database comprises information from more than 200 published experiments. This resource not only provides reference material to guide experimentalists in designing new genes that improve protein expression, but also offers a dataset for analysis by bioinformaticians who seek to test ideas regarding the underlying factors that influence gene expression. The SGDB was built under the MySQL database management system. We also offer an XML schema for standardized data description of synthetic genes. Users can access the database at http://www.evolvingcode.net/codon/sgdb/index.php, or batch-download all information through XML files. Moreover, users may visually compare the coding sequences of a synthetic gene and its natural counterpart with an integrated web tool at http://www.evolvingcode.net/codon/sgdb/aligner.php, and discuss questions, findings and related information on an associated e-forum at http://www.evolvingcode.net/forum/viewforum.php?f=27.
Louis, John P; Wood, Alex M; Lockwood, George; Ho, Moon-Ho Ringo; Ferguson, Eamonn
2018-04-19
Negative schemas have been widely recognized as being linked to psychopathology and mental health, and they are central to the Schema Therapy (ST) model. This study is the first to report on the psychometric properties of the Young Positive Schema Questionnaire (YPSQ). In a combined community sample (Manila, Philippines, n = 559; Bangalore, India, n = 350; Singapore, n = 628), we identified a 56-item, 14-factor solution for the YPSQ. Multigroup confirmatory factor analysis supported the 14-factor model using data from two other independent samples: an Eastern sample from Kuala Lumpur, Malaysia (n = 229) and a Western sample from the United States (n = 214). Construct validity was demonstrated with the Young Schema Questionnaire 3 Short Form (YSQ-S3) that measures negative schemas, and divergent validity was demonstrated for 11 of the YPSQ subscales with their respective negative schema counterparts. Convergent validity of the 14 subscales of YPSQ was demonstrated with measures of personality dispositions, emotional distress, well-being, trait gratitude, and humor styles. Positive schemas also showed incremental validity over and above negative schemas for these same measures, thus demonstrating that both positive and negative schemas are separate constructs that relate in unique ways to mental health. Implications for using both the YPSQ and the YSQ-S3 scales in tandem in ST as well as cultural nuances from the use of Asian samples were discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
XCEDE: An Extensible Schema For Biomedical Data
Gadde, Syam; Aucoin, Nicole; Grethe, Jeffrey S.; Keator, David B.; Marcus, Daniel S.; Pieper, Steve
2013-01-01
The XCEDE (XML-based Clinical and Experimental Data Exchange) XML schema, developed by members of the BIRN (Biomedical Informatics Research Network), provides an extensive metadata hierarchy for storing, describing and documenting the data generated by scientific studies. Currently at version 2.0, the XCEDE schema serves as a specification for the exchange of scientific data between databases, analysis tools, and web services. It provides a structured metadata hierarchy, storing information relevant to various aspects of an experiment (project, subject, protocol, etc.). Each hierarchy level also provides for the storage of data provenance information allowing for a traceable record of processing and/or changes to the underlying data. The schema is extensible to support the needs of various data modalities and to express types of data not originally envisioned by the developers. The latest version of the XCEDE schema and manual are available from http://www.xcede.org/ PMID:21479735
Tagare, Hemant D.; Jaffe, C. Carl; Duncan, James
1997-01-01
Abstract Information contained in medical images differs considerably from that residing in alphanumeric format. The difference can be attributed to four characteristics: (1) the semantics of medical knowledge extractable from images is imprecise; (2) image information contains form and spatial data, which are not expressible in conventional language; (3) a large part of image information is geometric; (4) diagnostic inferences derived from images rest on an incomplete, continuously evolving model of normality. This paper explores the differentiating characteristics of text versus images and their impact on design of a medical image database intended to allow content-based indexing and retrieval. One strategy for implementing medical image databases is presented, which employs object-oriented iconic queries, semantics by association with prototypes, and a generic schema. PMID:9147338
Riffle, Michael; Jaschob, Daniel; Zelter, Alex; Davis, Trisha N
2016-08-05
ProXL is a Web application and accompanying database designed for sharing, visualizing, and analyzing bottom-up protein cross-linking mass spectrometry data with an emphasis on structural analysis and quality control. ProXL is designed to be independent of any particular software pipeline. The import process is simplified by the use of the ProXL XML data format, which shields developers of data importers from the relative complexity of the relational database schema. The database and Web interfaces function equally well for any software pipeline and allow data from disparate pipelines to be merged and contrasted. ProXL includes robust public and private data sharing capabilities, including a project-based interface designed to ensure security and facilitate collaboration among multiple researchers. ProXL provides multiple interactive and highly dynamic data visualizations that facilitate structural-based analysis of the observed cross-links as well as quality control. ProXL is open-source, well-documented, and freely available at https://github.com/yeastrc/proxl-web-app .
Schema for the LANL infrasound analysis tool, infrapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dannemann, Fransiska Kate; Marcillo, Omar Eduardo
2017-04-14
The purpose of this document is to define the schema used for the operation of the infrasound analysis tool, infrapy. The tables described by this document extend the CSS3.0 or KB core schema to include information required for the operation of infrapy. This document is divided into three sections, the first being this introduction. Section two defines eight new, infrasonic data processing-specific database tables. Both internal (ORACLE) and external formats for the attributes are defined, along with a short description of each attribute. Section three of the document shows the relationships between the different tables by using entity-relationship diagrams.
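The actual table definitions are given in the document itself; the sketch below is only a generic, hypothetical illustration of the pattern of extending a CSS3.0-style schema with a processing-specific results table, with all names invented:

    import sqlite3

    db = sqlite3.connect(":memory:")

    # A simplified CSS3.0-style core table: waveforms are identified by
    # station, channel, and a time window, and point at external files.
    db.executescript("""
    CREATE TABLE wfdisc (
        sta TEXT, chan TEXT,
        time REAL, endtime REAL,
        dir TEXT, dfile TEXT);

    -- A processing-specific extension table (hypothetical, not the actual
    -- infrapy tables) referencing waveforms the same way, so existing tools
    -- keep working while new attributes live alongside the core schema.
    CREATE TABLE fk_results (
        sta      TEXT,   -- reference station of the array
        time     REAL,   -- analysis window start (epoch seconds)
        endtime  REAL,   -- analysis window end
        azimuth  REAL,   -- back-azimuth of the detected signal (degrees)
        slowness REAL,   -- trace slowness of the detected signal
        fstat    REAL);  -- F-statistic of the detection
    """)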
Bichutskiy, Vadim Y.; Colman, Richard; Brachmann, Rainer K.; Lathrop, Richard H.
2006-01-01
Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research DataBase (CRDB) was designed in support of a project to find such small molecules. As a cancer informatics project, the CRDB involved small molecule data, computational docking results, functional assays, and protein structure data. As an example of the hybrid strategy for data integration, it combined the mediation and data warehousing approaches. This paper uses the CRDB to illustrate the hybrid strategy as a viable approach to heterogeneous data integration in biomedicine, and provides a design method for those considering similar systems. More efficient data sharing implies increased productivity, and, hopefully, improved chances of success in cancer research. (Code and database schemas are freely downloadable, http://www.igb.uci.edu/research/research.html.) PMID:19458771
Data Structures in Natural Computing: Databases as Weak or Strong Anticipatory Systems
NASA Astrophysics Data System (ADS)
Rossiter, B. N.; Heather, M. A.
2004-08-01
Information systems anticipate the real world. Classical databases store, organise and search collections of data of that real world but only as weak anticipatory information systems. This is because of the reductionism and normalisation needed to map the structuralism of natural data on to idealised machines with von Neumann architectures consisting of fixed instructions. Category theory, developed as a formalism to explore the theoretical concept of naturality, shows that methods like sketches, which arise from graph theory as only non-natural models of naturality, cannot capture real-world structures for strong anticipatory information systems. Databases need a schema of the natural world. Natural computing databases need the schema itself to be also natural. Natural computing methods including neural computers, evolutionary automata, molecular and nanocomputing and quantum computation have the potential to be strong. At present they are mainly at the stage of weak anticipatory systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Nix, Lisa Simirenko
2006-10-25
The BioImaging Database (BID) is a relational database developed to store the data and meta-data for 3D gene expression in early Drosophila embryo development at the cellular level. The schema was written to be used with the MySQL DBMS but with minor modifications can be used on any SQL-compliant relational DBMS.
An XML-based interchange format for genotype-phenotype data.
Whirl-Carrillo, M; Woon, M; Thorn, C F; Klein, T E; Altman, R B
2008-02-01
Recent advances in high-throughput genotyping and phenotyping have accelerated the creation of pharmacogenomic data. Consequently, the community requires standard formats to exchange large amounts of diverse information. To facilitate the transfer of pharmacogenomics data between databases and analysis packages, we have created a standard XML (eXtensible Markup Language) schema that describes both genotype and phenotype data as well as associated metadata. The schema accommodates information regarding genes, drugs, diseases, experimental methods, genomic/RNA/protein sequences, subjects, subject groups, and literature. The Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB; www.pharmgkb.org) has used this XML schema for more than 5 years to accept and process submissions containing more than 1,814,139 SNPs on 20,797 subjects using 8,975 assays. Although developed in the context of pharmacogenomics, the schema is of general utility for exchange of genotype and phenotype data. We have written syntactic and semantic validators to check documents using this format. The schema and code for validation is available to the community at http://www.pharmgkb.org/schema/index.html (last accessed: 8 October 2007). (c) 2007 Wiley-Liss, Inc.
A data management infrastructure for bridge monitoring
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Byun, Jaewook; Kim, Daeyoung; Sohn, Hoon; Bae, In Hwan; Law, Kincho H.
2015-04-01
This paper discusses a data management infrastructure framework for bridge monitoring applications. As sensor technologies mature and become economically affordable, their deployment for bridge monitoring will continue to grow. Data management becomes a critical issue not only for storing the sensor data but also for integrating with the bridge model to support other functions, such as management, maintenance and inspection. The focus of this study is on the effective data management of bridge information and sensor data, which is crucial to structural health monitoring and life cycle management of bridge structures. We review the state-of-the-art of bridge information modeling and sensor data management, and propose a data management framework for bridge monitoring based on NoSQL database technologies that have been shown useful in handling high volume, time-series data and to flexibly deal with unstructured data schema. Specifically, Apache Cassandra and Mongo DB are deployed for the prototype implementation of the framework. This paper describes the database design for an XML-based Bridge Information Modeling (BrIM) schema, and the representation of sensor data using Sensor Model Language (SensorML). The proposed prototype data management framework is validated using data collected from the Yeongjong Bridge in Incheon, Korea.
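A minimal sketch of the flexible-schema side of this design, assuming a local MongoDB instance and using invented collection and field names (the paper's actual BrIM and SensorML mappings are more elaborate):

    from pymongo import MongoClient

    # Connect to a local MongoDB; database, collection, and field names are
    # hypothetical, chosen to mirror the paper's split between bridge-model
    # data and high-volume sensor streams.
    client = MongoClient("mongodb://localhost:27017")
    db = client.bridge_monitoring

    # Document stores need no fixed schema: different sensor types can carry
    # different fields in the same collection without migrations.
    db.sensor_data.insert_one({
        "bridge": "Yeongjong",
        "sensor_id": "ACC-017",
        "kind": "accelerometer",
        "t": "2015-03-02T11:20:00Z",
        "samples_hz": 100,
        "values": [0.012, 0.015, 0.011],
    })

    # Retrieve recent readings for one sensor.
    for doc in db.sensor_data.find({"sensor_id": "ACC-017"}).limit(5):
        print(doc["t"], doc["kind"])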
Toward a Bio-Medical Thesaurus: Building the Foundation of the UMLS
Tuttle, Mark S.; Blois, Marsden S.; Erlbaum, Mark S.; Nelson, Stuart J.; Sherertz, David D.
1988-01-01
The Unified Medical Language System (UMLS) is being designed to provide a uniform user interface to heterogeneous machine-readable bio-medical information resources, such as bibliographic databases, genetic databases, expert systems and patient records. Such an interface will have to recognize different ways of saying the same thing, and provide links to ways of saying related things. One way to represent the necessary associations is via a domain thesaurus. As no such thesaurus exists, and because, once built, it will be both sizable and in need of continuous maintenance, its design should include a methodology for building and maintaining it. We propose a methodology, utilizing lexically expanded schema inversion, and a design, called T. Lex, which together form one approach to the problem of defining and building a bio-medical thesaurus. We argue that the semantic locality implicit in such a thesaurus will support model-based reasoning in bio-medicine.
Laukka, Petri; Elfenbein, Hillary Anger; Thingujam, Nutankumar S; Rockstuhl, Thomas; Iraki, Frederick K; Chui, Wanda; Althoff, Jean
2016-11-01
This study extends previous work on emotion communication across cultures with a large-scale investigation of the physical expression cues in vocal tone. In doing so, it provides the first direct test of a key proposition of dialect theory, namely that greater accuracy of detecting emotions from one's own cultural group-known as in-group advantage-results from a match between culturally specific schemas in emotional expression style and culturally specific schemas in emotion recognition. Study 1 used stimuli from 100 professional actors from five English-speaking nations vocally conveying 11 emotional states (anger, contempt, fear, happiness, interest, lust, neutral, pride, relief, sadness, and shame) using standard-content sentences. Detailed acoustic analyses showed many similarities across groups, and yet also systematic group differences. This provides evidence for cultural accents in expressive style at the level of acoustic cues. In Study 2, listeners evaluated these expressions in a 5 × 5 design balanced across groups. Cross-cultural accuracy was greater than expected by chance. However, there was also in-group advantage, which varied across emotions. A lens model analysis of fundamental acoustic properties examined patterns in emotional expression and perception within and across groups. Acoustic cues were used relatively similarly across groups both to produce and judge emotions, and yet there were also subtle cultural differences. Speakers appear to have a culturally nuanced schema for enacting vocal tones via acoustic cues, and perceivers have a culturally nuanced schema in judging them. Consistent with dialect theory's prediction, in-group judgments showed a greater match between these schemas used for emotional expression and perception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Schaap, Grietje M; Chakhssi, Farid; Westerhof, Gerben J
2016-12-01
This study provides an evaluation of group schema therapy (ST) for inpatient treatment of patients with personality pathology who did not respond to previous psychotherapeutic interventions. Forty-two patients were assessed pre- and posttreatment, and 35 patients were evaluated at follow-up 6 months later. The results showed a dropout rate of 35%. Those who dropped out did not differ from those who completed treatment with regard to demographic and clinical variables; the only exception was that those who dropped out showed a lower prevalence of mood disorders. Furthermore, intention-to-treat analyses showed a significant improvement in maladaptive schemas, schema modes, maladaptive coping styles, mental well-being, and psychological distress after treatment, and these improvements were maintained at follow-up. On the other hand, there was no significant change in experienced parenting style as self-reported by patients. Changes in schemas and schema modes measured from pre- to posttreatment were predictive of general psychological distress at follow-up. Overall, these preliminary findings suggest that positive treatment results can be obtained with group ST-based inpatient treatment for patients who did not respond to previous psychotherapeutic interventions. Moreover, these findings are comparable with treatment results for patients without such a nonresponsive treatment history. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Some issues in data model mapping
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Alsabbagh, Jamal R.
1985-01-01
Numerous data models have been reported in the literature since the early 1970's. They have been used as database interfaces and as conceptual design tools. The mapping between schemas expressed according to the same data model or according to different models is interesting for theoretical and practical purposes. This paper addresses some of the issues involved in such a mapping. Of special interest are the identification of the mapping parameters and some current approaches for handling the various situations that require a mapping.
Organization of Heterogeneous Scientific Data Using the EAV/CR Representation
Nadkarni, Prakash M.; Marenco, Luis; Chen, Roland; Skoufos, Emmanouil; Shepherd, Gordon; Miller, Perry
1999-01-01
Entity-attribute-value (EAV) representation is a means of organizing highly heterogeneous data using a relatively simple physical database schema. EAV representation is widely used in the medical domain, most notably in the storage of data related to clinical patient records. Its potential strengths suggest its use in other biomedical areas, in particular research databases whose schemas are complex as well as constantly changing to reflect evolving knowledge in rapidly advancing scientific domains. When deployed for such purposes, the basic EAV representation needs to be augmented significantly to handle the modeling of complex objects (classes) as well as to manage interobject relationships. The authors refer to their modification of the basic EAV paradigm as EAV/CR (EAV with classes and relationships). They describe EAV/CR representation with examples from two biomedical databases that use it. PMID:10579606
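The basic EAV pattern underneath EAV/CR can be sketched in a few lines; the class and relationship layer that EAV/CR adds on top is omitted here, and all names below are invented:

    import sqlite3

    db = sqlite3.connect(":memory:")

    # Basic EAV: one narrow table holds all facts as (entity, attribute,
    # value) rows, so new attributes require no schema change.
    db.executescript("""
    CREATE TABLE entity (entity_id INTEGER PRIMARY KEY, class TEXT);
    CREATE TABLE eav (
        entity_id INTEGER REFERENCES entity,
        attribute TEXT,
        value     TEXT);

    INSERT INTO entity VALUES (1, 'neuron');
    INSERT INTO eav VALUES (1, 'brain_region', 'olfactory bulb'),
                           (1, 'resting_potential_mV', '-65');
    """)

    # Pivoting attributes back into columns at query time:
    row = db.execute("""
        SELECT e.entity_id,
               MAX(CASE WHEN a.attribute = 'brain_region'
                        THEN a.value END) AS brain_region,
               MAX(CASE WHEN a.attribute = 'resting_potential_mV'
                        THEN a.value END) AS resting_potential_mV
        FROM entity e JOIN eav a USING (entity_id)
        GROUP BY e.entity_id""").fetchone()
    print(row)

The price of the narrow table is that conventional attribute-per-column views must be pivoted back at query time, as the CASE expressions show; EAV-style designs pay this cost to gain a schema that survives constant change in the modeled domain.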
WIS Implementation Study Report. Volume 3. Background Information.
1983-10-01
similar representations so that a single schema interpreter can serve in either environment. Examples of schema interpreters exist in all databases... Unfortunately, programs expecting a VSAM file cannot accept a similar, non-VSAM file instead. In practice, a file written using any of the 6...
Langer, Steve G
2016-06-01
In 2010, the DICOM Data Warehouse (DDW) was launched as a data warehouse for DICOM meta-data. Its chief design goals were to have a flexible database schema that enabled it to index standard patient and study information and modality-specific tags (public and private), and to create a framework to derive computable information (derived tags) from the former items. Furthermore, it was to map the above information to an internally standard lexicon that enables a non-DICOM-savvy programmer to write standard SQL queries and retrieve the equivalent data from a cohort of scanners, regardless of which tag that data element was found in over the changing epochs of DICOM and the ensuing migration of elements from private to public tags. After 5 years, the original design has scaled astonishingly well. Very little has changed in the database schema. The knowledge base is now fluent in over 90 device types. Also, additional stored procedures have been written to compute data that is derivable from standard or mapped tags. Finally, an early concern, that the system would not be able to address the variability of DICOM-SR objects, has been addressed. As of this writing, the system is indexing 300 MR, 600 CT, and 2000 other (XA, DR, CR, MG) imaging studies per day. The only remaining issue to be solved is the case of tags that were not prospectively indexed; indeed, this final challenge may lead to a noSQL, big data approach in a subsequent version.
Creating Access to Data of Worldwide Volcanic Unrest
NASA Astrophysics Data System (ADS)
Venezky, D. Y.; Newhall, C. G.; Malone, S. D.
2003-12-01
We are creating a pilot database (WOVOdat - the World Organization of Volcano Observatories database) using an open source database and content generation software, allowing web access to data of worldwide volcanic seismicity, ground deformation, fumarolic activity, and other changes within or adjacent to a volcanic system. After three years of discussions with volcano observatories of the WOVO community and institutional databases such as IRIS, UNAVCO, and the Smithsonian's Global Volcanism Program about how to link global data of volcanic unrest for use during crisis situations and for research, we are now developing the pilot database. We have already created the core tables and have written simple queries that access some of the available data using pull-down menus on a website. Over the next year, we plan to complete schema realization, expand querying capabilities, and then open the pilot database for a multi-year data-loading process. Many of the challenges we are encountering are common to multidisciplinary projects and include determining standard data formats, choosing levels of data detail (raw vs. minimally processed data, summary intervals vs. continuous data, etc.), and organizing the extant but variable data into a useable schema. Additionally, we are working on how best to enter the varied data into the database (scripts for digital data and web-entry tools for non-digital data) and what standard sets of queries are most important. An essential query during an evolving volcanic crisis would be: "Has any volcano shown the behavior being observed here, and what happened?". We believe that with a systematic aggregation of all datasets on volcanic unrest, we should be able to find patterns that were previously inaccessible or unrecognized. The second WOVOdat workshop in 2002 provided a recent forum for discussion of data formats, database access, and schemas. The formats and units for the discussed parameters can be viewed at http://www.wovo.org/WOVOdat/parameters.htm. Comments, suggestions, and participation in all aspects of the WOVOdat project are welcome and appreciated.
Sharing Epigraphic Information as Linked Data
NASA Astrophysics Data System (ADS)
Álvarez, Fernando-Luis; García-Barriocanal, Elena; Gómez-Pantoja, Joaquín-L.
The diffusion of epigraphic data has evolved in recent years from printed catalogues to indexed digital databases shared through the Web. Recently, the open EpiDoc specifications have resulted in an XML-based schema for the interchange of ancient texts that uses XSLT to render typographic representations. However, these schemas and representation systems are still not providing a way to encode computational semantics and semantic relations between pieces of epigraphic data. This paper sketches an approach to bring these semantics into an EpiDoc based schema using the Ontology Web Language (OWL) and following the principles and methods of information sharing known as "linked data". The paper describes the general principles of the OWL mapping of the EpiDoc schema and how epigraphic data can be shared in RDF format via dereferenceable URIs that can be used to build advanced search, visualization and analysis systems.
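A sketch of the intended pattern, with hypothetical namespaces (the actual EpiDoc OWL mapping is not reproduced here), using the rdflib library:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    # Invented namespace and URIs, illustrating only the linked-data pattern
    # of publishing each inscription under a dereferenceable URI.
    EPI = Namespace("http://example.org/epigraphy#")
    inscription = URIRef("http://example.org/inscription/CIL-II-1234")

    g = Graph()
    g.bind("epi", EPI)
    g.add((inscription, RDF.type, EPI.Inscription))
    g.add((inscription, EPI.findspot, Literal("Complutum")))
    g.add((inscription, EPI.text, Literal("D(is) M(anibus) s(acrum) ...")))

    # The serialized RDF is what a client would receive when dereferencing
    # the inscription's URI.
    print(g.serialize(format="turtle"))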
The research infrastructure of Chinese foundations, a database for Chinese civil society studies
Ma, Ji; Wang, Qun; Dong, Chao; Li, Huafang
2017-01-01
This paper provides technical details and user guidance on the Research Infrastructure of Chinese Foundations (RICF), a database of Chinese foundations, civil society, and social development in general. The structure of the RICF is deliberately designed and normalized according to the Three Normal Forms. The database schema consists of three major themes: foundations’ basic organizational profile (i.e., basic profile, board member, supervisor, staff, and related party tables), program information (i.e., program information, major program, program relationship, and major recipient tables), and financial information (i.e., financial position, financial activities, cash flow, activity overview, and large donation tables). The RICF’s data quality can be measured by four criteria: data source reputation and credibility, completeness, accuracy, and timeliness. Data records are properly versioned, allowing verification and replication for research purposes. PMID:28742065
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theodore Larrieu, Christopher Slominski, Michele Joyce
2011-03-01
With the inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting control computers to building controls screens. A requirement influencing the CED design is that it provide access to not only present, but also future and past configurations of the accelerator. To accomplish this, an introspective database schema was designed that allows new elements, types, and properties to be defined on-the-fly with no changes to table structure. Used in conjunction with Oracle Workspace Manager, it allows users to query data from any time in the database history with the same tools used to query the present configuration. Users can also check out workspaces to use as staging areas for upcoming machine configurations. All access to the CED is through a well-documented Application Programming Interface (API) that is translated automatically from original C++ source code into native libraries for scripting languages such as perl, php, and TCL, making access to the CED easy and ubiquitous.
Reddy, T.B.K.; Thomas, Alex D.; Stamatis, Dimitri; Bertsch, Jon; Isbandi, Michelle; Jansson, Jakob; Mallajosyula, Jyothi; Pagani, Ioanna; Lobos, Elizabeth A.; Kyrpides, Nikos C.
2015-01-01
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Here we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards. PMID:25348402
Zhang, Yinsheng; Zhang, Guoming; Shang, Qian
2017-01-01
Reusing the data from healthcare information systems can effectively facilitate clinical trials (CTs). How to select candidate patients eligible for CT recruitment criteria is a central task. Related work either depends on DBA (database administrator) to convert the recruitment criteria to native SQL queries or involves the data mapping between a standard ontology/information model and individual data source schema. This paper proposes an alternative computer-aided CT recruitment paradigm, based on syntax translation between different DSLs (domain-specific languages). In this paradigm, the CT recruitment criteria are first formally represented as production rules. The referenced rule variables are all from the underlying database schema. Then the production rule is translated to an intermediate query-oriented DSL (e.g., LINQ). Finally, the intermediate DSL is directly mapped to native database queries (e.g., SQL) automated by ORM (object-relational mapping).
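A toy sketch of the rule-to-query translation this paradigm describes, collapsing the intermediate DSL step: a recruitment criterion expressed as (column, operator, value) triples is mapped directly to parameterized SQL. The rule format and table name are invented for illustration; the paper's actual pipeline passes through LINQ and an ORM:

    # One recruitment criterion as a simple production rule:
    # IF age >= 18 AND hba1c > 7.0 THEN candidate.
    rule = [("age", ">=", 18), ("hba1c", ">", 7.0)]

    ALLOWED_OPS = {">", ">=", "<", "<=", "="}

    def rule_to_sql(rule, table="patients"):
        """Translate a conjunction of (column, op, value) triples into one SQL query."""
        clauses = []
        params = []
        for column, op, value in rule:
            if op not in ALLOWED_OPS:
                raise ValueError(f"unsupported operator: {op}")
            clauses.append(f"{column} {op} ?")   # parameterized to avoid injection
            params.append(value)
        return f"SELECT id FROM {table} WHERE " + " AND ".join(clauses), params

    sql, params = rule_to_sql(rule)
    print(sql)     # SELECT id FROM patients WHERE age >= ? AND hba1c > ?
    print(params)  # [18, 7.0]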
ProtaBank: A repository for protein design and engineering data.
Wang, Connie Y; Chang, Paul M; Ary, Marie L; Allen, Benjamin D; Chica, Roberto A; Mayo, Stephen L; Olafson, Barry D
2018-03-25
We present ProtaBank, a repository for storing, querying, analyzing, and sharing protein design and engineering data in an actively maintained and updated database. ProtaBank provides a format to describe and compare all types of protein mutational data, spanning a wide range of properties and techniques. It features a user-friendly web interface and programming layer that streamlines data deposition and allows for batch input and queries. The database schema design incorporates a standard format for reporting protein sequences and experimental data that facilitates comparison of results across different data sets. A suite of analysis and visualization tools is provided to facilitate discovery, to guide future designs, and to benchmark and train new predictive tools and algorithms. ProtaBank will provide a valuable resource to the protein engineering community by storing and safeguarding newly generated data, allowing for fast searching and identification of relevant data from the existing literature, and enabling exploration of correlations between disparate data sets. ProtaBank invites researchers to contribute data to the database to make it accessible for search and analysis. ProtaBank is available at https://protabank.org. © 2018 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.
The Star Schema Benchmark and Augmented Fact Table Indexing
NASA Astrophysics Data System (ADS)
O'Neil, Patrick; O'Neil, Elizabeth; Chen, Xuedong; Revilak, Stephen
We provide a benchmark measuring star schema queries retrieving data from a fact table with Where clause column restrictions on dimension tables. Clustering is crucial to performance with modern disk technology, since retrievals with filter factors down to 0.0005 are now performed most efficiently by sequential table search rather than by indexed access. DB2’s Multi-Dimensional Clustering (MDC) provides methods to "dice" the fact table along a number of orthogonal "dimensions", but only when these dimensions are columns in the fact table. The diced cells cluster fact rows on several of these "dimensions" at once so queries restricting several such columns can access crucially localized data, with much faster query response. Unfortunately, columns of dimension tables of a star schema are not usually represented in the fact table. In this paper, we show a simple way to adjoin physical copies of dimension columns to the fact table, dicing data to effectively cluster query retrieval, and explain how such dicing can be achieved on database products other than DB2. We provide benchmark measurements to show successful use of this methodology on three commercial database products.
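The core idea, adjoining physical copies of dimension columns to the fact table so that restrictions become single-table scans, can be shown in generic SQL (here run through Python's sqlite3). Table names loosely follow the Star Schema Benchmark; the product-specific clustering/dicing step is omitted:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE date_dim  (d_datekey INTEGER PRIMARY KEY, d_year INTEGER);
    CREATE TABLE lineorder (lo_orderkey INTEGER, lo_orderdate INTEGER, lo_revenue INTEGER);
    INSERT INTO date_dim VALUES (19940101, 1994), (19950101, 1995);
    INSERT INTO lineorder VALUES (1, 19940101, 100), (2, 19950101, 250);

    -- Adjoin a physical copy of the dimension column d_year to the fact table,
    -- so the data can be clustered/diced on it without a join at query time.
    ALTER TABLE lineorder ADD COLUMN lo_d_year INTEGER;
    UPDATE lineorder SET lo_d_year =
        (SELECT d_year FROM date_dim WHERE d_datekey = lo_orderdate);
    """)

    # A star-schema restriction now becomes a single-table scan on the fact table.
    for row in con.execute("SELECT lo_orderkey, lo_revenue FROM lineorder WHERE lo_d_year = 1994"):
        print(row)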
Pulverman, Carey S; Boyd, Ryan L; Stanton, Amelia M; Meston, Cindy M
2017-03-01
Sexual self-schemas are cognitive generalizations about the sexual self that influence the processing of sexually pertinent information and guide sexual behavior. Until recently sexual self-schemas were exclusively assessed with self-report instruments. Recent research using the meaning extraction method, an inductive method of topic modeling, identified 7 unique themes of sexual self-schemas: family and development, virginity, abuse, relationship, sexual activity, attraction, and existentialism from essays of 239 women (Stanton, Boyd, Pulverman, & Meston, 2015). In the current study, these themes were used to examine changes in theme prominence after an expressive writing treatment. Women (n = 138) with a history of childhood sexual abuse completed a 5-session expressive writing treatment, and essays on sexual self-schemas written at pretreatment and posttreatment were examined for changes in themes. Women showed a reduction in the prominence of the abuse, family and development, virginity, and attraction themes, and an increase in the existentialism theme. This study supports the validity of the 7 themes identified by Stanton and colleagues (2015) and suggests that expressive writing may aid women with a history of sexual abuse to process their abuse history such that it becomes a less salient aspect of their sexual self-schemas. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Acquiring Software Design Schemas: A Machine Learning Perspective
NASA Technical Reports Server (NTRS)
Harandi, Mehdi T.; Lee, Hing-Yan
1991-01-01
In this paper, we describe an approach based on machine learning that acquires software design schemas from design cases of existing applications. An overview of the technique, design representation, and acquisition system is presented. The paper also addresses issues associated with generalizing common features, such as biases. The generalization process is illustrated using an example.
Schema Theory: A Basis for Domain Integration Design.
ERIC Educational Resources Information Center
Suzuki, Katsuaki
The cognitive and affective domains of learning outcomes--i.e., intellectual skills, verbal information, cognitive strategies, and attitudes--are parts of every schema. Located within an individual schema, these capabilities are interrelated, and acquiring one capability is likely to have an effect on other types of capabilities within the same…
Earth Science Markup Language: Transitioning From Design to Application
NASA Technical Reports Server (NTRS)
Moe, Karen; Graves, Sara; Ramachandran, Rahul
2002-01-01
The primary objective of the proposed Earth Science Markup Language (ESML) research is to transition from design to application. The resulting schema and prototype software will foster community acceptance for the "define once, use anywhere" concept central to ESML. Supporting goals include: 1. Refinement of the ESML schema and software libraries in cooperation with the user community. 2. Application of the ESML schema and software libraries to a variety of Earth science data sets and analysis tools. 3. Development of supporting prototype software for enhanced ease of use. 4. Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate. 5. Widespread publication of the ESML approach, schema, and software.
Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics
Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.
2012-01-01
With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
Integration of Web-based and PC-based clinical research databases.
Brandt, C A; Sun, K; Charpentier, P; Nadkarni, P M
2004-01-01
We have created a Web-based repository or data library of information about measurement instruments used in studies of multi-factorial geriatric health conditions (the Geriatrics Research Instrument Library - GRIL) based upon existing features of two separate clinical study data management systems. GRIL allows browsing, searching, and selecting measurement instruments based upon criteria such as keywords and areas of applicability. Measurement instruments selected can be printed and/or included in an automatically generated standalone microcomputer database application, which can be downloaded by investigators for use in data collection and data management. Integration of database applications requires the creation of a common semantic model, and mapping from each system to this model. Various database schema conflicts at the table and attribute level must be identified and resolved prior to integration. Using a conflict taxonomy and a mapping schema facilitates this process. Critical conflicts at the table level that required resolution included name and relationship differences. A major benefit of integration efforts is the sharing of features and cross-fertilization of applications created for similar purposes in different operating environments. Integration of applications mandates some degree of metadata model unification.
A JEE RESTful service to access Conditions Data in ATLAS
NASA Astrophysics Data System (ADS)
Formica, Andrea; Gallas, E. J.
2015-12-01
Usage of conditions data in ATLAS is extensive for offline reconstruction and analysis (e.g. alignment, calibration, data quality). The system is based on the LCG Conditions Database infrastructure, with read and write access via an ad hoc C++ API (COOL), a system which was developed before Run 1 data taking began. The infrastructure dictates that the data is organized into separate schemas (assigned to subsystems/groups storing distinct and independent sets of conditions), making it difficult to access information from several schemas at the same time. We have thus created PL/SQL functions containing queries to provide content extraction at the multi-schema level. The PL/SQL API has been exposed to external clients by means of a Java application providing DB access via REST services, deployed inside an application server (JBoss WildFly). The services allow navigation over multiple schemas via simple URLs. The data can be retrieved in either XML or JSON format, via simple clients (like curl or Web browsers).
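Such a REST layer can be exercised with any HTTP client. A hypothetical sketch with Python's requests, where the host, path, and parameters are invented placeholders rather than the real ATLAS service URLs:

    import requests

    # Hypothetical endpoint: navigate a schema and node via the URL path,
    # asking for JSON rather than XML through the Accept header.
    BASE = "https://conditions.example.org/api"
    resp = requests.get(
        f"{BASE}/schemas/CALO/nodes/Calibration",
        params={"since": "run-200000"},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    for payload in resp.json().get("payloads", []):
        print(payload)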
HodDB: Design and Analysis of a Query Processor for Brick.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fierro, Gabriel; Culler, David
Brick is a recently proposed metadata schema and ontology for describing building components and the relationships between them. It represents buildings as directed labeled graphs using the RDF data model. Using the SPARQL query language, building-agnostic applications query a Brick graph to discover the set of resources and relationships they require to operate. Latency-sensitive applications, such as user interfaces, demand response and model-predictive control, require fast queries, conventionally less than 100ms. We benchmark a set of popular open-source and commercial SPARQL databases against three real Brick models using seven application queries and find that none of them meet this performance target. This lack of performance can be attributed to design decisions that optimize for queries over large graphs consisting of billions of triples, but give poor spatial locality and join performance on the small dense graphs typical of Brick. We present the design and evaluation of HodDB, an RDF/SPARQL database for Brick built over a node-based index structure. HodDB performs Brick queries 3-700x faster than leading SPARQL databases and consistently meets the 100ms threshold, enabling the portability of important latency-sensitive building applications.
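An example of the kind of latency-sensitive Brick application query benchmarked here, run with rdflib's pure-Python SPARQL engine; the model file is assumed, and the class and relationship names are illustrative of Brick rather than taken from the paper's query set:

    from rdflib import Graph

    g = Graph()
    g.parse("building.ttl", format="turtle")  # a Brick model of one building (assumed file)

    # Find zone temperature sensors that are points of VAV boxes -- a typical
    # building-agnostic application query over the Brick relationship graph.
    query = """
    PREFIX brick: <https://brickschema.org/schema/Brick#>
    PREFIX bf:    <https://brickschema.org/schema/BrickFrame#>
    SELECT ?vav ?sensor WHERE {
        ?vav    a brick:VAV .
        ?sensor a brick:Zone_Temperature_Sensor .
        ?vav    bf:hasPoint ?sensor .
    }
    """
    for vav, sensor in g.query(query):
        print(vav, sensor)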
PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan
Kinjo, Akira R.; Yamashita, Reiko; Nakamura, Haruki
2010-01-01
This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served to the users of PDBj via the World Wide Web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in a way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table, which is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform a simple keyword search and an 'Advanced Search' that can specify various conditions on the entries. More experienced users can query the database using SQL statements, which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/ PMID:20798081
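A toy sketch of the XPath-to-relational mapping idea: each XPath type found in the XML becomes a table named after the path, and leaf content becomes rows. The XML snippet and naming convention are invented for illustration and are far simpler than the actual PDBMLplus schema:

    import sqlite3
    import xml.etree.ElementTree as ET

    doc = ET.fromstring(
        "<entry><cell><length_a>61.2</length_a><length_b>88.5</length_b></cell></entry>"
    )

    con = sqlite3.connect(":memory:")

    def load(elem, path=""):
        """Walk the tree; index each leaf under a table named after its XPath."""
        path = f"{path}/{elem.tag}"
        if len(elem) == 0:
            table = path.strip("/").replace("/", "_")  # e.g. entry_cell_length_a
            con.execute(f'CREATE TABLE IF NOT EXISTS "{table}" (value TEXT)')
            con.execute(f'INSERT INTO "{table}" VALUES (?)', (elem.text,))
        for child in elem:
            load(child, path)

    load(doc)
    print(con.execute('SELECT value FROM "entry_cell_length_a"').fetchall())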
Object-Oriented Approach to Integrating Database Semantics. Volume 4.
1987-12-01
Presents schemata for: (1) an object classification schema (entities); (2) an object structure and relationship schema (relations); and (3) an operation classification schema. Notes that the way relationships are represented in a database is non-intuitive for naive users, and that it is difficult to access and combine information held in multiple databases.
Construction of the Dependence Matrix Based on the TRIZ Contradiction Matrix in OOD
NASA Astrophysics Data System (ADS)
Ma, Jianhong; Zhang, Quan; Wang, Yanling; Luo, Tao
In Object-Oriented software design (OOD), the design of classes and objects, the definition of class interfaces and inheritance levels, and the determination of dependency relations have a serious impact on the reusability and flexibility of a system. How to select the right solution for a concrete design problem from among hundreds of design schemas has become a focus of designers' attention. After analyzing many practical software design schemas and Object-Oriented design patterns, this paper constructs a dependence matrix for the Object-Oriented software design field, referring to the contradiction matrix of TRIZ (Theory of Inventive Problem Solving) proposed by the Soviet innovation researcher Altshuller. As practice indicates, it provides an intuitive, common, and standardized method for designers to choose the right design schema, makes research and communication more effective, and improves software development efficiency and software quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, Tatiparthi B. K.; Thomas, Alex D.; Stamatis, Dimitri
The Genomes OnLine Database (GOLD; http://www.genomesonline.org) is a comprehensive online resource to catalog and monitor genetic studies worldwide. GOLD provides up-to-date status on complete and ongoing sequencing projects along with a broad array of curated metadata. Within this paper, we report version 5 (v.5) of the database. The newly designed database schema and web user interface supports several new features including the implementation of a four level (meta)genome project classification system and a simplified intuitive web interface to access reports and launch search tools. The database currently hosts information for about 19 200 studies, 56 000 Biosamples, 56 000 sequencing projects and 39 400 analysis projects. More than just a catalog of worldwide genome projects, GOLD is a manually curated, quality-controlled metadata warehouse. The problems encountered in integrating disparate and varying quality data into GOLD are briefly highlighted. Lastly, GOLD fully supports and follows the Genomic Standards Consortium (GSC) Minimum Information standards.
NASA Astrophysics Data System (ADS)
Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo, Yuyi; Lueking, Lee
2010-04-01
The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this, we have added a generalized query system in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use, without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We describe the design of the query system, provide details of the language components, and give an overview of how this component fits into the overall data discovery system architecture.
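The join-discovery step can be pictured as a shortest-path search over a graph whose nodes are tables and whose edges carry join conditions. A minimal sketch with networkx, using invented table names; the real DBS uses ANTLR for parsing and its own schema-graph implementation:

    import networkx as nx

    # Schema graph: nodes are tables, edges carry the join condition.
    schema = nx.Graph()
    schema.add_edge("dataset", "block", on="dataset.id = block.dataset_id")
    schema.add_edge("block",   "file",  on="block.id = file.block_id")
    schema.add_edge("dataset", "tier",  on="dataset.tier_id = tier.id")

    def build_join(start, end):
        """Discover the join path between two tables and emit a SQL skeleton."""
        path = nx.shortest_path(schema, start, end)
        sql = f"SELECT * FROM {path[0]}"
        for a, b in zip(path, path[1:]):
            sql += f" JOIN {b} ON {schema[a][b]['on']}"
        return sql

    # A user asks about files of a dataset without knowing the tables in between.
    print(build_join("dataset", "file"))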
Ferdynus, C; Huiart, L
2016-09-01
Administrative health databases such as the French National Health Insurance Database (SNIIRAM) are a major tool for answering numerous public health research questions. However, the use of such data requires complex and time-consuming data management. Our objective was to develop and make available a tool to optimize cohort constitution within administrative health databases. We developed a process to extract, transform and load (ETL) data from various heterogeneous sources into a standardized data warehouse. This data warehouse is architected as a star schema corresponding to an i2b2 star schema model. We then evaluated the performance of this ETL using data from a pharmacoepidemiology research project conducted in the SNIIRAM database. The ETL we developed comprises a set of functionalities for creating SAS scripts. Data can be integrated into a standardized data warehouse. As part of the performance assessment of this ETL, we achieved integration of a dataset from the SNIIRAM comprising more than 900 million lines in less than three hours using a desktop computer. This enables patient selection from the standardized data warehouse within seconds of the request. The ETL described in this paper provides a tool which is effective and compatible with all administrative health databases, without requiring complex database servers. This tool should simplify cohort constitution in health databases; the standardization of warehouse data facilitates collaborative work between research teams. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
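The target layout is an i2b2-style star schema centered on an observation fact table keyed by patient, concept, and time. A schematic sketch of the load and query steps via Python's sqlite3, with heavily simplified columns (the real i2b2 CRC schema has many more, and the paper's ETL generates SAS scripts rather than Python):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE observation_fact (
        patient_num INTEGER, concept_cd TEXT, start_date TEXT, nval_num REAL)""")

    # Transform step: rows from a heterogeneous source mapped onto the star schema.
    source_rows = [
        {"id": 42, "drug": "B01AC06", "date": "2015-03-01", "qty": 1.0},
        {"id": 42, "drug": "C07AB02", "date": "2015-03-08", "qty": 2.0},
    ]
    con.executemany(
        "INSERT INTO observation_fact VALUES (:id, :drug, :date, :qty)",
        source_rows,
    )

    # Patient selection then becomes a query against the single fact table.
    hits = con.execute(
        "SELECT DISTINCT patient_num FROM observation_fact WHERE concept_cd = 'B01AC06'"
    ).fetchall()
    print(hits)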
DataSpread: Unifying Databases and Spreadsheets.
Bendre, Mangesh; Sun, Bofan; Zhang, Ding; Zhou, Xinyan; Chang, Kevin ChenChuan; Parameswaran, Aditya
2015-08-01
Spreadsheet software is often the tool of choice for ad-hoc tabular data management, processing, and visualization, especially on tiny data sets. On the other hand, relational database systems offer significant power, expressivity, and efficiency over spreadsheet software for data management, while lacking in the ease of use and ad-hoc analysis capabilities. We demonstrate DataSpread, a data exploration tool that holistically unifies databases and spreadsheets. It continues to offer a Microsoft Excel-based spreadsheet front-end, while in parallel managing all the data in a back-end database, specifically, PostgreSQL. DataSpread retains all the advantages of spreadsheets, including ease of use, ad-hoc analysis and visualization capabilities, and a schema-free nature, while also adding the advantages of traditional relational databases, such as scalability and the ability to use arbitrary SQL to import, filter, or join external or internal tables and have the results appear in the spreadsheet. DataSpread needs to reason about and reconcile differences in the notions of schema, addressing of cells and tuples, and the current "pane" (which exists in spreadsheets but not in traditional databases), and support data modifications at both the front-end and the back-end. Our demonstration will center on our first and early prototype of the DataSpread, and will give the attendees a sense for the enormous data exploration capabilities offered by unifying spreadsheets and databases.
Internet-based data warehousing
NASA Astrophysics Data System (ADS)
Boreisha, Yurii
2001-10-01
In this paper, we consider the process of data warehouse creation and population using the latest Internet and database access technologies. The logical three-tier model is applied. This approach allows the development of an enterprise schema by analyzing the various processes in the organization and extracting the relevant entities and relationships from them. Integration with local schemas and population of the data warehouse is done through the corresponding user, business, and data services components. The hierarchy of these components is used to hide the complexity of the online analytical processing functionality from data warehouse users.
CMLLite: a design philosophy for CML
2011-01-01
CMLLite is a collection of definitions and processes which provide strong and flexible validation for a document in Chemical Markup Language (CML). It consists of an updated CML schema (schema3), conventions specifying rules in both human and machine-understandable forms and a validator available both online and offline to check conformance. This article explores the rationale behind the changes which have been made to the schema, explains how conventions interact and how they are designed, formulated, implemented and tested, and gives an overview of the validation service. PMID:21999395
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick
2006-08-08
A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
Computer systems and methods for the query and visualization of multidimensional database
Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick
2010-05-11
A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
Semantically Interoperable XML Data
Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel
2013-01-01
XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789
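Syntactic interoperability rests on validating instances against the shared XML Schema; a minimal sketch with the lxml library, where the file names are placeholders. The semantic layer described above would add ontology-linked annotations on top of this purely syntactic check:

    from lxml import etree

    # Load the shared XML Schema and one instance document (placeholder file names).
    schema = etree.XMLSchema(etree.parse("common_model.xsd"))
    doc = etree.parse("image_annotations.xml")

    if schema.validate(doc):
        print("instance conforms to the common schema")
    else:
        # Each error points at the offending element -- the syntactic interoperability check.
        for error in schema.error_log:
            print(error.line, error.message)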
BioWarehouse: a bioinformatics database warehouse toolkit
Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David WJ; Tenenbaum, Jessica D; Karp, Peter D
2006-01-01
Background: This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results: We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion: BioWarehouse embodies significant progress on the database integration problem for bioinformatics. PMID:16556315
Rezaei, Mehdi; Ghazanfari, Firoozeh; Rezaee, Fatemeh
2016-12-30
The present investigation was designed to examine disconnection and rejection (DR) schemas, negative emotional schemas (NESs), and experiential avoidance (EA) as mediating variables of the relationship between childhood trauma (CT) and depression. Specifically, we examined the mediating role of NESs and EA between DR schemas and depression. The study sample consisted of 439 female college students (M age = 22.47; SD = 6.0), of whom 88 met the criteria for current major depressive disorder (MDD) and 351 had a history of MDD in the last 12 months. Subjects were assessed with the Structured Clinical Interview for DSM-IV (SCID) and completed the Childhood Trauma Questionnaire (CTQ), the Early Maladaptive Schemas Questionnaire (SQ-SF), the Leahy Emotional Schemas Scale (LESS), the Acceptance and Action Questionnaire (AAQ-II), and the Beck Depression Inventory-II (BDI-II). The findings showed that DR schemas mediated the relationship between CT and depression, but CT did not predict depression through the NESs and EA. NESs mediated the relationship between DR schemas and depression, as did EA. In general, the results suggest that interventions for depressed women may need to target changing DR schemas and NESs and reducing EA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
NIST Gas Hydrate Research Database and Web Dissemination Channel.
Kroenlein, K; Muzny, C D; Kazakov, A; Diky, V V; Chirico, R D; Frenkel, M; Sloan, E D
2010-01-01
To facilitate advances in application of technologies pertaining to gas hydrates, a freely available data resource containing experimentally derived information about those materials was developed. This work was performed by the Thermodynamic Research Center (TRC), paralleling a highly successful database of thermodynamic and transport properties of molecular pure compounds and their mixtures. Population of the gas-hydrates database required development of guided data capture (GDC) software designed to convert experimental data and metadata into a well-organized electronic format, as well as a relational database schema to accommodate all types of numerical and metadata within the scope of the project. To guarantee utility for the broad gas hydrate research community, TRC worked closely with the Committee on Data for Science and Technology (CODATA) task group for Data on Natural Gas Hydrates, an international data-sharing effort, in developing a gas hydrate markup language (GHML). The fruits of these efforts are disseminated through the NIST Standard Reference Data Program [1] as the Clathrate Hydrate Physical Property Database (SRD #156). A web-based interface for this database, as well as scientific results from the Mallik 2002 Gas Hydrate Production Research Well Program [2], is deployed at http://gashydrates.nist.gov.
Design and Implementation of the CEBAF Element Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theodore Larrieu, Christopher Slominski, Michele Joyce
2011-10-01
With inauguration of the CEBAF Element Database (CED) in Fall 2010, Jefferson Lab computer scientists have taken a first step toward the eventual goal of a model-driven accelerator. Once fully populated, the database will be the primary repository of information used for everything from generating lattice decks to booting front-end computers to building controls screens. A particular requirement influencing the CED design is that it must provide consistent access to not only present, but also future, and eventually past, configurations of the CEBAF accelerator. To accomplish this, an introspective database schema was designed that allows new elements, element types, and element properties to be defined on-the-fly without changing table structure. When used in conjunction with the Oracle Workspace Manager, it allows users to seamlessly query data from any time in the database history with the exact same tools as they use for querying the present configuration. Users can also check out workspaces and use them as staging areas for upcoming machine configurations. All access to the CED is through a well-documented API that is translated automatically from original C++ into native libraries for scripting languages such as perl, php, and TCL, making access to the CED easy and ubiquitous. Notice: Authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
A Data Management System for International Space Station Simulation Tools
NASA Technical Reports Server (NTRS)
Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)
2002-01-01
Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.
Predictive Models and Computational Embryology
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
JEnsembl: a version-aware Java API to Ensembl data systems.
Paterson, Trevor; Law, Andy
2012-11-01
The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schemas. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications or for embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-oriented development of new bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schemas to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive), thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).
Schema Theory and Signaling: Implications for Text Design.
ERIC Educational Resources Information Center
Rodriguez, Stephen R.
This discussion of the implications of schema theory and signaling theory for the design of both paper- and computer-based text describes the macro and micro levels of text structure and their interaction, provides a definition of signaling, and identifies four types of signals: (1) pointer words informing the reader of the author's perspective on…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Yubin; Shankar, Mallikarjun; Park, Byung H.
Designing a database system for both efficient data management and data services has been one of the enduring challenges in the healthcare domain. In many healthcare systems, data services and data management are often viewed as two orthogonal tasks; data services refer to retrieval and analytic queries such as search, joins, statistical data extraction, and simple data mining algorithms, while data management refers to building error-tolerant and non-redundant database systems. The gap between service and management has resulted in rigid database systems and schemas that do not support effective analytics. We compose a rich graph structure from an abstracted healthcare RDBMS to illustrate how we can fill this gap in practice. We show how a healthcare graph can be automatically constructed from a normalized relational database using the proposed 3NF Equivalent Graph (3EG) transformation. We discuss a set of real world graph queries such as finding self-referrals, shared providers, and collaborative filtering, and evaluate their performance over a relational database and its 3EG-transformed graph. Experimental results show that the graph representation serves as multiple de-normalized tables, thus reducing complexity in a database and enhancing data accessibility of users. Based on this finding, we propose an ensemble framework of databases for healthcare applications.
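The transformation can be pictured by making each row a node and each foreign-key reference an edge, after which queries like "shared providers" become neighborhood lookups. A toy version with networkx; the actual 3NF Equivalent Graph construction in the paper is more involved:

    import networkx as nx

    # Toy normalized tables: patients, providers, and a referral join table.
    patients  = [(1, "P1"), (2, "P2")]
    providers = [(10, "Dr. A"), (11, "Dr. B")]
    referrals = [(1, 10), (1, 11), (2, 10)]  # (patient_id, provider_id)

    g = nx.Graph()
    for pid, name in patients:
        g.add_node(("patient", pid), label=name)
    for vid, name in providers:
        g.add_node(("provider", vid), label=name)
    for pid, vid in referrals:
        g.add_edge(("patient", pid), ("provider", vid), rel="referred_to")

    # "Shared providers" becomes a neighborhood intersection instead of a multi-table join.
    shared = set(g[("patient", 1)]) & set(g[("patient", 2)])
    print(shared)  # {('provider', 10)}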
An efficient temporal database design method based on EER
NASA Astrophysics Data System (ADS)
Liu, Zhi; Huang, Jiping; Miao, Hua
2007-12-01
Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the authors attempt to analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for history information. A temporal data model named BTEER is then presented. BTEER not only retains all the designing ideas and methods of EER, which gives BTEER good upward compatibility, but also effectively supports the modelling of valid time and transaction time at the same time. In addition, BTEER can be transformed to EER easily and automatically. As practice shows, this method can model temporal information well.
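At the logical level, valid time and transaction time typically surface as interval columns on each relation. A minimal bitemporal sketch via Python's sqlite3; the column names follow conventional temporal-database usage and are not taken from the BTEER paper:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE employee_salary (
        emp_id     INTEGER,
        salary     REAL,
        valid_from TEXT,  -- when the fact was true in the real world
        valid_to   TEXT,
        tx_from    TEXT,  -- when the row was current in the database
        tx_to      TEXT)""")

    con.execute("INSERT INTO employee_salary VALUES "
                "(7, 50000, '2006-01-01', '2007-01-01', '2006-01-05', '9999-12-31')")

    # "What did the database say on 2006-06-01 about the salary valid then?"
    row = con.execute("""
        SELECT salary FROM employee_salary
        WHERE emp_id = 7
          AND valid_from <= '2006-06-01' AND '2006-06-01' < valid_to
          AND tx_from    <= '2006-06-01' AND '2006-06-01' < tx_to
    """).fetchone()
    print(row)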
bioWidgets: data interaction components for genomics.
Fischer, S; Crabtree, J; Brunk, B; Gibson, M; Overton, G C
1999-10-01
The presentation of genomics data in a perspicuous visual format is critical for its rapid interpretation and validation. Relatively few public database developers have the resources to implement sophisticated front-end user interfaces themselves. Accordingly, these developers would benefit from a reusable toolkit of user interface and data visualization components. We have designed the bioWidget toolkit as a set of JavaBean components. It includes a wide array of user interface components and defines an architecture for assembling applications. The toolkit is founded on established software engineering design patterns and principles, including componentry, Model-View-Controller, factored models and schema neutrality. As a proof of concept, we have used the bioWidget toolkit to create three extendible applications: AnnotView, BlastView and AlignView.
A Hybrid EAV-Relational Model for Consistent and Scalable Capture of Clinical Research Data.
Khan, Omar; Lim Choi Keung, Sarah N; Zhao, Lei; Arvanitis, Theodoros N
2014-01-01
Many clinical research databases are built for specific purposes and their design is often guided by the requirements of their particular setting. Not only does this lead to issues of interoperability and reusability between research groups in the wider community but, within the project itself, changes and additions to the system could be implemented using an ad hoc approach, which may make the system difficult to maintain and even more difficult to share. In this paper, we outline a hybrid Entity-Attribute-Value and relational model approach for modelling data, in light of frequently changing requirements, which enables the back-end database schema to remain static, improving the extensibility and scalability of an application. The model also facilitates data reuse. The methods used build on the modular architecture previously introduced in the CURe project.
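A compact sketch of the hybrid idea: stable, frequently queried attributes live in conventional relational columns, while volatile study-specific attributes go into an EAV side table so the back-end schema never changes. The names are illustrative, not the CURe schema:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    -- Stable core attributes stay relational for speed and integrity.
    CREATE TABLE subject (id INTEGER PRIMARY KEY, birth_year INTEGER, sex TEXT);
    -- Frequently changing study attributes use EAV rows; no ALTER TABLE needed.
    CREATE TABLE subject_eav (subject_id INTEGER, attribute TEXT, value TEXT);
    """)
    con.execute("INSERT INTO subject VALUES (1, 1980, 'F')")
    con.execute("INSERT INTO subject_eav VALUES (1, 'baseline_hba1c', '6.9')")

    # New requirement mid-study? Just insert a new attribute name.
    con.execute("INSERT INTO subject_eav VALUES (1, 'wearable_steps_day1', '8412')")

    row = con.execute("""
        SELECT s.id, s.sex, e.value
        FROM subject s JOIN subject_eav e ON e.subject_id = s.id
        WHERE e.attribute = 'baseline_hba1c'""").fetchone()
    print(row)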
Coleman, Craig I; Vaitsiakhovich, Tatsiana; Nguyen, Elaine; Weeda, Erin R; Sood, Nitesh A; Bunz, Thomas J; Schaefer, Bernhard; Meinecke, Anna-Katharina; Eriksson, Daniel
2018-01-01
Schemas to identify bleeding-related hospitalizations in claims data differ in billing codes used and coding positions allowed. We assessed agreement across bleeding-related hospitalization coding schemas for claims analyses of nonvalvular atrial fibrillation (NVAF) patients on oral anticoagulation (OAC). We hypothesized that prior coding schemas used to identify bleeding-related hospitalizations in claims database studies would provide varying levels of agreement in incidence rates. Within MarketScan data, we identified adults newly started on OAC for NVAF from January 2012 to June 2015. Billing code schemas developed by Cunningham et al., the US Food and Drug Administration (FDA) Mini-Sentinel program, and Yao et al. were used to identify bleeding-related hospitalizations as a surrogate for major bleeding. Bleeds were subcategorized as intracranial hemorrhage (ICH), gastrointestinal (GI), or other. Schema agreement was assessed by comparing incidence, rates of events/100 person-years (PYs), and Cohen's kappa statistic. We identified 151 738 new users of OAC with NVAF (median CHA2DS2-VASc score = 3 [interquartile range = 2-4] and median HAS-BLED score = 3 [interquartile range = 2-3]). The Cunningham, FDA Mini-Sentinel, and Yao schemas identified any bleeding-related hospitalizations in 1.87% (95% confidence interval [CI]: 1.81-1.94), 2.65% (95% CI: 2.57-2.74), and 4.66% (95% CI: 4.55-4.76) of patients (corresponding rates = 3.45, 4.90, and 8.65 events/100 PYs). Kappa agreement across schemas was weak-to-moderate (κ = 0.47-0.66) for any bleeding hospitalization. Near-perfect agreement (κ = 0.99) was observed with the FDA Mini-Sentinel and Yao schemas for ICH-related hospitalizations, but agreement was weak when comparing Cunningham to FDA Mini-Sentinel or Yao (κ = 0.52-0.53). FDA Mini-Sentinel and Yao agreement was moderate (κ = 0.62) for GI bleeding, but agreement was weak when comparing Cunningham to FDA Mini-Sentinel or Yao (κ = 0.44-0.56). For other bleeds, agreement across schemas was minimal (κ = 0.14-0.38). We observed varying levels of agreement among 3 bleeding-related hospitalization schemas in NVAF patients. © 2018 Wiley Periodicals, Inc.
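Agreement between two coding schemas over the same patient set reduces to Cohen's kappa on paired binary flags; a minimal computation with scikit-learn on made-up labels:

    from sklearn.metrics import cohen_kappa_score

    # 1 = schema flagged a bleeding-related hospitalization for this patient, 0 = not.
    schema_a = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]
    schema_b = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]

    print(round(cohen_kappa_score(schema_a, schema_b), 2))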
Out of place, out of mind: Schema-driven false memory effects for object-location bindings.
Lew, Adina R; Howe, Mark L
2017-03-01
Events consist of diverse elements, each processed in specialized neocortical networks, with temporal lobe memory systems binding these elements to form coherent event memories. We provide a novel theoretical analysis of an unexplored consequence of the independence of memory systems for elements and their bindings, one that raises the paradoxical prediction that schema-driven false memories can act solely on the binding of event elements despite the superior retrieval of individual elements. This is because if two or more schema-relevant elements are bound together in unexpected conjunctions, the unexpected conjunction will increase attention during encoding to both the elements and their bindings, but only the bindings will receive competition from evoked schema-expected bindings. We test our model by examining memory for object-location bindings in recognition (Study 1) and recall (Studies 2 and 3) tasks. After studying schema-relevant objects in unexpected locations (e.g., pan on a stool in a kitchen scene), participants who then viewed these objects in expected locations (e.g., pan on stove) at test were more likely to falsely remember this object-location pairing as correct, compared with participants who viewed a different unexpected object-location pairing (e.g., pan on floor). In recall, participants were more likely to correctly remember individual schema-relevant objects originally viewed in unexpected, as opposed to expected, locations, but were then more likely to misplace these items in the original room scene to expected places, relative to control schema-irrelevant objects. Our theoretical analysis and novel paradigm provide a tool for investigating memory distortions acting on binding processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Semantics-informed cartography: the case of Piemonte Geological Map
NASA Astrophysics Data System (ADS)
Piana, Fabrizio; Lombardo, Vincenzo; Mimmo, Dario; Giardino, Marco; Fubelli, Giandomenico
2016-04-01
In modern digital geological maps, namely those supported by a large geo-database and devoted to dynamic, interactive representation on WMS-WebGIS services, there is a need to provide, in an explicit form, the geological assumptions used in the design and compilation of the Map database, and to define and/or adopt semantic representations and taxonomies in order to achieve a formal and interoperable representation of the geologic knowledge. These approaches are fundamental for the integration and harmonisation of geological information and services across cultural (e.g. different scientific disciplines) and/or physical barriers (e.g. administrative boundaries). Initiatives such as the GeoScience Markup Language (latest version GeoSciML 4.0, 2015, http://www.geosciml.org) and the INSPIRE "Data Specification on Geology" http://inspire.jrc.ec.europa.eu/documents/Data_Specifications/INSPIRE_DataSpecification_GE_v3.0rc3.pdf (an operative simplification of GeoSciML, latest version 3.0 rc3, 2013), as well as the recent terminological shepherding of the Geoscience Terminology Working Group (GTWG), have been promoting the exchange of geologic knowledge. Grounded on these standard vocabularies, schemas and data models, we provide a shared semantic classification of geological data referring to the study case of the synthetic digital geological map of the Piemonte region (NW Italy), named "GEOPiemonteMap", developed by the CNR Institute of Geosciences and Earth Resources, Torino (CNR IGG TO) and hosted as a dynamic interactive map on the geoportal of the ARPA Piemonte Environmental Agency. The Piemonte Geological Map is grounded on a regional-scale geo-database consisting of some hundreds of GeologicUnits, whose thousands of instances (Mapped Features, polygon geometry) occur widely across the Piemonte region, each bounded by GeologicStructures (Mapped Features, line geometry). GeologicUnits and GeologicStructures have been spatially correlated through the whole region and described using the GeoSciML vocabularies. A hierarchical schema is provided for the Piemonte Geological Map that gives the parental relations between several orders of GeologicUnits, referring to the most recurrent geological objects and main GeologicEvents, in a logical framework compliant with the GeoSciML and INSPIRE data models. The classification criteria and the Hierarchy Schema used to define the GEOPiemonteMap Legend, as well as the intended meanings of the geological concepts used to achieve the overall classification schema, are explicitly described in several WikiGeo pages (implemented with the "MediaWiki" open source software, https://www.mediawiki.org/wiki/MediaWiki). Moreover, a further step toward a formal classification of the contents (both data and interpretation) of the GEOPiemonteMap was taken by setting up an ontological framework, named "OntoGeonous", in order to achieve a thorough semantic characterization of the Map.
Systematic plan of building Web geographic information system based on ActiveX control
NASA Astrophysics Data System (ADS)
Zhang, Xia; Li, Deren; Zhu, Xinyan; Chen, Nengcheng
2003-03-01
A systematic plan for building a Web Geographic Information System (WebGIS) using ActiveX technology is proposed in this paper. In the proposed plan, ActiveX control technology is adopted to build the client-side application, and two different schemas are introduced to implement communication between the controls in the user's browser and the middle application server. One is based on the Distributed Component Object Model (DCOM), the other on sockets. In the former schema, the middle service application is developed as a DCOM object that communicates with the ActiveX control through Object Remote Procedure Call (ORPC) and accesses data in the GIS Data Server through Open Database Connectivity (ODBC). In the latter, the middle service application is developed using the Java language; it communicates with the ActiveX control through sockets based on TCP/IP and accesses data in the GIS Data Server through Java Database Connectivity (JDBC). The first is usually developed using C/C++ and is difficult to develop and deploy. The second is relatively easy to develop, but its data-transfer performance depends on Web bandwidth. A sample application was developed using the latter schema, and its performance proves somewhat better than that of some other WebGIS applications.
Predictive Models and Computational Toxicology (II IBAMTOX)
EPA’s ‘virtual embryo’ project is building an integrative systems biology framework for predictive models of developmental toxicity. One schema involves a knowledge-driven adverse outcome pathway (AOP) framework utilizing information from public databases, standardized ontologies...
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L; Hanrahan, Patrick
2015-03-03
A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes multiple operand names, each operand corresponding to one or more fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first operands with the columns shelf and to associate one or more second operands with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first operands, and each pane has a y-axis defined based on data for the one or more second operands.
ESTminer: a Web interface for mining EST contig and cluster databases.
Huang, Yecheng; Pumphrey, Janie; Gingle, Alan R
2005-03-01
ESTminer is a Web application and database schema for interactive mining of expressed sequence tag (EST) contig and cluster datasets. The Web interface contains a query frame that allows the selection of contigs/clusters with a specific cDNA library makeup or a threshold number of members. The results are displayed as color-coded tree nodes, where the color indicates the fractional size of each cDNA library component. The nodes are expandable, revealing library statistics as well as EST or contig members, with links to sequence data, GenBank records or user-configurable links. Also, the interface allows 'queries within queries', where the result set of a query is further filtered by the subsequent query. ESTminer is implemented in Java/JSP, and the package, including MySQL and Oracle schema creation scripts, is available from http://cggc.agtec.uga.edu/Data/download.asp (contact: agingle@uga.edu).
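The 'queries within queries' idea, filtering a previous result set with a subsequent query, can be sketched as progressive narrowing over a list of contig records (field names invented for illustration):

```python
# Each contig carries its cDNA-library makeup; queries compose by
# filtering the previous result set rather than the full database.
contigs = [
    {"id": "ctg1", "members": 12, "libraries": {"root": 0.75, "leaf": 0.25}},
    {"id": "ctg2", "members": 3,  "libraries": {"leaf": 1.0}},
    {"id": "ctg3", "members": 8,  "libraries": {"root": 0.5, "stem": 0.5}},
]

result = [c for c in contigs if c["members"] >= 5]          # first query
result = [c for c in result if "root" in c["libraries"]]    # query within query
print([c["id"] for c in result])                            # ['ctg1', 'ctg3']
```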
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick
2015-11-10
A computer displays a graphical user interface on its display. The graphical user interface includes a schema information region and a data visualization region. The schema information region includes a plurality of fields of a multi-dimensional database that includes at least one data hierarchy. The data visualization region includes a columns shelf and a rows shelf. The computer detects user actions to associate one or more first fields with the columns shelf and to associate one or more second fields with the rows shelf. The computer generates a visual table in the data visualization region in accordance with the user actions. The visual table includes one or more panes. Each pane has an x-axis defined based on data for the one or more first fields, and each pane has a y-axis defined based on data for the one or more second fields.
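Read as a data-structure description, the pane layout these patents describe behaves much like a cross-tabulation: fields placed on the columns shelf partition the x-axes and fields on the rows shelf partition the y-axes. A rough analogy in Python using pandas (data and field names invented):

```python
import pandas as pd

# Invented data: one row per fact in a small multi-dimensional dataset.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [120, 135, 90, 110],
})

# "Columns shelf" = quarter, "rows shelf" = region: each cell of the
# resulting table corresponds to one pane of the visual table.
visual_table = sales.pivot_table(
    index="region", columns="quarter", values="revenue", aggfunc="sum"
)
print(visual_table)
```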
Service Management Database for DSN Equipment
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
This data- and event-driven persistent storage system leverages commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third-normal-form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with the ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages, stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
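A much-reduced sketch of that event-logging pattern: events are persisted in a normalized table and also published for subscribed applications (a JMS topic in the original; a plain in-process queue here, with invented event names, purely for illustration):

```python
import json
import queue
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE event_log (
    id INTEGER PRIMARY KEY,
    event_class TEXT NOT NULL,
    payload TEXT NOT NULL,
    logged_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

subscribers: "queue.Queue[str]" = queue.Queue()

def log_event(event_class: str, payload: dict) -> None:
    """Persist an event row, then publish it for monitoring apps."""
    body = json.dumps(payload)
    db.execute("INSERT INTO event_log (event_class, payload) VALUES (?, ?)",
               (event_class, body))
    subscribers.put(body)   # asynchronous delivery to subscribed applications

log_event("EQUIPMENT_CONFIGURED", {"antenna": "DSS-14", "service": "telemetry"})
```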
Relax with CouchDB--into the non-relational DBMS era of bioinformatics.
Manyam, Ganiraju; Payton, Michelle A; Roth, Jack A; Abruzzo, Lynne V; Coombes, Kevin R
2012-07-01
With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. Copyright © 2012 Elsevier Inc. All rights reserved.
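For flavor, a minimal sketch of the document-oriented style CouchDB encourages, via its standard HTTP API (database name and document fields are invented; assumes a local CouchDB at the default port):

```python
import requests

BASE = "http://localhost:5984"

# Create a database and store a gene-centric document; no schema is
# declared up front, so annotation fields can vary from gene to gene.
requests.put(f"{BASE}/genesmash_demo")
doc = {
    "symbol": "TP53",
    "aliases": ["p53", "LFS1"],
    "annotations": {"chromosome": "17", "omim": "191170"},
}
requests.put(f"{BASE}/genesmash_demo/TP53", json=doc)

# Retrieve it back by ID through the same REST interface.
print(requests.get(f"{BASE}/genesmash_demo/TP53").json()["aliases"])
```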
Towards the XML schema measurement based on mapping between XML and OO domain
NASA Astrophysics Data System (ADS)
Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja
2017-07-01
Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, the measurement of UML models or XML schemas is still developing. One of the research questions in the overall research guided by the ideas described in this paper is whether already-defined object-oriented design metrics can be applied to XML schemas based on predefined mappings. In this paper, the basic ideas for this mapping are presented. The mapping is a prerequisite for the future approach of measuring XML schema quality with object-oriented metrics.
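One plausible form such a mapping could take, sketched under the assumption that each named complexType maps to a "class" and its element declarations to "attributes" (the paper's actual mapping and metric set may differ):

```python
import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

def class_sizes(xsd_path: str) -> dict:
    """Count element declarations per named complexType, as a stand-in
    for a simple object-oriented class-size metric on an XML schema."""
    root = ET.parse(xsd_path).getroot()
    sizes = {}
    for ctype in root.iter(f"{XSD_NS}complexType"):
        name = ctype.get("name")
        if name:
            sizes[name] = len(list(ctype.iter(f"{XSD_NS}element")))
    return sizes
```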
Amadoz, Alicia; González-Candelas, Fernando
2007-04-20
Most research scientists working in the fields of molecular epidemiology, population and evolutionary genetics are confronted with the management of large volumes of data. Moreover, the data used in studies of infectious diseases are complex and usually derive from different institutions such as hospitals or laboratories. Since no public database scheme incorporating clinical and epidemiological information about patients and molecular information about pathogens is currently available, we have developed an information system, composed of a main database and a web-based interface, which integrates both types of data and satisfies requirements of good organization, simple accessibility, data security and multi-user support. From the moment a patient arrives at a hospital or health centre until the processing and analysis of molecular sequences obtained from infectious pathogens in the laboratory, a great deal of information is collected from different sources. We have divided the most relevant data into 12 conceptual modules around which we have organized the database schema. Our schema is comprehensive: it covers many aspects of sample sources, samples, laboratory processes, molecular sequences, phylogenetic results, clinical tests and results, clinical information, treatments, pathogens, transmissions, outbreaks and bibliographic information. Communication between end users and the selected Relational Database Management System (RDBMS) is carried out by default through a command-line window or through a user-friendly, web-based interface which provides access and management tools for the data. epiPATH is an information system for managing clinical and molecular information from infectious diseases. It facilitates daily work related to infectious pathogens and the sequences obtained from them. This software is intended for local installation in order to safeguard private data, and provides advanced SQL users with the flexibility to adapt it to their needs. The database schema, tool scripts and web-based interface are free software, but the data stored in our database server are not publicly available. epiPATH is distributed under the terms of the GNU General Public License. More details about epiPATH can be found at http://genevo.uv.es/epipath.
Harris, Eric S J; Erickson, Sean D; Tolopko, Andrew N; Cao, Shugeng; Craycroft, Jane A; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E; Eisenberg, David M
2011-05-17
Ethnobotanically driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically driven natural product collection and drug-discovery programs. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Harris, Eric S. J.; Erickson, Sean D.; Tolopko, Andrew N.; Cao, Shugeng; Craycroft, Jane A.; Scholten, Robert; Fu, Yanling; Wang, Wenquan; Liu, Yong; Zhao, Zhongzhen; Clardy, Jon; Shamu, Caroline E.; Eisenberg, David M.
2011-01-01
Aim of the study. Ethnobotanically-driven drug-discovery programs include data related to many aspects of the preparation of botanical medicines, from initial plant collection to chemical extraction and fractionation. The Traditional Medicine-Collection Tracking System (TM-CTS) was created to organize and store data of this type for an international collaborative project involving the systematic evaluation of commonly used Traditional Chinese Medicinal plants. Materials and Methods. The system was developed using domain-driven design techniques, and is implemented using Java, Hibernate, PostgreSQL, Business Intelligence and Reporting Tools (BIRT), and Apache Tomcat. Results. The TM-CTS relational database schema contains over 70 data types, comprising over 500 data fields. The system incorporates a number of unique features that are useful in the context of ethnobotanical projects such as support for information about botanical collection, method of processing, quality tests for plants with existing pharmacopoeia standards, chemical extraction and fractionation, and historical uses of the plants. The database also accommodates data provided in multiple languages and integration with a database system built to support high throughput screening based drug discovery efforts. It is accessed via a web-based application that provides extensive, multi-format reporting capabilities. Conclusions. This new database system was designed to support a project evaluating the bioactivity of Chinese medicinal plants. The software used to create the database is open source, freely available, and could potentially be applied to other ethnobotanically-driven natural product collection and drug-discovery programs. PMID:21420479
Data Management Applications for the Service Preparation Subsystem
NASA Technical Reports Server (NTRS)
Luong, Ivy P.; Chang, George W.; Bui, Tung; Allen, Christopher; Malhotra, Shantanu; Chen, Fannie C.; Bui, Bach X.; Gutheinz, Sandy C.; Kim, Rachel Y.; Zendejas, Silvino C.;
2009-01-01
These software applications provide intuitive User Interfaces (UIs) with a consistent look and feel for interaction with, and control of, the Service Preparation Subsystem (SPS). The elements of the UIs described here are the File Manager, Mission Manager, and Log Monitor applications. All UIs provide access to add/delete/update data entities in a complex database schema without requiring technical expertise on the part of the end users. These applications allow for safe, validated, catalogued input of data. Also, the software has been designed in multiple, coherent layers to promote ease of code maintenance and reuse in addition to reducing testing and accelerating maturity.
Constructing a Geology Ontology Using a Relational Database
NASA Astrophysics Data System (ADS)
Hou, W.; Yang, L.; Yin, S.; Ye, J.; Clarke, K.
2013-12-01
In the geology community, the creation of a common geology ontology has become a useful means to solve problems of data integration, knowledge transformation and the interoperation of multi-source, heterogeneous and multi-scale geological data. Currently, human-computer interaction methods and relational database-based methods are the primary ontology construction methods. Some human-computer interaction methods, such as the Geo-rule based method, the ontology life cycle method and the module design method, have been proposed for applied geological ontologies. Essentially, the relational database-based method is a reverse engineering of abstracted semantic information from an existing database. The key is to construct rules for the transformation of database entities into the ontology. Relative to human-computer interaction methods, relational database-based methods can use existing resources and the stated semantic relationships among geological entities. However, two problems challenge their development and application. One is the transformation of multiple inheritance and nested relationships and their representation in an ontology. The other is that most of these methods do not measure the semantic retention of the transformation process. In this study, we focused on constructing a rule set to convert the semantics in a geological database into a geological ontology. According to the relational schema of a geological database, a conversion approach is presented to convert a geological spatial database into an OWL-based geological ontology, based on identifying semantics such as entities, relationships, inheritance relationships, nested relationships and cluster relationships. The semantic integrity of the transformation was verified using an inverse mapping process. In the geological ontology, inheritance and union operations between superclasses and subclasses were used to represent the nested relationships in geochronology and the multiple inheritance relationships. Based on a Quaternary database of the downtown area of Foshan City, Guangdong Province, in southern China, a geological ontology was constructed using the proposed method. To measure the retention of semantics in the conversion process and its results, an inverse mapping from the ontology to a relational database was tested based on a proposed conversion rule. The comparison of schemas and entities, and the reduction of tables, between the inverse database and the original database illustrated that the proposed method retains the semantic information well during the conversion process. An application for abstracting sandstone information showed that semantic relationships among concepts in the geological database were successfully reorganized in the constructed ontology. Key words: geological ontology; geological spatial database; multiple inheritance; OWL. Acknowledgement: This research is jointly funded by the Specialized Research Fund for the Doctoral Program of Higher Education of China (RFDP) (20100171120001), NSFC (41102207) and the Fundamental Research Funds for the Central Universities (12lgpy19).
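A toy version of such a conversion rule set, assuming rdflib and invented table metadata (the paper's actual rules cover far more relationship kinds): each table becomes an OWL class, and a parent-table link becomes a subclass (inheritance) axiom.

```python
from rdflib import Graph, Literal, Namespace, OWL, RDF, RDFS

GEO = Namespace("http://example.org/geology#")

# Invented stand-in for relational schema metadata read from the database.
tables = {
    "GeologicUnit": {"parent": None},
    "IntrusiveUnit": {"parent": "GeologicUnit"},   # inheritance relationship
}

g = Graph()
g.bind("geo", GEO)
for name, meta in tables.items():
    g.add((GEO[name], RDF.type, OWL.Class))        # rule: table -> OWL class
    g.add((GEO[name], RDFS.label, Literal(name)))
    if meta["parent"]:                             # rule: parent link -> subClassOf
        g.add((GEO[name], RDFS.subClassOf, GEO[meta["parent"]]))

print(g.serialize(format="turtle"))
```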
Peixoto, Maria Manuela; Nobre, Pedro
2017-04-01
Despite the existence of conceptual models of sexual dysfunction based on cognitive theory, few studies have tested the role of vulnerability factors such as sexual beliefs as moderators of the activation of cognitive schemas in response to negative sexual events. To test the moderator role of dysfunctional sexual beliefs in the association between the frequency of negative sexual episodes and the activation of incompetence schemas in gay and heterosexual men. Five hundred seventy-five men (287 gay, 288 heterosexual) who completed an online survey on cognitive-affective dimensions and sexual functioning were selected from a larger database. Hierarchical regression analyses were conducted to test the hypothesis that dysfunctional sexual beliefs moderate the association between the frequency of unsuccessful sexual episodes and the activation of incompetence schemas. Participants completed the Sexual Dysfunctional Beliefs Questionnaire and the Questionnaire of Cognitive Schemas Activated in Sexual Context. Findings indicated that beliefs that men must always be ready for sex, satisfy the partner, and maintain an erection until the end of sexual activity constitute "macho" beliefs that moderate the activation of incompetence schemas when unsuccessful sexual events occur in gay and heterosexual men. In addition, the activation of incompetence schemas in response to negative sexual events in gay men was moderated by the endorsement of conservative attitudes toward sexuality. The main findings suggested that psychological interventions targeting dysfunctional sexual beliefs could help de-catastrophize the consequences of negative sexual events and facilitate sexual functioning. Despite being a web-based study, it represents the first attempt to test the moderator role of dysfunctional sexual beliefs in the association between the frequency of unsuccessful sexual episodes and the activation of incompetence schemas in gay and heterosexual men. Overall, findings support the role of sexual beliefs as facilitators of the activation of incompetence schemas in the face of negative sexual events in gay and heterosexual men, emphasizing the need to develop treatment and prevention strategies aimed at challenging common male beliefs about sexuality. Peixoto MM, Nobre P. "Macho" Beliefs Moderate the Association Between Negative Sexual Episodes and Activation of Incompetence Schemas in Sexual Context, in Gay and Heterosexual Men. J Sex Med 2017;14:518-525. Copyright © 2017 International Society for Sexual Medicine. Published by Elsevier Inc. All rights reserved.
On-line interactive virtual experiments on nanoscience
NASA Astrophysics Data System (ADS)
Kadar, Manuella; Ileana, Ioan; Hutanu, Constantin
2009-01-01
This paper is an overview of the next-generation web, which allows students to experience virtual experiments on nanoscience, physics devices, processes and processing equipment. Virtual reality is used to support a real university lab in which a student can experience real lab sessions. The web material is presented in an intuitive and highly visual 3D form that is accessible to a diverse group of students. This type of laboratory provides opportunities for professional and practical education for a wide range of users. The expensive equipment and apparatuses that make up the experimental stage in a particular standard laboratory are used to create virtual educational research laboratories. Students learn how to prepare the apparatuses and facilities for the experiment. The online-experiments metadata schema is the format for describing online experiments, much like the schema behind a library catalogue is used to describe the books in a library. As an online experiment is a special kind of learning object, its schema is specified as an extension of an established metadata schema for learning objects. The content of the courses, meta-information, as well as readings and user data, are saved on the server in a database as XML objects.
Artemis and ACT: viewing, annotating and comparing sequences stored in a relational database.
Carver, Tim; Berriman, Matthew; Tivey, Adrian; Patel, Chinmay; Böhme, Ulrike; Barrell, Barclay G; Parkhill, Julian; Rajandream, Marie-Adèle
2008-12-01
Artemis and Artemis Comparison Tool (ACT) have become mainstream tools for viewing and annotating sequence data, particularly for microbial genomes. Since its first release, Artemis has been continuously developed and supported with additional functionality for editing and analysing sequences based on feedback from an active user community of laboratory biologists and professional annotators. Nevertheless, its utility has been somewhat restricted by its limitation to reading and writing from flat files. Therefore, a new version of Artemis has been developed, which reads from and writes to a relational database schema, and allows users to annotate more complex, often large and fragmented, genome sequences. Artemis and ACT have now been extended to read and write directly to the Generic Model Organism Database (GMOD, http://www.gmod.org) Chado relational database schema. In addition, a Gene Builder tool has been developed to provide structured forms and tables to edit coordinates of gene models and edit functional annotation, based on standard ontologies, controlled vocabularies and free text. Artemis and ACT are freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute web sites: http://www.sanger.ac.uk/Software/Artemis/ http://www.sanger.ac.uk/Software/ACT/
JEnsembl: a version-aware Java API to Ensembl data systems
Paterson, Trevor; Law, Andy
2012-01-01
Motivation: The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. Results: The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing ‘through time’ comparative analyses to be performed. Availability: Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net). Contact: jensembl-develop@lists.sf.net, andy.law@roslin.ed.ac.uk, trevor.paterson@roslin.ed.ac.uk PMID:22945789
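The version-aware idea, resolving a database schema version to a matching set of object mappings through configuration rather than code, can be sketched generically (this is not the actual JEnsembl configuration format; table and column names are invented):

```python
# Map schema versions to mapping configurations; a connection first asks
# the server which schema version it carries, then loads the matching
# configuration, so old archive instances stay readable.
MAPPINGS = {
    "67": {"gene_table": "gene", "name_column": "display_xref_id"},
    "80": {"gene_table": "gene", "name_column": "display_label"},
}

def mapping_for(schema_version: str) -> dict:
    """Fall back to the nearest earlier version, mimicking evolving,
    versioned schema-to-object mappings."""
    for v in sorted((int(k) for k in MAPPINGS), reverse=True):
        if v <= int(schema_version):
            return MAPPINGS[str(v)]
    raise ValueError(f"no mapping for schema version {schema_version}")

print(mapping_for("72"))   # resolves to the version-67 mapping
```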
Andersen, Barbara L.; Fowler, Jeffrey M.; Maxwell, G. Larry
2008-01-01
Gynecologic cancer patients are at high risk for emotional distress and sexual dysfunction. The present study tested sexual self schema as an individual difference variable that might be useful in identifying those at risk for unfavorable outcomes. First, we tested schema as a predictor of sexual outcomes, including body change stress. Second, we examined schema as a contributor to broader quality of life outcomes, specifically as a moderator of the relationship between sexual satisfaction and psychological status (depressive symptoms and quality of life). A cross-sectional design was used. Gynecologic cancer survivors (N = 175) 2-10 years post-treatment were assessed during routine follow-up. In regression analyses controlling for sociodemographic variables, patients' physical symptoms/signs as evaluated by nurses, health status, and extent of partner sexual difficulties, sexual self schema accounted for significant variance in the prediction of current sexual behavior, responsiveness, and satisfaction. Moreover, schema moderated the relationship between sexual satisfaction and psychological outcomes, suggesting that a positive sexual self schema might "buffer" patients from depressive symptoms when their sexual satisfaction is low. Furthermore, the combination of a negative sexual self schema and low sexual satisfaction might heighten survivors' risk for psychological distress, including depressive symptomatology. These data support the consideration of sexual self schema as a predictor of sexual morbidity among gynecologic cancer survivors. PMID:18418707
Chen, R S; Nadkarni, P; Marenco, L; Levin, F; Erdos, J; Miller, P L
2000-01-01
The entity-attribute-value representation with classes and relationships (EAV/CR) provides a flexible and simple database schema to store heterogeneous biomedical data. In certain circumstances, however, the EAV/CR model is known to retrieve data less efficiently than conventional database schemas. To perform a pilot study that systematically quantifies performance differences for database queries directed at real-world microbiology data modeled with EAV/CR and conventional representations, and to explore the relative merits of different EAV/CR query implementation strategies. Clinical microbiology data obtained over a ten-year period were stored using both database models. Query execution times were compared for four clinically oriented attribute-centered and entity-centered queries operating under varying conditions of database size and system memory. The performance characteristics of three different EAV/CR query strategies were also examined. Performance was similar for entity-centered queries in the two database models. Performance in the EAV/CR model was approximately three to five times less efficient than its conventional counterpart for attribute-centered queries. The differences in query efficiency became slightly greater as database size increased, although they were reduced with the addition of system memory. The authors found that EAV/CR queries formulated using multiple, simple SQL statements executed in batch were more efficient than single, large SQL statements. This paper describes a pilot project to explore issues in, and compare query performance for, EAV/CR and conventional database representations. Although attribute-centered queries were less efficient in the EAV/CR model, these inefficiencies may be addressable, at least in part, by the use of more powerful hardware or more memory, or both.
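To make the trade-off concrete, here is a minimal EAV layout and the two query styles the study compares, with invented microbiology data (sqlite3 stands in for the production RDBMS):

```python
import sqlite3

# Minimal EAV layout: one narrow table holds (entity, attribute, value)
# triples instead of one wide table per entity type.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE eav (entity_id INT, attribute TEXT, value TEXT)")
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "organism", "E. coli"), (1, "specimen", "blood"),
    (2, "organism", "S. aureus"), (2, "specimen", "wound"),
])

# An attribute-centered query ("all E. coli isolates and their specimens")
# needs a self-join when written as one large statement...
one_big = db.execute("""
    SELECT a.entity_id, b.value FROM eav a JOIN eav b ON a.entity_id = b.entity_id
    WHERE a.attribute = 'organism' AND a.value = 'E. coli'
      AND b.attribute = 'specimen'""").fetchall()

# ...or it can be phrased as a batch of simple statements, the style the
# pilot study found more efficient.
ids = [r[0] for r in db.execute(
    "SELECT entity_id FROM eav WHERE attribute='organism' AND value='E. coli'")]
specimens = db.execute(
    f"SELECT entity_id, value FROM eav WHERE attribute='specimen' "
    f"AND entity_id IN ({','.join('?' * len(ids))})", ids).fetchall()
```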
Issues and prospects for the next generation of the spatial data transfer standard (SDTS)
Arctur, D.; Hair, D.; Timson, G.; Martin, E.P.; Fegeas, R.
1998-01-01
The Spatial Data Transfer Standard (SDTS) was designed to be capable of representing virtually any data model, rather than being a prescription for a single data model. It has fallen short of this ambitious goal for a number of reasons, which this paper investigates. In addition to issues that might have been anticipated in its design, a number of new issues have arisen since its initial development. These include the need to support explicit feature definitions, incremental update, value-added extensions, and change tracking within large, national databases. It is time to consider the next stage of evolution for SDTS. This paper suggests development of an Object Profile for SDTS that would integrate concepts for a dynamic schema structure, OpenGIS interface, and CORBA IDL.
Hewitt, Robin; Gobbi, Alberto; Lee, Man-Ling
2005-01-01
Relational databases are the current standard for storing and retrieving data in the pharmaceutical and biotech industries. However, retrieving data from a relational database requires specialized knowledge of the database schema and of the SQL query language. At Anadys, we have developed an easy-to-use system for searching and reporting data in a relational database to support our drug discovery project teams. This system is fast and flexible and allows users to access all data without having to write SQL queries. This paper presents the hierarchical, graph-based metadata representation and SQL-construction methods that, together, are the basis of this system's capabilities.
Semantic mediation in the national geologic map database (US)
Percy, D.; Richard, S.; Soller, D.
2008-01-01
Controlled language is the primary challenge in merging heterogeneous databases of geologic information. Each agency or organization produces databases with different schema, and different terminology for describing the objects within. In order to make some progress toward merging these databases using current technology, we have developed software and a workflow that allows for the "manual semantic mediation" of these geologic map databases. Enthusiastic support from many state agencies (stakeholders and data stewards) has shown that the community supports this approach. Future implementations will move toward a more Artificial Intelligence-based approach, using expert-systems or knowledge-bases to process data based on the training sets we have developed manually.
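At its core, manual semantic mediation amounts to maintaining curated term crosswalks. A minimal sketch, with invented agency names and terms:

```python
# Curated crosswalk: each agency's local terminology is mapped by a data
# steward onto a shared target vocabulary.
CROSSWALK = {
    ("state_A", "qal"):      "alluvium",
    ("state_B", "alluvial"): "alluvium",
    ("state_A", "kg"):       "granite",
}

def mediate(agency: str, local_term: str) -> str:
    """Translate a local term to the shared vocabulary, or flag it for review."""
    try:
        return CROSSWALK[(agency, local_term)]
    except KeyError:
        raise KeyError(f"unmapped term {local_term!r} from {agency}: "
                       "needs manual review by a data steward")

print(mediate("state_B", "alluvial"))   # -> 'alluvium'
```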
Insertion algorithms for network model database management systems
NASA Astrophysics Data System (ADS)
Mamadolimov, Abdurashid; Khikmat, Saburov
2017-12-01
The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for management algorithms is to minimize the number of query comparisons. We consider the update operation for network-model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.
Aided generation of search interfaces to astronomical archives
NASA Astrophysics Data System (ADS)
Zorba, Sonia; Bignamini, Andrea; Cepparo, Francesco; Knapic, Cristina; Molinaro, Marco; Smareglia, Riccardo
2016-07-01
Astrophysical data provider organizations that host web-based interfaces for access to data resources have to cope with possible changes in data management that imply partial rewrites of web applications. To avoid doing this manually, it was decided to develop a dynamically configurable Java EE web application that can set itself up by reading the needed information from configuration files. The specification of what information the astronomical archive database has to expose is managed using the TAP_SCHEMA schema from the IVOA TAP recommendation, which can be edited using a graphical interface. When the configuration steps are done, the tool builds a WAR file to allow easy deployment of the application.
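In TAP, the archive describes itself in standard tables such as TAP_SCHEMA.tables and TAP_SCHEMA.columns, which is what makes generating the interface possible. A rough sketch of that self-configuration step (sqlite stands in for the archive database, and only a few TAP_SCHEMA columns are modeled):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE tap_schema_columns (
        table_name TEXT, column_name TEXT, datatype TEXT, description TEXT);
    INSERT INTO tap_schema_columns VALUES
        ('ivoa.obscore', 's_ra',  'double', 'Right ascension'),
        ('ivoa.obscore', 's_dec', 'double', 'Declination');
""")

# Generate one search-form field per exposed column: the web application
# configures itself from the archive's own metadata.
for name, dtype, desc in db.execute(
        "SELECT column_name, datatype, description FROM tap_schema_columns "
        "WHERE table_name = 'ivoa.obscore'"):
    widget = "number" if dtype == "double" else "text"
    print(f'<label>{desc}</label> <input name="{name}" type="{widget}">')
```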
ERIC Educational Resources Information Center
Millan, Eva; Belmonte, Maria-Victoria; Ruiz-Montiel, Manuela; Gavilanes, Juan; Perez-de-la-Cruz, Jose-Luis
2016-01-01
In this paper, we present BH-ShaDe, a new software tool to assist architecture students learning the ill-structured domain/task of housing design. The software tool provides students with automatic or interactively generated floor plan schemas for basic houses. The students can then use the generated schemas as initial seeds to develop complete…
Ricebase - a resource for rice breeding
USDA-ARS?s Scientific Manuscript database
Ricebase combines accessions, traits, markers, and genes with genome-scale datasets to empower rice breeders and geneticists to explore big-data resources. The underlying code and schema are shared with CassavaBase and the Sol Genomics Network (SGN) databases. Ricebase was launched specifically to m...
Analytical Design Package (ADP2): A computer aided engineering tool for aircraft transparency design
NASA Technical Reports Server (NTRS)
Wuerer, J. E.; Gran, M.; Held, T. W.
1994-01-01
The Analytical Design Package (ADP2) is being developed as a part of the Air Force Frameless Transparency Program (FTP). ADP2 is an integrated design tool consisting of existing analysis codes and Computer Aided Engineering (CAE) software. The objective of the ADP2 is to develop and confirm an integrated design methodology for frameless transparencies, related aircraft interfaces, and their corresponding tooling. The application of this methodology will generate high confidence for achieving a qualified part prior to mold fabrication. ADP2 is a customized integration of analysis codes, CAE software, and material databases. The primary CAE integration tool for the ADP2 is P3/PATRAN, a commercial-off-the-shelf (COTS) software tool. The open architecture of P3/PATRAN allows customized installations with different applications modules for specific site requirements. Integration of material databases allows the engineer to select a material, and those material properties are automatically called into the relevant analysis code. The ADP2 materials database will be composed of four independent schemas: CAE Design, Processing, Testing, and Logistics Support. The design of ADP2 places major emphasis on the seamless integration of CAE and analysis modules with a single intuitive graphical interface. This tool is being designed to serve and be used by an entire project team, i.e., analysts, designers, materials experts, and managers. The final version of the software will be delivered to the Air Force in Jan. 1994. The Analytical Design Package (ADP2) will then be ready for transfer to industry. The package will be capable of a wide range of design and manufacturing applications.
ESML for Earth Science Data Sets and Analysis
NASA Technical Reports Server (NTRS)
Graves, Sara; Ramachandran, Rahul
2003-01-01
The primary objective of this research project was to transition ESML from design to application. The resulting schema and prototype software will foster community acceptance of the "Define once, use anywhere" concept central to ESML. Supporting goals include: 1) Refinement of the ESML schema and software libraries in cooperation with the user community; 2) Application of the ESML schema and software to a variety of Earth science data sets and analysis tools; 3) Development of supporting prototype software for enhanced ease of use; 4) Cooperation with standards bodies in order to assure ESML is aligned with related metadata standards as appropriate; and 5) Widespread publication of the ESML approach, schema, and software.
Workplace-based assessment: raters' performance theories and constructs.
Govaerts, M J B; Van de Wiel, M W J; Schuwirth, L W T; Van der Vleuten, C P M; Muijtjens, A M M
2013-08-01
Weaknesses in the nature of rater judgments are generally considered to compromise the utility of workplace-based assessment (WBA). In order to gain insight into the underpinnings of rater behaviours, we investigated how raters form impressions of and make judgments on trainee performance. Using theoretical frameworks of social cognition and person perception, we explored raters' implicit performance theories, use of task-specific performance schemas and the formation of person schemas during WBA. We used think-aloud procedures and verbal protocol analysis to investigate schema-based processing by experienced (N = 18) and inexperienced (N = 16) raters (supervisor-raters in general practice residency training). Qualitative data analysis was used to explore schema content and usage. We quantitatively assessed rater idiosyncrasy in the use of performance schemas and we investigated effects of rater expertise on the use of (task-specific) performance schemas. Raters used different schemas in judging trainee performance. We developed a normative performance theory comprising seventeen inter-related performance dimensions. Levels of rater idiosyncrasy were substantial and unrelated to rater expertise. Experienced raters made significantly more use of task-specific performance schemas compared to inexperienced raters, suggesting more differentiated performance schemas in experienced raters. Most raters started to develop person schemas the moment they began to observe trainee performance. The findings further our understanding of processes underpinning judgment and decision making in WBA. Raters make and justify judgments based on personal theories and performance constructs. Raters' information processing seems to be affected by differences in rater expertise. The results of this study can help to improve rater training, the design of assessment instruments and decision making in WBA.
Architecture for biomedical multimedia information delivery on the World Wide Web
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Goh, Gin-Hua; Neve, Leif; Thoma, George R.
1997-10-01
Research engineers at the National Library of Medicine are building a prototype system for the delivery of multimedia biomedical information on the World Wide Web. This paper discusses the architecture and design considerations for the system, which will be used initially to make images and text from the third National Health and Nutrition Examination Survey (NHANES) publicly available. We categorized our analysis as follows: (1) fundamental software tools: we analyzed trade-offs among use of conventional HTML/CGI, X Window Broadway, and Java; (2) image delivery: we examined the use of unconventional TCP transmission methods; (3) database manager and database design: we discuss the capabilities and planned use of the Informix object-relational database manager and the planned schema for the NHANES database; (4) storage requirements for our Sun server; (5) user interface considerations; (6) the compatibility of the system with other standard research and analysis tools; (7) image display: we discuss considerations for consistent image display for end users. Finally, we discuss the scalability of the system in terms of incorporating larger or more databases of similar data, and the extensibility of the system for supporting content-based retrieval of biomedical images. The system prototype is called the Web-based Medical Information Retrieval System. An early version was built as a Java applet and tested on Unix, PC, and Macintosh platforms. This prototype used the MiniSQL database manager to do text queries on a small database of records of participants in the second NHANES survey. The full records and associated x-ray images were retrievable and displayable on a standard Web browser. A second version has now been built, also a Java applet, using the MySQL database manager.
1983-12-16
The purpose of a database management system (DBMS) is to record and maintain information used by an organization in the organization's decision-making process. Some advantages of a... independence. Database Management Systems are classified into three major models: relational, network, and hierarchical. Each model uses a software... feeling impedes the overall effectiveness of the Acquisition Management Information System (AMIS), which currently uses S2k. The size of the AMIS
Validation of an Instrument and Testing Protocol for Measuring the Combinatorial Analysis Schema.
ERIC Educational Resources Information Center
Staver, John R.; Harty, Harold
1979-01-01
Designs a testing situation to examine the presence of combinatorial analysis, to establish construct validity in the use of an instrument, Combinatorial Analysis Behavior Observation Scheme (CABOS), and to investigate the presence of the schema in young adolescents. (Author/GA)
The NASA Program Management Tool: A New Vision in Business Intelligence
NASA Technical Reports Server (NTRS)
Maluf, David A.; Swanson, Keith; Putz, Peter; Bell, David G.; Gawdiak, Yuri
2006-01-01
This paper describes a novel approach to business intelligence and program management for large technology enterprises like the U.S. National Aeronautics and Space Administration (NASA). Two key distinctions of the approach are that 1) standard business documents are the user interface, and 2) a "schema-less" XML database enables flexible integration of technology information for use by both humans and machines in a highly dynamic environment. The implementation utilizes patent-pending NASA software called the NASA Program Management Tool (PMT) and its underlying "schema-less" XML database called Netmark. Initial benefits of PMT include elimination of discrepancies between business documents that use the same information and "paperwork reduction" for program and project management in the form of reducing the effort required to understand standard reporting requirements and to comply with those reporting requirements. We project that the underlying approach to business intelligence will enable significant benefits in the timeliness, integrity and depth of business information available to decision makers on all organizational levels.
Paterson, Trevor; Law, Andy
2009-08-14
Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by performing comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Datasources range in scale and scope from small bespoke resources for particular organisms, through larger web-resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access and integration of these disparate resources. Consequently data integration and comparison must be performed in an ad hoc manner. We have developed a simple generic XML schema (GenomicMappingData.xsd - GMD) to allow export and exchange of mapping data in a common lightweight XML document format. This schema represents the various types of data objects commonly described across mapping datasources and provides a mechanism for recording relationships between data objects. The schema is sufficiently generic to allow representation of any map type (for example genetic linkage maps, radiation hybrid maps, sequence maps and physical maps). It also provides mechanisms for recording data provenance and for cross referencing external datasources (including for example ENSEMBL, PubMed and Genbank.). The schema is extensible via the inclusion of additional datatypes, which can be achieved by importing further schemas, e.g. a schema defining relationship types. We have built demonstration web services that export data from our ArkDB database according to the GMD schema, facilitating the integration of data retrieval into Taverna workflows. The data exchange standard we present here provides a useful generic format for transfer and integration of genomic and genetic mapping data. The extensibility of our schema allows for inclusion of additional data and provides a mechanism for typing mapping objects via third party standards. Web services retrieving GMD-compliant mapping data demonstrate that use of this exchange standard provides a practical mechanism for achieving data integration, by facilitating syntactically and semantically-controlled access to the data.
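A toy illustration of emitting such a document with Python's standard XML tooling. The element and attribute names below are invented stand-ins; the real vocabulary is defined by GenomicMappingData.xsd:

```python
import xml.etree.ElementTree as ET

# Build a GMD-style document: maps contain positioned items, and items
# can cross-reference external datasources.
root = ET.Element("genomicMappingData")
gmap = ET.SubElement(root, "map", {"id": "chr1_linkage", "type": "genetic_linkage"})
marker = ET.SubElement(gmap, "mapItem", {"id": "MCW0058", "position": "12.3"})

# Cross-reference to an external datasource, one of the schema's stated
# mechanisms for linking out (accession invented).
ET.SubElement(marker, "xref", {"source": "GenBank", "accession": "X00000"})

print(ET.tostring(root, encoding="unicode"))
```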
Paterson, Trevor; Law, Andy
2009-01-01
Background Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by performing comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Datasources range in scale and scope from small bespoke resources for particular organisms, through larger web-resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access and integration of these disparate resources. Consequently data integration and comparison must be performed in an ad hoc manner. Results We have developed a simple generic XML schema (GenomicMappingData.xsd – GMD) to allow export and exchange of mapping data in a common lightweight XML document format. This schema represents the various types of data objects commonly described across mapping datasources and provides a mechanism for recording relationships between data objects. The schema is sufficiently generic to allow representation of any map type (for example genetic linkage maps, radiation hybrid maps, sequence maps and physical maps). It also provides mechanisms for recording data provenance and for cross referencing external datasources (including for example ENSEMBL, PubMed and Genbank.). The schema is extensible via the inclusion of additional datatypes, which can be achieved by importing further schemas, e.g. a schema defining relationship types. We have built demonstration web services that export data from our ArkDB database according to the GMD schema, facilitating the integration of data retrieval into Taverna workflows. Conclusion The data exchange standard we present here provides a useful generic format for transfer and integration of genomic and genetic mapping data. The extensibility of our schema allows for inclusion of additional data and provides a mechanism for typing mapping objects via third party standards. Web services retrieving GMD-compliant mapping data demonstrate that use of this exchange standard provides a practical mechanism for achieving data integration, by facilitating syntactically and semantically-controlled access to the data. PMID:19682365
TR32DB - Management of Research Data in a Collaborative, Interdisciplinary Research Project
NASA Astrophysics Data System (ADS)
Curdt, Constanze; Hoffmeister, Dirk; Waldhoff, Guido; Lang, Ulrich; Bareth, Georg
2015-04-01
The management of research data in a well-structured and documented manner is essential in collaborative, interdisciplinary research environments (e.g. across various institutions). Consequently, the set-up and use of a research data management (RDM) system such as a data repository or project database is necessary. These systems should accompany and support scientists during the entire research life cycle (e.g. data collection, documentation, storage, archiving, sharing, publishing) and operate across disciplines in interdisciplinary research projects. The challenges and problems of RDM are well known. Consequently, the set-up of a user-friendly, well-documented, sustainable RDM system is essential, as well as user support and further assistance. In the framework of the Transregio Collaborative Research Centre 32 'Patterns in Soil-Vegetation-Atmosphere Systems: Monitoring, Modelling, and Data Assimilation' (CRC/TR32), funded by the German Research Foundation (DFG), an RDM system was designed and implemented in-house. The CRC/TR32 project database (TR32DB, www.tr32db.de) has been operating online since early 2008. The TR32DB handles all data created by the involved project participants from several institutions (e.g. the Universities of Cologne, Bonn, Aachen, and the Research Centre Jülich) and research fields (e.g. soil and plant sciences, hydrology, geography, geophysics, meteorology, remote sensing). Very heterogeneous research data are considered, resulting from field measurement campaigns, meteorological monitoring, remote sensing, laboratory studies and modelling approaches. Furthermore, outcomes such as publications, conference contributions, PhD reports and corresponding images are included. The TR32DB project database is set up in cooperation with the Regional Computing Centre of the University of Cologne (RRZK) and is also located in this hardware environment. The TR32DB system architecture is composed of three main components: (i) file-based data storage including backup, (ii) database-based storage for administrative data and metadata, and (iii) a web interface for user access. The TR32DB offers the common features of RDM systems. These include data storage, entry of corresponding metadata through a user-friendly input wizard, search and download of data depending on user permissions, and secure internal exchange of data. In addition, a Digital Object Identifier (DOI) can be allocated for specific datasets, and several web mapping components are supported (e.g. Web-GIS and map search). The centrepiece of the TR32DB is the self-designed and implemented CRC/TR32-specific metadata schema. This enables the documentation of all involved, heterogeneous data with accurate, interoperable metadata. The TR32DB Metadata Schema is set up in a multi-level approach and supports several metadata standards and schemes (e.g. Dublin Core, ISO 19115, INSPIRE, DataCite). Furthermore, metadata properties focused on the CRC/TR32 background (e.g. CRC/TR32-specific keywords) and the supported data types are included. Mandatory, optional and automatic metadata properties are specified. Overall, the TR32DB is designed and implemented according to the needs of the CRC/TR32 (e.g. a huge amount of heterogeneous data) and the demands of the DFG (e.g. cooperation with a computing centre). The implementation of the TR32DB in the hardware environment of the RRZK ensures access to the data after the end of the CRC/TR32 funding in 2018.
Nuclear Science References (NSR)
The NSR database schema and Web applications have undergone some recent changes. This is a revised version of the NSR Web Interface. Manager: Boris Pritychenko, NNDC, Brookhaven National Laboratory. Web programming: Boris Pritychenko, NNDC.
A Coherent VLSI Design Environment.
1986-03-31
The designs attempted using Schema were a CMOS sorter and a TTL PC board for gathering statistics from a Multibus. Neither design was completed using Schema, but at least in the... A technique for automatically adjusting signal delays in an MOS system has been developed. The Dynamic Delay Adjustment (DDA) technique provides... "Synchronization Reliability in CMOS Technology," IEEE J. of Solid-State Circuits, Vol. SC-20, No. 4, pp. 880-883, 1985. [8] J. Hohl, W. Larsen and L. Schooley
Investigating a Tier 1 Intervention Focused on Proportional Reasoning: A Follow-Up Study
ERIC Educational Resources Information Center
Jitendra, Asha K.; Harwell, Michael R.; Karl, Stacy R.; Simonson, Gregory R.; Slater, Susan C.
2017-01-01
This randomized controlled study investigated the efficacy of a Tier 1 intervention--schema-based instruction--designed to help students with and without mathematics difficulties (MD) develop proportional reasoning. Twenty seventh-grade teachers/classrooms were randomly assigned to a treatment condition (schema-based instruction) or control…
Beyond Linear Syntax: An Image-Oriented Communication Aid
ERIC Educational Resources Information Center
Patel, Rupal; Pilato, Sam; Roy, Deb
2004-01-01
This article presents a novel AAC communication aid based on semantic rather than syntactic schema, leading to more natural message construction. Users interact with a two-dimensional spatially organized image schema, which depicts the semantic structure and contents of the message. An overview of the interface design is presented followed by…
The StarView intelligent query mechanism
NASA Technical Reports Server (NTRS)
Semmel, R. D.; Silberberg, D. P.
1993-01-01
The StarView interface is being developed to facilitate the retrieval of scientific and engineering data produced by the Hubble Space Telescope. While predefined screens in the interface can be used to specify many common requests, ad hoc requests require a dynamic query formulation capability. Unfortunately, logical level knowledge is too sparse to support this capability. In particular, essential formulation knowledge is lost when the domain of interest is mapped to a set of database relation schemas. Thus, a system known as QUICK has been developed that uses conceptual design knowledge to facilitate query formulation. By heuristically determining strongly associated objects at the conceptual level, QUICK is able to formulate semantically reasonable queries in response to high-level requests that specify only attributes of interest. Moreover, by exploiting constraint knowledge in the conceptual design, QUICK assures that queries are formulated quickly and will execute efficiently.
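A sketch of the kind of heuristic QUICK's description suggests: given only attributes of interest, locate the relations that hold them and connect those relations through the shortest join path in the schema graph. The schema, attributes, and join edges below are invented for illustration:

```python
from collections import deque

# Invented schema graph: nodes are relations, edges are join paths
# derived from conceptual-design knowledge.
JOINS = {
    "observation": {"target", "instrument"},
    "target":      {"observation"},
    "instrument":  {"observation", "detector"},
    "detector":    {"instrument"},
}
ATTRS = {"ra": "target", "dec": "target", "exposure": "observation",
         "gain": "detector"}

def join_path(start: str, goal: str) -> list:
    """Shortest chain of relations connecting two tables (BFS)."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in JOINS[path[-1]] - seen:
            seen.add(nxt)
            frontier.append(path + [nxt])
    return []

# Attributes 'ra' and 'gain' live in different tables; infer the joins.
print(join_path(ATTRS["ra"], ATTRS["gain"]))
# -> ['target', 'observation', 'instrument', 'detector']
```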
1994-01-01
databases and identifying new data entities, data elements, and relationships. Standard data naming conventions, schema, and definition processes... management system. The use of such a tool could offer: (1) structured support for representation of objects and their relationships to each other (and... their relationships to related multimedia objects, such as an engineering drawing of the tank object or a satellite image that contains the installation.
Ultra-Structure database design methodology for managing systems biology data and analyses
Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C
2009-01-01
Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogenous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
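A toy illustration of the rules-as-data idea (not the actual Ultra-Structure ruleforms): both the facts and the processing logic live in database rows, so end users change behavior by editing rows, and the small generic engine below never needs to change.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE facts (subject TEXT, predicate TEXT, object TEXT);
    CREATE TABLE rules (if_predicate TEXT, then_predicate TEXT);
    INSERT INTO facts VALUES ('peptide_42', 'maps_to', 'orf_7');
    INSERT INTO rules VALUES ('maps_to', 'evidence_for');
""")

def deduce() -> None:
    """Generic deduction driven entirely by table contents: for every
    rule row, derive new facts. Adding capability = inserting rule rows."""
    db.execute("""
        INSERT INTO facts
        SELECT f.subject, r.then_predicate, f.object
        FROM facts f JOIN rules r ON f.predicate = r.if_predicate""")

deduce()
print(db.execute("SELECT * FROM facts").fetchall())
# [('peptide_42', 'maps_to', 'orf_7'), ('peptide_42', 'evidence_for', 'orf_7')]
```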
BIOSPIDA: A Relational Database Translator for NCBI.
Hagen, Matthew S; Lee, Eva K
2010-11-13
As the volume and availability of biological databases continue their widespread growth, it has become increasingly difficult for research scientists to identify all relevant information for biological entities of interest. Details of nucleotide sequences, gene expression, molecular interactions, and three-dimensional structures are maintained across many different databases. Retrieving all necessary information requires an integrated system that can query multiple databases with minimal overhead. This paper introduces a universal parser and relational schema translator that can be utilized for all NCBI databases in Abstract Syntax Notation (ASN.1). The data models for OMIM, Entrez-Gene, PubMed, MMDB and GenBank have been successfully converted into relational databases, and all are easily linkable, helping to answer complex biological questions. These tools enable research scientists to integrate NCBI databases locally without significant workload or development time.
NASA Astrophysics Data System (ADS)
Velazquez, Enrique Israel
Improvements in medical and genomic technologies have dramatically increased the production of electronic data over the last decade. As a result, data management is rapidly becoming a major determinant, and urgent challenge, for the development of Precision Medicine. Although successful data management is achievable using Relational Database Management Systems (RDBMS), exponential data growth is a significant contributor to failure scenarios. Growing amounts of data can also be observed in other sectors, such as economics and business, which, together with the previous facts, suggests that alternate database approaches (NoSQL) may soon be required for efficient storage and management of big databases. However, this hypothesis has been difficult to test in the Precision Medicine field, since alternate database architectures are complex to assess and means to integrate heterogeneous electronic health records (EHR) with dynamic genomic data are not easily available. In this dissertation, we present a novel set of experiments for identifying NoSQL database approaches that enable effective data storage and management in Precision Medicine using patients' clinical and genomic information from The Cancer Genome Atlas (TCGA). The first experiment draws on performance and scalability from biologically meaningful queries with differing complexity and database sizes. The second experiment measures performance and scalability in database updates without schema changes. The third experiment assesses performance and scalability in database updates with schema modifications due to dynamic data. We have identified two NoSQL approaches, based on Cassandra and Redis, which seem to be the ideal database management systems for our precision medicine queries in terms of performance and scalability. We present NoSQL approaches and show how they can be used to manage clinical and genomic big data. Our research is relevant to public health since we are focusing on one of the main challenges to the development of Precision Medicine and, consequently, investigating a potential solution to the progressively increasing demands on health care.
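For the key-value side of such a design, a minimal Redis sketch (using the redis-py client against a local server) shows how per-patient hashes and per-variant index sets could support a typical precision-medicine lookup. The key layout and field names here are assumptions for illustration, not the dissertation's actual schema.

```python
import redis  # pip install redis; assumes a Redis server on localhost

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Hypothetical key design: one hash per patient, plus an index set per variant.
r.hset("patient:TCGA-001", mapping={"diagnosis": "BRCA", "age": "54"})
r.sadd("variant:TP53:R175H", "patient:TCGA-001")

# Query: which patients carry a given somatic variant?
for key in r.smembers("variant:TP53:R175H"):
    print(key, r.hgetall(key))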
The Triticeae Toolbox: combining phenotype and genotype data to advance small-grains breeding
USDA-ARS's Scientific Manuscript database
The Triticeae Toolbox (http://triticeaetoolbox.org; T3) is the database schema enabling plant breeders and researchers to combine, visualize, and interrogate the wealth of phenotype and genotype data generated by the Triticeae Coordinated Agricultural Project (TCAP). T3 enables users to define speci...
NASA Astrophysics Data System (ADS)
Devaraju, Anusuriya; Klump, Jens; Tey, Victor; Fraser, Ryan
2016-04-01
Physical samples such as minerals, soil, rocks, water, air and plants are important observational units for understanding the complexity of our environment and its resources. They are usually collected and curated by different entities, e.g., individual researchers, laboratories, state agencies, or museums. Persistent identifiers may facilitate access to physical samples that are scattered across various repositories. They are essential to locate samples unambiguously and to share their associated metadata and data systematically across the Web. The International Geo Sample Number (IGSN) is a persistent, globally unique label for identifying physical samples. The IGSNs of physical samples are registered by end-users (e.g., individual researchers, data centers and projects) through allocating agents. Allocating agents are the institutions acting on behalf of the implementing organization (IGSN e.V.). The Commonwealth Scientific and Industrial Research Organisation (CSIRO) is one of the allocating agents in Australia. To implement IGSN in our organisation, we developed a RESTful service and a metadata model. The web service enables a client to register sub-namespaces and multiple samples, and to retrieve samples' metadata programmatically. The metadata model provides a framework in which different types of samples may be represented. It is generic and extensible; therefore, it may be applied in the context of multi-disciplinary projects. The metadata model has been implemented as an XML schema and a PostgreSQL database. The schema is used to handle sample registration requests and to disseminate their metadata, whereas the relational database is used to preserve the metadata records. The metadata schema leverages existing controlled vocabularies to minimize the scope for error and incorporates some simplifications to reduce the complexity of the schema implementation. The solutions developed have been applied and tested in the context of two sample repositories in CSIRO, the Capricorn Distal Footprints project and the Rock Store.
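A hedged sketch of what such a RESTful registration might look like from the client side. The base URL, paths, payload fields, and authentication below are all assumptions for illustration, not CSIRO's actual API; a real client would follow the allocating agent's documentation.

```python
import requests

# Hypothetical endpoint and payload, for illustration only.
BASE = "https://igsn.example.csiro.au/api/v1"

sample = {
    "igsn": "CSRWA001",
    "name": "Drill core section, Capricorn transect",
    "sampleType": "core",
    "location": {"lat": -24.88, "lon": 118.55},
}

resp = requests.post(f"{BASE}/samples", json=sample,
                     auth=("user", "token"), timeout=30)
resp.raise_for_status()

# Retrieving the registered metadata back:
print(requests.get(f"{BASE}/samples/CSRWA001", timeout=30).json())
```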
Lassere, Marissa N; Johnson, Kent R; Boers, Maarten; Tugwell, Peter; Brooks, Peter; Simon, Lee; Strand, Vibeke; Conaghan, Philip G; Ostergaard, Mikkel; Maksymowych, Walter P; Landewe, Robert; Bresnihan, Barry; Tak, Paul-Peter; Wakefield, Richard; Mease, Philip; Bingham, Clifton O; Hughes, Michael; Altman, Doug; Buyse, Marc; Galbraith, Sally; Wells, George
2007-03-01
There are clear advantages to using biomarkers and surrogate endpoints, but concerns about clinical and statistical validity and systematic methods to evaluate these aspects hinder their efficient application. Our objective was to review the literature on biomarkers and surrogates to develop a hierarchical schema that systematically evaluates and ranks the surrogacy status of biomarkers and surrogates, and to obtain feedback from stakeholders. After a systematic search of Medline and Embase on biomarkers, surrogate (outcomes, endpoints, markers, indicators), intermediate endpoints, and leading indicators, a quantitative surrogate validation schema was developed and subsequently evaluated at a stakeholder workshop. The search identified several classification schemas and definitions. Components of these were incorporated into a new quantitative surrogate validation level-of-evidence schema that evaluates biomarkers along 4 domains: Target, Study Design, Statistical Strength, and Penalties. Scores derived from the first 3 domains (the Target that the marker is being substituted for, the Design of the best evidence, and the Statistical Strength) are additive. Penalties are then applied if there is serious counterevidence. A total score (0 to 15) determines the level of evidence, with Level 1 the strongest and Level 5 the weakest. It was proposed that the term "surrogate" be restricted to markers attaining Levels 1 or 2 only. Most stakeholders agreed that this operationalization of the National Institutes of Health definitions of biomarker, surrogate endpoint, and clinical endpoint was useful. Further development and application of this schema provides incentives and guidance for effective biomarker and surrogate endpoint research, and more efficient drug discovery, development, and approval.
ERIC Educational Resources Information Center
Jitendra, Asha K.; Star, Jon R.; Starosta, Kristin; Leh, Jayne M.; Sood, Sheetal; Caskie, Grace; Hughes, Cheyenne L.; Mack, Toshi R.
2009-01-01
The present study evaluated the effectiveness of an instructional intervention (schema-based instruction, SBI) that was designed to meet the diverse needs of middle school students by addressing the research literatures from both special education and mathematics education. Specifically, SBI emphasizes the role of the mathematical structure of…
Towards Introducing a Geocoding Information System for Greenland
NASA Astrophysics Data System (ADS)
Siksnans, J.; Pirupshvarre, Hans R.; Lind, M.; Mioc, D.; Anton, F.
2011-08-01
Currently, addressing practices in Greenland do not support geocoding. Addressing points on a map by geographic coordinates is vital for emergency services such as police and ambulance to avoid ambiguities in finding incident locations (Government of Greenland, 2010). It is therefore necessary to investigate the current addressing practices in Greenland. Asiaq (Asiaq, 2011), a public enterprise of the Government of Greenland, holds three separate databases regarding addressing and place references:
- a list of locality names (towns, villages, farms),
- technical base maps (including road center lines not connected with names, and buildings),
- the NIN registry (the Land Use Register of Greenland, which holds information on land allotments and buildings in Greenland).
The main problem is that these data sets are not interconnected, making it impossible to address a point on a map with geographic coordinates in a standardized way. Possible solutions are complicated by Greenland's scattered habitation pattern, which makes generalizing the address assignment schema a difficult task. A schema would be developed according to the characteristics of the settlement pattern, e.g. cities, remote locations and place names. The aim is to propose an ontology for a common postal address system for Greenland. The main part of the research is dedicated to analysis of the current system and user requirements engineering. This allowed us to design a conceptual database model which corresponds to the user requirements, and to implement a small-scale prototype. Furthermore, our research includes findings on resemblances between Danish and Greenlandic addressing practices, a data dictionary for establishing the logical model of the Greenland addressing system, and an enhanced entity relationship diagram. This initial prototype of the Greenland addressing system could be used to evaluate and build the full architecture of the addressing information system for Greenland. Using software engineering methods, the implementation can be done according to the developed data model and initial database prototype. Developing the Greenland addressing system with modern GIS and database technology would ease the work and improve the quality of public services such as postal delivery, emergency response, customer/business relationship management, administration of land, utility planning and maintenance, and public statistical data analysis.
An online analytical processing multi-dimensional data warehouse for malaria data
Madey, Gregory R; Vyushkov, Alexander; Raybaud, Benoit; Burkot, Thomas R; Collins, Frank H
2017-01-01
Malaria is a vector-borne disease that contributes substantially to the global burden of morbidity and mortality. The management of malaria-related data from heterogeneous, autonomous, and distributed data sources poses unique challenges and requirements. Although online data storage systems exist that address specific malaria-related issues, a globally integrated online resource to address different aspects of the disease does not exist. In this article, we describe the design, implementation, and applications of a multi-dimensional, online analytical processing data warehouse, named the VecNet Data Warehouse (VecNet-DW). It is the first online, globally-integrated platform that provides efficient search, retrieval and visualization of historical, predictive, and static malaria-related data, organized in data marts. Historical and static data are modelled using star schemas, while predictive data are modelled using a snowflake schema. The major goals, characteristics, and components of the DW are described along with its data taxonomy and ontology, the external data storage systems and the logical modelling and physical design phases. Results are presented as screenshots of a Dimensional Data browser, a Lookup Tables browser, and a Results Viewer interface. The power of the DW emerges from integrated querying of the different data marts and structuring those queries to the desired dimensions, enabling users to search, view, analyse, and store large volumes of aggregated data, and responding better to the increasing demands of users. Database URL: https://dw.vecnet.org/datawarehouse/ PMID:29220463
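A star schema of the kind used for the historical data marts can be sketched in a few lines of SQL: one fact table keyed into denormalized dimension tables, queried by rolling the facts up along chosen dimensions. The table and column names below are illustrative, not VecNet-DW's actual model.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Illustrative star schema: a central fact table plus dimension tables.
CREATE TABLE dim_location (location_id INTEGER PRIMARY KEY, country TEXT);
CREATE TABLE dim_time     (time_id     INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_cases (
    location_id INTEGER REFERENCES dim_location,
    time_id     INTEGER REFERENCES dim_time,
    cases       INTEGER
);
""")
con.execute("INSERT INTO dim_location VALUES (1, 'Kenya')")
con.execute("INSERT INTO dim_time VALUES (1, 2015), (2, 2016)")
con.executemany("INSERT INTO fact_cases VALUES (?, ?, ?)",
                [(1, 1, 120), (1, 2, 95)])

# A typical roll-up: aggregate the fact table along the chosen dimensions.
for row in con.execute("""
    SELECT l.country, t.year, SUM(f.cases)
    FROM fact_cases f
    JOIN dim_location l USING (location_id)
    JOIN dim_time t USING (time_id)
    GROUP BY l.country, t.year"""):
    print(row)
```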
A Multi-Purpose Data Dissemination Infrastructure for the Marine-Earth Observations
NASA Astrophysics Data System (ADS)
Hanafusa, Y.; Saito, H.; Kayo, M.; Suzuki, H.
2015-12-01
To open data from a variety of observations, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has developed a multi-purpose data dissemination infrastructure. Although many observations have been made in the earth sciences, not all of the data are openly available. We think data centers can provide researchers with a universal data dissemination service that handles various kinds of observation data with little effort. For this purpose, the JAMSTEC Data Management Office has developed the Information Catalog Infrastructure System (Catalog System). This is a catalog management system that can create, renew, and delete catalogs (= databases) and has the following features:
- The Catalog System does not depend on data types or the granularity of data records.
- By registering a new metadata schema to the system, a new database can be created on the same system without system modification.
- As web pages are defined by cascading style sheets, each database can have its own look and feel, and operability.
- The Catalog System provides databases with basic search tools: search by text, selection from a category tree, and selection from a time-line chart.
- For domestic users it creates Japanese and English pages at the same time and has a dictionary to control terminology and proper nouns.
As of August 2015, JAMSTEC operates 7 databases on the Catalog System. We expect to transfer existing databases to this system, or to create new databases on it. In comparison with a dedicated database developed for a specific dataset, the Catalog System is suitable for the dissemination of small datasets at minimum cost. Metadata held in the catalogs may be transferred to other metadata schemas for exchange with global databases or portals. Examples:
JAMSTEC Data Catalog: http://www.godac.jamstec.go.jp/catalog/data_catalog/metadataList?lang=en
JAMSTEC Document Catalog: http://www.godac.jamstec.go.jp/catalog/doc_catalog/metadataList?lang=en&tab=category
Research Information and Data Access Site of TEAMS: http://www.i-teams.jp/catalog/rias/metadataList?lang=en&tab=list
Schema therapy for personality disorders in older adults: a multiple-baseline study.
Videler, Arjan C; van Alphen, Sebastiaan P J; van Royen, Rita J J; van der Feltz-Cornelis, Christina M; Rossi, Gina; Arntz, Arnoud
2018-06-01
No studies have yet been conducted into the effectiveness of treatment of personality disorders in later life. This study is a first test of the effectiveness of schema therapy for personality disorders in older adults. We used a multiple-baseline design with eight cluster C personality disorder patients with a mean age of 69. After a baseline phase of random length, schema therapy was given during the first year, followed by follow-up sessions during six months. Participants rated the credibility of dysfunctional core beliefs weekly. Symptomatic distress, early maladaptive schemas, quality of life and target complaints were assessed every six months, and personality disorder diagnosis was assessed before baseline and after follow-up. Data were analyzed with mixed regression analyses. Results revealed significant linear trends during treatment phases, but not during baseline and follow-up. The scores during follow-up remained stable and were significantly lower compared to baseline, with high effect sizes. Seven participants remitted from their personality disorder diagnosis. Schema therapy appears to be an effective treatment for cluster C personality disorders in older adults. This finding is highly innovative, as this is the first study exploring the effectiveness of psychotherapy, in this case schema therapy, for personality disorders in older adults.
NASA Astrophysics Data System (ADS)
Maffei, A. R.; Chandler, C. L.; Work, T.; Allen, J.; Groman, R. C.; Fox, P. A.
2009-12-01
Content Management Systems (CMSs) provide powerful features that can be of use to oceanographic (and other geo-science) data managers. However, in many instances, geo-science data management offices have previously designed customized schemas for their metadata. The WHOI Ocean Informatics initiative and the NSF-funded Biological and Chemical Oceanography Data Management Office (BCO-DMO) have jointly sponsored a project to port an existing relational database containing oceanographic metadata, along with an existing interface coded in Cold Fusion middleware, to a Drupal6 Content Management System. The goal was to translate all the existing database tables, input forms, website reports, and other features present in the existing system to employ Drupal CMS features. The replacement features include Drupal content types, CCK node-reference fields, themes, RDB, SPARQL, workflow, and a number of other supporting modules. Strategic use of some Drupal6 CMS features enables three separate but complementary interfaces that provide access to oceanographic research metadata via the MySQL database: (1) a Drupal6-powered front-end; (2) a standard SQL port, used to provide a MapServer interface to the metadata and data; and (3) a SPARQL port, feeding a new faceted search capability being developed. Future plans include the creation of science ontologies, by scientist/technologist teams, that will drive semantically-enabled faceted search capabilities planned for the site. Incorporation of semantic technologies included in the future Drupal 7 core release is also anticipated. Using a public-domain CMS as opposed to proprietary middleware, and taking advantage of the many features of Drupal 6 that are designed to support semantically-enabled interfaces, will help prepare the BCO-DMO database for interoperability with other ecosystem databases.
Information Network Model Query Processing
NASA Astrophysics Data System (ADS)
Song, Xiaopu
Information Networking Model (INM) [31] is a novel database model for managing real-world objects and relationships. It naturally and directly supports various kinds of static and dynamic relationships between objects. In INM, objects are networked through various natural and complex relationships. INM Query Language (INM-QL) [30] is designed to explore such an information network, retrieve information about schemas, instances, their attributes, relationships, and context-dependent information, and process query results in a user-specified form. The INM database management system has been implemented using Berkeley DB, and it supports INM-QL. This thesis focuses mainly on the implementation of the subsystem that effectively and efficiently processes INM-QL. The subsystem provides a lexical and syntactic analyzer for INM-QL, and it is able to choose appropriate evaluation strategies and index mechanisms to process INM-QL queries without user intervention. It also uses an intermediate result structure to hold intermediate query results, and other helper structures to reduce the complexity of query processing.
Sajadi, Seyede Fateme; Arshadi, Nasrin; Zargar, Yadolla; Mehrabizade Honarmand, Mahnaz; Hajjari, Zahra
2015-06-01
Numerous studies have demonstrated that early maladaptive schemas and emotional dysregulation are supposed to form the defining core of borderline personality disorder. Many studies have also found a strong association between the diagnosis of borderline personality and the occurrence of suicide ideation and dissociative symptoms. The present study was designed to investigate the relationship between borderline personality features and schemas, emotion regulation, dissociative experiences and suicidal ideation among high school students in Shiraz City, Iran. In this descriptive correlational study, 300 students (150 boys and 150 girls) were selected from the high schools in Shiraz, Iran, using multi-stage random sampling. Data were collected using instruments including the Borderline Personality Features Scale for Children, the Young Schema Questionnaire-Short Form, the Difficulties in Emotion Regulation Scale (DERS), the Dissociative Experiences Scale and the Beck Suicide Ideation Scale. Data were analyzed using the Pearson correlation coefficient and multivariate regression analysis. The results showed significant positive correlations of schemas, emotion regulation, dissociative experiences and suicide ideation with borderline personality features. Moreover, the results of multivariate regression analysis suggested that among the studied variables, schema was the most effective predictor of borderline features (P < 0.001). The findings of this study are in accordance with findings from previous studies, and generally show a meaningful association of schemas, emotion regulation, dissociative experiences, and suicide ideation with borderline personality features.
Classification of proteins with shared motifs and internal repeats in the ECOD database
Kinch, Lisa N.; Liao, Yuxing
2016-01-01
Proteins and their domains evolve by a set of events commonly including the duplication and divergence of small motifs. The presence of short repetitive regions in domains has generally constituted a difficult case for structural domain classifications and their hierarchies. We developed the Evolutionary Classification Of protein Domains (ECOD) in part to implement a new schema for the classification of these types of proteins. Here we document the ways in which ECOD classifies proteins with small internal repeats, widespread functional motifs, and assemblies of small domain-like fragments in its evolutionary schema. We illustrate the ways in which the structural genomics project impacted the classification and characterization of new structural domains and sequence families over the decade. PMID:26833690
The semantics of Chemical Markup Language (CML) for computational chemistry: CompChem.
Phadungsukanan, Weerapong; Kraft, Markus; Townsend, Joe A; Murray-Rust, Peter
2012-08-07
This paper introduces a subdomain chemistry format for storing computational chemistry data called CompChem. It has been developed based on the design, concepts and methodologies of Chemical Markup Language (CML) by adding computational chemistry semantics on top of the CML Schema. The format allows a wide range of ab initio quantum chemistry calculations of individual molecules to be stored. These calculations include, for example, single point energy calculation, molecular geometry optimization, and vibrational frequency analysis. The paper also describes the supporting infrastructure, such as processing software, dictionaries, validation tools and database repositories. In addition, some of the challenges and difficulties in developing common computational chemistry dictionaries are discussed. The uses of CompChem are illustrated by two practical applications.
A cloud-based information repository for bridge monitoring applications
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Zhang, Yilan; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.
2016-04-01
This paper describes an information repository to support bridge monitoring applications on a cloud computing platform. Bridge monitoring, with instrumentation of sensors in particular, collects a significant amount of data. In addition to sensor data, a wide variety of information such as bridge geometry, analysis models and sensor descriptions needs to be stored. Data management plays an important role in facilitating data utilization and data sharing. While bridge information modeling (BrIM) technologies and standards have been proposed, and they provide a means to enable integration and facilitate interoperability, current BrIM standards mostly support information about bridge geometry. In this study, we extend the BrIM schema to include analysis models and sensor information. Specifically, using the OpenBrIM standards as the base, we draw on CSI Bridge, a commercial software package widely used for bridge analysis and design, and SensorML, a standard schema for sensor definition, to define the data entities necessary for bridge monitoring applications. NoSQL database systems are employed for the data repository. Cloud service infrastructure is deployed to enhance the scalability, flexibility and accessibility of the data management system. The data model and systems are tested using the bridge model and the sensor data collected at the Telegraph Road Bridge, Monroe, Michigan.
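A minimal sketch of the NoSQL side of such a repository: storing one sensor description that links into the bridge model, using pymongo against a local MongoDB instance. The document fields are assumptions loosely inspired by SensorML, not the paper's actual schema.

```python
from pymongo import MongoClient  # pip install pymongo; assumes local MongoDB

db = MongoClient("mongodb://localhost:27017")["bridge_monitoring"]

# Hypothetical document: a sensor description that references an element
# of the bridge model, so sensor data can be tied back to the geometry.
db.sensors.insert_one({
    "sensor_id": "ACC-017",
    "type": "accelerometer",
    "bridge_element": "girder-G4",      # link into the BrIM model
    "position": {"x": 12.3, "y": 0.0, "z": 4.1},
    "sample_rate_hz": 200,
})

# Schema-less storage makes later additions (e.g., calibration data) trivial.
db.sensors.update_one({"sensor_id": "ACC-017"},
                      {"$set": {"calibration": {"offset": 0.002}}})
print(db.sensors.find_one({"sensor_id": "ACC-017"}))
```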
An Activity to Teach Students about Schematic Processing
ERIC Educational Resources Information Center
Isbell, Linda M.; Tyler, James M.; Burns, Kathleen C.
2007-01-01
We designed a classroom activity to foster students' understanding of what schemas are and how they function. We used a video of the instructor as an infant to illustrate how schemas influence gender stereotyping. Before the video, we told students that the baby was either a boy or a girl. After the video, students rated whether the baby would…
Clinical Views: Object-Oriented Views for Clinical Databases
Portoni, Luisa; Combi, Carlo; Pinciroli, Francesco
1998-01-01
We present here a prototype of a clinical information system for the archiving and the management of multimedia and temporally-oriented clinical data related to PTCA patients. The system is based on an object-oriented DBMS and supports multiple views and view schemas on patients' data. Remote data access is supported too.
A Systematic Review of Serious Games in Training Health Care Professionals.
Wang, Ryan; DeMaria, Samuel; Goldberg, Andrew; Katz, Daniel
2016-02-01
Serious games are computer-based games designed for training purposes. They are poised to expand their role in medical education. This systematic review, conducted in accordance with PRISMA guidelines, aimed to synthesize current serious gaming trends in health care training, especially those pertaining to developmental methodologies and game evaluation. PubMed, EMBASE, and Cochrane databases were queried for relevant documents published through December 2014. Of the 3737 publications identified, 48, covering 42 serious games, were included. From 2007 to 2014, the field grew from 2 games and 2 genres to 42 games and 8 genres. Overall, study design was heterogeneous, and methodological quality by MERSQI score averaged 10.5/18, which is modest. Seventy-nine percent of serious games were evaluated for training outcomes. As the number of serious games for health care training continues to grow, schemas that organize how educators approach their development and evaluation are essential to their success.
Chao, Tian-Jy; Kim, Younghun
2015-02-10
End-to-end interoperability and workflows from building architecture design to one or more simulations may, in one aspect, comprise establishing a BIM enablement platform architecture. A data model defines data entities and entity relationships for enabling the interoperability and workflows. A data definition language may be implemented that defines and creates a table schema of a database associated with the data model. Data management services and/or application programming interfaces may be implemented for interacting with the data model. Web services may also be provided for interacting with the data model via the Web. A user interface may be implemented that communicates with users and uses the BIM enablement platform architecture, the data model, the data definition language, data management services and application programming interfaces to provide functions to the users to perform work related to building information management.
Informatics in radiology: use of CouchDB for document-based storage of DICOM objects.
Rascovsky, Simón J; Delgado, Jorge A; Sanz, Alexander; Calvo, Víctor D; Castrillón, Gabriel
2012-01-01
Picture archiving and communication systems traditionally have depended on schema-based Structured Query Language (SQL) databases for imaging data management. To optimize database size and performance, many such systems store a reduced set of Digital Imaging and Communications in Medicine (DICOM) metadata, discarding informational content that might be needed in the future. As an alternative to traditional database systems, document-based key-value stores recently have gained popularity. These systems store documents containing key-value pairs that facilitate data searches without predefined schemas. Document-based key-value stores are especially suited to archive DICOM objects because DICOM metadata are highly heterogeneous collections of tag-value pairs conveying specific information about imaging modalities, acquisition protocols, and vendor-supported postprocessing options. The authors used an open-source document-based database management system (Apache CouchDB) to create and test two such databases; CouchDB was selected for its overall ease of use, capability for managing attachments, and reliance on HTTP and Representational State Transfer standards for accessing and retrieving data. A large database was created first in which the DICOM metadata from 5880 anonymized magnetic resonance imaging studies (1,949,753 images) were loaded by using a Ruby script. To provide the usual DICOM query functionality, several predefined "views" (standard queries) were created by using JavaScript. For performance comparison, the same queries were executed in both the CouchDB database and a SQL-based DICOM archive. The capabilities of CouchDB for attachment management and database replication were separately assessed in tests of a similar, smaller database. Results showed that CouchDB allowed efficient storage and interrogation of all DICOM objects; with the use of information retrieval algorithms such as map-reduce, all the DICOM metadata stored in the large database were searchable with only a minimal increase in retrieval time over that with the traditional database management system. Results also indicated possible uses for document-based databases in data mining applications such as dose monitoring, quality assurance, and protocol optimization.
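The map-reduce views mentioned above are small JavaScript functions stored in a design document and queried over plain HTTP. A hedged sketch against the standard CouchDB REST API follows; the database name and metadata fields are illustrative, not those of the study.

```python
import requests

DB = "http://localhost:5984/dicom_meta"  # assumes a local CouchDB instance
requests.put(DB)  # create the database; an existing database returns 412

# Design document whose view indexes studies by modality; CouchDB map
# functions are JavaScript strings shipped over HTTP.
design = {"views": {"by_modality": {
    "map": "function(doc) { if (doc.Modality) emit(doc.Modality, doc.StudyInstanceUID); }"}}}
requests.put(f"{DB}/_design/search", json=design).raise_for_status()

# Store one toy DICOM metadata document, then query the view by key.
doc = {"Modality": "MR", "StudyInstanceUID": "1.2.840.99"}
requests.put(f"{DB}/mr-0001", json=doc).raise_for_status()
rows = requests.get(f"{DB}/_design/search/_view/by_modality",
                    params={"key": '"MR"'}).json()["rows"]
print(rows)
```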
Negative Relational Schemas Predict the Trajectory of Coercive Dynamics During Early Childhood
Smith, Justin D.; Dishion, Thomas J.; Shaw, Daniel S.; Wilson, Melvin N.
2014-01-01
Coercive family processes are germane to the development of problem behaviors in early childhood, yet the cognitive and affective underpinnings are not well understood. We hypothesized that one antecedent of early coercive interactions is the caregiver’s implicit affective attitudes toward the child, which in this article are termed relational schemas. Relational schemas have previously been linked to coercion and problem behaviors, but there has yet to be an examination of the association between relational schemas and trajectories of coercion during early childhood. We examined 731 indigent caregiver-child dyads (49% female children) from a randomized intervention trial of the Family Check-Up. Predominantly biological mothers participated. A speech sample was used to assess relational schemas at age 2. Coercive interactions were assessed observationally each year between ages 2 and 4. Caregiver and teacher reports of children’s oppositional and aggressive behaviors were collected at age 7.5 and 8.5. Path analysis revealed that negative relational schemas were associated with less steep declines in coercion during this period, which in turn were predictive of ratings of oppositional and aggressive behaviors at age 7.5/8.5 after controlling for baseline levels, positive relational schemas, child gender, ethnicity, and cumulative risk. Intervention condition assignment did not moderate this relationship, suggesting the results represent a naturally occurring process. Given the link between persistent early coercion and later deleterious outcomes, relational schemas that maintain and amplify coercive dynamics represent a potential target for early intervention programs designed to improve parent–child relationships. PMID:25208813
Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S
2018-06-01
Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, benchmarked against traditional single/parallel multiway-merge methods, a message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or much better strong and weak scalability than traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
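The sequential analogue of the merging step is a k-way merge of coordinate-sorted streams. A single-machine sketch with Python's heapq, standing in for the parallel, bottleneck-free stages the distributed schemas implement; it is simplified in that chromosomes compare lexicographically and headers are simply skipped.

```python
import heapq

def vcf_records(path):
    """Yield (chrom, pos, line) from a coordinate-sorted VCF body."""
    with open(path) as fh:
        for line in fh:
            if line.startswith("#"):   # skip meta and header lines
                continue
            chrom, pos = line.split("\t", 2)[:2]
            yield (chrom, int(pos), line)

def merge_vcfs(paths, out_path):
    """k-way merge of pre-sorted inputs. Note: lexicographic chromosome
    ordering is a simplification; real tools use a contig ordering."""
    streams = [vcf_records(p) for p in paths]
    with open(out_path, "w") as out:
        for _, _, line in heapq.merge(*streams, key=lambda r: (r[0], r[1])):
            out.write(line)

# merge_vcfs(["a.vcf", "b.vcf"], "merged.vcf")
```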
Database architectures for Space Telescope Science Institute
NASA Astrophysics Data System (ADS)
Lubow, Stephen
1993-08-01
At STScI nearly all large applications require database support. A general-purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user's view and the detailed schema view of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
Ho, Michelle L; Adler, Benjamin A; Torre, Michael L; Silberg, Jonathan J; Suh, Junghae
2013-12-20
Adeno-associated virus (AAV) recombination can result in chimeric capsid protein subunits whose ability to assemble into an oligomeric capsid, package a genome, and transduce cells depends on the inheritance of sequence from different AAV parents. To develop quantitative design principles for guiding site-directed recombination of AAV capsids, we have examined how capsid structural perturbations predicted by the SCHEMA algorithm correlate with experimental measurements of disruption in seventeen chimeric capsid proteins. In our small chimera population, created by recombining AAV serotypes 2 and 4, we found that protection of viral genomes and cellular transduction were inversely related to calculated disruption of the capsid structure. Interestingly, however, we did not observe a correlation between genome packaging and calculated structural disruption; a majority of the chimeric capsid proteins formed at least partially assembled capsids and more than half packaged genomes, including those with the highest SCHEMA disruption. These results suggest that the sequence space accessed by recombination of divergent AAV serotypes is rich in capsid chimeras that assemble into 60-mer capsids and package viral genomes. Overall, the SCHEMA algorithm may be useful for delineating quantitative design principles to guide the creation of libraries enriched in genome-protecting virus nanoparticles that can effectively transduce cells. Such improvements to the virus design process may help advance not only gene therapy applications but also other bionanotechnologies dependent upon the development of viruses with new sequences and functions.
Damming the genomic data flood using a comprehensive analysis and storage data structure
Bouffard, Marc; Phillips, Michael S.; Brown, Andrew M.K.; Marsh, Sharon; Tardif, Jean-Claude; van Rooij, Tibor
2010-01-01
Data generation, driven by rapid advances in genomic technologies, is fast outpacing our analysis capabilities. Faced with this flood of data, more hardware and software resources are added to accommodate data sets whose structure has not specifically been designed for analysis. This leads to unnecessarily lengthy processing times and excessive data handling and storage costs. Current efforts to address this have centered on developing new indexing schemas and analysis algorithms, whereas the root of the problem lies in the format of the data itself. We have developed a new data structure for storing and analyzing genotype and phenotype data. By leveraging data normalization techniques, database management system capabilities and a novel multi-table, multidimensional database structure, we have eliminated the following: (i) unnecessarily large data set size due to high levels of redundancy, (ii) sequential access to these data sets and (iii) common bottlenecks in analysis times. The resulting novel data structure horizontally divides the data to circumvent traditional problems associated with the use of databases for very large genomic data sets. The resulting data set required 86% less disk space and performed analytical calculations 6248 times faster compared to a standard approach, without any loss of information. Database URL: http://castor.pharmacogenomics.ca PMID:21159730
NASA Astrophysics Data System (ADS)
Willmes, C.
2017-12-01
In the frame of the Collaborative Research Centre 806 (CRC 806), an interdisciplinary research project that needs to manage data, information and knowledge from heterogeneous domains such as archeology, the cultural sciences, and the geosciences, a collaborative internal knowledge base system was developed. The system is based on the open-source MediaWiki software, well known as the software behind Wikipedia, which provides a web-based collaborative knowledge and information management platform. This software is enhanced with the Semantic MediaWiki (SMW) extension, which allows structured data to be stored and managed within the Wiki platform and provides complex query and API interfaces to the structured data stored in the SMW database. An additional open-source command line tool called mobo improves the data model development process as well as automated data imports, from small spreadsheets to large relational databases. Mobo helps build and deploy SMW structure in an agile, schema-driven development way, and allows the data model formalizations, expressed in JSON-Schema format, to be managed and collaboratively developed using version control systems like git. The combination of a well-equipped collaborative web platform (MediaWiki), the ability to store and query structured data in this collaborative database (SMW), and automated data import and data model development (mobo) results in a powerful but flexible system for building and developing a collaborative knowledge base. Furthermore, SMW allows the application of Semantic Web technology: the structured data can be exported into RDF, so it is possible to set up a triple store with a SPARQL endpoint on top of the database. The JSON-Schema based data models can be extended to JSON-LD to profit from the possibilities of Linked Data technology.
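The JSON-Schema data models mentioned above can be exercised directly. Below is a toy model and record validated with the jsonschema package; the field names are hypothetical and do not reflect CRC 806's actual models.

```python
from jsonschema import validate  # pip install jsonschema

# A toy data model in JSON Schema, standing in for a mobo-managed schema.
site_schema = {
    "type": "object",
    "required": ["name", "epoch"],
    "properties": {
        "name":  {"type": "string"},
        "epoch": {"type": "string", "enum": ["Paleolithic", "Mesolithic"]},
        "lat":   {"type": "number"},
        "lon":   {"type": "number"},
    },
}

record = {"name": "Test Site 1", "epoch": "Paleolithic", "lat": 41.9, "lon": 19.5}
validate(instance=record, schema=site_schema)  # raises ValidationError on bad input
print("record conforms to the model")
```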
ERIC Educational Resources Information Center
Braune, Rolf; Foshay, Wellesley R.
1983-01-01
The proposed three-step strategy for research on human information processing--concept hierarchy analysis, analysis of example sets to teach relations among concepts, and analysis of problem sets to build a progressively larger schema for the problem space--may lead to practical procedures for instructional design and task analysis. Sixty-four…
ERIC Educational Resources Information Center
Goetz, Ernest T.; And Others
Two experiments using the same design and subjects drawn from the same populations tested two accounts of schema-directed text processing, the selective attention hypothesis that suggests readers identify text elements as important or unimportant on the basis of an engaged, operative, or subsuming schema; and the slot-filling hypothesis that…
Dynamic publication model for neurophysiology databases.
Gardner, D; Abato, M; Knuth, K H; DeBellis, R; Erde, S M
2001-08-29
We have implemented a pair of database projects, one serving cortical electrophysiology and the other invertebrate neurones and recordings. The design for each combines aspects of two proven schemes for information interchange. The journal article metaphor determined the type, scope, organization and quantity of data to comprise each submission. Sequence databases encouraged intuitive tools for data viewing, capture, and direct submission by authors. Neurophysiology required transcending these models with new datatypes. Time-series, histogram and bivariate datatypes, including illustration-like wrappers, were selected by their utility to the community of investigators. As interpretation of neurophysiological recordings depends on context supplied by metadata attributes, searches are via visual interfaces to sets of controlled-vocabulary metadata trees. Neurones, for example, can be specified by metadata describing functional and anatomical characteristics. Permanence is advanced by data model and data formats largely independent of contemporary technology or implementation, including Java and the XML standard. All user tools, including dynamic data viewers that serve as a virtual oscilloscope, are Java-based, free, multiplatform, and distributed by our application servers to any contemporary networked computer. Copyright is retained by submitters; viewer displays are dynamic and do not violate copyright of related journal figures. Panels of neurophysiologists view and test schemas and tools, enhancing community support.
NASA Astrophysics Data System (ADS)
Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros
SPARQL is today the standard access language for Semantic Web data. In recent years, XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries, which are used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.
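A deliberately tiny illustration of the translation idea: one SPARQL triple pattern rewritten as an XQuery FLWOR expression under an assumed predicate-to-element mapping. The real SPARQL2XQuery framework derives such mappings from ontology-to-XML-Schema alignments and handles full queries; this sketch only shows the shape of the rewrite.

```python
def triple_to_xquery(subject_var, predicate, xml_path):
    """Rewrite one SPARQL triple pattern, assuming the predicate maps to a
    child element of <person> records (a stand-in for a real alignment)."""
    elem = predicate.split(":")[-1]  # e.g., foaf:name -> name
    return (f"for ${subject_var} in doc('{xml_path}')//person\n"
            f"return ${subject_var}/{elem}/text()")

# SPARQL: SELECT ?name WHERE { ?p foaf:name ?name }
print(triple_to_xquery("p", "foaf:name", "people.xml"))
```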
Guenther, Lars; Froehlich, Klara; Milde, Jutta; Heidecke, Gitte; Ruhrmann, Georg
2015-01-01
Journalists portray health issues within different frames, which may shape news recipients' evaluations, attitudes, and behaviors. As the research on framing continues to face theoretical challenges and methodological concerns, this study examines the transformation and establishing of evaluative schemas, which are steps in the process toward attitudinal change. The study measures recipients' evaluations of actual television clips dealing with cancer diagnoses and cancer therapies. Two valenced (positive vs. negative) media frames were tested in a 3-week online panel (n = 298) using a pretest-posttest design with a German sample. The results offer limited support for the hypothesis that media frames transform participants' schemas, but do not support the hypothesis that new schemas are established in response to media frames. The study also investigates interactions between framing and participants' issue involvement, as well as between framing and topic-specific interest and media use.
NASA Astrophysics Data System (ADS)
Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.
2014-05-01
Many important Cultural Heritage sites have been studied over long periods of time, with different technical equipment, methods, and intentions, by different researchers. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform that brings spatial and non-spatial databases together and provides visualization and analysis tools. The 3D components of the platform in particular use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema that organizes not only segmented models but also different levels of detail and other representations of the same entity. It is implemented in a spatial database, which allows the storing of georeferenced 3D data. This enables organization and queries by semantic, geometric and spatial properties. As the service for delivering the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented which uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).
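The hierarchical segmentation can be modeled as a self-referencing table and walked with a recursive query. A minimal sketch follows; the table layout and example names are illustrative, not the project's actual schema, and the real database also stores geometry and georeferencing.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Illustrative segmentation table: each row is a semantic part of a
-- monument, pointing at its parent and carrying a level of detail.
CREATE TABLE segment (
    id INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES segment(id),
    name TEXT,
    lod INTEGER
);
INSERT INTO segment VALUES
    (1, NULL, 'Temple', 1),
    (2, 1, 'Facade', 2),
    (3, 2, 'Doorway sculpture', 3);
""")

# Walk the hierarchy downward from one entity with a recursive query.
rows = con.execute("""
    WITH RECURSIVE parts(id, name, lod) AS (
        SELECT id, name, lod FROM segment WHERE name = 'Temple'
        UNION ALL
        SELECT s.id, s.name, s.lod
        FROM segment s JOIN parts p ON s.parent_id = p.id
    )
    SELECT * FROM parts""").fetchall()
print(rows)
```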
ERIC Educational Resources Information Center
Kirisci, Levent; Tarter, Ralph E.
2001-01-01
Designs and evaluates a multidimensional schema for the assessment of alcohol, tobacco and other drug use topology. Findings illustrate the value of multidimensional assessment for identifying youth at high risk for substance use disorder (SUD) as well as for elucidating the factors contributing to the transition to suprathreshold SUD. (Contains…
Ontology based heterogeneous materials database integration and semantic query
NASA Astrophysics Data System (ADS)
Zhao, Shuai; Qian, Quan
2017-10-01
Materials digital data, high-throughput experiments and high-throughput computations are regarded as the three key pillars of the materials genome initiatives. With the fast growth of materials data, the integration and sharing of data has become urgent, and has gradually become a hot topic in materials informatics. Due to the lack of semantic description, it is difficult to integrate data deeply at the semantic level when adopting conventional heterogeneous database integration approaches such as federated databases or data warehouses. In this paper, a semantic integration method is proposed that creates a semantic ontology by extracting the database schema semi-automatically. Other heterogeneous databases are integrated into the ontology by means of relational algebra and the rooted graph. Based on the integrated ontology, semantic queries can be run using SPARQL. In the experiments, two well-known first-principles computation databases, OQMD and the Materials Project, are used as integration targets, demonstrating the availability and effectiveness of our method.
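Once records from the source databases are lifted into the shared ontology, SPARQL runs against the integrated graph regardless of origin. A small rdflib sketch under a hypothetical ontology namespace (the class and property names are assumptions for illustration):

```python
from rdflib import Graph, Literal, Namespace, RDF  # pip install rdflib

MAT = Namespace("http://example.org/materials#")   # hypothetical ontology IRI
g = Graph()

# One record as it might look after integration into the shared ontology.
g.add((MAT.entry1, RDF.type, MAT.Compound))
g.add((MAT.entry1, MAT.formula, Literal("Fe2O3")))
g.add((MAT.entry1, MAT.formationEnergy, Literal(-8.25)))

# SPARQL over the integrated graph, independent of the source database.
q = """
PREFIX mat: <http://example.org/materials#>
SELECT ?f ?e WHERE {
    ?c a mat:Compound ; mat:formula ?f ; mat:formationEnergy ?e .
}"""
for formula, energy in g.query(q):
    print(formula, energy)
```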
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.
2003-01-01
An object-relational database management system is an integrated hybrid cooperative approach combining the best practices of the relational model, with its SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework, called NETMARK, is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical address data types for very efficient keyword search of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents
NASA Technical Reports Server (NTRS)
Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)
2002-01-01
An object-relational database management system is an integrated hybrid cooperative approach that combines the best practices of the relational model, utilizing SQL queries, and the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to manage the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.
myChEMBL: a virtual machine implementation of open data and cheminformatics tools.
Ochoa, Rodrigo; Davies, Mark; Papadatos, George; Atkinson, Francis; Overington, John P
2014-01-15
myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current. The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL.
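A minimal sketch of working against the myChEMBL VM's PostgreSQL schema with RDKit, as the abstract describes. Connection parameters are assumptions; the table and column names follow the public ChEMBL schema but should be verified against the installed release.

```python
# Sketch: query the ChEMBL PostgreSQL schema and compute a property with RDKit.
# dbname/user/host are assumptions about the VM's configuration.
import psycopg2
from rdkit import Chem
from rdkit.Chem import Descriptors

conn = psycopg2.connect(dbname="chembl", user="chembl", host="localhost")
cur = conn.cursor()
cur.execute("""
    SELECT md.chembl_id, cs.canonical_smiles
    FROM molecule_dictionary md
    JOIN compound_structures cs ON cs.molregno = md.molregno
    LIMIT 10
""")
for chembl_id, smiles in cur.fetchall():
    mol = Chem.MolFromSmiles(smiles)
    if mol is not None:
        print(chembl_id, Descriptors.MolWt(mol))
```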
Lassere, Marissa N
2008-06-01
There are clear advantages to using biomarkers and surrogate endpoints, but concerns about clinical and statistical validity, and the lack of systematic methods to evaluate these aspects, hinder their efficient application. Section 2 is a systematic, historical review of the biomarker-surrogate endpoint literature with special reference to the nomenclature, the systems of classification and the statistical methods developed for their evaluation. In Section 3 an explicit, criterion-based, quantitative, multidimensional hierarchical levels-of-evidence schema - the Biomarker-Surrogacy Evaluation Schema - is proposed to evaluate and co-ordinate the multiple dimensions (biological, epidemiological, statistical, clinical trial and risk-benefit evidence) of biomarker-clinical endpoint relationships. The schema systematically evaluates and ranks the surrogacy status of biomarkers and surrogate endpoints using defined levels of evidence. The schema incorporates three independent domains: Study Design, Target Outcome and Statistical Evaluation. Each domain has items ranked from zero to five. An additional category called Penalties incorporates further considerations of biological plausibility, risk-benefit and generalizability. The total score (0-15) determines the level of evidence, with Level 1 the strongest and Level 5 the weakest. The term 'surrogate' is restricted to markers attaining Levels 1 or 2 only. The surrogacy status of markers can then be directly compared within and across different areas of medicine to guide individual, trial-based or drug-development decisions. This schema would facilitate the communication between clinical, research, regulatory, industry and consumer participants necessary for evaluation of the biomarker-surrogate-clinical endpoint relationship in their different settings.
Barton, Allen W; Kogan, Steven M; Cho, Junhan; Brown, Geoffrey L
2015-12-01
This study was designed to examine the associations of biological father and social father involvement during childhood with African American young men's development and engagement in risk behaviors. With a sample of 505 young men living in the rural South of the United States, a dual mediation model was tested in which retrospective reports of involvement from biological fathers and social fathers were linked to young men's substance misuse and multiple sexual partnerships through men's relational schemas and future expectations. Results from structural equation modeling indicated that levels of involvement from biological fathers and social fathers predicted young men's relational schemas; only biological fathers' involvement predicted future expectations. In turn, future expectations predicted levels of substance misuse, and negative relational schemas predicted multiple sexual partnerships. Biological fathers' involvement evinced significant indirect associations with young men's substance misuse and multiple sexual partnerships through both schemas and expectations; social fathers' involvement exhibited an indirect association with multiple sexual partnerships through relational schemas. Findings highlight the unique influences of biological fathers and social fathers on multiple domains of African American young men's psychosocial development that subsequently render young men more or less likely to engage in risk behaviors.
WOVOdat: A New Tool for Managing and Accessing Data of Worldwide Volcanic Unrest
NASA Astrophysics Data System (ADS)
Venezky, D. Y.; Malone, S. D.; Newhall, C. G.
2002-12-01
WOVOdat (World Organization of Volcano Observatories database of volcanic unrest) will for the first time bring together data on worldwide volcanic seismicity, ground deformation, fumarolic activity, and other changes within or adjacent to volcanic systems. Although a large body of data and experience has been built up over the past century, we currently have no means of accessing that collective experience for use during crises and for research. WOVOdat will be the central resource of a data management system; other components will include utilities for data input and archiving, structured data retrieval, and data mining; educational modules; and links to institutional databases such as IRIS (global seismicity), UNAVCO (global GPS coordinates and strain vectors), and the Smithsonian's Global Volcanism Program (historical eruptions). Data will be geospatially and time-referenced, to provide four-dimensional images of how volcanic systems respond to magma intrusion, regional strain, and other disturbances prior to and during eruption. As part of the design phase, a small WOVOdat team is currently collecting information from observatories about their data types, formats, and local data management. The database schema is being designed such that responses to common yet complex queries are rapid (e.g., where else has similar unrest occurred and what was the outcome?) while also allowing more detailed research analysis of relationships between various parameters (e.g., what do temporal relations between long-period earthquakes, transient deformation, and spikes in gas emission tell us about the geometry and physical properties of magma and a volcanic edifice?). We are excited by the potential of WOVOdat, and we invite participation in its design and development. Next steps involve formalizing and testing the design and developing utilities for translating data of various formats into common formats. The large job of populating the database will follow, and eventually we will have a great new tool for eruption forecasting and research.
NoSQL data model for semi-automatic integration of ethnomedicinal plant data from multiple sources.
Ningthoujam, Sanjoy Singh; Choudhury, Manabendra Dutta; Potsangbam, Kumar Singh; Chetia, Pankaj; Nahar, Lutfun; Sarker, Satyajit D; Basar, Norazah; Das Talukdar, Anupam
2014-01-01
Sharing traditional knowledge with the scientific community could refine scientific approaches to phytochemical investigation and conservation of ethnomedicinal plants. As such, integration of traditional knowledge with scientific data using a single platform for sharing is greatly needed. However, ethnomedicinal data are available in heterogeneous formats, which depend on cultural aspects, survey methodology and the focus of the study. Phytochemical and bioassay data are also available from many open sources in various standard and customised formats. The aim was to design a flexible data model that could integrate both primary and curated ethnomedicinal plant data from multiple sources. The current model is based on MongoDB, one of the Not only Structured Query Language (NoSQL) databases. Although MongoDB does not enforce a schema, modifications were made so that the model could incorporate both standard and customised ethnomedicinal plant data formats from different sources. The model presented can integrate both primary and secondary data related to ethnomedicinal plants. Accommodation of disparate data was accomplished by a feature of this database that supports a different set of fields for each document. It also allows storage of similar data having different properties. The model presented is scalable to a highly complex level with continuing maturation of the database, and is applicable for storing, retrieving and sharing ethnomedicinal plant data. It can also serve as a flexible alternative to a relational and normalised database. Copyright © 2014 John Wiley & Sons, Ltd.
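A minimal sketch of the document-per-record flexibility the abstract relies on, using pymongo; the database, collection and field names are illustrative only, not the paper's actual layout.

```python
# Sketch: heterogeneous ethnomedicinal records in one MongoDB collection.
# Database/collection/field names are hypothetical.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
plants = client.ethnomed.plants

# Documents need not share the same fields: one record carries survey
# metadata, the other carries curated phytochemical data.
plants.insert_many([
    {"species": "Ocimum sanctum", "local_name": "tulsi",
     "uses": [{"ailment": "fever", "part": "leaf"}],
     "survey": {"district": "Cachar", "year": 2012}},
    {"species": "Ocimum sanctum",
     "phytochemicals": ["eugenol", "ursolic acid"],
     "source": "curated"},
])

# Query across the shared field despite the differing document shapes.
for doc in plants.find({"species": "Ocimum sanctum"}):
    print(doc)
```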
Designing a framework of intelligent information processing for dentistry administration data.
Amiri, N; Matthews, D C; Gao, Q
2005-07-01
This study was designed to test a cumulative view of current data in the clinical database at the Faculty of Dentistry, Dalhousie University. We planned to examine associations among demographic factors and treatments. Three tables were selected from the database of the faculty: patient, treatment and procedures. All fields and record numbers in each table were documented. Data were explored using SQL Server and Visual Basic and then cleaned by removing incongruent fields. After transformation, a data warehouse was created. This was imported into SQL Analysis Services to create an OLAP (Online Analytical Processing) cube. The multidimensional model used for access to the data was created using a star schema. Treatment count was the measurement variable. Five dimensions--date, postal code, gender, age group and treatment category--were used to detect associations. Another data warehouse of 8 tables (international tooth codes 1-8) was created and imported into SAS Enterprise Miner for data mining. Association nodes were used for each table to find sequential associations, and minimum criteria were set to 2% of cases. The findings of this study confirmed most assumptions of treatment planning procedures. There were some small unexpected patterns of clinical interest. Further developments are recommended to create predictive models. Recent improvements in information technology offer numerous advantages for the conversion of raw data from faculty databases into information and subsequently into knowledge. This knowledge can be used by decision makers, managers, and researchers to answer clinical questions, affect policy change and determine future research needs.
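A minimal sketch of a star schema of the kind described, with treatment count as the measure and the five dimensions named in the abstract; table and column names are assumptions, not the faculty's actual warehouse design.

```python
# Sketch: star schema with a fact table and dimension tables, plus a typical
# OLAP-style rollup. Names are hypothetical; tables would be populated from
# the cleaned clinical data.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date      (date_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_patient   (patient_id INTEGER PRIMARY KEY, gender TEXT,
                            age_group TEXT, postal_code TEXT);
CREATE TABLE dim_treatment (treatment_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_treatment(
    date_id         INTEGER REFERENCES dim_date(date_id),
    patient_id      INTEGER REFERENCES dim_patient(patient_id),
    treatment_id    INTEGER REFERENCES dim_treatment(treatment_id),
    treatment_count INTEGER
);
""")

# Rollup: treatment counts by gender and treatment category.
rows = con.execute("""
    SELECT p.gender, t.category, SUM(f.treatment_count)
    FROM fact_treatment f
    JOIN dim_patient   p ON p.patient_id   = f.patient_id
    JOIN dim_treatment t ON t.treatment_id = f.treatment_id
    GROUP BY p.gender, t.category
""").fetchall()
```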
Smart travel guide: from internet image database to intelligent system
NASA Astrophysics Data System (ADS)
Chareyron, Gaël; Da Rugna, Jérome; Cousin, Saskia
2011-02-01
To help tourists discover a city, a region or a park, many options are provided by public tourism travel centers, by free online guides or by dedicated book guides. Nonetheless, these guides provide only mainstream information that does not conform to individual tourist behavior. On the other hand, there are several online image databases allowing users to upload their images and to localize each image on a map. These websites are representative of tourism practices and constitute a proxy for analyzing tourism flows. This work therefore intends to answer the question: knowing what I have visited and what other people have visited, where should I go now? This process requires profiling users, sites and photos. Our paper presents the acquired data and the relationships between photographers, sites and photos, and introduces the model designed to correctly estimate the interest of each tourism site. The third part shows an application of our schema: a smart travel guide on geolocated mobile devices. This Android application is a travel guide that truly matches the user's wishes.
NASA Astrophysics Data System (ADS)
Tong, Xin; Gromala, Diane; Shaw, Chris D.; Williamson, Owen; Iscen, Ozgun E.
2015-03-01
Body image/body schema (BIBS) lies within the larger realm of embodied cognition. Its interdisciplinary literature can inspire Virtual Reality (VR) researchers and designers to develop novel ideas and provide them with approaches to human perception and experience. In this paper, we introduce six fundamental ideas for designing interactions in VR, derived from the BIBS literature, that demonstrate how the mind is embodied. We discuss our own research, ranging from two mature works to a prototype, to support explorations of VR interaction design from a BIBS approach. Based on our experiences, we argue that incorporating ideas of embodiment into design practices requires a shift in the perspective or understanding of the human body, perception and experience, all of which affect interaction design in unique ways. The dynamic, interactive and distributed understanding of cognition guides our approach to interaction design, where the interrelatedness and plasticity of BIBS play a crucial role.
Bravo, Carlos; Suarez, Carlos; González, Carolina; López, Diego; Blobel, Bernd
2014-01-01
Healthcare information is distributed across multiple heterogeneous and autonomous systems. Access to, and sharing of, distributed information sources is a challenging task. To contribute to meeting this challenge, this paper presents a formal, complete and semi-automatic transformation service from relational databases to the Web Ontology Language (OWL). The proposed service makes use of an algorithm that can transform several data models from different domains, mainly by deploying inheritance rules. The paper emphasizes the relevance of integrating the proposed approach into an ontology-based interoperability service to achieve semantic interoperability.
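To convey the flavor of such a relational-to-OWL transformation, a simplified sketch with rdflib: each table becomes an OWL class and each column a datatype property. This is an illustration only, not the paper's full algorithm (which also deploys inheritance rules); the base IRI and the schema dictionary are hypothetical.

```python
# Sketch: mapping a relational catalog to OWL classes and properties.
# The EX namespace and the `schema` dictionary are hypothetical.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/clinic#")
g = Graph()
g.bind("ex", EX)

schema = {  # table -> columns, as read from the relational catalog
    "Patient": ["patient_id", "name", "birth_date"],
    "Encounter": ["encounter_id", "patient_id", "date"],
}

for table, columns in schema.items():
    cls = EX[table]
    g.add((cls, RDF.type, OWL.Class))            # table -> OWL class
    for col in columns:
        prop = EX[f"{table}.{col}"]
        g.add((prop, RDF.type, OWL.DatatypeProperty))  # column -> property
        g.add((prop, RDFS.domain, cls))

print(g.serialize(format="turtle"))
```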
A proposal of fuzzy connective with learning function and its application to fuzzy retrieval system
NASA Technical Reports Server (NTRS)
Hayashi, Isao; Naito, Eiichi; Ozawa, Jun; Wakami, Noboru
1993-01-01
A new fuzzy connective and a network structure constructed from fuzzy connectives are proposed to overcome a drawback of conventional fuzzy retrieval systems. The network represents a retrieval query, and the fuzzy connectives in the network have a learning function that adjusts their parameters based on data from a database and outputs from a user. Fuzzy retrieval systems employing this network are also constructed. Users can retrieve results even with a query whose attributes do not exist in the database schema, and can obtain satisfactory results for a variety of ways of thinking thanks to the learning function.
The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.
Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi
2005-04-15
Bioinformatics resources for glycomics are very poor compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html. Contact: kikuchi@hydra.mki.co.jp
Data Warehouse Design from HL7 Clinical Document Architecture Schema.
Pecoraro, Fabrizio; Luzi, Daniela; Ricci, Fabrizio L
2015-01-01
This paper proposes a semi-automatic approach to extract clinical information structured in an HL7 Clinical Document Architecture (CDA) document and transform it into a data warehouse dimensional model schema. It is based on a conceptual framework, published in a previous work, that maps the dimensional model primitives to CDA elements. Its feasibility is demonstrated through a case study based on the analysis of vital signs gathered during laboratory tests.
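A minimal sketch of the extraction step such an approach implies: pulling coded observations out of a CDA document with ElementTree. Only the HL7 v3 namespace is taken as given; the flat iteration below simplifies real CDA nesting, and the input file name is hypothetical.

```python
# Sketch: extract (code, value, unit) triples from CDA observations,
# ready to be loaded into the fact table of a dimensional model.
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}
tree = ET.parse("cda_document.xml")  # hypothetical input file

rows = []
for obs in tree.getroot().iter("{urn:hl7-org:v3}observation"):
    code = obs.find("hl7:code", NS)
    value = obs.find("hl7:value", NS)
    if code is not None and value is not None:
        # e.g. ("8480-6", "120", "mm[Hg]") for systolic blood pressure
        rows.append((code.get("code"), value.get("value"), value.get("unit")))
```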
2014-04-25
…EA's Java application programming interface (API), the team built a tool called OWL2EA that can ingest an OWL file and generate the corresponding UML… ObjectItemStructure specification shown in Figure 10. Running this script in the relational database server MySQL creates the physical schema that…
NASA Astrophysics Data System (ADS)
Cervato, C.; Fils, D.; Bohling, G.; Diver, P.; Greer, D.; Reed, J.; Tang, X.
2006-12-01
The federation of databases is not a new endeavor. Great strides have been made, e.g., in the health and astrophysics communities. Reviews of those successes indicate that they have been able to leverage key cross-community core concepts. In its simplest implementation, a federation of databases with identical base schemas that can be extended to address individual efforts is relatively easy to accomplish. Efforts of groups like the Open Geospatial Consortium have shown methods to geospatially relate data between different sources. We present here a summary of CHRONOS's (http://www.chronos.org) experience with highly heterogeneous data. Our experience with the federation of very diverse databases shows that the wide variety of encoding options for items like locality, time scale, taxon ID, and other key parameters makes it difficult to effectively join data across them. However, the response to this is not to develop one large, monolithic database, which will suffer growing pains due to social, national, and operational issues, but rather to systematically develop the architecture that will enable cross-resource (database, repository, tool, interface) interaction. CHRONOS has cleared the major hurdle of federating small IT database efforts with service-oriented and XML-based approaches. The application of easy-to-use procedures that allow groups of all sizes to implement and experiment with searches across various databases and to use externally created tools is vital. We are sharing with the geoinformatics community the difficulties with application frameworks, user authentication, standards compliance, and data storage encountered in setting up web sites and portals for various science initiatives (e.g., ANDRILL, EARTHTIME). The ability to incorporate CHRONOS data, services, and tools into the existing framework of a group is crucial to the development of a model that supports and extends the vitality of the small- to medium-sized research effort that is essential for a vibrant scientific community. This presentation will directly address issues of portal development related to JSR-168 and other portal APIs as well as issues related to both federated and local directory-based authentication. The application of service-oriented architecture in connection with REST-based approaches is vital to facilitate service use by experienced and less experienced information technology groups. Application of these services with XML-based schemas allows for connection to third-party tools such as GIS-based tools and software designed to perform a specific scientific analysis. The connection of all these capabilities into a combined framework based on the standard XHTML Document Object Model and CSS 2.0 standards used in traditional web development will be demonstrated. CHRONOS also utilizes newer client techniques such as AJAX and cross-domain scripting along with traditional server-side database, application, and web servers. The combination of the various components of this architecture creates an environment based on open and free standards that allows for the discovery, retrieval, and integration of tools and data.
Mazurek Melnyk, Bernadette
2013-01-01
Background: The transition to hospice care is a stressful experience for caregivers, who report high anxiety, unpreparedness, and lack of confidence. These sequelae are likely explained by the lack of an accurate cognitive schema, not knowing what to expect or how to help their loved one. Few interventions exist for this population and most do not measure preparedness, confidence, and anxiety using a schema-building conceptual framework for a new experience. Objective: The purpose of this study was to test the feasibility and preliminary effects of an intervention program, Education and Skill building Intervention for Caregivers of Hospice patients (ESI-CH), using an innovative conceptual design that targets cognitive schema development and basic skill building for caregivers of loved ones newly admitted to hospice services. Design: A pre-experimental one-group pre- and post-test study design was used. Eighteen caregivers caring for loved ones in their homes were recruited and twelve completed the pilot study. Depression, anxiety, activity restriction, preparedness, and beliefs/confidence were measured. Results: Caregivers reported increased preparedness, more helpful beliefs, and more confidence about their ability to care for their loved one. Preliminary trends suggested decreased anxiety levels for the intervention group. Caregivers who completed the intervention program rated the program very good or excellent, thought the information was helpful and timely, and would recommend it to friends. Conclusions: Results show promise that the ESI-CH program may assist as an evidence-based program to support caregivers in their role as a caregiver to a newly admitted hospice patient. PMID:23384244
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, Tian-Jy; Kim, Younghun
An end-to-end interoperability and workflows from building architecture design to one or more simulations, in one aspect, may comprise establishing a BIM enablement platform architecture. A data model defines data entities and entity relationships for enabling the interoperability and workflows. A data definition language may be implemented that defines and creates a table schema of a database associated with the data model. Data management services and/or application programming interfaces may be implemented for interacting with the data model. Web services may also be provided for interacting with the data model via the Web. A user interface may be implemented that communicates with users and uses the BIM enablement platform architecture, the data model, the data definition language, data management services and application programming interfaces to provide functions to the users to perform work related to building information management.
How does schema theory apply to real versus virtual memories?
Flannery, Kathleen A; Walles, Rena
2003-04-01
Schemas are cognitive frameworks that guide memory, aid in the interpretation of events, and influence how we retrieve stored memories. The purpose of this study was to explore how schemas operate in a well-known environment and to examine whether or not schemas operate differently in real versus virtual environments. Twenty-four undergraduate students from a small liberal arts college in the northeast participated for course credit. Two identical offices (a real office and a virtual office) were created and filled with eight consistent and eight inconsistent items. Each participant explored either the real office or the virtual office for 20 seconds without any knowledge that their memory would be tested. After leaving the office, participants completed a recognition task and a short demographic questionnaire. Higher overall sensitivity and confidence in recognition memory scores were found for inconsistent compared to consistent items. Greater support for the consistency effect was observed in this study and interpreted in terms of the dynamic memory model and the schema-plus-correction model. The results also demonstrate that virtual reality paradigms may produce similar outcomes to the real world in terms of some memory processes, but additional design factors must be considered if the researcher's goal is to create equivalent paradigms.
Environmental modeling and recognition for an autonomous land vehicle
NASA Technical Reports Server (NTRS)
Lawton, D. T.; Levitt, T. S.; Mcconnell, C. C.; Nelson, P. C.
1987-01-01
An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, trees, etc. The architecture is organized around a set of databases for generic object models and perceptual structures, a temporary memory for the instantiation of object and relational hypotheses, and a long-term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. The researchers describe these particular components: the perceptual structure database, the grouping processes that operate over it, schemas, and the long-term terrain database. A processing example is given that matches predictions from the long-term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long-term terrain database.
Methods for eliciting, annotating, and analyzing databases for child speech development.
Beckman, Mary E; Plummer, Andrew R; Munson, Benjamin; Reidy, Patrick F
2017-09-01
Methods from automatic speech recognition (ASR), such as segmentation and forced alignment, have facilitated the rapid annotation and analysis of very large adult speech databases and databases of caregiver-infant interaction, enabling advances in speech science that were unimaginable just a few decades ago. This paper centers on two main problems that must be addressed in order to have analogous resources for developing and exploiting databases of young children's speech. The first problem is to understand and appreciate the differences between adult and child speech that cause ASR models developed for adult speech to fail when applied to child speech. These differences include the fact that children's vocal tracts are smaller than those of adult males and also changing rapidly in size and shape over the course of development, leading to between-talker variability across age groups that dwarfs the between-talker differences between adult men and women. Moreover, children do not achieve fully adult-like speech motor control until they are young adults, and their vocabularies and phonological proficiency are developing as well, leading to considerably more within-talker variability as well as more between-talker variability. The second problem then is to determine what annotation schemas and analysis techniques can most usefully capture relevant aspects of this variability. Indeed, standard acoustic characterizations applied to child speech reveal that adult-centered annotation schemas fail to capture phenomena such as the emergence of covert contrasts in children's developing phonological systems, while also revealing children's nonuniform progression toward community speech norms as they acquire the phonological systems of their native languages. Both problems point to the need for more basic research into the growth and development of the articulatory system (as well as of the lexicon and phonological system) that is oriented explicitly toward the construction of age-appropriate computational models.
Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences
Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi
2006-01-01
Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pilot project that illustrates how neuroscience databases can be expressed using the Web Ontology Language, which is a semantically-rich ontological language, as a common data representation language to facilitate complex cross-database queries. In this pilot project, an existing tool called “D2RQ” was used to translate two neuroscience databases (NeuronDB and CoCoDat) into OWL, and the resulting OWL ontologies were then merged. An OWL-based reasoner (Racer) was then used to provide a sophisticated query language (nRQL) to perform integrated queries across the two databases based on the merged ontology. This pilot project is one step toward exploring the use of semantic web technologies in the neurosciences. PMID:17238384
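A minimal sketch of the merge-then-query pattern the pilot describes, using rdflib in place of D2RQ's export and Racer's nRQL: the two OWL files stand in for the D2RQ-translated NeuronDB and CoCoDat ontologies, and the SPARQL query is a placeholder.

```python
# Sketch: merge two OWL ontologies by parsing both into one graph, then
# query across them with SPARQL. File names are hypothetical D2RQ exports.
from rdflib import Graph

merged = Graph()
merged.parse("neurondb.owl", format="xml")
merged.parse("cocodat.owl", format="xml")

results = merged.query("""
    SELECT ?subject ?property ?value WHERE {
        ?subject ?property ?value .
    } LIMIT 10
""")
for row in results:
    print(row.subject, row.property, row.value)
```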
IntegratedMap: a Web interface for integrating genetic map data.
Yang, Hongyu; Wang, Hongyu; Gingle, Alan R
2005-05-01
IntegratedMap is a Web application and database schema for storing and interactively displaying genetic map data. Its Web interface includes a menu for direct chromosome/linkage group selection, a search form for selection based on mapped object location and linkage group displays. An overview display provides convenient access to the full range of mapped and anchored object types with genetic locus details, such as numbers, types and names of mapped/anchored objects displayed in a compact scrollable list box that automatically updates based on selected map location and object type. Also, multilinkage group and localized map views are available along with links that can be configured for integration with other Web resources. IntegratedMap is implemented in C#/ASP.NET and the package, including a MySQL schema creation script, is available from http://cggc.agtec.uga.edu/Data/download.asp
OxfordGrid: a web interface for pairwise comparative map views.
Yang, Hongyu; Gingle, Alan R
2005-12-01
OxfordGrid is a web application and database schema for storing and interactively displaying genetic map data in a comparative, dot-plot fashion. Its display is composed of a matrix of cells, each representing a pairwise comparison of mapped probe data for two linkage groups or chromosomes. These are arranged along the axes, with one forming grid columns and the other grid rows, and the degree and pattern of synteny/colinearity between the two linkage groups is manifested in each cell's dot density and structure. A mouse click over the selected grid cell launches an image map-based display for the selected cell. Both individual and linear groups of mapped probes can be selected and displayed. Also, configurable links can be used to access other web resources for mapped probe information. OxfordGrid is implemented in C#/ASP.NET and the package, including MySQL schema creation scripts, is available at ftp://cggc.agtec.uga.edu/OxfordGrid/.
Neurobiology of Schemas and Schema-Mediated Memory.
Gilboa, Asaf; Marlatte, Hannah
2017-08-01
Schemas are superordinate knowledge structures that reflect abstracted commonalities across multiple experiences, exerting powerful influences over how events are perceived, interpreted, and remembered. Activated schema templates modulate early perceptual processing, as they get populated with specific informational instances (schema instantiation). Instantiated schemas, in turn, can enhance or distort mnemonic processing from the outset (at encoding), impact offline memory transformation and accelerate neocortical integration. Recent studies demonstrate distinctive neurobiological processes underlying schema-related learning. Interactions between the ventromedial prefrontal cortex (vmPFC), hippocampus, angular gyrus (AG), and unimodal associative cortices support context-relevant schema instantiation and schema mnemonic effects. The vmPFC and hippocampus may compete (as suggested by some models) or synchronize (as suggested by others) to optimize schema-related learning depending on the specific operationalization of schema memory. This highlights the need for more precise definitions of memory schemas. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gojani, Parvin Jamali; Masjedi, Mohsen; Khaleghipour, Shahnaz; Behzadi, Ehsan
2017-01-01
Background: This study aimed to compare the effects of schema-based and mindfulness-based therapies in psoriasis patients. Materials and Methods: This semi-experimental study with pre- and post-tests was conducted on psoriasis patients in the Dermatology Clinic of the Isfahan Alzahra Hospital, Iran, using convenience sampling in 2014. The patients had a low general health score. The experimental groups comprised two treatment groups, schema-based (n = 8) and mindfulness (n = 8). Both groups received eight 90-min therapy sessions once a week; they were compared with 8 patients in the control group. To evaluate the psoriasis patients' maladaptive schemas, the Young Schema Questionnaire was used. Data were analyzed through the covariance analysis test. Results: There was a significant difference between the schema-based therapy and mindfulness groups and the control group. There was also a significant difference between the schema-based therapy group and the control group on the defeated schema, dependence/incompetence schema, devotion schema, stubbornly criteria schema, merit schema, and restraint/inadequate self-discipline schema. Moreover, a significant difference existed between the maladaptive schemas of the mindfulness therapy group and the controls. There was a significant difference concerning the improvement of the psychopathologic symptoms between the mindfulness therapy group and the control group. Conclusions: This study showed similar effects of both the schema- and mindfulness-based therapies on the maladaptive schemas in improving the psoriasis patients with psychopathologic symptoms. PMID:28217649
Hybrid Schema Matching for Deep Web
NASA Astrophysics Data System (ADS)
Chen, Kerui; Zuo, Wanli; He, Fengling; Chen, Yongheng
Schema matching is the process of identifying semantic mappings, or correspondences, between two or more schemas. It is a first step and a critical part of data integration. For schema matching on the deep web, most research has been interested only in the query interface, while rarely paying attention to the abundant schema information contained in query result pages. This paper proposes a hybrid schema matching technique, which combines attributes that appear in the query interfaces and the query results of different data sources, and mines the matched schemas from both. Experimental results prove the effectiveness of this method in improving the accuracy of schema matching.
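A naive attribute matcher in the spirit of combining interface and result-page schemas; real systems use richer similarity measures, and the attribute lists below are invented for illustration.

```python
# Sketch: pair attributes from a query interface with attributes mined from
# result pages by string similarity. Threshold and attribute names are
# illustrative, not the paper's method.
from difflib import SequenceMatcher

def best_matches(attrs_a, attrs_b, threshold=0.6):
    """Pair each attribute in attrs_a with its most similar one in attrs_b."""
    matches = []
    for a in attrs_a:
        scored = [(SequenceMatcher(None, a.lower(), b.lower()).ratio(), b)
                  for b in attrs_b]
        score, b = max(scored)
        if score >= threshold:
            matches.append((a, b, round(score, 2)))
    return matches

interface_attrs = ["Title", "Author", "Publish Year"]          # from the query form
result_attrs = ["book_title", "authors", "year", "price"]      # mined from result pages
print(best_matches(interface_attrs, result_attrs))
```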
1980-02-01
Task Analysis Schema Based on Cognitive Style and Supplantation… Ausburn, F. B.; Oklahoma Univ., Norman, Coll. of Education. …(separately-perceived fragments) 6. Tasks requiring use of kinesthetic or tactile stimuli: a. Visual/haptic (preference for kinesthetic stimuli; ability to transform kinesthetic stimuli into visual images; ability to learn directly from tactile or kinesthetic impressions) b. Field independence/de…
Embodying a cognitive model in a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Lyons, Damian; Lonsdale, Deryle
2006-10-01
The ADAPT project is a collaboration of researchers in robotics, linguistics and artificial intelligence at three universities to create a cognitive architecture specifically designed to be embodied in a mobile robot. There are major respects in which existing cognitive architectures are inadequate for robot cognition. In particular, they lack support for true concurrency and for active perception. ADAPT addresses these deficiencies by modeling the world as a network of concurrent schemas, and modeling perception as problem solving. Schemas are represented using the RS (Robot Schemas) language, and are activated by spreading activation. RS provides a powerful language for distributed control of concurrent processes. Also, the formal semantics of RS provides the basis for the semantics of ADAPT's use of natural language. We have implemented the RS language in Soar, a mature cognitive architecture originally developed at CMU and used at a number of universities and companies. Soar's subgoaling and learning capabilities enable ADAPT to manage the complexity of its environment and to learn new schemas from experience. We describe the issues faced in developing an embodied cognitive architecture, and our implementation choices.
Mynodbcsv: Lightweight Zero-Config Database Solution for Handling Very Large CSV Files
Adaszewski, Stanisław
2014-01-01
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle. Lack of standardized ways of exploring different data layouts requires an effort each time to solve the problem from scratch. Possibility to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL) would offer expressiveness and user-friendliness. Comma-separated values (CSV) are one of the most common data storage formats. Despite its simplicity, with growing file size handling it becomes non-trivial. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if its horizontal dimension reaches thousands of columns. Most databases are optimized for handling large number of rows rather than columns, therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It's characterized by: “no copy” approach – data stay mostly in the CSV files; “zero configuration” – no need to specify database schema; written in C++, with boost [1], SQLite [2] and Qt [3], doesn't require installation and has very small size; query rewriting, dynamic creation of indices for appropriate columns and static data retrieval directly from CSV files ensure efficient plan execution; effortless support for millions of columns; due to per-value typing, using mixed text/numbers data is easy; very simple network protocol provides efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware along with educational videos on its website [4]. It doesn't need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results. PMID:25068261
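The "SQL over CSV" idea in miniature, using only the standard library. Note the contrast with Mynodbcsv itself, which avoids this copy step and reads the CSV in place; here the rows are loaded into an in-memory SQLite table purely for illustration, and the input file name is hypothetical.

```python
# Sketch: load a CSV into SQLite and query it with SQL. This is the
# conventional import approach that Mynodbcsv's "no copy" design avoids.
import csv
import sqlite3

con = sqlite3.connect(":memory:")

with open("measurements.csv", newline="") as f:  # hypothetical input file
    reader = csv.reader(f)
    header = next(reader)
    cols = ", ".join(f'"{name}"' for name in header)
    placeholders = ", ".join("?" for _ in header)
    con.execute(f"CREATE TABLE data ({cols})")
    con.executemany(f"INSERT INTO data VALUES ({placeholders})", reader)

for row in con.execute("SELECT COUNT(*) FROM data"):
    print(row)
```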
Wetzelaer, Pim; Farrell, Joan; Evers, Silvia M A A; Jacob, Gitta A; Lee, Christopher W; Brand, Odette; van Breukelen, Gerard; Fassbinder, Eva; Fretwell, Heather; Harper, R Patrick; Lavender, Anna; Lockwood, George; Malogiannis, Ioannis A; Schweiger, Ulrich; Startup, Helen; Stevenson, Teresa; Zarbock, Gerhard; Arntz, Arnoud
2014-11-18
Borderline personality disorder (BPD) is a severe and highly prevalent mental disorder. Schema therapy (ST) has been found effective in the treatment of BPD and is commonly delivered through an individual format. A group format (group schema therapy, GST) has also been developed. GST has been found to speed up and amplify the treatment effects found for individual ST. Delivery in a group format may lead to improved cost-effectiveness. An important question is how GST compares to treatment as usual (TAU) and what format for delivery of schema therapy (format A: intensive group therapy only, or format B: a combination of group and individual therapy) produces the best outcomes. An international, multicentre randomized controlled trial (RCT) will be conducted with a minimum of fourteen participating centres. Each centre will recruit multiple cohorts of at least sixteen patients. GST formats as well as the orders in which they are delivered to successive cohorts will be balanced. Within countries that contribute an uneven number of sites, the orders of GST formats will be balanced within a difference of one. The RCT is designed to include a minimum of 448 patients with BPD. The primary clinical outcome measure will be BPD severity. Secondary clinical outcome measures will include measures of BPD and general psychiatric symptoms, schemas and schema modes, social functioning and quality of life. Furthermore, an economic evaluation that consists of cost-effectiveness and cost-utility analyses will be performed using a societal perspective. Lastly, additional investigations will be carried out that include an assessment of the integrity of GST, a qualitative study on patients' and therapists' experiences with GST, and studies on variables that might influence the effectiveness of GST. This trial will compare GST to TAU for patients with BPD as well as two different formats for the delivery of GST. By combining an evaluation of clinical effectiveness, an economic evaluation and additional investigations, it will contribute to an evidence-based understanding of which treatment should be offered to patients with BPD from clinical, economic, and stakeholders' perspectives. Netherlands Trial Register NTR2392. Registered 25 June 2010.
Novel design solutions for fishing reel mechanisms
NASA Astrophysics Data System (ADS)
Lovasz, Erwin-Christian; Modler, Karl-Heinz; Neumann, Rudolf; Gruescu, Corina Mihaela; Perju, Dan; Ciupe, Valentin; Maniu, Inocentiu
2015-07-01
Currently, the reels on the market vary in the type of mechanism that achieves the winding and unwinding of the line. Designers aim to obtain a linear transmission function by means of a simple and small-sized mechanism. However, the present solutions are not satisfactory because of large deviations from linearity of the transmission function and the complexity of the mechanical schema. A novel solution for the reel spool mechanism is proposed, and its kinematic schema and synthesis method are described. The kinematic schema of the chosen mechanism is based on a noncircular gear in series with a scotch-yoke mechanism. The yoke is driven by a stud fixed on the driving noncircular gear. The drawbacks of other models regarding the effects occurring at the ends of the spool are eliminated by achieving an appropriate transmission function of the spool. The approximation of the linear function with curved end-arches, computed to ensure mathematical continuity, is very good. The experimental results on the mechanism model validate the theoretical approach. The developed mechanism solution has been patented as a reel spool mechanism.
BioXSD: the common data-exchange format for everyday bioinformatics web services.
Kalas, Matús; Puntervoll, Pål; Joseph, Alexandre; Bartaseviciūte, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge
2010-09-15
The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interfaces. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. BioXSD has been developed as a candidate standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source codes in common programming languages, an updated list of compatible web services and tools and a repository of feature requests from the community.
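A minimal sketch of validating a document against the published BioXSD XML Schema with lxml; the instance file name is hypothetical, and a local copy of the schema (downloadable from the URL in the abstract) is assumed.

```python
# Sketch: schema validation of a BioXSD instance document with lxml.
from lxml import etree

schema_doc = etree.parse("BioXSD-1.0.xsd")   # local copy from bioxsd.org
schema = etree.XMLSchema(schema_doc)

doc = etree.parse("sequence_record.xml")     # hypothetical BioXSD instance
if schema.validate(doc):
    print("document conforms to BioXSD")
else:
    print(schema.error_log)
```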
The new geographic information system in ETVA VI.PE.
NASA Astrophysics Data System (ADS)
Xagoraris, Zafiris; Soulis, George
2016-08-01
ETVA VI.PE. S.A. is a member of the Piraeus Bank Group of Companies and its activities include designing, developing, exploiting and managing Industrial Areas throughout Greece. Inside ETVA VI.PE.'s thirty-one Industrial Parks there are currently 2,500 manufacturing companies established, with 40,000 employees and € 2.5 billion of invested funds. In each of the industrial areas ETVA VI.PE. provides the companies with industrial lots of land (sites) with propitious building codes and complete infrastructure networks of water supply, sewerage, paved roads, power supply, communications, cleansing services, etc. The development of the Geographical Information System for ETVA VI.PE.'s Industrial Parks started at the beginning of 1992 and consists of three subsystems: Cadastre, which manages the information for the land acquisition of Industrial Areas; Street Layout - Sites, which manages the sites sold to manufacturing companies; and Networks, which manages the infrastructure networks (roads, water supply, sewerage, etc.). The mapping of each Industrial Park is made incorporating state-of-the-art photogrammetric, cartographic and surveying methods and techniques. Having passed through the phases of initial design (hybrid GIS) and system upgrade (integrated GIS solution with spatial database), the system is currently operating on a new upgrade (integrated GIS solution with spatial database) that includes redesigning and merging the system's database schemas, along with the creation of central security policies, and the development of a new web GIS application for advanced data entry, highly customisable and standard reports, and dynamic interactive maps. The new GIS brings the company to advanced levels of productivity and introduces a new era of decision making and business management.
Application of SQL database to the control system of MOIRCS
NASA Astrophysics Data System (ADS)
Yoshikawa, Tomohiro; Omata, Koji; Konishi, Masahiro; Ichikawa, Takashi; Suzuki, Ryuji; Tokoku, Chihiro; Uchimoto, Yuka Katsuno; Nishimura, Tetsuo
2006-06-01
MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru telescope. In order to perform observations of near-infrared imaging and spectroscopy with cold slit masks, MOIRCS contains many device components, which are distributed on an Ethernet LAN. Two PCs wired to the focal plane array electronics operate the two HAWAII2 detectors, and two further PCs are used for integrated control and for quick data reduction, respectively. Though most of the devices (e.g., filter and grism turrets, slit exchange mechanism for spectroscopy) are controlled via RS232C interfaces, they are accessible over TCP/IP using TCP/IP-to-RS232C converters. Moreover, other devices are also connected to the Ethernet LAN. This network-distributed structure provides flexibility of hardware configuration. We have constructed an integrated control system for such network-distributed hardware, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS also has a network-distributed software design, applying TCP/IP socket communication to interprocess communication. In order to support the communication between the device interfaces and the user interfaces, we defined three layers in T-LECS: an external layer for user interface applications, an internal layer for device interface applications, and a communication layer, which connects the two layers above. In the communication layer, we store the data of the system in an SQL database server: status data, FITS header data, and metadata such as device configuration data and FITS configuration data. We present our software system design and the database schema to manage observations of MOIRCS with Subaru.
The MAJORANA Parts Tracking Database
NASA Astrophysics Data System (ADS)
Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Cuesta, C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Esterline, J.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J. Diaz; Leviner, L. E.; Loach, J. C.; MacMullin, J.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Miller, M. L.; Mizouni, L.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Petersburg, R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Soin, A.; Suriano, A. M.; Tedeschi, D.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.; Zhitnikov, I.
2015-04-01
The MAJORANA DEMONSTRATOR is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The MAJORANA Parts Tracking Database is used to record the history of components used in the construction of the DEMONSTRATOR. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts, such as machining or cleaning, are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. A web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
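A minimal sketch of the kind of schema-free part record CouchDB makes natural, written through CouchDB's plain HTTP document API. The database name, document id, and fields are illustrative only; the actual Parts Tracking Database schema is richer, and authentication is omitted.

```python
# Sketch: PUT a part record with process and location history into a local
# CouchDB instance (assumes an existing "parts" database; names hypothetical).
import requests

part = {
    "part_type": "copper mount",
    "processes": [
        {"step": "machining", "date": "2013-05-02"},
        {"step": "acid etch", "date": "2013-05-09"},
    ],
    "locations": [
        {"site": "surface lab", "from": "2013-05-02", "to": "2013-06-01"},
    ],
}

resp = requests.put("http://localhost:5984/parts/copper-mount-0042", json=part)
print(resp.json())  # e.g. {"ok": true, "id": ..., "rev": ...}
```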
A Multiagent System for Dynamic Data Aggregation in Medical Research
Urovi, Visara; Barba, Imanol; Aberer, Karl; Schumacher, Michael Ignaz
2016-01-01
The collection of medical data for research purposes is a challenging and long-lasting process. In an effort to accelerate and facilitate this process we propose a new framework for dynamic aggregation of medical data from distributed sources. We use agent-based coordination between medical and research institutions. Our system employs principles of peer-to-peer network organization and coordination models to search over already constructed distributed databases and to identify the potential contributors when a new database has to be built. Our framework takes into account both the requirements of a research study and current data availability. This leads to better definition of database characteristics such as schema, content, and privacy parameters. We show that this approach enables a more efficient way to collect data for medical research. PMID:27975063
Güner, Olcay
2017-03-01
The Early Maladaptive Schema Questionnaires Set for Children and Adolescents (SQS) was developed to assess early maladaptive schemas in children between the ages of 10 and 16 in Turkey. The SQS consists of five questionnaires that represent five schema domains in Young's schema theory. Psychometric properties (n = 983) and normative values (n = 2,250) of SQS were investigated in children and adolescents between the ages of 10 and 16. Both exploratory and confirmatory factor analyses were performed. Results revealed 15 schema factors under five schema domains, with good fit indexes. A total of 14 schema factors were in line with Young's early maladaptive schemas. In addition to these factors, one new schema emerged: self-disapproval. Reliability analyses showed that SQS has high internal consistency and consistency over a 1-month interval. Correlations of SQS with the Adjective Check List (ACL), the Inventory of Parent and Peer Attachment (IPPA), the Symptom Assessment (SA-45) and the Young Schema Questionnaire (YSQ) were investigated to assess criterion validity, and the correlations revealed encouraging results. SQS significantly differentiated between children who have clinical diagnoses (n = 78) and children who have no diagnosis (n = 100). Finally, general normative values (n = 2,250) were determined for age groups, gender and age/gender groups. In conclusion, the early maladaptive schema questionnaires set for children and adolescents turned out to be a reliable and valid questionnaire with standard scores. The SQS is a psychometrically reliable and valid measure of early maladaptive schemas for children between the ages of 10 and 16. It consists of five schema domains that represent Young's schema domains, including 15 early maladaptive schemas and 97 items. Normative values for each schema were determined for age, gender and age/gender groups. Clinically, SQS presents valuable information about early maladaptive schemas during childhood and adolescence, before such schemas become more pervasive and persistent. Copyright © 2016 John Wiley & Sons, Ltd.
OFFL Models: Novel Schema for Dynamical Modeling of Biological Systems
2016-01-01
Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of “ODEs and formalized flow diagrams” as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations would otherwise have allowed, since the modeling framework itself manages that complexity on the modeler’s behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features. PMID:27270918
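Since the single translation rule named in the abstract (each flow subtracts its rate from its source species and adds it to its target) is easy to state in code, here is a minimal Python sketch of that rule; the SIR-style species names and rate constants are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: flows as (source, target, rate_function) rows, as in a
# relational table; the net rate of change per species is inflow minus outflow.
# The SIR-style rates below are illustrative, not OFFL's exact schema.

flows = [
    ("S", "I", lambda y: 0.3 * y["S"] * y["I"]),   # infection draws from S into I
    ("I", "R", lambda y: 0.1 * y["I"]),            # recovery draws from I into R
]

def derivatives(state):
    """Apply the single translation rule: each flow subtracts its rate
    from the source species and adds it to the target species."""
    d = {species: 0.0 for species in state}
    for source, target, rate in flows:
        r = rate(state)
        d[source] -= r
        d[target] += r
    return d

print(derivatives({"S": 0.99, "I": 0.01, "R": 0.0}))
```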
A narrative review of schemas and schema therapy outcomes in the eating disorders.
Pugh, Matthew
2015-07-01
Whilst cognitive-behavioural therapy has demonstrated efficacy in the treatment of eating disorders, therapy outcomes and current conceptualizations still remain inadequate. In light of these shortcomings there has been growing interest in the utility of schema therapy applied to eating pathology. The present article first provides a narrative review of empirical literature exploring schemas and schema processes in eating disorders. Secondly, it critically evaluates outcome studies assessing schema therapy applied to eating disorders. Current evidence lends support to schema-focused conceptualizations of eating pathology and confirms that eating disorders are characterised by pronounced maladaptive schemas. Treatment outcomes also indicate that schema therapy, the schema-mode approach, and associated techniques are promising interventions for complex eating disorders. Implications for clinical practice and future directions for research are discussed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mental Schemas Hamper Memory Storage of Goal-Irrelevant Information
Sweegers, C. C. G.; Coleman, G. A.; van Poppel, E. A. M.; Cox, R.; Talamini, L. M.
2015-01-01
Mental schemas exert top-down control on information processing, for instance by facilitating the storage of schema-related information. However, given capacity-limits and competition in neural network processing, schemas may additionally exert their effects by suppressing information with low momentary relevance. In particular, when existing schemas suffice to guide goal-directed behavior, this may actually reduce encoding of the redundant sensory input, in favor of gaining efficiency in task performance. The present experiment set out to test this schema-induced shallow encoding hypothesis. Our approach involved a memory task in which faces had to be coupled to homes. For half of the faces the responses could be guided by a pre-learned schema, for the other half of the faces such a schema was not available. Memory storage was compared between schema-congruent and schema-incongruent items. To characterize putative schema effects, memory was assessed both with regard to visual details and contextual aspects of each item. The depth of encoding was also assessed through an objective neural measure: the parietal old/new ERP effect. This ERP effect, observed between 500–800 ms post-stimulus onset, is thought to reflect the extent of recollection: the retrieval of a vivid memory, including various contextual details from the learning episode. We found that schema-congruency induced substantial impairments in item memory and even larger ones in context memory. Furthermore, the parietal old/new ERP effect indicated higher recollection for the schema-incongruent than the schema-congruent memories. The combined findings indicate that, when goals can be achieved using existing schemas, this can hinder the in-depth processing of novel input, impairing the formation of perceptually detailed and contextually rich memory traces. Taking into account both current and previous findings, we suggest that schemas can both positively and negatively bias the processing of sensory input. An important determinant in this matter is likely related to momentary goals, such that mental schemas facilitate memory processing of goal-relevant input, but suppress processing of goal-irrelevant information. Highlights – Schema-congruent information suffers from shallow encoding. – Schema congruency induces poor item and context memory. – The parietal old/new effect is less pronounced for schema-congruent items. – Schemas exert different influences on memory formation depending on current goals. PMID:26635582
A semantic data dictionary method for database schema integration in CIESIN
NASA Astrophysics Data System (ADS)
Hinds, N.; Huang, Y.; Ravishankar, C.
1993-08-01
CIESIN (Consortium for International Earth Science Information Network) is funded by NASA to investigate the technology necessary to integrate and facilitate the interdisciplinary use of Global Change information. A clear part of this mission includes providing a link between the various global change data sets, in particular the physical sciences and the human (social) sciences. The typical scientist using the CIESIN system will want to know how phenomena in an outside field affect his/her work. For example, a medical researcher might ask: how does air quality affect emphysema? This and many similar questions will require sophisticated semantic data integration. The researcher who raised the question may be familiar with medical data sets containing emphysema occurrences. But this same investigator may know little, if anything, about the existence or location of air-quality data. It is easy to envision a system which would allow that investigator to locate and perform a "join" on two data sets, one containing emphysema cases and the other containing air-quality levels. No such system exists today. One major obstacle to providing such a system is heterogeneity, which falls into two broad categories. "Database system" heterogeneity involves differences in data models and packages. "Data semantic" heterogeneity involves differences in terminology between disciplines, which translate into data semantic issues, and varying levels of data refinement, from raw to summary. Our work investigates a global data dictionary mechanism to facilitate a merged data service. Specifically, we propose using a semantic tree during schema definition to aid in locating and integrating heterogeneous databases.
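To make the envisioned cross-disciplinary "join" concrete, here is a minimal Python sketch over invented data: two heterogeneous data sets are linked through a shared semantic key, with the data-dictionary term mapping hard-coded for illustration.

```python
# Illustrative sketch of the envisioned cross-disciplinary "join": two
# heterogeneous data sets linked through a shared semantic key. All field
# names and values are invented for illustration.

emphysema_cases = [
    {"region": "Los Angeles", "cases_per_100k": 42},
    {"region": "Denver", "cases_per_100k": 23},
]
air_quality = [
    {"AREA_NAME": "Los Angeles", "pm10": 58.0},
    {"AREA_NAME": "Denver", "pm10": 31.5},
]

# A semantic data dictionary would record that "region" and "AREA_NAME"
# denote the same concept; here that mapping is simply hard-coded.
aq_by_region = {row["AREA_NAME"]: row for row in air_quality}

joined = [
    {**case, "pm10": aq_by_region[case["region"]]["pm10"]}
    for case in emphysema_cases
    if case["region"] in aq_by_region
]
print(joined)
```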
A new relational database structure and online interface for the HITRAN database
NASA Astrophysics Data System (ADS)
Hill, Christian; Gordon, Iouli E.; Rothman, Laurence S.; Tennyson, Jonathan
2013-11-01
A new format for the HITRAN database is proposed. By storing the line-transition data in a number of linked tables described by a relational database schema, it is possible to overcome the limitations of the existing format, which have become increasingly apparent over the last few years as new and more varied data are being used by radiative-transfer models. Although the database in the new format can be searched using the well-established Structured Query Language (SQL), a web service, HITRANonline, has been deployed to allow users to make the most common queries of the database using a graphical user interface in a web page. The advantages of the relational form of the database for ensuring data integrity and consistency are explored, and the compatibility of the online interface with the emerging standards of the Virtual Atomic and Molecular Data Centre (VAMDC) project is discussed. In particular, the ability to access HITRAN data using a standard query language from other websites, command-line tools and from within computer programs is described.
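As an illustration of the kind of relational query described, the following Python/sqlite3 sketch joins a hypothetical molecule table to a line-transition table; the table and column names are assumptions for the example, not HITRANonline's actual schema.

```python
# Hedged sketch of querying line-transition data stored in linked relational
# tables with SQL; table and column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE molecule (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE transition (
        id INTEGER PRIMARY KEY,
        molecule_id INTEGER REFERENCES molecule(id),
        wavenumber REAL,  -- cm-1
        intensity REAL
    );
    INSERT INTO molecule VALUES (1, 'H2O'), (2, 'CO2');
    INSERT INTO transition VALUES
        (1, 1, 1554.35, 2.1e-22),
        (2, 2, 2349.14, 3.5e-18);
""")

-- comment style above is SQL; below we run a typical query from Python:
""" A typical query: all transitions of one molecule in a wavenumber window. """
rows = con.execute("""
    SELECT m.name, t.wavenumber, t.intensity
    FROM transition t JOIN molecule m ON m.id = t.molecule_id
    WHERE m.name = ? AND t.wavenumber BETWEEN ? AND ?
""", ("CO2", 2000.0, 2500.0)).fetchall()
print(rows)
```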
Advancing the LSST Operations Simulator
NASA Astrophysics Data System (ADS)
Saha, Abhijit; Ridgway, S. T.; Cook, K. H.; Delgado, F.; Chandrasekharan, S.; Petry, C. E.; Operations Simulator Group
2013-01-01
The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions (including weather and seeing), as well as additional scheduled and unscheduled downtime. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history database are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. This poster reports recent work which has focussed on an architectural restructuring of the code that will allow us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator will be used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities, and assist with performance margin investigations of the LSST system.
Dunne, Ashley L; Gilbert, Flora; Lee, Stuart; Daffern, Michael
2018-05-01
Contemporary social-cognitive aggression theory and extant empirical research highlight the relationship between certain Early Maladaptive Schemas (EMSs) and aggression in offenders. To date, the related construct of schema modes, which presents a comprehensive and integrated schema unit, has received scant empirical attention. Furthermore, EMSs and schema modes have yet to be examined concurrently with respect to aggressive behavior. This study examined associations between EMSs, schema modes, and aggression in an offender sample. Two hundred and eight adult male prisoners completed self-report psychological tests measuring their histories of aggression, EMSs, and schema modes. Regression analyses revealed that EMSs were significantly associated with aggression but did not account for a unique portion of variance once the effects of schema modes were taken into account. Three schema modes, Enraged Child, Impulsive Child, and Bully and Attack, significantly predicted aggression. These findings support the proposition that schema modes marked by escalating states of anger, rage, and impulsivity characterize aggressive offenders. In this regard, we call attention to the need to include schema modes in contemporary social-cognitive aggression theories, and suggest that systematic assessment and treatment of schema modes has the potential to enhance outcomes with violent offenders. © 2018 Wiley Periodicals, Inc.
Joint Battlespace Infosphere: Information Management Within a C2 Enterprise
2005-06-01
using. In version 1.2, we support both MySQL and Oracle as underlying implementations where the XML metadata schema is mapped into relational tables in... Identity Servers, Role-Based Access Control, and Policy Representation – Databases: Oracle, MySQL, TigerLogic, Berkeley XML DB... Instrumentation Services... converted to SQL for execution. Invocations are then forwarded to the appropriate underlying IOR core components that have the responsibility of issuing
Solving Word Problems using Schemas: A Review of the Literature
Powell, Sarah R.
2011-01-01
Solving word problems is a difficult task for students at-risk for or with learning disabilities (LD). One instructional approach that has emerged as a valid method for helping students at-risk for or with LD to become more proficient at word-problem solving is using schemas. A schema is a framework for solving a problem. With a schema, students are taught to recognize problems as falling within word-problem types and to apply a problem solution method that matches that problem type. This review highlights two schema approaches for 2nd- and 3rd-grade students at-risk for or with LD: schema-based instruction and schema-broadening instruction. A total of 12 schema studies were reviewed and synthesized. Both types of schema approaches enhanced the word-problem skill of students at-risk for or with LD. Based on the review, suggestions are provided for incorporating word-problem instruction using schemas. PMID:21643477
A structured interface to the object-oriented genomics unified schema for XML-formatted data.
Clark, Terry; Jurek, Josef; Kettler, Gregory; Preuss, Daphne
2005-01-01
Data management systems are fast becoming required components in many biology laboratories as the role of computer-based information grows. Although the need for data management systems is on the rise, their inherent complexities can deter the full and routine use of their computational capabilities. The significant undertaking to implement a capable production system can be reduced in part by adapting an established data management system. In such a way, we are leveraging the Genomics Unified Schema (GUS) developed at the Computational Biology and Informatics Laboratory at the University of Pennsylvania as a foundation for managing and analysing DNA sequence data in centromere research projects around Arabidopsis thaliana and related species. Because GUS provides a core schema that includes support for genome sequences, mRNA and its expression, and annotated chromosomes, it is ideal for synthesising a variety of parameters to analyse these repetitive and highly dynamic portions of the genome. Despite this, production-strength data management frameworks are complex, requiring dedicated efforts to adapt and maintain. The work reported in this article addresses one component of such an effort, namely the pivotal task of marshalling data from various sources into GUS. In order to harness GUS for our project, and motivated by efficiency needs, we developed a structured framework for transferring data into GUS from outside sources. This technology is embodied in a GUS object-layer processor, XMLGUS. XMLGUS facilitates incorporating data into GUS by (i) formulating an XML interface that includes relational database key constraint definitions, (ii) regularising traversal through that XML, (iii) realising automatic processing of the XML with database key constraints and (iv) allowing for special processing of input data within the framework for automated processing. The application of XMLGUS to production pipeline processing for a sequencing project and inputting the Arabidopsis genome into GUS is discussed. XMLGUS is available from the Flora website (http://flora.ittc.ku.edu/).
Implications of Evidence-Centered Design for Educational Testing
ERIC Educational Resources Information Center
Mislevy, Robert J.; Haertel, Geneva D.
2006-01-01
Evidence-centered assessment design (ECD) provides language, concepts, and knowledge representations for designing and delivering educational assessments, all organized around the evidentiary argument an assessment is meant to embody. This article describes ECD in terms of layers for analyzing domains, laying out arguments, creating schemas for…
Design vs. Content: A Study of Adolescent Girls' Website Design Preferences
ERIC Educational Resources Information Center
Agosto, Denise E.
2004-01-01
This study considered the utility of gender schema theory in examining girls' website design preferences. It built on a previous study which identified eight website evaluation criteria related to biological sex: collaboration, social connectivity, flexibility, motility, contextuality, personal identification, inclusion, and graphic/multimedia…
Backward Design: Targeting Depth of Understanding for All Learners
ERIC Educational Resources Information Center
Childre, Amy; Sands, Jennifer R.; Pope, Saundra Tanner
2009-01-01
Curriculum design is at the center of developing student ability to construct understanding. Without appropriately designed curriculum, instruction can be ineffective at scaffolding understanding. Often students with disabilities need more explicit instruction or guidance in applying their schema to new information. Thus, instruction must not only…
Information, intelligence, and interface: the pillars of a successful medical information system.
Hadzikadic, M; Harrington, A L; Bohren, B F
1995-01-01
This paper addresses three key issues facing developers of clinical and/or research medical information systems. 1. INFORMATION. The basic function of every database is to store information about the phenomenon under investigation. There are many ways to organize information in a computer; however, only a few will prove optimal for any real-life situation. Computer Science theory has developed several approaches to database structure, with relational theory leading in popularity among end users [8]. Strict conformance to the rules of relational database design rewards the user with consistent data and flexible access to that data. A properly defined database structure minimizes redundancy, i.e., multiple storage of the same information. Redundancy introduces problems when updating a database, since the repeated value has to be updated in all locations--missing even a single value corrupts the whole database, and incorrect reports are produced [8]. To avoid such problems, relational theory offers a formal mechanism for determining the number and content of data files. These files not only preserve the conceptual schema of the application domain, but allow a virtually unlimited number of reports to be efficiently generated. 2. INTELLIGENCE. Flexible access enables the user to harvest additional value from collected data. This value is usually gained via reports defined at the time of database design. Although these reports are indispensable, with proper tools more information can be extracted from the database. For example, machine learning, a sub-discipline of artificial intelligence, has been successfully used to extract knowledge from databases of varying size by uncovering correlations among fields and records [1-6, 9]. This knowledge, represented in the form of decision trees, production rules, and probabilistic networks, clearly adds a flavor of intelligence to the data collection and manipulation system. 3. INTERFACE. Despite the obvious importance of collecting data and extracting knowledge, current systems often impede these processes. Problems stem from a lack of user friendliness and functionality. To overcome these problems, several features of a successful human-computer interface have been identified, including the following "golden" rules of dialog design [7]: consistency, use of shortcuts for frequent users, informative feedback, organized sequence of actions, simple error handling, easy reversal of actions, user-oriented focus of control, and reduced short-term memory load. To this list of rules, we added visual representation of both data and query results, since our experience has demonstrated that users react much more positively to visual than to textual information. In our design of the Orthopaedic Trauma Registry--under development at the Carolinas Medical Center--we have made every effort to follow the above rules. The results were rewarding: the end users want not only to use the product, but also to participate in its development.
The STP (Solar-Terrestrial Physics) Semantic Web based on the RSS1.0 and the RDF
NASA Astrophysics Data System (ADS)
Kubo, T.; Murata, K. T.; Kimura, E.; Ishikura, S.; Shinohara, I.; Kasaba, Y.; Watari, S.; Matsuoka, D.
2006-12-01
In Solar-Terrestrial Physics (STP), it has been pointed out that the circulation and utilization of observation data among researchers are insufficient. To achieve interdisciplinary research, we need to overcome these circulation and utilization problems. Against this background, the authors' group has developed a world-wide database that manages meta-data of satellite and ground-based observation data files. Until now, retrieving meta-data from the observation data and registering them in the database have been carried out by hand. Our goal is to establish the STP Semantic Web. The Semantic Web provides a common framework that allows a variety of data to be shared and reused across applications, enterprises, and communities. We also expect that secondary information related to observations, such as event information and associated news, will be shared over the networks. The most fundamental issue in establishing this is who generates, manages and provides meta-data in the Semantic Web. We developed an automatic meta-data collection system for the observation data using RSS (RDF Site Summary) 1.0. RSS 1.0 is one of the XML-based markup languages based on the RDF (Resource Description Framework), designed for syndicating news and the contents of news-like sites. RSS 1.0 is used to describe STP meta-data, such as data file name, file server address and observation date. To describe STP meta-data beyond the RSS 1.0 vocabulary, we defined original vocabularies for STP resources using RDF Schema. The RDF describes technical terms of the STP along with the Dublin Core Metadata Element Set, the standard for cross-domain information resource descriptions. Researchers' information on the STP is described with FOAF, an RDF/XML vocabulary for creating machine-readable metadata describing people. Using RSS 1.0 as the meta-data distribution method, the workflow from retrieving meta-data to registering them in the database is automated. This technique has been applied to several database systems, such as the DARTS database system and the NICT Space Weather Report Service. DARTS is a science database managed by ISAS/JAXA in Japan. We succeeded in automatically generating and collecting meta-data for CDF (Common Data Format) data, such as Reimei satellite data, provided by DARTS. We also created an RDF service for space weather reports and real-time global MHD simulation 3D data provided by NICT. Our Semantic Web system works as follows: the RSS 1.0 documents generated on the data sites (ISAS and NICT) are automatically collected by a meta-data collection agent. The RDF documents are registered, and the agent extracts meta-data and stores them in Sesame, an open-source RDF database with support for RDF Schema inferencing and querying. The RDF database provides advanced retrieval that takes properties and relations into account. Finally, the STP Semantic Web provides automated processing and high-level search not only over observation data but also over space weather news, physical events, technical terms and researcher information related to the STP.
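A minimal sketch of an RSS 1.0 (RDF) item carrying the kind of observation-file meta-data described (file name, server address, observation date), built with Python's standard library; the file URL and values are placeholders, not real DARTS or NICT resources.

```python
# Minimal RSS 1.0 (RDF) item for observation-file meta-data; element names
# beyond the core RSS 1.0 / Dublin Core vocabularies are placeholders.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
RSS = "http://purl.org/rss/1.0/"
DC = "http://purl.org/dc/elements/1.1/"

root = ET.Element(f"{{{RDF}}}RDF")
item = ET.SubElement(root, f"{{{RSS}}}item",
                     {f"{{{RDF}}}about": "ftp://example.org/data/file.cdf"})
ET.SubElement(item, f"{{{RSS}}}title").text = "file.cdf"   # data file name
ET.SubElement(item, f"{{{RSS}}}link").text = "ftp://example.org/data/file.cdf"
ET.SubElement(item, f"{{{DC}}}date").text = "2006-12-01"   # observation date

print(ET.tostring(root, encoding="unicode"))
```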
A Magnetic Petrology Database for Satellite Magnetic Anomaly Interpretations
NASA Astrophysics Data System (ADS)
Nazarova, K.; Wasilewski, P.; Didenko, A.; Genshaft, Y.; Pashkevich, I.
2002-05-01
A Magnetic Petrology Database (MPDB) is now being compiled at NASA/Goddard Space Flight Center in cooperation with Russian and Ukrainian institutions. The purpose of this database is to provide the geomagnetic community with a comprehensive and user-friendly method of accessing magnetic petrology data via the Internet for more realistic interpretation of satellite magnetic anomalies. Magnetic petrology data have been accumulated at NASA/Goddard Space Flight Center, the United Institute of Physics of the Earth (Russia) and the Institute of Geophysics (Ukraine) over several decades, and our archives now hold many thousands of records. The MPDB has been, and continues to be, in great demand, especially since the recent launch into near-Earth orbit of the mini-constellation of three satellites - Oersted (1999), Champ (2000), and SAC-C (2000) - which will provide lithospheric magnetic maps with better spatial and amplitude resolution (about 1 nT). The MPDB is focused on lower crustal and upper mantle rocks and will include data on mantle xenoliths, serpentinized ultramafic rocks, granulites, iron quartzites and rocks from Archean-Proterozoic metamorphic sequences from around the world. A substantial amount of data comes from the area of the unique Kursk Magnetic Anomaly and the Kola Deep Borehole (which recovered 12 km of continental crust). A prototype MPDB can be found on the Geodynamics Branch web server of Goddard Space Flight Center at http://core2.gsfc.nasa.gov/terr_mag/magnpetr.html. The MPDB employs a searchable relational design and consists of 7 interrelated tables; the schema of the database is shown at http://core2.gsfc.nasa.gov/terr_mag/doc.html. The MySQL database server was utilized to implement the MPDB, and SQL (Structured Query Language) is used to query the database. To present the results of queries on the Web and for Web programming we utilized the PHP scripting language and CGI scripts. The prototype MPDB is designed to search the database by major satellite magnetic anomaly, tectonic structure, geographical location, rock type, magnetic properties, chemistry and reference; see http://core2.gsfc.nasa.gov/terr_mag/query1.html. The output of the database is an HTML-structured table, a text file, or a downloadable file. This database will be very useful for studies of lithospheric satellite magnetic anomalies on the Earth and other terrestrial planets.
Jalali, Mohammad Reza; Zargar, Mohammad; Salavati, Mojgan; Kakavand, Ali Reza
2011-01-01
The aim of this study was to examine differences in early maladaptive schemas and parenting origins between opioid abusers and non-opioid abusers. The early maladaptive schemas and parenting origins were compared in 56 opioid abusers and 56 non-opioid abusers. Schemas were assessed by the Young Schema Questionnaire, 3rd edition (short form), and parenting origins were assessed by the Young Parenting Inventory. Data were analyzed by multivariate analysis of variance (MANOVA). The analysis showed that the means for schemas differed between opioid abusers and non-opioid abusers. Chi-square tests showed that parenting origins were significantly associated with their related schemas. Early maladaptive schemas and parenting origins were more pronounced in opioid abusers than in non-opioid abusers, and parenting origins were related to their corresponding schemas.
Jalali, Farzad; Hasani, Alireza; Hashemi, Seyedeh Fatemeh; Kimiaei, Seyed Ali; Babaei, Ali
2018-06-01
Depression is one of the most common mental disorders in prisons. People living with HIV are more likely to develop psychological difficulties than the general population. This study aims to determine the efficacy of cognitive group therapy based on a schema-focused approach in reducing depression in prisoners living with HIV. The design of this study was between-groups (or "independent measures"), conducted with pretest, posttest, and a waiting-list control group. The research population comprised all prisoners living with HIV in a men's prison in Iran. Based on voluntary participation, screening, and inclusion criteria, 42 prisoners living with HIV took part in this study. They were randomly assigned to an experimental group (21 prisoners) and a waiting-list control group (21 prisoners). The experimental group received 11 sessions of schema-focused cognitive group therapy, while the waiting-list control group received the treatment after the completion of the study. The groups were evaluated in terms of depression, and ANCOVA models were employed to test the study hypotheses. Collated results indicated that depression was reduced among prisoners in the experimental group. Schema therapy (ST) could thus reduce depression among prisoners living with HIV/AIDS.
Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver
2008-06-01
The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).
ApiEST-DB: analyzing clustered EST data of the apicomplexan parasites.
Li, Li; Crabtree, Jonathan; Fischer, Steve; Pinney, Deborah; Stoeckert, Christian J; Sibley, L David; Roos, David S
2004-01-01
ApiEST-DB (http://www.cbil.upenn.edu/paradbs-servlet/) provides integrated access to publicly available EST data from protozoan parasites in the phylum Apicomplexa. The database currently incorporates a total of nearly 100,000 ESTs from several parasite species of clinical and/or veterinary interest, including Eimeria tenella, Neospora caninum, Plasmodium falciparum, Sarcocystis neurona and Toxoplasma gondii. To facilitate analysis of these data, EST sequences were clustered and assembled to form consensus sequences for each organism, and these assemblies were then subjected to automated annotation via similarity searches against protein and domain databases. The underlying relational database infrastructure, Genomics Unified Schema (GUS), enables complex biologically based queries, facilitating validation of gene models, identification of alternative splicing, detection of single nucleotide polymorphisms, identification of stage-specific genes and recognition of phylogenetically conserved and phylogenetically restricted sequences.
The Majorana Parts Tracking Database
Abgrall, N.; Aguayo, E.; Avignone, F. T.; ...
2015-01-16
The Majorana Demonstrator is an ultra-low background physics experiment searching for the neutrinoless double beta decay of 76Ge. The Majorana Parts Tracking Database is used to record the history of components used in the construction of the Demonstrator. The tracking implementation takes a novel approach based on the schema-free database technology CouchDB. Transportation, storage, and processes undergone by parts such as machining or cleaning are linked to part records. Tracking parts provides a great logistics benefit and an important quality assurance reference during construction. In addition, the location history of parts provides an estimate of their exposure to cosmic radiation. In summary, a web application for data entry and a radiation exposure calculator have been developed as tools for achieving the extreme radio-purity required for this rare decay search.
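A minimal sketch of how a schema-free part record might look as a CouchDB-style JSON document, together with a surface-exposure estimate derived from its location history; the field names, event data and the two-location model are illustrative assumptions, not the actual Majorana document layout.

```python
# Sketch of a schema-free part record as it might be stored in CouchDB
# (a JSON document), plus a cosmic-ray exposure proxy computed from the
# location history; all field names and values are illustrative.
from datetime import date

part = {
    "_id": "part-000123",
    "type": "copper_shield_plate",
    "history": [
        {"event": "machining", "date": "2013-02-01", "location": "surface"},
        {"event": "storage", "date": "2013-02-10", "location": "underground"},
    ],
}

def surface_days(record, today):
    """Sum the days spent at surface locations between consecutive events."""
    days = 0
    hist = record["history"]
    for cur, nxt in zip(hist, hist[1:] + [{"date": today.isoformat()}]):
        if cur["location"] == "surface":
            start = date.fromisoformat(cur["date"])
            end = date.fromisoformat(nxt["date"])
            days += (end - start).days
    return days

print(surface_days(part, date(2013, 3, 1)))  # 9 days above ground
```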
SoyFN: a knowledge database of soybean functional networks.
Xu, Yungang; Guo, Maozu; Liu, Xiaoyan; Wang, Chunyu; Liu, Yang
2014-01-01
Many databases for soybean genomic analysis have been built and made publicly available, but few of them contain knowledge specifically targeting the omics-level gene-gene, gene-microRNA (miRNA) and miRNA-miRNA interactions. Here, we present SoyFN, a knowledge database of soybean functional gene networks and miRNA functional networks. SoyFN provides user-friendly interfaces to retrieve, visualize, analyze and download the functional networks of soybean genes and miRNAs. In addition, it incorporates much information about KEGG pathways, gene ontology annotations and 3'-UTR sequences as well as many useful tools including SoySearch, ID mapping, Genome Browser, eFP Browser and promoter motif scan. SoyFN is a schema-free database that can be accessed as a Web service from any modern programming language using a simple Hypertext Transfer Protocol call. The Web site is implemented in Java, JavaScript, PHP, HTML and Apache, with all major browsers supported. We anticipate that this database will be useful for members of research communities both in soybean experimental science and bioinformatics. Database URL: http://nclab.hit.edu.cn/SoyFN.
Early maladaptive schemas in adult patients with attention deficit hyperactivity disorder.
Philipsen, Alexandra; Lam, Alexandra P; Breit, Sigrid; Lücke, Caroline; Müller, Helge H; Matthies, Swantje
2017-06-01
The main purpose of this study was to examine whether adult patients with attention deficit hyperactivity disorder (ADHD) demonstrate sets of dysfunctional cognitive beliefs and behavioural tendencies according to Jeffrey Young's schema-focused therapy model. Sets of dysfunctional beliefs (maladaptive schemas) were assessed with the Young Schema Questionnaire (YSQ-S2) in 78 adult ADHD patients and 80 control subjects. Patients with ADHD scored significantly higher than the control group on almost all maladaptive schemas. The 'Failure', 'Defectiveness/Shame', 'Subjugation' and 'Emotional Deprivation' schemas were most pronounced in adult ADHD patients, while only 'Vulnerability to Harm or Illness' did not differ between the two groups. The schemas which were most pronounced in adult patients with ADHD correspond well with their learning histories and core symptoms. By demonstrating the existence of early maladaptive schemas in adults suffering from ADHD, this study suggests that schema theory may usefully be applied to adult ADHD therapy.
Securely and Flexibly Sharing a Biomedical Data Management System
Wang, Fusheng; Hussels, Phillip; Liu, Peiya
2011-01-01
Biomedical database systems need not only to address the issues of managing complex data, but also to provide data security and access control. These include not only system-level security, but also instance-level access control, such as access to documents, schemas, or aggregations of information. The latter is becoming more important as multiple users can share a single scientific data management system to conduct their research, while data have to be protected before they are published or IP-protected. This problem is challenging because users’ needs for data security vary dramatically from one application to another, in terms of whom to share with, which resources are shared, and at what access level. We developed a comprehensive data access framework for the biomedical data management system SciPort. SciPort provides fine-grained, multi-level, space-based access control of resources at not only the object level (documents and schemas) but also the space level (sets of resources aggregated hierarchically). Furthermore, to simplify the management of users and privileges, a customizable role-based user model was developed. The access control is implemented efficiently by integrating access privileges into the backend XML database, so efficient queries are supported. The secure access approach we take makes it possible for multiple users to share the same biomedical data management system with flexible access management and high data security. PMID:21625285
Schema representation in patients with ventromedial PFC lesions.
Ghosh, Vanessa E; Moscovitch, Morris; Melo Colella, Brenda; Gilboa, Asaf
2014-09-03
Human neuroimaging and animal studies have recently implicated the ventromedial prefrontal cortex (vmPFC) in memory schema, particularly in facilitating new encoding by existing schemas. In humans, the most conspicuous memory disorder following vmPFC damage is confabulation; strategic retrieval models suggest that aberrant schema activation or reinstatement plays a role in confabulation. This raises the possibility that beyond its role in schema-supported memory encoding, the vmPFC is also implicated in schema reinstatement itself. If that is the case, vmPFC lesions should lead to impaired schema-based operations, even on tasks that do not involve memory acquisition. To test this prediction, ten patients with vmPFC damage, four with present or prior confabulation, and a group of twelve matched healthy controls made speeded yes/no decisions as to whether words were closely related to a schema (a visit to the doctor). Ten minutes later, they repeated the task for a new schema (going to bed) with some words related to the first schema included as lures. Last, they rated the degree to which stimuli were related to the second schema. All four vmPFC patients with present or prior confabulation were impaired in rejecting lures and in classifying stimulus belongingness to the schema, even when they were not lures. Nonconfabulating patients performed comparably to healthy adults with high accuracy, comparable reaction times, and similar ratings. These results show for the first time that damage to the human vmPFC, when associated with confabulation, leads to deficient schema reinstatement, which is likely a prerequisite for schema-mediated memory integration. Copyright © 2014 the authors.
Characterizing Thematized Derivative Schema by the Underlying Emergent Structures
ERIC Educational Resources Information Center
Garcia, Mercedes; Llinares, Salvador; Sanchez-Matamoros, Gloria
2011-01-01
This paper reports on different underlying structures of the derivative schema of three undergraduate students that were considered to be at the trans level of development of the derivative schema (action-process-object-schema). The derivative schema is characterized in terms of the students' ability to explicitly transfer the relationship between…
SchemaOnRead: A Package for Schema-on-Read in R
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, Michael J.
Schema-on-read is an agile approach to data storage and retrieval that defers investments in data organization until production queries need to be run by working with data directly in native form. Schema-on-read functions have been implemented in a wide range of analytical systems, most notably Hadoop. SchemaOnRead is a CRAN package that uses R’s flexible data representations to provide transparent and convenient support for the schema-on-read paradigm in R. The schema-on-read tools within the package include a single function call that recursively reads folders with text, comma separated value, raster image, R data, HDF5, NetCDF, spreadsheet, Weka, Epi Info, Pajek network, R network, HTML, SPSS, Systat, and Stata files. The provided tools can be used as-is or easily adapted to implement customized schema-on-read tool chains in R. This paper’s contribution is that it introduces and describes SchemaOnRead, the first R package specifically focused on providing explicit schema-on-read support in R.
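SchemaOnRead itself is an R package; as a language-neutral illustration of the schema-on-read idea, here is a small Python analogue that defers format decisions to read time and recurses through folders, handling only a few common formats.

```python
# A Python analogue of schema-on-read: infer how to read each file from its
# extension at read time, recursing through folders. Only a few common
# formats are sketched; unknown formats are left as paths.
import csv
import json
from pathlib import Path

READERS = {
    ".csv": lambda p: list(csv.DictReader(p.open())),
    ".json": lambda p: json.load(p.open()),
    ".txt": lambda p: p.read_text(),
}

def schema_on_read(path):
    """Recursively read a folder tree, deferring format decisions to read time."""
    path = Path(path)
    if path.is_dir():
        return {child.name: schema_on_read(child) for child in path.iterdir()}
    reader = READERS.get(path.suffix.lower())
    return reader(path) if reader else path

# data = schema_on_read("experiment_results/")  # hypothetical folder
```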
Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases.
Sanderson, Lacey-Anne; Ficklin, Stephen P; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A; Bett, Kirstin E; Main, Dorrie
2013-01-01
Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including 'Feature Map', 'Genetic', 'Publication', 'Project', 'Contact' and the 'Natural Diversity' modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. DATABASE URL: http://tripal.info/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poliakov, Alexander; Couronne, Olivier
2002-11-04
Aligning large vertebrate genomes that are structurally complex poses a variety of problems not encountered on smaller scales. Such genomes are rich in repetitive elements and contain multiple segmental duplications, which increases the difficulty of identifying true orthologous DNA segments in alignments. The sizes of the sequences make many alignment algorithms designed for comparing single proteins extremely inefficient when processing large genomic intervals. We integrated both local and global alignment tools and developed a suite of programs for automatically aligning large vertebrate genomes and identifying conserved non-coding regions in the alignments. Our method uses the BLAT local alignment program to find anchors on the base genome that identify regions of possible homology for a query sequence. These regions are post-processed to find the best candidates, which are then globally aligned using the AVID global alignment program. In the last step, conserved non-coding segments are identified using VISTA. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90% of known coding exons in the human genome. The GenomeVISTA software is a suite of Perl programs built on a MySQL database platform. The scheduler gets control data from the database, builds a queue of jobs, and dispatches them to a PC cluster for execution. The main program, running on each node of the cluster, processes individual sequences. A Perl library acts as an interface between the database and the above programs. The use of a separate library allows the programs to function independently of the database schema. The library also improves on the standard Perl MySQL database interface package by providing auto-reconnect functionality and improved error handling.
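The anchor-then-align control flow described above can be sketched as follows; all three steps are stub functions standing in for BLAT, AVID and VISTA and return placeholder values, so this is only an outline of the strategy, not the real pipeline.

```python
# Control-flow sketch of the anchor-then-align strategy: local hits locate
# candidate regions, the best candidates are globally aligned, and conserved
# segments are then scored. Every function body is a stub placeholder.

def local_hits(query, base_genome):
    """Stand-in for BLAT: return (start, end, score) anchor candidates."""
    return [(1_000, 25_000, 180), (4_000_000, 4_030_000, 95)]

def global_align(query, region):
    """Stand-in for AVID: globally align the query against one region."""
    return {"region": region, "alignment": "..."}

def conserved_segments(alignment, min_score=100):
    """Stand-in for VISTA-style identification of conserved non-coding regions."""
    return []

def align_query(query, base_genome, min_anchor_score=100):
    anchors = local_hits(query, base_genome)
    best = [a for a in anchors if a[2] >= min_anchor_score]  # post-process candidates
    return [conserved_segments(global_align(query, a)) for a in best]
```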
Pilling, Valerie K; Brannon, Laura A
2007-01-01
Health communication appeals were utilized through a Web site simulation to evaluate the potential effectiveness of 3 intervention approaches to promote responsible drinking among college students. Within the Web site simulation, participants were exposed to a persuasive message designed to represent either the generalized social norms advertising approach (based on others' behavior), the personalized behavioral feedback approach (tailored to the individual's behavior), or the schema-based approach (tailored to the individual's self-schema, or personality). A control group was exposed to a message that was designed to be neutral (it was designed to discourage heavy drinking, but it did not represent any of the previously mentioned approaches). It was hypothesized that the more personalized the message was to the individual, the more favorable college students' attitudes would be toward the responsible drinking message. Participants receiving the more personalized messages did report more favorable attitudes toward the responsible drinking message.
Development of an Integrated Hydrologic Modeling System for Rainfall-Runoff Simulation
NASA Astrophysics Data System (ADS)
Lu, B.; Piasecki, M.
2008-12-01
This paper presents the development of an integrated hydrologic model that provides functionality for digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis. The proposed system is intended to work as a back end to the CUAHSI HIS cyberinfrastructure developments. As a first step in developing this system, the physics-based distributed hydrologic model PIHM (Penn State Integrated Hydrologic Model) is wrapped into the OpenMI (Open Modeling Interface and Environment) environment so as to interact seamlessly with OpenMI-compliant meteorological models. The graphical user interface is being developed from the open GIS application MapWindow, which permits functionality expansion through the addition of plug-ins. Modules set up through the GUI workboard include those for retrieving meteorological data from existing databases or meteorological prediction models, obtaining geospatial data from the output of digital watershed processing, and importing initial and boundary conditions. They are connected to the OpenMI-compliant PIHM to simulate rainfall-runoff processes, and a further module automatically displays output after the simulation. Online databases are accessed through the WaterOneFlow web services, and the retrieved data are either stored in an observation database (OD) following the schema of the Observation Data Model (ODM), in the case of time-series support, or in a grid-based storage facility, which may be a format like netCDF or a grid-based database schema. Specific development steps include the creation of a bridge to overcome the interoperability issue between PIHM and the ODM, as well as the embedding of TauDEM (Terrain Analysis Using Digital Elevation Models) into the model. This module is responsible for developing watershed and stream networks using digital elevation models. Visualizing and editing geospatial data is achieved by the use of MapWinGIS, an ActiveX control developed by the MapWindow team. After application to a practical watershed, the performance of the model can be tested by the post-event analysis module.
A Semantic Analysis of XML Schema Matching for B2B Systems Integration
ERIC Educational Resources Information Center
Kim, Jaewook
2011-01-01
One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…
StatsDB: platform-agnostic storage and understanding of next generation sequencing run metrics
Ramirez-Gonzalez, Ricardo H.; Leggett, Richard M.; Waite, Darren; Thanki, Anil; Drou, Nizar; Caccamo, Mario; Davey, Robert
2014-01-01
Modern sequencing platforms generate enormous quantities of data in ever-decreasing amounts of time. Additionally, techniques such as multiplex sequencing allow one run to contain hundreds of different samples. With such data comes a significant challenge to understand its quality and to understand how the quality and yield are changing across instruments and over time. As well as the desire to understand historical data, sequencing centres often have a duty to provide clear summaries of individual run performance to collaborators or customers. We present StatsDB, an open-source software package for storage and analysis of next generation sequencing run metrics. The system has been designed for incorporation into a primary analysis pipeline, either at the programmatic level or via integration into existing user interfaces. Statistics are stored in an SQL database and APIs provide the ability to store and access the data while abstracting the underlying database design. This abstraction allows simpler, wider querying across multiple fields than is possible by the manual steps and calculation required to dissect individual reports, e.g. "provide metrics about nucleotide bias in libraries using adaptor barcode X, across all runs on sequencer A, within the last month". The software is supplied with modules for storage of statistics from FastQC, a commonly used tool for analysis of sequence reads, but the open nature of the database schema means it can be easily adapted to other tools. Currently at The Genome Analysis Centre (TGAC), reports are accessed through our LIMS system or through a standalone GUI tool, but the API and supplied examples make it easy to develop custom reports and to interface with other packages. PMID:24627795
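A hedged sketch of the quoted cross-run query against a simplified two-table layout in Python/sqlite3; StatsDB's real schema and column names differ, and the cutoff date is fixed for the demo.

```python
# Hedged sketch of the kind of cross-run query quoted above, against a
# simplified two-table layout; StatsDB's actual schema differs.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE run (id INTEGER PRIMARY KEY, sequencer TEXT, run_date TEXT);
    CREATE TABLE metric (
        run_id INTEGER REFERENCES run(id),
        barcode TEXT, name TEXT, value REAL
    );
    INSERT INTO run VALUES (1, 'A', '2014-02-20'), (2, 'B', '2014-02-21');
    INSERT INTO metric VALUES (1, 'X', 'nucleotide_bias', 0.12),
                              (2, 'X', 'nucleotide_bias', 0.31);
""")

# "Metrics about nucleotide bias for barcode X, on sequencer A, within the
# last month" (a fixed cutoff stands in for "last month" here).
rows = con.execute("""
    SELECT r.sequencer, r.run_date, m.value
    FROM metric m JOIN run r ON r.id = m.run_id
    WHERE m.barcode = 'X' AND m.name = 'nucleotide_bias'
      AND r.sequencer = 'A' AND r.run_date >= '2014-02-01'
""").fetchall()
print(rows)  # [('A', '2014-02-20', 0.12)]
```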
Research on Historic Bim of Built Heritage in Taiwan - a Case Study of Huangxi Academy
NASA Astrophysics Data System (ADS)
Lu, Y. C.; Shih, T. Y.; Yen, Y. N.
2018-05-01
Digital archiving technology for conserving cultural heritage is an important subject nowadays. The Taiwanese Ministry of Culture continues to align the concepts and technology of conservation with international conventions. However, the products of these different technologies are not yet integrated, due to the lack of research and development in this field, and there is currently no effective schema in HBIM for Taiwanese cultural heritage. The aim of this research is to establish an HBIM schema for Chinese built heritage in Taiwan. The proposed method starts from the components of built heritage buildings and investigates their important properties as identified in major international charters and Taiwanese cultural heritage conservation law. An object-oriented class diagram and an ontology were then defined at the component scale to clarify the concepts and increase interoperability. A historical database was established for the historical information of components and brought into the BIM concept in order to build a 3D model of heritage objects that can be used for visualization. An integration platform was developed for users to browse and manipulate the database and the 3D model simultaneously. In addition, this research evaluated the feasibility of the method in a case study of the Huangxi Academy in Taiwan. The conclusions show that the class diagram supports the establishment of the database and its application to different Chinese built heritage objects, and that the ontology helps to convey knowledge and increase interoperability. In comparison to traditional documentation methods, the querying results of the platform were more accurate and less prone to human error.
A Descriptive and Interpretative Information System for the IODP
NASA Astrophysics Data System (ADS)
Blum, P.; Foster, P. A.; Mateo, Z.
2006-12-01
The ODP/IODP has a long and rich history of collecting descriptive and interpretative information (DESCINFO) from rock and sediment cores from the world's oceans. Unlike instrumental data, DESCINFO generated by subject experts is biased by the scientific and cultural background of the observers and their choices of classification schemes. As a result, global searches of DESCINFO and its integration with other data are problematic. To address this issue, the IODP-USIO is in the process of designing and implementing a DESCINFO system for IODP Phase 2 (2007-2013) that meets the user expectations expressed over the past decade. The requirements include support of (1) detailed, material property-based descriptions as well as classification-based descriptions; (2) global searches by physical sample and digital data sources as well as any of the descriptive parameters; (3) user-friendly data capture tools for a variety of workflows; (4) extensive visualization of DESCINFO data along with instrumental data and images; and (5) portability/interoperability, such that the system can work with database schemas of other organizations - a specific challenge given the schema and semantic heterogeneity not only among the three IODP operators but within the geosciences in general. The DESCINFO approach is based on the definition of a set of generic observable parameters that are populated with numeric or text values. Text values are derived from controlled, extensible hierarchical value lists that allow descriptions at the appropriate level of detail and ensure successful data searches. Material descriptions can be completed independently of domain-specific classifications, genetic concepts, and interpretative frameworks.
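A minimal sketch of a controlled, hierarchical value list of the kind described, where a search for a term also matches everything beneath it in the hierarchy; the lithology terms are illustrative, not IODP's actual vocabulary.

```python
# Sketch of a hierarchical controlled value list: observers may describe at
# any level of the hierarchy, and a search for a term matches the term and
# all of its descendants. The lithology terms below are illustrative only.
VOCAB = {
    "sediment": {
        "clastic": {"sand": {}, "silt": {}, "clay": {}},
        "biogenic": {"ooze": {}},
    },
}

def subtree(tree, term):
    """Find the subtree rooted at term, or None if the term is unknown."""
    if term in tree:
        return tree[term]
    for sub in tree.values():
        found = subtree(sub, term)
        if found is not None:
            return found
    return None

def matches(term):
    """A search for term matches the term itself and all of its descendants."""
    out, stack = [term], []
    sub = subtree(VOCAB, term)
    if sub is not None:
        stack.append(sub)
    while stack:
        node = stack.pop()
        out.extend(node.keys())
        stack.extend(node.values())
    return out

print(matches("clastic"))  # ['clastic', 'sand', 'silt', 'clay']
```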
BioXSD: the common data-exchange format for everyday bioinformatics web services
Kalaš, Matúš; Puntervoll, Pål; Joseph, Alexandre; Bartaševičiūtė, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge
2010-01-01
Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interface. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. Results: BioXSD has been developed as a candidate for standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. Availability: The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source codes in common programming languages, an updated list of compatible web services and tools and a repository of feature requests from the community. Contact: matus.kalas@bccs.uib.no; developers@bioxsd.org; support@bioxsd.org PMID:20823319
Cruella: developing a scalable tissue microarray data management system.
Cowan, James D; Rimm, David L; Tuck, David P
2006-06-01
Compared with DNA microarray technology, relatively little information is available concerning the special requirements, design influences, and implementation strategies of data systems for tissue microarray technology. These issues include the requirement to accommodate new and different data elements for each new project as well as the need to interact with pre-existing models for clinical, biological, and specimen-related data. Our goal was to design and implement a flexible, scalable tissue microarray data storage and management system that could accommodate information regarding different disease types, clinical investigators, and clinical investigation questions, all of which could potentially contribute unforeseen data types requiring dynamic integration with existing data. The unpredictability of the data elements combined with the novelty of automated analysis algorithms and controlled vocabulary standards in this area require flexible designs and practical decisions. Our design includes a custom Java-based persistence layer to mediate and facilitate interaction with an object-relational database model and a novel database schema. User interaction is provided through a Java Servlet-based Web interface. Cruella has become an indispensable resource and is used by dozens of researchers every day. The system stores millions of experimental values covering more than 300 biological markers and more than 30 disease types. The experimental data are merged with clinical data that have been aggregated from multiple sources and are available to the researchers for management, analysis, and export. Cruella addresses many of the special considerations for managing tissue microarray experimental data and the associated clinical information. A metadata-driven approach provides a practical solution to many of the unique issues inherent in tissue microarray research, and allows relatively straightforward interoperability with and accommodation of new data models.
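The "metadata-driven approach" credited above for absorbing unforeseen data types is commonly realized as an entity-attribute-value layout. The sketch below is illustrative only, under that assumption; it is not Cruella's actual schema.

```python
# Illustrative entity-attribute-value (EAV) layout: new per-project data
# elements are registered as rows, not as ALTER TABLE schema changes.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE attribute  (id INTEGER PRIMARY KEY, name TEXT UNIQUE, datatype TEXT);
CREATE TABLE spot_value (spot_id INTEGER, attribute_id INTEGER, value TEXT);
""")

# A new biological marker arrives mid-study; register it at runtime.
con.execute("INSERT INTO attribute (name, datatype) VALUES (?, ?)",
            ("HER2_intensity", "REAL"))
attr_id = con.execute("SELECT id FROM attribute WHERE name = ?",
                      ("HER2_intensity",)).fetchone()[0]
con.execute("INSERT INTO spot_value VALUES (?, ?, ?)", (42, attr_id, "187.5"))
```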
Stress affects the neural ensemble for integrating new information and prior knowledge.
Vogel, Susanne; Kluen, Lisa Marieke; Fernández, Guillén; Schwabe, Lars
2018-06-01
Prior knowledge, represented as a schema, facilitates memory encoding. This schema-related learning is assumed to rely on the medial prefrontal cortex (mPFC) that rapidly integrates new information into the schema, whereas schema-incongruent or novel information is encoded by the hippocampus. Stress is a powerful modulator of prefrontal and hippocampal functioning, and initial studies suggest a stress-induced deficit of schema-related learning. However, the underlying neural mechanism is currently unknown. To investigate the neural basis of a stress-induced schema-related learning impairment, participants first acquired a schema. One day later, they underwent a stress induction or a control procedure before learning schema-related and novel information in the MRI scanner. In line with previous studies, learning schema-related compared to novel information activated the mPFC, angular gyrus, and precuneus. Stress, however, affected the neural ensemble activated during learning. Whereas the control group distinguished between sets of brain regions for related and novel information, stressed individuals engaged the hippocampus even when a relevant schema was present. Additionally, stressed participants displayed aberrant functional connectivity between brain regions involved in schema processing when encoding novel information. The failure to segregate functional connectivity patterns depending on the presence of prior knowledge was linked to impaired performance after stress. Our results show that stress affects the neural ensemble underlying the efficient use of schemas during learning. These findings may have relevant implications for clinical and educational settings. Copyright © 2018 Elsevier Inc. All rights reserved.
Development of an open metadata schema for prospective clinical research (openPCR) in China.
Xu, W; Guan, Z; Sun, J; Wang, Z; Geng, Y
2014-01-01
In China, deployment of electronic data capture (EDC) and clinical data management systems (CDMS) for clinical research (CR) is in its very early stage, and about 90% of clinical studies collected and submitted clinical data manually. This work aims to build an open metadata schema for Prospective Clinical Research (openPCR) in China based on openEHR archetypes, in order to help Chinese researchers easily create specific data entry templates for registration, study design and clinical data collection. The Singapore Framework for Dublin Core Application Profiles (DCAP) is used to develop openPCR, following four steps: defining the core functional requirements and deducing the core metadata items; developing archetype models; defining metadata terms and creating archetype records; and finally developing the implementation syntax. The core functional requirements are divided into three categories: requirements for research registration, requirements for trial design, and requirements for case report forms (CRF). 74 metadata items are identified and their Chinese authority names are created. The minimum metadata set of openPCR includes 3 documents, 6 sections, 26 top-level data groups, 32 lower data groups and 74 data elements. The top-level container in openPCR is composed of public document, internal document and clinical document archetypes. A hierarchical structure of openPCR is established according to the Data Structure of Electronic Health Record Architecture and Data Standard of China (Chinese EHR Standard). Metadata attributes are grouped into six parts: identification, definition, representation, relation, usage guides, and administration. OpenPCR is an open metadata schema based on research registration standards, standards of the Clinical Data Interchange Standards Consortium (CDISC) and Chinese healthcare-related standards, and is to be made publicly available throughout China. It considers future integration of EHR and CR by adopting the data structure and data terms of the Chinese EHR Standard. Archetypes in openPCR are modular models and can be separated, recombined, and reused. The authors recommend that the method used to develop openPCR be referenced by other countries when designing metadata schemas for clinical research. In the next steps, openPCR should be used in a number of CR projects to test its applicability and to continuously improve its coverage. In addition, a metadata schema for research protocols could be developed to structure and standardize protocols, and syntactic interoperability of openPCR with other related standards could be considered.
NASA Astrophysics Data System (ADS)
Grimes, J.; Mahoney, A. R.; Heinrichs, T. A.; Eicken, H.
2012-12-01
Sensor data can be highly variable in nature and also varied depending on the physical quantity being observed, sensor hardware and sampling parameters. The sea ice mass balance site (MBS) operated in Barrow by the University of Alaska Fairbanks (http://seaice.alaska.edu/gi/observatories/barrow_sealevel) is a multisensor platform consisting of a thermistor string, air and water temperature sensors, acoustic altimeters above and below the ice and a humidity sensor. Each sensor has a unique specification and configuration. The data from multiple sensors are combined to generate sea ice data products. For example, ice thickness is calculated from the positions of the upper and lower ice surfaces, which are determined using data from downward-looking and upward-looking acoustic altimeters above and below the ice, respectively. As a data clearinghouse, the Geographic Information Network of Alaska (GINA) processes real-time data from many sources, including the Barrow MBS. Doing so requires a system that is easy to use, yet also offers the flexibility to handle data from multisensor observing platforms. In the case of the Barrow MBS, the metadata system needs to accommodate the addition of new and retirement of old sensors from year to year as well as instrument configuration changes caused by, for example, spring melt or inquisitive polar bears. We also require ease of use for both administrators and end users. Here we present the data model and processing steps of a sensor data system powered by the NoSQL storage engine MongoDB. The system has been developed to ingest, process, disseminate and archive data from the Barrow MBS. Storing sensor data in a generalized format, from many different sources, is a challenging task, especially for traditional SQL databases with a set schema. MongoDB is a NoSQL (not only SQL) database that does not require a fixed schema. There are several advantages to using this model over the traditional relational database management system (RDBMS) model. The lack of a required schema allows flexibility in how the data can be ingested into the database. For example, MongoDB imposes no restrictions on field names. For researchers using the system, this means that the name they have chosen for a sensor is carried through the database, any processing, and the final output, helping to preserve data integrity. Also, MongoDB allows data to be pushed to it dynamically, meaning that field attributes can be defined at the point of ingestion. This allows any sensor data to be ingested as a document and for this functionality to be transferred to the user interface, allowing greater adaptability to different use-case scenarios. In presenting the MongoDB data system being developed for the Barrow MBS, we demonstrate the versatility of this approach and its suitability as the foundation of a Barrow node of the Arctic Observing Network.
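A minimal sketch of the schema-less ingestion pattern described above, using pymongo; the database, collection, and field names are assumptions for illustration, not the Barrow MBS's actual layout.

```python
# Schema-less ingestion sketch: heterogeneous sensor records share one
# collection, and field attributes are defined at the point of ingestion.
from datetime import datetime, timezone
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mbs"]

# Two sensors with different fields need no migration step to coexist.
db.readings.insert_one({"sensor": "altimeter_up", "range_m": 1.37,
                        "t": datetime.now(timezone.utc)})
db.readings.insert_one({"sensor": "thermistor_string",
                        "temps_c": [-1.8, -1.6, -1.2],
                        "t": datetime.now(timezone.utc)})

# The researcher-chosen sensor name passes straight through to queries.
for doc in db.readings.find({"sensor": "altimeter_up"}):
    print(doc["range_m"])
```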
Oh, Sungyoung; Cha, Jieun; Ji, Myungkyu; Kang, Hyekyung; Kim, Seok; Heo, Eunyoung; Han, Jong Soo; Kang, Hyunggoo; Chae, Hoseok; Hwang, Hee; Yoo, Sooyoung
2015-04-01
We aimed to design a cloud computing-based Healthcare Software-as-a-Service (SaaS) Platform (HSP) for delivering healthcare information services with low cost, high clinical value, and high usability. We analyzed the architecture requirements of an HSP, including the interface, business services, cloud SaaS, quality attributes, privacy and security, and multi-lingual capacity. For cloud-based SaaS services, we focused on Clinical Decision Service (CDS) content services, basic functional services, and mobile services. Microsoft's Azure cloud computing for Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) was used. The functional and software views of an HSP were designed in a layered architecture. External systems can be interfaced with the HSP using SOAP and REST/JSON. The multi-tenancy model of the HSP was designed as a shared database, with a separate schema for each tenant through a single application, although healthcare data can be physically located on a cloud or in a hospital, depending on regulations. The CDS services were categorized into rule-based services for medications, alert registration services, and knowledge services. We expect that cloud-based HSPs will allow small and mid-sized hospitals, in addition to large-sized hospitals, to adopt information infrastructures and health information technology with low system operation and maintenance costs.
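One conventional way to realize the "shared database, separate schema for each tenant" model the authors describe is a per-request schema switch; the PostgreSQL-based sketch below is an illustrative assumption, not the HSP's actual Azure implementation.

```python
# Illustrative "shared database, separate schema per tenant" sketch
# (hypothetical connection settings and tenant names; not the HSP's code).
import psycopg2
from psycopg2 import sql

def connection_for_tenant(tenant: str):
    con = psycopg2.connect("dbname=hsp user=hsp_app")
    with con.cursor() as cur:
        # One application, one database; each hospital's tables live in
        # its own schema, selected per request.
        cur.execute(sql.SQL("SET search_path TO {}").format(sql.Identifier(tenant)))
    return con

con = connection_for_tenant("hospital_a")
```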
York, Valerie K; Brannon, Laura A; Miller, Megan M
2012-01-01
We investigated whether a thoroughly personalized message (tailored to a person's "Big Five" personality traits) or a message matched to an alternate form of self-schema (ideal self-schema) would be more influential than a self-schema matched message (that has been found to be effective) at marketing responsible drinking. We expected the more thoroughly personalized Big Five matched message to be more effective than the self-schema matched message. However, neither the Big Five message nor the ideal self-schema message was more effective than the actual self-schema message. Therefore, research examining self-schema matching should be pursued rather than more complex Big Five matching.
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Scott, S.
2013-12-01
While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers that aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into multiple data infrastructures, as has been demonstrated through EDAC's testing and deployment of metadata into multiple external systems: Data.Gov, the GEOSS Registry, the DataONE network, the DSpace based institutional repository at UNM and semantic mediation systems developed as part of the NASA ACCESS ELSeWEB project. Each of these systems requires valid metadata as a first step, but to make most effective use of the delivered metadata each also has a set of conventions that are specific to the system. This presentation will provide an overview of the underlying metadata management model, the processes and web services that have been developed to automatically generate metadata in a variety of standard formats and highlight some of the specific modifications made to the output metadata content to support the different conventions used by the multiple metadata integration endpoints.
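As a hedged illustration of the export step described above, the sketch below applies a hypothetical crosswalk stylesheet with lxml and "injects" publisher information as a stylesheet parameter at request time; all file and parameter names are placeholders, not GSTORE's actual artifacts.

```python
# Hedged sketch: export an internal "superset" metadata record to one target
# standard via XSLT, injecting request-time details as stylesheet parameters.
# The stylesheet, record, and parameter names are hypothetical placeholders.
from lxml import etree

transform = etree.XSLT(etree.parse("internal_to_dublincore.xsl"))
record = etree.parse("gstore_record.xml")

result = transform(record,
                   publisher=etree.XSLT.strparam("EDAC, University of New Mexico"))
print(etree.tostring(result, pretty_print=True).decode())
```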
Paek, Hye-Jin; Hove, Thomas
2018-05-01
This study examines the roles that the media effects and persuasion ethics schemas play in people's responses to an antismoking ad in South Korea. An online experiment was conducted with 347 adults. The media effects schema was manipulated with news stories on an antismoking campaign's effectiveness, while the persuasion ethics schema was measured and median-split. Analysis of Variance (ANOVA) tests were performed for issue attitudes (Iatt), attitude toward the ad (Aad), and behavioral intention (BI). Results show significant main effects of the media effects schema on the three dependent variables. People in the weak media effects condition had significantly lower Iatt, Aad, and BI than those in either the strong media effects condition or the control condition. This pattern was more pronounced among smokers. While there was no significant main effect of the persuasion ethics schema on any of the dependent variables, a significant interaction effect of persuasion ethics schema and smoking status was found on BI. Nonsmokers' BI was significantly higher than smokers' in the low-persuasion ethics schema condition, but the difference was not significant in the high-persuasion ethics schema condition.
GenomeHubs: simple containerized setup of a custom Ensembl database and web server for any species
Kumar, Sujai; Stevens, Lewis; Blaxter, Mark
2017-01-01
As the generation and use of genomic datasets is becoming increasingly common in all areas of biology, the need for resources to collate, analyse and present data from one or more genome projects is becoming more pressing. The Ensembl platform is a powerful tool to make genome data and cross-species analyses easily accessible through a web interface and a comprehensive application programming interface. Here we introduce GenomeHubs, which provide a containerized environment to facilitate the setup and hosting of custom Ensembl genome browsers. This simplifies mirroring of existing content and import of new genomic data into the Ensembl database schema. GenomeHubs also provide a set of analysis containers to decorate imported genomes with the results of standard analyses and functional annotations, and support export to flat files, including EMBL format for submission of assemblies and annotations to the International Nucleotide Sequence Database Collaboration. Database URL: http://GenomeHubs.org PMID:28605774
Chen, Po-Hao; Loehfelm, Thomas W; Kamer, Aaron P; Lemmon, Andrew B; Cook, Tessa S; Kohli, Marc D
2016-12-01
The residency review committee of the Accreditation Council of Graduate Medical Education (ACGME) collects data on resident exam volume and sets minimum requirements. However, this data is not made readily available, and the ACGME does not share their tools or methodology. It is therefore difficult to assess the integrity of the data and determine if it truly reflects relevant aspects of the resident experience. This manuscript describes our experience creating a multi-institutional case log, incorporating data from three American diagnostic radiology residency programs. Each of the three sites independently established automated query pipelines from the various radiology information systems in their respective hospital groups, thereby creating a resident-specific database. Then, the three institutional resident case log databases were aggregated into a single centralized database schema. Three hundred thirty residents and 2,905,923 radiologic examinations over a 4-year span were catalogued using 11 ACGME categories. Our experience highlights big data challenges including internal data heterogeneity and external data discrepancies faced by informatics researchers.
PAZAR: a framework for collection and dissemination of cis-regulatory sequence annotation.
Portales-Casamar, Elodie; Kirov, Stefan; Lim, Jonathan; Lithwick, Stuart; Swanson, Magdalena I; Ticoll, Amy; Snoddy, Jay; Wasserman, Wyeth W
2007-01-01
PAZAR is an open-access and open-source database of transcription factor and regulatory sequence annotation with associated web interface and programming tools for data submission and extraction. Curated boutique data collections can be maintained and disseminated through the unified schema of the mall-like PAZAR repository. The Pleiades Promoter Project collection of brain-linked regulatory sequences is introduced to demonstrate the depth of annotation possible within PAZAR. PAZAR, located at http://www.pazar.info, is open for business. PMID:17916232
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David G.; Ashish, Naveen
2005-01-01
This paper describes an approach to achieving data integration across multiple sources in an enterprise, in a manner that is cost efficient and economically scalable. We present an approach that does not rely on major investment in structured, heavy-weight database systems for data storage or heavy-weight middleware responsible for integrated access. The approach is centered around pushing any required data structure and semantics functionality (schema) to application clients, as well as pushing integration specification and functionality to clients where integration can be performed on-the-fly.
Toxics Release Inventory Chemical Hazard Information Profiles (TRI-CHIP) Dataset
The Toxics Release Inventory (TRI) Chemical Hazard Information Profiles (TRI-CHIP) dataset contains hazard information about the chemicals reported in TRI. Users can use this XML-format dataset to create their own databases and hazard analyses of TRI chemicals. The hazard information is compiled from a series of authoritative sources including the Integrated Risk Information System (IRIS). The dataset is provided as a downloadable .zip file that, when extracted, provides XML files and schemas for the hazard information tables.
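Since the download ships as XML files plus schemas, loading it into a local database is a short scripting task. The sketch below is a minimal, hedged example; the file, element, and column names are hypothetical because the actual table layout is not specified here.

```python
# Minimal sketch (hypothetical file and element names) of loading extracted
# TRI-CHIP XML tables into a local SQLite database for analysis.
import sqlite3
import xml.etree.ElementTree as ET

con = sqlite3.connect("trichip.db")
con.execute("CREATE TABLE IF NOT EXISTS hazard (chemical TEXT, source TEXT, finding TEXT)")

root = ET.parse("hazard_table.xml").getroot()   # one of the extracted XML files
for rec in root.iter("record"):                 # hypothetical element names
    con.execute("INSERT INTO hazard VALUES (?, ?, ?)",
                (rec.findtext("chemical"), rec.findtext("source"),
                 rec.findtext("finding")))
con.commit()
```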
2006-09-01
STELLA and PowerLoom. These modules communicate with a knowledge base using KIF and standard relational database systems using either standard... groups ontology as well as a rule that infers additional seed members based on joint participation in a terrorism event. EDB schema files are a special... terrorism links from the Ali Baba EDB. Our interpretation of such links is that they encode that two people committed an act of
NASA Astrophysics Data System (ADS)
Gembong, S.; Suwarsono, S. T.; Prabowo
2018-03-01
Schema in the current study refers to a set of action, process, object and other schemas already possessed to build an individual's ways of thinking to solve a given problem. The current study aims to investigate the schemas built among elementary school students in solving problems related to the addition of fractions. The analyses of the schema building were done qualitatively on the basis of the analytical framework of the APOS theory (Action, Process, Object, and Schema). Findings show that the schemas built by students of high and middle ability indicate the following. In the action stage, students were able to add two fractions by drawing a picture or by a procedural method. In the process stage, they could add two and three fractions. In the object stage, they could explain the steps of adding two fractions and change a fraction into an addition of fractions. In the last stage, schema, they could add fractions by relating them to another schema they already possessed, i.e., the least common multiple. Those of high and middle mathematical ability showed that their schema building in solving problems related to the addition of fractions worked in line with the framework of the APOS theory. Those of low mathematical ability, however, showed that their schema at each stage did not work properly.
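For concreteness, the least-common-multiple schema invoked at the final stage works as in the following worked example (ours, for illustration; not an item from the study):

```latex
\frac{1}{4} + \frac{1}{6}
  = \frac{3}{12} + \frac{2}{12}
  = \frac{5}{12},
\qquad \operatorname{lcm}(4,6) = 12 .
```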
Karp, Peter D; Paley, Suzanne; Romero, Pedro
2002-01-01
Bioinformatics requires reusable software tools for creating model-organism databases (MODs). The Pathway Tools is a reusable, production-quality software environment for creating a type of MOD called a Pathway/Genome Database (PGDB). A PGDB such as EcoCyc (see http://ecocyc.org) integrates our evolving understanding of the genes, proteins, metabolic network, and genetic network of an organism. This paper provides an overview of the four main components of the Pathway Tools: The PathoLogic component supports creation of new PGDBs from the annotated genome of an organism. The Pathway/Genome Navigator provides query, visualization, and Web-publishing services for PGDBs. The Pathway/Genome Editors support interactive updating of PGDBs. The Pathway Tools ontology defines the schema of PGDBs. The Pathway Tools makes use of the Ocelot object database system for data management services for PGDBs. The Pathway Tools has been used to build PGDBs for 13 organisms within SRI and by external users.
Kogan, Steven M; Lei, Man-Kit; Grange, Christina R; Simons, Ronald L; Brody, Gene H; Gibbons, Frederick X; Chen, Yi-Fu
2013-06-01
Accumulating evidence suggests that African American men and women experience unique challenges in developing and maintaining stable, satisfying romantic relationships. Extant studies have linked relationship quality among African American couples to contemporaneous risk factors such as economic hardship and racial discrimination. Little research, however, has examined the contextual and intrapersonal processes in late childhood and adolescence that influence romantic relationship health among African American adults. We investigated competence-promoting parenting practices and exposure to community-related stressors in late childhood, and negative relational schemas in adolescence, as predictors of young adult romantic relationship health. Participants were 318 African American young adults (59.4% female) who had provided data at four time points from ages 10-22 years. Structural equation modeling indicated that exposure to community-related stressors and low levels of competence-promoting parenting contributed to negative relational schemas, which were proximal predictors of young adult relationship health. Relational schemas mediated the associations of competence-promoting parenting practices and exposure to community stressors in late childhood with romantic relationship health during young adulthood. Results suggest that enhancing caregiving practices, limiting youths' exposure to community stressors, and modifying relational schemas are important processes to be targeted for interventions designed to enhance African American adults' romantic relationships.
Ahluwalia, Monisha; Hughes, Alicia M; McCracken, Lance M; Chilcot, Joseph
2017-08-01
Few studies have assessed the underlying theoretical components of the Common Sense Model. Past studies have found, through implicit priming, that coping strategies are embedded within illness schemas. Our aim was to evaluate the effect of priming the 'headache' illness schema upon attentional engagement with pain relief medication and to examine the interaction with illness treatment beliefs. Attentional engagement with the pain relief medication ('Paracetamol') was assessed using a 2 (primed vs. control) × 2 (strong belief in medication efficacy vs. weak belief in medication efficacy) design. During a grammatical decision task (identifying verbs/non-verbs), participants were randomised to receive a headache prime or a control. Response latency to the target word, 'Paracetamol', was the dependent variable. 'Paracetamol' treatment beliefs were determined using the brief illness perception questionnaire. Sixty-three participants completed the experiment. There was a significant interaction between illness-primed vs. control and high vs. low treatment efficacy of Paracetamol (p < .001), suggesting an attentional disengagement effect from the coping strategy in illness-primed participants who held stronger treatment beliefs regarding the efficacy of Paracetamol. In summary, implicit illness schema activation may simultaneously activate embedded coping strategies, which appears to be moderated by specific illness beliefs.
Lomax, C L; Barnard, P J; Lam, D
2009-05-01
There are few theoretical proposals that attempt to account for the variation in affective processing across the different affective states of bipolar disorder (BD). The Interacting Cognitive Subsystems (ICS) framework has recently been extended to account for manic states. Within the framework, positive mood state is hypothesized to tap into an implicational level of processing, which is proposed to be more extreme in states of mania. Thirty individuals with BD and 30 individuals with no history of affective disorder were tested in euthymic mood state and then in an induced positive mood state using the Question-Answer task to examine the mode of processing of schemas. The task was designed to test whether individuals would detect discrepancies within the prevailing schemas of the sentences. Although the present study did not support the hypothesis that the groups differ in their ability to detect discrepancies within schemas, we did find that the BD group was significantly more likely than the control group to answer questions in a way that was consistent with the prevailing schemas, both before and after mood induction. These results may reflect a general cognitive bias: individuals with BD may have a tendency to operate at a more abstract level of representation. This may leave an individual prone to affective disturbance, although further research is required to replicate this finding.
A schema theory analysis of students' think aloud protocols in an STS biology context
NASA Astrophysics Data System (ADS)
Quinlan, Catherine Louise
This dissertation study draws together the fields of Science Education and Applied Cognitive Psychology. The goal of this study is to determine what organizational features and knowledge representation patterns high school students exhibit over time for issues pertinent to science and society. Participants are thirteen tenth-grade students in a diverse suburban-urban classroom in a northeastern state. Students' think alouds are recorded pre-, post-, and late-post treatment. Treatment consists of instruction in three Science, Technology, and Society (STS) biology issues, namely the human genome project, nutrition and health, and stem cell research. Coding and analyses are performed using Marshall's knowledge representations (identification knowledge, elaboration knowledge, planning knowledge, and execution knowledge), as well as qualitative research analysis methods. Schema theory, information processing theory, and other applied cognitive theory provide a framework in which to understand and explain students' schema descriptions and progressions over time. The results show that students display five organizational features in their identification and elaboration knowledge. Students also fall into one of four categories according to whether they display prior schemas or no prior schemas, and their orientation "for" or "against" some of the issues. Students with prior schemas and an orientation "against" display the most robust schema descriptions and schema progressions. Those with no prior schemas and an orientation "against" show very modest schema progressions best characterized by their keyword searches. This study shows the importance of considering not only students' integrated schemas but also their individual schemas. A role for the use of a more schema-based instruction that scaffolds student learning is implicated.
Gstruct: a system for extracting schemas from GML documents
NASA Astrophysics Data System (ADS)
Chen, Hui; Zhu, Fubao; Guan, Jihong; Zhou, Shuigeng
2008-10-01
Geography Markup Language (GML) has become the de facto standard for geographic information representation on the internet. A GML schema provides a way to define the structure, content, and semantics of GML documents. It contains useful structural information about GML documents and plays an important role in storing, querying and analyzing GML data. However, a GML schema is not mandatory, and it is common for a GML document to contain no schema. In this paper, we present Gstruct, a tool for GML schema extraction. Gstruct finds the features in the input GML documents, identifies geometry datatypes as well as simple datatypes, then integrates all these features and eliminates improper components to output an optimal schema. Experiments demonstrate that Gstruct is effective in extracting semantically meaningful schemas from GML documents.
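Not Gstruct itself, but the core idea of schema extraction can be illustrated in a few lines: walk a document's element tree and record, per tag, which child tags and attributes actually occur, then emit that as a candidate structure. The input file name below is a hypothetical placeholder.

```python
# Illustrative structure-inference sketch: observe, per element tag, the
# child tags and attributes actually present in a GML/XML document.
import xml.etree.ElementTree as ET
from collections import defaultdict

def extract_structure(path):
    children = defaultdict(set)
    attributes = defaultdict(set)
    for parent in ET.parse(path).getroot().iter():
        attributes[parent.tag].update(parent.attrib)
        for child in parent:
            children[parent.tag].add(child.tag)
    return children, attributes

children, attributes = extract_structure("features.gml")  # hypothetical input
for tag in children:
    print(tag, "->", sorted(children[tag]), "attrs:", sorted(attributes[tag]))
```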
samiDB: A Prototype Data Archive for Big Science Exploration
NASA Astrophysics Data System (ADS)
Konstantopoulos, I. S.; Green, A. W.; Cortese, L.; Foster, C.; Scott, N.
2015-04-01
samiDB is an archive, database, and query engine to serve the spectra, spectral hypercubes, and high-level science products that make up the SAMI Galaxy Survey. Based on the versatile Hierarchical Data Format (HDF5), samiDB does not depend on relational database structures and hence lightens the setup and maintenance load imposed on science teams by metadata tables. The code, written in Python, covers the ingestion, querying, and exporting of data as well as the automatic setup of an HTML schema browser. samiDB serves as a maintenance-light data archive for Big Science and can be adopted and adapted by science teams that lack the means to hire professional archivists to set up the data back end for their projects.
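A hedged sketch of the HDF5 pattern samiDB adopts: hierarchical groups and attributes carry the metadata that relational tables would otherwise hold. The group, dataset, and attribute names below are assumptions for illustration, not samiDB's actual layout.

```python
# Illustrative HDF5 layout: groups replace join tables, attributes replace
# metadata columns (names are hypothetical, not samiDB's actual schema).
import numpy as np
import h5py

with h5py.File("survey.h5", "w") as f:
    cube = f.create_group("GAMA123456").create_dataset(
        "spectral_cube", data=np.zeros((50, 50, 2048)))
    cube.attrs["redshift"] = 0.021
    cube.attrs["grating"] = "580V"

# Querying is a tree walk rather than an SQL join:
with h5py.File("survey.h5", "r") as f:
    for name, group in f.items():
        print(name, dict(group["spectral_cube"].attrs))
```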
QuakeML - An XML Schema for Seismology
NASA Astrophysics Data System (ADS)
Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.
2004-12-01
We propose an extensible format definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should, in our opinion, be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g., a central QuakeML catalog database and a web portal for exchanging code and stylesheets.
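To make the idea concrete, the sketch below assembles a minimal event record with lxml. Since this abstract predates a finalized specification, the element names are purely illustrative and are not the QuakeML standard's actual vocabulary.

```python
# Hypothetical sketch of a structured seismic event record (element names are
# illustrative placeholders, not the finalized QuakeML vocabulary).
from lxml import etree

event = etree.Element("event", publicID="smi:example/event/2004abc")
origin = etree.SubElement(event, "origin")
etree.SubElement(origin, "time").text = "2004-12-01T10:15:30Z"
etree.SubElement(origin, "latitude").text = "46.8"
etree.SubElement(origin, "longitude").text = "8.2"
etree.SubElement(event, "magnitude").text = "4.3"
print(etree.tostring(event, pretty_print=True).decode())
```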
Stress leads to aberrant hippocampal involvement when processing schema-related information.
Vogel, Susanne; Kluen, Lisa Marieke; Fernández, Guillén; Schwabe, Lars
2018-01-01
Prior knowledge, represented as a mental schema, has critical impact on how we organize, interpret, and process incoming information. Recent findings indicate that the use of an existing schema is coordinated by the medial prefrontal cortex (mPFC), communicating with parietal areas. The hippocampus, however, is crucial for encoding schema-unrelated information but not for schema-related information. A recent study indicated that stress mediators may affect schema-related memory, but the underlying neural mechanisms are currently unknown. Here, we thus tested the impact of acute stress on neural processing of schema-related information. We exposed healthy participants to a stress or control manipulation before they processed, in the MRI scanner, words related or unrelated to a preexisting schema activated by a specific cue. Participants' memory for the presented material was tested 3-5 d after encoding. Overall, the processing of schema-related information activated the mPFC, the precuneus, and the angular gyrus. Stress resulted in aberrant hippocampal activity and connectivity while participants processed schema-related information. This aberrant engagement of the hippocampus was linked to altered subsequent memory. These findings suggest that stress may interfere with the efficient use of prior knowledge during encoding and may have important practical implications, in particular for educational settings. © 2018 Vogel et al.; Published by Cold Spring Harbor Laboratory Press.
Hren, Darko; Marušić, Matko; Marušić, Ana
2011-01-01
Background Moral reasoning is important for developing medical professionalism, but current evidence for the relationship between education and moral reasoning does not clearly apply to medical students. We used a combined study design to test the effect of clinical teaching on moral reasoning. Methods We used the Defining Issues Test-2 as a measure of moral judgment, with 3 general moral schemas: Personal Interest, Maintaining Norms, and Postconventional Schema. The test was applied to 3 consecutive cohorts of second year students in 2002 (n = 207), 2003 (n = 192), and 2004 (n = 139), and to 707 students of all 6 study years in a 2004 cross-sectional study. We also tested 298 age-matched controls without university education. Results In the cross-sectional study, there was a significant main effect of the study year for Postconventional (F(5,679) = 3.67, P = 0.003) and Personal Interest scores (F(5,679) = 3.38, P = 0.005). There was no effect of the study year for Maintaining Norms scores. Third-year medical students scored higher on the Postconventional schema score than students in all other study years (p<0.001). There were no statistically significant differences among the 3 cohorts of 2nd year medical students, demonstrating the absence of cohort or point-of-measurement effects. The longitudinal study of 3 cohorts demonstrated that students regressed from Postconventional to Maintaining Norms schema-based reasoning after entering the clinical part of the curriculum. Interpretation Our study demonstrated a direct causal relationship between the regression in moral reasoning development and clinical teaching during the medical curriculum. The reasons may include the hierarchical organization of clinical practice, the specific nature of moral dilemmas faced by medical students, and the hidden medical curriculum. PMID:21479204
Eminaga, O; Semjonow, A; Oezguer, E; Herden, J; Akbarov, I; Tok, A; Engelmann, U; Wille, S
2014-01-01
The integrity of collection protocols in biobanking is essential for a high-quality sample preparation process. However, there is currently no well-defined universal method for integrating collection protocols into a biobank information management system (BIMS). Therefore, an electronic schema of the collection protocol that is based on Extensible Markup Language (XML) is required to maintain the integrity and enable the exchange of collection protocols. The development and implementation of an electronic specimen collection protocol schema (eSCPS) was performed at two institutions (Muenster and Cologne) in three stages. First, we analyzed the infrastructure already established in both the biorepository and the hospital information systems of these institutions and determined the requirements for sufficient specimen preparation and documentation. Second, we designed an eSCPS according to these requirements. Finally, a prospective study was conducted to implement and evaluate the novel schema in the current BIMS. We designed an eSCPS that provides all of the relevant information about collection protocols. Ten electronic collection protocols were generated using the supplementary Protocol Editor tool, and these protocols were successfully implemented in the existing BIMS. Moreover, an electronic list of collection protocols for the current studies being performed at each institution was included, new collection protocols were added, and the existing protocols were redesigned to be modifiable. The documentation time was significantly reduced after implementing the eSCPS (5 ± 2 min vs. 7 ± 3 min; p = 0.0002). The eSCPS improves the integrity and facilitates the exchange of specimen collection protocols in the existing open-source BIMS.
A Learning Design Ontology Based on the IMS Specification
ERIC Educational Resources Information Center
Amorim, Ricardo R.; Lama, Manuel; Sanchez, Eduardo; Riera, Adolfo; Vila, Xose A.
2006-01-01
In this paper, we present an ontology to represent the semantics of the IMS Learning Design (IMS LD) specification, a meta-language used to describe the main elements of the learning design process. The motivation of this work stems from the expressiveness limitations found in the current XML-Schema implementation of the IMS LD conceptual model. To…
Kaya Tezel, Fulya; Tutarel Kişlak, Şennur; Boysan, Murat
2015-09-01
Cognitive theories of psychopathology have generally proposed that early experiences of childhood abuse and neglect may result in the development of early maladaptive self-schemas. Maladaptive core schemas are central to the development and maintenance of psychological symptoms in a schema-focused approach. Psychosocial dysfunction in individuals with psychological problems has been consistently found to be associated with symptom severity. However, to date, linkages between psychosocial functioning, early traumatic experiences and core schemas have received little attention. The aim of the present study was to explore the relations among maladaptive interpersonal styles, negative experiences in childhood and core self-schemas in non-clinical adults. A total of 300 adults (58% women) participated in the study. The participants completed a socio-demographic questionnaire, the Young Schema Questionnaire, the Childhood Trauma Questionnaire and the Interpersonal Style Scale. Hierarchical regression analyses revealed that the Disconnection and Rejection and Impaired Limits schema domains were significant antecedents of maladaptive interpersonal styles after controlling for demographic characteristics and childhood abuse and neglect. Associations of child sexual abuse with Emotionally Avoidant, Manipulative and Abusive interpersonal styles were mediated by early maladaptive schemas. Early maladaptive schemas mediated the relations of emotional abuse with Emotionally Avoidant and Avoidant interpersonal styles as well as the relations of physical abuse with Avoidant and Abusive interpersonal styles. Interpersonal styles in adulthood are significantly associated with childhood traumatic experiences. Significant relations between early traumatic experiences and maladaptive interpersonal styles are mediated by early maladaptive schemas.
Baker, Amanda; Blanchard, Céline
2017-09-01
Research has primarily focused on the consequences of the female thin ideal on women and has largely ignored the effects on men. Two studies were designed to investigate the effects of a female thin ideal video on cognitive (Study 1: appearance schema, Study 2: visual-spatial processing) and self-evaluative measures in male viewers. Results revealed that the female thin ideal predicted men's increased appearance schema activation and poorer cognitive performance on a visual-spatial task. Constructs from self-determination theory (i.e., global autonomous and controlled motivation) were included to help explain for whom the video effects might be strongest or weakest. Findings demonstrated that a global autonomous motivation orientation played a protective role against the effects of the female thin ideal. Given that autonomous motivation was a significant moderator, SDT is an area worth exploring further to determine whether motivational strategies can benefit men who are susceptible to media body ideals. Copyright © 2017 Elsevier Ltd. All rights reserved.
A generic minimization random allocation and blinding system on web.
Cai, Hongwei; Xia, Jielai; Xu, Dezhong; Gao, Donghuai; Yan, Yongping
2006-12-01
Minimization is a dynamic randomization method for clinical trials. Although recommended by many researchers, the use of minimization has seldom been reported in randomized trials, mainly because of the controversy surrounding the validity of conventional analyses and its complexity in implementation. However, both the statistical and clinical validity of minimization were demonstrated in recent studies. A minimization random allocation system integrated with a blinding function that could facilitate the implementation of this method in general clinical trials has not previously been reported. SYSTEM OVERVIEW: The system is a web-based random allocation system using the Pocock and Simon minimization method. It also supports multiple treatment arms within a trial, multiple simultaneous trials, and blinding without further programming. The system was constructed with a generic database schema design method, the Pocock and Simon minimization method, and a blinding method. It was coded in the Microsoft Visual Basic and Active Server Pages (ASP) programming languages, and all datasets were managed with a Microsoft SQL Server database. Some critical programming code is also provided. SIMULATIONS AND RESULTS: Two clinical trials were simulated simultaneously to test the system's applicability. Not only balanced groups but also blinded allocation results were achieved in both trials. Practical considerations for the minimization method, and the benefits, general applicability and drawbacks of the technique implemented in this system, are discussed. Promising features of the proposed system are also summarized.
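A compact sketch of Pocock-Simon minimization as described above (our illustration in Python, not the authors' ASP/SQL Server implementation): each incoming patient is hypothetically assigned to each arm, the marginal imbalance over the patient's own factor levels is totalled, and a biased coin favors the arm with the smallest total.

```python
# Pocock-Simon minimization sketch using the range of marginal counts as the
# imbalance measure and a biased coin to preserve unpredictability.
import random
from collections import defaultdict

counts = defaultdict(int)  # (factor, level, arm) -> patients already allocated

def allocate(patient_levels, arms=("A", "B"), p_best=0.8):
    imbalance = {}
    for candidate in arms:
        total = 0
        for factor, level in patient_levels.items():
            # Counts as they would look if this patient joined `candidate`.
            trial = {a: counts[(factor, level, a)] + (a == candidate) for a in arms}
            total += max(trial.values()) - min(trial.values())
        imbalance[candidate] = total
    best = min(imbalance, key=imbalance.get)
    arm = best if random.random() < p_best else random.choice(
        [a for a in arms if a != best])
    for factor, level in patient_levels.items():
        counts[(factor, level, arm)] += 1
    return arm

print(allocate({"sex": "F", "site": "centre_1"}))
print(allocate({"sex": "M", "site": "centre_1"}))
```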
Conceptual Developments in Schema Theory.
ERIC Educational Resources Information Center
Bigenho, Frederick W., Jr.
The conceptual development of schema theory, the way an individual organizes knowledge, is discussed, reviewing a range of perspectives regarding schema. Schema has been defined as the interfacing of incoming information with prior knowledge, clustered in networks. These networks comprise a superordinate concept and supporting information. The…
Schematic memory components converge within angular gyrus during retrieval
Wagner, Isabella C; van Buuren, Mariët; Kroes, Marijn CW; Gutteling, Tjerk P; van der Linden, Marieke; Morris, Richard G; Fernández, Guillén
2015-01-01
Mental schemas form associative knowledge structures that can promote the encoding and consolidation of new and related information. Schemas are facilitated by a distributed system that stores components separately, presumably in the form of inter-connected neocortical representations. During retrieval, these components need to be recombined into one representation, but where exactly such recombination takes place is unclear. Thus, we asked where different schema components are neuronally represented and converge during retrieval. Subjects acquired and retrieved two well-controlled, rule-based schema structures during fMRI on consecutive days. Schema retrieval was associated with midline, medial-temporal, and parietal processing. We identified the multi-voxel representations of different schema components, which converged within the angular gyrus during retrieval. Critically, convergence only happened after 24 hours of consolidation and during a transfer test where schema material was applied to novel but related trials. Therefore, the angular gyrus appears to recombine consolidated schema components into one memory representation. DOI: http://dx.doi.org/10.7554/eLife.09668.001 PMID:26575291
Brasfield, Hope; Anderson, Scott; Stuart, Gregory L.
2014-01-01
Recent research has examined the relation between mindfulness and substance use, demonstrating that lower trait mindfulness is associated with increased substance use, and that mindfulness-based interventions help to reduce substance use. Research has also demonstrated that early maladaptive schemas are prevalent among individuals seeking substance use treatment and that targeting early maladaptive schemas in treatment may improve outcomes. However, no known research has examined the relation between mindfulness and early maladaptive schemas despite theoretical and empirical reasons to suspect their association. Therefore, the current study examined the relation between trait mindfulness and early maladaptive schemas among adult men seeking residential substance abuse treatment (N = 82). Findings demonstrated strong negative associations between trait mindfulness and 15 of the 18 early maladaptive schemas. Moreover, men endorsing multiple early maladaptive schemas reported lower trait mindfulness than men with fewer early maladaptive schemas. The implications of these findings for future research and treatment are discussed. PMID:26085852
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, Michael J.
Schema-on-read is an agile approach to data storage and retrieval that defers investments in data organization until production queries need to be run, by working with data directly in native form. Schema-on-read functions have been implemented in a wide range of analytical systems, most notably Hadoop. SchemaOnRead is a CRAN package that uses R's flexible data representations to provide transparent and convenient support for the schema-on-read paradigm in R. The schema-on-read tools within the package include a single function call that recursively reads folders with text, comma separated value, raster image, R data, HDF5, NetCDF, spreadsheet, Weka, Epi Info, Pajek network, R network, HTML, SPSS, Systat, and Stata files. The provided tools can be used as-is or easily adapted to implement customized schema-on-read tool chains in R. This paper's contribution is that it introduces and describes SchemaOnRead, the first R package specifically focused on providing explicit schema-on-read support in R.
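As a language-neutral illustration of the paradigm (a Python analogue under our own assumptions, not the R package's API), a single recursive call can read whatever a folder contains, dispatching on file type at read time rather than requiring an upfront schema:

```python
# Schema-on-read analogue: recursively read a folder, deferring all
# interpretation to per-type readers chosen at read time.
import csv, json
from pathlib import Path

READERS = {
    ".csv":  lambda p: list(csv.DictReader(p.open())),
    ".json": lambda p: json.load(p.open()),
    ".txt":  lambda p: p.read_text(),
}

def schema_on_read(path):
    path = Path(path)
    if path.is_dir():
        return {child.name: schema_on_read(child) for child in path.iterdir()}
    reader = READERS.get(path.suffix.lower())
    return reader(path) if reader else path  # unknown types stay as paths

data = schema_on_read("raw_data/")  # hypothetical folder of mixed files
```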
ECG Rhythm Analysis with Expert and Learner-Generated Schemas in Novice Learners
ERIC Educational Resources Information Center
Blissett, Sarah; Cavalcanti, Rodrigo; Sibbald, Matthew
2015-01-01
Although instruction using expert-generated schemas is associated with higher diagnostic performance, implementation is resource intensive. Learner-generated schemas are an alternative, but may be limited by increases in cognitive load. We compared expert- and learner-generated schemas for learning ECG rhythm interpretation on diagnostic accuracy,…
Thai University Student Schemas and Anxiety Symptomatology
ERIC Educational Resources Information Center
Rhein, Douglas; Sukawatana, Parisa
2015-01-01
This study explores how early maladaptive schemas (EMSs) contribute to the development of anxiety symptomologies among college undergraduates (N = 110). The study was conducted by assessing the correlations between 18 schemas derived from Young's model of Early Maladaptive Schemas (EMSs) and anxiety symptoms using Zung Self-Rating Anxiety Scale…
Schema-Based Text Comprehension
ERIC Educational Resources Information Center
Ensar, Ferhat
2015-01-01
Schema is one of the most common terms used for classifying and constructing knowledge. Therefore, a schema is a pre-planned set of concepts. It usually contains social information and is used to represent chain of events, perceptions, situations, relationships and even objects. For example, Kant initially defines the idea of schema as some…
eMelanoBase: an online locus-specific variant database for familial melanoma.
Fung, David C Y; Holland, Elizabeth A; Becker, Therese M; Hayward, Nicholas K; Bressac-de Paillerets, Brigitte; Mann, Graham J
2003-01-01
A proportion of melanoma-prone individuals in both familial and non-familial contexts has been shown to carry inactivating mutations in either CDKN2A or, rarely, CDK4. CDKN2A is a complex locus that encodes two unrelated proteins from alternatively spliced transcripts that are read in different frames. The alpha transcript (exons 1alpha, 2, and 3) produces the p16INK4A cyclin-dependent kinase inhibitor, while the beta transcript (exons 1beta and 2) is translated as p14ARF, a stabilizing factor of p53 levels through binding to MDM2. Mutations in exon 2 can impair both polypeptides, and insertions and deletions in exons 1alpha, 1beta, and 2 can theoretically generate p16INK4A-p14ARF fusion proteins. No online database currently takes into account all the consequences of these genotypes, a situation compounded by some problematic previous annotations of CDKN2A-related sequences and descriptions of their mutations. As an initiative of the international Melanoma Genetics Consortium, we have therefore established a database of germline variants observed in all loci implicated in familial melanoma susceptibility. Such a comprehensive, publicly accessible database is an essential foundation for research on melanoma susceptibility and its clinical application. Our database serves two types of data as defined by HUGO. The core dataset includes the nucleotide variants on the genomic and transcript levels, amino acid variants, and citations. The ancillary dataset includes keyword descriptions of events at the transcription and translation levels and epidemiological data. The application that handles users' queries was designed in the model-view-controller architecture and was implemented in Java. The object-relational database schema was deduced using functional dependency analysis. We hereby present our first functional prototype of eMelanoBase. The service is accessible via the URL www.wmi.usyd.edu.au:8080/melanoma.html. Copyright 2002 Wiley-Liss, Inc.
Venezky, Dina Y.; Newhall, Christopher G.
2007-01-01
WOVOdat Overview During periods of volcanic unrest, the ability to forecast near future activity has been a primary concern for human populations living near volcanoes. Our ability to forecast future activity and mitigate hazards is based on knowledge of previous activity at the volcano exhibiting unrest and knowledge of previous activity at similar volcanoes. A small set of experts with past experience are often involved in forecasting. We need to both preserve the knowledge the experts use and continue to investigate volcanic data to make better forecasts. Advances in instrumentation, networking, and data storage technologies have greatly increased our ability to collect volcanic data and share observations with our colleagues. The wealth of data creates numerous opportunities for gaining a better understanding of magmatic conditions and processes, if the data can be easily accessed for comparison. To allow for comparison of volcanic unrest data, we are creating a central database called WOVOdat. WOVOdat will contain a subset of time-series and geo-referenced data from each WOVO observatory in common and easily accessible formats. WOVOdat is being created for volcano experts in charge of forecasting volcanic activity, scientists investigating volcanic processes, and the public. The types of queries each of these groups might ask range from, 'What volcanoes were active in November of 2002?' and 'What are the relationships between tectonic earthquakes and volcanic processes?' to complex analyses of volcanic unrest to determine what future activity might occur. A new structure for storing and accessing our data was needed to examine processes across a wide range of volcanologic conditions. WOVOdat provides this new structure using relationships to connect the data parameters such that searches can be created for analogs of unrest. The subset of data that will fill WOVOdat will continue to be collected by the observatories, who will remain the primary archives of raw and detailed data on individual episodes of unrest. MySQL, an Open Source database, was chosen as the WOVOdat database for its integration with common web languages. The question of where the data will be stored and how the disparate data sets will be integrated will not be discussed in detail here. The focus of this document is to explain the data types, formats, and table organization chosen for WOVOdat 1.0. It was written for database administrators, data loaders, query writers, and anyone who monitors volcanoes. We begin with an overview of several challenges faced and solutions used in creating the WOVOdat schema. Specifics are then given for the parameters and table organization. After each table organization section, basic create table statements are included for viewing the database field formats. In the next stage of the project, scripts will be needed for data conversion, entry, and cleansing. Views will also need to be created once the data have been loaded and the basic queries are better known. Many questions and opportunities remain. We look forward to the growth and continual improvement in efficiency of the system. We hope WOVOdat will improve our understanding of magmatic systems and help mitigate future volcanic hazards.
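The abstract notes that basic create-table statements illustrate the chosen field formats. As a hedged stand-in (illustrative only, not WOVOdat's actual tables), the sketch below shows how a relational layout makes a query like "What volcanoes were active in November of 2002?" a simple join.

```python
# Illustrative relational layout for time-series unrest data (not WOVOdat's
# actual schema): observations reference stations, stations reference volcanoes.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE volcano (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE station (id INTEGER PRIMARY KEY,
                      volcano_id INTEGER REFERENCES volcano(id),
                      latitude REAL, longitude REAL);
CREATE TABLE seismic_event (station_id INTEGER REFERENCES station(id),
                            event_time TEXT, magnitude REAL, event_type TEXT);
""")

active = con.execute("""
SELECT DISTINCT v.name FROM volcano v
JOIN station s ON s.volcano_id = v.id
JOIN seismic_event e ON e.station_id = s.id
WHERE e.event_time BETWEEN '2002-11-01' AND '2002-11-30'
""").fetchall()
print(active)
```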
3D Modelling and Visualization Based on the Unity Game Engine - Advantages and Challenges
NASA Astrophysics Data System (ADS)
Buyuksalih, I.; Bayburt, S.; Buyuksalih, G.; Baskaraca, A. P.; Karim, H.; Rahman, A. A.
2017-11-01
3D city modelling is increasingly popular and is becoming a valuable tool for managing big cities. Urban and energy planning, landscape, noise and sewage modelling, underground mapping, and navigation are among the applications/fields that depend on 3D modelling for effective operations. Several research areas and implementation projects have been carried out to provide the most reliable 3D data format for sharing and functionality, as well as a visualization and analysis platform. For instance, the BIMTAS company has recently completed a project to estimate potential solar energy on 3D buildings for the whole of Istanbul and is now focusing on 3D underground utility mapping for a pilot case study. The research and implementation standard in the 3D city model domain (3D data sharing and visualization schema) is based on the CityGML schema version 2.0. However, there are some limitations and issues in the implementation phase for large datasets. Most of the limitations were due to the visualization, database integration, and analysis platform (the Unity3D game engine), as highlighted in this paper.
ERIC Educational Resources Information Center
Mahalik, James R.; Morrison, Jay A.
2006-01-01
Cognitive therapists may be able to help fathers increase their involvement with their children by identifying and changing restrictive masculine schemas that interfere with men's parenting roles. In this paper, we (a) discuss the development of restrictive masculine schemas, (b) explain how these schemas may affect men's involvement in fathering…
SCHeMA web-based observation data information system
NASA Astrophysics Data System (ADS)
Novellino, Antonio; Benedetti, Giacomo; D'Angelo, Paolo; Confalonieri, Fabio; Massa, Francesco; Povero, Paolo; Tercier-Waeber, Marie-Louise
2016-04-01
It is well recognized that the need to share ocean data among non-specialized users is constantly increasing. Initiatives that are built upon international standards will contribute to simplifying data processing and dissemination, improve user accessibility, including through web browsers, facilitate the sharing of information across the integrated network of ocean observing systems, and ultimately provide a better understanding of ocean functioning. The SCHeMA (Integrated in Situ Chemical MApping probe) Project is developing an open and modular sensing solution for autonomous in situ high-resolution mapping of a wide range of anthropogenic and natural chemical compounds coupled to master bio-physicochemical parameters (www.schema-ocean.eu). The SCHeMA web system is designed to ensure user-friendly data discovery, access, and download, as well as interoperability with other projects, through a dedicated interface that implements the Global Earth Observation System of Systems - Common Infrastructure (GCI) recommendations and the international Open Geospatial Consortium - Sensor Web Enablement (OGC-SWE) standards. This approach will ensure data accessibility in compliance with major European Directives and recommendations. Being modular, the system allows the plug-and-play of commercially available probes as well as new sensor probes under development within the project. Access to the network of monitoring probes is provided via a web-based system interface that, being implemented as a SOS (Sensor Observation Service), provides standard interoperability and access to sensor observations through the O&M standard, as well as sensor descriptions encoded in the Sensor Model Language (SensorML). The use of common vocabularies in all metadatabases and data formats, so that data are described in an already harmonized and common standard, is a prerequisite for consistency and interoperability. Therefore, the SCHeMA SOS has adopted the SeaVox common vocabularies populated by the SeaDataNet network of National Oceanographic Data Centres. The SCHeMA presentation layer, a fundamental part of the software architecture, offers the user bidirectional interaction with the integrated system, allowing them to manage and configure the sensor probes, view the stored observations and metadata, and handle alarms. The overall structure of the web portal developed within the SCHeMA initiative (sensor configuration; development of a Core Profile interface for data access via the OGC standard; external services such as web services, WMS, and WFS; and data download and query management) will be presented and illustrated with examples of ongoing tests in coastal and open sea.
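A minimal sketch of the kind of OGC SOS 2.0 GetObservation request (key-value-pair binding) a client might issue against an interface like SCHeMA's; the endpoint URL, offering, and observed-property identifiers are placeholders, not the real SCHeMA service.

```python
from urllib.parse import urlencode
# from urllib.request import urlopen   # uncomment to actually issue the request

params = {
    "service": "SOS",
    "version": "2.0.0",
    "request": "GetObservation",
    "offering": "schema:offering:probe-01",      # placeholder identifier
    "observedProperty": "dissolved_oxygen",      # placeholder identifier
    "responseFormat": "http://www.opengis.net/om/2.0",
}
url = "https://sos.example.org/service?" + urlencode(params)  # placeholder host
print(url)
# response = urlopen(url).read()  # would return an O&M-encoded observation document
```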
Plant, Katherine L; Stanton, Neville A
2013-01-01
Schema Theory is intuitively appealing although it has not always received positive press; critics of the approach argue that the concept is too ambiguous and vague and there are inherent difficulties associated with measuring schemata. As such, the term schema can be met with scepticism and wariness. The purpose of this paper is to address the criticisms that have been levelled at Schema Theory by demonstrating how Schema Theory has been utilised in Ergonomics research, particularly in the key areas of situation awareness, naturalistic decision making and error. The future of Schema Theory is also discussed in light of its potential roles as a unifying theory in Ergonomics and in contributing to our understanding of distributed cognition. We conclude that Schema Theory has made a positive contribution to Ergonomics and with continued refinement of methods to infer and represent schemata it is likely that this trend will continue. This paper reviews the contribution that Schema Theory has made to Ergonomics research. The criticisms of the theory are addressed using examples from the areas of situation awareness, decision making and error.
Chapman, Wendy W.; Dowling, John N.
2006-01-01
Evaluating automated indexing applications requires comparing automatically indexed terms against manual reference standard annotations. However, there are no standard guidelines for determining which words from a textual document to include in manual annotations, and the vague task can result in substantial variation among manual indexers. We applied grounded theory to emergency department reports to create an annotation schema representing syntactic and semantic variables that could be annotated when indexing clinical conditions. We describe the annotation schema, which includes variables representing medical concepts (e.g., symptom, demographics), linguistic form (e.g., noun, adjective), and modifier types (e.g., anatomic location, severity). We measured the schema’s quality and found: (1) the schema was comprehensive enough to be applied to 20 unseen reports without changes to the schema; (2) agreement between author annotators applying the schema was high, with an F measure of 93%; and (3) an error analysis showed that the authors made complementary errors when applying the schema, demonstrating that the schema incorporates both linguistic and medical expertise. PMID:16230050
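As a small worked example of the agreement metric reported above, the sketch below computes an F measure between two annotators' term sets, treating one as the reference; the annotation sets are invented, and the authors' exact matching criteria may differ.

```python
def f_measure(reference, candidate):
    # F1: harmonic mean of precision and recall over matched annotations
    tp = len(reference & candidate)
    precision = tp / len(candidate)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)

ref = {("cough", "symptom"), ("chest", "anatomic location"), ("severe", "severity")}
cand = {("cough", "symptom"), ("chest", "anatomic location"), ("fever", "symptom")}
print(round(f_measure(ref, cand), 2))  # 0.67: 2 matches, 1 miss, 1 spurious
```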
Hablo Inglés y Español: Cultural Self-Schemas as a Function of Language.
Rodríguez-Arauz, Gloriana; Ramírez-Esparza, Nairán; Pérez-Brena, Norma; Boyd, Ryan L
2017-01-01
Research has demonstrated that bilingual individuals experience a "double personality," which allows them to shift their self-schemas when they are primed with different language modes. In this study, we examine whether self-schemas change in Mexican-American (N = 193) bilinguals living in the U.S. when they provide open-ended personality self-descriptions in both English and Spanish. We used the Meaning Extraction Helper (MEH) software to extract the most salient self-schemas that influence individuals' self-defining process. Following a qualitative-inductive approach, words were extracted from the open-ended essays and organized into semantic clusters, which were analyzed qualitatively and named. The results show that as expected, language primed bilinguals to think about different self-schemas. In Spanish, their Mexican self-schemas were more salient; whereas, in English their U.S. American self-schemas were more salient. Similarities of self-schemas across languages were assessed using a quantitative approach. Language differences and similarities in theme definition and implications for self-identity of bilinguals are discussed.
Baby Schema in Infant Faces Induces Cuteness Perception and Motivation for Caretaking in Adults.
Glocker, Melanie L; Langleben, Daniel D; Ruparel, Kosha; Loughead, James W; Gur, Ruben C; Sachser, Norbert
2009-03-01
Ethologist Konrad Lorenz proposed that baby schema ('Kindchenschema') is a set of infantile physical features, such as the large head, round face, and big eyes, that is perceived as cute and motivates caretaking behavior in other individuals, with the evolutionary function of enhancing offspring survival. Previous work on this fundamental concept was restricted to schematic baby representations or correlative approaches. Here, we experimentally tested the effects of baby schema on the perception of cuteness and the motivation for caretaking using photographs of infant faces. Employing quantitative techniques, we parametrically manipulated the baby schema content to produce infant faces with high (e.g. round face and high forehead) and low (e.g. narrow face and low forehead) baby schema features that retained all the characteristics of a photographic portrait. Undergraduate students (n = 122) rated these infants' cuteness and their motivation to take care of them. The high baby schema infants were rated as more cute and elicited stronger motivation for caretaking than the unmanipulated and the low baby schema infants. This is the first experimental proof of the baby schema effects in actual infant faces. Our findings indicate that the baby schema response is a critical function of human social cognition that may be the basis of caregiving and have implications for infant-caretaker interactions.
Information Management Workflow and Tools Enabling Multiscale Modeling Within ICME Paradigm
NASA Technical Reports Server (NTRS)
Arnold, Steven M.; Bednarcyk, Brett A.; Austin, Nic; Terentjev, Igor; Cebon, Dave; Marsden, Will
2016-01-01
With the increased emphasis on reducing the cost and time to market of new materials, the need for analytical tools that enable the virtual design and optimization of materials throughout their processing - internal structure - property - performance envelope, along with the capturing and storing of the associated material and model information across its lifecycle, has become critical. This need is also fueled by the demands for higher efficiency in material testing; consistency, quality and traceability of data; product design; engineering analysis; as well as control of access to proprietary or sensitive information. Fortunately, material information management systems and physics-based multiscale modeling methods have kept pace with the growing user demands. Herein, recent efforts to establish a workflow for, and to demonstrate, a unique set of web application tools linking NASA GRC's Integrated Computational Materials Engineering (ICME) Granta MI database schema with NASA GRC's Integrated multiscale Micromechanics Analysis Code (ImMAC) software toolset are presented. The goal is to enable seamless coupling between both test data and simulation data, which are captured and tracked automatically within Granta MI®, with full model pedigree information. These tools, and this type of linkage, are foundational to realizing the full potential of ICME, in which materials processing, microstructure, properties, and performance are coupled to enable application-driven design and optimization of materials and structures.
Soygüt, Gonca; Cakir, Zehra
2009-01-01
The first aim of this study was to examine the relationships between perceived parenting styles and interpersonal schemas. The second was to investigate the mediating role of interpersonal schemas between perceived parenting styles and psychological symptoms. University students (N=94), ranging in age from 17 to 26 and attending different faculties and classes, completed the Interpersonal Schema Questionnaire, the Young Parenting Inventory, and the Symptom Check List-90. A series of regression analyses revealed that perceived parenting styles have predictive power over a number of interpersonal schemas. Further analyses pointed to a mediating role of the Hostility dimension of interpersonal schemas between psychological symptoms and the normative, belittling/criticizing, and pessimistic/worried parenting styles on the mother forms of the scales (Sobel z = 1.94-2.08, p < .01), and the normative, belittling/criticizing, emotionally depriving, pessimistic/worried, punitive, and restricted/emotionally inhibited parenting styles on the father forms (Sobel z = 2.20-2.86, p < .05-.01). Regression analyses pointed out the predictive power of perceived parenting styles on interpersonal schemas. Moreover, the mediating role of interpersonal schemas between perceived parenting styles and psychological symptoms was also observed. Excluding the pessimistic/anxious parenting style, perceived parenting styles of mothers and fathers differed in their relation to psychological symptoms. Overall, we believe that, although schemas and parental styles show some universality in their impact on psychological health, further research is necessary to address their implications and possible paternal differences in our collectivistic cultural context.
ERIC Educational Resources Information Center
Schwonke, Rolf
2015-01-01
Instructional design theories such as the "cognitive load theory" (CLT) or the "cognitive theory of multimedia learning" (CTML) explain learning difficulties in (computer-based) learning usually as a result of design deficiencies that hinder effective schema construction. However, learners often struggle even in well-designed…
Lomax, C. L.; Barnard, P. J.; Lam, D.
2009-01-01
Background There are few theoretical proposals that attempt to account for the variation in affective processing across different affective states of bipolar disorder (BD). The Interacting Cognitive Subsystems (ICS) framework has been recently extended to account for manic states. Within the framework, positive mood state is hypothesized to tap into an implicational level of processing, which is proposed to be more extreme in states of mania. Method Thirty individuals with BD and 30 individuals with no history of affective disorder were tested in euthymic mood state and then in induced positive mood state using the Question–Answer task to examine the mode of processing of schemas. The task was designed to test whether individuals would detect discrepancies within the prevailing schemas of the sentences. Results Although the present study did not support the hypothesis that the groups differ in their ability to detect discrepancies within schemas, we did find that the BD group was significantly more likely than the control group to answer questions that were consistent with the prevailing schemas, both before and after mood induction. Conclusions These results may reflect a general cognitive bias, that individuals with BD have a tendency to operate at a more abstract level of representation. This may leave an individual prone to affective disturbance, although further research is required to replicate this finding. PMID:18796173
CytometryML: a markup language for analytical cytology
NASA Astrophysics Data System (ADS)
Leif, Robert C.; Leif, Stephanie H.; Leif, Suzanne B.
2003-06-01
Cytometry Markup Language, CytometryML, is a proposed new analytical cytology data standard. CytometryML is a set of XML schemas for encoding both flow cytometry and digital microscopy text-based data types. CytometryML schemas reference both DICOM (Digital Imaging and Communications in Medicine) codes and FCS keywords. These schemas provide representations for the keywords in FCS 3.0 and will soon include DICOM microscopic image data. Flow Cytometry Standard (FCS) list-mode has been mapped to the DICOM Waveform Information Object. A preliminary version of a list-mode binary data type, which does not presently exist in DICOM, has been designed. This binary type is required to enhance the storage and transmission of flow cytometry and digital microscopy data. Index files based on Waveform indices will be used to rapidly locate the cells present in individual subsets. DICOM has the advantage of employing standard file types, TIF and JPEG, for digital microscopy. Using an XML-schema-based representation means that standard commercial software packages such as Excel and MathCad can be used to analyze, display, and store analytical cytometry data. Furthermore, providing one standard for both DICOM data and analytical cytology data eliminates the need to create and maintain special-purpose interfaces for analytical cytology data, thereby integrating the data into the larger DICOM and other clinical communities. A draft version of CytometryML is available at www.newportinstruments.com.
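A minimal sketch of the keyword-mapping idea: a few real FCS keywords ($TOT, $PAR, $CYT) rendered as typed XML elements. The element names here are illustrative assumptions, not the published CytometryML vocabulary.

```python
import xml.etree.ElementTree as ET

# FCS text-segment keywords as they appear in a list-mode file header
fcs_keywords = {"$TOT": "10000", "$PAR": "8", "$CYT": "FACSCalibur"}

root = ET.Element("ListModeDescription")
ET.SubElement(root, "TotalEvents").text = fcs_keywords["$TOT"]     # from $TOT
ET.SubElement(root, "ParameterCount").text = fcs_keywords["$PAR"]  # from $PAR
ET.SubElement(root, "Instrument").text = fcs_keywords["$CYT"]      # from $CYT
print(ET.tostring(root, encoding="unicode"))
```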
NASA Astrophysics Data System (ADS)
Ghiorso, M. S.
2013-12-01
Internally consistent thermodynamic databases are critical resources that facilitate the calculation of heterogeneous phase equilibria and thereby support geochemical, petrological, and geodynamical modeling. These 'databases' are actually derived data/model systems that depend on a diverse suite of physical property measurements, calorimetric data, and experimental phase equilibrium brackets. In addition, such databases are calibrated with the adoption of various models for extrapolation of heat capacities and volumetric equations of state to elevated temperature and pressure conditions. Finally, these databases require specification of thermochemical models for the mixing properties of solid, liquid, and fluid solutions, which are often rooted in physical theory and, in turn, depend on additional experimental observations. The process of 'calibrating' a thermochemical database involves considerable effort and an extensive computational infrastructure. Because of these complexities, the community tends to rely on a small number of thermochemical databases, generated by a few researchers; these databases often have limited longevity and are universally difficult to maintain. ThermoFit is a software framework and user interface whose aim is to provide a modeling environment that facilitates the creation, maintenance, and distribution of thermodynamic data/model collections. Underlying ThermoFit are data archives of fundamental physical property, calorimetric, crystallographic, and phase equilibrium constraints that provide the essential experimental information from which thermodynamic databases are traditionally calibrated. ThermoFit standardizes schemas for accessing these data archives and provides web services for data mining these collections. Beyond simple data management and interoperability, ThermoFit provides a collection of visualization and software modeling tools that streamline the model/database generation process. Most notably, ThermoFit facilitates the rapid visualization of predicted model outcomes and permits the user to modify these outcomes using tactile- or mouse-based GUI interaction, permitting real-time updates that reflect users' choices, preferences, and priorities involving derived model results. This ability permits some resolution of the problem of correlated model parameters in the common situation where thermodynamic models must be calibrated from inadequate data resources. It also allows modeling constraints to be imposed using natural data and observations (i.e. petrologic or geochemical intuition). Once formulated, ThermoFit facilitates deployment of data/model collections through the automated creation of web services. Users consume these services via web-, Excel-, or desktop-based clients. ThermoFit is currently under active development and not yet generally available; a limited-capability prototype system has been coded for Macintosh computers and utilized to construct thermochemical models for H2O-CO2 mixed-fluid saturation in silicate liquids. The longer-term goal is to release ThermoFit as a web portal application client with server-based cloud computations supporting the modeling environment.
Self-Schemas, Depression, and the Processing of Personal Information in Children.
ERIC Educational Resources Information Center
Hammen, Constance; Zupan, Brian A.
1984-01-01
Investigates the applicability of the self-as-schema model to children and examines the extent of negative self-schemas in relatively depressed children among 61 elementary school students; most of the students were between 8 and 12 years old. Results were consistent with the self-as-schema hypotheses, and mood congruent content-specific recall…
BioStar models of clinical and genomic data for biomedical data warehouse design
Wang, Liangjiang; Ramanathan, Murali
2008-01-01
Biomedical research is now generating large amounts of data, ranging from clinical test results to microarray gene expression profiles. The scale and complexity of these datasets give rise to substantial challenges in data management and analysis. It is highly desirable that data warehousing and online analytical processing technologies be applied to biomedical data integration and mining. The major difficulty probably lies in the task of capturing and modelling diverse biological objects and their complex relationships. This paper describes multidimensional data modelling for biomedical data warehouse design. Since conventional models such as the star schema appear to be insufficient for modelling clinical and genomic data, we develop a new model called the BioStar schema. The new model can capture the rich semantics of biomedical data and provides greater extensibility for the fast evolution of biological research methodologies. PMID:18048122
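For contrast with the proposed model, here is a minimal SQLite sketch of the conventional star schema the paper finds insufficient: one fact table ringed by dimension tables. The names are illustrative, not the BioStar schema itself; the paper's point is that clinical and genomic objects need richer relationships than this flat fact/dimension split provides.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_patient (patient_id INTEGER PRIMARY KEY, sex TEXT, age INTEGER);
CREATE TABLE dim_gene    (gene_id    INTEGER PRIMARY KEY, symbol TEXT);
CREATE TABLE fact_expression (
    patient_id INTEGER REFERENCES dim_patient(patient_id),
    gene_id    INTEGER REFERENCES dim_gene(gene_id),
    log_ratio  REAL                     -- the measured fact
);
INSERT INTO dim_patient VALUES (1, 'F', 54);
INSERT INTO dim_gene VALUES (7, 'TP53');
INSERT INTO fact_expression VALUES (1, 7, 2.4);
""")
row = db.execute("""
    SELECT p.sex, g.symbol, f.log_ratio
    FROM fact_expression f
    JOIN dim_patient p USING (patient_id)
    JOIN dim_gene g USING (gene_id)
""").fetchone()
print(row)  # ('F', 'TP53', 2.4)
```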
Oh, Sungyoung; Cha, Jieun; Ji, Myungkyu; Kang, Hyekyung; Kim, Seok; Heo, Eunyoung; Han, Jong Soo; Kang, Hyunggoo; Chae, Hoseok; Hwang, Hee
2015-01-01
Objectives To design a cloud computing-based Healthcare Software-as-a-Service (SaaS) Platform (HSP) for delivering healthcare information services with low cost, high clinical value, and high usability. Methods We analyzed the architecture requirements of an HSP, including the interface, business services, cloud SaaS, quality attributes, privacy and security, and multi-lingual capacity. For cloud-based SaaS services, we focused on Clinical Decision Service (CDS) content services, basic functional services, and mobile services. Microsoft's Azure cloud computing for Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) was used. Results The functional and software views of an HSP were designed in a layered architecture. External systems can be interfaced with the HSP using SOAP and REST/JSON. The multi-tenancy model of the HSP was designed as a shared database, with a separate schema for each tenant through a single application, although healthcare data can be physically located on a cloud or in a hospital, depending on regulations. The CDS services were categorized into rule-based services for medications, alert registration services, and knowledge services. Conclusions We expect that cloud-based HSPs will allow small and mid-sized hospitals, in addition to large-sized hospitals, to adopt information infrastructures and health information technology with low system operation and maintenance costs. PMID:25995962
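A minimal sketch of the shared-database, separate-schema-per-tenant pattern the results describe: one application routes each request to its tenant's schema. The naming convention is an illustrative assumption (the HSP itself runs on Microsoft Azure services), and the tenant identifier must come from trusted configuration, never raw user input, to keep the generated SQL safe.

```python
def table_for(tenant_id: str, table: str) -> str:
    # shared database, one schema per tenant: qualify every table name
    return f"{tenant_id}.{table}"

def alerts_query(tenant_id: str) -> str:
    return f"SELECT * FROM {table_for(tenant_id, 'medication_alerts')}"

print(alerts_query("hospital_a"))  # SELECT * FROM hospital_a.medication_alerts
```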
An effective XML based name mapping mechanism within StoRM
NASA Astrophysics Data System (ADS)
Corso, E.; Forti, A.; Ghiselli, A.; Magnoni, L.; Zappi, R.
2008-07-01
In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data by evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the requests. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different qualities of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide, and the Virtual Organizations that want to use the storage area. An appropriate directory tree is realized in each storage component reflecting the XML schema. In this scenario StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented, and the technical details of the StoRM implementation.
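A minimal sketch of the namespace idea: resolving a logical identifier to a physical path by matching requested attributes against an XML description of storage areas. The element and attribute names are assumptions for illustration, not StoRM's actual namespace schema.

```python
import xml.etree.ElementTree as ET

NAMESPACE_XML = """
<namespace>
  <storage-area name="fast" root="/gpfs/fast" vo="atlas" quality="replica"/>
  <storage-area name="tape" root="/gpfs/tape" vo="atlas" quality="custodial"/>
</namespace>
"""

def resolve(logical_name, vo, quality):
    # no database lookup: the XML document alone determines the mapping
    for sa in ET.fromstring(NAMESPACE_XML).iter("storage-area"):
        if sa.get("vo") == vo and sa.get("quality") == quality:
            return sa.get("root") + logical_name
    raise LookupError("no storage area matches the requested attributes")

print(resolve("/data/run42/file.root", "atlas", "custodial"))
# /gpfs/tape/data/run42/file.root
```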
Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong
2013-01-01
Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information that is not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation, a process that is usually time-consuming; fewer b-values are therefore preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas that sampled b-values distributed toward the two ends of the range proved optimal. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. From the ranked list of optimized schemas produced by the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency; it needs data from only three b-values, at 0, around 800, and around 2600 s/mm2. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm2) DKI schema in practical clinical applications. PMID:23735303
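The standard DKI signal model is S(b) = S0 exp(-bD + b²D²K/6), which becomes linear in the unknowns after taking logs; with three unknowns (S0, D, K), a three-b-value schema determines them exactly, which gives some intuition for why 3 b-values can suffice. The numpy sketch below illustrates this with invented values; it is not the authors' estimation pipeline.

```python
import numpy as np

def fit_dki(bvals, signal):
    # ln S(b) = ln S0 - b*D + (b^2/6) * (D^2 * K)  -- linear least squares
    A = np.column_stack([np.ones_like(bvals), -bvals, bvals**2 / 6.0])
    coef, *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
    ln_s0, D, D2K = coef
    return np.exp(ln_s0), D, D2K / D**2   # (S0, D, K)

# synthetic ground truth sampled at the recommended 3b schema (s/mm^2)
bvals = np.array([0.0, 1000.0, 2500.0])
S0, D, K = 1.0, 1.0e-3, 0.8               # D in mm^2/s, so b*D is dimensionless
signal = S0 * np.exp(-bvals * D + (bvals**2 * D**2 * K) / 6.0)
print(fit_dki(bvals, signal))              # recovers (S0, D, K): 3 equations, 3 unknowns
```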
Early Maladaptive Schemas and Aggression in Men Seeking Residential Substance Use Treatment
Shorey, Ryan C.; Elmquist, Joanna; Anderson, Scott; Stuart, Gregory L.
2015-01-01
Social-cognitive theories of aggression postulate that individuals who perpetrate aggression are likely to have high levels of maladaptive cognitive schemas that increase risk for aggression. Indeed, recent research has begun to examine whether early maladaptive schemas may increase the risk for aggression. However, no known research has examined this among individuals in substance use treatment, despite aggression and early maladaptive schemas being more prevalent among individuals with a substance use disorder than the general population. Toward this end, we examined the relationship between early maladaptive schemas and aggression in men in a residential substance use treatment facility (N = 106). Utilizing pre-existing patient records, results demonstrated unique associations between early maladaptive schema domains and aggression depending on the type of aggression and schema domain examined, even after controlling for substance use, antisocial personality, age, and education. The Impaired Limits domain was positively associated with verbal aggression, aggressive attitude, and overall aggression, whereas the Disconnection and Rejection domain was positively associated with physical aggression. These findings are consistent with social-cognitive models of aggression and advance our understanding of how early maladaptive schemas may influence aggression. The implications of these findings for future research are discussed. PMID:25897180
Calvete, Esther; Gámez-Guadix, Manuel; Fernández-Gonzalez, Liria; Orue, Izaskun; Borrajo, Erika
2018-07-01
This study examined whether exposure to family violence, both in the form of direct victimization and witnessing violence, predicted dating violence victimization in adolescents through maladaptive schemas. A sample of 933 adolescents (445 boys and 488 girls), aged between 13 and 18 (M = 15.10), participated in a three-year longitudinal study. They completed measures of exposure to family violence, maladaptive schemas of disconnection/rejection, and dating violence victimization. The findings indicate that witnessing family violence predicts the increase of dating violence victimization over time, through the mediation of maladaptive schemas in girls, but not in boys. Direct victimization in the family predicts dating violence victimization directly, without the mediation of schemas. In addition, maladaptive schemas contribute to the perpetuation of dating violence victimization over time. These findings provide new opportunities for preventive interventions, as maladaptive schemas can be modified. Copyright © 2018 Elsevier Ltd. All rights reserved.
Partitioning an object-oriented terminology schema.
Gu, H; Perl, Y; Halper, M; Geller, J; Kuo, F; Cimino, J J
2001-07-01
Controlled medical terminologies are increasingly becoming strategic components of various healthcare enterprises. However, the typical medical terminology can be difficult to exploit due to its extensive size and high density. The schema of a medical terminology offered by an object-oriented representation is a valuable tool in providing an abstract view of the terminology, enhancing comprehensibility and making it more usable. However, schemas themselves can be large and unwieldy. We present a methodology for partitioning a medical terminology schema into manageably sized fragments that promote increased comprehension. Our methodology has a refinement process for the subclass hierarchy of the terminology schema. The methodology is carried out by a medical domain expert in conjunction with a computer. The expert is guided by a set of three modeling rules, which guarantee that the resulting partitioned schema consists of a forest of trees. This makes it easier to understand and consequently use the medical terminology. The application of our methodology to the schema of the Medical Entities Dictionary (MED) is presented.
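A minimal sketch of the structural goal: reducing a subclass hierarchy in which classes may have several parents (a DAG) to a forest of trees by keeping a single parent per class. The first-parent rule here is only a placeholder for the three expert-applied modeling rules the paper describes.

```python
def forest_of_trees(parents):
    """parents: dict mapping class -> list of parent classes (a DAG)."""
    tree = {}
    for cls, ps in parents.items():
        tree[cls] = ps[0] if ps else None  # one parent each => tree edges only
    return tree

# hypothetical fragment of a terminology schema's subclass hierarchy
dag = {"Entity": [], "Event": ["Entity"], "Procedure": ["Event", "Entity"]}
print(forest_of_trees(dag))
# {'Entity': None, 'Event': 'Entity', 'Procedure': 'Event'}
```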
Relationship of negative self-schemas and attachment styles with appearance schemas.
Ledoux, Tracey; Winterowd, Carrie; Richardson, Tamara; Clark, Julie Dorton
2010-06-01
The purpose was to test, among women, the relationship between negative self-schemas and styles of attachment with men and women and two types of appearance investment (Self-evaluative and Motivational Salience). Predominantly Caucasian undergraduate women (N=194) completed a modified version of the Relationship Questionnaire, the Young Schema Questionnaire-Short Form, and the Appearance Schemas Inventory-Revised. Linear multiple regression analyses were conducted with Motivational Salience and Self-evaluative Salience of appearance serving as dependent variables and relevant demographic variables, negative self-schemas, and styles of attachment to men serving as independent variables. Styles of attachment to women were not entered into these regression models because Pearson correlations indicated they were not related to either dependent variable. Self-evaluative Salience of appearance was related to impaired autonomy and performance negative self-schema and the preoccupation style of attachment with men, while Motivational Salience of appearance was related only to the preoccupation style of attachment with men. 2010 Elsevier Ltd. All rights reserved.
Neural mechanisms of mental schema: a triplet of delta, low beta/spindle and ripple oscillations.
Ohki, Takefumi; Takei, Yuichi
2018-02-06
Schemas are higher-level knowledge structures that integrate and organise lower-level representations. As internal templates, schemas are formed according to how events are perceived, interpreted and remembered. Although these higher-level units are assumed to play a fundamental role in our daily life from an early age, the neuronal basis and mechanisms of schema formation and use remain largely unknown. It is important to elucidate how the brain constructs and maintains these higher-level units. In order to examine the possible neural underpinnings of schema, we recapitulate previous work and discuss their findings related to schemas as the brain template. We specifically focused on low beta/spindle oscillations, which are assumed to be the key components of schemas, and propose that the brain template is implemented with a triplet of neural oscillations, that is delta, low beta/spindle and ripple oscillations. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Role of Father–Child Relational Quality in Early Maladaptive Schemas
Monirpoor, Nader; Gholamyzarch, Morteza; Tamaddonfard, Mohsen; Khoosfi, Helen; Ganjali, Ali Reza
2012-01-01
Background Primary maladaptive schemas, which are the basis of high-risk behavior and psychological disorders, result from childhood experiences with significant objects, such as fathers, in different developmental phases. Objectives This endeavor examined the role of the father in predicting these schemas. Patients and Methods A total of 345 Islamic Azad University students (Qom Branch), chosen through convenience sampling, completed the Young Schema Questionnaire, the Parental Bonding Instrument, and the Parent–Child Relationship Survey. Results A multivariate regression analysis indicated that a number of aspects of the father–child relationship, including care, emotional interaction, positive affection, an effective relationship, and excessive support, predict particular schemas. Conclusions These findings therefore suggest that psychotherapists should examine the different aspects of the father–child relationship when restructuring schemas. PMID:24971232
Shorey, Ryan C.; Anderson, Scott; Stuart, Gregory L.
2014-01-01
Individuals with substance use disorders are more likely to have antisocial and borderline personality disorder than non-substance abusers. Recently, research has examined the relations between early maladaptive schemas and personality disorders, as early maladaptive schemas are believed to underlie personality disorders. However, there is a dearth of research on the relations between early maladaptive schemas and personality disorders among individuals seeking treatment for substance abuse. The current study examined the relations among early maladaptive schemas and antisocial and borderline personality within in a sample of men seeking substance abuse treatment (n = 98). Results demonstrated that early maladaptive schema domains were associated with antisocial and borderline personality symptoms. Implications of these findings for substance use treatment and research are discussed. PMID:23650153
Eddleston, Kimberly A; Veiga, John F; Powell, Gary N
2006-03-01
Using survey data from 400 managers, the authors examined whether gender self-schema would explain sex differences in preferences for status-based and socioemotional career satisfiers. Female gender self-schema, represented by femininity and family role salience, completely mediated the relationship between managers' sex and preferences for socioemotional career satisfiers. However, male gender self-schema, represented by masculinity and career role salience, did not mediate the relationship between managers' sex and preferences for status-based career satisfiers. As expected, male managers regarded status-based career satisfiers as more important and socioemotional career satisfiers as less important than female managers did. The proposed conceptualization of male and female gender self-schemas, which was supported by the data, enhances understanding of adult self-schema and work-related attitudes and behavior.
ALTERNATIVE ENERGY SOURCES FOR WASTEWATER TREATMENT PLANTS
The technology assessment provides an introduction to the use of several alternative energy sources at wastewater treatment plants. The report contains fact sheets (technical descriptions) and data sheets (cost and design information) for the technologies. Cost figures and schema...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bly, Aaron; Oxstrand, Johanna; Le Blanc, Katya L
2015-02-01
Most activities that involve human interaction with systems in a nuclear power plant are guided by procedures. Traditionally, the use of procedures has been a paper-based process that supports safe operation of the nuclear power industry. However, the nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. Advances in digital technology make computer-based procedures (CBPs) a valid option that provides further enhancement of safety by improving human performance related to procedure use. The transition from paper-based procedures (PBPs) to CBPs creates a need for a computer-based procedure system (CBPS). A CBPS needs to have the ability to perform logical operations in order to adjust to the inputs received from either users or real-time data from plant status databases. Without the ability to perform logical operations, the procedure is just an electronic copy of the paper-based procedure. In order to provide the CBPS with the information it needs to display the procedure steps to the user, special care is needed in the format used to deliver all data and instructions to create the steps. The procedure should be broken down into basic elements and formatted in a standard method for the CBPS. One way to build the underlying data architecture is to use an Extensible Markup Language (XML) schema, which utilizes basic elements to build each step in the smart procedure. The attributes of each step will determine the type of functionality that the system will generate for that step. The CBPS will provide the context for the step to deliver referential information, request a decision, or accept input from the user. The XML schema needs to provide all data necessary for the system to accurately perform each step without the need for the procedure writer to reprogram the CBPS. The research team at the Idaho National Laboratory has developed a prototype CBPS for field workers as well as the underlying data structure for such a CBPS. The objective of the research effort is to develop guidance on how to design both the user interface and the underlying schema. This paper describes the results and insights gained from the research activities conducted to date.
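A minimal sketch of the kind of step element such an XML data architecture might define, with attributes driving the system's logic for a decision step; the element and attribute names are illustrative assumptions, not the Idaho National Laboratory schema.

```python
import xml.etree.ElementTree as ET

# a decision step: the 'type' attribute tells the CBPS to request a decision
step = ET.Element("step", id="4.2", type="decision")
ET.SubElement(step, "prompt").text = "Is pump discharge pressure above 120 psig?"
ET.SubElement(step, "onYes").text = "goto:4.3"   # branch taken on a 'yes' answer
ET.SubElement(step, "onNo").text = "goto:5.1"    # branch taken on a 'no' answer
print(ET.tostring(step, encoding="unicode"))
```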
Spatial cyberinfrastructures, ontologies, and the humanities.
Sieber, Renee E; Wellen, Christopher C; Jin, Yuan
2011-04-05
We report on research into building a cyberinfrastructure for Chinese biographical and geographic data. Our cyberinfrastructure contains (i) the McGill-Harvard-Yenching Library Ming Qing Women's Writings database (MQWW), the only online database on historical Chinese women's writings, (ii) the China Biographical Database, the authority for Chinese historical people, and (iii) the China Historical Geographical Information System, one of the first historical geographic information systems. Key to this integration is that linked databases retain separate identities as bases of knowledge, while they possess sufficient semantic interoperability to allow for multidatabase concepts and to support cross-database queries on an ad hoc basis. Computational ontologies create underlying semantics for database access. This paper focuses on the spatial component in a humanities cyberinfrastructure, which includes issues of conflicting data, heterogeneous data models, disambiguation, and geographic scale. First, we describe the methodology for integrating the databases. Then we detail the system architecture, which includes a tier of ontologies and schema. We describe the user interface and applications that allow for cross-database queries. For instance, users should be able to analyze the data, examine hypotheses on spatial and temporal relationships, and generate historical maps with datasets from MQWW for research, teaching, and publication on Chinese women writers, their familial relations, publishing venues, and the literary and social communities. Last, we discuss the social side of cyberinfrastructure development, as people are considered to be as critical as the technical components for its success.
TMATS/ IHAL/ DDML Schema Validation
2017-02-01
The task was to create a method for performing IRIG eXtensible Markup Language (XML) schema validation, as opposed to XML instance document validation, for the TMATS, IHAL, and DDML (Data Display Markup Language) schemas (RCC 126-17, February 2017).
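For flavor, here is a minimal sketch of generic XML instance-document validation against an XSD using the third-party lxml library; the tiny schema and instance are invented and do not represent the IRIG/DDML schemas or the schema-validation method the report develops.

```python
from lxml import etree

XSD = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="display">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="label" type="xs:string"/>
        <xs:element name="refreshRateHz" type="xs:decimal"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

DOC = "<display><label>HUD airspeed</label><refreshRateHz>30</refreshRateHz></display>"

schema = etree.XMLSchema(etree.fromstring(XSD))
doc = etree.fromstring(DOC)
print(schema.validate(doc))  # True; schema.error_log holds details on failure
```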
Sleep Spindle Density Predicts the Effect of Prior Knowledge on Memory Consolidation
Lambon Ralph, Matthew A.; Kempkes, Marleen; Cousins, James N.; Lewis, Penelope A.
2016-01-01
Information that relates to a prior knowledge schema is remembered better and consolidates more rapidly than information that does not. Another factor that influences memory consolidation is sleep and growing evidence suggests that sleep-related processing is important for integration with existing knowledge. Here, we perform an examination of how sleep-related mechanisms interact with schema-dependent memory advantage. Participants first established a schema over 2 weeks. Next, they encoded new facts, which were either related to the schema or completely unrelated. After a 24 h retention interval, including a night of sleep, which we monitored with polysomnography, participants encoded a second set of facts. Finally, memory for all facts was tested in a functional magnetic resonance imaging scanner. Behaviorally, sleep spindle density predicted an increase of the schema benefit to memory across the retention interval. Higher spindle densities were associated with reduced decay of schema-related memories. Functionally, spindle density predicted increased disengagement of the hippocampus across 24 h for schema-related memories only. Together, these results suggest that sleep spindle activity is associated with the effect of prior knowledge on memory consolidation. SIGNIFICANCE STATEMENT Episodic memories are gradually assimilated into long-term memory and this process is strongly influenced by sleep. The consolidation of new information is also influenced by its relationship to existing knowledge structures, or schemas, but the role of sleep in such schema-related consolidation is unknown. We show that sleep spindle density predicts the extent to which schemas influence the consolidation of related facts. This is the first evidence that sleep is associated with the interaction between prior knowledge and long-term memory formation. PMID:27030764
The Protein Information Resource: an integrated public resource of functional annotation of proteins
Wu, Cathy H.; Huang, Hongzhan; Arminski, Leslie; Castro-Alvear, Jorge; Chen, Yongxing; Hu, Zhang-Zhi; Ledley, Robert S.; Lewis, Kali C.; Mewes, Hans-Werner; Orcutt, Bruce C.; Suzek, Baris E.; Tsugita, Akira; Vinayaka, C. R.; Yeh, Lai-Su L.; Zhang, Jian; Barker, Winona C.
2002-01-01
The Protein Information Resource (PIR) serves as an integrated public resource of functional annotation of protein data to support genomic/proteomic research and scientific discovery. The PIR, in collaboration with the Munich Information Center for Protein Sequences (MIPS) and the Japan International Protein Information Database (JIPID), produces the PIR-International Protein Sequence Database (PSD), the major annotated protein sequence database in the public domain, containing about 250 000 proteins. To improve protein annotation and the coverage of experimentally validated data, a bibliography submission system is developed for scientists to submit, categorize and retrieve literature information. Comprehensive protein information is available from iProClass, which includes family classification at the superfamily, domain and motif levels, structural and functional features of proteins, as well as cross-references to over 40 biological databases. To provide timely and comprehensive protein data with source attribution, we have introduced a non-redundant reference protein database, PIR-NREF. The database consists of about 800 000 proteins collected from PIR-PSD, SWISS-PROT, TrEMBL, GenPept, RefSeq and PDB, with composite protein names and literature data. To promote database interoperability, we provide XML data distribution and open database schema, and adopt common ontologies. The PIR web site (http://pir.georgetown.edu/) features data mining and sequence analysis tools for information retrieval and functional identification of proteins based on both sequence and annotation information. The PIR databases and other files are also available by FTP (ftp://nbrfa.georgetown.edu/pir_databases). PMID:11752247
InterMine Webservices for Phytozome (Rev2)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlson, Joseph; Goodstein, David; Rokhsar, Dan
2014-07-10
A data warehousing framework provides a useful infrastructure for providers and users of genomic data. For providers, the infrastructure gives a consistent mechanism for extracting raw data, while for users, the web services supported by the software allow them to make complex, and often unique, queries of the data. Previously, phytozome.net used BioMart to provide the infrastructure. As the complexity, scale, and diversity of the dataset have grown, we decided to implement an InterMine web service on our servers. This change was largely motivated by the ability to have a more complex table structure and a richer web reporting mechanism than BioMart. For InterMine to achieve its more complex database schema, it requires an XML description of the data and an appropriate loader. Unlimited one-to-many and many-to-many relationships between the tables can be enabled in the schema. We have implemented support for: 1.) Genomes and annotations for the data in Phytozome. This set is the 48 organisms currently stored in a back-end CHADO datastore. The data loaders are modified versions of the CHADO data adapters from FlyMine. 2.) Interproscan results for all proteins in the Phytozome database. 3.) Clusters of proteins grouped hierarchically by similarity. 4.) Cufflinks results from tissue-specific RNA-Seq data of Phytozome organisms. 5.) Diversity data (GATK and SnpEFF results) from sets of individual organisms. The last two datatypes are new in this implementation of our web services. We anticipate that the scale of these data will increase considerably in the near future.
e-MIR2: a public online inventory of medical informatics resources.
de la Calle, Guillermo; García-Remesal, Miguel; Nkumu-Mbomio, Nelida; Kulikowski, Casimir; Maojo, Victor
2012-08-02
Over the past years, the number of available informatics resources in medicine has grown exponentially. While specific inventories of such resources have already begun to be developed for Bioinformatics (BI), comparable inventories are as yet not available for the Medical Informatics (MI) field, so that locating and accessing them currently remains a difficult and time-consuming task. We have created a repository of MI resources from the scientific literature, providing free access to its contents through a web-based service. We define informatics resources as all those elements that constitute, serve to define or are used by informatics systems, ranging from architectures or development methodologies to terminologies, vocabularies, databases or tools. Relevant information describing the resources is automatically extracted from manuscripts published in top-ranked MI journals. We used a pattern matching approach to detect the resources' names and their main features. Detected resources are classified according to three different criteria: functionality, resource type and domain. To facilitate these tasks, we have built three different classification schemas by following a novel approach based on folksonomies and social tagging. We adopted the terminology most frequently used by MI researchers in their publications to create the concepts and hierarchical relationships belonging to the classification schemas. The classification algorithm identifies the categories associated with resources and annotates them accordingly. The database is then populated with this data after manual curation and validation. We have created an online repository of MI resources to assist researchers in locating and accessing the most suitable resources to perform specific tasks. The database contains 609 resources at the time of writing and is available at http://www.gib.fi.upm.es/eMIR2. We are continuing to expand the number of available resources by taking into account further publications as well as suggestions from users and resource developers.
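A minimal sketch of the pattern-matching idea: a regular expression that flags capitalized tokens as candidate resource names when a resource-type keyword follows in the same sentence. The pattern and example text are illustrative assumptions, not the e-MIR2 pattern set.

```python
import re

# candidate: token with an internal capital or digit, followed (before the
# sentence ends) by a resource-type keyword
PATTERN = re.compile(
    r"\b([A-Z][A-Za-z0-9\-]*[A-Z0-9][A-Za-z0-9\-]*)\b"
    r"(?=[^.]*\b(system|tool|database|ontology)\b)"
)

text = "We present OncoDB, a database of curated tumour annotations."
print([m.group(1) for m in PATTERN.finditer(text)])  # ['OncoDB']
```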
A Space Surveillance Ontology: Captured in an XML Schema
2000-10-01
…characterization in a way most appropriate to a sub-domain. The commercial market is embracing XML, and the military can take advantage of this… The space surveillance ontology effort relates to two key efforts: the Defense Information Infrastructure Common Operating Environment (DII COE) XML… We strongly believe XML schemas will supplant DTDs; among the advantages that XML schemas provide over DTDs is strong data typing.
Atmaca, Sinem; Gençöz, Tülin
2016-02-01
The purpose of the current study is to explore the revictimization process between child abuse and neglect (CAN) and intimate partner violence (IPV) from a schema theory perspective. To this end, 222 married women recruited in four central cities of Turkey participated in the study. Results indicated that early negative CAN experiences increased the risk of being exposed to later IPV. Specifically, emotional abuse and sexual abuse in childhood predicted the four subtypes of IPV, which are physical, psychological, and sexual violence, and injury, while physical abuse was associated only with physical violence. To explore the mediating role of early maladaptive schemas (EMSs) in this association, five schema domains were first tested via a parallel multiple mediation model. Results indicated that only the Disconnection/Rejection (D/R) schema domain mediated the association between CAN and IPV. Second, to determine the particular mediating role of each schema, eighteen EMSs were tested as mediators, and results showed that the Emotional Deprivation schema and the Vulnerability to Harm or Illness schema mediated the association between CAN and IPV. These findings provide empirical support for the crucial role of EMSs in the revictimization process. Clinical implications are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.
Correlates of African American Men's Sexual Schemas
Morales, Dawn A.; Coyne-Beasley, Tamera; St. Lawrence, Janet
2013-01-01
Sexual schemas are cognitive representations of oneself as a sexual being and aid in the processing of sexually relevant information. We examined the relationship between sociosexuality (attitudes about casual sex), masculine ideology (attitudes toward traditional men and male roles), and cultural centrality (strength of identity with racial group) as significant psychosocial and sociocultural predictors in shaping young, heterosexual African American men's sexual schemas. A community sample (n=133) of men in a southeastern city of the United States completed quantitative self-report measures examining their attitudes and behavior related to casual sex, beliefs about masculinity, racial and cultural identity, and self-views of various sexual aspects of themselves. Results indicated that masculine ideology and cultural centrality were both positively related to men's sexual schemas. Cultural centrality explained 12 % of the variance in level of sexual schema, and had the strongest correlation of the predictor variables with sexual schema (r=.36). The need for more attention to the bidirectional relationships between masculinity, racial/cultural identity, and sexual schemas in prevention, intervention, and public health efforts for African American men is discussed. PMID:24031118
The acquisition process of musical tonal schema: implications from connectionist modeling.
Matsunaga, Rie; Hartono, Pitoyo; Abe, Jun-Ichi
2015-01-01
Using connectionist modeling, we address fundamental questions concerning the acquisition process of musical tonal schema of listeners. Compared to models of previous studies, our connectionist model (Learning Network for Tonal Schema, LeNTS) was better equipped to fulfill three basic requirements. Specifically, LeNTS was equipped with a learning mechanism, bound by culture-general properties, and trained by sufficient melody materials. When exposed to Western music, LeNTS acquired musical 'scale' sensitivity early and 'harmony' sensitivity later. The order of acquisition of scale and harmony sensitivities shown by LeNTS was consistent with the culture-specific acquisition order shown by musically westernized children. The implications of these results for the acquisition process of a tonal schema of listeners are as follows: (a) the acquisition process may entail small and incremental changes, rather than large and stage-like changes, in corresponding neural circuits; (b) the speed of schema acquisition may mainly depend on musical experiences rather than maturation; and (c) the learning principles of schema acquisition may be culturally invariant while the acquired tonal schemas are varied with exposed culture-specific music.
Morris, Chris; Pajon, Anne; Griffiths, Susanne L.; Daniel, Ed; Savitsky, Marc; Lin, Bill; Diprose, Jonathan M.; Wilter da Silva, Alan; Pilicheva, Katya; Troshin, Peter; van Niekerk, Johannes; Isaacs, Neil; Naismith, James; Nave, Colin; Blake, Richard; Wilson, Keith S.; Stuart, David I.; Henrick, Kim; Esnouf, Robert M.
2011-01-01
The techniques used in protein production and structural biology have been developing rapidly, but techniques for recording the laboratory information produced have not kept pace. One approach is the development of laboratory information-management systems (LIMS), which typically use a relational database schema to model and store results from a laboratory workflow. The underlying philosophy and implementation of the Protein Information Management System (PiMS), a LIMS development specifically targeted at the flexible and unpredictable workflows of protein-production research laboratories of all scales, is described. PiMS is a web-based Java application that uses either Postgres or Oracle as the underlying relational database-management system. PiMS is available under a free licence to all academic laboratories either for local installation or for use as a managed service. PMID:21460443
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L; Hanrahan, Patrick
2014-04-29
In response to a user request, a computer generates a graphical user interface on a computer display. A schema information region of the graphical user interface includes multiple operand names, each operand name associated with one or more fields of a multi-dimensional database. A data visualization region of the graphical user interface includes multiple shelves. Upon detecting a user selection of the operand names and a user request to associate each user-selected operand name with a respective shelf in the data visualization region, the computer generates a visual table in the data visualization region in accordance with the associations between the operand names and the corresponding shelves. The visual table includes a plurality of panes, each pane having at least one axis defined based on data for the fields associated with a respective operand name.
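As an illustrative sketch of the pane-generation step (the names and structures below are invented for illustration, not taken from the patent), each pairing of an operand on the row shelf with an operand on the column shelf yields one pane of the visual table:

    # Minimal sketch: derive the panes of a visual table from the
    # operands placed on two shelves. All names are hypothetical.
    from itertools import product

    row_shelf = ["region"]
    column_shelf = ["year", "product"]

    panes = [{"row_operand": r, "column_operand": c}
             for r, c in product(row_shelf, column_shelf)]
    # Each pane would later receive at least one axis defined by the
    # fields associated with its operands.
    print(panes)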
Development of geotechnical data schema in transportation : final report.
DOT National Transportation Integrated Search
2012-12-01
The objective of "Development of Geotechnical Data Schema in Transportation" is to develop an international standard interchange format for geotechnical data. This standard will include a data dictionary and XML schema which are GML compliant. Th...
Combining Model-driven and Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Denney, Ewen; Whittle, John
2004-01-01
We describe ongoing work which aims to extend the schema-based program synthesis paradigm with explicit models. In this context, schemas can be considered as model-to-model transformations. The combination of schemas with explicit models offers a number of advantages, namely, that building synthesis systems becomes much easier since the models can be used in verification and in adaptation of the synthesis systems. We illustrate our approach using an example from signal processing.
XML Schema Guide for Primary CDR Submissions
This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.7 XML schema. Please note that the order of the elements must match the schema.
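The guide's note that element order must match the schema reflects how XSD sequence validation works. As a minimal illustration (not part of the CDR documentation; the file names are hypothetical), a submission can be checked against an XSD in Python with lxml, and order violations surface as validation errors:

    # Minimal sketch: validate a CDR-style XML submission against an XSD.
    # "CDR_v1.7.xsd" and "cdr_submission.xml" are hypothetical file names.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("CDR_v1.7.xsd"))
    submission = etree.parse("cdr_submission.xml")

    if not schema.validate(submission):
        # XSD sequences require child elements in the declared order,
        # so misordered elements are reported here.
        for error in schema.error_log:
            print(error.line, error.message)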
Three-dimensional motor schema based navigation
NASA Technical Reports Server (NTRS)
Arkin, Ronald C.
1989-01-01
Reactive schema-based navigation is possible in space domains by extending the methods developed for ground-based navigation found within the Autonomous Robot Architecture (AuRA). Reformulation of two dimensional motor schemas for three dimensional applications is a straightforward process. The manifold advantages of schema-based control persist, including modular development, amenability to distributed processing, and responsiveness to environmental sensing. Simulation results show the feasibility of this methodology for space docking operations in a cluttered work area.
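The vector-summation character of schema-based control extends naturally from two to three dimensions. A minimal sketch in Python (the gains and sphere-of-influence radius are illustrative values, not taken from AuRA) shows two 3-D motor schemas voting with velocity vectors that are simply summed:

    import numpy as np

    def move_to_goal(pos, goal, gain=1.0):
        # Attractive schema: unit vector toward the goal, scaled by a gain.
        v = goal - pos
        n = np.linalg.norm(v)
        return gain * v / n if n > 0 else np.zeros(3)

    def avoid_obstacle(pos, obstacle, sphere=2.0, gain=1.5):
        # Repulsive schema: pushes away when inside the obstacle's sphere
        # of influence, growing in magnitude near the obstacle surface.
        v = pos - obstacle
        d = np.linalg.norm(v)
        if d == 0 or d >= sphere:
            return np.zeros(3)
        return gain * (sphere - d) / sphere * v / d

    # Each active schema contributes a velocity vector; the contributions
    # are simply summed, which is what makes the approach modular.
    pos = np.array([0.0, 0.0, 0.0])
    velocity = move_to_goal(pos, np.array([10.0, 0.0, 5.0])) \
             + avoid_obstacle(pos, np.array([1.0, 0.5, 0.2]))
    print(velocity)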
Core schemas and suicidality in a chronically traumatized population.
Dutra, Lissa; Callahan, Kelley; Forman, Evan; Mendelsohn, Michaela; Herman, Judith
2008-01-01
The Young Schema Questionnaire (YSQ) has been demonstrated to tap into core beliefs, or maladaptive schemas, of clinical populations. This study used the YSQ to investigate maladaptive schemas of 137 chronically traumatized patients seeking outpatient psychiatric treatment and to assess whether specific schemas might be associated with suicide risk in this population. Participants completed a modified version of the YSQ-S (short form), the Posttraumatic Diagnostic Scale, the Dissociative Experiences Scale, and the Self-Harm and Risk Behaviors Questionnaire-Revised at treatment intake. Significant correlations were found between most YSQ scales and the Posttraumatic Diagnostic Scale, and between all YSQ scales and the Dissociative Experiences Scale. Suicide risk variables were most highly correlated with the Social Isolation/Alienation, Defectiveness/Shame, and Failure YSQ scales, suggesting that these schemas may mark individuals at particularly high risk for suicidal ideation and suicide attempts. These results offer important implications for the assessment and treatment of high-risk traumatized patients.
Baby schema modulates the brain reward system in nulliparous women.
Glocker, Melanie L; Langleben, Daniel D; Ruparel, Kosha; Loughead, James W; Valdez, Jeffrey N; Griffin, Mark D; Sachser, Norbert; Gur, Ruben C
2009-06-02
Ethologist Konrad Lorenz defined the baby schema ("Kindchenschema") as a set of infantile physical features, such as round face and big eyes, that is perceived as cute and motivates caretaking behavior in the human, with the evolutionary function of enhancing offspring survival. The neural basis of this fundamental altruistic instinct is not well understood. Prior studies reported a pattern of brain response to pictures of children, but did not dissociate the brain response to baby schema from the response to children. Using functional magnetic resonance imaging and controlled manipulation of the baby schema in infant faces, we found that baby schema activates the nucleus accumbens, a key structure of the mesocorticolimbic system mediating reward processing and appetitive motivation, in nulliparous women. Our findings suggest that engagement of the mesocorticolimbic system is the neurophysiologic mechanism by which baby schema promotes human caregiving, regardless of kinship.
Implementation of UML Schema to RDBM
NASA Astrophysics Data System (ADS)
Nagni, M.; Ventouras, S.; Parton, G.
2012-04-01
Multiple disciplines - especially those within the earth and physical sciences, and increasingly those within social science and medical fields - require Geographic Information (GI), i.e. information concerning phenomena implicitly or explicitly associated with a location relative to the Earth [1]. Therefore geographic datasets are increasingly being shared, exchanged and frequently used for purposes other than those for which they were originally intended. The ISO Technical Committee 211 (ISO/TC 211) together with the Open Geospatial Consortium (OGC) provide a series of standards and guidelines for developing application schemas which should: a) capture relevant conceptual aspects of the data involved; and b) be sufficient to satisfy previously defined use-cases of specific or cross-domain concerns. In addition, the Hollow World technology offers an accessible and industry-standardised methodology for creating and editing Application Schema UML models which conform to international standards for interoperable GI [2]. We present a technology which seamlessly transforms an Application Schema UML model to a relational database model (RDBM). This technology, using the same UML information model, complements the XML transformation of an information model produced by the FullMoon tool [2]. In preparation for the generation of a RDBM, the UML model is first mapped to a collection of OO classes and relationships. Any external dependencies that exist are then resolved through the same mechanism. However, a RDBM does not support a hierarchical (relational) data structure - a function that may be required by UML models. Previous approaches have addressed this problem through use of nested sets or an adjacency list to represent such structure. Our unique strategy addresses the hierarchical data structure issue, whether involving single or multiple inheritance, by hiding a delegation pattern within an OO class. This permits the object-relational mapping (ORM) software used to generate the RDBM to easily map the class into the RDBM. In other words, the particular structure of the resulting OO class may expose a "composition-like aspect" to the ORM whilst maintaining an "inherited-like aspect" for use within an OO program. This methodology has been used to implement a software application that manages the new CEDA metadata model, which is based on MOLES 3.4 and built with Python, Django and SQLAlchemy.
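The delegation pattern described above can be sketched in plain Python (the class and attribute names are invented for illustration, not taken from the CEDA implementation): the "subclass" holds a reference to a delegate object rather than inheriting from it, so an ORM sees simple composition, while __getattr__ preserves inherited-like access for OO code:

    # Minimal sketch of the delegation idea. All names are hypothetical.
    class Feature:                      # would map to a "feature" table
        def __init__(self, name):
            self.name = name

    class Observation:                  # would map to an "observation" table
        def __init__(self, name, result):
            self._parent = Feature(name)   # delegation, not inheritance
            self.result = result

        def __getattr__(self, attr):
            # Fall back to the delegate, so obs.name works as if inherited.
            return getattr(self._parent, attr)

    obs = Observation("station-42", 17.3)
    print(obs.name, obs.result)   # inherited-like access via delegation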
Using Gender Schema Theory to Examine Gender Equity in Computing: a Preliminary Study
NASA Astrophysics Data System (ADS)
Agosto, Denise E.
Women continue to constitute a minority of computer science majors in the United States and Canada. One possible contributing factor is that most Web sites, CD-ROMs, and other digital resources do not reflect girls' design and content preferences. This article describes a pilot study that considered whether gender schema theory can serve as a framework for investigating girls' Web site design and content preferences. Eleven 14- and 15-year-old girls participated in the study. The methodology included the administration of the Children's Sex-Role Inventory (CSRI), Web-surfing sessions, interviews, and data analysis using iterative pattern coding. On the basis of their CSRI scores, the participants were divided into feminine-high (FH) and masculine-high (MH) groups. Data analysis uncovered significant differences in the criteria the groups used to evaluate Web sites. The FH group favored evaluation criteria relating to graphic and multimedia design, whereas the MH group favored evaluation criteria relating to subject content. Models of the two groups' evaluation criteria are presented, and the implications of the findings are discussed.
Krystkowiak, Izabella; Lenart, Jakub; Debski, Konrad; Kuterba, Piotr; Petas, Michal; Kaminska, Bozena; Dabrowski, Michal
2013-01-01
We present the Nencki Genomics Database, which extends the functionality of Ensembl Regulatory Build (funcgen) for the three species: human, mouse and rat. The key enhancements over Ensembl funcgen include the following: (i) a user can add private data, analyze them alongside the public data and manage access rights; (ii) inside the database, we provide efficient procedures for computing intersections between regulatory features and for mapping them to the genes. To Ensembl funcgen-derived data, which include data from ENCODE, we add information on conserved non-coding (putative regulatory) sequences, and on genome-wide occurrence of transcription factor binding site motifs from the current versions of two major motif libraries, namely, Jaspar and Transfac. The intersections and mapping to the genes are pre-computed for the public data, and the result of any procedure run on the data added by the users is stored back into the database, thus incrementally increasing the body of pre-computed data. As the Ensembl funcgen schema for the rat is currently not populated, our database is the first database of regulatory features for this frequently used laboratory animal. The database is accessible without registration using the mysql client: mysql -h database.nencki-genomics.org -u public. Registration is required only to add or access private data. A WSDL webservice provides access to the database from any SOAP client, including the Taverna Workbench with a graphical user interface.
Database URL: http://www.nencki-genomics.org. PMID:24089456
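For programmatic access, the same public account can be used from any MySQL driver. A minimal sketch with PyMySQL (the choice of driver is an assumption; only the host and the public user come from the record above, and no query against specific tables is shown because the schema details are not given here):

    # Minimal sketch: connect as the public user and list the databases.
    import pymysql

    conn = pymysql.connect(host="database.nencki-genomics.org", user="public")
    with conn.cursor() as cur:
        cur.execute("SHOW DATABASES")
        for (name,) in cur.fetchall():
            print(name)
    conn.close()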
ERIC Educational Resources Information Center
Saito, Hitomi; Miwa, Kazuhisa
2007-01-01
In this study, we design a learning environment that supports reflective activities for information seeking on the Web and evaluate its educational effects. The features of this design are: (1) to visualize the learners' search processes as described, based on a cognitive schema, (2) to support two types of reflective activities, such as…
NASA Astrophysics Data System (ADS)
Veerraju, R. P. S. P.; Rao, A. Srinivasa; Murali, G.
2010-10-01
Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. It improves internal code structure without altering external functionality by transforming functions and rethinking algorithms. It is an iterative process. Refactorings include reducing scope, replacing complex instructions with simpler or built-in instructions, and combining multiple statements into one statement. Code transformed with refactoring techniques is faster to change, execute, and download. Refactoring is an excellent best practice to adopt for programmers wanting to improve their productivity. It is similar to performance optimization, which is also a behavior-preserving transformation. It also helps us find bugs when we are trying to fix a bug in difficult-to-understand code: by cleaning things up, we make it easier to expose the bug. Refactoring improves the quality of application design and implementation. In general, there are three cases concerning refactoring: iterative refactoring, refactoring only when necessary, and not refactoring at all. Martin Fowler identifies four key reasons to refactor: refactoring improves the design of software, makes software easier to understand, helps us find bugs, and helps us program faster. There is an additional benefit: refactoring changes the way a developer thinks about the implementation even when not refactoring. There are three types of refactoring. 1) Code refactoring: often referred to simply as refactoring, this is the refactoring of programming source code. 2) Database refactoring: a simple change to a database schema that improves its design while retaining both its behavioral and informational semantics. 3) User interface (UI) refactoring: a simple change to the UI which retains its semantics. Finally, we conclude that the benefits of refactoring are: it improves the design of software, makes software easier to understand, cleans up the code, helps us find bugs, and helps us program faster.
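As a minimal illustration of a behavior-preserving transformation (an invented example, not drawn from the paper), combining multiple statements and substituting a built-in for a hand-rolled loop:

    # Before: verbose, step-by-step accumulation.
    def total_price(items):
        total = 0
        for item in items:
            price = item["price"]
            qty = item["qty"]
            subtotal = price * qty
            total = total + subtotal
        return total

    # After: same external behavior, simpler internal structure.
    def total_price_refactored(items):
        return sum(item["price"] * item["qty"] for item in items)

    # The refactoring preserves behavior:
    sample = [{"price": 2, "qty": 3}, {"price": 5, "qty": 1}]
    assert total_price(sample) == total_price_refactored(sample)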
Simons, Ronald L.; Simons, Leslie Gordon; Lei, Man Kit; Landor, Antoinette
2011-01-01
The present study tests a developmental model designed to explain the romantic relationship difficulties and reluctance to marry often reported for African Americans. Using longitudinal data from a sample of approximately 400 African American young adults, we examine the manner in which race-related adverse experiences during late childhood and early adolescence give rise to the cynical view of romantic partners and marriage held by many young African Americans. Our results indicate that adverse circumstances disproportionately suffered by African American youth (viz., harsh parenting, family instability, discrimination, criminal victimization, and financial hardship) promote distrustful relational schemas that lead to troubled dating relationships, and that these negative relationship experiences, in turn, encourage a less positive view of marriage. PMID:22328799
Design and implementation of temperature and humidity monitoring system for poultry farm
NASA Astrophysics Data System (ADS)
Purnomo, Hindriyanto Dwi; Somya, Ramos; Fibriani, Charitas; Purwoko, Angga; Sadiyah, Ulfa
2016-10-01
Automatic monitoring systems have gained significant interest in the poultry industry due to the need for consistent environmental conditions. An appropriate environment improves the feed conversion ratio as well as bird productivity, which in turn increases the competitiveness of the poultry industry. In this research, a temperature and humidity monitoring system is proposed to observe the temperature and relative humidity of a poultry house. The system is intended to be applied in the poultry industry under a partnership schema. The proposed system is equipped with CCTV for visual monitoring, and the measured temperature and humidity are transmitted using wireless sensor network technology. The experimental results reveal that the proposed system has the potential to increase the effectiveness of monitoring poultry houses in a partnership-schema poultry industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dejong, G.F.; Waltz, D.L.
1983-01-01
This paper treats in some detail the problem of designing mechanisms that will allow one to deal with two types of novel language: (1) text requiring schema learning; and (2) the understanding of novel metaphorical use of verbs. Schema learning is addressed by four types of processes: schema composition, secondary effect elevation, schema alteration, and volitionalization. The processing of novel metaphors depends on a decompositional analysis of verbs into event shape diagrams, along with a matching process that uses semantic marker-like information, to construct novel meaning structures. The examples described have been chosen to be types that occur commonly, so that the rules that are needed to understand them can also be used to understand a much wider range of novel language. 38 references.
XML Schema Guide for Secondary CDR Submissions
This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.1 XML schema for the Joint Submission Form. Please note that the order of the elements must match the schema.
1990-09-12
electronics reading to the next. To test this hypothesis and the suitability of EBL to acquiring schemas, I have implemented an automated reader/learner as...used. For example, testing the utility of a kidnapping schema using several readings about kidnapping can only go so far toward establishing the...the cost of carrying the new rules while processing unrelated material will be underestimated. The present research tests the utility of new schemas in
Peixoto, Maria Manuela; Nobre, Pedro
2017-01-01
Personality traits and dysfunctional sexual beliefs have been described as vulnerability factors for sexual dysfunction in women, and have also been proposed as dispositional variables for the activation of incompetence schemas in response to negative sexual events. However, no study has tested the role of personality traits and dysfunctional sexual beliefs in the activation of incompetence schemas. The current study aimed to assess the moderator role of neuroticism, extraversion, and dysfunctional sexual beliefs in the association between frequency of unsuccessful sexual episodes and activation of incompetence schemas in heterosexual and lesbian women. An online survey was completed by 1,121 women (831 heterosexual; 290 lesbian). Participants completed the NEO Five-Factor Inventory (NEO-FFI), the Sexual Dysfunctional Beliefs Questionnaire-Female Version (SDBQ), and the Questionnaire of Cognitive Schemas Activated in Sexual Context (QCSASC). Findings indicate that neuroticism moderates the association between frequency of negative sexual events and activation of incompetence schemas in heterosexual women. Moreover, several sexual beliefs also act as moderators of the relationship between negative sexual episodes and the activation of cognitive schemas in both heterosexual and lesbian women. Overall, findings support the cognitive-emotional model of sexual dysfunctions, emphasizing the role of personality traits and dysfunctional sexual beliefs as facilitators of the activation of incompetence schemas in response to negative events in women.
Khajouei Nia, Maryam; Sovani, Anuradha; Sarami Forooshani, Gholam Reza
2014-12-01
Many studies have reported that inadequate parental styles can contribute to depressive symptoms through dysfunctional cognitive styles. This study aimed to investigate the association of dysfunctional schemas and parenting style with depression, as well as the role of maladaptive schemas as moderators and mediators, in Iran and India. The study sample was selected randomly and consisted of 200 females (age group 16-60 y) with mild to moderate depression; 100 from Tehran (Iran) and another 100 from Pune (India). The research design was causal-comparative. Data collection took place in hospitals and clinics in the targeted cities. Descriptive statistical tests and hierarchical multiple regression were performed using SPSS 17. It was demonstrated that the association between parenting and depression was not moderated by early maladaptive schemas. On the contrary, the results supported mediational models in which parenting styles are associated with cognitive schemas, which in turn are related to depressive symptoms. It was also found that abandonment mediates the impact of maternal style on depression in Iran. On the other hand, abandonment and punitiveness schemas mediated the relation between paternal style and depression in India. These findings suggest that the correlation between childhood experiences and depression in adulthood is mediated by dysfunctional schemas.
NASA Astrophysics Data System (ADS)
Ulbricht, Damian; Elger, Kirsten; Bertelmann, Roland; Klump, Jens
2016-04-01
With the foundation of DataCite in 2009 and the technical infrastructure installed in the last six years, it has become very easy to create citable dataset DOIs. Nowadays, dataset DOIs are increasingly accepted and required by journals in reference lists of manuscripts. In addition, DataCite provides usage statistics [1] of assigned DOIs and offers a public search API to make research data count. By linking related information to the data, they become more useful for future generations of scientists. For this purpose, several identifier systems, such as ISBN for books, ISSN for journals, DOI for articles or related data, ORCID for authors, and IGSN for physical samples, can be attached to DOIs using the DataCite metadata schema [2]. While these are good preconditions for publishing data, free and open solutions that help with the curation of data, the publication of research data, and the assignment of DOIs in one software package seem to be rare. At GFZ Potsdam we built a modular software stack made of several free and open software solutions, and we established 'GFZ Data Services'. 'GFZ Data Services' provides storage, a metadata editor for publication, and a facility to moderate minted DOIs. All software solutions are connected through web APIs, which makes it possible to reuse and integrate established software. The core component of 'GFZ Data Services' is an eSciDoc [3] middleware that is used as central storage and has been designed along the OAIS reference model for digital preservation. Thus, data are stored in self-contained packages made of binary file-based data and XML-based metadata. The eSciDoc infrastructure provides access control to data and is able to handle half-open datasets, which is useful in embargo situations when a subset of the research data is released after an adequate period. The data exchange platform panMetaDocs [4] makes use of eSciDoc's REST API to upload file-based data into eSciDoc and uses a metadata editor [5] to annotate the files with metadata. The metadata editor has a user-friendly interface with nominal lists, extensive explanations, and an interactive mapping tool to assist scientists in describing the data. It is possible to deposit metadata templates to fill certain fields with default values. The metadata editor generates metadata in the schemas ISO19139, NASA GCMD DIF, and DataCite, and could be extended for other schemas. panMetaDocs is able to mint dataset DOIs through DOIDB, which is our component for moderating dataset DOIs issued through 'GFZ Data Services'. DOIDB accepts metadata in the schemas ISO19139, DIF, and DataCite. In addition, DOIDB provides an OAI-PMH interface to disseminate all deposited metadata to data portals. The presentation of datasets on DOI landing pages is done through XSLT stylesheet transformation of the XML-based metadata. The landing pages have been designed to meet the needs of scientists. We are able to render the metadata to different layouts. Furthermore, additional information about datasets and publications is assembled into the webpage by querying public databases on the internet. The work presented here will focus on technical details of the software stack. [1] http://stats.datacite.org [2] http://www.dlib.org/dlib/january11/starr/01starr.html [3] http://www.escidoc.org [4] http://panmetadocs.sf.net [5] http://github.com/ulbricht
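As a minimal sketch of what harvesting from such an OAI-PMH interface involves (the endpoint URL below is a placeholder, not the actual DOIDB address; the ListRecords verb, the oai_dc metadata prefix, and the namespace URI are defined by the OAI-PMH protocol itself):

    # Minimal sketch: list record identifiers from an OAI-PMH endpoint.
    import requests
    import xml.etree.ElementTree as ET

    BASE = "https://example.org/oai"   # hypothetical endpoint
    resp = requests.get(BASE, params={"verb": "ListRecords",
                                      "metadataPrefix": "oai_dc"})
    root = ET.fromstring(resp.content)
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    for identifier in root.iter(OAI + "identifier"):
        print(identifier.text)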
Metadata to Support Data Warehouse Evolution
NASA Astrophysics Data System (ADS)
Solodovnikova, Darja
The focus of this chapter is metadata necessary to support data warehouse evolution. We present the data warehouse framework that is able to track evolution process and adapt data warehouse schemata and data extraction, transformation, and loading (ETL) processes. We discuss the significant part of the framework, the metadata repository that stores information about the data warehouse, logical and physical schemata and their versions. We propose the physical implementation of multiversion data warehouse in a relational DBMS. For each modification of a data warehouse schema, we outline the changes that need to be made to the repository metadata and in the database.
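A minimal sketch of such a repository (the table layout is invented for illustration and is not the repository design proposed in the chapter) records each schema version together with the logical elements valid under it:

    # Minimal sketch: a metadata repository tracking schema versions.
    # Table and column names are hypothetical.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE schema_version (
            version_id  INTEGER PRIMARY KEY,
            valid_from  TEXT NOT NULL,
            description TEXT
        );
        CREATE TABLE schema_element (
            version_id  INTEGER REFERENCES schema_version(version_id),
            table_name  TEXT NOT NULL,
            column_name TEXT NOT NULL,
            column_type TEXT NOT NULL
        );
    """)
    # A schema modification adds a new version row plus element rows
    # describing the logical schema valid under that version.
    db.execute("INSERT INTO schema_version VALUES (1, '2009-01-01', 'initial')")
    db.execute("INSERT INTO schema_element VALUES (1, 'sales', 'amount', 'REAL')")
    db.execute("INSERT INTO schema_version VALUES (2, '2010-01-01', 'add currency')")
    db.execute("INSERT INTO schema_element VALUES (2, 'sales', 'amount', 'REAL')")
    db.execute("INSERT INTO schema_element VALUES (2, 'sales', 'currency', 'TEXT')")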
What We Do and Do Not Know about Teaching Medical Image Interpretation.
Kok, Ellen M; van Geel, Koos; van Merriënboer, Jeroen J G; Robben, Simon G F
2017-01-01
Educators in medical image interpretation have difficulty finding scientific evidence as to how they should design their instruction. We review and comment on 81 papers that investigated instructional design in medical image interpretation. We distinguish between studies that evaluated complete offline courses and curricula, studies that evaluated e-learning modules, and studies that evaluated specific educational interventions. Twenty-three percent of all studies evaluated the implementation of complete courses or curricula, and 44% of the studies evaluated the implementation of e-learning modules. We argue that these studies have encouraging results but provide little information for educators: too many differences exist between conditions to unambiguously attribute the learning effects to specific instructional techniques. Moreover, concepts are not uniformly defined and methodological weaknesses further limit the usefulness of evidence provided by these studies. Thirty-two percent of the studies evaluated a specific interventional technique. We discuss three theoretical frameworks that informed these studies: diagnostic reasoning, cognitive schemas and study strategies. Research on diagnostic reasoning suggests teaching students to start with non-analytic reasoning and subsequently applying analytic reasoning, but little is known on how to train non-analytic reasoning. Research on cognitive schemas investigated activities that help the development of appropriate cognitive schemas. Finally, research on study strategies supports the effectiveness of practice testing, but more study strategies could be applicable to learning medical image interpretation. Our commentary highlights the value of evaluating specific instructional techniques, but further evidence is required to optimally inform educators in medical image interpretation.
The Role of Ontologies in Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Bures, Tomas; Denney, Ewen; Fischer, Bernd; Nistor, Eugen C.
2004-01-01
Program synthesis is the process of automatically deriving executable code from (non-executable) high-level specifications. It is more flexible and powerful than conventional code generation techniques that simply translate algorithmic specifications into lower-level code or only create code skeletons from structural specifications (such as UML class diagrams). Key to building a successful synthesis system is specializing to an appropriate application domain. The AUTOBAYES and AUTOFILTER systems, under development at NASA Ames, operate in the two domains of data analysis and state estimation, respectively. The central concept of both systems is the schema, a representation of reusable computational knowledge. This can take various forms, including high-level algorithm templates, code optimizations, datatype refinements, or architectural information. A schema also contains applicability conditions that are used to determine when it can be applied safely. These conditions can refer to the initial specification, to intermediate results, or to elements of the partially-instantiated code. Schema-based synthesis uses AI technology to recursively apply schemas to gradually refine a specification into executable code. This process proceeds in two main phases. A front-end gradually transforms the problem specification into a program represented in an abstract intermediate code. A backend then compiles this further down into a concrete target programming language of choice. A core engine applies schemas on the initial problem specification, then uses the output of those schemas as the input for other schemas, until the full implementation is generated. Since there might be different schemas that implement different solutions to the same problem, this process can generate an entire solution tree. AUTOBAYES and AUTOFILTER have reached the level of maturity where they enable users to solve interesting application problems, e.g., the analysis of Hubble Space Telescope images. They are large (in total around 100kLoC Prolog), knowledge-intensive systems that employ complex symbolic reasoning to generate a wide range of non-trivial programs for complex application domains. Their schemas can have complex interactions, which make it hard to change them in isolation or even understand what an existing schema actually does. Adding more capabilities by increasing the number of schemas will only worsen this situation, ultimately leading to the entropy death of the synthesis system. The root cause of this problem is that the domain knowledge is scattered throughout the entire system and only represented implicitly in the schema implementations. In our current work, we are addressing this problem by making explicit the knowledge from different parts of the synthesis system. Here, we discuss how Gruber's definition of an ontology as an explicit specification of a conceptualization matches our efforts in identifying and explicating the domain-specific concepts. We outline the roles ontologies play in schema-based synthesis and argue that they address different audiences and serve different purposes. Their first role is descriptive: they serve as explicit documentation, and help to understand the internal structure of the system. Their second role is prescriptive: they provide the formal basis against which the other parts of the system (e.g., schemas) can be checked. Their final role is referential: ontologies also provide semantically meaningful "hooks" which allow schemas and tools to access the internal state of the program derivation process (e.g., fragments of the generated code) in domain-specific rather than language-specific terms, and thus to modify it in a controlled fashion. For discussion purposes we use AUTOLINEAR, a small synthesis system we are currently experimenting with, which can generate code for solving a system of linear equations, Ax = b.
McArthur, Brae Anne; Strother, Douglas; Schulte, Fiona
2017-01-01
Research in the area of pediatric oncology has shown that although some children and youth diagnosed with this disease cope adaptively after their diagnosis, others continue to have long-term psychosocial difficulties. The potential mechanisms that may protect against the experience of psychopathology and poor quality of life within this population are not well known. The purpose of this pilot study was to utilize a new comprehensive measure of positive schemas to better understand the relationship between positive schemas, quality of life, and psychopathology, for children on active treatment for cancer. Participants were 22 patients, aged 8-18 years, being treated in a pediatric oncology clinic. Patients and parents completed measures of positive schemas, quality of life, and psychopathology. The mean age at time of initial diagnosis of the patient sample was 11.6 years. Child-reported positive schemas were significantly related to child-reported child quality of life (r = 0.46, p = 0.03). This is the first study to examine positive schemas within a pediatric oncology sample. Future research is needed to further explore facets of positive schemas that may be particularly relevant to child psychological functioning in a pediatric oncology population.
Shorey, Ryan C.; Brasfield, Hope; Anderson, Scott; Stuart, Gregory L.
2014-01-01
Background: Recent research has begun to examine the early maladaptive schemas of substance abusers, as it is believed that targeting these core beliefs in treatment may result in improved substance use outcomes. One special population that has received scant attention in the research literature, despite high levels of substance use, is airline pilots. Aims: The current study examined the early maladaptive schemas of a sample of airline pilots (n = 64) who were seeking residential treatment for alcohol dependence, and whether they differed in early maladaptive schemas from non-pilot substance abusers who were also seeking residential treatment for alcohol dependence (n = 45). Method: Pre-existing medical records from patients of a residential substance abuse treatment facility were reviewed for the current study. Results: Of the 18 early maladaptive schemas, results demonstrated that pilots scored higher than non-pilots on the early maladaptive schema of unrelenting standards (high internalized standards of behavior), whereas non-pilots scored higher on insufficient self-control (low frustration tolerance and self-control). Conclusions: Early maladaptive schemas may be a relevant treatment target for substance abuse treatment-seeking pilots and non-pilots. PMID:24701252
Group Schema Therapy for Eating Disorders: A Pilot Study
Simpson, Susan G.; Morrow, Emma; van Vreeswijk, Michiel; Reid, Caroline
2010-01-01
This paper describes the use of Group Schema Therapy for Eating Disorders (ST-E-g) in a case series of eight participants with chronic eating disorders and high levels of co-morbidity. Treatment comprised 20 sessions which included cognitive, experiential, and interpersonal strategies, with an emphasis on behavioral change. Specific schema-based strategies focused on bodily felt-sense and body-image, as well as emotional regulation skills. Six participants attended until the end of treatment; two dropped out at mid-treatment. Eating disorder severity, global schema severity, shame, and anxiety levels were reduced between pre- and post-therapy, with a large effect size at follow-up. Clinically significant improvement in eating severity was found in four out of six completers. Group completers showed a mean reduction in schema severity of 43% at post-treatment, and 59% at follow-up. By follow-up, all completers had achieved over 60% improvement in schema severity. Self-report feedback suggests that group factors may catalyze the change process in schema therapy by increasing perceptions of support and encouragement to take risks and try out new behaviors, whilst providing a de-stigmatizing and de-shaming therapeutic experience. PMID:21833243
Khosravani, Vahid; Sharifi Bastan, Farangis; Samimi Ardestani, Mehdi; Jamaati Ardakani, Razieh
2017-09-01
There are few studies on suicidal risk and its related factors in patients diagnosed with obsessive-compulsive disorder (OCD). This study investigated the associations of early maladaptive schemas, OC symptom dimensions, OCD severity, depression and anxiety with suicidality (i.e., suicidal ideation and suicide attempts) in OCD patients. Sixty OCD outpatients completed the Scale for Suicide Ideation (SSI), the Young Schema Questionnaire-Short Form (YSQ-SF), the Yale-Brown Obsessive Compulsive Scale (Y-BOCS), the Dimensional Obsessive-Compulsive Scale (DOCS) and the Depression Anxiety Stress Scales (DASS-21). 51.7% of patients had lifetime suicide attempts and 75% had suicidal ideation. OCD patients with lifetime suicide attempts exhibited significantly higher scores on early maladaptive schemas than those without such attempts. Logistic regression analysis revealed that the mistrust/abuse schema and the OC symptom dimension of unacceptable thoughts explained lifetime suicide attempts. The mistrust/abuse schema, unacceptable thoughts and depression significantly predicted suicidal ideation. These findings indicated that the mistrust/abuse schema may contribute to high suicidality in OCD patients. Also, patients suffering from unacceptable thoughts need to be assessed more carefully for warning signs of suicide. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
True and false recall and dissociation among maltreated children: the role of self-schema.
Valentino, Kristin; Cicchetti, Dante; Rogosch, Fred A; Toth, Sheree L
2008-01-01
The current investigation addresses the manner through which trauma affects basic memory and self-system processes. True and false recall for self-referent stimuli were assessed in conjunction with dissociative symptomatology among abused (N=76), neglected (N=92), and nonmaltreated (N=116) school-aged children. Abused, neglected, and nonmaltreated children did not differ in the level of processing self-schema effect or in the occurrence and frequency of false recall. Rather, differences in the affective valence of false recall emerged as a function of maltreatment subtype and age. Regarding dissociation, the abused children displayed higher levels of dissociative symptomatology than did the nonmaltreated children. Although abused, neglected, and nonmaltreated children did not exhibit differences in the valence of their self-schemas, positive and negative self-schemas were related to self-integration differently among the subgroups of maltreatment. Negative self-schemas were associated with increased dissociation among the abused children, whereas positive self-schemas were related to increased dissociation for the neglected children. Thus, positive self-schemas displayed by the younger neglected children were related to higher dissociation, suggestive of defensive self-processing. Implications for clinical intervention are underscored.
Continued Bullying Victimization in Adolescents: Maladaptive Schemas as a Mediational Mechanism.
Calvete, Esther; Fernández-González, Liria; González-Cabrera, Joaquín M; Gámez-Guadix, Manuel
2018-03-01
Bullying victimization in adolescence is a significant social problem that can become persistent over time for some victims. However, there is an overall paucity of research examining the factors that contribute to continued bullying victimization. Schema therapy proposes a model that can help us understand why bullying victimization can be persistent for some victims. This study examines the role of maladaptive schemas, the key concept in schema therapy, as a mechanism of continued bullying victimization. The hypothesis was that maladaptive schemas of rejection mediate the predictive association between victimization in both the family and at school and future bullying victimization. Social anxiety was also considered, as previous research suggests that it can increase the risk of victimization. The participants were 1328 adolescents (45% female) with a mean age of 15.05 years (SD = 1.37), who completed questionnaires at three time points with a 6-month interval between them. Time 2 maladaptive schemas of rejection significantly mediated the predictive association from Time 1 bullying victimization, family abuse and social anxiety to Time 3 bullying victimization. The findings pertaining to potentially malleable factors, such as maladaptive schemas that maintain continued interpersonal victimization, have important implications for prevention and treatment strategies with adolescents.
Marques, Sofia; Barrocas, Daniel; Rijo, Daniel
2017-04-28
Borderline personality disorder is the most common personality disorder, with a global prevalence rate between 1.6% and 6%. It is characterized by affective disturbance and impulsivity, which lead to a high number of self-harm behaviors and a great amount of health services use. International guidelines recommend psychotherapy as the primary treatment for borderline personality disorder. This paper reviews evidence about the effects and efficacy of cognitive-behavioral oriented psychological treatments for borderline personality disorder. A literature review was conducted in the Medline and PubMed databases, using the following keywords: borderline personality disorder, cognitive-behavioral psychotherapy and efficacy. Sixteen randomized clinical trials were evaluated in this review, which analyzed the effects of several cognitive-behavioral oriented psychotherapeutic interventions, namely dialectical behavioral therapy, cognitive behavioral therapy, schema-focused therapy and manual-assisted cognitive therapy. All of the above treatments showed clinically beneficial effects, by reducing borderline personality disorder core pathology and associated general psychopathology, as well as by reducing the severity and frequency of self-harm behaviors, and by improving overall social, interpersonal and global adjustment. Dialectical behavioral therapy and schema-focused therapy also achieved remission rates of the diagnostic borderline personality disorder criteria of 57% and 94%, respectively. Although there were differences between the psychotherapeutic interventions analysed in this review, all showed clinical benefits in the treatment of borderline personality disorder. Dialectical behavioral therapy and schema-focused therapy presented the strongest scientific data documenting their efficacy, but both interventions are integrative cognitive-behavioral therapies which deviate from the traditional cognitive-behavioral model. In summary, the available studies support cognitive-behavioral psychological treatments as an efficacious intervention in borderline personality disorder. However, the existing scientific literature on this topic is still scarce, and there is a need for more studies with higher methodological rigor to validate these results.
Improving Listening Comprehension through a Whole-Schema Approach.
ERIC Educational Resources Information Center
Ellermeyer, Deborah
1993-01-01
Examines the development of the schema, or cognitive structure, theory of reading comprehension. Advances a model for improving listening comprehension within the classroom through a teacher-facilitated approach which leads students to selecting and utilizing existing schema within a whole-language environment. (MDM)
Group schema therapy for eating disorders: study protocol.
Calvert, Fiona; Smith, Evelyn; Brockman, Rob; Simpson, Susan
2018-01-01
The treatment of eating disorders is a difficult endeavor, with only a relatively small proportion of clients responding to and completing standard cognitive behavioural therapy (CBT). Given the prevalence of co-morbidity and complex personality traits in this population, Schema Therapy has been identified as a potentially viable treatment option. A case series of Group Schema Therapy for Eating Disorders (ST-E-g) yielded positive findings, and the study protocol outlined in this article aims to extend upon these preliminary findings to evaluate group Schema Therapy for eating disorders in a larger sample (n = 40). Participants undergo a two-hour assessment where they complete a number of standard questionnaires and their diagnostic status is ascertained using the Eating Disorder Examination. Participants then commence treatment, which consists of 25 weekly group sessions lasting 1.5 h and four individual sessions. Each group consists of five to eight participants and is facilitated by two therapists, at least one of whom is a registered psychologist trained in schema therapy. The primary outcome in this study is eating disorder symptom severity. Secondary outcomes include: cognitive schemas, self-objectification, general quality of life, self-compassion, schema mode presentations, and Personality Disorder features. Participants complete psychological measures and questionnaires at pre-treatment, post-treatment, six-month, and one-year follow-up. This study will expand upon preliminary research into the efficacy of group Schema Therapy for individuals with eating disorders. If group Schema Therapy is shown to reduce eating disorder symptoms, it will hold considerable promise as an intervention option for a group of disorders that is typically difficult to treat. ACTRN12615001323516. Registered: 2/12/2015 (retrospectively registered, still recruiting).
Towards a theoretical clarification of biomimetics using conceptual tools from engineering design.
Drack, M; Limpinsel, M; de Bruyn, G; Nebelsick, J H; Betz, O
2017-12-13
Many successful examples of biomimetic products are available, and most research efforts in this emerging field are directed towards the development of specific applications. The theoretical and conceptual underpinnings of the knowledge transfer between biologists, engineers and architects are, however, poorly investigated. The present article addresses this gap. We use a 'technomorphic' approach, i.e. the application of conceptual tools derived from engineering design, to better understand the processes operating during a typical biomimetic research project. This helps to elucidate the formal connections between functions, working principles and constructions (in a broad sense), because the 'form-function relationship' is a recurring issue in biology and engineering. The presented schema also serves as a conceptual framework that can be implemented for future biomimetic projects. The concepts of 'function' and 'working principle' are identified as the core elements in the biomimetic knowledge transfer towards applications. This schema not only facilitates the development of a common language in the emerging science of biomimetics, but also promotes the interdisciplinary dialogue among its subdisciplines.
Schepens Niemiec, Stacey L; Carlson, Mike; Martínez, Jenny; Guzmán, Laura; Mahajan, Anish; Clark, Florence
2015-01-01
Latino adults between ages 50 and 60 yr are at high risk for developing chronic conditions that can lead to early disability. We conducted a qualitative pilot study with 11 Latinos in this demographic group to develop a foundational schema for the design of health promotion programs that could be implemented by occupational therapy practitioners in primary care settings for this population. One-on-one interviews addressing routines and activities, health management, and health care utilization were conducted, audiotaped, and transcribed. Results of a content analysis of the qualitative data revealed the following six domains of most concern: Weight Management; Disease Management; Mental Health and Well-Being; Personal Finances; Family, Friends, and Community; and Stress Management. A typology of perceived health-actualizing strategies was derived for each domain. This schema can be used by occupational therapy practitioners to inform the development of health-promotion lifestyle interventions designed specifically for late-middle-aged Latinos. Copyright © 2015 by the American Occupational Therapy Association, Inc.
Schemas and memory consolidation.
Tse, Dorothy; Langston, Rosamund F; Kakeyama, Masaki; Bethus, Ingrid; Spooner, Patrick A; Wood, Emma R; Witter, Menno P; Morris, Richard G M
2007-04-06
Memory encoding occurs rapidly, but the consolidation of memory in the neocortex has long been held to be a more gradual process. We now report, however, that systems consolidation can occur extremely quickly if an associative "schema" into which new information is incorporated has previously been created. In experiments using a hippocampal-dependent paired-associate task for rats, the memory of flavor-place associations became persistent over time as a putative neocortical schema gradually developed. New traces, trained for only one trial, then became assimilated and rapidly hippocampal-independent. Schemas also played a causal role in the creation of lasting associative memory representations during one-trial learning. The concept of neocortical schemas may unite psychological accounts of knowledge structures with neurobiological theories of systems memory consolidation.
Spatial cyberinfrastructures, ontologies, and the humanities
Sieber, Renee E.; Wellen, Christopher C.; Jin, Yuan
2011-01-01
We report on research into building a cyberinfrastructure for Chinese biographical and geographic data. Our cyberinfrastructure contains (i) the McGill-Harvard-Yenching Library Ming Qing Women's Writings database (MQWW), the only online database on historical Chinese women's writings, (ii) the China Biographical Database, the authority for Chinese historical people, and (iii) the China Historical Geographical Information System, one of the first historical geographic information systems. Key to this integration is that linked databases retain separate identities as bases of knowledge, while they possess sufficient semantic interoperability to allow for multidatabase concepts and to support cross-database queries on an ad hoc basis. Computational ontologies create underlying semantics for database access. This paper focuses on the spatial component in a humanities cyberinfrastructure, which includes issues of conflicting data, heterogeneous data models, disambiguation, and geographic scale. First, we describe the methodology for integrating the databases. Then we detail the system architecture, which includes a tier of ontologies and schema. We describe the user interface and applications that allow for cross-database queries. For instance, users should be able to analyze the data, examine hypotheses on spatial and temporal relationships, and generate historical maps with datasets from MQWW for research, teaching, and publication on Chinese women writers, their familial relations, publishing venues, and the literary and social communities. Last, we discuss the social side of cyberinfrastructure development, as people are considered to be as critical as the technical components for its success. PMID:21444819
SinEx DB: a database for single exon coding sequences in mammalian genomes.
Jorquera, Roddy; Ortiz, Rodrigo; Ossandon, F; Cárdenas, Juan Pablo; Sepúlveda, Rene; González, Carolina; Holmes, David S
2016-01-01
Eukaryotic genes are typically interrupted by intragenic, noncoding sequences termed introns. However, some genes lack introns in their coding sequence (CDS) and are generally known as 'single exon genes' (SEGs). In this work, a SEG is defined as a nuclear, protein-coding gene that lacks introns in its CDS. Whereas many public databases of eukaryotic multi-exon genes are available, there are only two specialized databases for SEGs. The present work addresses the need for a more extensive and diverse database by creating SinEx DB, a publicly available, searchable database of predicted SEGs from 10 completely sequenced mammalian genomes including human. SinEx DB houses the DNA and protein sequence information of these SEGs and includes their functional predictions (KOG) and the relative distribution of these functions within species. The information is stored in a relational database built with MySQL Server 5.1.33, and the complete dataset of SEG sequences and their functional predictions is available for download. SinEx DB can be interrogated by: (i) browsing a phylogenetic schema, (ii) carrying out BLAST searches against the in-house SinEx DB of SEGs, and (iii) an advanced search mode in which the database can be searched by keywords and any combination of searches by species and predicted functions. SinEx DB provides a rich source of information for advancing our understanding of the evolution and function of SEGs. Database URL: www.sinex.cl. © The Author(s) 2016. Published by Oxford University Press.
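The abstract describes a relational back end (MySQL) with a combined species-plus-function advanced search. The sketch below illustrates that kind of query using Python's sqlite3 as a stand-in; the table name `seg` and its columns are hypothetical assumptions, not the published SinEx DB schema.

```python
import sqlite3

# Hypothetical table standing in for the SinEx DB schema; the real
# MySQL layout is not given in the abstract.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE seg (
        seg_id      INTEGER PRIMARY KEY,
        species     TEXT,     -- e.g. 'Homo sapiens'
        kog_class   TEXT,     -- predicted KOG functional category
        protein_seq TEXT,
        description TEXT
    )
""")
conn.execute(
    "INSERT INTO seg (species, kog_class, protein_seq, description) "
    "VALUES (?, ?, ?, ?)",
    ("Homo sapiens", "Signal transduction", "MSTL...", "GPCR-like SEG"),
)

# Combined search by species and predicted function, as in the
# advanced search mode described above.
rows = conn.execute(
    "SELECT seg_id, description FROM seg "
    "WHERE species = ? AND kog_class LIKE ?",
    ("Homo sapiens", "%Signal%"),
).fetchall()
print(rows)
```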
How Do DSM-5 Personality Traits Align With Schema Therapy Constructs?
Bach, Bo; Lee, Christopher; Mortensen, Erik Lykke; Simonsen, Erik
2016-08-01
DSM-5 offers an alternative model of personality pathology that includes 25 traits. Although personality disorders are mostly treated with psychotherapy, the correspondence between DSM-5 traits and concepts in evidence-based psychotherapy has not yet been evaluated adequately. Schema therapy, notably, was developed for treating personality disorders and has accumulated promising evidence. The authors examined associations between DSM-5 traits and schema therapy constructs in a mixed sample of 662 adults, including 312 clinical participants. Associations were investigated in terms of factor loadings and regression coefficients in relation to five domains, followed by specific correlations among all constructs. The results indicated conceptually coherent associations, and 15 of 25 traits were strongly related to relevant schema therapy constructs. In conclusion, DSM-5 traits may be considered expressions of schema therapy constructs, which psychotherapists might take advantage of in case formulation and in selecting targets of treatment. In turn, schema therapy constructs add theoretical understanding to DSM-5 traits.
Early maladaptive schemas in personality disordered individuals.
Jovev, Martina; Jackson, Henry J
2004-10-01
The present study aimed to examine the specificity of schema domains in three personality disorder (PD) groups, namely borderline (BPD), obsessive-compulsive (OCPD), and avoidant PD (AvPD), and to correctly identify the three PD groups on the basis of these schemas. The sample consisted of 48 clinical participants diagnosed with PDs and assigned to 1 of 3 groups on the basis of their Axis II diagnoses (BPD: n = 13; OCPD: n = 13; AvPD: n = 22). High scores on Dependence/Incompetence, Defectiveness/Shame and Abandonment were found for the BPD group. This pattern appears to be most consistent with Young's theory of BPD. Consistent with the theory and empirical findings of Beck et al. (1990, 2001), OCPD was associated with elevations on the Unrelenting Standards schema domain, but not on Emotional Inhibition, which was found to be elevated for AvPD. In conclusion, the present study suggests that there are different patterns of schema domains across different PDs and that the Schema Questionnaire (SQ) is potentially useful in differentiating between these PDs.
Topological Schemas of Cognitive Maps and Spatial Learning.
Babichev, Andrey; Cheng, Sen; Dabaghian, Yuri A
2016-01-01
Spatial navigation in mammals is based on building a mental representation of the environment: a cognitive map. However, both the nature of this cognitive map and its underpinning in neural structures and activity remain vague. A key difficulty is that these maps are collective, emergent phenomena that cannot be reduced to a simple combination of inputs provided by individual neurons. In this paper we suggest computational frameworks, which we call schemas, for integrating the spiking signals of individual cells into a spatial map. We provide examples of four schemas defined by different types of topological relations that may be neurophysiologically encoded in the brain and demonstrate that each schema provides its own large-scale characteristics of the environment: the schema integrals. Moreover, we find that, in all cases, these integrals are learned at a rate faster than the rate of complete training of the neural networks. Thus, the proposed schema framework differentiates between the cognitive aspect of spatial learning and the physiological aspect at the neural network level.
qcML: an exchange format for quality control metrics from mass spectrometry experiments.
Walzer, Mathias; Pernas, Lucia Espona; Nasso, Sara; Bittremieux, Wout; Nahnsen, Sven; Kelchtermans, Pieter; Pichler, Peter; van den Toorn, Henk W P; Staes, An; Vandenbussche, Jonathan; Mazanek, Michael; Taus, Thomas; Scheltema, Richard A; Kelstrup, Christian D; Gatto, Laurent; van Breukelen, Bas; Aiche, Stephan; Valkenborg, Dirk; Laukens, Kris; Lilley, Kathryn S; Olsen, Jesper V; Heck, Albert J R; Mechtler, Karl; Aebersold, Ruedi; Gevaert, Kris; Vizcaíno, Juan Antonio; Hermjakob, Henning; Kohlbacher, Oliver; Martens, Lennart
2014-08-01
Quality control is increasingly recognized as a crucial aspect of mass spectrometry based proteomics. Several recent papers discuss relevant parameters for quality control and present applications to extract these from the instrumental raw data. What has been missing, however, is a standard data exchange format for reporting these performance metrics. We therefore developed the qcML format, an XML-based standard that follows the design principles of the related mzML, mzIdentML, mzQuantML, and TraML standards from the HUPO-PSI (Proteomics Standards Initiative). In addition to the XML format, we also provide tools for the calculation of a wide range of quality metrics as well as a database format and interconversion tools, so that existing LIMS systems can easily add relational storage of the quality control data to their existing schema. We here describe the qcML specification, along with possible use cases and an illustrative example of the subsequent analysis possibilities. All information about qcML is available at http://code.google.com/p/qcml. © 2014 by The American Society for Biochemistry and Molecular Biology, Inc.
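Because qcML is XML-based, it can be consumed with any standard XML tool chain. The fragment below is illustrative only: the element names loosely follow the published format, but the attributes and values are assumptions for the sketch, not the normative qcML schema.

```python
import xml.etree.ElementTree as ET

# Hand-written illustrative fragment, not a real qcML document.
doc = """
<qcML version="0.0.8">
  <runQuality ID="run_1">
    <qualityParameter name="MS1 spectra count" value="12456"/>
    <qualityParameter name="MS2 spectra count" value="48211"/>
  </runQuality>
</qcML>
"""

root = ET.fromstring(doc)
# Walk each run and print its reported quality metrics.
for run in root.iter("runQuality"):
    for param in run.iter("qualityParameter"):
        print(run.get("ID"), param.get("name"), param.get("value"))
```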
An object-relational model for structured representation of medical knowledge.
Koch, S; Risch, T; Schneider, W; Wagner, I V
2006-07-01
Domain-specific knowledge is often not static but continuously evolving. This is especially true for the medical domain. Furthermore, the lack of standardized structures for presenting knowledge makes it difficult or often impossible to assess new knowledge in the context of existing knowledge. Possibilities to compare knowledge easily and directly are often not given. It is therefore of utmost importance to create a model that allows for comparability, consistency and quality assurance of medical knowledge in specific work situations. For this purpose, we have designed an object-relational model based on structured knowledge elements that are dynamically reusable by different multimedia-based tools for case-based documentation, disease course simulation, and decision support. With this model, high-level components, such as patient case reports or simulations of the course of a disease, and low-level components (e.g., diagnoses, symptoms or treatments), as well as the relationships between these components, are modeled. The resulting schema has been implemented in AMOS II, an object-relational multi-database system supporting different views with regard to search and analysis depending on different work situations.
A Split-Path Schema-Based RFID Data Storage Model in Supply Chain Management
Fan, Hua; Wu, Quanyuan; Lin, Yisong; Zhang, Jianfeng
2013-01-01
In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance. PMID:23645112
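One plausible reading of the tree-structure-based path splitting is a prefix tree over sequences of read points, so that products sharing an initial route segment share storage until their paths diverge. The sketch below illustrates that general idea; it is not the paper's exact algorithm or its relational mapping, and the location names are invented.

```python
# Minimal prefix-tree (trie) sketch of path splitting: movement paths
# that share a common prefix of read points are stored once up to the
# point where they diverge.

class PathNode:
    def __init__(self):
        self.children = {}   # location -> PathNode
        self.tags = []       # tag IDs whose path ends at this node

def insert_path(root, tag_id, path):
    node = root
    for location in path:
        node = node.children.setdefault(location, PathNode())
    node.tags.append(tag_id)

root = PathNode()
insert_path(root, "tag001", ["factory", "warehouse", "store_A"])
insert_path(root, "tag002", ["factory", "warehouse", "store_B"])
insert_path(root, "tag003", ["factory", "distribution", "store_A"])

# The shared prefix 'factory' is stored once; the tree branches where
# the product movement paths split.
print(sorted(root.children["factory"].children))
```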
Cognitive structures in women with sexual dysfunction: the role of early maladaptive schemas.
Oliveira, Cátia; Nobre, Pedro J
2013-07-01
Cognitive schemas are often related to psychological problems. However, the role of these structures within sexual problems is not yet well established. The aim of this study was to evaluate the presence and importance of early maladaptive schemas in women's sexual functioning and the cognitive schemas activated in response to negative sexual events. A total of 228 women participated in the study: a control sample of 167 women without sexual problems, a subclinical sample of 37 women with low sexual functioning, and a clinical sample of 24 women with sexual dysfunction. Participants completed several self-reported measures: the Schema Questionnaire, the Questionnaire of Cognitive Schema Activation in Sexual Context, the Brief Symptom Inventory, the Beck Depression Inventory, and the Female Sexual Function Index. Findings indicated that women with sexual dysfunction presented significantly more early maladaptive schemas from the Impaired Autonomy and Performance domain, particularly failure (P < 0.001, η² = 0.08), dependence/incompetence (P < 0.05, η² = 0.03), and vulnerability to danger (P < 0.05, η² = 0.04). Additionally, in response to negative sexual events, women with sexual dysfunction presented significantly higher scores on incompetence (P < 0.001, η² = 0.16), self-depreciation (P < 0.01, η² = 0.05), and difference/loneliness (P < 0.01, η² = 0.05) schemas. Results supported differences between women with and without sexual problems regarding cognitive factors. This may have implications for the knowledge, assessment, and treatment of sexual dysfunction in women. © 2012 International Society for Sexual Medicine.
NGSmethDB 2017: enhanced methylomes and differential methylation
Lebrón, Ricardo; Gómez-Martín, Cristina; Carpena, Pedro; Bernaola-Galván, Pedro; Barturen, Guillermo; Hackenberg, Michael; Oliver, José L.
2017-01-01
The 2017 update of NGSmethDB stores whole genome methylomes generated from short-read data sets obtained by bisulfite sequencing (WGBS) technology. To generate high-quality methylomes, stringent quality controls were integrated with third-part software, adding also a two-step mapping process to exploit the advantages of the new genome assembly models. The samples were all profiled under constant parameter settings, thus enabling comparative downstream analyses. Besides a significant increase in the number of samples, NGSmethDB now includes two additional data-types, which are a valuable resource for the discovery of methylation epigenetic biomarkers: (i) differentially methylated single-cytosines; and (ii) methylation segments (i.e. genome regions of homogeneous methylation). The NGSmethDB back-end is now based on MongoDB, a NoSQL hierarchical database using JSON-formatted documents and dynamic schemas, thus accelerating sample comparative analyses. Besides conventional database dumps, track hubs were implemented, which improved database access, visualization in genome browsers and comparative analyses to third-part annotations. In addition, the database can be also accessed through a RESTful API. Lastly, a Python client and a multiplatform virtual machine allow for program-driven access from user desktop. This way, private methylation data can be compared to NGSmethDB without the need to upload them to public servers. Database website: http://bioinfo2.ugr.es/NGSmethDB. PMID:27794041
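To make the document-oriented back end concrete, here is a minimal pymongo sketch of storing and querying single-cytosine methylation records. It assumes a local MongoDB instance, and the collection name and field layout are illustrative assumptions, not the actual NGSmethDB document schema.

```python
from pymongo import MongoClient

# Assumes a MongoDB server on localhost; collection and field names
# are hypothetical stand-ins for the NGSmethDB layout.
client = MongoClient("localhost", 27017)
col = client["methylomes"]["cytosines"]

col.insert_one({
    "assembly": "hg38",
    "chrom": "chr1",
    "pos": 1_234_567,
    "context": "CpG",
    "sample": "brain_frontal_cortex",
    "meth_ratio": 0.87,
})

# Dynamic schemas make sample comparison a simple query: find highly
# methylated sites within a genomic region.
hits = col.find({
    "chrom": "chr1",
    "pos": {"$gte": 1_000_000, "$lte": 2_000_000},
    "meth_ratio": {"$gte": 0.8},
})
for doc in hits:
    print(doc["sample"], doc["pos"], doc["meth_ratio"])
```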
Development Of International Data Standards For The COSMOS/PEER-LL Virtual Data Center
NASA Astrophysics Data System (ADS)
Swift, J. N.
2005-12-01
The COSMOS-PEER Lifelines Project 2L02 completed a Pilot Geotechnical Virtual Data Center (GVDC) system capable of both archiving geotechnical data and disseminating data from multiple linked geotechnical databases. The Pilot GVDC system links the geotechnical databases of four organizations: the California Geological Survey, Caltrans, PG&E, and the U.S. Geological Survey. The system was presented and reviewed at the COSMOS-PEER Lifelines workshop on June 21-23, 2004, which was co-sponsored by the Federal Highway Administration (FHWA) and included participation by the United Kingdom Highways Agency (UKHA), the Association of Geotechnical and Geoenvironmental Specialists in the United Kingdom (AGS), the United States Army Corps of Engineers (USACOE), Caltrans, the United States Geological Survey (USGS), the California Geological Survey (CGS), a number of state Departments of Transportation (DOTs), county building code officials, and representatives of academic institutions and private-sector geotechnical companies. As of February 2005, COSMOS-PEER Lifelines Project 2L03 is funded to accomplish the following tasks: 1) expand the Pilot GVDC Geotechnical Data Dictionary and XML Schema to include data definitions and structures describing in-situ measurements, such as shear wave velocity profiles, and additional laboratory geotechnical test types; 2) participate in an international cooperative working group developing a single geotechnical data exchange standard with broad international acceptance; and 3) upgrade the GVDC system to support the corresponding exchange-standard data dictionary and schema improvements. The new geophysical data structures being developed will include PS-logs, downhole geophysical logs, cross-hole velocity data, and velocity profiles derived using surface waves. A COSMOS-PEER Lifelines Geophysical Data Dictionary Working Committee, composed of experts in the development of data dictionary standards and experts in the specific data to be captured, is presently working on this task. The international geotechnical data dictionary and schema development is a highly collaborative effort funded by a pooled-fund study coordinated by state DOTs and FHWA. The technical development of the standards, called DIGGS (Data Interchange for Geotechnical and Geoenvironmental Specialists), is led by a team consisting of representatives from the University of Florida Department of Civil Engineering (UF), AGS, the Construction Industry Research and Information Association (CIRIA), UKHA, Ohio DOT, and COSMOS. The first draft of DIGGS is currently in preparation. A Geotechnical Management System Group (GMS group), composed of representatives from 13 state DOTs, FHWA, US EPA, USACOE, USGS and UKHA, oversees and approves the development of the standards. The ultimate goal of both COSMOS-PEER Lifelines Project 2L03 and the international GMS working group is to produce open and flexible, GML-compliant, XML schema-based data structures and data dictionaries for review and approval by DOTs, other public agencies, and the international engineering and geoenvironmental community at large, leading to the adoption of internationally accepted geotechnical and geophysical data transfer standards. Establishment of these standards is intended to significantly facilitate the accessibility and exchange of geotechnical information worldwide.
A first proposal for a general description model of forensic traces
NASA Astrophysics Data System (ADS)
Lindauer, Ina; Schäler, Martin; Vielhauer, Claus; Saake, Gunter; Hildebrandt, Mario
2012-06-01
In recent years, the amount of digitally captured traces at crime scenes has increased rapidly. There are various kinds of such traces, like pick marks on locks, latent fingerprints on various surfaces, as well as different micro traces. These traces differ from each other not only in kind but also in the information they provide. Every kind of trace has its own properties (e.g., minutiae for fingerprints, or raking traces for locks), but there are also large amounts of metadata which all traces have in common, like location, time and other additional information in relation to crime scenes. For selected types of crime scene traces, type-specific databases already exist, such as ViCLAS for sexual offences, IBIS for ballistic forensics or AFIS for fingerprints. These existing forensic databases strongly differ in their trace description models. For forensic experts it would be beneficial to work with only one database capable of handling all possible forensic traces acquired at a crime scene. This is especially the case when different kinds of traces are interrelated (e.g., fingerprints and ballistic marks on a bullet casing). Unfortunately, current research on interrelated traces as well as on general forensic data models and structures is not mature enough to build such an encompassing forensic database. Nevertheless, recent advances in the field of contact-less scanning make it possible to acquire different kinds of traces with the same device. The data for these traces is therefore structured similarly, which simplifies the design of a general forensic data model for different kinds of traces. In this paper we introduce a first common description model for different forensic trace types. Furthermore, for selected trace types we apply the phases of the well-established database schema development process, transferring expert knowledge from the corresponding forensic fields into an extendible, database-driven, generalised forensic description model. The trace types considered here are fingerprint traces, traces at locks, micro traces and ballistic traces. Based on these basic trace types, combined traces (multiple or overlapping fingerprints, fingerprints on bullet casings, etc.) and partial traces are also considered.
Enhancing SAMOS Data Access in DOMS via a Neo4j Property Graph Database.
NASA Astrophysics Data System (ADS)
Stallard, A. P.; Smith, S. R.; Elya, J. L.
2016-12-01
The Shipboard Automated Meteorological and Oceanographic System (SAMOS) initiative provides routine access to high-quality marine meteorological and near-surface oceanographic observations from research vessels. The Distributed Oceanographic Match-Up Service (DOMS) under development is a centralized service that allows researchers to easily match in situ and satellite oceanographic data from distributed sources to facilitate satellite calibration, validation, and retrieval algorithm development. The service currently uses Apache Solr as a backend search engine on each node in the distributed network. While Solr is a high-performance solution that facilitates creation and maintenance of indexed data, it is limited in the sense that its schema is fixed. The property graph model escapes this limitation by creating relationships between data objects. The authors will present the development of the SAMOS Neo4j property graph database, including new search possibilities that take advantage of the property graph model, performance comparisons with Apache Solr, and a vision for graph databases as a storage tool for oceanographic data. The integration of the SAMOS Neo4j graph into DOMS will also be described. Currently, Neo4j contains spatial and temporal records from SAMOS, which are modeled into a time tree and an r-tree using the GraphAware and Spatial plugin tools for Neo4j. These extensions provide callable Java procedures within Cypher (Neo4j's query language) that generate in-graph structures. Once generated, these structures can be queried using procedures from these libraries, or directly via Cypher statements. Neo4j excels at performing relationship and path-based queries, which challenge relational SQL databases because they require memory-intensive joins. Consider a user who wants to find records over several years, but only for specific months. If a traditional database only stores timestamps, this type of query would be complex and likely prohibitively slow. Using the time-tree model, one can specify a path from the root to the data which restricts resolution to certain timeframes (e.g., months). This query can be executed without joins, unions, or other compute-intensive operations, putting Neo4j at a computational advantage over the SQL database alternative.
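To make the time-tree idea concrete, the sketch below issues a months-across-years query through the official Neo4j Python driver. The node labels (Year, Month, Day, Record) and relationship types are assumptions standing in for the GraphAware time-tree structure, not the actual SAMOS graph model, and the connection credentials are placeholders.

```python
from neo4j import GraphDatabase

# Fetch records for June and July across several years by walking the
# time tree, rather than scanning raw timestamps. Labels and
# relationship types here are illustrative assumptions.
query = """
MATCH (y:Year)-[:CHILD]->(m:Month)-[:CHILD]->(d:Day)-[:RECORDED]->(r:Record)
WHERE y.value IN [2014, 2015, 2016] AND m.value IN [6, 7]
RETURN r
"""

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(query):
        print(record["r"])
driver.close()
```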
A New System To Support Knowledge Discovery: Telemakus.
ERIC Educational Resources Information Center
Revere, Debra; Fuller, Sherrilynne S.; Bugni, Paul F.; Martin, George M.
2003-01-01
The Telemakus System builds on the areas of concept representation, schema theory, and information visualization to enhance knowledge discovery from scientific literature. This article describes the underlying theories and an overview of a working implementation designed to enhance the knowledge discovery process through retrieval, visual and…
STINGRAY: system for integrated genomic resources and analysis.
Wagner, Glauber; Jardim, Rodrigo; Tschoeke, Diogo A; Loureiro, Daniel R; Ocaña, Kary A C S; Ribeiro, Antonio C B; Emmel, Vanessa E; Probst, Christian M; Pitaluga, André N; Grisard, Edmundo C; Cavalcanti, Maria C; Campos, Maria L M; Mattoso, Marta; Dávila, Alberto M R
2014-03-07
The STINGRAY system has been conceived to ease the tasks of integrating, analyzing, annotating and presenting genomic and expression data from Sanger and Next Generation Sequencing (NGS) platforms. STINGRAY includes: (a) a complete and integrated workflow (more than 20 bioinformatics tools) ranging from functional annotation to phylogeny; (b) a MySQL database schema, suitable for data integration and user access control; and (c) a user-friendly graphical web-based interface that makes the system intuitive, facilitating the tasks of data analysis and annotation. STINGRAY proved to be an easy-to-use and complete system for analyzing sequencing data. While both Sanger and NGS platforms are supported, the system can be faster with Sanger data, since large NGS datasets could potentially slow down the MySQL database usage. STINGRAY is available at http://stingray.biowebdb.org and the open source code at http://sourceforge.net/projects/stingray-biowebdb/.
Burns, Gully APC; Cheng, Wei-Cheng
2006-01-01
Background: Knowledge bases that summarize the published literature provide useful online references for specific areas of systems-level biology that are not otherwise supported by large-scale databases. In the field of neuroanatomy, groups of small focused teams have constructed medium-size knowledge bases to summarize the literature describing tract-tracing experiments in several species. Despite years of collation and curation, these databases only provide partial coverage of the available published literature. Given that the scientists reading these papers must all generate the interpretations that would normally be entered into such a system, we attempt here to provide general-purpose annotation tools to make it easy for members of the community to contribute to the task of data collation. Results: In this paper, we describe an open-source, freely available knowledge management system called 'NeuroScholar' that allows straightforward structured markup of PDF files according to a well-designed schema to capture the essential details of this class of experiment. Although the example worked through in this paper is quite specific to neuroanatomical connectivity, the design is freely extensible and could conceivably be used to construct local knowledge bases for other experiment types. Knowledge representations of the experiment are also directly linked to the contributing textual fragments from the original research article. Through the use of this system, not only could members of the community contribute to the collation task, but input data can be gathered for automated approaches to permit knowledge acquisition through the use of Natural Language Processing (NLP). Conclusion: We present a functional, working tool to permit users to populate knowledge bases for neuroanatomical connectivity data from the literature through the use of structured questionnaires. This system is open-source, fully functional and available for download from [1]. PMID:16895608
Testing Three-Item Versions for Seven of Young's Maladaptive Schema
ERIC Educational Resources Information Center
Blau, Gary; DiMino, John; Sheridan, Natalie; Pred, Robert S.; Beverly, Clyde; Chessler, Marcy
2015-01-01
The Young Schema Questionnaire (YSQ) in either long-form (205-item) or short-form (75-item or 90-item) versions has demonstrated its clinical usefulness for assessing early maladaptive schemas. However, even a 75- or 90-item "short form", particularly when combined with other measures, can represent a lengthy…
Young Adolescents' Gender-, Ethnicity-, and Popularity-Based Social Schemas of Aggressive Behavior
ERIC Educational Resources Information Center
Clemans, Katherine H.; Graber, Julia A.
2016-01-01
Social schemas can influence the perception and recollection of others' behavior and may create biases in the reporting of social events. This study investigated young adolescents' (N = 317) gender-, ethnicity-, and popularity-based social schemas of overtly and relationally aggressive behavior. Results indicated that participants associated overt…
Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code
ERIC Educational Resources Information Center
Taherkhani, Ahmad; Malmi, Lauri
2013-01-01
In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
Parenting Schemas and the Process of Change
ERIC Educational Resources Information Center
Azar, Sandra T.; Nix, Robert L.; Makin-Byrd, Kerry N.
2005-01-01
Parents' childrearing behaviors are guided by schemas of the caregiving role, their functioning in that role, what children need in general, and what their own children are like in particular. Sometimes, however, parenting schemas can be maladaptive because they are too rigid or simple, involve inappropriate content, or are dominated by negative…
Understanding Schemas and Emotion in Early Childhood
ERIC Educational Resources Information Center
Arnold, Cath
2010-01-01
This book makes explicit connections between young children's spontaneous repeated actions and their representations of their emotional worlds. Drawing on the literature on schemas, attachment theory and family contexts, the author takes schema theory into the territory of the emotions, making it relevant to the social and emotional development…
Thinking Children: Learning about Schemas.
ERIC Educational Resources Information Center
Meade, Anne; Cubey, Pam
Schemas are cognitive structures or forms of thought, like pieces of ideas or concepts. Patterns in children's behavior, or in their drawings and paintings, indicate common themes or threads (schemas) running through them. The action research study described in this report examined the effects on children's learning of intervening in their…
The Potential Role of Conflict Resolution Schemas in Adolescent Psychosocial Adjustment
ERIC Educational Resources Information Center
Jutengren, Goran; Palmerus, Kerstin
2007-01-01
Four specific schemas of cognitive structures that adolescents may hold concerning interpersonal disagreements with their parents were identified, each reflecting an authoritative, authoritarian, indulgent, or a neglecting parenting style. To examine the occurrence of such schemas across high and low levels of psychosocial adjustment, 120 Swedish…
Androgyny Versus Gender Schema: A Comment on Bem's Gender Schema Theory.
ERIC Educational Resources Information Center
Spence, Janet T.; Helmreich, Robert L.
1981-01-01
A logical contradiction in Bem's (1981) theory is outlined. The Bem Sex Role Inventory cannot measure a unidimensional construct, gender schema, and two independent constructs--masculinity and femininity. Such instruments measure self-images of instrumental and expressive personality traits which show little relationship to the constructs…
Family Functioning and Maladaptive Schemas: The Moderating Effects of Optimism
ERIC Educational Resources Information Center
Buri, John R.; Gunty, Amy L.
2008-01-01
Authoritarian parenting is often shown to be associated with negative outcomes for children, including the development of maladaptive schemas. However, this is not the case for all children who experience Authoritarian parenting. Optimism is examined as a moderator in the relationship between Authoritarian parenting and maladaptive schemas that…
Anterior Cingulate Cortex in Schema Assimilation and Expression
ERIC Educational Resources Information Center
Wang, Szu-Han; Tse, Dorothy; Morris, Richard G. M.
2012-01-01
In humans and in animals, mental schemas can store information within an associative framework that enables rapid and efficient assimilation of new information. Using a hippocampal-dependent paired-associate task, we now report that the anterior cingulate cortex is part of a neocortical network of schema storage with NMDA receptor-mediated…
XML Schema Languages: Beyond DTD.
ERIC Educational Resources Information Center
Ioannides, Demetrios
2000-01-01
Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)
Scherer, R F; Petrick, J A
2001-02-01
In this empirical study of 649 employees at a federally supported health care facility in the United States, the authors investigated the effects of individual gender role orientation on team schema. The results indicated (a) that nontraditional male and female employees perceived the greatest amount of group cohesion in their team schemas and (b) that both traditional and nontraditional male employees perceived greater problem-solving potential in their team schemas. Meaningful implications for team composition are discussed.
How we see others: the psychobiology of schemas and transference.
Stein, Dan J
2009-01-01
Social cognition involves automatic and stimulus-driven processes; these may be important in mediating stereotypes in the community and schemas and transference in the clinic setting. Significant differences in self-related processing and other-related processing may also lead to important biases in our view of the other. The psychobiology of social cognition is gradually being delineated, and may be useful in understanding these phenomena, and in responding appropriately. In the clinic, schemas can be rigorously assessed, and schema-focused psychotherapy may be useful in a number of indications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, Michael J.
SchemaOnRead provides tools for implementing schema-on-read, including a single function call (e.g., schemaOnRead("filename")) that reads text (TXT), comma-separated value (CSV), raster image (BMP, PNG, GIF, TIFF, and JPG), R data (RDS), HDF5, NetCDF, spreadsheet (XLS, XLSX, ODS, and DIF), Weka Attribute-Relation File Format (ARFF), Epi Info (REC), Pajek network (PAJ), R network (NET), Hypertext Markup Language (HTML), SPSS (SAV), Systat (SYS), and Stata (DTA) files. It also recursively reads folders (e.g., schemaOnRead("folder")), returning a nested list of the contained elements.
Kachadourian, Lorig K; Taft, Casey T; Holowka, Darren W; Woodward, Halley; Marx, Brian P; Burns, Anthony
2013-10-01
This study examined the associations between maladaptive dependency-related schemas, posttraumatic stress disorder (PTSD) hyperarousal symptoms, and intimate-partner psychological and physical aggression in a sample of court-referred men (N = 174) participating in a domestic-abuser-intervention program. The men were largely African American; average age was 33.5 years. The extent to which hyperarousal symptoms moderated the association between dependency schemas and aggression was also examined. Maladaptive dependency-related schemas were positively associated with severe psychological, and mild and severe physical aggression perpetration. Hyperarousal symptoms were positively associated with mild and severe psychological aggression, and mild physical aggression perpetration. Multiple regression analyses showed a significant interaction for mild physical aggression: For those with high levels of hyperarousal symptoms, greater endorsement of maladaptive dependency schemas was associated with the perpetration of aggression (B = 0.98, p = .001). For those with low levels of hyperarousal symptoms, there was no association between dependency schemas and aggression (B = 0.04, ns). These findings suggest that focusing on problematic dependency and PTSD-hyperarousal symptoms in domestic-abuser-intervention programs may be helpful, and that examining related variables as possible moderators between dependency schemas and intimate aggression would be a fruitful area for future research. Published 2013. This article is a US Government work and is in the public domain in the USA.
Schema-driven facilitation of new hierarchy learning in the transitive inference paradigm
Kumaran, Dharshan
2013-01-01
Prior knowledge, in the form of a mental schema or framework, is viewed to facilitate the learning of new information in a range of experimental and everyday scenarios. Despite rising interest in the cognitive and neural mechanisms underlying schema-driven facilitation of new learning, few paradigms have been developed to examine this issue in humans. Here we develop a multiphase experimental scenario aimed at characterizing schema-based effects in the context of a paradigm that has been very widely used across species, the transitive inference task. We show that an associative schema, comprised of prior knowledge of the rank positions of familiar items in the hierarchy, has a marked effect on transitivity performance and the development of relational knowledge of the hierarchy that cannot be accounted for by more general changes in task strategy. Further, we show that participants are capable of deploying prior knowledge to successful effect under surprising conditions (i.e., when corrective feedback is totally absent), but only when the associative schema is robust. Finally, our results provide insights into the cognitive mechanisms underlying such schema-driven effects, and suggest that new hierarchy learning in the transitive inference task can occur through a contextual transfer mechanism that exploits the structure of associative experiences. PMID:23782509
NASA Astrophysics Data System (ADS)
Ferguson, Joseph Paul; Kameniar, Barbara
2014-10-01
This paper investigates the cognitive experiences of four religious students studying evolutionary biology in an inner city government secondary school in Melbourne, Australia. The participants in the study were identified using the Religious Background and Behaviours questionnaire (Connors, Tonigan, & Miller, 1996). Participants were interviewed and asked to respond to questions about their cognitive experiences of studying evolutionary biology. Students' responses were analysed using cultural analysis of discourse to construct a cultural model of religious students of science. This cultural model suggests that these students employ a human schema and a non-human schema, which assert that humans are fundamentally different from non-humans in terms of origins and that humans have a transcendental purpose in life. For these students, these maxims seem to be challenged by their belief that evolutionary biology is dictated by metaphysical naturalism. The model suggests that because the existential foundation of these students is challenged, they employ a believing schema to classify their religious explanations and a learning schema to classify evolutionary biology. These schemas are then hierarchically arranged with the learning schema being made subordinate to the believing schema. Importantly, these students are thus able to maintain their existential foundation while fulfilling the requirements of school science. However, the quality of this "learning" is questionable.
Ferreira Junior, José Raniery; Oliveira, Marcelo Costa; de Azevedo-Marques, Paulo Mazzoncini
2016-12-01
Lung cancer is the leading cause of cancer-related deaths in the world, and its main manifestation is pulmonary nodules. Detection and classification of pulmonary nodules are challenging tasks that must be done by qualified specialists, but image interpretation errors make those tasks difficult. In order to aid radiologists in those hard tasks, it is important to integrate computer-based tools with the lesion detection, pathology diagnosis, and image interpretation processes. However, computer-aided diagnosis research faces the problem of not having enough shared medical reference data for the development, testing, and evaluation of computational methods for diagnosis. In order to minimize this problem, this paper presents a public nonrelational document-oriented cloud-based database of pulmonary nodules characterized by 3D texture attributes, identified by experienced radiologists and classified on nine different subjective characteristics by the same specialists. Our goal with the development of this database is to improve computer-aided lung cancer diagnosis and pulmonary nodule detection and classification research through the deployment of this database in a cloud Database-as-a-Service framework. Pulmonary nodule data was provided by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), image descriptors were acquired by a volumetric texture analysis, and the database schema was developed using a document-oriented Not only Structured Query Language (NoSQL) approach. The proposed database now holds 379 exams, 838 nodules, and 8237 images, of which 4029 are CT scans and 4208 are manually segmented nodules, and it is allocated in a MongoDB instance on a cloud infrastructure.
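A document-oriented layout lets each nodule record carry its nested texture attributes and subjective ratings in a single document, with no joins. The sketch below shows one possible layout; the field names and values are illustrative assumptions, not the authors' published document schema.

```python
# Hypothetical document for one nodule in a document-oriented store.
# Nested sub-documents hold the 3D texture descriptors and the
# radiologists' subjective ratings side by side.
nodule_doc = {
    "exam_id": "LIDC-IDRI-0001",
    "nodule_id": 42,
    "texture_3d": {              # volumetric texture attributes
        "energy": 0.12,
        "entropy": 4.87,
        "contrast": 310.5,
    },
    "radiologist_ratings": {     # subjective characteristics
        "malignancy": 3,
        "spiculation": 2,
        "subtlety": 4,
    },
    "segmentation": "manual",
}
print(nodule_doc["radiologist_ratings"]["malignancy"])
```

Because the schema is dynamic, later documents can add new descriptors without migrating the existing records, which suits an evolving research database.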
Efficient data management tools for the heterogeneous big data warehouse
NASA Astrophysics Data System (ADS)
Alekseev, A. A.; Osipova, V. V.; Ivanov, M. A.; Klimentov, A.; Grigorieva, N. V.; Nalamwar, H. S.
2016-09-01
Traditional RDBMSs are built around normalized data structures and have served well for decades, but the technology is not optimal for data processing and analysis in data-intensive fields like social networks, the oil and gas industry, experiments at the Large Hadron Collider, etc. Several challenges have been raised recently concerning the scalability of data warehouses, such as running analytical workloads against the transactional schema, in particular for the analysis of archived data or the aggregation of data for summary and accounting purposes. The paper evaluates new database technologies like HBase, Cassandra, and MongoDB, commonly referred to as NoSQL databases, for handling messy, varied and large amounts of data. The evaluation considers the performance, throughput and scalability of the above technologies for several scientific and industrial use cases. This paper outlines the technologies and architectures needed for processing Big Data, as well as describing the back-end application that implements data migration from an RDBMS to a NoSQL data warehouse, the NoSQL database organization, and how it could be useful for further data analytics.
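A minimal sketch of this kind of RDBMS-to-NoSQL migration, under stated assumptions: an existing SQLite table named `events` with the columns shown, a local MongoDB instance, and field names invented for the example. It is an illustration of the general denormalization step, not the paper's back-end application.

```python
import sqlite3
from pymongo import MongoClient

# Assumes warehouse.db contains an 'events' table; table and field
# names are hypothetical.
sql = sqlite3.connect("warehouse.db")
mongo = MongoClient()["archive"]["events"]  # local MongoDB assumed

rows = sql.execute(
    "SELECT event_id, ts, detector, payload FROM events"
)

# Denormalize each relational row into a JSON-like document.
docs = [
    {"event_id": e, "ts": ts, "detector": det, "payload": p}
    for e, ts, det, p in rows
]
if docs:
    mongo.insert_many(docs)  # bulk load into the NoSQL warehouse
```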
Teachers' Attributions for Stress and Their Relationships with Burnout
ERIC Educational Resources Information Center
McCormick, John; Barnett, Kerry
2011-01-01
Purpose: It may be argued that some shared psychological mechanisms (attribution) and structures (schemas) are likely to play a role in how individuals perceive stress. This paper seeks to propose and test some hypothesised relationships between stress attribution domains and burnout dimensions. Design/methodology/approach: The participants were…
The Effects of Technical Illustrations on Cognitive Load.
ERIC Educational Resources Information Center
Purnell, Kenneth N.; And Others
1992-01-01
Outlines two theories of cognitive science that are relevant for instructional design, i.e., schema theory and cognitive load theory; and describes four experiments with Australian secondary school geography students that used these theories to examine the effects of splitting attention between technical illustrations and related text. (20…
Technologies in Literacy Learning: A Case Study
ERIC Educational Resources Information Center
Cloonan, Anne
2010-01-01
This article draws on outcomes of a study which explored changes in teachers' literacy pedagogies as a result of their participation in a collaborative teacher professional learning project. The educational usability of schemas drawn from multiliteracies and Learning by Design theory is illustrated through a case study of a teacher's work on…
Architectural and Functional Design of an Environmental Information Network.
1984-04-30
Study accomplished under contract F08635-83-C-013…, Task 83-2, for Headquarters Air Force Engineering and Services Center. Contents include: Selection Procedure; General Architecture of Distributed Data Management System; Schema Architecture; MULTIBASE Component Architecture.
A Measurement Model of Microgenetic Transfer for Improving Instructional Outcomes
ERIC Educational Resources Information Center
Pavlik, Philip I., Jr.; Yudelson, Michael; Koedinger, Kenneth R.
2015-01-01
Efforts to improve instructional task design often make reference to the mental structures, such as "schemas" (e.g., Gick & Holyoak, 1983) or "identical elements" (Thorndike & Woodworth, 1901), that are common to both the instructional and target tasks. This component based (e.g., Singley & Anderson, 1989) approach…
ERIC Educational Resources Information Center
Root, Jenny; Saunders, Alicia; Spooner, Fred; Brosh, Chelsi
2017-01-01
The ability to solve mathematical problems related to purchasing and personal finance is important in promoting skill generalization and increasing independence for individuals with moderate intellectual disabilities (IDs). Using a multiple probe across participant design, this study investigated the effects of modified schema-based instruction…
Stress Leads to Aberrant Hippocampal Involvement When Processing Schema-Related Information
ERIC Educational Resources Information Center
Vogel, Susanne; Kluen, Lisa Marieke; Fernández, Guillén; Schwabe, Lars
2018-01-01
Prior knowledge, represented as a mental schema, has critical impact on how we organize, interpret, and process incoming information. Recent findings indicate that the use of an existing schema is coordinated by the medial prefrontal cortex (mPFC), communicating with parietal areas. The hippocampus, however, is crucial for encoding…
ERIC Educational Resources Information Center
Calvete, Esther; Orue, Izaskun
2012-01-01
This longitudinal investigation assessed whether cognitive schemas of justification of violence, mistrust, and narcissism predicted social information processing (SIP), and SIP in turn predicted aggressive behavior in adolescents. A total of 650 adolescents completed measures of cognitive schemas at Time 1, SIP in ambiguous social scenarios at…
Counseling Clients with Chronic Pain: A Religiously Oriented Cognitive Behavior Framework
ERIC Educational Resources Information Center
Robertson, Linda A.; Smith, Heather L.; Ray, Shannon L.; Jones, K. Dayle
2009-01-01
The experience of chronic pain is largely influenced by core schemas and cognitive processes, including those that are religious in nature. When these schemas are negative, they contribute to the exacerbation of pain and related problems. A framework is presented for the identification of problematic religious schemas and their modification…
ERIC Educational Resources Information Center
Sinton, Meghan M.; Birch, Leann L.
2006-01-01
Appearance schemas, a suggested cognitive component of body image, have been associated with body dissatisfaction in adolescent and adult samples. This study examined girls' weight status (BMI), depression, and parent, sibling, peer, and media influences as predictors of appearance schemas in 173 pre-adolescent girls. Hierarchical regression…
A Critique of Schema Theory in Reading and a Dual Coding Alternative (Commentary).
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1991-01-01
Evaluates schema theory and presents dual coding theory as a theoretical alternative. Argues that schema theory is encumbered by lack of a consistent definition, its roots in idealist epistemology, and mixed empirical support. Argues that results of many empirical studies used to demonstrate the existence of schemata are more consistently…
A Natural Teaching Method Based on Learning Theory.
ERIC Educational Resources Information Center
Smilkstein, Rita
1991-01-01
The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…
Ability Related Differences in Schema-Guided Text Processing.
ERIC Educational Resources Information Center
Derry, Sharon J.
A study using a biasing paradigm examined four hypotheses regarding specific mechanisms thought to underlie the Assimilation-plus-Correction (A-C) theory of schema-text interactions. According to this theory, the ideas implied by a schema (type-1 ideas) are thought to be assimilated and obscured, while those ideas representing novel information…
Rape-related cognitive distortions: Preliminary findings on the role of early maladaptive schemas.
Sigre-Leirós, Vera; Carvalho, Joana; Nobre, Pedro J
2015-01-01
Despite the important focus on the notion of cognitive distortions in the sexual offending area, the relevance of underlying cognitive schemas in sexual offenders has also been suggested. The aim of the present study was to investigate a potential relationship between Early Maladaptive Schemas (EMSs) and cognitive distortions in rapists. A total of 33 men convicted of rape completed the Bumby Rape Scale (BRS), the Young Schema Questionnaire - Short Form-3 (YSQ-S3), the Brief Symptom Inventory (BSI), and the Socially Desirable Response Set Measure (SDRS-5). Results showed a significant relationship between the impaired limits schematic domain and the Justifying Rape dimension of the BRS. Specifically, after controlling for psychological distress levels and social desirability tendency, the entitlement/grandiosity schema from the impaired limits domain was a significant predictor of cognitive distortions related to Justifying Rape themes. Overall, although preliminary, there is some evidence that Young's schema-focused model, namely the impaired limits dimension, may contribute to the conceptualization of cognitive distortions in rapists, and further investigation is recommended. Copyright © 2015 Elsevier Ltd. All rights reserved.
Do schema processes mediate links between parenting and eating pathology?
Sheffield, Alex; Waller, Glenn; Emanuelli, Francesca; Murray, James; Meyer, Caroline
2009-07-01
Adverse parenting experiences are commonly linked to eating pathology. A schema-based model of the development and maintenance of eating pathology proposes that one of the potential mediators of the link between parenting and eating pathology might be the development of schema maintenance processes--mechanisms that operate to help the individual avoid intolerable emotions. To test this hypothesis, 353 female students and 124 female eating-disordered clients were recruited. They completed a measure of perceived parenting experiences as related to schema development (Young Parenting Inventory-Revised (YPI-R)), two measures of schema processes (Young Compensatory Inventory; Young-Rygh Avoidance Inventory (YRAI)) and a measure of eating pathology (Eating Disorders Inventory (EDI)). In support of the hypothesis, certain schema processes did mediate the relationship between specific perceptions of parenting and particular forms of eating pathology, although these were different for the clinical and non-clinical samples. In those patients where parenting is implicated in the development of eating pathology, treatment might need to target the cognitive processes that can explain this link. 2009 John Wiley & Sons, Ltd and Eating Disorders Association
Hume, Sam; Aerts, Jozef; Sarnikar, Surendra; Huser, Vojtech
2016-04-01
In order to further advance research and development on the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM) standard, the existing research must be well understood. This paper presents a methodological review of the ODM literature. Specifically, it develops a classification schema to categorize the ODM literature according to how the standard has been applied within the clinical research data lifecycle. This paper suggests areas for future research and development that address ODM's limitations and capitalize on its strengths to support new trends in clinical research informatics. A systematic scan of the following databases was performed: (1) ABI/Inform, (2) ACM Digital, (3) AIS eLibrary, (4) Europe PubMed Central, (5) Google Scholar, (6) IEEE Xplore, (7) PubMed, and (8) ScienceDirect. A Web of Science citation analysis was also performed. The search term used on all databases was "CDISC ODM." The two primary inclusion criteria were: (1) the research must examine the use of ODM as an information system solution component, or (2) the research must critically evaluate ODM against a stated solution usage scenario. Out of 2686 articles identified, 266 were included in a title-level review, resulting in 183 articles. An abstract review followed, resulting in 121 remaining articles; after a full-text scan, 69 articles met the inclusion criteria. As the demand for interoperability has increased, ODM has shown remarkable flexibility and has been extended to cover a broad range of data and metadata requirements that reach well beyond ODM's original use cases. This flexibility has yielded research literature that covers a diverse array of topic areas. A classification schema reflecting the use of ODM within the clinical research data lifecycle was created to provide a categorized and consolidated view of the ODM literature. The elements of the framework include: (1) EDC (Electronic Data Capture) and EHR (Electronic Health Record) infrastructure; (2) planning; (3) data collection; (4) data tabulations and analysis; and (5) study archival. The analysis reviews the strengths and limitations of ODM as a solution component within each section of the classification schema. This paper also identifies opportunities for future ODM research and development, including improved mechanisms for semantic alignment with external terminologies, better representation of the CDISC standards used end-to-end across the clinical research data lifecycle, improved support for real-time data exchange, the use of EHRs for research, and the inclusion of a complete study design. ODM is being used in ways not originally anticipated, and covers a diverse array of use cases across the clinical research data lifecycle. ODM has been used as much as a study metadata standard as it has for data exchange. A significant portion of the literature addresses integrating EHR and clinical research data. The simplicity and readability of ODM has likely contributed to its success and broad implementation as a data and metadata standard. Keeping the core ODM model focused on the most fundamental use cases, while using extensions to handle edge cases, has kept the standard easy for developers to learn and use. Copyright © 2016 Elsevier Inc. All rights reserved.
Misconceived causal explanations for emergent processes.
Chi, Michelene T H; Roscoe, Rod D; Slotta, James D; Roy, Marguerite; Chase, Catherine C
2012-01-01
Studies exploring how students learn and understand science processes such as diffusion and natural selection typically find that students provide misconceived explanations of how the patterns of such processes arise (such as why giraffes' necks get longer over generations, or how ink dropped into water appears to "flow"). Instead of explaining the patterns of these processes as emerging from the collective interactions of all the agents (e.g., both the water and the ink molecules), students often explain the pattern as being caused by controlling agents with intentional goals, and express a variety of other misconceived notions. In this article, we provide a hypothesis for what constitutes a misconceived explanation and why misconceived explanations are so prevalent, robust, and resistant to instruction, and we offer one approach by which they may be overcome. In particular, we hypothesize that students misunderstand many science processes because they rely on a generalized version of narrative schemas and scripts (referred to here as a Direct-causal Schema) to interpret them. For science processes that are sequential and stage-like, such as the cycles of the moon, the circulation of blood, the stages of mitosis, and photosynthesis, a Direct-causal Schema is adequate for correct understanding. However, for science processes that are non-sequential (or emergent), such as diffusion, natural selection, osmosis, and heat flow, using a Direct Schema to understand these processes will lead to robust misconceptions. Instead, a different type of general schema may be required to interpret non-sequential processes, which we refer to as an Emergent-causal Schema. We propose that students lack this Emergent Schema and that teaching it to them may help them learn and understand emergent kinds of science processes such as diffusion. Our study found that directly teaching students this Emergent Schema led to increased learning of the process of diffusion. This article presents a fine-grained characterization of each type of Schema, our instructional intervention, the successes we have achieved, and the lessons we have learned. Copyright © 2011 Cognitive Science Society, Inc.
Sigre-Leirós, Vera; Carvalho, Joana; Nobre, Pedro
2015-02-01
Empirical research has primarily focused on the differences between rapists and child molesters. Nonetheless, a greater understanding of the specific needs of specific subtypes of sex offenders is necessary. The aim of the present study was to investigate the relationship between early maladaptive schemas and different types of sexual offending behavior. Fifty rapists, 59 child molesters (19 pedophilic and 40 nonpedophilic), and 51 nonsexual offenders answered the Young Schema Questionnaire, the Brief Symptom Inventory, and the Socially Desirable Response Set Measure. Data were analyzed using sets of multinomial logistic regression, controlling for sociodemographic variables, psychological distress, and social desirability. Results showed that pedophilic offenders were more likely to hold the defectiveness and subjugation schemas compared to the other three groups. Likewise, nonpedophilic child molesters were more likely to hold the social isolation, enmeshment, and unrelenting standards schemas compared to rapists. Additionally, rapists were more likely to hold the vulnerability to harm, approval-seeking, and punitiveness schemas compared to nonpedophilic child molesters and/or nonsexual offenders. Overall, our findings suggest that cognitive schemas may play a role in the vulnerability to sexual offending and corroborate the need to distinguish between the two subtypes of child molesters. Despite the need for further investigation, findings may have important implications for the treatment of sex offenders and for the prevention of sexual crimes. Copyright © 2014 Elsevier Ltd. All rights reserved.
Cohen, Lisa J; Tanis, Thachell; Ardalan, Firouz; Yaseen, Zimri; Galynker, Igor
2016-08-30
Diagnostic criteria for borderline personality disorder (BPD) and mood and psychotic disorders characterized by major mood episodes (i.e., major depressive, bipolar and schizoaffective disorder) share marked overlap in symptom presentation, complicating differential diagnosis. The current study tests the hypothesis that maladaptive interpersonal schemas (MIS) are characteristic of BPD, but not of the major mood disorders. One hundred psychiatric inpatients were assessed by SCID I, SCID II and the Young Schema Questionnaire (YSQ-S2). Logistic regression analyses tested the association between MIS (measured by the YSQ-S2) and BPD, bipolar, major depressive and schizoaffective disorder. Receiver operator characteristic (ROC) curve analyses assessed the sensitivity and specificity of MIS as a marker of BPD. After covariation for comorbidity with each of the 3 mood disorders, BPD was robustly associated with 4 out of 5 schema domains. In contrast, only one of fifteen regression analyses demonstrated a significant association between any mood disorder and schema domain after covariation for comorbid BPD. ROC analyses of the 5 schema domains suggested Disconnection/Rejection had the greatest power for identification of BPD cases. These data support the specific role of maladaptive interpersonal schemas in BPD and potentially contribute to greater conceptual clarity about the distinction between BPD and the major mood disorders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Minimum information required for a DMET experiment reporting.
Kumuthini, Judit; Mbiyavanga, Mamana; Chimusa, Emile R; Pathak, Jyotishman; Somervuo, Panu; Van Schaik, Ron Hn; Dolzan, Vita; Mizzi, Clint; Kalideen, Kusha; Ramesar, Raj S; Macek, Milan; Patrinos, George P; Squassina, Alessio
2016-09-01
To provide pharmacogenomics reporting guidelines and describe the information and tools required for reporting to public omic databases. For effective DMET data interpretation, sharing, interoperability, reproducibility and reporting, we propose the Minimum Information required for a DMET Experiment (MIDE) guidelines. MIDE provides reporting guidelines and describes the information required for reporting, data storage and data sharing in the form of XML. The MIDE guidelines will benefit the scientific community conducting pharmacogenomics experiments, including the reporting of pharmacogenomics data from other technology platforms, with tools that ease and automate the generation of such reports using the standardized MIDE XML schema, facilitating the sharing, dissemination and reanalysis of datasets through accessible and transparent pharmacogenomics data reporting.
Hierarchical Schemas and Goals in the Control of Sequential Behavior
ERIC Educational Resources Information Center
Cooper, Richard P.; Shallice, Tim
2006-01-01
Traditional accounts of sequential behavior assume that schemas and goals play a causal role in the control of behavior. In contrast, M. Botvinick and D. C. Plaut (see record 2004-12248-005) argued that, at least in routine behavior, schemas and goals are epiphenomenal. The authors evaluate the Botvinick and Plaut account by contrasting the simple…
ERIC Educational Resources Information Center
Farc, Maria-Magdalena; Crouch, Julie L.; Skowronski, John J.; Milner, Joel S.
2008-01-01
Objective: Two studies examined whether accessibility of hostility-related schema influenced ratings of ambiguous child pictures. Based on the social information processing model of child physical abuse (CPA), it was expected that CPA risk status would serve as a proxy for chronic accessibility of hostile schema, while priming procedures were used…
Are Parents' Gender Schemas Related to Their Children's Gender-Related Cognitions? A Meta-Analysis.
ERIC Educational Resources Information Center
Tenenbaum, Harriet R.; Leaper, Campbell
2002-01-01
Used meta-analysis to examine relationship of parents' gender schemas and their offspring's gender-related cognitions, with samples ranging in age from infancy through early adulthood. Found a small but meaningful effect size (r=.16) indicating a positive correlation between parent gender schema and offspring measures. Effect sizes were influenced…
ERIC Educational Resources Information Center
Goldston, Jennifer Anne
2013-01-01
The goal of this study was to analyze the education-related schemas guiding teachers and highly educated, professional immigrant parents in a small southern California elementary school district, and to describe how facets of these schemas converged or diverged as parents and teachers drew upon their social and cultural backgrounds during…
ERIC Educational Resources Information Center
Peltier, Corey; Vannest, Kimberly J.
2018-01-01
The current study examines the effects of schema instruction on the problem-solving performance of four second-grade students with emotional and behavioral disorders. The existence of a functional relationship between the schema instruction intervention and problem-solving accuracy in mathematics is examined through a single case experiment using…
Self-Schemas, Motivational Strategies and Self-Regulated Learning.
ERIC Educational Resources Information Center
Garcia, Teresa; Pintrich, Paul R.
Self-regulated learning is usually viewed as the fusion of skill and will, referring to the students' development of different learning strategies in service of their goals. This definition is expanded in a study of self-schemas as a means of representing multiple goals for learning. Measures of self-schemas were used with 151 seventh graders (86…
ERIC Educational Resources Information Center
Gutkind, Rebeka Chaia
2012-01-01
This mixed method study investigated the schema strategy uses of fourth-grade boys with reading challenges; specifically, their ability to understand text based on two components within schema theory: tuning and restructuring. Based on the reading comprehension scores from the Iowa Test of Basic Skills (Form 2010), four comparison groups were…
Schema Theories as a Base for the Structural Representation of the Knowledge State.
ERIC Educational Resources Information Center
Dochy, F. J. R. C.; Bouwens, M. R. J.
From the view of schema-transfer theory, the use of schemata with their several functions gives an explanation for the facilitative effect of prior knowledge on learning processes. This report gives a theoretical exploration of the concept of schemata, underlying schema theories, and functions of schemata to indicate the importance of schema…
Schema-Driven Facilitation of New Hierarchy Learning in the Transitive Inference Paradigm
ERIC Educational Resources Information Center
Kumaran, Dharshan
2013-01-01
Prior knowledge, in the form of a mental schema or framework, is viewed to facilitate the learning of new information in a range of experimental and everyday scenarios. Despite rising interest in the cognitive and neural mechanisms underlying schema-driven facilitation of new learning, few paradigms have been developed to examine this issue in…
Compressing Aviation Data in XML Format
NASA Technical Reports Server (NTRS)
Patel, Hemil; Lau, Derek; Kulkarni, Deepak
2003-01-01
Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. These data are typically in disparate formats, making them difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of the data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use when running it, which in turn depends on the nature of the data. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data, along with the optimal settings to use with it.
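As a rough illustration of what such an advisory tool measures, the sketch below benchmarks general-purpose compressors from the Python standard library on a repetitive XML payload and recommends the best compression ratio. Xmill and XMLPPM are standalone tools rather than Python libraries, so they are not invoked here; the sample data and the candidate list are illustrative assumptions, not the advisory tool described above.

```python
# Minimal sketch: benchmark stdlib compressors on a sample XML payload
# and recommend the one with the smallest compressed-size ratio.
import bz2
import gzip
import lzma

# Build a repetitive XML payload of the kind aviation records produce.
sample_xml = b"<flights>" + b"".join(
    b'<flight id="%d"><alt>31000</alt><spd>450</spd></flight>' % i
    for i in range(1000)
) + b"</flights>"

candidates = [
    ("gzip-1", lambda d: gzip.compress(d, compresslevel=1)),
    ("gzip-9", lambda d: gzip.compress(d, compresslevel=9)),
    ("bz2-9", lambda d: bz2.compress(d, compresslevel=9)),
    ("lzma", lzma.compress),
]

results = [(name, len(fn(sample_xml)) / len(sample_xml)) for name, fn in candidates]
for name, ratio in results:
    print(f"{name}: compressed/original = {ratio:.3f}")
print("recommended:", min(results, key=lambda r: r[1])[0])
```

An advisory tool along these lines would run the same loop over real sample files and a larger grid of tools and settings, trading benchmark time against transfer and storage savings.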
Digital Pain Drawings: Assessing Touch-Screen Technology and 3D Body Schemas.
Boudreau, Shellie A; Badsberg, Susanne; Christensen, Steffan W; Egsgaard, Line L
2016-02-01
To assess the consistency and level of agreement between pain drawings collected (1) on paper and on a personal computer tablet; and (2) between a 2-dimensional (2D) line drawing and a 3-dimensional (3D) body schema. Pain-free participants (N=24) recreated a premarked "pain" area from a 2D line drawing displayed on paper onto paper or tablet, and individuals with chronic neck pain (N=29) expressed their current pain on paper and tablet. A heterogeneous group (N=26) was recruited from a cross-disciplinary pain clinic and expressed their pain on a 2D line drawing and a 3D body schema, as displayed on a tablet, and then completed a user-experience questionnaire. Pain drawings showed moderate to high levels of consistency and a high level of agreement between paper and tablet and between the 2D line drawing and the 3D body schema. A fixed bias (-1.0042, P<0.001) revealed that pain areas were drawn slightly smaller on paper than on tablet, and larger on the 2D than on the 3D body schema (-0.6371, P=0.003), as recorded on a tablet. Over one-third of individuals with chronic pain preferred and/or believed that the 3D body schema enabled a more accurate record; 12 believed the two were equal, and 3 preferred the 2D line drawing. Pain drawings recorded with touch-screen technology provide reliability equal to paper, but the size of the drawing differs slightly between platforms. Although 2D line drawings and 3D body schemas were similar in terms of consistency and reliability, it remains to be confirmed whether 3D body schemas increase the accuracy and precision of pain drawings.
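For readers unfamiliar with the "fixed bias" statistic reported above, the short sketch below shows one common way to compute it in a method-agreement analysis: the mean paired difference between two platforms, tested against zero. The pain-area values are invented for illustration and are not the study's data.

```python
# Sketch of a fixed-bias check between two recording platforms:
# mean paired difference, tested against zero with a paired t-test.
import numpy as np
from scipy import stats

paper = np.array([12.1, 8.4, 15.0, 9.7, 11.2, 14.3])   # toy pain areas, paper
tablet = np.array([13.0, 9.1, 15.8, 10.9, 12.0, 15.1])  # same areas, tablet

diff = paper - tablet
fixed_bias = diff.mean()  # negative value => areas drawn smaller on paper
t_stat, p_value = stats.ttest_rel(paper, tablet)
print(f"fixed bias = {fixed_bias:.4f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```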
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starke, M.; Herron, A.; King, D.
2017-08-24
Communications systems and protocols are becoming second nature to utilities operating distribution systems. Traditionally, centralized communication approaches have been used, while recently, in microgrid applications, distributed communication and control schemas have emerged, offering several advantages such as improved system reliability, plug-and-play operation and distributed intelligence. Still, the operation and control of microgrids with distributed communication schemas have received less discussion in the literature. To address the challenge of multiple-inverter microgrid synchronization, a publish-subscribe Data Distribution Service (DDS) communication schema for microgrids is proposed in this paper. The communication schema is discussed in detail for individual devices such as generators, photovoltaic systems, energy storage systems, the microgrid point-of-common-coupling switch, and supporting applications. In conclusion, islanding and resynchronization of a microgrid are demonstrated on a test bed utilizing this schema.
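As a plain illustration of the topic-based publish-subscribe pattern that DDS provides, the sketch below implements a minimal in-process topic bus. It is not DDS itself: real DDS middleware adds peer discovery, quality-of-service policies and network transport, none of which is modeled here, and the topic names and payloads are assumptions for illustration.

```python
# Minimal in-process illustration of topic-based publish-subscribe,
# the communication pattern underlying DDS-style microgrid control.
from collections import defaultdict
from typing import Callable, Dict, List

class Bus:
    """Tiny topic broker: handlers subscribe by topic, publishers fan out."""
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, sample: dict) -> None:
        for handler in self._subs[topic]:
            handler(sample)

bus = Bus()
# Each device subscribes only to the topics it needs for resynchronization.
bus.subscribe("microgrid/pcc", lambda s: print("inverter sees PCC state:", s))
bus.subscribe("microgrid/pcc", lambda s: print("storage sees PCC state:", s))

# The point-of-common-coupling switch publishes a state change; every
# subscribed device reacts without any point-to-point wiring.
bus.publish("microgrid/pcc", {"islanded": True, "freq_hz": 59.7})
```

The decoupling shown here is what yields the plug-and-play property the abstract cites: adding a device means adding a subscriber, not rewiring existing publishers.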
Aigen, Kenneth
2009-01-01
This study illustrates the use of a new musicological method for analyzing music in music therapy. It examines two pieces of clinical music through the constructs of schema theory. It begins with an argument for enhanced musical analysis in music therapy as a means of elevating the status of explanation in music therapy. Schema theory is introduced as a means of integrating musical with clinical concerns. Some basic ideas in schema theory are explained and the schemas of VERTICALITY and CONTAINER are presented as central ones in the analysis of music. Two transcriptions, one of a composed song and one of an improvisation, are examined in detail to illustrate how decisions in the temporal, melodic, and harmonic dimensions of the music are linked to specific clinical goals. The article concludes with a discussion of the implications of this type of musicological analysis for explanatory theory in music therapy.
OSCAR/Surface: Metadata for the WMO Integrated Observing System WIGOS
NASA Astrophysics Data System (ADS)
Klausen, Jörg; Pröscholdt, Timo; Mannes, Jürg; Cappelletti, Lucia; Grüter, Estelle; Calpini, Bertrand; Zhang, Wenjian
2016-04-01
The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority underpinning all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). It does this by better integrating WMO and co-sponsored observing systems, as well as partner networks. For this, an important aspect is the description of the observational capabilities by way of structured metadata. The 17th Congress of the World Meteorological Organization (Cg-17) has endorsed the semantic WIGOS metadata standard (WMDS) developed by the Task Team on WIGOS Metadata (TT-WMD). The standard comprises a set of metadata classes that are considered to be of critical importance for the interpretation of observations and the evolution of observing systems relevant to WIGOS. The WMDS serves all recognized WMO Application Areas, and its use for all internationally exchanged observational data generated by WMO Members is mandatory. The standard will be introduced in three phases between 2016 and 2020. The Observing Systems Capability Analysis and Review (OSCAR) platform operated by MeteoSwiss on behalf of WMO is the official repository of WIGOS metadata and an implementation of the WMDS. OSCAR/Surface deals with all surface-based observations from land, air and oceans, combining metadata managed by a number of complementary, more domain-specific systems (e.g., GAWSIS for the Global Atmosphere Watch, JCOMMOPS for the marine domain, the WMO Radar database). It is a modern, web-based client-server application with extended information search, filtering and mapping capabilities, including a fully developed management console to add and edit observational metadata. In addition, a powerful application programming interface (API) is being developed to allow machine-to-machine metadata exchange. The API is based on an ISO/OGC-compliant XML schema for the WMDS using the Observations and Measurements (ISO19156) conceptual model. The purpose of the presentation is to acquaint the audience with OSCAR, the WMDS and the current XML schema, and to explore its relationship to the INSPIRE XML schema. Feedback from experts in the various disciplines of meteorology, climatology, atmospheric chemistry, and hydrology on the utility of the new standard and the XML schema will be solicited and will guide WMO in further evolving the WMDS.
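To make the machine-to-machine exchange idea concrete, the sketch below assembles a small observation-capability record with Python's standard library. The element names are hypothetical stand-ins chosen for readability; they are not the actual WMDS or ISO 19156 schema, which should be consulted for real exchanges.

```python
# Sketch of assembling an observation-metadata record for exchange.
# Element names are illustrative placeholders, not the WMDS schema.
import xml.etree.ElementTree as ET

record = ET.Element("observationCapability")
ET.SubElement(record, "station").text = "Payerne"
ET.SubElement(record, "observedVariable").text = "air_temperature"
schedule = ET.SubElement(record, "schedule")
ET.SubElement(schedule, "interval").text = "PT10M"  # ISO 8601 duration, every 10 min
ET.SubElement(record, "geometry").text = "POINT(6.94 46.81)"

print(ET.tostring(record, encoding="unicode"))
```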
The Graphical Representation of the Digital Astronaut Physiology Backbone
NASA Technical Reports Server (NTRS)
Briers, Demarcus
2010-01-01
This report summarizes my internship project with the NASA Digital Astronaut Project to analyze the Digital Astronaut (DA) physiology backbone model. The Digital Astronaut Project (DAP) applies integrated physiology models to support space biomedical operations, and to assist NASA researchers in closing knowledge gaps related to human physiologic responses to space flight. The DA physiology backbone is a set of integrated physiological equations and functions that model the interacting systems of the human body. The current release of the model is HumMod (Human Model) version 1.5 and was developed over forty years at the University of Mississippi Medical Center (UMMC). The physiology equations and functions are scripted in an XML schema specifically designed for physiology modeling by Dr. Thomas G. Coleman at UMMC. Currently it is difficult to examine the physiology backbone without being knowledgeable of the XML schema. While investigating and documenting the tags and algorithms used in the XML schema, I proposed a standard methodology for a graphical representation. This standard methodology may be used to transcribe graphical representations from the DA physiology backbone. In turn, the graphical representations can allow examination of the physiological functions and equations without the need to be familiar with the computer programming languages or markup languages used by DA modeling software.
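As an illustration of how a dependency graph, the raw material for a graphical representation, could be pulled out of equation-oriented physiology XML, the sketch below parses a toy model file into directed edges. The tag names are hypothetical and are not HumMod's actual schema; they only stand in for the kind of variable-and-dependency structure such a model encodes.

```python
# Sketch: extract a variable-dependency graph from physiology-model XML.
# Tag names are invented stand-ins, not the Digital Astronaut schema.
import xml.etree.ElementTree as ET

doc = """
<model>
  <variable name="CardiacOutput">
    <depends on="HeartRate"/>
    <depends on="StrokeVolume"/>
  </variable>
  <variable name="StrokeVolume">
    <depends on="Preload"/>
  </variable>
</model>
"""

edges = []
for var in ET.fromstring(doc).iter("variable"):
    for dep in var.iter("depends"):
        edges.append((dep.get("on"), var.get("name")))  # dependency -> variable

for src, dst in edges:
    print(f"{src} -> {dst}")
```

Edges like these can be handed directly to a graph-drawing tool, which is essentially the transcription step the standardized methodology above proposes.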
2016-02-08
Data Display Markup Language (DDML) Schema Validation, RCC 126-16, February 2016. Acronyms defined in the front matter: DDML (Data Display Markup Language); HUD (heads-up display); IRIG (Inter-Range Instrumentation Group); RCC (Range Commanders Council); SVG (Scalable Vector Graphics); T&E (test and evaluation); TMATS (Telemetry Attributes Transfer Standard); XML (eXtensible Markup Language).
ERIC Educational Resources Information Center
Palkovich, Einat Natalie
2015-01-01
Mothers are essential facilitators of early Theory of Mind development and intrinsic to the acquisition, as well as the content, of many basic schemas learnt in infancy. In this article it is argued that the "mother" schema in children's literature can ease a child's transition into literary discourse by exploiting the child's…
ERIC Educational Resources Information Center
Hodnik Cadež, Tatjana; Manfreda Kolar, Vida
2015-01-01
A cognitive schema is a mechanism which allows an individual to organize her/his experiences in such a way that a new similar experience can easily be recognised and dealt with successfully. Well-structured schemas provide for the knowledge base for subsequent mathematical activities. A new experience can be assimilated into a previously existing…
A Schema-Theoretic View of Reading. Technical Report No. 32.
ERIC Educational Resources Information Center
Adams, Marilyn Jager; Collins, Allan
This paper provides a general description of schema-theoretic models of language comprehension and examines some extensions of such models to the study of reading. The goal of schema theory is to specify the interface between the reader and the text: to specify how the reader's knowledge interacts with and shapes the information on the page and to…
Exploration into the Effects of the Schema-Based Instruction: A Bottom-Up Approach
ERIC Educational Resources Information Center
Fujii, Kazuma
2016-01-01
The purpose of this paper is to explore the effective use of the core schema-based instruction (SBI) in a classroom setting. The core schema is a schematic representation of the common underlying meaning of a given lexical item, and was first proposed on the basis of the cognitive linguistic perspectives by the Japanese applied linguists Tanaka,…
ERIC Educational Resources Information Center
Ng, Chi-hung Clarence
2014-01-01
Academic self-schemas are important cognitive frames capable of guiding students' learning engagement. Using a cohort of Year 10 Australian students, this longitudinal study examined the self-congruence engagement hypothesis which maintains that there is a close relationship among academic self-schemas, achievement goals, learning approaches,…
The Evolution of a Coding Schema in a Paced Program of Research
ERIC Educational Resources Information Center
Winters, Charlene A.; Cudney, Shirley; Sullivan, Therese
2010-01-01
A major task involved in the management, analysis, and integration of qualitative data is the development of a coding schema to facilitate the analytic process. Described in this paper is the evolution of a coding schema that was used in the analysis of qualitative data generated from online forums of middle-aged women with chronic conditions who…
Estévez, Ana; Ozerinjauregi, Nagore; Herrero-Fernández, David
2016-01-01
Child sexual abuse is one of the most serious forms of abuse due to the psychological consequences that persist even into adulthood. Expressions of anger among child sexual abuse survivors remain common even years after the event. While child sexual abuse has been extensively studied, the expression of displaced aggression has been studied less. Some factors, such as the maladaptive early schemas, might account for this deficiency. The objective of this study was to analyze the relationships between child sexual abuse, displaced aggression, and these schemas according to gender and determine if these early schemas mediate the relationship between child sexual abuse and displaced aggression. A total of 168 Spanish subjects who were victims of child sexual abuse completed measures of childhood trauma, displaced aggression, and early maladaptive schemas. The results depict the relationship between child sexual abuse, displaced aggression, and early maladaptive schemas. Women scored higher than men in child sexual abuse, emotional abuse, disconnection or rejection and impaired autonomy. Mediational analysis found a significant mediation effect of disconnection or rejection on the relationship between child sexual abuse and displaced aggression; however, impaired autonomy did not mediate significantly.
PPDMs-a resource for mapping small molecule bioactivities from ChEMBL to Pfam-A protein domains.
Kruger, Felix A; Gaulton, Anna; Nowotka, Michal; Overington, John P
2015-03-01
PPDMs is a resource that maps small molecule bioactivities to protein domains from the Pfam-A collection of protein families. Small molecule bioactivities mapped to protein domains add important precision to approaches that use protein sequence searches and alignments to assist applications in computational drug discovery and in systems and chemical biology. We have previously proposed a heuristic for mapping a subset of the bioactivities stored in ChEMBL to the Pfam-A domain most likely to mediate small-molecule binding. We have since refined this mapping using a manual procedure. Here, we present a resource that provides up-to-date mappings and the possibility to review assigned mappings as well as to participate in their assignment and curation. We also describe how mappings provided through the PPDMs resource are made accessible through the main schema of the ChEMBL database. The PPDMs resource and curation interface is available at https://www.ebi.ac.uk/chembl/research/ppdms/pfam_maps. The source code for PPDMs is available under the Apache license at https://github.com/chembl/pfam_maps. Source code demonstrating the integration process with the main schema of ChEMBL is available at https://github.com/chembl/pfam_map_loader. © The Author 2014. Published by Oxford University Press.
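Conceptually, the mapping reduces to a join between bioactivity records and a target-to-domain assignment table. The sketch below shows that join in miniature with SQLite; the table names, column names and rows are hypothetical toy values, not the actual ChEMBL or PPDMs schema.

```python
# Toy sketch of the mapping concept: join bioactivities to the Pfam-A
# domain assigned as the likely binding site. All names are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE activity (act_id INTEGER, compound TEXT, target TEXT, pchembl REAL);
CREATE TABLE domain_map (target TEXT, pfam_domain TEXT);
INSERT INTO activity VALUES (1, 'compound_A', 'kinase_X', 8.2);
INSERT INTO domain_map VALUES ('kinase_X', 'Pkinase');
""")

rows = db.execute("""
    SELECT a.compound, m.pfam_domain, a.pchembl
    FROM activity a JOIN domain_map m ON a.target = m.target
""").fetchall()
print(rows)  # [('compound_A', 'Pkinase', 8.2)]
```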
Developing a GIS for CO2 analysis using lightweight, open source components
NASA Astrophysics Data System (ADS)
Verma, R.; Goodale, C. E.; Hart, A. F.; Kulawik, S. S.; Law, E.; Osterman, G. B.; Braverman, A.; Nguyen, H. M.; Mattmann, C. A.; Crichton, D. J.; Eldering, A.; Castano, R.; Gunson, M. R.
2012-12-01
There are advantages to approaching the realm of geographic information systems (GIS) using lightweight, open source components in place of a more traditional web map service (WMS) solution. Rapid prototyping, schema-less data storage, the flexible interchange of components, and open source community support are just some of the benefits. In our effort to develop an application supporting the geospatial and temporal rendering of remote sensing carbon-dioxide (CO2) data for the CO2 Virtual Science Data Environment project, we have connected heterogeneous open source components to form a GIS. Utilizing widely popular open source components, including the schema-less database MongoDB, Leaflet interactive maps, the HighCharts JavaScript graphing library, and Python Bottle web services, we have constructed a system for rapidly visualizing CO2 data with reduced up-front development costs. These components can be aggregated into a configurable stack capable of replicating features provided by more standard GIS technologies. The approach we have taken is not meant to replace the more established GIS solutions, but to instead offer a rapid way to provide GIS features early in the development of an application and to offer a path towards utilizing more capable GIS technology in the future.
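A minimal sketch of the kind of stack described follows: a Bottle web service reading point documents from a schema-less MongoDB collection, suitable for feeding a Leaflet map or HighCharts plot. The database, collection, field names and query parameters are assumptions for illustration, not the project's actual interface.

```python
# Sketch: lightweight GIS backend with Bottle + MongoDB (pymongo).
# A Leaflet front end could call /co2?w=-125&s=32&e=-114&n=42 to get
# all soundings inside a bounding box.
import json
from bottle import Bottle, request, response
from pymongo import MongoClient

app = Bottle()
collection = MongoClient("mongodb://localhost:27017")["co2"]["soundings"]

@app.get("/co2")
def co2_in_bbox():
    w, s = float(request.query.w), float(request.query.s)
    e, n = float(request.query.e), float(request.query.n)
    docs = collection.find(
        {"lon": {"$gte": w, "$lte": e}, "lat": {"$gte": s, "$lte": n}},
        {"_id": 0},  # drop Mongo's internal id; documents need no fixed schema
    )
    response.content_type = "application/json"
    return json.dumps(list(docs))

if __name__ == "__main__":
    app.run(host="localhost", port=8080)
```

Because MongoDB imposes no schema, new per-sounding attributes can be ingested without migrations, which is the rapid-prototyping benefit the abstract highlights.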
Issues and solutions for storage, retrieval, and searching of MPEG-7 documents
NASA Astrophysics Data System (ADS)
Chang, Yuan-Chi; Lo, Ming-Ling; Smith, John R.
2000-10-01
The ongoing MPEG-7 standardization activity aims at creating a standard for describing multimedia content in order to facilitate the interpretation of the associated information content. Attempting to address a broad range of applications, MPEG-7 has defined a flexible framework consisting of Descriptors, Description Schemes, and a Description Definition Language. Descriptors and Description Schemes describe features, structure and semantics of multimedia objects. They are written in the Description Definition Language (DDL). In the most recent revision, DDL applies XML (Extensible Markup Language) Schema with MPEG-7 extensions. DDL has constructs that support inclusion, inheritance, reference, enumeration, choice, sequence, and abstract types of Description Schemes and Descriptors. In order to enable multimedia systems to use MPEG-7, a number of important problems in storing, retrieving and searching MPEG-7 documents need to be solved. This paper reports initial findings on issues and solutions for storing and accessing MPEG-7 documents. In particular, we discuss the benefits of using a virtual document management framework based on an XML Access Server (XAS) in order to bridge MPEG-7 multimedia applications and database systems. The need arises partly because MPEG-7 descriptions need customized storage schemas, indexing and search engines. We also discuss issues arising in managing dependence and cross-description-scheme search.
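One common approach to the storage problem sketched here, and one an XML Access Server could take, is to keep each description intact while shredding selected descriptor values into an indexed relational table for search. The sketch below shows the idea with SQLite; the MPEG-7 element names are simplified stand-ins for real Description Schemes, not the standard's actual structure.

```python
# Sketch of the shred-and-index idea: store XML descriptions whole,
# extract chosen descriptor values into an indexed table for search.
import sqlite3
import xml.etree.ElementTree as ET

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, xml TEXT)")
db.execute("CREATE TABLE descriptor (doc_id INTEGER, path TEXT, value TEXT)")
db.execute("CREATE INDEX ix ON descriptor (path, value)")

description = "<Mpeg7><Video><Genre>news</Genre></Video></Mpeg7>"
doc_id = db.execute("INSERT INTO doc (xml) VALUES (?)", (description,)).lastrowid
for genre in ET.fromstring(description).iter("Genre"):
    db.execute("INSERT INTO descriptor VALUES (?, ?, ?)",
               (doc_id, "Video/Genre", genre.text))

# Searches run against the index; hits resolve back to untouched XML.
hit = db.execute("""
    SELECT d.xml FROM doc d JOIN descriptor x ON x.doc_id = d.id
    WHERE x.path = 'Video/Genre' AND x.value = 'news'
""").fetchone()
print(hit[0])
```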
Computational design of chimeric protein libraries for directed evolution.
Silberg, Jonathan J; Nguyen, Peter Q; Stevenson, Taylor
2010-01-01
The best approach for creating libraries of functional proteins with large numbers of nondisruptive amino acid substitutions is protein recombination, in which structurally related polypeptides are swapped among homologous proteins. Unfortunately, as more distantly related proteins are recombined, the fraction of variants having a disrupted structure increases. One way to enrich the fraction of folded and potentially interesting chimeras in these libraries is to use computational algorithms to anticipate which structural elements can be swapped without disturbing the integrity of a protein's structure. Herein, we describe how the algorithm Schema uses the sequences and structures of the recombined parent proteins to predict the structural disruption of chimeras, and we outline how dynamic programming can be used to find libraries with a range of amino acid substitution levels that are enriched in variants with low Schema disruption.
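A minimal sketch of the disruption count described above: a residue-residue contact is counted as broken when the chimera's amino-acid pair at two contacting positions does not co-occur at those positions in any single parent. The sequences, contact list and block assignment below are toy values for illustration, not data from the chapter.

```python
# Sketch of the Schema disruption count E for one chimera.
def schema_disruption(parents, contacts, chimera):
    """Count contacts whose residue pair appears in no single parent."""
    broken = 0
    for i, j in contacts:
        pair = (chimera[i], chimera[j])
        if not any((p[i], p[j]) == pair for p in parents):
            broken += 1
    return broken

parents = ["ACDEFG", "AQDKFG"]        # aligned parent sequences (toy)
contacts = [(1, 3), (2, 5)]           # contacting residue pairs, from structure
blocks = [0, 0, 0, 1, 1, 1]           # crossover: block per position -> parent index
chimera = "".join(parents[b][i] for i, b in enumerate(blocks))  # "ACDKFG"

print(chimera, "E =", schema_disruption(parents, contacts, chimera))  # E = 1
```

Averaging E over all chimeras a crossover placement generates gives the library-level score that the dynamic-programming search mentioned in the abstract minimizes.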
Prior schemata transfer as an account for assessing the intuitive use of new technology.
Fischer, Sandrine; Itoh, Makoto; Inagaki, Toshiyuki
2015-01-01
New devices are considered intuitive when they allow users to transfer prior knowledge. Drawing upon fundamental psychology experiments that distinguish prior knowledge transfer from new schema induction, a procedure was specified for assessing intuitive use. This procedure was tested with 31 participants who, prior to using an on-board computer prototype, studied its screenshots in reading vs. schema induction conditions. Distinct patterns of transfer or induction resulted for features of the prototype whose functions were familiar or unfamiliar, respectively. Though moderated by participants' cognitive style, these findings demonstrated a means for quantitatively assessing transfer of prior knowledge as the operation that underlies intuitive use. Implications for interface evaluation and design, as well as potential improvements to the procedure, are discussed. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
The Processes Involved in Designing Software.
1980-08-01
repeats itself at the next level, terminating with a plan whose individual steps can be executed to solve the initial problem. Hayes-Roth and Hayes-Roth...that the original design problem is decomposed into a collection of well-structured subproblems under the control of some type of executive process...given element to refine further, the schema is assumed to execute to completion, developing a solution model for that element and refining it into a
ERIC Educational Resources Information Center
Sadoski, Mark; And Others
1993-01-01
The comprehensibility, interestingness, familiarity, and memorability of concrete and abstract instructional texts were studied in 4 experiments involving 221 college students. Results indicate that concreteness (ease of imagery) is the variable overwhelmingly most related to comprehensibility and recall. Dual coding theory and schema theory are…