A Priority Fuzzy Logic Extension of the XQuery Language
NASA Astrophysics Data System (ADS)
Škrbić, Srdjan; Wettayaprasit, Wiphada; Saeueng, Pannipa
2011-09-01
In recent years there have been significant research findings in flexible XML querying techniques using fuzzy set theory. Many types of fuzzy extensions to the XML data model and to XML query languages have been proposed. In this paper, we introduce priority fuzzy logic extensions to the XQuery language. In describing these extensions, we define a new query language. Moreover, we describe a way to implement an interpreter for this language using an existing native XML database.
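As a concrete illustration of the idea behind such extensions, the following minimal Python sketch filters XML elements by fuzzy conditions combined with user-assigned priorities. The membership functions, weights, and element names are invented for illustration; they are not the paper's actual syntax or semantics.

```python
# Conceptual sketch: fuzzy, priority-weighted filtering of XML elements.
# All predicate names and weights below are illustrative, not the paper's language.
import xml.etree.ElementTree as ET

def cheap(price, ideal=100.0, limit=300.0):
    """Fuzzy membership: 1.0 at or below `ideal`, falling linearly to 0 at `limit`."""
    if price <= ideal:
        return 1.0
    if price >= limit:
        return 0.0
    return (limit - price) / (limit - ideal)

def recent(year, newest=2011, horizon=5):
    """Fuzzy membership: 1.0 for the newest year, decaying over `horizon` years."""
    age = max(0, newest - year)
    return max(0.0, 1.0 - age / horizon)

doc = ET.fromstring("""
<books>
  <book year="2010"><title>A</title><price>120</price></book>
  <book year="2004"><title>B</title><price>90</price></book>
</books>""")

# Priorities weight how much each fuzzy condition matters (weighted average).
priorities = {"cheap": 0.7, "recent": 0.3}
threshold = 0.5

for book in doc.findall("book"):
    score = (priorities["cheap"] * cheap(float(book.findtext("price")))
             + priorities["recent"] * recent(int(book.get("year"))))
    if score >= threshold:
        print(book.findtext("title"), round(score, 2))
```

A crisp query either keeps or drops an element; here each element instead gets a degree of match, and the priorities decide how much each condition contributes to that degree.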
EquiX: A Search and Query Language for XML.
ERIC Educational Resources Information Center
Cohen, Sara; Kanza, Yaron; Kogan, Yakov; Sagiv, Yehoshua; Nutt, Werner; Serebrenik, Alexander
2002-01-01
Describes EquiX, a search language for XML that combines querying with searching to query the data and the meta-data content of Web pages. Topics include search engines; a data model for XML documents; search query syntax; search query semantics; an algorithm for evaluating a query on a document; and indexing EquiX queries. (LRW)
2006-06-01
SPARQL (SPARQL Protocol and RDF Query Language), SQL (Structured Query Language), SUMO (Suggested Upper Merged Ontology), SW... Query optimization algorithms are implemented in the Pellet reasoner in order to ensure that querying a knowledge base is efficient. These algorithms... memory as a tree-like structure in order for the data to be queried. XML Query (XQuery) is the standard language used when querying XML
An Expressive and Efficient Language for XML Information Retrieval.
ERIC Educational Resources Information Center
Chinenyanga, Taurai Tapiwa; Kushmerick, Nicholas
2002-01-01
Discusses XML and information retrieval and describes a query language, ELIXIR (expressive and efficient language for XML information retrieval), with a textual similarity operator that can be used for similarity joins. Explains the algorithm for answering ELIXIR queries to generate intermediate relational data. (Author/LRW)
XGI: a graphical interface for XQuery creation.
Li, Xiang; Gennari, John H; Brinkley, James F
2007-10-11
XML has become the default standard for data exchange among heterogeneous data sources, and in January 2007 XQuery (XML Query language) was recommended by the World Wide Web Consortium as the query language for XML. However, XQuery is a complex language that is difficult for non-programmers to learn. We have therefore developed XGI (XQuery Graphical Interface), a visual interface for graphically generating XQuery. In this paper we demonstrate the functionality of XGI through its application to a biomedical XML dataset. We describe the system architecture and the features of XGI in relation to several existing querying systems, we demonstrate the system's usability through a sample query construction, and we discuss a preliminary evaluation of XGI. Finally, we describe some limitations of the system, and our plans for future improvements.
NASA Astrophysics Data System (ADS)
Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros
SPARQL is today the standard access language for Semantic Web data. In recent years XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries, which are then used to access the XML databases. We present the algorithms and the implementation of the SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.
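To make the translation idea concrete, the following toy sketch (assuming the rdflib package is installed) runs a SPARQL query over a tiny RDF graph and shows, for comparison, a hand-written XQuery that would retrieve the same values from an XML mapping of the data. It is illustrative only, not the output of the SPARQL2XQuery framework.

```python
# Toy illustration of the SPARQL-to-XQuery idea (not the framework's actual output).
# Requires rdflib (pip install rdflib); the XQuery string is shown for comparison only.
from rdflib import Graph

ttl = """
@prefix ex: <http://example.org/> .
ex:alice ex:name "Alice" .
ex:bob   ex:name "Bob" .
"""

sparql = """
PREFIX ex: <http://example.org/>
SELECT ?name WHERE { ?person ex:name ?name }
"""

# A semantically equivalent XQuery over a hypothetical XML mapping of the same data,
# e.g. <people><person><name>Alice</name></person>...</people>:
xquery = "for $p in doc('people.xml')//person return $p/name/text()"

g = Graph()
g.parse(data=ttl, format="turtle")
for row in g.query(sparql):
    print(row.name)            # Alice, Bob
print("equivalent XQuery:", xquery)
```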
Shuttle-Data-Tape XML Translator
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate it into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
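A minimal sketch of the kind of XPath retrieval described above, using Python's standard library; the element and attribute names are hypothetical, since the SDT schema is not given here.

```python
# XPath-style retrieval over an XML database export (hypothetical element names).
import xml.etree.ElementTree as ET

xml_db = ET.fromstring("""
<sdt>
  <record type="telemetry"><field name="id">MSID-1</field></record>
  <record type="command"><field name="id">CMD-7</field></record>
</sdt>""")

# ElementTree supports a useful subset of XPath 1.0, including attribute predicates:
for rec in xml_db.findall(".//record[@type='telemetry']"):
    print(rec.findtext("field"))   # MSID-1
```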
A Survey in Indexing and Searching XML Documents.
ERIC Educational Resources Information Center
Luk, Robert W. P.; Leong, H. V.; Dillon, Tharam S.; Chan, Alvin T. S.; Croft, W. Bruce; Allan, James
2002-01-01
Discussion of XML focuses on indexing techniques for XML documents, grouping them into flat-file, semistructured, and structured indexing paradigms. Highlights include searching techniques, including full text search and multistage search; search result presentations; database and information retrieval system integration; XML query languages; and…
An XML-Based Manipulation and Query Language for Rule-Based Information
NASA Astrophysics Data System (ADS)
Mansour, Essam; Höpfner, Hagen
Rules are utilized to assist in the monitoring process required in activities such as disease management and customer relationship management. These rules are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing the rules as a single object with a management life-cycle. This paper presents our manipulation and query language, developed to facilitate the maintenance of this object during its life-cycle and to query the information it contains. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.
A New Publicly Available Chemical Query Language, CSRML ...
A new XML-based query language, CSRML, has been developed for representing chemical substructures, molecules, reaction rules, and reactions. CSRML queries are capable of integrating additional forms of information beyond the simple substructure (e.g., SMARTS) or reaction transformation (e.g., SMIRKS, reaction SMILES) queries currently in use. Chemotypes, a term used for advanced CSRML queries intended for repeated application, can be encoded not only with connectivity and topology, but also with properties of atoms, bonds, electronic systems, or molecules. The CSRML language has been developed in parallel with a public set of chemotypes, i.e., the ToxPrint chemotypes, which are designed to provide excellent coverage of environmental, regulatory, and commercial-use chemical space, as well as to represent features and frameworks believed to be especially relevant to toxicity concerns. A software application, ChemoTyper, has also been developed and made publicly available to enable chemotype searching and fingerprinting against a target structure set. The public ChemoTyper houses the ToxPrint chemotype CSRML dictionary, as well as a reference implementation, so that the query specifications may be adopted by other chemical structure knowledge systems. The full specifications of the XML standard used in CSRML-based chemotypes are publicly available to facilitate and encourage the exchange of structural knowledge.
Concept-based query language approach to enterprise information systems
NASA Astrophysics Data System (ADS)
Niemi, Timo; Junkkari, Marko; Järvelin, Kalervo
2014-01-01
In enterprise information systems (EISs) it is necessary to model, integrate and compute very diverse data. In advanced EISs the stored data often are based both on structured (e.g. relational) and semi-structured (e.g. XML) data models. In addition, the ad hoc information needs of end-users may require the manipulation of data-oriented (structural), behavioural and deductive aspects of data. Contemporary languages capable of treating this kind of diversity suit only persons with good programming skills. In this paper we present a concept-oriented query language approach to manipulate this diversity so that the programming skill requirements are considerably reduced. In our query language, the features which need technical knowledge are hidden in application-specific concepts and structures. Therefore, users need not be aware of the underlying technology. Application-specific concepts and structures are represented by the modelling primitives of the extended RDOOM (relational deductive object-oriented modelling) which contains primitives for all crucial real world relationships (is-a relationship, part-of relationship, association), XML documents and views. Our query language also supports intensional and extensional-intensional queries, in addition to conventional extensional queries. In its query formulation, the end-user combines available application-specific concepts and structures through shared variables.
XML at the ADC: Steps to a Next Generation Data Archive
NASA Astrophysics Data System (ADS)
Shaya, E.; Blackwell, J.; Gass, J.; Oliversen, N.; Schneider, G.; Thomas, B.; Cheung, C.; White, R. A.
1999-05-01
The eXtensible Markup Language (XML) is a document markup language that allows users to specify their own tags, to create hierarchical structures to qualify their data, and to support automatic checking of documents for structural validity. It is being intensively supported by nearly every major corporate software developer. With funding from a NASA AISRP proposal, the Astronomical Data Center (ADC, http://adc.gsfc.nasa.gov) is developing an infrastructure for importation, enhancement, and distribution of data and metadata using XML as the document markup language. We discuss the preliminary Document Type Definition (DTD, at http://adc.gsfc.nasa.gov/xml), which specifies the elements and their attributes in our metadata documents. This attempts to define both the metadata of an astronomical catalog and the `header' information of an astronomical table. In addition, we give an overview of the planned flow of data through automated pipelines from authors and journal presses into our XML archive and retrieval through the web via the XML-QL Query Language and eXtensible Style Language (XSL) scripts. When completed, the catalogs and journal tables at the ADC will be tightly hyperlinked to enhance data discovery. In addition, one will be able to search on fragmentary information. For instance, one could query for a table by entering that the second author is so-and-so or that the third author is at such-and-such institution.
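The structural validation mentioned above can be demonstrated with a toy DTD (assuming the lxml package is installed); the DTD below is a stand-in, not the actual ADC DTD.

```python
# Toy stand-in for a catalog-metadata DTD, demonstrating automatic structural
# validation of documents. Requires lxml (pip install lxml).
from io import StringIO
from lxml import etree

dtd = etree.DTD(StringIO("""
<!ELEMENT catalog (title, author+)>
<!ELEMENT title  (#PCDATA)>
<!ELEMENT author (#PCDATA)>
"""))

doc = etree.fromstring("<catalog><title>Survey X</title><author>Smith</author></catalog>")
print(dtd.validate(doc))   # True: structure matches the DTD

bad = etree.fromstring("<catalog><author>Smith</author></catalog>")
print(dtd.validate(bad))   # False: the required <title> element is missing
```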
Pathology data integration with eXtensible Markup Language.
Berman, Jules J
2005-02-01
It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.
Querying archetype-based EHRs by search ontology-based XPath engineering.
Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich
2018-05-11
Legacy data and new structured data can be stored in a standardized format as XML-based EHRs in XML databases. Querying documents in these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false positive results, precision can be increased by constraining the search to certain parts of documents. A search-ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. This query specification method is introduced in practice and evaluated by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search-ontology-based XPath engineering, which reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by search-ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that applying the introduced process requires skills in ontology and software development. In future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search-ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep knowledge of XPath syntax.
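A minimal sketch of the underlying idea: search concepts from an ontology are mapped to locations in the document structure, and the generated XPath constrains a term search to those parts. The concept names and paths are hypothetical, not the authors' ontology.

```python
# Sketch of search-ontology-based XPath engineering: search concepts are mapped
# to document paths, so users never hand-write the XPath themselves.
import xml.etree.ElementTree as ET

# "Ontology": search concepts related to locations in the EHR document structure.
concept_to_path = {
    "diagnosis": ".//section[@code='dx']/entry",
    "medication": ".//section[@code='med']/entry",
}

ehr = ET.fromstring("""
<document>
  <section code="dx"><entry>chronic hepatitis</entry></section>
  <section code="med"><entry>interferon</entry></section>
</document>""")

# Constrain the free-text search to the document part the concept points at.
path = concept_to_path["diagnosis"]
hits = [e.text for e in ehr.findall(path) if "hepatitis" in (e.text or "")]
print(hits)   # ['chronic hepatitis'] -- no false positive from the medication section
```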
An XML-Based Knowledge Management System of Port Information for U.S. Coast Guard Cutters
2003-03-01
using DTDs was not chosen. XML Schema performs many of the same functions as SQL-type schemas, but differs due to the unique structure of XML documents... to access data from content files within the developed system. XPath is not equivalent to SQL. While XPath is very powerful at reaching into an XML... document and finding nodes or node sets, it is not a complete query language. For operations like joins, unions, intersections, etc., SQL is far
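The point about joins can be shown directly: the join below is one line of SQL, whereas XPath 1.0 has no general join operator and would need host-language glue. The table names are invented for illustration.

```python
# SQL join in Python's standard library; XPath 1.0 cannot express this directly.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE port(id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE visit(port_id INTEGER, cutter TEXT);
INSERT INTO port VALUES (1, 'Kodiak');
INSERT INTO visit VALUES (1, 'USCGC Alex Haley');
""")

for row in con.execute("""
    SELECT port.name, visit.cutter
    FROM port JOIN visit ON visit.port_id = port.id"""):
    print(row)   # ('Kodiak', 'USCGC Alex Haley')
```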
Computing health quality measures using Informatics for Integrating Biology and the Bedside.
Klann, Jeffrey G; Murphy, Shawn N
2013-04-19
The Health Quality Measures Format (HQMF) is a Health Level 7 (HL7) standard for expressing computable Clinical Quality Measures (CQMs). Creating tools to process HQMF queries in clinical databases will become increasingly important as the United States moves forward with its Health Information Technology Strategic Plan to Stages 2 and 3 of the Meaningful Use incentive program (MU2 and MU3). Informatics for Integrating Biology and the Bedside (i2b2) is one of the analytical databases used as part of the Office of the National Coordinator (ONC)'s Query Health platform to move toward this goal. Our goal is to integrate i2b2 with the Query Health HQMF architecture, to prepare for other HQMF use-cases (such as MU2 and MU3), and to articulate the functional overlap between i2b2 and HQMF. Therefore, we analyze the structure of HQMF, and then we apply this understanding to HQMF computation on the i2b2 clinical analytical database platform. Specifically, we develop a translator between two query languages, HQMF and i2b2, so that the i2b2 platform can compute HQMF queries. We use the HQMF structure of queries for aggregate reporting, which define clinical data elements and the temporal and logical relationships between them. We use the i2b2 XML format, which allows flexible querying of a complex clinical data repository in an easy-to-understand domain-specific language. The translator can represent nearly any i2b2-XML query as HQMF and execute in i2b2 nearly any HQMF query expressible in i2b2-XML. This translator is part of the freely available reference implementation of the QueryHealth initiative. We analyze limitations of the conversion and find it covers many, but not all, of the complex temporal and logical operators required by quality measures. HQMF is an expressive language for defining quality measures, and it will be important to understand and implement for CQM computation, in both meaningful use and population health. However, its current form might allow complexity that is intractable for current database systems (both in terms of implementation and computation). Our translator, which supports the subset of HQMF currently expressible in i2b2-XML, may represent the beginnings of a practical compromise. It is being pilot-tested in two Query Health demonstration projects, and it can be further expanded to balance computational tractability with the advanced features needed by measure developers.
Bottom-Up Evaluation of Twig Join Pattern Queries in XML Document Databases
NASA Astrophysics Data System (ADS)
Chen, Yangjun
Since the extensible markup language XML emerged as a new standard for information representation and exchange on the Internet, the problem of storing, indexing, and querying XML documents has been among the major issues of database research. In this paper, we study twig pattern matching and discuss a new algorithm for processing ordered twig pattern queries. The time complexity of the algorithm is bounded by O(|D|·|Q| + |T|·leaf_Q) and its space overhead by O(leaf_T·leaf_Q), where T stands for a document tree, Q for a twig pattern, and D is the largest data stream associated with a node q of Q, which contains the database nodes that match the node predicate at q. leaf_T (resp. leaf_Q) represents the number of leaf nodes of T (resp. Q). In addition, the algorithm can be adapted to an indexing environment with XB-trees being used.
Managing and Querying Image Annotation and Markup in XML.
Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel
2010-01-01
Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and on supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.
Dynamic XML-based exchange of relational data: application to the Human Brain Project.
Tang, Zhengming; Kadiyska, Yana; Li, Hao; Suciu, Dan; Brinkley, James F
2003-01-01
This paper discusses an approach to exporting relational data in XML format for data exchange over the web. We describe the first real-world application of SilkRoute, a middleware program that dynamically converts existing relational data to a user-defined XML DTD. The application, called XBrain, wraps SilkRoute in a Java Server Pages framework, thus permitting a web-based XQuery interface to a legacy relational database. The application is demonstrated as a query interface to the University of Washington Brain Project's Language Map Experiment Management System, which is used to manage data about language organization in the brain.
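A toy version of the relational-to-XML mapping idea (SilkRoute itself is middleware driven by user-defined DTDs; this only shows the shape of the export). The table and tag names are made up.

```python
# Toy relational-to-XML export: rows from a relational table become XML elements.
import sqlite3
import xml.etree.ElementTree as ET

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE experiment(id INTEGER, subject TEXT)")
con.executemany("INSERT INTO experiment VALUES (?, ?)",
                [(1, "S01"), (2, "S02")])

root = ET.Element("experiments")
for exp_id, subject in con.execute("SELECT id, subject FROM experiment"):
    e = ET.SubElement(root, "experiment", id=str(exp_id))
    ET.SubElement(e, "subject").text = subject

print(ET.tostring(root, encoding="unicode"))
# <experiments><experiment id="1"><subject>S01</subject></experiment>...</experiments>
```

Once the data is exposed this way, an XQuery front end can query the XML view while the data itself stays in the legacy relational store.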
TOMML: A Rule Language for Structured Data
NASA Astrophysics Data System (ADS)
Cirstea, Horatiu; Moreau, Pierre-Etienne; Reilles, Antoine
We present the TOM language, which extends JAVA with the purpose of providing high-level constructs inspired by the rewriting community. TOM thus bridges the gap between a general-purpose language and high-level specifications based on rewriting. This approach was motivated by the promotion of rule-based techniques and their integration in large-scale applications. Powerful matching capabilities along with a rich strategy language are among TOM's strong features, making it easy to use and competitive with respect to other rule-based languages. TOM is thus a natural choice for querying and transforming structured data and in particular XML documents [1]. We present here its main XML-oriented features and illustrate their use on several examples.
Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang
1999-01-01
Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230
CytoscapeRPC: a plugin to create, modify and query Cytoscape networks from scripting languages.
Bot, Jan J; Reinders, Marcel J T
2011-09-01
CytoscapeRPC is a plugin for Cytoscape which allows users to create, query and modify Cytoscape networks from any programming language which supports XML-RPC. This enables them to access Cytoscape functionality and visualize their data interactively without leaving the programming environment with which they are familiar. Install through the Cytoscape plugin manager or visit the web page: http://wiki.nbic.nl/index.php/CytoscapeRPC for the user tutorial and download. j.j.bot@tudelft.nl.
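The calling pattern from Python looks like the standard-library sketch below; the port and the example method name are placeholders, not CytoscapeRPC's documented API.

```python
# Shape of an XML-RPC call from Python's standard library. Requires a running
# XML-RPC server; the URL, port, and method name below are placeholders.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://localhost:9000")
# e.g. server.Cytoscape.createNetwork("my network")  -- hypothetical method name
print(server.system.listMethods())   # standard introspection call, if the server enables it
```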
Method for gathering and summarizing internet information
Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna
2010-04-06
A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.
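A minimal sketch of the similarity-tree display idea described in the claim: each document is linked to its most similar predecessor by cosine similarity over simple term counts. This is a generic illustration, not the patented implementation.

```python
# Sketch of a similarity tree: link each document to its most similar predecessor
# using cosine similarity over raw term counts.
from collections import Counter
from math import sqrt

docs = {"d1": "ship port weather", "d2": "port weather storm", "d3": "stock market"}

def cosine(a, b):
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    return dot / (sqrt(sum(v * v for v in ca.values())) *
                  sqrt(sum(v * v for v in cb.values())))

names = list(docs)
for i, n in enumerate(names[1:], start=1):
    parent = max(names[:i], key=lambda m: cosine(docs[n], docs[m]))
    print(f"{n} -> {parent}  (sim={cosine(docs[n], docs[parent]):.2f})")
# d2 -> d1 (sim=0.67); d3 -> d1 (sim=0.00): dissimilar documents attach far from the cluster
```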
System for gathering and summarizing internet information
Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna
2006-07-04
A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.
Method for gathering and summarizing internet information
Potok, Thomas E [Oak Ridge, TN; Elmore, Mark Thomas [Oak Ridge, TN; Reed, Joel Wesley [Knoxville, TN; Treadwell, Jim N [Louisville, TN; Samatova, Nagiza Faridovna [Oak Ridge, TN
2008-01-01
A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, the problem of how to organize and access large amounts of GML data effectively arises in applications, and research on GML databases focuses on this problem. The effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are commonly distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to GML databases. At the end, the future prospects of GML databases in GIS applications are presented.
QRFXFreeze: Queryable Compressor for RFX.
Senthilkumar, Radha; Nandagopal, Gomathi; Ronald, Daphne
2015-01-01
The verbosity of XML has been discussed again and again, and many compression techniques for XML data have been devised over the years. Some of the techniques support querying the XML database in its compressed format, while others require decompression before querying. XML compressors that support querying directly and instantaneously, with no compromise in time, are forced to compromise on space. In this paper, we propose the compressor QRFXFreeze, which not only reduces storage space but also supports efficient querying. The compressor does this without decompressing the compressed XML file. It supports all kinds of XML documents along with insert, update, and delete operations. The forte of QRFXFreeze is that the textual data are semantically compressed and indexed to reduce querying time. Experimental results show that the proposed compressor performs much better than other well-known compressors.
XML Reconstruction View Selection in XML Databases: Complexity Analysis and Approximation Scheme
NASA Astrophysics Data System (ADS)
Chebotko, Artem; Fu, Bin
Query evaluation in an XML database requires reconstructing XML subtrees rooted at nodes found by an XML query. Since XML subtree reconstruction can be expensive, one approach to improve query response time is to use reconstruction views - materialized XML subtrees of an XML document, whose nodes are frequently accessed by XML queries. For this approach to be efficient, the principal requirement is a framework for view selection. In this work, we are the first to formalize and study the problem of XML reconstruction view selection. The input is a tree T, in which every node i has a size $c_i$ and profit $p_i$, and the size limitation C. The target is to find a subset of subtrees rooted at nodes $i_1, \ldots, i_k$ respectively such that $c_{i_1} + \cdots + c_{i_k} \le C$, and $p_{i_1} + \cdots + p_{i_k}$ is maximal. Furthermore, there is no overlap between any two subtrees selected in the solution. We prove that this problem is NP-hard and present a fully polynomial-time approximation scheme (FPTAS) as a solution.
Semantic e-Science: From Microformats to Models
NASA Astrophysics Data System (ADS)
Lumb, L. I.; Freemantle, J. R.; Aldridge, K. D.
2009-05-01
A platform has been developed to transform semi-structured ASCII data into a representation based on the eXtensible Markup Language (XML). A subsequent transformation allows the XML-based representation to be rendered in the Resource Description Framework (RDF). Editorial metadata, expressed as external annotations (via the XML Pointer Language), also survives this transformation process (e.g., Lumb et al., http://dx.doi.org/10.1016/j.cageo.2008.03.009). Because the XML-to-RDF transformation uses XSLT (eXtensible Stylesheet Language Transformations), semantic microformats ultimately encode the scientific data (Lumb & Aldridge, http://dx.doi.org/10.1109/HPCS.2006.26). In building the relationship-centric representation in RDF, a Semantic Model of the scientific data is extracted. The systematic enhancement of the expressivity and richness of the scientific data results in representations of knowledge that are readily understood and manipulated by intelligent software agents. Thus scientists are able to draw upon various resources within and beyond their discipline for use in their scientific applications. Since the resulting Semantic Models are independent conceptualizations of the science itself, the representation of scientific knowledge and interaction with it can stimulate insight from different perspectives. Using the Global Geodynamics Project (GGP) for the purpose of illustration, the introduction of GGP microformats enables a Semantic Model for the GGP that can be semantically queried (e.g., via SPARQL, http://www.w3.org/TR/rdf-sparql-query). Although the present implementation uses the Open Source Redland RDF Libraries (http://librdf.org/), the approach is generalizable to other platforms and to projects other than the GGP (e.g., Baker et al., Informatics and the 2007-2008 Electronic Geophysical Year, Eos Trans. Am. Geophys. Un., 89(48), 485-486, 2008).
Labeling RDF Graphs for Linear Time and Space Querying
NASA Astrophysics Data System (ADS)
Furche, Tim; Weinzierl, Antonius; Bry, François
Indices and data structures for web querying have mostly considered tree-shaped data, reflecting the view of XML documents as tree-shaped. However, for RDF (and when querying ID/IDREF constraints in XML) the data is indisputably graph-shaped. In this chapter, we first study existing indexing and labeling schemes for RDF and other graph data, with a focus on support for efficient adjacency and reachability queries. For XML, labeling schemes are an important part of the widespread adoption of XML, in particular for mapping XML to existing (relational) database technology. However, the existing indexing and labeling schemes for RDF (and graph data in general) sacrifice one of the most attractive properties of XML labeling schemes: the constant-time (and per-node space) test for adjacency (child) and reachability (descendant). In the second part, we introduce the first labeling scheme for RDF data that retains this property and thus achieves linear time and space processing of acyclic RDF queries on a significantly larger class of graphs than previous approaches (which are mostly limited to tree-shaped data). Finally, we show how this labeling scheme can be applied to (acyclic) SPARQL queries to obtain an evaluation algorithm with time and space complexity linear in the number of resources in the queried RDF graph.
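For tree edges, the constant-time reachability test mentioned above can be realized with classic pre/post-order interval labels, as in the sketch below; extending such labels to general RDF graphs is precisely the chapter's contribution, which this sketch does not reproduce.

```python
# Pre/post-order interval labels for a tree: u reaches (is an ancestor of) v
# iff pre[u] <= pre[v] and post[v] <= post[u], giving an O(1) reachability test.
tree = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}

pre, post, clock = {}, {}, 0

def dfs(node):
    global clock
    pre[node] = clock; clock += 1
    for child in tree[node]:
        dfs(child)
    post[node] = clock; clock += 1

dfs("a")

def reaches(u, v):
    return pre[u] <= pre[v] and post[v] <= post[u]

print(reaches("a", "d"))   # True:  d is in a's subtree
print(reaches("c", "d"))   # False: d is not in c's subtree
```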
Distributed XQuery-Based Integration and Visualization of Multimodality Brain Mapping Data
Detwiler, Landon T.; Suciu, Dan; Franklin, Joshua D.; Moore, Eider B.; Poliakov, Andrew V.; Lee, Eunjung S.; Corina, David P.; Ojemann, George A.; Brinkley, James F.
2008-01-01
This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too “heavyweight” for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a “lightweight” distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP) accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts. PMID:19198662
Querying and Ranking XML Documents.
ERIC Educational Resources Information Center
Schlieder, Torsten; Meuss, Holger
2002-01-01
Discussion of XML, information retrieval, precision, and recall focuses on a retrieval technique that adopts the similarity measure of the vector space model, incorporates the document structure, and supports structured queries. Topics include a query model based on tree matching; structured queries and term-based ranking; and term frequency and…
New NED XML/VOtable Services and Client Interface Applications
NASA Astrophysics Data System (ADS)
Pevunova, O.; Good, J.; Mazzarella, J.; Berriman, G. B.; Madore, B.
2005-12-01
The NASA/IPAC Extragalactic Database (NED) provides data and cross-identifications for over 7 million extragalactic objects fused from thousands of survey catalogs and journal articles. The data cover all frequencies from radio through gamma rays and include positions, redshifts, photometry and spectral energy distributions (SEDs), sizes, and images. NED services have traditionally supplied data in HTML format for connections from Web browsers, and a custom ASCII data structure for connections by remote computer programs written in the C programming language. We describe new services that provide responses from NED queries in XML documents compliant with the international virtual observatory VOtable protocol. The XML/VOtable services support cone searches, all-sky searches based on object attributes (survey names, cross-IDs, redshifts, flux densities), and requests for detailed object data. Initial services have been inserted into the NVO registry, and others will follow soon. The first client application is a Style Sheet specification for rendering NED VOtable query results in Web browsers that support XML. The second prototype application is a Java applet that allows users to compare multiple SEDs. The new XML/VOtable output mode will also simplify the integration of data from NED into visualization and analysis packages, software agents, and other virtual observatory applications. We show an example SED from NED plotted using VOPlot. The NED website is: http://nedwww.ipac.caltech.edu.
Indexing Temporal XML Using FIX
NASA Astrophysics Data System (ADS)
Zheng, Tiankun; Wang, Xinjun; Zhou, Yingchun
XML has become an important standard for the description and exchange of information. It is of practical significance to introduce temporal information on this basis, because time is an important property in nearly every domain. Such a database can track document history and recover the state at any earlier time, and is called a temporal XML database. We propose a new feature vector on the basis of FIX, a feature-based XML index, and build an index on a temporal XML database using a B+ tree, denoted TFIX. We also put forward a new query algorithm upon it for temporal queries. Our experiments prove that this index has better performance than other kinds of XML indexes. The index can satisfy all TXPath queries with depth up to K (K > 0).
A comparison of database systems for XML-type data.
Risse, Judith E; Leunissen, Jack A M
2010-01-01
In the field of bioinformatics, interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium-sized datasets; however, when the full Medline dataset is queried, a large variation in query times is observed. There is no one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's LingPipe is a more lightweight, customizable and sufficiently fast solution. It does, however, require more initial configuration steps. For data with a changing XML structure, Sedna and BaseX as native XML database systems, or MySQL with an XML-type column, are suitable.
A Query Integrator and Manager for the Query Web
Brinkley, James F.; Detwiler, Landon T.
2012-01-01
We introduce two concepts: the Query Web as a layer of interconnected queries over the document web and the semantic web, and a Query Web Integrator and Manager (QI) that enables the Query Web to evolve. QI permits users to write, save and reuse queries over any web accessible source, including other queries saved in other installations of QI. The saved queries may be in any language (e.g. SPARQL, XQuery); the only condition for interconnection is that the queries return their results in some form of XML. This condition allows queries to chain off each other, and to be written in whatever language is appropriate for the task. We illustrate the potential use of QI for several biomedical use cases, including ontology view generation using a combination of graph-based and logical approaches, value set generation for clinical data management, image annotation using terminology obtained from an ontology web service, ontology-driven brain imaging data integration, small-scale clinical data integration, and wider-scale clinical data integration. Such use cases illustrate the current range of applications of QI and lead us to speculate about the potential evolution from smaller groups of interconnected queries into a larger query network that layers over the document and semantic web. The resulting Query Web could greatly aid researchers and others who now have to manually navigate through multiple information sources in order to answer specific questions. PMID:22531831
Noelle, G; Dudeck, J
1999-01-01
Two years after the World Wide Web Consortium (W3C) published the first specification of the eXtensible Markup Language (XML), concrete tools and applications exist for working with XML-based data. In particular, new-generation Web browsers offer great opportunities to develop new kinds of medical, web-based applications. Several data-exchange formats have become established in medicine in recent years: HL7, DICOM, EDIFACT and, in the case of Germany, xDT. Whereas communication and information exchange become increasingly important, the development of the appropriate and necessary interfaces causes problems, rising costs and effort. It has also been recognised that it is difficult to define a standardised interchange format for one of the major future developments in medical telematics: the electronic patient record (EPR) and its availability on the Internet. Whereas XML, especially in an industrial environment, is celebrated as a generic standard and a solution for all problems concerning e-commerce, only a few applications have been developed in a medical context. Nevertheless, the medical environment is an appropriate area for building XML applications: as information and communication management becomes increasingly important in medical businesses, the role of the Internet is changing quickly from an information medium to a communication medium. The first XML-based applications in healthcare show us the advantages of a future engagement of the healthcare industry in XML: such applications are open, easy to extend and cost-effective. Additionally, XML is much more than a simple new data interchange format: many proposals for data query (XQL), data presentation (XSL) and other extensions have been submitted to the W3C and partly realised in medical applications.
Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.
Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu
2006-01-11
Many proteomics initiatives require the integration of all information, with uniform criteria, from the collection of samples and data display to the publication of experimental results. The integration and exchange of these data of different formats and structures poses a great challenge. XML technology presents a promising way of handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, with marked geographic and racial differences in incidence. Although some cancer proteome databases now exist, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two query methods, keyword query and click query, are provided for accessing the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data generated from gel-based proteomics experiments. The database, as well as the PHP source code for constructing users' own proteome repositories, can be accessed at http://www.xyproteomics.org/.
NASA Astrophysics Data System (ADS)
Mueller, Wolfgang; Mueller, Henning; Marchand-Maillet, Stephane; Pun, Thierry; Squire, David M.; Pecenovic, Zoran; Giess, Christoph; de Vries, Arjen P.
2000-10-01
While in the area of relational databases interoperability is ensured by common communication protocols (e.g. ODBC/JDBC using SQL), Content-Based Image Retrieval Systems (CBIRSs) and other multimedia retrieval systems lack both a common query language and a common communication protocol. Besides its obvious short-term convenience, interoperability of systems is crucial for the exchange and analysis of user data. In this paper, we present and describe an extensible XML-based query markup language called MRML (Multimedia Retrieval Markup Language). MRML is primarily designed to ensure interoperability between different content-based multimedia retrieval systems. Further, MRML allows researchers to preserve their freedom in extending their systems as needed. MRML encapsulates multimedia queries in a way that enables multimedia (MM) query languages, MM content descriptions, MM query engines, and MM user interfaces to grow independently from each other, reaching a maximum of interoperability while ensuring a maximum of freedom for the developer. To benefit from this, only a few simple design principles have to be respected when extending MRML for one's private needs. The design of extensions within the MRML framework is described in detail in the paper. MRML has been implemented and tested for the CBIRS Viper, using the user interface Snake Charmer. Both are part of the GNU project and can be downloaded at our site.
At-sea demonstration of RF sensor tasking using XML over a worldwide network
NASA Astrophysics Data System (ADS)
Kellogg, Robert L.; Lee, Tom; Dumas, Diane; Raggo, Barbara
2003-07-01
As part of an At-Sea Demonstration for the Space and Naval Warfare Command (SPAWAR, PMW-189), a prototype RF sensor for signal acquisition and direction finding queried and received tasking via a secure worldwide Automated Data Network System (ADNS). Using Extensible Markup Language (XML) constructs, both mission and signal tasking were available for push and pull battlespace management. XML tasking was received by the USS Cape St. George (CG-71) during an exercise along the Gulf Coast of the US from a test facility at SPAWAR, San Diego, CA. Although only one ship was used in the demonstration, the intent of the software initiative was to show that a network of different RF sensors on different platforms with different capabilities could be tasked by a common web agent. A sensor software agent interpreted the XML task to match the sensor's capability. Future improvements will focus on enlarging the domain of mission tasking and incorporating report management.
XML, Ontologies, and Their Clinical Applications.
Yu, Chunjiang; Shen, Bairong
2016-01-01
The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.
Adding Hierarchical Objects to Relational Database General-Purpose XML-Based Information Management
NASA Technical Reports Server (NTRS)
Lin, Shu-Chun; Knight, Chris; La, Tracy; Maluf, David; Bell, David; Tran, Khai Peter; Gawdiak, Yuri
2006-01-01
NETMARK is a flexible, high-throughput software system for managing, storing, and rapidly searching unstructured and semi-structured documents. NETMARK transforms such documents from their original highly complex, constantly changing, heterogeneous data formats into well-structured, common data formats using Hypertext Markup Language (HTML) and/or Extensible Markup Language (XML). The software implements an object-relational database system that combines the best practices of the relational model, utilizing Structured Query Language (SQL), with those of the object-oriented, semantic database model for creating complex data. In particular, NETMARK takes advantage of the Oracle 8i object-relational database model, using physical-address data types for very efficient keyword searches of records across both context and content. NETMARK also supports multiple international standards such as WebDAV for drag-and-drop file management and SOAP for integrated information management using Web services. The document-organization and -searching capabilities afforded by NETMARK are likely to make this software attractive for use in disciplines as diverse as science, auditing, and law enforcement.
An exponentiation method for XML element retrieval.
Wichaiwong, Tanakorn
2014-01-01
XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structural queries and assigning weights are significant for determining the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. The structural information has been shown to improve the effectiveness of the search system by up to 52.60% over the BM25 baseline in terms of MAP.
Conversion of Radiology Reporting Templates to the MRRT Standard.
Kahn, Charles E; Genereaux, Brad; Langlotz, Curtis P
2015-10-01
In 2013, the Integrating the Healthcare Enterprise (IHE) Radiology workgroup developed the Management of Radiology Report Templates (MRRT) profile, which defines both the format of radiology reporting templates using an extension of Hypertext Markup Language version 5 (HTML5), and the transportation mechanism to query, retrieve, and store these templates. Of 200 English-language report templates published by the Radiological Society of North America (RSNA), initially encoded as text and in an XML schema language, 168 have been converted successfully into MRRT using a combination of automated processes and manual editing; conversion of the remaining 32 templates is in progress. The automated conversion process applied Extensible Stylesheet Language Transformation (XSLT) scripts, an XML parsing engine, and a Java servlet. The templates were validated for proper HTML5 and MRRT syntax using web-based services. The MRRT templates allow radiologists to share best-practice templates across organizations and have been uploaded to the template library to supersede the prior XML-format templates. By using MRRT transactions and MRRT-format templates, radiologists will be able to directly import and apply templates from the RSNA Report Template Library in their own MRRT-compatible vendor systems. The availability of MRRT-format reporting templates will stimulate adoption of the MRRT standard and is expected to advance the sharing and use of templates to improve the quality of radiology reports.
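The XSLT step can be illustrated with lxml; the toy stylesheet below is not the actual conversion script, just the mechanism.

```python
# Minimal XSLT run with lxml, the same mechanism the conversion pipeline used.
# The stylesheet and element names here are illustrative only. Requires lxml.
from lxml import etree

xslt = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/template">
    <section><header><xsl:value-of select="@name"/></header></section>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)
doc = etree.XML('<template name="CT Chest"/>')
print(str(transform(doc)))
# serialized result contains: <section><header>CT Chest</header></section>
```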
Yang, Chihae; Tarkhov, Aleksey; Marusczyk, Jörg; Bienfait, Bruno; Gasteiger, Johann; Kleinoeder, Thomas; Magdziarz, Tomasz; Sacher, Oliver; Schwab, Christof H; Schwoebel, Johannes; Terfloth, Lothar; Arvidson, Kirk; Richard, Ann; Worth, Andrew; Rathman, James
2015-03-23
Chemotypes are a new approach for representing molecules, chemical substructures and patterns, reaction rules, and reactions. Chemotypes are capable of integrating types of information beyond what is possible using current representation methods (e.g., SMARTS patterns) or reaction transformations (e.g., SMIRKS, reaction SMILES). Chemotypes are expressed in the XML-based Chemical Subgraphs and Reactions Markup Language (CSRML), and can be encoded not only with connectivity and topology but also with properties of atoms, bonds, electronic systems, or molecules. CSRML has been developed in parallel with a public set of chemotypes, i.e., the ToxPrint chemotypes, which are designed to provide excellent coverage of environmental, regulatory, and commercial-use chemical space, as well as to represent chemical patterns and properties especially relevant to various toxicity concerns. A software application, ChemoTyper, has also been developed and made publicly available in order to enable chemotype searching and fingerprinting against a target structure set. The public ChemoTyper houses the ToxPrint chemotype CSRML dictionary, as well as a reference implementation, so that the query specifications may be adopted by other chemical structure knowledge systems. The full specifications of the XML-based CSRML standard used to express chemotypes are publicly available to facilitate and encourage the exchange of structural knowledge.
Integrating and visualizing primary data from prospective and legacy taxonomic literature
Agosti, Donat; Penev, Lyubomir; Sautter, Guido; Georgiev, Teodor; Catapano, Terry; Patterson, David; King, David; Pereira, Serrano; Vos, Rutger Aldo; Sierra, Soraya
2015-01-01
Specimen data in taxonomic literature are among the highest quality primary biodiversity data. Innovative cybertaxonomic journals are using workflows that maintain data structure and disseminate electronic content to aggregators and other users; such structure is lost in traditional taxonomic publishing. Legacy taxonomic literature is a vast repository of knowledge about biodiversity. Currently, access to that resource is cumbersome, especially for non-specialist data consumers. Markup is a mechanism that makes this content more accessible, and is especially suited to machine analysis. Fine-grained XML (Extensible Markup Language) markup was applied to all (37) open-access articles published in the journal Zootaxa containing treatments on spiders (Order: Araneae). The markup approach was optimized to extract primary specimen data from legacy publications. These data were combined with data from articles containing treatments on spiders published in Biodiversity Data Journal where XML structure is part of the routine publication process. A series of charts was developed to visualize the content of specimen data in XML-tagged taxonomic treatments, either singly or in aggregate. The data can be filtered by several fields (including journal, taxon, institutional collection, collecting country, collector, author, article and treatment) to query particular aspects of the data. We demonstrate here that XML markup using GoldenGATE can address the challenge presented by unstructured legacy data, can extract structured primary biodiversity data which can be aggregated with and jointly queried with data from other Darwin Core-compatible sources, and show how visualization of these data can communicate key information contained in biodiversity literature. We complement recent studies on aspects of biodiversity knowledge using XML structured data to explore 1) the time lag between species discovery and description, and 2) the prevalence of rarity in species descriptions. PMID:26023286
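Once specimen data are tagged in fine-grained XML, field-based filtering of the kind described above becomes straightforward. The Python sketch below uses hypothetical element and attribute names as stand-ins for GoldenGATE-style markup; it is the querying pattern, not the actual schema, that is being illustrated.

    import xml.etree.ElementTree as ET

    # Hypothetical markup standing in for fine-grained treatment XML.
    doc = ET.fromstring("""
    <treatments>
      <treatment taxon="Araneus diadematus">
        <materialsCitation country="Bulgaria" collector="Smith"/>
        <materialsCitation country="Greece" collector="Jones"/>
      </treatment>
    </treatments>""")

    # Filter specimen records by collecting country.
    for mc in doc.iter("materialsCitation"):
        if mc.get("country") == "Bulgaria":
            print(mc.get("collector"))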
ScotlandsPlaces XML: Bespoke XML or XML Mapping?
ERIC Educational Resources Information Center
Beamer, Ashley; Gillick, Mark
2010-01-01
Purpose: The purpose of this paper is to investigate web services (in the form of parameterised URLs), specifically in the context of the ScotlandsPlaces project. This involves cross-domain querying, data retrieval and display via the development of a bespoke XML standard rather than existing XML formats and mapping between them.…
Content-Aware DataGuide with Incremental Index Update using Frequently Used Paths
NASA Astrophysics Data System (ADS)
Sharma, A. K.; Duhan, Neelam; Khattar, Priyanka
2010-11-01
The size of the WWW is increasing day by day. Due to the absence of structured data on the Web, it becomes very difficult for information retrieval tools to fully utilize Web information. As a solution to this problem, XML pages come into play, which provide structural information to users to some extent. Without efficient indexes, query processing can be quite inefficient due to an exhaustive traversal of the XML data. In this paper, an improved content-centric approach to the Content-Aware DataGuide, an indexing technique for XML databases, is proposed that uses frequently used paths from historical query logs to improve query performance. The index can be updated incrementally according to changes in the query workload, and thus the overhead of reconstruction can be minimized. Frequently used paths are extracted by applying any sequential pattern mining algorithm to subsequent queries in the query workload. After this, the data structures are incrementally updated. This indexing technique proves to be efficient, as partial matching queries can be executed efficiently and users get more relevant documents in the results.
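The paper leaves the choice of sequential pattern mining algorithm open. As a minimal stand-in, the Python sketch below simply counts label-path prefixes in a hypothetical query log and keeps those above a support threshold; the log contents and the threshold are invented for illustration.

    from collections import Counter

    # Count every path prefix in the workload; prefixes meeting the support
    # threshold are the "frequently used paths" an index could favor.
    query_log = [
        "/catalog/book/title",
        "/catalog/book/author",
        "/catalog/book/title",
        "/catalog/cd/title",
    ]

    counts = Counter()
    for q in query_log:
        steps = q.strip("/").split("/")
        for i in range(1, len(steps) + 1):
            counts["/" + "/".join(steps[:i])] += 1

    min_support = 3
    print([p for p, c in counts.items() if c >= min_support])
    # ['/catalog', '/catalog/book']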
Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick
2016-01-01
This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest. PMID:26958859
NeXML: rich, extensible, and verifiable representation of comparative data and metadata.
Vos, Rutger A; Balhoff, James P; Caravas, Jason A; Holder, Mark T; Lapp, Hilmar; Maddison, Wayne P; Midford, Peter E; Priyam, Anurag; Sukumaran, Jeet; Xia, Xuhua; Stoltzfus, Arlin
2012-07-01
In scientific research, integration and synthesis require a common understanding of where data come from, how much they can be trusted, and what they may be used for. To make such an understanding computer-accessible requires standards for exchanging richly annotated data. The challenges of conveying reusable data are particularly acute in regard to evolutionary comparative analysis, which comprises an ever-expanding list of data types, methods, research aims, and subdisciplines. To facilitate interoperability in evolutionary comparative analysis, we present NeXML, an XML standard (inspired by the current standard, NEXUS) that supports exchange of richly annotated comparative data. NeXML defines syntax for operational taxonomic units, character-state matrices, and phylogenetic trees and networks. Documents can be validated unambiguously. Importantly, any data element can be annotated, to an arbitrary degree of richness, using a system that is both flexible and rigorous. We describe how the use of NeXML by the TreeBASE and Phenoscape projects satisfies user needs that cannot be satisfied with other available file formats. By relying on XML Schema Definition, the design of NeXML facilitates the development and deployment of software for processing, transforming, and querying documents. The adoption of NeXML for practical use is facilitated by the availability of (1) an online manual with code samples and a reference to all defined elements and attributes, (2) programming toolkits in most of the languages used commonly in evolutionary informatics, and (3) input-output support in several widely used software applications. An active, open, community-based development process enables future revision and expansion of NeXML.
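NeXML's reliance on XML Schema Definition means documents can be validated mechanically. The Python/lxml sketch below shows that validation pattern with a toy inline schema; the real NeXML XSD (published by the NeXML project) would be used the same way but is not reproduced here.

    from lxml import etree

    # Toy schema standing in for the real NeXML XSD; the validation call
    # pattern is the same either way.
    schema_doc = etree.XML(b"""\
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="otus">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="otu" maxOccurs="unbounded">
              <xs:complexType><xs:attribute name="label"/></xs:complexType>
            </xs:element>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>""")

    schema = etree.XMLSchema(schema_doc)
    doc = etree.XML(b'<otus><otu label="Homo sapiens"/></otus>')
    print(schema.validate(doc))  # True for a conforming document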
iFAB Smart Manufacturing Adapting Rapidly to Product Variants (SMARTV)
2012-05-01
Excerpts: "…of all welds, only one of each can be reached as the angular approach of the robot in its current configuration, with the laser scanner (oriented at…"; "…the seam length, the exact trace of the seam can be computed from the intersection point ([X,Y]) of the two lines and their angular bisector ([Θ…"; "…php scripts is generated by using the data extracted from plan.xml, filling the appropriate language constructs with this data, and querying the…"
Development of a Google-based search engine for data mining radiology reports.
Erinjeri, Joseph P; Picus, Daniel; Prior, Fred W; Rubin, David A; Koppel, Paul
2009-08-01
The aim of this study is to develop a secure, Google-based data-mining tool for radiology reports using free and open source technologies and to explore its use within an academic radiology department. A Health Insurance Portability and Accountability Act (HIPAA)-compliant data repository, search engine and user interface were created to facilitate treatment, operations, and reviews preparatory to research. The Institutional Review Board waived review of the project, and informed consent was not required. The 2.9 million text reports, comprising 7.9 GB of disk space, were downloaded from our radiology information system to a fileserver. Extensible markup language (XML) representations of the reports were indexed using Google Desktop Enterprise search engine software. A hypertext markup language (HTML) form allowed users to submit queries to Google Desktop, and Google's XML response was interpreted by a practical extraction and report language (PERL) script, presenting ranked results in a web browser window. The query, reason for search, results, and documents visited were logged to maintain HIPAA compliance. Indexing averaged approximately 25,000 reports per hour. Keyword search of a common term like "pneumothorax" yielded the first ten most relevant results of 705,550 total results in 1.36 s. Keyword search of a rare term like "hemangioendothelioma" yielded the first ten most relevant results of 167 total results in 0.23 s; retrieval of all 167 results took 0.26 s. Data mining tools for radiology reports will improve the productivity of academic radiologists in clinical, educational, research, and administrative tasks. By leveraging existing knowledge of Google's interface, radiologists can quickly perform useful searches.
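In the system above, a thin PERL layer turns the search engine's XML response into a ranked result page. The Python sketch below plays the same role against a hypothetical response format; the element and attribute names are invented, since the actual Google Desktop response schema is not reproduced in the abstract.

    import xml.etree.ElementTree as ET

    # Hypothetical response shape; only the parse-and-rank pattern matters.
    response = ET.fromstring("""
    <results>
      <result relevance="0.92"><title>CT chest: pneumothorax</title></result>
      <result relevance="0.71"><title>CXR: apical pneumothorax</title></result>
    </results>""")

    hits = sorted(response.iter("result"),
                  key=lambda r: float(r.get("relevance")), reverse=True)
    for rank, hit in enumerate(hits, start=1):
        print(rank, hit.findtext("title"))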
XML: A Language To Manage the World Wide Web. ERIC Digest.
ERIC Educational Resources Information Center
Davis-Tanous, Jennifer R.
This digest provides an overview of XML (Extensible Markup Language), a markup language used to construct World Wide Web pages. Topics addressed include: (1) definition of a markup language, including comparison of XML with SGML (Standard Generalized Markup Language) and HTML (HyperText Markup Language); (2) how XML works, including sample tags,…
ERIC Educational Resources Information Center
Mitri, Michel
2012-01-01
XML has become the most ubiquitous format for exchange of data between applications running on the Internet. Most Web Services provide their information to clients in the form of XML. The ability to process complex XML documents in order to extract relevant information is becoming as important a skill for IS students to master as querying…
An Introduction to the Extensible Markup Language (XML).
ERIC Educational Resources Information Center
Bryan, Martin
1998-01-01
Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)
Engineering the ATLAS TAG Browser
NASA Astrophysics Data System (ADS)
Zhang, Qizhi; ATLAS Collaboration
2011-12-01
ELSSI is a web-based event metadata (TAG) browser and event-level selection service for ATLAS. In this paper, we describe some of the challenges encountered in the process of developing ELSSI, and the software engineering strategies adopted to address those challenges. Approaches to management of access to data, browsing, data rendering, query building, query validation, execution, connection management, and communication with auxiliary services are discussed. We also describe strategies for dealing with data that may vary over time, such as run-dependent trigger decision decoding. Along with examples, we illustrate how programming techniques in multiple languages (PHP, JAVASCRIPT, XML, AJAX, and PL/SQL) have been blended to achieve the required results. Finally, we evaluate features of the ELSSI service in terms of functionality, scalability, and performance.
2015-07-01
Fragment of the XML Style Guide, RCC 125-15, July 2015. Acronyms listed: ASCII (American Standard Code for Information Interchange), DAU (data acquisition unit), DDML (data display markup language), IHAL (…) Transfer Standard, URI (uniform resource identifier), W3C (World Wide Web Consortium), XML (extensible markup language), XSD (XML schema definition). The introduction notes that the next generation of telemetry systems will rely heavily on extensible markup language (XML).
Ong, Edison; Xiang, Zuoshuang; Zhao, Bin; Liu, Yue; Lin, Yu; Zheng, Jie; Mungall, Chris; Courtot, Mélanie; Ruttenberg, Alan; He, Yongqun
2017-01-01
Linked Data (LD) aims to achieve interconnected data by representing entities using Unified Resource Identifiers (URIs), and sharing information using Resource Description Frameworks (RDFs) and HTTP. Ontologies, which logically represent entities and relations in specific domains, are the basis of LD. Ontobee (http://www.ontobee.org/) is a linked ontology data server that stores ontology information using RDF triple store technology and supports query, visualization and linkage of ontology terms. Ontobee is also the default linked data server for publishing and browsing biomedical ontologies in the Open Biological Ontology (OBO) Foundry (http://obofoundry.org) library. Ontobee currently hosts more than 180 ontologies (including 131 OBO Foundry Library ontologies) with over four million terms. Ontobee provides a user-friendly web interface for querying and visualizing the details and hierarchy of a specific ontology term. Using the eXtensible Stylesheet Language Transformation (XSLT) technology, Ontobee is able to dereference a single ontology term URI, and then output RDF/eXtensible Markup Language (XML) for computer processing or display the HTML information on a web browser for human users. Statistics and detailed information are generated and displayed for each ontology listed in Ontobee. In addition, a SPARQL web interface is provided for custom advanced SPARQL queries of one or multiple ontologies. PMID:27733503
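Ontobee exposes a SPARQL web interface for custom queries. To stay self-contained, the Python sketch below runs the same kind of SPARQL label lookup with rdflib against a small in-memory graph; the term URI is a made-up placeholder, and querying Ontobee's live endpoint would additionally require its endpoint details, which are not given here.

    from rdflib import Graph, URIRef, Literal, RDFS

    # In-memory stand-in for an RDF triple store such as Ontobee's.
    g = Graph()
    term = URIRef("http://purl.obolibrary.org/obo/EXAMPLE_0000001")  # placeholder
    g.add((term, RDFS.label, Literal("example ontology term")))

    results = g.query("""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?term ?label
        WHERE { ?term rdfs:label ?label . }
    """)
    for row in results:
        print(row.term, row.label)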
Information persistence using XML database technology
NASA Astrophysics Data System (ADS)
Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.
2005-05-01
The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network-centric core enterprise services and, when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way. This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and improved retrieval techniques.
GMZ: A GML Compression Model for WebGIS
NASA Astrophysics Data System (ADS)
Khandelwal, A.; Rajan, K. S.
2017-09-01
Geography Markup Language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience to define custom feature profiles in GML for specific needs, as seen in the widely popular CityGML, simple features profile, coverage, etc. The simple features profile (SFP) is a simpler subset of GML with support for point, line and polygon geometries. SFP has been constructed to make sure it covers the most commonly used GML geometries. The Web Feature Service (WFS) serves query results in SFP by default. But SFP falls short of being an ideal choice due to its high verbosity and size-heavy nature, which provides immense scope for compression. GMZ is a lossless compression model developed to work for SFP-compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.
Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze
2013-04-01
Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Harmonised information exchange between decentralised food composition database systems.
Pakkala, H; Christensen, T; de Victoria, I Martínez; Presser, K; Kadvan, A
2010-11-01
The main aim of the European Food Information Resource (EuroFIR) project is to develop and disseminate a comprehensive, coherent and validated data bank for the distribution of food composition data (FCD). This can only be accomplished by harmonising food description and data documentation and by the use of standardised thesauri. The data bank is implemented through a network of local FCD storages (usually national) under the control and responsibility of the local (national) EuroFIR partner. The implementation of the system based on the EuroFIR specifications is under development. The data interchange happens through the EuroFIR Web Services interface, allowing the partners to implement their system using methods and software suitable for the local computer environment. The implementation uses common international standards, such as Simple Object Access Protocol, Web Service Description Language and Extensible Markup Language (XML). A specifically constructed EuroFIR search facility (eSearch) was designed for end users. The EuroFIR eSearch facility compiles queries using a specifically designed Food Data Query Language and sends a request to those network nodes linked to the EuroFIR Web Services that will most likely have the requested information. The retrieved FCD are compiled into a specifically designed data interchange format (the EuroFIR Food Data Transport Package) in XML, which is sent back to the EuroFIR eSearch facility as the query response. The same request-response operation happens in all the nodes that have been selected in the EuroFIR eSearch facility for a certain task. Finally, the FCD are combined by the EuroFIR eSearch facility and delivered to the food compiler. The implementation of FCD interchange using decentralised computer systems instead of traditional data-centre models has several advantages. First of all, the local partners have more control over their FCD, which will increase commitment and improve quality. Second, a multicentred solution is more economically viable than the creation of a centralised data bank, because of the lack of national political support for multinational systems.
Speed up of XML parsers with PHP language implementation
NASA Astrophysics Data System (ADS)
Georgiev, Bozhidar; Georgieva, Adriana
2012-11-01
In this paper, the authors introduce PHP5's XML implementation and show how to read, parse, and write a short and uncomplicated XML file using SimpleXML in a PHP environment. The possibilities for using the PHP5 language and the XML standard together are described. The details of the parsing process with SimpleXML are also clarified. A practical PHP-XML-MySQL project presents the advantages of XML implementation in PHP modules. This approach allows comparatively simple searching of XML hierarchical data by means of PHP software tools. The proposed project includes a database, which can be extended with new data and new XML parsing functions.
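To keep this collection's examples in a single language, here is a rough Python analogue of the SimpleXML read-parse-write flow the paper demonstrates in PHP; the file name and element names are invented.

    import xml.etree.ElementTree as ET

    root = ET.fromstring("<books><book><title>XML and PHP</title></book></books>")

    # Read/parse: iterate child elements much as SimpleXML exposes them.
    for book in root.findall("book"):
        print(book.findtext("title"))

    # Write: modify the tree and serialize it back out, as asXML() would.
    ET.SubElement(root.find("book"), "year").text = "2012"
    ET.ElementTree(root).write("books.xml", encoding="utf-8", xml_declaration=True)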
XML and E-Journals: The State of Play.
ERIC Educational Resources Information Center
Wusteman, Judith
2003-01-01
Discusses the introduction of the use of XML (Extensible Markup Language) in publishing electronic journals. Topics include standards, including DTDs (Document Type Definition), or document type definitions; aggregator requirements; SGML (Standard Generalized Markup Language); benefits of XML for e-journals; XML metadata; the possibility of…
The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition
NASA Astrophysics Data System (ADS)
Fong, Joseph; Cheung, San Kuen
In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics, and system analysts have no toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; the terms in the figure are explained in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is translated by the XSD-Translator, whose output is an XSD, which is our target and is called the object language.
XML Content Finally Arrives on the Web!
ERIC Educational Resources Information Center
Funke, Susan
1998-01-01
Explains extensible markup language (XML) and how it differs from hypertext markup language (HTML) and standard generalized markup language (SGML). Highlights include features of XML, including better formatting of documents, better searching capabilities, multiple uses for hyperlinking, and an increase in Web applications; Web browsers; and what…
XML Schema Languages: Beyond DTD.
ERIC Educational Resources Information Center
Ioannides, Demetrios
2000-01-01
Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)
PIML: the Pathogen Information Markup Language.
He, Yongqun; Vines, Richard R; Wattam, Alice R; Abramochkin, Georgiy V; Dickerman, Allan W; Eckart, J Dana; Sobral, Bruno W S
2005-01-01
A vast amount of information about human, animal and plant pathogens has been acquired, stored and displayed in varied formats through different resources, both electronically and otherwise. However, there is no community standard format for organizing this information or agreement on machine-readable format(s) for data exchange, thereby hampering interoperation efforts across information systems harboring such infectious disease data. The Pathogen Information Markup Language (PIML) is a free, open, XML-based format for representing pathogen information. XSLT-based visual presentations of valid PIML documents were developed and can be accessed through the PathInfo website or as part of the interoperable web services federation known as ToolBus/PathPort. Currently, detailed PIML documents are available for 21 pathogens deemed of high priority with regard to public health and national biological defense. A dynamic query system allows simple queries as well as comparisons among these pathogens. Continuing efforts are being made to include other groups supporting PIML and to develop more PIML documents. All PIML-related information is accessible from http://www.vbi.vt.edu/pathport/pathinfo/
The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.
Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi
2005-04-15
Bioinformatics resources for glycomics are very poor as compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html. Contact: kikuchi@hydra.mki.co.jp.
NASA Technical Reports Server (NTRS)
Vanderbilt, Peter
1999-01-01
This paper gives an overview of GXD, a framework facilitating publication and use of data from diverse data sources. GXD defines an object-oriented data model designed to represent a wide range of things including data, its metadata, resources, and query results. GXD also defines a data transport language, a dialect of XML, for representing instances of the data model. This language allows for a wide range of data source implementations by supporting both the direct incorporation of data and the specification of data by various rules. The GXD software library, prototyped in Java, includes client and server runtimes. The server runtime facilitates the generation of entities containing data encoded in the GXD transport language. The GXD client runtime interprets these entities (potentially from many data sources) to create an illusion of a globally interconnected data space, one that is independent of data source location and implementation.
WellnessRules: A Web 3.0 Case Study in RuleML-Based Prolog-N3 Profile Interoperation
NASA Astrophysics Data System (ADS)
Boley, Harold; Osmun, Taylor Michael; Craig, Benjamin Larry
An interoperation study, WellnessRules, is described, where rules about wellness opportunities are created by participants in rule languages such as Prolog and N3, and translated within a wellness community using RuleML/XML. The wellness rules are centered around participants, as profiles, encoding knowledge about their activities conditional on the season, the time-of-day, the weather, etc. This distributed knowledge base extends FOAF profiles with a vocabulary and rules about wellness group networking. The communication between participants is organized through Rule Responder, permitting wellness-profile translation and distributed querying across engines. WellnessRules interoperates between rules and queries in the relational (Datalog) paradigm of the pure-Prolog subset of POSL and in the frame (F-logic) paradigm of N3. An evaluation of Rule Responder instantiated for WellnessRules revealed acceptable Web response times.
Development of XML Schema for Broadband Digital Seismograms and Data Center Portal
NASA Astrophysics Data System (ADS)
Takeuchi, N.; Tsuboi, S.; Ishihara, Y.; Nagao, H.; Yamagishi, Y.; Watanabe, T.; Yanaka, H.; Yamaji, H.
2008-12-01
There are a number of data centers around the globe where digital broadband seismograms are open to researchers. Each center has its own user interface, and there is no standard for accessing and retrieving seismograms from different data centers through a unified interface. One of the emergent technologies for realizing a unified user interface across data centers is the concept of the WebService and the WebService portal. Here we have developed a prototype data center portal for digital broadband seismograms. This WebService portal uses WSDL (Web Services Description Language) to accommodate differences among the data centers; by using WSDL, alteration and addition of data center user interfaces can be easily managed. This portal, called the NINJA Portal, assumes three WebServices: (1) a database query service, (2) a seismic event data request service, and (3) a seismic continuous data request service. The current system supports the station search of the database query service and the seismic continuous data request service. Initially, the data centers supported by the NINJA Portal will be the OHP data center in ERI and the Pacific21 data center in IFREE/JAMSTEC. We have developed a metadata standard for seismological data based on QuakeML (developed by ETH Zurich) for parametric data and XML-SEED (developed by IFREE/JAMSTEC) for waveform data. The prototype of the NINJA Portal is now released through the IFREE web page (http://www.jamstec.go.jp/pacific21/).
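Because each participating center is wrapped behind a WSDL-described WebService, a single client can talk to all of them uniformly. The Python sketch below uses the zeep SOAP library against a hypothetical WSDL URL with a hypothetical operation name; the portal's real service and operation names are not given in the abstract.

    from zeep import Client

    # Both the WSDL URL and the operation name are hypothetical.
    client = Client("http://example.org/ninja/stations?wsdl")
    stations = client.service.QueryStations(network="OHP", channel="BHZ")
    for station in stations:
        print(station)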
ERIC Educational Resources Information Center
Gazan, Rich
2000-01-01
Surveys the current state of Extensible Markup Language (XML), a metalanguage for creating structured documents that describe their own content, and its implications for information professionals. Predicts that XML will become the common language underlying Web, word processing, and database formats. Also discusses Extensible Stylesheet Language…
A Survey and Analysis of Access Control Architectures for XML Data
2006-03-01
Excerpts: "4. XML Query Engines" (section heading); "…castle and the drawbridge over the moat. Extending beyond the visual analogy, there are many key components to the protection of information and…technology. While XML's original intent was to enable large-scale electronic publishing over the internet, its functionality is firmly rooted in its…"
Ordered Backward XPath Axis Processing against XML Streams
NASA Astrophysics Data System (ADS)
Nizar M., Abdul; Kumar, P. Sreenivasa
Processing of backward XPath axes against XML streams is challenging for two reasons: (i) data is not cached for future access, and (ii) the query contains steps specifying navigation to data that has already passed by. While there are some attempts to process the parent and ancestor axes, there are very few proposals to process the ordered backward axes, namely preceding and preceding-sibling. For ordered backward axis processing, the algorithm, in addition to overcoming the limitations on data availability, has to take care of the ordering constraints imposed by these axes. In this paper, we show how ordered backward axes can be effectively represented using forward constraints. We then discuss an algorithm for XML stream processing of XPath expressions containing ordered backward axes. The algorithm uses a layered cache structure to systematically accumulate query results. Our experiments show that the new algorithm gains remarkable speed-up over the existing algorithm without compromising on buffer-space requirements.
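The core rewriting idea can be shown compactly: a backward step such as //b/preceding-sibling::a becomes a forward constraint, with earlier siblings buffered until a later <b> confirms the match. The Python sketch below is a minimal single-pass illustration of that idea only, not the paper's layered-cache algorithm.

    import xml.etree.ElementTree as ET
    from io import BytesIO

    xml = b"<r><a>1</a><c/><a>2</a><b/><a>3</a></r>"
    buffers = []   # one buffer of already-seen <a> children per open element
    results = []

    for event, elem in ET.iterparse(BytesIO(xml), events=("start", "end")):
        if event == "start":
            buffers.append([])          # buffer for elem's own children
        else:                           # end of elem
            buffers.pop()               # discard elem's child buffer
            if buffers:                 # elem is a child of the enclosing element
                if elem.tag == "b":
                    results.extend(buffers[-1])   # earlier <a> siblings match
                elif elem.tag == "a":
                    buffers[-1].append(elem.text)

    print(results)  # ['1', '2'] -- the <a> siblings preceding the <b>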
TMATS/ IHAL/ DDML Schema Validation
2017-02-01
Fragment of TMATS/IHAL/DDML Schema Validation, RCC 126-17, February 2017: the task was to create a method for performing IRIG eXtensible Markup Language (XML) schema validation, as opposed to XML instance document validation. Acronyms listed: DDML (Data Display Markup Language), HUD (heads-up display), iNET (…), XML (eXtensible Markup Language).
Structuring Legacy Pathology Reports by openEHR Archetypes to Enable Semantic Querying.
Kropf, Stefan; Krücken, Peter; Mueller, Wolf; Denecke, Kerstin
2017-05-18
Clinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured using section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents since keyword searches applied on complete unstructured documents result in many false positive retrieval results. We are concentrating on the processing of pathology reports as an example for unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables an improved access and retrieval of relevant data. The data is expected to be stored in a standardized, structured way to make it accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse. Our processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. For enabling a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualized by XSLT in HTML. Pathology reports (PRs) can be reliably structured into sections by a keyword-based approach. The information modelling using openEHR allows saving time in the modelling process since many archetypes can be reused. The resulting standardized, structured PEHRs allow accessing relevant data by retrieving data matching user queries. Mapping unstructured reports into a standardized information model is a practical solution for a better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focussing the retrieval to particular sections has the potential of saving retrieval time and improving the accuracy of the retrieval.
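The section boundary detection step above is keyword-based. The Python sketch below is a minimal version of that idea; the header keywords and the sample report are invented examples, not the paper's actual list.

    # Minimal keyword-based section splitter; headers are invented examples.
    HEADERS = ("clinical information", "macroscopy", "microscopy", "diagnosis")

    def split_sections(report):
        sections, current = {}, None
        for line in report.splitlines():
            key = line.strip().rstrip(":").lower()
            if key in HEADERS:
                current = key
                sections[current] = []
            elif current is not None:
                sections[current].append(line.strip())
        return {k: " ".join(v) for k, v in sections.items()}

    report = """Clinical information:
    Suspected basal cell carcinoma.
    Microscopy:
    Nests of basaloid cells with peripheral palisading.
    Diagnosis:
    Basal cell carcinoma, nodular type."""
    print(split_sections(report)["diagnosis"])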
ERIC Educational Resources Information Center
Scharf, David
2002-01-01
Discusses XML (extensible markup language), particularly as it relates to libraries. Topics include organizing information; cataloging; metadata; similarities to HTML; organizations dealing with XML; making XML useful; a history of XML; the semantic Web; related technologies; XML at the Library of Congress; and its role in improving the…
XML — an opportunity for
NASA Astrophysics Data System (ADS)
Houlding, Simon W.
2001-08-01
Extensible markup language (XML) is a recently introduced meta-language standard on the Web. It provides the rules for the development of metadata (markup) standards for information transfer in specific fields. XML allows the development of markup languages that describe what information is, rather than how it should be presented; this allows computer applications to process the information in intelligent ways. In contrast, hypertext markup language (HTML), which fuelled the initial growth of the Web, is a metadata standard concerned exclusively with the presentation of information. Besides its potential for revolutionizing Web activities, XML provides an opportunity for the development of meaningful data standards in specific application fields. The rapid endorsement of XML by science, industry and e-commerce has already spawned new metadata standards in such fields as mathematics, chemistry, astronomy, multi-media and Web micro-payments. Development of XML-based data standards in the geosciences would significantly reduce the effort currently wasted on manipulating and reformatting data between different computer platforms and applications and would ensure compatibility with the new generation of Web browsers. This paper explores the evolution, benefits and status of XML and related standards in the more general context of Web activities and uses this as a platform for discussion of its potential for the development of data standards in the geosciences. Some of the advantages of XML are illustrated by a simple, browser-compatible demonstration of XML functionality applied to a borehole log dataset. The XML dataset and the associated stylesheet and schema declarations are available for FTP download.
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2018-01-01
This research shows a protocol to assess the computational complexity of querying relational and non-relational (NoSQL (not only Structured Query Language)) standardized electronic health record (EHR) medical information database systems (DBMS). It uses a set of three doubling-sized databases, i.e. databases storing 5000, 10,000 and 20,000 realistic standardized EHR extracts, in three different database management systems (DBMS): relational MySQL object-relational mapping (ORM), document-based NoSQL MongoDB, and native extensible markup language (XML) NoSQL eXist. The average response times to six complexity-increasing queries were computed, and the results showed a linear behavior in the NoSQL cases. In the NoSQL field, MongoDB presents a much flatter linear slope than eXist. NoSQL systems may also be more appropriate to maintain standardized medical information systems due to the special nature of the updating policies of medical information, which should not affect the consistency and efficiency of the data stored in NoSQL databases. One limitation of this protocol is the lack of direct results of improved relational systems such as archetype relational mapping (ARM) with the same data. However, the interpolation of doubling-size database results to those presented in the literature and other published results suggests that NoSQL systems might be more appropriate in many specific scenarios and problems to be solved. For example, NoSQL may be appropriate for document-based tasks such as EHR extracts used in clinical practice, or edition and visualization, or situations where the aim is not only to query medical information, but also to restore the EHR in exactly its original form. PMID:29608174
Masseroli, Marco; Marchente, Mario
2008-07-01
We present X-PAT, a platform-independent software prototype that is able to manage patient referral multimedia data in an intranet network scenario according to the specific control procedures of a healthcare institution. It is a self-developed storage framework based on a file system, implemented in eXtensible Markup Language (XML) and the PHP Hypertext Preprocessor language, and addressed to the requirements of smaller healthcare entities (small hospitals, private medical centers, outpatient clinics, and laboratories). In X-PAT, healthcare data descriptions, stored in a novel Referral Base Management System (RBMS) according to the Health Level 7 Clinical Document Architecture Release 2 (CDA R2) standard, can be easily adapted to the specific data and organizational procedures of a particular healthcare working environment, thanks also to the use of standard clinical terminology. Managed data, centralized on a server, are structured in the RBMS schema using a flexible patient record and CDA healthcare referral document structures based on XML technology. A novel search engine allows defining and performing queries on stored data, whose rapid execution is ensured by expandable RBMS indexing structures. Healthcare personnel can interact with the X-PAT system, according to applied state-of-the-art privacy and security measures, through friendly and intuitive Web pages that facilitate user acceptance.
A Leaner, Meaner Markup Language.
ERIC Educational Resources Information Center
Online & CD-ROM Review, 1997
1997-01-01
In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…
ERIC Educational Resources Information Center
Lewis, John D.
1998-01-01
Describes XML (extensible markup language), a new language classification submitted to the World Wide Web Consortium that is defined in terms of both SGML (Standard Generalized Markup Language) and HTML (Hypertext Markup Language), specifically designed for the Internet. Limitations of PDF (Portable Document Format) files for electronic journals…
XML: A Publisher's Perspective.
ERIC Educational Resources Information Center
Andrews, Timothy M.
1999-01-01
Explains eXtensible Markup Language (XML) and describes how Dow Jones Interactive is using it to improve the news-gathering and dissemination process through intranets and the World Wide Web. Discusses benefits of using XML, the relationship to HyperText Markup Language (HTML), lack of available software tools and industry support, and future…
XML technology planning database : lessons learned
NASA Technical Reports Server (NTRS)
Some, Raphael R.; Neff, Jon M.
2005-01-01
A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return-on-investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.
Bacon, James; Tardella, Neil; Pratt, Janey; Hu, John; English, James
2006-01-01
Under contract with the Telemedicine & Advanced Technology Research Center (TATRC), Energid Technologies is developing a new XML-based language for describing surgical training exercises, the Surgical Simulation and Training Markup Language (SSTML). SSTML must represent everything from organ models (including tissue properties) to surgical procedures. SSTML is an open language (i.e., freely downloadable) that defines surgical training data through an XML schema. This article focuses on the data representation of the surgical procedures and organ modeling, as they highlight the need for a standard language and illustrate the features of SSTML. Integration of SSTML with software is also discussed.
An XML-Based Mission Command Language for Autonomous Underwater Vehicles (AUVs)
2003-06-01
Excerpts: "…languages because it does not directly specify how information is to be presented, but rather defines the structure (and thus semantics) of the…command and control (C2) aspects of using XML to increase the utility of AUVs. XML programming will be addressed. Current mine warfare doctrine will be…" References cited include Deitel et al., XML: How To Program (Prentice Hall, Upper Saddle River, New Jersey, 2001) and the W3C Digital Signature Activity Statement (www.w3.org/Signature).
106-17 Telemetry Standards Metadata Configuration Chapter 23
2017-07-01
Fragment of Telemetry Standards, RCC 106-17, Chapter 23 (Metadata Configuration), July 2017, covering the Metadata Description Language. Acronyms listed: HTML (Hypertext Markup Language), MDL (Metadata Description Language), PCM (pulse code modulation), TMATS (Telemetry Attributes Transfer Standard), W3C (World Wide Web Consortium), XML (eXtensible Markup Language), XSD (XML schema document).
Adding XML to the MIS Curriculum: Lessons from the Classroom
ERIC Educational Resources Information Center
Wagner, William P.; Pant, Vik; Hilken, Ralph
2008-01-01
eXtensible Markup Language (XML) is a new technology that is currently being extolled by many industry experts and software vendors. Potentially it represents a platform independent language for sharing information over networks in a way that is much more seamless than with previous technologies. It is extensible in that XML serves as a "meta"…
Compression of Probabilistic XML Documents
NASA Astrophysics Data System (ADS)
Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice
Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents; it can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
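As a minimal illustration of generic DAG-compression (the simple technique the paper combines with PXML-specific methods, though not the authors' implementation), the Python sketch below hash-conses identical subtrees so each distinct subtree is stored only once.

    import xml.etree.ElementTree as ET

    # Hash-cons subtrees: identical (tag, text, children) triples share an id.
    def dag_id(elem, table, nodes):
        key = (elem.tag, (elem.text or "").strip(),
               tuple(dag_id(child, table, nodes) for child in elem))
        if key not in table:
            table[key] = len(nodes)
            nodes.append(key)
        return table[key]

    doc = ET.fromstring(
        "<db><rec><a>1</a><b>2</b></rec><rec><a>1</a><b>2</b></rec></db>")
    table, nodes = {}, []
    dag_id(doc, table, nodes)
    print(sum(1 for _ in doc.iter()), "tree nodes ->", len(nodes), "DAG nodes")
    # 7 tree nodes -> 4 DAG nodes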
Agile Datacube Analytics (not just) for the Earth Sciences
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Merticariu, Vlad; Baumann, Peter
2017-04-01
Metadata are considered small, smart, and queryable; data, on the other hand, are known as big, clumsy, hard to analyze. Consequently, gridded data - such as images, image timeseries, and climate datacubes - are managed separately from the metadata, and with different, restricted retrieval capabilities. One reason for this silo approach is that databases, while good at tables, XML hierarchies, RDF graphs, etc., traditionally do not support multi-dimensional arrays well. This gap is being closed by Array Databases, which extend the SQL paradigm of "any query, anytime" to NoSQL arrays. They introduce semantically rich modelling combined with declarative, high-level query languages on n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. This way, they offer new vistas in flexibility, scalability, performance, and data integration. In this respect, the forthcoming ISO SQL extension MDA ("Multi-dimensional Arrays") will be a game changer in Big Data Analytics. We introduce concepts and opportunities through the example of rasdaman ("raster data manager"), which in fact has pioneered the field of Array Databases and forms the blueprint for ISO SQL/MDA and further Big Data standards, such as OGC WCPS for querying spatio-temporal Earth datacubes. With operational installations exceeding 140 TB, queries have been split across more than one thousand cloud nodes, using CPUs as well as GPUs. Installations can easily be mashed up securely, enabling large-scale location-transparent query processing in federations. Federation queries have been demonstrated live at EGU 2016, spanning Europe and Australia in the context of the intercontinental EarthServer initiative, visualized through NASA WorldWind.
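To give a flavor of such declarative datacube queries, here is a hedged Java sketch that submits a WCPS query over HTTP; the endpoint URL and the coverage/axis names (AvgLandTemp, Lat, Long, ansi) are illustrative placeholders rather than an actual EarthServer deployment.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class WcpsQueryDemo {
        public static void main(String[] args) throws Exception {
            // WCPS: server-side extraction of a time series from a datacube,
            // encoded as CSV. Coverage and axis names are illustrative.
            String wcps = "for $c in (AvgLandTemp) "
                    + "return encode($c[Lat(53.08), Long(8.80), "
                    + "ansi(\"2014-01\":\"2014-12\")], \"csv\")";
            String url = "https://example.org/rasdaman/ows"
                    + "?service=WCS&version=2.0.1&request=ProcessCoverages"
                    + "&query=" + URLEncoder.encode(wcps, StandardCharsets.UTF_8);
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(URI.create(url)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.body()); // e.g. monthly averages as CSV
        }
    }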
Integrating personalized medical test contents with XML and XSL-FO.
Toddenroth, Dennis; Dugas, Martin; Frankewitsch, Thomas
2011-03-01
In 2004 the adoption of a modular curriculum at the medical faculty in Muenster led to the introduction of centralized examinations based on multiple-choice questions (MCQs). We report on how the organizational challenges of realizing faculty-wide personalized tests were addressed by implementation of a specialized software module that automatically generates test sheets from individual test registrations and MCQ contents. Key steps of the presented method for preparing personalized test sheets are (1) the compilation of relevant item contents and graphical media from a relational database with database queries, (2) the creation of Extensible Markup Language (XML) intermediates, and (3) the transformation into paginated documents. Using an open-source print formatter, the software module consistently produced high-quality test sheets, while the blending of vectorized textual contents and pixel graphics resulted in efficient output file sizes. Concomitantly, the module permitted individual randomization of item sequences to prevent illicit collusion. The automatic generation of personalized MCQ test sheets is feasible using freely available open-source software libraries, and can be efficiently deployed on a faculty-wide scale.
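The article credits an unnamed open-source print formatter; as a hedged illustration of step (3), the sketch below uses the standard embedding pattern of Apache FOP, a representative open-source XSL-FO processor, to turn an XML intermediate into a paginated PDF. The stylesheet and file names are placeholders.

    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.sax.SAXResult;
    import javax.xml.transform.stream.StreamSource;
    import org.apache.fop.apps.Fop;
    import org.apache.fop.apps.FopFactory;
    import org.apache.fop.apps.MimeConstants;

    public class TestSheetRenderer {
        public static void main(String[] args) throws Exception {
            FopFactory fopFactory = FopFactory.newInstance(new File(".").toURI());
            try (OutputStream out = new BufferedOutputStream(
                    new FileOutputStream("test-sheet.pdf"))) {
                // The XSLT maps the XML intermediate (registrations + MCQ items)
                // to XSL-FO; FOP then paginates the FO into PDF.
                Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(new File("mcq-to-fo.xsl")));
                t.transform(new StreamSource(new File("personalized-test.xml")),
                        new SAXResult(fop.getDefaultHandler()));
            }
        }
    }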
A Practical Introduction to the XML, Extensible Markup Language, by Way of Some Useful Examples
ERIC Educational Resources Information Center
Snyder, Robin
2004-01-01
XML, Extensible Markup Language, is important as a way to represent and encapsulate the structure of underlying data in a portable way that supports data exchange regardless of the physical storage of the data. This paper (and session) introduces some useful and practical aspects of XML technology for sharing information in an educational setting…
PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan
Kinjo, Akira R.; Yamashita, Reiko; Nakamura, Haruki
2010-01-01
This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served to the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in a way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/ PMID:20798081
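A hedged sketch of what querying such XPath-named tables can look like over JDBC; the table and column names below are invented for illustration and do not reproduce the actual PDBj Mine schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class XPathTableQuery {
        public static void main(String[] args) throws Exception {
            // Each XPath type is stored in a relational table named by the XPath
            // itself, so the quoted table name below is itself an XPath
            // (illustrative, not the real PDBj Mine naming).
            String sql = "SELECT pdbid, val FROM \"/datablock/struct/title\" "
                       + "WHERE val ILIKE ?";
            try (Connection c = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/pdbml", "user", "secret");
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setString(1, "%kinase%");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("pdbid") + ": "
                                + rs.getString("val"));
                    }
                }
            }
        }
    }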
ERIC Educational Resources Information Center
Tennant, Roy, Ed.
This book presents examples of how libraries are using XML (eXtensible Markup Language) to solve problems, expand services, and improve systems. Part I contains papers on using XML in library catalog records: "Updating MARC Records with XMLMARC" (Kevin S. Clarke, Stanford University) and "Searching and Retrieving XML Records via the…
Using XML to Separate Content from the Presentation Software in eLearning Applications
ERIC Educational Resources Information Center
Merrill, Paul F.
2005-01-01
This paper has shown how XML (extensible Markup Language) can be used to mark up content. Since XML documents, with meaningful tags, can be interpreted easily by humans as well as computers, they are ideal for the interchange of information. Because XML tags can be defined by an individual or organization, XML documents have proven useful in a…
Specifics on a XML Data Format for Scientific Data
NASA Astrophysics Data System (ADS)
Shaya, E.; Thomas, B.; Cheung, C.
An XML-based data format for interchange and archiving of scientific data would benefit in many ways from the features standardized in XML. Foremost of these features is the world-wide acceptance and adoption of XML. Applications, such as browsers, XQL and XSQL advanced query, XML editing, or CSS or XSLT transformation, that are coming out of industry and academia can be easily adopted and provide startling new benefits and features. We have designed a prototype of a core format for holding, in a very general way, parameters, tables, scalar and vector fields, atlases, animations and complex combinations of these. This eXtensible Data Format (XDF) makes use of XML functionalities such as: self-validation of document structure, default values for attributes, XLink hyperlinks, entity replacements, internal referencing, inheritance, and XSLT transformation. An API is available to aid in detailed assembly, extraction, and manipulation. Conversion tools to and from FITS and other existing data formats are under development. In the future, we hope to provide object oriented interfaces to C++, Java, Python, IDL, Mathematica, Maple, and various databases. http://xml.gsfc.nasa.gov/XDF
Setting the Standard: XML on Campus.
ERIC Educational Resources Information Center
Rawlins, Mike
2001-01-01
Explains what XML (Extensible Markup Language) is; where to find it in a few years (everywhere from Web pages, to database management systems, to common campus applications); issues that will make XML somewhat of an experimental strategy in the near term; and the importance of decision-makers being abreast of XML trends in standards, tools…
2003-01-01
[Figure residue: key XML specifications by area: authentication (XCBF); authorization (XACML, SAML); privacy (P3P); digital rights management (XrML); content management (DASL, WebDAV); content syndication; registry/repository; eCommerce (BPSS, XML/EDI, Universal Business Language (UBL)); human resources (HR-XML); Semantic Web.]
Castles Made of Sand: Building Sustainable Digitized Collections Using XML.
ERIC Educational Resources Information Center
Ragon, Bart
2003-01-01
Describes work at the University of Virginia library to digitize special collections. Discusses the use of XML (Extensible Markup Language); providing access to original source materials; DTD (Document Type Definition); TEI (Text Encoding Initiative); metadata; XSL (Extensible Style Language); and future possibilities. (LRW)
Modeling the Arden Syntax for medical decisions in XML.
Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung
2008-10-01
A new model expressing the Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered to be optimized. When the first schema was finished, several MLMs in the Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema, and reverse-parsed MLMs in ArdenML were checked using a public domain Arden Syntax checker. Two hundred seventy-seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse-parse yielded the original text version of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax-checking mechanisms: first, an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.
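The first of the two checking mechanisms, XSD validation, is a standard operation; a minimal Java sketch using the JDK's javax.xml.validation API is shown below (the schema and document file names are placeholders).

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;
    import org.xml.sax.SAXException;

    public class ArdenMlValidator {
        public static void main(String[] args) throws Exception {
            SchemaFactory sf =
                    SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = sf.newSchema(new File("ardenml.xsd")); // the XML schema
            Validator v = schema.newValidator();
            try {
                v.validate(new StreamSource(new File("mlm.xml"))); // one encoded MLM
                System.out.println("MLM is schema-valid");
            } catch (SAXException e) {
                System.out.println("validation error: " + e.getMessage());
            }
        }
    }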
Symmetric Key Services Markup Language (SKSML)
NASA Astrophysics Data System (ADS)
Noor, Arshad
Symmetric Key Services Markup Language (SKSML) is the eXtensible Markup Language (XML) being standardized by the OASIS Enterprise Key Management Infrastructure Technical Committee for requesting and receiving symmetric encryption cryptographic keys within a Symmetric Key Management System (SKMS). This protocol is designed to be used between clients and servers within an Enterprise Key Management Infrastructure (EKMI) to secure data, independent of the application and platform. Building on many security standards such as XML Signature, XML Encryption, Web Services Security and PKI, SKSML provides standards-based capability to allow any application to use symmetric encryption keys, while maintaining centralized control. This article describes the SKSML protocol and its capabilities.
ERIC Educational Resources Information Center
VanLengen, Craig Alan
2010-01-01
The Securities and Exchange Commission (SEC) has recently announced a proposal that will require all public companies to report their financial data in Extensible Business Reporting Language (XBRL). XBRL is an extension of Extensible Markup Language (XML). Moving to a standard reporting format makes it easier for organizations to report the…
DoD Business Mission Area Service-Oriented Architecture to Support Business Transformation
2008-10-01
Notation (BPMN). The research also found strong support across vendors for the Business Process Execution Language standard, though there is also ... emerging support for direct execution of BPMN through the use of the XML Process Definition Language, an XML serialization of BPMN. Many vendors also ...
Representing nested semantic information in a linear string of text using XML.
Krauthammer, Michael; Johnson, Stephen B; Hripcsak, George; Campbell, David A; Friedman, Carol
2002-01-01
XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text markup. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information.
phyloXML: XML for evolutionary biology and comparative genomics
Han, Mira V; Zmasek, Christian M
2009-01-01
Background Evolutionary trees are central to a wide range of biological studies. In many of these studies, tree nodes and branches need to be associated (or annotated) with various attributes. For example, in studies concerned with organismal relationships, tree nodes are associated with taxonomic names, whereas tree branches have lengths and oftentimes support values. Gene trees used in comparative genomics or phylogenomics are usually annotated with taxonomic information, genome-related data, such as gene names and functional annotations, as well as events such as gene duplications, speciations, or exon shufflings, combined with information related to the evolutionary tree itself. The data standards currently used for evolutionary trees have limited capacities to incorporate such annotations of different data types. Results We developed an XML language, named phyloXML, for describing evolutionary trees, as well as various associated data items. PhyloXML provides elements for commonly used items, such as branch lengths, support values, taxonomic names, and gene names and identifiers. By using "property" elements, phyloXML can be adapted to novel and unforeseen use cases. We also developed various software tools for reading, writing, conversion, and visualization of phyloXML formatted data. Conclusion PhyloXML is an XML language defined by a complete schema in XSD that allows storing and exchanging the structures of evolutionary trees as well as associated data. More information about phyloXML itself, the XSD schema, as well as tools implementing and supporting phyloXML, is available at . PMID:19860910
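To give a feel for the format, the Java snippet below parses a small tree written in phyloXML-style markup and reads a branch length back with XPath; the fragment is simplified for illustration (namespace omitted) and is not copied from the phyloXML specification.

    import java.io.StringReader;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.xml.sax.InputSource;

    public class PhyloXmlDemo {
        public static void main(String[] args) throws Exception {
            // Simplified phyloXML-style fragment: two leaf clades under a root clade.
            String xml = "<phyloxml><phylogeny rooted='true'><clade>"
                    + "<clade><name>A</name><branch_length>0.12</branch_length></clade>"
                    + "<clade><name>B</name><branch_length>0.34</branch_length></clade>"
                    + "</clade></phylogeny></phyloxml>";
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
            String len = XPathFactory.newInstance().newXPath().evaluate(
                    "//clade[name='B']/branch_length", doc);
            System.out.println("branch length of B: " + len); // 0.34
        }
    }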
Chapter 51: How to Build a Simple Cone Search Service Using a Local Database
NASA Astrophysics Data System (ADS)
Kent, B. R.; Greene, G. R.
The cone search service protocol will be examined from the server side in this chapter. A simple cone search service will be setup and configured locally using MySQL. Data will be read into a table, and the Java JDBC will be used to connect to the database. Readers will understand the VO cone search specification and how to use it to query a database on their local systems and return an XML/VOTable file based on an input of RA/DEC coordinates and a search radius. The cone search in this example will be deployed as a Java servlet. The resulting cone search can be tested with a verification service. This basic setup can be used with other languages and relational databases.
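A hedged sketch of the core SQL step: selecting sources within an angular search radius of a given (RA, Dec) using the standard great-circle separation formula. The table and column names are invented for illustration; a real servlet would wrap the rows in an XML/VOTable response.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ConeSearch {
        public static void main(String[] args) throws Exception {
            double ra = 180.0, dec = 2.0, sr = 0.5; // degrees, from the request
            // Angular separation: acos(sin d1 sin d2 + cos d1 cos d2 cos(ra1 - ra2))
            String sql = "SELECT name, ra, decl FROM sources WHERE DEGREES(ACOS("
                    + "SIN(RADIANS(decl))*SIN(RADIANS(?)) + "
                    + "COS(RADIANS(decl))*COS(RADIANS(?))*COS(RADIANS(ra - ?)))) <= ?";
            try (Connection c = DriverManager.getConnection(
                     "jdbc:mysql://localhost/vo", "user", "secret");
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setDouble(1, dec); ps.setDouble(2, dec);
                ps.setDouble(3, ra);  ps.setDouble(4, sr);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%s %.5f %.5f%n", rs.getString(1),
                                rs.getDouble(2), rs.getDouble(3));
                    }
                }
            }
        }
    }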
WaterML: an XML Language for Communicating Water Observations Data
NASA Astrophysics Data System (ADS)
Maidment, D. R.; Zaslavsky, I.; Valentine, D.
2007-12-01
One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data. Each water agency uses its own approach. XML (eXtensible Markup Language) dialects are generalizations of Hypertext Markup Language that communicate specific kinds of information via the internet. WaterML is an XML language for water observations data - streamflow, water quality, groundwater levels, climate, precipitation and aquatic biology data, recorded at fixed point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there, and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.
A future Outlook: Web based Simulation of Hydrodynamic models
NASA Astrophysics Data System (ADS)
Islam, A. S.; Piasecki, M.
2003-12-01
Despite recent advances to present simulation results as 3D graphs or animation contours, the modeling user community still faces some shortcomings when trying to move around and analyze data. Typical problems include the lack of common platforms with standard vocabulary to exchange simulation results from different numerical models, insufficient descriptions of data (metadata), lack of robust search and retrieval tools for data, and difficulties in reusing simulation domain knowledge. This research demonstrates how to create a shared simulation domain on the WWW and run a number of models through multi-user interfaces. First, meta-datasets have been developed to describe hydrodynamic model data based on the geographic metadata standard (ISO 19115), which has been extended to satisfy the needs of the hydrodynamic modeling community. The eXtensible Markup Language (XML) is used to publish this metadata through the Resource Description Framework (RDF). A specific domain ontology for Web Based Simulation (WBS) has been developed to explicitly define the vocabulary for the knowledge-based simulation system. Subsequently, this knowledge-based system is converted into an object model using the Meta Object Facility (MOF). The knowledge-based system acts as a meta model for the object-oriented system, which aids in reusing the domain knowledge. Specific simulation software has been developed based on the object-oriented model. Finally, all model data are stored in an object-relational database. Database back-ends help store, retrieve and query information efficiently. This research uses open source software and technology such as Java Servlet and JSP, Apache web server, Tomcat Servlet Engine, PostgreSQL databases, Protégé ontology editor, RDQL and RQL for querying RDF at the semantic level, and the Jena Java API for RDF. Also, we use international standards such as the ISO 19115 metadata standard, and specifications such as XML, RDF, OWL, XMI, and UML. The final web based simulation product is deployed as Web Archive (WAR) files, which are platform- and OS-independent and can be used on Windows, UNIX, or Linux. Keywords: Apache, ISO 19115, Java Servlet, Jena, JSP, Metadata, MOF, Linux, Ontology, OWL, PostgreSQL, Protégé, RDF, RDQL, RQL, Tomcat, UML, UNIX, Windows, WAR, XML
ERIC Educational Resources Information Center
Banerjee, Kyle
2002-01-01
Discusses XML, how it has transformed the way information is managed and delivered, and its impact on libraries. Topics include how XML differs from other markup languages; the document object model (DOM); style sheets; practical applications for archival materials, interlibrary loans, digital collections, and MARC data; and future possibilities.…
XSemantic: An Extension of LCA Based XML Semantic Search
NASA Astrophysics Data System (ADS)
Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai
One of the most convenient ways to query XML data is keyword search, because it does not require any knowledge of the XML structure or learning a new user interface. However, keyword search is ambiguous. Users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose an XML semantic search based on keywords called XSemantic. On the one hand, we give three definitions to complete the query in terms of semantics. First, with semantic term expansion, our system is robust against ambiguous keywords by using the domain ontology. Second, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Third, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that search results with higher relevance are presented to the users first. On the other hand, as in the LCA and proximity search approaches, we investigated the problem of the information included in the search results. Therefore, we introduce the notion of the Lowest Common Element Ancestor (LCEA) and define our simple rule without any requirement on schema information such as the DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact meaningful results. Additionally, the benefits of our proposed semantics are demonstrated by the second experiment.
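For readers unfamiliar with the LCA machinery that such systems build on: with Dewey-style node labels, the lowest common ancestor of two XML nodes is simply the longest common prefix of their labels, as in the illustrative Java method below (XSemantic's LCEA adds further rules on top of this idea).

    public class DeweyLca {
        // Dewey labels encode root-to-node paths, e.g. "1.2.3" is the third
        // child of the second child of node 1; the LCA is the common prefix.
        static String lca(String a, String b) {
            String[] x = a.split("\\."), y = b.split("\\.");
            StringBuilder prefix = new StringBuilder();
            for (int i = 0; i < Math.min(x.length, y.length) && x[i].equals(y[i]); i++) {
                if (i > 0) prefix.append('.');
                prefix.append(x[i]);
            }
            return prefix.toString();
        }

        public static void main(String[] args) {
            // Two nodes matching different keywords; their LCA is a candidate answer root.
            System.out.println(lca("1.2.3.1", "1.2.5")); // prints 1.2
        }
    }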
XML in an Adaptive Framework for Instrument Control
NASA Technical Reports Server (NTRS)
Ames, Troy J.
2004-01-01
NASA Goddard Space Flight Center is developing an extensible framework for instrument command and control, known as Instrument Remote Control (IRC), that combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms.
XML and Bibliographic Data: The TVS (Transport, Validation and Services) Model.
ERIC Educational Resources Information Center
de Carvalho, Joaquim; Cordeiro, Maria Ines
This paper discusses the role of XML in library information systems at three major levels: as a representation language that enables the transport of bibliographic data in a way that is technologically independent and universally understood across systems and domains; as a language that enables the specification of complex validation rules…
ERIC Educational Resources Information Center
Chang, May
2000-01-01
Describes the development of electronic finding aids for archives at the University of Illinois, Urbana-Champaign that used XML (extensible markup language) and EAD (encoded archival description) to enable more flexible information management and retrieval than using MARC or a relational database management system. EAD template is appended.…
The Essen Learning Model--A Step towards a Representation of Learning Objectives.
ERIC Educational Resources Information Center
Bick, Markus; Pawlowski, Jan M.; Veith, Patrick
The importance of the Extensible Markup Language (XML) technology family in the field of Computer Assisted Learning (CAL) can not be denied. The Instructional Management Systems Project (IMS), for example, provides a learning resource XML binding specification. Considering this specification and other implementations using XML to represent…
Development of the Plate Tectonics and Seismology markup languages with XML
NASA Astrophysics Data System (ADS)
Babaie, H.; Babaei, A.
2003-04-01
The Extensible Markup Language (XML) and its specifications, such as the XSD Schema, allow geologists to design discipline-specific vocabularies such as the Seismology Markup Language (SeismML) or the Plate Tectonics Markup Language (TectML). These languages make it possible to store and interchange structured geological information over the Web. Development of a geological markup language requires mapping geological concepts, such as "Earthquake" or "Plate", into a UML object model, using a modeling and design environment. We have selected four inter-related geological concepts: earthquake, fault, plate, and orogeny, and developed four XML Schema Definitions (XSD) that define the relationships, cardinalities, hierarchies, and semantics of these concepts. In such a geological concept model, the UML object "Earthquake" is related to one or more "Wave" objects, each arriving at a seismic station at a specific "DateTime", and relating to a specific "Epicenter" object that lies at a unique "Location". The "Earthquake" object occurs along a "Segment" of a "Fault" object, which is related to a specific "Plate" object. The "Fault" has its own associations with such things as "Bend", "Step", and "Segment", and could be of any kind (e.g., "Thrust", "Transform"). The "Plate" is related to many other objects such as "MOR", "Subduction", and "Forearc", and is associated with an "Orogeny" object that relates to "Deformation" and "Strain" and several other objects. These UML objects were mapped into XML Metadata Interchange (XMI) formats, which were then converted into four XSD Schemas. The schemas were used to create and validate the XML instance documents, and to create a relational database hosting the plate tectonics and seismological data in the Microsoft Access format. The SeismML and TectML allow seismologists and structural geologists, among others, to submit and retrieve structured geological data on the Internet. A seismologist, for example, can submit peer-reviewed and reliable data about a specific earthquake to a Java Server Page on our web site hosting the XML application. Other geologists can readily retrieve the submitted data, saved in files or special tables of the designed database, through a search engine designed with J2EE (JSP, servlet, Java Bean) and XML specifications such as XPath, XPointer, and XSLT. When extended to include all the important concepts of seismology and plate tectonics, the two markup languages will make global interchange of geological data a reality.
C3I and Modelling and Simulation (M&S) Interoperability
2004-03-01
customised Open Source products. The technical implementation is based on the use of the eXtensible Markup Language (XML) and Python. XML is developed ... to structure, store and send information. The language focuses on the description of data. Python is a portable, interpreted, object-oriented ... programming language. A huge variety of usable Open Source projects have been issued by the Python community. 3.1 Phase 1: Feasibility Studies. Phase 1 was ...
XML syntax for clinical laboratory procedure manuals.
Saadawi, Gilan; Harrison, James H
2003-01-01
We have developed a document type definition (DTD) in the eXtensible Markup Language (XML) for clinical laboratory procedures. Our XML syntax can adequately structure a variety of procedure types across different laboratories and is compatible with current procedure standards. The combination of this format with an XML content management system and appropriate style sheets will allow efficient procedure maintenance, distributed access, customized display and effective searching across a large body of test information.
Nassi-Schneiderman Diagram in HTML Based on AML
ERIC Educational Resources Information Center
Menyhárt, László
2013-01-01
In an earlier work I defined an extension of XML called Algorithm Markup Language (AML) for easy and understandable coding in an IDE which supports XML editing (e.g. NetBeans). The AML extension contains annotations and native language (English or Hungarian) tag names used when coding our algorithm. This paper presents a drawing tool with which…
Astronomical Instrumentation System Markup Language
NASA Astrophysics Data System (ADS)
Goldbaum, Jesse M.
2016-05-01
The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed, followed by the reasons why XML was chosen as the format. Next, it is shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks, from web page generation and programming interfaces to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.
Framework and prototype for a secure XML-based electronic health records system.
Steele, Robert; Gardner, William; Chandra, Darius; Dillon, Tharam S
2007-01-01
Security of personal medical information has always been a challenge for the advancement of Electronic Health Records (EHRs) initiatives. The eXtensible Markup Language (XML) is rapidly becoming the key standard for data representation and transportation. The widespread use of XML and the prospect of its use in the Electronic Health (e-health) domain highlight the need for flexible access control models for XML data and documents. This paper presents a declarative access control model for XML data repositories that utilises an expressive XML role control model. The operational semantics of this model are illustrated by Xplorer, a user interface generation engine which supports search-browse-navigate activities on XML repositories.
Sánchez-de-Madariaga, Ricardo; Muñoz, Adolfo; Lozano-Rubí, Raimundo; Serrano-Balazote, Pablo; Castro, Antonio L; Moreno, Oscar; Pascual, Mario
2017-08-18
The objective of this research is to compare relational and non-relational (NoSQL) database systems for storing, recovering, querying and persisting standardized medical information in the form of ISO/EN 13606 normalized Electronic Health Record XML extracts, both in isolation and concurrently. NoSQL database systems have recently attracted much attention, but few studies in the literature address their direct comparison with relational databases when applied to build the persistence layer of a standardized medical information system. One relational and two NoSQL databases (one document-based and one native XML database) of three different sizes were created in order to evaluate and compare the response times (algorithmic complexity) of six queries of growing complexity, which were performed on them. Similar appropriate results available in the literature have also been considered. Both relational and non-relational NoSQL database systems show almost linear algorithmic complexity of query execution. However, they show very different linear slopes, the former being much steeper than the latter two. Document-based NoSQL databases perform better in concurrency than in isolation, and also better than relational databases in concurrency. Non-relational NoSQL databases seem to be more appropriate than standard relational SQL databases when database size is extremely high (secondary use, research applications). Document-based NoSQL databases perform in general better than native XML NoSQL databases. EHR extract visualization and editing are also document-based tasks better suited to NoSQL database systems. However, the appropriate database solution depends greatly on each particular situation and specific problem.
XML Based Markup Languages for Specific Domains
NASA Astrophysics Data System (ADS)
Varde, Aparna; Rundensteiner, Elke; Fahrenholz, Sally
A challenging area in web based support systems is the study of human activities in connection with the web, especially with reference to certain domains. This includes capturing human reasoning in information retrieval, facilitating the exchange of domain-specific knowledge through a common platform and developing tools for the analysis of data on the web from a domain expert's angle. Among the techniques and standards related to such work, we have XML, the eXtensible Markup Language. This serves as a medium of communication for storing and publishing textual, numeric and other forms of data seamlessly. XML tag sets are such that they preserve semantics and simplify the understanding of stored information by users. Often domain-specific markup languages are designed using XML, with a user-centric perspective. Standardization bodies and research communities may extend these to include additional semantics of areas within and related to the domain. This chapter outlines the issues to be considered in developing domain-specific markup languages: the motivation for development, the semantic considerations, the syntactic constraints and other relevant aspects, especially taking into account human factors. Illustrating examples are provided from domains such as Medicine, Finance and Materials Science. Particular emphasis in these examples is on the Materials Markup Language MatML and the semantics of one of its areas, namely, the Heat Treating of Materials. The focus of this chapter, however, is not the design of one particular language but rather the generic issues concerning the development of domain-specific markup languages.
iSMART: Ontology-based Semantic Query of CDA Documents
Liu, Shengping; Ni, Yuan; Mei, Jing; Li, Hanyu; Xie, Guotong; Hu, Gang; Liu, Haifeng; Hou, Xueqiao; Pan, Yue
2009-01-01
The Health Level 7 Clinical Document Architecture (CDA) is widely accepted as the format for electronic clinical document. With the rich ontological references in CDA documents, the ontology-based semantic query could be performed to retrieve CDA documents. In this paper, we present iSMART (interactive Semantic MedicAl Record reTrieval), a prototype system designed for ontology-based semantic query of CDA documents. The clinical information in CDA documents will be extracted into RDF triples by a declarative XML to RDF transformer. An ontology reasoner is developed to infer additional information by combining the background knowledge from SNOMED CT ontology. Then an RDF query engine is leveraged to enable the semantic queries. This system has been evaluated using the real clinical documents collected from a large hospital in southern China. PMID:20351883
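As a hedged illustration of the final step, the snippet below runs a SPARQL query over extracted triples using Apache Jena; the paper does not name its RDF query engine, and the vocabulary URIs and SNOMED code here are invented for the example.

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.ResultSet;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;

    public class CdaSemanticQuery {
        public static void main(String[] args) {
            Model m = ModelFactory.createDefaultModel();
            m.read("cda-triples.ttl"); // triples extracted from CDA documents
            // Find documents containing an observation with a given SNOMED code
            // (illustrative vocabulary; a reasoner could first add inferred triples).
            String q = "PREFIX ex: <http://example.org/cda#> "
                    + "SELECT ?doc WHERE { ?doc ex:hasObservation ?obs . "
                    + "?obs ex:snomedCode \"22298006\" . }";
            try (QueryExecution qe = QueryExecutionFactory.create(q, m)) {
                ResultSet rs = qe.execSelect();
                while (rs.hasNext()) {
                    System.out.println(rs.next().get("doc"));
                }
            }
        }
    }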
ERIC Educational Resources Information Center
Walsh, Lucas
2007-01-01
This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…
PRIDE: new developments and new datasets.
Jones, Philip; Côté, Richard G; Cho, Sang Yun; Klie, Sebastian; Martens, Lennart; Quinn, Antony F; Thorneycroft, David; Hermjakob, Henning
2008-01-01
The PRIDE (http://www.ebi.ac.uk/pride) database of protein and peptide identifications was previously described in the NAR Database Special Edition in 2006. Since this publication, the volume of public data in the PRIDE relational database has increased by more than an order of magnitude. Several significant public datasets have been added, including identifications and processed mass spectra generated by the HUPO Brain Proteome Project and the HUPO Liver Proteome Project. The PRIDE software development team has made several significant changes and additions to the user interface and tool set associated with PRIDE. The focus of these changes has been to facilitate the submission process and to improve the mechanisms by which PRIDE can be queried. The PRIDE team has developed a Microsoft Excel workbook that allows the required data to be collated in a series of relatively simple spreadsheets, with automatic generation of PRIDE XML at the end of the process. The ability to query PRIDE has been augmented by the addition of a BioMart interface allowing complex queries to be constructed. Collaboration with groups outside the EBI has been fruitful in extending PRIDE, including an approach to encode iTRAQ quantitative data in PRIDE XML.
XML DTD and Schemas for HDF-EOS
NASA Technical Reports Server (NTRS)
Ullman, Richard; Yang, Jingli
2008-01-01
An Extensible Markup Language (XML) document type definition (DTD) standard for the structure and contents of HDF-EOS files and their contents, and an equivalent standard in the form of schemas, have been developed.
CytometryML, an XML format based on DICOM and FCS for analytical cytology data.
Leif, Robert C; Leif, Suzanne B; Leif, Stephanie H
2003-07-01
Flow Cytometry Standard (FCS) was initially created to standardize the software researchers use to analyze, transmit, and store data produced by flow cytometers and sorters. Because of the clinical utility of flow cytometry, it is necessary to have a standard consistent with the requirements of medical regulatory agencies. We extended the existing mapping of FCS to the Digital Imaging and Communications in Medicine (DICOM) standard to include list-mode data produced by flow cytometry, laser scanning cytometry, and microscopic image cytometry. FCS list-mode was mapped to the DICOM Waveform Information Object. We created a collection of Extensible Markup Language (XML) schemas to express the DICOM analytical cytologic text-based data types except for large binary objects. We also developed a cytometry markup language, CytometryML, in an open environment subject to continuous peer review. The feasibility of expressing the data contained in FCS, including list-mode in DICOM, was demonstrated; and a preliminary mapping for list-mode data in the form of XML schemas and documents was completed. DICOM permitted the creation of indices that can be used to rapidly locate in a list-mode file the cells that are members of a subset. DICOM and its coding schemes for other medical standards can be represented by XML schemas, which can be combined with other relevant XML applications, such as Mathematical Markup Language (MathML). The use of XML format based on DICOM for analytical cytology met most of the previously specified requirements and appears capable of meeting the others; therefore, the present FCS should be retired and replaced by an open, XML-based, standard CytometryML. Copyright 2003 Wiley-Liss, Inc.
A Modular Framework for Transforming Structured Data into HTML with Machine-Readable Annotations
NASA Astrophysics Data System (ADS)
Patton, E. W.; West, P.; Rozell, E.; Zheng, J.
2010-12-01
There is a plethora of web-based Content Management Systems (CMS) available for maintaining projects, data, and other content. However, each system varies in its capabilities, and often content is stored separately and accessed via non-uniform web interfaces. Moving from one CMS to another (e.g., MediaWiki to Drupal) can be cumbersome, especially if a large quantity of data must be adapted to the new system. To standardize the creation, display, management, and sharing of project information, we have assembled a framework that uses existing web technologies to transform data provided by any service that supports SPARQL Protocol and RDF Query Language (SPARQL) queries into HTML fragments, allowing it to be embedded in any existing website. The framework utilizes a two-tier XML Stylesheet Transformation (XSLT) that uses existing ontologies (e.g., Friend-of-a-Friend, Dublin Core) to interpret query results and render them as HTML documents. These ontologies can be used in conjunction with custom ontologies suited to individual needs (e.g., domain-specific ontologies for describing data records). Furthermore, this transformation process encodes machine-readable annotations, namely the Resource Description Framework in attributes (RDFa), into the resulting HTML, so that capable parsers and search engines can extract the relationships between entities (e.g., people, organizations, datasets). To facilitate editing of content, the framework provides a web-based form system, mapping each query to a dynamically generated form that can be used to modify and create entities, while keeping the native data store up-to-date. This open framework makes it easy to duplicate data across many different sites, allowing researchers to distribute their data in many different online forums. In this presentation we will outline the structure of queries and the stylesheets used to transform them, followed by a brief walkthrough that follows the data from storage to human- and machine-accessible web page. We conclude with a discussion on content caching and steps toward performing queries across multiple domains.
An object-oriented approach for harmonization of multimedia markup languages
NASA Astrophysics Data System (ADS)
Chen, Yih-Feng; Kuo, May-Chen; Sun, Xiaoming; Kuo, C.-C. Jay
2003-12-01
An object-oriented methodology is proposed to harmonize several different markup languages in this research. First, we adopt the Unified Modelling Language (UML) as the data model to formalize the concept and the process of the harmonization process between the eXtensible Markup Language (XML) applications. Then, we design the Harmonization eXtensible Markup Language (HXML) based on the data model and formalize the transformation between the Document Type Definitions (DTDs) of the original XML applications and HXML. The transformation between instances is also discussed. We use the harmonization of SMIL and X3D as an example to demonstrate the proposed methodology. This methodology can be generalized to various application domains.
Development of clinical contents model markup language for electronic health records.
Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon
2012-09-01
To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on an analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML) based CCM markup language (CCML) schema. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems.
ADASS Web Database XML Project
NASA Astrophysics Data System (ADS)
Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.
In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
XML Based Scientific Data Management Facility
NASA Technical Reports Server (NTRS)
Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
Bouyssié, David; Dubois, Marc; Nasso, Sara; Gonzalez de Peredo, Anne; Burlet-Schiltz, Odile; Aebersold, Ruedi; Monsarrat, Bernard
2015-01-01
The analysis and management of MS data, especially those generated by data-independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable an efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML-based formats are not well suited to large MS data files, for example, those generated by SWATH-MS, and compromise high-throughput data processing and storing. We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, where the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction. In comparison with XML formats, mzDB saves ∼25% of storage space and improves access times by factors of two up to even 2000, depending on the particular data access. Similarly, mzDB shows slightly to significantly lower access times in comparison with other formats like mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under a permissive license. mzDB can be easily accessed by the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility. PMID:25505153
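Because mzDB is a plain SQLite file, it can be opened with any SQLite driver. The hedged JDBC sketch below shows the style of bounding-box range query described above; the table and column names are illustrative guesses, not the published mzDB schema.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MzDbSliceQuery {
        public static void main(String[] args) throws Exception {
            double rtMin = 1200.0, rtMax = 1260.0; // retention-time window (s)
            double mzMin = 650.0, mzMax = 652.0;   // m/z window
            try (Connection c = DriverManager.getConnection("jdbc:sqlite:run.mzDB");
                 PreparedStatement ps = c.prepareStatement(
                     // fetch data blocks whose RT/m-z bounding box overlaps the
                     // target window (illustrative schema)
                     "SELECT id, data FROM bounding_box "
                     + "WHERE last_time >= ? AND first_time <= ? "
                     + "AND max_mz >= ? AND min_mz <= ?")) {
                ps.setDouble(1, rtMin); ps.setDouble(2, rtMax);
                ps.setDouble(3, mzMin); ps.setDouble(4, mzMax);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        byte[] blob = rs.getBytes("data"); // encoded peaks
                        System.out.println("block " + rs.getInt("id")
                                + ": " + blob.length + " bytes");
                    }
                }
            }
        }
    }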
jmzML, an open-source Java API for mzML, the PSI standard for MS data.
Côté, Richard G; Reisinger, Florian; Martens, Lennart
2010-04-01
We here present jmzML, a Java API for the Proteomics Standards Initiative mzML data standard. Based on the Java Architecture for XML Binding (JAXB) and an XPath-based random-access XML indexer, jmzML can handle arbitrarily large files in minimal memory, allowing easy and efficient processing of mzML files using the Java programming language. jmzML also automatically resolves internal XML references on-the-fly. The library (which includes a viewer) can be downloaded from http://jmzml.googlecode.com.
Integrated Array/Metadata Analytics
NASA Astrophysics Data System (ADS)
Misev, Dimitar; Baumann, Peter
2015-04-01
Data comes in various forms and types, and integration usually presents a problem that is often simply ignored and solved with ad-hoc solutions. Multidimensional arrays are a ubiquitous data type that we find at the core of virtually all science and engineering domains, as sensor, model, image, and statistics data. Naturally, arrays are richly described by and intertwined with additional metadata (alphanumeric relational data, XML, JSON, etc.). Database systems, however, a fundamental building block of what we call "Big Data", lack adequate support for modelling and expressing these array data/metadata relationships. Array analytics is hence quite primitive or non-existent in modern relational DBMSs. Recognizing this, we extended SQL with a new SQL/MDA part, seamlessly integrating multidimensional array analytics into the standard database query language. We demonstrate the benefits of SQL/MDA with real-world examples executed in ASQLDB, an open-source mediator system based on HSQLDB and rasdaman that already implements SQL/MDA.
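In the SQL/MDA spirit, a single statement can join relational predicates with server-side array operations. The hedged Java/JDBC example below is conceptual: the exact slicing and aggregation syntax differs between ASQLDB and the final ISO text, and the table, column, and function names are illustrative.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ArrayMetadataQuery {
        public static void main(String[] args) throws Exception {
            // One query mixes an ordinary relational predicate (cloud_cover) with
            // array aggregation over a subarray (illustrative SQL/MDA-style syntax).
            String sql = "SELECT s.name, avg_cells(s.image[0:999, 0:999]) "
                       + "FROM scenes AS s WHERE s.cloud_cover < 0.1";
            try (Connection c = DriverManager.getConnection(
                     "jdbc:hsqldb:file:asqldb/demo", "SA", "");
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " -> " + rs.getDouble(2));
                }
            }
        }
    }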
CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 10, October 2008
2008-10-01
proprietary modeling offerings, there is considerable convergence around Business Process Modeling Notation (BPMN). The research also found strong ... support across vendors for the Business Process Execution Language standard, though there is also emerging support for direct execution of BPMN through ... the use of the XML Process Definition Language, an XML serialization of BPMN. Many vendors also provide the needed monitoring of those processes at ...
NASA Technical Reports Server (NTRS)
Ullman, Richard; Bane, Bob; Yang, Jingli
2008-01-01
A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: extract metadata in Object Definition Language (ODL) from an HDF-EOS file; convert the metadata from ODL to Extensible Markup Language (XML); reformat the XML metadata into human-readable Hypertext Markup Language (HTML); publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeNDAP) server computer; and reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-science data.
An adaptable XML based approach for scientific data management and integration
NASA Astrophysics Data System (ADS)
Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo
2008-03-01
Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important, and relies on integrating and sharing data from distributed institutions. We developed SciPort, a Web-based platform supporting scientific data management and integration, built on a central-server-based distributed architecture, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides a general XML-based approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so the system can be quickly adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access, and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data within research communities, and has been successfully used in both biomedical research and clinical trials.
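The abstract names XQuery as SciPort's query interface but gives no concrete query, so the sketch below is illustrative only: the XQuery string shows the kind of cross-document search a native XML database can answer, while the surrounding helper is an invented placeholder, not SciPort's API.

```python
# Illustrative only: the XQuery string is the point. run_xquery() is a
# hypothetical placeholder, not SciPort's API, and the element names in the
# query are assumed for illustration.
SCIPORT_XQUERY = """
for $doc in collection('sciport')//document
where $doc/metadata/organ = 'liver'
  and $doc/images/image/@modality = 'MRI'
return $doc/metadata/title
"""

def run_xquery(query: str):
    """Placeholder for a call to the native XML database (elided here)."""
    raise NotImplementedError("database wire protocol is out of scope")
```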
RDF-GL: A SPARQL-Based Graphical Query Language for RDF
NASA Astrophysics Data System (ADS)
Hogenboom, Frederik; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay
This chapter presents RDF-GL, a graphical query language (GQL) for RDF. The GQL is based on the textual query language SPARQL and mainly focuses on SPARQL SELECT queries. The advantage of a GQL over textual query languages is that complexity is hidden through the use of graphical symbols. RDF-GL is supported by a Java-based editor, SPARQLinG, which is presented as well. The editor not only allows for RDF-GL query creation, but also converts RDF-GL queries to SPARQL queries and can subsequently execute them. Experiments show that using the GQL in combination with the editor makes RDF querying more accessible for end users.
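Since every RDF-GL diagram compiles to a SPARQL SELECT query, the end product looks like the query below, here executed with the rdflib Python library; the data file and the FOAF triple patterns are illustrative assumptions, not taken from the chapter.

```python
# Running the kind of SPARQL SELECT query an RDF-GL diagram compiles to.
# rdflib is a real library; the input file and query pattern are examples.
import rdflib

g = rdflib.Graph()
g.parse("people.ttl", format="turtle")  # hypothetical input file

results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?mbox
    WHERE { ?person foaf:name ?name ; foaf:mbox ?mbox . }
""")
for name, mbox in results:
    print(name, mbox)
```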
The XML approach to implementing space link extension service management
NASA Technical Reports Server (NTRS)
Tai, W.; Welz, G. A.; Theis, G.; Yamada, T.
2001-01-01
A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function.
Development of Clinical Contents Model Markup Language for Electronic Health Records
Yun, Ji-Hyun; Kim, Yoon
2012-01-01
Objectives: To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Methods: Based on an analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML) based CCM markup language (CCML) schema. Results: CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. Conclusions: CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems. PMID:23115739
Array Databases: Agile Analytics (not just) for the Earth Sciences
NASA Astrophysics Data System (ADS)
Baumann, P.; Misev, D.
2015-12-01
Gridded data, such as images, image timeseries, and climate datacubes, today are managed separately from the metadata, and with different, restricted retrieval capabilities. While databases are good at metadata modelled in tables, XML hierarchies, or RDF graphs, they traditionally do not support multi-dimensional arrays. This gap is being closed by Array Databases, pioneered by the scalable rasdaman ("raster data manager") array engine. Its declarative query language, rasql, extends SQL with array operators which are optimized and parallelized on the server side. Installations can easily be mashed up securely, thereby enabling large-scale, location-transparent query processing in federations. Domain experts value the integration with their commonly used tools, leading to a short learning curve. Earth, Space, and Life sciences, but also Social sciences as well as business, have massive amounts of data and complex analysis challenges that are answered by rasdaman. As of today, rasdaman is mature and in operational use on hundreds of Terabytes of timeseries datacubes, with transparent query distribution across more than 1,000 nodes. Additionally, its concepts have shaped international Big Data standards in the field, including the forthcoming array extension to ISO SQL, many of which are meanwhile supported by both open-source and commercial systems. In the geo field, rasdaman is the reference implementation for the Open Geospatial Consortium (OGC) Big Data standard, WCS, now also under adoption by ISO. Further, rasdaman is in the final stage of OSGeo incubation. In this contribution we present array queries a la rasdaman, describe the architecture and the novel optimization and parallelization techniques introduced in 2015, and put this in the context of the intercontinental EarthServer initiative, which utilizes rasdaman to enable agile analytics on Petascale datacubes.
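The abstract describes rasql only in general terms; the query string below follows the style of published rasql examples (SQL shape, with array subsetting and a cell-wise operation in the SELECT clause). The collection name and subsetting bounds are illustrative, and no client call is shown.

```python
# The rasql string is the point: standard SQL shape, but the SELECT clause
# subsets an array and applies a cell-wise operation, all evaluated on the
# server side. Collection name and bounds are illustrative assumptions.
RASQL_QUERY = """
SELECT encode( c[0:999, 0:999] * 2.0, "png" )
FROM   LandsatScenes AS c
"""
```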
Leveraging Small-Lexicon Language Models
2016-12-31
shown in Figure 1. This "easy to use" XML build (from a lexicon.xml file) bakes in source and language metadata, shows both raw ("copper") and... requires it (e.g., used as standoff annotation), or some or all metadata can be baked into each and every set. Please let us know if a custom... interpretations are plausible, they are pipe-separated: bake#v#1|toast#v#1. Several word classes have been added (with all items numbered #1): d
XML Schema Guide for Primary CDR Submissions
This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.7 XML schema. Please note that the order of the elements must match the schema.
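Since the guide stresses that element order must match the schema, a submission is best validated locally before upload. A minimal sketch with Python's lxml follows; the schema and document file names are hypothetical stand-ins for the actual v1.7 CDR artifacts.

```python
# Validate a CDR submission against the XML schema before uploading, since
# the schema is order-sensitive. File names are hypothetical; lxml's
# XMLSchema API is real.
from lxml import etree

schema = etree.XMLSchema(etree.parse("CDR_primary_v1.7.xsd"))  # hypothetical
doc = etree.parse("primary_submission.xml")                    # hypothetical
if not schema.validate(doc):
    for err in schema.error_log:
        print(f"line {err.line}: {err.message}")  # e.g. misordered element
```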
Hoelzer, Simon; Schweiger, Ralf K.; Dudeck, Joachim
2003-01-01
With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or “semantically associated” parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach. PMID:12807813
Biological data integration: wrapping data and tools.
Lacroix, Zoé
2002-06-01
Nowadays scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second, builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component based on an intermediate object view mechanism called search views, which maps the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, respectively, to perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to seamlessly access data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of a multidatabase system supporting queries via uniform Object Protocol Model (OPM) interfaces.
Future perspectives - proposal for Oxford Physiome Project.
Oku, Yoshitaka
2010-01-01
The Physiome Project is an effort to understand living creatures using an "analysis by synthesis" strategy, i.e., by reproducing their behaviors. In order to achieve its goal, sharing developed models between different computer languages and application programs, so that they can be incorporated into integrated models, is critical. To date, several XML-based markup languages have been developed for this purpose. However, source code written in XML-based languages is very difficult to read and edit using text editors. An alternative is to use an object-oriented meta-language, which can be translated to different computer languages and transplanted to different application programs. Object-oriented languages are suitable for describing structural organization through hierarchical classes, and for taking advantage of statistical properties to reduce the number of parameters while keeping the complexity of behaviors. Using object-oriented languages to describe each element and posting it to a public domain should be the next step in building up integrated models of the respiratory control system.
XML Schema Guide for Secondary CDR Submissions
This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.1 XML schema for the Joint Submission Form. Please note that the order of the elements must match the schema.
Automated population of an i2b2 clinical data warehouse from an openEHR-based data repository.
Haarbrandt, Birger; Tute, Erik; Marschollek, Michael
2016-10-01
Detailed Clinical Model (DCM) approaches have recently seen wider adoption. More specifically, openEHR-based application systems are now used in production in several countries, serving diverse fields of application such as health information exchange, clinical registries, and electronic medical record systems. However, approaches to efficiently provide openEHR data to researchers for secondary use have not yet been investigated or established. We developed an approach to automatically load openEHR data instances into the open source clinical data warehouse i2b2. We evaluated the query capabilities and the performance of this approach in the context of the Hanover Medical School Translational Research Framework (HaMSTR), an openEHR-based data repository. Automated creation of i2b2 ontologies from archetypes and templates, and the integration of openEHR data instances from 903 patients of a paediatric intensive care unit, have been achieved. In total, it took an average of ∼2527 s to create 2,311,624 facts from 141,917 XML documents. Using the imported data, we conducted sample queries to compare the performance with two openEHR systems and to investigate whether this representation of data is feasible to support cohort identification and record-level data extraction. We found the automated population of an i2b2 clinical data warehouse to be a feasible approach to make openEHR data instances available for secondary use. Such an approach can facilitate timely provision of clinical data to researchers. It complements analytics based on the Archetype Query Language by allowing querying on both legacy clinical data sources and openEHR data instances at the same time, and by providing an easy-to-use query interface. However, due to different levels of expressiveness in the data models, not all semantics could be preserved during the ETL process.
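As a rough illustration of the ETL step the paper describes, the sketch below flattens openEHR-style XML instances into i2b2-style fact rows (patient, concept, value, time). The element and attribute names are assumptions for illustration, not HaMSTR's actual mapping.

```python
# Toy sketch of the ETL idea: flatten openEHR XML instances into i2b2-style
# fact rows. The element names ("observation", "archetype_node_id", etc.)
# are assumed for illustration only.
import xml.etree.ElementTree as ET

def to_facts(xml_path, patient_num):
    facts = []
    root = ET.parse(xml_path).getroot()
    for obs in root.iter("observation"):             # assumed element name
        facts.append({
            "patient_num": patient_num,
            "concept_cd": obs.get("archetype_node_id"),  # assumed attribute
            "nval_num": obs.findtext("value"),
            "start_date": obs.findtext("time"),
        })
    return facts
```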
Applying Query Structuring in Cross-language Retrieval.
ERIC Educational Resources Information Center
Pirkola, Ari; Puolamaki, Deniz; Jarvelin, Kalervo
2003-01-01
Explores ways to apply query structuring in cross-language information retrieval. Tested were: English queries translated into Finnish using an electronic dictionary, and run in a Finnish newspaper databases; effects of compound-based structuring using a proximity operator for translation equivalents of query language compound components; and a…
A Semantic Graph Query Language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, I L
2006-10-16
Semantic graphs can be used to organize large amounts of information from a number of sources into one unified structure. A semantic query language provides a foundation for extracting information from the semantic graph. The graph query language described here provides a simple, powerful method for querying semantic graphs.
Model-based query language for analyzing clinical processes.
Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris
2013-01-01
Nowadays, large databases of clinical process data exist in hospitals. However, these data are rarely used in full scope. In order to perform queries on hospital processes, one must either choose from predefined queries or develop queries using an MS Excel-type software system, which is not always a trivial task. In this paper we propose a new query language for analyzing clinical processes that is easily comprehensible to non-IT professionals as well. We develop this language on the basis of a process modeling language, which is also described in this paper. Prototypes of both languages have already been verified using real examples from hospitals.
Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data
NASA Astrophysics Data System (ADS)
Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.
2013-05-01
Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving, and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web-based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable and machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format will remain backward compatible with analysis software. XML technology is at the heart of communication over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web-based applications, educational outreach, and efficient collaboration between research groups.
End-User Use of Data Base Query Language: Pros and Cons.
ERIC Educational Resources Information Center
Nicholes, Walter
1988-01-01
Man-machine interface, the concept of a computer "query," a review of database technology, and a description of the use of query languages at Brigham Young University are discussed. The pros and cons of end-user use of database query languages are explored. (Author/MLW)
Astronomical Data Integration Beyond the Virtual Observatory
NASA Astrophysics Data System (ADS)
Lemson, G.; Laurino, O.
2015-09-01
"Data integration" generally refers to the process of combining data from different source data bases into a unified view. Much work has been devoted in this area by the International Virtual Observatory Alliance (IVOA), allowing users to discover and access databases through standard protocols. However, different archives present their data through their own schemas and users must still select, filter, and combine data for each archive individually. An important reason for this is that the creation of common data models that satisfy all sub-disciplines is fraught with difficulties. Furthermore it requires a substantial amount of work for data providers to present their data according to some standard representation. We will argue that existing standards allow us to build a data integration framework that works around these problems. The particular framework requires the implementation of the IVOA Table Access Protocol (TAP) only. It uses the newly developed VO data modelling language (VO-DML) specification, which allows one to define extensible object-oriented data models using a subset of UML concepts through a simple XML serialization language. A rich mapping language allows one to describe how instances of VO-DML data models are represented by the TAP service, bridging the possible mismatch between a local archive's schema and some agreed-upon representation of the astronomical domain. In this so called local-as-view approach to data integration, “mediators" use the mapping prescriptions to translate queries phrased in terms of the common schema to the underlying TAP service. This mapping language has a graphical representation, which we expose through a web based graphical “drag-and-drop-and-connect" interface. This service allows any user to map the holdings of any TAP service to the data model(s) of choice. The mappings are defined and stored outside of the data sources themselves, which allows the interface to be used in a kind of crowd-sourcing effort to annotate any remote database of interest. This reduces the burden of publishing one's data and allows a great flexibility in the definition of the views through which particular communities might wish to access remote archives. At the same time, the framework easies the user's effort to select, filter, and combine data from many different archives, so as to build knowledge bases for their analysis. We will present the framework and demonstrate a prototype implementation. We will discuss ideas for producing the missing elements, in particular the query language and the implementation of mediator tools to translate object queries to ADQL
Briache, Abdelaali; Marrakchi, Kamar; Kerzazi, Amine; Navas-Delgado, Ismael; Rossi Hassani, Badr D; Lairini, Khalid; Aldana-Montes, José F
2012-01-25
Saccharomyces cerevisiae is recognized as a model system representing a simple eukaryote whose genome can be easily manipulated. Information solicited by scientists on its biological entities (proteins, genes, RNAs...) is scattered across several data sources like SGD, Yeastract, CYGD-MIPS, BioGrid, PhosphoGrid, etc. Because of the heterogeneity of these sources, querying them separately and then manually combining the returned results is a complex and time-consuming task for biologists, most of whom are not bioinformatics experts. It also reduces and limits the use that can be made of the available data. To provide transparent and simultaneous access to yeast sources, we have developed YeastMed: an XML- and mediator-based system. In this paper, we present our approach to developing this system, which takes advantage of SB-KOM to perform the needed query transformation and a set of Data Services to reach the integrated data sources. The system is composed of a set of modules that depend heavily on XML and Semantic Web technologies. User queries are expressed in terms of a domain ontology through a simple form-based web interface. YeastMed is the first mediation-based system specifically for integrating yeast data sources. It was conceived mainly to help biologists find relevant data from multiple data sources simultaneously. It has an easy-to-use, biologist-friendly interface. The system is available at http://www.khaos.uma.es/yeastmed/.
RGG: A general GUI Framework for R scripts
Visne, Ilhami; Dilaveroglu, Erkan; Vierlinger, Klemens; Lauss, Martin; Yildiz, Ahmet; Weinhaeusel, Andreas; Noehammer, Christa; Leisch, Friedrich; Kriegner, Albert
2009-01-01
Background: R is the leading open source statistics software, with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required. Results: We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated at runtime from defined GUI tags that are embedded into the R script. User GUI input is returned to the R code and replaces the XML tags. RGG files can be developed using any text editor. The current version of RGG is available as stand-alone software (RGGRunner) and as a plug-in for JGR. Conclusion: RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated in any software. The RGG project further includes the development of a web-based repository for RGG GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely. PMID:19254356
Converting from XML to HDF-EOS
NASA Technical Reports Server (NTRS)
Ullman, Richard; Bane, Bob; Yang, Jingli
2008-01-01
A computer program recreates an HDF-EOS file from an Extensible Markup Language (XML) representation of the contents of that file. This program is one of two programs written to enable testing of the schemas described in the immediately preceding article to determine whether the schemas capture all details of HDF-EOS files.
NASA Astrophysics Data System (ADS)
Bargatze, L. F.
2015-12-01
Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion, providing blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data File (CDF) format and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, and catalogs, and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g., a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were subsequently queried on a nightly basis, and the CDF file lists were checked for changes such as the occurrence of new, modified, or deleted files, or the addition of new or the deletion of old data products. Next, ADAPT routines analyzed the query results and issued updates to the metadata stored in the UCLA CDAWEB and SPDF metadata registries. In this way, the SPASE metadata registries generated by ADAPT can be relied on to provide up-to-date and complete access to Heliophysics CDF data resources on a daily basis.
A Natural Language Interface Concordant with a Knowledge Base.
Han, Yong-Jin; Park, Seong-Bae; Park, Se-Young
2016-01-01
The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time, and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high, the proposed method rejects the question and does not answer it. Multi-predicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively.
Design Recommendations for Query Languages
1980-09-01
DESIGN RECOMMENDATIONS FOR QUERY LANGUAGES, S. L. Ehrenreich, Human Factors Technical Area. ...respond to queries that it recognizes as faulty. Codd (1974) states that in designing a natural query language, attention must be given to dealing... impaired. Codd (1974) also regarded the user's perception of the data base to be of critical importance in properly designing a query language system
The Cadmio XML healthcare record.
Barbera, Francesco; Ferri, Fernando; Ricci, Fabrizio L; Sottile, Pier Angelo
2002-01-01
The management of clinical data is a complex task. Patient-related information reported in patient folders is a set of heterogeneous and structured data accessed by different users having different goals (in local or geographical networks). The XML language provides a mechanism for describing, manipulating, and visualising structured data in web-based applications. XML ensures that structured data is managed in a uniform and transparent manner, independently of the applications and their providers, guaranteeing some interoperability. Extracting data from the healthcare record and structuring it according to XML makes the data available through browsers. This paper describes the MIC/MIE model (Medical Information Category/Medical Information Elements), which allows the definition and management of healthcare records and is used in CADMIO, a HISA-based project, using XML to allow the data to be visualised through web browsers.
Using XML Configuration-Driven Development to Create a Customizable Ground Data System
NASA Technical Reports Server (NTRS)
Nash, Brent; DeMore, Martha
2009-01-01
The Mission Data Processing and Control Subsystem (MPCS) is being developed as a multi-mission Ground Data System, with the Mars Science Laboratory (MSL) as the first fully supported mission. MPCS is a fully featured, Java-based Ground Data System (GDS) for telecommand and telemetry processing based on Configuration-Driven Development (CDD). The eXtensible Markup Language (XML) is the ideal language for CDD because it is easily readable and editable by all levels of users and is also backed by a World Wide Web Consortium (W3C) standard and numerous powerful processing tools that make it uniquely flexible. The CDD approach adopted by MPCS minimizes changes to compiled code by using XML to create a series of configuration files that provide both coarse- and fine-grained control over all aspects of GDS operation.
A Conversion Tool for Mathematical Expressions in Web XML Files.
ERIC Educational Resources Information Center
Ohtake, Nobuyuki; Kanahori, Toshihiro
2003-01-01
This article discusses the conversion of mathematical equations into Extensible Markup Language (XML) on the World Wide Web for individuals with visual impairments. A program is described that converts the presentation markup style to the content markup style in MathML to allow browsers to render mathematical expressions without other programs.…
Application of XML to Journal Table Archiving
NASA Astrophysics Data System (ADS)
Shaya, E. J.; Blackwell, J. H.; Gass, J. E.; Kargatis, V. E.; Schneider, G. L.; Weiland, J. L.; Borne, K. D.; White, R. A.; Cheung, C. Y.
1998-12-01
The Astronomical Data Center (ADC) at the NASA Goddard Space Flight Center is a major archive for machine-readable astronomical data tables. Many ADC tables are derived from published journal articles. Article tables are reformatted to be machine-readable, and documentation is crafted to facilitate proper reuse by researchers. The recent switch of journals to web-based electronic format has resulted in the generation of large amounts of tabular data that could be captured into machine-readable archive format at fairly low cost. The large data flow of tables from all major North American astronomical journals (a factor of 100 greater than the present rate at the ADC) necessitates the development of rigorous standards for the exchange of data between researchers, publishers, and the archives. We have selected a suitable markup language that can fully describe the large variety of astronomical information contained in ADC tables. The eXtensible Markup Language (XML) is a powerful, internet-ready documentation format for data. It provides a precise and clear data description language that is both machine- and human-readable. It is rapidly becoming the standard format for business and information transactions on the internet, and it is an ideal common metadata exchange format. By labelling, or "marking up", all elements of the information content, documents are created that computers can easily parse. An XML archive can easily and automatically be maintained, ingested into standard databases or custom software, and even totally restructured whenever necessary. Structuring astronomical data in XML format will enable efficient and focused search capabilities via off-the-shelf software. The ADC is investigating XML's expanded hyperlinking power to enhance connectivity within ADC data/metadata and developing XSL display scripts to enhance the display of astronomical data. The ADC XML Document Type Definition can be viewed at http://messier.gsfc.nasa.gov/dtdhtml/DTD-TREE.html
DAS: A Data Management System for Instrument Tests and Operations
NASA Astrophysics Data System (ADS)
Frailis, M.; Sartor, S.; Zacchei, A.; Lodi, M.; Cirami, R.; Pasian, F.; Trifoglio, M.; Bulgarelli, A.; Gianotti, F.; Franceschi, E.; Nicastro, L.; Conforti, V.; Zoli, A.; Smart, R.; Morbidelli, R.; Dadina, M.
2014-05-01
The Data Access System (DAS) is a metadata and data management software system, providing a reusable solution for the storage of data acquired both from telescopes and from auxiliary data sources during the instrument development phases and operations. It is part of the Customizable Instrument WorkStation system (CIWS-FW), a framework for the storage, processing and quick-look of the data acquired from scientific instruments. The DAS provides a data access layer mainly targeted to software applications: quick-look displays, pre-processing pipelines and scientific workflows. It is logically organized in three main components: an intuitive and compact Data Definition Language (DAS DDL) in XML format, aimed at user-defined data types; an Application Programming Interface (DAS API), automatically adding classes and methods supporting the DDL data types, and providing an object-oriented query language; and a data management component, which maps the metadata of the DDL data types to a relational Data Base Management System (DBMS) and stores the data in a shared (network) file system. With the DAS DDL, developers define the data model for a particular project, specifying for each data type the metadata attributes, the data format and layout (if applicable), and named references to related or aggregated data types. Together with the DDL user-defined data types, the DAS API acts as the only interface to store, query and retrieve the metadata and data in the DAS system, providing both an abstract interface and a data model specific one in C, C++ and Python. The mapping of metadata in the back-end database is automatic and supports several relational DBMSs, including MySQL, Oracle and PostgreSQL.
Information Retrieval and Graph Analysis Approaches for Book Recommendation
Benkoussas, Chahinez; Bellot, Patrice
2015-01-01
A combination of multiple information retrieval approaches is proposed for the purpose of book recommendation. In this paper, book recommendation is based on complex user queries. We used different theoretical retrieval models: probabilistic models such as InL2 (a Divergence from Randomness model) and a language model, and tested their interpolated combination. Graph analysis algorithms such as PageRank have been successful in Web environments. We consider the application of this algorithm in a new retrieval approach based on a related-document network comprising social links. We call a network constructed from documents and the social information provided by each of them a Directed Graph of Documents (DGD). Specifically, this work tackles the problem of book recommendation in the context of the INEX (Initiative for the Evaluation of XML retrieval) Social Book Search track. A series of reranking experiments demonstrate that combining retrieval models yields significant improvements in terms of standard ranked retrieval metrics. These results extend the applicability of link analysis algorithms to different environments. PMID:26504899
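A small sketch of the DGD idea using the networkx library: documents become nodes, social links become directed edges, and the resulting PageRank scores can be interpolated with retrieval scores. The toy edges below stand in for the INEX Social Book Search data.

```python
# PageRank over a Directed Graph of Documents (DGD): nodes are books, edges
# are social links between them. networkx is a real library; the toy edges
# are placeholders for the INEX Social Book Search collection.
import networkx as nx

dgd = nx.DiGraph()
dgd.add_edges_from([("book_a", "book_b"), ("book_b", "book_c"),
                    ("book_c", "book_a"), ("book_a", "book_c")])

scores = nx.pagerank(dgd, alpha=0.85)
# One way to combine with a retrieval score (interpolation weights assumed):
# final(d) = 0.7 * retrieval_score(d) + 0.3 * scores[d]
print(sorted(scores, key=scores.get, reverse=True))
```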
Data Integration Using SOAP in the VSO
NASA Astrophysics Data System (ADS)
Tian, K. Q.; Bogart, R. S.; Davey, A.; Dimitoglou, G.; Gurman, J. B.; Hill, F.; Martens, P. C.; Wampler, S.
2003-05-01
The Virtual Solar Observatory (VSO) project has implemented a time interval search for all four participating data archives. The back-end query services are implemented as web services, and are accessible via SOAP. SOAP (Simple Object Access Protocol) defines an RPC (Remote Procedure Call) mechanism that employs HTTP as its transport and encodes the client-server interactions (request and response messages) in XML (eXtensible Markup Language) documents. In addition to its core function of identifying relevant datasets in the local archive, the SOAP server at each data provider acts as a "wrapper" that maps descriptions in an abstract data model to those in the provider-specific data model, and vice versa. It is in this way that VSO integrates heterogeneous data services and allows access to them using a common interface. Our experience with SOAP has been fruitful. It has proven to be a better alternative to traditional web access methods, namely POST and GET, because of its flexibility and interoperability.
NASA Astrophysics Data System (ADS)
Kuznetsov, Valentin; Riley, Daniel; Afaq, Anzar; Sekhri, Vijay; Guo, Yuyi; Lueking, Lee
2010-04-01
The CMS experiment has implemented a flexible and powerful system enabling users to find data within the CMS physics data catalog. The Dataset Bookkeeping Service (DBS) comprises a database and the services used to store and access metadata related to CMS physics data. To this we have added a generalized query system, in addition to the existing web and programmatic interfaces to the DBS. This query system is based on a query language that hides the complexity of the underlying database structure by discovering the join conditions between database tables. This provides a way of querying the system that is simple and straightforward for CMS data managers and physicists to use, without requiring knowledge of the database tables or keys. The DBS Query Language uses the ANTLR tool to build the input query parser and tokenizer, followed by a query builder that uses a graph representation of the DBS schema to construct the SQL query sent to the underlying database. We will describe the design of the query system, provide details of the language components, and give an overview of how this component fits into the overall data discovery system architecture.
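The key trick, discovering join conditions from a graph of the schema, can be sketched as a breadth-first search over tables. The tables and foreign-key edges below are simplified assumptions for illustration, not the real DBS schema.

```python
# Sketch of the core idea in the DBS Query Language: discover the join
# conditions linking two tables by finding a path in a schema graph.
# The tables and foreign-key edges are simplified assumptions.
from collections import deque

SCHEMA = {  # table -> {neighbor table: join condition}
    "dataset": {"block": "dataset.id = block.dataset_id"},
    "block": {"dataset": "dataset.id = block.dataset_id",
              "file": "block.id = file.block_id"},
    "file": {"block": "block.id = file.block_id"},
}

def join_path(src, dst):
    """Breadth-first search for the join conditions linking two tables."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        table, conds = queue.popleft()
        if table == dst:
            return conds
        for nxt, cond in SCHEMA[table].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, conds + [cond]))
    return None

print(join_path("dataset", "file"))
# -> ['dataset.id = block.dataset_id', 'block.id = file.block_id']
```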
Markó, K; Schulz, S; Hahn, U
2005-01-01
We propose an interlingua-based indexing approach to account for the particular challenges that arise in the design and implementation of cross-language document retrieval systems for the medical domain. Documents, as well as queries, are mapped to a language-independent conceptual layer on which retrieval operations are performed. We contrast this approach with the direct translation of German queries into English ones which, subsequently, are matched against English documents. We evaluate both approaches, interlingua-based and direct translation, on a large medical document collection, the OHSUMED corpus. A substantial benefit for interlingua-based document retrieval using German queries on English texts is found, amounting to 93% of the (monolingual) English baseline. Most state-of-the-art cross-language information retrieval systems translate user queries into the language(s) of the target documents. In contradistinction to this approach, translating both documents and user queries into a language-independent, concept-like representation format is more beneficial for enhancing cross-language retrieval performance.
XML Based Scientific Data Management Facility
NASA Technical Reports Server (NTRS)
Mehrotra, Piyush; Zubair, M.; Ziebartt, John (Technical Monitor)
2001-01-01
The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of HTML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, to discover the transformation functions, and also to apply the required transformations. We have built a prototype of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
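The transformation step at XDMF's core, applying an XSLT stylesheet to an XML data file, can be sketched as follows. XDMF itself used Java and Xalan, so this Python lxml version illustrates the operation only; the file names are hypothetical.

```python
# Applying an XSLT transformation to an XML data file, the core operation
# XDMF builds on (XDMF used Java/Xalan; this is the same idea via Python's
# lxml). File names are hypothetical placeholders.
from lxml import etree

transform = etree.XSLT(etree.parse("to_common_units.xsl"))  # hypothetical
source = etree.parse("sensor_run42.xml")                    # hypothetical
result = transform(source)
print(str(result))
```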
Towards Big Earth Data Analytics: The EarthServer Approach
NASA Astrophysics Data System (ADS)
Baumann, Peter
2013-04-01
Big Data in the Earth sciences, the Tera- to Exabyte archives, is mostly made up of coverage data, whereby the term "coverage", according to ISO and OGC, is defined as the digital representation of some space-time varying phenomenon. Common examples include 1-D sensor timeseries, 2-D remote sensing imagery, 3-D x/y/t image timeseries and x/y/z geology data, and 4-D x/y/z/t atmosphere and ocean data. Analytics on such data requires on-demand processing of sometimes significant complexity, such as getting the Fourier transform of satellite images. As network bandwidth limits prohibit the transfer of such Big Data, it is indispensable to devise protocols allowing clients to task flexible and fast processing on the server. The EarthServer initiative, funded by EU FP7 eInfrastructures, unites 11 partners from the computer and earth sciences to establish Big Earth Data Analytics. One key ingredient is flexibility for users to ask for what they want, not impeded and complicated by system internals. The EarthServer answer to this is to use high-level query languages; these have proven tremendously successful on tabular and XML data, and we extend them with a central geo data structure, multi-dimensional arrays. A second key ingredient is scalability. Without any doubt, scalability ultimately can only be achieved through parallelization. In the past, parallelizing code has been done at compile time and usually with manual intervention. The EarthServer approach is to perform a semantics-based dynamic distribution of query fragments based on network optimization and further criteria. The EarthServer platform is built on rasdaman, an Array DBMS enabling efficient storage and retrieval of any-size, any-type multi-dimensional raster data. In the project, rasdaman is being extended with several functionality and scalability features, including: support for irregular grids and general meshes; in-situ retrieval (evaluation of database queries on existing archive structures, avoiding data import and, hence, duplication); and the aforementioned distributed query processing. Additionally, Web clients for multi-dimensional data visualization are being established. Client/server interfaces are strictly based on OGC and W3C standards, in particular the Web Coverage Processing Service (WCPS), which defines a high-level raster query language. We present the EarthServer project with its vision and approaches, relate it to the current state of standardization, and demonstrate it by way of large-scale data centers and their services using rasdaman.
An RDF version of the VO Registry Version 1.00
NASA Astrophysics Data System (ADS)
Gray, Norman; Gray, Norman
2007-09-01
We describe the initial implementation of an RDF version of the IVOA Resource Registry, serving the registry data via a SPARQL query endpoint, including the creation of the ontology analogues of an important subset of the relevant XML Schemas, and the mechanics of the conversion process. The result is an experimental service, and this is an interim document.
PACS and electronic health records
NASA Astrophysics Data System (ADS)
Cohen, Simona; Gilboa, Flora; Shani, Uri
2002-05-01
Electronic Health Record (EHR) is a major component of the health informatics domain. An important part of the EHR is the medical images obtained over a patient's lifetime and stored in diverse PACS. The vision presented in this paper is that future medical information systems will convert data from various medical sources -- including diverse modalities, PACS, HIS, CIS, RIS, and proprietary systems -- to HL7 standard XML documents. The various documents are then indexed and compiled into EHRs, upon which complex queries can be posed. We describe the conversion of data retrieved from PACS systems through DICOM to HL7 standard XML documents. This enables the EHR system to answer queries such as 'Get all chest images of patients aged 20-30 that have blood type A and are allergic to pine trees', which a single PACS cannot answer. The integration of data from multiple sources makes our approach capable of delivering such answers. It enables the correlation of medical, demographic, clinical, and even genetic information. In addition, by fully indexing all the tagged data in DICOM objects, it becomes possible to offer access to huge amounts of valuable data, which can be better exploited in the specific radiology domain.
XML-Based Generator of C++ Code for Integration With GUIs
NASA Technical Reports Server (NTRS)
Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard
2003-01-01
An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increased susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is that it stores input data in a structured manner; more importantly, XML allows not just storing of data but also describing what each of the data items is. The XML file contains information useful for rendering the data by other applications. The program then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: (1) as an executable, it generates the corresponding C++ classes, and (2) as a library, it automatically fills the objects with the input data values.
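A toy version of the XML-to-C idea: read field definitions from an XML specification and emit the corresponding C++ struct, so the simulation program and the GUI share a single definition. The input format here is invented for illustration; the abstract does not show XML-to-C's real schema.

```python
# Toy sketch of the XML-to-C idea: read a data-item specification from XML
# and emit a matching C++ struct. The <params>/<field> input format is
# invented for illustration only.
import xml.etree.ElementTree as ET

SPEC = """<params>
  <field name="temperature" type="double"/>
  <field name="num_steps" type="int"/>
</params>"""

def emit_struct(xml_text, name="InputParams"):
    fields = ET.fromstring(xml_text).findall("field")
    lines = [f"struct {name} {{"]
    lines += [f"  {f.get('type')} {f.get('name')};" for f in fields]
    lines.append("};")
    return "\n".join(lines)

print(emit_struct(SPEC))
```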
Relational Algebra and SQL: Better Together
ERIC Educational Resources Information Center
McMaster, Kirby; Sambasivam, Samuel; Hadfield, Steven; Wolthuis, Stuart
2013-01-01
In this paper, we describe how database instructors can teach Relational Algebra and Structured Query Language together through programming. Students write query programs consisting of sequences of Relational Algebra operations vs. Structured Query Language SELECT statements. The query programs can then be run interactively, allowing students to…
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Liu, I-Hsiung
1985-01-01
The currently developed multi-level language interfaces of information systems are generally designed for experienced users. These interfaces commonly ignore the nature and needs of the largest user group, i.e., casual users. This research identifies the importance of natural language query system research within information storage and retrieval system development; addresses the topics of developing such a query system; and finally, proposes a framework for the development of natural language query systems in order to facilitate the communication between casual users and information storage and retrieval systems.
A natural language interface plug-in for cooperative query answering in biological databases.
Jamil, Hasan M
2012-06-11
One of the many unique features of biological databases is that the mere existence of a ground data item is not always a precondition for a query response. It may be argued that, from a biologist's standpoint, queries are not always best posed using a structured language. By this we mean that approximate and flexible responses to natural-language-like queries are well suited for this domain. This is partly due to biologists' tendency to seek simpler interfaces and partly due to the fact that questions in biology involve high-level concepts that are open to interpretations computed using sophisticated tools. In such highly interpretive environments, rigidly structured databases do not always perform well. In this paper, our goal is to propose a semantic correspondence plug-in to aid natural language query processing over arbitrary biological database schemas, with an aim to providing cooperative responses to queries tailored to users' interpretations. Natural language interfaces for databases are generally effective when they are tuned to the underlying database schema and its semantics. Therefore, changes in the database schema become impossible to support, or a substantial reorganization cost must be absorbed to reflect any change. We leverage developments in natural language parsing, rule languages and ontologies, and data integration technologies to assemble a prototype query processor that is able to transform a natural language query into a semantically equivalent structured query over the database. We allow knowledge rules and their frequent modifications as part of the underlying database schema. The approach we adopt in our plug-in overcomes some of the serious limitations of many contemporary natural language interfaces, including support for schema modifications and independence from the underlying database schema. The plug-in introduced in this paper is generic and facilitates connecting user-selected natural language interfaces to arbitrary databases using a semantic description of the intended application. We demonstrate the feasibility of our approach with a practical example.
A Visual Interface for Querying Heterogeneous Phylogenetic Databases.
Jamil, Hasan M
2017-01-01
Despite the recent growth in the number of phylogenetic databases, access to this wealth of resources remains largely driven by tools or form-based interfaces. It is our thesis that the flexibility afforded by declarative query languages may offer the opportunity to access these repositories in a better way, and to use such a language to pose truly powerful queries in unprecedented ways. In this paper, we propose a substantially enhanced closed visual query language, called PhyQL, that can be used to query phylogenetic databases represented in a canonical form. The canonical representation presented helps capture most phylogenetic tree formats in a convenient way, and is used as the storage model for our PhyloBase database, for which PhyQL serves as the query language. We have implemented a visual interface for end users to pose PhyQL queries using visual icons and drag-and-drop operations defined over them. Once a query is posed, the interface translates the visual query into a Datalog query for execution over the canonical database. Responses are returned as hyperlinks to phylogenies that can be viewed in several formats using the tree viewers supported by PhyloBase. Results cached in the PhyQL buffer allow secondary querying on the computed results, making it a truly powerful querying architecture.
2011-12-28
specify collaboration constraints that occur in Java and XML frameworks and that the collaboration constraints from these frameworks matter in practice. (a...programming language boundaries, and Chapter 6 and Appendix A demonstrate that Fusion can specify constraints across both Java and XML in practice. (c...designed JUnit, Josh Bloch designed Java Collections, and Krzysztof Cwalina designed the .NET Framework APIs. While all of these frameworks are very
Telemetry Attributes Transfer Standard (TMATS) Handbook
2015-07-01
Example ... 6-1. Appendix A. Extensible Markup Language TMATS Differences ... A-1. Appendix B... return-to-zero-level; TG Telemetry Group; TM telemetry; TMATS Telemetry Attributes Transfer Standard; XML eXtensible Markup Language. Telemetry... Markup Language) format. The initial version of a standard 1 Range Commanders Council. Telemetry
Brave New World: Data Intensive Science with SDSS and the VO
NASA Astrophysics Data System (ADS)
Thakar, A. R.; Szalay, A. S.; O'Mullane, W.; Nieto-Santisteban, M.; Budavari, T.; Li, N.; Carliles, S.; Haridas, V.; Malik, T.; Gray, J.
2004-12-01
With the advent of digital archives and the VO, astronomy is quickly changing from a data-hungry to a data-intensive science. Local and specialized access to data will remain the most direct and efficient way to get data out of individual archives, especially if you know what you are looking for. However, the enormous sizes of the upcoming archives will preclude this type of access for most institutions, and will not allow researchers to tap the vast potential for discovery in cross-matching and comparing data between different archives. The VO makes this type of interoperability and distributed data access possible by adopting industry standards for data access (SQL) and data interchange (SOAP/XML) with platform independence (Web services). As a sneak preview of this brave new world where astronomers may need to become SQL warriors, we present a look at VO-enabled access to catalog data in the SDSS Catalog Archive Server (CAS): CasJobs - a workbench environment that allows arbitrarily complex SQL queries and your own personal database (MyDB) that you can share with collaborators; OpenSkyQuery - an IVOA (International Virtual Observatory Alliance) compliant federation of multiple archives (OpenSkyNodes) that currently links nearly 20 catalogs and allows cross-match queries (in ADQL - Astronomical Data Query Language) between them; Spectrum and Filter Profile Web services that provide access to an open database of spectra (registered users may add their own spectra); and VO-enabled Mirage - a Java visualization tool developed at Bell Labs and enhanced at JHU that allows side-by-side comparison of SDSS catalog and FITS image data. Anticipating the next generation of Petabyte archives like LSST by the end of the decade, we are developing a parallel cross-match engine for all-sky cross-matches between large surveys, along with a 100-Terabyte data intensive science laboratory with high-speed parallel data access.
A Relational Algebra Query Language for Programming Relational Databases
ERIC Educational Resources Information Center
McMaster, Kirby; Sambasivam, Samuel; Anderson, Nicole
2011-01-01
In this paper, we describe a Relational Algebra Query Language (RAQL) and Relational Algebra Query (RAQ) software product we have developed that allows database instructors to teach relational algebra through programming. Instead of defining query operations using mathematical notation (the approach commonly taken in database textbooks), students…
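As a hedged illustration of teaching relational algebra "through programming", the sketch below implements selection, projection, and natural join as ordinary functions over lists of dictionaries. This is not RAQL's actual syntax, which the abstract does not show.

```python
def select(rel, pred):            # sigma: keep tuples satisfying a predicate
    return [t for t in rel if pred(t)]

def project(rel, attrs):          # pi: keep named attributes, set semantics
    seen, out = set(), []
    for t in rel:
        key = tuple(t[a] for a in attrs)
        if key not in seen:
            seen.add(key)
            out.append(dict(zip(attrs, key)))
    return out

def join(r, s):                   # natural join on shared attribute names
    shared = set(r[0]) & set(s[0]) if r and s else set()
    return [{**t, **u} for t in r for u in s
            if all(t[a] == u[a] for a in shared)]

emp = [{"name": "Ann", "dept": 1}, {"name": "Bo", "dept": 2}]
dept = [{"dept": 1, "dname": "IT"}, {"dept": 2, "dname": "HR"}]
print(project(select(join(emp, dept), lambda t: t["dname"] == "IT"), ["name"]))
# -> [{'name': 'Ann'}]
```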
Object-Oriented Query Language For Events Detection From Images Sequences
NASA Astrophysics Data System (ADS)
Ganea, Ion Eugen
2015-09-01
In this paper we present a method for representing events extracted from image sequences, together with the query language used for event detection. Using an object-oriented model, the spatial and temporal relationships between salient objects, and also between events, are stored and queried. This work aims to unify the storing and querying phases for video event processing. The object-oriented language syntax used for event processing allows the instantiation of index classes in order to improve the accuracy of the query results. The experiments were performed on image sequences from the sports domain and show the reliability and robustness of the proposed language. To extend the language, a specific syntax will be added for constructing templates for abnormal events and for incident detection, the final goal of this research.
Motivated Proteins: A web application for studying small three-dimensional protein motifs
Leader, David P; Milner-White, E James
2009-01-01
Background Small loop-shaped motifs are common constituents of the three-dimensional structure of proteins. Typically they comprise between three and seven amino acid residues, and are defined by a combination of dihedral angles and hydrogen bonding partners. The most abundant of these are αβ-motifs, asx-motifs, asx-turns, β-bulges, β-bulge loops, β-turns, nests, niches, Schellmann loops, ST-motifs, ST-staples and ST-turns. We have constructed a database of such motifs from a range of high-quality protein structures and built a web application as a visual interface to this. Description The web application, Motivated Proteins, provides access to these 12 motifs (with 48 sub-categories) in a database of over 400 representative proteins. Queries can be made for specific categories or sub-categories of motif, motifs in the vicinity of ligands, motifs which include part of an enzyme active site, overlapping motifs, or motifs which include a particular amino acid sequence. Individual proteins can be specified, or, where appropriate, motifs for all proteins listed. The results of queries are presented in textual form as an (X)HTML table, and may be saved as parsable plain text or XML. Motifs can be viewed and manipulated either individually or in the context of the protein in the Jmol applet structural viewer. Cartoons of the motifs imposed on a linear representation of protein secondary structure are also provided. Summary information for the motifs is available, as are histograms of amino acid distribution, and graphs of dihedral angles at individual positions in the motifs. Conclusion Motivated Proteins is a publicly and freely accessible web application that enables protein scientists to study small three-dimensional motifs without requiring knowledge of either Structured Query Language or the underlying database schema. PMID:19210785
Federated querying architecture with clinical & translational health IT application.
Livne, Oren E; Schultz, N Dustin; Narus, Scott P
2011-10-01
We present a software architecture that federates data from multiple heterogeneous health informatics data sources owned by multiple organizations. The architecture builds upon state-of-the-art open-source Java and XML frameworks in innovative ways. It consists of (a) a federated query engine, which manages federated queries and result-set aggregation via a patient identification service; and (b) data source facades, which translate the physical data models into a common model on-the-fly and handle large result-set streaming. System modules are connected via reusable Apache Camel integration routes and deployed to an OSGi enterprise service bus. We present an application of our architecture that allows users to construct queries via the i2b2 web front-end, and federates patient data from the University of Utah Enterprise Data Warehouse and the Utah Population Database. Our system can be easily adopted, extended and integrated with existing SOA Healthcare and HL7 frameworks such as i2b2 and caGrid.
XML Flight/Ground Data Dictionary Management
NASA Technical Reports Server (NTRS)
Wright, Jesse; Wiklow, Colette
2007-01-01
A computer program generates Extensible Markup Language (XML) files that effect coupling between the command- and telemetry-handling software running aboard a spacecraft and the corresponding software running in ground support systems. The XML files are produced by use of information from the flight software and from flight-system engineering. The XML files are converted to legacy ground-system data formats for command and telemetry, transformed into Web-based and printed documentation, and used in developing new ground-system data-handling software. Previously, the information about telemetry and command was scattered in various paper documents that were not synchronized. The process of searching and reading the documents was time-consuming and introduced errors. In contrast, the XML files contain all of the information in one place. XML structures can evolve in such a manner as to enable the addition, to the XML files, of the metadata necessary to track the changes and the associated documentation. The use of this software has reduced the extent of manual operations in developing a ground data system, thereby saving considerable time and removing errors that previously arose in the translation and transcription of software information from the flight to the ground system.
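A toy version of the XML-to-legacy translation described above might look like the following; the element and attribute names are invented, not the actual flight-software dictionary schema.

```python
import xml.etree.ElementTree as ET

XML = """<telemetry>
  <point mnemonic="BATT_V" type="float" units="V"/>
  <point mnemonic="MODE"   type="uint8" units="none"/>
</telemetry>"""

# Legacy ground systems often expect fixed-order, comma-separated records.
for p in ET.fromstring(XML).iter("point"):
    print(",".join(p.get(k) for k in ("mnemonic", "type", "units")))
# BATT_V,float,V
# MODE,uint8,none
```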
Web Browser Trends and Technologies.
ERIC Educational Resources Information Center
Goodwin-Jones, Bob
2000-01-01
Discusses Web browsers and how their capabilities have been expanded, support for Web browsing on different devices (cell phones, palmtop computers, TV sets), and browser support for the next-generation Web authoring language, XML ("extensible markup language"). (Author/VWL)
Descriptive Metadata: Emerging Standards.
ERIC Educational Resources Information Center
Ahronheim, Judith R.
1998-01-01
Discusses metadata, digital resources, cross-disciplinary activity, and standards. Highlights include Standard Generalized Markup Language (SGML); Extensible Markup Language (XML); Dublin Core; Resource Description Framework (RDF); Text Encoding Initiative (TEI); Encoded Archival Description (EAD); art and cultural-heritage metadata initiatives;…
Teaching XBRL to Graduate Business Students: A Hands-On Approach
ERIC Educational Resources Information Center
Pinsker, Robert
2004-01-01
EXtensible Business Reporting Language (XBRL) is a non-proprietary computer language that has many uses. Known primarily as the Extensible Markup Language (XML) for business reporting, XBRL allows entities to report their business information (i.e., financial statements, announcements, etc.) on the Internet and communicate with other entities'…
Using OPeNDAP's Data-Services Framework to Lift Mash-Ups above Blind Dates
NASA Astrophysics Data System (ADS)
Gallagher, J. H. R.; Fulker, D. W.
2015-12-01
OPeNDAP's data-as-service framework (Hyrax) matches diverse sources with many end-user tools and contexts. Keys to its flexibility include: A data model embracing tabular data alongside n-dim arrays and other structures useful in geoinformatics. A REST-like protocol that supports—via suffix notation—a growing set of output forms (netCDF, XML, etc.) plus a query syntax for subsetting. Subsetting applies (via constraints on column values) to tabular data or (via constraints on indices or coordinates) to array-style data. A handler-style architecture that admits a growing set of input types. Community members may contribute handlers, making Hyrax effective as middleware, where N sources are mapped to M outputs with order N+M effort (not NxM). Hyrax offers virtual aggregations of source data, enabling granularity aimed at users, not data-collectors. OPeNDAP-access libraries exist in multiple languages, including Python, Java, and C++. Recent enhancements are increasing this framework's interoperability (i.e., its mash-up) potential. Extensions implemented as servlets—running adjacent to Hyrax—are enriching the forms of aggregation and enabling new protocols: User-specified aggregations, namely, applying a query to (huge) lists of source granules, and receiving one (large) table or zipped netCDF file. OGC (Open Geospatial Consortium) protocols, WMS and WCS. A Webification (W10n) protocol that returns JavaScript Object Notation (JSON). Extensions to OPeNDAP's query language are reducing transfer volumes and enabling new forms of inspection. Advances underway include: Functions that, for triangular-mesh sources, return sub-meshes specified via geospatial bounding boxes. Functions that, for data from multiple, satellite-borne sensors (with differing orbits), select observations based on coincidence. Calculations of means, histograms, etc. that greatly reduce output volumes. Paths for communities to contribute new server functions (in Python, e.g.) that data providers may incorporate into Hyrax via installation parameters. One could say Hyrax itself is a mash-up, but we suggest it as an instrument for a mash-up artist's toolbox. This instrument can support mash-ups built on netCDF files, OGC protocols, JavaScript Web pages, and/or programs written in Python, Java, C or C++.
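The suffix-plus-constraint access pattern can be sketched as URL construction; the host, dataset path, and variable names below are placeholders rather than a real Hyrax server.

```python
from urllib.parse import quote

base = "http://example.org/opendap/sst_monthly"   # placeholder dataset URL
suffix = ".nc"                                    # suffix selects netCDF output
# Index-range subsetting on an array variable: var[time][lat][lon]
constraint = "sst[0:1:11][100:1:200][300:1:400]"
url = base + suffix + "?" + quote(constraint, safe="[]:,")
print(url)
```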
NASA Astrophysics Data System (ADS)
Li, C.; Zhu, X.; Guo, W.; Liu, Y.; Huang, H.
2015-05-01
A method suitable for complex indoor semantic queries, considering the computation of indoor spatial relations, is provided according to the characteristics of indoor space. This paper designs an ontology model describing the space-related information of humans, events, and indoor space objects (e.g. Storey and Room), as well as their relations, to meet indoor semantic query needs. The ontology concepts are used in the IndoorSPARQL query language, which extends SPARQL syntax for representing and querying indoor space. Four specific primitives for indoor querying, "Adjacent", "Opposite", "Vertical" and "Contain", are defined as query functions in IndoorSPARQL to support quantitative spatial computations. A method is also proposed to analyse the query language. Finally, this paper adopts the method to realize indoor semantic queries on the study area by constructing an ontology model for the study building. The experimental results show that the method proposed in this paper can effectively support complex indoor space semantic queries.
Knowledge Query Language (KQL)
2016-02-12
Currently, queries for data retrieval from non-Structured Query Language (NoSQL) data stores are tightly coupled to the specific implementation of the data store ...independent of the storage content and format for querying NoSQL or relational data stores. This approach uses address expressions (or A-Expressions
Making journals accessible to the visually impaired: the future is near
GARDNER, John; BULATOV, Vladimir; KELLY, Robert
2010-01-01
The American Physical Society (APS) has been a leader in using markup languages for publishing. ViewPlus has led development of innovative technologies for graphical information accessibility by people with print disabilities. APS, ViewPlus, and other collaborators in the Enhanced Reading Project are working together to develop the necessary technology and infrastructure for APS to publish its journals in the DAISY (Digital Accessible Information SYstem) eXtensible Markup Language (XML) format, in which all text, math, and figures would be accessible to people who are blind or have other print disabilities. The first APS DAISY XML publications are targeted for late 2010. PMID:20676358
SBMLeditor: effective creation of models in the Systems Biology Markup Language (SBML)
Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas
2007-01-01
Background The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. Results SBMLeditor is written in Java using JCompneur, a library providing interfaces to easily display an XML document as a tree. This dramatically decreases the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. Conclusion SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors. PMID:17341299
SBMLeditor: effective creation of models in the Systems Biology Markup language (SBML).
Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas
2007-03-06
The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. SBMLeditor is written in Java using JCompneur, a library providing interfaces to easily display an XML document as a tree. This dramatically decreases the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors.
Generating and Executing Complex Natural Language Queries across Linked Data.
Hamon, Thierry; Mougin, Fleur; Grabar, Natalia
2015-01-01
With the recent and intensive research in the biomedical area, the knowledge accumulated is disseminated through various knowledge bases. Links between these knowledge bases are needed in order to use them jointly. Linked Data, the SPARQL language, and Natural Language question-answering interfaces provide interesting solutions for querying such knowledge bases. We propose a method for translating natural language questions into SPARQL queries. We use Natural Language Processing tools, semantic resources, and the RDF triples description. The method is designed on 50 questions over 3 biomedical knowledge bases, and evaluated on 27 questions. It achieves 0.78 F-measure on the test set. The method for translating natural language questions into SPARQL queries is implemented as a Perl module available at http://search.cpan.org/~thhamon/RDF-NLP-SPARQLQuery.
Schema for Spacecraft-Command Dictionary
NASA Technical Reports Server (NTRS)
Laubach, Sharon; Garcia, Celina; Maxwell, Scott; Wright, Jesse
2008-01-01
An Extensible Markup Language (XML) schema was developed as a means of defining and describing a structure for capturing spacecraft command-definition and tracking information in a single location in a form readable by both engineers and software used to generate software for flight and ground systems. A structure defined within this schema is then used as the basis for creating an XML file that contains command definitions.
Knowledge Query Language (KQL)
2016-02-01
Currently, queries for data retrieval from non-Structured Query Language (NoSQL) data stores are tightly coupled to the specific implementation of the data store, making ...of the storage content and format for querying NoSQL or relational data stores. This approach uses address expressions (or A-Expressions) embedded in
Towards health care process description framework: an XML DTD design.
Staccini, P.; Joubert, M.; Quaranta, J. F.; Aymard, S.; Fieschi, D.; Fieschi, M.
2001-01-01
The development of health care and hospital information systems has to meet users' needs as well as requirements such as the tracking of all care activities and the support of quality improvement. The use of process-oriented analysis is of value to provide analysts with: (i) a systematic description of activities; (ii) the elicitation of the useful data to perform and record care tasks; (iii) the selection of relevant decision-making support. But paper-based tools are not a very suitable way to manage and share the documentation produced during this step. The purpose of this work is to propose a method to implement the results of process analysis using XML techniques (eXtensible Markup Language). It is based on the IDEF0 activity modeling language (Integration DEfinition for Function modeling). A hierarchical description of a process and its components has been defined through a flat XML file with a grammar of proper metadata tags. Perspectives of this method are discussed. PMID:11825265
Compressing Aviation Data in XML Format
NASA Technical Reports Server (NTRS)
Patel, Hemil; Lau, Derek; Kulkarni, Deepak
2003-01-01
Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running Xmill. This, in turn, depends on the nature of the data used. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data, along with the optimal settings to be used with it.
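The advisory-tool idea can be mimicked with standard-library codecs: compress a sample of the data with several algorithms and recommend the smallest result. Xmill and XMLPPM are external tools, so plain zlib/bz2/lzma stand in for them here.

```python
import bz2, lzma, zlib

# Repetitive XML-like sample data; real advisory runs would use actual files.
sample = ("<flight>" + "<leg from='SFO' to='LAX' alt='35000'/>" * 500
          + "</flight>").encode()

candidates = {
    "zlib": lambda d: zlib.compress(d, 9),
    "bz2":  lambda d: bz2.compress(d, 9),
    "lzma": lambda d: lzma.compress(d),
}
sizes = {name: len(fn(sample)) for name, fn in candidates.items()}
print(sizes, "-> recommend", min(sizes, key=sizes.get))
```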
Report of Official Foreign Travel to Montreal, Canada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, J. D.
How can DOE, NNSA, and Y-12 best handle the integration of information from diverse sources, and what will best ensure that legacy data will survive changes in computing systems for the future? Although there is no simple answer, it is becoming increasingly clear throughout the information-management industry that a key component of both preservation and integration of information is the adoption of standardized data formats. The most notable standardized format is XML, to which almost all data is now migrating. XML is derived from SGML, as is HTML, the common language of the World Wide Web. XML is becoming increasingly important as part of the Y-12 data infrastructure. Y-12 is implementing a new generation of XML-based publishing systems. Y-12 already has been supporting projects at DOE Headquarters, such as the Guidance Streamlining Initiative (GSI) that will result in the storage of classification guidance in XML. Y-12 collects some test data in XML as the result of Electronic Data Capture (EDC), and XML data is also used in Engineering Releases. I am participating in a series of projects sponsored by the PRIDE initiative that include the capture of dimensional certification and other similar records in XML, the creation of XML formats for Electronic Data Capture, and the creation of Quality Evaluation Reports in XML. In support of DOE's use of SGML, XML, HTML, Topic Maps, and related standards, I served 1985-2007 as chairman of the international committee responsible for SGML and standards derived from it, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations; I continue to belong to the committee. During the August 2010 trip, I co-chaired the conference Balisage 2010.
Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration
NASA Technical Reports Server (NTRS)
Lin, Risheng; Afjeh, Abdollah A.
2003-01-01
This paper discusses the detailed design of an XML databinding framework for aircraft engine simulation. The framework provides an object interface to access and use engine data, while at the same time preserving the meaning of the original data. The language-independent representation of engine-component data enables users to move XML data using HTTP across disparate networks. The application of this framework is demonstrated via a web-based turbofan propulsion system simulation using the World Wide Web (WWW). A Java Servlet based web component architecture is used for rendering XML engine data into HTML format and dealing with input events from the user, which allows users to interact with simulation data from a web browser. The simulation data can also be saved to a local disk for archiving or to restart the simulation at a later time.
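A minimal databinding sketch in Python (the paper's framework is Java-based): XML engine-component data is mapped onto typed objects so that application code works with attributes rather than raw markup. Element and attribute names are invented for illustration.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    pressure_ratio: float

XML = """<engine>
  <component name="fan" pressureRatio="1.7"/>
  <component name="compressor" pressureRatio="12.5"/>
</engine>"""

# Bind each XML element to a typed object, preserving the data's meaning.
components = [Component(e.get("name"), float(e.get("pressureRatio")))
              for e in ET.fromstring(XML).iter("component")]
print(components[1].pressure_ratio)   # 12.5 -- attribute access, not raw markup
```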
Yokochi, Masashi; Kobayashi, Naohiro; Ulrich, Eldon L; Kinjo, Akira R; Iwata, Takeshi; Ioannidis, Yannis E; Livny, Miron; Markley, John L; Nakamura, Haruki; Kojima, Chojiro; Fujiwara, Toshimichi
2016-05-05
The nuclear magnetic resonance (NMR) spectroscopic data for biological macromolecules archived at the BioMagResBank (BMRB) provide a rich resource of biophysical information at atomic resolution. The NMR data archived in NMR-STAR ASCII format have been implemented in a relational database. However, it is still fairly difficult for users to retrieve data from the NMR-STAR files or the relational database in association with data from other biological databases. To enhance the interoperability of the BMRB database, we present a full conversion of BMRB entries to two standard structured data formats, XML and RDF, as common open representations of the NMR-STAR data. Moreover, a SPARQL endpoint has been deployed. The described case study demonstrates that a simple query of the SPARQL endpoints of the BMRB, UniProt, and Online Mendelian Inheritance in Man (OMIM), can be used in NMR and structure-based analysis of proteins combined with information of single nucleotide polymorphisms (SNPs) and their phenotypes. We have developed BMRB/XML and BMRB/RDF and demonstrate their use in performing a federated SPARQL query linking the BMRB to other databases through standard semantic web technologies. This will facilitate data exchange across diverse information resources.
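The federated-query pattern described above might be expressed as follows with the SPARQLWrapper package; the endpoint URLs, classes, and predicates are assumptions for illustration only, not the actual BMRB or UniProt vocabularies.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
SELECT ?entry ?protein WHERE {
  ?entry a <http://example.org/bmrb/Entry> ;            # hypothetical class
         <http://example.org/bmrb/sequence> ?seq .
  SERVICE <https://sparql.uniprot.org/sparql> {         # assumed endpoint
    ?protein <http://example.org/links/sequence> ?seq . # hypothetical predicate
  }
}
LIMIT 5
"""

sparql = SPARQLWrapper("https://example.org/bmrb/sparql")  # placeholder endpoint
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
# results = sparql.query().convert()   # network call; uncomment to execute
```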
SPARQL Assist language-neutral query composer
2012-01-01
Background SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. Results We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. Conclusions To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources. PMID:22373327
SPARQL assist language-neutral query composer.
McCarthy, Luke; Vandervalk, Ben; Wilkinson, Mark
2012-01-25
SPARQL query composition is difficult for the lay-person, and even the experienced bioinformatician in cases where the data model is unfamiliar. Moreover, established best-practices and internationalization concerns dictate that the identifiers for ontological terms should be opaque rather than human-readable, which further complicates the task of synthesizing queries manually. We present SPARQL Assist: a Web application that addresses these issues by providing context-sensitive type-ahead completion during SPARQL query construction. Ontological terms are suggested using their multi-lingual labels and descriptions, leveraging existing support for internationalization and language-neutrality. Moreover, the system utilizes the semantics embedded in ontologies, and within the query itself, to help prioritize the most likely suggestions. To ensure success, the Semantic Web must be easily available to all users, regardless of locale, training, or preferred language. By enhancing support for internationalization, and moreover by simplifying the manual construction of SPARQL queries through the use of controlled-natural-language interfaces, we believe we have made some early steps towards simplifying access to Semantic Web resources.
Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim
2005-01-01
With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the extensible Markup Language (XML) that facilitates the integration in current information systems or coding software taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
Common Data Model for Neuroscience Data and Data Model Exchange
Gardner, Daniel; Knuth, Kevin H.; Abato, Michael; Erde, Steven M.; White, Thomas; DeBellis, Robert; Gardner, Esther P.
2001-01-01
Objective: Generalizing the data models underlying two prototype neurophysiology databases, the authors describe and propose the Common Data Model (CDM) as a framework for federating a broad spectrum of disparate neuroscience information resources. Design: Each component of the CDM derives from one of five superclasses—data, site, method, model, and reference—or from relations defined between them. A hierarchic attribute-value scheme for metadata enables interoperability with variable tree depth to serve specific intra- or broad inter-domain queries. To mediate data exchange between disparate systems, the authors propose a set of XML-derived schema for describing not only data sets but data models. These include biophysical description markup language (BDML), which mediates interoperability between data resources by providing a meta-description for the CDM. Results: The set of superclasses potentially spans data needs of contemporary neuroscience. Data elements abstracted from neurophysiology time series and histogram data represent data sets that differ in dimension and concordance. Site elements transcend neurons to describe subcellular compartments, circuits, regions, or slices; non-neuroanatomic sites include sequences to patients. Methods and models are highly domain-dependent. Conclusions: True federation of data resources requires explicit public description, in a metalanguage, of the contents, query methods, data formats, and data models of each data resource. Any data model that can be derived from the defined superclasses is potentially conformant and interoperability can be enabled by recognition of BDML-described compatibilities. Such metadescriptions can buffer technologic changes. PMID:11141510
NASA Technical Reports Server (NTRS)
Aspinall, David; Denney, Ewen; Lueth, Christoph
2012-01-01
We motivate and introduce a query language PrQL designed for inspecting machine representations of proofs. PrQL natively supports hiproofs which express proof structure using hierarchical nested labelled trees. The core language presented in this paper is locally structured (first-order), with queries built using recursion and patterns over proof structure and rule names. We define the syntax and semantics of locally structured queries, demonstrate their power, and sketch some implementation experiments.
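PrQL's concrete syntax is not reproduced in the abstract, but the underlying idea of recursive queries over hierarchical, labelled proof trees can be sketched directly; the tree shape and rule names below are invented.

```python
# A proof node is (rule_label, [children]); shape and names are invented.
tree = ("induction", [("base", [("refl", [])]),
                      ("step", [("rewrite", []), ("refl", [])])])

def count_rule(node, rule):
    """Recursive pattern over the tree: how often is a named rule applied?"""
    label, children = node
    return (label == rule) + sum(count_rule(c, rule) for c in children)

print(count_rule(tree, "refl"))   # 2
```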
Querying Proofs (Work in Progress)
NASA Technical Reports Server (NTRS)
Aspinall, David; Denney, Ewen; Lueth, Christoph
2011-01-01
We motivate and introduce the basis for a query language designed for inspecting electronic representations of proofs. We argue that there is much to learn from large proofs beyond their validity, and that a dedicated query language can provide a principled way of implementing a family of useful operations.
Saadawi, Gilan M; Harrison, James H
2006-10-01
Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G
2013-05-01
The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data.
Tao, Cui; Jiang, Guoqian; Oniki, Thomas A; Freimuth, Robert R; Zhu, Qian; Sharma, Deepak; Pathak, Jyotishman; Huff, Stanley M; Chute, Christopher G
2013-01-01
The clinical element model (CEM) is an information model designed for representing clinical information in electronic health records (EHR) systems across organizations. The current representation of CEMs does not support formal semantic definitions and therefore it is not possible to perform reasoning and consistency checking on derived models. This paper introduces our efforts to represent the CEM specification using the Web Ontology Language (OWL). The CEM-OWL representation connects the CEM content with the Semantic Web environment, which provides authoring, reasoning, and querying tools. This work may also facilitate the harmonization of the CEMs with domain knowledge represented in terminology models as well as other clinical information models such as the openEHR archetype model. We have created the CEM-OWL meta ontology based on the CEM specification. A convertor has been implemented in Java to automatically translate detailed CEMs from XML to OWL. A panel evaluation has been conducted, and the results show that the OWL modeling can faithfully represent the CEM specification and represent patient data. PMID:23268487
Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.
Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L
2007-01-01
CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CML elements used and gives practical examples for common types of spectra. In addition it demonstrates how different views of the data can be expressed and what problems still exist.
A Framework for Building and Reasoning with Adaptive and Interoperable PMESII Models
2007-11-01
Description Logic; SOA Service Oriented Architecture; SPARQL Simple Protocol And RDF Query Language; SQL Structured Query Language; SROM Stability and ...another by providing a more expressive ontological structure for one of the models, e.g., semantic networks can be mapped to first-order logical ...Pellet is an open-source reasoner that works with OWL-DL. It accepts the SPARQL Protocol and RDF Query Language (SPARQL) and provides a Java API to
NASA Technical Reports Server (NTRS)
Rice, J. Kevin
2013-01-01
The XTCE GOVSAT software suite contains three tools: validation, search, and reporting. The Extensible Markup Language (XML) Telemetric and Command Exchange (XTCE) GOVSAT Tool Suite is written in Java for manipulating XTCE XML files. XTCE is a Consultative Committee for Space Data Systems (CCSDS) and Object Management Group (OMG) specification for describing the format and information in telemetry and command packet streams. These descriptions are files that are used to configure real-time telemetry and command systems for mission operations. XTCE's purpose is to exchange database information between different systems. XTCE GOVSAT consists of rules for narrowing the use of XTCE for missions. The Validation Tool is used to syntax check GOVSAT XML files. The Search Tool is used to search the GOVSAT XML files (e.g., for command and telemetry mnemonics) and view the results. Finally, the Reporting Tool is used to create command and telemetry reports. These reports can be displayed or printed for use by the operations team.
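The Search Tool's behavior might resemble the following sketch, which scans an XTCE-style document for parameter mnemonics matching a pattern. The tiny document imitates XTCE structure but is not a complete, schema-valid (or namespace-qualified) XTCE file.

```python
import re
import xml.etree.ElementTree as ET

XTCE = """<SpaceSystem name="EXAMPLE-SAT">
  <TelemetryMetaData>
    <ParameterSet>
      <Parameter name="BATT_VOLTAGE" parameterTypeRef="float32"/>
      <Parameter name="BATT_CURRENT" parameterTypeRef="float32"/>
      <Parameter name="ACS_MODE" parameterTypeRef="enum8"/>
    </ParameterSet>
  </TelemetryMetaData>
</SpaceSystem>"""

# Report every parameter whose mnemonic matches the search pattern.
pattern = re.compile(r"^BATT_")
for p in ET.fromstring(XTCE).iter("Parameter"):
    if pattern.search(p.get("name", "")):
        print(p.get("name"), p.get("parameterTypeRef"))
```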
Standardization of XML Database Exchanges and the James Webb Space Telescope Experience
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.
2007-01-01
Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities, such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS), to assist in the definition of a common Extensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper will discuss standardization of the XML database exchange format, tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and government.
2009-04-01
information on user’s interests. In that case, the polarity takes the value of zero. Positive polarity examples: Query, Question/Assertion, cut/paste, chat ...Polarity Query (Keywords/Question/Assertion) 1 +1 cut/paste 0.9 +1 Selection from list 0.8 +1 Saving/printing 0.7 +1 Chat 0.6 +1 Reading doc/Web...3. logging all VIGEstimates (from UMS and IMS separately) and user snap shots as xml files for post‐process analysis As new InfoPacks come into the
Guo, Jinqiu; Takada, Akira; Tanaka, Koji; Sato, Junzo; Suzuki, Muneou; Takahashi, Kiwamu; Daimon, Hiroyuki; Suzuki, Toshiaki; Nakashima, Yusei; Araki, Kenji; Yoshihara, Hiroyuki
2005-08-01
With the evolving and diverse electronic medical record (EMR) systems, there appears to be an ever greater need to link EMR systems and patient accounting systems with a standardized data exchange format. To this end, the CLinical Accounting InforMation (CLAIM) data exchange standard was developed. CLAIM is subordinate to the Medical Markup Language (MML) standard, which allows the exchange of medical data among different medical institutions. CLAIM uses eXtensible Markup Language (XML) as a meta-language. The current version, 2.1, inherited the basic structure of MML 2.x and contains two modules including information related to registration, appointment, procedure and charging. CLAIM 2.1 was implemented successfully in Japan in 2001. Consequently, it was confirmed that CLAIM could be used as an effective data exchange format between EMR systems and patient accounting systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurst, Aaron M.
A data structure based on an eXtensible Markup Language (XML) hierarchy according to experimental nuclear structure data in the Evaluated Nuclear Structure Data File (ENSDF) is presented. A Python-coded translator has been developed to interpret the standard one-card records of the ENSDF datasets, together with their associated quantities defined according to field position, and generate corresponding representative XML output. The quantities belonging to this mixed-record format are described in the ENSDF manual. Of the 16 ENSDF records in total, XML output has been successfully generated for 15 records. An XML-translation for the Comment Record is yet to be implemented; this will be considered in a separate phase of the overall translation effort. Continuation records, not yet implemented, will also be treated in a future phase of this work. Several examples are presented in this document to illustrate the XML schema and methods for handling the various ENSDF data types. However, the proposed nomenclature for the XML elements and attributes need not necessarily be considered as a fixed set of constructs. Indeed, better conventions may be suggested and a consensus can be achieved amongst the various groups of people interested in this project. The main purpose here is to present an initial phase of the translation effort to demonstrate the feasibility of interpreting ENSDF datasets and creating a representative XML-structured hierarchy for data storage.
XML and its impact on content and structure in electronic health care documents.
Sokolowski, R.; Dudeck, J.
1999-01-01
Worldwide information networks have the requirement that electronic documents must be easily accessible, portable, flexible and system-independent. With the development of XML (eXtensible Markup Language), the future of electronic documents, health care informatics and the Web itself is about to change. The intent of the recently formed ASTM E31.25 subcommittee, "XML DTDs for Health Care", is to develop standard electronic document representations of paper-based health care documents and forms. A goal of the subcommittee is to work together to enhance existing levels of interoperability among the various XML/SGML standardization efforts, products and systems in health care. The ASTM E31.25 subcommittee uses common practices and software standards to develop the implementation recommendations for XML documents in health care. The implementation recommendations are being developed to standardize the many different structures of documents. These recommendations are in the form of a set of standard DTDs, or document type definitions, that match the electronic document requirements in the health care industry. This paper discusses recent efforts of the ASTM E31.25 subcommittee. PMID:10566338
An Experimental Investigation of Complexity in Database Query Formulation Tasks
ERIC Educational Resources Information Center
Casterella, Gretchen Irwin; Vijayasarathy, Leo
2013-01-01
Information Technology professionals and other knowledge workers rely on their ability to extract data from organizational databases to respond to business questions and support decision making. Structured query language (SQL) is the standard programming language for querying data in relational databases, and SQL skills are in high demand and are…
Personalising e-learning modules: targeting Rasmussen levels using XML.
Renard, J M; Leroy, S; Camus, H; Picavet, M; Beuscart, R
2003-01-01
The development of Internet technologies has made it possible to increase the number and the diversity of on-line resources for teachers and students. Initiatives like the French-speaking Virtual Medical University Project (UMVF) try to organise the access to these resources. But both teachers and students are working on a partly redundant subset of knowledge. From the analysis of some French courses we propose a model for knowledge organisation derived from Rasmussen's stepladder. In the context of decision-making, Rasmussen has identified skill-based, rule-based and knowledge-based levels for the mental process. In the medical context of problem-solving, we apply these three levels to the definition of three student levels: beginners, intermediate-level learners, and experts. Based on our model, we build a representation of the hierarchical structure of data using the XML language. We use the XSLT transformation language to filter relevant data according to student level and to propose an appropriate display on students' terminals. The model and the XML implementation we define help to design tools for building personalised e-learning modules.
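The level-based filtering step could be sketched as follows: course fragments tagged with a target level are kept or dropped for a given student. The markup is invented; the project's real hierarchy and XSLT stylesheets are not shown in the abstract.

```python
import xml.etree.ElementTree as ET

XML = """<module>
  <section level="beginner">Basic anatomy overview.</section>
  <section level="intermediate">Differential diagnosis rules.</section>
  <section level="expert">Open interpretation of atypical cases.</section>
</module>"""

ORDER = {"beginner": 0, "intermediate": 1, "expert": 2}

def personalise(xml_text, student_level):
    """Keep sections at or below the student's level, as an XSLT filter might."""
    root = ET.fromstring(xml_text)
    return [s.text for s in root.iter("section")
            if ORDER[s.get("level")] <= ORDER[student_level]]

print(personalise(XML, "intermediate"))
# ['Basic anatomy overview.', 'Differential diagnosis rules.']
```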
2004-08-01
components, and B2B/B2C aspects of those in a technical and economic snapshot. Talk number six discusses the trade-off between quality and cost, which ...web services have been defined. The fifth talk summarizes key aspects of XML (Extensible Markup Language), Web Services and their components, and B2B ...Internet is Run: A Worldwide Perspective 69 Christoph Pauls 5 XML, Web Services and B2C/B2B: A Technical and Economical Snapshot 87 Matthias Pitt 6
SIMS: addressing the problem of heterogeneity in databases
NASA Astrophysics Data System (ADS)
Arens, Yigal
1997-02-01
The heterogeneity of remotely accessible databases -- with respect to contents, query language, semantics, organization, etc. -- presents serious obstacles to convenient querying. The SIMS (single interface to multiple sources) system addresses this global integration problem. It does so by defining a single language for describing the domain about which information is stored in the databases and using this language as the query language. Each database to which SIMS is to provide access is modeled using this language. The model describes a database's contents, organization, and other relevant features. SIMS uses these models, together with a planning system drawing on techniques from artificial intelligence, to decompose a given user's high-level query into a series of queries against the databases and other data manipulation steps. The retrieval plan is constructed so as to minimize data movement over the network and maximize parallelism to increase execution speed. SIMS can recover from network failures during plan execution by obtaining data from alternate sources, when possible. SIMS has been demonstrated in the domains of medical informatics and logistics, using real databases.
Supporting temporal queries on clinical relational databases: the S-WATCH-QL language.
Combi, C.; Missora, L.; Pinciroli, F.
1996-01-01
Due to the ubiquitous and special nature of time, especially in clinical databases, there is a need for particular temporal data and operators. In this paper we describe S-WATCH-QL (Structured Watch Query Language), a temporal extension of SQL, the widespread query language based on the relational model. S-WATCH-QL extends the well-known SQL by the addition of: a) temporal data types that allow the storage of information with different levels of granularity; b) historical relations that can store together both instantaneous valid times and intervals; c) some temporal clauses, functions and predicates allowing the definition of complex temporal queries. PMID:8947722
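Plain SQL can show the kind of boilerplate a temporal extension like S-WATCH-QL hides behind dedicated clauses and predicates: the interval-overlap test below is written out by hand. The table layout is invented, and this is standard SQLite rather than S-WATCH-QL syntax.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE therapy (drug TEXT, start_day INT, end_day INT)")
db.executemany("INSERT INTO therapy VALUES (?, ?, ?)",
               [("aspirin", 1, 10), ("heparin", 12, 20)])
# The hand-written overlap test a temporal language would hide behind a
# predicate: interval [start_day, end_day] intersects the window [8, 15].
rows = db.execute("""SELECT drug FROM therapy
                     WHERE start_day <= 15 AND end_day >= 8""").fetchall()
print(rows)   # [('aspirin',), ('heparin',)]
```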
Networking observers and observatories with remote telescope markup language
NASA Astrophysics Data System (ADS)
Hessman, Frederic V.; Tuparev, Georg; Allan, Alasdair
2006-06-01
Remote Telescope Markup Language (RTML) is an XML-based protocol for the transport of the high-level description of a set of observations to be carried out on a remote, robotic or service telescope. We describe how RTML is being used in a wide variety of contexts: the transport of service and robotic observing requests in the Hands-On Universe (TM), ACP, eSTAR, and MONET networks; how RTML is easily combined with other XML protocols for more localized control of telescopes; RTML as a secondary observation report format for the IVOA's VOEvent protocol; the input format for a general-purpose observation simulator; and the observatory-independent means for carrying out request transactions for the international Heterogeneous Telescope Network (HTN).
Finding Relevant Data in a Sea of Languages
2016-04-26
full machine-translated text, unbiased word clouds, query-biased word clouds, and query-biased sentence ...and information retrieval to automate language processing tasks so that the limited number of linguists available for analyzing text and spoken ...the crime (stock market). The Cross-LAnguage Search Engine (CLASE) has already preprocessed the documents, extracting text to identify the language
The Implications of Well-Formedness on Web-Based Educational Resources.
ERIC Educational Resources Information Center
Mohler, James L.
Within all institutions, Web developers are beginning to utilize technologies that make sites more than static information resources. XML (Extensible Markup Language) and XSL (Extensible Stylesheet Language) are key technologies that promise to extend the Web beyond the "information storehouse" paradigm and provide…
NASA Technical Reports Server (NTRS)
Jackson, Bruce
2006-01-01
DAVEtools is a set of Java archives that embodies tools for manipulating flight-dynamics models that have been encoded in dynamic aerospace vehicle exchange markup language (DAVE-ML). [DAVE-ML is an application of the Extensible Markup Language (XML) for encoding complete computational models of the dynamics of aircraft and spacecraft.]
An Introduction to the Resource Description Framework.
ERIC Educational Resources Information Center
Miller, Eric
1998-01-01
Explains the Resource Description Framework (RDF), an infrastructure developed under the World Wide Web Consortium that enables the encoding, exchange, and reuse of structured metadata. It is an application of the Extensible Markup Language (XML), which is a subset of the Standard Generalized Markup Language (SGML), and helps with expressing semantics.…
Report of Official Foreign Travel to Germany, May 16-June 1, 2001
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. D. Mason
2001-06-18
The Department of Energy (DOE) and associated agencies have moved rapidly toward electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Like most government agencies, DOE has expressed a preference for openly developed standards over proprietary designs promoted as "standards" by vendors. In particular, there is a preference for standards developed by organizations such as the International Organization for Standardization (ISO) and the American National Standards Institute (ANSI) that use open, public processes to develop their standards. Among the most widely adopted international standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), to which DOE long ago made a commitment. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML: Most documents on the WWW are coded in HTML (Hypertext Markup Language), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Adobe, IBM, Microsoft, Netscape, Oracle, and Sun, is promoting XML (eXtensible Markup Language), a class of SGML applications, for the future of the WWW and the basis for EC. In support of DOE's use of these standards, I have served since 1985 as Chairman of the international committee responsible for SGML and related standards, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations. During my May 2001 trip, I chaired the spring 2001 meeting of SC34 in Berlin, Germany. I also attended XML Europe 2001, a major conference on the use of SGML and XML sponsored by the Graphic Communications Association (GCA), and chaired a meeting of the International SGML/XML Users' Group (ISUG). In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there have been several past and present SGML- and XML-based projects at the Y-12 National Security Complex (Y-12). Our local project team has done SGML and XML development at Y-12 and Oak Ridge National Laboratory (ORNL) since the late 1980s. SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at Y-12 and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). The "Ferret" system for automated classification analysis uses XML to structure its knowledge base. The Ferret team also provides XML consulting to OSTI and DOE Headquarters, particularly the National Nuclear Security Administration (NNSA). Supporting standards development allows DOE and Y-12 the opportunity both to provide input into the process and to benefit from contact with some of the leading experts in the subject matter. Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML, XML, and related topics.
DBPQL: A view-oriented query language for the Intel Data Base Processor
NASA Technical Reports Server (NTRS)
Fishwick, P. A.
1983-01-01
An interactive query language (DBPQL) for the Intel Data Base Processor (DBP) is defined. DBPQL includes a parser generator package which permits the analyst to easily create and manipulate the query statement syntax and semantics. The prototype language, DBPQL, includes trace and performance commands to aid the analyst when implementing new commands and analyzing the execution characteristics of the DBP. The DBPQL grammar file and associated key procedures are included as an appendix to this report.
Time series patterns and language support in DBMS
NASA Astrophysics Data System (ADS)
Telnarova, Zdenka
2017-07-01
This contribution focuses on the pattern type Time Series as a semantically rich representation of data. Some examples of the implementation of this pattern type in traditional database management systems are briefly presented. There are many approaches to manipulating and querying patterns. A crucial issue is a systematic approach to pattern management and a specific pattern query language that takes the semantics of patterns into consideration. The query language SQL-TS for manipulating patterns is demonstrated on time series data.
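The abstract does not reproduce SQL-TS syntax, so the following minimal Python sketch only illustrates the kind of sequential pattern such a query expresses; the quote data and the "dip" pattern (fall then rise) are hypothetical.

```python
# Minimal sketch of a sequential "pattern over a time series" query, in the
# spirit of SQL-TS pattern matching (hypothetical data and pattern).
quotes = [10.0, 9.5, 9.1, 9.4, 10.2, 10.1, 9.8]

def find_dips(series):
    """Yield index triples (i, j, k) where the series falls then rises:
    a minimal 'V' pattern analogous to an SQL-TS sequence (X, Y, Z)
    with X.price > Y.price and Y.price < Z.price."""
    for i in range(len(series) - 2):
        x, y, z = series[i], series[i + 1], series[i + 2]
        if x > y and y < z:
            yield (i, i + 1, i + 2)

print(list(find_dips(quotes)))  # [(1, 2, 3)] for the data above
```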
A New Framework for Textual Information Mining over Parse Trees. CRESST Report 805
ERIC Educational Resources Information Center
Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.
2011-01-01
Textual information mining is a challenging problem that has resulted in the creation of many different rule-based linguistic query languages. However, these languages generally are not optimized for the purpose of text mining. In other words, they usually consider queries as individuals and only return raw results for each query. Moreover they…
Graphical modeling and query language for hospitals.
Barzdins, Janis; Barzdins, Juris; Rencis, Edgars; Sostaks, Agris
2013-01-01
So far there has been little evidence that implementation of health information technologies (HIT) is leading to health care cost savings. One of the reasons for this lack of impact by HIT likely lies in the complexity of business process ownership in hospitals. The goal of our research is to develop a business model-based method for hospital use which would allow doctors to retrieve ad-hoc information directly from various hospital databases. We have developed a special domain-specific process modelling language called MedMod. Formally, we define the MedMod language as a profile on UML Class diagrams, but we also demonstrate it on examples, where we explain the semantics of all its elements informally. Moreover, we have developed the Process Query Language (PQL), which is based on the MedMod process definition language. The purpose of PQL is to allow a doctor to query (filter) the runtime data of hospital processes described using MedMod. The MedMod language tries to overcome deficiencies in existing process modeling languages by allowing specification of the loosely defined sequence of steps to be performed in the clinical process. The main advantages of PQL lie in two areas, usability and efficiency: 1) the view on data through the "glasses" of a familiar process; 2) simple and easy-to-perceive means of setting filtering conditions that require no more expertise than using spreadsheet applications; 3) a dynamic response to each step in the construction of the complete query, which shortens the learning curve greatly and reduces the error rate; and 4) the selected means of filtering and data retrieval allow queries to be executed in O(n) time with respect to the size of the dataset. We plan to continue this project with three further steps. First, we are planning to develop user-friendly graphical editors for the MedMod process modeling and query languages. The second step is to evaluate the usability of the proposed language and tool with physicians from several hospitals in Latvia, working with real data from these hospitals. Our third step is to develop an efficient implementation of the query language.
NASA Astrophysics Data System (ADS)
Arenas, Marcelo; Gutierrez, Claudio; Pérez, Jorge
The goal of this paper is to give an overview of the basics of the theory of RDF databases. We provide a formal definition of RDF that includes the features that distinguish this model from other graph data models. We then move into the fundamental issue of querying RDF data. We start by considering the RDF query language SPARQL, which has been a W3C Recommendation since January 2008. We provide an algebraic syntax and a compositional semantics for this language, study the complexity of the evaluation problem for different fragments of SPARQL, and consider the problem of optimizing the evaluation of SPARQL queries, showing that a natural fragment of this language has some good properties in this respect. We furthermore study the expressive power of SPARQL by comparing it with some well-known query languages such as relational algebra. We conclude by considering the issue of querying RDF data in the presence of RDFS vocabulary. In particular, we present a recently proposed extension of SPARQL with navigational capabilities.
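As a concrete illustration of the SPARQL evaluation problem discussed above, the following sketch runs a small conjunctive query over an in-memory RDF graph using the Python rdflib package; the data and query are hypothetical examples, not taken from the paper.

```python
# Evaluate a simple SPARQL basic graph pattern over an in-memory RDF graph
# (hypothetical data; requires the rdflib package).
from rdflib import Graph

turtle_data = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
"""

g = Graph()
g.parse(data=turtle_data, format="turtle")

# A conjunctive (AND) pattern: whom does someone that Alice knows, know?
query = """
PREFIX ex: <http://example.org/>
SELECT ?friend_of_friend WHERE {
    ex:alice ex:knows ?friend .
    ?friend  ex:knows ?friend_of_friend .
}
"""
for row in g.query(query):
    print(row.friend_of_friend)  # http://example.org/carol
```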
Comparing Emerging XML Based Formats from a Multi-discipline Perspective
NASA Astrophysics Data System (ADS)
Sawyer, D. M.; Reich, L. I.; Nikhinson, S.
2002-12-01
This paper analyzes the similarities and differences among several examples of an emerging generation of Scientific Data Formats that are based on XML technologies. Some of the factors evaluated include the goals of these efforts, the data models and XML technologies used, and the maturity of currently available software. This paper then investigates the practicality of developing a single set of structural data objects and basic scientific concepts, such as units, that could be used across discipline boundaries and extended by disciplines and missions to create Scientific Data Formats for their communities. This analysis is partly based on an effort sponsored by the ESDIS office at GSFC to compare the Earth Science Markup Language (ESML) and the eXtensible Data Format (XDF), two members of this new generation of XML-based Data Description Languages that have been developed by NASA-funded efforts in recent years. This paper adds FITSML and potentially CDFML to the list of XML-based Scientific Data Formats discussed. This paper draws heavily on a Formats Evolution Process Committee (http://ssdoo.gsfc.nasa.gov/nost/fep/) draft white paper primarily developed by Lou Reich, Mike Folk and Don Sawyer to assist the Space Science community in understanding Scientific Data Formats. One of the primary conclusions of that paper is that a scientific data format object model should be examined along two basic axes. The first is the complexity of the computer/mathematical data types supported, and the second is the level of scientific domain specialization incorporated. This paper also discusses several of the issues that affect the decision on whether to implement a discipline- or project-specific Scientific Data Format as a formal extension of a general-purpose Scientific Data Format or to implement the APIs independently.
CliniProteus: A flexible clinical trials information management system
Mathura, Venkatarajan S; Rangareddy, Mahendiranath; Gupta, Pankaj; Mullan, Michael
2007-01-01
Clinical trials involve multi-site heterogeneous data generation with complex data input formats and forms. The data should be captured and queried in an integrated fashion to facilitate further analysis. Electronic case-report forms (eCRFs) are gaining popularity since they allow capture of clinical information in a rapid manner. We have designed and developed an XML-based flexible clinical trials data management framework in the .NET environment that can be used for the efficient design and deployment of eCRFs to collate data and analyze information from multi-site clinical trials. The main components of our system include an XML form designer, a patient registration eForm, reusable eForms, multiple-visit data capture, and consolidated reports. A unique id is used for tracking the trial, the site of occurrence, the patient, and the year of recruitment. Availability: http://www.rfdn.org/bioinfo/CTMS/ctms.html. PMID:21670796
Bioinformatics data distribution and integration via Web Services and XML.
Li, Xiao; Zhang, Yizheng
2003-11-01
It is widely recognized that the exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
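As a minimal sketch of XML as an exchange format, assuming only Python's standard library and hypothetical element names, a record can be serialized by one system and re-parsed by another:

```python
# Round-trip a small biological record through XML using only the
# standard library (element names are hypothetical).
import xml.etree.ElementTree as ET

record = ET.Element("gene", attrib={"id": "BRCA1"})
ET.SubElement(record, "organism").text = "Homo sapiens"
ET.SubElement(record, "chromosome").text = "17"

xml_bytes = ET.tostring(record, encoding="utf-8")  # the data to exchange

# A receiving system parses the same bytes back into a tree.
parsed = ET.fromstring(xml_bytes)
print(parsed.get("id"), parsed.findtext("organism"))  # BRCA1 Homo sapiens
```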
QuakeML - An XML Schema for Seismology
NASA Astrophysics Data System (ADS)
Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.
2004-12-01
We propose an extensible format definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance, and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should in our opinion be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.
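The QuakeML schema itself is not reproduced in this abstract; the sketch below parses a simplified, QuakeML-like event record whose element names are illustrative only, not the official schema.

```python
# Parse a simplified, QuakeML-like event record (element names here are
# illustrative only, not the official QuakeML schema).
import xml.etree.ElementTree as ET

doc = """
<eventParameters>
  <event>
    <origin><time>2004-11-02T10:15:00Z</time><depth>8000</depth></origin>
    <magnitude><value>4.3</value></magnitude>
  </event>
</eventParameters>
"""

root = ET.fromstring(doc)
for event in root.findall("event"):
    print(event.findtext("origin/time"), event.findtext("magnitude/value"))
```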
Use of standard vocabulary services in validation of water resources data
NASA Astrophysics Data System (ADS)
Yu, Jonathan; Cox, Simon; Ratcliffe, David
2010-05-01
Ontology repositories are increasingly being exposed through vocabulary and concept services. Primarily this is in support of resource discovery. Thesaurus functionality and even more sophisticated reasoning offer the possibility of overcoming the limitations of the simple text-matching and tagging that are the basis of most search. However, controlled vocabularies have other important roles in distributed systems, in particular in constraining content validity. A national water information system established by the Australian Bureau of Meteorology ('the Bureau') has deployed a system for ingestion of data from multiple providers. This uses an HTTP interface onto separately maintained vocabulary services as part of the quality assurance chain. With over 200 data providers potentially transferring data to the Bureau, a standard XML-based Water Data Transfer Format (WDTF) was developed for receipt of data into an integrated national water information system. The WDTF schema was built upon standards from the Open Geospatial Consortium (OGC). The structure and syntax specified by a W3C XML Schema are complemented by additional constraints described using Schematron. These implement important content requirements and business rules, including: • Restricted cardinality: optional elements and attributes inherited from the base standards become mandatory in the application, or repeatable elements or attributes are limited to one or omitted. For example, the sampledFeature element from O&M is optional but is mandatory for a samplingPoint element in WDTF. • Vocabulary checking: WDTF data use seventeen vocabularies or code lists derived from Regulations under the Commonwealth Water Act 2007. Examples of code lists are the Australian Water Regulations list, the observed property vocabulary, and units of measure. • Contextual constraints: in many places, the permissible value depends on the value of another field. For example, within observations the unit of measure must be commensurate with the observed property type. Validation of data submitted in WDTF uses a two-pass approach. First, syntax and structural validation is performed by standard XML Schema validation tools. Second, validation of contextual constraints and code-list checking is performed using a hybrid method combining context-sensitive rule-based validation (allowing the rules to be expressed within a given context) and semantic vocabulary services. Schematron allows rules to incorporate assertions of XPath expressions to access and constrain element content, thereby enabling contextual constraints. Schematron is also used to perform element cardinality checking. The vocabularies or code lists are formalized in SKOS (Simple Knowledge Organization System), an RDF-based language. SKOS provides mechanisms to define concepts, associate them with (multi-lingual) labels or terms, and record thesaurus-like relationships between them. The vocabularies are managed in an RDF database, or semantic triple store. Querying is implemented as a semantic vocabulary service, with an HTTP-based API that allows queries to be issued from rules written in Schematron. WDTF has required the development and deployment of some ontologies whose scope is much more general than this application, in particular covering 'observed properties' and 'units of measure', which also have to be related to each other and consistent with dimensional analysis.
Separation of the two validation passes reflects the separate governance and stability of the structural and content rules, and allows an organisation's business rules to be moved out of the XML schema definition so that the XML schema can be reused by other businesses with their own specific rules. With the general approach proven, harmonization opportunities with more generic services are being explored, such as the GEMET API for SKOS developed by the European Environment Agency. Acknowledgements: The authors would like to thank the AUSCOPE team for their development and support of the vocabulary services.
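As a hedged sketch of the second (Schematron) validation pass, the following Python code, using the lxml package, checks a contextual business rule of the kind described above; the rule and document are hypothetical, not taken from WDTF.

```python
# Contextual (business-rule) validation with Schematron: a sketch of the
# second validation pass (hypothetical rule and document; requires lxml).
from lxml import etree, isoschematron

schematron_doc = etree.fromstring(b"""
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <rule context="observation[@property='streamflow']">
      <assert test="@uom='m3/s'">
        Streamflow observations must use cubic metres per second.
      </assert>
    </rule>
  </pattern>
</schema>
""")

validator = isoschematron.Schematron(schematron_doc)

good = etree.fromstring(b"<observation property='streamflow' uom='m3/s'/>")
bad = etree.fromstring(b"<observation property='streamflow' uom='mm'/>")
print(validator.validate(good))  # True
print(validator.validate(bad))   # False
```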
2012-09-01
Thesis Advisor: Mikhail Auguston; Second Reader: Terry Norbraten. MPGrapher compiles well-formed XML files that conform to the yEd GraphML (Graph Markup Language) schema. These files will be opened and analyzed using…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodall, John; Iannacone, Mike; Athalye, Anish
2013-08-01
Morph is a framework and domain-specific language (DSL) that helps parse and transform structured documents. It currently supports several file formats including XML, JSON, and CSV, and custom formats are usable as well.
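Morph's DSL syntax is not shown in the abstract; purely as an illustration of the parse-and-transform task it addresses, here is a plain-Python sketch converting a CSV document to JSON.

```python
# Generic parse-and-transform sketch (CSV in, JSON out); this is plain
# Python, not the Morph DSL, whose syntax the abstract does not show.
import csv, io, json

csv_text = "id,name\n1,alpha\n2,beta\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(json.dumps(rows, indent=2))
```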
ExplorEnz: the primary source of the IUBMB enzyme list
McDonald, Andrew G.; Boyce, Sinéad; Tipton, Keith F.
2009-01-01
ExplorEnz is the MySQL database that is used for the curation and dissemination of the International Union of Biochemistry and Molecular Biology (IUBMB) Enzyme Nomenclature. A simple web-based query interface is provided, along with an advanced search engine for more complex Boolean queries. The WWW front-end is accessible at http://www.enzyme-database.org, from where downloads of the database as SQL and XML are also available. An associated form-based curatorial application has been developed to facilitate the curation of enzyme data as well as the internal and public review processes that occur before an enzyme entry is made official. Suggestions for new enzyme entries, or modifications to existing ones, can be made using the forms provided at http://www.enzyme-database.org/forms.php. PMID:18776214
Vella, Michael; Cannon, Robert C.; Crook, Sharon; Davison, Andrew P.; Ganapathy, Gautham; Robinson, Hugh P. C.; Silver, R. Angus; Gleeson, Padraig
2014-01-01
NeuroML is an XML-based model description language, which provides a powerful common data format for defining and exchanging models of neurons and neuronal networks. In the latest version of NeuroML, the structure and behavior of ion channel, synapse, cell, and network model descriptions are based on underlying definitions provided in LEMS, a domain-independent language for expressing hierarchical mathematical models of physical entities. While declarative approaches for describing models have led to greater exchange of model elements among software tools in computational neuroscience, a frequent criticism of XML-based languages is that they are difficult to work with directly. Here we describe two Application Programming Interfaces (APIs) written in Python (http://www.python.org), which simplify the process of developing and modifying models expressed in NeuroML and LEMS. The libNeuroML API provides a Python object model with a direct mapping to all NeuroML concepts defined by the NeuroML Schema, which facilitates reading and writing the XML equivalents. In addition, it offers a memory-efficient, array-based internal representation, which is useful for handling large-scale connectomics data. The libNeuroML API also includes support for performing common operations that are required when working with NeuroML documents. Access to the LEMS data model is provided by the PyLEMS API, which provides a Python implementation of the LEMS language, including the ability to simulate most models expressed in LEMS. Together, libNeuroML and PyLEMS provide a comprehensive solution for interacting with NeuroML models in a Python environment. PMID:24795618
SkyQuery - A Prototype Distributed Query and Cross-Matching Web Service for the Virtual Observatory
NASA Astrophysics Data System (ADS)
Thakar, A. R.; Budavari, T.; Malik, T.; Szalay, A. S.; Fekete, G.; Nieto-Santisteban, M.; Haridas, V.; Gray, J.
2002-12-01
We have developed a prototype distributed query and cross-matching service for the VO community, called SkyQuery, which is implemented with hierarchical Web Services. SkyQuery enables astronomers to run combined queries on existing distributed heterogeneous astronomy archives. SkyQuery provides a simple, user-friendly interface to run distributed queries over the federation of registered astronomical archives in the VO. The SkyQuery client connects to the portal Web Service, which farms the query out to the individual archives, which are also Web Services called SkyNodes. The cross-matching algorithm is run recursively on each SkyNode. Each archive is a relational DBMS with an HTM index for fast spatial lookups. The results of the distributed query are returned as an XML DataSet that is automatically rendered by the client. SkyQuery also returns the image cutout corresponding to the query result. SkyQuery finds not only matches between the various catalogs, but also dropouts - objects that exist in some of the catalogs but not in others. This is often as important as finding matches. We demonstrate the utility of SkyQuery with a brown-dwarf search between SDSS and 2MASS, and a search for radio-quiet quasars in SDSS, 2MASS and FIRST. The importance of a service like SkyQuery for the worldwide astronomical community cannot be overstated: data on the same objects in various archives are mapped in different wavelength ranges and look very different due to different errors, instrument sensitivities and other peculiarities of each archive. Our cross-matching algorithm performs a fuzzy spatial join across multiple catalogs. This type of cross-matching is currently often done by eye, one object at a time. A static cross-identification table for a set of archives would become obsolete by the time it was built - the exponential growth of astronomical data means that a dynamic cross-identification mechanism like SkyQuery is the only viable option. SkyQuery was funded by a grant from the NASA AISR program.
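As a toy sketch of the fuzzy spatial join idea, assuming planar coordinates and a hypothetical tolerance (a real service would use spherical geometry and a spatial index such as HTM, not brute force), matches and dropouts can be computed as follows:

```python
# Toy fuzzy spatial join between two catalogs: objects match when their
# positions agree within a tolerance. A real service would use spherical
# geometry and a spatial index (e.g. HTM), not brute force.
import math

catalog_a = {"a1": (10.001, 41.269), "a2": (150.1, 2.2)}
catalog_b = {"b1": (10.002, 41.270), "b2": (350.0, -5.0)}
TOL_DEG = 0.01  # hypothetical matching tolerance, in degrees

matches, dropouts = [], []
for name_a, (ra_a, dec_a) in catalog_a.items():
    found = [nb for nb, (ra_b, dec_b) in catalog_b.items()
             if math.hypot(ra_a - ra_b, dec_a - dec_b) < TOL_DEG]
    if found:
        matches.append((name_a, found))
    else:
        dropouts.append(name_a)  # in catalog A but not in B

print(matches)   # [('a1', ['b1'])]
print(dropouts)  # ['a2']
```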
Developing a Markup Language for Encoding Graphic Content in Plan Documents
ERIC Educational Resources Information Center
Li, Jinghuan
2009-01-01
While deliberating and making decisions, participants in urban development processes need easy access to the pertinent content scattered among different plans. A Planning Markup Language (PML) has been proposed to represent the underlying structure of plans in an XML-compliant way. However, PML currently covers only textual information and lacks…
Overview of the World Wide Web Consortium (W3C) (SIGs IA, USE).
ERIC Educational Resources Information Center
Daly, Janet
2000-01-01
Provides an overview of a planned session to describe the work of the World Wide Web Consortium, including technical specifications for HTML (Hypertext Markup Language), XML (Extensible Markup Language), CSS (Cascading Style Sheets), and over 20 other Web standards that address graphics, multimedia, privacy, metadata, and other technologies. (LRW)
Simple proteomics data analysis in the object-oriented PowerShell.
Mohammed, Yassene; Palmblad, Magnus
2013-01-01
Scripting languages such as Perl and Python are appreciated for solving simple, everyday tasks in bioinformatics. A more recent, object-oriented command shell and scripting language, Windows PowerShell, has many attractive features: an object-oriented interactive command line, fluent navigation and manipulation of XML files, ability to consume Web services from the command line, consistent syntax and grammar, rich regular expressions, and advanced output formatting. The key difference between classical command shells and scripting languages, such as bash, and object-oriented ones, such as PowerShell, is that in the latter the result of a command is a structured object with inherited properties and methods rather than a simple stream of characters. Conveniently, PowerShell is included in all new releases of Microsoft Windows and therefore already installed on most computers in classrooms and teaching labs. In this chapter we demonstrate how PowerShell in particular allows easy interaction with mass spectrometry data in XML formats, connection to Web services for tools such as BLAST, and presentation of results as formatted text or graphics. These features make PowerShell much more than "yet another scripting language."
Web Services and Other Enhancements at the Northern California Earthquake Data Center
NASA Astrophysics Data System (ADS)
Neuhauser, D. S.; Zuzlewski, S.; Allen, R. M.
2012-12-01
The Northern California Earthquake Data Center (NCEDC) provides data archive and distribution services for seismological and geophysical data sets that encompass northern California. The NCEDC is enhancing its ability to deliver rapid information through Web Services. NCEDC Web Services use well-established web server and client protocols and REST software architecture to allow users to easily make queries using web browsers or simple program interfaces and to receive the requested data in real-time rather than through batch or email-based requests. Data are returned to the user in the appropriate format such as XML, RESP, or MiniSEED depending on the service, and are compatible with the equivalent IRIS DMC web services. The NCEDC is currently providing the following Web Services: (1) Station inventory and channel response information delivered in StationXML format, (2) Channel response information delivered in RESP format, (3) Time series availability delivered in text and XML formats, (4) Single channel and bulk data request delivered in MiniSEED format. The NCEDC is also developing a rich Earthquake Catalog Web Service to allow users to query earthquake catalogs based on selection parameters such as time, location or geographic region, magnitude, depth, azimuthal gap, and rms. It will return (in QuakeML format) user-specified results that can include simple earthquake parameters, as well as observations such as phase arrivals, codas, amplitudes, and computed parameters such as first motion mechanisms, moment tensors, and rupture length. The NCEDC will work with both IRIS and the International Federation of Digital Seismograph Networks (FDSN) to define a uniform set of web service specifications that can be implemented by multiple data centers to provide users with a common data interface across data centers. The NCEDC now hosts earthquake catalogs and waveforms from the US Department of Energy (DOE) Enhanced Geothermal Systems (EGS) monitoring networks. These data can be accessed through the above web services and through special NCEDC web pages.
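A minimal sketch of querying such a web service is shown below, using the Python requests package; the endpoint URL and parameters are assumptions for illustration, so the data center's documentation should be consulted for the actual interface.

```python
# Sketch of querying an FDSN-style station web service for StationXML.
# The endpoint URL and parameters below are assumptions for illustration;
# consult the data center's documentation for the actual interface.
import requests

BASE = "https://service.ncedc.org/fdsnws/station/1/query"  # assumed endpoint
params = {"network": "BK", "level": "channel", "format": "xml"}

response = requests.get(BASE, params=params, timeout=30)
response.raise_for_status()
print(response.text[:200])  # beginning of the StationXML document
```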
Recommender System for Learning SQL Using Hints
ERIC Educational Resources Information Center
Lavbic, Dejan; Matek, Tadej; Zrnec, Aljaž
2017-01-01
Today's software industry requires individuals who are proficient in as many programming languages as possible. Structured query language (SQL), as an adopted standard, is no exception, as it is the most widely used query language to retrieve and manipulate data. However, the process of learning SQL turns out to be challenging. The need for a…
Chemical markup, XML and the World-Wide Web. 3. Toward a signed semantic chemical web of trust.
Gkoutos, G V; Murray-Rust, P; Rzepa, H S; Wright, M
2001-01-01
We describe how a collection of documents expressed in XML-conforming languages such as CML and XHTML can be authenticated and validated against digital signatures which make use of established X.509 certificate technology. These can be associated either with specific nodes in the XML document or with the entire document. We illustrate this with two examples. An entire journal article expressed in XML has its individual components digitally signed by separate authors, and the collection is placed in an envelope and again signed. The second example involves using a software robot agent to acquire a collection of documents from a specified URL, to perform various operations and transformations on the content, including expressing molecules in CML, and to automatically sign the various components and deposit the result in a repository. We argue that these operations can be used as components for building what we term an authenticated and semantic chemical web of trust.
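Full XML digital signatures canonicalize the XML, embed the signature in the document, and bind it to an X.509 certificate; as a greatly simplified sketch of the underlying sign-and-verify operation, the following code signs the bytes of an XML fragment with an RSA key using the Python cryptography package.

```python
# Greatly simplified sketch: sign the bytes of an XML document with RSA.
# Real XML-DSig canonicalizes the XML, embeds the signature in the document,
# and binds it to an X.509 certificate; requires the cryptography package.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

xml_bytes = b"<molecule id='m1'><formula>H2O</formula></molecule>"

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = key.sign(xml_bytes, padding.PKCS1v15(), hashes.SHA256())

# Verification raises InvalidSignature if the document was altered.
key.public_key().verify(signature, xml_bytes, padding.PKCS1v15(),
                        hashes.SHA256())
print("signature verified")
```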
Ontological Approach to Military Knowledge Modeling and Management
2004-03-01
A federated search mechanism has to reformulate user queries (expressed using the ontology) into the query languages of the different sources (e.g., SQL). Key elements include ontologies as a common terminology, a unified query to perform federated search, and query processing in which ontology mappings to the sources are used to reformulate queries.
de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D
2004-03-01
Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements, our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the eXtensible Markup Language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have implemented a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods, and data representing and visualizing the various services of the system. We have found that the object-oriented approach possible with the Java programming environment, combined with XML technology, is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.
Flight Dynamic Model Exchange using XML
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Hildreth, Bruce L.
2002-01-01
The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.
Spreadsheets for Analyzing and Optimizing Space Missions
NASA Technical Reports Server (NTRS)
Some, Raphael R.; Agrawal, Anil K.; Czikmantory, Akos J.; Weisbin, Charles R.; Hua, Hook; Neff, Jon M.; Cowdin, Mark A.; Lewis, Brian S.; Iroz, Juana; Ross, Rick
2009-01-01
XCALIBR (XML Capability Analysis LIBRary) is a set of Extensible Markup Language (XML) database and spreadsheet-based analysis software tools designed to assist in technology-return-on-investment analysis and optimization of technology portfolios pertaining to outer-space missions. XCALIBR is also being examined for use in planning, tracking, and documentation of projects. An XCALIBR database contains information on mission requirements and technological capabilities, which are related by use of an XML taxonomy. XCALIBR incorporates a standardized interface for exporting data and analysis templates to an Excel spreadsheet. Unique features of XCALIBR include the following: It is inherently hierarchical by virtue of its XML basis. The XML taxonomy codifies a comprehensive data structure and data dictionary that includes performance metrics for spacecraft, sensors, and spacecraft systems other than sensors. The taxonomy contains >700 nodes representing all levels, from system through subsystem to individual parts. All entries are searchable and machine readable. There is an intuitive Web-based user interface. The software automatically matches technologies to mission requirements. The software automatically generates, and makes the required entries in, an Excel return-on-investment analysis software tool. The results of an analysis are presented in both tabular and graphical displays.
Andrenucci, Andrea
2016-01-01
Few studies have been performed within cross-language information retrieval (CLIR) in the field of psychology and psychotherapy. The aim of this paper is to analyze and assess the quality of available query translation methods for CLIR on a health portal for psychology. A test base of 100 user queries, 50 Multi Word Units (WUs) and 50 Single WUs, was used. Swedish was the source language and English the target language. Query translation methods based on machine translation (MT) and dictionary look-up were utilized in order to submit query translations to two search engines: Google Site Search and Quick Ask. Standard IR evaluation measures and a qualitative analysis were utilized to assess the results. The lexicon extracted with word alignment of the portal's parallel corpus provided better statistical results among dictionary look-ups. Google Translate provided more linguistically correct translations overall and also delivered better retrieval results in MT.
Hoelzer, S; Schweiger, R K; Boettcher, H A; Tafazzoli, A G; Dudeck, J
2001-01-01
The purpose of guidelines in clinical practice is to improve the effectiveness and efficiency of clinical care. It is known that nationally or internationally produced guidelines which, in particular, do not involve medical processes at the time of consultation, do not take local factors into account, and have no consistent implementation strategy, have limited impact in changing either the behaviour of physicians or patterns of care. The literature provides evidence for the effectiveness of computerization of CPGs for increasing compliance and improving patient outcomes. Probably the most effective concepts are knowledge-based functions for decision support or monitoring that are integrated in clinical information systems. This approach is mostly restricted by the effort required for development and maintenance of the information systems and the limited number of implemented medical rules. Most of the guidelines are text-based, and are primarily published in medical journals and posted on the internet. However, internet-published guidelines have little impact on the behaviour of physicians. It can be difficult and time-consuming to browse the internet to find (a) the correct guideline for an existing diagnosis and (b) an adequate recommendation for a specific clinical problem. Our objective is to provide a web-based guideline service that takes as input clinical data on a particular patient and returns as output a customizable set of recommendations regarding diagnosis and treatment. Information in healthcare is to a very large extent transmitted and stored as unstructured or slightly structured text such as discharge letters, reports, forms, etc. The same applies to facilities containing medical information resources for clinical purposes and research such as textbooks, articles, guidelines, etc. Physicians are used to obtaining information from text-based sources. Since most guidelines are text-based, it would be practical to use a document-based solution that preserves the original cohesiveness. The lack of structure limits the automatic identification and extraction of the information contained in these resources. For this reason, we have chosen a document-based approach using the eXtensible Markup Language (XML) with its schema definition and related technologies. XML enables applications to perform in-context searching. In addition, it allows the same content to be represented in different ways. Our XML reference clinical data model for guidelines has been realized with the XML schema definition. The schema is used for structuring new text-based guidelines and updating existing documents. It is also used to establish search strategies on the document base. We hypothesize that enabling physicians to query the available CPGs easily, and to get access to selected and specific information at the point of care, will foster increased use. Based on current evidence we are confident that it will have a substantial impact on the care provided, and will improve health outcomes.
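As a minimal sketch of the in-context searching described above, assuming hypothetical element names for a guideline structured in XML, an XPath-style query can select recommendations by their position in the document rather than by free-text matching:

```python
# In-context search over an XML-structured guideline (element names are
# hypothetical); XPath-style predicates select recommendations by context.
import xml.etree.ElementTree as ET

guideline = ET.fromstring("""
<guideline topic="community-acquired pneumonia">
  <section name="diagnosis">
    <recommendation grade="A">Obtain a chest radiograph.</recommendation>
  </section>
  <section name="treatment">
    <recommendation grade="B">Start empiric antibiotics.</recommendation>
  </section>
</guideline>
""")

# Fetch only treatment recommendations, not free-text matches anywhere.
for rec in guideline.findall(".//section[@name='treatment']/recommendation"):
    print(rec.get("grade"), rec.text)
```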
The SGML standardization framework and the introduction of XML.
Fierz, W; Grütter, R
2000-01-01
Extensible Markup Language (XML) is on its way to becoming a global standard for the representation, exchange, and presentation of information on the World Wide Web (WWW). More than that, XML is creating a standardization framework, in terms of an open network of meta-standards and mediators that allows for the definition of further conventions and agreements in specific business domains. Such an approach is particularly needed in the healthcare domain; XML promises to especially suit the particularities of patient records and their lifelong storage, retrieval, and exchange. At a time when change rather than steadiness is becoming the faithful feature of our society, standardization frameworks which support a diversified growth of specifications that are appropriate to the actual needs of the users are becoming more and more important; and efforts should be made to encourage this new attempt at standardization to grow in a fruitful direction. Thus, the introduction of XML reflects a standardization process which is neither exclusively based on an acknowledged standardization authority, nor a pure market standard. Instead, a consortium of companies, academic institutions, and public bodies has agreed on a common recommendation based on an existing standardization framework. The consortium's process of agreeing to a standardization framework will doubtlessly be successful in the case of XML, and it is suggested that it should be considered as a generic model for standardization processes in the future.
Query Language for Location-Based Services: A Model Checking Approach
NASA Astrophysics Data System (ADS)
Hoareau, Christian; Satoh, Ichiro
We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is unique among existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
ERIC Educational Resources Information Center
Battalio, John T.
2002-01-01
Describes the influence that Extensible Markup Language (XML) will have on the software documentation process and subsequently on the curricula of advanced undergraduate and master's programs in technical communication. Recommends how curricula of advanced undergraduate and master's programs in technical communication ought to change in order to…
Luo, Yuan; Szolovits, Peter
2016-01-01
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem into the interval query problem, for which optimal query/update time is in general logarithm. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when being applied to a collection of 13 query types from Allen’s interval algebra. We then study two closely related state-of-the-art interval query algorithms, proposed query reformulations, and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic time stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen’s relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions. PMID:27478379
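The paper's algorithms are more involved (covering stabbing-max queries and all of Allen's relations); as a basic sketch of the underlying structure, the following minimal centered interval tree answers stabbing queries over hypothetical stand-off annotations without scanning every annotation.

```python
# Minimal centered interval tree for stabbing queries over stand-off
# annotations: report all annotations whose [start, end] contains a
# position. A sketch only; the paper additionally achieves optimal bounds
# for stabbing-max and all of Allen's relations.
class IntervalNode:
    def __init__(self, intervals):
        starts = sorted(s for s, e, _ in intervals)
        self.center = starts[len(starts) // 2]
        here, left, right = [], [], []
        for iv in intervals:
            s, e, _ = iv
            if e < self.center:
                left.append(iv)
            elif s > self.center:
                right.append(iv)
            else:
                here.append(iv)  # spans the center point
        self.by_start = sorted(here, key=lambda iv: iv[0])
        self.by_end = sorted(here, key=lambda iv: iv[1], reverse=True)
        self.left = IntervalNode(left) if left else None
        self.right = IntervalNode(right) if right else None

    def stab(self, pos):
        hits = []
        if pos < self.center:
            for s, e, label in self.by_start:  # all have e >= center > pos
                if s > pos:
                    break
                hits.append(label)
            if self.left:
                hits += self.left.stab(pos)
        else:
            for s, e, label in self.by_end:    # all have s <= center <= pos
                if e < pos:
                    break
                hits.append(label)
            if self.right:
                hits += self.right.stab(pos)
        return hits

# Annotations as (start, end, label) over a clinical note (hypothetical).
annotations = [(0, 11, "chief-complaint"), (4, 8, "symptom"),
               (15, 29, "medication"), (20, 29, "dosage")]
tree = IntervalNode(annotations)
print(sorted(tree.stab(6)))   # ['chief-complaint', 'symptom']
print(sorted(tree.stab(22)))  # ['dosage', 'medication']
```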
GraQL: A Query Language for High-Performance Attributed Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Castellana, Vito G.; Morari, Alessandro
Graph databases have gained increasing interest in the last few years due to the emergence of data sources that are not easily analyzable in traditional relational models, or for which a graph data model is the natural representation. To understand the design and implementation choices for an attributed graph database backend and query language, we have begun building our own infrastructure for attributed graph databases. In this paper, we describe the design considerations of our in-memory attributed graph database system, with a particular focus on the data definition and query language components.
Frank, M S; Schultz, T; Dreyer, K
2001-06-01
To provide a standardized and scalable mechanism for exchanging digital radiologic educational content between software systems that use disparate authoring, storage, and presentation technologies. Our institution uses two distinct software systems for creating educational content for radiology. Each system is used to create in-house educational content as well as commercial educational products. One system is an authoring and viewing application that facilitates the input and storage of hierarchical knowledge and associated imagery, and is capable of supporting a variety of entity relationships. This system is primarily used for the production and subsequent viewing of educational CD-ROMs. Another software system is primarily used for radiologic education on the world wide web. This system facilitates input and storage of interactive knowledge and associated imagery, delivering this content over the internet in a Socratic manner simulating in-person interaction with an expert. A subset of knowledge entities common to both systems was derived. An additional subset of knowledge entities that could be bidirectionally mapped via algorithmic transforms was also derived. An extensible markup language (XML) object model and associated lexicon were then created to represent these knowledge entities and their interactive behaviors. Forward-looking attention was exercised in the creation of the object model in order to facilitate straightforward future integration of other sources of educational content. XML generators and interpreters were written for both systems. Deriving the XML object model and lexicon was the most critical and time-consuming aspect of the project. The coding of the XML generators and interpreters required only a few hours for each environment. Subsequently, the transfer of hundreds of educational cases and thematic presentations between the systems can now be accomplished in a matter of minutes. The use of algorithmic transforms results in nearly 100% transfer of context as well as content, thus providing "presentation-ready" outcomes. The automation of knowledge exchange between dissimilar digital teaching environments magnifies the efforts of educators and enriches the learning experience for participants. XML is a powerful and useful mechanism for transferring educational content, as well as the context and interactive behaviors of such content, between disparate systems.
Schuers, Matthieu; Joulakian, Mher; Kerdelhué, Gaetan; Segas, Léa; Grosjean, Julien; Darmoni, Stéfan J; Griffon, Nicolas
2017-07-03
MEDLINE is the most widely used medical bibliographic database in the world. Most of its citations are in English and this can be an obstacle for some researchers to access the information the database contains. We created a multilingual query builder to facilitate access to the PubMed subset using a language other than English. The aim of our study was to assess the impact of this multilingual query builder on the quality of PubMed queries for non-native English speaking physicians and medical researchers. A randomised controlled study was conducted among French speaking general practice residents. We designed a multi-lingual query builder to facilitate information retrieval, based on available MeSH translations and providing users with both an interface and a controlled vocabulary in their own language. Participating residents were randomly allocated either the French or the English version of the query builder. They were asked to translate 12 short medical questions into MeSH queries. The main outcome was the quality of the query. Two librarians blind to the arm independently evaluated each query, using a modified published classification that differentiated eight types of errors. Twenty residents used the French version of the query builder and 22 used the English version. 492 queries were analysed. There were significantly more perfect queries in the French group vs. the English group (respectively 37.9% vs. 17.9%; p < 0.01). It took significantly more time for the members of the English group than the members of the French group to build each query, respectively 194 sec vs. 128 sec; p < 0.01. This multi-lingual query builder is an effective tool to improve the quality of PubMed queries in particular for researchers whose first language is not English.
An effective XML based name mapping mechanism within StoRM
NASA Astrophysics Data System (ADS)
Corso, E.; Forti, A.; Ghiselli, A.; Magnoni, L.; Zappi, R.
2008-07-01
In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data by evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the requests. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different qualities of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide, and the Virtual Organizations that want to use the storage areas. An appropriate directory tree is realized in each storage component, reflecting the XML schema. In this scenario StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented, and the technical details of the StoRM implementation.
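The abstract does not give StoRM's actual XML schema; as a hedged sketch with hypothetical element and attribute names, the following code resolves a logical identifier against such a namespace document without any database service.

```python
# Hedged sketch of resolving a logical file name against an XML namespace
# document (element and attribute names are hypothetical, not StoRM's
# actual schema): pick the storage area matching the VO and quality of
# service, then map the logical path onto that area's physical root.
import xml.etree.ElementTree as ET

namespace_doc = ET.fromstring("""
<namespace>
  <storage-area vo="atlas" quality="disk" root="/gpfs/atlas/disk"/>
  <storage-area vo="atlas" quality="tape" root="/gpfs/atlas/tape"/>
</namespace>
""")

def resolve(logical_path, vo, quality):
    for area in namespace_doc.findall("storage-area"):
        if area.get("vo") == vo and area.get("quality") == quality:
            return area.get("root") + logical_path
    raise LookupError("no storage area for %s/%s" % (vo, quality))

print(resolve("/data/run42/file.root", "atlas", "disk"))
# /gpfs/atlas/disk/data/run42/file.root
```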
ExplorEnz: a MySQL database of the IUBMB enzyme nomenclature.
McDonald, Andrew G; Boyce, Sinéad; Moss, Gerard P; Dixon, Henry B F; Tipton, Keith F
2007-07-27
We describe the database ExplorEnz, which is the primary repository for EC numbers and enzyme data that are being curated on behalf of the IUBMB. The enzyme nomenclature is incorporated into many other resources, including the ExPASy-ENZYME, BRENDA and KEGG bioinformatics databases. The data, which are stored in a MySQL database, preserve the formatting of chemical and enzyme names. A simple, easy to use, web-based query interface is provided, along with an advanced search engine for more complex queries. The database is publicly available at http://www.enzyme-database.org. The data are available for download as SQL and XML files via FTP. ExplorEnz has powerful and flexible search capabilities and provides the scientific community with the most up-to-date version of the IUBMB Enzyme List.
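As an illustration of the kind of advanced Boolean query such an interface supports, here is a sketch against a hypothetical, simplified enzyme table (the real ExplorEnz schema is not shown in the abstract); Python's sqlite3 stands in for MySQL.

```python
# Sketch of a Boolean search over a simplified enzyme table; the schema is
# hypothetical (the real ExplorEnz MySQL schema is not shown here), and
# sqlite3 stands in for MySQL.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE enzyme (ec TEXT, accepted_name TEXT, class TEXT)")
db.executemany("INSERT INTO enzyme VALUES (?, ?, ?)", [
    ("1.1.1.1", "alcohol dehydrogenase", "oxidoreductase"),
    ("3.1.1.3", "triacylglycerol lipase", "hydrolase"),
])

rows = db.execute(
    "SELECT ec, accepted_name FROM enzyme "
    "WHERE class = ? AND accepted_name LIKE ?",
    ("hydrolase", "%lipase%"),
).fetchall()
print(rows)  # [('3.1.1.3', 'triacylglycerol lipase')]
```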
Report of Official Foreign Travel to Spain, April 17-29, 1999
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, j.d.
The Department of Energy (DOE) has moved rapidly toward electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Like most government agencies, DOE has expressed a preference for openly developed standards in preference to proprietary designs promoted as "standards" by vendors. In particular, there is a preference for standards developed by organizations such as the International Organization for Standardization (ISO) and the American National Standards Institute (ANSI) that use open, public processes to develop their standards. Among the most widely adopted international standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), which DOE has selected as the basis of its electronic management of documents. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML, and its use is likely to increase in the future. Most documents on the WWW are coded in HTML ("Hypertext Markup Language"), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Microsoft, Adobe, and Netscape, is promoting XML ("eXtensible Markup Language"), a class of SGML applications, for the future of the WWW and the basis for EC. W3C has announced its intention of discontinuing future development of HTML and replacing it with XHTML, an application of XML. In support of DOE's use of these standards, I have served since 1985 as Chairman of the international committee responsible for SGML and related standards, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations. During my April 1999 trip, I convened the spring 1999 meeting of SC34 in Granada, Spain. I also attended a major conference on the use of SGML and XML. SC34 maintains and continues to enhance several standards. In addition to SGML, which is the basis of HTML and XML, SC34 also works on the Document Style Semantics and Specification Language (DSSSL), which is the basis for W3C's XSL ("eXtensible Style Language," to be used with XML), and the Hypermedia/Time-based Document Structuring Language (HyTime), which is a major influence on W3C's XLink ("XML Linking Language"). SC34 is also involved in work with ISO's TC184, Industrial Data, on the linking of STEP (the standard for the interchange of product model data) with SGML. In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there are several SGML-based projects at the Y-12 Plant. My project team in Information Technology Services developed an SGML-based publications system that has been used for several major reports at the Y-12 Plant and Oak Ridge National Laboratory (ORNL). SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at the Y-12 Plant and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). Supporting standards development gives DOE and the Y-12 Plant both input into the process and the opportunity to benefit from contact with some of the leading experts in the subject matter.
Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML and related topics.« less
Selecting the Best Mobile Information Service with Natural Language User Input
NASA Astrophysics Data System (ADS)
Feng, Qiangze; Qi, Hongwei; Fukushima, Toshikazu
Information services accessed via mobile phones provide information directly relevant to subscribers’ daily lives and are an area of dynamic market growth worldwide. Although many information services are currently offered by mobile operators, many of the existing solutions require a unique gateway for each service, and it is inconvenient for users to have to remember a large number of such gateways. Furthermore, the Short Message Service (SMS) is very popular in China, and Chinese users would prefer to access these services in natural language via SMS. This chapter describes a Natural Language Based Service Selection System (NL3S) for use with a large number of mobile information services. The system can accept user queries in natural language and navigate the user to the required service. Since it is difficult for existing methods to achieve high accuracy and high coverage and to anticipate which other services a user might want to query, the NL3S is developed based on a Multi-service Ontology (MO) and Multi-service Query Language (MQL). The MO and MQL provide semantic and linguistic knowledge, respectively, to facilitate service selection for a user query and to provide adaptive service recommendations. Experiments show that the NL3S can achieve accuracies of 75-95% and satisfaction ratings of 85-95% when processing various styles of natural language queries. A trial involving navigation of 30 different mobile services shows that the NL3S can provide a viable commercial solution for mobile operators.
2016-02-08
DDML Schema Validation, RCC 126-16, February 2016. Acronyms used: DDML, Data Display Markup Language; HUD, heads-up display; IRIG, Inter-Range Instrumentation Group; RCC, Range Commanders Council; SVG, Scalable Vector Graphics; T&E, test and evaluation; TMATS, Telemetry Attributes Transfer Standard; XML, eXtensible Markup Language.
NASA Astrophysics Data System (ADS)
Guimaraes, Cayley; Antunes, Diego R.; de F. Guilhermino Trindade, Daniela; da Silva, Rafaella A. Lopes; Garcia, Laura Sanchez
This work presents a computational model (XML) of the Brazilian Sign Language (Libras), based on its phonology. The model was used to create a sample of representative signs to aid the recording of a base of videos whose aim is to support the development of tools for genuine social inclusion of the deaf.
Vittorini, Pierpaolo; Tarquinio, Antonietta; di Orio, Ferdinando
2009-03-01
The eXtensible Markup Language (XML) is a metalanguage useful for representing and exchanging data between heterogeneous systems. XML may enable healthcare practitioners to document, monitor, evaluate, and archive medical information and services in distributed computer environments. Therefore, the most recent proposals on electronic health records (EHRs) are usually based on XML documents. Since none of the existing nomenclatures were specifically developed for use in automated clinical information systems, but were adapted to such use, numerous current EHRs are organized as a sequence of events, each represented through codes taken from international classification systems. In nursing, a hierarchically organized problem-solving approach is followed, which couples poorly with the sequential organization of such EHRs. Therefore, the paper presents an XML data model for the Omaha System taxonomy, which is one of the most important international nomenclatures used in the home healthcare nursing context. This data model represents the formal definition of EHRs specifically developed for nursing practice. Furthermore, the paper describes a Java application prototype able to manage such documents, shows how they can be transformed into readable web pages, and reports several case studies, one currently managed by the home care service of a health center in central Italy.
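To make the data model concrete, a problem entry under such a model might be sketched as below. The element names are illustrative assumptions, not the schema defined in the paper; the structure simply mirrors how the Omaha System couples a classified problem with interventions and Knowledge-Behavior-Status outcome ratings.

```xml
<!-- Hypothetical sketch of an Omaha System problem entry; element names
     are illustrative, not the schema defined in the paper. -->
<problem domain="physiological">
  <name>Circulation</name>
  <modifiers scope="individual" type="actual"/>
  <signsSymptoms>
    <sign>edema</sign>
    <sign>irregular heart rate</sign>
  </signsSymptoms>
  <intervention category="treatments and procedures"
                target="medication administration"/>
  <!-- Problem Rating Scale for Outcomes: scores range from 1 to 5 -->
  <rating knowledge="2" behavior="3" status="2"/>
</problem>
```

The nesting is the point: the hierarchical problem-solving record stays one document rather than being flattened into a sequence of coded events.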
User-Friendly Interface Developed for a Web-Based Service for SpaceCAL Emulations
NASA Technical Reports Server (NTRS)
Liszka, Kathy J.; Holtz, Allen P.
2004-01-01
A team at the NASA Glenn Research Center is developing a Space Communications Architecture Laboratory (SpaceCAL) for protocol development activities for coordinated satellite missions. SpaceCAL will provide a multiuser, distributed system to emulate space-based Internet architectures, backbone networks, formation clusters, and constellations. As part of a new effort in 2003, building blocks are being defined for an open distributed system to make the satellite emulation test bed accessible through an Internet connection. The first step in creating a Web-based service to control the emulation remotely is providing a user-friendly interface for encoding the data into a well-formed and complete Extensible Markup Language (XML) document. XML provides coding that allows data to be transferred between dissimilar systems. Scenario specifications include control parameters, network routes, interface bandwidths, delay, and bit error rate. Specifications for all satellites, instruments, and ground stations in a given scenario are also included in the XML document. For the SpaceCAL emulation, the XML document can be created using XForms, a Web-based forms language for data collection. In contrast to older forms technologies, the interactive user interface makes the science prevalent, not the data representation. Required versus optional input fields, default values, automatic calculations, data validation, and reuse will help researchers quickly and accurately define missions. XForms can apply any XML schema defined for the test mission to validate data before forwarding it to the emulation facility. New instrument definitions, facilities, and mission types can be added to the existing schema. The first prototype user interface incorporates components for interactive input and form processing. Internet address, data rate, and the location of the facility are implemented with basic form controls, with default values provided for convenience and efficiency using basic XForms operations. Because different emulation scenarios will vary widely in their component structure, more complex operations are used to add and delete facilities.
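As a rough illustration of the approach (the field names and schema reference below are assumptions, not the actual SpaceCAL definitions), an XForms model holds the scenario instance, attaches an XML Schema for validation, and binds constraints and defaults to individual fields:

```xml
<!-- Minimal XForms sketch with hypothetical field names: a model holding
     a scenario instance, a schema for validation, and a typed binding. -->
<xforms:model xmlns:xforms="http://www.w3.org/2002/xforms"
              schema="scenario.xsd">
  <xforms:instance>
    <scenario xmlns="">
      <facility>
        <address>192.168.1.10</address>
        <dataRate>155</dataRate>          <!-- default value, Mbps -->
        <bitErrorRate>1e-7</bitErrorRate> <!-- default value -->
      </facility>
    </scenario>
  </xforms:instance>
  <!-- declare the data rate as a required decimal so the form validates
       the value before submission -->
  <xforms:bind nodeset="/scenario/facility/dataRate"
               type="xsd:decimal" required="true()"
               xmlns:xsd="http://www.w3.org/2001/XMLSchema"/>
  <xforms:submission id="send" method="post"
                     action="http://example.org/spacecal/submit"/>
</xforms:model>
```

Because the bind declares the data rate as a required decimal, a malformed scenario is caught in the browser before it ever reaches the emulation facility.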
An XML-based Generic Tool for Information Retrieval in Solar Databases
NASA Astrophysics Data System (ADS)
Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain
This paper presents the current architecture of the 'Solar Web Project', now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java, and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
Rock.XML - Towards a library of rock physics models
NASA Astrophysics Data System (ADS)
Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund
2016-08-01
Rock physics modelling provides tools for correlating the physical properties of rocks and their constituents with the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist to cover the range of different rock types. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications of rock physics models, which we have named Rock.XML. It contains not only data about the various constituents, but also the theories and how they are used to combine these building blocks into a representative model for a particular rock. The format is based on the Extensible Markup Language XML, making it flexible enough to handle complex models as well as scalable towards extending it with new theories and models. This technology has great advantages for documenting and exchanging models in an unambiguous way between people and between software. Rock.XML can become a platform for creating a library of rock physics models, making them more accessible to everyone.
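A sketch of what such a description could look like follows; the element names are invented for illustration and the published Rock.XML format may differ. The point is that constituents and the combining theories are both first-class elements:

```xml
<!-- Illustrative Rock.XML-style description (element names hypothetical):
     constituents plus the theories used to combine them. Moduli in GPa,
     densities in g/cm3; values are typical literature numbers. -->
<rock name="shaly-sandstone-example">
  <constituents>
    <mineral name="quartz" bulkModulus="36.6" shearModulus="45.0"
             density="2.65" fraction="0.80"/>
    <mineral name="clay"   bulkModulus="20.9" shearModulus="6.9"
             density="2.58" fraction="0.20"/>
    <fluid   name="brine"  bulkModulus="2.7"  density="1.05"/>
  </constituents>
  <model>
    <!-- mix the mineral moduli, then add the pore fluid -->
    <theory name="hashin-shtrikman" role="solid-mixing"/>
    <theory name="gassmann" role="fluid-substitution" porosity="0.25"/>
  </model>
</rock>
```

Keeping the theories themselves as elements is what makes a library scalable: a new model becomes a new theory entry and a new combination recipe, not a new file format.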
ArdenML: The Arden Syntax Markup Language (or Arden Syntax: It's Not Just Text Any More!)
Sailors, R. Matthew
2001-01-01
It is no longer necessary to think of Arden Syntax as simply a text-based knowledge base format. The development of ArdenML (the Arden Syntax Markup Language), an XML-based markup language, allows structured access to most of the maintenance and library categories without the need to write or buy a compiler, and may lead to the development of simple commercial and freeware tools for processing Arden Syntax Medical Logic Modules (MLMs).
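For illustration, a maintenance category rendered as XML might look like the sketch below. The tag names mirror the Arden Syntax slot names but are not necessarily those of the published ArdenML schema:

```xml
<!-- Hypothetical ArdenML-style rendering of an MLM's maintenance
     category; tag names are illustrative, not the published schema. -->
<mlm>
  <maintenance>
    <title>Check for penicillin allergy</title>
    <mlmname>penicillin_allergy</mlmname>
    <arden-version>2.1</arden-version>
    <version>1.00</version>
    <institution>Example Hospital</institution>
    <author>Example Author</author>
    <validation>testing</validation>
    <date>2001-01-01</date>
  </maintenance>
</mlm>
```

Once the slots are XML elements, generic tools such as XPath or XSLT can inventory, diff, or reformat a library of MLMs without any Arden compiler at all.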
A Low-Storage-Consumption XML Labeling Method for Efficient Structural Information Extraction
NASA Astrophysics Data System (ADS)
Liang, Wenxin; Takahashi, Akihiro; Yokota, Haruo
Recently, labeling methods to extract and reconstruct the structural information of XML data, which is important for many applications such as XPath querying and keyword search, have become more attractive. To achieve efficient structural information extraction, in this paper we propose the C-DO-VLEI code, a novel update-friendly bit-vector encoding scheme based on register-length bit operations combined with the properties of Dewey Order numbers, which cannot be implemented in other relevant existing schemes such as ORDPATH. Meanwhile, the proposed method also achieves lower storage consumption because it requires neither a prefix schema nor any reserved codes for node insertion. We performed experiments to evaluate and compare the performance and storage consumption of the proposed method with those of the ORDPATH method. Experimental results show that the execution times for extracting depth information and parent node labels using the C-DO-VLEI code are about 25% and 15% less, respectively, and the average label size using the C-DO-VLEI code is about 24% smaller, compared with ORDPATH.
Manchester Visual Query Language
NASA Astrophysics Data System (ADS)
Oakley, John P.; Davis, Darryl N.; Shann, Richard T.
1993-04-01
We report a database language for visual retrieval which allows queries on image feature information that has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator, which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high-scoring bins. The query could be directed towards one particular image or an entire image database; in the latter case the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.
Redefining the Data Pipeline Using GPUs
NASA Astrophysics Data System (ADS)
Warner, C.; Eikenberry, S. S.; Gonzalez, A. H.; Packham, C.
2013-10-01
There are two major challenges facing the next generation of data processing pipelines: 1) handling an ever-increasing volume of data as array sizes continue to increase and 2) the desire to process data in near real-time to maximize observing efficiency by providing rapid feedback on data quality. Combining the power of modern graphics processing units (GPUs), relational database management systems (RDBMSs), and the Extensible Markup Language (XML) to re-imagine traditional data pipelines will allow us to meet these challenges. Modern GPUs contain hundreds of processing cores, each of which can process hundreds of threads concurrently. Technologies such as Nvidia's Compute Unified Device Architecture (CUDA) platform and the PyCUDA (http://mathema.tician.de/software/pycuda) module for Python allow us to write parallel algorithms and easily link GPU-optimized code into existing data pipeline frameworks. This approach has produced speed gains of over a factor of 100 compared to CPU implementations for individual algorithms and overall pipeline speed gains of a factor of 10-25 compared to traditionally built data pipelines for both imaging and spectroscopy (Warner et al., 2011). However, there are still many bottlenecks inherent in the design of traditional data pipelines. For instance, file input/output of intermediate steps is now a significant portion of the overall processing time. In addition, most traditional pipelines are not designed to be able to process data on-the-fly in real time. We present a model for a next-generation data pipeline that has the flexibility to process data in near real-time at the observatory as well as to automatically process huge archives of past data by using a simple XML configuration file. XML is ideal for describing both the dataset and the processes that will be applied to the data. Meta-data for the datasets would be stored using an RDBMS (such as MySQL or PostgreSQL) which could be easily and rapidly queried, and file I/O would be kept to a minimum. We believe this redefined data pipeline will be able to process data at the telescope, concurrent with continuing observations, thus maximizing precious observing time and optimizing the observational process in general. We also believe that using this design, it is possible to obtain a speed gain of a factor of 30-40 over traditional data pipelines when processing large archives of data.
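A configuration in this spirit might look like the following sketch (element names are hypothetical, not the authors' actual format): the XML names the dataset via a database query and lists the ordered processes to apply, with intermediate file I/O suppressed:

```xml
<!-- Hypothetical pipeline configuration: the dataset is selected by an
     RDBMS query and the ordered processing steps are declared inline. -->
<pipeline name="imaging-quicklook" mode="realtime">
  <dataset>
    <database engine="postgresql" host="db.example.org" name="obslog"/>
    <query>SELECT filename FROM frames WHERE night = CURRENT_DATE</query>
  </dataset>
  <processes>
    <process name="bias-subtract" device="gpu"/>
    <process name="flat-field"    device="gpu"/>
    <process name="cosmic-reject" device="gpu" iterations="4"/>
    <process name="astrometry"    device="cpu"/>
  </processes>
  <!-- keep intermediate products in memory; write only the final image -->
  <output writeIntermediate="false" format="fits"/>
</pipeline>
```

The same file, pointed at an archive query instead of tonight's frames, would drive the batch reprocessing case described above.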
Report of Official Foreign Travel to France May 8-27, 1998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, J. D.
1998-06-11
The Department of Energy (DOE) has moved ever more rapidly towards electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Among the most widely adopted standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), which DOE has selected as the basis of its electronic management of documents. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML, and its use is likely to increase in the future. Most documents on the WWW are coded in HTML (Hypertext Markup Language), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Microsoft, Adobe, and Netscape, is promoting XML (eXtensible Markup Language), a class of SGML applications, for the future of the WWW and the basis for EC. In support of DOE's use of these standards, I have served since 1985 as Convenor of the international committee responsible for SGML and related standards, ISO/IEC JTC1/WG4 (WG4). During this trip I convened the spring 1998 meeting of WG4 in Paris, France. I also attended a major conference on the use of SGML and XML. At the close of the conference, I chaired a workshop of standards developers looking at ways of improving online searching of electronic documents. Note: Since the end of the meetings in France, JTC1 has raised the level of WG4 to a full Subcommittee; its designator is now ISO/IEC JTC1/SC34. WG4 maintains and continues to enhance several standards. In addition to SGML, which is the basis of HTML and XML, WG4 also works on the Document Style Semantics and Specification Language (DSSSL), which is the basis for the W3C's XSL (eXtensible Style Language, to be used with XML) and the Hypermedia/Time-based Document Structuring Language (HyTime), which is a major influence on the W3C's XLink (XML Linking Language). WG4 is also involved in work with the ISO's TC184, Industrial Data, on the linking of STEP (the standard for the interchange of product model data) with SGML. In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there are several SGML-based projects at the Y-12 Plant. My project team in Information Technology Services has developed an SGML-based publications system that has been used for several major reports at the Y-12 Plant and Oak Ridge National Laboratory (ORNL). SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at Y-12 and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). Supporting standards development allows DOE and Y-12 both input into the process and the opportunity to benefit from contact with some of the leading experts in the subject matter. Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML and related topics.
SuML: A Survey Markup Language for Generalized Survey Encoding
Barclay, MW; Lober, WB; Karras, BT
2002-01-01
There is a need in clinical and research settings for a sophisticated, generalized, web-based survey tool that supports complex logic, separation of content and presentation, and computable guidelines. There are many commercial and open source survey packages available that provide simple logic; few provide sophistication beyond “goto” statements; none support the use of guidelines. These tools are driven by databases, static web pages, and structured documents using markup languages such as the eXtensible Markup Language (XML). We propose a generalized, guideline-aware language and an implementation architecture using open source standards.
Query Expansion and Query Translation as Logical Inference.
ERIC Educational Resources Information Center
Nie, Jian-Yun
2003-01-01
Examines query expansion during query translation in cross language information retrieval and develops a general framework for inferential information retrieval in two particular contexts: using fuzzy logic and probability theory. Obtains evaluation formulas that are shown to strongly correspond to those used in other information retrieval models.…
The Graphical Representation of the Digital Astronaut Physiology Backbone
NASA Technical Reports Server (NTRS)
Briers, Demarcus
2010-01-01
This report summarizes my internship project with the NASA Digital Astronaut Project to analyze the Digital Astronaut (DA) physiology backbone model. The Digital Astronaut Project (DAP) applies integrated physiology models to support space biomedical operations, and to assist NASA researchers in closing knowledge gaps related to human physiologic responses to space flight. The DA physiology backbone is a set of integrated physiological equations and functions that model the interacting systems of the human body. The current release of the model is HumMod (Human Model) version 1.5 and was developed over forty years at the University of Mississippi Medical Center (UMMC). The physiology equations and functions are scripted in an XML schema specifically designed for physiology modeling by Dr. Thomas G. Coleman at UMMC. Currently it is difficult to examine the physiology backbone without being knowledgeable of the XML schema. While investigating and documenting the tags and algorithms used in the XML schema, I proposed a standard methodology for a graphical representation. This standard methodology may be used to transcribe graphical representations from the DA physiology backbone. In turn, the graphical representations can allow examination of the physiological functions and equations without the need to be familiar with the computer programming languages or markup languages used by DA modeling software.
Kataoka, Satoshi; Ohe, Kazuhiko; Mochizuki, Mayumi; Ueda, Shiro
2002-01-01
We have developed an adverse drug reaction (ADR) reporting system integrated with the Hospital Information System (HIS) of the University of Tokyo Hospital. Since this system is written in Java, it is portable without re-compiling to any operating system on which a Java virtual machine works. In this system, we implemented an automatic data-filling function using XML-based (eXtensible Markup Language) files generated by the HIS. This new feature should decrease the time needed for physicians and pharmacists to fill in spontaneous ADR reports. By clicking a button, the report is sent to the text database through Simple Mail Transfer Protocol (SMTP) electronic mail. The destination of the report mail can be changed arbitrarily by administrators, which gives this system more flexibility for practical operation. Although we tried our best to use the SGML-based (Standard Generalized Markup Language) ICH M2 guideline to follow the global standard for case reports, we eventually adopted XML as the output report format. This is because we found some problems in handling two-byte characters with the ICH guideline, and XML has a lot of useful features. According to our pilot survey conducted at the University of Tokyo Hospital, many physicians answered that our idea of integrating the ADR reporting system with the HIS would increase ADR reporting numbers.
Distribution of immunodeficiency fact files with XML--from Web to WAP.
Väliaho, Jouni; Riikonen, Pentti; Vihinen, Mauno
2005-06-26
Although biomedical information is growing rapidly, it is difficult to find and retrieve validated data, especially for rare hereditary diseases. There is an increased need for services capable of integrating and validating information as well as providing it in a logically organized structure. An XML-based language enables the creation of open source databases for storage, maintenance and delivery to different platforms. Here we present a new data model called the fact file and an XML-based specification, the Inherited Disease Markup Language (IDML), that were developed to facilitate disease information integration, storage and exchange. The data model was applied to primary immunodeficiencies, but it can be used for any hereditary disease. Fact files integrate biomedical, genetic and clinical information related to hereditary diseases. IDML and fact files were used to build a comprehensive Web and WAP accessible knowledge base, the ImmunoDeficiency Resource (IDR), available at http://bioinf.uta.fi/idr/. A fact file is a user-oriented interface, which serves as a starting point to explore information on hereditary diseases. The IDML enables the seamless integration and presentation of genetic and disease information resources on the Internet. IDML can be used to build information services for all kinds of inherited diseases. The open source specification and related programs are available at http://bioinf.uta.fi/idml/.
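An IDML fact file might be sketched as follows; the element names are illustrative rather than the published IDML specification, but they show how genetic and clinical facts for one disease sit together in a single validated document:

```xml
<!-- Sketch of an IDML-style fact file (element names hypothetical),
     integrating genetic and clinical facts for one immunodeficiency. -->
<factfile disease="X-linked agammaglobulinemia">
  <gene symbol="BTK" chromosome="Xq22"/>
  <inheritance>X-linked recessive</inheritance>
  <clinical>
    <feature>recurrent bacterial infections</feature>
    <feature>markedly reduced serum immunoglobulins</feature>
  </clinical>
  <links>
    <link type="knowledgeBase" href="http://bioinf.uta.fi/idr/"/>
  </links>
</factfile>
```

Because the same source document drives both the Web and WAP views, the integration and validation work is done once, at the fact-file level.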
Wolff, A C; Mludek, V; van der Haak, M; Bork, W; Bülzebruck, H; Drings, P; Schmücker, P; Wannenmacher, M; Haux, R
2001-01-01
Communication between different institutions which are responsible for the treatment of the same patient is of outstanding significance, especially in the field of tumor diseases. Regional electronic patient records could support the co-operation of different institutions by providing access to all necessary information, whether it belongs to the institution itself or to a partner. The Department of Medical Informatics, University of Heidelberg is performing a project in co-operation with the Thoraxclinic-Heidelberg and the Department of Clinical Radiology, University of Heidelberg with the following goals: to define an architectural concept for interlinking the electronic patient records of the two clinical institutions to build a common virtual electronic patient record and carry out an exemplary implementation; to examine the composition, structure and content of medical documents for tumor patients with the aim of defining an XML-based markup language allowing summarizing overviews and suitable granularities; and to integrate clinical practice guidelines and other external knowledge with the electronic patient record using XML technologies to support the physician in the daily decision process. This paper shows how a regional electronic patient record could be built on an architectural level and describes elementary steps towards a content-oriented structuring of medical records.
A Text Knowledge Base from the AI Handbook.
ERIC Educational Resources Information Center
Simmons, Robert F.
1987-01-01
Describes a prototype natural language text knowledge system (TKS) that was used to organize 50 pages of a handbook on artificial intelligence as an inferential knowledge base with natural language query and command capabilities. Representation of text, database navigation, query systems, discourse structuring, and future research needs are…
Kottmann, Renzo; Gray, Tanya; Murphy, Sean; Kagan, Leonid; Kravitz, Saul; Lombardot, Thierry; Field, Dawn; Glöckner, Frank Oliver
2008-06-01
The Genomic Contextual Data Markup Language (GCDML) is a core project of the Genomic Standards Consortium (GSC) that implements the "Minimum Information about a Genome Sequence" (MIGS) specification and its extension, the "Minimum Information about a Metagenome Sequence" (MIMS). GCDML is an XML Schema for generating MIGS/MIMS compliant reports for data entry, exchange, and storage. When mature, this sample-centric, strongly-typed schema will provide a diverse set of descriptors for describing the exact origin and processing of a biological sample, from sampling to sequencing, and subsequent analysis. Here we describe the need for such a project, outline design principles required to support the project, and make an open call for participation in defining the future content of GCDML. GCDML is freely available, and can be downloaded, along with documentation, from the GSC Web site (http://gensc.org).
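A MIGS/MIMS-compliant report in this style might contain fragments like the sketch below. The namespace and element names here are guesses for illustration; the actual GCDML schema, available from the GSC Web site, should be consulted for the real structure:

```xml
<!-- Hypothetical MIGS-style report fragment in the spirit of GCDML:
     sample origin and processing captured as strongly typed elements. -->
<report xmlns="http://gensc.org/gcdml-example">
  <sample>
    <latitude>54.18</latitude>
    <longitude>7.90</longitude>
    <depth units="m">1</depth>
    <habitat>coastal seawater</habitat>
    <collectionDate>2008-06-01</collectionDate>
  </sample>
  <sequencing>
    <method>pyrosequencing</method>
    <assembly>example-assembler</assembly>
  </sequencing>
</report>
```

Strong typing is what makes such reports checkable: a schema validator can reject a report whose depth is non-numeric or whose collection date is malformed before it enters an archive.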
Path querying system on mobile devices
NASA Astrophysics Data System (ADS)
Lin, Xing; Wang, Yifei; Tian, Yuan; Wu, Lun
2006-01-01
Traditional approaches to path querying problems are neither efficient nor convenient under most circumstances, so a more convenient and reliable approach to this problem has to be found. This paper is devoted to a path querying solution on mobile devices. By using an improved Dijkstra's shortest path algorithm and a natural language translating module, this system can help people find the shortest path between two places through their cell phones or other mobile devices. The chosen path is presented as natural language text, as well as a map picture. This system would be useful in solving best-path querying problems and has the potential to be a profitable business system.
mod_bio: Apache modules for Next-Generation sequencing data.
Lindenbaum, Pierre; Redon, Richard
2015-01-01
We describe mod_bio, a set of modules for the Apache HTTP server that allows users to access and query fastq, tabix, fasta and bam files through a Web browser. Those data are made available in plain text, HTML, XML, JSON and JSON-P. A javascript-based genome browser using the JSON-P communication technique is provided as an example of a cross-domain Web service. https://github.com/lindenb/mod_bio.
SQL/NF Translator for the Triton Nested Relational Database System
1990-12-01
AFIT/GCE/ENG/90D-05. SQL/NF Translator for the Triton Nested Relational Database System. Thesis by Craig William Schnepf, Captain, presented to the Faculty of the School of Engineering of the Air Force Institute of Technology. The SQL/NF query language used for the nested relational model is an extension of the popular relational model query language SQL.
Extending the Query Language of a Data Warehouse for Patient Recruitment.
Dietrich, Georg; Ertl, Maximilian; Fette, Georg; Kaspar, Mathias; Krebs, Jonathan; Mackenrodt, Daniel; Störk, Stefan; Puppe, Frank
2017-01-01
Patient recruitment for clinical trials is a laborious task, as many texts have to be screened. Usually, this work is done manually and takes a lot of time. We have developed a system that automates the screening process. Besides standard keyword queries, the query language supports extraction of numbers, time-spans and negations. In a feasibility study for patient recruitment from a stroke unit with 40 patients, we achieved encouraging extraction rates above 95% for numbers and negations and ca. 86% for time spans.
Computerization of guidelines: towards a "guideline markup language".
Dart, T; Xu, Y; Chatellier, G; Degoulet, P
2001-01-01
Medical decision making is one of the most difficult daily tasks for physicians. Guidelines have been designed to reduce variance between physicians in daily practice, to improve patient outcomes and to control costs. In fact, few physicians use guidelines in daily practice. A way to ease the use of guidelines is to implement computerised guidelines (computer reminders). We present in this paper a method of computerising guidelines. Our objectives were: 1) to propose a generic model that can be instantiated for any specific guideline; 2) to use eXtensible Markup Language (XML) as a guideline representation language to instantiate the generic model for a specific guideline. Our model is an object representation of a clinical algorithm; it has been validated by running two different guidelines issued by an official French agency. In spite of some limitations, we found that this model is expressive enough to represent complex guidelines devoted to diabetes and hypertension management. We conclude that XML can be used as a description format to structure guidelines and as an interface between paper-based guidelines and computer applications.
Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K P
2002-01-01
Supplement 23 to DICOM (Digital Imaging and Communications in Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification.
An advanced web query interface for biological databases
Latendresse, Mario; Karp, Peter D.
2010-01-01
Although most web-based biological databases (DBs) offer some type of web-based form to allow users to author DB queries, these query forms are quite restricted in the complexity of DB queries that they can formulate. They can typically query only one DB, and can query only a single type of object at a time (e.g. genes) with no possible interaction between the objects—that is, in SQL parlance, no joins are allowed between DB objects. Writing precise queries against biological DBs is usually left to a programmer skillful enough in complex DB query languages like SQL. We present a web interface for building precise queries for biological DBs that can construct much more precise queries than most web-based query forms, yet that is user friendly enough to be used by biologists. It supports queries containing multiple conditions, and connecting multiple object types without using the join concept, which is unintuitive to biologists. This interactive web interface is called the Structured Advanced Query Page (SAQP). Users interactively build up a wide range of query constructs. Interactive documentation within the SAQP describes the schema of the queried DBs. The SAQP is based on BioVelo, a query language based on list comprehension. The SAQP is part of the Pathway Tools software and is available as part of several bioinformatics web sites powered by Pathway Tools, including the BioCyc.org site that contains more than 500 Pathway/Genome DBs. PMID:20624715
SU-E-T-327: The Update of a XML Composing Tool for TrueBeam Developer Mode
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Y; Mao, W; Jiang, S
2014-06-01
Purpose: To introduce a major upgrade of a novel XML beam composing tool to scientists and engineers who strive to translate certain capabilities of TrueBeam Developer Mode into future clinical benefits of radiation therapy. Methods: TrueBeam Developer Mode provides users with a test bed for unconventional plans utilizing certain unique features not accessible in the clinical mode. To access the full set of capabilities, an XML beam definition file accommodating all parameters, including kV/MV imaging triggers, can be locally loaded in this mode; however, it is difficult and laborious to compose one in a text editor. In this study, a stand-alone interactive XML beam composing application, TrueBeam TeachMod, was developed on Windows platforms to assist users in making their unique plans in a WYSIWYG manner. A conventional plan can be imported as a DICOM RT object as the start of the beam editing process, in which the trajectories of all axes of a TrueBeam machine can be modified to the intended values at any control point. TeachMod also includes libraries of predefined imaging and treatment procedures to further expedite the process. Results: The TeachMod application is a major upgrade of the TeachMod module within DICOManTX. It fully supports TrueBeam 2.0. Trajectories of all axes, including all MLC leaves, can be graphically rendered and edited as needed. The time for XML beam composing has been reduced to a negligible amount regardless of the complexity of the plan. A good understanding of the XML language and the TrueBeam schema is not required, though preferred. Conclusion: Creating XML beams manually in a text editor would be a lengthy, error-prone process for sophisticated plans. An XML beam composing tool is highly desirable for R&D activities. It will bridge the gap between the scope of TrueBeam capabilities and their clinical application potential.
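The kind of file being composed can be pictured with the sketch below. This is an illustration of the control-point idea only, with invented element names, not Varian's actual TrueBeam schema:

```xml
<!-- Illustrative (not Varian's actual schema) beam sketch: two control
     points stepping gantry angle and MU, with an imaging trigger, as a
     composing tool like TeachMod might generate for Developer Mode. -->
<Beam name="demo-arc">
  <ControlPoint index="0">
    <GantryAngle>180.0</GantryAngle>
    <CollimatorAngle>0.0</CollimatorAngle>
    <MU>0</MU>
  </ControlPoint>
  <ControlPoint index="1">
    <GantryAngle>179.0</GantryAngle>
    <CollimatorAngle>0.0</CollimatorAngle>
    <MU>100</MU>
    <!-- trigger a kV image at this control point -->
    <ImagingTrigger type="kV"/>
  </ControlPoint>
</Beam>
```

A realistic arc contains hundreds of such control points across a dozen axes, which is why hand-editing in a text editor does not scale.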
James Webb Space Telescope XML Database: From the Beginning to Today
NASA Technical Reports Server (NTRS)
Gal-Edd, Jonathan; Fatig, Curtis C.
2005-01-01
The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 various flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using an XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard for the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission-specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has successfully exchanged the XML database with the Eclipse, EPOCH, and ASIST ground systems, the Portable Spacecraft Simulator (PSS), a front-end system, and the Integrated Trending and Plotting System (ITPS). This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.
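For example, an XTCE-style telemetry definition pairs parameter types with parameters, which is the kind of structure dissimilar ground systems can all ingest. The fragment below is simplified from the OMG standard, with illustrative names and encoding details omitted:

```xml
<!-- XTCE-flavored sketch of a telemetry parameter definition
     (simplified; names illustrative, encoding details omitted). -->
<SpaceSystem name="JWST-demo" xmlns="http://www.omg.org/space/xtce">
  <TelemetryMetaData>
    <ParameterTypeSet>
      <!-- a 16-bit unsigned integer type, reusable by many parameters -->
      <IntegerParameterType name="Uint16" sizeInBits="16" signed="false"/>
    </ParameterTypeSet>
    <ParameterSet>
      <Parameter name="BatteryVoltage" parameterTypeRef="Uint16"/>
    </ParameterSet>
  </TelemetryMetaData>
</SpaceSystem>
```

Any ground system that understands the standard can ingest such a definition without a custom script, which is precisely the cost saving described above.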
Regular paths in SparQL: querying the NCI Thesaurus.
Detwiler, Landon T; Suciu, Dan; Brinkley, James F
2008-11-06
OWL, the Web Ontology Language, provides syntax and semantics for representing knowledge for the semantic web. Many of the constructs of OWL have a basis in the field of description logics. While the formal underpinnings of description logics have led to a highly computable language, it has come at a cognitive cost. OWL ontologies are often unintuitive to readers lacking a strong logic background. In this work we describe GLEEN, a regular path expression library, which extends the RDF query language SparQL to support complex path expressions over OWL and other RDF-based ontologies. We illustrate the utility of GLEEN by showing how it can be used in a query-based approach to defining simpler, more intuitive views of OWL ontologies. In particular we show how relatively simple GLEEN-enhanced SparQL queries can create views of the OWL version of the NCI Thesaurus that match the views generated by the web-based NCI browser.
Social media based NLP system to find and retrieve ARM data: Concept paper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devarakonda, Ranjeet; Giansiracusa, Michael T.; Kumar, Jitendra
Information connectivity and retrieval play a role in our daily lives. The most pervasive source of online information is databases. The amount of data is growing at a rapid rate, and database technology is improving and having a profound effect. Almost all online applications store and retrieve information from databases. One challenge in supplying the public with wider access to informational databases is the need for knowledge of database languages like Structured Query Language (SQL). Although the SQL language has been published in many forms, not everybody is able to write SQL queries. Another challenge is that it may not be practical to make the public aware of the structure of the database. There is a need for novice users to query relational databases using their natural language. To solve this problem, many natural language interfaces to structured databases have been developed. The goal is to provide a more intuitive method for generating database queries and delivering responses. Social media makes it possible to interact with a wide section of the population. Through this medium, and with the help of Natural Language Processing (NLP), we can make the data of the Atmospheric Radiation Measurement Data Center (ADC) more accessible to the public. We propose an architecture for using Apache Lucene/Solr [1], OpenML [2,3], and Kafka [4] to generate an automated query/response system with inputs from Twitter [5], our Cassandra DB, and our log database. Using the Twitter API and NLP, we can give the public the ability to ask questions of our database and get automated responses.
Internet-based data interchange with XML
NASA Astrophysics Data System (ADS)
Fuerst, Karl; Schmidt, Thomas
2000-12-01
In this paper, a complete concept for Internet Electronic Data Interchange (EDI) - a well-known buzzword in the area of logistics and supply chain management to enable the automation of the interactions between companies and their partners - using XML (eXtensible Markup Language) will be proposed. This approach is based on the Internet and XML because the implementation of traditional EDI (e.g. EDIFACT, ANSI X.12) is usually too costly for small and medium-sized enterprises that want to integrate their suppliers and customers in a supply chain. The paper will also present the results of the implementation of a prototype for such a system, which has been developed for an industrial partner to improve the current situation of parts delivery. The main functions of this system are an early warning system to detect problems during the parts delivery process as early as possible, and a transport following system to track transports.
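A message in such a system might be sketched as below; the element names are hypothetical, but they show how a dispatch advice carries the fields that the early warning and transport following functions need:

```xml
<!-- Hypothetical XML/EDI dispatch advice: the expectedArrival field is
     what an early warning system checks deliveries against, and the
     trackingId is what a transport following system resolves. -->
<dispatchAdvice number="DA-2000-0815">
  <supplier id="4711"/>
  <customer id="0042"/>
  <shipment carrier="example-carrier" trackingId="XYZ123"
            expectedArrival="2000-12-18"/>
  <item partNumber="P-998" quantity="500" unit="pieces"/>
</dispatchAdvice>
```

Compared with EDIFACT segments, such a document can be produced and parsed with commodity XML tooling, which is the cost argument made above for small and medium-sized enterprises.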
CytometryML with DICOM and FCS
NASA Astrophysics Data System (ADS)
Leif, Robert C.
2018-02-01
Flow Cytometry Standard (FCS) and the Digital Imaging and Communications in Medicine (DICOM) standard are based on extensive, superb domain knowledge. However, they are isolated systems: they do not take advantage of data structures, they require special programs to read and write the data, and they lack the capability to interoperate with other standards; FCS also lacks many of the datatypes necessary for clinical laboratory data. The large overlap between imaging and flow cytometry provides strong evidence that both modalities should be covered by the same standard. Method: The XML Schema Definition Language, XSD 1.1, was used to translate FCS and/or DICOM objects. A MIFlowCyt file was tested with published values. Results: Previously, a significant part of an XML standard based upon a combination of FCS and DICOM has been implemented and validated with MIFlowCyt data. Strongly typed translations of FCS keywords have been constructed in XML. These keywords contain links to their DICOM and FCS equivalents.
Karvounis, E C; Tsakanikas, V D; Fotiou, E; Fotiadis, D I
2010-01-01
The paper proposes a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of blood flow, mass transport and plaque formation exported by ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in easy-to-handle 3D representations. The platform incorporates efficient algorithms which are able to perform blood flow simulation. In addition, atherosclerotic plaque development is estimated taking into account morphological, flow and genetic factors. ART-ML provides an XML format that enables the representation and management of embedded models within the ARTool platform and the storage and interchange of well-defined information. This approach influences model creation, model exchange, model reuse and result evaluation.
Non-invasive lightweight integration engine for building EHR from autonomous distributed systems.
Angulo, Carlos; Crespo, Pere; Maldonado, José A; Moner, David; Pérez, Daniel; Abad, Irene; Mandingorra, Jesús; Robles, Montserrat
2007-12-01
In this paper we describe Pangea-LE, a message-oriented lightweight data integration engine that allows homogeneous and concurrent access to clinical information from disperse and heterogeneous data sources. The engine extracts the information and passes it to the requesting client applications in a flexible XML format. The XML response message can be formatted on demand by appropriate Extensible Stylesheet Language (XSL) transformations in order to meet the needs of client applications. We also present a real deployment in a hospital where Pangea-LE collects and generates an XML view of all the available patient clinical information. The information is presented to healthcare professionals in an Electronic Health Record (EHR) viewer Web application with patient search and EHR browsing capabilities. Deployment in a real setting has been a success due to the non-invasive nature of Pangea-LE, which respects the existing information systems.
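The on-demand formatting step can be pictured with a minimal XSLT sketch. The layout of the engine's XML response below is hypothetical; the stylesheet simply renders each result element as a row of an HTML table for the EHR viewer:

```xml
<!-- Minimal XSLT sketch: render <labResult> elements of a hypothetical
     engine response as an HTML table for a viewer application. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/response">
    <table>
      <tr><th>Test</th><th>Value</th><th>Units</th></tr>
      <xsl:for-each select="labResult">
        <tr>
          <td><xsl:value-of select="@name"/></td>
          <td><xsl:value-of select="value"/></td>
          <td><xsl:value-of select="units"/></td>
        </tr>
      </xsl:for-each>
    </table>
  </xsl:template>
</xsl:stylesheet>
```

Swapping this stylesheet for another one retargets the same response to a different client, which is what makes the engine non-invasive: the data sources never change.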
Techniques for integrating -omics data
Akula, Siva Prasad; Miriyala, Raghava Naidu; Thota, Hanuman; Rao, Allam Appa; Gedela, Srinubabu
2009-01-01
The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving productivity for stored data. Through proper data integration tools and algorithms, researchers may correlate relationships that enable them to make better and faster decisions. The need for data integration is essential for the present -omics community, because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one of the important techniques often used is XML. XML is used to provide a document markup language that is easier to learn, retrieve, store and transmit. It is semantically richer than HTML. Here, we describe biowarehousing, database federation and controlled vocabularies, highlighting the XML application to store, migrate and validate -omics data. PMID:19255651
A natural language query system for Hubble Space Telescope proposal selection
NASA Technical Reports Server (NTRS)
Hornick, Thomas; Cohen, William; Miller, Glenn
1987-01-01
The proposal selection process for the Hubble Space Telescope is assisted by a robust and easy-to-use query program (TACOS). The system parses an English-subset language sentence regardless of the order of the keyword phrases, allowing the user greater flexibility than a standard command query language. Capabilities for macro and procedure definition are also integrated. The system was designed for flexibility in both use and maintenance. In addition, TACOS can be applied to any knowledge domain that can be expressed in terms of a single relation. The system was implemented mostly in Common LISP. The TACOS design is described in detail, with particular attention given to the implementation methods of sentence processing.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
... of Fish and Game, and the International Pacific Halibut Commission will present a technical workshop... Language (XML) interface is designed to facilitate an exchange of landings and production data between e...
Knowledge-based engineering of a PLC controlled telescope
NASA Astrophysics Data System (ADS)
Pessemier, Wim; Raskin, Gert; Saey, Philippe; Van Winckel, Hans; Deconinck, Geert
2016-08-01
As the new control system of the Mercator Telescope is being finalized, we can review some technologies and design methodologies that are advantageous, despite their relative uncommonness in astronomical instrumentation. A particular feature of the Mercator Telescope is that it is controlled by a single high-end soft-PLC (Programmable Logic Controller). Using off-the-shelf components only, our distributed embedded system controls all subsystems of the telescope such as the pneumatic primary mirror support, the hydrostatic bearing, the telescope axes, the dome, the safety system, and so on. We show how real-time application logic can be written conveniently in typical PLC languages (IEC 61131-3) and in C++ (to implement the pointing kernel) using the commercial TwinCAT 3 programming environment. This software processes the inputs and outputs of the distributed system in real-time via an observatory-wide EtherCAT network, which is synchronized with high precision to an IEEE 1588 (PTP, Precision Time Protocol) time reference clock. Taking full advantage of the ability of soft-PLCs to run both real-time and non-real-time software, the same device also hosts the most important user interfaces (HMIs or Human Machine Interfaces) and communication servers (OPC UA for process data, FTP for XML configuration data, and VNC for remote control). To manage the complexity of the system and to streamline the development process, we show how most of the software, electronics and systems engineering aspects of the control system have been modeled as a set of scripts written in a Domain Specific Language (DSL). When executed, these scripts populate a Knowledge Base (KB) which can be queried to retrieve specific information. By feeding the results of those queries to a template system, we were able to generate very detailed "browsable" web-based documentation about the system, but also PLC software code, Python client code, model verification reports, etc. The aim of this paper is to demonstrate the added value that technologies such as soft-PLCs and DSL scripts, and design methodologies such as knowledge-based engineering, can bring to astronomical instrumentation.
Design of an On-Line Query Language for Full Text Patent Search.
ERIC Educational Resources Information Center
Glantz, Richard S.
The design of an English-like query language and an interactive computer environment for searching the full text of the U.S. patent collection are discussed. Special attention is paid to achieving a transparent user interface, to providing extremely broad search capabilities (including nested substitution classes, Kleene star events, and domain…
Automating Data Submission to a National Archive
NASA Astrophysics Data System (ADS)
Work, T. T.; Chandler, C. L.; Groman, R. C.; Allison, M. D.; Gegg, S. R.; Biological and Chemical Oceanography Data Management Office
2010-12-01
In late 2006, the U.S. National Science Foundation (NSF) funded the Biological and Chemical Oceanographic Data Management Office (BCO-DMO) at Woods Hole Oceanographic Institution (WHOI) to work closely with investigators to manage oceanographic data generated from their research projects. One of the final data management tasks is to ensure that the data are permanently archived at the U.S. National Oceanographic Data Center (NODC) or other appropriate national archiving facility. In the past, BCO-DMO submitted data to NODC as an email with attachments including a PDF file (a manually completed metadata record) and one or more data files. This method is no longer feasible given the rate at which data sets are contributed to BCO-DMO. Working with collaborators at NODC, a more streamlined and automated workflow was developed to keep up with the increased volume of data that must be archived at NODC. We will describe our new workflow: a semi-automated approach for contributing data to NODC that includes a Federal Geographic Data Committee (FGDC) compliant Extensible Markup Language (XML) metadata file accompanied by comma-delimited data files. The FGDC XML file is populated from information stored in a MySQL database. A crosswalk described by an Extensible Stylesheet Language Transformation (XSLT) is used to transform the XML-formatted MySQL result set to an FGDC-compliant XML metadata file. To ensure data integrity, the MD5 algorithm is used to generate a checksum and manifest of the files submitted to NODC for permanent archive. The revised system supports preparation of detailed, standards-compliant metadata that facilitate data sharing and enable accurate reuse of multidisciplinary information. The approach is generic enough to be adapted for use by other data management groups.
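The crosswalk idea can be sketched as a small XSLT fragment. The result-set layout below follows the generic resultset/row/field convention (as produced, for example, by mysqldump --xml); the field names are hypothetical, and the FGDC elements shown are a tiny subset of the full CSDGM record:

```xml
<!-- Sketch of an XSLT crosswalk: map fields of a database result set
     (hypothetical field names) onto FGDC CSDGM metadata elements. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- one FGDC record per database row -->
  <xsl:template match="/resultset">
    <xsl:for-each select="row">
      <metadata>
        <idinfo>
          <citation>
            <citeinfo>
              <title><xsl:value-of select="field[@name='dataset_title']"/></title>
            </citeinfo>
          </citation>
          <descript>
            <abstract><xsl:value-of select="field[@name='abstract']"/></abstract>
          </descript>
        </idinfo>
      </metadata>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

Keeping the mapping in a stylesheet rather than in application code means the crosswalk can be revised when the archive's metadata requirements change, without touching the database or the submission scripts.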
NASA Astrophysics Data System (ADS)
Aloisio, Giovanni; Fiore, Sandro; Negro, A.
2010-05-01
The CMCC Data Distribution Centre (DDC) is the primary entry point (web gateway) to the CMCC. It is a Data Grid Portal providing a ubiquitous and pervasive way to ease data publishing, climate metadata search, dataset discovery, metadata annotation, data access, data aggregation, sub-setting, etc. The grid portal security model includes the use of the HTTPS protocol for secure communication with the client (based on X509v3 certificates that must be loaded into the browser) and secure cookies to establish and maintain user sessions. The CMCC DDC is now in a pre-production phase and is currently used only by internal users (CMCC researchers and climate scientists). The most important component already available in the CMCC DDC is the Search Engine, which allows users to perform, through web interfaces, distributed search and discovery activities by introducing one or more of the following search criteria: horizontal extent (which can be specified by interacting with a geographic map), vertical extent, temporal extent, keywords, topics, creation date, etc. By means of this page the user submits the first step of the query process on the metadata DB; then, she can choose one or more datasets, retrieving and displaying the complete XML metadata description (from the browser). This way, the second step of the query process is carried out by accessing a specific XML document of the metadata DB. Finally, through the web interface, the user can access and download (partially or totally) the data stored on the storage device, through OPeNDAP servers and other available grid storage interfaces. Requests concerning datasets stored in deep storage will be served asynchronously.
In-field Access to Geoscientific Metadata through GPS-enabled Mobile Phones
NASA Astrophysics Data System (ADS)
Hobona, Gobe; Jackson, Mike; Jordan, Colm; Butchart, Ben
2010-05-01
Fieldwork is an integral part of much geosciences research. But whilst geoscientists have physical or online access to data collections in the laboratory or at base stations, equivalent in-field access is neither standard nor straightforward. The increasing availability of mobile internet and GPS-supported mobile phones, however, now provides the basis for addressing this issue. The SPACER project was commissioned by the Rapid Innovation initiative of the UK Joint Information Systems Committee (JISC) to explore the potential for GPS-enabled mobile phones to access geoscientific metadata collections. Metadata collections within the geosciences and the wider geospatial domain can be disseminated through web services based on the Catalogue Service for the Web (CSW) standard of the Open Geospatial Consortium (OGC), a global grouping of over 380 private, public and academic organisations aiming to improve interoperability between geospatial technologies. CSW offers an XML-over-HTTP interface for querying and retrieval of geospatial metadata. By default, the metadata returned by CSW is based on the ISO 19115 standard and encoded in XML conformant to ISO 19139. The SPACER project has created a prototype application that enables mobile phones to send CSW queries containing user-defined keywords and coordinates acquired from the GPS devices built into the phones. The prototype has been developed using the free and open-source Google Android platform. The mobile application offers views for listing titles, presenting multiple metadata elements, and a Google Map with an overlay of the bounding coordinates of datasets. The presentation will describe the architecture and approach applied in the development of the prototype.
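A sketch of the kind of keyword-plus-bounding-box CSW query the SPACER client issues, written here with the OWSLib Python library. The catalogue endpoint is a placeholder, and the keyword and coordinates stand in for user input and a GPS fix; verify the constraint semantics against the OWSLib documentation.

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import BBox, PropertyIsLike

csw = CatalogueServiceWeb("http://example.org/csw")   # hypothetical CSW endpoint

# A user-supplied keyword plus a bounding box around the phone's GPS position.
keyword = PropertyIsLike("csw:AnyText", "%borehole%")
near_me = BBox([52.9, -1.3, 53.0, -1.1])              # illustrative minx/miny/maxx/maxy

# A nested list combines the two constraints with logical AND.
csw.getrecords2(constraints=[[keyword, near_me]], maxrecords=10)
for record in csw.records.values():
    print(record.title)                               # from the ISO 19115/19139 metadata
```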
Barbosa-Silva, A; Pafilis, E; Ortega, J M; Schneider, R
2007-12-11
Data integration has become an important task for biological database providers. The current model for data exchange among different sources simplifies the manner in which users access distinct information. The evolution of data representation from HTML to XML enabled programs, instead of humans, to interact with biological databases. We present here SRS.php, a PHP library that can interact with the Sequence Retrieval System (SRS) data integration platform. The library has been written using SOAP definitions, and permits programmatic communication with SRS through web services. The interactions are made possible by invoking the methods described in the WSDL and exchanging XML messages. The functions currently available in the library have been built to access specific data stored in any of 90 different databases (such as UNIPROT, KEGG and GO) using the same query syntax. Including the described functions in PHP scripts enables them to act as web service clients to the SRS server. The functions permit one to query the whole content of any SRS database, to list specific records in these databases, to get specific fields from the records, and to link any record between any pair of linked databases. The case study presented exemplifies the library's usage to retrieve information from the records of a Plant Defense Mechanisms database. The Plant Defense Mechanisms database is currently being developed, and the intent is for the SRS.php library to supply data acquisition for the warehousing tasks related to its setup and maintenance.
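The same pattern can be sketched in Python with the zeep SOAP library: a client generated from the service's WSDL, after which operations are invoked as ordinary methods. The WSDL URL, operation name and query syntax below are hypothetical stand-ins for the actual SRS service definitions.

```python
from zeep import Client

client = Client("http://example.org/srs/srs.wsdl")    # hypothetical SRS WSDL

# One query syntax regardless of which of the ~90 databases is targeted;
# both the operation name and its parameters are illustrative assumptions.
result = client.service.getEntries(query="[uniprot-Description:defensin*]",
                                   fields="id,description")
print(result)
```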
2017-02-01
Front matter of a U.S. Army Research Laboratory (ARL) report on a schema in the Global Force Management Data Initiative, by Frederick S Brundick, Computing and Information Sciences Directorate, ARL; approved for public release, distribution unlimited. Acronyms expanded in the excerpt include ER (entity relationship diagram), EwID (Enterprise-wide Identifier), FMID (Force Management Identifier), GFM (Global Force Management) and HTML (Hypertext Markup Language).
RTML: remote telescope markup language and you
NASA Astrophysics Data System (ADS)
Hessman, F. V.
2001-12-01
In order to coordinate the use of robotic and remotely operated telescopes in networks -- like Göttingen's MOnitoring NEtwork of Telescopes (MONET) -- a standard format for the exchange of observing requests and reports is needed. I describe the benefits of Remote Telescope Markup Language (RTML), an XML-based protocol originally developed by the Hands-On Universe Project, which is being used and further developed by several robotic telescope projects and firms.
SBRML: a markup language for associating systems biology data with models.
Dada, Joseph O; Spasić, Irena; Paton, Norman W; Mendes, Pedro
2010-04-01
Research in systems biology is carried out through a combination of experiments and models. Several data standards have been adopted for representing models (Systems Biology Markup Language) and various types of relevant experimental data (such as FuGE and those of the Proteomics Standards Initiative). However, until now, there has been no standard way to associate a model and its entities to the corresponding datasets, or vice versa. Such a standard would provide a means to represent computational simulation results as well as to frame experimental data in the context of a particular model. Target applications include model-driven data analysis, parameter estimation, and sharing and archiving model simulations. We propose the Systems Biology Results Markup Language (SBRML), an XML-based language that associates a model with several datasets. Each dataset is represented as a series of values associated with model variables, and their corresponding parameter values. SBRML provides a flexible way of indexing the results to model parameter values, which supports both spreadsheet-like data and multidimensional data cubes. We present and discuss several examples of SBRML usage in applications such as enzyme kinetics, microarray gene expression and various types of simulation results. The XML Schema file for SBRML is available at http://www.comp-sys-bio.org/SBRML under the Academic Free License (AFL) v3.0.
SP2Bench: A SPARQL Performance Benchmark
NASA Astrophysics Data System (ADS)
Schmidt, Michael; Hornung, Thomas; Meier, Michael; Pinkel, Christoph; Lausen, Georg
A meaningful analysis and comparison of both existing storage schemes for RDF data and evaluation approaches for SPARQL queries necessitates a comprehensive and universal benchmark platform. We present SP2Bench, a publicly available, language-specific performance benchmark for the SPARQL query language. SP2Bench is set in the DBLP scenario and comprises a data generator for creating arbitrarily large DBLP-like documents and a set of carefully designed benchmark queries. The generated documents mirror key characteristics and social-world distributions encountered in the original DBLP data set, while the queries implement meaningful requests on top of this data, covering a variety of SPARQL operator constellations and RDF access patterns. In this chapter, we discuss requirements and desiderata for SPARQL benchmarks and present the SP2Bench framework, including its data generator, benchmark queries and performance metrics.
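For flavour, the query below is the kind of DBLP-style request such a benchmark exercises, run here with rdflib over a locally generated document. It is not one of the official SP2Bench queries, and the file name and the exact vocabulary of the generator output are assumptions.

```python
from rdflib import Graph

g = Graph()
g.parse("sp2bench_50k.n3", format="n3")     # hypothetical generator output

query = """
PREFIX dc:      <http://purl.org/dc/elements/1.1/>
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?article ?title WHERE {
  ?article dc:title ?title ;
           dcterms:issued ?year .
  FILTER (STR(?year) = "1990")
}
"""
for row in g.query(query):
    print(row.article, row.title)
```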
Chemical Markup, XML and the World-Wide Web. 8. Polymer Markup Language.
Adams, Nico; Winter, Jerry; Murray-Rust, Peter; Rzepa, Henry S
2008-11-01
Polymers are among the most important classes of materials but are only inadequately supported by modern informatics. The paper discusses the reasons why polymer informatics is considerably more challenging than small molecule informatics and develops a vision for the computer-aided design of polymers, based on modern semantic web technologies. The paper then discusses the development of Polymer Markup Language (PML). PML is an extensible language, designed to support the (structural) representation of polymers and polymer-related information. PML closely interoperates with Chemical Markup Language (CML) and overcomes a number of the previously identified challenges.
Activate/Inhibit KGCS Gateway via Master Console EIC Pad-B Display
NASA Technical Reports Server (NTRS)
Ferreira, Pedro Henrique
2014-01-01
My internship consisted of two major projects for the Launch Control System. The purpose of the first project was to implement Application Control Language (ACL) scripts to Activate Data Acquisition (ADA) and Inhibit Data Acquisition (IDA) on the Kennedy Ground Control Sub-Systems (KGCS) Gateway, to update the existing Pad-B End Item Control (EIC) Display to program the ADA and IDA buttons with the new ACL, and to test and release the ACL Display. The second project consisted of unit testing all of the Application Services Framework (ASF) by March 21st. The XmlFileReader was unit tested and reached 100% coverage. The XmlFileReader class is used to read information from XML files and use it to initialize elements in the other framework components, using the Xerces C++ XML Parser, which is open-source, commercial off-the-shelf software. The ScriptThread was also tested; ScriptThread manages the creation and activation of script threads. Much of the time was spent initializing the environment, learning how to set up unit tests, and getting familiar with the specific segments of the project that were assigned to us.
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Liu, I-Hsiung
1985-01-01
This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled Natural Language Query System Design for Interactive Information Storage and Retrieval Systems, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-17.
The Effectiveness of Stemming for Natural-Language Access to Slovene Textual Data.
ERIC Educational Resources Information Center
Popovic, Mirko; Willett, Peter
1992-01-01
Reports on the use of stemming for Slovene language documents and queries in free-text retrieval systems and demonstrates that an appropriate stemming algorithm results in an increase in retrieval effectiveness when compared with nonstemming processing. A comparison is made with stemming of English versions of the same documents and queries. (24…
ERIC Educational Resources Information Center
Piyayodilokchai, Hongsiri; Panjaburee, Patcharin; Laosinchai, Parames; Ketpichainarong, Watcharee; Ruenwongsa, Pintip
2013-01-01
With the benefit of multimedia and the learning cycle approach in promoting effective active learning, this paper proposed a learning cycle approach-based, multimedia-supplemented instructional unit for Structured Query Language (SQL) for second-year undergraduate students with the aim of enhancing their basic knowledge of SQL and ability to apply…
NASA Astrophysics Data System (ADS)
Curland, Matthew; Halpin, Terry; Stirewalt, Kurt
A conceptual schema of an information system specifies the fact structures of interest as well as related business rules that are either constraints or derivation rules. Constraints restrict the possible or permitted states or state transitions, while derivation rules enable some facts to be derived from others. Graphical languages are commonly used to specify conceptual schemas, but often need to be supplemented by more expressive textual languages to capture additional business rules, as well as conceptual queries that enable conceptual models to be queried directly. This paper describes research to provide a role calculus to underpin textual languages for Object-Role Modeling (ORM), to enable business rules and queries to be formulated in a language intelligible to business users. The role-based nature of this calculus, which exploits the attribute-free nature of ORM, appears to offer significant advantages over other proposed approaches, especially in the area of semantic stability.
Improving Interoperability by Incorporating UnitsML Into Markup Languages
Celebi, Ismet; Dragoset, Robert A.; Olsen, Karen J.; Schaefer, Reinhold; Kramer, Gary W.
2010-01-01
Maintaining the integrity of analytical data over time is a challenge. Years ago, data were recorded on paper that was pasted directly into a laboratory notebook. The digital age has made maintaining the integrity of data harder. Nowadays, digitized analytical data are often separated from information about how the sample was collected and prepared for analysis and how the data were acquired. The data are stored on digital media, while the related information about the data may be written in a paper notebook or stored separately in other digital files. Sometimes the connection between this “scientific meta-data” and the analytical data is lost, rendering the spectrum or chromatogram useless. We have been working with ASTM Subcommittee E13.15 on Analytical Data to create the Analytical Information Markup Language or AnIML—a new way to interchange and store spectroscopy and chromatography data based on XML (Extensible Markup Language). XML is a language for describing what data are by enclosing them in computer-useable tags. Recording the units associated with the analytical data and metadata is an essential issue for any data representation scheme that must be addressed by all domain-specific markup languages. As scientific markup languages proliferate, it is very desirable to have a single scheme for handling units to facilitate moving information between different data domains. At NIST, we have been developing a general markup language just for units that we call UnitsML. This presentation will describe how UnitsML is used and how it is being incorporated into AnIML. PMID:27134778
The inclusion of an online journal in PubMed central - a difficult path.
Grech, Victor
2016-01-01
The indexing of a journal in a prominent database (such as PubMed) is an important imprimatur. Journals accepted for inclusion in PubMed Central (PMC) are automatically indexed in PubMed but must provide the entire contents of their publications as XML-tagged (Extensible Markup Language) data files compliant with PubMed's document type definition (DTD). This paper describes the various attempts that the journal Images in Paediatric Cardiology made in its efforts to convert the journal contents (including all of the extant backlog) to PMC-compliant XML for archiving and indexing in PubMed after the journal was accepted for inclusion by the database.
Searching for cancer information on the internet: analyzing natural language search queries.
Bader, Judith L; Theofanos, Mary Frances
2003-12-11
Searching for health information is one of the most common tasks performed by Internet users. Many users begin searching on popular search engines rather than on prominent health information sites. We know that many visitors to our (National Cancer Institute) Web site, cancer.gov, arrive via links in search engine results. To learn more about the specific needs of our general-public users, we wanted to understand what lay users really wanted to know about cancer, how they phrased their questions, and how much detail they used. The National Cancer Institute partnered with AskJeeves, Inc to develop a methodology to capture, sample, and analyze 3 months of cancer-related queries on the Ask.com Web site, a prominent United States consumer search engine, which receives over 35 million queries per week. Using a benchmark set of 500 terms and word roots supplied by the National Cancer Institute, AskJeeves identified a test sample of cancer queries for 1 week in August 2001. Of these 500 terms, only 37 appeared ≥5 times/day over the trial test week, in 17,208 queries. Using these 37 terms, 204,165 instances of cancer queries were found in the Ask.com query logs for the actual test period of June-August 2001. Of these, 7,500 individual user questions were randomly selected for detailed analysis and assigned to appropriate categories. The exact language of sample queries is presented. Accounting for multiples of the same questions, the sample of 7,500 individual user queries represented 76,077 queries (37% of the total 3-month pool). Overall, 78.37% of sampled cancer queries asked about 14 specific cancer types. Within each cancer type, queries were sorted into appropriate subcategories including at least the following: General Information, Symptoms, Diagnosis and Testing, Treatment, Statistics, Definition, and Cause/Risk/Link. The most common specific cancer types mentioned in queries were Digestive/Gastrointestinal/Bowel (15.0%), Breast (11.7%), Skin (11.3%), and Genitourinary (10.5%). Additional subcategories of queries about specific cancer types varied, depending on user input. Queries that were not specific to a cancer type were also tracked and categorized. Natural-language searching affords users the opportunity to fully express their information needs and can aid users naïve to the content and vocabulary. The specific queries analyzed for this study reflect news and research studies reported during the study dates and would surely change with different study dates. Analyzing queries from search engines represents one way of knowing what kinds of content to provide to users of a given Web site. Users ask questions using whole sentences and keywords, often misspelling words. Providing the option for natural-language searching does not obviate the need for good information architecture, usability engineering, and user testing in order to optimize user experience.
Using XML and Java for Astronomical Instrument Control
NASA Astrophysics Data System (ADS)
Koons, L.; Ames, T.; Evans, R.; Warsaw, C.; Sall, K.
1999-12-01
Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests. Instrument description is too tightly coupled with details of implementation. NASA/Goddard Space Flight Center and AppNet, Inc. are developing a very general and highly extensible framework that applies to virtually any kind of instrument that can be controlled by a computer (e.g., telescopes, microscopes and printers). A key aspect of the object-oriented architecture, implemented in Java, involves software that is driven by an instrument description. The Astronomical Instrument Markup Language (AIML) is a domain-specific implementation of the more generalized Instrument Markup Language (IML). The software architecture combines the platform-independent processing capabilities of Java with the vendor-independent data description syntax of Extensible Markup Language (XML), a human-readable and machine-understandable way to describe structured data. IML is used to describe command sets (including parameters, datatypes, and constraints) and their associated formats, telemetry, and communication mechanisms. The software uses this description to present graphical user interfaces to control and monitor the instrument. Recent efforts have extended to command procedures (scripting) and representation of data pipeline inputs, outputs, and connections. Near future efforts are likely to include an XML description of data visualizations, as well as the potential use of XSL (Extensible Stylesheet Language) to permit astronomers to customize the user interface on several levels: per user, instrument, subsystem, or observatory-wide. Our initial prototyping effort was targeted for HAWC (High-resolution Airborne Wideband Camera), a first-light instrument of SOFIA (the Stratospheric Observatory for Infrared Astronomy). A production-level application of this technology is for one of the three candidate detectors of SPIRE (Spectral and Photometric Imaging REceiver), a focal plane instrument proposed for the European Space Agency's Far Infrared Space Telescope. The detectors are being developed by the Infrared Astrophysics Branch of NASA/GSFC.
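The core idea (a generic engine configured by an instrument description rather than hard-coded commands) can be illustrated with a toy parser. The markup below is invented for illustration and is not actual AIML; a real description would also carry telemetry and communication details.

```python
import xml.etree.ElementTree as ET

description = """
<instrument name="camera">
  <command name="setExposure" format="EXP %d">
    <parameter name="milliseconds" datatype="int" min="1" max="10000"/>
  </command>
  <command name="readFrame" format="READ"/>
</instrument>
"""

root = ET.fromstring(description)
for cmd in root.iter("command"):
    params = [(p.get("name"), p.get("datatype"), p.get("min"), p.get("max"))
              for p in cmd.iter("parameter")]
    # A GUI generator would build one widget per parameter from this list,
    # enforcing datatype and range constraints taken from the description.
    print(cmd.get("name"), cmd.get("format"), params)
```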
Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations
NASA Astrophysics Data System (ADS)
Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.
2006-12-01
Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via the XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we showed that it is possible to extract relationships from ESML-based representations and capture the results in the Resource Description Framework (RDF). Here we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.
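A minimal sketch of the external-annotation pattern: an RDF statement whose subject is an XPointer into an unmodified ESML document. The document URL, XPointer expression and annotation vocabulary are hypothetical; only the pattern (annotate by pointing, not by editing) is the point.

```python
from rdflib import Graph, Literal, Namespace, URIRef

ANN = Namespace("http://example.org/annotation#")

# Point into the ESML document without touching it.
target = URIRef("http://example.org/ggp.esml#xpointer(//segment[@id='sg7'])")

g = Graph()
g.bind("ann", ANN)
g.add((target, ANN.feature, Literal("data gap: instrument servicing")))
print(g.serialize(format="turtle"))
```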
The application of geography markup language (GML) to the geological sciences
NASA Astrophysics Data System (ADS)
Lake, Ron
2005-11-01
GML 3.0 became an adopted specification of the Open Geospatial Consortium (OGC) in January 2003, and is rapidly emerging as the world standard for the encoding, transport and storage of all forms of geographic information. This paper looks at the application of GML to one of the more challenging areas of automated geography, namely the geological sciences. Specific features of GML of interest to geologists are discussed and then illustrated through a series of geological case studies. We conclude the paper with a discussion of anticipated geological web services that GML will enable. GML is written in XML and makes use of XML Schema for extensibility. It can be used both to represent or model geographic objects and to transport them across the Internet. In this way it serves as the foundation for all manner of geographic web services. Unlike vertical application grammars such as LandXML, GML was intended to define geographic application languages, and hence is applicable to any geographic domain including forestry, environmental sciences, geology and oceanography. This paper provides a review of the basic features of GML that are fundamental to the geological sciences including geometry, coverages, observations, reference systems and temporality. These constructs are then employed in a series of simple geological case studies including structural geological description, surficial geology, representation of geological time scales, mineral occurrences, geohazards and geochemical reconnaissance.
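As a concrete flavour of the constructs just listed, here is a minimal GML 3 point geometry wrapped in an application-schema feature, parsed with lxml simply to show that it is ordinary, namespace-qualified XML. The MineralOccurrence wrapper is an invented application-schema element, not part of GML itself.

```python
from lxml import etree

GML = "http://www.opengis.net/gml"
fragment = b"""
<MineralOccurrence xmlns:gml="http://www.opengis.net/gml">
  <location>
    <gml:Point srsName="urn:ogc:def:crs:EPSG::4326">
      <gml:pos>61.2 -149.9</gml:pos>
    </gml:Point>
  </location>
</MineralOccurrence>
"""
doc = etree.fromstring(fragment)
print("coordinates:", doc.findtext(f".//{{{GML}}}pos"))
```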
Ramzan, Asia; Wang, Hai; Buckingham, Christopher
2014-01-01
Clinical decision support systems (CDSSs) often base their knowledge and advice on human expertise. Knowledge representation needs to be in a format that can be easily understood by human users as well as supporting ongoing knowledge engineering, including evolution and consistency of knowledge. This paper reports on the development of an ontology specification for managing knowledge engineering in a CDSS for assessing and managing risks associated with mental-health problems. The Galatean Risk and Safety Tool, GRiST, represents mental-health expertise in the form of a psychological model of classification. The hierarchical structure was directly represented in the machine using an XML document. Functionality of the model and knowledge management were controlled using attributes in the XML nodes, with an accompanying paper manual for specifying how end-user tools should behave when interfacing with the XML. This paper explains the advantages of using the web-ontology language, OWL, as the specification, details some of the issues and problems encountered in translating the psychological model to OWL, and shows how OWL benefits knowledge engineering. The conclusions are that OWL can have an important role in managing complex knowledge domains for systems based on human expertise without impeding the end-users' understanding of the knowledge base. The generic classification model underpinning GRiST makes it applicable to many decision domains and the accompanying OWL specification facilitates its implementation.
Saying What You're Looking For: Linguistics Meets Video Search.
Barrett, Daniel Paul; Barbu, Andrei; Siddharth, N; Siskind, Jeffrey Mark
2016-10-01
We present an approach to searching large video corpora for clips which depict a natural-language query in the form of a sentence. Compositional semantics is used to encode subtle meaning differences lost in other approaches, such as the difference between two sentences which have identical words but entirely different meaning: The person rode the horse versus The horse rode the person. Given a sentential query and a natural-language parser, we produce a score indicating how well a video clip depicts that sentence for each clip in a corpus and return a ranked list of clips. Two fundamental problems are addressed simultaneously: detecting and tracking objects, and recognizing whether those tracks depict the query. Because both tracking and object detection are unreliable, our approach uses the sentential query to focus the tracker on the relevant participants and ensures that the resulting tracks are described by the sentential query. While most earlier work was limited to single-word queries which correspond to either verbs or nouns, we search for complex queries which contain multiple phrases, such as prepositional phrases, and modifiers, such as adverbs. We demonstrate this approach by searching for 2,627 naturally elicited sentential queries in 10 Hollywood movies.
vSPARQL: A View Definition Language for the Semantic Web
Shaw, Marianne; Detwiler, Landon T.; Noy, Natalya; Brinkley, James; Suciu, Dan
2010-01-01
Translational medicine applications would like to leverage the biological and biomedical ontologies, vocabularies, and data sets available on the semantic web. We present a general solution for RDF information set reuse inspired by database views. Our view definition language, vSPARQL, allows applications to specify the exact content that they are interested in and how that content should be restructured or modified. Applications can access relevant content by querying against these view definitions. We evaluate the expressivity of our approach by defining views for practical use cases and comparing our view definition language to existing query languages. PMID:20800106
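vSPARQL builds on SPARQL's CONSTRUCT form. The sketch below shows the underlying idea in plain SPARQL with rdflib: a query that extracts and restructures a subset of a source ontology into a new graph that applications then query as their "view". The namespaces and file name are placeholders, and this is standard SPARQL, not vSPARQL's extended syntax.

```python
from rdflib import Graph

source = Graph()
source.parse("anatomy_subset.ttl", format="turtle")   # hypothetical ontology extract

view_def = """
PREFIX src:  <http://example.org/anatomy#>
PREFIX view: <http://example.org/view#>
CONSTRUCT { ?part view:containedIn src:Heart . }
WHERE     { src:Heart src:hasPart ?part . }
"""
view = source.query(view_def).graph    # the materialized "view" graph
for s, p, o in view:
    print(s, p, o)
```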
Concepts and implementations of natural language query systems
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Liu, I-Hsiung
1984-01-01
The currently developed user language interfaces of information systems are generally intended for serious users. These interfaces commonly ignore what is potentially the largest user group, i.e., casual users. This project discusses the concepts and implementations of a natural query language system that satisfies the nature and information needs of casual users by allowing them to communicate with the system in their native (natural) language. In addition, a framework for the development of such an interface is introduced for the MADAM (Multics Approach to Data Access and Management) system at the University of Southwestern Louisiana.
Updates to the Virtual Atomic and Molecular Data Centre
NASA Astrophysics Data System (ADS)
Hill, Christian; Tennyson, Jonathan; Gordon, Iouli E.; Rothman, Laurence S.; Dubernet, Marie-Lise
2014-06-01
The Virtual Atomic and Molecular Data Centre (VAMDC) has established a set of standards for the storage and transmission of atomic and molecular data and an SQL-based query language (VSS2) for searching online databases, known as nodes. The project has also created an online service, the VAMDC Portal, through which all of these databases may be searched and their results compared and aggregated. Since its inception four years ago, the VAMDC e-infrastructure has grown to encompass over 40 databases, including HITRAN, in more than 20 countries, and engages actively with scientists on six continents. Associated with the portal is a growing suite of software tools for the transformation of data from its native XML-based XSAMS format to a range of more convenient human-readable (such as HTML) and machine-readable (such as CSV) formats. The relational database for HITRAN, created as part of the VAMDC project, is a flexible and extensible data model able to represent a wider range of parameters than the current fixed-format text-based one. Over the next year, a new online interface to this database will be tested, released and fully documented; this web application, HITRANonline, will fully replace the ageing and incomplete JavaHAWKS software suite.
The Proteins API: accessing key integrated protein and genome information.
Nightingale, Andrew; Antunes, Ricardo; Alpi, Emanuele; Bursteinas, Borisas; Gonzales, Leonardo; Liu, Wudong; Luo, Jie; Qi, Guoying; Turner, Edd; Martin, Maria
2017-07-03
The Proteins API provides searching and programmatic access to protein and associated genomics data, such as curated protein sequence positional annotations from UniProtKB, as well as mapped variation and proteomics data from large-scale data sources (LSS). Using the coordinates service, researchers are able to retrieve the genomic sequence coordinates for proteins in UniProtKB. Notably, the LSS genomics and proteomics data for UniProt proteins are available programmatically only through this service. A Swagger UI has been implemented to provide documentation and an interface through which users with little or no programming experience can 'talk' to the services, quickly and easily formulate queries, and obtain dynamically generated source code for popular programming languages such as Java, Perl, Python and Ruby. Search results are returned as standard JSON, XML or GFF data objects. The Proteins API is a scalable, reliable, fast, easy-to-use set of RESTful services that provides a broad protein information resource, letting users ask questions based on their field of expertise and gain an integrated overview of the protein annotations available, to aid their understanding of proteins in biological processes. The Proteins API is available at (http://www.ebi.ac.uk/proteins/api/doc). © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
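A minimal REST client for the coordinates service, using Python's requests. The endpoint path and the accession/taxid parameter names follow the Swagger documentation as best recalled here, so treat them as assumptions to verify against http://www.ebi.ac.uk/proteins/api/doc.

```python
import requests

resp = requests.get(
    "https://www.ebi.ac.uk/proteins/api/coordinates",  # coordinates service
    params={"accession": "P04637", "taxid": 9606},     # assumed parameter names
    headers={"Accept": "application/json"},            # XML and GFF also offered
    timeout=30,
)
resp.raise_for_status()
for record in resp.json():
    print(record.get("accession"))
```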
Safari, Leila; Patrick, Jon D
2018-06-01
This paper reports on a generic framework that gives clinicians the ability to conduct complex analyses on elaborate research topics, using cascaded queries to resolve internal time-event dependencies in the research questions, as an extension to the proposed Clinical Data Analytics Language (CliniDAL). A cascaded query model is proposed to resolve internal time-event dependencies in queries that can have up to five levels of criteria, starting with a query to define the subjects to be admitted into a study, followed by a query to define the time span of the experiment. Three more cascaded queries can be required to define control groups, control variables and output variables, which together simulate a real scientific experiment. Depending on the complexity of the research question, the cascaded query model has the flexibility to merge some lower-level queries for simple research questions or to add a nested query at each level to compose more complex queries. Three different scenarios (one of which contains two studies) are described and used to evaluate the proposed solution. CliniDAL's complex-analysis solution enables answering complex queries with time-event dependencies in at most a few hours, a task that would take many days manually. An evaluation of the results of the research studies, based on a comparison between the CliniDAL and SQL solutions, reveals the high usability and efficiency of CliniDAL's solution. Copyright © 2018 Elsevier Inc. All rights reserved.
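A rough relational analogue of the cascade, written as a single SQL string over an entirely hypothetical clinical schema. CliniDAL compiles its queries against the target clinical information system, so this is only a mental model of how the levels feed one another.

```python
# Levels 1-2 feed levels 3-5, mirroring the cascaded-criteria structure.
study_query = """
WITH subjects AS (                         -- level 1: admission criteria
    SELECT patient_id FROM diagnoses WHERE code = 'I50'
), spans AS (                              -- level 2: time span of the experiment
    SELECT patient_id, admit_time, discharge_time
    FROM admissions
    WHERE patient_id IN (SELECT patient_id FROM subjects)
)
SELECT s.patient_id, m.value               -- levels 3-5: controls and output variables
FROM spans s
JOIN measurements m
  ON m.patient_id = s.patient_id
 AND m.taken_at BETWEEN s.admit_time AND s.discharge_time
WHERE m.kind = 'creatinine';
"""
```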
ERIC Educational Resources Information Center
Peterson, Gabriel M.; Su, Kuichun; Ries, James E.; Sievert, Mary Ellen C.
2002-01-01
Discussion of Internet use for information searches on health-related topics focuses on a study that examined complexity and variability of natural language in using search terms that express the concept of electronic health (e-health). Highlights include precision of retrieved information; shift in terminology; and queries using the Pub Med…
NASA Astrophysics Data System (ADS)
Lam, Ho-Pun; Governatori, Guido
We present the design and implementation of SPINdle, an open-source Java-based defeasible logic reasoner capable of performing efficient and scalable reasoning on defeasible logic theories (including theories with over 1 million rules). The implementation covers both the standard and modal extensions to defeasible logic. It can be used as a standalone theory prover and can be embedded into any application as a defeasible logic rule engine. It allows users or agents to issue queries on a given knowledge base, or on a theory generated on the fly by other applications, and automatically produces the conclusions that follow as consequences. Theories can also be represented using XML.
ERIC Educational Resources Information Center
Bosc, P.; Lietard, L.; Pivert, O.
2003-01-01
Considers flexible querying of relational databases. Highlights include SQL languages and basic aggregate operators; Sugeno's fuzzy integral; evaluation examples; and how and under what conditions other aggregate functions could be applied to fuzzy sets in a flexible query. (Author/LRW)
XML-Based SHINE Knowledge Base Interchange Language
NASA Technical Reports Server (NTRS)
James, Mark; Mackey, Ryan; Tikidjian, Raffi
2008-01-01
The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.
Simulation Experiment Description Markup Language (SED-ML) Level 1 Version 2.
Bergmann, Frank T; Cooper, Jonathan; Le Novère, Nicolas; Nickerson, David; Waltemath, Dagmar
2015-09-04
The number, size and complexity of computational models of biological systems are growing at an ever increasing pace. It is imperative to build on existing studies by reusing and adapting existing models and parts thereof. The description of the structure of models is not sufficient to enable the reproduction of simulation results. One also needs to describe the procedures the models are subjected to, as recommended by the Minimum Information About a Simulation Experiment (MIASE) guidelines. This document presents Level 1 Version 2 of the Simulation Experiment Description Markup Language (SED-ML), a computer-readable format for encoding simulation and analysis experiments to apply to computational models. SED-ML files are encoded in the Extensible Markup Language (XML) and can be used in conjunction with any XML-based model encoding format, such as CellML or SBML. A SED-ML file includes details of which models to use, how to modify them prior to executing a simulation, which simulation and analysis procedures to apply, which results to extract and how to present them. Level 1 Version 2 extends the format by allowing the encoding of repeated and chained procedures.
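A skeletal SED-ML document, checked for well-formedness with lxml. The element names follow the Level 1 Version 2 specification as summarized above, but the fragment is illustrative only: a valid experiment description needs more (notably the simulation algorithm, data generators and outputs), and the model source file is hypothetical.

```python
from lxml import etree

sedml = b"""<?xml version="1.0" encoding="UTF-8"?>
<sedML xmlns="http://sed-ml.org/sed-ml/level1/version2" level="1" version="2">
  <listOfModels>
    <model id="m1" language="urn:sedml:language:sbml" source="oscillator.xml"/>
  </listOfModels>
  <listOfSimulations>
    <uniformTimeCourse id="sim1" initialTime="0" outputStartTime="0"
                       outputEndTime="100" numberOfPoints="1000"/>
  </listOfSimulations>
  <listOfTasks>
    <task id="t1" modelReference="m1" simulationReference="sim1"/>
  </listOfTasks>
</sedML>
"""
etree.fromstring(sedml)   # raises XMLSyntaxError if the document is malformed
print("well-formed SED-ML skeleton")
```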
PyPDB: a Python API for the Protein Data Bank.
Gilpin, William
2016-01-01
We have created a Python programming interface for the RCSB Protein Data Bank (PDB) that allows search and data retrieval for a wide range of result types, including BLAST and sequence motif queries. The API relies on the existing XML-based API and operates by creating custom XML requests from native Python types, allowing extensibility and straightforward modification. The package has the ability to perform many types of advanced search of the PDB that are otherwise only available through the PDB website. PyPDB is implemented exclusively in Python 3 using standard libraries for maximal compatibility. The most up-to-date version, including iPython notebooks containing usage tutorials, is available free-of-charge under an open-source MIT license via GitHub at https://github.com/williamgilpin/pypdb, and the full API reference is at http://williamgilpin.github.io/pypdb_docs/html/. The latest stable release is also available on PyPI. wgilpin@stanford.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
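A usage sketch in the spirit of the PyPDB tutorials. The function names below match the package as recalled here, but the API has changed across releases, so check the current GitHub documentation before relying on them.

```python
import pypdb

# Keyword search returning a list of PDB IDs (name per the PyPDB tutorials).
hits = pypdb.Query("crispr").search()
print(hits[:5])

# Fetch one structure file; the filetype keyword is an assumption to verify.
pdb_text = pypdb.get_pdb_file(hits[0], filetype="pdb")
print(pdb_text[:200])
```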
Reppe, Linda Amundstuen; Spigset, Olav; Kampmann, Jens Peter; Damkier, Per; Christensen, Hanne Rolighed; Böttiger, Ylva; Schjøtt, Jan
2017-05-01
The aim of this study was to identify structure and language elements affecting the quality of responses from Scandinavian drug information centres (DICs). Six different fictitious drug-related queries were sent to each of seven Scandinavian DICs. The centres were blinded to which queries were part of the study. The responses were assessed qualitatively by six clinical pharmacologists (internal experts) and six general practitioners (GPs, external experts). In addition, linguistic aspects of the responses were evaluated by a plain-language expert. The quality of responses was generally judged as satisfactory to good. Presenting specific advice and conclusions was considered to improve the quality of the responses. However, small nuances in language formulations could affect the individual judgments of the experts, e.g. on whether or not advice was given. Some experts preferred the use of primary sources to the use of secondary and tertiary sources. Both internal and external experts criticised the use of abbreviations, professional terminology and study findings that were left unexplained. The plain-language expert emphasised the importance of defining and explaining pharmacological terms to ensure that enquirers understand the response as intended. In addition, more use of the active voice and a less compressed text structure would be desirable. This evaluation of responses to DIC queries gives some indication of how to improve written responses to drug-related queries with respect to language and text structure. Giving specific advice and precise conclusions, and avoiding overly compressed language and non-standard abbreviations, may help reach this goal.
Internet Patient Records: new techniques
Moehrs, Sascha; Anedda, Paolo; Tuveri, Massimiliano; Zanetti, Gianluigi
2001-01-01
Background The ease by which the Internet is able to distribute information to geographically-distant users on a wide variety of computers makes it an obvious candidate for a technological solution for electronic patient record systems. Indeed, second-generation Internet technologies such as the ones described in this article - XML (eXtensible Markup Language), XSL (eXtensible Style Language), DOM (Document Object Model), CSS (Cascading Style Sheet), JavaScript, and JavaBeans - may significantly reduce the complexity of the development of distributed healthcare systems. Objective The demonstration of an experimental Electronic Patient Record (EPR) system built from those technologies that can support viewing of medical imaging exams and graphically-rich clinical reporting tools, while conforming to the newly emerging XML standard for digital documents. In particular, we aim to promote rapid prototyping of new reports by clinical specialists. Methods We have built a prototype EPR client, InfoDOM, that runs in both the popular web browsers. In this second version it receives each EPR as an XML record served via the secure SSL (Secure Socket Layer) protocol. JavaBean software components manipulate the XML to store it and then to transform it into a variety of useful clinical views. First a web page summary for the patient is produced. From that web page other JavaBeans can be launched. In particular, we have developed a medical imaging exam Viewer and a clinical Reporter bean parameterized appropriately for the particular patient and exam in question. Both present particular views of the XML data. The Viewer reads image sequences from a patient-specified network URL on a PACS (Picture Archiving and Communications System) server and presents them in a user-controllable animated sequence, while the Reporter provides a configurable anatomical map of the site of the pathology, from which individual "reportlets" can be launched. The specification of these reportlets is achieved using standard HTML forms and thus may conceivably be authored by clinical specialists. A generic JavaScript library has been written that allows the seamless incorporation of such contributions into the InfoDOM client. In conjunction with another JavaBean, that library renders graphically-enhanced reporting tools that read and write content to and from the XML data-structure, ready for resubmission to the EPR server. Results We demonstrate the InfoDOM experimental EPR system that is currently being adapted for test-bed use in three hospitals in Cagliari, Italy. For this we are working with specialists in neurology, radiology, and epilepsy. Conclusions Early indications are that the rapid prototyping of reports afforded by our EPR system can assist communication between clinical specialists and our system developers. We are now experimenting with new technologies that may provide services to the kind of XML EPR client described here. PMID:11720950
The E-Book: Pipe Dream or Potential Disaster?
ERIC Educational Resources Information Center
Dorman, David
1999-01-01
Discusses the development of electronic books and considers marketing and distribution, rights management, and technical standards. Economic and institutional relationships, copyrights, Extensible Markup Language (XML), access to content, free access versus fees, preservation versus loss of control over long-term storage and access, and trusted…
Designing and Managing Your Digital Library.
ERIC Educational Resources Information Center
Guenther, Kim
2000-01-01
Discusses digital libraries and Web site design issues. Highlights include accessibility issues, including standards, markup languages like HTML and XML, and metadata; building virtual communities; the use of Web portals for customized delivery of information; quality assurance tools, including data mining; and determining user needs, including…
Accessing Electronic Theses: Progress?
ERIC Educational Resources Information Center
Tennant, Roy
2000-01-01
Describes various ways by which universities provide access to their electronic theses and dissertations (ETDs), discussing UMI (University Microfilms International), XML (eXtensible Markup Language), and other formats. Discusses key leaders--national and international--in the ETD effort. Outlines the two main methods for locating ETDs. Presents a…
owlcpp: a C++ library for working with OWL ontologies.
Levin, Mikhail K; Cowell, Lindsay G
2015-01-01
The increasing use of ontologies highlights the need for a library for working with ontologies that is efficient, accessible from various programming languages, and compatible with common computational platforms. We developed owlcpp, a library for storing and searching RDF triples, parsing RDF/XML documents, converting triples into OWL axioms, and reasoning. The library is written in ISO-compliant C++ to facilitate efficiency, portability, and accessibility from other programming languages. Internally, owlcpp uses the Raptor RDF Syntax library for parsing RDF/XML and the FaCT++ library for reasoning. The current version of owlcpp is supported under Linux, OSX, and Windows platforms and provides an API for Python. The results of our evaluation show that, compared to other commonly used libraries, owlcpp is significantly more efficient in terms of memory usage and searching RDF triple stores. owlcpp performs strict parsing and detects errors ignored by other libraries, thus reducing the possibility of incorrect semantic interpretation of ontologies. owlcpp is available at http://owl-cpp.sf.net/ under the Boost Software License, Version 1.0.
Sankar, Punnaivanam; Aghila, Gnanasekaran
2007-01-01
Mechanism models for primary organic reactions are developed, encoding the structural fragments undergoing substitution, addition, elimination and rearrangement. In the proposed models, every structural component of a mechanistic pathway is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of electron movements along with the other components (charges, partial charges, half-bonded species, lone-pair electrons, free radicals, reaction arrows, etc.) needed for a complete representation of a reaction mechanism. Reaction schemes described with the proposed methodology are rendered with a concise XML extension language that interoperates with the structure markup. A reaction scheme is visualized as 2D graphics in a browser by converting it into an SVG document, enabling the layouts conventionally used by chemists. Automatic representation of complex reaction-mechanism patterns is achieved by reusing knowledge in chemical ontologies and developing artificial-intelligence components in terms of axioms.
Agent-Based Computing Integration and Testing
2006-12-01
Fragments of a report on agent-based computing integration and testing. The recoverable excerpts concern the DAML Query Language (DQL): DQL never became a W3C Member Submission itself, but likely had some influence on the SPARQL Protocol And RDF Query Language (SPARQL) subsequently produced by the W3C Data Access Working Group (DAWG), as that working group also contained members from the DAML programme. Sponsored by the Defense Advanced Research Projects Agency, DARPA Order No. K536; approved for public release.
Using Web Ontology Language to Integrate Heterogeneous Databases in the Neurosciences
Lam, Hugo Y.K.; Marenco, Luis; Shepherd, Gordon M.; Miller, Perry L.; Cheung, Kei-Hoi
2006-01-01
Integrative neuroscience involves the integration and analysis of diverse types of neuroscience data involving many different experimental techniques. This data will increasingly be distributed across many heterogeneous databases that are web-accessible. Currently, these databases do not expose their schemas (database structures) and their contents to web applications/agents in a standardized, machine-friendly way. This limits database interoperation. To address this problem, we describe a pilot project that illustrates how neuroscience databases can be expressed using the Web Ontology Language, which is a semantically-rich ontological language, as a common data representation language to facilitate complex cross-database queries. In this pilot project, an existing tool called “D2RQ” was used to translate two neuroscience databases (NeuronDB and CoCoDat) into OWL, and the resulting OWL ontologies were then merged. An OWL-based reasoner (Racer) was then used to provide a sophisticated query language (nRQL) to perform integrated queries across the two databases based on the merged ontology. This pilot project is one step toward exploring the use of semantic web technologies in the neurosciences. PMID:17238384
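The pilot's pipeline (D2RQ translation, ontology merge, reasoner-backed query) in miniature with rdflib: two translated ontologies loaded into one graph and queried together. The file names and vocabulary are placeholders, and plain SPARQL stands in here for Racer's nRQL.

```python
from rdflib import Graph

merged = Graph()
merged.parse("neurondb.owl", format="xml")    # hypothetical D2RQ output for NeuronDB
merged.parse("cocodat.owl", format="xml")     # hypothetical D2RQ output for CoCoDat

query = """
PREFIX nd: <http://example.org/neurondb#>
PREFIX cd: <http://example.org/cocodat#>
SELECT ?neuron ?current WHERE {
  ?neuron  nd:hasReceptor ?receptor .
  ?current cd:recordedIn  ?neuron .
}
"""
for neuron, current in merged.query(query):
    print(neuron, current)
```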
33 CFR 160.210 - Methods for submitting an NOA.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Language (XML) formatted documents via web service; (iii) Electronic submission via Microsoft InfoPath... NVMC, United States Coast Guard, 408 Coast Guard Drive, Kearneysville, WV 25430, by: (1) Electronic submission via the electronic Notice of Arrival and Departure (eNOAD) and consisting of the following three...
33 CFR 160.210 - Methods for submitting an NOA.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Language (XML) formatted documents via web service; (iii) Electronic submission via Microsoft InfoPath... NVMC, United States Coast Guard, 408 Coast Guard Drive, Kearneysville, WV 25430, by: (1) Electronic submission via the electronic Notice of Arrival and Departure (eNOAD) and consisting of the following three...
33 CFR 160.210 - Methods for submitting an NOA.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Language (XML) formatted documents via web service; (iii) Electronic submission via Microsoft InfoPath... NVMC, United States Coast Guard, 408 Coast Guard Drive, Kearneysville, WV 25430, by: (1) Electronic submission via the electronic Notice of Arrival and Departure (eNOAD) and consisting of the following three...
Applying Data Mining Principles to Library Data Collection.
ERIC Educational Resources Information Center
Guenther, Kim
2000-01-01
Explains how libraries can use data mining techniques for more effective data collection. Highlights include three phases: data selection and acquisition; data preparation and processing, including a discussion of the use of XML (extensible markup language); and data interpretation and integration, including database management systems. (LRW)
vSPARQL: a view definition language for the semantic web.
Shaw, Marianne; Detwiler, Landon T; Noy, Natalya; Brinkley, James; Suciu, Dan
2011-02-01
Translational medicine applications would like to leverage the biological and biomedical ontologies, vocabularies, and data sets available on the semantic web. We present a general solution for RDF information set reuse inspired by database views. Our view definition language, vSPARQL, allows applications to specify the exact content that they are interested in and how that content should be restructured or modified. Applications can access relevant content by querying against these view definitions. We evaluate the expressivity of our approach by defining views for practical use cases and comparing our view definition language to existing query languages. Copyright © 2010 Elsevier Inc. All rights reserved.
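In spirit, a view of this kind resembles a materialized SPARQL CONSTRUCT query. The rdflib sketch below is only an analogue under that assumption (vSPARQL itself adds features such as recursion and subqueries beyond plain SPARQL); the ontology file and namespace are hypothetical.

```python
# A rough analogue of an RDF "view": materialize a CONSTRUCT result as a
# new graph, then let applications query the view instead of the source.
from rdflib import Graph

source = Graph()
source.parse("fma_subset.ttl", format="turtle")  # hypothetical extract

view_def = """
PREFIX ex: <http://example.org/anatomy#>
CONSTRUCT { ?part ex:containedIn ex:Heart }
WHERE     { ex:Heart ex:hasPart ?part }
"""
view = source.query(view_def).graph  # materialized view graph
for row in view.query(
        "SELECT ?p WHERE { ?p <http://example.org/anatomy#containedIn> ?x }"):
    print(row.p)
```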
QATT: a Natural Language Interface for QPE. M.S. Thesis
NASA Technical Reports Server (NTRS)
White, Douglas Robert-Graham
1989-01-01
QATT, a natural language interface developed for the Qualitative Process Engine (QPE) system is presented. The major goal was to evaluate the use of a preexisting natural language understanding system designed to be tailored for query processing in multiple domains of application. The other goal of QATT is to provide a comfortable environment in which to query envisionments in order to gain insight into the qualitative behavior of physical systems. It is shown that the use of the preexisting system made possible the development of a reasonably useful interface in a few months.
FAIMS Mobile: Flexible, open-source software for field research
NASA Astrophysics Data System (ADS)
Ballsun-Stanton, Brian; Ross, Shawn A.; Sobotkova, Adela; Crook, Penny
2018-01-01
FAIMS Mobile is a native Android application supported by an Ubuntu server facilitating human-mediated field research across disciplines. It consists of 'core' Java and Ruby software providing a platform for data capture, which can be deeply customised using 'definition packets' consisting of XML documents (data schema and UI) and Beanshell scripts (automation). Definition packets can also be generated using an XML-based domain-specific language, making customisation easier. FAIMS Mobile includes features allowing rich and efficient data capture tailored to the needs of fieldwork. It also promotes synthetic research and improves transparency and reproducibility through the production of comprehensive datasets that can be mapped to vocabularies or ontologies as they are created.
WITH: a system to write clinical trials using XML and RDBMS.
Fazi, Paola; Luzi, Daniela; Manco, Mariarosaria; Ricci, Fabrizio L.; Toffoli, Giovanni; Vignetti, Marco
2002-01-01
The paper illustrates the system WITH (Write on Internet clinical Trials in Haematology), which supports the writing of a clinical trial (CT) document. The requirements of this system have been defined by analysing the writing process of a CT and then modelling the content of its sections together with their logical and temporal relationships. The system WITH allows: a) editing the document text; b) re-using the text; and c) facilitating cooperation and collaborative writing. It is based on the XML mark-up language and on an RDBMS. This choice guarantees: a) process standardisation; b) process management; c) efficient delivery of information-based tasks; and d) explicit focus on process design. PMID:12463823
Dogac, Asuman; Kabak, Yildiray; Namli, Tuncay; Okcan, Alper
2008-11-01
Integrating the Healthcare Enterprise (IHE) specifies integration profiles describing selected real-world use cases to facilitate the interoperability of healthcare information resources. When realizing a complex real-world scenario, IHE profiles are combined by grouping the related IHE actors. Grouping IHE actors implies that the associated business processes (IHE profiles) in which the actors are involved must be combined; that is, the choreography of the resulting collaborative business process must be determined by deciding on the execution sequence of transactions coming from different profiles. There are many IHE profiles, and each user or vendor may support a different set of IHE profiles that fits its business needs. However, determining the precedence of all the involved transactions manually for each possible combination of the profiles is a very tedious task. In this paper, we describe how to obtain the overall business process automatically when IHE actors are grouped. For this purpose, we represent the IHE profiles through a standard, machine-processable language, namely, the Organization for the Advancement of Structured Information Standards (OASIS) ebusiness eXtensible Markup Language (ebXML) Business Process Specification (ebBP) language. We define the precedence rules among the transactions of the IHE profiles, again, in a machine-processable way. Then, through a graphical tool, we allow users to select the actors to be grouped and automatically produce the overall business process in a machine-processable format.
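The core of the choreography step, deriving an execution sequence that respects pairwise precedence rules, can be modeled as a topological sort. The Python sketch below uses invented transaction names and illustrates only the scheduling idea, not the paper's ebBP tooling.

```python
# Given precedence rules among transactions drawn from grouped profiles,
# derive one valid execution sequence with a topological sort.
from graphlib import TopologicalSorter

# precedence[t] = set of transactions that must run before t (toy data)
precedence = {
    "RegisterDocumentSet": {"ProvideDocumentSet"},
    "RetrieveDocumentSet": {"RegisterDocumentSet", "PatientIdentityFeed"},
    "ProvideDocumentSet": {"PatientIdentityFeed"},
    "PatientIdentityFeed": set(),
}
order = list(TopologicalSorter(precedence).static_order())
print(order)  # predecessors always appear before their dependents
```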
Root System Markup Language: Toward a Unified Root Architecture Description Language
Pound, Michael P.; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Leitner, Daniel; Meunier, Félicien; Pridmore, Tony P.; Schnepf, Andrea
2015-01-01
The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows. PMID:25614065
Instrument Remote Control via the Astronomical Instrument Markup Language
NASA Technical Reports Server (NTRS)
Sall, Ken; Ames, Troy; Warsaw, Craig; Koons, Lisa; Shafer, Richard
1998-01-01
The Instrument Remote Control (IRC) project ongoing at NASA's Goddard Space Flight Center's (GSFC) Information Systems Center (ISC) supports NASA's mission by defining an adaptive intranet-based framework that provides robust interactive and distributed control and monitoring of remote instruments. An astronomical IRC architecture that combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML) to express hierarchical data in an equally platform-independent, as well as human-readable, manner has been developed. This architecture is implemented using a variety of XML support tools and Application Programming Interfaces (APIs) written in Java. IRC will enable trusted astronomers from around the world to easily access infrared instruments (e.g., telescopes, cameras, and spectrometers) located in remote, inhospitable environments, such as the South Pole, a high Chilean mountaintop, or an airborne observatory aboard a Boeing 747. Using IRC's frameworks, an astronomer or other scientist can easily define the type of onboard instrument, control the instrument remotely, and return monitoring data all through the intranet. The Astronomical Instrument Markup Language (AIML) is the first implementation of the more general Instrument Markup Language (IML). The key aspects of our approach to instrument description and control apply to many domains, from medical instruments to machine assembly lines. The concepts behind AIML apply equally well to the description and control of instruments in general. IRC enables us to apply our techniques to several instruments, preferably from different observatories.
Root system markup language: toward a unified root architecture description language.
Lobet, Guillaume; Pound, Michael P; Diener, Julien; Pradal, Christophe; Draye, Xavier; Godin, Christophe; Javaux, Mathieu; Leitner, Daniel; Meunier, Félicien; Nacry, Philippe; Pridmore, Tony P; Schnepf, Andrea
2015-03-01
The number of image analysis tools supporting the extraction of architectural features of root systems has increased in recent years. These tools offer a handy set of complementary facilities, yet it is widely accepted that none of these software tools is able to extract in an efficient way the growing array of static and dynamic features for different types of images and species. We describe the Root System Markup Language (RSML), which has been designed to overcome two major challenges: (1) to enable portability of root architecture data between different software tools in an easy and interoperable manner, allowing seamless collaborative work; and (2) to provide a standard format upon which to base central repositories that will soon arise following the expanding worldwide root phenotyping effort. RSML follows the XML standard to store two- or three-dimensional image metadata, plant and root properties and geometries, continuous functions along individual root paths, and a suite of annotations at the image, plant, or root scale at one or several time points. Plant ontologies are used to describe botanical entities that are relevant at the scale of root system architecture. An XML schema describes the features and constraints of RSML, and open-source packages have been developed in several languages (R, Excel, Java, Python, and C#) to enable researchers to integrate RSML files into popular research workflows. © 2015 American Society of Plant Biologists. All Rights Reserved.
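For a feel of what such a file looks like, here is a toy RSML-like instance parsed with the Python standard library; the instance is abbreviated, and the exact element layout should be taken from the published schema rather than from this sketch.

```python
# Read root geometry from an abbreviated RSML-like instance and compute a
# crude root length; tag layout only approximates the published format.
import xml.etree.ElementTree as ET

RSML = """
<rsml>
  <scene>
    <plant id="1">
      <root id="1.1">
        <geometry>
          <polyline>
            <point x="0.0" y="0.0"/>
            <point x="0.4" y="-1.2"/>
            <point x="0.9" y="-2.6"/>
          </polyline>
        </geometry>
      </root>
    </plant>
  </scene>
</rsml>
"""

doc = ET.fromstring(RSML)
for root in doc.iter("root"):
    pts = [(float(p.get("x")), float(p.get("y"))) for p in root.iter("point")]
    # root length as the sum of segment lengths along the polyline
    length = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                 for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
    print(root.get("id"), round(length, 3))
```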
Query Expansion Using SNOMED-CT and Weighing Schemes
2014-11-01
For this research, we have used SNOMED-CT along with the UMLS Metathesaurus as our ontology in the medical domain to expand the queries. ... Researchers at the University of the Basque Country discuss their findings on query expansion using external sources headlined by the Unified Medical Language System (UMLS
Spatial information semantic query based on SPARQL
NASA Astrophysics Data System (ADS)
Xiao, Zhifeng; Huang, Lei; Zhai, Xiaofang
2009-10-01
How can the efficiency of spatial information inquiries be enhanced in today's fast-growing information age? We are rich in geospatial data but poor in up-to-date geospatial information and knowledge that are ready to be accessed by public users. This paper adopts an approach for querying spatial semantics by building a Web Ontology Language (OWL) ontology and introducing the SPARQL Protocol and RDF Query Language (SPARQL) to search spatial semantic relations. It is important to establish spatial semantics that support effective spatial reasoning for performing semantic queries. Compared to earlier keyword-based information retrieval techniques that rely on syntax, we use semantic approaches in our spatial query system. Semantic approaches need to be developed with an ontology, so we use OWL to describe spatial information extracted from the large-scale map of Wuhan. Spatial information expressed by an ontology with formal semantics is available to machines for processing and to people for understanding. The approach is illustrated by a case study that uses SPARQL to query geo-spatial ontology instances of Wuhan. The paper shows that using SPARQL to search OWL ontology instances can ensure the accuracy and applicability of the results. The results also indicate that constructing a geo-spatial semantic query system has positive effects on spatial query and retrieval.
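As a toy version of such a case study, the rdflib sketch below builds a few spatial triples and runs a SPARQL query over them; the namespace, instances, and predicates are invented for illustration and do not come from the Wuhan ontology.

```python
# Query invented geo-spatial ontology instances with SPARQL via rdflib.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/wuhan#")
g = Graph()
g.add((EX.YangtzeBridge, EX.crosses, EX.YangtzeRiver))
g.add((EX.JianghanRoad, EX.adjacentTo, EX.YangtzeRiver))

q = """
PREFIX ex: <http://example.org/wuhan#>
SELECT ?feature WHERE {
  { ?feature ex:crosses ex:YangtzeRiver }
  UNION
  { ?feature ex:adjacentTo ex:YangtzeRiver }
}"""
for (feature,) in g.query(q):   # both features are returned
    print(feature)
```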
Towards a distributed information architecture for avionics data
NASA Technical Reports Server (NTRS)
Mattmann, Chris; Freeborn, Dana; Crichton, Dan
2003-01-01
Avionics data at the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) consists of distributed, unmanaged, and heterogeneous information that is hard for flight system design engineers to find and use on new NASA/JPL missions. The development of a systematic approach for capturing, accessing and sharing avionics data critical to the support of NASA/JPL missions and projects is required. We propose a general information architecture for managing the existing distributed avionics data sources and a method for querying and retrieving avionics data using the Object Oriented Data Technology (OODT) framework. OODT uses an XML messaging infrastructure that profiles data products and their locations using the ISO-11179 data model for describing data products. Queries against a common data dictionary (which implements the ISO model) are translated to domain-dependent source data models, and distributed data products are returned asynchronously through the OODT middleware. Further work will include the ability to 'plug and play' new manufacturer data sources, which are distributed at avionics component manufacturer locations throughout the United States.
Query Health: standards-based, cross-platform population health surveillance
Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N
2014-01-01
Objective: Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Materials and methods: Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. Results: We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. Discussion: This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Conclusions: Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. PMID:24699371
Query Health: standards-based, cross-platform population health surveillance.
Klann, Jeffrey G; Buck, Michael D; Brown, Jeffrey; Hadley, Marc; Elmore, Richard; Weber, Griffin M; Murphy, Shawn N
2014-01-01
Understanding population-level health trends is essential to effectively monitor and improve public health. The Office of the National Coordinator for Health Information Technology (ONC) Query Health initiative is a collaboration to develop a national architecture for distributed, population-level health queries across diverse clinical systems with disparate data models. Here we review Query Health activities, including a standards-based methodology, an open-source reference implementation, and three pilot projects. Query Health defined a standards-based approach for distributed population health queries, using an ontology based on the Quality Data Model and Consolidated Clinical Document Architecture, Health Quality Measures Format (HQMF) as the query language, the Query Envelope as the secure transport layer, and the Quality Reporting Document Architecture as the result language. We implemented this approach using Informatics for Integrating Biology and the Bedside (i2b2) and hQuery for data analytics and PopMedNet for access control, secure query distribution, and response. We deployed the reference implementation at three pilot sites: two public health departments (New York City and Massachusetts) and one pilot designed to support Food and Drug Administration post-market safety surveillance activities. The pilots were successful, although improved cross-platform data normalization is needed. This initiative resulted in a standards-based methodology for population health queries, a reference implementation, and revision of the HQMF standard. It also informed future directions regarding interoperability and data access for ONC's Data Access Framework initiative. Query Health was a test of the learning health system that supplied a functional methodology and reference implementation for distributed population health queries that has been validated at three sites. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
The contribution of morphological knowledge to French MeSH mapping for information retrieval.
Zweigenbaum, P.; Darmoni, S. J.; Grabar, N.
2001-01-01
MeSH-indexed Internet health directories must provide a mapping from natural language queries to MeSH terms so that both health professionals and the general public can query their contents. We describe here the design of lexical knowledge bases for mapping French expressions to MeSH terms, and the initial evaluation of their contribution to Doc'CISMeF, the search tool of a MeSH-indexed directory of French-language medical Internet resources. The observed trend is in favor of the use of morphological knowledge as a moderate (approximately 5%) but effective factor for improving query to term mapping capabilities. PMID:11825295
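A minimal sketch of the kind of morphological normalization involved: strip diacritics and a few French suffixes before looking a word up in a stem-to-MeSH table. The two-entry lexicon and suffix list below are invented toys, not the lexical knowledge bases described above.

```python
# Toy morphological query-to-term mapping for French medical words.
import unicodedata

STEM_TO_MESH = {"hepat": "Hepatitis", "cardi": "Heart Diseases"}
SUFFIXES = ("iques", "ique", "ites", "ite", "aque", "s")

def normalize(word):
    # remove diacritics: "hépatite" -> "hepatite"
    word = "".join(c for c in unicodedata.normalize("NFD", word.lower())
                   if unicodedata.category(c) != "Mn")
    for suf in sorted(SUFFIXES, key=len, reverse=True):  # longest first
        if word.endswith(suf):
            return word[:-len(suf)]
    return word

for token in "hépatites cardiaque".split():
    print(token, "->", STEM_TO_MESH.get(normalize(token), "?"))
```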
Searchers Net Treasure in Monterey.
ERIC Educational Resources Information Center
McDermott, Irene E.
1999-01-01
Reports on Web keyword searching, metadata, Dublin Core, Extensible Markup Language (XML), metasearch engines (metasearch engines search several Web indexes and/or directories and/or Usenet and/or specific Web sites), and the Year 2000 (Y2K) dilemma, all topics discussed at the second annual Internet Librarian Conference sponsored by Information…
A universal exchange language for healthcare.
Robson, Barry; Caruso, Thomas P
2013-01-01
We have defined a Universal Exchange Language (UEL) for healthcare that takes a green-field approach to the development of a novel "XML-like" language. We consider here what having a free hand might mean: a UEL that incorporates an advanced mathematical foundation using Dirac's notation and algebra. For consented and public information, it allows probabilistic inference from UEL semantic web triplet tags. It is also possible to use similar thinking to maximize the security and analytic characteristics of private health data by disaggregating or "shredding" it. Both are scalable to millions of records that could be spread across the Internet.
SPARK: Adapting Keyword Query to Semantic Search
NASA Astrophysics Data System (ADS)
Zhou, Qi; Wang, Chong; Xiong, Miao; Wang, Haofen; Yu, Yong
Semantic search promises to provide more accurate results than present-day keyword search. However, progress with semantic search has been delayed due to the complexity of its query languages. In this paper, we explore a novel approach of adapting keywords to querying the semantic web: the approach automatically translates keyword queries into formal logic queries so that end users can use familiar keywords to perform semantic search. A prototype system named 'SPARK' has been implemented in light of this approach. Given a keyword query, SPARK outputs a ranked list of SPARQL queries as the translation result. The translation in SPARK consists of three major steps: term mapping, query graph construction, and query ranking. Specifically, a probabilistic query ranking model is proposed to select the most likely SPARQL query. In the experiments, SPARK achieved encouraging translation results.
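The three translation steps can be miniaturized as follows; the keyword-to-pattern mappings and probabilities are invented, and the real SPARK system builds genuine query graphs rather than simply concatenating triple patterns.

```python
# Toy keyword-to-SPARQL translation: map keywords to candidate patterns,
# assemble queries, rank by the product of mapping probabilities.
from itertools import product

TERM_MAP = {  # keyword -> [(triple pattern, mapping probability)]
    "capital": [("?city ex:isCapitalOf ?state", 0.9)],
    "texas":   [("?state ex:name 'Texas'", 0.8),
                ("?city ex:name 'Texas City'", 0.2)],
}

def candidates(keywords):
    for combo in product(*(TERM_MAP[k] for k in keywords)):
        score = 1.0
        for _, prob in combo:
            score *= prob
        patterns = " . ".join(p for p, _ in combo)
        yield score, ("PREFIX ex: <http://example.org/geo#>\n"
                      "SELECT * WHERE { " + patterns + " }")

for score, query in sorted(candidates(["capital", "texas"]), reverse=True):
    print(round(score, 2), query, sep="\n")
```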
NASA Astrophysics Data System (ADS)
Tan, Kian Lam; Lim, Chen Kim
2017-10-01
With the explosive growth of online information such as email messages, news articles, and scientific literature, many institutions and museums are converting their cultural collections from physical data to digital format. However, this conversion has resulted in issues of inconsistency and incompleteness. Besides, the use of inaccurate keywords has also resulted in the short query problem. Most of the time, the inconsistency and incompleteness are caused by aggregation faults in annotating a document itself, while the short query problem is caused by naive users who have little prior knowledge and experience in the cultural heritage domain. In this paper, we present an approach to solving the problems of inconsistency, incompleteness, and short queries by incorporating a Term Similarity Matrix into the Language Model. Our approach is tested on the Cultural Heritage in CLEF (CHiC) collection, which consists of short queries and documents. The results show that the proposed approach is effective and improves retrieval accuracy.
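The abstract does not give the exact formulation, but one standard way to fold a term-similarity matrix into a query-likelihood language model is the translation-model style of scoring, sketched below with toy values; the paper's actual model may differ.

```python
# Score a query against a document by summing over document terms weighted
# by a term-similarity ("translation") matrix. All numbers are toy values.
SIM = {  # P(query term | document term)
    ("vase", "vase"): 0.8, ("vase", "amphora"): 0.5,
    ("greek", "greek"): 0.9, ("greek", "hellenic"): 0.6,
}

def p_term_given_doc(q_term, doc_tf, mu=0.01):
    total = sum(doc_tf.values())
    score = sum(SIM.get((q_term, t), 0.0) * tf / total
                for t, tf in doc_tf.items())
    return score + mu  # small constant in place of corpus smoothing

def score(query, doc_tf):
    s = 1.0
    for q_term in query.split():
        s *= p_term_given_doc(q_term, doc_tf)
    return s

doc = {"amphora": 2, "hellenic": 1, "pottery": 1}
print(score("greek vase", doc))  # non-zero despite no exact term overlap
```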
A Query System Implementation Case Study.
ERIC Educational Resources Information Center
Hiser, Judith N.; Neil, M. Elizabeth
1985-01-01
The Department of Administrative Programming Services of Clemson University investigated products available in user-friendly retrieval systems. The test of INTELLECT, a natural language query system written by Artificial Intelligence Corporation, is described. (Author/MLW)
A SQL-Database Based Meta-CASE System and its Query Subsystem
NASA Astrophysics Data System (ADS)
Eessaar, Erki; Sgirka, Rünno
Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database system (ORDBMS) as its basis. The use of ORDBMSs allows us to integrate different parts of the system and simplifies the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate values of software measures. We illustrate the use of the system with the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system using the PostgreSQL™ ORDBMS and the PHP scripting language.
TU-CD-304-11: Veritas 2.0: A Cloud-Based Tool to Facilitate Research and Innovation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, P; Patankar, A; Etmektzoglou, A
Purpose: We introduce Veritas 2.0, a cloud-based, non-clinical research portal, to facilitate the translation of radiotherapy research ideas into new delivery techniques. The ecosystem of research tools includes web apps for a research beam builder for TrueBeam Developer Mode, an image reader for compressed and uncompressed XIM files, and a trajectory-log-based QA/beam delivery analyzer. Methods: The research beam builder can generate TrueBeam-readable XML files either from scratch or from pre-existing DICOM-RT plans. A DICOM-RT plan is first converted to XML format, and the researcher can then interactively modify or add control points. The delivered beam can be verified by reading the generated images and analyzing trajectory log files. The image reader can read both uncompressed and HND-compressed XIM images. The trajectory log analyzer lets researchers plot expected vs. actual values and deviations among 30 mechanical axes, and gives an animated view of MLC patterns for the beam delivery. Veritas 2.0 is freely available, and its advantages over standalone software are: i) no software installation or maintenance needed; ii) easy accessibility across all devices; iii) seamless upgrades; and iv) OS independence. Veritas is written using open-source tools such as Twitter Bootstrap, jQuery, Flask, and Python-based modules. Results: In the first experiment, an anonymized 7-beam DICOM-RT IMRT plan was converted to an XML beam containing 1400 control points. kV and MV imaging points were inserted into this XML beam. In another experiment, a binary log file was analyzed to compare actual vs. expected values and deviations among axes. Conclusions: Veritas 2.0 is a public cloud-based web app that hosts a pool of research tools for facilitating research from conceptualization to verification. It is aimed at providing a platform for facilitating research and collaboration. The author is a full-time employee of Varian Medical Systems, Palo Alto.
The Limitations of Term Co-Occurrence Data for Query Expansion in Document Retrieval Systems.
ERIC Educational Resources Information Center
Peat, Helen J.; Willett, Peter
1991-01-01
Identifies limitations in the use of term co-occurrence data as a basis for automatic query expansion in natural language document retrieval systems. The use of similarity coefficients to calculate the degree of similarity between pairs of terms is explained, and frequency and discriminatory characteristics for nearest neighbors of query terms are…
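A common coefficient in this literature is the cosine coefficient over co-occurrence counts; the sketch below computes it for invented counts to show how "nearest neighbours" of a query term would be ranked before expansion.

```python
# Cosine similarity over term co-occurrence: co(a, b) is the number of
# documents containing both terms, f(t) each term's document frequency.
from math import sqrt

f = {"nuclear": 120, "reactor": 80, "banana": 60}          # toy counts
co = {("nuclear", "reactor"): 40, ("nuclear", "banana"): 2}

def cosine(a, b):
    pair = (a, b) if (a, b) in co else (b, a)
    return co.get(pair, 0) / sqrt(f[a] * f[b])

# rank candidate neighbours of "nuclear" by this coefficient
for term in ("reactor", "banana"):
    print(term, round(cosine("nuclear", term), 3))
```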
Importing MAGE-ML format microarray data into BioConductor.
Durinck, Steffen; Allemeersch, Joke; Carey, Vincent J; Moreau, Yves; De Moor, Bart
2004-12-12
The microarray gene expression markup language (MAGE-ML) is a widely used XML (eXtensible Markup Language) standard for describing and exchanging information about microarray experiments. It can describe microarray designs, microarray experiment designs, gene expression data and data analysis results. We describe RMAGEML, a new Bioconductor package that provides a link between cDNA microarray data stored in MAGE-ML format and the Bioconductor framework for preprocessing, visualization and analysis of microarray experiments. http://www.bioconductor.org. Open Source.
TAPRegExt: a VOResource Schema Extension for Describing TAP Services Version 1.0
NASA Astrophysics Data System (ADS)
Demleitner, Markus; Dowler, Patrick; Plante, Ray; Rixon, Guy; Taylor, Mark; Demleitner, Markus
2012-08-01
This document describes an XML encoding standard for metadata about services implementing the table access protocol TAP [TAP], referred to as TAPRegExt. Instance documents are part of the service's registry record or can be obtained from the service itself. They deliver information to both humans and software on the languages, output formats, and upload methods supported by the service, as well as data models implemented by the exposed tables, optional language features, and certain limits enforced by the service.
Murphy, SN; Barnett, GO; Chueh, HC
2000-01-01
The patient base of the Partners HealthCare System in Boston exceeds 1.8 million. Many of these patients are desirable for participation in research studies. To facilitate their discovery, we developed a data warehouse to contain clinical characteristics of these patients. The data warehouse contains diagnoses and procedures from administrative databases. The patients are indexed across institutions and their demographics provided by an Enterprise Master Patient Indexing service. Characteristics of the diagnoses and procedures such as associated providers, dates of service, inpatient/outpatient status, and other visit-related characteristics are also fed from the administrative systems. The targeted users of this system are research clinicians interested in finding patient cohorts for research studies. Their data requirements were analyzed and have been reported elsewhere. We did not expect the clinicians to become expert users of the system. Tools for querying healthcare data have traditionally been text based, although graphical interfaces have been pursued. In order to support the simple drag and drop visual model, as well as the identification and distribution of the patient data, a three-tier software architecture was developed. The user interface was developed in Visual Basic and distributed as an ActiveX object embedded in an HTML page. The middle layer was developed in Java and Microsoft COM. The queries are represented throughout their lifetime as XML objects, and the Microsoft SQL7 database is queried and managed in standard SQL. PMID:11080028
Murphy; Barnett; Chueh
2000-01-01
The patient base of the Partners HealthCare System in Boston exceeds 1.8 million. Many of these patients are desirable for participation in research studies. To facilitate their discovery, we developed a data warehouse to contain clinical characteristics of these patients. The data warehouse contains diagnoses and procedures from administrative databases. The patients are indexed across institutions and their demographics provided by an Enterprise Master Patient Indexing service. Characteristics of the diagnoses and procedures such as associated providers, dates of service, inpatient/outpatient status, and other visit-related characteristics are also fed from the administrative systems. The targeted users of this system are research clinicians interested in finding patient cohorts for research studies. Their data requirements were analyzed and have been reported elsewhere. We did not expect the clinicians to become expert users of the system. Tools for querying healthcare data have traditionally been text based, although graphical interfaces have been pursued. In order to support the simple drag and drop visual model, as well as the identification and distribution of the patient data, a three-tier software architecture was developed. The user interface was developed in Visual Basic and distributed as an ActiveX object embedded in an HTML page. The middle layer was developed in Java and Microsoft COM. The queries are represented throughout their lifetime as XML objects, and the Microsoft SQL7 database is queried and managed in standard SQL.
A data analysis expert system for large established distributed databases
NASA Technical Reports Server (NTRS)
Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick
1987-01-01
A design for a natural language database interface system, called the Deductively Augmented NASA Management Decision support System (DANMDS), is presented. The DANMDS system components have been chosen on the basis of the following considerations: maximal employment of the existing NASA IBM-PC computers and supporting software; local structuring and storing of external data via the entity-relationship model; a natural, easy-to-use, error-free database query language; user ability to alter the query language vocabulary and data analysis heuristics; and significant artificial intelligence data analysis heuristic techniques that allow the system to become progressively and automatically more useful.
GELLO: an object-oriented query and expression language for clinical decision support.
Sordo, Margarita; Ogunyemi, Omolola; Boxwala, Aziz A; Greenes, Robert A
2003-01-01
GELLO is a purpose-specific, object-oriented (OO) query and expression language. GELLO is the result of a concerted effort of the Decision Systems Group (DSG) working with the HL7 Clinical Decision Support Technical Committee (CDSTC) to provide the HL7 community with a common format for data encoding and manipulation. GELLO will soon be submitted for ballot to the HL7 CDSTC for consideration as a standard.
Machine Translation-Supported Cross-Language Information Retrieval for a Consumer Health Resource
Rosemblat, Graciela; Gemoets, Darren; Browne, Allen C.; Tse, Tony
2003-01-01
The U.S. National Institutes of Health, through its National Library of Medicine, developed ClinicalTrials.gov to provide the public with easy access to information on clinical trials for a wide range of conditions and diseases. Only English-language information retrieval is currently supported. Given the growing number of Spanish speakers in the U.S. and their increasing use of the Web, we anticipate a significant increase in Spanish-speaking users. This study compares the effectiveness of two common machine-translation-based cross-language information retrieval methods, query translation versus document translation, using a subset of genuine user queries from ClinicalTrials.gov. Preliminary results obtained with the ClinicalTrials.gov search engine show that, in our environment, query translation is statistically significantly better than document translation. We discuss possible reasons for this result and conclude with suggestions for future work. PMID:14728236
Issues and solutions for storage, retrieval, and searching of MPEG-7 documents
NASA Astrophysics Data System (ADS)
Chang, Yuan-Chi; Lo, Ming-Ling; Smith, John R.
2000-10-01
The ongoing MPEG-7 standardization activity aims at creating a standard for describing multimedia content in order to facilitate the interpretation of the associated information content. Attempting to address a broad range of applications, MPEG-7 has defined a flexible framework consisting of Descriptors, Description Schemes, and a Description Definition Language. Descriptors and Description Schemes describe the features, structure, and semantics of multimedia objects. They are written in the Description Definition Language (DDL). In the most recent revision, the DDL applies XML (Extensible Markup Language) Schema with MPEG-7 extensions. The DDL has constructs that support inclusion, inheritance, reference, enumeration, choice, sequence, and abstract types of Description Schemes and Descriptors. In order to enable multimedia systems to use MPEG-7, a number of important problems in storing, retrieving, and searching MPEG-7 documents need to be solved. This paper reports initial findings on issues and solutions in storing and accessing MPEG-7 documents. In particular, we discuss the benefits of using a virtual document management framework based on an XML Access Server (XAS) in order to bridge MPEG-7 multimedia applications and database systems. The need arises partly because MPEG-7 descriptions need customized storage schemas, indexing, and search engines. We also discuss issues arising in managing dependence and cross-description-scheme search.
The Binding Database: data management and interface design.
Chen, Xi; Lin, Yuhmei; Liu, Ming; Gilson, Michael K
2002-01-01
The large and growing body of experimental data on biomolecular binding is of enormous value in developing a deeper understanding of molecular biology, in developing new therapeutics, and in various molecular design applications. However, most of these data are found only in the published literature and are therefore difficult to access and use. No existing public database has focused on measured binding affinities and has provided query capabilities that include chemical structure and sequence homology searches. We have created Binding DataBase (BindingDB), a public, web-accessible database of measured binding affinities. BindingDB is based upon a relational data specification for describing binding measurements via Isothermal Titration Calorimetry (ITC) and enzyme inhibition. A corresponding XML Document Type Definition (DTD) is used to create and parse intermediate files during the on-line deposition process and will also be used for data interchange, including collection of data from other sources. The on-line query interface, which is constructed with Java Servlet technology, supports standard SQL queries as well as searches for molecules by chemical structure and sequence homology. The on-line deposition interface uses Java Server Pages and JavaBean objects to generate dynamic HTML and to store intermediate results. The resulting data resource provides a range of functionality with brisk response-times, and lends itself well to continued development and enhancement.
Space Communications Emulation Facility
NASA Technical Reports Server (NTRS)
Hill, Chante A.
2004-01-01
Establishing space communication between ground facilities and satellites is a painstaking task that requires many precise calculations dealing with relay time, atmospheric conditions, and satellite positions, to name a few. The Space Communications Emulation Facility (SCEF) team here at NASA is developing a facility that will approximately emulate the conditions in space that impact space communication. The emulation facility is comprised of a 32-node distributed cluster of computers; each node represents a satellite or ground station. The objective of the satellites is to observe the topography of the Earth (water, vegetation, land, and ice) and relay this information back to the ground stations. Software originally designed by the University of Kansas, labeled the Emulation Manager, controls the interaction of the satellites and ground stations, as well as the recording of data. The Emulation Manager is installed on a Linux operating system, employing both Java and C++ code. The emulation scenarios are written in the eXtensible Markup Language (XML). XML documents are designed to store, carry, and exchange data. With XML documents, data can be exchanged between incompatible systems, which makes XML ideal for this project because Linux, Mac, and Windows operating systems are all used. Unfortunately, XML documents cannot display data like HTML documents. Therefore, the SCEF team uses XML Schema Definition (XSD), or simply schema, to describe the structure of an XML document. Schemas are very important because they can validate the correctness of data, define restrictions on data, define data formats, and convert data between different data types, among other things. At this time, in order for the Emulation Manager to open and run an XML emulation scenario file, the user must first establish a link between the schema file and the directory under which the XML scenario files are saved. This procedure takes place on the command line of the Linux operating system. Once this link has been established, the Emulation Manager validates all the XML files in that directory against the schema file before the actual scenario is run. Using sophisticated commercial software called the Satellite Tool Kit (STK) installed on the Linux box, the Emulation Manager is able to display the data and graphics generated by the execution of an XML emulation scenario file. The Emulation Manager software is written in Java. Since the SCEF project is in the developmental stage, the source code for this software is being modified to better fit the requirements of the project. Some parameters for the emulation are hard-coded, set at fixed values. Members of the SCEF team are altering the code to allow the user to choose the values of these hard-coded parameters by adding a toolbar to the preexisting GUI.
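The validate-before-run step described above is straightforward to reproduce; the sketch below uses lxml, with hypothetical schema and directory names.

```python
# Validate every XML scenario file in a directory against an XSD schema
# before a run; file and directory names are hypothetical.
from pathlib import Path
from lxml import etree

schema = etree.XMLSchema(etree.parse("scef_scenario.xsd"))
for path in Path("scenarios").glob("*.xml"):
    doc = etree.parse(str(path))
    status = "OK" if schema.validate(doc) else schema.error_log.last_error
    print(path.name, status)
```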
Cyclone: java-based querying and computing with Pathway/Genome databases.
Le Fèvre, François; Smidtas, Serge; Schächter, Vincent
2007-05-15
Cyclone aims at facilitating the use of BioCyc, a collection of Pathway/Genome Databases (PGDBs). Cyclone provides a fully extensible Java Object API to analyze and visualize these data. Cyclone can read and write PGDBs, and can write its own data in the CycloneML format. This format is automatically generated from the BioCyc ontology by Cyclone itself, ensuring continued compatibility. Cyclone objects can also be stored in a relational database CycloneDB. Queries can be written in SQL, and in an intuitive and concise object-oriented query language, Hibernate Query Language (HQL). In addition, Cyclone interfaces easily with Java software including the Eclipse IDE for HQL edition, the Jung API for graph algorithms or Cytoscape for graph visualization. Cyclone is freely available under an open source license at: http://sourceforge.net/projects/nemo-cyclone. For download and installation instructions, tutorials, use cases and examples, see http://nemo-cyclone.sourceforge.net.
An XML-based interchange format for genotype-phenotype data.
Whirl-Carrillo, M; Woon, M; Thorn, C F; Klein, T E; Altman, R B
2008-02-01
Recent advances in high-throughput genotyping and phenotyping have accelerated the creation of pharmacogenomic data. Consequently, the community requires standard formats to exchange large amounts of diverse information. To facilitate the transfer of pharmacogenomics data between databases and analysis packages, we have created a standard XML (eXtensible Markup Language) schema that describes both genotype and phenotype data as well as associated metadata. The schema accommodates information regarding genes, drugs, diseases, experimental methods, genomic/RNA/protein sequences, subjects, subject groups, and literature. The Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB; www.pharmgkb.org) has used this XML schema for more than 5 years to accept and process submissions containing more than 1,814,139 SNPs on 20,797 subjects using 8,975 assays. Although developed in the context of pharmacogenomics, the schema is of general utility for exchange of genotype and phenotype data. We have written syntactic and semantic validators to check documents using this format. The schema and code for validation is available to the community at http://www.pharmgkb.org/schema/index.html (last accessed: 8 October 2007). (c) 2007 Wiley-Liss, Inc.
2007-08-01
In this domain, queries typically show a deeply nested structure, which makes the semantic parsing task rather challenging, e.g.: What states border... Only 80% of the GEOQUERY queries are semantically tractable, which shows that GEOQUERY is indeed a more challenging domain than ATIS. Note that none... a particularly challenging task, because of the inherent ambiguity of natural languages on both sides. It has inspired a large body of research. In...
Progress on an implementation of MIFlowCyt in XML
NASA Astrophysics Data System (ADS)
Leif, Robert C.; Leif, Stephanie H.
2015-03-01
Introduction: The International Society for Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). The CytometryML schemas are based in part upon the Flow Cytometry Standard and Digital Imaging and Communications in Medicine (DICOM) standards. CytometryML has been and will be extended and adapted to include MIFlowCyt, as well as to serve as a common standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, like the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). Individual major elements of the MIFlowCyt schema were translated into XML and filled with reasonable data. A small section of the code was formatted with HTML formatting elements. Results: The differences in the amount of detail to be recorded for 1) users of standard techniques, including data analysts, and 2) others, such as method and device creators, laboratory and other managers, engineers, and regulatory specialists, required that separate data-types be created to describe the instrument configuration and components. A very substantial part of the MIFlowCyt element that describes the Experimental Overview part of MIFlowCyt, and substantial parts of several other major elements, have been developed. Conclusions: The future use of structured XML tags and web technology should facilitate the searching of experimental information, its presentation, and its inclusion in structured research, clinical, and regulatory documents, as well as demonstrate in publications adherence to the MIFlowCyt standard. The use of CytometryML together with XML technology should also allow textual and numeric data to be published using web technology without any change in composition. Preliminary testing indicates that CytometryML XML pages can be directly formatted with a combination of HTML and CSS.
Menezes, Pedro Monteiro; Cook, Timothy Wayne; Cavalini, Luciana Tricai
2016-01-01
To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated, and the RDF-based semantics were recovered by the receiving side of the prototype from the shared XML schema. This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures compliance with the HL7v2 standard while adopting Semantic Web technologies.
NASA Astrophysics Data System (ADS)
Lugmayr, Artur R.; Mailaparampil, Anurag; Tico, Florina; Kalli, Seppo; Creutzburg, Reiner
2003-01-01
Digital television (digiTV) is an additional multimedia environment, where metadata is one key element for the description of arbitrary content. This implies adequate structures for content description, which is provided by XML metadata schemes (e.g. MPEG-7, MPEG-21). Content and metadata management is the task of a multimedia repository, from which digiTV clients - equipped with an Internet connection - can access rich additional multimedia types over an "All-HTTP" protocol layer. Within this research work, we focus on conceptual design issues of a metadata repository for the storage of metadata, accessible from the feedback channel of a local set-top box. Our concept describes the whole heterogeneous life-cycle chain of XML metadata from the service provider to the digiTV equipment, device independent representation of content, accessing and querying the metadata repository, management of metadata related to digiTV, and interconnection of basic system components (http front-end, relational database system, and servlet container). We present our conceptual test configuration of a metadata repository that is aimed at a real-world deployment, done within the scope of the future interaction (fiTV) project at the Digital Media Institute (DMI) Tampere (www.futureinteraction.tv).
Web-Based Customizable Viewer for Mars Network Overflight Opportunities
NASA Technical Reports Server (NTRS)
Gladden, Roy E.; Wallick, Michael N.; Allard, Daniel A.
2012-01-01
This software displays a full summary of information regarding the overflight opportunities between any set of lander and orbiter pairs that the user has access to view. The information display can be customized, allowing the user to choose which fields to view/hide and filter. The software works from a Web browser on any modern operating system. A full summary of information pertaining to an overflight is available, including its proposed, tentative, requested, planned, and implemented states. This gives the user a chance to quickly check for inconsistencies and fix any problems. Overflights from multiple lander/orbiter pairs can be compared instantly, and information can be filtered through the query and shown/hidden, giving the user a customizable view of the data. The information can be exported to a CSV (comma-separated value) or XML (eXtensible Markup Language) file. The software grants access only to users who are authorized to view the information. This application is an addition to the MaROS Web suite. Prior to this addition, information pertaining to overflight opportunities included only a limited amount of data (displayed graphically) and could only be shown in strict temporal ordering. This new display shows more information, allows direct comparisons between overflights, and allows the data to be manipulated in ways that were not previously possible. The current software solution uses CSV files to view the overflight opportunities.
TEQUEL: The query language of SADDLE
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1984-01-01
A relational database management system is presented that is tailored for engineering applications. A wide variety of engineering data types are supported, and the data definition language (DDL) and data manipulation language (DML) are extended to handle matrices. The system can be used either in standalone mode or through a FORTRAN or PASCAL application program. The query language is of the relational calculus type and allows the user to store, retrieve, update, and delete tuples from relations. The relational operations, including union, intersect, and differ, facilitate the creation of temporary relations that can be used to manipulate information in a powerful manner. Sample applications illustrate the creation of data through a FORTRAN program and data manipulation using the TEQUEL DML.
Using XML and Java Technologies for Astronomical Instrument Control
NASA Technical Reports Server (NTRS)
Ames, Troy; Case, Lynne; Powers, Edward I. (Technical Monitor)
2001-01-01
Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests, increasing software maintenance costs. Instrument description is too tightly coupled with details of implementation. NASA Goddard Space Flight Center, under the Instrument Remote Control (IRC) project, is developing a general and highly extensible framework that applies to any kind of instrument that can be controlled by a computer. The software architecture combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML), a human readable and machine understandable way to describe structured data. A key aspect of the object-oriented architecture is that the software is driven by an instrument description, written using the Instrument Markup Language (IML), a dialect of XML. IML is used to describe the command sets and command formats of the instrument, communication mechanisms, format of the data coming from the instrument, and characteristics of the graphical user interface to control and monitor the instrument. The IRC framework allows the users to define a data analysis pipeline which converts data coming out of the instrument. The data can be used in visualizations in order for the user to assess the data in real-time, if necessary. The data analysis pipeline algorithms can be supplied by the user in a variety of forms or programming languages. Although the current integration effort is targeted for the High-resolution Airborne Wideband Camera (HAWC) and the Submillimeter and Far Infrared Experiment (SAFIRE), first-light instruments of the Stratospheric Observatory for Infrared Astronomy (SOFIA), the framework is designed to be generic and extensible so that it can be applied to any instrument. Plans are underway to test the framework with other types of instruments, such as remote sensing earth science instruments.
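To illustrate the XML-driven idea behind IML (with invented tags, not the actual IML schema), the sketch below formats an instrument command from a description document and range-checks its arguments.

```python
# Hypothetical IML-like instrument description driving command formatting;
# tag names are invented for illustration.
import xml.etree.ElementTree as ET

IML = """
<instrument name="spectrometer">
  <command name="SET_GRATING" format="GRAT {position:d}">
    <argument name="position" type="int" min="0" max="3"/>
  </command>
</instrument>
"""

def build_command(xml_text, cmd_name, **args):
    root = ET.fromstring(xml_text)
    for cmd in root.iter("command"):
        if cmd.get("name") == cmd_name:
            for arg in cmd.iter("argument"):  # range-check each argument
                v = args[arg.get("name")]
                assert int(arg.get("min")) <= v <= int(arg.get("max"))
            return cmd.get("format").format(**args)
    raise KeyError(cmd_name)

print(build_command(IML, "SET_GRATING", position=2))  # -> "GRAT 2"
```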
Implementing a Domain Specific Language to configure and run LHCb Continuous Integration builds
NASA Astrophysics Data System (ADS)
Clemencic, M.; Couturier, B.
2015-12-01
The new LHCb nightly build system described at CHEP 2013 was limited by the use of JSON files for its configuration. JSON had been chosen as a temporary solution to maintain backward compatibility with the old XML format by means of a translation function. Modern languages like Python leverage meta-programming techniques to enable the development of Domain Specific Languages (DSLs). In this contribution we present the advantages of such techniques and how they have been used to implement a DSL that can be used both to describe the configuration of the LHCb Nightly Builds and to actually operate them.
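The contribution's DSL itself is not reproduced here, but the sketch below shows the general shape of a Python-internal DSL for build slots; the class names, the slot, and the dependency resolution are invented for illustration and are not the actual LHCb nightly-builds API.

```python
# A Python-internal DSL sketch: configuration objects that can also
# operate on themselves (here, resolving a build order).
class Project:
    def __init__(self, name, version, deps=()):
        self.name, self.version, self.deps = name, version, tuple(deps)

class Slot:
    def __init__(self, name, projects):
        self.name, self.projects = name, projects

    def build_order(self):
        # resolve declared dependencies into a build sequence
        done, order = set(), []
        def visit(p):
            for d in p.deps:
                visit(next(q for q in self.projects if q.name == d))
            if p.name not in done:
                done.add(p.name)
                order.append(p.name)
        for p in self.projects:
            visit(p)
        return order

slot = Slot("lhcb-head", [
    Project("Gaudi", "HEAD"),
    Project("LHCb", "HEAD", deps=["Gaudi"]),
    Project("Rec", "HEAD", deps=["LHCb"]),
])
print(slot.build_order())  # ['Gaudi', 'LHCb', 'Rec']
```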
An Analysis of Application Generators.
1983-03-01
query language QUEL in the programming language C; THESEUS [20], which embeds relational operators in the language Euclid. Schmidt [21] reports some... "The Design and Implementation of INGRES," ACM-TODS, Vol. 1, No. 3, 1976. 20. Shopiro, J.E., "THESEUS - A Programming Language for Relational...
NASA Astrophysics Data System (ADS)
Arenas, Marcelo; Gutierrez, Claudio; Pérez, Jorge
The Resource Description Framework (RDF) is the standard data model for representing information about World Wide Web resources. In January 2008, the W3C released its recommendation for querying RDF data, a query language called SPARQL. In this chapter, we give a detailed description of the semantics of this language. We start by focusing on the definition of a formal semantics for the core part of SPARQL, and then move to the definition for the entire language, including all the features in the W3C specification of SPARQL, such as blank nodes in graph patterns and bag semantics for solutions.
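One of the semantic subtleties such a formalization must pin down is OPTIONAL, which produces partial solution mappings. The rdflib sketch below shows an unbound variable surviving into the results; the data and names are invented.

```python
# OPTIONAL yields partial solution mappings: Bob matches the query even
# though he has no email triple, so ?email stays unbound for him.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, EX.name, Literal("Alice")))
g.add((EX.alice, EX.email, Literal("alice@example.org")))
g.add((EX.bob, EX.name, Literal("Bob")))  # no email triple

rows = g.query("""
PREFIX ex: <http://example.org/>
SELECT ?name ?email WHERE {
  ?person ex:name ?name .
  OPTIONAL { ?person ex:email ?email }
}""")
for row in rows:
    print(row.name, row.email)  # Bob's ?email prints as None (unbound)
```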
A web-based data-querying tool based on ontology-driven methodology and flowchart-based model.
Ping, Xiao-Ou; Chung, Yufang; Tseng, Yi-Ju; Liang, Ja-Der; Yang, Pei-Ming; Huang, Guan-Tarn; Lai, Feipei
2013-10-08
Because of the increased adoption rate of electronic medical record (EMR) systems, more health care records have been increasingly accumulating in clinical data repositories. Therefore, querying the data stored in these repositories is crucial for retrieving the knowledge from such large volumes of clinical data. The aim of this study is to develop a Web-based approach for enriching the capabilities of the data-querying system along the three following considerations: (1) the interface design used for query formulation, (2) the representation of query results, and (3) the models used for formulating query criteria. The Guideline Interchange Format version 3.5 (GLIF3.5), an ontology-driven clinical guideline representation language, was used for formulating the query tasks based on the GLIF3.5 flowchart in the Protégé environment. The flowchart-based data-querying model (FBDQM) query execution engine was developed and implemented for executing queries and presenting the results through a visual and graphical interface. To examine a broad variety of patient data, the clinical data generator was implemented to automatically generate the clinical data in the repository, and the generated data, thereby, were employed to evaluate the system. The accuracy and time performance of the system for three medical query tasks relevant to liver cancer were evaluated based on the clinical data generator in the experiments with varying numbers of patients. In this study, a prototype system was developed to test the feasibility of applying a methodology for building a query execution engine using FBDQMs by formulating query tasks using the existing GLIF. The FBDQM-based query execution engine was used to successfully retrieve the clinical data based on the query tasks formatted using the GLIF3.5 in the experiments with varying numbers of patients. The accuracy of the three queries (ie, "degree of liver damage," "degree of liver damage when applying a mutually exclusive setting," and "treatments for liver cancer") was 100% for all four experiments (10 patients, 100 patients, 1000 patients, and 10,000 patients). Among the three measured query phases, (1) structured query language operations, (2) criteria verification, and (3) other, the first two had the longest execution time. The ontology-driven FBDQM-based approach enriched the capabilities of the data-querying system. The adoption of the GLIF3.5 increased the potential for interoperability, shareability, and reusability of the query tasks.
NASA Technical Reports Server (NTRS)
Hildreth, Bruce L.; Jackson, E. Bruce
2009-01-01
The American Institute of Aeronautics and Astronautics (AIAA) Modeling and Simulation Technical Committee is in final preparation of a new standard for the exchange of flight dynamics models. The standard will become an ANSI standard and is under consideration for submission to ISO for acceptance by the international community. The standard has some aspects that should provide benefits to the simulation training community. Use of the new standard by the training simulation community will reduce development, maintenance, and technical refresh investment on each device. Furthermore, it will significantly lower the cost of performing model updates to improve fidelity or expand the envelope of the training device. Higher flight fidelity should result in better transfer of training, a direct benefit to the pilots under instruction. Costs of adopting the standard are minimal and should be paid back within the cost of the first use for that training device. The standard achieves these advantages by making it easier to update the aerodynamic model. It provides a standard format for the model in a custom eXtensible Markup Language (XML) grammar, the Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML). It employs an existing XML grammar, MathML, to describe the aerodynamic model in an input data file, eliminating the requirement for actual software compilation. The major components of the aero model become simply an input data file, and updates are simply new XML input files. It includes naming and axis system conventions to further simplify the exchange of information.
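The key idea, the aero model as an XML data file rather than compiled code, can be illustrated with a toy evaluator for a simplified, namespace-free MathML-style fragment (real DAVE-ML/MathML content is richer than this):

    # Evaluate a MathML-style expression tree: <apply> applies its first
    # child (an operator) to the remaining children; <ci> is a variable,
    # <cn> a numeric constant.
    import xml.etree.ElementTree as ET

    fragment = """
    <apply><times/>
      <ci>qbar</ci>
      <apply><plus/><cn>0.02</cn><ci>cl_alpha</ci></apply>
    </apply>"""

    def evaluate(node, env):
        if node.tag == "cn":
            return float(node.text)
        if node.tag == "ci":
            return env[node.text.strip()]
        if node.tag == "apply":
            op, *operands = list(node)
            values = [evaluate(child, env) for child in operands]
            if op.tag == "plus":
                return sum(values)
            if op.tag == "times":
                result = 1.0
                for v in values:
                    result *= v
                return result
        raise ValueError("unsupported element: " + node.tag)

    print(evaluate(ET.fromstring(fragment), {"qbar": 105.0, "cl_alpha": 0.08}))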
Active Wiki Knowledge Repository
2012-10-01
data using SPARQL queries or RESTful web-services; ‘gardening’ tools for examining the semantically tagged content in the wiki; high-level language tool...Tagging & RDF triple-store Fusion and inferences for collaboration Tools for Consuming Data SPARQL queries or RESTful WS Inference & Gardening tools...other stores using AW SPARQL queries and rendering templates; and 4) Interactively share maps and other content using annotation tools to post notes
2011-01-01
Shotgun lipidome profiling relies on direct mass spectrometric analysis of total lipid extracts from cells, tissues or organisms and is a powerful tool to elucidate the molecular composition of lipidomes. We present a novel informatics concept of the molecular fragmentation query language implemented within the LipidXplorer open source software kit that supports accurate quantification of individual species of any ionizable lipid class in shotgun spectra acquired on any mass spectrometry platform. PMID:21247462
HepML, an XML-based format for describing simulated data in high energy physics
NASA Astrophysics Data System (ADS)
Belov, S.; Dudko, L.; Kekelidze, D.; Sherstnev, A.
2010-10-01
In this paper we describe the HepML format and a corresponding C++ library developed for keeping a complete description of parton level events in a unified and flexible form. HepML tags contain enough information to understand what kind of physics the simulated events describe and how the events have been prepared. A HepML block can be included into event files in the LHEF format. The structure of the HepML block is described by means of several XML Schemas. The Schemas define the necessary information for the HepML block and how this information should be located within the block. The library libhepml is a C++ library intended for parsing and serialization of HepML tags, and for representing the HepML block in computer memory. The library is an API for external software. For example, Matrix Element Monte Carlo event generators can use the library for preparing and writing a header of an LHEF file in the form of HepML tags. In turn, Showering and Hadronization event generators can parse the HepML header and get the information in the form of C++ classes. libhepml can be used in C++, C, and Fortran programs. All necessary parts of HepML have been prepared and we present the project to the HEP community.
Program summary:
Program title: libhepml
Catalogue identifier: AEGL_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGL_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPLv3
No. of lines in distributed program, including test data, etc.: 138 866
No. of bytes in distributed program, including test data, etc.: 613 122
Distribution format: tar.gz
Programming language: C++, C
Computer: PCs and workstations
Operating system: Scientific Linux CERN 4/5, Ubuntu 9.10
RAM: 1 073 741 824 bytes (1 GB)
Classification: 6.2, 11.1, 11.2
External routines: Xerces XML library (http://xerces.apache.org/xerces-c/), Expat XML Parser (http://expat.sourceforge.net/)
Nature of problem: Monte Carlo simulation in high energy physics is divided into several stages, and various programs exist for these stages. Here we are interested in interfacing different Monte Carlo event generators via data files, in particular Matrix Element (ME) generators and Showering and Hadronization (SH) generators. There is a widely accepted format for data files for such interfaces, the Les Houches Event Format (LHEF). Although the information kept in an LHEF file is enough for the proper working of SH generators, it is insufficient for understanding how the events in the file have been prepared and which physical model has been applied. We propose an extension of the format that keeps additional information available in the generators: a new information block, marked up with XML tags, added to the LHEF file. This block describes the events in the file in more detail; in particular, it stores information about the physical model, kinematical cuts, the generator, etc. This helps to make LHEF files self-documenting. HepML can, of course, be applied in a more general context, not only in LHEF files.
Solution method: To overcome the drawbacks of the original LHEF accord we add a new information block of HepML tags. HepML is an XML-based markup language. We designed several XML Schemas for all tags in the language; any HepML document should follow the rules of the Schemas. The language is equipped with a library, libhepml, for operating on HepML tags and documents. This C++ library consists of classes for HepML objects, which represent a HepML document in computer memory, parsing classes, serializing classes, and some auxiliary classes.
Restrictions: The software is adapted for solving the problems described in the article. There are no additional restrictions.
Running time: Tests have been done on a computer with an Intel(R) Core(TM)2 Solo, 1.4 GHz. Parsing of a HepML file: 6 ms (file size 12.5 KB). Writing of a HepML block to file: 14 ms (file size 12.5 KB). Merging of two HepML blocks and writing to file: 18 ms (file size 25.0 KB).
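A hypothetical sketch of what consuming such a self-documenting header looks like in Python; the tag and attribute names below are invented for illustration, while the real structure is fixed by the HepML XML Schemas:

    # Read a HepML-style XML header block and extract generator metadata.
    import xml.etree.ElementTree as ET

    header = """
    <hepml>
      <generator name="ExampleGen" version="1.2"/>
      <model name="SM"/>
      <cuts><cut variable="pT(jet)" min="20.0"/></cuts>
    </hepml>"""

    root = ET.fromstring(header)
    gen = root.find("generator")
    print("events prepared by", gen.get("name"), gen.get("version"))
    for cut in root.iter("cut"):
        print("cut:", cut.get("variable"), ">=", cut.get("min"))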
Cross-Language Information Retrieval: An Analysis of Errors.
ERIC Educational Resources Information Center
Ruiz, Miguel E.; Srinivasan, Padmini
1998-01-01
Investigates an automatic method for Cross Language Information Retrieval (CLIR) that utilizes the multilingual Unified Medical Language System (UMLS) Metathesaurus to translate Spanish natural-language queries into English. Results indicate that for Spanish, the UMLS Metathesaurus-based CLIR method is at least equivalent to if not better than…
ERIC Educational Resources Information Center
Hong, Zeng-Wei; Chen, Yen-Lin; Lan, Chien-Ho
2014-01-01
Animated agents are virtual characters who demonstrate facial expressions, gestures, movements, and speech to facilitate students' engagement in the learning environment. Our research developed courseware that supports an XML-based markup language and an authoring tool for teachers to script animated pedagogical agents in teaching materials. The…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-02
... language)-enabled Adobe PDF form, Form FDA 3331--Automated to submit new drug application (NDA) and...-enabled Adobe PDF form, Form FDA 3331--Automated, will be available for piloting between May 1, 2013, and... modernize the FAR submission and review pathway using an XML-enabled PDF form to enable integration with...
A Learning Design Ontology Based on the IMS Specification
ERIC Educational Resources Information Center
Amorim, Ricardo R.; Lama, Manuel; Sanchez, Eduardo; Riera, Adolfo; Vila, Xose A.
2006-01-01
In this paper, we present an ontology to represent the semantics of the IMS Learning Design (IMS LD) specification, a meta-language used to describe the main elements of the learning design process. The motivation of this work relies on the expressiveness limitations found on the current XML-Schema implementation of the IMS LD conceptual model. To…
Visually defining and querying consistent multi-granular clinical temporal abstractions.
Combi, Carlo; Oliboni, Barbara
2012-02-01
The main goal of this work is to propose a framework for the visual specification and querying of consistent multi-granular clinical temporal abstractions. We focus on the issue of querying patient clinical information by visually defining and composing temporal abstractions, i.e., high-level patterns derived from several time-stamped raw data. In particular, we focus on the visual specification of consistent temporal abstractions with different granularities and on the visual composition of different temporal abstractions for querying clinical databases. Temporal abstractions on clinical data provide a concise, high-level description of temporal raw data and a suitable way to support decision making. Granularities define partitions on the time line and allow one to represent time, and thus temporal clinical information, at different levels of detail, according to the requirements of the clinical domain being represented. The visual representation of temporal information has been studied for several years in clinical domains. Proposed visualization techniques must be easy and quick to understand, and should rely on visual metaphors that do not lead to ambiguous interpretations. Recently, physical metaphors such as strips, springs, weights, and wires have been proposed and evaluated on clinical users for the specification of temporal clinical abstractions. Visual approaches to boolean queries have been considered in recent years and confirm that visual support for the specification of complex boolean queries is both an important and a difficult research topic. We propose and describe a visual language for the definition of temporal abstractions based on a set of intuitive metaphors (striped wall, plastered wall, brick wall), allowing the clinician to use different granularities. A new algorithm underlying the visual language allows the physician to specify only consistent abstractions, i.e., abstractions not containing contradictory conditions on the component abstractions. Moreover, we propose a visual query language in which different temporal abstractions can be composed to build complex queries: temporal abstractions are visually connected through the usual logical connectives AND, OR, and NOT. The proposed visual language allows one to define temporal abstractions simply, by using intuitive metaphors, and to specify temporal intervals related to abstractions by using different temporal granularities. The physician can interact with the implemented tool through point-and-click selections and can visually compose queries involving several temporal abstractions. The evaluation of the proposed granularity-related metaphors consisted of two parts: (i) solving 30 interpretation exercises by choosing the correct interpretation of a given screenshot representing a possible scenario, and (ii) solving a complex exercise by visually specifying, through the interface, a scenario described only in natural language. The exercises were done by 13 subjects. The percentages of correct answers to the interpretation exercises differed slightly across the considered metaphors (striped wall 54.4%, plastered wall 73.3%, brick wall 61%, no wall 61%), but post hoc statistical analysis on the means confirmed that the differences were not statistically significant. The results of the user-satisfaction questionnaire for the granularity-related metaphors likewise showed no preference for any one of them.
The evaluation of the proposed logical notation also consisted of two parts: (i) solving five interpretation exercises, each based on a screenshot representing a possible scenario together with three candidate interpretations of which only one was correct, and (ii) solving five exercises by visually defining, through the interface, a scenario described only in natural language. The exercises had increasing difficulty. This evaluation involved a total of 31 subjects. The results confirmed the soundness of the proposed solution, even in comparison with a well-known proposal based on a tabular query form (the only significant difference being that our proposal requires more training time: 21 min versus 14 min). In this work we have considered the issue of visually composing and querying temporal clinical patient data. In this context we have proposed a visual framework for the specification of consistent temporal abstractions with different granularities and for the visual composition of different temporal abstractions to build (possibly) complex queries on clinical databases. A new algorithm has been proposed to check the consistency of the specified granular abstractions. The evaluation of the proposed metaphors and interfaces, and the comparison of the visual query language with a well-known visual method for boolean queries, confirmed the soundness of the overall system; moreover, pros and cons and possible improvements emerged from the comparison of the different visual metaphors and solutions. Copyright © 2011 Elsevier B.V. All rights reserved.
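As a rough illustration of the underlying machinery (not the paper's algorithm), temporal abstractions can be modeled as labeled intervals, with queries composed via AND, OR, and NOT; granularities are collapsed to whole days here, and all labels are invented:

    # Labeled intervals stand in for temporal abstractions; a query is a
    # boolean composition over a patient's timeline.
    from dataclasses import dataclass

    @dataclass
    class Abstraction:
        label: str
        start_day: int   # coarse granularity: whole days
        end_day: int

        def overlaps(self, other):
            return self.start_day <= other.end_day and other.start_day <= self.end_day

    timeline = [Abstraction("fever", 2, 5), Abstraction("low_platelets", 4, 9)]

    def find(label):
        return next((a for a in timeline if a.label == label), None)

    # Query: fever AND low_platelets overlapping, AND NOT rash.
    fever, lowp = find("fever"), find("low_platelets")
    satisfied = (fever is not None and lowp is not None
                 and fever.overlaps(lowp) and find("rash") is None)
    print("query satisfied:", satisfied)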
Dealing with Diversity in Computational Cancer Modeling
Johnson, David; McKeever, Steve; Stamatakos, Georgios; Dionysiou, Dimitra; Graf, Norbert; Sakkalis, Vangelis; Marias, Konstantinos; Wang, Zhihui; Deisboeck, Thomas S.
2013-01-01
This paper discusses the need for interconnecting computational cancer models from different sources and scales within clinically relevant scenarios to increase the accuracy of the models and speed up their clinical adaptation, validation, and eventual translation. We briefly review current interoperability efforts drawing upon our experiences with the development of in silico models for predictive oncology within a number of European Commission Virtual Physiological Human initiative projects on cancer. A clinically relevant scenario, addressing brain tumor modeling that illustrates the need for coupling models from different sources and levels of complexity, is described. General approaches to enabling interoperability using XML-based markup languages for biological modeling are reviewed, concluding with a discussion on efforts towards developing cancer-specific XML markup to couple multiple component models for predictive in silico oncology. PMID:23700360
Automatic Query Formulations in Information Retrieval.
ERIC Educational Resources Information Center
Salton, G.; And Others
1983-01-01
Introduces methods designed to reduce the role of search intermediaries by generating Boolean search formulations automatically, using term frequency considerations, from natural language statements provided by system patrons. Experimental results are supplied and methods are described for applying the automatic query formulation process in practice.…
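A minimal sketch of the idea under stated assumptions: pick query terms from the patron's natural-language statement by frequency, AND the central terms, and OR the rest to broaden recall. The stopword list and thresholds are arbitrary illustrations, not those of the reported experiments:

    # Derive a Boolean query from a natural language statement.
    from collections import Counter
    import re

    STOPWORDS = {"the", "of", "in", "for", "a", "an", "and", "or", "to", "on", "about"}

    def formulate(statement, broaden=False):
        terms = [t for t in re.findall(r"[a-z]+", statement.lower())
                 if t not in STOPWORDS]
        counts = Counter(terms)
        # Repeated terms are treated as central and ANDed together;
        # singleton terms become an OR group that broadens recall.
        central = [t for t, c in counts.items() if c > 1] or terms[:2]
        rest = [t for t in counts if t not in central]
        query = " AND ".join(central)
        if broaden and rest:
            query += " AND (" + " OR ".join(rest) + ")"
        return query

    print(formulate("retrieval of documents on query formulation in retrieval systems",
                    broaden=True))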
On-Demand Associative Cross-Language Information Retrieval
NASA Astrophysics Data System (ADS)
Geraldo, André Pinto; Moreira, Viviane P.; Gonçalves, Marcos A.
This paper proposes the use of algorithms for mining association rules as an approach for Cross-Language Information Retrieval. These algorithms have been widely used to analyse market basket data. The idea is to map the problem of finding associations between sales items to the problem of finding term translations over a parallel corpus. The proposal was validated by means of experiments using queries in two distinct languages: Portuguese and Finnish to retrieve documents in English. The results show that the performance of our proposed approach is comparable to the performance of the monolingual baseline and to query translation via machine translation, even though these systems employ more complex Natural Language Processing techniques. The combination between machine translation and our approach yielded the best results, even outperforming the monolingual baseline.
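A toy sketch of the mapping: each aligned sentence pair plays the role of a market-basket transaction, and candidate translations are ranked by rule confidence. The three-pair corpus is invented for illustration:

    # Mine term-translation "rules" from a tiny parallel corpus.
    from collections import defaultdict

    parallel = [
        ("casa grande", "big house"),
        ("casa azul", "blue house"),
        ("carro azul", "blue car"),
    ]

    cooc = defaultdict(int)
    count = defaultdict(int)
    for src, tgt in parallel:
        src_terms, tgt_terms = set(src.split()), set(tgt.split())
        for s in src_terms:
            count[s] += 1
            for t in tgt_terms:
                cooc[(s, t)] += 1

    def translations(term, min_confidence=0.6):
        # confidence(s -> t) = support(s, t) / support(s)
        return [(t, c / count[term]) for (s, t), c in cooc.items()
                if s == term and c / count[term] >= min_confidence]

    print(translations("casa"))  # 'house' has confidence 1.0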
Zhu, Xinjie; Zhang, Qiang; Ho, Eric Dun; Yu, Ken Hung-On; Liu, Chris; Huang, Tim H; Cheng, Alfred Sze-Lok; Kao, Ben; Lo, Eric; Yip, Kevin Y
2017-09-22
A genomic signal track is a set of genomic intervals associated with values of various types, such as measurements from high-throughput experiments. Analysis of signal tracks requires complex computational methods, which often make the analysts focus too much on the detailed computational steps rather than on their biological questions. Here we propose Signal Track Query Language (STQL) for simple analysis of signal tracks. It is a Structured Query Language (SQL)-like declarative language, which means one only specifies what computations need to be done but not how these computations are to be carried out. STQL provides a rich set of constructs for manipulating genomic intervals and their values. To run STQL queries, we have developed the Signal Track Analytical Research Tool (START, http://yiplab.cse.cuhk.edu.hk/start/ ), a system that includes a Web-based user interface and a back-end execution system. The user interface helps users select data from our database of around 10,000 commonly-used public signal tracks, manage their own tracks, and construct, store and share STQL queries. The back-end system automatically translates STQL queries into optimized low-level programs and runs them on a computer cluster in parallel. We use STQL to perform 14 representative analytical tasks. By repeating these analyses using bedtools, Galaxy and custom Python scripts, we show that the STQL solution is usually the simplest, and the parallel execution achieves significant speed-up with large data files. Finally, we describe how a biologist with minimal formal training in computer programming self-learned STQL to analyze DNA methylation data we produced from 60 pairs of hepatocellular carcinoma (HCC) samples. Overall, STQL and START provide a generic way for analyzing a large number of genomic signal tracks in parallel easily.
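For flavor, here is the kind of interval join a declarative track query expresses, written out in plain Python; this is not STQL syntax (see the START site for the real language), and the coordinates are invented:

    # Intersect two signal tracks, the core operation behind many track queries.
    tracks = {
        "peaks": [(100, 200), (400, 500)],
        "genes": [(150, 450)],
    }

    def overlap(a, b):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (lo, hi) if lo < hi else None

    # In spirit: "SELECT the overlapping regions FROM peaks, genes".
    joined = [o for p in tracks["peaks"] for g in tracks["genes"]
              if (o := overlap(p, g))]
    print(joined)  # [(150, 200), (400, 450)]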
VISAGE: Interactive Visual Graph Querying.
Pienta, Robert; Navathe, Shamkant; Tamersoy, Acar; Tong, Hanghang; Endert, Alex; Chau, Duen Horng
2016-06-01
Extracting useful patterns from large network datasets has become a fundamental challenge in many domains. We present VISAGE, an interactive visual graph querying approach that empowers users to construct expressive queries, without writing complex code (e.g., finding money laundering rings of bankers and business owners). Our contributions are as follows: (1) we introduce graph autocomplete , an interactive approach that guides users to construct and refine queries, preventing over-specification; (2) VISAGE guides the construction of graph queries using a data-driven approach, enabling users to specify queries with varying levels of specificity, from concrete and detailed (e.g., query by example), to abstract (e.g., with "wildcard" nodes of any types), to purely structural matching; (3) a twelve-participant, within-subject user study demonstrates VISAGE's ease of use and the ability to construct graph queries significantly faster than using a conventional query language; (4) VISAGE works on real graphs with over 468K edges, achieving sub-second response times for common queries.
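A sketch of what such a visual query compiles down to, typed subgraph matching, here via the networkx package rather than VISAGE's own engine; the graph and node types are invented:

    # Match a typed query pattern against a data graph.
    import networkx as nx
    from networkx.algorithms import isomorphism

    data = nx.Graph()
    data.add_nodes_from([(1, {"type": "banker"}), (2, {"type": "owner"}),
                         (3, {"type": "banker"}), (4, {"type": "owner"})])
    data.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1)])

    # Query: a banker connected to an owner; a "wildcard" node would
    # simply omit the type attribute and use a more permissive matcher.
    query = nx.Graph()
    query.add_nodes_from([("a", {"type": "banker"}), ("b", {"type": "owner"})])
    query.add_edge("a", "b")

    gm = isomorphism.GraphMatcher(
        data, query,
        node_match=isomorphism.categorical_node_match("type", None))
    print(list(gm.subgraph_isomorphisms_iter()))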
Semantic based man-machine interface for real-time communication
NASA Technical Reports Server (NTRS)
Ali, M.; Ai, C.-S.
1988-01-01
A flight expert system (FLES) was developed to assist pilots in monitoring, diagnosing, and recovering from in-flight faults. To provide a communications interface between the flight crew and FLES, a natural language interface (NALI) was implemented. Input to NALI is processed by three processors: (1) the semantic parser, (2) the knowledge retriever, and (3) the response generator. First, the semantic parser extracts meaningful words and phrases to generate an internal representation of the query; it can map different input forms related to the same concept into the same internal representation. The knowledge retriever then analyzes and stores the context of the query to aid in resolving ellipses and pronoun references. At the end of this process, a sequence of retrieval functions is created as a first step in generating the proper response. Finally, the response generator produces the natural language response to the query. The architecture of NALI was designed to process both temporal and nontemporal queries. The architecture and implementation of NALI are described.
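An invented toy pipeline echoing the three-processor architecture described above (semantic parsing to an internal form, knowledge retrieval, response generation); all rules and values are hypothetical:

    # Stage 1: map different surface forms to one internal representation.
    def semantic_parse(query):
        q = query.lower()
        if "fuel" in q and ("left" in q or "remaining" in q):
            return ("GET", "fuel_remaining")
        raise ValueError("query not understood")

    # Stage 2: knowledge retrieval against (here) a static aircraft state.
    STATE = {"fuel_remaining": 4200}

    def retrieve(internal):
        _verb, slot = internal
        return STATE[slot]

    # Stage 3: generate the natural language response.
    def respond(internal, value):
        return "The %s is %s lbs." % (internal[1].replace("_", " "), value)

    ir = semantic_parse("How much fuel is left?")
    print(respond(ir, retrieve(ir)))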
Time-related patient data retrieval for the case studies from the pharmacogenomics research network
Zhu, Qian; Tao, Cui; Ding, Ying; Chute, Christopher G.
2012-01-01
There are many question-based data elements from the pharmacogenomics research network (PGRN) studies, and many of them contain temporal information. Representing these elements semantically, so that they are machine processable, is challenging for the following reasons: (1) the designers of these studies usually have no background in computer modeling or query languages, so the original data elements are usually represented in spreadsheets in human languages; and (2) the time aspects in these data elements can be too complex to be represented faithfully in a machine-understandable way. In this paper, we introduce our efforts on representing these data elements using semantic web technologies. We have developed an ontology, CNTRO, for representing clinical events and their temporal relations in the web ontology language (OWL). Here we use CNTRO to represent the time aspects of the data elements. We have evaluated 720 time-related data elements from PGRN studies. We adapted and extended the knowledge representation requirements of EliXR-TIME to categorize our data elements. A CNTRO-based SPARQL query builder has been developed to let users customize their own SPARQL queries for each knowledge representation requirement. The SPARQL query builder has been evaluated against a simulated EHR triple store to ensure its functionality. PMID:23076712
SPARQLGraph: a web-based platform for graphically querying biological Semantic Web databases.
Schweiger, Dominik; Trajanoski, Zlatko; Pabinger, Stephan
2014-08-15
Semantic Web has established itself as a framework for using and sharing data across applications and database boundaries. Here, we present a web-based platform for querying biological Semantic Web databases in a graphical way. SPARQLGraph offers an intuitive drag & drop query builder, which converts the visual graph into a query and executes it on a public endpoint. The tool integrates several publicly available Semantic Web databases, including the databases of the just recently released EBI RDF platform. Furthermore, it provides several predefined template queries for answering biological questions. Users can easily create and save new query graphs, which can also be shared with other researchers. This new graphical way of creating queries for biological Semantic Web databases considerably facilitates usability as it removes the requirement of knowing specific query languages and database structures. The system is freely available at http://sparqlgraph.i-med.ac.at.
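What executing a generated query amounts to: posting SPARQL to a public endpoint. A sketch with the SPARQLWrapper package; the endpoint and query below stand in for whatever the builder emits:

    # Send a SPARQL query to a public endpoint and read JSON bindings.
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://sparql.uniprot.org/sparql")
    sparql.setQuery("SELECT ?class WHERE { [] a ?class } LIMIT 5")
    sparql.setReturnFormat(JSON)
    for binding in sparql.query().convert()["results"]["bindings"]:
        print(binding["class"]["value"])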
Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application
Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H
2017-01-01
Background The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. Objective The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Methods Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. Results The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. Conclusions This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. PMID:28903894
Cook, Timothy Wayne; Cavalini, Luciana Tricai
2016-01-01
Objectives To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Results Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. Conclusions This study demonstrated the semantic enrichment of HL7v2 messages for intensive-software telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures the compliance of the HL7v2 standard in Semantic Web technologies. PMID:26893947
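A minimal sketch of the schema-validation step with lxml; the schema and instance below are toy stand-ins, not the study's common reference model:

    # Validate an XML extract against an XML Schema.
    from lxml import etree

    xsd = etree.XMLSchema(etree.fromstring("""
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="vitals">
        <xs:complexType><xs:sequence>
          <xs:element name="airway" type="xs:string"/>
        </xs:sequence></xs:complexType>
      </xs:element>
    </xs:schema>"""))

    doc = etree.fromstring("<vitals><airway>clear</airway></vitals>")
    print("valid:", xsd.validate(doc))  # True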
Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario
2004-01-01
This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an Intranet and transformed via the eXtensible Stylesheet Language (XSL) so that they are visualized uniformly in standard browsers. The core server software has been developed in the PHP Hypertext Preprocessor scripting language, which is versatile and well suited to building a dynamic Web environment.
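A sketch of the XSL step, transforming stored XML into display HTML, here with lxml's XSLT engine rather than the paper's PHP stack; the referral fields are invented examples:

    # Apply an XSL stylesheet to a stored XML record.
    from lxml import etree

    xslt = etree.XSLT(etree.fromstring("""
    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:template match="/referral">
        <html><body><h1><xsl:value-of select="patient"/></h1></body></html>
      </xsl:template>
    </xsl:stylesheet>"""))

    doc = etree.fromstring("<referral><patient>Jane Doe</patient></referral>")
    print(str(xslt(doc)))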
Java Application Shell: A Framework for Piecing Together Java Applications
NASA Technical Reports Server (NTRS)
Miller, Philip; Powers, Edward I. (Technical Monitor)
2001-01-01
This session describes the architecture of Java Application Shell (JAS), a Swing-based framework for developing interactive Java applications. Java Application Shell is being developed by Commerce One, Inc. for NASA Goddard Space Flight Center Code 588. The purpose of JAS is to provide a framework for the development of Java applications, providing features that enable the development process to be more efficient, consistent and flexible. Fundamentally, JAS is based upon an architecture where an application is considered a collection of 'plugins'. In turn, a plug-in is a collection of Swing actions defined using XML and packaged in a jar file. Plug-ins may be local to the host platform or remotely-accessible through HTTP. Local and remote plugins are automatically discovered by JAS upon application startup; plugins may also be loaded dynamically without having to re-start the application. Using Extensible Markup Language (XML) to define actions, as opposed to hardcoding them in application logic, allows easier customization of application-specific operations by separating application logic from presentation. Through XML, a developer defines an action that may appear on any number of menus, toolbars, and buttons. Actions maintain and propagate enable/disable states and specify icons, tool-tips, titles, etc. Furthermore, JAS allows actions to be implemented using various scripting languages through the use of IBM's Bean Scripting Framework. Scripted action implementation is seamless to the end-user. In addition to action implementation, scripts may be used for application and unit-level testing. In the case of application-level testing, JAS has hooks to assist a script in simulating end-user input. JAS also provides property and user preference management, JavaHelp, Undo/Redo, Multi-Document Interface, Single-Document Interface, printing, and logging. Finally, Jini technology has also been included into the framework by means of a Jini services browser and the ability to associate services with actions. Several Java technologies have been incorporated into JAS, including Swing, Internal Frames, Java Beans, XML, JavaScript, JavaHelp, and Jini. Additional information is contained in the original extended abstract.
Nohle, David G; Ayers, Leona W
2005-01-01
Background The Association for Pathology Informatics (API) Extensible Mark-up Language (XML) TMA Data Exchange Specification (TMA DES) proposed in April 2003 provides a community-based, open source tool for sharing tissue microarray (TMA) data in a common format. Each tissue core within an array has separate data including digital images; therefore an organized, common approach to produce, navigate and publish such data facilitates viewing, sharing and merging TMA data from different laboratories. The AIDS and Cancer Specimen Resource (ACSR) is a HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers HIV-related malignancies and uninfected control tissues in microarrays (TMA) accompanied by de-identified clinical data to approved researchers. Exporting our TMA data into the proposed API specified format offers an opportunity to evaluate the API specification in an applied setting and to explore its usefulness. Results A document type definition (DTD) that governs the allowed common data elements (CDE) in TMA DES export XML files was written, tested and evolved and is in routine use by the ACSR. This DTD defines TMA DES CDEs which are implemented in an external file that can be supplemented by internal DTD extensions for locally defined TMA data elements (LDE). Conclusion ACSR implementation of the TMA DES demonstrated the utility of the specification and allowed application of a DTD to validate the language of the API specified XML elements and to identify possible enhancements within our TMA data management application. Improvements to the specification have additionally been suggested by our experience in importing other institution's exported TMA data. Enhancements to TMA DES to remove ambiguous situations and clarify the data should be considered. Better specified identifiers and hierarchical relationships will make automatic use of the data possible. Our tool can be used to reorder data and add identifiers; upgrading data for changes in the specification can be automatically accomplished. Using a DTD (optionally reflecting our proposed enhancements) can provide stronger validation of exported TMA data. PMID:15871741
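A minimal sketch of DTD-based validation with lxml, in the spirit of the approach described; the element names are toy stand-ins for the TMA DES common data elements:

    # Validate a TMA-style XML fragment against a DTD.
    from io import StringIO
    from lxml import etree

    dtd = etree.DTD(StringIO("""
    <!ELEMENT tma_block (core+)>
    <!ELEMENT core (#PCDATA)>
    <!ATTLIST core id CDATA #REQUIRED>
    """))

    doc = etree.fromstring("<tma_block><core id='A1'>liver</core></tma_block>")
    print("valid:", dtd.validate(doc))  # True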
Computer systems and methods for the query and visualization of multidimensional databases
Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick
2006-08-08
A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
Computer systems and methods for the query and visualization of multidimensional database
Stolte, Chris; Tang, Diane L.; Hanrahan, Patrick
2010-05-11
A method and system for producing graphics. A hierarchical structure of a database is determined. A visual table, comprising a plurality of panes, is constructed by providing a specification that is in a language based on the hierarchical structure of the database. In some cases, this language can include fields that are in the database schema. The database is queried to retrieve a set of tuples in accordance with the specification. A subset of the set of tuples is associated with a pane in the plurality of panes.
A database system to support image algorithm evaluation
NASA Technical Reports Server (NTRS)
Lien, Y. E.
1977-01-01
The design of an interactive image database system, IMDB, is given; the system allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates images through generic functions and can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images.
The Use of Dynamic Segment Scoring for Language-Independent Question Answering
2001-01-01
initial window with one sentence is compared to scores corre-… his/PRONOUN brother/CONSANGUINITY like/SIMILARITY his/PRONOUN call/NOMENCLATURE he/PRONOUN… the query processing module. Using the differences between index numbers to specify physical distance relationships among query keywords, we can…
Enabling model customization and integration
NASA Astrophysics Data System (ADS)
Park, Minho; Fishwick, Paul A.
2003-09-01
Until fairly recently, the ideas of dynamic model content and presentation were treated as synonymous. For example, if one takes a data flow network, which captures the dynamics of a target system in terms of the flow of data through nodal operators, one would typically standardize on rectangles and arrows for the model display. The increasing web emphasis on XML, however, suggests that the network model can have its content specified in an XML language and then be presented in a number of ways depending on the chosen style. We have developed a formal method, based on styles, that permits a model to be specified in XML and presented in 1D (text), 2D, and 3D. This method allows customization and personalization to exert their benefits beyond e-commerce, in the area of model structures used in computer simulation. This customization leads naturally to solving the bigger problem of model integration: the act of taking models of a scene and integrating them with that scene so that there is only one unified modeling interface. This work focuses mostly on customization, but we address the integration issue in the future work section.
Kiah, M L Mat; Nabi, Mohamed S; Zaidan, B B; Zaidan, A A
2013-10-01
This study aims to provide security solutions for implementing electronic medical records (EMRs). E-health organizations could utilize the proposed method and implement the recommended solutions in medical/health systems. The majority of the required security features of EMRs were noted, and the methods used were tested against each of these security features. In implementing the system, the combination that satisfied all of the security features of EMRs was selected. Secure implementation and management of EMRs facilitate safeguarding the confidentiality, integrity, and availability of e-health organization systems. Health practitioners, patients, and visitors can use the information system facilities safely and with confidence, anytime and anywhere. After critically reviewing security and data transmission methods, a new hybrid method was proposed for implementation on EMR systems. This method enhances the robustness, security, and integration of EMR systems. The hybrid of the Simple Object Access Protocol/eXtensible Markup Language (XML) with the Advanced Encryption Standard and Secure Hash Algorithm version 1 achieves the security requirements of an EMR system while remaining able to integrate with other systems through the design of XML messages.
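A rough sketch of combining the named primitives on an XML payload, using Python's hashlib and the cryptography package; key management, padding, and SOAP message framing here are simplified placeholders, not the study's protocol:

    # SHA-1 integrity digest plus AES-CBC encryption of an XML message body.
    import hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    payload = b"<record><patient>anon-17</patient></record>"
    digest = hashlib.sha1(payload).hexdigest()      # integrity check value

    key, iv = os.urandom(32), os.urandom(16)        # AES-256 key, random IV
    padded = payload + b" " * (-len(payload) % 16)  # naive space padding
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()

    print("sha1:", digest)
    print("ciphertext bytes:", len(ciphertext))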
RUBE: an XML-based architecture for 3D process modeling and model fusion
NASA Astrophysics Data System (ADS)
Fishwick, Paul A.
2002-07-01
Information fusion is a critical problem for science and engineering. There is a need to fuse information content specified as either data or model. We frame our work in terms of fusing dynamic and geometric models to create an immersive environment where these models can be juxtaposed in 3D within the same interface. The method by which this is accomplished fits well with other eXtensible Markup Language (XML) approaches to fusion in general. The task of modeling lies at the heart of the human-computer interface, joining the human to the system under study through a variety of sensory modalities. I overview modeling as a key concern for the Defense Department and the Air Force, and then follow with a discussion of past, current, and future work. Past work began with a package written in C and has progressed, in current work, to an implementation in XML. Our current work is defined within the RUBE architecture, which is detailed in subsequent papers devoted to its key components. We have built RUBE as a next-generation modeling framework using our prior software, with research opportunities in immersive 3D and tangible user interfaces.
ERIC Educational Resources Information Center
Herrera-Viedma, Enrique; Peis, Eduardo
2003-01-01
Presents a fuzzy evaluation method of SGML documents based on computing with words. Topics include filtering the amount of information available on the Web to assist users in their search processes; document type definitions; linguistic modeling; user-system interaction; and use with XML and other markup languages. (Author/LRW)
An Interface Transformation Strategy for AF-IPPS
2012-12-01
Representational State Transfer (REST) and Java Enterprise Edition (Java EE) to implement a reusable "translation service." For SOAP and REST protocols, XML and… of best-of-breed open source software. The product baseline is summarized in the following table:
Product | Function | Description
Java | Language… | Compiler & Runtime
JBoss Application Server | Applications, Messaging, Translation | Java EE Application Server
Ruby on Rails | Applications | Ruby Web…
Automatic Configuration of Programmable Logic Controller Emulators
2015-03-01
…appearance in the session, and then they are clustered again using Unweighted Pair Group Method with Arithmetic Mean (UPGMA) with a distance matrix based…
Telescope networking and user support via Remote Telescope Markup Language
NASA Astrophysics Data System (ADS)
Hessman, Frederic V.; Pennypacker, Carlton R.; Romero-Colmenero, Encarni; Tuparev, Georg
2004-09-01
Remote Telescope Markup Language (RTML) is an XML-based interface/document format designed to facilitate the exchange of astronomical observing requests and results between investigators and observatories as well as within networks of observatories. While originally created to support simple imaging telescope requests (Versions 1.0-2.1), RTML Version 3.0 now supports a wide range of applications, from request preparation, exposure calculation, spectroscopy, and observation reports to remote telescope scheduling, target-of-opportunity observations and telescope network administration. The elegance of RTML is that all of this is made possible using a public XML Schema which provides a general-purpose, easily parsed, and syntax-checked medium for the exchange of astronomical and user information while not restricting or otherwise constraining the use of the information at either end. Thus, RTML can be used to connect heterogeneous systems and their users without requiring major changes in existing local resources and procedures. Projects as very different as a number of advanced amateur observatories, the global Hands-On Universe project, the MONET network (robotic imaging), the STELLA consortium (robotic spectroscopy), and the 11-m Southern African Large Telescope are now using or intending to use RTML in various forms and for various purposes.
Using XML and Java for Astronomical Instrumentation Control
NASA Technical Reports Server (NTRS)
Ames, Troy; Koons, Lisa; Sall, Ken; Warsaw, Craig
2000-01-01
Traditionally, instrument command and control systems have been highly specialized, consisting mostly of custom code that is difficult to develop, maintain, and extend. Such solutions are initially very costly and are inflexible to subsequent engineering change requests, increasing software maintenance costs. Instrument description is too tightly coupled with details of implementation. NASA Goddard Space Flight Center is developing a general and highly extensible framework that applies to any kind of instrument that can be controlled by a computer. The software architecture combines the platform-independent processing capabilities of Java with the power of the Extensible Markup Language (XML), a human-readable and machine-understandable way to describe structured data. A key aspect of the object-oriented architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is used to describe graphical user interfaces to control and monitor the instrument, command sets and command formats, data streams, and communication mechanisms. Although the current effort is targeted for the High-resolution Airborne Wideband Camera, a first-light instrument of the Stratospheric Observatory for Infrared Astronomy, the framework is designed to be generic and extensible so that it can be applied to any instrument.
Meystre, Stéphane M; Lee, Sanghoon; Jung, Chai Young; Chevrier, Raphaël D
2012-08-01
An increasing need for collaboration and resources sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards: the HL7 Clinical Document Architecture (CDA), and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections, and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually "plug-and-play" of different modules in NLP applications. Copyright © 2011 Elsevier Inc. All rights reserved.
Harvesting implementation for the GI-cat distributed catalog
NASA Astrophysics Data System (ADS)
Boldrini, Enrico; Papeschi, Fabrizio; Bigagli, Lorenzo; Mazzetti, Paolo
2010-05-01
The GI-cat framework implements a distributed catalog service supporting different international standards and interoperability arrangements in use by the geoscientific community. The distribution functionality, in conjunction with the mediation functionality, allows seamless querying of remote heterogeneous data sources, including OGC Web Services (e.g., OGC CSW, WCS, WFS, and WMS), community standards such as UNIDATA THREDDS/OPeNDAP, SeaDataNet CDI (Common Data Index), GBIF (Global Biodiversity Information Facility) services, and OpenSearch engines. In the GI-cat modular architecture, a distributor component carries out the distribution functionality by delegating queries to the mediator components (one for each different data source). Each mediator component queries a specific data source and converts the results back by mapping the foreign data model to the GI-cat internal one, based on ISO 19139. To cope with deployment scenarios in which local data is expected, a harvesting approach has been tried. The new strategy comes in addition to the consolidated distributed approach, allowing the user to switch at will between a remote and a local search for each federated resource; this extends GI-cat's configuration possibilities. The harvesting strategy is built around a local cache component at the core of GI-cat, implemented as a native XML database based on eXist. The heterogeneous sources are queried for the bulk of their available data; this data is then converted to the GI-cat data model and injected into the cache component. The query and conversion steps are performed by the mediator components that are part of the GI-cat framework. Afterwards, each new query can be exercised against the local data stored in the cache component. Considering the advantages and shortcomings of the harvesting and query distribution approaches, user-driven tuning is required to get the best of both; this is often related to the specific user scenarios to be implemented. GI-cat proved to be a flexible framework for addressing user needs. The GI-cat configurator tool was updated to make such tuning possible: each data source can be configured for either the harvesting or the query distribution approach; in the former case an appropriate harvesting interval can be set.
A Simple Blueprint for Automatic Boolean Query Processing.
ERIC Educational Resources Information Center
Salton, G.
1988-01-01
Describes a new Boolean retrieval environment in which an extended soft Boolean logic is used to automatically construct queries from original natural language formulations provided by users. Experimental results that compare the retrieval effectiveness of this method to conventional Boolean and vector processing are discussed. (27 references)…
Experiments in Multi-Lingual Information Retrieval.
ERIC Educational Resources Information Center
Salton, Gerard
A comparison was made of the performance in an automatic information retrieval environment of user queries and document abstracts available in natural language form in both English and French. The results obtained indicate that the automatic indexing and retrieval techniques actually used appear equally effective in handling the query and document…
EOS ODL Metadata On-line Viewer
NASA Astrophysics Data System (ADS)
Yang, J.; Rabi, M.; Bane, B.; Ullman, R.
2002-12-01
We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes (1) an EOS metadata file in Object Description Language (ODL) and (2) parameters such as which metadata to view and what display style to use, and returns an HTML or XML document displaying the requested metadata in the requested style. The tool addresses widespread complaints from the science community that the EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read, by allowing users to upload and view an ODL metadata file in different styles using a web browser. Users can choose to view all the metadata or only part of it, such as Collection metadata, Granule metadata, or Unsupported metadata. Display styles include (1) Web: a mouseable display with tabs and turn-down menus; (2) Outline: formatted and colored text, suitable for printing; (3) Generic: simple indented text, a direct representation of the underlying ODL metadata; and (4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all metadata choices; for example, the Web style is implemented only for Collection and Granule metadata groups with known attribute fields, not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file into viewable HTML in two steps. The first step converts the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step transforms the XML to HTML using stylesheets. Both operations are done on the server side, which allows a lot of flexibility in the final result and is very portable across platforms. Perl CGI behind the Apache web server is used to run the Java ODL2XML and then run the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet Explorer 5.0+ or Netscape 4.7+.
Blinov, Michael L.; Moraru, Ion I.
2011-01-01
Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833
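As a flavor of what rule-based XML conventions can look like, the sketch below invents a minimal rule element (not actual SBML or BioNetGen syntax) and expands its wildcard against a list of phosphorylation states, which is the point of rule-based modeling: one rule pattern stands for many concrete reactions.

```python
# A minimal sketch of a rule-based XML convention (invented element names,
# not the SBML/BioNetGen formats).
import xml.etree.ElementTree as ET

rule_xml = """\
<reactionRule id="bind">
  <reactant molecule="R" site="Y1" state="*"/>
  <reactant molecule="L"/>
  <product complex="R.L"/>
</reactionRule>"""

rule = ET.fromstring(rule_xml)
states = ["unphosphorylated", "phosphorylated"]   # possible states of site Y1

# The wildcard state="*" expands into one concrete reaction per state:
site = rule.find("reactant[@site='Y1']")
if site.get("state") == "*":
    for s in states:
        print(f"R(Y1~{s}) + L -> R(Y1~{s}).L")
```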
Implementation of relational data base management systems on micro-computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, C.L.
1982-01-01
This dissertation describes an implementation of a relational database management system on a microcomputer. A specific floppy-disk-based machine called TERAK is used, and a high-level query interface similar to a subset of the SEQUEL language is provided. The system contains sub-systems for I/O, file management, virtual memory management, the query system, B-tree management, the scanner, the command interpreter, the expression compiler, garbage collection, linked-list manipulation, disk space management, etc. The software has been implemented to fulfill the following goals: (1) It is highly modularized. (2) The system is physically segmented into 16 logically independent, overlayable segments, so that a minimal amount of memory is needed at execution time. (3) A virtual memory system is simulated that provides the system with seemingly unlimited memory space. (4) A language translator recognizes user requests in the query language; its code generator produces compact code for the execution of UPDATE, DELETE, and QUERY commands. (5) A complete set of basic functions needed for on-line database manipulation is provided through a friendly query interface. (6) Dependency on the environment (both software and hardware) is minimized as much as possible, so that the system can easily be ported to other computers. (7) Each relation is simulated as a sequential file. The system is intended to be a highly efficient, single-user system suited to small or medium-sized organizations for, say, administrative purposes. Experiments show that quite satisfactory results have indeed been achieved.
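To make the flavor of such a SEQUEL-like command set concrete, here is a small stand-in built on SQLite; the dissertation's system predates SQLite, so this only illustrates the UPDATE/DELETE/QUERY commands it mentions, with an invented table.

```python
# A small stand-in for the UPDATE / DELETE / QUERY command set the abstract
# mentions, using SQLite in place of the original TERAK implementation.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INT)")
db.executemany("INSERT INTO emp VALUES (?, ?, ?)",
               [("lee", "sales", 900), ("kim", "lab", 1100)])

db.execute("UPDATE emp SET salary = salary + 100 WHERE dept = 'lab'")  # UPDATE
db.execute("DELETE FROM emp WHERE salary < 1000")                      # DELETE
for row in db.execute("SELECT name, salary FROM emp"):                 # QUERY
    print(row)
```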
NASA Technical Reports Server (NTRS)
Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros
1985-01-01
A collection of presentation visuals associated with the companion report entitled KARL: A Knowledge-Assisted Retrieval Language is presented. Information is given on data retrieval, natural language database front ends, generic design objectives, processing capabilities, and the query processing cycle.
Getting Answers to Natural Language Questions on the Web.
ERIC Educational Resources Information Center
Radev, Dragomir R.; Libner, Kelsey; Fan, Weiguo
2002-01-01
Describes a study that investigated the use of natural language questions on Web search engines. Highlights include query languages; differences in search engine syntax; and results of logistic regression and analysis of variance that showed aspects of questions that predicted significantly different performances, including the number of words,…
PharmML in Action: an Interoperable Language for Modeling and Simulation.
Bizzotto, R; Comets, E; Smith, G; Yvon, F; Kristensen, N R; Swat, M J
2017-10-01
PharmML is an XML-based exchange format created with a focus on nonlinear mixed-effect (NLME) models used in pharmacometrics, but providing a very general framework that also allows describing mathematical and statistical models such as single-subject or nonlinear and multivariate regression models. This tutorial provides an overview of the structure of this language, brief suggestions on how to work with it, and use cases demonstrating its power and flexibility.
cluML: A markup language for clustering and cluster validity assessment of microarray data.
Bolshakova, Nadia; Cunningham, Pádraig
2005-01-01
cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as the inability to store multiple clusterings (including biclusterings) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can also be used to represent clustering and validation results for other biomedical and physical data.
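The sketch below shows the kind of structure such a format enables (the element names are invented, not the published cluML schema): several clusterings of the same dataset stored side by side with their validity scores, so the best one can be selected programmatically.

```python
# A minimal sketch of storing multiple clusterings with validity results in
# one XML document (invented element names, not the real cluML schema).
import xml.etree.ElementTree as ET

doc = ET.fromstring("""\
<dataset name="expression_42">
  <clustering algorithm="kmeans" k="3">
    <validity index="silhouette" value="0.61"/>
  </clustering>
  <clustering algorithm="hierarchical" k="4">
    <validity index="silhouette" value="0.55"/>
  </clustering>
</dataset>""")

# Pick the clustering with the best silhouette score:
best = max(doc.findall("clustering"),
           key=lambda c: float(c.find("validity").get("value")))
print(best.get("algorithm"), best.get("k"))
```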
Query-Based Outlier Detection in Heterogeneous Information Networks.
Kuck, Jonathan; Zhuang, Honglei; Yan, Xifeng; Cam, Hasan; Han, Jiawei
2015-03-01
Outlier or anomaly detection in large data sets is a fundamental task in data science, with broad applications. However, in real data sets with high-dimensional space, most outliers are hidden in certain dimensional combinations and are relative to a user's search space and interest. It is often more effective to give power to users and allow them to specify outlier queries flexibly, and the system will then process such mining queries efficiently. In this study, we introduce the concept of query-based outlier in heterogeneous information networks, design a query language to facilitate users to specify such queries flexibly, define a good outlier measure in heterogeneous networks, and study how to process outlier queries efficiently in large data sets. Our experiments on real data sets show that following such a methodology, interesting outliers can be defined and uncovered flexibly and effectively in large heterogeneous networks.
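To make "query-based outlier" concrete, here is a toy sketch (data and scoring invented, not the paper's outlier measure): the user's query fixes a subpopulation of a heterogeneous network, and outlierness is measured only within it.

```python
# A toy sketch of query-based outlierness (invented z-score measure, not the
# paper's): the query restricts attention to one subpopulation, and a node is
# scored against that subpopulation only.
from statistics import mean, pstdev

# (author, venue, paper_count) records from a toy bibliographic network:
records = [("a1", "KDD", 4), ("a2", "KDD", 5), ("a3", "KDD", 30),
           ("a4", "VLDB", 6)]

def outlier_scores(records, venue):
    """z-score of paper_count, computed within the queried venue only."""
    pop = [c for (_, v, c) in records if v == venue]
    mu, sigma = mean(pop), pstdev(pop) or 1.0
    return {a: abs(c - mu) / sigma
            for (a, v, c) in records if v == venue}

print(outlier_scores(records, "KDD"))   # a3 stands out within KDD
```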
On describing human white matter anatomy: the white matter query language.
Wassermann, Demian; Makris, Nikos; Rathi, Yogesh; Shenton, Martha; Kikinis, Ron; Kubicki, Marek; Westin, Carl-Fredrik
2013-01-01
The main contribution of this work is the careful syntactical definition of major white matter tracts in the human brain based on a neuroanatomist's expert knowledge. We present a technique to formally describe white matter tracts and to automatically extract them from diffusion MRI data. The framework is based on a novel query language with a near-to-English textual syntax. This query language allows us to construct a dictionary of anatomical definitions describing white matter tracts. The definitions include adjacent gray and white matter regions, and rules for spatial relations. This enables automated coherent labeling of white matter anatomy across subjects. We use our method to encode anatomical knowledge in human white matter describing 10 association and 8 projection tracts per hemisphere and 7 commissural tracts. The technique is shown to be comparable in accuracy to manual labeling. We present results applying this framework to create a white matter atlas from 77 healthy subjects, and we use this atlas in a proof-of-concept study to detect tract changes specific to schizophrenia.
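The near-to-English queries can be pictured as set algebra over labeled streamline endpoints; the mini-definition below is invented for illustration and is not actual white matter query language syntax.

```python
# A toy sketch of a near-to-English tract definition evaluated as set algebra
# over streamline endpoints (invented mini-syntax, not the real language).
streamlines = {                  # streamline id -> regions its endpoints touch
    "s1": {"frontal", "temporal"},
    "s2": {"frontal", "parietal"},
    "s3": {"temporal", "occipital"},
}

def endpoints_in(region):
    return {s for s, regs in streamlines.items() if region in regs}

# "uncinate-like = endpoints in frontal AND endpoints in temporal"
tract = endpoints_in("frontal") & endpoints_in("temporal")
print(tract)                     # {'s1'}
```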
An ontology-driven tool for structured data acquisition using Web forms.
Gonçalves, Rafael S; Tu, Samson W; Nyulas, Csongor I; Tierney, Michael J; Musen, Mark A
2017-08-01
Structured data acquisition is a common task that is widely performed in biomedicine. However, current solutions for this task are far from providing a means to structure data in such a way that it can be automatically employed in decision making (e.g., in our example application domain of clinical functional assessment, for determining eligibility for disability benefits) based on conclusions derived from acquired data (e.g., assessment of impaired motor function). To use data in these settings, we need it structured in a way that can be exploited by automated reasoning systems, for instance, in the Web Ontology Language (OWL), the de facto ontology language for the Web. We tackle the problem of generating Web-based assessment forms from OWL ontologies, and of aggregating input gathered through these forms as an ontology of "semantically-enriched" form data that can be queried using an RDF query language such as SPARQL. We developed an ontology-based structured data acquisition system, which we present through its specific application to the clinical functional assessment domain. We found that data gathered through our system is highly amenable to automatic analysis using queries. We demonstrated how ontologies can be used to help structure Web-based forms and to semantically enrich the data elements of the acquired structured data. The ontologies associated with the enriched data elements enable automated inferences and provide a rich vocabulary for performing queries.
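Querying semantically enriched form data with SPARQL can be sketched with rdflib as below; the ex: vocabulary and the motor-score threshold are invented for illustration and are not the paper's ontology.

```python
# A minimal sketch of querying semantically enriched form data with SPARQL
# via rdflib; the ex: vocabulary is invented, not the paper's ontology.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/assessment#")
g = Graph()
g.add((EX.patient1, RDF.type, EX.Assessment))
g.add((EX.patient1, EX.motorScore, Literal(2)))
g.add((EX.patient2, RDF.type, EX.Assessment))
g.add((EX.patient2, EX.motorScore, Literal(5)))

# Find assessments indicating impaired motor function (score below 3):
q = """
PREFIX ex: <http://example.org/assessment#>
SELECT ?a WHERE { ?a a ex:Assessment ; ex:motorScore ?s . FILTER(?s < 3) }
"""
for row in g.query(q):
    print(row.a)    # -> http://example.org/assessment#patient1
```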
SPARQLog: SPARQL with Rules and Quantification
NASA Astrophysics Data System (ADS)
Bry, François; Furche, Tim; Marnette, Bruno; Ley, Clemens; Linse, Benedikt; Poppe, Olga
SPARQL has become the gold-standard for RDF query languages. Nevertheless, we believe there is further room for improving RDF query languages. In this chapter, we investigate the addition of rules and quantifier alternation to SPARQL. That extension, called SPARQLog, extends previous RDF query languages by arbitrary quantifier alternation: blank nodes may occur in the scope of all, some, or none of the universal variables of a rule. In addition, SPARQLog is aware of important RDF features such as the distinction between blank nodes, literals and IRIs or the RDFS vocabulary. The semantics of SPARQLog is closed (every answer is an RDF graph), but lifts RDF's restrictions on literal and blank node occurrences for intermediary data. We show how to define a sound and complete operational semantics that can be implemented using existing logic programming techniques. While SPARQLog is Turing complete, we identify a decidable (in fact, polynomial time) fragment SwARQLog ensuring polynomial data-complexity inspired from the notion of super-weak acyclicity in data exchange. Furthermore, we prove that SPARQLog with no universal quantifiers in the scope of existential ones (∀ ∃ fragment) is equivalent to full SPARQLog in presence of graph projection. Thus, the convenience of arbitrary quantifier alternation comes, in fact, for free. These results, though here presented in the context of RDF querying, apply similarly also in the more general setting of data exchange.
Database Reports Over the Internet
NASA Technical Reports Server (NTRS)
Smith, Dean Lance
2002-01-01
Most of the summer was spent developing software that would permit existing test report forms to be printed over the web on a printer that is supported by Adobe Acrobat Reader. The data is stored in a DBMS (Data Base Management System). The client asks for the information from the database using an HTML (Hyper Text Markup Language) form in a web browser. JavaScript is used with the forms to assist the user and verify the integrity of the entered data. Queries to a database are made in SQL (Structured Query Language), a widely supported standard for making queries to databases. Java servlets, programs written in the Java programming language running under the control of network server software, interrogate the database and complete a PDF form template kept in a file. The completed report is sent to the browser requesting the report. Some errors are sent to the browser in an HTML web page, others are reported to the server. Access to the databases was restricted since the data are being transported to new DBMS software that will run on new hardware. However, the SQL queries were made to Microsoft Access, a DBMS that is available on most PCs (Personal Computers). Access does support the SQL commands that were used, and a database was created with Access that contained typical data for the report forms. Some of the problems and features are discussed below.
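The servlet's query-then-fill-template step can be sketched as follows; SQLite and a string template stand in here for the Access database and the Acrobat PDF form, and the table is invented.

```python
# A minimal sketch of the query-then-fill-template step (SQLite and a text
# template stand in for the Access database and the PDF form).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE test_report (test_id TEXT, result TEXT)")
db.execute("INSERT INTO test_report VALUES ('T-101', 'PASS')")

template = "Test {test_id}: result = {result}"

row = db.execute(
    "SELECT test_id, result FROM test_report WHERE test_id = ?",
    ("T-101",)).fetchone()
print(template.format(test_id=row[0], result=row[1]))
```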
OTF CCSDS Mission Operations Prototype. Directory and Action Service. Phase I: Exit Presentation
NASA Technical Reports Server (NTRS)
Reynolds, Walter F.; Lucord, Steven A.; Stevens, John E.
2009-01-01
This slide presentation describes the phase I directory and action service prototype for the CCSDS system. The project goals are to: (1) Demonstrate the use of Mission Operations standards to implement Directory and Action Services (2) Investigate Mission Operations language neutrality (3) Investigate C3I XML interoperability concepts (4) Integrate applicable open source technologies in a Service Oriented Architecture
Closed Loop Analysis Meta-Language Program (CLAMP)
2012-05-01
Considers data formats (spreadsheets, XML, MCPML, or others) in terms of (anthropometry or other) experts' productivity in 1) crafting data and 2) applying it.
The tissue micro-array data exchange specification: a web based experience browsing imported data
Nohle, David G; Hackman, Barbara A; Ayers, Leona W
2005-01-01
Background: The AIDS and Cancer Specimen Resource (ACSR) is an HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers approved researchers HIV-infected biologic samples and uninfected control tissues, including tissue cores in micro-arrays (TMA), accompanied by de-identified clinical data. Researchers interested in the type and quality of TMA tissue cores and the associated clinical data need an efficient method for viewing available TMA materials. Because each of the tissue samples within a TMA has separate data, including a core tissue digital image and clinical data, an organized, standard approach to producing, navigating and publishing such data is necessary. The Association for Pathology Informatics (API) extensible mark-up language (XML) TMA data exchange specification (TMA DES) proposed in April 2003 provides a common format for TMA data. Exporting TMA data into the proposed format offers an opportunity to implement the API TMA DES. Using our public BrowseTMA tool, we created a web site that organizes and cross-references TMA lists, digital "virtual slide" images, TMA DES export data, linked legends and clinical details for researchers. Microsoft Excel® and Microsoft Word® are used to convert tabular clinical data and produce an XML file in the TMA DES format. The BrowseTMA tool contains Extensible Stylesheet Language Transformation (XSLT) scripts that convert XML data into Hyper-Text Mark-up Language (HTML) web pages, with hyperlinks automatically added to allow rapid navigation.
Results: Block lists, virtual slide images, legends, clinical details and exports have been placed on the ACSR web site for 14 blocks with 1623 cores of 2.0, 1.0 and 0.6 mm sizes. Our virtual microscope can be used to view and annotate these TMA images. Researchers can readily navigate from TMA block lists to TMA legends and to clinical details for a selected tissue core. Exports for 11 blocks with 3812 cores from three other institutions were processed with the BrowseTMA tool. Fifty common data elements (CDE) from the TMA DES were used and 42 more were created for site-specific data. Researchers can download TMA clinical data in the TMA DES format.
Conclusion: Virtual TMAs with clinical data can be viewed on the Internet by interested researchers using the BrowseTMA tool. We have organized our approach to producing, sorting, navigating and publishing TMA information to facilitate such review. We have converted Excel TMA data into TMA DES XML and imported it, together with TMA DES XML from another institution, into BrowseTMA to produce web pages that allow us to browse through the merged data. We proposed enhancements to the TMA DES as a result of this experience, and implemented improvements to the API TMA DES as a result of using exported data from several institutions. A document type definition was written for the API TMA DES (optionally including the proposed enhancements); independent validators can be used to check exports against the DTD (with or without the proposed enhancements). Linking tissue core images to readily navigable clinical data greatly improves the value of the TMA. PMID:16086837
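The DTD-based validation step described above can be sketched with lxml; the two-element DTD and documents here are invented and far simpler than the real TMA DES.

```python
# A minimal sketch of validating an export against a DTD with lxml (the DTD
# and documents here are invented, far simpler than the real TMA DES).
from io import StringIO
from lxml import etree

dtd = etree.DTD(StringIO("""\
<!ELEMENT tma_block (core+)>
<!ELEMENT core EMPTY>
<!ATTLIST core diameter_mm CDATA #REQUIRED>
"""))

doc = etree.fromstring('<tma_block><core diameter_mm="0.6"/></tma_block>')
print(dtd.validate(doc))        # True: conforms to the DTD

bad = etree.fromstring("<tma_block/>")
print(dtd.validate(bad))        # False: missing required core element
print(dtd.error_log.filter_from_errors()[0].message)
```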
Information Model Translation to Support a Wider Science Community
NASA Astrophysics Data System (ADS)
Hughes, John S.; Crichton, Daniel; Ritschel, Bernd; Hardman, Sean; Joyner, Ronald
2014-05-01
The Planetary Data System (PDS), NASA's long-term archive for solar system exploration data, has just released PDS4, a modernization of the PDS architecture, data standards, and technical infrastructure. This next generation system positions the PDS to meet the demands of the coming decade, including big data, international cooperation, distributed nodes, and multiple ways of analysing and interpreting data. It also addresses three fundamental project goals: providing more efficient data delivery by data providers to the PDS, enabling a stable, long-term usable planetary science data archive, and enabling services for the data consumer to find, access, and use the data they require in contemporary data formats. The PDS4 information architecture is used to describe all PDS data using a common model. Captured in an ontology modeling tool it supports a hierarchy of data dictionaries built to the ISO/IEC 11179 standard and is designed to increase flexibility, enable complex searches at the product level, and to promote interoperability that facilitates data sharing both nationally and internationally. A PDS4 information architecture design requirement stipulates that the content of the information model must be translatable to external data definition languages such as XML Schema, XMI/XML, and RDF/XML. To support the semantic Web standards we are now in the process of mapping the contents into RDF/XML to support SPARQL capable databases. We are also building a terminological ontology to support virtually unified data retrieval and access. This paper will provide an overview of the PDS4 information architecture focusing on its domain information model and how the translation and mapping are being accomplished.
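The translatability requirement can be pictured with a toy sketch (not the actual PDS4 tooling; the class definition below is invented, borrowing a PDS4-style name) that renders a simple information-model class as an XML Schema fragment.

```python
# A toy sketch of translating an information-model class into an XML Schema
# fragment (invented class definition; not the actual PDS4 pipeline).
import xml.etree.ElementTree as ET

XS = "http://www.w3.org/2001/XMLSchema"
ET.register_namespace("xs", XS)

model_class = {"name": "Product_Observational",
               "attributes": [("title", "string"), ("version_id", "string")]}

schema = ET.Element(f"{{{XS}}}schema")
ctype = ET.SubElement(schema, f"{{{XS}}}complexType",
                      name=model_class["name"])
seq = ET.SubElement(ctype, f"{{{XS}}}sequence")
for attr, xsd_type in model_class["attributes"]:
    ET.SubElement(seq, f"{{{XS}}}element", name=attr, type=f"xs:{xsd_type}")

print(ET.tostring(schema, encoding="unicode"))
```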
Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files
NASA Technical Reports Server (NTRS)
2005-01-01
The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs each carrying out part of the analysis and design task. This has been done previously by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, the data of interest is described using the XML markup language, which allows the data to be stored in a text string. Software to transform the output data of a task into an XML string, and software to read an XML string and extract all or a portion of the data needed for another application, is used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces and turbine-blade data produced by an independent blade design program (UD0300).
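The transform-to-XML-string and extract-from-XML-string pairing described above can be sketched like this (the field names are invented, not the actual Lapin data model):

```python
# A minimal sketch of passing application data between programs as an XML
# string (invented field names; not the actual Lapin data model).
import xml.etree.ElementTree as ET

def to_xml_string(record):
    root = ET.Element("analysis_output")
    for key, value in record.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

def from_xml_string(text, wanted):
    root = ET.fromstring(text)
    return {k: root.findtext(k) for k in wanted}

# Producer side (e.g. the analysis program) ...
msg = to_xml_string({"blade_count": 24, "tip_radius_m": 0.3})
# ... consumer side (e.g. the CAD display) extracts only what it needs:
print(from_xml_string(msg, ["tip_radius_m"]))
```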
Vaccine-criticism on the internet: new insights based on French-speaking websites.
Ward, Jeremy K; Peretti-Watel, Patrick; Larson, Heidi J; Raude, Jocelyn; Verger, Pierre
2015-02-18
The internet is playing an increasingly important part in fueling vaccine-related controversies and in generating vaccine-hesitant behaviors. English-language anti-vaccination websites have been thoroughly analyzed; however, little is known of the arguments presented in other languages on the internet. This study presents three types of results: (1) The authors apply a time-tested content analysis methodology to describe the information diffused by French-language vaccine-critical websites in comparison with English-language websites. The contents of French-language vaccine-critical websites are very similar to those of English-language websites, except for the relative absence of moral and religious arguments. (2) The authors evaluate the likelihood that internet users will find those websites through vaccine-related queries on a variety of French-language versions of Google. Queries on controversial vaccines generated many more vaccine-critical websites than queries on vaccination in general. (3) The authors propose a typology of vaccine-critical websites, distinguishing between (a) websites that criticize all vaccines ("antivaccine" websites) and websites that criticize only some vaccines ("vaccine-selective" websites), and (b) websites that focus on vaccines ("vaccine-focused" websites) and those for which vaccines are only a secondary topic of interest ("generalist" websites). The differences in stances by groups and websites affect the likelihood that they will be believed and by whom. This study therefore helps understand the different information landscapes that may contribute to the variety of forms of vaccine hesitancy. Public authorities should have better awareness and understanding of these stances to bring appropriate answers to the different controversies about vaccination.
Functional Analysis of Language Interactions between Down Syndrome Children and Their Mothers.
ERIC Educational Resources Information Center
Hooshyar, Nahid T.
A 20-minute videotape sample was obtained of the language interactions between 20 Down syndrome children (ages 38 to 107 months) and their mothers during informal playtime. Linguistic utterances of mothers and children were coded according to the following language categories: query, declarative, imperative, performative, feedback, imitation,…
Taboada, María; Martínez, Diego; Pilo, Belén; Jiménez-Escrig, Adriano; Robinson, Peter N; Sobrido, María J
2012-07-31
Semantic Web technology can considerably catalyze translational genetics and genomics research in medicine, where the interchange of information between basic research and clinical levels becomes crucial. This exchange involves mapping abstract phenotype descriptions from research resources, such as knowledge databases and catalogs, to unstructured datasets produced through experimental methods and clinical practice. This is especially true for the construction of mutation databases. This paper presents a way of harmonizing abstract phenotype descriptions with patient data from clinical practice, and of querying this dataset about relationships between phenotypes and genetic variants at different levels of abstraction. Given the current availability of ontological and terminological resources that have already reached some consensus in biomedicine, a reuse-based ontology engineering approach was followed. The proposed approach uses the Web Ontology Language (OWL) to represent the phenotype ontology and the patient model, the Semantic Web Rule Language (SWRL) to bridge the gap between phenotype descriptions and clinical data, and the Semantic Query-Enhanced Web Rule Language (SQWRL) to query relevant bidirectional phenotype-genotype relationships. The work tests the use of semantic web technology in cerebrotendinous xanthomatosis (CTX), a biomedical research domain, using a real dataset and ontologies. A framework to query relevant bidirectional phenotype-genotype relationships is provided. Phenotype descriptions and patient data were harmonized by defining 28 Horn-like rules in terms of the OWL concepts. In total, 24 patterns of SQWRL queries were designed following the initial list of competency questions. As the approach is based on OWL, the semantics of the framework adopts the standard logical model of an open-world assumption. This work demonstrates how semantic web technologies can be used to support the flexible representation and computational inference mechanisms required to query patient datasets at different levels of abstraction. The open-world assumption is especially well suited to describing only partially known phenotype-genotype relationships in a way that is easily extensible. In the future, this type of approach could offer researchers a valuable resource for inferring new data from patient data for statistical analysis in translational research. In conclusion, phenotype description formalization and mapping to clinical data are two key elements for interchanging knowledge between basic and clinical research.
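One way to picture the Horn-like bridging rules is as a SPARQL CONSTRUCT query, sketched below with rdflib as a stand-in for SWRL; the ex: vocabulary is invented, not the paper's ontology.

```python
# A sketch of one Horn-like bridging rule, recast as a SPARQL CONSTRUCT with
# rdflib (a stand-in for SWRL; the ex: vocabulary is invented).
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/ctx#")
g = Graph()
g.add((EX.patient7, EX.hasFinding, EX.tendonXanthoma))

# Rule: a tendon-xanthoma finding implies the abstract phenotype class.
rule = """
PREFIX ex: <http://example.org/ctx#>
CONSTRUCT { ?p a ex:XanthomaPhenotype }
WHERE     { ?p ex:hasFinding ex:tendonXanthoma }
"""
inferred = g.query(rule).graph          # triples produced by the rule
for s, p, o in inferred:
    print(s, p, o)
```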
Connecting geoscience systems and data using Linked Open Data in the Web of Data
NASA Astrophysics Data System (ADS)
Ritschel, Bernd; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; Galkin, Ivan; King, Todd; Fung, Shing F.; Hughes, Steve; Habermann, Ted; Hapgood, Mike; Belehaki, Anna
2014-05-01
Linked Data or Linked Open Data (LOD), in the realm of free and publicly accessible data, is one of the most promising and most used semantic Web frameworks connecting various types of data and vocabularies, including geoscience and related domains. The semantic Web extension to the commonly existing and used World Wide Web is based on the meaning of entities and relationships, or in other words classes and properties, used for data in a global data and information space, the Web of Data. LOD data is referenced and mashed up by URIs and is retrievable using simple parameter-controlled HTTP requests leading to a result which is human-understandable or machine-readable. Furthermore, the publishing and mash-up of data in the semantic Web realm is realized by specific Web standards defined for the Web of Data, such as RDF, RDFS, OWL and SPARQL. Semantic Web based mash-up is the Web method to aggregate and reuse various contents from different sources, such as using FOAF as a model and vocabulary for the description of persons and organizations related, in our case, to geoscience projects, instruments, observations, data and so on. Using the example of three different geoscience data and information management systems, ESPAS, IUGONET and GFZ ISDC, and the associated science data and related metadata (better called context data), this publication describes the concept of the mash-up of systems and data using the semantic Web approach and the Linked Open Data framework. Because the three systems are based on different data models, data storage structures and technical implementations, an extra semantic Web layer on top of the existing interfaces is used for mash-up solutions. In order to satisfy the semantic Web standards, data transition processes are necessary, such as the transfer of content stored in relational databases or mapped in XML documents into SPARQL-capable databases or endpoints using D2R or XSLT. In addition, the use of mapped and/or merged domain-specific and cross-domain vocabularies, in the sense of terminological ontologies, is the foundation for virtually unified data retrieval and access in the IUGONET, ESPAS and GFZ ISDC data management systems. SPARQL endpoints, realized either by native RDF databases (e.g. Virtuoso) or by virtual SPARQL endpoints (e.g. D2R services), enable a purely Web-standard-based mash-up of domain-specific systems and data, in this case the space weather and geomagnetic domains, but also cross-domain connections to data and vocabularies within LOD, e.g. related to NASA's VxOs, particularly VWO, or NASA's PDS data system.
LOD - Linked Open Data
RDF - Resource Description Framework
RDFS - RDF Schema
OWL - Web Ontology Language
SPARQL - SPARQL Protocol and RDF Query Language
FOAF - Friend of a Friend ontology
ESPAS - Near Earth Space Data Infrastructure for e-Science (Project)
IUGONET - Inter-university Upper Atmosphere Global Observation Network (Project)
GFZ ISDC - German Research Centre for Geosciences Information System and Data Center
XML - Extensible Mark-up Language
D2R - (Relational) Database to RDF (Transformation)
XSLT - Extensible Stylesheet Language Transformation
Virtuoso - OpenLink Virtuoso Universal Server (including RDF data management)
NASA - National Aeronautics and Space Administration
VxO - Virtual Observatory
VWO - Virtual Wave Observatory
PDS - Planetary Data System
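Since the mash-up rests on SPARQL endpoints speaking the standard SPARQL protocol, a client needs nothing beyond HTTP; a minimal sketch follows, with an invented endpoint URL and query.

```python
# A minimal SPARQL-protocol client sketch using only the standard library
# (the endpoint URL and query are invented; any SPARQL 1.1 endpoint would do).
import json
import urllib.parse
import urllib.request

endpoint = "http://example.org/sparql"          # hypothetical endpoint
query = "SELECT ?s WHERE { ?s a <http://example.org/Instrument> } LIMIT 5"

req = urllib.request.Request(
    endpoint + "?" + urllib.parse.urlencode({"query": query}),
    headers={"Accept": "application/sparql-results+json"})

with urllib.request.urlopen(req) as resp:
    results = json.load(resp)
for binding in results["results"]["bindings"]:
    print(binding["s"]["value"])
```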
An integrated information retrieval and document management system
NASA Technical Reports Server (NTRS)
Coles, L. Stephen; Alvarez, J. Fernando; Chen, James; Chen, William; Cheung, Lai-Mei; Clancy, Susan; Wong, Alexis
1993-01-01
This paper describes the requirements and prototype development for an intelligent document management and information retrieval system that will be capable of handling millions of pages of text or other data. Technologies for scanning, Optical Character Recognition (OCR), magneto-optical storage, and multiplatform retrieval using the Structured Query Language (SQL) are discussed. The semantic ambiguity inherent in the English language is partially compensated for through the use of coefficients, or weighting factors, for partial synonyms. Such coefficients are used both for defining structured query trees for routine queries and for establishing long-term interest profiles that can be used on a regular basis to alert individual users to the presence of relevant documents that may have just arrived from an external source, such as a news wire service. Although this attempt at evidential reasoning is limited in comparison with the latest developments in AI expert systems technology, it has the advantage of being commercially available.
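A toy sketch of the synonym-coefficient idea follows (the weights are invented): partial synonyms contribute a fraction of a full-term match to the query score.

```python
# A toy sketch of synonym coefficients in a query profile (invented weights):
# partial synonyms contribute a fraction of a full term match.
synonyms = {"retrieval": 1.0, "search": 0.8, "lookup": 0.5}

def score(doc_terms):
    return sum(w for term, w in synonyms.items() if term in doc_terms)

print(score({"search", "database"}))   # 0.8: matched via a partial synonym
```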
2017-01-01
Reusing the data from healthcare information systems can effectively facilitate clinical trials (CTs). Selecting candidate patients who meet CT recruitment criteria is a central task. Related work either depends on a DBA (database administrator) to convert the recruitment criteria to native SQL queries or involves data mapping between a standard ontology/information model and the individual data source schema. This paper proposes an alternative computer-aided CT recruitment paradigm based on syntax translation between different DSLs (domain-specific languages). In this paradigm, the CT recruitment criteria are first formally represented as production rules, with the referenced rule variables all drawn from the underlying database schema. The production rule is then translated to an intermediate query-oriented DSL (e.g., LINQ). Finally, the intermediate DSL is directly mapped to native database queries (e.g., SQL), automated by ORM (object-relational mapping). PMID:29065644
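A minimal sketch of the first and last steps of that chain follows, with an invented rule format and naive SQL string building; a real system would go through an intermediate DSL such as LINQ and an ORM, and would guard against SQL injection.

```python
# A toy sketch of translating a recruitment production rule into SQL
# (invented rule format; naive string building, not injection-safe).
rule = {"all": [("age", ">=", 18), ("hba1c", ">", 7.0)]}   # recruitment rule

def rule_to_sql(rule, table="patients"):
    clauses = ["%s %s %r" % (col, op, val) for col, op, val in rule["all"]]
    return "SELECT id FROM %s WHERE %s" % (table, " AND ".join(clauses))

print(rule_to_sql(rule))
# SELECT id FROM patients WHERE age >= 18 AND hba1c > 7.0
```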
The EPMI Malay Basin petroleum geology database: Design philosophy and keys to success
DOE Office of Scientific and Technical Information (OSTI.GOV)
Low, H.E.; Creaney, S.; Fairchild, L.H.
1994-07-01
Esso Production Malaysia Inc. (EPMI) developed and populated a database containing information collected in the areas of basic well data: stratigraphy, lithology, facies; pressure, temperature, column/contacts; geochemistry, shows and stains, migration, fluid properties; maturation; seal; and structure. Paradox was used as the database engine and query language, with links to ZYCOR ZMAP+ for mapping and SAS for data analysis. Paradox has a query language that is simple enough for users; the ability to link to good analytical packages was deemed more important than having that capability built into the package. Important elements of the design philosophy included: (1) information on data quality had to be rigorously recorded; (2) raw and interpreted data were kept separate and clearly identified; (3) correlations between rock and chronostratigraphic surfaces were recorded; and (4) queries across technical boundaries had to be seamless.