Sample records for optimizing XML compression

  1. Compressing Aviation Data in XML Format

    NASA Technical Reports Server (NTRS)

    Patel, Hemil; Lau, Derek; Kulkarni, Deepak

    2003-01-01

    Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running Xmill. This, in turn, depends on the nature of the data. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data and the optimal settings to use with an XML compression tool.
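
    A minimal sketch of the advisory idea described above, assuming only the Python standard library: try several general-purpose compressors and settings on a sample of the data and rank them by output size. Xmill and XMLPPM are external tools not exercised here; zlib, bz2, and lzma stand in to illustrate how candidate settings are compared. The sample document and chosen levels are illustrative.

    ```python
    # Compare several general-purpose compressors/settings on sample XML data
    # and report them ordered by compressed size (smallest first).
    import bz2
    import lzma
    import zlib

    def rank_compressors(xml_bytes: bytes):
        candidates = {}
        for level in (1, 6, 9):                       # zlib/DEFLATE levels
            candidates[f"zlib-{level}"] = len(zlib.compress(xml_bytes, level))
        for level in (1, 9):                          # bzip2 block sizes
            candidates[f"bz2-{level}"] = len(bz2.compress(xml_bytes, level))
        for preset in (0, 6, 9):                      # LZMA presets
            candidates[f"lzma-{preset}"] = len(lzma.compress(xml_bytes, preset=preset))
        return sorted(candidates.items(), key=lambda kv: kv[1])

    if __name__ == "__main__":
        sample = (b"<flights>"
                  + b"<flight id='1'><dep>SFO</dep><arr>JFK</arr></flight>" * 500
                  + b"</flights>")
        for name, size in rank_compressors(sample):
            print(f"{name:10s} {size:6d} bytes (original {len(sample)})")
    ```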

  2. The Role of Efficient XML Interchange (EXI) in Navy Wide-Area Network (WAN) Optimization

    DTIC Science & Technology

    2015-03-01

    compress, and re-encrypt data to continue providing optimization through compression; however, that capability requires careful consideration of...optimization of encrypted data requires a careful analysis and comparison of performance improvements and IA vulnerabilities. It is important...Contained EXI capitalizes on multiple techniques to improve compression, and they vary depending on a set of EXI options passed to the codec

  3. Efficient XML Interchange (EXI) Compression and Performance Benefits: Development, Implementation and Evaluation

    DTIC Science & Technology

    2010-03-01

    to a graphics card, and not the redesign of XML. The justification is that if XML is going to be prevalent, special optimized hardware is...the answer, similar to the specialized functions of a video card. Given Moore's law that processing power doubles every few years, let the...and numerous multimedia players such as iTunes from Apple. These applications are free to use, but the source is restricted by software licenses

  4. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.

  5. QRFXFreeze: Queryable Compressor for RFX.

    PubMed

    Senthilkumar, Radha; Nandagopal, Gomathi; Ronald, Daphne

    2015-01-01

    The verbose nature of XML has been mulled over again and again, and many compression techniques for XML data have been devised over the years. Some of the techniques incorporate support for querying the XML database in its compressed format, while others require decompression before they can be queried. XML compression schemes that support querying directly, with no compromise in query time, are forced to compromise on space. In this paper, we propose the compressor QRFXFreeze, which not only reduces storage space but also supports efficient querying. The compressor does this without decompressing the compressed XML file. The compressor supports all kinds of XML documents along with insert, update, and delete operations. The forte of QRFXFreeze is that the textual data are semantically compressed and are indexed to reduce the querying time. Experimental results show that the proposed compressor performs much better than other well-known compressors.
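
    The record above describes semantically compressing textual values and indexing them so queries run without decompression. A rough, hypothetical sketch of that general idea (not the QRFXFreeze algorithm itself): dictionary-encode repeated element text per tag and keep an inverted index, so simple equality queries can be answered against the encoded form.

    ```python
    # Dictionary-encode element text values and build an inverted index so that
    # equality queries run on the compressed representation.
    import xml.etree.ElementTree as ET
    from collections import defaultdict

    def build(xml_text: str):
        root = ET.fromstring(xml_text)
        dictionary, codes = {}, []            # value -> id, and the encoded stream
        index = defaultdict(list)             # (tag, value_id) -> element positions
        for pos, elem in enumerate(root.iter()):
            value = (elem.text or "").strip()
            vid = dictionary.setdefault(value, len(dictionary))
            codes.append((elem.tag, vid))
            index[(elem.tag, vid)].append(pos)
        return dictionary, codes, index

    def query(dictionary, index, tag, value):
        vid = dictionary.get(value)
        return [] if vid is None else index[(tag, vid)]

    doc = "<people><name>Ann</name><name>Bob</name><name>Ann</name></people>"
    d, codes, idx = build(doc)
    print(query(d, idx, "name", "Ann"))       # positions of <name> elements equal to "Ann"
    ```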

  6. Compression-based aggregation model for medical web services.

    PubMed

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations such as hospitals have adopted Cloud Web services in applying their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Generally, Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead. At the same time, the massive load on Cloud Web services, in terms of the large volume of client requests, has resulted in the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed in order to aggregate medical Web messages and achieve higher message size reduction.

  7. Evaluation of Efficient XML Interchange (EXI) for Large Datasets and as an Alternative to Binary JSON Encodings

    DTIC Science & Technology

    2015-03-01

    fall in the lossy category (Gonzalez, Woods, & Eddins, 2009, p. 420). For the textual or numeric data in XML, however, lossy compression is...Professional Notes Being Efficient with Bandwidth By Lieutenant Commander Steve Debich, Lieutenant Bruce Hill, Captain Scot Miller (Retired...2005). XML Binary Characterization. Retrieved from http://www.w3.org/TR/xbc-characterization/ Gonzalez, R., Woods, R., & Eddins, S. (2009

  8. Context- and Template-Based Compression for Efficient Management of Data Models in Resource-Constrained Systems.

    PubMed

    Macho, Jorge Berzosa; Montón, Luis Gardeazabal; Rodriguez, Roberto Cortiñas

    2017-08-01

    The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that might suppose an overload for the resource-constrained devices that are typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted to resource-constrained devices, which allows reducing the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C), and it is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices.

  9. Context- and Template-Based Compression for Efficient Management of Data Models in Resource-Constrained Systems

    PubMed Central

    Montón, Luis Gardeazabal

    2017-01-01

    The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that might suppose an overload for the resource-constrained devices that are typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted to resource-constrained devices, which allows reducing the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C), and it is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices. PMID:28763013

  10. Gmz: a Gml Compression Model for Webgis

    NASA Astrophysics Data System (ADS)

    Khandelwal, A.; Rajan, K. S.

    2017-09-01

    Geography markup language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience to define custom features profiles in GML for specific needs, as seen in the widely popular CityGML, simple features profile, coverage, etc. Simple features profile (SFP) is a simpler subset of the GML profile with support for point, line and polygon geometries. SFP has been constructed to make sure it covers the most commonly used GML geometries. Web Feature Service (WFS) serves query results in SFP by default. But it falls short of being an ideal choice due to its high verbosity and size-heavy nature, which provides immense scope for compression. GMZ is a lossless compression model developed to work for SFP-compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.

  11. Mapping DICOM to OpenDocument format

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Yao, Zhihong

    2009-02-01

    In order to enhance the readability, extensibility and sharing of DICOM files, we have introduced XML into the DICOM file system (SPIE Volume 5748)[1] and the multilayer tree structure into DICOM (SPIE Volume 6145)[2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), for it is also based on XML. As a result, the new format realizes the separation of content (including text content and images) and display style. Meanwhile, since OpenDocument files take the format of a ZIP compressed archive, the new kind of DICOM files can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
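
    A small sketch of the packaging idea, assuming nothing beyond the Python standard library: like ODF, the output is a ZIP archive whose XML content part and image payload are stored with lossless DEFLATE compression. The file and entry names are illustrative, not the actual DICOM-to-ODF mapping.

    ```python
    # Package an XML content part and image bytes into a ZIP archive (the ODF
    # container model) using lossless DEFLATE compression.
    import zipfile

    content_xml = "<document><meta>patient study</meta><text>findings ...</text></document>"
    pixel_data = bytes(512 * 512)            # placeholder image bytes

    with zipfile.ZipFile("study_package.zip", "w", compression=zipfile.ZIP_DEFLATED) as z:
        z.writestr("content.xml", content_xml)
        z.writestr("Pictures/image1.raw", pixel_data)

    print(zipfile.ZipFile("study_package.zip").infolist())
    ```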

  12. XML technology planning database : lessons learned

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Neff, Jon M.

    2005-01-01

    A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Capability Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return-on-investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.

  13. Spreadsheets for Analyzing and Optimizing Space Missions

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Agrawal, Anil K.; Czikmantory, Akos J.; Weisbin, Charles R.; Hua, Hook; Neff, Jon M.; Cowdin, Mark A.; Lewis, Brian S.; Iroz, Juana; Ross, Rick

    2009-01-01

    XCALIBR (XML Capability Analysis LIBRary) is a set of Extensible Markup Language (XML) database and spreadsheet-based analysis software tools designed to assist in technology-return-on-investment analysis and optimization of technology portfolios pertaining to outer-space missions. XCALIBR is also being examined for use in planning, tracking, and documentation of projects. An XCALIBR database contains information on mission requirements and technological capabilities, which are related by use of an XML taxonomy. XCALIBR incorporates a standardized interface for exporting data and analysis templates to an Excel spreadsheet. Unique features of XCALIBR include the following: It is inherently hierarchical by virtue of its XML basis. The XML taxonomy codifies a comprehensive data structure and data dictionary that includes performance metrics for spacecraft, sensors, and spacecraft systems other than sensors. The taxonomy contains >700 nodes representing all levels, from system through subsystem to individual parts. All entries are searchable and machine readable. There is an intuitive Web-based user interface. The software automatically matches technologies to mission requirements. The software automatically generates, and makes the required entries in, an Excel return-on-investment analysis software tool. The results of an analysis are presented in both tabular and graphical displays.

  14. TU-CD-304-11: Veritas 2.0: A Cloud-Based Tool to Facilitate Research and Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Patankar, A; Etmektzoglou, A

    Purpose: We introduce Veritas 2.0, a cloud-based, non-clinical research portal, to facilitate translation of radiotherapy research ideas to new delivery techniques. The ecosystem of research tools includes web apps for a research beam builder for TrueBeam Developer Mode, an image reader for compressed and uncompressed XIM files, and a trajectory log file based QA/beam delivery analyzer. Methods: The research beam builder can generate TrueBeam-readable XML files either from scratch or from pre-existing DICOM-RT plans. A DICOM-RT plan is first converted to XML format and then the researcher can interactively modify or add control points. Delivered beams can be verified via reading generated images and analyzing trajectory log files. The image reader can read both uncompressed and HND-compressed XIM images. The trajectory log analyzer lets researchers plot expected vs. actual values and deviations among 30 mechanical axes. The analyzer gives an animated view of MLC patterns for the beam delivery. Veritas 2.0 is freely available and its advantages versus standalone software are i) no software installation or maintenance needed, ii) easy accessibility across all devices, iii) seamless upgrades, and iv) OS independence. Veritas is written using open-source tools like Twitter Bootstrap, jQuery, Flask, and Python-based modules. Results: In the first experiment, an anonymized 7-beam DICOM-RT IMRT plan was converted to an XML beam containing 1400 control points. kV and MV imaging points were inserted into this XML beam. In another experiment, a binary log file was analyzed to compare actual vs. expected values and deviations among axes. Conclusions: Veritas 2.0 is a public cloud-based web app that hosts a pool of research tools for facilitating research from conceptualization to verification. It is aimed at providing a platform for facilitating research and collaboration. I am a full-time employee at Varian Medical Systems, Palo Alto.

  15. Modeling the Arden Syntax for medical decisions in XML.

    PubMed

    Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung

    2008-10-01

    A new model expressing Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered to be optimized. When the first schema was finished, several MLMs in Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema, and reverse-parsed MLMs in ArdenML were checked using a public domain Arden Syntax checker. Two hundred seventy-seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse-parse yielded the original text version of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax checking mechanisms: first, an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.
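
    A hedged sketch of the two checking steps the abstract describes, using the third-party lxml library: validate a document against an XML Schema, then apply an XSL style sheet. The file names (arden.xsd, mlm.xml, arden-to-html.xsl) are hypothetical placeholders rather than artifacts from the study.

    ```python
    # Validate an XML document against an XSD, then transform it with XSLT.
    from lxml import etree

    schema = etree.XMLSchema(etree.parse("arden.xsd"))    # hypothetical schema file
    doc = etree.parse("mlm.xml")                          # hypothetical MLM document

    if not schema.validate(doc):
        for error in schema.error_log:                    # report schema violations
            print(error.message)
    else:
        transform = etree.XSLT(etree.parse("arden-to-html.xsl"))
        html = transform(doc)                             # apply the style sheet
        print(str(html)[:200])
    ```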

  16. Applying Semantic Web Concepts to Support Net-Centric Warfare Using the Tactical Assessment Markup Language (TAML)

    DTIC Science & Technology

    2006-06-01

    SPARQL SPARQL Protocol and RDF Query Language SQL Structured Query Language SUMO Suggested Upper Merged Ontology SW... Query optimization algorithms are implemented in the Pellet reasoner in order to ensure querying a knowledge base is efficient. These algorithms...memory as a treelike structure in order for the data to be queried. XML Query (XQuery) is the standard language used when querying XML

  17. Utilization of Forward Error Correction (FEC) Techniques With Extensible Markup Language (XML) Schema-Based Binary Compression (XSBC) Technology

    DTIC Science & Technology

    2004-12-01

  18. Using SWE Standards for Ubiquitous Environmental Sensing: A Performance Analysis

    PubMed Central

    Tamayo, Alain; Granell, Carlos; Huerta, Joaquín

    2012-01-01

    Although smartphone applications represent the most typical data consumer tool from the citizen perspective in environmental applications, they can also be used for in-situ data collection and production in varied scenarios, such as geological sciences and biodiversity. The use of standard protocols, such as SWE, to exchange information between smartphones and sensor infrastructures brings benefits such as interoperability and scalability, but their reliance on XML is a potential problem when large volumes of data are transferred, due to limited bandwidth and processing capabilities on mobile phones. In this article we present a performance analysis about the use of SWE standards in smartphone applications to consume and produce environmental sensor data, analysing to what extent the performance problems related to XML can be alleviated by using alternative uncompressed and compressed formats.

  19. Extending Phrase-Based Decoding with a Dependency-Based Reordering Model

    DTIC Science & Technology

    2009-11-01

    strictly within the confines of phrase-based translation. The hope was to introduce an approach that could take advantage of monolingual syntactic...tuple represents one element of the XML markup, where element is the name of this element, attributes is a dictionary (mapping strings to strings...representing the range of possible compressions, in the form of a dictionary mapping the latter to the former. To represent multiple dependency

  20. [The compression and storage of enhanced external counterpulsation waveform based on DICOM standard].

    PubMed

    Hu, Ding; Xie, Shuqun; Yu, Donglan; Zheng, Zhensheng; Wang, Kuijian

    2010-04-01

    The development of an external counterpulsation (ECP) local area network system and an extensible markup language (XML)-based remote ECP medical information system conforming to the digital imaging and communications in medicine (DICOM) standard has been improving the digital interchangeability and shareability of ECP data. However, ECP therapy is a continuous, long-duration process of supervision that generates a mass of waveform data. In order to reduce the storage space and improve the transmission efficiency, the waveform data in the normative ECP data file format have to be compressed. In this article, we introduce a compression algorithm combining template matching with an improved quick-fitting version of linear approximation distance thresholding (LADT), tailored to the characteristics of the enhanced external counterpulsation (EECP) waveform signal. The DICOM standard is used as the storage and transmission standard to make our system compatible with hospital information systems. According to the rules of transfer syntaxes, we defined a private transfer syntax for one-dimensional compressed waveform data and stored EECP data in a DICOM file. Testing results indicate that the compressed, normative data can be correctly transmitted and displayed between EECP workstations in our EECP laboratory.
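
    A simplified, illustrative version of the LADT idea (not the authors' improved quick-fitting variant): the waveform is approximated by straight segments, and a new segment starts whenever some intermediate sample deviates from the current chord by more than a threshold, so only segment endpoints need to be stored. The sample waveform and threshold below are made up.

    ```python
    # Greedy piecewise-linear approximation with a distance threshold: keep only
    # the indices where a new straight segment has to start.
    def ladt(samples, eps):
        kept = [0]                                    # indices of retained samples
        anchor = 0
        for j in range(2, len(samples)):
            x0, y0 = anchor, samples[anchor]
            x1, y1 = j, samples[j]
            # vertical distance of intermediate samples from the chord anchor -> j
            worst = max(abs(samples[k] - (y0 + (y1 - y0) * (k - x0) / (x1 - x0)))
                        for k in range(anchor + 1, j))
            if worst > eps:
                kept.append(j - 1)                    # close the segment at the previous sample
                anchor = j - 1
        kept.append(len(samples) - 1)
        return kept

    wave = [0, 1, 2, 3, 2.5, 2, 1, 0.5, 0, 0, 0]
    print(ladt(wave, eps=0.3))                        # indices of retained samples
    ```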

  1. XML-BSPM: an XML format for storing Body Surface Potential Map recordings.

    PubMed

    Bond, Raymond R; Finlay, Dewar D; Nugent, Chris D; Moore, George

    2010-05-14

    The Body Surface Potential Map (BSPM) is an electrocardiographic method for recording and displaying the electrical activity of the heart from a spatial perspective. The BSPM has been deemed more accurate for assessing certain cardiac pathologies when compared to the 12-lead ECG. Nevertheless, the 12-lead ECG remains the most popular ECG acquisition method for non-invasively assessing the electrical activity of the heart. Although data from the 12-lead ECG can be stored and shared using open formats such as SCP-ECG, no open formats currently exist for storing and sharing the BSPM. As a result, an innovative format for storing BSPM datasets has been developed within this study. The XML vocabulary was chosen for implementation, as opposed to binary, for the purpose of human readability. There are currently no standards to dictate the number of electrodes and electrode positions for recording a BSPM. In fact, there are at least 11 different BSPM electrode configurations in use today. Therefore, in order to support these BSPM variants, the XML-BSPM format was made versatile. Hence, the format supports the storage of custom torso diagrams using SVG graphics. This diagram can then be used in a 2D coordinate system for retaining electrode positions. This XML-BSPM format has been successfully used to store the Kornreich-117 BSPM dataset and the Lux-192 BSPM dataset. The resulting file sizes were in the region of 277 kilobytes for each BSPM recording and can be deemed suitable, for example, for use with any telemonitoring application. Moreover, there is potential for file sizes to be further reduced using basic compression algorithms, i.e. the deflate algorithm. Finally, these BSPM files have been parsed and visualised within a convenient time period using a web based BSPM viewer. This format, if widely adopted, could promote BSPM interoperability, knowledge sharing and data mining. This work could also be used to provide conceptual solutions and inspire existing formats such as DICOM, SCP-ECG and aECG to support the storage of BSPMs. In summary, this research provides initial ground work for creating a complete BSPM management system.
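
    A tiny sketch of the anticipated DEFLATE saving, using Python's zlib; the XML payload below is a stand-in rather than the real XML-BSPM vocabulary.

    ```python
    # Compress an XML-like payload with DEFLATE and report the size reduction.
    import zlib

    xml_bspm = ("<bspm><electrode id='1'><samples>"
                + " ".join(str(v % 7) for v in range(2000))
                + "</samples></electrode></bspm>").encode()

    compressed = zlib.compress(xml_bspm, 9)          # DEFLATE at maximum level
    print(len(xml_bspm), "->", len(compressed), "bytes")
    ```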

  2. Trick Simulation Environment 07

    NASA Technical Reports Server (NTRS)

    Lin, Alexander S.; Penn, John M.

    2012-01-01

    The Trick Simulation Environment is a generic simulation toolkit used for constructing and running simulations. This release includes a Monte Carlo analysis simulation framework and a data analysis package. It produces all auto documentation in XML. Also, the software is capable of inserting a malfunction at any point during the simulation. Trick 07 adds variable server output options and error messaging and is capable of using and manipulating wide characters for international support. Wide character strings are available as a fundamental type for variables processed by Trick. A Trick Monte Carlo simulation uses a statistically generated, or predetermined, set of inputs to iteratively drive the simulation. Also, there is a framework in place for optimization and solution finding where developers may iteratively modify the inputs per run based on some analysis of the outputs. The data analysis package is capable of reading data from external simulation packages such as MATLAB and Octave, as well as the common comma-separated values (CSV) format used by Excel, without the use of external converters. The file formats for MATLAB and Octave were obtained from their documentation sets, and Trick maintains generic file readers for each format. XML tags store the fields in the Trick header comments. For header files, XML tags for structures and enumerations, and the members within, are stored in the auto documentation. For source code files, XML tags for each function and the calling arguments are stored in the auto documentation. When a simulation is built, a top level XML file, which includes all of the header and source code XML auto documentation files, is created in the simulation directory. Trick 07 provides an XML to TeX converter. The converter reads in header and source code XML documentation files and converts the data to TeX labels and tables suitable for inclusion in TeX documents. A malfunction insertion capability allows users to override the value of any simulation variable, or call a malfunction job, at any time during the simulation. Users may specify conditions, use the return value of a malfunction trigger job, or manually activate a malfunction. The malfunction action may consist of executing a block of input file statements in an action block, setting simulation variable values, calling a malfunction job, or turning simulation jobs on or off.

  3. An Efficient G-XML Data Management Method using XML Spatial Index for Mobile Devices

    NASA Astrophysics Data System (ADS)

    Tamada, Takashi; Momma, Kei; Seo, Kazuo; Hijikata, Yoshinori; Nishida, Shogo

    This paper presents an efficient G-XML data management method for mobile devices. G-XML is an XML-based encoding for the transport of geographic information. Mobile devices, such as PDAs and mobile phones, trail desktop machines in performance, so some techniques are needed for processing G-XML data on mobile devices. In this method, an XML-format spatial index file is used to improve the initial display time of G-XML data. This index file contains an XML pointer to each feature in the G-XML data and classifies these features using multi-dimensional data structures. The experimental results show that this method speeds up the initial display time of G-XML data on mobile devices by about 3-7 times.

  4. Speed up of XML parsers with PHP language implementation

    NASA Astrophysics Data System (ADS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2012-11-01

    In this paper, the authors introduce PHP5's XML implementation and show how to read, parse, and write a short and uncomplicated XML file using SimpleXML in a PHP environment. The possibilities for using the PHP5 language and the XML standard together are described. The details of the parsing process with SimpleXML are also clarified. A practical PHP-XML-MySQL project presents the advantages of XML implementation in PHP modules. This approach allows comparatively simple searching of hierarchical XML data by means of PHP software tools. The proposed project includes a database, which can be extended with new data and new XML parsing functions.

  5. XML under the Hood.

    ERIC Educational Resources Information Center

    Scharf, David

    2002-01-01

    Discusses XML (extensible markup language), particularly as it relates to libraries. Topics include organizing information; cataloging; metadata; similarities to HTML; organizations dealing with XML; making XML useful; a history of XML; the semantic Web; related technologies; XML at the Library of Congress; and its role in improving the…

  6. Integrating and visualizing primary data from prospective and legacy taxonomic literature

    PubMed Central

    Agosti, Donat; Penev, Lyubomir; Sautter, Guido; Georgiev, Teodor; Catapano, Terry; Patterson, David; King, David; Pereira, Serrano; Vos, Rutger Aldo; Sierra, Soraya

    2015-01-01

    Abstract Specimen data in taxonomic literature are among the highest quality primary biodiversity data. Innovative cybertaxonomic journals are using workflows that maintain data structure and disseminate electronic content to aggregators and other users; such structure is lost in traditional taxonomic publishing. Legacy taxonomic literature is a vast repository of knowledge about biodiversity. Currently, access to that resource is cumbersome, especially for non-specialist data consumers. Markup is a mechanism that makes this content more accessible, and is especially suited to machine analysis. Fine-grained XML (Extensible Markup Language) markup was applied to all (37) open-access articles published in the journal Zootaxa containing treatments on spiders (Order: Araneae). The markup approach was optimized to extract primary specimen data from legacy publications. These data were combined with data from articles containing treatments on spiders published in Biodiversity Data Journal where XML structure is part of the routine publication process. A series of charts was developed to visualize the content of specimen data in XML-tagged taxonomic treatments, either singly or in aggregate. The data can be filtered by several fields (including journal, taxon, institutional collection, collecting country, collector, author, article and treatment) to query particular aspects of the data. We demonstrate here that XML markup using GoldenGATE can address the challenge presented by unstructured legacy data, can extract structured primary biodiversity data which can be aggregated with and jointly queried with data from other Darwin Core-compatible sources, and show how visualization of these data can communicate key information contained in biodiversity literature. We complement recent studies on aspects of biodiversity knowledge using XML structured data to explore 1) the time lag between species discovery and description, and 2) the prevalence of rarity in species descriptions. PMID:26023286

  7. Semantically Interoperable XML Data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-01-01

    XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through latest native XML database technologies. XML data sources conforming to common XML schemas could be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantic interoperable XML based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantic validated XML data through semantic annotations for XML Schema, semantic validation and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789

  8. Using XML to encode TMA DES metadata.

    PubMed

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.

  9. Using XML to encode TMA DES metadata

    PubMed Central

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    Background: The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs. PMID:21969921

  10. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  11. Software Development Of XML Parser Based On Algebraic Tools

    NASA Astrophysics Data System (ADS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2011-12-01

    In this paper, we present the development and implementation of an algebraic method for XML data processing that accelerates the XML parsing process. The nontraditional approach proposed in this article for fast XML navigation with algebraic tools contributes to ongoing efforts to create an easier, user-friendly API for XML transformations. The proposed software for XML document processing (a parser) is easy to use and can manage files with a strictly defined data structure. The purpose of the presented algorithm is to offer a new approach for searching and restructuring hierarchical XML data. This approach permits fast XML document processing, using an algebraic model developed in detail in previous works by the same authors. The proposed parsing mechanism is easily accessible to the web consumer, who is able to control XML file processing, to search for different elements (tags) in it, and to delete and add new XML content. The presented tests show higher speed and lower consumption of resources in comparison with some existing commercial parsers.

  12. A Space Surveillance Ontology: Captured in an XML Schema

    DTIC Science & Technology

    2000-10-01

    characterization in a way most appropriate to a sub-domain. 6. The commercial market is embracing XML, and the military can take advantage of this significant...the space surveillance ontology effort to two key efforts: the Defense Information Infrastructure Common Operating Environment (DII COE) XML...strongly believe XML schemas will supplant them. Some of the advantages that XML schemas provide over DTDs include: • Strong data typing: The XML Schema

  13. The stochastic energy-Casimir method

    NASA Astrophysics Data System (ADS)

    Arnaudon, Alexis; Ganaba, Nader; Holm, Darryl D.

    2018-04-01

    In this paper, we extend the energy-Casimir stability method for deterministic Lie-Poisson Hamiltonian systems to provide sufficient conditions for stability in probability of stochastic dynamical systems with symmetries. We illustrate this theory with classical examples of coadjoint motion, including the rigid body, the heavy top, and the compressible Euler equation in two dimensions. The main result is that stable deterministic equilibria remain stable in probability up to a certain stopping time that depends on the amplitude of the noise for finite-dimensional systems and on the amplitude of the spatial derivative of the noise for infinite-dimensional systems.

  14. An XML Data Model for Inverted Image Indexing

    NASA Astrophysics Data System (ADS)

    So, Simon W.; Leung, Clement H. C.; Tse, Philip K. C.

    2003-01-01

    The Internet world makes increasing use of XML-based technologies. In multimedia data indexing and retrieval, the MPEG-7 standard for Multimedia Description Scheme is specified using XML. The flexibility of XML allows users to define other markup semantics for special contexts, construct data-centric XML documents, exchange standardized data between computer systems, and present data in different applications. In this paper, the Inverted Image Indexing paradigm is presented and modeled using XML Schema.

  15. ScotlandsPlaces XML: Bespoke XML or XML Mapping?

    ERIC Educational Resources Information Center

    Beamer, Ashley; Gillick, Mark

    2010-01-01

    Purpose: The purpose of this paper is to investigate web services (in the form of parameterised URLs), specifically in the context of the ScotlandsPlaces project. This involves cross-domain querying, data retrieval and display via the development of a bespoke XML standard rather than existing XML formats and mapping between them.…

  16. An Introduction to the Extensible Markup Language (XML).

    ERIC Educational Resources Information Center

    Bryan, Martin

    1998-01-01

    Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)

  17. XML Reconstruction View Selection in XML Databases: Complexity Analysis and Approximation Scheme

    NASA Astrophysics Data System (ADS)

    Chebotko, Artem; Fu, Bin

    Query evaluation in an XML database requires reconstructing XML subtrees rooted at nodes found by an XML query. Since XML subtree reconstruction can be expensive, one approach to improve query response time is to use reconstruction views - materialized XML subtrees of an XML document, whose nodes are frequently accessed by XML queries. For this approach to be efficient, the principal requirement is a framework for view selection. In this work, we are the first to formalize and study the problem of XML reconstruction view selection. The input is a tree T, in which every node $i$ has a size $c_i$ and profit $p_i$, and the size limitation $C$. The target is to find a subset of subtrees rooted at nodes $i_1, \dots, i_k$ respectively such that $c_{i_1}+\cdots+c_{i_k} \le C$, and $p_{i_1}+\cdots+p_{i_k}$ is maximal. Furthermore, there is no overlap between any two subtrees selected in the solution. We prove that this problem is NP-hard and present a fully polynomial-time approximation scheme (FPTAS) as a solution.
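
    As a concrete illustration of the problem statement, a pseudo-polynomial dynamic program over the tree (assuming integer sizes) is sketched below: at each node, either select the subtree rooted there, or combine the best selections of its children under the remaining budget. This is a plain DP for small inputs, not the FPTAS from the paper, and the example tree is invented.

    ```python
    # Select non-overlapping subtrees under a total size budget C, maximizing profit.
    def select_views(children, c, p, root, C):
        def dp(v):
            # best profit within the subtree of v, for every budget 0..C
            merged = [0] * (C + 1)
            for ch in children.get(v, []):
                child = dp(ch)
                new = merged[:]
                for b in range(C + 1):
                    for x in range(b + 1):          # split the budget between children
                        cand = merged[b - x] + child[x]
                        if cand > new[b]:
                            new[b] = cand
                merged = new
            best = merged[:]
            for b in range(c[v], C + 1):            # option: select the subtree rooted at v
                if p[v] > best[b]:
                    best[b] = p[v]
            return best
        return dp(root)[C]

    children = {1: [2, 3], 3: [4, 5]}               # small example tree rooted at node 1
    c = {1: 10, 2: 3, 3: 6, 4: 2, 5: 2}             # subtree sizes
    p = {1: 9, 2: 4, 3: 7, 4: 3, 5: 3}              # subtree profits
    print(select_views(children, c, p, root=1, C=7))   # picks subtrees at 2, 4, 5 -> 10
    ```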

  18. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di, Sheng; Cappello, Franck

    Since today's scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
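
    A small sketch of the XOR-leading-zero measure mentioned above (without the shifting-offset optimization itself): two consecutive double values are XORed as raw 64-bit words, and a longer run of leading zero bits means fewer bits are needed to encode the second value.

    ```python
    # Count leading zero bits of the XOR of two IEEE-754 doubles.
    import struct

    def xor_leading_zeros(a: float, b: float) -> int:
        ia = struct.unpack("<Q", struct.pack("<d", a))[0]   # raw 64-bit words
        ib = struct.unpack("<Q", struct.pack("<d", b))[0]
        x = ia ^ ib
        return 64 if x == 0 else 64 - x.bit_length()

    print(xor_leading_zeros(1.2345, 1.2346))   # close values share many leading bits
    print(xor_leading_zeros(1.2345, -97.5))    # distant values share few
    ```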

  19. jmzReader: A Java parser library to process and visualize multiple text and XML-based mass spectrometry data formats.

    PubMed

    Griss, Johannes; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2012-03-01

    We here present the jmzReader library: a collection of Java application programming interfaces (APIs) to parse the most commonly used peak list and XML-based mass spectrometry (MS) data formats: DTA, MS2, MGF, PKL, mzXML, mzData, and mzML (based on the already existing API jmzML). The library is optimized to be used in conjunction with mzIdentML, the recently released standard data format for reporting protein and peptide identifications, developed by the HUPO proteomics standards initiative (PSI). mzIdentML files do not contain spectra data but contain references to different kinds of external MS data files. As a key functionality, all parsers implement a common interface that supports the various methods used by mzIdentML to reference external spectra. Thus, when developing software for mzIdentML, programmers no longer have to support multiple MS data file formats but only this one interface. The library (which includes a viewer) is open source and, together with detailed documentation, can be downloaded from http://code.google.com/p/jmzreader/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. XML Style Guide

    DTIC Science & Technology

    2015-07-01

    Acronyms ASCII American Standard Code for Information Interchange DAU data acquisition unit DDML data display markup language IHAL...Transfer Standard URI uniform resource identifier W3C World Wide Web Consortium XML extensible markup language XSD XML schema definition XML Style...Style Guide, RCC 125-15, July 2015 1 Introduction The next generation of telemetry systems will rely heavily on extensible markup language (XML

  1. Generalized massive optimal data compression

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
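
    For reference, a hedged sketch of what score compression looks like for Gaussian data, written out from the standard Gaussian log-likelihood rather than quoted from the paper: one summary per parameter, evaluated at a fiducial point.

    ```latex
    % One compressed statistic per parameter \theta_\alpha, evaluated at a
    % fiducial point \theta_*: the score of the log-likelihood.
    t_\alpha \;=\; \left.\frac{\partial \ln \mathcal{L}(d\,|\,\theta)}{\partial \theta_\alpha}\right|_{\theta_*}

    % For Gaussian data with parameter-dependent mean \mu(\theta) and covariance
    % C(\theta), differentiating
    %   \ln\mathcal{L} = -\tfrac12 (d-\mu)^T C^{-1} (d-\mu) - \tfrac12 \ln\det C + \text{const}
    % gives (all quantities evaluated at \theta_*):
    t_\alpha \;=\; (\partial_\alpha \mu)^{T} C^{-1} (d-\mu)
      \;+\; \tfrac12\,(d-\mu)^{T} C^{-1} (\partial_\alpha C)\, C^{-1} (d-\mu)
      \;-\; \tfrac12\,\operatorname{tr}\!\left(C^{-1}\,\partial_\alpha C\right)
    ```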

  2. Framework and prototype for a secure XML-based electronic health records system.

    PubMed

    Steele, Robert; Gardner, William; Chandra, Darius; Dillon, Tharam S

    2007-01-01

    Security of personal medical information has always been a challenge for the advancement of Electronic Health Records (EHRs) initiatives. eXtensible Markup Language (XML), is rapidly becoming the key standard for data representation and transportation. The widespread use of XML and the prospect of its use in the Electronic Health (e-health) domain highlights the need for flexible access control models for XML data and documents. This paper presents a declarative access control model for XML data repositories that utilises an expressive XML role control model. The operational semantics of this model are illustrated by Xplorer, a user interface generation engine which supports search-browse-navigate activities on XML repositories.

  3. Using XML to Separate Content from the Presentation Software in eLearning Applications

    ERIC Educational Resources Information Center

    Merrill, Paul F.

    2005-01-01

    This paper has shown how XML (extensible Markup Language) can be used to mark up content. Since XML documents, with meaningful tags, can be interpreted easily by humans as well as computers, they are ideal for the interchange of information. Because XML tags can be defined by an individual or organization, XML documents have proven useful in a…

  4. The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition

    NASA Astrophysics Data System (ADS)

    Fong, Joseph; Cheung, San Kuen

    In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema of an XML database, the XML model has limitations in presenting its data semantics, and system analysts have had no toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze the structure of an XML database. It is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, is mainly intended to provide a user-friendly environment for writing an XSD. The source language is consequently translated by the XSD-Translator. The output of the XSD-Translator is an XSD, which is our target and is called the object language.

  5. Querying XML Data with SPARQL

    NASA Astrophysics Data System (ADS)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

    SPARQL is today the standard access language for Semantic Web data. In the recent years XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries which are used to access the XML databases. We present the algorithms and the implementation of SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.

  6. Pathology data integration with eXtensible Markup Language.

    PubMed

    Berman, Jules J

    2005-02-01

    It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.

  7. phyloXML: XML for evolutionary biology and comparative genomics

    PubMed Central

    Han, Mira V; Zmasek, Christian M

    2009-01-01

    Background Evolutionary trees are central to a wide range of biological studies. In many of these studies, tree nodes and branches need to be associated (or annotated) with various attributes. For example, in studies concerned with organismal relationships, tree nodes are associated with taxonomic names, whereas tree branches have lengths and oftentimes support values. Gene trees used in comparative genomics or phylogenomics are usually annotated with taxonomic information, genome-related data, such as gene names and functional annotations, as well as events such as gene duplications, speciations, or exon shufflings, combined with information related to the evolutionary tree itself. The data standards currently used for evolutionary trees have limited capacities to incorporate such annotations of different data types. Results We developed a XML language, named phyloXML, for describing evolutionary trees, as well as various associated data items. PhyloXML provides elements for commonly used items, such as branch lengths, support values, taxonomic names, and gene names and identifiers. By using "property" elements, phyloXML can be adapted to novel and unforeseen use cases. We also developed various software tools for reading, writing, conversion, and visualization of phyloXML formatted data. Conclusion PhyloXML is an XML language defined by a complete schema in XSD that allows storing and exchanging the structures of evolutionary trees as well as associated data. More information about phyloXML itself, the XSD schema, as well as tools implementing and supporting phyloXML, is available at . PMID:19860910

  8. XML Files

    MedlinePlus

    MedlinePlus produces XML data sets that you are welcome to download and use. If you have questions about the MedlinePlus XML files, please contact us. For additional sources of MedlinePlus data in XML format, visit our Web service page, ...

  9. δ-dependency for privacy-preserving XML data publishing.

    PubMed

    Landberg, Anders H; Nguyen, Kinh; Pardede, Eric; Rahayu, J Wenny

    2014-08-01

    An ever increasing amount of medical data such as electronic health records, is being collected, stored, shared and managed in large online health information systems and electronic medical record systems (EMR) (Williams et al., 2001; Virtanen, 2009; Huang and Liou, 2007) [1-3]. From such rich collections, data is often published in the form of census and statistical data sets for the purpose of knowledge sharing and enabling medical research. This brings with it an increasing need for protecting individual people privacy, and it becomes an issue of great importance especially when information about patients is exposed to the public. While the concept of data privacy has been comprehensively studied for relational data, models and algorithms addressing the distinct differences and complex structure of XML data are yet to be explored. Currently, the common compromise method is to convert private XML data into relational data for publication. This ad hoc approach results in significant loss of useful semantic information previously carried in the private XML data. Health data often has very complex structure, which is best expressed in XML. In fact, XML is the standard format for exchanging (e.g. HL7 version 3(1)) and publishing health information. Lack of means to deal directly with data in XML format is inevitably a serious drawback. In this paper we propose a novel privacy protection model for XML, and an algorithm for implementing this model. We provide general rules, both for transforming a private XML schema into a published XML schema, and for mapping private XML data to the new privacy-protected published XML data. In addition, we propose a new privacy property, δ-dependency, which can be applied to both relational and XML data, and that takes into consideration the hierarchical nature of sensitive data (as opposed to "quasi-identifiers"). Lastly, we provide an implementation of our model, algorithm and privacy property, and perform an experimental analysis, to demonstrate the proposed privacy scheme in practical application. Copyright © 2014. Published by Elsevier Inc.

  10. A Survey in Indexing and Searching XML Documents.

    ERIC Educational Resources Information Center

    Luk, Robert W. P.; Leong, H. V.; Dillon, Tharam S.; Chan, Alvin T. S.; Croft, W. Bruce; Allan, James

    2002-01-01

    Discussion of XML focuses on indexing techniques for XML documents, grouping them into flat-file, semistructured, and structured indexing paradigms. Highlights include searching techniques, including full text search and multistage search; search result presentations; database and information retrieval system integration; XML query languages; and…

  11. XML and E-Journals: The State of Play.

    ERIC Educational Resources Information Center

    Wusteman, Judith

    2003-01-01

    Discusses the introduction of the use of XML (Extensible Markup Language) in publishing electronic journals. Topics include standards, including DTDs (Document Type Definition), or document type definitions; aggregator requirements; SGML (Standard Generalized Markup Language); benefits of XML for e-journals; XML metadata; the possibility of…

  12. XML in Libraries.

    ERIC Educational Resources Information Center

    Tennant, Roy, Ed.

    This book presents examples of how libraries are using XML (eXtensible Markup Language) to solve problems, expand services, and improve systems. Part I contains papers on using XML in library catalog records: "Updating MARC Records with XMLMARC" (Kevin S. Clarke, Stanford University) and "Searching and Retrieving XML Records via the…

  13. Image compression system and method having optimized quantization tables

    NASA Technical Reports Server (NTRS)

    Ratnakar, Viresh (Inventor); Livny, Miron (Inventor)

    1998-01-01

    A digital image compression preprocessor for use in a discrete cosine transform-based digital image compression device is provided. The preprocessor includes a gathering mechanism for determining discrete cosine transform statistics from input digital image data. A computing mechanism is operatively coupled to the gathering mechanism to calculate an image distortion array and a rate of image compression array based upon the discrete cosine transform statistics for each possible quantization value. A dynamic programming mechanism is operatively coupled to the computing mechanism to optimize the rate of image compression array against the image distortion array such that a rate-distortion-optimal quantization table is derived. In addition, a discrete cosine transform-based digital image compression device and a discrete cosine transform-based digital image compression and decompression system are provided. Also, a method for generating a rate-distortion-optimal quantization table, using discrete cosine transform-based digital image compression, and operating a discrete cosine transform-based digital image compression and decompression system are provided.
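
    As a rough illustration of the statistics-gathering step described above, the sketch below collects one DCT coefficient position from every 8x8 block of a toy image and tabulates distortion and an entropy-based rate estimate for a range of quantization steps. These are the kind of per-quantizer arrays the patented dynamic-programming search would optimize over; the search itself is omitted, and the rate model is a simplification.

```python
# Sketch of gathering DCT statistics and the per-quantizer distortion/rate
# arrays described above (the dynamic-programming search itself is omitted).
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(float)

# Collect one DCT coefficient (here position (0, 1)) from every 8x8 block.
coeffs = []
for r in range(0, 64, 8):
    for c in range(0, 64, 8):
        coeffs.append(dct2(image[r:r+8, c:c+8])[0, 1])
coeffs = np.array(coeffs)

for q in (2, 4, 8, 16, 32):
    quantized = np.round(coeffs / q)
    distortion = np.mean((coeffs - quantized * q) ** 2)
    # Entropy of the quantized symbols as a crude rate estimate (bits/coeff).
    _, counts = np.unique(quantized, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))
    print(f"q={q:2d}  distortion={distortion:8.2f}  rate={rate:5.2f} bits")
```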

  14. Setting the Standard: XML on Campus.

    ERIC Educational Resources Information Center

    Rawlins, Mike

    2001-01-01

    Explains what XML (Extensible Markup Language) is; where to find it in a few years (everywhere from Web pages, to database management systems, to common campus applications); issues that will make XML somewhat of an experimental strategy in the near term; and the importance of decision-makers being abreast of XML trends in standards, tools…

  15. Using a Combination of UML, C2RM, XML, and Metadata Registries to Support Long-Term Development/Engineering

    DTIC Science & Technology

    2003-01-01

    Authenticat’n (XCBF) Authorizat’n (XACML) (SAML) Privacy (P3P) Digital Rights Management (XrML) Content Mngmnt (DASL) (WebDAV) Content Syndicat’n...Registry/ Repository BPSS eCommerce XML/EDI Universal Business Language (UBL) Internet & Computing Human Resources (HR-XML) Semantic KEY XML SPECIFICATIONS

  16. XML — an opportunity for data standards in the geosciences

    NASA Astrophysics Data System (ADS)

    Houlding, Simon W.

    2001-08-01

    Extensible markup language (XML) is a recently introduced meta-language standard on the Web. It provides the rules for development of metadata (markup) standards for information transfer in specific fields. XML allows development of markup languages that describe what information is rather than how it should be presented. This allows computer applications to process the information in intelligent ways. In contrast hypertext markup language (HTML), which fuelled the initial growth of the Web, is a metadata standard concerned exclusively with presentation of information. Besides its potential for revolutionizing Web activities, XML provides an opportunity for development of meaningful data standards in specific application fields. The rapid endorsement of XML by science, industry and e-commerce has already spawned new metadata standards in such fields as mathematics, chemistry, astronomy, multi-media and Web micro-payments. Development of XML-based data standards in the geosciences would significantly reduce the effort currently wasted on manipulating and reformatting data between different computer platforms and applications and would ensure compatibility with the new generation of Web browsers. This paper explores the evolution, benefits and status of XML and related standards in the more general context of Web activities and uses this as a platform for discussion of its potential for development of data standards in the geosciences. Some of the advantages of XML are illustrated by a simple, browser-compatible demonstration of XML functionality applied to a borehole log dataset. The XML dataset and the associated stylesheet and schema declarations are available for FTP download.
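
    The borehole-log demonstration referred to above is not reproduced here. The element and attribute names below are invented, but they show the kind of self-describing markup the paper advocates: a script can compute with the data without knowing anything about its presentation.

```python
# Hypothetical borehole-log markup (element names are illustrative, not the
# author's actual schema) and a small consumer that ignores presentation.
import xml.etree.ElementTree as ET

log = ET.fromstring("""
<boreholeLog id="BH-07">
  <collar lat="51.04" lon="-114.07" elevation_m="1045"/>
  <interval from_m="0.0" to_m="3.2" lithology="till"/>
  <interval from_m="3.2" to_m="11.8" lithology="sandstone"/>
  <interval from_m="11.8" to_m="15.0" lithology="shale"/>
</boreholeLog>""")

total = sum(float(i.get("to_m")) - float(i.get("from_m"))
            for i in log.findall("interval"))
print(f"Logged thickness for {log.get('id')}: {total:.1f} m")
```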

  17. Report of Official Foreign Travel to Montreal, Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, J. D.

    How can DOE, NNSA, and Y-12 best handle the integration of information from diverse sources, and what will best ensure that legacy data will survive changes in computing systems for the future? Although there is no simple answer, it is becoming increasingly clear throughout the information-management industry that a key component of both preservation and integration of information is the adoption of standardized data formats. The most notable standardized format is XML, to which almost all data is now migrating. XML is derived from SGML, as is HTML, the common language of the World Wide Web. XML is becoming increasingly important as part of the Y-12 data infrastructure. Y-12 is implementing a new generation of XML-based publishing systems. Y-12 already has been supporting projects at DOE Headquarters, such as the Guidance Streamlining Initiative (GSI) that will result in the storage of classification guidance in XML. Y-12 collects some test data in XML as the result of Electronic Data Capture (EDC), and XML data is also used in Engineering Releases. I am participating in a series of projects sponsored by the PRIDE initiative that include the capture of dimensional certification and other similar records in XML, the creation of XML formats for Electronic Data Capture, and the creation of Quality Evaluation Reports in XML. In support of DOE's use of SGML, XML, HTML, Topic Maps, and related standards, I served 1985-2007 as chairman of the international committee responsible for SGML and standards derived from it, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations; I continue to belong to the committee. During the August 2010 trip, I co-chaired the conference Balisage 2010.

  18. XML: A Language To Manage the World Wide Web. ERIC Digest.

    ERIC Educational Resources Information Center

    Davis-Tanous, Jennifer R.

    This digest provides an overview of XML (Extensible Markup Language), a markup language used to construct World Wide Web pages. Topics addressed include: (1) definition of a markup language, including comparison of XML with SGML (Standard Generalized Markup Language) and HTML (HyperText Markup Language); (2) how XML works, including sample tags,…

  19. Interoperability, Data Control and Battlespace Visualization using XML, XSLT and X3D

    DTIC Science & Technology

    2003-09-01

    26 Rosenthal, Arnon, Seligman, Len and Costello, Roger, XML, Databases, and Interoperability, Federal Database Colloquium, AFCEA, San Diego...79 Rosenthal, Arnon, Seligman, Len and Costello, Roger, “XML, Databases, and Interoperability”, Federal Database Colloquium, AFCEA, San Diego, 1999... Linda, Mastering XML, Premium Edition, SYBEX, 2001 Wooldridge, Michael, An Introduction to MultiAgent Systems, Wiley, 2002 PAPERS Abernathy, M

  20. Applying Analogical Reasoning Techniques for Teaching XML Document Querying Skills in Database Classes

    ERIC Educational Resources Information Center

    Mitri, Michel

    2012-01-01

    XML has become the most ubiquitous format for exchange of data between applications running on the Internet. Most Web Services provide their information to clients in the form of XML. The ability to process complex XML documents in order to extract relevant information is becoming as important a skill for IS students to master as querying…

  1. Indexing Temporal XML Using FIX

    NASA Astrophysics Data System (ADS)

    Zheng, Tiankun; Wang, Xinjun; Zhou, Yingchun

    XML has become an important standard for the description and exchange of information. It is of practical significance to introduce temporal information on this basis, because time is an important property in almost every application domain. Such a database can track document history and recover information to its state at any earlier time, and is called a temporal XML database. We propose a new feature vector on the basis of FIX, a feature-based XML index, and build an index on a temporal XML database using a B+ tree, denoted TFIX. We also put forward a new query algorithm upon it for temporal queries. Our experiments show that this index has better performance than other kinds of XML indexes. The index can satisfy all TXPath queries with depth up to K(>0).
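
    The TFIX feature vectors and B+-tree layout are not described in the abstract, so they are not reproduced here. The sketch below only shows the underlying query that such an index is meant to accelerate: filtering elements by a valid-time interval, with the attribute names chosen purely for illustration.

```python
# Valid-time filtering over temporal XML (attribute names are illustrative;
# this is not the TFIX index itself, just the query it is meant to speed up).
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<employee id="42">
  <salary from="2001" to="2005">30000</salary>
  <salary from="2005" to="2010">38000</salary>
  <salary from="2010" to="9999">45000</salary>
</employee>""")

def as_of(root, tag, year):
    # Return the element value whose valid-time interval contains `year`.
    for el in root.findall(tag):
        if int(el.get("from")) <= year < int(el.get("to")):
            return el.text
    return None

print(as_of(doc, "salary", 2007))   # -> 38000
```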

  2. Towards the XML schema measurement based on mapping between XML and OO domain

    NASA Astrophysics Data System (ADS)

    Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja

    2017-07-01

    Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML Schemas is still developing. One of the research questions in the overall research led by the ideas described in this paper is whether already defined object-oriented design metrics can be applied to XML schemas on the basis of predefined mappings. In this paper, basic ideas for the mentioned mapping are presented. This mapping is a prerequisite for setting up the future approach to XML schema quality measurement with object-oriented metrics.
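
    The paper's actual XSD-to-OO mapping is not given in the abstract. One plausible (assumed) reading, counting each named complexType as a class and its element declarations as attributes, can be sketched as follows.

```python
# One plausible (assumed) XSD-to-OO mapping: named complexType -> class,
# its element declarations -> attributes. Not the authors' exact rules.
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

schema = ET.fromstring("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Address">
    <xs:sequence>
      <xs:element name="street" type="xs:string"/>
      <xs:element name="city" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
  <xs:complexType name="Person">
    <xs:sequence>
      <xs:element name="name" type="xs:string"/>
      <xs:element name="address" type="Address"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>""")

for ct in schema.findall(f"{XS}complexType"):
    attrs = ct.findall(f".//{XS}element")
    print(f"class {ct.get('name')}: {len(attrs)} attribute(s)")
```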

  3. Information Retrieval System for Japanese Standard Disease-Code Master Using XML Web Service

    PubMed Central

    Hatano, Kenji; Ohe, Kazuhiko

    2003-01-01

    An information retrieval system for the Japanese Standard Disease-Code Master using an XML Web Service has been developed. XML Web Services are a new distributed processing approach built on standard Internet technologies. With the seamless remote method invocation of an XML Web Service, users are able to get the latest disease code master information from their rich desktop applications or Internet web sites that refer to this service. PMID:14728364

  4. XML syntax for clinical laboratory procedure manuals.

    PubMed

    Saadawi, Gilan; Harrison, James H

    2003-01-01

    We have developed a document type definition (DTD) in Extensible Markup Language (XML) for clinical laboratory procedures. Our XML syntax can adequately structure a variety of procedure types across different laboratories and is compatible with current procedure standards. The combination of this format with an XML content management system and appropriate style sheets will allow efficient procedure maintenance, distributed access, customized display and effective searching across a large body of test information.

  5. Optimal wavelets for biomedical signal compression.

    PubMed

    Nielsen, Mogens; Kamavuako, Ernest Nlandu; Andersen, Michael Midtgaard; Lucas, Marie-Françoise; Farina, Dario

    2006-07-01

    Signal compression is gaining importance in biomedical engineering due to the potential applications in telemedicine. In this work, we propose a novel scheme of signal compression based on signal-dependent wavelets. To adapt the mother wavelet to the signal for the purpose of compression, it is necessary to define (1) a family of wavelets that depend on a set of parameters and (2) a quality criterion for wavelet selection (i.e., wavelet parameter optimization). We propose the use of an unconstrained parameterization of the wavelet for wavelet optimization. A natural performance criterion for compression is the minimization of the signal distortion rate given the desired compression rate. For coding the wavelet coefficients, we adopted the embedded zerotree wavelet coding algorithm, although any coding scheme may be used with the proposed wavelet optimization. As a representative example of application, the coding/encoding scheme was applied to surface electromyographic signals recorded from ten subjects. The distortion rate strongly depended on the mother wavelet (for example, for 50% compression rate, optimal wavelet, mean+/-SD, 5.46+/-1.01%; worst wavelet 12.76+/-2.73%). Thus, optimization significantly improved performance with respect to previous approaches based on classic wavelets. The algorithm can be applied to any signal type since the optimal wavelet is selected on a signal-by-signal basis. Examples of application to ECG and EEG signals are also reported.
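
    The signal-dependent wavelet parameterization and the embedded zerotree coder are the paper's own contributions and are not reproduced here. The sketch below only demonstrates the quality criterion it optimizes, distortion at a fixed compression rate, using off-the-shelf wavelets from PyWavelets as stand-ins for the optimized ones.

```python
# Distortion at a fixed compression rate for a given mother wavelet, using
# PyWavelets. The paper optimizes the wavelet itself; 'db4' etc. here are
# just stand-ins to show the selection criterion.
import numpy as np
import pywt

rng = np.random.default_rng(1)
signal = np.cumsum(rng.standard_normal(1024))   # toy biomedical-like signal

def distortion_percent(x, wavelet, keep_fraction):
    coeffs = pywt.wavedec(x, wavelet, level=4)
    flat = np.concatenate(coeffs)
    # Keep only the largest-magnitude fraction of coefficients.
    threshold = np.quantile(np.abs(flat), 1.0 - keep_fraction)
    kept = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
    reconstructed = pywt.waverec(kept, wavelet)[: len(x)]
    return 100.0 * np.linalg.norm(x - reconstructed) / np.linalg.norm(x)

for w in ("db2", "db4", "sym5"):
    print(w, f"{distortion_percent(signal, w, keep_fraction=0.5):.2f} %")
```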

  6. Specifics on a XML Data Format for Scientific Data

    NASA Astrophysics Data System (ADS)

    Shaya, E.; Thomas, B.; Cheung, C.

    An XML-based data format for interchange and archiving of scientific data would benefit in many ways from the features standardized in XML. Foremost of these features is the world-wide acceptance and adoption of XML. Applications, such as browsers, XQL and XSQL advanced query, XML editing, or CSS or XSLT transformation, that are coming out of industry and academia can be easily adopted and provide startling new benefits and features. We have designed a prototype of a core format for holding, in a very general way, parameters, tables, scalar and vector fields, atlases, animations and complex combinations of these. This eXtensible Data Format (XDF) makes use of XML functionalities such as: self-validation of document structure, default values for attributes, XLink hyperlinks, entity replacements, internal referencing, inheritance, and XSLT transformation. An API is available to aid in detailed assembly, extraction, and manipulation. Conversion tools to and from FITS and other existing data formats are under development. In the future, we hope to provide object oriented interfaces to C++, Java, Python, IDL, Mathematica, Maple, and various databases. http://xml.gsfc.nasa.gov/XDF

  7. A Priority Fuzzy Logic Extension of the XQuery Language

    NASA Astrophysics Data System (ADS)

    Škrbić, Srdjan; Wettayaprasit, Wiphada; Saeueng, Pannipa

    2011-09-01

    In recent years there have been significant research findings in flexible XML querying techniques using fuzzy set theory. Many types of fuzzy extensions to XML data model and XML query languages have been proposed. In this paper, we introduce priority fuzzy logic extensions to XQuery language. Describing these extensions we introduce a new query language. Moreover, we describe a way to implement an interpreter for this language using an existing XML native database.

  8. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    PubMed Central

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting a selective data compression. To achieve such an aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem into binary integer programs, which provide an energy-optimal solution under the given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art Collection Tree Protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that the selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
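
    The actual binary integer program, energy model, and protocol details are specified in the paper. The toy sketch below only illustrates the decision structure: per node, choose no compression or one of two algorithms so as to minimize energy under a latency budget, using invented cost numbers and brute-force search in place of an integer-programming solver.

```python
# Toy version of the selection problem: per node, choose "no compression" or
# one of two algorithms, minimizing total energy under a latency budget.
# Energy/latency numbers are invented for illustration.
from itertools import product

# (energy_cost, latency_cost) per option and per node.
OPTIONS = {
    "none":  (10.0, 1.0),
    "light": (7.0,  2.0),
    "heavy": (5.0,  4.0),
}
NODES = ["n1", "n2", "n3", "n4"]
LATENCY_BUDGET = 10.0

best = None
for choice in product(OPTIONS, repeat=len(NODES)):
    energy = sum(OPTIONS[c][0] for c in choice)
    latency = sum(OPTIONS[c][1] for c in choice)
    if latency <= LATENCY_BUDGET and (best is None or energy < best[0]):
        best = (energy, latency, dict(zip(NODES, choice)))

print("energy =", best[0], "latency =", best[1])
print(best[2])
```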

  9. NeXML: rich, extensible, and verifiable representation of comparative data and metadata.

    PubMed

    Vos, Rutger A; Balhoff, James P; Caravas, Jason A; Holder, Mark T; Lapp, Hilmar; Maddison, Wayne P; Midford, Peter E; Priyam, Anurag; Sukumaran, Jeet; Xia, Xuhua; Stoltzfus, Arlin

    2012-07-01

    In scientific research, integration and synthesis require a common understanding of where data come from, how much they can be trusted, and what they may be used for. To make such an understanding computer-accessible requires standards for exchanging richly annotated data. The challenges of conveying reusable data are particularly acute in regard to evolutionary comparative analysis, which comprises an ever-expanding list of data types, methods, research aims, and subdisciplines. To facilitate interoperability in evolutionary comparative analysis, we present NeXML, an XML standard (inspired by the current standard, NEXUS) that supports exchange of richly annotated comparative data. NeXML defines syntax for operational taxonomic units, character-state matrices, and phylogenetic trees and networks. Documents can be validated unambiguously. Importantly, any data element can be annotated, to an arbitrary degree of richness, using a system that is both flexible and rigorous. We describe how the use of NeXML by the TreeBASE and Phenoscape projects satisfies user needs that cannot be satisfied with other available file formats. By relying on XML Schema Definition, the design of NeXML facilitates the development and deployment of software for processing, transforming, and querying documents. The adoption of NeXML for practical use is facilitated by the availability of (1) an online manual with code samples and a reference to all defined elements and attributes, (2) programming toolkits in most of the languages used commonly in evolutionary informatics, and (3) input-output support in several widely used software applications. An active, open, community-based development process enables future revision and expansion of NeXML.

  10. Application of XML in DICOM

    NASA Astrophysics Data System (ADS)

    You, Xiaozhen; Yao, Zhihong

    2005-04-01

    As a standard for the communication and storage of medical digital images, DICOM has been playing a very important role in the integration of hospital information. In DICOM, tags are expressed by numbers, and only standard data elements can be shared by looking them up in the Data Dictionary, while private tags cannot. As such, a DICOM file's readability and extensibility are limited. In addition, reading DICOM files needs special software. In our research, we introduced XML into DICOM, defining a special XML-based DICOM transfer format, XML-DCM, and a DICOM storage format, X-DCM, as well as developing a program package to realize format interchange among DICOM, XML-DCM, and X-DCM. XML-DCM is based on the DICOM structure while replacing numeric tags with accessible XML character string tags. The merits are as follows: a) every character string tag of XML-DCM has an explicit meaning, so users can understand standard data elements and private data elements easily without looking up the Data Dictionary; in this way, the readability and data sharing of DICOM files are greatly improved; b) according to requirements, users can define new character string tags with explicit meanings in their own systems to extend the capacity of data elements; c) users can read the medical image and associated information conveniently through IE, ultimately enlarging the scope of data sharing. The application of the storage format X-DCM will reduce data redundancy and save storage memory. The results of practical application show that XML-DCM favors the integration and sharing of medical image data among different systems and devices.
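
    The XML-DCM schema itself is not reproduced here. The sketch below shows the general idea of replacing numeric DICOM tags with readable string tags; the tag names and values are hand-made rather than read from a real DICOM dataset.

```python
# Idea only: replace numeric DICOM tags with readable element names. Tag
# names/values below are hand-made; a real implementation would read them
# from a DICOM dataset (e.g. with pydicom).
import xml.etree.ElementTree as ET

dicom_elements = {
    ("0010", "0010"): ("PatientName", "DOE^JANE"),
    ("0008", "0060"): ("Modality", "CT"),
    ("0028", "0010"): ("Rows", "512"),
}

root = ET.Element("XML-DCM")
for (group, element), (name, value) in dicom_elements.items():
    el = ET.SubElement(root, name, group=group, element=element)
    el.text = value

print(ET.tostring(root, encoding="unicode"))
```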

  11. NeXML: Rich, Extensible, and Verifiable Representation of Comparative Data and Metadata

    PubMed Central

    Vos, Rutger A.; Balhoff, James P.; Caravas, Jason A.; Holder, Mark T.; Lapp, Hilmar; Maddison, Wayne P.; Midford, Peter E.; Priyam, Anurag; Sukumaran, Jeet; Xia, Xuhua; Stoltzfus, Arlin

    2012-01-01

    Abstract In scientific research, integration and synthesis require a common understanding of where data come from, how much they can be trusted, and what they may be used for. To make such an understanding computer-accessible requires standards for exchanging richly annotated data. The challenges of conveying reusable data are particularly acute in regard to evolutionary comparative analysis, which comprises an ever-expanding list of data types, methods, research aims, and subdisciplines. To facilitate interoperability in evolutionary comparative analysis, we present NeXML, an XML standard (inspired by the current standard, NEXUS) that supports exchange of richly annotated comparative data. NeXML defines syntax for operational taxonomic units, character-state matrices, and phylogenetic trees and networks. Documents can be validated unambiguously. Importantly, any data element can be annotated, to an arbitrary degree of richness, using a system that is both flexible and rigorous. We describe how the use of NeXML by the TreeBASE and Phenoscape projects satisfies user needs that cannot be satisfied with other available file formats. By relying on XML Schema Definition, the design of NeXML facilitates the development and deployment of software for processing, transforming, and querying documents. The adoption of NeXML for practical use is facilitated by the availability of (1) an online manual with code samples and a reference to all defined elements and attributes, (2) programming toolkits in most of the languages used commonly in evolutionary informatics, and (3) input–output support in several widely used software applications. An active, open, community-based development process enables future revision and expansion of NeXML. PMID:22357728

  12. An XML-Based Mission Command Language for Autonomous Underwater Vehicles (AUVs)

    DTIC Science & Technology

    2003-06-01

    P. XML: How To Program . Prentice Hall, Inc. Upper Saddle River, New Jersey, 2001 Digital Signature Activity Statement, W3C www.w3.org/Signature...languages because it does not directly specify how information is to be presented, but rather defines the structure (and thus semantics) of the...command and control (C2) aspects of using XML to increase the utility of AUVs. XML programming will be addressed. Current mine warfare doctrine will be

  13. A comparison of database systems for XML-type data.

    PubMed

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets, however when the full Medline dataset is queried a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's Lingpipe is a more lightweight, customizable and sufficiently fast solution. It does however require more initial configuration steps. For data with a changing XML structure Sedna and BaseX as native XML database systems or MySQL with an XML-type column are suitable.

  14. XML Flight/Ground Data Dictionary Management

    NASA Technical Reports Server (NTRS)

    Wright, Jesse; Wiklow, Colette

    2007-01-01

    A computer program generates Extensible Markup Language (XML) files that effect coupling between the command- and telemetry-handling software running aboard a spacecraft and the corresponding software running in ground support systems. The XML files are produced by use of information from the flight software and from flight-system engineering. The XML files are converted to legacy ground-system data formats for command and telemetry, transformed into Web-based and printed documentation, and used in developing new ground-system data-handling software. Previously, the information about telemetry and command was scattered in various paper documents that were not synchronized. The process of searching and reading the documents was time-consuming and introduced errors. In contrast, the XML files contain all of the information in one place. XML structures can evolve in such a manner as to enable the addition, to the XML files, of the metadata necessary to track the changes and the associated documentation. The use of this software has reduced the extent of manual operations in developing a ground data system, thereby saving considerable time and removing errors that previously arose in the translation and transcription of software information from the flight to the ground system.

  15. Using XML and XSLT for flexible elicitation of mental-health risk knowledge.

    PubMed

    Buckingham, C D; Ahmed, A; Adams, A E

    2007-03-01

    Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented by a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the numbers of experts associated with each part of it. The knowledge was refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Changing knowledge elicitation requirements were accommodated by flexible transformations of XML data using XSLT, which also facilitated generation of multiple data-gathering tools suiting different assessment circumstances and levels of mental-health knowledge.
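
    The cue hierarchy and stylesheets from the study are not available here; the sketch below only demonstrates the XML-plus-XSLT mechanism the paper relies on, rendering a tiny, invented cue fragment as an HTML list with lxml.

```python
# Rendering an XML knowledge fragment with XSLT via lxml. Both the cue
# hierarchy and the stylesheet are invented to show the mechanism only.
from lxml import etree

knowledge = etree.XML("""
<risk name="self-harm">
  <cue experts="12">expressed hopelessness</cue>
  <cue experts="9">recent loss</cue>
</risk>""")

stylesheet = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/risk">
    <ul><xsl:apply-templates select="cue"/></ul>
  </xsl:template>
  <xsl:template match="cue">
    <li><xsl:value-of select="."/> (<xsl:value-of select="@experts"/> experts)</li>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
print(str(transform(knowledge)))
```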

  16. "The Wonder Years" of XML.

    ERIC Educational Resources Information Center

    Gazan, Rich

    2000-01-01

    Surveys the current state of Extensible Markup Language (XML), a metalanguage for creating structured documents that describe their own content, and its implications for information professionals. Predicts that XML will become the common language underlying Web, word processing, and database formats. Also discusses Extensible Stylesheet Language…

  17. An XML-Based Knowledge Management System of Port Information for U.S. Coast Guard Cutters

    DTIC Science & Technology

    2003-03-01

    using DTDs was not chosen. XML Schema performs many of the same functions as SQL-type schemas, but differs due to the unique structure of XML documents...to access data from content files within the developed system. XPath is not equivalent to SQL. While XPath is very powerful at reaching into an XML...document and finding nodes or node sets, it is not a complete query language. For operations like joins, unions, intersections, etc., SQL is far

  18. FY17 Status Report on the Initial Development of a Constitutive Model for Grade 91 Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, M. C.; Phan, V. -T.; Sham, T. -L.

    Grade 91 is a candidate structural material for high-temperature advanced reactor applications. Existing ASME Section III, Subsection HB, Subpart B simplified design rules based on elastic analysis are set up as conservative screening tools with the intent to supplement these screening rules with full inelastic analysis when required. The Code provides general guidelines for suitable inelastic models but does not provide constitutive model implementations. This report describes the development of an inelastic constitutive model for Gr. 91 steel aimed at fulfilling the ASME Code requirements and being included in a new Section III Code appendix, HBB-Z. A large database of over 300 experiments on Gr. 91 was collected and converted to a standard XML form. Five families of Gr. 91 material models were identified in the literature. Of these five, two are potentially suitable for use in the ASME Code. These two models were implemented and evaluated against the experimental database. Both models have deficiencies, so the report develops a framework for developing and calibrating an improved model. This required creating a new modeling method for representing changes in material rate sensitivity across the full ASME allowable temperature range for Gr. 91 structural components: room temperature to 650 °C. On top of this framework for rate sensitivity, the report describes calibrating a model for work hardening and softening in the material using genetic algorithm optimization. Future work will focus on improving this trial model by including the tension/compression asymmetry observed in experiments, which is necessary to capture material ratcheting under zero mean stress, and by improving the optimization and analysis framework.

  19. MER Telemetry Processor

    NASA Technical Reports Server (NTRS)

    Lee, Hyun H.

    2012-01-01

    MERTELEMPROC processes telemetered data in data product format and generates Experiment Data Records (EDRs) for many instruments (HAZCAM, NAVCAM, PANCAM, microscopic imager, Moessbauer spectrometer, APXS, RAT, and EDLCAM) on the Mars Exploration Rover (MER). If the data is compressed, then MERTELEMPROC decompresses the data with an appropriate decompression algorithm. There are two compression algorithms (ICER and LOCO) used in MER. This program fulfills a MER-specific need to generate Level 1 products within a 60-second time requirement. EDRs generated by this program are used by merinverter, marscahv, marsrad, and marsjplstereo to generate higher-level products for the mission operations. MERTELEMPROC was the first GDS program to process the data product. Metadata of the data product is in XML format. The software allows user-configurable input parameters, per-product processing (not stream-based processing), and fail-over is allowed if the leading image header is corrupted. It is used within the MER automated pipeline. MERTELEMPROC is part of the OPGS (Operational Product Generation Subsystem) automated pipeline, which analyzes images returned by in situ spacecraft and creates Level 1 products to assist in operations, science, and outreach.

  20. Research on Optimization of Encoding Algorithm of PDF417 Barcodes

    NASA Astrophysics Data System (ADS)

    Sun, Ming; Fu, Longsheng; Han, Shuqing

    The purpose of this research is to develop software that optimizes the data compression of PDF417 barcodes using VC++ 6.0. Taking into account the different compression modes and the particularities of Chinese text, approaches that optimize the encoding algorithm, such as handling spillage and Chinese character encoding, are proposed, and a simple approach to computing the required complex polynomial is introduced. After the whole data compression is finished, the number of codewords is reduced and the encoding algorithm is thereby optimized. The developed PDF417 encoding system will be applied in the logistics management of fruits, and will therefore also promote the rapid development of two-dimensional bar codes.

  1. A Novel Navigation Paradigm for XML Repositories.

    ERIC Educational Resources Information Center

    Azagury, Alain; Factor, Michael E.; Maarek, Yoelle S.; Mandler, Benny

    2002-01-01

    Discusses data exchange over the Internet and describes the architecture and implementation of an XML document repository that promotes a navigation paradigm for XML documents based on content and context. Topics include information retrieval and semistructured documents; and file systems as information storage infrastructure, particularly XMLFS.…

  2. An Expressive and Efficient Language for XML Information Retrieval.

    ERIC Educational Resources Information Center

    Chinenyanga, Taurai Tapiwa; Kushmerick, Nicholas

    2002-01-01

    Discusses XML and information retrieval and describes a query language, ELIXIR (expressive and efficient language for XML information retrieval), with a textual similarity operator that can be used for similarity joins. Explains the algorithm for answering ELIXIR queries to generate intermediate relational data. (Author/LRW)

  3. XML Content Finally Arrives on the Web!

    ERIC Educational Resources Information Center

    Funke, Susan

    1998-01-01

    Explains extensible markup language (XML) and how it differs from hypertext markup language (HTML) and standard generalized markup language (SGML). Highlights include features of XML, including better formatting of documents, better searching capabilities, multiple uses for hyperlinking, and an increase in Web applications; Web browsers; and what…

  4. How Does XML Help Libraries?

    ERIC Educational Resources Information Center

    Banerjee, Kyle

    2002-01-01

    Discusses XML, how it has transformed the way information is managed and delivered, and its impact on libraries. Topics include how XML differs from other markup languages; the document object model (DOM); style sheets; practical applications for archival materials, interlibrary loans, digital collections, and MARC data; and future possibilities.…

  5. Transthoracic impedance for the monitoring of quality of manual chest compression during cardiopulmonary resuscitation.

    PubMed

    Zhang, Hehua; Yang, Zhengfei; Huang, Zitong; Chen, Bihua; Zhang, Lei; Li, Heng; Wu, Baoming; Yu, Tao; Li, Yongqin

    2012-10-01

    The quality of cardiopulmonary resuscitation (CPR), especially adequate compression depth, is associated with return of spontaneous circulation (ROSC) and is therefore recommended to be measured routinely. In the current study, we investigated the relationship between changes of transthoracic impedance (TTI) measured through the defibrillation electrodes, chest compression depth and coronary perfusion pressure (CPP) in a porcine model of cardiac arrest. In 14 male pigs weighing between 28 and 34 kg, ventricular fibrillation (VF) was electrically induced and untreated for 6 min. Animals were randomized to either optimal or suboptimal chest compression group. Optimal depth of manual compression in 7 pigs was defined as a decrease of 25% (50 mm) in anterior posterior diameter of the chest, while suboptimal compression was defined as 70% of the optimal depth (35 mm). After 2 min of chest compression, defibrillation was attempted with a 120-J rectilinear biphasic shock. There were no differences in baseline measurements between groups. All animals had ROSC after optimal compressions; this contrasted with suboptimal compressions, after which only 2 of the animals had ROSC (100% vs. 28.57%, p=0.021). The correlation coefficient was 0.89 between TTI amplitude and compression depth (p<0.001), 0.83 between TTI amplitude and CPP (p<0.001). Amplitude change of TTI was correlated with compression depth and CPP in this porcine model of cardiac arrest. The TTI measured from defibrillator electrodes, therefore has the potential to serve as an indicator to monitor the quality of chest compression and estimate CPP during CPR. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  6. The Essen Learning Model--A Step towards a Representation of Learning Objectives.

    ERIC Educational Resources Information Center

    Bick, Markus; Pawlowski, Jan M.; Veith, Patrick

    The importance of the Extensible Markup Language (XML) technology family in the field of Computer Assisted Learning (CAL) can not be denied. The Instructional Management Systems Project (IMS), for example, provides a learning resource XML binding specification. Considering this specification and other implementations using XML to represent…

  7. Get It Together: Integrating Data with XML.

    ERIC Educational Resources Information Center

    Miller, Ron

    2003-01-01

    Discusses the use of XML for data integration to move data across different platforms, including across the Internet, from a variety of sources. Topics include flexibility; standards; organizing databases; unstructured data and the use of meta tags to encode it with XML information; cost effectiveness; and eliminating client software licenses.…

  8. XML Schema Languages: Beyond DTD.

    ERIC Educational Resources Information Center

    Ioannides, Demetrios

    2000-01-01

    Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)

  9. XML: A Publisher's Perspective.

    ERIC Educational Resources Information Center

    Andrews, Timothy M.

    1999-01-01

    Explains eXtensible Markup Language (XML) and describes how Dow Jones Interactive is using it to improve the news-gathering and dissemination process through intranets and the World Wide Web. Discusses benefits of using XML, the relationship to HyperText Markup Language (HTML), lack of available software tools and industry support, and future…

  10. jmzML, an open-source Java API for mzML, the PSI standard for MS data.

    PubMed

    Côté, Richard G; Reisinger, Florian; Martens, Lennart

    2010-04-01

    We here present jmzML, a Java API for the Proteomics Standards Initiative mzML data standard. Based on the Java Architecture for XML Binding and an XPath-based, random-access XML indexing parser, jmzML can handle arbitrarily large files in minimal memory, allowing easy and efficient processing of mzML files using the Java programming language. jmzML also automatically resolves internal XML references on-the-fly. The library (which includes a viewer) can be downloaded from http://jmzml.googlecode.com.
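
    jmzML itself is a Java API, but the same large-file, small-memory idea can be illustrated in Python by streaming over an mzML-like document and discarding each element once processed; the element name used below is assumed for the sake of the example.

```python
# The "large file, minimal memory" idea in Python: stream over an mzML-like
# document with iterparse and clear elements as they complete. This is only
# an analogy to jmzML's indexed random access; 'spectrum' is assumed.
import io
import xml.etree.ElementTree as ET

fake_mzml = io.BytesIO(
    ("<mzML>" + "".join(f'<spectrum index="{i}"/>' for i in range(5)) + "</mzML>").encode()
)

count = 0
for event, elem in ET.iterparse(fake_mzml, events=("end",)):
    if elem.tag == "spectrum":
        count += 1
        elem.clear()          # release the subtree once it has been processed
print(f"{count} spectra processed")
```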

  11. The Cadmio XML healthcare record.

    PubMed

    Barbera, Francesco; Ferri, Fernando; Ricci, Fabrizio L; Sottile, Pier Angelo

    2002-01-01

    The management of clinical data is a complex task. Patient-related information reported in patient folders is a set of heterogeneous and structured data accessed by different users having different goals (in local or geographical networks). The XML language provides a mechanism for describing, manipulating, and visualising structured data in web-based applications. XML ensures that the structured data is managed in a uniform and transparent manner independently of the applications and their providers, guaranteeing some degree of interoperability. Extracting data from the healthcare record and structuring them according to XML makes the data available through browsers. The MIC/MIE model (Medical Information Category/Medical Information Elements), which allows the definition and management of healthcare records and is used in CADMIO, a HISA-based project, is described in this paper, using XML to allow the data to be visualised through web browsers.

  12. Representing nested semantic information in a linear string of text using XML.

    PubMed

    Krauthammer, Michael; Johnson, Stephen B; Hripcsak, George; Campbell, David A; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text mark up. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information.

  13. Representing nested semantic information in a linear string of text using XML.

    PubMed Central

    Krauthammer, Michael; Johnson, Stephen B.; Hripcsak, George; Campbell, David A.; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text mark up. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information. PMID:12463856

  14. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    PubMed

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

    Bioinformatics resources for glycomics are very poor as compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html (contact: kikuchi@hydra.mki.co.jp).

  15. Hybrid-drive implosion system for ICF targets

    DOEpatents

    Mark, James W.

    1988-08-02

    Hybrid-drive implosion systems (20,40) for ICF targets (10,22,42) are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator (12) surroundingly disposed around fusion fuel (14). The ablator is first compressed to higher density by a laser system (24), or by an ion beam system (44), that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system (30,48) that is optimized for this second phase of operation of the target. The fusion fuel (14) is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion.

  16. Hybrid-drive implosion system for ICF targets

    DOEpatents

    Mark, James W.

    1988-01-01

    Hybrid-drive implosion systems (20,40) for ICF targets (10,22,42) are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator (12) surroundingly disposed around fusion fuel (14). The ablator is first compressed to higher density by a laser system (24), or by an ion beam system (44), that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system (30,48) that is optimized for this second phase of operation of the target. The fusion fuel (14) is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion.

  17. Hybrid-drive implosion system for ICF targets

    DOEpatents

    Mark, J.W.K.

    1987-10-14

    Hybrid-drive implosion systems for ICF targets are described which permit a significant increase in target gain at fixed total driver energy. The ICF target is compressed in two phases, an initial compression phase and a final peak power phase, with each phase driven by a separate, optimized driver. The targets comprise a hollow spherical ablator surroundingly disposed around fusion fuel. The ablator is first compressed to higher density by a laser system, or by an ion beam system, that in each case is optimized for this initial phase of compression of the target. Then, following compression of the ablator, energy is directly delivered into the compressed ablator by an ion beam driver system that is optimized for this second phase of operation of the target. The fusion fuel is driven, at high gain, to conditions wherein fusion reactions occur. This phase separation allows hydrodynamic efficiency and energy deposition uniformity to be individually optimized, thereby securing significant advantages in energy gain. In additional embodiments, the same or separate drivers supply energy for ICF target implosion. 3 figs.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurst, Aaron M.

    A data structure based on an eXtensible Markup Language (XML) hierarchy according to experimental nuclear structure data in the Evaluated Nuclear Structure Data File (ENSDF) is presented. A Python-coded translator has been developed to interpret the standard one-card records of the ENSDF datasets, together with their associated quantities defined according to field position, and generate corresponding representative XML output. The quantities belonging to this mixed-record format are described in the ENSDF manual. Of the 16 ENSDF records in total, XML output has been successfully generated for 15 records. An XML-translation for the Comment Record is yet to be implemented; this will be considered in a separate phase of the overall translation effort. Continuation records, not yet implemented, will also be treated in a future phase of this work. Several examples are presented in this document to illustrate the XML schema and methods for handling the various ENSDF data types. However, the proposed nomenclature for the XML elements and attributes need not necessarily be considered as a fixed set of constructs. Indeed, better conventions may be suggested and a consensus can be achieved amongst the various groups of people interested in this project. The main purpose here is to present an initial phase of the translation effort to demonstrate the feasibility of interpreting ENSDF datasets and creating a representative XML-structured hierarchy for data storage.
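
    The real ENSDF column layout is defined in the ENSDF manual and is not reproduced here. The field positions in the sketch below are placeholders; the point is only to show how a fixed-width card line can be sliced into named fields and emitted as XML, in the spirit of the translator described above.

```python
# Fixed-width record to XML, in the spirit of the translator described above.
# The column ranges below are placeholders, NOT the real ENSDF field layout.
import xml.etree.ElementTree as ET

FIELDS = {            # name: (start, end) slice positions -- illustrative only
    "nucid":   (0, 5),
    "rectype": (5, 9),
    "energy":  (9, 19),
    "uncert":  (19, 21),
}

def card_to_xml(line):
    rec = ET.Element("record")
    for name, (lo, hi) in FIELDS.items():
        value = line[lo:hi].strip()
        if value:
            ET.SubElement(rec, name).text = value
    return rec

card = "60CO  L  58.603    1 "
print(ET.tostring(card_to_xml(card), encoding="unicode"))
```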

  19. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.
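
    The feature extractor behind the proposed ISQA measure is not specified in the abstract. The sketch below uses a gradient-magnitude histogram compared by cosine similarity as a stand-in feature, applied only inside given text-region boxes, to illustrate the full-reference, semantic-level comparison idea.

```python
# Stand-in for a full-reference semantic quality measure: compare features
# extracted from text regions of the original and compressed images. The
# gradient-histogram feature here is an illustration, not the paper's.
import numpy as np

def region_feature(img, box, bins=16):
    r0, r1, c0, c1 = box
    patch = img[r0:r1, c0:c1].astype(float)
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

def isqa_score(original, compressed, text_boxes):
    sims = []
    for box in text_boxes:
        f1, f2 = region_feature(original, box), region_feature(compressed, box)
        denom = np.linalg.norm(f1) * np.linalg.norm(f2)
        sims.append(float(np.dot(f1, f2) / denom) if denom else 0.0)
    return float(np.mean(sims))

rng = np.random.default_rng(2)
orig = rng.integers(0, 256, size=(100, 100))
comp = np.clip(orig + rng.normal(0, 20, orig.shape), 0, 255)
print(f"semantic-level similarity: {isqa_score(orig, comp, [(10, 40, 10, 90)]):.3f}")
```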

  20. Integrated Syntactic/Semantic XML Data Validation with a Reusable Software Component

    ERIC Educational Resources Information Center

    Golikov, Steven

    2013-01-01

    Data integration is a critical component of enterprise system integration, and XML data validation is the foundation for sound data integration of XML-based information systems. Since B2B e-commerce relies on data validation as one of the critical components for enterprise integration, it is imperative for financial industries and e-commerce…

  1. Adding XML to the MIS Curriculum: Lessons from the Classroom

    ERIC Educational Resources Information Center

    Wagner, William P.; Pant, Vik; Hilken, Ralph

    2008-01-01

    eXtensible Markup Language (XML) is a new technology that is currently being extolled by many industry experts and software vendors. Potentially it represents a platform independent language for sharing information over networks in a way that is much more seamless than with previous technologies. It is extensible in that XML serves as a "meta"…

  2. Data Manipulation in an XML-Based Digital Image Library

    ERIC Educational Resources Information Center

    Chang, Naicheng

    2005-01-01

    Purpose: To help to clarify the role of XML tools and standards in supporting transition and migration towards a fully XML-based environment for managing access to information. Design/methodology/approach: The Ching Digital Image Library, built on a three-tier architecture, is used as a source of examples to illustrate a number of methods of data…

  3. Adaptive Hypermedia Educational System Based on XML Technologies.

    ERIC Educational Resources Information Center

    Baek, Yeongtae; Wang, Changjong; Lee, Sehoon

    This paper proposes an adaptive hypermedia educational system using XML technologies, such as XML, XSL, XSLT, and XLink. Adaptive systems are capable of altering the presentation of the content of the hypermedia on the basis of a dynamic understanding of the individual user. The user profile can be collected in a user model, while the knowledge…

  4. EquiX-A Search and Query Language for XML.

    ERIC Educational Resources Information Center

    Cohen, Sara; Kanza, Yaron; Kogan, Yakov; Sagiv, Yehoshua; Nutt, Werner; Serebrenik, Alexander

    2002-01-01

    Describes EquiX, a search language for XML that combines querying with searching to query the data and the meta-data content of Web pages. Topics include search engines; a data model for XML documents; search query syntax; search query semantics; an algorithm for evaluating a query on a document; and indexing EquiX queries. (LRW)

  5. TME2/342: The Role of the EXtensible Markup Language (XML) for Future Healthcare Application Development

    PubMed Central

    Noelle, G; Dudeck, J

    1999-01-01

    Two years after the World Wide Web Consortium (W3C) published the first specification of the eXtensible Markup Language (XML), there exist concrete tools and applications for working with XML-based data. In particular, new-generation Web browsers offer great opportunities to develop new kinds of medical, web-based applications. Several data-exchange formats have been established in medicine in recent years: HL-7, DICOM, EDIFACT and, in the case of Germany, xDT. Whereas communication and information exchange become increasingly important, the development of appropriate and necessary interfaces causes problems, rising costs and effort. It has also been recognised that it is difficult to define a standardised interchange format for one of the major future developments in medical telematics: the electronic patient record (EPR) and its availability on the Internet. Whereas XML, especially in an industrial environment, is celebrated as a generic standard and a solution for all problems concerning e-commerce, in a medical context only a few applications have been developed. Nevertheless, the medical environment is an appropriate area for building XML applications: as information and communication management becomes increasingly important in medical businesses, the role of the Internet changes quickly from an information medium to a communication medium. The first XML-based applications in healthcare show us the advantages of a future engagement of the healthcare industry with XML: such applications are open, easy to extend and cost-effective. Additionally, XML is much more than a simple new data interchange format: many proposals for data query (XQL), data presentation (XSL) and other extensions have been proposed to the W3C and partly realised in medical applications.

  6. XML-based approaches for the integration of heterogeneous bio-molecular data.

    PubMed

    Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David

    2009-10-15

    Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, bio-medical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented through XML. XML has succeeded in the integration of heterogeneous biomolecular information, and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, thus making the effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgent to achieve a seamless integration of the current biological resources.

  7. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signalization enables a better spectral efficiency at the expense of an increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption of the interference, used for implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression of the bit-error probability that can be used to predict the performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  8. A Practical Introduction to the XML, Extensible Markup Language, by Way of Some Useful Examples

    ERIC Educational Resources Information Center

    Snyder, Robin

    2004-01-01

    XML, Extensible Markup Language, is important as a way to represent and encapsulate the structure of underlying data in a portable way that supports data exchange regardless of the physical storage of the data. This paper (and session) introduces some useful and practical aspects of XML technology for sharing information in an educational setting…

  9. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the development of the databases. The results show that the NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. The future application of GML database in GIS

    NASA Astrophysics Data System (ADS)

    Deng, Yuejin; Cheng, Yushu; Jing, Lianwen

    2006-10-01

    In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. Now more and more applications in geospatial data sharing and interoperability depend on GML. The primary purpose of GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, the problem of how to organize and access large amounts of GML data effectively arises in applications. Research on GML databases focuses on this problem, and the effective storage of GML data is a hot topic in the GIS community today. A GML Database Management System (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are commonly distinguished: native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.

  11. An adaptable XML based approach for scientific data management and integration

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-03-01

    Increased complexity of scientific research poses new challenges for scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform supporting scientific data management and integration with a central-server-based distributed architecture, in which researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so the system can quickly be adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among scientific research communities, and has been successfully used in both biomedical research and clinical trials.
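
    As a hedged illustration of the kind of structured query such a native XML store supports, the sketch below runs an XPath query over a small document with Python's lxml; the element and attribute names (experiment, site, sample) are invented for the example and are not SciPort's actual schema.

```python
# A minimal sketch of a structured query over an XML document; the document
# structure shown is hypothetical, not SciPort's actual schema.
from lxml import etree

doc = etree.fromstring(b"""
<experiment id="E-001">
  <site name="Institution-A"/>
  <samples>
    <sample type="tissue" count="12"/>
    <sample type="serum" count="30"/>
  </samples>
</experiment>""")

# An XPath query analogous to an XQuery over the document collection:
# how many serum samples does this experiment record?
print(doc.xpath("//sample[@type='serum']/@count"))  # ['30']
```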

  12. Querying archetype-based EHRs by search ontology-based XPath engineering.

    PubMed

    Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich

    2018-05-11

    Legacy data and new structured data can be stored in a standardized format as XML-based EHRs on XML databases. Querying documents on these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false positive results, precision can be increased by constraining the search to certain parts of documents. A search ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. Such a query specification method is introduced and evaluated in practice by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search ontology-based XPath engineering that reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by the use of search ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that the application of the introduced process requires skills in ontology and software development. In the future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search ontology-based XPath engineering can support research question answering through the specification of complex XPath expressions without deep knowledge of XPath syntax.

  13. An Adaptable XML Based Approach for Scientific Data Management and Integration.

    PubMed

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-02-20

    Increased complexity of scientific research poses new challenges for scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform supporting scientific data management and integration with a central-server-based distributed architecture, in which researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to modeling complex scientific data by representing them as XML documents. The documents capture not only hierarchically structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make quick browsing convenient. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so the system can quickly be adapted to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort offers a flexible and powerful platform for sharing scientific data among scientific research communities, and has been successfully used in both biomedical research and clinical trials.

  14. SGDB: a database of synthetic genes re-designed for optimizing protein over-expression.

    PubMed

    Wu, Gang; Zheng, Yuanpu; Qureshi, Imran; Zin, Htar Thant; Beck, Tyler; Bulka, Blazej; Freeland, Stephen J

    2007-01-01

    Here we present the Synthetic Gene Database (SGDB): a relational database that houses sequences and associated experimental information on synthetic (artificially engineered) genes from all peer-reviewed studies published to date. At present, the database comprises information from more than 200 published experiments. This resource not only provides reference material to guide experimentalists in designing new genes that improve protein expression, but also offers a dataset for analysis by bioinformaticians who seek to test ideas regarding the underlying factors that influence gene expression. The SGDB was built on the MySQL database management system. We also offer an XML schema for standardized data description of synthetic genes. Users can access the database at http://www.evolvingcode.net/codon/sgdb/index.php, or batch-download all information through XML files. Moreover, users may visually compare the coding sequences of a synthetic gene and its natural counterpart with an integrated web tool at http://www.evolvingcode.net/codon/sgdb/aligner.php, and discuss questions, findings and related information on an associated e-forum at http://www.evolvingcode.net/forum/viewforum.php?f=27.

  15. Efficacy of chest compressions directed by end-tidal CO2 feedback in a pediatric resuscitation model of basic life support.

    PubMed

    Hamrick, Jennifer L; Hamrick, Justin T; Lee, Jennifer K; Lee, Benjamin H; Koehler, Raymond C; Shaffner, Donald H

    2014-04-14

    End-tidal carbon dioxide (ETCO2) correlates with systemic blood flow and resuscitation rate during cardiopulmonary resuscitation (CPR) and may potentially direct chest compression performance. We compared ETCO2-directed chest compressions with chest compressions optimized to pediatric basic life support guidelines in an infant swine model to determine the effect on rate of return of spontaneous circulation (ROSC). Forty 2-kg piglets underwent general anesthesia, tracheostomy, placement of vascular catheters, ventricular fibrillation, and 90 seconds of no-flow before receiving 10 or 12 minutes of pediatric basic life support. In the optimized group, chest compressions were optimized by marker, video, and verbal feedback to obtain American Heart Association-recommended depth and rate. In the ETCO2-directed group, compression depth, rate, and hand position were modified to obtain a maximal ETCO2 without video or verbal feedback. After the interval of pediatric basic life support, external defibrillation and intravenous epinephrine were administered for another 10 minutes of CPR or until ROSC. Mean ETCO2 at 10 minutes of CPR was 22.7±7.8 mm Hg in the optimized group (n=20) and 28.5±7.0 mm Hg in the ETCO2-directed group (n=20; P=0.02). Despite higher ETCO2 and mean arterial pressure in the latter group, ROSC rates were similar: 13 of 20 (65%; optimized) and 14 of 20 (70%; ETCO2 directed). The best predictor of ROSC was systemic perfusion pressure. Defibrillation attempts, epinephrine doses required, and CPR-related injuries were similar between groups. The use of ETCO2-directed chest compressions is a novel guided approach to resuscitation that can be as effective as standard CPR optimized with marker, video, and verbal feedback.

  16. Applications of wavelet-based compression to multidimensional Earth science data

    NASA Technical Reports Server (NTRS)

    Bradley, Jonathan N.; Brislawn, Christopher M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
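
    As a rough sketch of transform coding in the spirit of this scheme, the example below applies a 2-D DWT and simple uniform quantization per subband, then measures the reconstruction SNR. It assumes the PyWavelets package and substitutes scalar quantization for the paper's optimized vector quantization, so it is an illustration of the pipeline rather than the WVQ algorithm itself.

```python
# DWT + coefficient quantization sketch (not the paper's WVQ algorithm).
# Assumes PyWavelets (pywt) and numpy are installed.
import numpy as np
import pywt

data = np.random.rand(128, 128)             # stand-in for a 2-D earth-science field
coeffs = pywt.wavedec2(data, 'db2', level=3)

step = 0.05                                  # coarser step -> higher compression, lower SNR
quantized = [np.round(coeffs[0] / step) * step]
for detail_level in coeffs[1:]:
    quantized.append(tuple(np.round(d / step) * step for d in detail_level))

recon = pywt.waverec2(quantized, 'db2')[:128, :128]
mse = np.mean((data - recon) ** 2)
snr_db = 10 * np.log10(np.mean(data ** 2) / mse)
print(f"SNR after quantization: {snr_db:.1f} dB")
```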

  17. TMATS/ IHAL/ DDML Schema Validation

    DTIC Science & Technology

    2017-02-01

    ... task was to create a method for performing IRIG eXtensible Markup Language (XML) schema validation, as opposed to XML instance document validation ... (TMATS/IHAL/DDML Schema Validation, RCC 126-17, February 2017; DDML: Data Display Markup Language.)
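
    For context, the snippet below shows the ordinary instance-document side of that distinction: validating an XML instance against an XML Schema with Python's lxml. The schema and instance are invented examples, not the TMATS/IHAL/DDML schemas discussed in the report.

```python
# Validating an XML instance document against an XML Schema with lxml.
# The schema and instance here are illustrative only.
from lxml import etree

schema_doc = etree.fromstring(b"""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="measurement">
    <xs:complexType>
      <xs:attribute name="name" type="xs:string" use="required"/>
      <xs:attribute name="rate" type="xs:decimal" use="required"/>
    </xs:complexType>
  </xs:element>
</xs:schema>""")
schema = etree.XMLSchema(schema_doc)

instance = etree.fromstring(b'<measurement name="altitude" rate="50.0"/>')
print(schema.validate(instance))   # True
print(schema.error_log)            # empty on success
```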

  18. 78 FR 28732 - Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... posting CSV file samples. Order No. 770 revised the process for filing EQRs. Pursuant to Order No. 770, one of the new processes for filing allows EQRs to be filed using an XML file. The XML schema that is needed to file EQRs in this manner is now posted on the Commission's Web site at http://www.ferc.gov/docs...

  19. Minimizing pre- and post-defibrillation pauses increases the likelihood of return of spontaneous circulation (ROSC).

    PubMed

    Sell, Rebecca E; Sarno, Renee; Lawrence, Brenna; Castillo, Edward M; Fisher, Roger; Brainard, Criss; Dunford, James V; Davis, Daniel P

    2010-07-01

    The three-phase model of ventricular fibrillation (VF) arrest suggests a period of compressions to "prime" the heart prior to defibrillation attempts. In addition, post-shock compressions may increase the likelihood of return of spontaneous circulation (ROSC). The optimal intervals for shock delivery following cessation of compressions (pre-shock interval) and resumption of compressions following a shock (post-shock interval) remain unclear. To define optimal pre- and post-defibrillation compression pauses for out-of-hospital cardiac arrest (OOHCA). All patients suffering OOHCA from VF were identified over a 1-month period. Defibrillator data were abstracted and analyzed using the combination of ECG, impedance, and audio recording. Receiver-operator curve (ROC) analysis was used to define the optimal pre- and post-shock compression intervals. Multiple logistic regression analysis was used to quantify the relationship between these intervals and ROSC. Covariates included cumulative number of defibrillation attempts, intubation status, and administration of epinephrine in the immediate pre-shock compression cycle. Cluster adjustment was performed due to the possibility of multiple defibrillation attempts for each patient. A total of 36 patients with 96 defibrillation attempts were included. The ROC analysis identified an optimal pre-shock interval of <3s and an optimal post-shock interval of <6s. Increased likelihood of ROSC was observed with a pre-shock interval <3s (adjusted OR 6.7, 95% CI 2.0-22.3, p=0.002) and a post-shock interval of <6s (adjusted OR 10.7, 95% CI 2.8-41.4, p=0.001). Likelihood of ROSC was substantially increased with the optimization of both pre- and post-shock intervals (adjusted OR 13.1, 95% CI 3.4-49.9, p<0.001). Decreasing pre- and post-shock compression intervals increases the likelihood of ROSC in OOHCA from VF.

  20. Labeling RDF Graphs for Linear Time and Space Querying

    NASA Astrophysics Data System (ADS)

    Furche, Tim; Weinzierl, Antonius; Bry, François

    Indices and data structures for web querying have mostly considered tree-shaped data, reflecting the view of XML documents as tree-shaped. However, for RDF (and for XML when querying ID/IDREF constraints), data is indisputably graph-shaped. In this chapter, we first study existing indexing and labeling schemes for RDF and other graph data with focus on support for efficient adjacency and reachability queries. For XML, labeling schemes are an important part of the widespread adoption of XML, in particular for mapping XML to existing (relational) database technology. However, the existing indexing and labeling schemes for RDF (and graph data in general) sacrifice one of the most attractive properties of XML labeling schemes, the constant time (and per-node space) test for adjacency (child) and reachability (descendant). In the second part, we introduce the first labeling scheme for RDF data that retains this property and thus achieves linear time and space processing of acyclic RDF queries on a significantly larger class of graphs than previous approaches (which are mostly limited to tree-shaped data). Finally, we show how this labeling scheme can be applied to (acyclic) SPARQL queries to obtain an evaluation algorithm with time and space complexity linear in the number of resources in the queried RDF graph.
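
    To make the "constant-time reachability" property concrete, here is a minimal sketch of the classical XML-style interval (pre/post-order) labeling on trees, which gives O(1) descendant tests. The chapter's RDF scheme extends this idea to a larger class of graphs; that extension is not reproduced here.

```python
# Interval labeling for trees: each node gets a [start, end) range by DFS,
# and b is a descendant of a exactly when b's interval nests inside a's.
def label(tree, root):
    intervals, counter = {}, 0

    def dfs(node):
        nonlocal counter
        start = counter
        counter += 1
        for child in tree.get(node, []):
            dfs(child)
        intervals[node] = (start, counter)

    dfs(root)
    return intervals

def is_descendant(intervals, a, b):
    """O(1) test: is b a descendant of a?"""
    sa, ea = intervals[a]
    sb, eb = intervals[b]
    return sa < sb and eb <= ea

tree = {"r": ["a", "b"], "a": ["c", "d"], "b": ["e"]}
iv = label(tree, "r")
print(is_descendant(iv, "a", "d"), is_descendant(iv, "b", "d"))  # True False
```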

  1. Automatic Summarization as a Combinatorial Optimization Problem

    NASA Astrophysics Data System (ADS)

    Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki

    We derived the oracle summary with the highest ROUGE score that can be achieved by integrating sentence extraction with sentence compression from the reference abstract. The analysis of the oracle revealed that summarization systems have to assign an appropriate compression rate to each sentence in the document. In accordance with this observation, this paper proposes a summarization method as a combinatorial optimization: selecting the set of sentences that maximizes the sum of the sentence scores from a pool consisting of the sentences at various compression rates, subject to a length constraint. The score of a sentence is defined by its compression rate, content words and positional information. The parameters for the compression rates and positional information are optimized by minimizing the loss between the scores of the oracles and those of the candidates. The results obtained on the TSC-2 corpus showed that our method outperformed the previous systems with statistical significance.
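
    The selection step is essentially a knapsack-style problem: pick at most one compressed variant of each sentence so that the total score is maximized under a length budget. The toy sketch below illustrates that formulation with invented scores and lengths; the paper's actual scoring model and parameter training are not shown.

```python
# Toy selection step: at most one variant per sentence, maximize total score
# under a length budget. Candidate lengths and scores are invented.
from itertools import product

# candidates[i] = list of (length_in_words, score) variants of sentence i
candidates = [
    [(20, 3.0), (12, 2.4), (7, 1.5)],
    [(15, 2.0), (9, 1.6)],
    [(18, 2.8), (10, 1.9), (5, 0.8)],
]
budget = 30

best_score, best_choice = 0.0, None
# Brute force over all variant choices (None = drop the sentence); fine for a toy example.
for choice in product(*[[None] + variants for variants in candidates]):
    length = sum(c[0] for c in choice if c)
    score = sum(c[1] for c in choice if c)
    if length <= budget and score > best_score:
        best_score, best_choice = score, choice

print(best_score, best_choice)
```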

  2. Squish: Near-Optimal Compression for Archival of Relational Datasets

    PubMed Central

    Gao, Yihan; Parameswaran, Aditya

    2017-01-01

    Relational datasets are being generated at an alarmingly rapid rate across organizations and industries. Compressing these datasets could significantly reduce storage and archival costs. Traditional compression algorithms, e.g., gzip, are suboptimal for compressing relational datasets since they ignore the table structure and relationships between attributes. We study compression algorithms that leverage the relational structure to compress datasets to a much greater extent. We develop Squish, a system that uses a combination of Bayesian Networks and Arithmetic Coding to capture multiple kinds of dependencies among attributes and achieve near-entropy compression rate. Squish also supports user-defined attributes: users can instantiate new data types by simply implementing five functions for a new class interface. We prove the asymptotic optimality of our compression algorithm and conduct experiments to show the effectiveness of our system: Squish achieves a reduction of over 50% in storage size relative to systems developed in prior work on a variety of real datasets. PMID:28180028
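
    The following tiny experiment is not Squish itself (which combines Bayesian networks with arithmetic coding), but it illustrates the underlying observation that exploiting table structure, here simply by compressing the data column by column instead of row by row, often beats gzip applied to the rows as plain text.

```python
# Illustration only: column-oriented layout usually compresses better than
# row-oriented text for tables with repetitive attributes.
import gzip
import random

random.seed(0)
rows = [(f"2017-01-{random.randint(1, 28):02d}",
         random.choice(["US", "DE", "JP"]),
         str(random.randint(18, 20)))
        for _ in range(10000)]

row_blob = "\n".join(",".join(r) for r in rows).encode()
col_blob = b"\0".join("\n".join(col).encode() for col in zip(*rows))

print(len(gzip.compress(row_blob)), "bytes (row-wise)")
print(len(gzip.compress(col_blob)), "bytes (column-wise, typically smaller)")
```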

  3. SU-E-T-327: The Update of a XML Composing Tool for TrueBeam Developer Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Y; Mao, W; Jiang, S

    2014-06-01

    Purpose: To introduce a major upgrade of a novel XML beam composing tool to scientists and engineers who strive to translate certain capabilities of TrueBeam Developer Mode into future clinical benefits of radiation therapy. Methods: TrueBeam Developer Mode provides users with a test bed for unconventional plans utilizing certain unique features not accessible in the clinical mode. To access the full set of capabilities, an XML beam definition file accommodating all parameters, including kV/MV imaging triggers, can be locally loaded in this mode; however, it is difficult and laborious to compose one in a text editor. In this study, a stand-alone interactive XML beam composing application, TrueBeam TeachMod, was developed on Windows platforms to assist users in making their unique plans in a WYSIWYG manner. A conventional plan can be imported as a DICOM RT object as the start of the beam editing process, in which trajectories of all axes of a TrueBeam machine can be modified to the intended values at any control point. TeachMod also includes libraries of predefined imaging and treatment procedures to further expedite the process. Results: The TeachMod application is a major upgrade of the TeachMod module within DICOManTX. It fully supports TrueBeam 2.0. Trajectories of all axes, including all MLC leaves, can be graphically rendered and edited as needed. The time for XML beam composing has been reduced to a negligible amount regardless of the complexity of the plan. A good understanding of the XML language and the TrueBeam schema is not required, though preferred. Conclusion: Creating XML beams manually in a text editor is a lengthy, error-prone process for sophisticated plans. An XML beam composing tool is highly desirable for R&D activities. It will bridge the gap between the scope of TrueBeam capabilities and their clinical application potential.
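
    To show why programmatic composition is preferable to hand-editing control points in a text editor, the sketch below builds a small control-point XML file with Python's standard library. The element names are illustrative only and do not follow the actual Varian TrueBeam XML schema.

```python
# Composing a control-point XML file programmatically (illustrative element
# names, NOT the Varian TrueBeam schema).
import xml.etree.ElementTree as ET

beam = ET.Element("Beam", name="demo")
for i, (gantry, mu) in enumerate([(0.0, 0.0), (90.0, 50.0), (180.0, 100.0)]):
    cp = ET.SubElement(beam, "ControlPoint", index=str(i))
    ET.SubElement(cp, "GantryRtn").text = f"{gantry:.1f}"
    ET.SubElement(cp, "Mu").text = f"{mu:.1f}"

ET.indent(beam)                                  # pretty-print (Python 3.9+)
ET.ElementTree(beam).write("demo_beam.xml", xml_declaration=True, encoding="utf-8")
print(ET.tostring(beam, encoding="unicode"))
```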

  4. James Webb Space Telescope XML Database: From the Beginning to Today

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Fatig, Curtis C.

    2005-01-01

    The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 various flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using an XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard for the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission-specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has exchanged the XML database with the Eclipse, EPOCH, ASIST ground systems, Portable spacecraft simulator (PSS), a front-end system, and Integrated Trending and Plotting System (ITPS) successfully. This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.

  5. Automating Disk Forensic Processing with SleuthKit, XML and Python

    DTIC Science & Technology

    2009-05-01

    We have developed a program called fiwalk which ... files themselves. We show how it is relatively simple to create automated disk forensic applications using a Python module we have written that reads ... software that the portable device may contain. Keywords: computer forensics; XML; Sleuth Kit; Python.

  6. 76 FR 17413 - Contract Reporting Requirements of Intrastate Natural Gas Companies; Notice of Technical Workshop...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-29

    ... the PDF or XML version of Form No. 549D as a method to eFile quarterly data and whether they intend on... fillable Form No. 549D PDF and XML to file data \\2\\ pursuant to Order Nos. 735 and 735-A.\\3\\ [[Page 17414... PDF ( http://www.ferc.gov/docs-filing/forms/form-549d/form-549d.pdf ) or XML ( http://www.ferc.gov...

  7. XML Schema Guide for Primary CDR Submissions

    EPA Pesticide Factsheets

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.7 XML schema. Please note that the order of the elements must match the schema.

  8. Using XML/HTTP to Store, Serve and Annotate Tactical Scenarios for X3D Operational Visualization and Anti-Terrorist Training

    DTIC Science & Technology

    2003-03-01


  9. Building VoiceXML-Based Applications

    DTIC Science & Technology

    2002-01-01

    ... basketball games. The Busline systems were primarily developed using an early implementation of VoiceXML. The NBA Update Line was developed using VoiceXML ... traveling in and out of Pittsburgh's university neighborhood. The second project is the NBA Update Line, which provides callers with real-time NBA information ... The target user of this system is a fairly knowledgeable basketball fan; the system must therefore be able to provide detailed ...

  10. CytometryML, an XML format based on DICOM and FCS for analytical cytology data.

    PubMed

    Leif, Robert C; Leif, Suzanne B; Leif, Stephanie H

    2003-07-01

    Flow Cytometry Standard (FCS) was initially created to standardize the software researchers use to analyze, transmit, and store data produced by flow cytometers and sorters. Because of the clinical utility of flow cytometry, it is necessary to have a standard consistent with the requirements of medical regulatory agencies. We extended the existing mapping of FCS to the Digital Imaging and Communications in Medicine (DICOM) standard to include list-mode data produced by flow cytometry, laser scanning cytometry, and microscopic image cytometry. FCS list-mode was mapped to the DICOM Waveform Information Object. We created a collection of Extensible Markup Language (XML) schemas to express the DICOM analytical cytologic text-based data types except for large binary objects. We also developed a cytometry markup language, CytometryML, in an open environment subject to continuous peer review. The feasibility of expressing the data contained in FCS, including list-mode in DICOM, was demonstrated; and a preliminary mapping for list-mode data in the form of XML schemas and documents was completed. DICOM permitted the creation of indices that can be used to rapidly locate in a list-mode file the cells that are members of a subset. DICOM and its coding schemes for other medical standards can be represented by XML schemas, which can be combined with other relevant XML applications, such as Mathematical Markup Language (MathML). The use of XML format based on DICOM for analytical cytology met most of the previously specified requirements and appears capable of meeting the others; therefore, the present FCS should be retired and replaced by an open, XML-based, standard CytometryML. Copyright 2003 Wiley-Liss, Inc.

  11. Optimal color coding for compression of true color images

    NASA Astrophysics Data System (ADS)

    Musatenko, Yurij S.; Kurashov, Vitalij N.

    1998-11-01

    In this paper we present a method that improves lossy compression of true color and other multispectral images. The essence of the method is to project the initial color planes into the Karhunen-Loeve (KL) basis, which gives a completely decorrelated representation of the image, and to compress the basis functions instead of the planes. To do that, a new fast algorithm for true KL basis construction with low memory consumption is suggested, and our recently proposed scheme for finding optimal losses of the KL functions during compression is used. Compared to standard JPEG compression of CMYK images, the method provides a PSNR gain of 0.2 to 2 dB at common compression ratios. Experimental results are obtained for high-resolution CMYK images. It is demonstrated that the presented scheme can run on common hardware.
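
    The core decorrelation step can be sketched in a few lines of numpy: compute the inter-plane covariance, take its eigenvectors as the KL (PCA) basis, and project the planes onto it. The paper's fast low-memory basis construction and its optimal loss allocation are omitted here; this is only the textbook transform.

```python
# KL (PCA) decorrelation of color planes; the transform itself is lossless,
# lossy compression would then be applied to the decorrelated planes.
import numpy as np

h, w, n_planes = 64, 64, 4                      # e.g. a small CMYK image
image = np.random.rand(h, w, n_planes)

planes = image.reshape(-1, n_planes)            # one row per pixel
mean = planes.mean(axis=0)
cov = np.cov(planes - mean, rowvar=False)       # inter-plane covariance (4x4)
eigvals, eigvecs = np.linalg.eigh(cov)          # KL basis vectors (columns)

kl_planes = (planes - mean) @ eigvecs           # decorrelated planes, ready to compress
restored = kl_planes @ eigvecs.T + mean         # exact inverse
print(np.allclose(restored, planes))            # True
```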

  12. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    NASA Astrophysics Data System (ADS)

    Naito, O.

    A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.
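
    The tool described here is browser-based (HTML5/JavaScript), so the following Python snippet is only an illustration of one direction of the mapping, NAMELIST to XML, for simple scalar assignments; the naive regex parser does not handle arrays, repeated groups, or strings containing commas.

```python
# Illustration of a NAMELIST -> XML mapping for simple scalar assignments
# (not the browser tool itself; the parser below is deliberately naive).
import re
import xml.etree.ElementTree as ET

namelist_text = """
&grid
  nx = 128,
  ny = 64,
  dx = 0.5
/
"""

group = re.search(r"&(\w+)(.*?)/", namelist_text, re.S)
root = ET.Element("namelist", name=group.group(1))
for name, value in re.findall(r"(\w+)\s*=\s*([^,\n]+)", group.group(2)):
    ET.SubElement(root, "var", name=name).text = value.strip()

print(ET.tostring(root, encoding="unicode"))
# <namelist name="grid"><var name="nx">128</var>...</namelist>
```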

  13. XML Schema Guide for Secondary CDR Submissions

    EPA Pesticide Factsheets

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.1 XML schema for the Joint Submission Form. Please note that the order of the elements must match the schema.

  14. The SGML Standardization Framework and the Introduction of XML

    PubMed Central

    Grütter, Rolf

    2000-01-01

    Extensible Markup Language (XML) is on its way to becoming a global standard for the representation, exchange, and presentation of information on the World Wide Web (WWW). More than that, XML is creating a standardization framework, in terms of an open network of meta-standards and mediators that allows for the definition of further conventions and agreements in specific business domains. Such an approach is particularly needed in the healthcare domain; XML promises to especially suit the particularities of patient records and their lifelong storage, retrieval, and exchange. At a time when change rather than steadiness is becoming the faithful feature of our society, standardization frameworks which support a diversified growth of specifications that are appropriate to the actual needs of the users are becoming more and more important; and efforts should be made to encourage this new attempt at standardization to grow in a fruitful direction. Thus, the introduction of XML reflects a standardization process which is neither exclusively based on an acknowledged standardization authority, nor a pure market standard. Instead, a consortium of companies, academic institutions, and public bodies has agreed on a common recommendation based on an existing standardization framework. The consortium's process of agreeing to a standardization framework will doubtlessly be successful in the case of XML, and it is suggested that it should be considered as a generic model for standardization processes in the future. PMID:11720931

  15. The SGML standardization framework and the introduction of XML.

    PubMed

    Fierz, W; Grütter, R

    2000-01-01

    Extensible Markup Language (XML) is on its way to becoming a global standard for the representation, exchange, and presentation of information on the World Wide Web (WWW). More than that, XML is creating a standardization framework, in terms of an open network of meta-standards and mediators that allows for the definition of further conventions and agreements in specific business domains. Such an approach is particularly needed in the healthcare domain; XML promises to especially suit the particularities of patient records and their lifelong storage, retrieval, and exchange. At a time when change rather than steadiness is becoming the faithful feature of our society, standardization frameworks which support a diversified growth of specifications that are appropriate to the actual needs of the users are becoming more and more important; and efforts should be made to encourage this new attempt at standardization to grow in a fruitful direction. Thus, the introduction of XML reflects a standardization process which is neither exclusively based on an acknowledged standardization authority, nor a pure market standard. Instead, a consortium of companies, academic institutions, and public bodies has agreed on a common recommendation based on an existing standardization framework. The consortium's process of agreeing to a standardization framework will doubtlessly be successful in the case of XML, and it is suggested that it should be considered as a generic model for standardization processes in the future.

  16. XML at the ADC: Steps to a Next Generation Data Archive

    NASA Astrophysics Data System (ADS)

    Shaya, E.; Blackwell, J.; Gass, J.; Oliversen, N.; Schneider, G.; Thomas, B.; Cheung, C.; White, R. A.

    1999-05-01

    The eXtensible Markup Language (XML) is a document markup language that allows users to specify their own tags, to create hierarchical structures to qualify their data, and to support automatic checking of documents for structural validity. It is being intensively supported by nearly every major corporate software developer. With funding from a NASA AISRP proposal, the Astronomical Data Center (ADC, http://adc.gsfc.nasa.gov) is developing an infrastructure for importation, enhancement, and distribution of data and metadata using XML as the document markup language. We discuss the preliminary Document Type Definition (DTD, at http://adc.gsfc.nasa.gov/xml) which specifies the elements and their attributes in our metadata documents. This DTD attempts to define both the metadata of an astronomical catalog and the `header' information of an astronomical table. In addition, we give an overview of the planned flow of data through automated pipelines from authors and journal presses into our XML archive and retrieval through the web via the XML-QL Query Language and eXtensible Style Language (XSL) scripts. When completed, the catalogs and journal tables at the ADC will be tightly hyperlinked to enhance data discovery. In addition, one will be able to search on fragmentary information. For instance, one could query for a table by entering that the second author is so-and-so or that the third author is at such-and-such institution.

  17. Existence of Optimal Controls for Compressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Doboszczak, Stefan; Mohan, Manil T.; Sritharan, Sivaguru S.

    2018-03-01

    We formulate a control problem for a distributed parameter system where the state is governed by the compressible Navier-Stokes equations. Introducing a suitable cost functional, the existence of an optimal control is established within the framework of strong solutions in three dimensions.

  18. Digital compression algorithms for HDTV transmission

    NASA Technical Reports Server (NTRS)

    Adkins, Kenneth C.; Shalkhauser, Mary JO; Bibyk, Steven B.

    1990-01-01

    Digital compression of video images is a possible avenue for high definition television (HDTV) transmission. Compression needs to be optimized while picture quality remains high. Two techniques for compressing the digital images are explained, and comparisons are drawn between the human vision system and artificial compression techniques. Suggestions for improving compression algorithms through the use of neural and analog circuitry are given.

  19. XML and its impact on content and structure in electronic health care documents.

    PubMed Central

    Sokolowski, R.; Dudeck, J.

    1999-01-01

    Worldwide information networks have the requirement that electronic documents must be easily accessible, portable, flexible and system-independent. With the development of XML (eXtensible Markup Language), the future of electronic documents, health care informatics and the Web itself are about to change. The intent of the recently formed ASTM E31.25 subcommittee, "XML DTDs for Health Care", is to develop standard electronic document representations of paper-based health care documents and forms. A goal of the subcommittee is to work together to enhance existing levels of interoperability among the various XML/SGML standardization efforts, products and systems in health care. The ASTM E31.25 subcommittee uses common practices and software standards to develop the implementation recommendations for XML documents in health care. The implementation recommendations are being developed to standardize the many different structures of documents. These recommendations are in the form of a set of standard DTDs, or document type definitions that match the electronic document requirements in the health care industry. This paper discusses recent efforts of the ASTM E31.25 subcommittee. PMID:10566338

  20. XML schemas for common bioinformatic data types and their application in workflow systems

    PubMed Central

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-01-01

    Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at , the BioDOM library can be obtained at . Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823

  1. mzDB: A File Format Using Multiple Indexing Strategies for the Efficient Analysis of Large LC-MS/MS and SWATH-MS Data Sets*

    PubMed Central

    Bouyssié, David; Dubois, Marc; Nasso, Sara; Gonzalez de Peredo, Anne; Burlet-Schiltz, Odile; Aebersold, Ruedi; Monsarrat, Bernard

    2015-01-01

    The analysis and management of MS data, especially those generated by data independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable an efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML based formats are not well suited to large MS data files, for example, those generated by SWATH-MS, and compromise high-throughput data processing and storage. We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, where the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction. In comparison with XML formats, mzDB saves ∼25% of storage space and improves access times by a factor of two up to 2000-fold, depending on the particular data access. Similarly, mzDB also shows slightly to significantly lower access times in comparison with other formats like mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under a permissive license. mzDB can be easily accessed by the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility. PMID:25505153
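
    The indexing idea, querying peaks by a retention-time / m/z window instead of scanning an XML file, can be sketched with plain SQLite from Python. The table layout below is invented for illustration and is not mzDB's actual schema.

```python
# SQLite range query over LC-MS coordinates (invented table layout, not mzDB's schema).
import sqlite3
import random

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE peak (rt REAL, mz REAL, intensity REAL)")
con.execute("CREATE INDEX idx_peak_rt_mz ON peak (rt, mz)")

random.seed(1)
rows = [(random.uniform(0, 60), random.uniform(300, 1500), random.random())
        for _ in range(100000)]
con.executemany("INSERT INTO peak VALUES (?, ?, ?)", rows)

# Extract the signal in a small RT / m/z window, as done when targeting a peptide.
window = con.execute(
    "SELECT rt, mz, intensity FROM peak "
    "WHERE rt BETWEEN 20.0 AND 21.0 AND mz BETWEEN 500.0 AND 510.0").fetchall()
print(len(window), "peaks in window")
```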

  2. XGI: a graphical interface for XQuery creation.

    PubMed

    Li, Xiang; Gennari, John H; Brinkley, James F

    2007-10-11

    XML has become the default standard for data exchange among heterogeneous data sources, and in January 2007 XQuery (XML Query language) was recommended by the World Wide Web Consortium as the query language for XML. However, XQuery is a complex language that is difficult for non-programmers to learn. We have therefore developed XGI (XQuery Graphical Interface), a visual interface for graphically generating XQuery. In this paper we demonstrate the functionality of XGI through its application to a biomedical XML dataset. We describe the system architecture and the features of XGI in relation to several existing querying systems, we demonstrate the system's usability through a sample query construction, and we discuss a preliminary evaluation of XGI. Finally, we describe some limitations of the system, and our plans for future improvements.

  3. Space Communications Emulation Facility

    NASA Technical Reports Server (NTRS)

    Hill, Chante A.

    2004-01-01

    Establishing space communication between ground facilities and other satellites is a painstaking task that requires many precise calculations dealing with relay time, atmospheric conditions, and satellite positions, to name a few. The Space Communications Emulation Facility (SCEF) team at NASA is developing a facility that will approximately emulate the conditions in space that impact space communication. The emulation facility comprises a 32-node distributed cluster of computers, each node representing a satellite or ground station. The objective of the satellites is to observe the topography of the Earth (water, vegetation, land, and ice) and relay this information back to the ground stations. Software originally designed by the University of Kansas, called the Emulation Manager, controls the interaction of the satellites and ground stations, as well as handling the recording of data. The Emulation Manager is installed on a Linux operating system and uses both Java and C++ code. The emulation scenarios are written in the eXtensible Markup Language, XML. XML documents are designed to store, carry, and exchange data. With XML documents, data can be exchanged between incompatible systems, which makes XML ideal for this project because Linux, Mac and Windows operating systems are all used. Unfortunately, XML documents cannot display data like HTML documents. Therefore, the SCEF team uses XML Schema Definition (XSD), or just schema, to describe the structure of an XML document. Schemas are very important because they can validate the correctness of data, define restrictions on data, define data formats, and convert data between different data types, among other things. At this time, in order for the Emulation Manager to open and run an XML emulation scenario file, the user must first establish a link between the schema file and the directory under which the XML scenario files are saved. This procedure takes place on the command line on the Linux operating system. Once this link has been established, the Emulation Manager validates all the XML files in that directory against the schema file before the actual scenario is run. Using sophisticated commercial software called the Satellite Tool Kit (STK) installed on the Linux box, the Emulation Manager is able to display the data and graphics generated by the execution of an XML emulation scenario file. The Emulation Manager software is written in Java. Since the SCEF project is in the developmental stage, the source code for this software is being modified to better fit the requirements of the SCEF project. Some parameters for the emulation are hard-coded, set at fixed values. Members of the SCEF team are altering the code to allow the user to choose the values of these hard-coded parameters by inserting a toolbar onto the preexisting GUI.

  4. Integrated Design and Production Reference Integration with ArchGenXML V1.00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barter, R H

    2004-07-20

    ArchGenXML is a tool that allows easy creation of Zope products through the use of Archetypes. The Integrated Design and Production Reference (IDPR) should be highly configurable in order to meet the needs of a diverse engineering community. Ease of configuration is key to the success of IDPR. The purpose of this paper is to describe a method of using a UML diagram editor to configure IDPR through ArchGenXML and Archetypes.

  5. PDF for Healthcare and Child Health Data Forms.

    PubMed

    Zuckerman, Alan E; Schneider, Joseph H; Miller, Ken

    2008-11-06

    PDF-H is a new best practices standard that uses XFA forms and embedded JavaScript to combine PDF forms with XML data. Preliminary experience with AAP child health forms shows that the combination of PDF with XML is a more effective method to visualize familiar data on paper and the web than the traditional use of XML and XSLT. Both PDF-H and HL7 Clinical Document Architecture can co-exist using the same data for different display formats.

  6. Follow the Leader Tracking by Autonomous Underwater Vehicles (AUVs) Using Acoustic Communications and Ranging

    DTIC Science & Technology

    2003-09-01

    Deitel, H.M., Deitel, P.J., Nieto, T.R., Lin, T.M., Sadhu, P., XML: How to Program, Prentice Hall, 2001. ... communications will result in a total track-following error equal to the sum of the errors for the two vehicles ... Figure 36: Test point programming ... Refer to (Hunter 2000), (Deitel 2001), or similar references for additional information regarding the XML standard. Figure 17: XML example.

  7. Catalog Descriptions Using VOTable Files

    NASA Astrophysics Data System (ADS)

    Thompson, R.; Levay, K.; Kimball, T.; White, R.

    2008-08-01

    Additional information is frequently required to describe database table contents and make them understandable to users. For this reason, the Multimission Archive at Space Telescope (MAST) creates "description files" for each table/catalog. After trying various XML and CSV formats, we finally chose VOTable. These files are easy to update via an HTML form, easily read using an XML parser such as (in our case) the PHP5 SimpleXML extension, and have found multiple uses in our data access/retrieval process.

  8. TH-C-12A-12: Veritas: An Open Source Tool to Facilitate User Interaction with TrueBeam Developer Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Varian Medical Systems, Palo Alto, CA; Lewis, J

    2014-06-15

    Purpose: To address the challenges of creating delivery trajectories and imaging sequences with TrueBeam Developer Mode, a new open-source graphical XML builder, Veritas, has been developed, tested and made freely available. Veritas eliminates most of the need to understand the underlying schema and write XML scripts by providing a graphical menu for each control point specifying the state of 30 mechanical/dose axes. All capabilities of Developer Mode are accessible in Veritas. Methods: Veritas was designed using Qt Designer, a 'what-you-see-is-what-you-get' (WYSIWYG) tool for building graphical user interfaces (GUIs). Different components of the GUI are integrated using Qt's signals and slots mechanism. Functionalities are added using PySide, an open-source, cross-platform Python binding for the Qt framework. The XML code generated is immediately visible, making it an interactive learning tool. A user starts from an anonymized DICOM file or XML example and introduces delivery modifications, or begins their experiment from scratch, then uses the GUI to modify control points as desired. The software automatically generates XML plans following the appropriate schema. Results: Veritas was tested by generating and delivering two XML plans at Brigham and Women's Hospital. The first example was created to irradiate the letter 'B' with a narrow MV beam using dynamic couch movements. The second was created to acquire 4D CBCT projections for four minutes. The delivery of the letter 'B' was observed using a 2D array of ionization chambers. Both deliveries were generated quickly in Veritas by non-expert Developer Mode users. Conclusion: We introduced a new open-source tool, Veritas, for generating XML plans (delivery trajectories and imaging sequences). Veritas makes Developer Mode more accessible by reducing the learning curve for quick translation of research ideas into XML plans. Veritas is an open-source initiative, creating the possibility for future developments and collaboration with other researchers. I am an employee of Varian Medical Systems.

  9. A Discriminative Sentence Compression Method as Combinatorial Optimization Problem

    NASA Astrophysics Data System (ADS)

    Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki

    In the study of automatic summarization, the main research topic was `important sentence extraction', but nowadays `sentence compression' is a hot research topic. Conventional sentence compression methods usually transform a given sentence into a parse tree or a dependency tree and modify it to get a shorter sentence. However, this approach is sometimes too rigid. In this paper, we regard sentence compression as a combinatorial optimization problem that extracts an optimal subsequence of words. Hori et al. also proposed a similar method, but they used only a small number of features and their weights were tuned by hand. We introduce a large number of features, such as part-of-speech bigrams and word position in the sentence. Furthermore, we train the system by discriminative learning. According to our experiments, our method obtained a better score than other methods, with statistical significance.

  10. XML DTD and Schemas for HDF-EOS

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Yang, Jingli

    2008-01-01

    An Extensible Markup Language (XML) document type definition (DTD) standard for the structure and contents of HDF-EOS files and their contents, and an equivalent standard in the form of schemas, have been developed.

  11. MedlinePlus Milestones: 1998-present

    MedlinePlus

    ... page links and information daily and also offers access to this full XML content through its Web ... search-based Web service that allows developers to access MedlinePlus health topic data in XML format. MedlinePlus ...

  12. ForConX: A forcefield conversion tool based on XML.

    PubMed

    Lesch, Volker; Diddens, Diddo; Bernardes, Carlos E S; Golub, Benjamin; Dequidt, Alain; Zeindlhofer, Veronika; Sega, Marcello; Schröder, Christian

    2017-04-05

    Force field conversion from one MD program to another is exhausting and error-prone. Although individual conversion tools from one MD program to another exist, not every combination and both directions of conversion are available for the favorite MD programs Amber, Charmm, Dl-Poly, Gromacs, and Lammps. We present here a general tool for force field conversion on the basis of an XML document. The force field is converted to and from this XML structure, facilitating the implementation of new MD programs in the conversion. Furthermore, the XML structure is human readable and can be manipulated before continuing the conversion. We report, as test cases, the conversions of topologies for acetonitrile, dimethylformamide, and 1-ethyl-3-methylimidazolium trifluoromethanesulfonate, comprising also Urey-Bradley and Ryckaert-Bellemans potentials. © 2017 Wiley Periodicals, Inc.

  13. Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration

    NASA Technical Reports Server (NTRS)

    Lin, Risheng; Afjeh, Abdollah A.

    2003-01-01

    This paper discusses the detailed design of an XML databinding framework for aircraft engine simulation. The framework provides an object interface to access and use engine data, while at the same time preserving the meaning of the original data. The language-independent representation of engine component data enables users to move XML data across disparate networks using HTTP. The application of this framework is demonstrated via a web-based turbofan propulsion system simulation using the World Wide Web (WWW). A Java Servlet based web component architecture is used for rendering XML engine data into HTML format and dealing with input events from the user, which allows users to interact with simulation data from a web browser. The simulation data can also be saved to a local disk for archiving or to restart the simulation at a later time.

  14. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    PubMed Central

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230

  15. Compressed Air System Optimization: Case Study Food Industry in Indonesia

    NASA Astrophysics Data System (ADS)

    Widayati, Endang; Nuzahar, Hasril

    2016-01-01

    Compressors and compressed air systems are among the most important utilities in industrial plants. Approximately 10% of the cost of electricity in industry is used to produce compressed air, so the potential for energy savings in compressors and compressed air systems is substantial. This study was conducted in the Indonesian food industry. Compressed air system optimization is a systematic approach to determining the optimal conditions for the operation of compressors and compressed air systems; it includes evaluating energy needs, adjusting supply, eliminating or reconfiguring inefficient uses and operation, replacing or supplementing equipment, and improving operating efficiencies. This technique has a significant impact on energy and cost savings. The potential savings identified in this study through measurement and optimization include: lowering the system pressure from 7.5 barg to 6.8 barg, which would reduce energy consumption and running costs by approximately 4.2%; switching off compressors GA110 and GA75, giving annual savings of USD 52,947 (about 455,714 kWh); running GA75 at light load or unloaded, giving annual savings of USD 31,841 (about 270,685 kWh); and installing new compressors (2 x 132 kW plus 1 x 132 kW VSD), giving annual savings of USD 108,325 (about 928,500 kWh). Further work should include an investment-grade audit of the technical energy-saving potential and a cost-benefit analysis. This study illustrates best-practice solutions for saving energy and improving energy performance in compressors and compressed air systems.
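
    A quick back-of-the-envelope check of the figures quoted above: dividing each annual cost saving by the corresponding energy saving gives the electricity tariff implicitly assumed in the study (the tariff itself is not stated in the abstract, so this is an inference from the reported numbers).

```python
# Implied electricity tariff from the reported (USD, kWh) savings pairs.
cases = {
    "switch off GA110 and GA75": (52_947, 455_714),
    "run GA75 unloaded":         (31_841, 270_685),
    "new 2x132 kW + 132 kW VSD": (108_325, 928_500),
}
for name, (usd, kwh) in cases.items():
    print(f"{name}: {usd / kwh:.3f} USD/kWh")
# All three work out to roughly 0.11-0.12 USD/kWh, i.e. the reported
# figures are internally consistent with a single assumed tariff.
```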

  16. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    PubMed Central

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality. PMID:23049544
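
    A greatly simplified sketch of the pipeline described above: decompose the image with a wavelet transform, keep the lowest-frequency subband exactly, and vector-quantize each detail subband, here with a fixed 4x4 block size and plain k-means. The local-fractal-dimension analysis, quadtree partitioning into variable block sizes and energy-based codebook training of the paper are omitted, and the third-party packages numpy, pywt and scipy are assumed to be available.

```python
# Wavelet + vector-quantization sketch (fixed block size, plain k-means).
import numpy as np
import pywt
from scipy.cluster.vq import kmeans2

def quantize_subband(band, block=4, codewords=32):
    """Vector-quantize a detail subband with fixed-size blocks."""
    h, w = (band.shape[0] // block) * block, (band.shape[1] // block) * block
    blocks = (band[:h, :w]
              .reshape(h // block, block, w // block, block)
              .swapaxes(1, 2)
              .reshape(-1, block * block))
    codebook, labels = kmeans2(blocks.astype(float), codewords, minit="points")
    return codebook, labels            # stored instead of the raw coefficients

image = np.random.rand(128, 128)       # stand-in for a medical image
coeffs = pywt.wavedec2(image, "haar", level=2)
approx, detail_levels = coeffs[0], coeffs[1:]
compressed = [quantize_subband(band) for level in detail_levels for band in level]
print("approximation kept losslessly:", approx.shape,
      "| quantized detail subbands:", len(compressed))
```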

  17. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    PubMed

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  18. XML schemas for common bioinformatic data types and their application in workflow systems.

    PubMed

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-11-06

    Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data--therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.

  19. A distributed computing system for magnetic resonance imaging: Java-based processing and binding of XML.

    PubMed

    de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D

    2004-03-01

    Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.

  20. Flight Dynamic Model Exchange using XML

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Hildreth, Bruce L.

    2002-01-01

    The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.

  1. Shuttle-Data-Tape XML Translator

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.
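
    The configuration-driven parsing idea can be illustrated with a short sketch: an XML file declares each field's name and column layout, and the parser slices fixed-width ASCII records accordingly. The element and attribute names below are hypothetical and are not the actual JSDTImport configuration schema.

```python
# XML-configured parsing of fixed-width ASCII records.
import xml.etree.ElementTree as ET

layout_xml = """
<record name="sdt_row">
  <field name="msid"   start="0"  length="8"/>
  <field name="sample" start="8"  length="12"/>
  <field name="units"  start="20" length="4"/>
</record>
"""

def parse_record(line, layout_root):
    """Slice one fixed-width line according to the XML layout."""
    row = {}
    for f in layout_root.findall("field"):
        start, length = int(f.get("start")), int(f.get("length"))
        row[f.get("name")] = line[start:start + length].strip()
    return row

layout = ET.fromstring(layout_xml)
line = "V76X1234" + "12.5".rjust(12) + "PSI".ljust(4)   # toy telemetry record
print(parse_record(line, layout))
```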

  2. [Development of a medical equipment support information system based on PDF portable document].

    PubMed

    Cheng, Jiangbo; Wang, Weidong

    2010-07-01

    In line with the organizational structure and management system of hospital medical engineering support, the medical engineering support workflow was integrated to ensure that medical engineering data are collected effectively, accurately and comprehensively and kept in electronic archives. The workflow of medical equipment support work was analyzed, and all work processes are recorded in portable electronic documents. Using XML middleware technology and an SQL Server database, the system implements process management, data calculation, submission, storage and other functions. Practical application shows that the medical equipment support information system optimizes the existing work process, making it standardized, digital, automatic, efficient, orderly and controllable. A medical equipment support information system based on portable electronic documents can effectively optimize and improve hospital medical engineering support work, improve performance, reduce costs, and provide complete and accurate digital data.

  3. Tongue Scrapers Only Slightly Reduce Bad Breath

    MedlinePlus

    ... information you need from the Academy of General Dentistry ... study in the September/October issue of General Dentistry, the Academy of General Dentistry ...

  4. XML in an Adaptive Framework for Instrument Control

    NASA Technical Reports Server (NTRS)

    Ames, Troy J.

    2004-01-01

    NASA Goddard Space Flight Center is developing an extensible framework for instrument command and control, known as Instrument Remote Control (IRC), that combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms.

  5. Redefining the Data Pipeline Using GPUs

    NASA Astrophysics Data System (ADS)

    Warner, C.; Eikenberry, S. S.; Gonzalez, A. H.; Packham, C.

    2013-10-01

    There are two major challenges facing the next generation of data processing pipelines: 1) handling an ever increasing volume of data as array sizes continue to increase and 2) the desire to process data in near real-time to maximize observing efficiency by providing rapid feedback on data quality. Combining the power of modern graphics processing units (GPUs), relational database management systems (RDBMSs), and extensible markup language (XML) to re-imagine traditional data pipelines will allow us to meet these challenges. Modern GPUs contain hundreds of processing cores, each of which can process hundreds of threads concurrently. Technologies such as Nvidia's Compute Unified Device Architecture (CUDA) platform and the PyCUDA (http://mathema.tician.de/software/pycuda) module for Python allow us to write parallel algorithms and easily link GPU-optimized code into existing data pipeline frameworks. This approach has produced speed gains of over a factor of 100 compared to CPU implementations for individual algorithms and overall pipeline speed gains of a factor of 10-25 compared to traditionally built data pipelines for both imaging and spectroscopy (Warner et al., 2011). However, there are still many bottlenecks inherent in the design of traditional data pipelines. For instance, file input/output of intermediate steps is now a significant portion of the overall processing time. In addition, most traditional pipelines are not designed to be able to process data on-the-fly in real time. We present a model for a next-generation data pipeline that has the flexibility to process data in near real-time at the observatory as well as to automatically process huge archives of past data by using a simple XML configuration file. XML is ideal for describing both the dataset and the processes that will be applied to the data. Meta-data for the datasets would be stored using an RDBMS (such as mysql or PostgreSQL) which could be easily and rapidly queried and file I/O would be kept at a minimum. We believe this redefined data pipeline will be able to process data at the telescope, concurrent with continuing observations, thus maximizing precious observing time and optimizing the observational process in general. We also believe that using this design, it is possible to obtain a speed gain of a factor of 30-40 over traditional data pipelines when processing large archives of data.
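
    The XML-configured pipeline idea can be sketched as follows: the XML file lists the processing steps to apply, and the framework dispatches each one in order. In the system described above the steps would be GPU (CUDA/PyCUDA) kernels; here they are plain numpy/scipy stand-ins, and the step names and parameters are hypothetical.

```python
# XML-driven pipeline dispatcher with CPU stand-ins for the GPU kernels.
import numpy as np
import xml.etree.ElementTree as ET

config = ET.fromstring("""
<pipeline>
  <step name="subtract_bias" value="100.0"/>
  <step name="flat_field"/>
  <step name="median_filter" size="3"/>
</pipeline>
""")

def subtract_bias(frame, value="0.0"):
    return frame - float(value)

def flat_field(frame):
    return frame / frame.mean()

def median_filter(frame, size="3"):
    from scipy.ndimage import median_filter as mf   # requires scipy
    return mf(frame, size=int(size))

STEPS = {"subtract_bias": subtract_bias, "flat_field": flat_field,
         "median_filter": median_filter}

frame = np.random.normal(1100.0, 10.0, (64, 64))     # stand-in detector frame
for step in config.findall("step"):
    kwargs = {k: v for k, v in step.attrib.items() if k != "name"}
    frame = STEPS[step.get("name")](frame, **kwargs)
print("processed frame:", frame.shape, frame.dtype)
```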

  6. Fpack and Funpack Utilities for FITS Image Compression and Uncompression

    NASA Technical Reports Server (NTRS)

    Pence, W.

    2008-01-01

    Fpack is a utility program for optimally compressing images in the FITS (Flexible Image Transport System) data format (see http://fits.gsfc.nasa.gov). The associated funpack program restores the compressed image file back to its original state (as long as a lossless compression algorithm is used). These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs except that they are optimized for FITS format images and offer a wider choice of compression algorithms. Fpack stores the compressed image using the FITS tiled image compression convention (see http://fits.gsfc.nasa.gov/fits_registry.html). Under this convention, the image is first divided into a user-configurable grid of rectangular tiles, and then each tile is individually compressed and stored in a variable-length array column in a FITS binary table. By default, fpack usually adopts a row-by-row tiling pattern. The FITS image header keywords remain uncompressed for fast access by FITS reading and writing software. The tiled image compression convention can in principle support any number of different compression algorithms. The fpack and funpack utilities call on routines in the CFITSIO library (http://hesarc.gsfc.nasa.gov/fitsio) to perform the actual compression and uncompression of the FITS images, which currently supports the GZIP, Rice, H-compress, and PLIO IRAF pixel list compression algorithms.

  7. KAT: A Flexible XML-based Knowledge Authoring Environment

    PubMed Central

    Hulse, Nathan C.; Rocha, Roberto A.; Del Fiol, Guilherme; Bradshaw, Richard L.; Hanna, Timothy P.; Roemer, Lorrie K.

    2005-01-01

    As part of an enterprise effort to develop new clinical information systems at Intermountain Health Care, the authors have built a knowledge authoring tool that facilitates the development and refinement of medical knowledge content. At present, users of the application can compose order sets and an assortment of other structured clinical knowledge documents based on XML schemas. The flexible nature of the application allows the immediate authoring of new types of documents once an appropriate XML schema and accompanying Web form have been developed and stored in a shared repository. The need for a knowledge acquisition tool stems largely from the desire for medical practitioners to be able to write their own content for use within clinical applications. We hypothesize that medical knowledge content for clinical use can be successfully created and maintained through XML-based document frameworks containing structured and coded knowledge. PMID:15802477

  8. ADASS Web Database XML Project

    NASA Astrophysics Data System (ADS)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.

  9. Suggestions for Improvement of User Access to GOCE L2 Data

    NASA Astrophysics Data System (ADS)

    Tscherning, C. C.

    2011-07-01

    ESA has required that most GOCE L2 products be delivered in XML format. This creates difficulties for users because a parser written in Perl is needed to convert the files to files without XML tags. However, several products, such as the spherical harmonic coefficients, are made available in standard form through the International Center for Global Gravity Field Models. The variance-covariance information for the gravity field models is only available without XML tags. It is suggested that all XML products be made available in the Virtual Data Archive as files without tags. Besides making the data directly usable by a FORTRAN program, this would also reduce the size (storage requirements) of the products to about 30%. A further reduction of storage could be achieved by tuning the number of digits for the individual quantities in the products so that it corresponds to the actual number of significant digits.

  10. An XML-based method for astronomy software designing

    NASA Astrophysics Data System (ADS)

    Liao, Mingxue; Aili, Yusupu; Zhang, Jin

    An XML-based method for standardizing software design is introduced and analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for obtaining time information from the new FT206 digital clock in the antenna control program is described. With FT206, there is no need to compute with sophisticated formulas how many centuries have passed since a given day, and it is no longer necessary to keep UT time set correctly on the computer controlling the antenna, because the year, month and day are all derived from the Julian day held in FT206 rather than from the computer time. With an XML-based method and standard for software design, the various existing design methods are unified, communication and collaboration between developers are facilitated, and an Internet-based mode of software development becomes possible. The development trend of the XML-based design method is also discussed.

  11. Utilizing the Structure and Content Information for XML Document Clustering

    NASA Astrophysics Data System (ADS)

    Tran, Tien; Kutty, Sangeetha; Nayak, Richi

    This paper reports on the experiments and results of a clustering approach used in the INEX 2008 document mining challenge. The clustering approach utilizes both the structure and content information of the Wikipedia XML document collection. A latent semantic kernel (LSK) is used to measure the semantic similarity between XML documents based on their content features. The construction of a latent semantic kernel involves the computing of singular vector decomposition (SVD). On a large feature space matrix, the computation of SVD is very expensive in terms of time and memory requirements. Thus in this clustering approach, the dimension of the document space of a term-document matrix is reduced before performing SVD. The document space reduction is based on the common structural information of the Wikipedia XML document collection. The proposed clustering approach has shown to be effective on the Wikipedia collection in the INEX 2008 document mining challenge.
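
    A minimal sketch of a latent semantic kernel is shown below: factor a term-document matrix with a truncated SVD, project the documents into the reduced latent space, and measure similarity there by the cosine. The toy matrix stands in for the Wikipedia collection, and the structural document-space reduction used in the cited approach is not reproduced.

```python
# Latent semantic kernel via truncated SVD on a toy term-document matrix.
import numpy as np

A = np.array([[2, 0, 1, 0],     # rows: terms, columns: documents
              [1, 0, 1, 0],
              [0, 3, 0, 2],
              [0, 1, 0, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T      # documents in the k-dim latent space

def kernel(i, j):
    """Cosine similarity between documents i and j in the latent space."""
    a, b = docs[i], docs[j]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(kernel(0, 2), kernel(0, 1))       # same-topic pair vs cross-topic pair
```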

  12. XML Based Scientific Data Management Facility

    NASA Technical Reports Server (NTRS)

    Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.

  13. XTCE GOVSAT Tool Suite 1.0

    NASA Technical Reports Server (NTRS)

    Rice, J. Kevin

    2013-01-01

    The XTCE GOVSAT software suite contains three tools: validation, search, and reporting. The Extensible Markup Language (XML) Telemetric and Command Exchange (XTCE) GOVSAT Tool Suite is written in Java for manipulating XTCE XML files. XTCE is a Consultative Committee for Space Data Systems (CCSDS) and Object Management Group (OMG) specification for describing the format and information in telemetry and command packet streams. These descriptions are files that are used to configure real-time telemetry and command systems for mission operations. XTCE's purpose is to exchange database information between different systems. XTCE GOVSAT consists of rules for narrowing the use of XTCE for missions. The Validation Tool is used to syntax-check GOVSAT XML files. The Search Tool is used to search the GOVSAT XML files (e.g., for command and telemetry mnemonics) and view the results. Finally, the Reporting Tool is used to create command and telemetry reports. These reports can be displayed or printed for use by the operations team.

  14. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS) to assist in the definition of a common Extensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper will discuss standardization of the XML database exchange format, the tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and government.

  15. Cloud Optimized Image Format and Compression

    NASA Astrophysics Data System (ADS)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  16. Determination of stresses in RC eccentrically compressed members using optimization methods

    NASA Astrophysics Data System (ADS)

    Lechman, Marek; Stachurski, Andrzej

    2018-01-01

    The paper presents an optimization method for determining the strains and stresses in reinforced concrete (RC) members subjected to eccentric compression. The governing equations for strains in rectangular cross-sections are derived by integrating the equilibrium equations of the cross-sections, taking account of the effect of concrete softening in the plastic range and the mean compressive strength of concrete. The stress-strain relationship for concrete in compression under short-term uniaxial loading is assumed according to Eurocode 2 for nonlinear analysis. For the reinforcing steel, a linear-elastic model with hardening in the plastic range is applied. The task consists in solving the set of derived equations subject to box constraints. The resulting problem was solved by means of the fmincon function from Matlab's Optimization Toolbox. Numerical experiments have shown the existence of many points satisfying the equations with very good accuracy. Therefore, some global optimization operations were included: starting fmincon from many points and clustering the results. The model is verified on a set of data encountered in engineering practice.
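
    A Python analogue of the described solution strategy (in the paper, Matlab's fmincon with multiple starting points) is sketched below: minimize the squared residual of a set of nonlinear equations subject to box constraints, restart from many random points, and keep the distinct minima. The equations are a toy stand-in, not the paper's cross-section equilibrium equations.

```python
# Multi-start bounded minimization of a nonlinear-equation residual.
import numpy as np
from scipy.optimize import minimize

def residual(x):
    x1, x2 = x
    r1 = x1**2 + x2 - 2.0    # toy "equilibrium" equations with two roots in the box
    r2 = x1 * x2 - 1.0
    return r1**2 + r2**2

bounds = [(0.0, 2.0), (0.0, 2.0)]            # box constraints on the unknowns
rng = np.random.default_rng(0)
solutions = []
for _ in range(50):                          # multi-start
    x0 = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds])
    res = minimize(residual, x0, method="L-BFGS-B", bounds=bounds)
    if res.fun < 1e-8 and not any(np.allclose(res.x, s, atol=1e-3) for s in solutions):
        solutions.append(np.round(res.x, 4))  # crude clustering of distinct minima
print(solutions)                              # roughly (1, 1) and (0.618, 1.618)
```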

  17. Optimization of poorly compactable drug tablets manufactured by direct compression using the mixture experimental design.

    PubMed

    Martinello, Tiago; Kaneko, Telma Mary; Velasco, Maria Valéria Robles; Taqueda, Maria Elena Santos; Consiglieri, Vladi O

    2006-09-28

    The poor flowability and bad compressibility characteristics of paracetamol are well known. As a result, the production of paracetamol tablets is almost exclusively by wet granulation, a disadvantageous method when compared to direct compression. The development of a new tablet formulation is still based on a large number of experiments and often relies merely on the experience of the analyst. The purpose of this study was to apply design of experiments (DOE) methodology to the development and optimization of tablet formulations containing high amounts of paracetamol (more than 70%) and manufactured by direct compression. Nineteen formulations, screened by DOE methodology, were produced with different proportions of Microcel 102, Kollydon VA 64, Flowlac, Kollydon CL 30, PEG 4000, Aerosil, and magnesium stearate. Tablet properties, except friability, were in accordance with the USP 28th ed. requirements. These results were used to generate plots for optimization, mainly for friability. The physical-chemical data found for the optimized formulation were very close to those from the regression analysis, demonstrating that the mixture experimental design is a valuable tool for the research and development of new formulations.

  18. Even Four Minutes of Poor Quality of CPR Compromises Outcome in a Porcine Model of Prolonged Cardiac Arrest

    PubMed Central

    Li, Heng; Zhang, Lei; Yang, Zhengfei; Huang, Zitong; Chen, Bihua; Li, Yongqin; Yu, Tao

    2013-01-01

    Objective. Untrained bystanders usually deliver suboptimal chest compressions to victims of cardiac arrest in out-of-hospital settings. We therefore investigated the hemodynamics and resuscitation outcome of initially suboptimal quality of chest compressions compared to optimal ones in a porcine model of cardiac arrest. Methods. Fourteen Yorkshire pigs weighing 30 ± 2 kg were randomized into good and poor cardiopulmonary resuscitation (CPR) groups. Ventricular fibrillation was electrically induced and left untreated for 6 minutes. In the good CPR group, animals received high-quality manual chest compressions according to the Guidelines (25% of the animal's anterior-posterior thoracic diameter) during the first two minutes of CPR, compared with poor compressions (70% of the optimal depth) in the poor CPR group. After that, a 120-J biphasic shock was delivered. If the animal did not achieve return of spontaneous circulation, another 2 minutes of CPR and a shock followed. Four minutes later, both groups received optimal CPR until a total of 10 minutes of CPR had been completed. Results. All seven animals in the good CPR group were resuscitated compared with only two in the poor CPR group (P < 0.05). The delayed optimal compressions that followed 4 minutes of suboptimal compressions failed to increase the lower coronary perfusion pressure of the five non-surviving animals in the poor CPR group. Conclusions. In a porcine model of prolonged cardiac arrest, even four minutes of initially poor quality of CPR compromises the hemodynamics and survival outcome. PMID:24364028

  19. Studies of a new multi-layer compression bandage for the treatment of venous ulceration.

    PubMed

    Scriven, J M; Bello, M; Taylor, L E; Wood, A J; London, N J

    2000-03-01

    This study aimed to develop an alternative graduated compression bandage for the treatment of venous leg ulcers. Alternative bandage components were identified and assessed for optimal performance as a graduated multi-layer compression bandage. Subsequently the physical characteristics and clinical efficacy of the optimal bandage combination was prospectively examined. Ten healthy limbs were used to develop the optimal combination and 20 limbs with venous ulceration to compare the physical properties of the two bandage types. Subsequently 42 consecutive ulcerated limbs were prospectively treated to examine the efficacy of the new bandage combination. The new combination produced graduated median (range) sub-bandage pressures (mmHg) as follows: ankle 59 (42-100), calf 36 (27-67) and knee 35 (16-67). Over a seven-day period this combination maintained a comparable level of compression with the Charing Cross system, and achieved an overall healing rate at one year of 88%. The described combination should be brought to the attention of healthcare professionals treating venous ulcers as a possible alternative to other forms of multi-layer graduated compression bandages pending prospective, randomised clinical trials.

  20. XML-Based Generator of C++ Code for Integration With GUIs

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increased susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is that it stores input data in a structured manner; more importantly, XML allows not just storing of data but also describing what each of the data items is. The XML file contains information useful for rendering the data by other applications. The program then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: 1. As an executable, it generates the corresponding C++ classes, and 2. As a library, it automatically fills the objects with the input data values.
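
    The generation step can be illustrated with a short sketch that reads a small XML description of the input parameters and emits a C++ struct for the simulation code to compile in. The spec format, type names and generated code are illustrative only and are not the schema or output of the tool described above.

```python
# Emit a C++ struct declaration from a small XML parameter specification.
import xml.etree.ElementTree as ET

spec = ET.fromstring("""
<inputs name="SimulationInput">
  <param name="temperature" type="double" default="300.0"/>
  <param name="num_steps"   type="int"    default="1000"/>
  <param name="material"    type="std::string" default='"GaAs"'/>
</inputs>
""")

def generate_struct(root):
    lines = [f"struct {root.get('name')} {{"]
    for p in root.findall("param"):
        lines.append(f"  {p.get('type')} {p.get('name')} = {p.get('default')};")
    lines.append("};")
    return "\n".join(lines)

print(generate_struct(spec))
```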

  1. Automated construction of an intraoperative high-dose-rate treatment plan library for the Varian brachytherapy treatment planning system.

    PubMed

    Deufel, Christopher L; Furutani, Keith M; Dahl, Robert A; Haddock, Michael G

    2016-01-01

    The ability to create treatment plans for intraoperative high-dose-rate (IOHDR) brachytherapy is limited by lack of imaging and time constraints. An automated method for creation of a library of high-dose-rate brachytherapy plans that can be used with standard planar applicators in the intraoperative setting is highly desirable. Nonnegative least squares algebraic methods were used to identify dwell time values for flat, rectangular planar applicators. The planar applicators ranged in length and width from 2 cm to 25 cm. Plans were optimized to deliver an absorbed dose of 10 Gy to three different depths from the patient surface: 0 cm, 0.5 cm, and 1.0 cm. Software was written to calculate the optimized dwell times and insert dwell times and positions into a .XML plan template that can be imported into the Varian brachytherapy treatment planning system. The user may import the .XML template into the treatment planning system in the intraoperative setting to match the patient applicator size and prescribed treatment depth. A total of 1587 library plans were created for IOHDR brachytherapy. Median plan generation time was approximately 1 minute per plan. Plan dose was typically 100% ± 1% (mean, standard deviation) of the prescribed dose over the entire length and width of the applicator. Plan uniformity was best for prescription depths of 0 cm and 0.5 cm from the patient surface. An IOHDR plan library may be created using automated methods. Thousands of plan templates may be optimized and prepared in a few hours to accommodate different applicator sizes and treatment depths and reduce treatment planning time. The automated method also enforces dwell time symmetry for symmetrical applicator geometries, which simplifies quality assurance. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
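
    The nonnegative-least-squares step can be sketched as follows: given a matrix whose entries are the dose rates delivered to calculation points per unit dwell time at each source position, solve for nonnegative dwell times that reproduce the prescription, then write them into an XML fragment. The dose-rate model below is a toy inverse-square kernel and the XML element names are hypothetical, not the Varian plan schema.

```python
# Nonnegative least squares for dwell times on a planar applicator (toy model).
import numpy as np
from scipy.optimize import nnls
import xml.etree.ElementTree as ET

dwell_xy = np.array([[x, 0.0] for x in np.arange(-2.0, 2.01, 0.5)])   # source positions (cm)
calc_xy = np.array([[x, 1.0] for x in np.arange(-2.0, 2.01, 0.25)])   # points at 1 cm depth
A = 1.0 / ((calc_xy[:, None, :] - dwell_xy[None, :, :]) ** 2).sum(-1)  # ~inverse-square kernel
prescription = np.full(len(calc_xy), 10.0)                             # 10 Gy at every point

times, residual_norm = nnls(A, prescription)
print("dwell times:", np.round(times, 2), "residual:", round(residual_norm, 3))

plan = ET.Element("Plan")                      # minimal, hypothetical plan fragment
for (x, y), t in zip(dwell_xy, times):
    ET.SubElement(plan, "DwellPosition", x=f"{x:.1f}", y=f"{y:.1f}", time=f"{t:.2f}")
print(ET.tostring(plan, encoding="unicode")[:120], "...")
```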

  2. FIBER OPTICS. ACOUSTOOPTICS: Compression of random pulses in fiber waveguides

    NASA Astrophysics Data System (ADS)

    Aleshkevich, Viktor A.; Kozhoridze, G. D.

    1990-07-01

    An investigation is made of the compression of randomly modulated signal + noise pulses during their propagation in a fiber waveguide. An allowance is made for a cubic nonlinearity and quadratic dispersion. The relationships governing the kinetics of transformation of the time envelope, and those which determine the duration and intensity of a random pulse are derived. The expressions for the optimal length of a fiber waveguide and for the maximum degree of compression are compared with the available data for regular pulses and the recommendations on selection of the optimal parameters are given.

  3. Pulse compression at 1.06 μm in dispersion-decreasing holey fibers

    NASA Astrophysics Data System (ADS)

    Tse, M. L. V.; Horak, P.; Price, J. H. V.; Poletti, F.; He, F.; Richardson, D. J.

    2006-12-01

    We report compression of low-power femtosecond pulses at 1.06 μm in a dispersion-decreasing holey fiber. Near-adiabatic compression of 130 fs pulses down to 60 fs has been observed. Measured spectra and pulse shapes agree well with numerical simulations. Compression factors of ten are possible in optimized fibers.

  4. The XML approach to implementing space link extension service management

    NASA Technical Reports Server (NTRS)

    Tai, W.; Welz, G. A.; Theis, G.; Yamada, T.

    2001-01-01

    A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function.

  5. XTCE. XML Telemetry and Command Exchange Tutorial

    NASA Technical Reports Server (NTRS)

    Rice, Kevin; Kizzort, Brad; Simon, Jerry

    2010-01-01

    An XML Telemetry and Command Exchange (XTCE) tutorial oriented towards packets or minor frames is presented. The contents include: 1) The Basics; 2) Describing Telemetry; 3) Describing the Telemetry Format; 4) Commanding; 5) Forgotten Elements; 6) Implementing XTCE; and 7) GovSat.

  6. XML Schema Versioning Policies Version 1.0

    NASA Astrophysics Data System (ADS)

    Harrison, Paul; Demleitner, Markus; Major, Brian; Dowler, Pat; Harrison, Paul

    2018-05-01

    This note describes the recommended practice for the evolution of IVOA standard XML schemata that are associated with IVOA standards. The criteria for deciding what might be considered major and minor changes and the policies for dealing with each case are described.

  7. XML technologies for the Omaha System: a data model, a Java tool and several case studies supporting home healthcare.

    PubMed

    Vittorini, Pierpaolo; Tarquinio, Antonietta; di Orio, Ferdinando

    2009-03-01

    The eXtensible markup language (XML) is a metalanguage which is useful to represent and exchange data between heterogeneous systems. XML may enable healthcare practitioners to document, monitor, evaluate, and archive medical information and services into distributed computer environments. Therefore, the most recent proposals on electronic health records (EHRs) are usually based on XML documents. Since none of the existing nomenclatures were specifically developed for use in automated clinical information systems, but were adapted to such use, numerous current EHRs are organized as a sequence of events, each represented through codes taken from international classification systems. In nursing, a hierarchically organized problem-solving approach is followed, which hardly couples with the sequential organization of such EHRs. Therefore, the paper presents an XML data model for the Omaha System taxonomy, which is one of the most important international nomenclatures used in the home healthcare nursing context. Such a data model represents the formal definition of EHRs specifically developed for nursing practice. Furthermore, the paper delineates a Java application prototype which is able to manage such documents, shows the possibility to transform such documents into readable web pages, and reports several case studies, one currently managed by the home care service of a Health Center in Central Italy.

  8. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    PubMed

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
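
    The acquisition step can be illustrated in a few lines: a Boolean sampling matrix projects an N-dimensional signal into M < N measurements by matrix-vector multiplication, which in hardware reduces to additions rather than multiplications. The sketch uses a random Boolean matrix, i.e. the baseline the paper improves on with its data-driven optimization; the sparse reconstruction step is not shown.

```python
# Compressive acquisition with a random Boolean sampling matrix.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5                      # signal length, measurements, sparsity

x = np.zeros(N)                           # a K-sparse test signal
x[rng.choice(N, K, replace=False)] = rng.normal(size=K)

Phi = rng.integers(0, 2, size=(M, N)).astype(float)   # random Boolean sampling matrix
y = Phi @ x                               # compressive measurements (M-dimensional)

print("compression ratio:", N / M, "| measurement vector shape:", y.shape)
```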

  9. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    NASA Astrophysics Data System (ADS)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
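
    For reference, basic (non-adaptive) pulse compression is frequency-domain matched filtering, as in the CPU sketch below; in the implementation described above the same FFT, multiply, inverse-FFT pattern maps onto cuFFT and cuBLAS kernels. The adaptive variant and its kernel tuning are not shown.

```python
# Matched-filter pulse compression of a linear FM chirp (CPU, numpy FFT).
import numpy as np

fs, T, B = 1e6, 100e-6, 200e3                 # sample rate, pulse width, sweep bandwidth
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)   # transmitted linear FM pulse

rx = np.zeros(4096, dtype=complex)            # received window with one echo + noise
delay = 1500
rx[delay:delay + len(chirp)] = 0.1 * chirp
rx += 0.01 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))

n = len(rx) + len(chirp) - 1                  # zero-pad for linear correlation
compressed = np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(chirp, n)))
print("peak at sample", int(np.argmax(np.abs(compressed))), "(expected", delay, ")")
```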

  10. XML: An Introduction.

    ERIC Educational Resources Information Center

    Lewis, John D.

    1998-01-01

    Describes XML (extensible markup language), a new language classification submitted to the World Wide Web Consortium that is defined in terms of both SGML (Standard Generalized Markup Language) and HTML (Hypertext Markup Language), specifically designed for the Internet. Limitations of PDF (Portable Document Format) files for electronic journals…

  11. Integration of M&S (Modeling and Simulation), Software Design and DoDAF (Department of Defense Architecture Framework (RT 24)

    DTIC Science & Technology

    2012-04-09

    Excerpts from the report's table of contents and list of figures: mapping between BPMN, SysML, and Arena; Capabilities, Activities, Resources, Performers; Proof of Concept; BPMN 2.0 XML to Arena Converter; Figure 5: BPMN 2.0 XML StartEvent (Excerpt).

  12. Component optimization of dairy manure vermicompost, straw, and peat in seedling compressed substrates using simplex-centroid design.

    PubMed

    Yang, Longyuan; Cao, Hongliang; Yuan, Qiaoxia; Luoa, Shuai; Liu, Zhigang

    2018-03-01

    Vermicomposting is a promising method for disposing of dairy manure, and dairy manure vermicompost (DMV) as a replacement for expensive peat is of high value in seedling compressed substrates. In this research, three main components, DMV, straw, and peat, are combined in the compressed substrates, and the effects of the individual components and their optimal ratio for seedling production are significant questions. To address these issues, a simplex-centroid experimental mixture design is employed, and a cucumber seedling experiment is conducted to evaluate the compressed substrates. Results demonstrate that the mechanical strength and physicochemical properties of compressed substrates for cucumber seedlings can be well satisfied with a suitable mixture ratio of the components. Moreover, the optimal ratio of DMV, straw, and peat could be determined as 0.5917:0.1608:0.2475 when the weight coefficients of the three parameters (shoot length, root dry weight, and aboveground dry weight) were 1:1:1. For different purposes, the optimum ratio can be adjusted slightly on the basis of different weight coefficients. A compressed substrate is a block with a certain mechanical strength, produced by applying mechanical pressure to the seedling substrate. It does not harm seedlings when they are bedded out, since the compressed substrate and the seedling are bedded out together. However, vermicompost and agricultural waste components had not previously been used in compressed substrates for vegetable seedling production. Thus, it is important to understand the effect of the individual components on seedling production and to determine the optimal ratio of the components.

  13. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry.

    PubMed

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2015-01-01

    In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS.
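
    The memory-constrained sequential-access idea can be illustrated generically (this is not the OpenMS/pyOpenMS API): stream a large mzML-style file with Python's iterparse and discard each spectrum element once it has been handled, so memory stays bounded even for multi-gigabyte files.

```python
# Streaming access to an mzML-style XML file with bounded memory.
import xml.etree.ElementTree as ET

def count_spectra(path):
    count = 0
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag.endswith("spectrum"):     # tolerate namespaced tags
            count += 1                        # (real code would decode peaks here)
            elem.clear()                      # free the subtree just processed
    return count

# count_spectra("run.mzML")  # usage: path to a (possibly huge) mzML file
```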

  14. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry

    PubMed Central

    Röst, Hannes L.; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2015-01-01

    Motivation In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Results Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Availability Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS. PMID:25927999

  15. Castles Made of Sand: Building Sustainable Digitized Collections Using XML.

    ERIC Educational Resources Information Center

    Ragon, Bart

    2003-01-01

    Describes work at the University of Virginia library to digitize special collections. Discusses the use of XML (Extensible Markup Language); providing access to original source materials; DTD (Document Type Definition); TEI (Text Encoding Initiative); metadata; XSL (Extensible Style Language); and future possibilities. (LRW)

  16. At-sea demonstration of RF sensor tasking using XML over a worldwide network

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Lee, Tom; Dumas, Diane; Raggo, Barbara

    2003-07-01

    As part of an At-Sea Demonstration for Space and Naval Warfare Command (SPAWAR, PMW-189), a prototype RF sensor for signal acquisition and direction finding queried and received tasking via a secure worldwide Automated Data Network System (ADNS). Using Extensible Markup Language (XML) constructs, both mission and signal tasking were available for push and pull battlespace management. XML tasking was received by the USS Cape St George (CG-71) during an exercise along the Gulf Coast of the US from a test facility at SPAWAR, San Diego, CA. Although only one ship was used in the demonstration, the intent of the software initiative was to show that a network of different RF sensors on different platforms with different capabilities could be tasked by a common web agent. A sensor software agent interpreted the XML task to match the sensor's capability. Future improvements will focus on enlarging the domain of mission tasking and incorporating report management.

  17. Performance evaluation of continuity of care records (CCRs): parsing models in a mobile health management system.

    PubMed

    Chen, Hung-Ming; Liou, Yong-Zan

    2014-10-01

    In a mobile health management system, mobile devices act as the application hosting devices for personal health records (PHRs), and healthcare servers are constructed to exchange and analyze PHRs. One of the most popular PHR standards is the Continuity of Care Record (CCR). The CCR is expressed in XML format. However, parsing is an expensive operation that can degrade XML processing performance. Hence, the objective of this study was to identify the different operational and performance characteristics of CCR parsing models, including the XML DOM parser, the SAX parser, the PULL parser, and the JSON parser applied to JSON data converted from XML-based CCRs. Thus, developers can make sensible choices about how their target PHR applications parse CCRs when using mobile devices or servers with different system resources. Furthermore, simulation experiments on four case studies were conducted to compare the parsing performance on Android mobile devices and on a server with large quantities of CCR data.
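
    A small timing sketch in the spirit of this comparison, using only the Python standard library, is given below: the same toy record is parsed as a DOM (minidom), as a pull-style stream (iterparse), and as JSON converted from the XML. The snippet is a stand-in, not a real CCR document, and absolute timings will differ from the mobile-device results reported in the study.

```python
# Compare DOM, pull-style, and JSON parsing of an equivalent toy record.
import io, json, timeit
import xml.dom.minidom as minidom
import xml.etree.ElementTree as ET

ccr_xml = "<CCR><Body>" + "".join(
    f"<Result><Test>HbA1c</Test><Value>{v}</Value></Result>" for v in range(500)
) + "</Body></CCR>"
ccr_json = json.dumps({"CCR": {"Body": {"Result": [
    {"Test": "HbA1c", "Value": str(v)} for v in range(500)]}}})

parsers = [
    ("DOM (minidom)",     lambda: minidom.parseString(ccr_xml)),
    ("pull (iterparse)",  lambda: list(ET.iterparse(io.BytesIO(ccr_xml.encode())))),
    ("JSON (json.loads)", lambda: json.loads(ccr_json)),
]
for name, fn in parsers:
    print(f"{name}: {timeit.timeit(fn, number=200):.3f} s")
```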

  18. Chemical markup, XML and the World-Wide Web. 3. Toward a signed semantic chemical web of trust.

    PubMed

    Gkoutos, G V; Murray-Rust, P; Rzepa, H S; Wright, M

    2001-01-01

    We describe how a collection of documents expressed in XML-conforming languages such as CML and XHTML can be authenticated and validated against digital signatures which make use of established X.509 certificate technology. These can be associated either with specific nodes in the XML document or with the entire document. We illustrate this with two examples. An entire journal article expressed in XML has its individual components digitally signed by separate authors, and the collection is placed in an envelope and again signed. The second example involves using a software robot agent to acquire a collection of documents from a specified URL, to perform various operations and transformations on the content, including expressing molecules in CML, and to automatically sign the various components and deposit the result in a repository. We argue that these operations can be used as components for building what we term an authenticated and semantic chemical web of trust.
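
    The core step behind such node-level signatures is canonicalizing the XML and computing a digest over the canonical bytes. The sketch below shows only that step with Python's standard library; producing an actual X.509-backed signature would additionally require a cryptography library, and the file name is a placeholder.

```python
# Sketch of the first step behind an XML digital signature: canonicalize an XML
# document (or node) and compute a digest over the canonical bytes. A real X.509
# signature would additionally require a cryptography library; "article.xml" is a
# placeholder file name. Requires Python 3.8+ for ElementTree's C14N support.
import hashlib
import xml.etree.ElementTree as ET

def digest_of(xml_path):
    canonical = ET.canonicalize(from_file=xml_path)   # Canonical XML (C14N 2.0)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(digest_of("article.xml"))
```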

  19. XML, Ontologies, and Their Clinical Applications.

    PubMed

    Yu, Chunjiang; Shen, Bairong

    2016-01-01

    The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.

  20. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    PubMed

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.

  1. XML Translator for Interface Descriptions

    NASA Technical Reports Server (NTRS)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
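
    A hypothetical miniature of the idea: read an XML register description and emit matching C preprocessor definitions. The element and attribute names below are assumptions for illustration only and do not reflect the tool's actual schema, which also drives Verilog and VHDL output.

```python
# Hypothetical miniature of the translation step: read an XML register description
# and emit C preprocessor definitions. The element and attribute names here are
# assumptions for illustration only, not the actual schema used by the tool.
import xml.etree.ElementTree as ET

EXAMPLE = """
<interface name="FPGA_CTRL">
  <register name="STATUS" offset="0x00">
    <field name="READY" bit="0"/>
    <field name="ERROR" bit="1"/>
  </register>
</interface>
"""

def to_c_header(xml_text):
    root = ET.fromstring(xml_text)
    lines = [f"/* generated from interface '{root.get('name')}' */"]
    for reg in root.findall("register"):
        lines.append(f"#define {reg.get('name')}_OFFSET {reg.get('offset')}")
        for field in reg.findall("field"):
            lines.append(f"#define {reg.get('name')}_{field.get('name')}_BIT {field.get('bit')}")
    return "\n".join(lines)

print(to_c_header(EXAMPLE))
```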

  2. Optimal chest compression rate in cardiopulmonary resuscitation: a prospective, randomized crossover study using a manikin model.

    PubMed

    Lee, Seong Hwa; Ryu, Ji Ho; Min, Mun Ki; Kim, Yong In; Park, Maeng Real; Yeom, Seok Ran; Han, Sang Kyoon; Park, Seong Wook

    2016-08-01

    When performing cardiopulmonary resuscitation (CPR), the 2010 American Heart Association guidelines recommend a chest compression rate of at least 100/min, whereas the 2010 European Resuscitation Council guidelines recommend a rate of between 100 and 120/min. The aim of this study was to examine the rate of chest compression that fulfilled various quality indicators, thereby determining the optimal rate of compression. Thirty-two trainee emergency medical technicians and six paramedics were enrolled in this study. All participants had been trained in basic life support. Each participant performed 2 min of continuous compressions on a skill reporter manikin, while listening to a metronome sound at rates of 100, 120, 140, and 160 beats/min, in a random order. Mean compression depth, incomplete chest recoil, and the proportion of correctly performed chest compressions during the 2 min were measured and recorded. The rate of incomplete chest recoil was lower at compression rates of 100 and 120/min compared with that at 160/min (P=0.001). The numbers of compressions that fulfilled the criteria for high-quality CPR at a rate of 120/min were significantly higher than those at 100/min (P=0.016). The number of high-quality CPR compressions was the highest at a compression rate of 120/min, and increased incomplete recoil occurred with increasing compression rate. However, further studies are needed to confirm the results.

  3. GTZ: a fast compression and cloud transmission tool optimized for FASTQ files.

    PubMed

    Xing, Yuting; Li, Gen; Wang, Zhenguo; Feng, Bolun; Song, Zhuo; Wu, Chengkun

    2017-12-28

    The dramatic development of DNA sequencing technology is generating real big data, craving for more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and cheaper, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto the cloud storage. This paper presents GTZ, a compression and transmission tool, optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also be used to compress multiple files or directories at once. Furthermore, as a tool to be used in the cloud computing era, it is capable of saving compressed data locally or transmitting data directly into the cloud by choice. We evaluated the performance of GTZ on some diverse FASTQ benchmarks. Results show that in most cases, it outperforms many other tools in terms of the compression ratio, speed and stability. GTZ is a tool that enables efficient lossless FASTQ data compression and simultaneous data transmission onto the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at: https://github.com/Genetalks/gtz .
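
    A toy sketch of one idea mentioned above, treating the different FASTQ line types as separate streams before compression, is shown below. It uses zlib rather than GTZ's adaptive context modelling and arithmetic coding, and "reads.fastq" is a placeholder; it only illustrates why stream separation can help.

```python
# Toy illustration of the stream-separation idea: compress FASTQ identifier, base,
# and quality lines as separate streams and compare with compressing the raw file.
# Uses zlib only, not GTZ's adaptive context modelling with arithmetic coding;
# "reads.fastq" is a placeholder file name.
import zlib

with open("reads.fastq", "rb") as fh:
    raw = fh.read()

lines = raw.splitlines(keepends=True)
ids   = b"".join(lines[0::4])   # @identifier lines
seqs  = b"".join(lines[1::4])   # base calls
quals = b"".join(lines[3::4])   # quality strings

interleaved = len(zlib.compress(raw, 9))
separated = sum(len(zlib.compress(stream, 9)) for stream in (ids, seqs, quals))
print(f"interleaved: {interleaved} bytes, separated streams: {separated} bytes")
```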

  4. Dynamic XML-based exchange of relational data: application to the Human Brain Project.

    PubMed

    Tang, Zhengming; Kadiyska, Yana; Li, Hao; Suciu, Dan; Brinkley, James F

    2003-01-01

    This paper discusses an approach to exporting relational data in XML format for data exchange over the web. We describe the first real-world application of SilkRoute, a middleware program that dynamically converts existing relational data to a user-defined XML DTD. The application, called XBrain, wraps SilkRoute in a Java Server Pages framework, thus permitting a web-based XQuery interface to a legacy relational database. The application is demonstrated as a query interface to the University of Washington Brain Project's Language Map Experiment Management System, which is used to manage data about language organization in the brain.
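
    A minimal sketch of the general relational-to-XML export pattern, far simpler than SilkRoute/XBrain, follows: rows from a SQL query are wrapped in user-defined XML elements. The table, columns, and element names are hypothetical.

```python
# Minimal sketch of the general relational-to-XML export pattern (far simpler than
# SilkRoute/XBrain): rows from a SQL query are wrapped in user-defined XML elements.
# The table, columns, and element names are hypothetical.
import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE experiment (id INTEGER, subject TEXT, site TEXT)")
conn.executemany("INSERT INTO experiment VALUES (?, ?, ?)",
                 [(1, "P01", "temporal"), (2, "P02", "frontal")])

root = ET.Element("experiments")
for exp_id, subject, site in conn.execute("SELECT id, subject, site FROM experiment"):
    exp = ET.SubElement(root, "experiment", id=str(exp_id))
    ET.SubElement(exp, "subject").text = subject
    ET.SubElement(exp, "site").text = site

print(ET.tostring(root, encoding="unicode"))
```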

  5. voevent-parse: Parse, manipulate, and generate VOEvent XML packets

    NASA Astrophysics Data System (ADS)

    Staley, Tim D.

    2014-11-01

    voevent-parse, written in Python, parses, manipulates, and generates VOEvent XML packets; it is built atop lxml.objectify. Details of transients detected by many projects, including Fermi, Swift, and the Catalina Sky Survey, are currently made available as VOEvents, which will also be the standard alert format for future facilities such as LSST and SKA. However, working with XML and adhering to the sometimes lengthy VOEvent schema can be a tricky process. voevent-parse provides convenience routines for common tasks, while allowing the user to utilise the full power of the lxml library when required. An earlier version of voevent-parse was part of the pysovo (ascl:1411.002) library.

  6. Symmetric Key Services Markup Language (SKSML)

    NASA Astrophysics Data System (ADS)

    Noor, Arshad

    Symmetric Key Services Markup Language (SKSML) is the eXtensible Markup Language (XML) being standardized by the OASIS Enterprise Key Management Infrastructure Technical Committee for requesting and receiving symmetric encryption cryptographic keys within a Symmetric Key Management System (SKMS). This protocol is designed to be used between clients and servers within an Enterprise Key Management Infrastructure (EKMI) to secure data, independent of the application and platform. Building on many security standards such as XML Signature, XML Encryption, Web Services Security and PKI, SKSML provides standards-based capability to allow any application to use symmetric encryption keys, while maintaining centralized control. This article describes the SKSML protocol and its capabilities.

  7. The Surgical Simulation and Training Markup Language (SSTML): an XML-based language for medical simulation.

    PubMed

    Bacon, James; Tardella, Neil; Pratt, Janey; Hu, John; English, James

    2006-01-01

    Under contract with the Telemedicine & Advanced Technology Research Center (TATRC), Energid Technologies is developing a new XML-based language for describing surgical training exercises, the Surgical Simulation and Training Markup Language (SSTML). SSTML must represent everything from organ models (including tissue properties) to surgical procedures. SSTML is an open language (i.e., freely downloadable) that defines surgical training data through an XML schema. This article focuses on the data representation of the surgical procedures and organ modeling, as they highlight the need for a standard language and illustrate the features of SSTML. Integration of SSTML with software is also discussed.

  8. Structural efficiency studies of corrugated compression panels with curved caps and beaded webs

    NASA Technical Reports Server (NTRS)

    Davis, R. C.; Mills, C. T.; Prabhakaran, R.; Jackson, L. R.

    1984-01-01

    Curved cross-sectional elements are employed in structural concepts for minimum-mass compression panels. Corrugated panel concepts with curved caps and beaded webs are optimized by using a nonlinear mathematical programming procedure and a rigorous buckling analysis. These panel geometries are shown to have superior structural efficiencies compared with known concepts published in the literature. Fabrication of these efficient corrugation concepts became possible through advances made in the art of superplastic forming of metals. Results of the mass optimization studies of the concepts are presented as structural efficiency charts for axial compression.

  9. XBRL: Beyond Basic XML

    ERIC Educational Resources Information Center

    VanLengen, Craig Alan

    2010-01-01

    The Securities and Exchange Commission (SEC) has recently announced a proposal that will require all public companies to report their financial data in Extensible Business Reporting Language (XBRL). XBRL is an extension of Extensible Markup Language (XML). Moving to a standard reporting format makes it easier for organizations to report the…

  10. Querying and Ranking XML Documents.

    ERIC Educational Resources Information Center

    Schlieder, Torsten; Meuss, Holger

    2002-01-01

    Discussion of XML, information retrieval, precision, and recall focuses on a retrieval technique that adopts the similarity measure of the vector space model, incorporates the document structure, and supports structured queries. Topics include a query model based on tree matching; structured queries and term-based ranking; and term frequency and…

  11. New NED XML/VOtable Services and Client Interface Applications

    NASA Astrophysics Data System (ADS)

    Pevunova, O.; Good, J.; Mazzarella, J.; Berriman, G. B.; Madore, B.

    2005-12-01

    The NASA/IPAC Extragalactic Database (NED) provides data and cross-identifications for over 7 million extragalactic objects fused from thousands of survey catalogs and journal articles. The data cover all frequencies from radio through gamma rays and include positions, redshifts, photometry and spectral energy distributions (SEDs), sizes, and images. NED services have traditionally supplied data in HTML format for connections from Web browsers, and a custom ASCII data structure for connections by remote computer programs written in the C programming language. We describe new services that provide responses from NED queries in XML documents compliant with the international virtual observatory VOtable protocol. The XML/VOtable services support cone searches, all-sky searches based on object attributes (survey names, cross-IDs, redshifts, flux densities), and requests for detailed object data. Initial services have been inserted into the NVO registry, and others will follow soon. The first client application is a Style Sheet specification for rendering NED VOtable query results in Web browsers that support XML. The second prototype application is a Java applet that allows users to compare multiple SEDs. The new XML/VOtable output mode will also simplify the integration of data from NED into visualization and analysis packages, software agents, and other virtual observatory applications. We show an example SED from NED plotted using VOPlot. The NED website is: http://nedwww.ipac.caltech.edu.

  12. Automating data acquisition into ontologies from pharmacogenetics relational data sources using declarative object definitions and XML.

    PubMed

    Rubin, Daniel L; Hewett, Micheal; Oliver, Diane E; Klein, Teri E; Altman, Russ B

    2002-01-01

    Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source, and this interface is modeled in the ontology and implemented using XML schema. Data is imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission with an XML schema. We have implemented this approach in PharmGKB (http://www.pharmgkb.org/), a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order to make the diversity of pharmacogenetics data amenable to computational analysis.
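
    The data-integrity check described above, validating an XML submission against an XML schema before import, can be sketched as follows. This assumes the third-party lxml package; the schema and file names are placeholders, not PharmGKB artifacts.

```python
# Sketch of the data-integrity check described above: validate an XML submission
# against an XML Schema before importing it. Assumes the third-party lxml package;
# "submission.xsd" and "sequence_data.xml" are placeholder file names.
from lxml import etree

schema = etree.XMLSchema(etree.parse("submission.xsd"))
doc = etree.parse("sequence_data.xml")

if schema.validate(doc):
    print("submission is valid; safe to import")
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```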

  13. The XBabelPhish MAGE-ML and XML translator.

    PubMed

    Maier, Don; Wymore, Farrell; Sherlock, Gavin; Ball, Catherine A

    2008-01-18

    MAGE-ML has been promoted as a standard format for describing microarray experiments and the data they produce. Two characteristics of the MAGE-ML format compromise its use as a universal standard: First, MAGE-ML files are exceptionally large - too large to be easily read by most people, and often too large to be read by most software programs. Second, the MAGE-ML standard permits many ways of representing the same information. As a result, different producers of MAGE-ML create different documents describing the same experiment and its data. Recognizing all the variants is an unwieldy software engineering task, resulting in software packages that can read and process MAGE-ML from some, but not all producers. This Tower of MAGE-ML Babel bars the unencumbered exchange of microarray experiment descriptions couched in MAGE-ML. We have developed XBabelPhish - an XQuery-based technology for translating one MAGE-ML variant into another. XBabelPhish's use is not restricted to translating MAGE-ML documents. It can transform XML files independent of their DTD, XML schema, or semantic content. Moreover, it is designed to work on very large (> 200 Mb.) files, which are common in the world of MAGE-ML. XBabelPhish provides a way to inter-translate MAGE-ML variants for improved interchange of microarray experiment information. More generally, it can be used to transform most XML files, including very large ones that exceed the capacity of most XML tools.

  14. An Optimal Seed Based Compression Algorithm for DNA Sequences

    PubMed Central

    Gopalakrishnan, Gopakumar; Karunakaran, Muralikrishnan

    2016-01-01

    This paper proposes a seed based lossless compression algorithm to compress a DNA sequence which uses a substitution method that is similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures that are inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than the existing lossless DNA sequence compression algorithms. PMID:27555868
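
    A simplified, didactic sketch of the seed idea follows: index fixed-length k-mers and report repeated occurrences that a substitution-based compressor could replace with back-references. It is not the published algorithm and ignores mismatch handling.

```python
# Simplified, didactic sketch of the seed idea: index fixed-length k-mers ("seeds")
# and report repeated occurrences that a substitution-based compressor could replace
# with back-references. This is not the published algorithm and ignores mismatches.
from collections import defaultdict

def find_repeats(seq, k=8):
    seeds = defaultdict(list)
    for i in range(len(seq) - k + 1):
        seeds[seq[i:i + k]].append(i)
    return {kmer: positions for kmer, positions in seeds.items() if len(positions) > 1}

dna = "ACGTACGTGGACGTACGTTTACGTACGT"
for kmer, positions in find_repeats(dna).items():
    print(kmer, positions)
```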

  15. 106-17 Telemetry Standards Metadata Configuration Chapter 23

    DTIC Science & Technology

    2017-07-01

    Front-matter excerpt from Chapter 23 (Metadata Description Language) of the 106-17 Telemetry Standards. Acronyms defined in the chapter include HTML (Hypertext Markup Language), MDL (Metadata Description Language), PCM (pulse code modulation), TMATS (Telemetry Attributes Transfer Standard), W3C (World Wide Web Consortium), XML (eXtensible Markup Language), and XSD (XML schema document).

  16. Agility: Agent - Ility Architecture

    DTIC Science & Technology

    2002-10-01

    existing and emerging standards (e.g., distributed objects, email, web, search engines, XML, Java, Jini). Three agent system components resulted from... agents and other Internet resources and operate over the web (AgentGram), a yellow pages service that uses Internet search engines to locate XML ads for agents and other Internet resources (WebTrader).

  17. The semantic planetary data system

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel; Kelly, Sean; Mattmann, Chris

    2005-01-01

    This paper will provide a brief overview of the PDS data model and the PDS catalog. It will then describe the implementation of the Semantic PDS including the development of the formal ontology, the generation of RDFS/XML and RDF/XML data sets, and the building of the semantic search application.

  18. Bridging the Gap in Port Security; Network Centric Theory Applied to Public/Private Collaboration

    DTIC Science & Technology

    2007-03-01

    Footnote excerpts citing www.cbp.gov/xp/cgov/import/commercial_enforcement/ctpat/security_guideline/guideline_port.xml [Accessed January 2, 2007]. The four core elements of CSI include: Identify high... Connecting them

  19. Converting from XML to HDF-EOS

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A computer program recreates an HDF-EOS file from an Extensible Markup Language (XML) representation of the contents of that file. This program is one of two programs written to enable testing of the schemas described in the immediately preceding article to determine whether the schemas capture all details of HDF-EOS files.

  20. A Leaner, Meaner Markup Language.

    ERIC Educational Resources Information Center

    Online & CD-ROM Review, 1997

    1997-01-01

    In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…

  1. DoD Business Mission Area Service-Oriented Architecture to Support Business Transformation

    DTIC Science & Technology

    2008-10-01

    Notation (BPMN). The research also found strong support across vendors for the Business Process Execution Language standard, though there is also... emerging support for direct execution of BPMN through the use of the XML Process Definition Language, an XML serialization of BPMN. Many vendors also

  2. Blast protection of infrastructure using advanced composites

    NASA Astrophysics Data System (ADS)

    Brodsky, Evan

    This research was a systematic investigation detailing the energy absorption mechanisms of an E-glass web core composite sandwich panel subjected to an impulse loading applied orthogonal to the facesheet. Key roles of the fiberglass and polyisocyanurate foam material were identified, characterized, and analyzed. A quasi-static test fixture was used to compressively load a unit cell web core specimen machined from the sandwich panel. The web and foam both exhibited non-linear stress-strain responses during axial compressive loading. Through several analyses, the composite web situated in the web core had failed in axial compression. Optimization studies were performed on the sandwich panel unit cell in order to maximize the energy absorption capabilities of the web core. Ultimately, a sandwich panel was designed to optimize the energy dissipation subjected to through-the-thickness compressive loading.

  3. The effect of compression speed on intelligibility: simulated hearing-aid processing with and without original temporal fine structure information.

    PubMed

    Hopkins, Kathryn; King, Andrew; Moore, Brian C J

    2012-09-01

    Hearing aids use amplitude compression to compensate for the effects of loudness recruitment. The compression speed that gives the best speech intelligibility varies among individuals. Moore [(2008). Trends Amplif. 12, 300-315] suggested that an individual's sensitivity to temporal fine structure (TFS) information may affect which compression speed gives most benefit. This hypothesis was tested using normal-hearing listeners with a simulated hearing loss. Sentences in a competing talker background were processed using multi-channel fast or slow compression followed by a simulation of threshold elevation and loudness recruitment. Signals were either tone vocoded with 1-ERB(N)-wide channels (where ERB(N) is the bandwidth of normal auditory filters) to remove the original TFS information, or not processed further. In a second experiment, signals were vocoded with either 1- or 2-ERB(N)-wide channels, to test whether the available spectral detail affects the optimal compression speed. Intelligibility was significantly better for fast than slow compression regardless of vocoder channel bandwidth. The results suggest that the availability of original TFS or detailed spectral information does not affect the optimal compression speed. This conclusion is tentative, since while the vocoder processing removed the original TFS information, listeners may have used the altered TFS in the vocoded signals.

  4. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    ERIC Educational Resources Information Center

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA), in particular, for the grayness functions of graytone images. Image-data compression results based on the new inference algorithm produce a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…

  5. Integration of HTML documents into an XML-based knowledge repository.

    PubMed

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler/viewer/editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML, then loaded to a database for use by EPIG.

  6. A Survey and Analysis of Access Control Architectures for XML Data

    DTIC Science & Technology

    2006-03-01

    XML Query Engines... castle and the drawbridge over the moat. Extending beyond the visual analogy, there are many key components to the protection of information and... technology. While XML's original intent was to enable large-scale electronic publishing over the internet, its functionality is firmly rooted in its

  7. Personalization of XML Content Browsing Based on User Preferences

    ERIC Educational Resources Information Center

    Encelle, Benoit; Baptiste-Jessel, Nadine; Sedes, Florence

    2009-01-01

    Personalization of user interfaces for browsing content is a key concept to ensure content accessibility. In this direction, we introduce concepts that result in the generation of personalized multimodal user interfaces for browsing XML content. User requirements concerning the browsing of a specific content type can be specified by means of…

  8. XML and Bibliographic Data: The TVS (Transport, Validation and Services) Model.

    ERIC Educational Resources Information Center

    de Carvalho, Joaquim; Cordeiro, Maria Ines

    This paper discusses the role of XML in library information systems at three major levels: as a representation language that enables the transport of bibliographic data in a way that is technologically independent and universally understood across systems and domains; as a language that enables the specification of complex validation rules…

  9. A Conversion Tool for Mathematical Expressions in Web XML Files.

    ERIC Educational Resources Information Center

    Ohtake, Nobuyuki; Kanahori, Toshihiro

    2003-01-01

    This article discusses the conversion of mathematical equations into Extensible Markup Language (XML) on the World Wide Web for individuals with visual impairments. A program is described that converts the presentation markup style to the content markup style in MathML to allow browsers to render mathematical expressions without other programs.…

  10. An Electronic Finding Aid Using Extensible Markup Language (XML) and Encoded Archival Description (EAD).

    ERIC Educational Resources Information Center

    Chang, May

    2000-01-01

    Describes the development of electronic finding aids for archives at the University of Illinois, Urbana-Champaign that used XML (extensible markup language) and EAD (encoded archival description) to enable more flexible information management and retrieval than using MARC or a relational database management system. EAD template is appended.…

  11. Strategic Industrial Alliances in Paper Industry: XML- vs Ontology-Based Integration Platforms

    ERIC Educational Resources Information Center

    Naumenko, Anton; Nikitin, Sergiy; Terziyan, Vagan; Zharko, Andriy

    2005-01-01

    Purpose: To identify cases related to design of ICT platforms for industrial alliances, where the use of Ontology-driven architectures based on Semantic web standards is more advantageous than application of conventional modeling together with XML standards. Design/methodology/approach: A comparative analysis of the two latest and the most obvious…

  12. An XML-based Generic Tool for Information Retrieval in Solar Databases

    NASA Astrophysics Data System (ADS)

    Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain

    This paper presents the current architecture of the `Solar Web Project' now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed with Java and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.
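
    The XML-as-configuration idea can be sketched as below: an XML file describes heterogeneous archive access systems, and a small dispatcher picks a handler per entry. The element and attribute names are assumptions for illustration, not the Solar Web Project's actual configuration format.

```python
# Sketch of XML as a configuration language for heterogeneous archive access systems.
# The element and attribute names ("archive", "type", "url") and the handler logic
# are assumptions for illustration, not the project's actual configuration format.
import xml.etree.ElementTree as ET

CONFIG = """
<archives>
  <archive name="medoc_sql"  type="sql"  url="db://medoc/solar"/>
  <archive name="legacy_ftp" type="ftp"  url="ftp://archive.example.org/solar"/>
  <archive name="web_pages"  type="http" url="https://solar.example.org/catalog"/>
</archives>
"""

HANDLERS = {
    "sql":  lambda url: f"querying database at {url}",
    "ftp":  lambda url: f"listing FTP site {url}",
    "http": lambda url: f"fetching catalog page {url}",
}

for archive in ET.fromstring(CONFIG).findall("archive"):
    handler = HANDLERS[archive.get("type")]
    print(archive.get("name"), "->", handler(archive.get("url")))
```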

  13. Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects

    NASA Astrophysics Data System (ADS)

    Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander

    Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created which have to be synchronized and updated. This is a complex and time consuming task because BOs rigorously vary in their structure according to the distribution, number and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered for minimizing the processing time during synchronization. The prediction of the parsing time for BOs is a significant property for the selection of an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. The results of our experimental evaluation incorporating four different XML parsers examine the dependencies between the distribution of elements and the parsing time. Finally, a general cost model will be validated and simplified according to the results of the experimental setup.

  14. Rock.XML - Towards a library of rock physics models

    NASA Astrophysics Data System (ADS)

    Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund

    2016-08-01

    Rock physics modelling provides tools for correlating physical properties of rocks and their constituents to the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist, to cover the range of different types of rocks. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications for rock physics models which we have named Rock.XML. It contains not only data about the various constituents, but also the theories and how they are used to combine these building blocks to make a representative model for a particular rock. The format is based on the Extensible Markup Language XML, making it flexible enough to handle complex models as well as scalable towards extending it with new theories and models. This technology has great advantages as far as documenting and exchanging models in an unambiguous way between people and between software. Rock.XML can become a platform for creating a library of rock physics models; making them more accessible to everyone.

  15. Compressing with dominant hand improves quality of manual chest compressions for rescuers who performed suboptimal CPR in manikins.

    PubMed

    Wang, Juan; Tang, Ce; Zhang, Lei; Gong, Yushun; Yin, Changlin; Li, Yongqin

    2015-07-01

    The question of whether the placement of the dominant hand against the sternum could improve the quality of manual chest compressions remains controversial. In the present study, we evaluated the influence of dominant vs nondominant hand positioning on the quality of conventional cardiopulmonary resuscitation (CPR) during prolonged basic life support (BLS) by rescuers who performed optimal and suboptimal compressions. Six months after completing a standard BLS training course, 101 medical students were instructed to perform adult single-rescuer BLS for 8 minutes on a manikin with a randomized hand position. Twenty-four hours later, the students placed the opposite hand in contact with the sternum while performing CPR. Those with an average compression depth of less than 50 mm were considered suboptimal. Participants who had performed suboptimal compressions were significantly shorter (170.2 ± 6.8 vs 174.0 ± 5.6 cm, P = .008) and lighter (58.9 ± 7.6 vs 66.9 ± 9.6 kg, P < .001) than those who performed optimal compressions. No significant differences in CPR quality were observed between dominant and nondominant hand placements for those who had an average compression depth of greater than 50 mm. However, both the compression depth (49.7 ± 4.2 vs 46.5 ± 4.1 mm, P = .003) and proportion of chest compressions with an appropriate depth (47.6% ± 27.8% vs 28.0% ± 23.4%, P = .006) were remarkably higher when compressing the chest with the dominant hand against the sternum for those who performed suboptimal CPR. Chest compression quality significantly improved when the dominant hand was placed against the sternum for those who performed suboptimal compressions during conventional CPR. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Simultaneous compression and encryption of closely resembling images: application to video sequences and polarimetric images.

    PubMed

    Aldossari, M; Alfalou, A; Brosseau, C

    2014-09-22

    This study presents and validates an optimized method of simultaneous compression and encryption designed to process images with close spectra. This approach is well adapted to the compression and encryption of images of a time-varying scene but also to static polarimetric images. We use the recently developed spectral fusion method [Opt. Lett. 35, 1914-1916 (2010)] to deal with the close resemblance of the images. The spectral plane (containing the information to send and/or to store) is decomposed into several independent areas which are assigned in a specific way. In addition, each spectrum is shifted in order to minimize their overlap. The dual purpose of these operations is to optimize the spectral plane allowing us to keep the low- and high-frequency information (compression) and to introduce an additional noise for reconstructing the images (encryption). Our results show that not only can the control of the spectral plane enhance the number of spectra to be merged, but also that a compromise between the compression rate and the quality of the reconstructed images can be tuned. We use a root-mean-square (RMS) optimization criterion to treat compression. Image encryption is realized at different security levels. Firstly, we add a specific encryption level which is related to the different areas of the spectral plane, and then, we make use of several random phase keys. An in-depth analysis of the spectral fusion methodology is done in order to find a good trade-off between the compression rate and the quality of the reconstructed images. Our newly proposed spectral shift allows us to minimize the image overlap. We further analyze the influence of the spectral shift on the reconstructed image quality and compression rate. The performance of the multiple-image optical compression and encryption method is verified by analyzing several video sequences and polarimetric images.

  17. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.

  18. Transparent ICD and DRG Coding Using Information Technology: Linking and Associating Information Sources with the eXtensible Markup Language

    PubMed Central

    Hoelzer, Simon; Schweiger, Ralf K.; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or “semantically associated” parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach. PMID:12807813

  19. Progress on an implementation of MIFlowCyt in XML

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Stephanie H.

    2015-03-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). The CytometryML schemas are based in part upon the Flow Cytometry Standard and Digital Imaging and Communication (DICOM) standards. CytometryML has been and will be extended and adapted to include MIFlowCyt, as well as to serve as a common standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as is the rest of CytometryML, in the XML Schema Definition Language (XSD1.1). Individual major elements of the MIFlowCyt schema were translated into XML and filled with reasonable data. A small section of the code was formatted with HTML formatting elements. Results: The differences in the amount of detail to be recorded for 1) users of standard techniques including data analysts and 2) others, such as method and device creators, laboratory and other managers, engineers, and regulatory specialists required that separate data-types be created to describe the instrument configuration and components. A very substantial part of the MIFlowCyt element that describes the Experimental Overview, and substantial parts of several other major elements, have been developed. Conclusions: The future use of structured XML tags and web technology should facilitate searching of experimental information, its presentation, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate in publications adherence to the MIFlowCyt standard. The use of CytometryML together with XML technology should also result in the textual and numeric data being published using web technology without any change in composition. Preliminary testing indicates that CytometryML XML pages can be directly formatted with the combination of HTML and CSS.

  20. Application of XML to Journal Table Archiving

    NASA Astrophysics Data System (ADS)

    Shaya, E. J.; Blackwell, J. H.; Gass, J. E.; Kargatis, V. E.; Schneider, G. L.; Weiland, J. L.; Borne, K. D.; White, R. A.; Cheung, C. Y.

    1998-12-01

    The Astronomical Data Center (ADC) at the NASA Goddard Space Flight Center is a major archive for machine-readable astronomical data tables. Many ADC tables are derived from published journal articles. Article tables are reformatted to be machine-readable and documentation is crafted to facilitate proper reuse by researchers. The recent switch of journals to web based electronic format has resulted in the generation of large amounts of tabular data that could be captured into machine-readable archive format at fairly low cost. The large data flow of the tables from all major North American astronomical journals (a factor of 100 greater than the present rate at the ADC) necessitates the development of rigorous standards for the exchange of data between researchers, publishers, and the archives. We have selected a suitable markup language that can fully describe the large variety of astronomical information contained in ADC tables. The eXtensible Markup Language XML is a powerful internet-ready documentation format for data. It provides a precise and clear data description language that is both machine- and human-readable. It is rapidly becoming the standard format for business and information transactions on the internet and it is an ideal common metadata exchange format. By labelling, or "marking up", all elements of the information content, documents are created that computers can easily parse. An XML archive can easily and automatically be maintained, ingested into standard databases or custom software, and even totally restructured whenever necessary. Structuring astronomical data into XML format will enable efficient and focused search capabilities via off-the-shelf software. The ADC is investigating XML's expanded hyperlinking power to enhance connectivity within the ADC data/metadata and developing XSL display scripts to enhance display of astronomical data. The ADC XML Document Type Definition can be viewed at http://messier.gsfc.nasa.gov/dtdhtml/DTD-TREE.html

  1. Development of the Plate Tectonics and Seismology markup languages with XML

    NASA Astrophysics Data System (ADS)

    Babaie, H.; Babaei, A.

    2003-04-01

    The Extensible Markup Language (XML) and its specifications, such as the XSD Schema, allow geologists to design discipline-specific vocabularies such as Seismology Markup Language (SeismML) or Plate Tectonics Markup Language (TectML). These languages make it possible to store and interchange structured geological information over the Web. Development of a geological markup language requires mapping geological concepts, such as "Earthquake" or "Plate", into a UML object model, applying a modeling and design environment. We have selected four inter-related geological concepts: earthquake, fault, plate, and orogeny, and developed four XML Schema Definitions (XSD) that define the relationships, cardinalities, hierarchies, and semantics of these concepts. In such a geological concept model, the UML object "Earthquake" is related to one or more "Wave" objects, each arriving at a seismic station at a specific "DateTime", and relating to a specific "Epicenter" object that lies at a unique "Location". The "Earthquake" object occurs along a "Segment" of a "Fault" object, which is related to a specific "Plate" object. The "Fault" has its own associations with such things as "Bend", "Step", and "Segment", and could be of any kind (e.g., "Thrust", "Transform"). The "Plate" is related to many other objects such as "MOR", "Subduction", and "Forearc", and is associated with an "Orogeny" object that relates to "Deformation" and "Strain" and several other objects. These UML objects were mapped into XML Metadata Interchange (XMI) formats, which were then converted into four XSD Schemas. The schemas were used to create and validate the XML instance documents, and to create a relational database hosting the plate tectonics and seismological data in the Microsoft Access format. The SeismML and TectML allow seismologists and structural geologists, among others, to submit and retrieve structured geological data on the Internet. A seismologist, for example, can submit peer-reviewed and reliable data about a specific earthquake to a Java Server Page on our web site hosting the XML application. Other geologists can readily retrieve the submitted data, saved in files or special tables of the designed database, through a search engine designed with J2EE (JSP, servlet, Java Bean) and XML specifications such as XPath, XPointer, and XSLT. When extended to include all the important concepts of seismology and plate tectonics, the two markup languages will make global interchange of geological data a reality.
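
    For illustration, a SeismML-like instance document mirroring the concept model above (an Earthquake with an Epicenter, a Wave arrival, and a Fault segment) could be assembled as in the sketch below. The element names and nesting are assumptions, not the published schema.

```python
# Illustrative sketch of building a SeismML-like instance document that mirrors the
# concept model described above (an Earthquake with an Epicenter, a Wave arrival,
# and a Fault segment). Element names and nesting are assumptions, not the schema.
import xml.etree.ElementTree as ET

quake = ET.Element("Earthquake", id="eq-2003-001")
epicenter = ET.SubElement(quake, "Epicenter")
ET.SubElement(epicenter, "Location", lat="36.1", lon="-120.4", depthKm="8.2")
wave = ET.SubElement(quake, "Wave", type="P")
ET.SubElement(wave, "Arrival", station="PKD", dateTime="2003-04-01T12:00:03Z")
fault = ET.SubElement(quake, "Fault", name="SanAndreas", kind="Transform")
ET.SubElement(fault, "Segment", id="SA-12")

print(ET.tostring(quake, encoding="unicode"))
```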

  2. A Flexible Online Metadata Editing and Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilar, Raul; Pan, Jerry Yun; Gries, Corinna

    2010-01-01

    A metadata editing and management system is being developed employing state of the art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customizations, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool or schema walker used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema, provides the designer with options to combine input fields into online forms and give the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: First, data entry forms based on one schema may be customized at design time and second data entry applications may be generated for any valid XML schema without relying on custom information in the schema. However, the customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets which may be further customized and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema, however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button independently of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third party online and desktop management tools. However, access to metadata files in the application introduced here is managed in a custom online module, using a MySQL backend accessed by a simple Java Server Faces front end. A flexible system with three grouping options, organization, group and single editing access is provided. Three levels were chosen to distribute administrative responsibilities and handle the common situation of an information manager entering the bulk of the metadata but leave specifics to the actual data provider.

  3. Report of Official Foreign Travel to Germany, May 16-June 1, 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Mason

    2001-06-18

    The Department of Energy (DOE) and associated agencies have moved rapidly toward electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Like most government agencies, DOE has expressed a preference for openly developed standards over proprietary designs promoted as ''standards'' by vendors. In particular, there is a preference for standards developed by organizations such as the International Organization for Standardization (ISO) and the American National Standards Institute (ANSI) that use open, public processes to develop their standards. Among the most widely adopted international standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), to which DOE long ago made a commitment. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML: Most documents on the WWW are coded in HTML (Hypertext Markup Language), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Adobe, IBM, Microsoft, Netscape, Oracle, and Sun, is promoting XML (eXtensible Markup Language), a class of SGML applications, for the future of the WWW and the basis for EC. In support of DOE's use of these standards, I have served since 1985 as Chairman of the international committee responsible for SGML and related standards, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations. During my May 2001 trip, I chaired the spring 2001 meeting of SC34 in Berlin, Germany. I also attended XML Europe 2001, a major conference on the use of SGML and XML sponsored by the Graphic Communications Association (GCA), and chaired a meeting of the International SGML/XML Users' Group (ISUG). In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there have been several past and present SGML- and XML-based projects at the Y-12 National Security Complex (Y-12). Our local project team has done SGML and XML development at Y-12 and Oak Ridge National Laboratory (ORNL) since the late 1980s. SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at Y-12 and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). The ''Ferret'' system for automated classification analysis uses XML to structure its knowledge base. The Ferret team also provides XML consulting to OSTI and DOE Headquarters, particularly the National Nuclear Security Administration (NNSA). Supporting standards development allows DOE and Y-12 the opportunity both to provide input into the process and to benefit from contact with some of the leading experts in the subject matter. Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML, XML, and related topics.

  4. Public health, GIS, and the internet.

    PubMed

    Croner, Charles M

    2003-01-01

    Internet access and use of georeferenced public health information for GIS application will be an important and exciting development for the nation's Department of Health and Human Services and other health agencies in this new millennium. Technological progress toward public health geospatial data integration, analysis, and visualization of space-time events using the Web portends eventual robust use of GIS by public health and other sectors of the economy. Increasing Web resources from distributed spatial data portals and global geospatial libraries, and a growing suite of Web integration tools, will provide new opportunities to advance disease surveillance, control, and prevention, and insure public access and community empowerment in public health decision making. Emerging supercomputing, data mining, compression, and transmission technologies will play increasingly critical roles in national emergency, catastrophic planning and response, and risk management. Web-enabled public health GIS will be guided by Federal Geographic Data Committee spatial metadata, OpenGIS Web interoperability, and GML/XML geospatial Web content standards. Public health will become a responsive and integral part of the National Spatial Data Infrastructure.

  5. Bandwidth compression of multispectral satellite imagery

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1978-01-01

    The results of two studies aimed at developing efficient adaptive and nonadaptive techniques for compressing the bandwidth of multispectral images are summarized. These techniques are evaluated and compared using various optimality criteria including MSE, SNR, and recognition accuracy of the bandwidth compressed images. As an example of future requirements, the bandwidth requirements for the proposed Landsat-D Thematic Mapper are considered.

  6. A Semantic Analysis of XML Schema Matching for B2B Systems Integration

    ERIC Educational Resources Information Center

    Kim, Jaewook

    2011-01-01

    One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…

  7. 76 FR 4897 - Submission for OMB Review; OMB Control No. 3090-0291; FSRS Registration and Prime Awardee Entity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-27

    ... three methods for submitting multiple FFATA subaward reports: a batch upload template using Microsoft Excel, an XML report submission template, and an XML web service. These methods do take advantage of the...

  8. Integration of HTML Documents into an XML-Based Knowledge Repository

    PubMed Central

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler/viewer/editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML, then loaded into a database for use by EPIG. PMID:16779384

  9. International Data | Geospatial Data Science | NREL

    Science.gov Websites

    International Data. These datasets detail solar and wind resources, including 10-km monthly direct normal and global horizontal irradiance data for India (Annual.xml, Monthly.xml; 4.68 MB Zip, 04/25/2013) and 50-m hub-height wind datasets that have been validated by NREL.

  10. Nassi-Schneiderman Diagram in HTML Based on AML

    ERIC Educational Resources Information Center

    Menyhárt, László

    2013-01-01

    In an earlier work I defined an extension of XML called Algorithm Markup Language (AML) for easy and understandable coding in an IDE which supports XML editing (e.g. NetBeans). The AML extension contains annotations and native language (English or Hungarian) tag names used when coding our algorithm. This paper presents a drawing tool with which…

  11. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1997-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays: (1) to verify the advantage of DCTune over standard JPEG; (2) to verify the quality control feature of DCTune; and (3) to discover regularities in the optimized matrices of a set of images. Additional information is contained in the original extended abstract.
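
    The abstract describes optimizing DCT quantization matrices to trade perceptual quality against bit-rate. The following is a minimal sketch of the underlying mechanism only, not DCTune itself: a JPEG-style luminance quantization table is scaled by a quality setting and applied to the DCT of an 8x8 block. The base table and scaling rule follow the common IJG convention, and the image block is synthetic.

```python
import numpy as np

# Standard JPEG luminance quantization table (IJG); used here only as a baseline.
BASE_Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
], dtype=float)

def scaled_matrix(quality: int) -> np.ndarray:
    """Scale the base table by a 1-100 quality factor (IJG-style rule)."""
    quality = max(1, min(100, quality))
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    q = np.floor((BASE_Q * scale + 50) / 100)
    return np.clip(q, 1, 255)

def dct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT-II of an 8x8 block, written out explicitly."""
    n = 8
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c @ block @ c.T

# Coarser quantization (lower quality) zeroes out more coefficients.
block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float) - 128
for q in (25, 50, 90):
    coeffs = np.round(dct2(block) / scaled_matrix(q))
    print(f"quality {q}: {int(np.count_nonzero(coeffs))} nonzero coefficients")
```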

  12. An exponentiation method for XML element retrieval.

    PubMed

    Wichaiwong, Tanakorn

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce data. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structure queries and assigning the weight are significant for the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the BM25 baseline in terms of MAP.
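
    The abstract combines a BM25 content score with a structural weighting it calls the Exponentiation function. The sketch below is a toy illustration of that general idea, not the MEXIR implementation; the corpus, the structural-weight heuristic, and the exponent alpha are all made up for demonstration.

```python
import math
from collections import Counter

# Toy corpus of XML "elements": (element path, text). Entirely hypothetical data.
elements = [
    ("article/title",    "xml element retrieval"),
    ("article/abstract", "structured retrieval of xml documents"),
    ("article/body/sec", "compression of xml documents for storage"),
]

def bm25(query, docs, k1=1.2, b=0.75):
    """Plain BM25 over whitespace-tokenized text (baseline content score)."""
    tokenized = [d.split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    df = Counter(w for t in tokenized for w in set(t))
    n = len(docs)
    scores = []
    for t in tokenized:
        tf = Counter(t)
        s = 0.0
        for w in query.split():
            if w not in tf:
                continue
            idf = math.log((n - df[w] + 0.5) / (df[w] + 0.5) + 1.0)
            s += idf * tf[w] * (k1 + 1) / (tf[w] + k1 * (1 - b + b * len(t) / avgdl))
        scores.append(s)
    return scores

def structural_weight(path, query_path="article/abstract"):
    """Illustrative structure match: fraction of query path steps present."""
    steps = query_path.split("/")
    return sum(1 for s in steps if s in path.split("/")) / len(steps)

query = "xml retrieval"
content = bm25(query, [text for _, text in elements])
alpha = 2.0  # exponent applied to the structural weight (illustrative only)
for (path, _), c in zip(elements, content):
    print(path, round(c * structural_weight(path) ** alpha, 3))
```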

  13. Representation of thermal infrared imaging data in the DICOM using XML configuration files.

    PubMed

    Ruminski, Jacek

    2007-01-01

    The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any source file format of a thermal imaging camera.
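
    The conversion described here is driven by an XML configuration file that maps source-file fields to DICOM attributes. A minimal sketch of that pattern using xml.etree.ElementTree is shown below; the configuration vocabulary, tag numbers, and source record are hypothetical, and the output is a plain dictionary rather than a real DICOM dataset.

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration: maps fields of a source thermal-camera file
# to DICOM-style (group, element) tags. Names and tags are illustrative only.
CONFIG_XML = """
<dicom-mapping modality="OT">
  <attribute tag="0010,0010" source="patient_name"/>
  <attribute tag="0008,0060" constant="OT"/>
  <attribute tag="0028,0010" source="rows"/>
  <attribute tag="0028,0011" source="columns"/>
</dicom-mapping>
"""

def build_attributes(config_xml: str, source_record: dict) -> dict:
    """Resolve each configured attribute from the source record or a constant."""
    root = ET.fromstring(config_xml)
    attrs = {}
    for node in root.findall("attribute"):
        tag = node.get("tag")
        if node.get("constant") is not None:
            attrs[tag] = node.get("constant")
        else:
            attrs[tag] = source_record.get(node.get("source"))
    return attrs

# Example source record extracted from a (hypothetical) thermal imaging file.
record = {"patient_name": "DOE^JANE", "rows": 240, "columns": 320}
print(build_attributes(CONFIG_XML, record))
```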

  14. Bioinformatics data distribution and integration via Web Services and XML.

    PubMed

    Li, Xiao; Zhang, Yizheng

    2003-11-01

    It is widely recognized that exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biology data is not solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use for an appropriate solution to the problem of bioinformatics data exchange and integration.

  15. Astronomical Instrumentation System Markup Language

    NASA Astrophysics Data System (ADS)

    Goldbaum, Jesse M.

    2016-05-01

    The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed followed by the reasons why XML was chosen as the format. Next it's shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks from web page generation and programming interface to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.

  16. QuakeML - An XML Schema for Seismology

    NASA Astrophysics Data System (ADS)

    Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.

    2004-12-01

    We propose an extensible format-definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should in our opinion be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.

  17. A High-Availability, Distributed Hardware Control System Using Java

    NASA Technical Reports Server (NTRS)

    Niessner, Albert F.

    2011-01-01

    Two independent coronagraph experiments that require 24/7 availability with different optical layouts and different motion control requirements are commanded and controlled with the same Java software system executing on many geographically scattered computer systems interconnected via TCP/IP. High availability of a distributed system requires that the computers have a robust communication messaging system, making the mix of TCP/IP (a robust transport) and XML (a robust message) a natural choice. XML also adds configuration flexibility. Java then adds object-oriented paradigms, exception handling, heavily tested libraries, and many third party tools for implementation robustness. The result is a software system that provides users 24/7 access to two diverse experiments, with XML files defining the differences.

  18. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    PubMed

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to compare the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative assessment of image quality and to compare this with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who determined the image quality rating on a scale of 1 to 5. A quantitative analysis was also performed by using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level (P ≤ .05). When we compared subjective indexes, JPEG compression greater than 60:1 significantly reduced image quality. When we used quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios compared with the original TIFF images. There was excellent correlation (R² > 0.92) between qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective image quality assessments in quantifying image degradation with compression. There is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
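
    As a rough illustration of a software-derived quantitative index (not the Image Quality Measure program used in the study), the sketch below computes PSNR between an original image and progressively degraded versions; the images are synthetic, and additive noise is used as a stand-in for increasing compression loss.

```python
import numpy as np

def psnr(original: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; a simple quantitative quality index."""
    mse = np.mean((original.astype(float) - degraded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
original = rng.integers(0, 256, (512, 512))

# Stand in for increasing compression ratio with increasing noise amplitude.
for ratio, sigma in [(20, 2.0), (40, 4.0), (60, 8.0), (80, 16.0)]:
    degraded = np.clip(original + rng.normal(0, sigma, original.shape), 0, 255)
    print(f"{ratio}:1 (simulated): PSNR = {psnr(original, degraded):.1f} dB")
```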

  19. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    PubMed

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.

  20. PACE: Power-Aware Computing Engines

    DTIC Science & Technology

    2005-02-01

    more costly than computation on our test platform, and it is memory access that dominates most lossless data compression algorithms. In fact, even... Performance and implementation concerns: A compression algorithm may be implemented with many different, yet reasonable, data structures (including... Related work: This section discusses data compression for low-bandwidth devices and optimizing algorithms for low energy. Though much work has gone

  1. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
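
    A toy illustration of the least-median-of-squares idea, not the LFC algorithm: candidate displacement estimates are drawn from random samples of point matches, scored by the median of squared residuals, and matches with large residuals under the best estimate are discarded. The synthetic data, the pure-translation model, and the cutoff threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic point matches: true displacement is a constant shift, plus outliers.
n = 200
src = rng.uniform(0, 100, (n, 2))
dst = src + np.array([3.0, -1.5]) + rng.normal(0, 0.2, (n, 2))
outliers = rng.choice(n, 30, replace=False)
dst[outliers] += rng.uniform(-20, 20, (30, 2))        # spatially inaccurate matches

def lmeds_shift(src, dst, trials=500, sample=3):
    """Estimate a translation by minimizing the median of squared residuals."""
    best_shift, best_med = None, np.inf
    for _ in range(trials):
        idx = rng.choice(len(src), sample, replace=False)
        shift = np.mean(dst[idx] - src[idx], axis=0)
        resid = np.sum((dst - (src + shift)) ** 2, axis=1)
        med = np.median(resid)
        if med < best_med:
            best_shift, best_med = shift, med
    return best_shift, best_med

shift, med = lmeds_shift(src, dst)
resid = np.sum((dst - (src + shift)) ** 2, axis=1)
keep = resid < 9.0 * med          # simple robust cutoff; threshold is illustrative
print("estimated shift:", np.round(shift, 2), "matches kept:", int(keep.sum()))
```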

  2. Information extraction and transmission techniques for spaceborne synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.

    1984-01-01

    Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.

  3. Optimizing Cloud Based Image Storage, Dissemination and Processing Through Use of Mrf and Lerc

    NASA Astrophysics Data System (ADS)

    Becker, Peter; Plesea, Lucian; Maurer, Thomas

    2016-06-01

    The volume and number of geospatial images being collected continue to increase exponentially with the ever increasing number of airborne and satellite imaging platforms and the increasing rate of data collection. As a result, the cost of fast storage required to provide access to the imagery is a major cost factor in enterprise image management solutions to handle, process and disseminate the imagery and information extracted from the imagery. Cloud-based object storage offers significantly lower cost and elastic storage for this imagery, but also adds some disadvantages in terms of greater latency for data access and lack of traditional file access. Although traditional file formats such as GeoTIFF, JPEG 2000 and NITF can be downloaded from such object storage, their structure and available compression are not optimum and access performance is curtailed. This paper provides details on a solution that utilizes new open image formats for storage and access to geospatial imagery optimized for cloud storage and processing. MRF (Meta Raster Format) is optimized for large collections of scenes such as those acquired from optical sensors. The format enables optimized data access from cloud storage, along with the use of new compression options which cannot easily be added to existing formats. The paper also provides an overview of LERC, a new image compression scheme that can be used with MRF and provides very good lossless and controlled lossy compression.

  4. Proper Plugin Protocols

    DTIC Science & Technology

    2011-12-28

    specify collaboration constraints that occur in Java and XML frameworks and that the collaboration constraints from these frameworks matter in practice. (a... programming language boundaries, and Chapter 6 and Appendix A demonstrate that Fusion can specify constraints across both Java and XML in practice. (c... designed JUnit, Josh Bloch designed Java Collections, and Krzysztof Cwalina designed the .NET Framework APIs. While all of these frameworks are very

  5. Tomcat, Oracle & XML Web Archive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cothren, D. C.

    2008-01-01

    The TOX (Tomcat Oracle & XML) web archive is a foundation for development of HTTP-based applications using Tomcat (or some other servlet container) and an Oracle RDBMS. Use of TOX requires coding primarily in PL/SQL, JavaScript, and XSLT, but also in HTML, CSS and potentially Java. Coded in Java and PL/SQL itself, TOX provides the foundation for more complex applications to be built.

  6. 77 FR 28541 - Request for Comments on the Recommendation for the Disclosure of Sequence Listings Using XML...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-15

    ... (EPO) as the lead, to propose a revised standard for the filing of nucleotide and/or amino acid.... ST.25 uses a controlled vocabulary of feature keys to describe nucleic acid and amino acid sequences... patent data purposes. The XML standard also includes four qualifiers for amino acids. These feature keys...

  7. Integrating XQuery-Enabled SCORM XML Metadata Repositories into an RDF-Based E-Learning P2P Network

    ERIC Educational Resources Information Center

    Qu, Changtao; Nejdl, Wolfgang

    2004-01-01

    Edutella is an RDF-based E-Learning P2P network that is aimed to accommodate heterogeneous learning resource metadata repositories in a P2P manner and further facilitate the exchange of metadata between these repositories based on RDF. Whereas Edutella provides RDF metadata repositories with a quite natural integration approach, XML metadata…

  8. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    ERIC Educational Resources Information Center

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  9. WaterML: an XML Language for Communicating Water Observations Data

    NASA Astrophysics Data System (ADS)

    Maidment, D. R.; Zaslavsky, I.; Valentine, D.

    2007-12-01

    One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data. Each water agency uses its own approach. XML (eXtensible Markup Language) languages are generalizations of Hypertext Markup Language used to communicate specific kinds of information via the internet. WaterML is an XML language for water observations data - streamflow, water quality, groundwater levels, climate, precipitation and aquatic biology data, recorded at fixed, point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.
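
    A minimal sketch of consuming a WaterML-style time-series response with xml.etree.ElementTree; the element names, attributes, and values below are simplified illustrations and do not follow the official WaterML schema or the WaterOneFlow service interface.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative response; element names only loosely follow WaterML.
RESPONSE = """
<timeSeriesResponse>
  <timeSeries siteCode="08158000" variable="streamflow" unit="cfs">
    <value dateTime="2007-12-01T00:00:00">112.0</value>
    <value dateTime="2007-12-01T01:00:00">118.5</value>
    <value dateTime="2007-12-01T02:00:00">121.3</value>
  </timeSeries>
</timeSeriesResponse>
"""

root = ET.fromstring(RESPONSE)
for series in root.findall("timeSeries"):
    site = series.get("siteCode")
    unit = series.get("unit")
    values = [(v.get("dateTime"), float(v.text)) for v in series.findall("value")]
    print(f"site {site}: {len(values)} observations ({series.get('variable')}, {unit})")
    print("  latest:", values[-1])
```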

  10. Transformation of HDF-EOS metadata from the ECS model to ISO 19115-based XML

    NASA Astrophysics Data System (ADS)

    Wei, Yaxing; Di, Liping; Zhao, Baohua; Liao, Guangxuan; Chen, Aijun

    2007-02-01

    Nowadays, geographic data, such as NASA's Earth Observation System (EOS) data, are playing an increasing role in many areas, including academic research, government decisions and even in people's everyday lives. As the quantity of geographic data becomes increasingly large, a major problem is how to fully make use of such data in a distributed, heterogeneous network environment. In order for a user to effectively discover and retrieve the specific information that is useful, the geographic metadata should be described and managed properly. Fortunately, the emergence of XML and Web Services technologies greatly promotes information distribution across the Internet. The research effort discussed in this paper presents a method and its implementation for transforming Hierarchical Data Format (HDF)-EOS metadata from the NASA ECS model to ISO 19115-based XML, which will be managed by the Open Geospatial Consortium (OGC) Catalogue Services - Web Profile (CSW). Using XML and international standards rather than domain-specific models to describe the metadata of those HDF-EOS data, and further using CSW to manage the metadata, can allow metadata information to be searched and interchanged more widely and easily, thus promoting the sharing of HDF-EOS data.

  11. Activate/Inhibit KGCS Gateway via Master Console EIC Pad-B Display

    NASA Technical Reports Server (NTRS)

    Ferreira, Pedro Henrique

    2014-01-01

    My internship consisted of two major projects for the Launch Control System. The purpose of the first project was to implement Application Control Language (ACL) code to Activate Data Acquisition (ADA) and Inhibit Data Acquisition (IDA) on the Kennedy Ground Control Sub-Systems (KGCS) Gateway, to update the existing Pad-B End Item Control (EIC) Display to program the ADA and IDA buttons with the new ACL, and to test and release the ACL Display. The second project consisted of unit testing all of the Application Services Framework (ASF) by March 21st. The XmlFileReader was unit tested and reached 100% coverage. The XmlFileReader class is used to grab information from XML files and use them to initialize elements in the other framework elements by using the Xerces C++ XML Parser, which is open-source, commercial off-the-shelf software. The ScriptThread was also tested. ScriptThread manages the creation and activation of script threads. A large amount of the time was spent initializing the environment, learning how to set up unit tests, and getting familiar with the specific segments of the project that were assigned to us.

  12. Perceptual Image Compression in Telemedicine

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of our technology to the special problems of telemedicine.

  13. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
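
    As a rough analogue of the dynamic-programming event localization described here (not the paper's TD model), the sketch below segments a 1-D parameter track into a fixed number of contiguous events by minimizing within-segment squared error; the synthetic track and the squared-error criterion are illustrative assumptions.

```python
import numpy as np

def dp_segmentation(x, k):
    """Split x into k contiguous segments minimizing within-segment squared error."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    prefix = np.concatenate([[0.0], np.cumsum(x)])
    prefix_sq = np.concatenate([[0.0], np.cumsum(x ** 2)])

    def seg_cost(i, j):
        # Sum of squared deviations of x[i:j] from its mean, via prefix sums.
        s = prefix[j] - prefix[i]
        return (prefix_sq[j] - prefix_sq[i]) - s * s / (j - i)

    cost = np.full((k + 1, n + 1), np.inf)
    back = np.zeros((k + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = cost[seg - 1, i] + seg_cost(i, j)
                if c < cost[seg, j]:
                    cost[seg, j], back[seg, j] = c, i

    bounds, j = [], n
    for seg in range(k, 0, -1):   # recover segment boundaries from back-pointers
        i = back[seg, j]
        bounds.append((i, j))
        j = i
    return bounds[::-1], float(cost[k, n])

# Piecewise-constant toy "spectral parameter" track with three underlying events.
x = np.concatenate([np.full(20, 1.0), np.full(15, 4.0), np.full(25, 2.0)])
x = x + np.random.default_rng(3).normal(0, 0.1, len(x))
print(dp_segmentation(x, 3))
```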

  14. Efficacy of graduated compression stockings for an additional 3 weeks after sclerotherapy treatment of reticular and telangiectatic leg veins.

    PubMed

    Nootheti, Pavan K; Cadag, Kristian M; Magpantay, Angela; Goldman, Mitchel P

    2009-01-01

    Sclerotherapy with post-treatment graduated compression remains the criterion standard for treating lower leg telangiectatic, reticular, and varicose veins, but the optimal duration for that postsclerotherapy compression is unknown. To determine whether 3 weeks of additional graduated compression with Class I compression stockings (20-30 mmHg) improves efficacy when used immediately after 1 week of Class II (30-40 mmHg) graduated compression stockings. Twenty-nine patients with reticular or telangiectatic leg veins were treated with sclerotherapy; one leg was assigned to wear Class II compression stocking for 1 week only, and the contralateral leg was assigned an additional 3 weeks of Class I graduated compression stocking. Postsclerotherapy pigmentation and bruising was significantly less with the addition of 3 weeks of Class I graduated compression stockings.

  15. A new DWT/MC/DPCM video compression framework based on EBCOT

    NASA Astrophysics Data System (ADS)

    Mei, L. M.; Wu, H. R.; Tan, D. M.

    2005-07-01

    A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM is the mainstream framework for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed for more than one decade in the literature and the investigation is still ongoing. The contribution of this work is twofold. Firstly, the Embedded Block Coding with Optimal Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, which provides a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism. This is an extension of the EBCOT application from still images to videos. Secondly, this framework offers a good interface for the Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), where the Mean Squared Error (MSE) can be easily replaced with the PDM in the R-D optimization. Some of the preliminary results are reported here. They are also compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that under the specified conditions the proposed coder outperforms the benchmarks in terms of rate vs. distortion.

  16. An Exponentiation Method for XML Element Retrieval

    PubMed Central

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce data. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structure queries and assigning the weight are significant for the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the BM25 baseline in terms of MAP. PMID:24696643

  17. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    NASA Astrophysics Data System (ADS)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

    Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition LANML is flexible; ensuring future extensions of the format will remain backward compatible with analysis software. The XML technology is at the heart of communicating over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web based applications, educational outreach and efficient collaboration between research groups.

  18. Aerodynamic Design Optimization on Unstructured Meshes Using the Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Anderson, W. Kyle

    1998-01-01

    A discrete adjoint method is developed and demonstrated for aerodynamic design optimization on unstructured grids. The governing equations are the three-dimensional Reynolds-averaged Navier-Stokes equations coupled with a one-equation turbulence model. A discussion of the numerical implementation of the flow and adjoint equations is presented. Both compressible and incompressible solvers are differentiated and the accuracy of the sensitivity derivatives is verified by comparing with gradients obtained using finite differences. Several simplifying approximations to the complete linearization of the residual are also presented, and the resulting accuracy of the derivatives is examined. Demonstration optimizations for both compressible and incompressible flows are given.
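
    The verification step mentioned here, comparing analytic sensitivity derivatives with gradients obtained using finite differences, can be illustrated on a toy objective; the sketch below is not the flow solver's adjoint, just the same check applied to a hand-differentiated scalar function of a few design variables.

```python
import numpy as np

def objective(x: np.ndarray) -> float:
    """Toy stand-in for an aerodynamic cost functional."""
    return float(np.sum(x ** 2) + np.sin(x[0]) * x[1])

def analytic_gradient(x: np.ndarray) -> np.ndarray:
    """Hand-derived gradient of the toy objective."""
    g = 2.0 * x
    g[0] += np.cos(x[0]) * x[1]
    g[1] += np.sin(x[0])
    return g

def finite_difference_gradient(x: np.ndarray, h: float = 1e-6) -> np.ndarray:
    """Central differences, perturbing one design variable at a time."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (objective(x + e) - objective(x - e)) / (2.0 * h)
    return g

x0 = np.array([0.7, -1.3, 2.1])
print("max abs difference:",
      np.max(np.abs(analytic_gradient(x0) - finite_difference_gradient(x0))))
```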

  19. Properties of Foamed Mortar Prepared with Granulated Blast-Furnace Slag.

    PubMed

    Zhao, Xiao; Lim, Siong-Kang; Tan, Cher-Siang; Li, Bo; Ling, Tung-Chai; Huang, Runqiu; Wang, Qingyuan

    2015-01-30

    Foamed mortar with a density of 1300 kg/m³ was prepared. In the initial laboratory trials, water-to-cement (w/c) ratios ranging from 0.54 to 0.64 were tested to determine the optimal value for foamed mortar corresponding to the highest compressive strength without compromising its fresh state properties. With the obtained optimal w/c ratio of 0.56, two types of foamed mortar were prepared, namely cement-foamed mortar (CFM) and slag-foamed mortar (SFM, in which 50% of the cement was replaced by slag by weight). Four different curing conditions were adopted for both types of foamed mortar to assess their compressive strength, ultrasonic pulse velocity (UPV) and thermal insulation performance. The test results indicated that utilizing 50% slag as cement replacement in the production of foamed mortar improved the compressive strength, UPV and thermal insulation properties. Additionally, an initial water curing period of seven days resulted in higher compressive strength and UPV values compared with the air-cured and natural-weather-cured samples. However, this positive effect was more pronounced for compressive strength than for the UPV and thermal conductivity of foamed mortar.

  20. Automated Test Methods for XML Metadata

    DTIC Science & Technology

    2017-12-28

    Group under RCC Task TG-147. This document (Volume VI of the RCC Document 118 series) describes procedures used for evaluating XML metadata documents...including TMATS, MDL, IHAL, and DDML documents. These documents contain specifications or descriptions of artifacts and systems of importance to...the collection and management of telemetry data. The methods defined in this report provide a means of evaluating the suitability of such a metadata

  1. Aircraft

    DTIC Science & Technology

    2003-01-01

    xml Internet. Teal Group Corp. Aviation Week and Space Technology, 18 March 2003, 1. 62 Babak Minovi, “Turbine Industry Struggles with Weak Markets”... xml Internet. Teal Group Corp. Aviation Week and Space Technology, 18 March 2003, 1. 64 Babak Minovi, “Turbine Industry Struggles with Weak Markets”... what several executives referred to as the “perfect storm” now blowing through the aviation market. With this information many questions remain: Will

  2. Schema for Spacecraft-Command Dictionary

    NASA Technical Reports Server (NTRS)

    Laubach, Sharon; Garcia, Celina; Maxwell, Scott; Wright, Jesse

    2008-01-01

    An Extensible Markup Language (XML) schema was developed as a means of defining and describing a structure for capturing spacecraft command-definition and tracking information in a single location in a form readable by both engineers and software used to generate software for flight and ground systems. A structure defined within this schema is then used as the basis for creating an XML file that contains command definitions.

  3. Integrating digital educational content created and stored within disparate software environments: an extensible markup language (XML) solution in real-world use.

    PubMed

    Frank, M S; Schultz, T; Dreyer, K

    2001-06-01

    To provide a standardized and scalable mechanism for exchanging digital radiologic educational content between software systems that use disparate authoring, storage, and presentation technologies. Our institution uses two distinct software systems for creating educational content for radiology. Each system is used to create in-house educational content as well as commercial educational products. One system is an authoring and viewing application that facilitates the input and storage of hierarchical knowledge and associated imagery, and is capable of supporting a variety of entity relationships. This system is primarily used for the production and subsequent viewing of educational CD-ROMs. Another software system is primarily used for radiologic education on the world wide web. This system facilitates input and storage of interactive knowledge and associated imagery, delivering this content over the internet in a Socratic manner simulating in-person interaction with an expert. A subset of knowledge entities common to both systems was derived. An additional subset of knowledge entities that could be bidirectionally mapped via algorithmic transforms was also derived. An extensible markup language (XML) object model and associated lexicon were then created to represent these knowledge entities and their interactive behaviors. Forward-looking attention was exercised in the creation of the object model in order to facilitate straightforward future integration of other sources of educational content. XML generators and interpreters were written for both systems. Deriving the XML object model and lexicon was the most critical and time-consuming aspect of the project. The coding of the XML generators and interpreters required only a few hours for each environment. Subsequently, the transfer of hundreds of educational cases and thematic presentations between the systems can now be accomplished in a matter of minutes. The use of algorithmic transforms results in nearly 100% transfer of context as well as content, thus providing "presentation-ready" outcomes. The automation of knowledge exchange between dissimilar digital teaching environments magnifies the efforts of educators and enriches the learning experience for participants. XML is a powerful and useful mechanism for transferring educational content, as well as the context and interactive behaviors of such content, between disparate systems.

  4. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web based questionnaire from the processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would never ordinarily interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development

  5. EOS ODL Metadata On-line Viewer

    NASA Astrophysics Data System (ADS)

    Yang, J.; Rabi, M.; Bane, B.; Ullman, R.

    2002-12-01

    We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes: 1) an EOS metadata file in Object Description Language (ODL), 2) parameters, such as which metadata to view and what style of display to use, and returns an HTML or XML document displaying the requested metadata in the requested style. This tool was developed to address widespread complaints by the science community that the EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read, by allowing users to upload and view an ODL metadata file in different styles using a web browser. Users can choose to view all of the metadata or part of the metadata, such as Collection metadata, Granule metadata, or Unsupported Metadata. Choices of display styles include 1) Web: a mouseable display with tabs and turn-down menus, 2) Outline: formatted and colored text, suitable for printing, 3) Generic: simple indented text, a direct representation of the underlying ODL metadata, and 4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all the metadata choices. For example, the Web style is only implemented for Collection and Granule metadata groups with known attribute fields, but not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file to a viewable HTML in two steps. The first step is to convert the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step is to transform the XML to HTML using stylesheets. Both operations are done on the server side. This allows a lot of flexibility in the final result, and is very portable cross-platform. Perl CGI behind the Apache web server is used to run the Java ODL2XML, and then run the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet Explorer 5.0+ or Netscape 4.7+.

  6. FastBit Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng

    2007-08-02

    An index in a database system is a data structure that utilizes redundant information about the base data to speed up common searching and retrieval operations. Most commonly used indexes are variants of B-trees, such as B+-tree and B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations by sacrificing the efficiency of updating the indexes after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a special strength stemming from the bitmap compression scheme used. The compression method is called the Word-Aligned Hybrid (WAH) code. It reduces the bitmap indexes to reasonable sizes and at the same time allows very efficient bitwise logical operations directly on the compressed bitmaps. Compared with the well-known compression methods such as LZ77 and Byte-aligned Bitmap Code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since the bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than using other compression schemes. Theoretical analyses showed that WAH compressed bitmap indexes are optimal for one-dimensional range queries. Only the most efficient indexing schemes such as B+-tree and B*-tree have this optimality property. However, bitmap indexes are superior because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
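
    A toy word-aligned run-length encoder illustrating the idea behind WAH-style bitmap compression: uniform 31-bit groups collapse into fill tokens while mixed groups are kept as literals. This is a simplified illustration only, not the actual FastBit/WAH on-disk encoding.

```python
def wah_like_encode(bits, word_bits=31):
    """Word-aligned run-length encoding of a bit list: runs of identical
    31-bit groups collapse into fill tokens, mixed groups stay as literals.
    A simplified illustration only, not the actual FastBit/WAH format."""
    padded = bits + [0] * (-len(bits) % word_bits)   # pad to whole groups
    groups = [tuple(padded[i:i + word_bits]) for i in range(0, len(padded), word_bits)]
    tokens, i = [], 0
    while i < len(groups):
        g = groups[i]
        if all(b == g[0] for b in g):                # uniform group: start of a fill run
            j = i
            while j < len(groups) and groups[j] == g:
                j += 1
            tokens.append(("fill", g[0], j - i))     # (kind, bit value, group count)
            i = j
        else:
            tokens.append(("literal", g))            # mixed group stored verbatim
            i += 1
    return tokens

# Sparse bitmap: mostly zeros with a few set bits, as produced by a bitmap index.
bitmap = [0] * 300
for pos in (95, 96, 200):
    bitmap[pos] = 1
for token in wah_like_encode(bitmap):
    if token[0] == "fill":
        print(f"fill: {token[2]} group(s) of all-{token[1]} bits")
    else:
        print(f"literal: group with {sum(token[1])} set bit(s)")
```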

  7. Colon Targeted Guar Gum Compression Coated Tablets of Flurbiprofen: Formulation, Development, and Pharmacokinetics

    PubMed Central

    Bontha, Vijaya Kumar

    2013-01-01

    The rationale of the present study is to formulate flurbiprofen colon targeted compression coated tablets using guar gum to improve the therapeutic efficacy by increasing drug levels in the colon, and also to reduce the side effects in the upper gastrointestinal tract. The direct compression method was used to prepare flurbiprofen core tablets, and they were compression coated with guar gum. Then the tablets were optimized with the support of in vitro dissolution studies, and further this was proved by pharmacokinetic studies. The optimized formulation (F4) showed almost complete drug release in the colon (99.86%) within 24 h without drug loss in the initial lag period of 5 h (only 6.84% drug release was observed during this period). The pharmacokinetic estimations proved the capability of guar gum compression coated tablets to achieve colon targeting. The Cmax of colon targeted tablets was 11956.15 ng/mL at a Tmax of 10 h, whereas it was 15677.52 ng/mL at 3 h in the case of immediate release tablets. The area under the curve for the immediate release and compression coated tablets was 40385.78 and 78214.50 ng·h/mL and the mean residence time was 3.49 and 10.78 h, respectively. In conclusion, formulation of guar gum compression coated tablets was appropriate for colon targeting of flurbiprofen. PMID:24260738

  8. Colon targeted guar gum compression coated tablets of flurbiprofen: formulation, development, and pharmacokinetics.

    PubMed

    Vemula, Sateesh Kumar; Bontha, Vijaya Kumar

    2013-01-01

    The rationale of the present study is to formulate flurbiprofen colon targeted compression coated tablets using guar gum to improve the therapeutic efficacy by increasing drug levels in the colon, and also to reduce the side effects in the upper gastrointestinal tract. The direct compression method was used to prepare flurbiprofen core tablets, and they were compression coated with guar gum. Then the tablets were optimized with the support of in vitro dissolution studies, and further this was proved by pharmacokinetic studies. The optimized formulation (F4) showed almost complete drug release in the colon (99.86%) within 24 h without drug loss in the initial lag period of 5 h (only 6.84% drug release was observed during this period). The pharmacokinetic estimations proved the capability of guar gum compression coated tablets to achieve colon targeting. The C(max) of colon targeted tablets was 11956.15 ng/mL at a T(max) of 10 h, whereas it was 15677.52 ng/mL at 3 h in the case of immediate release tablets. The area under the curve for the immediate release and compression coated tablets was 40385.78 and 78214.50 ng·h/mL and the mean residence time was 3.49 and 10.78 h, respectively. In conclusion, formulation of guar gum compression coated tablets was appropriate for colon targeting of flurbiprofen.

  9. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    PubMed

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  10. XTCE and XML Database Evolution and Lessons from JWST, LandSat, and Constellation

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Kreistle, Steven; Fatig. Cirtos; Jones, Ronald

    2008-01-01

    The database organizations within three different NASA projects have advanced current practices by creating database synergy between the various spacecraft life cycle stakeholders and educating users in the benefits of the Consultative Committee for Space Data Systems (CCSDS) XML Telemetry and Command Exchange (XTCE) format. The combination of XML for managing program data and CCSDS XTCE for exchange is a robust approach that will meet all user requirements using standards and non-proprietary tools. COTS tools for XTCE/XML are very wide and varied. Combining various low-cost and free tools can be more expensive in the long run than choosing a more expensive COTS tool that meets all the needs. This was especially important when deploying in 32 remote sites with no need for licenses. A common mission XTCE/XML format between dissimilar systems is possible and is not difficult. Command XML/XTCE is more complex than telemetry, and the use of XTCE/XML metadata to describe pages and scripts is needed due to the proprietary nature of most current ground systems. Other mission and science products such as spacecraft loads, science image catalogs, and mission operation procedures can all be described with XML as well to increase their flexibility as systems evolve and change. Figure 10 is an example of a spacecraft table load. The word is out and the XTCE community is growing. The first XTCE user group was held in October and, in addition to ESA/ESOC, SC02000 and CNES identified several systems based on XTCE. The second XTCE user group is scheduled for March 10, 2008 with LDMC and others joining. As the experience with XTCE grows and the user community receives the promised benefits of using XTCE and XML, the interest is growing fast.

  11. XSemantic: An Extension of LCA Based XML Semantic Search

    NASA Astrophysics Data System (ADS)

    Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai

    One of the most convenient ways to query XML data is a keyword search because it does not require any knowledge of XML structure or learning a new user interface. However, the keyword search is ambiguous. The users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose an XML semantic search based on keywords called XSemantic. On the one hand, we give three definitions to complete the search semantics. Firstly, through semantic term expansion, our system is robust to ambiguous keywords by using the domain ontology. Secondly, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Thirdly, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship so that the search results with higher relevance are presented to the users first. On the other hand, in the LCA and the proximity search approaches, we investigated the problem of the information included in the search results. Therefore, we introduce the notion of the Lowest Common Element Ancestor (LCEA) and define our simple rule without any requirement on schema information such as the DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact meaningful results. Additionally, the benefits of our proposed semantics are demonstrated by the second experiment.
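
    The LCA-style grouping of keyword matches can be illustrated with ElementTree and a parent map; the sketch below is not the XSemantic system, and the sample document, keywords, and content-matching rule are invented for demonstration.

```python
import xml.etree.ElementTree as ET

DOC = """
<library>
  <book id="b1">
    <title>XML compression techniques</title>
    <author>Smith</author>
  </book>
  <book id="b2">
    <title>Query processing</title>
    <author>Jones</author>
  </book>
</library>
"""

root = ET.fromstring(DOC)
parent = {child: node for node in root.iter() for child in node}   # child -> parent

def ancestors(node):
    """Path from a node up to the document root (inclusive)."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def lca(a, b):
    """Lowest common ancestor of two elements."""
    seen = set(ancestors(a))
    for node in ancestors(b):
        if node in seen:
            return node
    return None

def matches(keyword):
    """Elements whose text contains the keyword (a crude content match)."""
    return [e for e in root.iter() if e.text and keyword in e.text]

a, b = matches("XML")[0], matches("Smith")[0]
result = lca(a, b)
print("LCA element:", result.tag, result.get("id"))
```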

  12. Comparing Emerging XML Based Formats from a Multi-discipline Perspective

    NASA Astrophysics Data System (ADS)

    Sawyer, D. M.; Reich, L. I.; Nikhinson, S.

    2002-12-01

    This paper analyzes the similarities and differences among several examples of an emerging generation of Scientific Data Formats that are based on XML technologies. Some of the factors evaluated include the goals of these efforts, the data models and XML technologies used, and the maturity of currently available software. This paper then investigates the practicality of developing a single set of structural data objects and basic scientific concepts, such as units, that could be used across discipline boundaries and extended by disciplines and missions to create Scientific Data Formats for their communities. This analysis is partly based on an effort sponsored by the ESDIS office at GSFC to compare the Earth Science Markup Language (ESML) and the eXtensible Data Format (XDF), two members of this new generation of XML based Data Description Languages that have been developed by NASA funded efforts in recent years. This paper adds FITSML and potentially CDFML to the list of XML based Scientific Data Formats discussed. This paper draws heavily on a Formats Evolution Process Committee (http://ssdoo.gsfc.nasa.gov/nost/fep/) draft white paper primarily developed by Lou Reich, Mike Folk and Don Sawyer to assist the Space Science community in understanding Scientific Data Formats. One of the primary conclusions of that paper is that a scientific data format object model should be examined along two basic axes. The first is the complexity of the computer/mathematical data types supported and the second is the level of scientific domain specialization incorporated. This paper also discusses several of the issues that affect the decision on whether to implement a discipline or project specific Scientific Data Format as a formal extension of a general purpose Scientific Data Format or to implement the APIs independently.

  13. Optimizing management of the condensing heat and cooling of gases compression in oxy block using of a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Brzęczek, Mateusz; Bartela, Łukasz

    2013-12-01

    This paper presents the parameters of the reference oxy combustion block operating with supercritical steam parameters, equipped with an air separation unit and a carbon dioxide capture and compression installation. The possibility of recovering heat in the analyzed power plant is discussed. The decision variables and the thermodynamic functions for the optimization algorithm were identified. The principles of operation of the genetic algorithm and the methodology of the calculations are presented. A sensitivity analysis was performed for the best solutions to determine the effects of the selected variables on the power and efficiency of the unit. Optimization of the heat recovery from the air separation unit, flue gas condition, and CO2 capture and compression installation using a genetic algorithm was designed to replace the low-pressure section of the regenerative water heaters of the steam cycle in the analyzed unit. The result was an increase in the power and efficiency of the entire power plant.

  14. Optimizing the Compressive Strength of Strain-Hardenable Stretch-Formed Microtruss Architectures

    NASA Astrophysics Data System (ADS)

    Yu, Bosco; Abu Samk, Khaled; Hibbard, Glenn D.

    2015-05-01

    The mechanical performance of stretch-formed microtrusses is determined by both the internal strut architecture and the accumulated plastic strain during fabrication. The current study addresses the question of optimization, by taking into consideration the interdependency between fabrication path, material properties and architecture. Low carbon steel (AISI1006) and aluminum (AA3003) material systems were investigated experimentally, with good agreement between measured values and the analytical model. The compressive performance of the microtrusses was then optimized on a minimum weight basis under design constraints such as fixed starting sheet thickness and final microtruss height by satisfying the Karush-Kuhn-Tucker condition. The optimization results were summarized as carpet plots in order to meaningfully visualize the interdependency between architecture, microstructural state, and mechanical performance, enabling material and processing path selection.

  15. State of the art techniques for preservation and reuse of hard copy electrocardiograms.

    PubMed

    Lobodzinski, Suave M; Teppner, Ulrich; Laks, Michael

    2003-01-01

    Baseline examinations and periodic reexaminations in longitudinal population studies, together with ongoing surveillance for morbidity and mortality, provide unique opportunities for seeking ways to enhance the value of electrocardiography (ECG) as an inexpensive and noninvasive tool for prognosis and diagnosis. We used a newly developed optical ECG waveform recognition (OEWR) technique capable of extracting raw waveform data from legacy hard copy ECG recordings. Hard copy ECG recordings were scanned and processed by the OEWR algorithm. The extracted ECG datasets were formatted into a newly proposed, vendor-neutral ECG XML data format. An Oracle database was used as a repository for the ECG records in XML format. The proposed technique for XML encapsulation of OEWR-processed hard copy records resulted in an efficient method for including paper ECG records in research databases, thus providing for their preservation, reuse and accession.

  16. Clustering XML Documents Using Frequent Subtrees

    NASA Astrophysics Data System (ADS)

    Kutty, Sangeetha; Tran, Tien; Nayak, Richi; Li, Yuefeng

    This paper presents an experimental study conducted over the INEX 2008 Document Mining Challenge corpus using both the structure and the content of XML documents for clustering them. The concise common substructures known as the closed frequent subtrees are generated using the structural information of the XML documents. The closed frequent subtrees are then used to extract the constrained content from the documents. A matrix containing the term distribution of the documents in the dataset is developed using the extracted constrained content. The k-way clustering algorithm is applied to the matrix to obtain the required clusters. In spite of the large number of documents in the INEX 2008 Wikipedia dataset, the proposed frequent subtree-based clustering approach was successful in clustering the documents. This approach significantly reduces the dimensionality of the terms used for clustering without much loss in accuracy.

  17. Using XML Configuration-Driven Development to Create a Customizable Ground Data System

    NASA Technical Reports Server (NTRS)

    Nash, Brent; DeMore, Martha

    2009-01-01

    The Mission Data Processing and Control Subsystem (MPCS) is being developed as a multi-mission Ground Data System with the Mars Science Laboratory (MSL) as the first fully supported mission. MPCS is a fully featured, Java-based Ground Data System (GDS) for telecommand and telemetry processing based on Configuration-Driven Development (CDD). The eXtensible Markup Language (XML) is the ideal language for CDD because it is easily readable and editable by all levels of users and is also backed by a World Wide Web Consortium (W3C) standard and numerous powerful processing tools that make it uniquely flexible. The CDD approach adopted by MPCS minimizes changes to compiled code by using XML to create a series of configuration files that provide both coarse- and fine-grained control over all aspects of GDS operation.

  18. Code Compression for DSP

    DTIC Science & Technology

    1998-12-01

    Only a citation fragment is recoverable from this record: S. Liao, S. Devadas, K. Keutzer, "Code Density Optimization for Embedded DSP Processors Using Data Compression," Design Automation Conference, June 1998.

  19. A Framework for Resilient Remote Monitoring

    DTIC Science & Technology

    2014-08-01

    Only fragments are recoverable from this record: low-level observables are available, audited, and recorded, establishing the need for a remote monitoring framework; the cited standards include WS-Security, WS-Policy, SAML, XML Signature, and XML Encryption, along with references to the DARPA Integrated Cyber Analysis System (ICAS) program.

  20. XTCE: XML Telemetry and Command Exchange Tutorial, XTCE Version 1

    NASA Technical Reports Server (NTRS)

    Rice, Kevin; Kizzort, Brad

    2008-01-01

    These presentation slides are a tutorial on XML Telemetry and Command Exchange (XTCE). The goal of XTCE is to provide an industry-standard mechanism for describing telemetry and command streams (particularly from satellites). It will lower cost and increase validation over traditional formats, and support exchange of native formats. XTCE is designed to describe bit streams of the kind typical of telemetry and command in the historic space domain.

  1. Toward XML Representation of NSS Simulation Scenario for Mission Scenario Exchange Capability

    DTIC Science & Technology

    2003-09-01

    Only fragments are recoverable from this record: a citation of Deitel, H. M., Deitel, P. J., Nieto, T. R., Lin, T. M., and Sadhu, P. (2001), XML How to Program, Upper Saddle River: Prentice Hall; table-of-contents entries on the Combat XXI program and on transitioning NSS to a Java environment; and the statement that the Naval Simulation System (NSS) is a powerful computer program developed by the Navy to provide force-on-force modeling and simulation.

  2. Chemical markup, XML, and the world wide web. 6. CMLReact, an XML vocabulary for chemical reactions.

    PubMed

    Holliday, Gemma L; Murray-Rust, Peter; Rzepa, Henry S

    2006-01-01

    A set of components (CMLReact) for managing chemical and biochemical reactions has been added to CML. These can be combined to support most of the strategies for the formal representation of reactions. The elements, attributes, and types are formally defined as XMLSchema components, and their semantics are developed. New syntax and semantics in CML are reported and illustrated with 10 examples.

  3. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan

    PubMed Central

    Kinjo, Akira R.; Yamashita, Reiko; Nakamura, Haruki

    2010-01-01

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served for the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in the way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/ PMID:20798081
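    The "one relational table per XPath" idea can be sketched as follows (a minimal illustration over an arbitrary toy document; PDBj Mine derives its actual table definitions automatically from the PDBMLplus XML schema):

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def xpath_rows(root):
    """Flatten an XML tree into (path -> list of values) groups.

    Illustrative only: each distinct path would become a relational table
    named after the XPath, and the collected values its rows.
    """
    rows = defaultdict(list)

    def walk(el, prefix):
        path = f"{prefix}/{el.tag}"
        if el.text and el.text.strip():
            rows[path].append(el.text.strip())
        for child in el:
            walk(child, path)

    walk(root, "")
    return rows

doc = ET.fromstring(
    "<datablock><entry id='1abc'><title>Example structure</title>"
    "<resolution>2.1</resolution></entry></datablock>")
for path, values in xpath_rows(doc).items():
    print(path, values)
```

    Naming each table after its XPath is what makes uniform, mechanical construction of SQL queries over the hierarchy possible.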

  4. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    PubMed

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served for the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in the way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/

  5. ISO, FGDC, DIF and Dublin Core - Making Sense of Metadata Standards for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Jones, P. R.; Ritchey, N. A.; Peng, G.; Toner, V. A.; Brown, H.

    2014-12-01

    Metadata standards provide common definitions of metadata fields for information exchange across user communities. Despite the broad adoption of metadata standards for Earth science data, there are still heterogeneous and incompatible representations of information due to differences between the many standards in use and how each standard is applied. Federal agencies are required to manage and publish metadata in different metadata standards and formats for various data catalogs. In 2014, the NOAA National Climatic Data Center (NCDC) managed metadata for its scientific datasets in ISO 19115-2 in XML, GCMD Directory Interchange Format (DIF) in XML, DataCite Schema in XML, Dublin Core in XML, and Data Catalog Vocabulary (DCAT) in JSON, with more standards and profiles of standards planned. Of these standards, the ISO 19115-series metadata is the most complete and feature-rich, and for this reason it is used by NCDC as the source for the other metadata standards. We will discuss the capabilities of metadata standards and how these standards are being implemented to document datasets. Successful implementations include developing translations and displays using XSLTs, creating links to related data and resources, documenting dataset lineage, and establishing best practices. Benefits, gaps, and challenges will be highlighted with suggestions for improved approaches to metadata storage and maintenance.

  6. Sammy Houssainy | NREL

    Science.gov Websites

    Research focused on the design, analysis, and optimization of hybrid thermal and compressed-air energy storage, and on the design of building- and community-scale systems.

  7. Compressed sensing for high-resolution nonlipid suppressed 1H FID MRSI of the human brain at 9.4T.

    PubMed

    Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke

    2018-04-29

    The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE 1H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed 1H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data; rather, a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.
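    As a point of reference, a generic SENSE-type compressed sensing reconstruction (not the paper's exact x-t sparse formulation, whose operators and regularization weights are not reproduced here) solves

```latex
\hat{x} \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\bigl\| F_{\Omega}\, S\, x - y \bigr\|_{2}^{2} \;+\; \lambda\, \bigl\| \Psi x \bigr\|_{1}
```

    where, under the assumptions of this sketch, y is the undersampled multi-channel data, S applies the coil sensitivities (the SENSE part), F_Ω is the undersampled Fourier sampling operator, and Ψ is a sparsifying transform over the spatial-spectral (x-t) domain.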

  8. Laser-pulse compression in a collisional plasma under weak-relativistic ponderomotive nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Mamta; Gupta, D. N., E-mail: dngupta@physics.du.ac.in

    We present theory and numerical analysis which demonstrate laser-pulse compression in a collisional plasma under the weak-relativistic ponderomotive nonlinearity. Plasma equilibrium density is modified due to the ohmic heating of electrons, the collisions, and the weak relativistic-ponderomotive force during the interaction of a laser pulse with plasmas. First, within one-dimensional analysis, the longitudinal self-compression mechanism is discussed. Three-dimensional analysis (spatiotemporal) of laser pulse propagation is also investigated by coupling the self-compression with the self-focusing. In the regime in which the laser becomes self-focused due to the weak relativistic-ponderomotive nonlinearity, we provide results for enhanced pulse compression. The results show that the matched interplay between self-focusing and self-compression can improve significantly the temporal profile of the compressed pulse. Enhanced pulse compression can be achieved by optimizing and selecting parameters such as collision frequency, ion temperature, and laser intensity.

  9. Compression in Working Memory and Its Relationship With Fluid Intelligence.

    PubMed

    Chekaf, Mustapha; Gauvrit, Nicolas; Guida, Alessandro; Mathy, Fabien

    2018-06-01

    Working memory has been shown to be strongly related to fluid intelligence; however, our goal is to shed further light on the process of information compression in working memory as a determining factor of fluid intelligence. Our main hypothesis was that compression in working memory is an excellent indicator for studying the relationship between working-memory capacity and fluid intelligence because both depend on the optimization of storage capacity. Compressibility of memoranda was estimated using an algorithmic complexity metric. The results showed that compressibility can be used to predict working-memory performance and that fluid intelligence is well predicted by the ability to compress information. We conclude that the ability to compress information in working memory is the reason why both manipulation and retention of information are linked to intelligence. This result offers a new concept of intelligence based on the idea that compression and intelligence are equivalent problems. Copyright © 2018 Cognitive Science Society, Inc.

  10. Joint Battlespace Infosphere: Information Management Within a C2 Enterprise

    DTIC Science & Technology

    2005-06-01

    Only fragments are recoverable from this record: in version 1.2, both MySQL and Oracle are supported as underlying implementations, with the XML metadata schema mapped into relational tables; the architecture also references identity servers, role-based access control, policy representation, and databases including Oracle, MySQL, TigerLogic, and Berkeley XML DB; requests are converted to SQL for execution, and invocations are then forwarded to the appropriate underlying IOR core components.

  11. eMontage: An Architecture for Rapid Integration of Situational Awareness Data at the Edge

    DTIC Science & Technology

    2013-05-01

    Only slide fragments are recoverable from this record: request-response and publish-subscribe interactions among an Android client, a dispatcher servlet, a Spring MVC controller, Camel producer templates and routes, and remote REST data services; XML responses are parsed into a single Java XML Document object.

  12. C3I and Modelling and Simulation (M&S) Interoperability

    DTIC Science & Technology

    2004-03-01

    The technical implementation is based on customised Open Source products, using the eXtensible Markup Language (XML) and Python. XML is used to structure, store and send information and focuses on the description of data. Python is a portable, interpreted, object-oriented programming language, and a wide variety of usable Open Source projects has been issued by the Python community. Phase 1 of the work comprised feasibility studies.

  13. Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.

    PubMed

    Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L

    2007-01-01

    CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CMLElements used and gives practical examples for common types of spectra. In addition it demonstrates how different views of the data can be expressed and what problems still exist.

  14. CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 10, October 2008

    DTIC Science & Technology

    2008-10-01

    Only fragments are recoverable from this record: considerable convergence around Business Process Modeling Notation (BPMN) among modeling offerings; strong support across vendors for the Business Process Execution Language standard, with emerging support for direct execution of BPMN through the use of the XML Process Definition Language, an XML serialization of BPMN; and vendor-provided monitoring of those processes.

  15. Properties of Foamed Mortar Prepared with Granulated Blast-Furnace Slag

    PubMed Central

    Zhao, Xiao; Lim, Siong-Kang; Tan, Cher-Siang; Li, Bo; Ling, Tung-Chai; Huang, Runqiu; Wang, Qingyuan

    2015-01-01

    Foamed mortar with a density of 1300 kg/m³ was prepared. In the initial laboratory trials, water-to-cement (w/c) ratios ranging from 0.54 to 0.64 were tested to determine the optimal value for foamed mortar, corresponding to the highest compressive strength without compromising its fresh-state properties. With the obtained optimal w/c ratio of 0.56, two types of foamed mortar were prepared, namely cement-foamed mortar (CFM) and slag-foamed mortar (SFM, in which 50% of the cement by weight was replaced by slag). Four different curing conditions were adopted for both types of foamed mortar to assess their compressive strength, ultrasonic pulse velocity (UPV) and thermal insulation performance. The test results indicated that utilizing 50% slag as cement replacement in the production of foamed mortar improved the compressive strength, UPV and thermal insulation properties. Additionally, seven days of initial water curing produced higher compressive strength and higher UPV values compared to the air-cured and naturally weathered samples. However, this positive effect was more pronounced for compressive strength than for the UPV and thermal conductivity of the foamed mortar.

  16. Optimum SNR data compression in hardware using an Eigencoil array.

    PubMed

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces the signal-to-noise ratio (SNR) of an optimal reconstruction while using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels; with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
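    A rough software analogue of the channel-reduction idea can be sketched with an eigen-decomposition of noise and signal covariance matrices (an illustration only: the published method implements the combination in radiofrequency hardware before the receivers, and the function, array shapes, and random test data below are assumptions of this sketch):

```python
import numpy as np

def eigen_compress(channel_data, noise_samples, n_virtual):
    """Combine receive channels into fewer 'virtual' channels.

    channel_data : (n_channels, n_samples) complex signal data
    noise_samples: (n_channels, n_noise) complex noise-only samples
    n_virtual    : number of virtual channels to keep
    """
    # Noise covariance and its inverse square root (whitening).
    rn = np.cov(noise_samples)
    w, v = np.linalg.eigh(rn)
    whiten = v @ np.diag(1.0 / np.sqrt(w)) @ v.conj().T

    white_data = whiten @ channel_data
    # Dominant eigenvectors of the whitened signal covariance define the
    # virtual channels that capture most of the available SNR.
    rs = white_data @ white_data.conj().T / white_data.shape[1]
    ws, vs = np.linalg.eigh(rs)
    weights = vs[:, ::-1][:, :n_virtual]      # largest eigenvalues first
    return weights.conj().T @ white_data      # (n_virtual, n_samples)

rng = np.random.default_rng(0)
data = rng.standard_normal((8, 1000)) + 1j * rng.standard_normal((8, 1000))
noise = rng.standard_normal((8, 500)) + 1j * rng.standard_normal((8, 500))
virtual = eigen_compress(data, noise, n_virtual=3)
print(virtual.shape)  # (3, 1000)
```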

  17. Internet-based data interchange with XML

    NASA Astrophysics Data System (ADS)

    Fuerst, Karl; Schmidt, Thomas

    2000-12-01

    In this paper, a complete concept for Internet Electronic Data Interchange (EDI) using XML (eXtensible Markup Language) is proposed. EDI is a well-known approach in logistics and supply chain management for automating the interactions between companies and their partners. The approach is based on the Internet and XML because the implementation of traditional EDI (e.g. EDIFACT, ANSI X.12) is usually too costly for small and medium-sized enterprises that want to integrate their suppliers and customers in a supply chain. The paper also presents the results of implementing a prototype of such a system, which was developed for an industrial partner to improve the current situation of parts delivery. The main functions of this system are an early warning system to detect problems during the parts delivery process as early as possible, and a transport tracking system to follow the transportation.

  18. CytometryML with DICOM and FCS

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.

    2018-02-01

    Flow Cytometry Standard (FCS) and the Digital Imaging and Communications in Medicine (DICOM) standard are based on extensive, superb domain knowledge. However, they are isolated systems: they do not take advantage of data structures, require special programs to read and write the data, and lack the capability to interoperate or work with other standards; FCS also lacks many of the datatypes necessary for clinical laboratory data. The large overlap between imaging and flow cytometry provides strong evidence that both modalities should be covered by the same standard. Method: The XML Schema Definition Language (XSD 1.1) was used to translate FCS and/or DICOM objects. A MIFlowCyt file was tested with published values. Results: Previously, a significant part of an XML standard based upon a combination of FCS and DICOM had been implemented and validated with MIFlowCyt data. Strongly typed translations of FCS keywords have been constructed in XML. These keywords contain links to their DICOM and FCS equivalents.

  19. Ordered Backward XPath Axis Processing against XML Streams

    NASA Astrophysics Data System (ADS)

    Nizar M., Abdul; Kumar, P. Sreenivasa

    Processing of backward XPath axes against XML streams is challenging for two reasons: (i) data is not cached for future access, and (ii) the query contains steps specifying navigation to data that has already passed by. While there are some attempts to process the parent and ancestor axes, there are very few proposals to process the ordered backward axes, namely preceding and preceding-sibling. For ordered backward axis processing, the algorithm, in addition to overcoming the limitations on data availability, has to take care of the ordering constraints imposed by these axes. In this paper, we show how ordered backward axes can be effectively represented using forward constraints. We then discuss an algorithm for XML stream processing of XPath expressions containing ordered backward axes. The algorithm uses a layered cache structure to systematically accumulate query results. Our experiments show that the new algorithm gains remarkable speed-up over the existing algorithm without compromising on buffer-space requirements.

  20. ART-ML - a novel XML format for the biological procedures modeling and the representation of blood flow simulation.

    PubMed

    Karvounis, E C; Tsakanikas, V D; Fotiou, E; Fotiadis, D I

    2010-01-01

    The paper proposes a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and reuse of models of blood flow, mass transport and plaque formation exported by ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in easy-to-handle 3D representations. The platform incorporates efficient algorithms which are able to perform blood flow simulation. In addition, atherosclerotic plaque development is estimated taking into account morphological, flow and genetic factors. ART-ML provides an XML format that enables the representation and management of embedded models within the ARTool platform and the storage and interchange of well-defined information. This approach facilitates model creation, model exchange, model reuse and result evaluation.

  1. HDF-EOS Web Server

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeN-DAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.

  2. XML Based Scientific Data Management Facility

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zubair, M.; Ziebartt, John (Technical Monitor)

    2001-01-01

    The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of HTML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.

  3. JCMT observatory control system

    NASA Astrophysics Data System (ADS)

    Rees, Nicholas P.; Economou, Frossie; Jenness, Tim; Kackley, Russell D.; Walther, Craig A.; Dent, William R. F.; Folger, Martin; Gao, Xiaofeng; Kelly, Dennis; Lightfoot, John F.; Pain, Ian; Hovey, Gary J.; Redman, Russell O.

    2002-12-01

    The JCMT, the world's largest sub-mm telescope, has had essentially the same VAX/VMS based control system since it was commissioned. For the next generation of instrumentation we are implementing a new Unix/VxWorks based system, based on the successful ORAC system that was recently released on UKIRT. The system is now entering the integration and testing phase. This paper gives a broad overview of the system architecture and includes some discussion on the choices made. (Other papers in this conference cover some areas in more detail). The basic philosophy is to control the sub-systems with a small and simple set of commands, but passing detailed XML configuration descriptions along with the commands to give the flexibility required. The XML files can be passed between various layers in the system without interpretation, and so simplify the design enormously. This has all been made possible by the adoption of an Observation Preparation Tool, which essentially serves as an intelligent XML editor.

  4. Non-invasive lightweight integration engine for building EHR from autonomous distributed systems.

    PubMed

    Angulo, Carlos; Crespo, Pere; Maldonado, José A; Moner, David; Pérez, Daniel; Abad, Irene; Mandingorra, Jesús; Robles, Montserrat

    2007-12-01

    In this paper we describe Pangea-LE, a message-oriented lightweight data integration engine that allows homogeneous and concurrent access to clinical information from dispersed and heterogeneous data sources. The engine extracts the information and passes it to the requesting client applications in a flexible XML format. The XML response message can be formatted on demand by appropriate Extensible Stylesheet Language (XSL) transformations in order to meet the needs of client applications. We also present a real deployment in a hospital where Pangea-LE collects and generates an XML view of all the available patient clinical information. The information is presented to healthcare professionals in an Electronic Health Record (EHR) viewer Web application with patient search and EHR browsing capabilities. Deployment in a real setting has been a success due to the non-invasive nature of Pangea-LE, which respects the existing information systems.
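    The on-demand XSL formatting of an XML response can be illustrated with lxml (the patient message and stylesheet below are hypothetical stand-ins, not Pangea-LE's actual formats):

```python
from lxml import etree

# Hypothetical XML response from the integration engine.
response = etree.XML(
    "<patient><id>42</id><episode date='2007-03-01'>Admission</episode></patient>")

# Hypothetical XSL stylesheet that renders the response for an EHR viewer.
xslt = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/patient">
    <html><body>
      <h1>Patient <xsl:value-of select="id"/></h1>
      <p><xsl:value-of select="episode/@date"/>: <xsl:value-of select="episode"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt)
print(str(transform(response)))  # HTML view produced on demand from the XML
```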

  5. Bottom-Up Evaluation of Twig Join Pattern Queries in XML Document Databases

    NASA Astrophysics Data System (ADS)

    Chen, Yangjun

    Since the extensible markup language XML emerged as a new standard for information representation and exchange on the Internet, the problem of storing, indexing, and querying XML documents has been among the major issues of database research. In this paper, we study twig pattern matching and discuss a new algorithm for processing ordered twig pattern queries. The time complexity of the algorithm is bounded by O(|D|·|Q| + |T|·leaf_Q) and its space overhead by O(leaf_T·leaf_Q), where T stands for a document tree, Q for a twig pattern, and D for the largest data stream associated with a node q of Q, which contains the database nodes that match the node predicate at q; leaf_T (leaf_Q) denotes the number of leaf nodes of T (resp. Q). In addition, the algorithm can be adapted to an indexing environment with XB-trees being used.

  6. A Simple XML Producer-Consumer Protocol

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy

    2000-01-01

    This document describes a simple XML-based protocol that can be used for producers of events to communicate with consumers of events. The protocol described here is not meant to be the most efficient, the most logical, or the best protocol in any way. It was defined quickly, and its intent is to give us a reasonable protocol that we can implement relatively easily and then use to gain experience in distributed event services. This experience will help us evaluate proposals for event representations, XML-based encoding of information, and communication protocols. The next section of this document describes how we represent events in this protocol and then defines the two events that we chose to use for our initial experiments. These definitions are made by example so that they are informal and easy to understand. The following section then defines the producer-consumer protocol we have agreed upon for our initial experiments.
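    A minimal sketch of such an XML event exchange is shown below (the element and attribute names are hypothetical; the protocol's actual event definitions are given by example in the document itself):

```python
import xml.etree.ElementTree as ET

def make_event(event_type, source, **fields):
    """Producer side: serialize an event as a small XML message."""
    event = ET.Element("event", attrib={"type": event_type, "source": source})
    for name, value in fields.items():
        ET.SubElement(event, name).text = str(value)
    return ET.tostring(event, encoding="unicode")

def consume(message):
    """Consumer side: parse the XML message back into type and payload."""
    event = ET.fromstring(message)
    return event.get("type"), {child.tag: child.text for child in event}

msg = make_event("job.status", "producer-01", job_id="1138", state="RUNNING")
print(msg)
print(consume(msg))
```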

  7. Techniques for integrating ‐omics data

    PubMed Central

    Akula, Siva Prasad; Miriyala, Raghava Naidu; Thota, Hanuman; Rao, Allam Appa; Gedela, Srinubabu

    2009-01-01

    The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving productivity for stored data. Through proper data integration tools and algorithms, researchers may correlate relationships that enable them to make better and faster decisions. Data integration is essential for the present -omics community because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one of the important techniques often used is XML. XML provides a document markup language that is easier to learn, retrieve, store and transmit, and it is semantically richer than HTML. Here, we describe bio-warehousing, database federation and controlled vocabularies, highlighting the application of XML to store, migrate and validate -omics data. PMID:19255651

  8. Techniques for integrating -omics data.

    PubMed

    Akula, Siva Prasad; Miriyala, Raghava Naidu; Thota, Hanuman; Rao, Allam Appa; Gedela, Srinubabu

    2009-01-01

    The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving productivity for stored data. Through proper data integration tools and algorithms, researchers may correlate relationships that enable them to make better and faster decisions. Data integration is essential for the present -omics community because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one of the important techniques often used is XML. XML provides a document markup language that is easier to learn, retrieve, store and transmit, and it is semantically richer than HTML. Here, we describe bio-warehousing, database federation and controlled vocabularies, highlighting the application of XML to store, migrate and validate -omics data.

  9. An XML-based system for the flexible classification and retrieval of clinical practice guidelines.

    PubMed Central

    Ganslandt, T.; Mueller, M. L.; Krieglstein, C. F.; Senninger, N.; Prokosch, H. U.

    2002-01-01

    Beneficial effects of clinical practice guidelines (CPGs) have not yet reached expectations due to limited routine adoption. Electronic distribution and reminder systems have the potential to overcome implementation barriers. Existing electronic CPG repositories like the National Guideline Clearinghouse (NGC) provide individual access but lack standardized computer-readable interfaces necessary for automated guideline retrieval. The aim of this paper was to facilitate automated context-based selection and presentation of CPGs. Using attributes from the NGC classification scheme, an XML-based metadata repository was successfully implemented, providing document storage, classification and retrieval functionality. Semi-automated extraction of attributes was implemented for the import of XML guideline documents using XPath. A hospital information system interface was exemplarily implemented for diagnosis-based guideline invocation. Limitations of the implemented system are discussed and possible future work is outlined. Integration of standardized computer-readable search interfaces into existing CPG repositories is proposed. PMID:12463831
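    The XPath-based attribute extraction can be sketched as follows (the guideline document and attribute names are hypothetical stand-ins for the NGC classification scheme used by the paper):

```python
from lxml import etree

# Hypothetical guideline document with classification attributes.
guideline = etree.XML("""
<guideline>
  <classification>
    <attribute name="disease" code="ICD-9">540.9</attribute>
    <attribute name="clinical-specialty">Surgery</attribute>
  </classification>
  <title>Appendicitis management</title>
</guideline>""")

# XPath pulls classification attributes for storage in the metadata repository.
for attr in guideline.xpath("/guideline/classification/attribute"):
    print(attr.get("name"), attr.get("code"), attr.text)

# Context-based retrieval: find guidelines whose disease code matches a diagnosis.
matches = guideline.xpath(
    "//attribute[@name='disease' and text()='540.9']/ancestor::guideline/title/text()")
print(matches)
```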

  10. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    NASA Technical Reports Server (NTRS)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept of using the eXtensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for exporting and importing data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition or changing of individual items without affecting the entire ground system. Using XML should also allow for altering the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and merging the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) software, which is often limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the development and support costs for custom code over the 19-year mission period were forecast to be higher than the total licensing costs. A group also looked at reusing existing database tools and formats. If the JWST database were already in a mature state, reuse would make sense, but with the database still needing to handle the addition of different types of command and telemetry structures, the definition of new spacecraft systems, and input from and export to systems which have not been defined yet, XML provided the flexibility desired. It remains to be determined whether the XML database will reduce the overall cost for the JWST mission.

  11. Internet Patient Records: new techniques

    PubMed Central

    Moehrs, Sascha; Anedda, Paolo; Tuveri, Massimiliano; Zanetti, Gianluigi

    2001-01-01

    Background The ease by which the Internet is able to distribute information to geographically-distant users on a wide variety of computers makes it an obvious candidate for a technological solution for electronic patient record systems. Indeed, second-generation Internet technologies such as the ones described in this article - XML (eXtensible Markup Language), XSL (eXtensible Style Language), DOM (Document Object Model), CSS (Cascading Style Sheet), JavaScript, and JavaBeans - may significantly reduce the complexity of the development of distributed healthcare systems. Objective The demonstration of an experimental Electronic Patient Record (EPR) system built from those technologies that can support viewing of medical imaging exams and graphically-rich clinical reporting tools, while conforming to the newly emerging XML standard for digital documents. In particular, we aim to promote rapid prototyping of new reports by clinical specialists. Methods We have built a prototype EPR client, InfoDOM, that runs in both the popular web browsers. In this second version it receives each EPR as an XML record served via the secure SSL (Secure Socket Layer) protocol. JavaBean software components manipulate the XML to store it and then to transform it into a variety of useful clinical views. First a web page summary for the patient is produced. From that web page other JavaBeans can be launched. In particular, we have developed a medical imaging exam Viewer and a clinical Reporter bean parameterized appropriately for the particular patient and exam in question. Both present particular views of the XML data. The Viewer reads image sequences from a patient-specified network URL on a PACS (Picture Archiving and Communications System) server and presents them in a user-controllable animated sequence, while the Reporter provides a configurable anatomical map of the site of the pathology, from which individual "reportlets" can be launched. The specification of these reportlets is achieved using standard HTML forms and thus may conceivably be authored by clinical specialists. A generic JavaScript library has been written that allows the seamless incorporation of such contributions into the InfoDOM client. In conjunction with another JavaBean, that library renders graphically-enhanced reporting tools that read and write content to and from the XML data-structure, ready for resubmission to the EPR server. Results We demonstrate the InfoDOM experimental EPR system that is currently being adapted for test-bed use in three hospitals in Cagliari, Italy. For this we are working with specialists in neurology, radiology, and epilepsy. Conclusions Early indications are that the rapid prototyping of reports afforded by our EPR system can assist communication between clinical specialists and our system developers. We are now experimenting with new technologies that may provide services to the kind of XML EPR client described here. PMID:11720950

  12. EASEE: an open architecture approach for modeling battlespace signal and sensor phenomenology

    NASA Astrophysics Data System (ADS)

    Waldrop, Lauren E.; Wilson, D. Keith; Ekegren, Michael T.; Borden, Christian T.

    2017-04-01

    Open architecture in the context of defense applications encourages collaboration across government agencies and academia. This paper describes a success story in the implementation of an open architecture framework that fosters transparency and modularity in the context of Environmental Awareness for Sensor and Emitter Employment (EASEE), a complex physics-based software package for modeling the effects of terrain and atmospheric conditions on signal propagation and sensor performance. Among the highlighted features in this paper are: (1) a code refactorization to separate sensitive parts of EASEE, thus allowing collaborators the opportunity to view and interact with non-sensitive parts of the EASEE framework with the end goal of supporting collaborative innovation, (2) a data exchange and validation effort to enable the dynamic addition of signatures within EASEE, thus supporting a modular notion that components can be easily added to or removed from the software without requiring recompilation by developers, and (3) a flexible and extensible XML interface, which aids in decoupling graphical user interfaces from EASEE's calculation engine and thus encourages adaptability to many different defense applications. In addition to the points outlined above, this paper also addresses EASEE's ability to interface with proprietary systems such as ArcGIS. A specific use case is discussed regarding the implementation of an ArcGIS toolbar that leverages EASEE's XML interface and enables users to set up an EASEE-compliant configuration for probability of detection or optimal sensor placement calculations in various modalities.

  13. DCTune Perceptual Optimization of Compressed Dental X-Rays

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    In current dental practice, x-rays of completed dental work are often sent to the insurer for verification. It is faster and cheaper to transmit digital scans of the x-rays instead. Further economies result if the images are sent in compressed form. DCTune is a technology for optimizing DCT (discrete cosine transform) quantization matrices to yield maximum perceptual quality for a given bit-rate, or minimum bit-rate for a given perceptual quality, including perceptual optimization of DCT color quantization matrices. In addition, the technology provides a means of setting the perceptual quality of compressed imagery in a systematic way. The purpose of this research was, with respect to dental x-rays, 1) to verify the advantage of DCTune over standard JPEG (Joint Photographic Experts Group) compression, 2) to verify the quality control feature of DCTune, and 3) to discover regularities in the optimized matrices of a set of images. We optimized matrices for a total of 20 images at two resolutions (150 and 300 dpi) and four bit-rates (0.25, 0.5, 0.75, 1.0 bits/pixel), and examined structural regularities in the resulting matrices. We also conducted psychophysical studies (1) to discover the DCTune quality level at which the images became 'visually lossless,' and (2) to rate the relative quality of DCTune and standard JPEG images at various bit-rates. Results include: (1) At both resolutions, DCTune quality is a linear function of bit-rate. (2) DCTune quantization matrices for all images at all bit-rates and resolutions are modeled well by an inverse Gaussian, with parameters of amplitude and width. (3) As bit-rate is varied, optimal values of both amplitude and width covary in an approximately linear fashion. (4) Both amplitude and width vary in a systematic and orderly fashion with either bit-rate or DCTune quality; simple mathematical functions serve to describe these relationships. (5) In going from 150 to 300 dpi, amplitude parameters are substantially lower and widths larger at corresponding bit-rates or qualities. (6) Visually lossless compression occurs at a DCTune quality value of about 1. (7) At 0.25 bits/pixel, comparative ratings give DCTune a substantial advantage over standard JPEG. As visually lossless bit-rates are approached, this advantage of necessity diminishes. We conclude that DCTune-optimized quantization matrices provide better visual quality than standard JPEG, that meaningful quality levels may be specified by means of the DCTune metric, and that optimized matrices are very similar across the class of dental x-rays, suggesting the possibility of a 'class-optimal' matrix. DCTune technology appears to provide some value in the context of compressed dental x-rays.
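    One plausible reading of the reported inverse-Gaussian regularity, assuming "inverse Gaussian" means the reciprocal of a Gaussian in radial DCT frequency with amplitude and width as the two free parameters (the paper's exact parameterization is not reproduced here), is sketched below.

```python
import numpy as np

def inverse_gaussian_qmatrix(amplitude, width, size=8):
    """Build an 8x8 quantization matrix that grows with spatial frequency.

    Illustrative assumption only: Q(u, v) = amplitude * exp(r^2 / (2 w^2)),
    with r the radial index in the DCT frequency plane.
    """
    u, v = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    r2 = u.astype(float) ** 2 + v.astype(float) ** 2
    return amplitude * np.exp(r2 / (2.0 * width ** 2))

print(np.round(inverse_gaussian_qmatrix(amplitude=4.0, width=6.0), 1))
```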

  14. Compression of next-generation sequencing quality scores using memetic algorithm

    PubMed Central

    2014-01-01

    Background The exponential growth of next-generation sequencing (NGS) derived DNA data poses great challenges to data storage and transmission. Although many compression algorithms have been proposed for DNA reads in NGS data, few methods are designed specifically to handle the quality scores. Results In this paper we present a memetic algorithm (MA) based NGS quality score data compressor, namely MMQSC. The algorithm extracts raw quality score sequences from FASTQ formatted files, and designs compression codebook using MA based multimodal optimization. The input data is then compressed in a substitutional manner. Experimental results on five representative NGS data sets show that MMQSC obtains higher compression ratio than the other state-of-the-art methods. Particularly, MMQSC is a lossless reference-free compression algorithm, yet obtains an average compression ratio of 22.82% on the experimental data sets. Conclusions The proposed MMQSC compresses NGS quality score data effectively. It can be utilized to improve the overall compression ratio on FASTQ formatted files. PMID:25474747

  15. SU-E-T-406: Use of TrueBeam Developer Mode and API to Increase the Efficiency and Accuracy of Commissioning Measurements for the Varian EDGE Stereotactic Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, S; Gulam, M; Song, K

    2014-06-01

    Purpose: The Varian EDGE machine is a new stereotactic platform, combining Calypso and VisionRT localization systems with a stereotactic linac. The system includes TrueBeam Developer Mode, making possible the use of XML scripting for automation of linac-related tasks. This study details the use of Developer Mode to automate commissioning tasks for the Varian EDGE, thereby improving efficiency and measurement consistency. Methods: XML scripting was used for various commissioning tasks, including couch model verification, beam scanning, and isocenter verification. For couch measurements, point measurements were acquired for several field sizes (2×2, 4×4, 10×10 cm²) at 42 gantry angles for two couch models. Measurements were acquired with variations in couch position (rails in/out, couch shifted in each of the motion axes) and compared to treatment planning system (TPS)-calculated values, which were logged automatically through advanced planning interface (API) scripting functionality. For beam scanning, XML scripts were used to create custom MLC apertures. For isocenter verification, XML scripts were used to automate various Winston-Lutz-type tests. Results: For couch measurements, the time required for each set of angles was approximately 9 minutes. Without scripting, each set required approximately 12 minutes. Automated measurements required only one physicist, while manual measurements required at least two physicists to handle linac positions/beams and data recording. MLC apertures were generated outside of the TPS, and with the .xml file format, double-checking without use of the TPS or operator console was possible. Similar time efficiency gains were found for isocenter verification measurements. Conclusion: The use of XML scripting in TrueBeam Developer Mode allows for efficient and accurate data acquisition during commissioning. The efficiency improvement is most pronounced for iterative measurements, exemplified by the time savings for couch modeling measurements (approximately 10 hours). The scripting also allowed for creation of the files in advance without requiring access to the TPS. The API scripting functionality enabled efficient creation/mining of TPS data. Finally, automation reduces the potential for human error in entering linac values at the machine console, and the script provides a log of measurements acquired for each session. This research was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.

  16. Data compression of discrete sequence: A tree based approach using dynamic programming

    NASA Technical Reports Server (NTRS)

    Shivaram, Gurusrasad; Seetharaman, Guna; Rao, T. R. N.

    1994-01-01

    A dynamic programming based approach for data compression of a 1D sequence is presented. The compression of an input sequence of size N to a smaller size k is achieved by dividing the input sequence into k subsequences and replacing each subsequence by its average value. The partitioning of the input sequence is carried out with the intention of minimizing the mean squared error in the reconstructed sequence. The complexity involved in finding the partition that yields such an optimal compressed sequence is reduced by using the dynamic programming approach, which is presented.
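    The dynamic-programming idea can be sketched as follows (a generic minimum-squared-error segmentation into k mean-valued pieces, intended to capture the spirit of the approach rather than the paper's exact formulation):

```python
import numpy as np

def compress_mean_segments(x, k):
    """Partition x into k contiguous segments, each replaced by its mean,
    choosing the partition that minimizes total squared error via DP."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Prefix sums give any segment's squared error in O(1).
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x ** 2)))

    def sse(i, j):  # squared error of x[i:j] replaced by its mean
        total, total_sq, m = s1[j] - s1[i], s2[j] - s2[i], j - i
        return total_sq - total * total / m

    INF = float("inf")
    cost = np.full((k + 1, n + 1), INF)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    cost[0, 0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = cost[seg - 1, i] + sse(i, j)
                if c < cost[seg, j]:
                    cost[seg, j], cut[seg, j] = c, i

    # Recover boundaries and build the compressed (k-value) sequence.
    bounds, j = [], n
    for seg in range(k, 0, -1):
        i = cut[seg, j]
        bounds.append((i, j))
        j = i
    bounds.reverse()
    return [float(np.mean(x[i:j])) for i, j in bounds], bounds

means, bounds = compress_mean_segments([1, 1, 2, 9, 10, 10, 4, 5], k=3)
print(means, bounds)  # roughly [1.33, 9.67, 4.5] with the segment boundaries
```

    The DP table has O(k·N) states and each state considers O(N) split points, so the optimal partition is found in O(k·N²) time instead of enumerating all possible partitions.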

  17. Formation of nanosecond SBS-compressed pulses for pumping an ultra-high power parametric amplifier

    NASA Astrophysics Data System (ADS)

    Kuz’min, A. A.; Kulagin, O. V.; Rodchenkov, V. I.

    2018-04-01

    Compression of pulsed Nd:glass laser radiation under stimulated Brillouin scattering (SBS) in perfluorooctane is investigated. Compression of 16-ns pulses at a beam diameter of 30 mm is implemented. The maximum compression coefficient is 28 in the optimal range of laser pulse energies from 2 to 4 J. The Stokes pulse power exceeds that of the initial laser pulse by a factor of about 11.5. The Stokes pulse jitter (fluctuations of the Stokes pulse exit time from the compressor) is studied. The rms spread of these fluctuations is found to be 0.85 ns.

  18. C2 Product-Centric Approach to Transforming Current C4ISR Information Architectures

    DTIC Science & Technology

    2004-06-01

    Only fragments are recoverable from this record: for the "Cultural Feature" Entity Kind only the "Land" Domain is defined; the unifying concept of a Model-Driven Architecture (MDA) under development by the Object Management Group is presented as a way to take advantage of both worlds, along with moving Information Exchange Requirements (IER) to an XML environment; FCS [4] developers have embraced both UML and XML for their architectures, and MIP [5] too is migrating.

  19. Leveraging Small-Lexicon Language Models

    DTIC Science & Technology

    2016-12-31

    Only fragments are recoverable from this record: an "easy to use" XML build (from a lexicon.xml file) bakes in source and language metadata and shows raw ("copper") forms; metadata can be kept separate (e.g. when used as standoff annotation) or some or all of it can be baked into each and every set; where several interpretations are plausible they are pipe-separated (bake#v#1|toast#v#1); and several word classes have been added, with all items numbered #1.

  20. Internet Economics IV

    DTIC Science & Technology

    2004-08-01

    Only fragments are recoverable from this record: the fifth talk summarizes key aspects of XML (Extended Markup Language), Web Services and their components, and B2B/B2C aspects of those in a technical and economic snapshot; the sixth talk discusses the trade-off between quality and cost; listed contributions include "How the Internet is Run: A Worldwide Perspective" (Christoph Pauls) and "XML, Web Services and B2C/B2B: A Technical and Economical Snapshot" (Matthias Pitt).

  1. Design and implementation of the mobility assessment tool: software description.

    PubMed

    Barnard, Ryan T; Marsh, Anthony P; Rejeski, Walter Jack; Pecorella, Anthony; Ip, Edward H

    2013-07-23

    In previous work, we described the development of an 81-item video-animated tool for assessing mobility. In response to criticism levied during a pilot study of this tool, we sought to develop a new version built upon a flexible framework for designing and administering the instrument. Rather than constructing a self-contained software application with a hard-coded instrument, we designed an XML schema capable of describing a variety of psychometric instruments. The new version of our video-animated assessment tool was then defined fully within the context of a compliant XML document. Two software applications--one built in Java, the other in Objective-C for the Apple iPad--were then built that could present the instrument described in the XML document and collect participants' responses. Separating the instrument's definition from the software application implementing it allowed for rapid iteration and easy, reliable definition of variations. Defining instruments in a software-independent XML document simplifies the process of defining instruments and variations and allows a single instrument to be deployed on as many platforms as there are software applications capable of interpreting the instrument, thereby broadening the potential target audience for the instrument. Continued work will be done to further specify and refine this type of instrument specification with a focus on spurring adoption by researchers in gerontology and geriatric medicine.
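    The separation of instrument definition from application code can be illustrated with schema validation (the schema and instrument document below are hypothetical, not the project's actual XML schema):

```python
from lxml import etree

# Hypothetical schema describing an instrument as a list of items.
schema_doc = etree.XML("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="instrument">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="item" maxOccurs="unbounded">
          <xs:complexType>
            <xs:simpleContent>
              <xs:extension base="xs:string">
                <xs:attribute name="id" type="xs:string" use="required"/>
              </xs:extension>
            </xs:simpleContent>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>""")

# Hypothetical instrument definition; any compliant application (Java, iPad,
# or otherwise) could present the same document.
instrument = etree.XML(
    '<instrument><item id="walk-01">Video: rise from a chair and walk 10 feet'
    '</item></instrument>')

schema = etree.XMLSchema(schema_doc)
print(schema.validate(instrument))  # True if the instrument conforms
```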

  2. Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care.

    PubMed

    Menezes, Pedro Monteiro; Cook, Timothy Wayne; Cavalini, Luciana Tricai

    2016-01-01

    To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures compliance with the HL7v2 standard while adopting Semantic Web technologies.

  3. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.

    PubMed

    Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu

    2006-01-11

    Many proteomics initiatives require integration of all information with uniform criteria, from collection of samples and data display to publication of experimental results. The integration and exchange of these data of different formats and structures poses a great challenge. XML technology shows promise in handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, which has marked geographic and racial differences in incidence. Although there are some cancer proteome databases now, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two methods, keyword query and click query, were provided at the same time to access the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data that are generated from gel-based proteomics experiments. The database, as well as the PHP source codes for constructing users' own proteome repository, can be accessed at http://www.xyproteomics.org/.

  4. Design and implementation of the mobility assessment tool: software description

    PubMed Central

    2013-01-01

    Background In previous work, we described the development of an 81-item video-animated tool for assessing mobility. In response to criticism levied during a pilot study of this tool, we sought to develop a new version built upon a flexible framework for designing and administering the instrument. Results Rather than constructing a self-contained software application with a hard-coded instrument, we designed an XML schema capable of describing a variety of psychometric instruments. The new version of our video-animated assessment tool was then defined fully within the context of a compliant XML document. Two software applications—one built in Java, the other in Objective-C for the Apple iPad—were then built that could present the instrument described in the XML document and collect participants’ responses. Separating the instrument’s definition from the software application implementing it allowed for rapid iteration and easy, reliable definition of variations. Conclusions Defining instruments in a software-independent XML document simplifies the process of defining instruments and variations and allows a single instrument to be deployed on as many platforms as there are software applications capable of interpreting the instrument, thereby broadening the potential target audience for the instrument. Continued work will be done to further specify and refine this type of instrument specification with a focus on spurring adoption by researchers in gerontology and geriatric medicine. PMID:23879716

  5. Curvelet-based compressive sensing for InSAR raw data

    NASA Astrophysics Data System (ADS)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

    The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by the airborne BRADAR (Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. A real-time capability is desirable in this framework, so that the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, a sparse unknown signal can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the original data volume can therefore be significantly reduced. To obtain a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame that allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and hence the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were analyzed in terms of sparsity to verify that the approach provides efficient compression and recovery quality appropriate for InSAR applications, demonstrating the feasibility of applying compressive sensing.
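
    The iterative soft/shrinkage thresholding (IST) recovery step mentioned above can be illustrated with a minimal numpy sketch. This is a generic illustration, not the paper's implementation: it assumes a generic measurement matrix A applied directly to a sparse coefficient vector, and it does not model the curvelet subband structure or any BRADAR-specific parameters.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist_recover(A, y, lam=0.01, n_iter=500):
    """Recover a sparse coefficient vector x from measurements y = A @ x
    using iterative soft-thresholding (IST)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm, keeps iteration stable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term, then shrinkage.
        x = soft_threshold(x + step * A.T @ (y - A @ x), lam * step)
    return x

# Toy demonstration with a synthetic sparse signal.
rng = np.random.default_rng(0)
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = rng.standard_normal(10)
A = rng.standard_normal((96, 256)) / np.sqrt(96)   # random measurement matrix
y = A @ x_true
x_hat = ist_recover(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```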

  6. User-Friendly Interface Developed for a Web-Based Service for SpaceCAL Emulations

    NASA Technical Reports Server (NTRS)

    Liszka, Kathy J.; Holtz, Allen P.

    2004-01-01

    A team at the NASA Glenn Research Center is developing a Space Communications Architecture Laboratory (SpaceCAL) for protocol development activities for coordinated satellite missions. SpaceCAL will provide a multiuser, distributed system to emulate space-based Internet architectures, backbone networks, formation clusters, and constellations. As part of a new effort in 2003, building blocks are being defined for an open distributed system to make the satellite emulation test bed accessible through an Internet connection. The first step in creating a Web-based service to control the emulation remotely is providing a user-friendly interface for encoding the data into a well-formed and complete Extensible Markup Language (XML) document. XML provides coding that allows data to be transferred between dissimilar systems. Scenario specifications include control parameters, network routes, interface bandwidths, delay, and bit error rate. Specifications for all satellite, instruments, and ground stations in a given scenario are also included in the XML document. For the SpaceCAL emulation, the XML document can be created using XForms, a Webbased forms language for data collection. Contrary to older forms technology, the interactive user interface makes the science prevalent, not the data representation. Required versus optional input fields, default values, automatic calculations, data validation, and reuse will help researchers quickly and accurately define missions. XForms can apply any XML schema defined for the test mission to validate data before forwarding it to the emulation facility. New instrument definitions, facilities, and mission types can be added to the existing schema. The first prototype user interface incorporates components for interactive input and form processing. Internet address, data rate, and the location of the facility are implemented with basic form controls with default values provided for convenience and efficiency using basic XForms operations. Because different emulation scenarios will vary widely in their component structure, more complex operations are used to add and delete facilities.

  7. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data

    PubMed Central

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest. PMID:26958859

  8. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data.

    PubMed

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest.

  9. Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between projection matrix and sparsifying matrix. Until now, papers on CS always assume the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design because an ETF has minimum coherence. It is impossible to solve the problem exactly because of the complexity. Therefore, an alternating minimization type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
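
    The quantity being minimized above, the mutual coherence between a projection matrix and a sparsifying matrix, is straightforward to compute. The numpy sketch below shows only that computation; the ETF-based alternating minimization proposed in the paper is not reproduced, and the matrices used are arbitrary examples.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Largest absolute normalized inner product between distinct columns
    of the effective dictionary D = Phi @ Psi."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(D.T @ D)                                # Gram matrix
    np.fill_diagonal(G, 0.0)                           # ignore self-products
    return G.max()

# Example: random Gaussian projection vs. an orthonormal sparsifying basis.
rng = np.random.default_rng(1)
Psi = np.linalg.qr(rng.standard_normal((128, 128)))[0]  # orthonormal sparsifying basis
Phi = rng.standard_normal((32, 128))                     # random projection matrix
print("mutual coherence:", mutual_coherence(Phi, Psi))
```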

  10. Pervious concrete mix optimization for sustainable pavement solution

    NASA Astrophysics Data System (ADS)

    Barišić, Ivana; Galić, Mario; Netinger Grubeša, Ivanka

    2017-10-01

    In order to fulfill requirements of sustainable road construction, new materials for pavement construction are investigated with the main goal to preserve natural resources and achieve energy savings. One of such sustainable pavement material is pervious concrete as a new solution for low volume pavements. To accommodate required strength and porosity as the measure of appropriate drainage capability, four mixtures of pervious concrete are investigated and results of laboratory tests of compressive and flexural strength and porosity are presented. For defining the optimal pervious concrete mixture in a view of aggregate and financial savings, optimization model is utilized and optimal mixtures defined according to required strength and porosity characteristics. Results of laboratory research showed that comparing single-sized aggregate pervious concrete mixtures, coarse aggregate mixture result in increased porosity but reduced strengths. The optimal share of the coarse aggregate turn to be 40.21%, the share of fine aggregate is 49.79% for achieving required compressive strength of 25 MPa, flexural strength of 4.31 MPa and porosity of 21.66%.

  11. Compressed/reconstructed test images for CRAF/Cassini

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.; Onyszchuk, I.; Pollara, F.; Arnold, S.

    1991-01-01

    A set of compressed, then reconstructed, test images submitted to the Comet Rendezvous Asteroid Flyby (CRAF)/Cassini project is presented as part of its evaluation of near lossless high compression algorithms for representing image data. A total of seven test image files were provided by the project. The seven test images were compressed, then reconstructed with high quality (root mean square error of approximately one or two gray levels on an 8 bit gray scale), using discrete cosine transforms or Hadamard transforms and efficient entropy coders. The resulting compression ratios varied from about 2:1 to about 10:1, depending on the activity or randomness in the source image. This was accomplished without any special effort to optimize the quantizer or to introduce special postprocessing to filter the reconstruction errors. A more complete set of measurements, showing the relative performance of the compression algorithms over a wide range of compression ratios and reconstruction errors, shows that additional compression is possible at a small sacrifice in fidelity.
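
    A compact illustration of the transform-plus-quantization approach described above is given below: an 8x8 block discrete cosine transform followed by a uniform quantizer. This is a generic sketch only; the project's actual entropy coders, Hadamard variant, and quantizer tuning are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, q_step):
    """2-D DCT of an image block followed by uniform quantization of coefficients."""
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / q_step).astype(np.int16)

def decompress_block(q_coeffs, q_step):
    """Dequantize and apply the inverse 2-D DCT."""
    return idctn(q_coeffs * q_step, norm="ortho")

rng = np.random.default_rng(5)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # stand-in 8-bit image block
q_step = 8.0                                               # larger step -> more compression, more error
rec = decompress_block(compress_block(block, q_step), q_step)
print("rms error (gray levels):", float(np.sqrt(np.mean((block - rec) ** 2))))
```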

  12. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
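
    The following numpy sketch illustrates block adaptive quantization in the spirit described above: the raw signal is split into blocks, a per-block magnitude (standard deviation) is estimated, and samples are quantized to a few bits relative to that scale. The ENVISAT/ASAR-specific thresholds, bit allocations, and on-board ASIC details are assumptions not modeled here.

```python
import numpy as np

def baq_compress(signal, block_size=128, bits=4):
    """Block Adaptive Quantization sketch: per-block scale + uniform quantizer."""
    n_levels = 2 ** bits
    codes, scales = [], []
    for start in range(0, len(signal), block_size):
        block = signal[start:start + block_size]
        scale = block.std() + 1e-12                    # per-block magnitude estimate
        q = np.round(block / scale * (n_levels / 6.0)) # map roughly +/-3 sigma onto the levels
        q = np.clip(q, -n_levels // 2, n_levels // 2 - 1).astype(np.int8)
        codes.append(q)
        scales.append(scale)
    return codes, scales

def baq_decompress(codes, scales, bits=4):
    n_levels = 2 ** bits
    return np.concatenate([q * s * (6.0 / n_levels) for q, s in zip(codes, scales)])

raw = np.random.default_rng(2).standard_normal(1024) * 50.0   # stand-in for SAR signal samples
codes, scales = baq_compress(raw)
rec = baq_decompress(codes, scales)
print("rms error:", np.sqrt(np.mean((raw - rec) ** 2)))
```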

  13. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed high-ratio image compression based upon a self-organization network and the conventional algorithm for vector quantization are compared. The proposed method is quite efficient and can achieve near-optimal results.

  14. Design optimization of natural laminar flow bodies in compressible flow

    NASA Technical Reports Server (NTRS)

    Dodbele, Simha S.

    1992-01-01

    An optimization method has been developed to design axisymmetric body shapes such as fuselages, nacelles, and external fuel tanks with increased transition Reynolds numbers in subsonic compressible flow. The new design method involves a constraint minimization procedure coupled with analysis of the inviscid and viscous flow regions and linear stability analysis of the compressible boundary-layer. In order to reduce the computer time, Granville's transition criterion is used to predict boundary-layer transition and to calculate the gradients of the objective function, and linear stability theory coupled with the e(exp n)-method is used to calculate the objective function at the end of each design iteration. Use of a method to design an axisymmetric body with extensive natural laminar flow is illustrated through the design of a tiptank of a business jet. For the original tiptank, boundary layer transition is predicted to occur at a transition Reynolds number of 6.04 x 10(exp 6). For the designed body shape, a transition Reynolds number of 7.22 x 10(exp 6) is predicted using compressible linear stability theory coupled with the e(exp n)-method.

  15. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone-mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the inverse-tone-mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.
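
    A toy numpy sketch of the two-layer idea follows: the base layer is a tone-mapped 8-bit version of the HDR frame, and the enhancement layer carries the difference between the original frame and the inverse-tone-mapped base layer. The simple Reinhard-style operator, the color space, and the entropy-coding stages (AVC/HEVC) used by the actual codec are not modeled; this is only an assumed illustration of the layering.

```python
import numpy as np

def tone_map(hdr, exposure=1.0):
    """Simple global tone-mapping operator to an 8-bit base layer."""
    ldr = exposure * hdr / (1.0 + exposure * hdr)
    return np.round(ldr * 255.0).astype(np.uint8)

def inverse_tone_map(base, exposure=1.0):
    """Invert the tone map to predict the HDR content from the base layer."""
    ldr = base.astype(np.float64) / 255.0
    ldr = np.clip(ldr, 0.0, 0.999)            # avoid division by zero at white
    return ldr / (exposure * (1.0 - ldr))

hdr_frame = np.random.default_rng(4).gamma(2.0, 0.5, size=(4, 4))   # stand-in HDR luminance
base_layer = tone_map(hdr_frame)                        # backwards-compatible 8-bit stream
enhancement = hdr_frame - inverse_tone_map(base_layer)  # residual carried in the second layer
print("max |residual|:", float(np.abs(enhancement).max()))
```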

  16. An XML-Based Protocol for Distributed Event Services

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A recent trend in distributed computing is the construction of high-performance distributed systems called computational grids. One difficulty we have encountered is that there is no standard format for the representation of performance information and no standard protocol for transmitting this information. This limits the types of performance analysis that can be undertaken in complex distributed systems. To address this problem, we present an XML-based protocol for transmitting performance events in distributed systems and evaluate the performance of this protocol.
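
    As an illustration of what an XML-encoded performance event might look like, here is a short Python sketch using only the standard library. The element and attribute names are hypothetical placeholders, not the ones defined by the protocol in the paper.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def make_perf_event(source, name, value, units):
    """Build a hypothetical performance-event XML element (names are illustrative)."""
    event = ET.Element("perfEvent", attrib={
        "source": source,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    ET.SubElement(event, "metric", attrib={"name": name, "units": units}).text = str(value)
    return event

event = make_perf_event("node17.grid.example", "bandwidth", 94.3, "Mbit/s")
print(ET.tostring(event, encoding="unicode"))
```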

  17. Research on Heterogeneous Data Exchange based on XML

    NASA Astrophysics Data System (ADS)

    Li, Huanqin; Liu, Jinfeng

    Integration of multiple data sources is becoming increasingly important for enterprises that cooperate closely with their partners for e-commerce. OLAP enables analysts and decision makers fast access to various materialized views from data warehouses. However, many corporations have internal business applications deployed on different platforms. This paper introduces a model for heterogeneous data exchange based on XML. The system can exchange and share the data among the different sources. The method used to realize the heterogeneous data exchange is given in this paper.

  18. Tactical Web Services: Using XML and Java Web Services to Conduct Real-Time Net-Centric Sonar Visualization

    DTIC Science & Technology

    2004-09-01

    Rosetti USN U.S. Navy Chesterton, IN 6. Erik Chaum NUWC Newport, RI 7. David Bellino NPRI Newport, RI 8. Dick Nadolink NUWC Newport, RI...found at (http://www.parallelgraphics.com/products/cortona). G. JFREECHART JFreeChart is an open source Java API created by David Gilbert and...www.xj3d.org/. Accessed 3 September 2004. Hunter, David , Kurt Cagle, and Chris Dix, eds. Beginning XML, Second Edition. Indianapolis, IN

  19. Toxics Release Inventory Chemical Hazard Information Profiles (TRI-CHIP) Dataset

    EPA Pesticide Factsheets

    The Toxics Release Inventory (TRI) Chemical Hazard Information Profiles (TRI-CHIP) dataset contains hazard information about the chemicals reported in TRI. Users can use this XML-format dataset to create their own databases and hazard analyses of TRI chemicals. The hazard information is compiled from a series of authoritative sources including the Integrated Risk Information System (IRIS). The dataset is provided as a downloadable .zip file that when extracted provides XML files and schemas for the hazard information tables.
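
    A minimal Python sketch of the "create your own database" step is shown below: it loads one of the extracted XML files into a local SQLite table. The file name and the element and attribute names ("chemical", "casNumber", "hazard") are placeholders, not the actual TRI-CHIP schema, which should be taken from the distributed XML schemas.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical names throughout; consult the TRI-CHIP schemas for the real structure.
conn = sqlite3.connect("tri_chip.db")
conn.execute("CREATE TABLE IF NOT EXISTS hazards (cas TEXT, chemical TEXT, hazard TEXT)")

tree = ET.parse("tri_chip_hazards.xml")        # one of the extracted XML files
for chem in tree.getroot().iter("chemical"):
    cas = chem.get("casNumber")
    name = chem.findtext("name", default="")
    for hz in chem.iter("hazard"):
        conn.execute("INSERT INTO hazards VALUES (?, ?, ?)", (cas, name, hz.text))
conn.commit()
```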

  20. 2035 Air Dominance Requirements for State-On-State Conflict

    DTIC Science & Technology

    2011-02-16

    story.jsp?id=news/ awst /2011/01/10/AW_01_10_ 2011_p58-280833.xml&channel=misc (accessed 3 Jan 2011). Bilbro, James. “Technology Readiness Levels.” JB...channel = awst &id =news/aw121508p2.xml&headline=null&prev=10. (accessed 3 January 2011). Global Security.org. “Miniature Air-Launched Decoy (MALD...Battle.” Center for Strategic and Budgetary Assessment, http://www.csba online.com/4Publications/PubLibrary/ R .20100518.Slides_AirSea_Batt/ R .20100518

  1. Generating GraphML XML Files for Graph Visualization of Architectures and Event Traces for the Monterey Phoenix Program

    DTIC Science & Technology

    2012-09-01

    Thesis Advisor: Mikhail Auguston; Second Reader: Terry Norbraten. ...Language (GraphML). MPGrapher compiles well-formed XML files that conform to the yEd GraphML schema. These files will be opened and analyzed using...

  2. Finite Element Simulation of Compression Molding of Woven Fabric Carbon Fiber/Epoxy Composites: Part I Material Model Development

    DOE PAGES

    Li, Yang; Zhao, Qiangsheng; Mirdamadi, Mansour; ...

    2016-01-06

    Woven fabric carbon fiber/epoxy composites made through compression molding are one of the promising choices of material for the vehicle light-weighting strategy. Previous studies have shown that the processing conditions can have substantial influence on the performance of this type of material. Therefore the optimization of the compression molding process is of great importance to the manufacturing practice. An efficient way to achieve the optimized design of this process would be through conducting finite element (FE) simulations of compression molding for woven fabric carbon fiber/epoxy composites. However, performing such simulations remains a challenging task for FE as multiple types of physics are involved during the compression molding process, including the epoxy resin curing and the complex mechanical behavior of the woven fabric structure. In the present study, the FE simulation of the compression molding process of resin-based woven fabric composites at continuum level is conducted, which is enabled by the implementation of an integrated material modeling methodology in LS-Dyna. Specifically, the chemo-thermo-mechanical problem of compression molding is solved through the coupling of three material models, i.e., one thermal model for temperature history in the resin, one mechanical model to update the curing-dependent properties of the resin and another mechanical model to simulate the behavior of the woven fabric composites. Preliminary simulations of the carbon fiber/epoxy woven fabric composites in LS-Dyna are presented as a demonstration, while validations and models with real part geometry are planned in the future work.

  3. A new simultaneous compression and encryption method for images suitable to recognize form by optical correlation

    NASA Astrophysics Data System (ADS)

    Alfalou, Ayman; Elbouz, Marwa; Jridi, Maher; Loussert, Alain

    2009-09-01

    In some form-recognition applications that require multiple images (facial identification or sign language), many images must be transmitted or stored. This requires communication systems with a good security level (encryption) and an acceptable transmission rate (compression rate). Several encryption and compression techniques can be found in the literature. However, when optical correlation is to be used, encryption and compression techniques cannot be deployed independently and in a cascaded manner; otherwise the system suffers from two major problems. First, these techniques cannot simply be cascaded without considering the impact of one technique on the other. Second, a standard compression can affect the correlation decision, because the correlation is sensitive to the loss of information. To solve both problems, we developed a new technique to simultaneously compress and encrypt multiple images using an optimized BPOF filter. The main idea of our approach consists in multiplexing the spectra of different images transformed by a Discrete Cosine Transform (DCT). To this end, the spectral plane is divided into several areas, each of which corresponds to the spectrum of one image. Encryption is achieved using the multiplexing, specific rotation functions, biometric encryption keys and random phase keys. A random phase key is widely used in optical encryption approaches. Finally, many simulations have been conducted, and the results obtained corroborate the good performance of our approach. We should also mention that the recording of the multiplexed and encrypted spectra is optimized using an adapted quantization technique to improve the overall compression rate.

  4. Study of on-board compression of earth resources data

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1975-01-01

    The current literature on image bandwidth compression was surveyed and those methods relevant to compression of multispectral imagery were selected. Typical satellite multispectral data was then analyzed statistically and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail by all of the above criteria. Finally, merits and deficiencies were summarized and a number of recommendations for future NASA activities in data compression proposed.

  5. Optimization of calcium carbonate content on synthesis of aluminum foam and its compressive strength characteristic

    NASA Astrophysics Data System (ADS)

    Sutarno, Nugraha, Bagja; Kusharjanto

    2017-01-01

    One of the most important characteristics of aluminum foam is compressive strength, which is reflected by its impact energy and Young's modulus. In the present research, the calcium carbonate (CaCO3) content in synthesized aluminum foam was optimized in order to obtain the highest compressive strength. The results of this study will be used to determine the CaCO3 content as a synthesis process parameter for pilot-plant-scale production of aluminum foam. The experiment was performed by varying the concentration of calcium carbonate, which was used as the foaming agent, at constant alumina concentration (1.5 wt%), added as a stabilizer, and constant temperature (725°C). It was found that 4 wt% CaCO3 gave the lowest relative density (0.15), the highest porosity (85.29%), and a compressive strength as high as 0.26 MPa. The pore morphology of the aluminum foam obtained under these conditions was as follows: the average pore diameter was 4.42 mm, the minimum pore wall thickness was 83.24 µm, and the pore roundness was 0.91. Based on the fractal porosity, the compressive strength was inversely related to the porosity, following a power law with an exponent of 2.91.

  6. Pharmacokinetics of colon-specific pH and time-dependent flurbiprofen tablets.

    PubMed

    Vemula, Sateesh Kumar; Veerareddy, Prabhakar Reddy; Devadasu, Venkat Ratnam

    2015-09-01

    The present research deals with the development of compression-coated flurbiprofen colon-targeted tablets that retard the drug release in the upper gastrointestinal system but progressively release the drug in the colon. Flurbiprofen core tablets were prepared by the direct compression method and were compression coated using sodium alginate and Eudragit S100. The formulation was optimized based on the in vitro drug release study and further evaluated by X-ray imaging and pharmacokinetic studies in healthy humans for colonic delivery. The optimized formulation showed negligible drug release (4.33 ± 0.06%) in the initial lag period followed by progressive release (100.78 ± 0.64%) over 24 h. X-ray imaging in human volunteers showed that the tablets reached the colon without disintegrating in the upper gastrointestinal tract. The Cmax of the colon-targeted tablets was 12,374.67 ng/ml at Tmax 10 h, whereas in the case of the immediate-release tablets the Cmax was 15,677.52 ng/ml at Tmax 3 h, which signifies the ability of the compression-coated tablets to target the colon. Development of compression-coated tablets using a combination of time-dependent and pH-sensitive approaches was suitable for targeting flurbiprofen to the colon.

  7. Collaborative Science Using Web Services and the SciFlo Grid Dataflow Engine

    NASA Astrophysics Data System (ADS)

    Wilson, B. D.; Manipon, G.; Xing, Z.; Yunck, T.

    2006-12-01

    The General Earth Science Investigation Suite (GENESIS) project is a NASA-sponsored partnership between the Jet Propulsion Laboratory, academia, and NASA data centers to develop a new suite of Web Services tools to facilitate multi-sensor investigations in Earth System Science. The goal of GENESIS is to enable large-scale, multi-instrument atmospheric science using combined datasets from the AIRS, MODIS, MISR, and GPS sensors. Investigations include cross-comparison of spaceborne climate sensors, cloud spectral analysis, study of upper troposphere-stratosphere water transport, study of the aerosol indirect cloud effect, and global climate model validation. The challenges are to bring together very large datasets, reformat and understand the individual instrument retrievals, co-register or re-grid the retrieved physical parameters, perform computationally-intensive data fusion and data mining operations, and accumulate complex statistics over months to years of data. To meet these challenges, we have developed a Grid computing and dataflow framework, named SciFlo, in which we are deploying a set of versatile and reusable operators for data access, subsetting, registration, mining, fusion, compression, and advanced statistical analysis. SciFlo leverages remote Web Services, called via Simple Object Access Protocol (SOAP) or REST (one-line) URLs, and the Grid Computing standards (WS-* & Globus Alliance toolkits), and enables scientists to do multi-instrument Earth Science by assembling reusable Web Services and native executables into a distributed computing flow (tree of operators). The SciFlo client & server engines optimize the execution of such distributed data flows and allow the user to transparently find and use datasets and operators without worrying about the actual location of the Grid resources. In particular, SciFlo exploits the wealth of datasets accessible by OpenGIS Consortium (OGC) Web Mapping Servers & Web Coverage Servers (WMS/WCS), and by Open Data Access Protocol (OpenDAP) servers. The scientist injects a distributed computation into the Grid by simply filling out an HTML form or directly authoring the underlying XML dataflow document, and results are returned directly to the scientist's desktop. Once an analysis has been specified for a chunk or day of data, it can be easily repeated with different control parameters or over months of data. Recently, the Earth Science Information Partners (ESIP) Federation sponsored a collaborative activity in which several ESIP members advertised their respective WMS/WCS and SOAP services, developed some collaborative science scenarios for atmospheric and aerosol science, and then choreographed services from multiple groups into demonstration workflows using the SciFlo engine and a Business Process Execution Language (BPEL) workflow engine. For several scenarios, the same collaborative workflow was executed in three ways: using hand-coded scripts, by executing a SciFlo document, and by executing a BPEL workflow document. We will discuss the lessons learned from this activity, the need for standardized interfaces (like WMS/WCS), the difficulty in agreeing on even simple XML formats and interfaces, and further collaborations that are being pursued.

  8. Optimal Compression Methods for Floating-point Format Images

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; White, R. L.; Seaman, R.

    2009-01-01

    We report on the results of a comparison study of different techniques for compressing FITS images that have floating-point (real*4) pixel values. Standard file compression methods like GZIP are generally ineffective in this case (with compression ratios only in the range 1.2 - 1.6), so instead we use a technique of converting the floating-point values into quantized scaled integers which are compressed using the Rice algorithm. The compressed data stream is stored in FITS format using the tiled-image compression convention. This is technically a lossy compression method, since the pixel values are not exactly reproduced; however, all the significant photometric and astrometric information content of the image can be preserved while still achieving file compression ratios in the range of 4 to 8. We also show that introducing dithering, or randomization, when assigning the quantized pixel values can significantly improve the photometric and astrometric precision in the stellar images in the compressed file without adding additional noise. We quantify our results by comparing the stellar magnitudes and positions as measured in the original uncompressed image to those derived from the same image after applying successively greater amounts of compression.
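
    The quantization-with-dithering idea can be sketched in a few lines of numpy: floating-point pixels are converted to scaled integers, with a uniform random offset (subtractive dither) added before rounding so that quantization errors average to zero rather than correlating with the signal. This is only an assumed illustration; the actual FITS tiled-image convention, the choice of quantization step relative to the noise, and the Rice coding stage are not shown.

```python
import numpy as np

def quantize_with_dither(pixels, q_step, seed=0):
    """Convert float pixels to scaled integers using subtractive dithering."""
    rng = np.random.default_rng(seed)                    # reproducible dither field
    dither = rng.uniform(-0.5, 0.5, size=pixels.shape)
    ints = np.round(pixels / q_step + dither).astype(np.int32)
    return ints, dither

def dequantize(ints, dither, q_step):
    """Undo the scaling, subtracting the same dither field used at encode time."""
    return (ints - dither) * q_step

img = np.random.default_rng(3).normal(1000.0, 5.0, size=(64, 64)).astype(np.float32)
q = 0.5                                    # quantization step, e.g. a fraction of the noise sigma
ints, dither = quantize_with_dither(img, q)
rec = dequantize(ints, dither, q)
print("rms quantization error:", float(np.sqrt(np.mean((img - rec) ** 2))))
```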

  9. Real-time compression of raw computed tomography data: technology, architecture, and benefits

    NASA Astrophysics Data System (ADS)

    Wegener, Albert; Chandra, Naveen; Ling, Yi; Senzig, Robert; Herfkens, Robert

    2009-02-01

    Compression of computed tomography (CT) projection samples reduces slip ring and disk drive costs. A low-complexity, CT-optimized compression algorithm called Prism CT™ achieves at least 1.59:1 and up to 2.75:1 lossless compression on twenty-six CT projection data sets. We compare the lossless compression performance of Prism CT to alternative lossless coders, including Lempel-Ziv, Golomb-Rice, and Huffman coders, using representative CT data sets. Prism CT provides the best mean lossless compression ratio of 1.95:1 on the representative data set. Prism CT compression can be integrated into existing slip rings using a single FPGA. Prism CT decompression operates at 100 Msamp/sec using one core of a dual-core Xeon CPU. We describe a methodology to evaluate the effects of lossy compression on image quality to achieve even higher compression ratios. We conclude that lossless compression of raw CT signals provides significant cost savings and performance improvements for slip rings and disk drive subsystems in all CT machines. Lossy compression should be considered in future CT data acquisition subsystems because it provides even more system benefits above lossless compression while achieving transparent diagnostic image quality. This result is demonstrated on a limited dataset using appropriately selected compression ratios and an experienced radiologist.
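
    Prism CT itself is proprietary, but one of the baseline coders it is compared against, Golomb-Rice, is simple to sketch: each non-negative residual is split into a quotient, sent in unary, and a k-bit remainder. The snippet below is a generic illustration and makes no claim about how the paper's evaluation was implemented.

```python
def rice_encode(values, k):
    """Golomb-Rice encode non-negative integers with parameter k (divisor 2**k).
    Returns a string of '0'/'1' characters for illustration."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0")              # quotient in unary, '0' terminator
        bits.append(format(r, "b").zfill(k))    # remainder in exactly k bits
    return "".join(bits)

# Small residuals (as left by a good predictor) compress well with a small k.
samples = [3, 0, 7, 1, 2, 12, 0, 5]
code = rice_encode(samples, k=2)
print(code, len(code), "bits vs", 16 * len(samples), "bits raw")
```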

  10. Synthetic optimization of air turbine for dental handpieces.

    PubMed

    Shi, Z Y; Dong, T

    2014-01-01

    A synthetic optimization of the Pelton air turbine in dental handpieces, considering power output, compressed air consumption and rotation speed at the same time, is implemented by employing a standard design procedure and variable limits taken from practical dentistry. The Pareto optimal solution sets acquired by using the Normalized Normal Constraint method are mainly comprised of two piecewise continuous parts. On the Pareto frontier, the supply air stagnation pressure stalls at the lower boundary of the design space, the rotation speed is a constant value within the range recommended in the literature, and the blade tip clearance is insensitive to the objectives, while the nozzle radius increases with power output and mass flow rate of compressed air; the remaining geometric dimensions show the opposite trend to the nozzle radius within their respective "pieces".

  11. Design, Optimization, and Evaluation of Al-2139 Compression Panel with Integral T-Stiffeners

    NASA Technical Reports Server (NTRS)

    Mulani, Sameer B.; Havens, David; Norris, Ashley; Bird, R. Keith; Kapania, Rakesh K.; Olliffe, Robert

    2012-01-01

    A T-stiffened panel was designed and optimized for minimum mass subjected to constraints on buckling load, yielding, and crippling or local stiffener failure using a new analysis and design tool named EBF3PanelOpt. The panel was designed for a compression loading configuration, a realistic load case for a typical aircraft skin-stiffened panel. The panel was integrally machined from 2139 aluminum alloy plate and was tested in compression. The panel was loaded beyond buckling and strains and out-of-plane displacements were extracted from 36 strain gages and one linear variable displacement transducer. A digital photogrammetric system was used to obtain full field displacements and strains on the smooth (unstiffened) side of the panel. The experimental data were compared with the strains and out-of-plane deflections from a high-fidelity nonlinear finite element analysis.

  12. Evaluation of mucoadhesive potential of gum cordia, an anionic polysaccharide.

    PubMed

    Ahuja, Munish; Kumar, Suresh; Kumar, Ashok

    2013-04-01

    The study involves mucoadhesive evaluation by formulating buccal discs using fluconazole as the model drug. The effects of compression pressure and gum cordia/lactose ratio on the ex vivo bioadhesion time and in vitro release of fluconazole were optimized using a central composite experimental design. It was observed that the ex vivo bioadhesion time was affected significantly by the proportion of gum cordia in the buccal discs, while the in vitro release of fluconazole from the buccal discs was influenced significantly by the compression pressure. The optimized batch of buccal discs comprised a gum cordia/lactose ratio of 0.66 and 20 mg of fluconazole, and was compressed at a pressure of 6600 kg. It provided an ex vivo bioadhesion time of 22 h and an in vitro release of 80% in 24 h. In conclusion, gum cordia is a promising buccoadhesive polymer.

  13. A novel data hiding scheme for block truncation coding compressed images using dynamic programming strategy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Chun; Liu, Yanjun; Nguyen, Son T.

    2015-03-01

    Data hiding is a technique that embeds information into digital cover data. This technique has mostly been applied in the uncompressed spatial domain, and it is considered more challenging to perform in the compressed domain, i.e., vector quantization, JPEG, and block truncation coding (BTC). In this paper, we propose a new data hiding scheme for BTC-compressed images. In the proposed scheme, a dynamic programming strategy was used to search for the optimal solution of the bijective mapping function for LSB substitution. Then, according to the optimal solution, each mean value embeds three secret bits to obtain high hiding capacity with low distortion. The experimental results indicated that the proposed scheme obtained both higher hiding capacity and hiding efficiency than four other existing schemes, while ensuring good visual quality of the stego-image. In addition, the proposed scheme achieved a bit rate as low as that of the original BTC algorithm.
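
    The dynamic-programming search for the optimal bijective mapping is the paper's contribution and is not reproduced here; the sketch below only illustrates the underlying idea of hiding three secret bits in the least-significant bits of each BTC block mean value, using plain LSB substitution as a stand-in.

```python
def embed_bits(mean_values, secret_bits, n_bits=3):
    """Embed n_bits of the secret bit stream into the LSBs of each BTC mean value (0-255).
    Plain LSB substitution; the paper instead applies an optimal bijective mapping
    found by dynamic programming to reduce distortion."""
    stego, i = [], 0
    for m in mean_values:
        chunk = secret_bits[i:i + n_bits].ljust(n_bits, "0")      # pad the last chunk if needed
        stego.append((m & ~((1 << n_bits) - 1)) | int(chunk, 2))  # overwrite the low bits
        i += n_bits
    return stego

means = [121, 87, 200, 34]                  # example BTC block mean values
print(embed_bits(means, "101110010011"))
```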

  14. An Extended Petri-Net Based Approach for Supply Chain Process Enactment in Resource-Centric Web Service Environment

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Xiaoyu; Cai, Hongming; Xu, Boyi

    Enacting a supply-chain process involves various partners and different IT systems. REST receives increasing attention for distributed systems with loosely coupled resources. Nevertheless, resource model incompatibilities and conflicts prevent effective process modeling and deployment in a resource-centric Web service environment. In this paper, a Petri-net based framework for supply-chain process integration is proposed. A resource meta-model is constructed to represent the basic information of resources. Then, based on the resource meta-model, XML schemas and documents are derived, which represent resources and their states in the Petri-net. Thereafter, XML-net, a high-level Petri-net, is employed for modeling the control and data flow of the process. From the process model in XML-net, RESTful services and choreography descriptions are deduced. A unified resource representation and RESTful service descriptions are thus proposed for more effective cross-system integration. A case study is given to illustrate the approach, and its desirable features are discussed.

  15. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    PubMed

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    Mechanism models for primary organic reactions are developed, encoding the structural fragments undergoing substitution, addition, elimination, and rearrangement. In the proposed models, every structural component of the mechanistic pathways is represented with a flexible, fragment-based markup technique in XML syntax. A significant feature of the system is the encoding of electron movements along with the other components, such as charges, partial charges, half-bonded species, lone pair electrons, free radicals and reaction arrows, needed for a complete representation of the reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting it into SVG documents, enabling the layouts conventionally perceived by chemists. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  16. Multipurpose Controller with EPICS integration and data logging: BPM application for ESS Bilbao

    NASA Astrophysics Data System (ADS)

    Arredondo, I.; del Campo, M.; Echevarria, P.; Jugo, J.; Etxebarria, V.

    2013-10-01

    This work presents a multipurpose configurable control system which can be integrated into an EPICS control network, this functionality being configured through an XML configuration file. The core of the system is the so-called Hardware Controller, which is in charge of the control hardware management, the setup and communication with the EPICS network, and the data storage. The reconfigurable nature of the controller is based on a single XML file, allowing any final user to easily modify and adjust the control system to any specific requirement. The selected Java development environment ensures multiplatform operation and large versatility, even regarding the control hardware to be controlled. Specifically, this paper, focused on fast control based on a high-performance FPGA, also describes an application approach for ESS Bilbao's Beam Position Monitoring system. The implementation of the XML configuration file and the satisfactory performance achieved are presented, as well as a general description of the Multipurpose Controller itself.

  17. Towards health care process description framework: an XML DTD design.

    PubMed Central

    Staccini, P.; Joubert, M.; Quaranta, J. F.; Aymard, S.; Fieschi, D.; Fieschi, M.

    2001-01-01

    The development of health care and hospital information systems has to meet users' needs as well as requirements such as the tracking of all care activities and the support of quality improvement. The use of process-oriented analysis is of value to provide analysts with: (i) a systematic description of activities; (ii) the elicitation of the data useful to perform and record care tasks; (iii) the selection of relevant decision-making support. But paper-based tools are not a very suitable way to manage and share the documentation produced during this step. The purpose of this work is to propose a method to implement the results of process analysis using XML techniques (eXtensible Markup Language). It is based on the IDEF0 activity modeling language (Integration DEfinition for Function modeling). A hierarchical description of a process and its components has been defined through a flat XML file with a grammar of proper metadata tags. Perspectives of this method are discussed. PMID:11825265

  18. Progress in energy generation for Canadian remote sites

    NASA Astrophysics Data System (ADS)

    Saad, Y.; Younes, R.; Abboudi, S.; Ilinca, A.; Nohra, C.

    2016-07-01

    Many remote areas around the world are isolated, for various reasons, from electricity networks. They are usually supplied with electricity through diesel generators. The cost of operation and transportation of diesel fuel, in addition to its price, has led to the search for a more efficient and environmentally greener method of supply. Various studies have shown that a wind-diesel hybrid system with compressed air storage (WDCAS) seems to be one of the best solutions, and presents itself as an optimal configuration for the electrification of isolated sites. This system allows significant fuel savings because the stored compressed air is used to supercharge the engine. In order to optimize system performance and minimize fuel consumption, installation of a system for recovering and storing the heat of compression (TES) seems necessary. In addition, hydro-pneumatic energy storage systems that use the same machine as both hydraulic pump and turbine allow energy to be stored in tight spaces and, if possible, contribute to power generation. The careful study of this technical approach will be the focus of our research, which will validate (or not) the use of such a system for the regulation of the frequency of electrical networks. In this article we review the main research that has recently examined the wind-diesel hybrid system, addressing topics such as adiabatic compression and hydro-pneumatic storage. We then propose (based on existing studies) a new ACP-WDCAS (wind-diesel hybrid system with adiabatic air compression and storage at constant pressure), which combines these three concepts in one system for the optimization of the wind-diesel hybrid system.

  19. Single Layer Extended Release Two-in-One Guaifenesin Matrix Tablet: Formulation Method, Optimization, Release Kinetics Evaluation and Its Comparison with Mucinex® Using Box-Behnken Design.

    PubMed

    Morovati, Amirhosein; Ghaffari, Alireza; Erfani Jabarian, Lale; Mehramizi, Ali

    2017-01-01

    Guaifenesin, a highly water-soluble active (50 mg/mL), is classified as a BCS class I drug. Owing to its poor flowability and compressibility, formulating tablets, especially high-dose ones, may be a challenge, and direct compression may not be feasible. Bilayer tablet technology, applied to Mucinex®, faces challenges in delivering a robust formulation. To overcome the challenges involved in bilayer-tablet manufacturing and powder compressibility, an optimized single-layer tablet prepared from a binary mixture (two-in-one), mimicking the dual drug-release character of Mucinex®, was proposed. A 3-factor, 3-level Box-Behnken design was applied to optimize seven dependent variables (release "%" at 1, 2, 4, 6, 8, 10 and 12 h) with respect to different levels of the independent ones (X1: cetyl alcohol, X2: Starch 1500®, X3: HPMC K100M amounts). Two granule portions were prepared using melt and wet granulation and blended together prior to compression. An optimum formulation was obtained (X1: 37.10, X2: 2, X3: 42.49 mg). The desirability function was 0.616. The f2 and f1 values between the release profiles of Mucinex® and the optimum formulation were 74 and 3, respectively. An n-value of about 0.5 for both the optimum and Mucinex® formulations indicated a diffusion-controlled (Fickian) mechanism. However, a rise of HPMC K100M to 70 mg accompanied by a rise of cetyl alcohol to 60 mg led to first-order kinetics (n = 0.6962). The K values of 1.56 represented identical burst drug release. Cetyl alcohol and Starch 1500® modulated guaifenesin release from HPMC K100M matrices while, owing to their binding properties, also improving its poor flowability and compressibility.

  20. Single Layer Extended Release Two-in-One Guaifenesin Matrix Tablet: Formulation Method, Optimization, Release Kinetics Evaluation and Its Comparison with Mucinex® Using Box-Behnken Design

    PubMed Central

    Morovati, Amirhosein; Ghaffari, Alireza; Erfani Jabarian, Lale; Mehramizi, Ali

    2017-01-01

    Guaifenesin, a highly water-soluble active (50 mg/mL), is classified as a BCS class I drug. Owing to its poor flowability and compressibility, formulating tablets, especially high-dose ones, may be a challenge, and direct compression may not be feasible. Bilayer tablet technology, applied to Mucinex®, faces challenges in delivering a robust formulation. To overcome the challenges involved in bilayer-tablet manufacturing and powder compressibility, an optimized single-layer tablet prepared from a binary mixture (two-in-one), mimicking the dual drug-release character of Mucinex®, was proposed. A 3-factor, 3-level Box-Behnken design was applied to optimize seven dependent variables (release “%” at 1, 2, 4, 6, 8, 10 and 12 h) with respect to different levels of the independent ones (X1: cetyl alcohol, X2: Starch 1500®, X3: HPMC K100M amounts). Two granule portions were prepared using melt and wet granulation and blended together prior to compression. An optimum formulation was obtained (X1: 37.10, X2: 2, X3: 42.49 mg). The desirability function was 0.616. The f2 and f1 values between the release profiles of Mucinex® and the optimum formulation were 74 and 3, respectively. An n-value of about 0.5 for both the optimum and Mucinex® formulations indicated a diffusion-controlled (Fickian) mechanism. However, a rise of HPMC K100M to 70 mg accompanied by a rise of cetyl alcohol to 60 mg led to first-order kinetics (n = 0.6962). The K values of 1.56 represented identical burst drug release. Cetyl alcohol and Starch 1500® modulated guaifenesin release from HPMC K100M matrices while, owing to their binding properties, also improving its poor flowability and compressibility. PMID:29552045

  1. Imaging industry expectations for compressed sensing in MRI

    NASA Astrophysics Data System (ADS)

    King, Kevin F.; Kanwischer, Adriana; Peters, Rob

    2015-09-01

    Compressed sensing requires compressible data, incoherent acquisition and a nonlinear reconstruction algorithm to force creation of a compressible image consistent with the acquired data. MRI images are compressible using various transforms (commonly total variation or wavelets). Incoherent acquisition of MRI data by appropriate selection of pseudo-random or non-Cartesian locations in k-space is straightforward. Increasingly, commercial scanners are sold with enough computing power to enable iterative reconstruction in reasonable times. Therefore integration of compressed sensing into commercial MRI products and clinical practice is beginning. MRI frequently requires the tradeoff of spatial resolution, temporal resolution and volume of spatial coverage to obtain reasonable scan times. Compressed sensing improves scan efficiency and reduces the need for this tradeoff. Benefits to the user will include shorter scans, greater patient comfort, better image quality, more contrast types per patient slot, the enabling of previously impractical applications, and higher throughput. Challenges to vendors include deciding which applications to prioritize, guaranteeing diagnostic image quality, maintaining acceptable usability and workflow, and acquisition and reconstruction algorithm details. Application choice depends on which customer needs the vendor wants to address. The changing healthcare environment is putting cost and productivity pressure on healthcare providers. The improved scan efficiency of compressed sensing can help alleviate some of this pressure. Image quality is strongly influenced by image compressibility and acceleration factor, which must be appropriately limited. Usability and workflow concerns include reconstruction time and user interface friendliness and response. Reconstruction times are limited to about one minute for acceptable workflow. The user interface should be designed to optimize workflow and minimize additional customer training. Algorithm concerns include the decision of which algorithms to implement as well as the problem of optimal setting of adjustable parameters. It will take imaging vendors several years to work through these challenges and provide solutions for a wide range of applications.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Yang; Zhao, Qiangsheng; Mirdamadi, Mansour

    Woven fabric carbon fiber/epoxy composites made through compression molding are one of the promising choices of material for the vehicle light-weighting strategy. Previous studies have shown that the processing conditions can have substantial influence on the performance of this type of the material. Therefore the optimization of the compression molding process is of great importance to the manufacturing practice. An efficient way to achieve the optimized design of this process would be through conducting finite element (FE) simulations of compression molding for woven fabric carbon fiber/epoxy composites. However, performing such simulation remains a challenging task for FE as multiple types of physics are involved during the compression molding process, including the epoxy resin curing and the complex mechanical behavior of woven fabric structure. In the present study, the FE simulation of the compression molding process of resin based woven fabric composites at continuum level is conducted, which is enabled by the implementation of an integrated material modeling methodology in LS-Dyna. Specifically, the chemo-thermo-mechanical problem of compression molding is solved through the coupling of three material models, i.e., one thermal model for temperature history in the resin, one mechanical model to update the curing-dependent properties of the resin and another mechanical model to simulate the behavior of the woven fabric composites. Preliminary simulations of the carbon fiber/epoxy woven fabric composites in LS-Dyna are presented as a demonstration, while validations and models with real part geometry are planned in the future work.

  3. Electrostatic vibration energy harvester with 2.4-GHz Cockcroft-Walton rectenna start-up

    NASA Astrophysics Data System (ADS)

    Takhedmit, Hakim; Saddi, Zied; Karami, Armine; Basset, Philippe; Cirio, Laurent

    2017-02-01

    In this paper, we propose the design, fabrication and experiments of a macro-scale electrostatic vibration energy harvester (e-VEH), pre-charged wirelessly for the first time with a 2.4-GHz Cockcroft-Walton rectenna. The rectenna is designed and optimized to operate at low power densities and provide high voltage levels: 0.5 V at 0.76 μW/cm2 and 1 V at 1.53 μW/cm2. The e-VEH uses a Bennet doubler as a conditioning circuit. Experiments show a 23-V voltage across the transducer terminal, when the harvester is excited at 25 Hz by 1.5 g of external acceleration. An accumulated energy of 275 μJ and a maximum available power of 0.4 μW are achieved.

  4. FastSim: A Fast Simulation for the SuperB Detector

    NASA Astrophysics Data System (ADS)

    Andreassen, R.; Arnaud, N.; Brown, D. N.; Burmistrov, L.; Carlson, J.; Cheng, C.-h.; Di Simone, A.; Gaponenko, I.; Manoni, E.; Perez, A.; Rama, M.; Roberts, D.; Rotondo, M.; Simi, G.; Sokoloff, M.; Suzuki, A.; Walsh, J.

    2011-12-01

    We have developed a parameterized (fast) simulation for detector optimization and physics reach studies of the proposed SuperB Flavor Factory in Italy. Detector components are modeled as thin sections of planes, cylinders, disks or cones. Particle-material interactions are modeled using simplified cross-sections and formulas. Active detectors are modeled using parameterized response functions. Geometry and response parameters are configured using xml files with a custom-designed schema. Reconstruction algorithms adapted from BaBar are used to build tracks and clusters. Multiple sources of background signals can be merged with primary signals. Pattern recognition errors are modeled statistically by randomly misassigning nearby tracking hits. Standard BaBar analysis tuples are used as an event output. Hadronic B meson pair events can be simulated at roughly 10Hz.

  5. Optimizing Libraries’ Content Findability Using Simple Object Access Protocol (SOAP) With Multi-Tier Architecture

    NASA Astrophysics Data System (ADS)

    Lahinta, A.; Haris, I.; Abdillah, T.

    2017-03-01

    The aim of this paper is to describe a developed application of Simple Object Access Protocol (SOAP) as a model for improving libraries’ digital content findability on the library web. The study applies XML text-based protocol tools in the collection of data about libraries’ visibility performance in the search results of the book. Models from the integrated Web Services Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI) are applied to analyse SOAP as an element within the system. The results showed that the developed SOAP application with multi-tier architecture helps people easily access the website on the Gorontalo Province library server and supports access to digital collections, subscription databases, and library catalogs in each Regency or City library in Gorontalo Province.

  6. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application

    PubMed Central

    Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-01-01

    Background The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. Objective The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Methods Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. Results The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. Conclusions This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. PMID:28903894

  7. Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care

    PubMed Central

    Cook, Timothy Wayne; Cavalini, Luciana Tricai

    2016-01-01

    Objectives To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Results Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. Conclusions This study demonstrated the semantic enrichment of HL7v2 messages for software-intensive telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures the compliance of the HL7v2 standard with Semantic Web technologies. PMID:26893947

  8. XML Based Markup Languages for Specific Domains

    NASA Astrophysics Data System (ADS)

    Varde, Aparna; Rundensteiner, Elke; Fahrenholz, Sally

    A challenging area in web based support systems is the study of human activities in connection with the web, especially with reference to certain domains. This includes capturing human reasoning in information retrieval, facilitating the exchange of domain-specific knowledge through a common platform and developing tools for the analysis of data on the web from a domain expert's angle. Among the techniques and standards related to such work, we have XML, the eXtensible Markup Language. This serves as a medium of communication for storing and publishing textual, numeric and other forms of data seamlessly. XML tag sets are such that they preserve semantics and simplify the understanding of stored information by users. Often domain-specific markup languages are designed using XML, with a user-centric perspective. Standardization bodies and research communities may extend these to include additional semantics of areas within and related to the domain. This chapter outlines the issues to be considered in developing domain-specific markup languages: the motivation for development, the semantic considerations, the syntactic constraints and other relevant aspects, especially taking into account human factors. Illustrating examples are provided from domains such as Medicine, Finance and Materials Science. Particular emphasis in these examples is on the Materials Markup Language MatML and the semantics of one of its areas, namely, the Heat Treating of Materials. The focus of this chapter, however, is not the design of one particular language but rather the generic issues concerning the development of domain-specific markup languages.

  9. RGG: A general GUI Framework for R scripts

    PubMed Central

    Visne, Ilhami; Dilaveroglu, Erkan; Vierlinger, Klemens; Lauss, Martin; Yildiz, Ahmet; Weinhaeusel, Andreas; Noehammer, Christa; Leisch, Friedrich; Kriegner, Albert

    2009-01-01

    Background R is the leading open source statistics software with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required. Results We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) – tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated in runtime from defined GUI tags that are embedded into the R script. User-GUI input is returned to the R code and replaces the XML-tags. RGG files can be developed using any text editor. The current version of RGG is available as a stand-alone software (RGGRunner) and as a plug-in for JGR. Conclusion RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract the GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated in any software. The RGG project further includes the development of a web-based repository for RGG-GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely at PMID:19254356

  10. Optimization of a Simple Ship Structural Model Using MAESTRO

    DTIC Science & Technology

    1999-03-01

    Excerpt from the table of contents and nomenclature of the report: MAESTRO model substructures, modules, girders, and transverse frames; structural and non-structural weight distribution; longitudinal and transverse load distributions on the model; hogging displacement; and panel limit-state abbreviations (Panel Yield - Compression, Flange; PYCP: Panel Yield - Compression, Plate; PSPBT: Panel Serviceability - Plate Bending, Transverse; PSPBL: Panel Serviceability - Plate).

  11. Exploring compression techniques for ROOT IO

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiment, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
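
    The compression-level tradeoff quantified above can be reproduced with any DEFLATE implementation. The sketch below uses Python's zlib on a synthetic payload as a stand-in for ROOT's own settings (the payload and the chosen levels are arbitrary assumptions); it reports output size, compression time and decompression time per level.

        # Illustrative DEFLATE level sweep: higher levels shrink the payload at the
        # cost of compression time, while decompression cost changes less.
        import time
        import zlib

        payload = b"event-like structured data with repeated fields " * 40000

        for level in (1, 6, 9):
            t0 = time.perf_counter()
            comp = zlib.compress(payload, level)
            t1 = time.perf_counter()
            zlib.decompress(comp)
            t2 = time.perf_counter()
            print(f"level {level}: {len(comp):>9d} bytes, "
                  f"compress {1e3 * (t1 - t0):6.1f} ms, "
                  f"decompress {1e3 * (t2 - t1):6.1f} ms")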

  12. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    NASA Astrophysics Data System (ADS)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  13. An optimized compression algorithm for real-time ECG data transmission in wireless network of medical information systems.

    PubMed

    Cho, Gyoun-Yon; Lee, Seo-Joon; Lee, Tae-Ro

    2015-01-01

    Recent medical information systems are striving towards real-time monitoring models that care for patients anytime and anywhere through ECG signals. However, there are several limitations, such as data distortion and limited bandwidth, in wireless communications. In order to overcome such limitations, this research focuses on compression. Little research has been done to develop a specialized compression algorithm for ECG data transmission in real-time monitoring wireless networks, and the algorithms in recent work are not well suited to ECG signals. Therefore this paper presents a further developed algorithm, EDLZW, for efficient ECG data transmission. Results showed that the EDLZW compression ratio was 8.66, a performance roughly 4 times better than other compression methods in wide use today.
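
    EDLZW itself is not specified in the abstract, but it belongs to the LZW family of dictionary coders. The sketch below applies a textbook LZW encoder to a quantized, synthetic ECG-like waveform to show how a compression ratio of this kind is computed; the waveform, byte mapping and fixed 12-bit output codes are illustrative assumptions, not the paper's method.

        # Textbook LZW encoding of a byte-mapped signal; illustrates the
        # dictionary-coder family that EDLZW extends.
        import numpy as np

        def lzw_encode(data: bytes) -> list[int]:
            table = {bytes([i]): i for i in range(256)}   # initial single-byte dictionary
            w, out = b"", []
            for byte in data:
                wc = w + bytes([byte])
                if wc in table:
                    w = wc
                else:
                    out.append(table[w])
                    table[wc] = len(table)                # grow the dictionary
                    w = bytes([byte])
            if w:
                out.append(table[w])
            return out

        # Synthetic ECG-like signal: smooth baseline plus periodic spikes, quantized to bytes.
        t = np.arange(4000)
        signal = 40 * np.sin(2 * np.pi * t / 250) + 80 * (t % 250 < 5)
        data = np.clip(signal + 128, 0, 255).astype(np.uint8).tobytes()

        codes = lzw_encode(data)
        bits_out = len(codes) * 12                        # assume fixed 12-bit codes
        print("compression ratio:", round(len(data) * 8 / bits_out, 2))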

  14. Medical Treatment for Postthrombotic Syndrome

    PubMed Central

    Palacios, Federico Silva; Rathbun, Suman Wasan

    2017-01-01

    Deep vein thrombosis (DVT) is a prevalent disease. About 20 to 30% of patients with DVT will develop postthrombotic syndrome (PTS) within months after the initial diagnosis of DVT. There is no gold standard for diagnosis of PTS, but clinical signs include pitting edema, hyperpigmentation, phlebectatic crown, venous eczema, and varicose veins. Several scoring systems have been developed for diagnostic evaluation. Conservative treatment includes compression therapy, medications, lifestyle modification, and exercise. Compression therapy, the mainstay and most proven noninvasive therapy for patients with PTS, can be prescribed as compression stockings, bandaging, adjustable compression wrap devices, and intermittent pneumatic compression. Medications may be used to both prevent and treat PTS and include anticoagulation, anti-inflammatories, vasoactive drugs, antibiotics, and diuretics. Exercise, weight loss, smoking cessation, and leg elevation are also recommended. Areas of further research include the duration, compliance, and strength of compression stockings in the prevention of PTS after DVT; the use of intermittent compression devices; the optimal medical anticoagulant regimen after endovascular therapy; and the role of newer anticoagulants as anti-inflammatory agents. PMID:28265131

  15. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
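
    The solver family referenced here (matching pursuit) can be illustrated with a generic orthogonal matching pursuit loop. In the sketch below, a random non-negative matrix stands in for a dose-influence matrix and a sparse weight vector stands in for the few active seed positions; this does not reproduce the paper's TG-43-based formulation, it only shows the greedy sparse-recovery idea.

        # Generic orthogonal matching pursuit (OMP): greedily select a small set of
        # columns whose least-squares combination reproduces a target vector.
        import numpy as np

        rng = np.random.default_rng(3)
        n_vox, n_pos, n_active = 400, 120, 10                 # dose points, candidates, active positions

        A = np.abs(rng.standard_normal((n_vox, n_pos)))       # surrogate influence matrix
        x_true = np.zeros(n_pos)
        x_true[rng.choice(n_pos, n_active, replace=False)] = rng.uniform(0.5, 1.5, n_active)
        d_target = A @ x_true                                 # target "dose" to reproduce

        support, residual = [], d_target.copy()
        for _ in range(n_active):                             # one new position per step
            j = int(np.argmax(np.abs(A.T @ residual)))        # best-correlated column
            support.append(j)
            coef, *_ = np.linalg.lstsq(A[:, support], d_target, rcond=None)
            residual = d_target - A[:, support] @ coef        # re-fit on the current support

        print("recovered positions:", sorted(support))
        print("true positions     :", sorted(np.flatnonzero(x_true).tolist()))
        print("relative residual  :", np.linalg.norm(residual) / np.linalg.norm(d_target))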

  16. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning.

    PubMed

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-03-21

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.

  17. Assessment of municipal solid waste settlement models based on field-scale data analysis.

    PubMed

    Bareither, Christopher A; Kwak, Seungbok

    2015-08-01

    An evaluation of municipal solid waste (MSW) settlement model performance and applicability was conducted based on analysis of two field-scale datasets: (1) Yolo and (2) Deer Track Bioreactor Experiment (DTBE). Twelve MSW settlement models were considered that included a range of compression behavior (i.e., immediate compression, mechanical creep, and biocompression) and a range of total (2-22) and optimized (2-7) model parameters. A multi-layer immediate settlement analysis developed for Yolo provides a framework to estimate initial waste thickness and waste thickness at the end of immediate compression. Model application to the Yolo test cells (conventional and bioreactor landfills) via least squares optimization yielded high coefficients of determination for all settlement models (R² > 0.83). However, empirical models (i.e., power creep, logarithmic, and hyperbolic models) are not recommended for use in MSW settlement modeling due to potential non-representative long-term MSW behavior, limited physical significance of model parameters, and required settlement data for model parameterization. Settlement models that combine mechanical creep and biocompression into a single mathematical function constrain time-dependent settlement to a single process with finite magnitude, which limits model applicability. Overall, all models evaluated that couple multiple compression processes (immediate, creep, and biocompression) provided accurate representations of both Yolo and DTBE datasets. A model presented in Gourc et al. (2010) included the lowest number of total and optimized model parameters and yielded high statistical performance for all model applications (R² ⩾ 0.97). Copyright © 2015 Elsevier Ltd. All rights reserved.
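
    The least-squares parameterization and R² reporting described above can be illustrated generically. The sketch below fits a simple logarithmic strain-versus-time curve (a stand-in functional form, not necessarily one of the twelve models evaluated) to synthetic settlement data and computes the coefficient of determination; the data and parameter values are assumptions for the example.

        # Generic least-squares fit of a settlement model to strain-time data,
        # scored with the coefficient of determination R^2.
        import numpy as np
        from scipy.optimize import curve_fit

        def log_creep(t, eps0, c_alpha):
            """Immediate strain plus logarithmic creep (reference time fixed at 1 day)."""
            return eps0 + c_alpha * np.log10(t)

        t = np.linspace(1, 2000, 60)                          # elapsed time in days
        rng = np.random.default_rng(1)
        strain = log_creep(t, 0.05, 0.08) + 0.003 * rng.standard_normal(t.size)

        popt, _ = curve_fit(log_creep, t, strain, p0=[0.01, 0.01])
        pred = log_creep(t, *popt)
        r2 = 1 - np.sum((strain - pred) ** 2) / np.sum((strain - strain.mean()) ** 2)
        print("fitted eps0, c_alpha:", np.round(popt, 4), " R^2:", round(r2, 3))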

  18. Rate distortion optimal bit allocation methods for volumetric data using JPEG 2000.

    PubMed

    Kosheleva, Olga M; Usevitch, Bryan E; Cabrera, Sergio D; Vidal, Edward

    2006-08-01

    Computer modeling programs that generate three-dimensional (3-D) data on fine grids are capable of generating very large amounts of information. These data sets, as well as 3-D sensor/measured data sets, are prime candidates for the application of data compression algorithms. A very flexible and powerful compression algorithm for imagery data is the newly released JPEG 2000 standard. JPEG 2000 also has the capability to compress volumetric data, as described in Part 2 of the standard, by treating the 3-D data as separate slices. As a decoder standard, JPEG 2000 does not describe any specific method to allocate bits among the separate slices. This paper proposes two new bit allocation algorithms for accomplishing this task. The first procedure is rate distortion optimal (for mean squared error), and is conceptually similar to postcompression rate distortion optimization used for coding codeblocks within JPEG 2000. The disadvantage of this approach is its high computational complexity. The second bit allocation algorithm, here called the mixed model (MM) approach, mathematically models each slice's rate distortion curve using two distinct regions to get more accurate modeling at low bit rates. These two bit allocation algorithms are applied to a 3-D Meteorological data set. Test results show that the MM approach gives distortion results that are nearly identical to the optimal approach, while significantly reducing computational complexity.
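
    The equal-slope (Lagrangian) allocation idea behind the optimal method can be sketched with the classical high-rate model D_i(R_i) = sigma_i^2 * 2^(-2 R_i), in which each slice receives bits until all slices share the same marginal distortion reduction. The per-slice variances and total budget below are arbitrary assumptions; real JPEG 2000 slices would use measured rate-distortion curves rather than this analytic model.

        # Lagrangian bit allocation across slices: bisect on the common slope until
        # the per-slice rates meet the total budget.
        import numpy as np

        sigma2 = np.array([4.0, 1.0, 0.25, 0.05])     # per-slice variances (assumed)
        budget = 6.0                                  # total bits per sample across slices

        def rates_for(lam):
            """Per-slice rates where the marginal distortion slope equals lam (clipped at 0)."""
            return np.maximum(0.0, 0.5 * np.log2(2 * np.log(2) * sigma2 / lam))

        lo, hi = 1e-9, 1e3                            # bracketing values for the slope
        for _ in range(200):
            lam = np.sqrt(lo * hi)                    # geometric bisection
            if rates_for(lam).sum() > budget:
                lo = lam                              # too many bits spent: increase the slope
            else:
                hi = lam

        rates = rates_for(hi)
        print("bits per slice:", np.round(rates, 3), " total:", round(rates.sum(), 3))
        print("distortions   :", np.round(sigma2 * 2.0 ** (-2 * rates), 4))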

  19. Optimization of a two stage light gas gun. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Rynearson, R. J.; Rand, J. L.

    1972-01-01

    Performance characteristics of the Texas A&M University light gas gun are presented along with a review of basic gun theory and popular prediction methods. A computer routine based on the simple isentropic compression method is discussed. Results from over 60 test shots are given which demonstrate an increase in gun muzzle velocity from 9,100 ft/sec to 19,000 ft/sec. The data gathered indicated the Texas A&M light gas gun more closely resembles an isentropic compression gun than a shock compression gun.

  20. An ant colony optimization based feature selection for web page classification.

    PubMed

    Saraç, Esra; Özel, Selma Ayşe

    2014-01-01

    The increased popularity of the web has caused the inclusion of huge amount of information to the web, and as a result of this explosive information growth, automated web page classification systems are needed to improve search engines' performance. Web pages have a large number of features such as HTML/XML tags, URLs, hyperlinks, and text contents that should be considered during an automated classification process. The aim of this study is to reduce the number of features to be used to improve runtime and accuracy of the classification of web pages. In this study, we used an ant colony optimization (ACO) algorithm to select the best features, and then we applied the well-known C4.5, naive Bayes, and k nearest neighbor classifiers to assign class labels to web pages. We used the WebKB and Conference datasets in our experiments, and we showed that using the ACO for feature selection improves both accuracy and runtime performance of classification. We also showed that the proposed ACO based algorithm can select better features with respect to the well-known information gain and chi square feature selection methods.
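
    The ACO-based selection loop can be sketched compactly: ants sample candidate feature subsets guided by per-feature pheromone, each subset is scored with a classifier, and pheromone is reinforced on the best subset of the iteration. The sketch below uses a synthetic dataset, a k-NN scorer and simplified Bernoulli subset construction; the colony size, parameters and dataset are assumptions and do not reproduce the paper's WebKB/Conference experiments.

        # Minimal ant-colony feature-subset selection sketch with a k-NN fitness.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X, y = make_classification(n_samples=300, n_features=30, n_informative=6,
                                   n_redundant=4, random_state=0)
        n_feat = X.shape[1]

        def fitness(mask):
            """Cross-validated accuracy of k-NN on the selected feature subset."""
            if not mask.any():
                return 0.0
            clf = KNeighborsClassifier(n_neighbors=3)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        tau = np.ones(n_feat)                          # pheromone trail per feature
        best_mask, best_score = None, -1.0
        for _ in range(15):                            # colony iterations
            ants = []
            for _ in range(10):                        # ants per iteration
                select_prob = 0.3 * n_feat * tau / tau.sum()   # ~30% of features on average
                mask = rng.random(n_feat) < select_prob
                ants.append((fitness(mask), mask))
            it_score, it_mask = max(ants, key=lambda a: a[0])
            if it_score > best_score:
                best_score, best_mask = it_score, it_mask
            tau *= 0.9                                 # evaporation
            tau[it_mask] += it_score                   # reinforce the iteration-best subset

        print("selected features:", np.flatnonzero(best_mask).tolist())
        print("cross-validated accuracy:", round(best_score, 3))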

  1. Knowledge representation for fuzzy inference aided medical image interpretation.

    PubMed

    Gal, Norbert; Stoicu-Tivadar, Vasile

    2012-01-01

    Knowledge defines how an automated system transforms data into information. This paper suggests a representation method of medical imaging knowledge using fuzzy inference systems coded in XML files. The imaging knowledge incorporates features of the investigated objects in linguistic form and inference rules that can transform the linguistic data into information about a possible diagnosis. A fuzzy inference system is used to model the vagueness of the linguistic medical imaging terms. XML files are used to facilitate easy manipulation and deployment of the knowledge into the imaging software. Preliminary results are presented.
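
    A minimal illustration of the representation described above: fuzzy terms and a rule encoded in XML, parsed with Python's standard library, and evaluated with triangular membership functions. The schema, variable names and rule are invented for this example and are not the paper's actual knowledge base.

        # Parse XML-coded fuzzy imaging knowledge and compute a degree of support.
        import xml.etree.ElementTree as ET

        KNOWLEDGE = """
        <knowledge>
          <variable name="lesion_diameter_mm">
            <term name="large" a="8" b="15" c="25"/>
          </variable>
          <variable name="border_irregularity">
            <term name="high" a="0.5" b="0.8" c="1.0"/>
          </variable>
          <rule conclusion="suspicious" operator="min">
            <antecedent variable="lesion_diameter_mm" term="large"/>
            <antecedent variable="border_irregularity" term="high"/>
          </rule>
        </knowledge>
        """

        def triangular(x, a, b, c):
            """Triangular membership function with feet a, c and peak b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        root = ET.fromstring(KNOWLEDGE)
        terms = {(v.get("name"), t.get("name")): tuple(float(t.get(k)) for k in "abc")
                 for v in root.findall("variable") for t in v.findall("term")}

        measurement = {"lesion_diameter_mm": 17.0, "border_irregularity": 0.7}

        for rule in root.findall("rule"):
            degrees = [triangular(measurement[a.get("variable")],
                                  *terms[(a.get("variable"), a.get("term"))])
                       for a in rule.findall("antecedent")]
            print(rule.get("conclusion"), "support:", round(min(degrees), 3))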

  2. Application of Architectural Patterns and Lightweight Formal Method for the Validation and Verification of Safety Critical Systems

    DTIC Science & Technology

    2013-09-01

    to a XML file, a code that Bonine in [21] developed for a similar purpose. Using the StateRover XML log file import tool, we are able to generate a...C. Bonine , M. Shing, T.W. Otani, “Computer-aided process and tools for mobile software acquisition,” NPS, Monterey, CA, Tech. Rep. NPS-SE-13...C10P07R05– 075, 2013. [21] C. Bonine , “Specification, validation and verification of mobile application behavior,” M.S. thesis, Dept. Comp. Science, NPS

  3. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    PubMed

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodical dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
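
    The dumping pattern described above (template-driven, row-by-row streaming to keep memory use flat) can be sketched with Python's standard library. The table, element names and output layout below are invented for the example; YAdumper itself is a Java tool driven by an XML template derived from an external Document Type Declaration.

        # Stream rows from a relational database into an XML file without holding
        # the whole result set in memory.
        import sqlite3
        from xml.sax.saxutils import escape

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE protein (id INTEGER, name TEXT, organism TEXT)")
        conn.executemany("INSERT INTO protein VALUES (?, ?, ?)",
                         [(1, "P53_HUMAN", "Homo sapiens"), (2, "RASH_HUMAN", "Homo sapiens")])

        with open("dump.xml", "w", encoding="utf-8") as out:
            out.write('<?xml version="1.0" encoding="UTF-8"?>\n<proteins>\n')
            for pid, name, organism in conn.execute("SELECT id, name, organism FROM protein"):
                out.write(f'  <protein id="{pid}">\n'
                          f'    <name>{escape(name)}</name>\n'
                          f'    <organism>{escape(organism)}</organism>\n'
                          f'  </protein>\n')
            out.write("</proteins>\n")

        print(open("dump.xml", encoding="utf-8").read())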

  4. Middleware Trade Study for NASA Domain

    NASA Technical Reports Server (NTRS)

    Bowman, Dan

    2007-01-01

    This presentation gives preliminary results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and the Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are: the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.

  5. Performance Comparison of Relational and Native-XML Databases using the Semantics of the Land Command and Control Information Exchange Data Model (LC2IEDM)

    DTIC Science & Technology

    2005-09-01

    aid this thesis would not have come to existence. First, we would like to thank Eric Chaum for his enthusiasm and recommendation for doing this...by David Hina [HINA 00], discusses the use of available Commercial-Off-The-Shelf (COTS) XML technology to provide for the exchange of data between...air and maritime components and therefore forms the basis of a joint command and control data model [ CHAUM 04]. The data model is one of the two

  6. Graphene/Polyaniline Aerogel with Superelasticity and High Capacitance as Highly Compression-Tolerant Supercapacitor Electrode

    NASA Astrophysics Data System (ADS)

    Lv, Peng; Tang, Xun; Zheng, Ruilin; Ma, Xiaobo; Yu, Kehan; Wei, Wei

    2017-12-01

    Superelastic graphene aerogel with ultra-high compressibility shows promising potential for compression-tolerant supercapacitor electrode. However, its specific capacitance is too low to meet the practical application. Herein, we deposited polyaniline (PANI) into the superelastic graphene aerogel to improve the capacitance while maintaining the superelasticity. Graphene/PANI aerogel with optimized PANI mass content of 63 wt% shows the improved specific capacitance of 713 F g-1 in the three-electrode system. And the graphene/PANI aerogel presents a high recoverable compressive strain of 90% due to the strong interaction between PANI and graphene. The all-solid-state supercapacitors were assembled to demonstrate the compression-tolerant ability of graphene/PANI electrodes. The gravimetric capacitance of graphene/PANI electrodes reaches 424 F g-1 and retains 96% even at 90% compressive strain. And a volumetric capacitance of 65.5 F cm-3 is achieved, which is much higher than that of other compressible composite electrodes. Furthermore, several compressible supercapacitors can be integrated and connected in series to enhance the overall output voltage, suggesting the potential to meet the practical application.

  7. Graphene/Polyaniline Aerogel with Superelasticity and High Capacitance as Highly Compression-Tolerant Supercapacitor Electrode.

    PubMed

    Lv, Peng; Tang, Xun; Zheng, Ruilin; Ma, Xiaobo; Yu, Kehan; Wei, Wei

    2017-12-19

    Superelastic graphene aerogel with ultra-high compressibility shows promising potential for compression-tolerant supercapacitor electrode. However, its specific capacitance is too low to meet the practical application. Herein, we deposited polyaniline (PANI) into the superelastic graphene aerogel to improve the capacitance while maintaining the superelasticity. Graphene/PANI aerogel with optimized PANI mass content of 63 wt% shows the improved specific capacitance of 713 F g-1 in the three-electrode system. And the graphene/PANI aerogel presents a high recoverable compressive strain of 90% due to the strong interaction between PANI and graphene. The all-solid-state supercapacitors were assembled to demonstrate the compression-tolerant ability of graphene/PANI electrodes. The gravimetric capacitance of graphene/PANI electrodes reaches 424 F g-1 and retains 96% even at 90% compressive strain. And a volumetric capacitance of 65.5 F cm-3 is achieved, which is much higher than that of other compressible composite electrodes. Furthermore, several compressible supercapacitors can be integrated and connected in series to enhance the overall output voltage, suggesting the potential to meet the practical application.

  8. Formulation development and optimization of sustained release matrix tablet of Itopride HCl by response surface methodology and its evaluation of release kinetics

    PubMed Central

    Bose, Anirbandeep; Wong, Tin Wui; Singh, Navjot

    2012-01-01

    The objective of this investigation was to develop and formulate sustained release (SR) matrix tablets of Itopride HCl using different polymer combinations and fillers, to optimize them by Central Composite Design response surface methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Sustained release matrix tablets of various combinations were prepared with the cellulose-based polymer hydroxypropyl methylcellulose (HPMC) and polyvinyl pyrrolidone (PVP), with lactose as filler. Study of pre-compression and post-compression parameters facilitated the screening of a formulation with the best characteristics, which then underwent an optimization study by response surface methodology (Central Composite Design). The optimized tablet was further subjected to scanning electron microscopy to reveal its release pattern. The in vitro study revealed that combining HPMC K100M (24.65 mg) with PVP (20 mg) and using lactose as filler sustained the action for more than 12 h. The developed sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet. PMID:23960836

  9. Formulation development and optimization of sustained release matrix tablet of Itopride HCl by response surface methodology and its evaluation of release kinetics.

    PubMed

    Bose, Anirbandeep; Wong, Tin Wui; Singh, Navjot

    2013-04-01

    The objective of this investigation was to develop and formulate sustained release (SR) matrix tablets of Itopride HCl using different polymer combinations and fillers, to optimize them by Central Composite Design response surface methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Sustained release matrix tablets of various combinations were prepared with the cellulose-based polymer hydroxypropyl methylcellulose (HPMC) and polyvinyl pyrrolidone (PVP), with lactose as filler. Study of pre-compression and post-compression parameters facilitated the screening of a formulation with the best characteristics, which then underwent an optimization study by response surface methodology (Central Composite Design). The optimized tablet was further subjected to scanning electron microscopy to reveal its release pattern. The in vitro study revealed that combining HPMC K100M (24.65 mg) with PVP (20 mg) and using lactose as filler sustained the action for more than 12 h. The developed sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet.

  10. SU-G-BRC-10: Feasibility of a Web-Based Monte Carlo Simulation Tool for Dynamic Electron Arc Radiotherapy (DEAR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodrigues, A; Wu, Q; Sawkey, D

    Purpose: DEAR is a radiation therapy technique utilizing synchronized motion of gantry and couch during delivery to optimize dose distribution homogeneity and penumbra for treatment of superficial disease. Dose calculation for DEAR is not yet supported by commercial TPSs. The purpose of this study is to demonstrate the feasibility of using a web-based Monte Carlo (MC) simulation tool (VirtuaLinac) to calculate dose distributions for a DEAR delivery. Methods: MC simulations were run through VirtuaLinac, which is based on the GEANT4 platform. VirtuaLinac utilizes detailed linac head geometry and material models, validated phase space files, and a voxelized phantom. The input was expanded to include an XML file for simulation of varying mechanical axes as a function of MU. A DEAR XML plan was generated and used in the MC simulation and delivered on a TrueBeam in Developer Mode. Radiographic film wrapped on a cylindrical phantom (12.5 cm radius) measured dose at a depth of 1.5 cm and compared to the simulation results. Results: A DEAR plan was simulated using an energy of 6 MeV and a 3×10 cm2 cut-out in a 15×15 cm2 applicator for a delivery of a 90° arc. The resulting data were found to provide qualitative and quantitative evidence that the simulation platform could be used as the basis for DEAR dose calculations. The resulting unwrapped 2D dose distributions agreed well in the cross-plane direction along the arc, with field sizes of 18.4 and 18.2 cm and penumbrae of 1.9 and 2.0 cm for measurements and simulations, respectively. Conclusion: Preliminary feasibility of a DEAR delivery using a web-based MC simulation platform has been demonstrated. This tool will benefit treatment planning for DEAR as a benchmark for developing other model based algorithms, allowing efficient optimization of trajectories, and quality assurance of plans without the need for extensive measurements.

  11. A modular framework for biomedical concept recognition

    PubMed Central

    2013-01-01

    Background Concept recognition is an essential task in biomedical information extraction, presenting several complex and unsolved challenges. The development of such solutions is typically performed in an ad-hoc manner or using general information extraction frameworks, which are not optimized for the biomedical domain and normally require the integration of complex external libraries and/or the development of custom tools. Results This article presents Neji, an open source framework optimized for biomedical concept recognition built around four key characteristics: modularity, scalability, speed, and usability. It integrates modules for biomedical natural language processing, such as sentence splitting, tokenization, lemmatization, part-of-speech tagging, chunking and dependency parsing. Concept recognition is provided through dictionary matching and machine learning with normalization methods. Neji also integrates an innovative concept tree implementation, supporting overlapped concept names and respective disambiguation techniques. The most popular input and output formats, namely Pubmed XML, IeXML, CoNLL and A1, are also supported. On top of the built-in functionalities, developers and researchers can implement new processing modules or pipelines, or use the provided command-line interface tool to build their own solutions, applying the most appropriate techniques to identify heterogeneous biomedical concepts. Neji was evaluated against three gold standard corpora with heterogeneous biomedical concepts (CRAFT, AnEM and NCBI disease corpus), achieving high performance results on named entity recognition (F1-measure for overlap matching: species 95%, cell 92%, cellular components 83%, gene and proteins 76%, chemicals 65%, biological processes and molecular functions 63%, disorders 85%, and anatomical entities 82%) and on entity normalization (F1-measure for overlap name matching and correct identifier included in the returned list of identifiers: species 88%, cell 71%, cellular components 72%, gene and proteins 64%, chemicals 53%, and biological processes and molecular functions 40%). Neji provides fast and multi-threaded data processing, annotating up to 1200 sentences/second when using dictionary-based concept identification. Conclusions Considering the provided features and underlying characteristics, we believe that Neji is an important contribution to the biomedical community, streamlining the development of complex concept recognition solutions. Neji is freely available at http://bioinformatics.ua.pt/neji. PMID:24063607

  12. Design Space Approach in Optimization of Fluid Bed Granulation and Tablets Compression Process

    PubMed Central

    Djuriš, Jelena; Medarević, Djordje; Krstić, Marko; Vasiljević, Ivana; Mašić, Ivana; Ibrić, Svetlana

    2012-01-01

    The aim of this study was to optimize fluid bed granulation and tablet compression processes using a design space approach. Type of diluent, binder concentration, temperature during mixing, granulation and drying, spray rate, and atomization pressure were recognized as critical formulation and process parameters. They were varied in the first set of experiments in order to estimate their influences on critical quality attributes, that is, granule characteristics (size distribution, flowability, bulk density, tapped density, Carr's index, Hausner's ratio, and moisture content) using Plackett-Burman experimental design. Type of diluent and atomization pressure were selected as the most important parameters. In the second set of experiments, a design space for process parameters (atomization pressure and compression force) and its influence on tablet characteristics was developed. Percent of paracetamol released and tablet hardness were determined as critical quality attributes. Artificial neural networks (ANNs) were applied in order to determine the design space. ANN models showed that atomization pressure mainly influences the dissolution profile, whereas compression force mainly affects tablet hardness. Based on the obtained ANN models, it is possible to predict tablet hardness and paracetamol release profile for any combination of analyzed factors. PMID:22919295

  13. Lossless medical image compression with a hybrid coder

    NASA Astrophysics Data System (ADS)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the users, to avoid misdiagnosis due to lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder is constituted of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed one is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
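
    The two-stage structure described above can be emulated end to end with stand-in coders: a crude lossy stage (plain quantization in place of the embedded wavelet coder) plus lossless coding of the residual (zlib in place of the run-length coder), after which the original image is recovered exactly. The synthetic image and the stand-in coders are assumptions for illustration only and do not reproduce the paper's actual coders.

        # Hybrid lossy-plus-residual coding sketch with exact reconstruction.
        import numpy as np
        import zlib

        rng = np.random.default_rng(7)
        image = (rng.integers(0, 40, (256, 256)) + 100).astype(np.int16)  # synthetic image

        step = 16
        lossy = (image // step) * step                 # stage 1: coarse lossy approximation
        residual = image - lossy                       # small-amplitude residual

        lossy_bytes = zlib.compress(lossy.tobytes(), 9)
        resid_bytes = zlib.compress(residual.tobytes(), 9)

        restored = (np.frombuffer(zlib.decompress(lossy_bytes), dtype=np.int16) +
                    np.frombuffer(zlib.decompress(resid_bytes), dtype=np.int16)).reshape(image.shape)

        assert np.array_equal(restored, image)         # lossless end to end
        ratio = image.nbytes / (len(lossy_bytes) + len(resid_bytes))
        print("overall compression ratio:", round(ratio, 2))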

  14. Particle Engineering of Excipients for Direct Compression: Understanding the Role of Material Properties.

    PubMed

    Mangal, Sharad; Meiser, Felix; Morton, David; Larson, Ian

    2015-01-01

    Tablets represent the preferred and most commonly dispensed pharmaceutical dosage form for administering active pharmaceutical ingredients (APIs). Minimizing the cost of goods and improving manufacturing output efficiency has motivated companies to use direct compression as a preferred method of tablet manufacturing. Excipients dictate the success of direct compression, notably by optimizing powder formulation compactability and flow, thus there has been a surge in creating excipients specifically designed to meet these needs for direct compression. Greater scientific understanding of tablet manufacturing coupled with effective application of the principles of material science and particle engineering has resulted in a number of improved direct compression excipients. Despite this, significant practical disadvantages of direct compression remain relative to granulation, and this is partly due to the limitations of direct compression excipients. For instance, in formulating high-dose APIs, a much higher level of excipient is required relative to wet or dry granulation and so tablets are much bigger. Creating excipients to enable direct compression of high-dose APIs requires the knowledge of the relationship between fundamental material properties and excipient functionalities. In this paper, we review the current understanding of the relationship between fundamental material properties and excipient functionality for direct compression.

  15. The effect of compression and attention allocation on speech intelligibility

    NASA Astrophysics Data System (ADS)

    Choi, Sangsook; Carrell, Thomas

    2003-10-01

    Research investigating the effects of amplitude compression on speech intelligibility for individuals with sensorineural hearing loss has demonstrated contradictory results [Souza and Turner (1999)]. Because percent-correct measures may not be the best indicator of compression effectiveness, a speech intelligibility and motor coordination task was developed to provide data that may more thoroughly explain the perception of compressed speech signals. In the present study, a pursuit rotor task [Dlhopolsky (2000)] was employed along with word identification task to measure the amount of attention required to perceive compressed and non-compressed words in noise. Monosyllabic words were mixed with speech-shaped noise at a fixed signal-to-noise ratio and compressed using a wide dynamic range compression scheme. Participants with normal hearing identified each word with or without a simultaneous pursuit-rotor task. Also, participants completed the pursuit-rotor task without simultaneous word presentation. It was expected that the performance on the additional motor task would reflect effect of the compression better than simple word-accuracy measures. Results were complex. For example, in some conditions an irrelevant task actually improved performance on a simultaneous listening task. This suggests there might be an optimal level of attention required for recognition of monosyllabic words.

  16. PARLO: PArallel Run-Time Layout Optimization for Scientific Data Explorations with Heterogeneous Access Pattern

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Zhenhuan; Boyuka, David; Zou, X

    The size and scope of cutting-edge scientific simulations are growing much faster than the I/O and storage capabilities of their run-time environments. The growing gap is exacerbated by exploratory, data-intensive analytics, such as querying simulation data with multivariate, spatio-temporal constraints, which induces heterogeneous access patterns that stress the performance of the underlying storage system. Previous work addresses data layout and indexing techniques to improve query performance for a single access pattern, which is not sufficient for complex analytics jobs. We present PARLO, a parallel run-time layout optimization framework, to achieve multi-level data layout optimization for scientific applications at run-time before data is written to storage. The layout schemes optimize for heterogeneous access patterns with user-specified priorities. PARLO is integrated with ADIOS, a high-performance parallel I/O middleware for large-scale HPC applications, to achieve user-transparent, light-weight layout optimization for scientific datasets. It offers simple XML-based configuration for users to achieve flexible layout optimization without the need to modify or recompile application codes. Experiments show that PARLO improves performance by 2 to 26 times for queries with heterogeneous access patterns compared to state-of-the-art scientific database management systems. Compared to traditional post-processing approaches, its underlying run-time layout optimization achieves a 56% savings in processing time and a reduction in storage overhead of up to 50%. PARLO also exhibits a low run-time resource requirement, while also limiting the performance impact on running applications to a reasonable level.

  17. Intelligent Visualization of Geo-Information on the Future Web

    NASA Astrophysics Data System (ADS)

    Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.

    2012-04-01

    Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "Data that cannot be seen, does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), but even the 2D visualization capabilities of HTML e.g. (images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows for compactly describing 2D and 3D data directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of Javascript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as key input for standardization within the W3C Community Group on "Declarative 3D for the Web" chaired by the DFKI and has recently been selected as one of the Generic Enabler for the EU Future Internet initiative.

  18. Information persistence using XML database technology

    NASA Astrophysics Data System (ADS)

    Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.

    2005-05-01

    The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network centric core enterprise services and when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is represented and persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way. This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and improved retrieval techniques.

  19. Information Model Translation to Support a Wider Science Community

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Daniel; Ritschel, Bernd; Hardman, Sean; Joyner, Ronald

    2014-05-01

    The Planetary Data System (PDS), NASA's long-term archive for solar system exploration data, has just released PDS4, a modernization of the PDS architecture, data standards, and technical infrastructure. This next generation system positions the PDS to meet the demands of the coming decade, including big data, international cooperation, distributed nodes, and multiple ways of analysing and interpreting data. It also addresses three fundamental project goals: providing more efficient data delivery by data providers to the PDS, enabling a stable, long-term usable planetary science data archive, and enabling services for the data consumer to find, access, and use the data they require in contemporary data formats. The PDS4 information architecture is used to describe all PDS data using a common model. Captured in an ontology modeling tool it supports a hierarchy of data dictionaries built to the ISO/IEC 11179 standard and is designed to increase flexibility, enable complex searches at the product level, and to promote interoperability that facilitates data sharing both nationally and internationally. A PDS4 information architecture design requirement stipulates that the content of the information model must be translatable to external data definition languages such as XML Schema, XMI/XML, and RDF/XML. To support the semantic Web standards we are now in the process of mapping the contents into RDF/XML to support SPARQL capable databases. We are also building a terminological ontology to support virtually unified data retrieval and access. This paper will provide an overview of the PDS4 information architecture focusing on its domain information model and how the translation and mapping are being accomplished.

  20. Structured Course Objects in a Digital Library

    NASA Technical Reports Server (NTRS)

    Maly, K.; Zubair, M.; Liu, X.; Nelson, M.; Zeil, S.

    1999-01-01

    We are developing an Undergraduate Digital Library Framework (UDLF) that will support creation/archiving of courses and reuse of existing course material to evolve courses. UDLF supports the publication of course materials for later instantiation for a specific offering and allows the addition of time-dependent and student-specific information and structures. Instructors and, depending on permissions, students can access the general course materials or the materials for a specific offering. We are building a reference implementation based on NCSTRL+, a digital library derived from NCSTRL. Digital objects in NCSTRL+ are called buckets, self-contained entities that carry their own methods for access and display. Current bucket implementations have a two level structure of packages and elements. This is not a rich enough structure for course objects in UDLF. Typically, courses can only be modeled as a multilevel hierarchy and among different courses, both the syntax and semantics of terms may vary. Therefore, we need a mechanism to define, within a particular library, course models, their constituent objects, and the associated semantics in a flexible, extensible way. In this paper, we describe our approach to define and implement these multilayered course objects. We use XML technology to emulate complex data structures within the NCSTRL+ buckets. We have developed authoring and browsing tools to manipulate these course objects. In our current implementation a user downloading an XML based course bucket also downloads the XML-aware tools: an applet that enables the user to edit or browse the bucket. We claim that XML provides an effective means to represent multi-level structure of a course bucket.

  1. SU-F-P-36: Automation of Linear Accelerator Star Shot Measurement with Advanced XML Scripting and Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, N; Knutson, N; Schmidt, M

    Purpose: To verify a method used to automatically acquire jaw, MLC, collimator and couch star shots for a Varian TrueBeam linear accelerator utilizing Developer Mode and an Electronic Portal Imaging Device (EPID). Methods: An XML script was written to automate motion of the jaws, MLC, collimator and couch in TrueBeam Developer Mode (TBDM) to acquire star shot measurements. The XML script also dictates MV imaging parameters to facilitate automatic acquisition and recording of integrated EPID images. Since couch star shot measurements cannot be acquired using a combination of EPID and jaw/MLC collimation alone due to a fixed imager geometry, a method utilizing a 5 mm wide steel ruler placed on the table and centered within a 15×15 cm² open field to produce a surrogate of the narrow field aperture was investigated. Four individual star shot measurements (X jaw, Y jaw, MLC and couch) were obtained using both our proposed method and the traditional film-based method. Integrated EPID images and scanned measurement films were analyzed and compared. Results: Star shot (X jaw, Y jaw, MLC and couch) measurements were obtained in a single 5-minute delivery using the TBDM XML script method, compared to 60 minutes for equivalent traditional film measurements. Analysis of the images and films demonstrated comparable isocentricity results, agreeing within 0.3 mm of each other. Conclusion: The presented automatic approach of acquiring star shot measurements using TBDM and EPID has proven to be more efficient than the traditional film approach with equivalent results.

  2. Poster — Thur Eve — 55: An automated XML technique for isocentre verification on the Varian TrueBeam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asiev, Krum; Mullins, Joel; DeBlois, François

    2014-08-15

    Isocentre verification tests, such as the Winston-Lutz (WL) test, have gained popularity in recent years as techniques such as stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments are more commonly performed on radiotherapy linacs. These highly conformal treatments require frequent monitoring of the geometrical accuracy of the isocentre to ensure proper radiation delivery. At our clinic, the WL test is performed by acquiring with the EPID a collection of 8 images of a WL phantom fixed on the couch for various couch/gantry angles. This set of images is later analyzed to determine the isocentre size. The current work addresses the acquisition process. A manual WL test acquisition performed by an experienced physicist takes on average 25 minutes and is prone to user manipulation errors. We have automated this acquisition on a Varian TrueBeam STx linac (Varian, Palo Alto, USA). The Varian developer mode allows the execution of custom-made XML script files to control all aspects of the linac operation. We have created an XML-WL script that cycles through each couch/gantry combination, taking an EPID image at each position. This automated acquisition is done in less than 4 minutes. The reproducibility of the method was verified by repeating the execution of the XML file 5 times. The analysis of the images showed variation of the isocentre size of less than 0.1 mm along the X, Y and Z axes, which compares favorably to a manual acquisition, for which we typically observe variations up to 0.5 mm.

  3. An effective XML based name mapping mechanism within StoRM

    NASA Astrophysics Data System (ADS)

    Corso, E.; Forti, A.; Ghiselli, A.; Magnoni, L.; Zappi, R.

    2008-07-01

    In a Grid environment, the naming capability allows users to refer to specific data resources in a physical storage system using a high-level logical identifier. This logical identifier is typically organized in a file-system-like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data, evaluating a set of parameters such as the desired quality of service and the VOMS attributes specified in the requests. StoRM is an SRM service developed by INFN and ICTP-EGRID to manage files and space on standard POSIX and high-performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different qualities of service. The mapping mechanism proposed in StoRM is based on an XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide and the Virtual Organizations that want to use the storage areas. An appropriate directory tree is realized in each storage component reflecting the XML schema. In this scenario StoRM is able to identify the physical location of requested data by evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented and the technical details of the StoRM implementation.
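
    A minimal sketch of the resolution idea, assuming an invented element vocabulary rather than the actual StoRM namespace schema: the logical identifier is matched against per-storage-area rules read from an XML document, and the physical path is derived without any database query.

```python
# Illustrative sketch only (the element names below are invented, not the
# actual StoRM namespace schema): resolve a logical file name to a physical
# path by matching storage-area rules from an XML document.
import xml.etree.ElementTree as ET

NAMESPACE_XML = """
<namespace>
  <storage-area name="atlas-disk" vo="atlas" quality="replica">
    <root>/storage/atlas/disk</root>
    <approachable-rule prefix="/atlas/disk/"/>
  </storage-area>
  <storage-area name="atlas-tape" vo="atlas" quality="custodial">
    <root>/storage/atlas/tape</root>
    <approachable-rule prefix="/atlas/tape/"/>
  </storage-area>
</namespace>
"""

def resolve(logical_name: str, vo: str) -> str:
    """Map a logical identifier to a physical path using the XML rules."""
    root = ET.fromstring(NAMESPACE_XML)
    for area in root.findall("storage-area"):
        prefix = area.find("approachable-rule").get("prefix")
        if area.get("vo") == vo and logical_name.startswith(prefix):
            return area.find("root").text + logical_name[len(prefix) - 1:]
    raise LookupError(f"no storage area matches {logical_name!r} for VO {vo!r}")

print(resolve("/atlas/disk/run123/file.root", "atlas"))
# -> /storage/atlas/disk/run123/file.root
```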

  4. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs each carrying out part of the analysis and design task. This has been carried out previously by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, data of interest are described using the XML markup language, which allows the data to be stored in a text string. Software to transform the output data of a task into an XML string and software to read an XML string and extract all or a portion of the data needed for another application are used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, and turbine-blade data produced by an independent blade design program (UD0300).
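
    A small sketch of the interchange pattern, with illustrative element names rather than those used by the actual Lapin tool chain: one application serializes its results to an XML string, and a second application reads back only the subset of values it needs.

```python
# Hedged sketch of the interchange pattern described above. Element names are
# illustrative only.
import xml.etree.ElementTree as ET

def to_xml_string(results: dict) -> str:
    """Serialize a flat dictionary of analysis results to an XML string."""
    root = ET.Element("analysis-output", tool="solver")
    for name, value in results.items():
        ET.SubElement(root, "item", name=name).text = str(value)
    return ET.tostring(root, encoding="unicode")

def read_subset(xml_string: str, wanted: set) -> dict:
    """Extract only the items another application needs."""
    root = ET.fromstring(xml_string)
    return {e.get("name"): float(e.text)
            for e in root.findall("item") if e.get("name") in wanted}

xml_text = to_xml_string({"mass_flow": 12.3, "pressure_ratio": 1.8, "rpm": 30500})
print(read_subset(xml_text, {"pressure_ratio", "rpm"}))
```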

  5. Changes in Small Intestine Tissue Compressed by a Linear Stapler Based on Cole Y Model.

    PubMed

    Zhou, Yu; Ren, Binbin; Li, Boting; Xu, Jingjing; Jin, Yiyun; Song, Chengli

    2016-12-01

    Clarifying changes in gastrointestinal tissue compressed by a surgical stapler is a crucial prerequisite for stapler design optimization. For this study, a stapler was modified, and the multifrequency bioimpedance of porcine small intestine tissue compressed by the stapler was measured. The Cole Y model was fitted to the bioimpedance, and changes in the tissue were analyzed using the model parameters: G_0, extracellular fluid conductance; ΔG, intracellular fluid conductance; and C_cpeF, equivalent capacitance of the cell membrane. The changes could be divided into two stages: first, all parameters decreased sharply, with slopes of more than 15.70 ± 2.67, 4.25 ± 1.23 μS/s and 72.68 ± 6.99 pF/s, respectively; subsequently, with an increase in compression strength, G_0 decreased with slopes of less than 2.54 ± 0.40 μS/s, ΔG decreased slightly with a slope of 0.26 ± 0.04 μS/s after fluctuating mildly, and C_cpeF remained nearly invariant after initially increasing with a slope of -2.94 ± 0.64 pF/s. In conclusion, when the stapler is closed, a portion of tissue is squeezed out of the measurement space, causing the sharp decrease in all parameters. Subsequently, the stapler continues compressing the tissue, leading to extracellular fluid expulsion. The changes in intracellular fluid are related to the compression strength and may be explained by cell restoration. This study could provide a basis for stapler design optimization.

  6. Effect of Compression Devices on Preventing Deep Vein Thrombosis Among Adult Trauma Patients: A Systematic Review.

    PubMed

    Ibrahim, Mona; Ahmed, Azza; Mohamed, Warda Yousef; El-Sayed Abu Abduo, Somaya

    2015-01-01

    Trauma is the leading cause of death in Americans up to 44 years old each year. Deep vein thrombosis (DVT) is a significant condition occurring in trauma, and prophylaxis is essential to the appropriate management of trauma patients. The incidence of DVT varies in trauma patients, depending on patients' risk factors, the modality of prophylaxis, and the methods of detection. Compression devices and arteriovenous (A-V) foot pump prophylaxis are recommended in trauma patients, but their efficacy and optimal use are not well documented in the literature. The aim of this study was to review the literature on the effect of compression devices in preventing DVT among adult trauma patients. We searched PubMed, CINAHL, and the Cochrane Central Register of Controlled Trials for eligible studies published from 1990 until June 2014. Reviewers identified all randomized controlled trials that satisfied the study criteria, and the quality of the included studies was assessed with the Cochrane risk-of-bias tool. Five randomized controlled trials were included, with a total of 1072 patients. Sequential compression devices significantly reduced the incidence of DVT in trauma patients. Also, foot pumps were more effective in reducing the incidence of DVT compared with sequential compression devices. Sequential compression devices and foot pumps reduced the incidence of DVT in trauma patients. However, the evidence is limited by small sample sizes and did not take into account other confounding variables that may affect the incidence of DVT in trauma patients. Future randomized controlled trials with larger probability samples are needed to investigate the optimal use of mechanical prophylaxis in trauma patients.

  7. Personalising e-learning modules: targeting Rasmussen levels using XML.

    PubMed

    Renard, J M; Leroy, S; Camus, H; Picavet, M; Beuscart, R

    2003-01-01

    The development of Internet technologies has made it possible to increase the number and the diversity of on-line resources for teachers and students. Initiatives like the French-speaking Virtual Medical University Project (UMVF) try to organise access to these resources. But both teachers and students are working on a partly redundant subset of knowledge. From the analysis of some French courses we propose a model for knowledge organisation derived from Rasmussen's stepladder. In the context of decision-making, Rasmussen identified skill-based, rule-based and knowledge-based levels of the mental process. In the medical context of problem-solving, we apply these three levels to the definition of three student levels: beginners, intermediate-level learners and experts. Based on our model, we build a representation of the hierarchical structure of the data using the XML language. We use the XSLT transformation language to filter relevant data according to student level and to propose an appropriate display on the student's terminal. The model and the XML implementation we define help to design tools for building personalised e-learning modules.
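
    The filtering step can be sketched as follows, assuming a hypothetical course markup in which each section is tagged with the Rasmussen-derived level it targets; an XSLT parameter selects the sections appropriate to the student level. This is an illustration of the technique, not the UMVF document structure.

```python
# Minimal sketch of level-based filtering with XSLT via lxml. The course
# markup and level names below are hypothetical.
from lxml import etree

COURSE = etree.XML("""
<course>
  <section level="skill">Recognize the classic ECG pattern.</section>
  <section level="rule">Apply the diagnostic decision rule.</section>
  <section level="knowledge">Discuss the underlying pathophysiology.</section>
</course>""")

FILTER_XSLT = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="level"/>
  <xsl:template match="/course">
    <html><body>
      <xsl:for-each select="section[@level=$level]">
        <p><xsl:value-of select="."/></p>
      </xsl:for-each>
    </body></html>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(FILTER_XSLT)
page = transform(COURSE, level=etree.XSLT.strparam("rule"))  # intermediate level
print(str(page))
```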

  8. Content-Aware DataGuide with Incremental Index Update using Frequently Used Paths

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.; Duhan, Neelam; Khattar, Priyanka

    2010-11-01

    The size of the WWW is increasing day by day. Due to the absence of structured data on the Web, it becomes very difficult for information retrieval tools to fully utilize Web information. As a solution to this problem, XML pages come into play, providing structural information to users to some extent. Without efficient indexes, query processing can be quite inefficient due to an exhaustive traversal of XML data. In this paper, an improved content-centric approach to the Content-Aware DataGuide, an indexing technique for XML databases, is proposed that uses frequently used paths from historical query logs to improve query performance. The index can be updated incrementally according to changes in the query workload and thus the overhead of reconstruction can be minimized. Frequently used paths are extracted by applying a sequential pattern mining algorithm to subsequent queries in the query workload. After this, the data structures are incrementally updated. This indexing technique proves to be efficient, as partial-matching queries can be executed efficiently and users now get more relevant documents in the results.
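
    A simplified sketch of the "frequently used paths" idea: count the path expressions found in a query log and keep those above a support threshold as candidates for the index. A full implementation would use a sequential pattern mining algorithm, as the abstract notes; simple counting is shown here only to illustrate the selection step.

```python
# Simplified sketch: select frequently used paths from a (made-up) query log
# using a plain frequency count and a support threshold.
from collections import Counter

query_log = [
    "/library/book/title",
    "/library/book/author/name",
    "/library/book/title",
    "/library/journal/title",
    "/library/book/title",
    "/library/book/author/name",
]

MIN_SUPPORT = 2
path_counts = Counter(query_log)
frequent_paths = {p for p, n in path_counts.items() if n >= MIN_SUPPORT}
print(sorted(frequent_paths))
# ['/library/book/author/name', '/library/book/title']
```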

  9. An XML-based interchange format for genotype-phenotype data.

    PubMed

    Whirl-Carrillo, M; Woon, M; Thorn, C F; Klein, T E; Altman, R B

    2008-02-01

    Recent advances in high-throughput genotyping and phenotyping have accelerated the creation of pharmacogenomic data. Consequently, the community requires standard formats to exchange large amounts of diverse information. To facilitate the transfer of pharmacogenomics data between databases and analysis packages, we have created a standard XML (eXtensible Markup Language) schema that describes both genotype and phenotype data as well as associated metadata. The schema accommodates information regarding genes, drugs, diseases, experimental methods, genomic/RNA/protein sequences, subjects, subject groups, and literature. The Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB; www.pharmgkb.org) has used this XML schema for more than 5 years to accept and process submissions containing more than 1,814,139 SNPs on 20,797 subjects using 8,975 assays. Although developed in the context of pharmacogenomics, the schema is of general utility for the exchange of genotype and phenotype data. We have written syntactic and semantic validators to check documents using this format. The schema and code for validation are available to the community at http://www.pharmgkb.org/schema/index.html (last accessed: 8 October 2007). (c) 2007 Wiley-Liss, Inc.

  10. Wave propagation in a strongly heterogeneous elastic porous medium: Homogenization of Biot medium with double porosities

    NASA Astrophysics Data System (ADS)

    Rohan, Eduard; Naili, Salah; Nguyen, Vu-Hieu

    2016-08-01

    We study wave propagation in an elastic porous medium saturated with a compressible Newtonian fluid. The porous network is interconnected whereby the pores are characterized by two very different characteristic sizes. At the mesoscopic scale, the medium is described using the Biot model, characterized by a high contrast in the hydraulic permeability and anisotropic elasticity, whereas the contrast in the Biot coupling coefficient is only moderate. Fluid motion is governed by the Darcy flow model extended by inertia terms and by the mass conservation equation. The homogenization method based on the asymptotic analysis is used to obtain a macroscopic model. To respect the high contrast in the material properties, they are scaled by the small parameter, which is involved in the asymptotic analysis and characterized by the size of the heterogeneities. Using the estimates of wavelengths in the double-porosity networks, it is shown that the macroscopic descriptions depend on the contrast in the static permeability associated with pores and micropores and on the frequency. Moreover, the microflow in the double porosity is responsible for fading memory effects via the macroscopic poroviscoelastic constitutive law.

  11. Optimization design and laser damage threshold analysis of pulse compression multilayer dielectric gratings

    NASA Astrophysics Data System (ADS)

    Fan, Shuwei; Bai, Liang; Chen, Nana

    2016-08-01

    As one of the key elements of high-power laser systems, the pulse compression multilayer dielectric grating is required to provide broader bandwidth, higher diffraction efficiency and a higher damage threshold. In this paper, the multilayer dielectric film and the multilayer dielectric gratings (MDG) were designed by the eigen-matrix method and optimized with the help of a genetic algorithm and the rigorous coupled-wave method. The reflectivity was close to 100% and the bandwidth was over 250 nm, twice that of the unoptimized film structure. Simulation software for the standing-wave field distribution within the MDG was developed and the electric field of the MDG was calculated. The key parameters affecting the electric field distribution were also studied.

  12. Designing a new structure for storing nuclear data: Progress of the Working Party for Evaluation Cooperation subgroup #38

    DOE PAGES

    Mattoon, C. M.; Beck, B. R.

    2015-12-24

    An international effort is underway to design a new structure for storing and using nuclear reaction data, with the goal of eventually replacing the current standard, ENDF-6. This effort, organized by the Working Party for Evaluation Cooperation, was initiated in 2012 and has resulted in a list of requirements and specifications for how the proposed new structure shall perform. The new structure will take advantage of new developments in computational tools, using a nested hierarchy to store data. Here, the structure can be stored in text form (such as an XML file) for human readability and data sharing, or it can be stored in binary to optimize data access. In this paper, we present the progress towards completing the requirements, specifications and implementation of the new structure.

  13. Designing a new structure for storing nuclear data: Progress of the Working Party for Evaluation Cooperation subgroup #38

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mattoon, C. M.; Beck, B. R.

    An international effort is underway to design a new structure for storing and using nuclear reaction data, with the goal of eventually replacing the current standard, ENDF-6. This effort, organized by the Working Party for Evaluation Cooperation, was initiated in 2012 and has resulted in a list of requirements and specifications for how the proposed new structure shall perform. The new structure will take advantage of new developments in computational tools, using a nested hierarchy to store data. Here, the structure can be stored in text form (such as an XML file) for human readability and data sharing, or it can be stored in binary to optimize data access. In this paper, we present the progress towards completing the requirements, specifications and implementation of the new structure.

  14. Final Report for DE-AR0000708

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donohue, Marc; Aranovich, Gregory; Wang, Chao

    This project determined the effect of adsorption compression on the rates of catalytic chemical reactions. It was shown that in regions of strong adsorption compression there is a dramatic increase in the rate of catalytic chemical reaction. Experiments focused on the conversion of NO to molecular nitrogen and oxygen. Data analysis techniques were developed to allow interpretation of experimental data and prediction of conditions for optimal reaction rates.

  15. Discontinuity minimization for omnidirectional video projections

    NASA Astrophysics Data System (ADS)

    Alshina, Elena; Zakharchenko, Vladyslav

    2017-09-01

    Advances in display technologies, both for head-mounted devices and television panels, demand a source-signal resolution beyond 4K for virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the conversion stage from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be defined to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.

  16. Analysis, Design and Optimization of Non-Cylindrical Fuselage for Blended-Wing-Body (BWB) Vehicle

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Sobieszczanski-Sobieski, J.; Kosaka, I.; Quinn, G.; Charpentier, C.

    2002-01-01

    Initial results of an investigation towards finding an efficient non-cylindrical fuselage configuration for a conceptual blended-wing-body flight vehicle are presented. A simplified 2-D beam-column analysis and optimization was performed first. Then a set of detailed finite element models of deep sandwich panel and ribbed shell construction concepts were analyzed and optimized. Generally, these concepts with flat surfaces were found to be structurally inefficient at withstanding internal pressure and the resultant compressive loads simultaneously. Alternatively, a set of multi-bubble fuselage configuration concepts was developed for balancing the internal cabin pressure load efficiently through membrane stress in the inner stiffened shell and inter-cabin walls. An outer ribbed shell was designed to prevent buckling due to external resultant compressive loads. Initial results from finite element analysis appear to be promising. These concepts should be developed further to exploit their inherent structural efficiency.

  17. Stacking-sequence optimization for buckling of laminated plates by integer programming

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Walsh, Joanne L.

    1991-01-01

    Integer-programming formulations for the design of symmetric and balanced laminated plates under biaxial compression are presented. Both maximization of buckling load for a given total thickness and the minimization of total thickness subject to a buckling constraint are formulated. The design variables that define the stacking sequence of the laminate are zero-one integers. It is shown that the formulation results in a linear optimization problem that can be solved on readily available software. This is in contrast to the continuous case, where the design variables are the thicknesses of layers with specified ply orientations, and the optimization problem is nonlinear. Constraints on the stacking sequence such as a limit on the number of contiguous plies of the same orientation and limits on in-plane stiffnesses are easily accommodated. Examples are presented for graphite-epoxy plates under uniaxial and biaxial compression using a commercial software package based on the branch-and-bound algorithm.

  18. Rate and power efficient image compressed sensing and transmission

    NASA Astrophysics Data System (ADS)

    Olanigan, Saheed; Cao, Lei; Viswanathan, Ramanarayanan

    2016-01-01

    This paper presents a suboptimal quantization and transmission scheme for multiscale block-based compressed sensing images over wireless channels. The proposed method includes two stages: dealing with quantization distortion and transmission errors. First, given the total transmission bit rate, the optimal number of quantization bits is assigned to the sensed measurements in different wavelet sub-bands so that the total quantization distortion is minimized. Second, given the total transmission power, the energy is allocated to different quantization bit layers based on their different error sensitivities. The method of Lagrange multipliers with Karush-Kuhn-Tucker conditions is used to solve both optimization problems, for which the first problem can be solved with relaxation and the second problem can be solved completely. The effectiveness of the scheme is illustrated through simulation results, which have shown up to 10 dB improvement over the method without the rate and power optimization in medium and low signal-to-noise ratio cases.

  19. Adaptive Origami for Efficiently Folded Structures

    DTIC Science & Technology

    2016-02-01

    design optimization to find optimal origami patterns for in-plane compression. 3. Self-folding and programmable material systems were developed for... (2014, 1st place in the Midwest and 2nd place in the National 2014 SAMPE student research symposium). • Design of self-folding and programmable material systems: Nafion SMP Programming: To integrate active materials into origami, mechanical analysis and optimization tools were applied to the...

  20. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
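
    The "one number per parameter" compression can be illustrated with a toy sketch of linear score compression, t = ∇μᵀ C⁻¹ (d − μ), evaluated at a fiducial parameter value; the one-parameter Gaussian data model and all numerical values below are invented purely for illustration and are not the analysis used in the paper.

```python
# Toy sketch of linear score compression to one summary per parameter:
# t = dmu^T C^{-1} (d - mu0), evaluated at a fiducial parameter value.
# The one-parameter Gaussian data model is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_data = 500

def mean_model(theta):
    """Mean of the data vector as a function of a single parameter theta."""
    x = np.linspace(0.0, 1.0, n_data)
    return theta * x

theta_fid = 1.0
C = 0.1 * np.eye(n_data)                   # known data covariance (diagonal here)
Cinv = np.linalg.inv(C)

# Derivative of the mean w.r.t. the parameter at the fiducial point.
eps = 1e-4
dmu = (mean_model(theta_fid + eps) - mean_model(theta_fid - eps)) / (2 * eps)

# Simulate one "observed" data vector and compress it to a single number.
data = mean_model(1.2) + rng.multivariate_normal(np.zeros(n_data), C)
t = dmu @ Cinv @ (data - mean_model(theta_fid))
print("compressed summary:", t)
```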

  1. NASA Constellation Distributed Simulation Middleware Trade Study

    NASA Technical Reports Server (NTRS)

    Hasan, David; Bowman, James D.; Fisher, Nancy; Cutts, Dannie; Cures, Edwin Z.

    2008-01-01

    This paper presents the results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.

  2. A standard format and a graphical user interface for spin system specification.

    PubMed

    Biternas, A G; Charnock, G T P; Kuprov, Ilya

    2014-03-01

    We introduce a simple and general XML format for spin system description that is the result of extensive consultations within Magnetic Resonance community and unifies under one roof all major existing spin interaction specification conventions. The format is human-readable, easy to edit and easy to parse using standard XML libraries. We also describe a graphical user interface that was designed to facilitate construction and visualization of complicated spin systems. The interface is capable of generating input files for several popular spin dynamics simulation packages. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Beam steering performance of compressed Luneburg lens based on transformation optics

    NASA Astrophysics Data System (ADS)

    Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun

    2018-06-01

    In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.

  4. COSAL: A black-box compressible stability analysis code for transition prediction in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A fast computer code COSAL for transition prediction in three dimensional boundary layers using compressible stability analysis is described. The compressible stability eigenvalue problem is solved using a finite difference method, and the code is a black box in the sense that no guess of the eigenvalue is required from the user. Several optimization procedures were incorporated into COSAL to calculate integrated growth rates (N factor) for transition correlation for swept and tapered laminar flow control wings using the well known e to the Nth power method. A user's guide to the program is provided.

  5. An iterative forward analysis technique to determine the equation of state of dynamically compressed materials

    DOE PAGES

    Ali, S. J.; Kraus, R. G.; Fratanduono, D. E.; ...

    2017-05-18

    Here, we developed an iterative forward analysis (IFA) technique with the ability to use hydrocode simulations as a fitting function for analysis of dynamic compression experiments. The IFA method optimizes over parameterized quantities in the hydrocode simulations, breaking the degeneracy of contributions to the measured material response. Velocity profiles from synthetic data generated using a hydrocode simulation are analyzed as a first-order validation of the technique. We also analyze multiple magnetically driven ramp compression experiments on copper and compare with more conventional techniques. Excellent agreement is obtained in both cases.

  6. The effect of different torque wrenches on rotational stiffness in compressive femoral nails: a biomechanical study.

    PubMed

    Karaarslan, A A; Acar, N

    2018-02-01

    Rotation instability and locking screws failure are common problems. We aimed to determine optimal torque wrench offering maximum rotational stiffness without locking screw failure. We used 10 conventional compression nails, 10 novel compression nails and 10 interlocking nails with 30 composite femurs. We examined rotation stiffness and fracture site compression value by load cell with 3, 6 and 8 Nm torque wrenches using torsion apparatus with a maximum torque moment of 5 Nm in both directions. Rotational stiffness of composite femur-nail constructs was calculated. Rotational stiffness of composite femur-compression nail constructs compressed by 6 Nm torque wrench was 3.27 ± 1.81 Nm/angle (fracture site compression: 1588 N) and 60% more than that compressed with 3 Nm torque wrench (advised previously) with 2.04 ± 0.81 Nm/angle (inter fragmentary compression: 818 N) (P = 0.000). Rotational stiffness of composite-femur-compression nail constructs compressed by 3 Nm torque wrench was 2.04 ± 0.81 Nm/angle (fracture site compression: 818 N) and 277% more than that of interlocking nail with 0.54 ± 0.08 Nm/angle (fracture site compression: 0 N) (P = 0.000). Rotational stiffness and fracture site compression value produced by 3 Nm torque wrench was not satisfactory. To obtain maximum rotational stiffness and fracture site compression value without locking screw failure, 6 Nm torque wrench in compression nails and 8 Nm torque wrench in novel compression nails should be used.

  7. Quasi-Static Compression and Low-Velocity Impact Behavior of Tri-Axial Bio-Composite Structural Panels Using a Spherical Head

    PubMed Central

    Li, Jinghao; Hunt, John F; Gong, Shaoqin; Cai, Zhiyong

    2017-01-01

    This paper presents experimental results of both quasi-static compression and low-velocity impact behavior for tri-axial bio-composite structural panels using a spherical load head. Panels were made having different core and face configurations. The results showed that panels made having either carbon fiber fabric composite faces or a foam-filled core had significantly improved impact and compressive performance over panels without either. Different localized impact responses were observed based on the location of the compression or impact relative to the tri-axial structural core; the core with a smaller structural element had better impact performance. Furthermore, during the early contact phase for both quasi-static compression and low-velocity impact tests, the panels with the same configuration had similar load-displacement responses. The experimental results show basic compression data could be used for the future design and optimization of tri-axial bio-composite structural panels for potential impact applications. PMID:28772542

  8. Space charge effects and aberrations on electron pulse compression in a spherical electrostatic capacitor.

    PubMed

    Yu, Lei; Li, Haibo; Wan, Weishi; Wei, Zheng; Grzelakowski, Krzysztof P; Tromp, Rudolf M; Tang, Wen-Xin

    2017-12-01

    The effects of space charge, aberrations and relativity on temporal compression are investigated for a compact spherical electrostatic capacitor (α-SDA). By employing the three-dimensional (3D) field simulation and the 3D space charge model based on numerical General Particle Tracer and SIMION, we map the compression efficiency for a wide range of initial beam size and single-pulse electron number and determine the optimum conditions of electron pulses for the most effective compression. The results demonstrate that both space charge effects and aberrations prevent the compression of electron pulses into the sub-ps region if the electron number and the beam size are not properly optimized. Our results suggest that α-SDA is an effective compression approach for electron pulses under the optimum conditions. It may serve as a potential key component in designing future time-resolved electron sources for electron diffraction and spectroscopy experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Word aligned bitmap compression method, data structure, and apparatus

    DOEpatents

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
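
    The word-aligned encoding principle can be sketched as follows; this is a simplified illustration of the run-length scheme (31-bit groups packed into 32-bit literal or fill words), not the patented implementation.

```python
# Simplified sketch of the word-aligned hybrid (WAH) idea for 32-bit words:
# the bitmap is split into 31-bit groups; runs of identical all-0 or all-1
# groups become "fill" words, everything else becomes "literal" words.
def wah_encode(bits: list) -> list:
    """Encode a list of 0/1 values into 32-bit WAH words (as Python ints)."""
    words = []
    # Pad to a multiple of 31 bits and split into groups.
    padded = bits + [0] * (-len(bits) % 31)
    groups = [padded[i:i + 31] for i in range(0, len(padded), 31)]

    run_value, run_length = None, 0

    def flush_run():
        nonlocal run_value, run_length
        if run_length:
            # Fill word: MSB=1, next bit = fill value, low 30 bits = run length.
            words.append((1 << 31) | (run_value << 30) | run_length)
            run_value, run_length = None, 0

    for g in groups:
        if all(b == 0 for b in g) or all(b == 1 for b in g):
            value = g[0]
            if run_value not in (None, value):
                flush_run()
            run_value, run_length = value, run_length + 1
        else:
            flush_run()
            literal = 0
            for b in g:                      # literal word: MSB=0, 31 data bits
                literal = (literal << 1) | b
            words.append(literal)
    flush_run()
    return words

print([hex(w) for w in wah_encode([1, 0, 1] + [0] * 93 + [1] * 10)])
```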

  10. Blind compressed sensing image reconstruction based on alternating direction method

    NASA Astrophysics Data System (ADS)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of how to reconstruct the original image under the condition of unknown sparse basis, this paper proposes an image reconstruction method based on blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is solved by the alternative minimization method. The proposed method solves the problem that the sparse basis in compressed sensing is difficult to represent, which restrains the noise and improves the quality of reconstructed image. This method ensures that the blind compressed sensing theory has a unique solution and can recover the reconstructed original image signal from a complex environment with a stronger self-adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.

  11. Experimental spinal cord trauma: a review of mechanically induced spinal cord injury in rat models.

    PubMed

    Abdullahi, Dauda; Annuar, Azlina Ahmad; Mohamad, Masro; Aziz, Izzuddin; Sanusi, Junedah

    2017-01-01

    It has been shown that animal spinal cord compression (using methods such as clips, balloons, spinal cord strapping, or calibrated forceps) mimics the persistent spinal canal occlusion that is common in human spinal cord injury (SCI). These methods can be used to investigate the effects of compression or to know the optimal timing of decompression (as duration of compression can affect the outcome of pathology) in acute SCI. Compression models involve prolonged cord compression and are distinct from contusion models, which apply only transient force to inflict an acute injury to the spinal cord. While the use of forceps to compress the spinal cord is a common choice due to it being inexpensive, it has not been critically assessed against the other methods to determine whether it is the best method to use. To date, there is no available review specifically focused on the current compression methods of inducing SCI in rats; thus, we performed a systematic and comprehensive publication search to identify studies on experimental spinalization in rat models, and this review discusses the advantages and limitations of each method.

  12. Optimal Compression of Floating-Point Astronomical Images Without Significant Loss of Information

    NASA Technical Reports Server (NTRS)

    Pence, William D.; White, R. L.; Seaman, R.

    2010-01-01

    We describe a compression method for floating-point astronomical images that gives compression ratios of 6 - 10 while still preserving the scientifically important information in the image. The pixel values are first preprocessed by quantizing them into scaled integer intensity levels, which removes some of the uncompressible noise in the image. The integers are then losslessly compressed using the fast and efficient Rice algorithm and stored in a portable FITS format file. Quantizing an image more coarsely gives greater image compression, but it also increases the noise and degrades the precision of the photometric and astrometric measurements in the quantized image. Dithering the pixel values during the quantization process greatly improves the precision of measurements in the more coarsely quantized images. We perform a series of experiments on both synthetic and real astronomical CCD images to quantitatively demonstrate that the magnitudes and positions of stars in the quantized images can be measured with the predicted amount of precision. In order to encourage wider use of these image compression methods, we have made available a pair of general-purpose image compression programs, called fpack and funpack, which can be used to compress any FITS format image.
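
    The quantize-with-dither step can be sketched as below, assuming a simple subtractive dither; this is only an illustration of the idea, not the actual fpack/funpack implementation (which also applies Rice compression to the resulting integers).

```python
# Hedged sketch of quantization with subtractive dithering: floating-point
# pixels are scaled to integers with a reproducible dither, then restored
# using the same dither stream. Not the actual fpack/funpack code.
import numpy as np

rng = np.random.default_rng(42)

def quantize(pixels, q_step, seed=1):
    dither = np.random.default_rng(seed).random(pixels.shape)   # reproducible dither
    return np.floor(pixels / q_step + dither).astype(np.int32)

def dequantize(ints, q_step, seed=1):
    dither = np.random.default_rng(seed).random(ints.shape)     # same dither stream
    return (ints - dither + 0.5) * q_step

image = rng.normal(1000.0, 5.0, size=(64, 64)).astype(np.float64)
q = quantize(image, q_step=1.0)
restored = dequantize(q, q_step=1.0)
print("RMS quantization error:", np.sqrt(np.mean((image - restored) ** 2)))
```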

  13. Characterization of compression behaviors of fully covered biodegradable polydioxanone biliary stent for human body: A numerical approach by finite element model.

    PubMed

    Liu, Yanhui; Zhang, Peihua

    2016-09-01

    This paper presents a study of the compression behaviors of fully covered biodegradable polydioxanone biliary stents (FCBPBs) developed for the human body, using the finite element method. To investigate the relationship between the compression force and the structure parameters (monofilament diameter and braid-pin number), nine numerical models based on actual biliary stents were established. The compression forces derived from experiment and simulation are in good agreement, indicating that the simulation results provide a useful reference for the investigation of biliary stents. The stress distribution on the FCBPBs was studied to optimize their structure. In addition, the plastic dissipation and plastic strain of the FCBPBs were obtained from the compression simulation, revealing the effect of the structure parameters on the tolerance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Loss of interface pressure in various compression bandage systems over seven days.

    PubMed

    Protz, Kerstin; Heyer, Kristina; Verheyen-Cronau, Ida; Augustin, Matthias

    2014-01-01

    Manufacturers' instructions of multi-component compression bandage systems inform that these products can remain up to 7 days during the therapy of venous leg ulcer. This implies that the pressure needed will be sustained during this time. The present research investigated the persistence of pressure of compression systems over 7 days. All 6 compression systems available in Germany at the time of the trial were tested on 35 volunteering persons without signs of venous leg disease. Bandaging with short-stretch bandages was included for comparison. Pressure was measured by using PicoPress®. Initially, all products showed sufficient resting pressure of 40 mm Hg checked with a pressure monitor, except for one system in which the pressure fell by at least 23.8%, the maximum being 47.5% over a period of 7 days. The currently available compression systems are not fit to keep the required pressure. Optimized products need to be developed.

  15. Compressed glassy carbon: An ultrastrong and elastic interpenetrating graphene network

    PubMed Central

    Hu, Meng; He, Julong; Zhao, Zhisheng; Strobel, Timothy A.; Hu, Wentao; Yu, Dongli; Sun, Hao; Liu, Lingyu; Li, Zihe; Ma, Mengdong; Kono, Yoshio; Shu, Jinfu; Mao, Ho-kwang; Fei, Yingwei; Shen, Guoyin; Wang, Yanbin; Juhl, Stephen J.; Huang, Jian Yu; Liu, Zhongyuan; Xu, Bo; Tian, Yongjun

    2017-01-01

    Carbon’s unique ability to have both sp2 and sp3 bonding states gives rise to a range of physical attributes, including excellent mechanical and electrical properties. We show that a series of lightweight, ultrastrong, hard, elastic, and conductive carbons are recovered after compressing sp2-hybridized glassy carbon at various temperatures. Compression induces the local buckling of graphene sheets through sp3 nodes to form interpenetrating graphene networks with long-range disorder and short-range order on the nanometer scale. The compressed glassy carbons have extraordinary specific compressive strengths—more than two times that of commonly used ceramics—and simultaneously exhibit robust elastic recovery in response to local deformations. This type of carbon is an optimal ultralight, ultrastrong material for a wide range of multifunctional applications, and the synthesis methodology demonstrates potential to access entirely new metastable materials with exceptional properties. PMID:28630918

  16. Quantum autoencoders for efficient compression of quantum data

    NASA Astrophysics Data System (ADS)

    Romero, Jonathan; Olson, Jonathan P.; Aspuru-Guzik, Alan

    2017-12-01

    Classical autoencoders are neural networks that can learn efficient low-dimensional representations of data in higher-dimensional space. The task of an autoencoder is, given an input x, to map x to a lower dimensional point y such that x can likely be recovered from y. The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input. Inspired by this idea, we introduce the model of a quantum autoencoder to perform similar tasks on quantum data. The quantum autoencoder is trained to compress a particular data set of quantum states, where a classical compression algorithm cannot be employed. The parameters of the quantum autoencoder are trained using classical optimization algorithms. We show an example of a simple programmable circuit that can be trained as an efficient autoencoder. We apply our model in the context of quantum simulation to compress ground states of the Hubbard model and molecular Hamiltonians.

  17. [The optimization of chondromalacia patellae diagnosis by NMR tomography. The use of an apparatus for cartilage compression].

    PubMed

    König, H; Dinkelaker, F; Wolf, K J

    1991-08-01

    The aim of this study was to improve the MRI diagnosis of CMP, with special reference to the early stages and accurate staging. For this purpose, the retropatellar cartilage was examined by MRI while compression was carried out, using 21 patients and five normal controls. The compression was applied by means of a specially constructed device. Changes in cartilage thickness and signal intensity were evaluated quantitatively during FLASH and FISP sequences. In all patients the results of arthroscopies were available and in 12 patients, cartilage biopsies had been obtained. CMP stage I could be distinguished from normal cartilage by reduction in cartilage thickness and signal increase from the oedematous cartilage during compression. In CMP stages II/III, abnormal protein deposition of collagen type I could be demonstrated by its compressibility. In stages III and IV, the method does not add any significant additional information.

  18. Numerical estimation of deformation energy of selected bulk oilseeds in compression loading

    NASA Astrophysics Data System (ADS)

    Demirel, C.; Kabutey, A.; Herak, D.; Gurdil, G. A. K.

    2017-09-01

    This paper aimed at the determination of the deformation energy of some bulk oilseeds or kernels, namely oil palm, sunflower, rape and flax, in linear pressing, applying the trapezoidal rule to the area under the force-deformation curve. The bulk samples were measured at an initial pressing height of 60 mm in a vessel of 60 mm diameter, where they were compressed in a universal compression machine at a maximum force of 200 kN and a speed of 5 mm/min. Based on the compression test, the optimal deformation energy for recovering the oil was observed at a force of 163 kN, where there was no seed/kernel cake ejection, in comparison to the initial maximum force used, particularly for rape and flax bulk oilseeds. This information is needed for analyzing the energy efficiency of the non-linear compression process involving a mechanical screw press or expeller.
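
    The energy calculation itself is straightforward to sketch: the deformation energy is the area under the measured force-deformation curve, estimated with the trapezoidal rule. The force values below are invented sample data, not measurements from the study.

```python
# Minimal sketch: deformation energy as the trapezoidal-rule area under a
# force-deformation curve. The sample values are invented for illustration.
import numpy as np

deformation_mm = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])   # mm
force_kN = np.array([0.0, 8.0, 25.0, 60.0, 110.0, 163.0])       # kN

# Integrate force over deformation; 1 kN * 1 mm = 1 J
energy_J = np.trapz(force_kN, deformation_mm)
print(f"deformation energy ≈ {energy_J:.1f} J")
```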

  19. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
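
    The Gray code mapping applied to the source values is simple to sketch: adjacent integers differ in only one bit, which is what strengthens the bit-plane correlation with the side information.

```python
# Small sketch of binary <-> Gray code conversion, the mapping the paper
# applies before bit-plane extraction.
def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = g
    while g:
        g >>= 1
        n ^= g
    return n

for value in range(8):
    g = binary_to_gray(value)
    assert gray_to_binary(g) == value
    print(f"{value:03b} -> {g:03b}")
```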

  20. On Gravitational Radiation: A Nonlinear Wave Theory in a Viscoelastic Kerr-Lambda Spacetime

    NASA Astrophysics Data System (ADS)

    Gamble, Ronald

    This project presents the experimental results concerning the mix design and the fresh and hardened properties of an ultra-high-strength concrete that has already been developed for high-performance construction applications but now needs to be evaluated for a 3D printing process. The concrete is designed to be extruded through a nozzle and pump system, and to have layers printed to analyze deformation within printed layers. The key factors for printable concrete are the ability to be extruded through a pump and nozzle (flowability) and buildability. The flow of mortar will be studied by looking at the rheological properties of the mix and assessing the acceptable range of shear strength. Three different water-to-cement ratios and varying dosages of superplasticizers were incorporated to optimize a workable mortar/concrete mix to be applied for 3D printing. A Brookfield DV-III Ultra programmable rheometer was used to determine the viscosity and yield strength of the mortar mixes; these values were used to calculate the shear strength of the printable concrete. Compressive strengths of optimal mixtures were taken to assess the feasibility of 3D-printed concrete as compared to traditional means. The compression test was conducted on a High Capacity Series Compression Testing Machine with 2" x 2" mortar cubes. The results indicated that mortars with shear strengths in the range of 0.3-0.9 kPa could be used in a 3D printer. The compressive strength of the concrete made with a 25% water/cement ratio and 10% superplasticizer dosage reached 62.8 MPa, which qualifies it as ultra-high-strength mortar. An optimum mix will be validated by printing the most filaments until deformation occurs. The end goal of this project is to develop an optimal concrete to produce the strength needed for 3D-printed concrete. Using our predesigned ultra-high-strength concrete mix ingredients, we will optimize that mix to have the same performance characteristics and be used in 3D printing applications.

  1. SeqHound: biological sequence and structure database as a platform for bioinformatics research

    PubMed Central

    2002-01-01

    Background SeqHound has been developed as an integrated biological sequence, taxonomy, annotation and 3-D structure database system. It provides a high-performance server platform for bioinformatics research in a locally-hosted environment. Results SeqHound is based on the National Center for Biotechnology Information data model and programming tools. It offers daily updated contents of all Entrez sequence databases in addition to 3-D structural data and information about sequence redundancies, sequence neighbours, taxonomy, complete genomes, functional annotation including Gene Ontology terms and literature links to PubMed. SeqHound is accessible via a web server through a Perl, C or C++ remote API or an optimized local API. It provides functionality necessary to retrieve specialized subsets of sequences, structures and structural domains. Sequences may be retrieved in FASTA, GenBank, ASN.1 and XML formats. Structures are available in ASN.1, XML and PDB formats. Emphasis has been placed on complete genomes, taxonomy, domain and functional annotation as well as 3-D structural functionality in the API, while fielded text indexing functionality remains under development. SeqHound also offers a streamlined WWW interface for simple web-user queries. Conclusions The system has proven useful in several published bioinformatics projects such as the BIND database and offers a cost-effective infrastructure for research. SeqHound will continue to develop and be provided as a service of the Blueprint Initiative at the Samuel Lunenfeld Research Institute. The source code and examples are available under the terms of the GNU public license at the Sourceforge site http://sourceforge.net/projects/slritools/ in the SLRI Toolkit. PMID:12401134

  2. Dcs Data Viewer, an Application that Accesses ATLAS DCS Historical Data

    NASA Astrophysics Data System (ADS)

    Tsarouchas, C.; Schlenker, S.; Dimitrov, G.; Jahn, G.

    2014-06-01

    The ATLAS experiment at CERN is one of the four Large Hadron Collider experiments. The Detector Control System (DCS) of ATLAS is responsible for the supervision of the detector equipment, the reading of operational parameters, the propagation of alarms and the archiving of important operational data in a relational database (DB). DCS Data Viewer (DDV) is an application that provides access to the ATLAS DCS historical data through a web interface. Its design is structured using a client-server architecture. The Python-based server connects to the DB and fetches the data using optimized SQL requests. It communicates with the outside world by accepting HTTP requests and can be used standalone. The client is an AJAX (Asynchronous JavaScript and XML) interactive web application developed under the Google Web Toolkit (GWT) framework. Its web interface is user friendly, platform and browser independent. The selection of metadata is done via a column-tree view or with a powerful search engine. The final visualization of the data is done using Java applets or JavaScript applications as plugins. The default output is a value-over-time chart, but other types of output, such as tables, ASCII or ROOT files, are supported too. Excessive access or malicious use of the database is prevented by a dedicated protection mechanism, allowing the exposure of the tool to hundreds of inexperienced users. The current configuration of the client and of the outputs can be saved in an XML file. Protection against web security attacks is foreseen and authentication constraints have been taken into account, allowing the exposure of the tool to hundreds of users worldwide. Due to its flexible interface and its generic and modular approach, DDV could easily be used for other experiment control systems.

  3. Solidification/stabilization of ASR fly ash using Thiomer material: Optimization of compressive strength and heavy metals leaching.

    PubMed

    Baek, Jin Woong; Choi, Angelo Earvin Sy; Park, Hung Suck

    2017-12-01

    The optimization of a novel and eco-friendly construction material, Thiomer, was investigated for the solidification/stabilization of automobile shredded residue (ASR) fly ash. A D-optimal mixture design was used to evaluate and optimize the compressive strength and heavy metals leaching by varying Thiomer (20-40 wt%), ASR fly ash (30-50 wt%) and sand (20-40 wt%). Analysis of variance was utilized to determine the level of significance of each process parameter and of their interactions. The microstructure of the solidified materials was examined with field emission scanning electron microscopy and energy-dispersive X-ray spectroscopy, which confirmed that Thiomer successfully solidified the ASR fly ash, as indicated by reduced pores and gaps in comparison with untreated ASR fly ash. X-ray diffraction showed that the material enclosing the ASR fly ash primarily contained sulfur-associated crystalline complexes. Results indicated that the optimal conditions of 30 wt% Thiomer, 30 wt% ASR fly ash and 40 wt% sand reached a compressive strength of 54.9 MPa. Under the optimum conditions for heavy metals leaching, 0.0078 mg/L Pb, 0.0260 mg/L Cr, 0.0007 mg/L Cd, 0.0020 mg/L Cu, 0.1027 mg/L Fe, 0.0046 mg/L Ni and 0.0920 mg/L Zn were leached out, which is environmentally safe as these values are substantially lower than the Korean standard leaching requirements. The results also showed that Thiomer is superior to the commonly used Portland cement as a binding material, which confirms its potential as an innovative approach to simultaneously produce durable concrete and satisfactorily pass strict environmental regulations on heavy metals leaching. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Predicting the sinkage of a moving tracked mining vehicle using a new rheological formulation for soft deep-sea sediment

    NASA Astrophysics Data System (ADS)

    Xu, Feng; Rao, Qiuhua; Ma, Wenbo

    2018-03-01

    The sinkage of a moving tracked mining vehicle is greatly affected by the combined compression-shear rheological properties of soft deep-sea sediments. For test purposes, the best sediment simulant is prepared based on soft deep-sea sediment from a C-C polymetallic nodule mining area in the Pacific Ocean. Compressive creep tests and shear creep tests are combined to obtain compressive and shear rheological parameters, which are used to establish a combined compression-shear rheological constitutive model and a compression-sinkage rheological constitutive model. The combined compression-shear rheological sinkage of the tracked mining vehicle at different speeds is calculated using the RecurDyn software with a self-programmed subroutine that implements the combined compression-shear rheological constitutive model. The model results are compared with shear rheological sinkage and ordinary sinkage (without consideration of rheological properties). These results show that the combined compression-shear rheological constitutive model must be taken into account when calculating the sinkage of a tracked mining vehicle. The combined compression-shear rheological sinkage decreases with vehicle speed and is the largest among the three types of sinkage. The developed subroutine in the RecurDyn software can be used to study the performance and structural optimization of moving tracked mining vehicles.

  5. Automating Data Submission to a National Archive

    NASA Astrophysics Data System (ADS)

    Work, T. T.; Chandler, C. L.; Groman, R. C.; Allison, M. D.; Gegg, S. R.; Biological; Chemical Oceanography Data Management Office

    2010-12-01

    In late 2006, the U.S. National Science Foundation (NSF) funded the Biological and Chemical Oceanography Data Management Office (BCO-DMO) at Woods Hole Oceanographic Institution (WHOI) to work closely with investigators to manage oceanographic data generated from their research projects. One of the final data management tasks is to ensure that the data are permanently archived at the U.S. National Oceanographic Data Center (NODC) or another appropriate national archiving facility. In the past, BCO-DMO submitted data to NODC as an email with attachments including a PDF file (a manually completed metadata record) and one or more data files. This method is no longer feasible given the rate at which data sets are contributed to BCO-DMO. Working with collaborators at NODC, a more streamlined and automated workflow was developed to keep up with the increased volume of data that must be archived at NODC. We will describe our new workflow: a semi-automated approach for contributing data to NODC that includes a Federal Geographic Data Committee (FGDC) compliant Extensible Markup Language (XML) metadata file accompanied by comma-delimited data files. The FGDC XML file is populated from information stored in a MySQL database. A crosswalk described by an Extensible Stylesheet Language Transformation (XSLT) is used to transform the XML-formatted MySQL result set to an FGDC-compliant XML metadata file. To ensure data integrity, the MD5 algorithm is used to generate a checksum and manifest of the files submitted to NODC for permanent archive. The revised system supports preparation of detailed, standards-compliant metadata that facilitate data sharing and enable accurate reuse of multidisciplinary information. The approach is generic enough to be adapted for use by other data management groups.
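
    Of the steps above, the checksum manifest is the most mechanical; a minimal sketch of how it might be produced follows (an assumption-laden illustration, not BCO-DMO's actual code, and the manifest file name is hypothetical):

      # Build an MD5 checksum manifest for the files in a submission directory.
      import hashlib
      from pathlib import Path

      def md5sum(path, chunk_size=1 << 20):
          """Return the hex MD5 digest of a file, read in chunks."""
          digest = hashlib.md5()
          with open(path, "rb") as handle:
              for chunk in iter(lambda: handle.read(chunk_size), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      def write_manifest(submission_dir, manifest_name="MANIFEST.md5"):
          """Write 'checksum  filename' lines for every file in the submission."""
          entries = []
          for path in sorted(Path(submission_dir).iterdir()):
              if path.is_file() and path.name != manifest_name:
                  entries.append(f"{md5sum(path)}  {path.name}")
          (Path(submission_dir) / manifest_name).write_text("\n".join(entries) + "\n")
          return entries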

  6. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application.

    PubMed

    Peissig, Peggy; Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-09-13

    The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. ©Peggy Peissig, Kelsey M Schwei, Christopher Kadolph, Joseph Finamore, Efrain Cancel, Catherine A McCarty, Asha Okorie, Kate L Thomas, Jennifer Allen Pacheco, Jyotishman Pathak, Stephen B Ellis, Joshua C Denny, Luke V Rasmussen, Gerard Tromp, Marc S Williams, Tamara R Vrabec, Murray H Brilliant. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.09.2017.

  7. PRIDE Inspector Toolsuite: Moving Toward a Universal Visualization Tool for Proteomics Data Standard Formats and Quality Assessment of ProteomeXchange Datasets.

    PubMed

    Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; Del-Toro, Noemi; Dianes, Jose A; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2016-01-01

    The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data before data submission or already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and a direct access to private (password protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX "complete" submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak lists formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  8. Semantic e-Science: From Microformats to Models

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Freemantle, J. R.; Aldridge, K. D.

    2009-05-01

    A platform has been developed to transform semi-structured ASCII data into a representation based on the eXtensible Markup Language (XML). A subsequent transformation allows the XML-based representation to be rendered in the Resource Description Format (RDF). Editorial metadata, expressed as external annotations (via XML Pointer Language), also survives this transformation process (e.g., Lumb et al., http://dx.doi.org/10.1016/j.cageo.2008.03.009). Because the XML-to-RDF transformation uses XSLT (eXtensible Stylesheet Language Transformations), semantic microformats ultimately encode the scientific data (Lumb & Aldridge, http://dx.doi.org/10.1109/HPCS.2006.26). In building the relationship-centric representation in RDF, a Semantic Model of the scientific data is extracted. The systematic enhancement in the expressivity and richness of the scientific data results in representations of knowledge that are readily understood and manipulated by intelligent software agents. Thus scientists are able to draw upon various resources within and beyond their discipline to use in their scientific applications. Since the resulting Semantic Models are independent conceptualizations of the science itself, the representation of scientific knowledge and interaction with the same can stimulate insight from different perspectives. Using the Global Geodynamics Project (GGP) for the purpose of illustration, the introduction of GGP microformats enables a Semantic Model for the GGP that can be semantically queried (e.g., via SPARQL, http://www.w3.org/TR/rdf-sparql-query). Although the present implementation uses the Open Source Redland RDF Libraries (http://librdf.org/), the approach is generalizable to other platforms and to projects other than the GGP (e.g., Baker et al., Informatics and the 2007-2008 Electronic Geophysical Year, Eos Trans. Am. Geophys. Un., 89(48), 485-486, 2008).
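
    As a small illustration of the kind of semantic query the abstract mentions (using rdflib rather than the Redland libraries cited in the work, and with a hypothetical file name, namespace and predicate):

      # Query an RDF rendering of the data with SPARQL; names are illustrative.
      from rdflib import Graph

      g = Graph()
      g.parse("ggp_station.rdf")  # RDF produced by an XML-to-RDF transformation

      query = """
      PREFIX ggp: <http://example.org/ggp#>
      SELECT ?station ?latitude
      WHERE { ?station ggp:latitude ?latitude . }
      """
      for row in g.query(query):
          print(row.station, row.latitude)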

  9. PRIDE Inspector Toolsuite: Moving Toward a Universal Visualization Tool for Proteomics Data Standard Formats and Quality Assessment of ProteomeXchange Datasets*

    PubMed Central

    Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; del-Toro, Noemi; Dianes, Jose A.; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2016-01-01

    The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data before data submission or already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and a direct access to private (password protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX “complete” submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak lists formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. PMID:26545397

  10. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The purpose was to investigate and evaluate the interchange of application-specific data among multiple programs, each carrying out part of the analysis and design task. This has previously been carried out by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, the data of interest are described using the XML markup language, which allows the data to be stored as a text string. Software to transform the output data of a task into an XML string, and software to read an XML string and extract all or a portion of the data needed for another application, are used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. See Engineering Analysis Using a Web-Based Protocol by J.D. Schoeffler and R.W. Claus, NASA TM-2002-211981, October 2002. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, and with turbine-blade data produced by an independent blade design program (UD0300).
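
    The interchange idea itself is simple to sketch: one program serializes its results as an XML string, and another reads back only the fields it needs. The tag names below are hypothetical and not those used with Lapin:

      # Minimal XML-string interchange sketch between two independent programs.
      import xml.etree.ElementTree as ET

      def results_to_xml(results):
          """Serialize a dict of analysis outputs into an XML string."""
          root = ET.Element("analysis_results")
          for name, value in results.items():
              ET.SubElement(root, "result", name=name).text = str(value)
          return ET.tostring(root, encoding="unicode")

      def extract(xml_string, wanted):
          """Read back only the named results from the XML string."""
          root = ET.fromstring(xml_string)
          return {e.get("name"): float(e.text)
                  for e in root.findall("result") if e.get("name") in wanted}

      xml_string = results_to_xml({"chord": 0.12, "twist": 31.5, "thickness": 0.04})
      print(extract(xml_string, {"chord", "twist"}))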

  11. Rate-distortion analysis of directional wavelets.

    PubMed

    Maleki, Arian; Rajaei, Boshra; Pourreza, Hamid Reza

    2012-02-01

    The inefficiency of separable wavelets in representing smooth edges has led to a great interest in the study of new 2-D transformations. The most popular criterion for analyzing these transformations is the approximation power. Transformations with near-optimal approximation power are useful in many applications such as denoising and enhancement. However, they are not necessarily good for compression. Therefore, most of the nearly optimal transformations such as curvelets and contourlets have not found any application in image compression yet. One of the most promising schemes for image compression is the elegant idea of directional wavelets (DIWs). While these algorithms outperform the state-of-the-art image coders in practice, our theoretical understanding of them is very limited. In this paper, we adopt the notion of rate-distortion and calculate the performance of the DIW on a class of edge-like images. Our theoretical analysis shows that if the edges are not "sharp," the DIW will compress them more efficiently than the separable wavelets. It also demonstrates the inefficiency of the quadtree partitioning that is often used with the DIW. To solve this issue, we propose a new partitioning scheme called megaquad partitioning. Our simulation results on real-world images confirm the benefits of the proposed partitioning algorithm, promised by our theoretical analysis. © 2011 IEEE

  12. Optimization studies on compression coated floating-pulsatile drug delivery of bisoprolol.

    PubMed

    Jagdale, Swati C; Bari, Nilesh A; Kuchekar, Bhanudas S; Chabukswar, Aniruddha R

    2013-01-01

    The purpose of the present work was to design and optimize compression coated floating pulsatile drug delivery systems of bisoprolol. The floating pulsatile concept was applied to increase the gastric residence of the dosage form, providing a lag phase followed by a burst release. The prepared system consisted of two parts: a core tablet containing the active ingredient and an erodible outer shell with a gas-generating agent. The rapid release core tablet (RRCT) was prepared by using superdisintegrants with the active ingredient. Press coating of the optimized RRCT was done with polymer. A 3² full factorial design was used for optimization. The amounts of Polyox WSR205 and Polyox WSR N12K were selected as independent variables. Lag period, drug release, and swelling index were selected as dependent variables. Floating pulsatile release formulation (FPRT) F13 at level 0 (55 mg) for Polyox WSR205 and level +1 (65 mg) for Polyox WSR N12K showed a lag time of 4 h with >90% drug release. The data were statistically analyzed using ANOVA, and P < 0.05 was considered statistically significant. Release kinetics of the optimized formulation best fitted the zero order model. An in vivo study confirmed the burst effect at 4 h, indicating successful optimization of the dosage form.

  13. Optimization Studies on Compression Coated Floating-Pulsatile Drug Delivery of Bisoprolol

    PubMed Central

    Jagdale, Swati C.; Bari, Nilesh A.; Kuchekar, Bhanudas S.; Chabukswar, Aniruddha R.

    2013-01-01

    The purpose of the present work was to design and optimize compression coated floating pulsatile drug delivery systems of bisoprolol. The floating pulsatile concept was applied to increase the gastric residence of the dosage form, providing a lag phase followed by a burst release. The prepared system consisted of two parts: a core tablet containing the active ingredient and an erodible outer shell with a gas-generating agent. The rapid release core tablet (RRCT) was prepared by using superdisintegrants with the active ingredient. Press coating of the optimized RRCT was done with polymer. A 3² full factorial design was used for optimization. The amounts of Polyox WSR205 and Polyox WSR N12K were selected as independent variables. Lag period, drug release, and swelling index were selected as dependent variables. Floating pulsatile release formulation (FPRT) F13 at level 0 (55 mg) for Polyox WSR205 and level +1 (65 mg) for Polyox WSR N12K showed a lag time of 4 h with >90% drug release. The data were statistically analyzed using ANOVA, and P < 0.05 was considered statistically significant. Release kinetics of the optimized formulation best fitted the zero order model. An in vivo study confirmed the burst effect at 4 h, indicating successful optimization of the dosage form. PMID:24367788

  14. Hearing aids and music.

    PubMed

    Chasin, Marshall; Russo, Frank A

    2004-01-01

    Historically, the primary concern in hearing aid design and fitting has been optimization for speech inputs. However, other types of inputs are increasingly being investigated, and this is certainly the case for music. Whether the hearing aid wearer is a musician or merely someone who likes to listen to music, the electronic and electro-acoustic parameters described can be optimized for music as well as for speech. That is, a hearing aid optimally set for music can be optimally set for speech, even though the converse is not necessarily true. Similarities and differences between speech and music as inputs to a hearing aid are described. Many of these lead to the specification of a set of optimal electro-acoustic characteristics. Parameters such as the peak input-limiting level, compression settings (both compression ratio and knee-points), and the number of channels can all deleteriously affect music perception through hearing aids. In other cases, it is not clear how to set other parameters such as noise reduction and feedback control mechanisms. Regardless of the existence of a "music program," unless the various electro-acoustic parameters are available in a hearing aid, music fidelity will almost always be less than optimal. There are many unanswered questions and hypotheses in this area. Future research by engineers, researchers, clinicians, and musicians will aid in the clarification of these questions and their ultimate solutions.

  15. Perceptual Optimization of DCT Color Quantization Matrices

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    Many image compression schemes employ a block Discrete Cosine Transform (DCT) and uniform quantization. Acceptable rate/distortion performance depends upon proper design of the quantization matrix. In previous work, we showed how to use a model of the visibility of DCT basis functions to design quantization matrices for arbitrary display resolutions and color spaces. Subsequently, we showed how to optimize greyscale quantization matrices for individual images, for optimal rate/perceptual distortion performance. Here we describe extensions of this optimization algorithm to color images.
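
    The underlying mechanism being optimized is easy to sketch: each 8x8 block is transformed with the DCT, divided elementwise by a quantization matrix, and rounded. The flat matrix used below is a placeholder, not one of the perceptually optimized matrices from the work:

      # Block-DCT quantization/dequantization with a quantization matrix.
      import numpy as np
      from scipy.fft import dctn, idctn

      def quantize_block(block, q_matrix):
          """Forward DCT, divide by the quantization matrix and round."""
          return np.round(dctn(block, norm="ortho") / q_matrix)

      def dequantize_block(q_coeffs, q_matrix):
          """Rescale quantized coefficients and invert the DCT."""
          return idctn(q_coeffs * q_matrix, norm="ortho")

      rng = np.random.default_rng(0)
      block = rng.uniform(0, 255, size=(8, 8))   # one 8x8 image block
      q_matrix = np.full((8, 8), 16.0)           # placeholder quantization matrix
      restored = dequantize_block(quantize_block(block, q_matrix), q_matrix)
      print(np.abs(block - restored).max())      # reconstruction error from quantization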

  16. Real time on-chip sequential adaptive principal component analysis for data feature extraction and image compression

    NASA Technical Reports Server (NTRS)

    Duong, T. A.

    2004-01-01

    In this paper, we present a new, simple, and optimized hardware architecture sequential learning technique for adaptive Principal Component Analysis (PCA), which helps optimize the hardware implementation in VLSI and overcomes the difficulties of traditional gradient descent with respect to learning convergence and hardware implementation.
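
    The abstract does not spell out the learning rule, so the following is a generic sequential PCA update (Oja's rule), offered as a stand-in illustration of sample-by-sample principal component learning rather than the paper's hardware-oriented algorithm:

      # Oja's rule: streaming estimate of the first principal direction.
      import numpy as np

      def oja_first_component(samples, learning_rate=0.01, epochs=20, seed=0):
          rng = np.random.default_rng(seed)
          w = rng.normal(size=samples.shape[1])
          w /= np.linalg.norm(w)
          for _ in range(epochs):
              for x in samples:
                  y = w @ x                             # projection onto estimate
                  w += learning_rate * y * (x - y * w)  # Oja's update
          return w / np.linalg.norm(w)

      rng = np.random.default_rng(1)
      data = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])  # stretched cloud
      print(oja_first_component(data - data.mean(axis=0)))     # ~[±1, 0]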

  17. Evaluation of polyvinyl alcohols as mucoadhesive polymers for mucoadhesive buccal tablets prepared by direct compression.

    PubMed

    Ikeuchi-Takahashi, Yuri; Ishihara, Chizuko; Onishi, Hiraku

    2017-09-01

    The purpose of the present work was to evaluate polyvinyl alcohols (PVAs) as mucoadhesive polymers for mucoadhesive buccal tablets prepared by direct compression. PVAs with various polymerization degrees and particle diameters were investigated for their usability. The tensile strength, in vitro adhesive force, and water absorption properties of the tablets were determined to compare the various PVAs. The highest values of tensile strength and in vitro adhesive force were observed for PVAs with a medium viscosity and small particle size. The optimal PVA was identified by a factorial design analysis. Mucoadhesive tablets containing the optimal PVA were compared with carboxyvinyl polymer and hydroxypropyl cellulose formulations. The optimal PVA gave a high adhesive force, had a low viscosity, and resulted in relatively rapid drug release. Formulations containing carboxyvinyl polymer had high tensile strengths but short disintegration times. Formulations with higher hydroxypropyl cellulose concentrations had good adhesion forces and very long disintegration times. We identified the optimal characteristics of PVA, and the usefulness of mucoadhesive buccal tablets containing this PVA was suggested by their formulation properties.

  18. CARGO: effective format-free compressed storage of genomic information

    PubMed Central

    Roguski, Łukasz; Ribeca, Paolo

    2016-01-01

    The recent super-exponential growth in the amount of sequencing data generated worldwide has put techniques for compressed storage into the focus. Most available solutions, however, are strictly tied to specific bioinformatics formats, sometimes inheriting from them suboptimal design choices; this hinders flexible and effective data sharing. Here, we present CARGO (Compressed ARchiving for GenOmics), a high-level framework to automatically generate software systems optimized for the compressed storage of arbitrary types of large genomic data collections. Straightforward applications of our approach to FASTQ and SAM archives require a few lines of code, produce solutions that match and sometimes outperform specialized format-tailored compressors and scale well to multi-TB datasets. All CARGO software components can be freely downloaded for academic and non-commercial use from http://bio-cargo.sourceforge.net. PMID:27131376

  19. Generation of stable subfemtosecond hard x-ray pulses with optimized nonlinear bunch compression

    DOE PAGES

    Huang, Senlin; Ding, Yuantao; Huang, Zhirong; ...

    2014-12-15

    In this paper, we propose a simple scheme that leverages existing x-ray free-electron laser hardware to produce stable single-spike, subfemtosecond x-ray pulses. By optimizing a high-harmonic radio-frequency linearizer to achieve nonlinear compression of a low-charge (20 pC) electron beam, we obtain a sharp current profile possessing a few-femtosecond full width at half maximum temporal duration. A reverse undulator taper is applied to enable lasing only within the current spike, where longitudinal space charge forces induce an electron beam time-energy chirp. Simulations based on the Linac Coherent Light Source parameters show that stable single-spike x-ray pulses with a duration less than 200 attoseconds can be obtained.

  20. Making journals accessible to the visually impaired: the future is near

    PubMed Central

    GARDNER, John; BULATOV, Vladimir; KELLY, Robert

    2010-01-01

    The American Physical Society (APS) has been a leader in using markup languages for publishing. ViewPlus has led development of innovative technologies for graphical information accessibility by people with print disabilities. APS, ViewPlus, and other collaborators in the Enhanced Reading Project are working together to develop the necessary technology and infrastructure for APS to publish its journals in the DAISY (Digital Accessible Information SYstem) eXtended Markup Language (XML) format, in which all text, math, and figures would be accessible to people who are blind or have other print disabilities. The first APS DAISY XML publications are targeted for late 2010. PMID:20676358

  1. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.
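
    A rough, assumption-heavy sketch of the pipeline the claim describes (collected text wrapped as XML documents, then scored for pairwise similarity); the tag names and the plain term-frequency cosine measure below are illustrative choices, not those specified in the patent:

      # Wrap collected text in XML documents and score pairwise similarity.
      import math
      import xml.etree.ElementTree as ET
      from collections import Counter

      def to_xml_document(source, text):
          doc = ET.Element("document", source=source)
          ET.SubElement(doc, "body").text = text
          return ET.tostring(doc, encoding="unicode")

      def cosine(a, b):
          ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
          dot = sum(ca[t] * cb[t] for t in ca)
          norm = (math.sqrt(sum(v * v for v in ca.values()))
                  * math.sqrt(sum(v * v for v in cb.values())))
          return dot / norm if norm else 0.0

      texts = {"a": "xml compression of aviation data",
               "b": "compression of xml aviation records",
               "c": "hearing aids and music"}
      xml_docs = {k: to_xml_document(k, v) for k, v in texts.items()}
      print(cosine(texts["a"], texts["b"]), cosine(texts["a"], texts["c"]))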

  2. System for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2006-07-04

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  3. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Elmore, Mark Thomas [Oak Ridge, TN; Reed, Joel Wesley [Knoxville, TN; Treadwell, Jim N [Louisville, TN; Samatova, Nagiza Faridovna [Oak Ridge, TN

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  4. DbMap: improving database interoperability issues in medical software using a simple, Java-Xml based solution.

    PubMed Central

    Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.

    2000-01-01

    In medical software development, the use of databases plays a central role. However, most databases have heterogeneous encodings and data models. Dealing with these variations directly in the application code is error-prone and reduces the potential reuse of the produced software. Several approaches to overcome these limitations have been proposed in the medical database literature and are presented. We present a simple solution, based on a Java library and a central metadata description file in XML. This development approach presents several benefits in software design and development cycles, the main one being simplicity of maintenance. PMID:11079915

  5. The inclusion of an online journal in PubMed central - a difficult path.

    PubMed

    Grech, Victor

    2016-01-01

    The indexing of a journal in a prominent database (such as PubMed) is an important imprimatur. Journals accepted for inclusion in PubMed Central (PMC) are automatically indexed in PubMed but must provide the entire contents of their publications as XML-tagged (Extensible Markup Language) data files compliant with PubMed's document type definition (DTD). This paper describes the various attempts that the journal Images in Paediatric Cardiology made in its efforts to convert the journal contents (including all of the extant backlog) to PMC-compliant XML for archiving and indexing in PubMed after the journal was accepted for inclusion by the database.

  6. Minimum information required for a DMET experiment reporting.

    PubMed

    Kumuthini, Judit; Mbiyavanga, Mamana; Chimusa, Emile R; Pathak, Jyotishman; Somervuo, Panu; Van Schaik, Ron Hn; Dolzan, Vita; Mizzi, Clint; Kalideen, Kusha; Ramesar, Raj S; Macek, Milan; Patrinos, George P; Squassina, Alessio

    2016-09-01

    The aim is to provide pharmacogenomics reporting guidelines and the information and tools required for reporting to public omic databases. For effective DMET data interpretation, sharing, interoperability, reproducibility and reporting, we propose the Minimum Information required for a DMET Experiment (MIDE) reporting guidelines. MIDE provides reporting guidelines and describes the information required for reporting, data storage and data sharing in the form of XML. The MIDE guidelines will benefit the scientific community conducting pharmacogenomics experiments, including the reporting of pharmacogenomics data from other technology platforms, with tools that ease and automate the generation of such reports using the standardized MIDE XML schema, facilitating the sharing, dissemination and reanalysis of datasets through accessible and transparent pharmacogenomics data reporting.

  7. jqcML: an open-source java API for mass spectrometry quality control data in the qcML format.

    PubMed

    Bittremieux, Wout; Kelchtermans, Pieter; Valkenborg, Dirk; Martens, Lennart; Laukens, Kris

    2014-07-03

    The awareness that systematic quality control is an essential factor to enable the growth of proteomics into a mature analytical discipline has increased over the past few years. To this aim, a controlled vocabulary and document structure have recently been proposed by Walzer et al. to store and disseminate quality-control metrics for mass-spectrometry-based proteomics experiments, called qcML. To facilitate the adoption of this standardized quality control routine, we introduce jqcML, a Java application programming interface (API) for the qcML data format. First, jqcML provides a complete object model to represent qcML data. Second, jqcML provides the ability to read, write, and work in a uniform manner with qcML data from different sources, including the XML-based qcML file format and the relational database qcDB. Interaction with the XML-based file format is obtained through the Java Architecture for XML Binding (JAXB), while generic database functionality is obtained by the Java Persistence API (JPA). jqcML is released as open-source software under the permissive Apache 2.0 license and can be downloaded from https://bitbucket.org/proteinspector/jqcml .

  8. Ajax Architecture Implementation Techniques

    NASA Astrophysics Data System (ADS)

    Hussaini, Syed Asadullah; Tabassum, S. Nasira; Baig, Tabassum, M. Khader

    2012-03-01

    Today's rich Web applications use a mix of JavaScript and asynchronous communication with the application server. This mechanism is also known as Ajax: Asynchronous JavaScript and XML. The intent of Ajax is to exchange small pieces of data between the browser and the application server, and in doing so, use partial page refresh instead of reloading the entire Web page. AJAX is a powerful Web development model for browser-based Web applications. Technologies that form the AJAX model, such as XML, JavaScript, HTTP, and XHTML, are individually widely used and well known. However, AJAX combines these technologies to let Web pages retrieve small amounts of data from the server without having to reload the entire page. This capability makes Web pages more interactive and lets them behave like local applications. Web 2.0, enabled by the Ajax architecture, has given rise to a new level of user interactivity through web browsers. Many new and extremely popular Web applications have been introduced, such as Google Maps, Google Docs, Flickr, and so on. Ajax toolkits such as Dojo allow web developers to build Web 2.0 applications quickly and with little effort.

  9. Enabling model customization and integration

    NASA Astrophysics Data System (ADS)

    Park, Minho; Fishwick, Paul A.

    2003-09-01

    Until fairly recently, the ideas of dynamic model content and presentation were treated synonymously. For example, if one were to take a data flow network, which captures the dynamics of a target system in terms of the flow of data through nodal operators, one would often standardize on rectangles and arrows for the model display. The increasing web emphasis on XML, however, suggests that the network model can have its content specified in an XML language, and the model can then be represented in a number of ways depending on the chosen style. We have developed a formal method, based on styles, that permits a model to be specified in XML and presented in 1D (text), 2D, and 3D. This method allows customization and personalization to exert their benefits beyond e-commerce, in the area of model structures used in computer simulation. This customization leads naturally to solving the bigger problem of model integration: the act of taking models of a scene and integrating them with that scene so that there is only one unified modeling interface. This work focuses mostly on customization, but we address the integration issue in the future work section.

  10. An enhanced security solution for electronic medical records based on AES hybrid technique with SOAP/XML and SHA-1.

    PubMed

    Kiah, M L Mat; Nabi, Mohamed S; Zaidan, B B; Zaidan, A A

    2013-10-01

    This study aims to provide security solutions for implementing electronic medical records (EMRs). E-health organizations could utilize the proposed method and implement the recommended solutions in medical/health systems. The majority of the required security features of EMRs were noted. The methods used were tested against each of these security features. In implementing the system, the combination that satisfied all of the security features of EMRs was selected. Secure implementation and management of EMRs facilitate the safeguarding of the confidentiality, integrity, and availability of e-health organization systems. Health practitioners, patients, and visitors can use the information system facilities safely and with confidence anytime and anywhere. After critically reviewing security and data transmission methods, a new hybrid method was proposed to be implemented on EMR systems. This method will enhance the robustness, security, and integration of EMR systems. The hybrid of Simple Object Access Protocol (SOAP)/Extensible Markup Language (XML) with the Advanced Encryption Standard (AES) and Secure Hash Algorithm version 1 (SHA-1) achieved the security requirements of an EMR system with the capability of integrating with other systems through the design of XML messages.
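
    As a hedged sketch of the general pattern only (not the paper's exact hybrid protocol, and with ad hoc key handling), an XML payload can be hashed with SHA-1 for an integrity check value and encrypted with AES before being embedded in a SOAP/XML message:

      # SHA-1 digest plus AES-CBC encryption of an XML payload (illustrative).
      import hashlib
      import os
      from cryptography.hazmat.primitives import padding
      from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

      def protect_xml(xml_bytes, key):
          digest = hashlib.sha1(xml_bytes).hexdigest()   # integrity check value
          iv = os.urandom(16)
          padder = padding.PKCS7(128).padder()
          padded = padder.update(xml_bytes) + padder.finalize()
          encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
          ciphertext = encryptor.update(padded) + encryptor.finalize()
          return {"sha1": digest, "iv": iv, "ciphertext": ciphertext}

      record = b"<EMR><patient id='123'/><note>example</note></EMR>"
      protected = protect_xml(record, key=os.urandom(32))   # AES-256 key
      print(protected["sha1"], len(protected["ciphertext"]))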

  11. RUBE: an XML-based architecture for 3D process modeling and model fusion

    NASA Astrophysics Data System (ADS)

    Fishwick, Paul A.

    2002-07-01

    Information fusion is a critical problem for science and engineering. There is a need to fuse information content specified as either data or models. We frame our work in terms of fusing dynamic and geometric models, to create an immersive environment where these models can be juxtaposed in 3D within the same interface. The method by which this is accomplished fits well with other eXtensible Markup Language (XML) approaches to fusion in general. The task of modeling lies at the heart of the human-computer interface, joining the human to the system under study through a variety of sensory modalities. I overview modeling as a key concern for the Defense Department and the Air Force, and then follow with a discussion of past, current, and future work. Past work began with a package written in C and has progressed, in current work, to an implementation in XML. Our current work is defined within the RUBE architecture, which is detailed in subsequent papers devoted to key components. We have built RUBE as a next-generation modeling framework using our prior software, with research opportunities in immersive 3D and tangible user interfaces.

  12. SBMLeditor: effective creation of models in the Systems Biology Markup Language (SBML)

    PubMed Central

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-01-01

    Background The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. Results SBMLeditor is written in Java using JCompneur, a library providing interfaces to easily display an XML document as a tree. This decreases dramatically the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. Conclusion SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors. PMID:17341299

  13. SBMLeditor: effective creation of models in the Systems Biology Markup language (SBML).

    PubMed

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-03-06

    The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low-level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. SBMLeditor is written in Java using JCompneur, a library providing interfaces to easily display an XML document as a tree. This decreases dramatically the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors.

  14. XML-Based Visual Specification of Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad

    2001-01-01

    The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.

  15. Compressed modes for variational problems in mathematics and physics

    PubMed Central

    Ozoliņš, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-01-01

    This article describes a general formalism for obtaining spatially localized (“sparse”) solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger’s equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support (“compressed modes”). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size. PMID:24170861
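
    Schematically, and hedging that this is a reconstruction of the formulation rather than a quotation from the paper, the L1-regularized variational principle for N orthonormal compressed modes can be written as:

      % L1 term added to the energy functional, with orthonormality constraints.
      E = \min_{\psi_1,\dots,\psi_N}
          \sum_{j=1}^{N} \left( \frac{1}{\mu}\,\lVert \psi_j \rVert_1
          + \langle \psi_j \vert \hat{H} \vert \psi_j \rangle \right)
      \quad \text{subject to} \quad
      \langle \psi_j \vert \psi_k \rangle = \delta_{jk},

    where \hat{H} is the Hamiltonian and the parameter \mu controls the trade-off between energy accuracy and spatial localization of the modes.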

  16. Compressed modes for variational problems in mathematics and physics.

    PubMed

    Ozolins, Vidvuds; Lai, Rongjie; Caflisch, Russel; Osher, Stanley

    2013-11-12

    This article describes a general formalism for obtaining spatially localized ("sparse") solutions to a class of problems in mathematical physics, which can be recast as variational optimization problems, such as the important case of Schrödinger's equation in quantum mechanics. Sparsity is achieved by adding an L1 regularization term to the variational principle, which is shown to yield solutions with compact support ("compressed modes"). Linear combinations of these modes approximate the eigenvalue spectrum and eigenfunctions in a systematically improvable manner, and the localization properties of compressed modes make them an attractive choice for use with efficient numerical algorithms that scale linearly with the problem size.

  17. Synthesis and mechanical properties of double cross-linked gelatin-graphene oxide hydrogels.

    PubMed

    Piao, Yongzhe; Chen, Biqiong

    2017-08-01

    Gelatin is an interesting biological macromolecule for biomedical applications. Here, double cross-linked gelatin nanocomposite hydrogels incorporating graphene oxide (GO) were synthesized in one pot using glutaraldehyde (GTA) and GTA-grafted GO as double chemical cross-linkers. The nanocomposite hydrogels, in contrast to the neat gelatin hydrogel, exhibited significant increases in mechanical properties at the optimal GO concentration: up to 288% in compressive strength, 195% in compressive modulus, 267% in compressive fracture energy and 160% in shear storage modulus. Fourier transform infrared spectroscopy, scanning electron microscopy and swelling tests were used to characterize the nanocomposite hydrogels. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Toughening of PMR composites by semi-interpenetrating networks

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Srinivansan, K.

    1991-01-01

    Polymerization of monomer reactants (PMR-15) type polyimide and RP46 prepregs were drum wound using IM-7 fibers. Prepregging and processing conditions were optimized to yield good quality laminates with fiber volume fractions of 60 percent (+/- 2 percent). Samples were fabricated and tested to determine comprehensive engineering properties of both systems. These included 0 deg flexure, short beam shear, transverse flexure and tension, 0 deg tension and compression, intralaminar shear, short block compression, mode 1 and 2 fracture toughness, and compression after impact properties. Semi-2-IPN (interpenetrating polymer networks) toughened PMR-15 and RP46 laminates were also fabricated and tested for the same properties.

  19. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how and where data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel and research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, due to manual metadata entry into individual websites by administrators. The other is that data types or representations of metadata differ across websites. To solve these problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be propagated from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then daily differential uptake of metadata from the XML database to the databases of several distribution websites is automatically processed using a converter defined by the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.

  20. Cardiopulmonary resuscitation duty cycle in out-of-hospital cardiac arrest.

    PubMed

    Johnson, Bryce V; Johnson, Bryce; Coult, Jason; Fahrenbruch, Carol; Blackwood, Jennifer; Sherman, Larry; Kudenchuk, Peter; Sayre, Michael; Rea, Thomas

    2015-02-01

    Duty cycle is the portion of time spent in compression relative to the total time of the compression-decompression cycle. Guidelines recommend a 50% duty cycle based largely on animal investigation. We undertook a descriptive evaluation of duty cycle in human resuscitation, and of whether duty cycle correlates with other CPR measures. We calculated the duty cycle, compression depth, and compression rate during EMS resuscitation of 164 patients with out-of-hospital ventricular fibrillation cardiac arrest. We captured force recordings from a chest accelerometer to measure ten-second CPR epochs that preceded rhythm analysis. Duty cycle was calculated using two methods. Effective compression time (ECT) is the time from beginning to end of compression divided by the total period for that compression-decompression cycle. Area duty cycle (ADC) is the ratio of the area under the force curve to the total area of one compression-decompression cycle. We evaluated the compression depth and compression rate according to duty cycle quartiles. There were 369 ten-second epochs among 164 patients. The median duty cycle was 38.8% (SD=5.5%) using ECT and 32.2% (SD=4.3%) using ADC. A relatively shorter compression phase (lower duty cycle) was associated with greater compression depth (test for trend <0.05 for ECT and ADC) and slower compression rate (test for trend <0.05 for ADC). Sixty-one of 164 patients (37%) survived to hospital discharge. Duty cycle was below the 50% recommended guideline, and was associated with compression depth and rate. These findings provide rationale to incorporate duty cycle into research aimed at understanding optimal CPR metrics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
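
    The two duty-cycle definitions are straightforward to compute from a sampled force trace; the following is an illustrative sketch (the compression threshold is a simplifying assumption, not part of the study's method):

      # ECT and ADC duty cycles for one compression-decompression cycle.
      import numpy as np

      def duty_cycles(force, threshold=0.0):
          """Return (ECT, ADC) for one sampled compression-decompression cycle.

          ECT: fraction of the cycle with force above the compression threshold.
          ADC: area under the force curve divided by the area of the bounding
               rectangle (peak force x cycle duration), i.e. mean/peak force."""
          force = np.asarray(force, dtype=float)
          ect = float(np.mean(force > threshold))
          adc = float(force.mean() / force.max())
          return ect, adc

      t = np.linspace(0, 1, 200, endpoint=False)        # one cycle
      force = np.clip(np.sin(2 * np.pi * t), 0, None)   # compression half only
      print(duty_cycles(force))                         # roughly (0.5, 0.32)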
