Sample records for boosting XML filtering

  1. owlcpp: a C++ library for working with OWL ontologies.

    PubMed

    Levin, Mikhail K; Cowell, Lindsay G

    2015-01-01

    The increasing use of ontologies highlights the need for a library for working with ontologies that is efficient, accessible from various programming languages, and compatible with common computational platforms. We developed owlcpp, a library for storing and searching RDF triples, parsing RDF/XML documents, converting triples into OWL axioms, and reasoning. The library is written in ISO-compliant C++ to facilitate efficiency, portability, and accessibility from other programming languages. Internally, owlcpp uses the Raptor RDF Syntax library for parsing RDF/XML and the FaCT++ library for reasoning. The current version of owlcpp is supported under Linux, OSX, and Windows platforms and provides an API for Python. The results of our evaluation show that, compared to other commonly used libraries, owlcpp is significantly more efficient in terms of memory usage and searching RDF triple stores. owlcpp performs strict parsing and detects errors ignored by other libraries, thus reducing the possibility of incorrect semantic interpretation of ontologies. owlcpp is available at http://owl-cpp.sf.net/ under the Boost Software License, Version 1.0.

  2. Recent Flight Results of the TRMM Kalman Filter

    NASA Technical Reports Server (NTRS)

    Andrews, Stephen F.; Bilanow, Stephen; Bauer, Frank (Technical Monitor)

    2002-01-01

    The Tropical Rainfall Measuring Mission (TRMM) spacecraft is a nadir pointing spacecraft that nominally controls the roll and pitch attitude based on the Earth Sensor Assembly (ESA) output. TRMM's nominal orbit altitude was 350 km, until raised to 402 km to prolong mission life. During the boost, the ESA experienced a decreasing signal to noise ratio, until sun interference at 393 km altitude made the ESA data unreliable for attitude determination. At that point, the backup attitude determination algorithm, an extended Kalman filter, was enabled. After the boost finished, TRMM reacquired its nadir-pointing attitude, and continued its mission. This paper will briefly discuss the boost and the decision to turn on the backup attitude determination algorithm. A description of the extended Kalman filter algorithm will be given. In addition, flight results from analyzing attitude data and the results of software changes made onboard TRMM will be discussed. Some lessons learned are presented.

  3. Symposium on Applications and the Internet (SAINT 2003) Proceedings (Orlando, Florida, January 27-31, 2003).

    ERIC Educational Resources Information Center

    Helal, Sumi, Ed.; Oie, Yuji, Ed.; Chang, Carl, Ed.; Murai, Jun, Ed.

    This proceedings from the 2003 Symposium on Applications and the Internet (SAINT) contains papers from sessions on: (1) mobile Internet, including a target-driven cache replacement policy, context-awareness for service discovery, and XML transformation; (2) collaboration technology I, including human-network-based filtering, virtual collaboration…

  4. Personalising e-learning modules: targeting Rasmussen levels using XML.

    PubMed

    Renard, J M; Leroy, S; Camus, H; Picavet, M; Beuscart, R

    2003-01-01

    The development of Internet technologies has made it possible to increase the number and diversity of on-line resources for teachers and students. Initiatives like the French-speaking Virtual Medical University Project (UMVF) try to organise access to these resources. But both teachers and students are working on a partly redundant subset of knowledge. From the analysis of some French courses we propose a model for knowledge organisation derived from Rasmussen's stepladder. In the context of decision-making, Rasmussen identified skill-based, rule-based and knowledge-based levels of the mental process. In the medical context of problem-solving, we apply these three levels to define three student levels: beginners, intermediate-level learners and experts. Based on our model, we build a representation of the hierarchical structure of the data using XML. We use the XSLT transformation language to filter relevant data according to student level and to propose an appropriate display on the student's terminal. The model and the XML implementation we define help to design tools for building personalised e-learning modules.
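    A minimal sketch of the level-based XSLT filtering described above, using Python with lxml. The element names, the "level" attribute values and the stylesheet parameter are invented for illustration and are not taken from the UMVF project.

```python
# Hypothetical sketch: keep only the course items matching a requested
# Rasmussen-style student level, via an XSLT transform applied with lxml.
from lxml import etree

course = etree.XML("""
<module>
  <item level="skill">Recognise the typical presentation</item>
  <item level="rule">Apply the treatment protocol</item>
  <item level="knowledge">Discuss the differential diagnoses</item>
</module>""")

stylesheet = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="level"/>
  <xsl:template match="/module">
    <filtered>
      <!-- keep only the items matching the requested level -->
      <xsl:copy-of select="item[@level = $level]"/>
    </filtered>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(stylesheet)
result = transform(course, level=etree.XSLT.strparam("rule"))
print(etree.tostring(result, pretty_print=True).decode())
```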

  5. Integrating and visualizing primary data from prospective and legacy taxonomic literature

    PubMed Central

    Agosti, Donat; Penev, Lyubomir; Sautter, Guido; Georgiev, Teodor; Catapano, Terry; Patterson, David; King, David; Pereira, Serrano; Vos, Rutger Aldo; Sierra, Soraya

    2015-01-01

    Specimen data in taxonomic literature are among the highest quality primary biodiversity data. Innovative cybertaxonomic journals are using workflows that maintain data structure and disseminate electronic content to aggregators and other users; such structure is lost in traditional taxonomic publishing. Legacy taxonomic literature is a vast repository of knowledge about biodiversity. Currently, access to that resource is cumbersome, especially for non-specialist data consumers. Markup is a mechanism that makes this content more accessible, and is especially suited to machine analysis. Fine-grained XML (Extensible Markup Language) markup was applied to all (37) open-access articles published in the journal Zootaxa containing treatments on spiders (Order: Araneae). The markup approach was optimized to extract primary specimen data from legacy publications. These data were combined with data from articles containing treatments on spiders published in Biodiversity Data Journal, where XML structure is part of the routine publication process. A series of charts was developed to visualize the content of specimen data in XML-tagged taxonomic treatments, either singly or in aggregate. The data can be filtered by several fields (including journal, taxon, institutional collection, collecting country, collector, author, article and treatment) to query particular aspects of the data. We demonstrate here that XML markup using GoldenGATE can address the challenge presented by unstructured legacy data, can extract structured primary biodiversity data which can be aggregated with and jointly queried with data from other Darwin Core-compatible sources, and show how visualization of these data can communicate key information contained in biodiversity literature. We complement recent studies on aspects of biodiversity knowledge using XML structured data to explore 1) the time lag between species discovery and description, and 2) the prevalence of rarity in species descriptions. PMID:26023286

  6. Evaluating the Informative Quality of Documents in SGML Format from Judgements by Means of Fuzzy Linguistic Techniques Based on Computing with Words.

    ERIC Educational Resources Information Center

    Herrera-Viedma, Enrique; Peis, Eduardo

    2003-01-01

    Presents a fuzzy evaluation method of SGML documents based on computing with words. Topics include filtering the amount of information available on the Web to assist users in their search processes; document type definitions; linguistic modeling; user-system interaction; and use with XML and other markup languages. (Author/LRW)

  7. Integrating DC/DC Conversion with Possible Reconfiguration within Submodule Solar Photovoltaic Systems

    NASA Astrophysics Data System (ADS)

    Huang, Peter Jen-Hung

    This research first proposes a method to merge photovoltaic (PV) cells or PV panels with the internal components of DC-DC converters. The purpose of this merged structure is to reconfigure the PV modules between series and parallel connections using high switching frequencies (hundreds of kHz). This leads to multiple levels of voltage and current being applied to the output filter of the converter. Further, this research introduces the concept of a switching cell that utilizes the reconfiguration of series and parallel connections in DC-DC converters. The switching occurs at high switching frequency and the switches can be integrated within the solar panels or in between the solar cells. The concept is generalized and applied to basic buck and boost topologies. As examples of the new types of converters, reconfigurable PV-buck and PV-boost converter topologies are presented. It is also possible to create other reconfigurable power converters, both non-isolated and isolated topologies. Analysis, simulation and experimental verification for the reconfigurable PV-buck and PV-boost converters are presented extensively to illustrate proof of concept. Benefits and drawbacks of the new approach are discussed. The second part of this research proposes to utilize the internal solar cell capacitance and internal solar module wire parasitic inductances to replace the input capacitor and filter inductor in boost-derived DC-DC converters for energy harvesting applications. High switching frequency (MHz) hard-switched and resonant boost converters are proposed. Their analysis, simulation and experimental prototypes are presented. A specific proof-of-concept application is tested for foldable PV panels, which are known for their high internal wire inductance. The experimental converters successfully boost solar module voltage without adding any external input capacitance or filter inductor. Benefits and drawbacks of the new proposed PV submodule integrated boost converters are discussed.

  8. Arbitrary-step randomly delayed robust filter with application to boost phase tracking

    NASA Astrophysics Data System (ADS)

    Qin, Wutao; Wang, Xiaogang; Bai, Yuliang; Cui, Naigang

    2018-04-01

    The conventional filters such as the extended Kalman filter, unscented Kalman filter and cubature Kalman filter assume that the measurement is available in real-time and that the measurement noise is Gaussian white noise. In practice, however, both assumptions can be invalid. To solve this problem, a novel algorithm is proposed in four steps. First, the measurement model is modified with Bernoulli random variables to describe the random delay. Then, the expressions for the predicted measurement and covariance are reformulated, which removes the restriction that the maximum delay must be one or two steps and the assumption that the probabilities of the Bernoulli random variables taking the value one are equal. Next, the arbitrary-step randomly delayed high-degree cubature Kalman filter is derived based on the 5th-degree spherical-radial rule and the reformulated expressions. Finally, the arbitrary-step randomly delayed high-degree cubature Kalman filter is modified into the arbitrary-step randomly delayed high-degree cubature Huber-based filter using the Huber technique, which is essentially an M-estimator. The proposed filter is therefore robust not only to randomly delayed measurements but also to glint noise. Application to a boost phase tracking example demonstrates the superiority of the proposed algorithms.
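    For orientation, the simplest (one-step) version of a Bernoulli-modeled random measurement delay, a special case of the arbitrary-step model above, can be written as:

```latex
z_k = (1-\gamma_k)\, h(x_k) + \gamma_k\, h(x_{k-1}) + v_k,
\qquad \Pr(\gamma_k = 1) = p_k, \quad v_k \sim \mathcal{N}(0, R_k)
```

    where $\gamma_k$ indicates whether the measurement received at time $k$ is delayed by one step; the paper generalizes this to arbitrary delay lengths with unequal delay probabilities.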

  9. Reference Architecture for MNE 5 Technical System

    DTIC Science & Technology

    2007-05-30

    …of being available in most experiments. Core Services: a core set of applications (directories, web portal and collaboration applications, etc.)… classifications; messages (XML, JMS, content level…); metadata filtering; who can initiate services; web browsing; collaboration & messaging; border… audit logging; person and machine; data-level objects, web services, messages… verification…

  10. Preprocessing of PHERMEX flash radiographic images with Haar and adaptive filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brolley, J.E.

    1978-11-01

    Work on image preparation has continued with the application of high-sequency boosting via Haar filtering. This is useful in developing line or edge structures. Widrow LMS adaptive filtering has also been shown to be useful in developing edge structure in special problems. Shadow effects can be obtained with the latter which may be useful for some problems. Combined Haar and adaptive filtering is illustrated for a PHERMEX image.

  11. A Sensorless Predictive Current Controlled Boost Converter by Using an EKF with Load Variation Effect Elimination Function

    PubMed Central

    Tong, Qiaoling; Chen, Chen; Zhang, Qiao; Zou, Xuecheng

    2015-01-01

    To realize accurate current control for a boost converter, a precise measurement of the inductor current is required to achieve high-resolution current regulation. Current sensors are widely used to measure the inductor current. However, the current sensors and their processing circuits add extra hardware cost, delay and noise to the system, and can also harm system reliability. Therefore, current sensorless control techniques can bring cost-effective and reliable solutions for various boost converter applications. According to the derived accurate model, which contains a number of parasitics, the boost converter is a nonlinear system. An Extended Kalman Filter (EKF) is proposed for inductor current estimation and output voltage filtering. With this approach, the system can have the same advantages as the sensored current control mode. To implement the EKF, the load value is necessary. However, the load may vary from time to time, which can lead to errors in the estimated current and the filtered output voltage. To solve this issue, a load variation effect elimination (LVEE) module is added. In addition, a predictive average current controller is used to regulate the current. Compared with the conventional voltage controlled system, the transient response is greatly improved since it only takes two switching cycles for the current to reach its reference. Finally, experimental results are presented to verify the stable operation and output tracking capability for large-signal transients of the proposed algorithm. PMID:25928061
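    For context, the ideal averaged large-signal model of a boost converter (neglecting the parasitics that the paper's more accurate model includes), with duty cycle $d$, inductor current $i_L$, output voltage $v_o$, input voltage $V_{in}$ and load $R$, is:

```latex
L\,\frac{di_L}{dt} = V_{in} - (1-d)\,v_o,
\qquad
C\,\frac{dv_o}{dt} = (1-d)\,i_L - \frac{v_o}{R}
```

    An EKF built on a discretized version of such a model can estimate $i_L$ from the measured output voltage alone, which is the sensorless idea described above.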

  12. mzDB: A File Format Using Multiple Indexing Strategies for the Efficient Analysis of Large LC-MS/MS and SWATH-MS Data Sets*

    PubMed Central

    Bouyssié, David; Dubois, Marc; Nasso, Sara; Gonzalez de Peredo, Anne; Burlet-Schiltz, Odile; Aebersold, Ruedi; Monsarrat, Bernard

    2015-01-01

    The analysis and management of MS data, especially those generated by data independent MS acquisition, exemplified by SWATH-MS, pose significant challenges for proteomics bioinformatics. The large size and vast amount of information inherent to these data sets need to be properly structured to enable an efficient and straightforward extraction of the signals used to identify specific target peptides. Standard XML based formats are not well suited to large MS data files, for example, those generated by SWATH-MS, and compromise high-throughput data processing and storing. We developed mzDB, an efficient file format for large MS data sets. It relies on the SQLite software library and consists of a standardized and portable server-less single-file database. An optimized 3D indexing approach is adopted, where the LC-MS coordinates (retention time and m/z), along with the precursor m/z for SWATH-MS data, are used to query the database for data extraction. In comparison with XML formats, mzDB saves ∼25% of storage space and improves access times by a factor of two up to 2000-fold, depending on the particular data access. Similarly, mzDB also shows slightly to significantly lower access times in comparison with other formats like mz5. Both C++ and Java implementations, converting raw or XML formats to mzDB and providing access methods, will be released under a permissive license. mzDB can be easily accessed by the SQLite C library and its drivers for all major languages, and browsed with existing dedicated GUIs. The mzDB format described here can boost existing mass spectrometry data analysis pipelines, offering unprecedented performance in terms of efficiency, portability, compactness, and flexibility. PMID:25505153
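    Because an mzDB file is a plain SQLite database, data extraction reduces to indexed SQL queries. The sketch below illustrates the access pattern only; the table and column names ("bounding_box", "rt_min", ...) are hypothetical and do not reproduce the actual mzDB schema.

```python
# Hypothetical sketch: query a server-less single-file MS database by
# LC-MS coordinates (retention time and m/z), as described above.
import sqlite3

con = sqlite3.connect("run1.mzdb")            # an SQLite file on disk
rt, mz = 25.3, 622.0                          # coordinates of interest
rows = con.execute(
    """SELECT data FROM bounding_box
       WHERE rt_min <= ? AND rt_max >= ?
         AND mz_min <= ? AND mz_max >= ?""",
    (rt, rt, mz, mz),
).fetchall()
con.close()
```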

  13. Boosting CNN performance for lung texture classification using connected filtering

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastián Roberto; Fetita, Catalin; Kim, Young-Wouk; Cho, Hyoun; Brillet, Pierre-Yves

    2018-02-01

    Infiltrative lung diseases describe a large group of irreversible lung disorders requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. This paper presents an original image pre-processing framework based on locally connected filtering applied in multiresolution, which helps improve the learning process and boosts the performance of CNNs for lung texture classification. By removing the dense vascular network from the images used by the CNN for lung classification, locally connected filters provide a better discrimination between different lung patterns and help regularize the classification output. The approach was tested in a preliminary evaluation on a database of 10 patients with various lung pathologies, showing an increase of 10% in true positive rate (on average over all cases) with respect to the state-of-the-art cascade of CNNs for this task.

  14. Detecting Weak Spectral Lines in Interferometric Data through Matched Filtering

    NASA Astrophysics Data System (ADS)

    Loomis, Ryan A.; Öberg, Karin I.; Andrews, Sean M.; Walsh, Catherine; Czekala, Ian; Huang, Jane; Rosenfeld, Katherine A.

    2018-04-01

    Modern radio interferometers enable observations of spectral lines with unprecedented spatial resolution and sensitivity. In spite of these technical advances, many lines of interest are still at best weakly detected and therefore necessitate detection and analysis techniques specialized for the low signal-to-noise ratio (S/N) regime. Matched filters can leverage knowledge of the source structure and kinematics to increase sensitivity of spectral line observations. Application of the filter in the native Fourier domain improves S/N while simultaneously avoiding the computational cost and ambiguities associated with imaging, making matched filtering a fast and robust method for weak spectral line detection. We demonstrate how an approximate matched filter can be constructed from a previously observed line or from a model of the source, and we show how this filter can be used to robustly infer a detection significance for weak spectral lines. When applied to ALMA Cycle 2 observations of CH3OH in the protoplanetary disk around TW Hya, the technique yields a ≈53% S/N boost over aperture-based spectral extraction methods, and we show that an even higher boost will be achieved for observations at higher spatial resolution. A Python-based open-source implementation of this technique is available under the MIT license at http://github.com/AstroChem/VISIBLE.
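    A toy one-dimensional illustration of the matched-filtering idea described above (the VISIBLE package linked in the abstract operates on the complex visibilities in the Fourier domain rather than on an extracted spectrum):

```python
# Matched filtering of a noisy spectrum with a kernel built from a source
# model: the filter response concentrates the signal and boosts the S/N.
import numpy as np

rng = np.random.default_rng(0)
nchan = 512
line = np.exp(-0.5 * ((np.arange(nchan) - 256) / 5.0) ** 2)   # model line profile
data = 0.2 * line + rng.normal(0.0, 1.0, nchan)               # weak line buried in noise

kernel = line / np.linalg.norm(line)              # matched filter from the model
response = np.correlate(data, kernel, mode="same")
noise = np.std(response[:128])                    # noise estimate from a line-free region
print("peak response S/N:", response.max() / noise)
```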

  15. A content-boosted collaborative filtering algorithm for personalized training in interpretation of radiological imaging.

    PubMed

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng

    2014-08-01

    Devising a method that can select cases based on the performance levels of trainees and the characteristics of cases is essential for developing a personalized training program in radiology education. In this paper, we propose a novel hybrid prediction algorithm called content-boosted collaborative filtering (CBCF) to predict the difficulty level of each case for each trainee. The CBCF utilizes a content-based filtering (CBF) method to enhance existing trainee-case ratings data and then provides final predictions through a collaborative filtering (CF) algorithm. The CBCF algorithm incorporates the advantages of both CBF and CF, while not inheriting the disadvantages of either. The CBCF method is compared with the pure CBF and pure CF approaches using three datasets, and the experimental results are evaluated in terms of the MAE metric. Our experimental results show that the CBCF outperforms the pure CBF and CF methods by 13.33% and 12.17%, respectively, in terms of prediction precision. This also suggests that the CBCF can be used in the development of personalized training systems in radiology education.
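    A minimal, generic sketch of the content-boosted collaborative filtering idea: a content-based step first densifies the sparse trainee-case rating matrix, then a neighbourhood CF step produces the final prediction. This is an illustration of the general technique, not the authors' exact formulation.

```python
import numpy as np

# Sparse trainee-case difficulty ratings (NaN = unrated).
R = np.array([[4.0, np.nan, 3.0],
              [np.nan, 2.0, 5.0],
              [4.0, 1.0, np.nan]])

def content_boost(R):
    """Stand-in content-based step: fill missing ratings from per-case means."""
    case_means = np.nanmean(R, axis=0)
    return np.where(np.isnan(R), case_means, R)

V = content_boost(R)                     # densified pseudo ratings matrix

def predict(V, trainee, case, k=2):
    """User-based CF on the densified matrix with cosine-similarity neighbours."""
    sims = V @ V[trainee] / (np.linalg.norm(V, axis=1) * np.linalg.norm(V[trainee]))
    neighbours = np.argsort(-sims)[1:k + 1]          # skip the trainee itself
    w = sims[neighbours]
    return float(w @ V[neighbours, case] / w.sum())

print(predict(V, trainee=0, case=1))
```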

  16. An Efficient G-XML Data Management Method using XML Spatial Index for Mobile Devices

    NASA Astrophysics Data System (ADS)

    Tamada, Takashi; Momma, Kei; Seo, Kazuo; Hijikata, Yoshinori; Nishida, Shogo

    This paper presents an efficient G-XML data management method for mobile devices. G-XML is an XML-based encoding for the transport of geographic information. Because the performance of mobile devices, such as PDAs and mobile phones, trails that of desktop machines, special techniques are needed for processing G-XML data on them. In this method, an XML-format spatial index file is used to improve the initial display time of G-XML data. This index file contains an XML pointer to each feature in the G-XML data and classifies these features using multi-dimensional data structures. Experimental results show that this method speeds up the initial display of G-XML data on mobile devices by a factor of about 3-7.

  17. Speed up of XML parsers with PHP language implementation

    NASA Astrophysics Data System (ADS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2012-11-01

    In this paper, the authors introduce PHP5's XML implementation and show how to read, parse, and write a short and uncomplicated XML file using SimpleXML in a PHP environment. The possibilities for combining the PHP5 language and the XML standard are described, and the details of the parsing process with SimpleXML are explained. A practical PHP-XML-MySQL project presents the advantages of XML implementation in PHP modules. This approach allows comparatively simple searching of hierarchical XML data by means of PHP software tools. The proposed project includes a database, which can be extended with new data and new XML parsing functions.

  18. A robust technique based on VLM and Frangi filter for retinal vessel extraction and denoising.

    PubMed

    Khan, Khan Bahadar; Khaliq, Amir A; Jalil, Abdul; Shahid, Muhammad

    2018-01-01

    The exploration of retinal vessel structure is colossally important on account of numerous diseases, including stroke, Diabetic Retinopathy (DR) and coronary heart disease, which can damage the retinal vessel structure. The retinal vascular network is very hard to extract due to its spreading and diminishing geometry and the contrast variation within an image. The proposed technique consists of unique parallel processes for denoising and extraction of blood vessels in retinal images. In the preprocessing section, adaptive histogram equalization enhances the dissimilarity between the vessels and the background, and morphological top-hat filters are employed to eliminate the macula and optic disc, etc. To remove local noise, the difference image is computed from the top-hat filtered image and the high-boost filtered image. The Frangi filter is applied at multiple scales for the enhancement of vessels possessing diverse widths. Segmentation is performed by using improved Otsu thresholding on the high-boost filtered image and Frangi's enhanced image, separately. In the postprocessing steps, a Vessel Location Map (VLM) is extracted by using raster-to-vector transformation. Postprocessing steps are employed in a novel way to reject misclassified vessel pixels. The final segmented image is obtained by using a pixel-by-pixel AND operation between the VLM and the Frangi output image. The method has been rigorously analyzed on the STARE, DRIVE and HRF datasets.
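    A rough sketch of two of the building blocks mentioned above: a high-boost filtered image and a multi-scale Frangi vesselness map, using scikit-image. The boost factor, the sigma range and the use of the bundled retina sample image are assumptions; the full pipeline (CLAHE, top-hat filtering, VLM post-processing) is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data, filters

# Green channel of the scikit-image sample fundus photograph (assumed available).
green = data.retina()[..., 1].astype(float) / 255.0

# High-boost filtering: original + k * (original - smoothed), with k = 1.5 assumed.
blurred = gaussian_filter(green, sigma=2.0)
high_boost = green + 1.5 * (green - blurred)

# Multi-scale Frangi vesselness enhancement (dark vessels on bright background).
vesselness = filters.frangi(green, sigmas=range(1, 6), black_ridges=True)

# Simple Otsu threshold as a stand-in for the paper's improved Otsu step.
vessel_mask = vesselness > filters.threshold_otsu(vesselness)
```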

  19. XML under the Hood.

    ERIC Educational Resources Information Center

    Scharf, David

    2002-01-01

    Discusses XML (extensible markup language), particularly as it relates to libraries. Topics include organizing information; cataloging; metadata; similarities to HTML; organizations dealing with XML; making XML useful; a history of XML; the semantic Web; related technologies; XML at the Library of Congress; and its role in improving the…

  20. From quantum physics to digital communication: Single sideband continuous phase modulation

    NASA Astrophysics Data System (ADS)

    Farès, Haïfa; Christian Glattli, D.; Louët, Yves; Palicot, Jacques; Moy, Christophe; Roulleau, Preden

    2018-01-01

    In the present paper, we propose a new frequency-shift keying continuous phase modulation (FSK-CPM) scheme that has, by its very nature, the interesting feature of a single-sideband (SSB) spectrum, providing a very compact frequency occupation. First, the original principle, inspired by quantum physics (levitons), is presented. We then address the problem of low-complexity coherent detection of this new waveform, based on orthonormal wave functions used to perform matched filtering for efficient demodulation. This shows that the proposed modulation can operate using existing digital communication technology, since only well-known operations are performed (e.g., filtering, integration). The SSB property can be exploited to allow large bit-rate transmissions at low carrier frequencies without caring about the image-frequency degradation effects typical of ordinary double-sideband signals.

  1. Semantically Interoperable XML Data

    PubMed Central

    Vergara-Niedermayr, Cristobal; Wang, Fusheng; Pan, Tony; Kurc, Tahsin; Saltz, Joel

    2013-01-01

    XML is ubiquitously used as an information exchange platform for web-based applications in healthcare, life sciences, and many other domains. Proliferating XML data are now managed through the latest native XML database technologies. XML data sources conforming to common XML schemas can be shared and integrated with syntactic interoperability. Semantic interoperability can be achieved through semantic annotations of data models using common data elements linked to concepts from ontologies. In this paper, we present a framework and software system to support the development of semantically interoperable XML-based data sources that can be shared through a Grid infrastructure. We also present our work on supporting semantically validated XML data through semantic annotations for XML Schema, semantic validation and semantic authoring of XML data. We demonstrate the use of the system for a biomedical database of medical image annotations and markups. PMID:25298789

  2. Using XML to encode TMA DES metadata.

    PubMed

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs.
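    The data-typing advantage mentioned above can be made concrete with a small example: unlike a DTD, an XML Schema can constrain an element to a numeric range, so validation catches out-of-range CDE values. The element name and range below are invented, not taken from TMA DES; the sketch uses Python with lxml.

```python
from lxml import etree

# A toy XML Schema restricting a value to the integer range 0-3.
schema_doc = etree.XML("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="stain_intensity">
    <xs:simpleType>
      <xs:restriction base="xs:integer">
        <xs:minInclusive value="0"/>
        <xs:maxInclusive value="3"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:element>
</xs:schema>""")
schema = etree.XMLSchema(schema_doc)

print(schema.validate(etree.XML("<stain_intensity>2</stain_intensity>")))  # True
print(schema.validate(etree.XML("<stain_intensity>7</stain_intensity>")))  # False: out of range
```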

  3. Using XML to encode TMA DES metadata

    PubMed Central

    Lyttleton, Oliver; Wright, Alexander; Treanor, Darren; Lewis, Paul

    2011-01-01

    Background: The Tissue Microarray Data Exchange Specification (TMA DES) is an XML specification for encoding TMA experiment data. While TMA DES data is encoded in XML, the files that describe its syntax, structure, and semantics are not. The DTD format is used to describe the syntax and structure of TMA DES, and the ISO 11179 format is used to define the semantics of TMA DES. However, XML Schema can be used in place of DTDs, and another XML encoded format, RDF, can be used in place of ISO 11179. Encoding all TMA DES data and metadata in XML would simplify the development and usage of programs which validate and parse TMA DES data. XML Schema has advantages over DTDs such as support for data types, and a more powerful means of specifying constraints on data values. An advantage of RDF encoded in XML over ISO 11179 is that XML defines rules for encoding data, whereas ISO 11179 does not. Materials and Methods: We created an XML Schema version of the TMA DES DTD. We wrote a program that converted ISO 11179 definitions to RDF encoded in XML, and used it to convert the TMA DES ISO 11179 definitions to RDF. Results: We validated a sample TMA DES XML file that was supplied with the publication that originally specified TMA DES using our XML Schema. We successfully validated the RDF produced by our ISO 11179 converter with the W3C RDF validation service. Conclusions: All TMA DES data could be encoded using XML, which simplifies its processing. XML Schema allows datatypes and valid value ranges to be specified for CDEs, which enables a wider range of error checking to be performed using XML Schemas than could be performed using DTDs. PMID:21969921

  4. Multi-band transmission color filters for multi-color white LEDs based visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Qixia; Zhu, Zhendong; Gu, Huarong; Chen, Mengzhu; Tan, Qiaofeng

    2017-11-01

    Light-emitting diode (LED) based visible light communication (VLC) can provide license-free bands, high data rates, and high security levels, and is a promising technique that will be extensively applied in the future. Multi-band transmission color filters with sufficient peak transmittance and suitable bandwidth play a pivotal role in boosting the signal-to-noise ratio in VLC systems. In this paper, multi-band transmission color filters with bandwidths of dozens of nanometers are designed by a simple analytical method. Experimental results for one-dimensional (1D) and two-dimensional (2D) tri-band color filters demonstrate the effectiveness of the multi-band transmission color filters and the corresponding analytical method.

  5. Searching and Extracting Data from the EMBL-EBI Complex Portal.

    PubMed

    Meldal, Birgit H M; Orchard, Sandra

    2018-01-01

    The Complex Portal ( www.ebi.ac.uk/complexportal ) is an encyclopedia of macromolecular complexes. Complexes are assigned unique, stable IDs, are species specific, and list all participating members with links to an appropriate reference database (UniProtKB, ChEBI, RNAcentral). Each complex is annotated extensively with its functions, properties, structure, stoichiometry, tissue expression profile, and subcellular location. Links to domain-specific databases allow the user to access additional information and enable data searching and filtering. Complexes can be saved and downloaded in PSI-MI XML, MI-JSON, and tab-delimited formats.

  6. Real-time acquisition and tracking system with multiple Kalman filters

    NASA Astrophysics Data System (ADS)

    Beard, Gary C.; McCarter, Timothy G.; Spodeck, Walter; Fletcher, James E.

    1994-07-01

    The design of a real-time, ground-based, infrared tracking system with proven field success in tracking boost vehicles through burnout is presented with emphasis on the software design. The system was originally developed to deliver relative angular positions during boost, and thrust termination time to a sensor fusion station in real-time. Autonomous target acquisition and angle-only tracking features were developed to ensure success under stressing conditions. A unique feature of the system is the incorporation of multiple copies of a Kalman filter tracking algorithm running in parallel in order to minimize run-time. The system is capable of updating the state vector for an object at measurement rates approaching 90 Hz. This paper will address the top-level software design, details of the algorithms employed, system performance history in the field, and possible future upgrades.

  7. Supervised retinal vessel segmentation from color fundus images based on matched filtering and AdaBoost classifier.

    PubMed

    Memari, Nogol; Ramli, Abd Rahman; Bin Saripan, M Iqbal; Mashohor, Syamsiah; Moghbel, Mehrdad

    2017-01-01

    The structure and appearance of the blood vessel network in retinal fundus images is an essential part of diagnosing various problems associated with the eyes, such as diabetes and hypertension. In this paper, an automatic retinal vessel segmentation method utilizing matched filter techniques coupled with an AdaBoost classifier is proposed. The fundus image is enhanced using morphological operations, the contrast is increased using contrast limited adaptive histogram equalization (CLAHE) method and the inhomogeneity is corrected using Retinex approach. Then, the blood vessels are enhanced using a combination of B-COSFIRE and Frangi matched filters. From this preprocessed image, different statistical features are computed on a pixel-wise basis and used in an AdaBoost classifier to extract the blood vessel network inside the image. Finally, the segmented images are postprocessed to remove the misclassified pixels and regions. The proposed method was validated using publicly accessible Digital Retinal Images for Vessel Extraction (DRIVE), Structured Analysis of the Retina (STARE) and Child Heart and Health Study in England (CHASE_DB1) datasets commonly used for determining the accuracy of retinal vessel segmentation methods. The accuracy of the proposed segmentation method was comparable to other state of the art methods while being very close to the manual segmentation provided by the second human observer with an average accuracy of 0.972, 0.951 and 0.948 in DRIVE, STARE and CHASE_DB1 datasets, respectively.
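    A generic sketch of the classification stage described above: per-pixel filter responses feed an AdaBoost classifier that labels each pixel as vessel or background. The synthetic features and labels stand in for the paper's B-COSFIRE/Frangi-based features; only the scikit-learn workflow is illustrated.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)

# Stand-in training data: rows = pixels, columns = per-pixel filter responses
# (e.g., intensity, matched-filter and vesselness outputs in the real pipeline).
X_train = rng.normal(size=(5000, 4))
y_train = (X_train[:, 2] + 0.3 * rng.normal(size=5000) > 0).astype(int)  # fake labels

clf = AdaBoostClassifier(n_estimators=100).fit(X_train, y_train)

X_new = rng.normal(size=(10, 4))                 # features of unseen pixels
vessel_probability = clf.predict_proba(X_new)[:, 1]
```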

  8. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  9. Software Development Of XML Parser Based On Algebraic Tools

    NASA Astrophysics Data System (ADS)

    Georgiev, Bozhidar; Georgieva, Adriana

    2011-12-01

    This paper presents the development and implementation of an algebraic method for XML data processing that accelerates the XML parsing process. The nontraditional approach proposed here for fast XML navigation with algebraic tools contributes to ongoing efforts towards an easier, user-friendly API for XML transformations. The proposed software for XML document processing (a parser) is easy to use and can manage files with a strictly defined data structure. The purpose of the presented algorithm is to offer a new approach for searching and restructuring hierarchical XML data. This approach permits fast XML document processing, using an algebraic model developed in detail in previous works of the same authors. The proposed parsing mechanism is easily accessible to web consumers, who can control XML file processing, search for different elements (tags) in it, and delete or add new XML content. The various tests presented show higher speed and lower resource consumption in comparison with some existing commercial parsers.

  10. A Space Surveillance Ontology: Captured in an XML Schema

    DTIC Science & Technology

    2000-10-01

    characterization in a way most appropriate to a sub-domain. 6. The commercial market is embracing XML, and the military can take advantage of this significant… the space surveillance ontology effort to two key efforts: the Defense Information Infrastructure Common Operating Environment (DII COE) XML… strongly believe XML schemas will supplant them. Some of the advantages that XML schemas provide over DTDs include: strong data typing (the XML Schema…

  11. An XML Data Model for Inverted Image Indexing

    NASA Astrophysics Data System (ADS)

    So, Simon W.; Leung, Clement H. C.; Tse, Philip K. C.

    2003-01-01

    The Internet world makes increasing use of XML-based technologies. In multimedia data indexing and retrieval, the MPEG-7 standard for Multimedia Description Scheme is specified using XML. The flexibility of XML allows users to define other markup semantics for special contexts, construct data-centric XML documents, exchange standardized data between computer systems, and present data in different applications. In this paper, the Inverted Image Indexing paradigm is presented and modeled using XML Schema.

  12. AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework.

    PubMed

    Zheng, Qi; Grice, Elizabeth A

    2016-10-01

    Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost's algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost.
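    A generic Bayesian formulation in the spirit described above (not necessarily AlignerBoost's exact weighting): each candidate alignment $i$ of a read receives a posterior probability of being the true origin, which is then converted to a Phred-scaled mapping quality:

```latex
P(\text{hit } i \mid \text{read}) = \frac{L_i\,\pi_i}{\sum_j L_j\,\pi_j},
\qquad
Q_{\mathrm{map}} = -10\,\log_{10}\!\bigl(1 - P(\text{hit } i^{*} \mid \text{read})\bigr)
```

    where $L_j$ is the alignment likelihood of candidate $j$, $\pi_j$ a prior (which could incorporate known SNPs), and $i^{*}$ the reported hit.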

  13. ScotlandsPlaces XML: Bespoke XML or XML Mapping?

    ERIC Educational Resources Information Center

    Beamer, Ashley; Gillick, Mark

    2010-01-01

    Purpose: The purpose of this paper is to investigate web services (in the form of parameterised URLs), specifically in the context of the ScotlandsPlaces project. This involves cross-domain querying, data retrieval and display via the development of a bespoke XML standard rather than existing XML formats and mapping between them.…

  14. An Introduction to the Extensible Markup Language (XML).

    ERIC Educational Resources Information Center

    Bryan, Martin

    1998-01-01

    Describes Extensible Markup Language (XML), a subset of the Standard Generalized Markup Language (SGML) that is designed to make it easy to interchange structured documents over the Internet. Topics include Document Type Definition (DTD), components of XML, the use of XML, text and non-text elements, and uses for XML-coded files. (LRW)

  15. XML Reconstruction View Selection in XML Databases: Complexity Analysis and Approximation Scheme

    NASA Astrophysics Data System (ADS)

    Chebotko, Artem; Fu, Bin

    Query evaluation in an XML database requires reconstructing XML subtrees rooted at nodes found by an XML query. Since XML subtree reconstruction can be expensive, one approach to improve query response time is to use reconstruction views - materialized XML subtrees of an XML document, whose nodes are frequently accessed by XML queries. For this approach to be efficient, the principal requirement is a framework for view selection. In this work, we are the first to formalize and study the problem of XML reconstruction view selection. The input is a tree T, in which every node $i$ has a size $c_i$ and profit $p_i$, and the size limitation C. The target is to find a subset of subtrees rooted at nodes $i_1, \ldots, i_k$ respectively such that $c_{i_1}+\cdots+c_{i_k} \le C$, and $p_{i_1}+\cdots+p_{i_k}$ is maximal. Furthermore, there is no overlap between any two subtrees selected in the solution. We prove that this problem is NP-hard and present a fully polynomial-time approximation scheme (FPTAS) as a solution.

  16. Yellow filters can improve magnocellular function: motion sensitivity, convergence, accommodation, and reading.

    PubMed

    Ray, N J; Fowler, S; Stein, J F

    2005-04-01

    The magnocellular system plays an important role in visual motion processing, controlling vergence eye movements, and in reading. Yellow filters may boost magnocellular activity by eliminating inhibitory blue input to this pathway. It was found that wearing yellow filters increased motion sensitivity, convergence, and accommodation in many children with reading difficulties, both immediately and after three months using the filters. Motion sensitivity was not increased using control neutral density filters. Moreover, reading-impaired children showed significant gains in reading ability after three months wearing the filters compared with those who had used a placebo. It was concluded that yellow filters can improve magnocellular function permanently. Hence, they should be considered as an alternative to corrective lenses, prisms, or exercises for treating poor convergence and accommodation, and also as an aid for children with reading problems.

  17. XML Style Guide

    DTIC Science & Technology

    2015-07-01

    Acronyms: ASCII (American Standard Code for Information Interchange); DAU (data acquisition unit); DDML (data display markup language); IHAL…; …Transfer Standard; URI (uniform resource identifier); W3C (World Wide Web Consortium); XML (extensible markup language); XSD (XML schema definition)… XML Style Guide, RCC 125-15, July 2015. Introduction: The next generation of telemetry systems will rely heavily on extensible markup language (XML…

  18. Optimization of Adaboost Algorithm for Sonar Target Detection in a Multi-Stage ATR System

    NASA Technical Reports Server (NTRS)

    Lin, Tsung Han (Hank)

    2011-01-01

    JPL has developed a multi-stage Automated Target Recognition (ATR) system to locate objects in images. First, input images are preprocessed and sent to a Grayscale Optical Correlator (GOC) filter to identify possible regions-of-interest (ROIs). Second, feature extraction operations are performed using Texton filters and Principal Component Analysis (PCA). Finally, the features are fed to a classifier, to identify ROIs that contain the targets. Previous work used the Feed-forward Back-propagation Neural Network for classification. In this project we investigate a version of Adaboost as a classifier for comparison. The version we used is known as GentleBoost. We used the boosted decision tree as the weak classifier. We have tested our ATR system against real-world sonar images using the Adaboost approach. Results indicate an improvement in performance over a single Neural Network design.

  19. Framework and prototype for a secure XML-based electronic health records system.

    PubMed

    Steele, Robert; Gardner, William; Chandra, Darius; Dillon, Tharam S

    2007-01-01

    Security of personal medical information has always been a challenge for the advancement of Electronic Health Records (EHRs) initiatives. eXtensible Markup Language (XML) is rapidly becoming the key standard for data representation and transportation. The widespread use of XML and the prospect of its use in the Electronic Health (e-health) domain highlight the need for flexible access control models for XML data and documents. This paper presents a declarative access control model for XML data repositories that utilises an expressive XML role control model. The operational semantics of this model are illustrated by Xplorer, a user interface generation engine which supports search-browse-navigate activities on XML repositories.

  20. Using XML to Separate Content from the Presentation Software in eLearning Applications

    ERIC Educational Resources Information Center

    Merrill, Paul F.

    2005-01-01

    This paper has shown how XML (extensible Markup Language) can be used to mark up content. Since XML documents, with meaningful tags, can be interpreted easily by humans as well as computers, they are ideal for the interchange of information. Because XML tags can be defined by an individual or organization, XML documents have proven useful in a…

  1. The XSD-Builder Specification Language—Toward a Semantic View of XML Schema Definition

    NASA Astrophysics Data System (ADS)

    Fong, Joseph; Cheung, San Kuen

    In the present database market, the XML database model is a main structure for forthcoming database systems in the Internet environment. As a conceptual schema for XML databases, the XML model has limitations in presenting its data semantics, and system analysts have had no toolset for modeling and analyzing XML systems. We apply the XML Tree Model (shown in Figure 2) as a conceptual schema of an XML database to model and analyze its structure. This is important not only for visualizing, specifying, and documenting structural models, but also for constructing executable systems. The tree model represents the inter-relationships among elements inside different logical schemas such as XML Schema Definition (XSD), DTD, Schematron, XDR, SOX, and DSD (shown in Figure 1; an explanation of the terms in the figure is given in Table 1). The XSD-Builder consists of the XML Tree Model, a source language, a translator, and XSD. The source language, called XSD-Source, mainly provides a user-friendly environment for writing an XSD. The source language is then translated by the XSD-Translator, whose output is an XSD, which is our target and is called the object language.

  2. Querying XML Data with SPARQL

    NASA Astrophysics Data System (ADS)

    Bikakis, Nikos; Gioldasis, Nektarios; Tsinaraki, Chrisa; Christodoulakis, Stavros

    SPARQL is today the standard access language for Semantic Web data. In the recent years XML databases have also acquired industrial importance due to the widespread applicability of XML in the Web. In this paper we present a framework that bridges the heterogeneity gap and creates an interoperable environment where SPARQL queries are used to access XML databases. Our approach assumes that fairly generic mappings between ontology constructs and XML Schema constructs have been automatically derived or manually specified. The mappings are used to automatically translate SPARQL queries to semantically equivalent XQuery queries which are used to access the XML databases. We present the algorithms and the implementation of SPARQL2XQuery framework, which is used for answering SPARQL queries over XML databases.

  3. Pathology data integration with eXtensible Markup Language.

    PubMed

    Berman, Jules J

    2005-02-01

    It is impossible to overstate the importance of XML (eXtensible Markup Language) as a data organization tool. With XML, pathologists can annotate all of their data (clinical and anatomic) in a format that can transform every pathology report into a database, without compromising narrative structure. The purpose of this manuscript is to provide an overview of XML for pathologists. Examples will demonstrate how pathologists can use XML to annotate individual data elements and to structure reports in a common format that can be merged with other XML files or queried using standard XML tools. This manuscript gives pathologists a glimpse into how XML allows pathology data to be linked to other types of biomedical data and reduces our dependence on centralized proprietary databases.

  4. XML technology planning database : lessons learned

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Neff, Jon M.

    2005-01-01

    A hierarchical Extensible Markup Language (XML) database called XCALIBR (XML Capability Analysis LIBRary) has been developed by the New Millennium Program to assist in technology return on investment (ROI) analysis and technology portfolio optimization. The database contains mission requirements and technology capabilities, which are related by use of an XML dictionary. The XML dictionary codifies a standardized taxonomy for space missions, systems, subsystems and technologies. In addition to being used for ROI analysis, the database is being examined for use in project planning, tracking and documentation. During the past year, the database has moved from development into alpha testing. This paper describes the lessons learned during construction and testing of the prototype database and the motivation for moving from an XML taxonomy to a standard XML-based ontology.

  5. Optical devices

    DOEpatents

    Chaves, Julio C.; Falicoff, Waqidi; Minano, Juan C.; Benitez, Pablo; Dross, Oliver; Parkyn, Jr., William A.

    2010-07-13

    An optical manifold for efficiently combining a plurality of blue LED outputs to illuminate a phosphor for a single, substantially homogeneous output, in a small, cost-effective package. Embodiments are disclosed that use a single or multiple LEDs and a remote phosphor, and an intermediate wavelength-selective filter arranged so that backscattered photoluminescence is recycled to boost the luminance and flux of the output aperture. A further aperture mask is used to boost phosphor luminance with only modest loss of luminosity. Alternative non-recycling embodiments provide blue and yellow light in collimated beams, either separately or combined into white.

  6. Entropy-guided switching trimmed mean deviation-boosted anisotropic diffusion filter

    NASA Astrophysics Data System (ADS)

    Nnolim, Uche A.

    2016-07-01

    An effective anisotropic diffusion (AD) mean filter variant is proposed for filtering of salt-and-pepper impulse noise. The implemented filter is robust to impulse noise ranging from low to high density levels. The algorithm involves a switching scheme in addition to utilizing the unsymmetric trimmed mean/median deviation to filter image noise while greatly preserving image edges, regardless of impulse noise density (ND). It operates with threshold parameters selected manually or adaptively estimated from the image statistics. It is further combined with the partial differential equations (PDE)-based AD for edge preservation at high NDs to enhance the properties of the trimmed mean filter. Based on experimental results, the proposed filter easily and consistently outperforms the median filter and its other variants ranging from simple to complex filter structures, especially the known PDE-based variants. In addition, the switching scheme and threshold calculation enables the filter to avoid smoothing an uncorrupted image, and filtering is activated only when impulse noise is present. Ultimately, the particular properties of the filter make its combination with the AD algorithm a unique and powerful edge-preservation smoothing filter at high-impulse NDs.
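    For reference, PDE-based anisotropic diffusion filters of the kind combined with the trimmed-mean stage above build on the classical Perona-Malik formulation, in which the image $I$ evolves under an edge-stopping conductance $c(\cdot)$ with contrast parameter $K$:

```latex
\frac{\partial I}{\partial t} = \operatorname{div}\!\bigl(c(\lVert\nabla I\rVert)\,\nabla I\bigr),
\qquad
c(s) = e^{-(s/K)^{2}}
```

    Smoothing is strong where the gradient is small (homogeneous regions) and suppressed across strong edges, which is why the AD stage preserves edges at high noise densities.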

  7. A prospective case study of high boost, high frequency emphasis and two-way diffusion filters on MR images of glioblastoma multiforme.

    PubMed

    Anoop, B N; Joseph, Justin; Williams, J; Jayaraman, J Sivaraman; Sebastian, Ansa Maria; Sihota, Praveer

    2018-06-01

    Glioblastoma multiforme (GBM) appears undifferentiated and non-enhancing on magnetic resonance (MR) imagery. As MRI does not offer adequate image quality to allow visual discrimination of the boundary between the GBM focus and perifocal vasogenic edema, surgical and radiotherapy planning become difficult. The presence of noise in MR images influences the computation of radiation dosage and precludes edge-based segmentation schemes in automated software for radiation treatment planning. The performance of techniques meant for simultaneous denoising and sharpening, like high-boost filters, high-frequency emphasis filters and two-way anisotropic diffusion, is sensitive to the selection of their operational parameters. Improper selection may cause overshoot and saturation artefacts, or noisy grey-level transitions may be left unsuppressed. This paper is a prospective case study of the performance of high-boost filters, high-frequency emphasis filters and two-way anisotropic diffusion on MR images of GBM, assessing their ability to suppress noise in homogeneous regions and to selectively sharpen the true morphological edges. An objective method for determining the optimum values of the operational parameters of these techniques is also demonstrated. Saturation Evaluation Index (SEI), Perceptual Sharpness Index (PSI), Edge Model based Blur Metric (EMBM), Sharpness of Ridges (SOR), Structural Similarity Index Metric (SSIM), Peak Signal to Noise Ratio (PSNR) and Noise Suppression Ratio (NSR) are the objective functions used. They account for overshoot and saturation artefacts, sharpness of the image, width of salient edges (haloes), susceptibility of edge quality to noise, feature preservation and degree of noise suppression. Two-way diffusion is found to be superior to the others in all these respects. The SEI, PSI, EMBM, SOR, SSIM, PSNR and NSR exhibited by two-way diffusion are 0.0016 ± 0.0012, 0.2049 ± 0.0187, 0.0905 ± 0.0408, 2.64 × 10^12 ± 1.6 × 10^12, 0.9955 ± 0.0024, 38.214 ± 5.2145 and 0.3547 ± 0.0069, respectively.
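    In their textbook forms, the two enhancement operators studied above are (with local smoothing $\bar f$, boost factor $k$, and a high-pass transfer function $H_{hp}$; the paper's parameter choices may differ):

```latex
g(x,y) = f(x,y) + k\,\bigl(f(x,y) - \bar f(x,y)\bigr),
\qquad
H_{\mathrm{hfe}}(u,v) = a + b\,H_{hp}(u,v), \quad b > a \ge 0
```

    The first is high-boost filtering in the spatial domain; the second is high-frequency emphasis applied to the image spectrum.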

  8. phyloXML: XML for evolutionary biology and comparative genomics

    PubMed Central

    Han, Mira V; Zmasek, Christian M

    2009-01-01

    Background: Evolutionary trees are central to a wide range of biological studies. In many of these studies, tree nodes and branches need to be associated (or annotated) with various attributes. For example, in studies concerned with organismal relationships, tree nodes are associated with taxonomic names, whereas tree branches have lengths and oftentimes support values. Gene trees used in comparative genomics or phylogenomics are usually annotated with taxonomic information, genome-related data, such as gene names and functional annotations, as well as events such as gene duplications, speciations, or exon shufflings, combined with information related to the evolutionary tree itself. The data standards currently used for evolutionary trees have limited capacities to incorporate such annotations of different data types. Results: We developed an XML language, named phyloXML, for describing evolutionary trees, as well as various associated data items. PhyloXML provides elements for commonly used items, such as branch lengths, support values, taxonomic names, and gene names and identifiers. By using "property" elements, phyloXML can be adapted to novel and unforeseen use cases. We also developed various software tools for reading, writing, conversion, and visualization of phyloXML formatted data. Conclusion: PhyloXML is an XML language defined by a complete schema in XSD that allows storing and exchanging the structures of evolutionary trees as well as associated data. More information about phyloXML itself, the XSD schema, as well as tools implementing and supporting phyloXML, is available at . PMID:19860910
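    A minimal sketch of reading clade names and branch lengths from a tiny phyloXML-style document with the Python standard library. Real phyloXML files are richer, and dedicated tools such as Biopython's Bio.Phylo parse them directly; the document below is a hand-written fragment, not an official example.

```python
import xml.etree.ElementTree as ET

doc = """<phyloxml xmlns="http://www.phyloxml.org">
  <phylogeny rooted="true">
    <clade>
      <clade branch_length="0.12"><name>A</name></clade>
      <clade branch_length="0.34"><name>B</name></clade>
    </clade>
  </phylogeny>
</phyloxml>"""

ns = {"p": "http://www.phyloxml.org"}
root = ET.fromstring(doc)
# Print each named clade together with its branch length attribute.
for clade in root.iterfind(".//p:clade[@branch_length]", ns):
    print(clade.findtext("p:name", namespaces=ns), clade.get("branch_length"))
```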

  9. QRev—Software for computation and quality assurance of acoustic doppler current profiler moving-boat streamflow measurements—Technical manual for version 2.8

    USGS Publications Warehouse

    Mueller, David S.

    2016-06-21

    The software program QRev applies common and consistent computational algorithms, combined with automated filtering and quality assessment of the data, to improve the quality and efficiency of streamflow measurements and to help ensure that U.S. Geological Survey streamflow measurements are consistent, accurate, and independent of the manufacturer of the instrument used to make the measurement. Software from different manufacturers uses different algorithms for various aspects of the data processing and discharge computation. The algorithms used by QRev to filter data, interpolate data, and compute discharge are documented and compared to the algorithms used in the manufacturers’ software. QRev applies consistent algorithms and creates a data structure that is independent of the data source. QRev saves an extensible markup language (XML) file that can be imported into databases or electronic field notes software. This report is the technical manual for version 2.8 of QRev.

  10. XML Files

    MedlinePlus

    MedlinePlus produces XML data sets that you are welcome to download and use. If you have questions about the MedlinePlus XML files, please contact us. For additional sources of MedlinePlus data in XML format, visit our Web service page.

  11. δ-dependency for privacy-preserving XML data publishing.

    PubMed

    Landberg, Anders H; Nguyen, Kinh; Pardede, Eric; Rahayu, J Wenny

    2014-08-01

    An ever increasing amount of medical data, such as electronic health records, is being collected, stored, shared and managed in large online health information systems and electronic medical record systems (EMR) (Williams et al., 2001; Virtanen, 2009; Huang and Liou, 2007) [1-3]. From such rich collections, data is often published in the form of census and statistical data sets for the purpose of knowledge sharing and enabling medical research. This brings with it an increasing need to protect individual privacy, which becomes an issue of great importance especially when information about patients is exposed to the public. While the concept of data privacy has been comprehensively studied for relational data, models and algorithms addressing the distinct differences and complex structure of XML data are yet to be explored. Currently, the common compromise method is to convert private XML data into relational data for publication. This ad hoc approach results in significant loss of useful semantic information previously carried in the private XML data. Health data often has a very complex structure, which is best expressed in XML. In fact, XML is the standard format for exchanging (e.g., HL7 version 3) and publishing health information. The lack of means to deal directly with data in XML format is inevitably a serious drawback. In this paper we propose a novel privacy protection model for XML, and an algorithm for implementing this model. We provide general rules, both for transforming a private XML schema into a published XML schema, and for mapping private XML data to the new privacy-protected published XML data. In addition, we propose a new privacy property, δ-dependency, which can be applied to both relational and XML data, and that takes into consideration the hierarchical nature of sensitive data (as opposed to "quasi-identifiers"). Lastly, we provide an implementation of our model, algorithm and privacy property, and perform an experimental analysis, to demonstrate the proposed privacy scheme in practical application.

  12. A Survey in Indexing and Searching XML Documents.

    ERIC Educational Resources Information Center

    Luk, Robert W. P.; Leong, H. V.; Dillon, Tharam S.; Chan, Alvin T. S.; Croft, W. Bruce; Allan, James

    2002-01-01

    Discussion of XML focuses on indexing techniques for XML documents, grouping them into flat-file, semistructured, and structured indexing paradigms. Highlights include searching techniques, including full text search and multistage search; search result presentations; database and information retrieval system integration; XML query languages; and…

  13. XML and E-Journals: The State of Play.

    ERIC Educational Resources Information Center

    Wusteman, Judith

    2003-01-01

    Discusses the introduction of the use of XML (Extensible Markup Language) in publishing electronic journals. Topics include standards, including DTDs (Document Type Definition), or document type definitions; aggregator requirements; SGML (Standard Generalized Markup Language); benefits of XML for e-journals; XML metadata; the possibility of…

  14. XML in Libraries.

    ERIC Educational Resources Information Center

    Tennant, Roy, Ed.

    This book presents examples of how libraries are using XML (eXtensible Markup Language) to solve problems, expand services, and improve systems. Part I contains papers on using XML in library catalog records: "Updating MARC Records with XMLMARC" (Kevin S. Clarke, Stanford University) and "Searching and Retrieving XML Records via the…

  15. AlignerBoost: A Generalized Software Toolkit for Boosting Next-Gen Sequencing Mapping Accuracy Using a Bayesian-Based Mapping Quality Framework

    PubMed Central

    Zheng, Qi; Grice, Elizabeth A.

    2016-01-01

    Accurate mapping of next-generation sequencing (NGS) reads to reference genomes is crucial for almost all NGS applications and downstream analyses. Various repetitive elements in human and other higher eukaryotic genomes contribute in large part to ambiguously (non-uniquely) mapped reads. Most available NGS aligners attempt to address this by either removing all non-uniquely mapping reads, or reporting one random or "best" hit based on simple heuristics. Accurate estimation of the mapping quality of NGS reads is therefore critical albeit completely lacking at present. Here we developed a generalized software toolkit "AlignerBoost", which utilizes a Bayesian-based framework to accurately estimate mapping quality of ambiguously mapped NGS reads. We tested AlignerBoost with both simulated and real DNA-seq and RNA-seq datasets at various thresholds. In most cases, but especially for reads falling within repetitive regions, AlignerBoost dramatically increases the mapping precision of modern NGS aligners without significantly compromising the sensitivity even without mapping quality filters. When using higher mapping quality cutoffs, AlignerBoost achieves a much lower false mapping rate while exhibiting comparable or higher sensitivity compared to the aligner default modes, therefore significantly boosting the detection power of NGS aligners even using extreme thresholds. AlignerBoost is also SNP-aware, and higher quality alignments can be achieved if provided with known SNPs. AlignerBoost’s algorithm is computationally efficient, and can process one million alignments within 30 seconds on a typical desktop computer. AlignerBoost is implemented as a uniform Java application and is freely available at https://github.com/Grice-Lab/AlignerBoost. PMID:27706155
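
    The abstract does not give AlignerBoost's exact model; the following is a hedged, generic sketch of the underlying idea (turning a Bayesian posterior over a read's candidate alignments into a Phred-scaled mapping quality), with illustrative scores only.

```python
import math

def mapping_quality(log_likelihoods, cap=60):
    """Generic Bayesian sketch (not AlignerBoost's actual algorithm):
    posterior of the best hit under a uniform prior over candidate
    locations, converted to a Phred-scaled mapping quality."""
    weights = [math.exp(ll) for ll in log_likelihoods]
    p_best = max(weights) / sum(weights)
    p_wrong = max(1.0 - p_best, 1e-10)          # avoid log10(0)
    return min(cap, int(round(-10 * math.log10(p_wrong))))

print(mapping_quality([-1.0, -9.0, -10.5]))     # well-separated best hit -> high MAPQ
print(mapping_quality([-1.0, -1.1]))            # ambiguous, repeat-like hits -> low MAPQ
```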

  16. Application handbook for a Standardized Control Module (SCM) for DC-DC converters, volume 1

    NASA Astrophysics Data System (ADS)

    Lee, F. C.; Mahmoud, M. F.; Yu, Y.

    1980-04-01

    The standardized control module (SCM) was developed for application in the buck, boost and buck/boost DC-DC converters. The SCM used multiple feedback loops to provide improved input line and output load regulation, stable feedback control system, good dynamic transient response and adaptive compensation of the control loop for changes in open loop gain and output filter time constants. The necessary modeling and analysis tools to aid the design engineer in the application of the SCM to DC-DC Converters were developed. The SCM functional block diagram and the different analysis techniques were examined. The average time domain analysis technique was chosen as the basic analytical tool. The power stage transfer functions were developed for the buck, boost and buck/boost converters. The analog signal and digital signal processor transfer functions were developed for the three DC-DC Converter types using the constant on time, constant off time and constant frequency control laws.
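
    For reference, the ideal continuous-conduction-mode voltage transfer ratios of the three topologies named above, as a function of duty cycle D, are the standard textbook relations (not taken from the handbook itself):

```latex
\frac{V_{out}}{V_{in}} =
\begin{cases}
D & \text{buck} \\[4pt]
\dfrac{1}{1-D} & \text{boost} \\[4pt]
-\dfrac{D}{1-D} & \text{buck/boost (inverting)}
\end{cases}
```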

  17. Application handbook for a Standardized Control Module (SCM) for DC-DC converters, volume 1

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Mahmoud, M. F.; Yu, Y.

    1980-01-01

    The standardized control module (SCM) was developed for application in the buck, boost and buck/boost DC-DC converters. The SCM used multiple feedback loops to provide improved input line and output load regulation, stable feedback control system, good dynamic transient response and adaptive compensation of the control loop for changes in open loop gain and output filter time constants. The necessary modeling and analysis tools to aid the design engineer in the application of the SCM to DC-DC Converters were developed. The SCM functional block diagram and the different analysis techniques were examined. The average time domain analysis technique was chosen as the basic analytical tool. The power stage transfer functions were developed for the buck, boost and buck/boost converters. The analog signal and digital signal processor transfer functions were developed for the three DC-DC Converter types using the constant on time, constant off time and constant frequency control laws.

  18. Improved reading performance using individualized compensation filters for observers with losses in central vision

    NASA Technical Reports Server (NTRS)

    Lawton, Teri B.

    1989-01-01

    A method to improve the reading performance of subjects with losses in central vision is proposed in which the amplitudes of the intermediate spatial frequencies are boosted relative to the lower spatial frequencies. In the method, words are filtered using an image enhancement function which is based on a subject's losses in visual function relative to a normal subject. It was found that 30-70 percent less magnification was necessary, and that reading rates were improved 2-3 times, using the method. The individualized compensation filters improved the clarity and visibility of words. The shape of the enhancement function was shown to be important in determining the optimum compensation filter for improving reading performance.

  19. On energy harvesting for augmented tags

    NASA Astrophysics Data System (ADS)

    Allane, Dahmane; Duroc, Yvan; Andia Vera, Gianfranco; Touhami, Rachida; Tedjini, Smail

    2017-02-01

    In this paper, the harmonic signals generated by UHF RFID chips, usually considered as spurious effects and left unused, are exploited. Indeed, the harmonic signals are harvested to feed a supplementary circuitry associated with a passive RFID tag. Two approaches are presented and compared. In the first one, the third-harmonic signal is combined with an external 2.45-GHz Wi-Fi signal. The integration is done in such a way that the composite signal boosts the conversion efficiency of the energy harvester. In the second approach, the third-harmonic signal is used as the only source of a harvester that energizes a commercial temperature sensor associated with the tag. The design procedures of the two "augmented-tag" approaches are presented. The performance of each system is simulated with ADS software using the Harmonic Balance (HB) tool, and the results obtained in simulation and in measurement are also compared.

  20. Setting the Standard: XML on Campus.

    ERIC Educational Resources Information Center

    Rawlins, Mike

    2001-01-01

    Explains what XML (Extensible Markup Language) is; where to find it in a few years (everywhere from Web pages, to database management systems, to common campus applications); issues that will make XML somewhat of an experimental strategy in the near term; and the importance of decision-makers being abreast of XML trends in standards, tools…

  1. Using a Combination of UML, C2RM, XML, and Metadata Registries to Support Long-Term Development/Engineering

    DTIC Science & Technology

    2003-01-01

    [Figure residue listing key XML specifications: XCBF (authentication), XACML and SAML (authorization), P3P (privacy), XrML (digital rights management), DASL and WebDAV (content management), content syndication, registry/repository, BPSS, XML/EDI and Universal Business Language (UBL) for eCommerce, HR-XML (human resources), and semantic specifications.]

  2. XML — an opportunity for data standards in the geosciences

    NASA Astrophysics Data System (ADS)

    Houlding, Simon W.

    2001-08-01

    Extensible markup language (XML) is a recently introduced meta-language standard on the Web. It provides the rules for development of metadata (markup) standards for information transfer in specific fields. XML allows development of markup languages that describe what information is rather than how it should be presented. This allows computer applications to process the information in intelligent ways. In contrast hypertext markup language (HTML), which fuelled the initial growth of the Web, is a metadata standard concerned exclusively with presentation of information. Besides its potential for revolutionizing Web activities, XML provides an opportunity for development of meaningful data standards in specific application fields. The rapid endorsement of XML by science, industry and e-commerce has already spawned new metadata standards in such fields as mathematics, chemistry, astronomy, multi-media and Web micro-payments. Development of XML-based data standards in the geosciences would significantly reduce the effort currently wasted on manipulating and reformatting data between different computer platforms and applications and would ensure compatibility with the new generation of Web browsers. This paper explores the evolution, benefits and status of XML and related standards in the more general context of Web activities and uses this as a platform for discussion of its potential for development of data standards in the geosciences. Some of the advantages of XML are illustrated by a simple, browser-compatible demonstration of XML functionality applied to a borehole log dataset. The XML dataset and the associated stylesheet and schema declarations are available for FTP download.

  3. Report of Official Foreign Travel to Montreal, Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mason, J. D.

    How can DOE, NNSA, and Y-12 best handle the integration of information from diverse sources, and what will best ensure that legacy data will survive changes in computing systems for the future? Although there is no simple answer, it is becoming increasingly clear throughout the information-management industry that a key component of both preservation and integration of information is the adoption of standardized data formats. The most notable standardized format is XML, to which almost all data is now migrating. XML is derived from SGML, as is HTML, the common language of the World Wide Web. XML is becoming increasingly important as part of the Y-12 data infrastructure. Y-12 is implementing a new generation of XML-based publishing systems. Y-12 already has been supporting projects at DOE Headquarters, such as the Guidance Streamlining Initiative (GSI) that will result in the storage of classification guidance in XML. Y-12 collects some test data in XML as the result of Electronic Data Capture (EDC), and XML data is also used in Engineering Releases. I am participating in a series of projects sponsored by the PRIDE initiative that include the capture of dimensional certification and other similar records in XML, the creation of XML formats for Electronic Data Capture, and the creation of Quality Evaluation Reports in XML. In support of DOE's use of SGML, XML, HTML, Topic Maps, and related standards, I served 1985-2007 as chairman of the international committee responsible for SGML and standards derived from it, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations; I continue to belong to the committee. During the August 2010 trip, I co-chaired the conference Balisage 2010.

  4. Dimensional Representation and Gradient Boosting for Seismic Event Classification

    NASA Astrophysics Data System (ADS)

    Semmelmayer, F. C.; Kappedal, R. D.; Magana-Zook, S. A.

    2017-12-01

    In this research, we conducted experiments with representational structures on 5009 seismic signals, with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We also applied a gradient boosted classifier. While perfect classification was not attained (our best model reached approximately 88%), some cases demonstrate that many events can be filtered out as having a very high probability of being explosions or earthquakes, diminishing subject-matter experts' (SME) workload for first-stage analysis. It is our hope that these methods can be refined, further increasing the classification probability.
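
    A hedged sketch of the general approach (a gradient-boosted classifier separating explosions from earthquakes, with high-confidence predictions filtered out of the analyst queue); the features and labels below are synthetic placeholders, not the study's dimensional representations.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5009, 16))        # 5009 signals, 16 illustrative features
y = rng.integers(0, 2, size=5009)      # 0 = earthquake, 1 = explosion (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# high-confidence events can be auto-filtered, reducing SME workload
proba = clf.predict_proba(X_te)[:, 1]
auto_filtered = (proba > 0.95) | (proba < 0.05)
print("accuracy:", clf.score(X_te, y_te), "auto-filtered fraction:", auto_filtered.mean())
```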

  5. Tactical missile turbulence problems

    NASA Technical Reports Server (NTRS)

    Dickson, Richard E.

    1987-01-01

    Of particular interest is atmospheric turbulence in the atmospheric boundary layer, since this affects both the launch and terminal phase of flight, and the total flight for direct fire systems. Brief discussions are presented on rocket artillery boost wind problems, mean wind correction, turbulent boost wind correction, the Dynamically Aimed Free Flight Rocket (DAFFR) wind filter, the DAFFR test, and rocket wake turbulence problems. It is concluded that many of the turbulence problems of rockets and missiles are common to those of aircraft, such as structural loading and control system design. However, these problems have not been solved at this time.

  6. Optical manifold for light-emitting diodes

    DOEpatents

    Chaves, Julio C.; Falicoff, Waqidi; Minano, Juan C.; Benitez, Pablo; Parkyn, Jr., William A.; Alvarez, Roberto; Dross, Oliver

    2008-06-03

    An optical manifold for efficiently combining a plurality of blue LED outputs to illuminate a phosphor for a single, substantially homogeneous output, in a small, cost-effective package. Embodiments are disclosed that use a single or multiple LEDs and a remote phosphor, and an intermediate wavelength-selective filter arranged so that backscattered photoluminescence is recycled to boost the luminance and flux of the output aperture. A further aperture mask is used to boost phosphor luminance with only modest loss of luminosity. Alternative non-recycling embodiments provide blue and yellow light in collimated beams, either separately or combined into white.

  7. XML: A Language To Manage the World Wide Web. ERIC Digest.

    ERIC Educational Resources Information Center

    Davis-Tanous, Jennifer R.

    This digest provides an overview of XML (Extensible Markup Language), a markup language used to construct World Wide Web pages. Topics addressed include: (1) definition of a markup language, including comparison of XML with SGML (Standard Generalized Markup Language) and HTML (HyperText Markup Language); (2) how XML works, including sample tags,…

  8. Interoperability, Data Control and Battlespace Visualization using XML, XSLT and X3D

    DTIC Science & Technology

    2003-09-01

    Rosenthal, Arnon; Seligman, Len; Costello, Roger. "XML, Databases, and Interoperability." Federal Database Colloquium, AFCEA, San Diego, 1999. ... Linda, Mastering XML, Premium Edition, SYBEX, 2001. Wooldridge, Michael, An Introduction to MultiAgent Systems, Wiley, 2002. PAPERS: Abernathy, M

  9. Applying Analogical Reasoning Techniques for Teaching XML Document Querying Skills in Database Classes

    ERIC Educational Resources Information Center

    Mitri, Michel

    2012-01-01

    XML has become the most ubiquitous format for exchange of data between applications running on the Internet. Most Web Services provide their information to clients in the form of XML. The ability to process complex XML documents in order to extract relevant information is becoming as important a skill for IS students to master as querying…

  10. Compression of Probabilistic XML Documents

    NASA Astrophysics Data System (ADS)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.

  11. Indexing Temporal XML Using FIX

    NASA Astrophysics Data System (ADS)

    Zheng, Tiankun; Wang, Xinjun; Zhou, Yingchun

    XML has become an important criterion for the description and exchange of information. It is of practical significance to introduce temporal information on this basis, because time has penetrated all walks of life as an important property of information. Such a database can track document history and recover information to its state at any previous time, and is called a temporal XML database. We devise a new feature vector on the basis of FIX, which is a feature-based XML index, and build an index on a temporal XML database using a B+ tree, denoted TFIX. We also put forward a new query algorithm upon it for temporal queries. Our experiments proved that this index has better performance over other kinds of XML indexes. The index can satisfy all TXPath queries with depth up to K (>0).

  12. TRMM On-Orbit Performance Re-Assessed After Control Change

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve

    2006-01-01

    The Tropical Rainfall Measuring Mission (TRMM) spacecraft, a joint mission between the U.S. and Japan, launched onboard an H-II rocket on November 27, 1997 and transitioned in August, 2001 from an average operating altitude of 350 kilometers to 402.5 kilometers. Due to problems using the Earth Sensor Assembly (ESA) at the higher altitude, TRMM switched to a backup attitude control mode. Prior to the orbit boost TRMM controlled pitch and roll to the local vertical using ESA measurements while using gyro data to propagate yaw attitude between yaw updates from the Sun sensors. After the orbit boost, a Kalman filter used 3-axis gyro data with Sun sensor and magnetometer data to estimate onboard attitude. While originally intended to meet a degraded attitude accuracy of 0.7 degrees, the new control mode met the original 0.2 degree attitude accuracy requirement after improving onboard ephemeris prediction and adjusting the magnetometer calibration onboard. Independent roll attitude checks using a science instrument, the Precipitation Radar (PR), which was built in Japan, provided a novel insight into the pointing performance. The PR data helped identify the pointing errors after the orbit boost, track the performance improvements, and show subtle effects from ephemeris errors and gyro bias errors. It also helped identify average bias trends throughout the mission. Roll errors tracked by the PR from sample orbits pre-boost and post-boost are shown in Figure 1. Prior to the orbit boost the largest attitude errors were due to occasional interference in the ESA. These errors were sometimes larger than 0.2 degrees in pitch and roll, but usually less, as estimated from a comprehensive review of the attitude excursions using gyro data. Sudden jumps in the onboard roll show up as spikes in the reported attitude since the control responds within tens of seconds to null the pointing error. The PR estimated roll tracks well with an estimate of the roll history propagated using gyro data. After the orbit boost, the attitude errors shown by the PR roll have a smooth sine-wave type signal because of the way that attitude errors propagate with the use of gyro data. Yaw errors couple at orbit period to roll with 1/4 orbit lag. By tracking the amplitude, phase, and bias of the sinusoidal PR roll error signal, it was shown that the average pitch rotation axis tends to be offset from orbit normal in a direction perpendicular to the Sun direction, as shown in Figure 2 for a 200 day period following the orbit boost. This is a result of the higher accuracy and stability of the Sun sensor measurements relative to the magnetometer measurements used in the Kalman filter. In November, 2001 a magnetometer calibration adjustment was uploaded which improved the pointing performance, keeping the roll and yaw amplitudes within about 0.1 degrees. After the boost, onboard ephemeris errors had a direct effect on the pitch pointing, being used to compute the Earth pointing reference frame. Improvements after the orbit boost have kept the onboard ephemeris errors generally below 20 kilometers. Ephemeris errors have secondary effects on roll and yaw, especially during high beta angle when pitch effects can couple into roll and yaw. This is illustrated in Figure 3. The onboard roll bias trends as measured by PR data show correlations with the Kalman filter's gyro bias error.
This particularly shows up after yaw turns (every 2 to 4 weeks) as shown in Figure 3, when a slight roll bias is observed while the onboard computed gyro biases settle to new values. As for longer term trends, the PR data shows that the roll bias was influenced by Earth horizon radiance effects prior to the boost, changing values at yaw turns, and indicated a long term drift as shown in Figure 4. After the boost, the bias variations were smaller and showed some possible correlation with solar beta angle, probably due to sun sensor misalignment effects.

  13. Towards the XML schema measurement based on mapping between XML and OO domain

    NASA Astrophysics Data System (ADS)

    Rakić, Gordana; Budimac, Zoran; Heričko, Marjan; Pušnik, Maja

    2017-07-01

    Measuring the quality of IT solutions is a priority in software engineering. Although numerous metrics for measuring object-oriented code already exist, measuring the quality of UML models or XML Schemas is still developing. One of the research questions in the overall research guided by the ideas described in this paper is whether we can apply already defined object-oriented design metrics to XML schemas based on predefined mappings. In this paper, basic ideas for the mentioned mapping are presented. This mapping is a prerequisite for setting up the future approach to XML schema quality measurement with object-oriented metrics.
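
    As a hedged illustration of such a mapping (not the authors' defined mapping), one might treat each xs:complexType as a class and its nested element declarations as attribute-like members, yielding a simple size metric:

```python
from lxml import etree

XS = "{http://www.w3.org/2001/XMLSchema}"
schema = etree.fromstring("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="Patient">
    <xs:sequence>
      <xs:element name="id" type="xs:string"/>
      <xs:element name="name" type="xs:string"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>""")

# complexType ~ class, nested element declarations ~ attributes (illustrative only)
for ct in schema.iter(XS + "complexType"):
    members = list(ct.iter(XS + "element"))
    print(ct.get("name"), "->", len(members), "attribute-like members")
```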

  14. Image enhancement filters significantly improve reading performance for low vision observers

    NASA Technical Reports Server (NTRS)

    Lawton, T. B.

    1992-01-01

    As people age, so do their photoreceptors; many photoreceptors in central vision stop functioning when a person reaches their late sixties or early seventies. Low vision observers with losses in central vision, those with age-related maculopathies, were studied. Low vision observers no longer see high spatial frequencies, being unable to resolve fine edge detail. We developed image enhancement filters to compensate for the low vision observer's losses in contrast sensitivity to intermediate and high spatial frequencies. The filters work by boosting the amplitude of the less visible intermediate spatial frequencies relative to the lower spatial frequencies. These image enhancement filters not only reduce the magnification needed for reading by up to 70 percent, but they also increase the observer's reading speed by 2-4 times. A summary of this research is presented.
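
    A minimal sketch of that idea (amplifying an intermediate band of spatial frequencies relative to lower frequencies in the Fourier domain); the band limits and gain are illustrative, not the individualized values used in the study.

```python
import numpy as np

def boost_intermediate_frequencies(image, low=0.05, high=0.25, gain=3.0):
    """Amplify spatial frequencies between `low` and `high` (cycles/pixel)
    by `gain`, leaving lower and higher frequencies untouched."""
    F = np.fft.fftshift(np.fft.fft2(image))
    fy = np.fft.fftshift(np.fft.fftfreq(image.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    band = (radius >= low) & (radius <= high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * np.where(band, gain, 1.0))))

enhanced = boost_intermediate_frequencies(np.random.rand(64, 64))
```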

  15. Compressing Aviation Data in XML Format

    NASA Technical Reports Server (NTRS)

    Patel, Hemil; Lau, Derek; Kulkarni, Deepak

    2003-01-01

    Design, operations and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of the data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running Xmill. This, in turn, depends on the nature of the data used. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for this data and what optimal settings should be used with an XML compression tool.
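
    Xmill and XMLPPM are specialized XML compressors; as a hedged illustration of the general size trade-off only, here is a sketch comparing general-purpose compressors from the Python standard library on a toy XML payload.

```python
import bz2, gzip, lzma

xml_text = ("<flights>"
            + "".join(f"<flight id='{i}'><alt>{30000 + i}</alt></flight>" for i in range(1000))
            + "</flights>").encode("utf-8")

for name, compress in (("gzip", gzip.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)):
    print(f"{name}: {len(xml_text)} -> {len(compress(xml_text))} bytes")
```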

  16. Information Retrieval System for Japanese Standard Disease-Code Master Using XML Web Service

    PubMed Central

    Hatano, Kenji; Ohe, Kazuhiko

    2003-01-01

    An information retrieval system for the Japanese Standard Disease-Code Master using XML Web Services has been developed. XML Web Services are a new distributed processing mechanism built on standard Internet technologies. With the seamless remote method invocation of XML Web Services, users are able to get the latest disease code master information from their rich desktop applications or Internet web sites, which refer to this service. PMID:14728364

  17. XML syntax for clinical laboratory procedure manuals.

    PubMed

    Saadawi, Gilan; Harrison, James H

    2003-01-01

    We have developed a document type definition (DTD) in Extensible Markup Language (XML) for clinical laboratory procedures. Our XML syntax can adequately structure a variety of procedure types across different laboratories and is compatible with current procedure standards. The combination of this format with an XML content management system and appropriate style sheets will allow efficient procedure maintenance, distributed access, customized display and effective searching across a large body of test information.

  18. Specifics on a XML Data Format for Scientific Data

    NASA Astrophysics Data System (ADS)

    Shaya, E.; Thomas, B.; Cheung, C.

    An XML-based data format for interchange and archiving of scientific data would benefit in many ways from the features standardized in XML. Foremost of these features is the world-wide acceptance and adoption of XML. Applications, such as browsers, XQL and XSQL advanced query, XML editing, or CSS or XSLT transformation, that are coming out of industry and academia can be easily adopted and provide startling new benefits and features. We have designed a prototype of a core format for holding, in a very general way, parameters, tables, scalar and vector fields, atlases, animations and complex combinations of these. This eXtensible Data Format (XDF) makes use of XML functionalities such as: self-validation of document structure, default values for attributes, XLink hyperlinks, entity replacements, internal referencing, inheritance, and XSLT transformation. An API is available to aid in detailed assembly, extraction, and manipulation. Conversion tools to and from FITS and other existing data formats are under development. In the future, we hope to provide object oriented interfaces to C++, Java, Python, IDL, Mathematica, Maple, and various databases. http://xml.gsfc.nasa.gov/XDF

  19. A Priority Fuzzy Logic Extension of the XQuery Language

    NASA Astrophysics Data System (ADS)

    Škrbić, Srdjan; Wettayaprasit, Wiphada; Saeueng, Pannipa

    2011-09-01

    In recent years there have been significant research findings in flexible XML querying techniques using fuzzy set theory. Many types of fuzzy extensions to XML data model and XML query languages have been proposed. In this paper, we introduce priority fuzzy logic extensions to XQuery language. Describing these extensions we introduce a new query language. Moreover, we describe a way to implement an interpreter for this language using an existing XML native database.

  20. QRFXFreeze: Queryable Compressor for RFX.

    PubMed

    Senthilkumar, Radha; Nandagopal, Gomathi; Ronald, Daphne

    2015-01-01

    The verbose nature of XML has been mulled over again and again and many compression techniques for XML data have been excogitated over the years. Some of the techniques incorporate support for querying the XML database in its compressed format while others have to be decompressed before they can be queried. XML compression in which querying is directly supported instantaneously with no compromise over time is forced to compromise over space. In this paper, we propose the compressor, QRFXFreeze, which not only reduces the space of storage but also supports efficient querying. The compressor does this without decompressing the compressed XML file. The compressor supports all kinds of XML documents along with insert, update, and delete operations. The forte of QRFXFreeze is that the textual data are semantically compressed and are indexed to reduce the querying time. Experimental results show that the proposed compressor performs much better than other well-known compressors.

  1. NeXML: rich, extensible, and verifiable representation of comparative data and metadata.

    PubMed

    Vos, Rutger A; Balhoff, James P; Caravas, Jason A; Holder, Mark T; Lapp, Hilmar; Maddison, Wayne P; Midford, Peter E; Priyam, Anurag; Sukumaran, Jeet; Xia, Xuhua; Stoltzfus, Arlin

    2012-07-01

    In scientific research, integration and synthesis require a common understanding of where data come from, how much they can be trusted, and what they may be used for. To make such an understanding computer-accessible requires standards for exchanging richly annotated data. The challenges of conveying reusable data are particularly acute in regard to evolutionary comparative analysis, which comprises an ever-expanding list of data types, methods, research aims, and subdisciplines. To facilitate interoperability in evolutionary comparative analysis, we present NeXML, an XML standard (inspired by the current standard, NEXUS) that supports exchange of richly annotated comparative data. NeXML defines syntax for operational taxonomic units, character-state matrices, and phylogenetic trees and networks. Documents can be validated unambiguously. Importantly, any data element can be annotated, to an arbitrary degree of richness, using a system that is both flexible and rigorous. We describe how the use of NeXML by the TreeBASE and Phenoscape projects satisfies user needs that cannot be satisfied with other available file formats. By relying on XML Schema Definition, the design of NeXML facilitates the development and deployment of software for processing, transforming, and querying documents. The adoption of NeXML for practical use is facilitated by the availability of (1) an online manual with code samples and a reference to all defined elements and attributes, (2) programming toolkits in most of the languages used commonly in evolutionary informatics, and (3) input-output support in several widely used software applications. An active, open, community-based development process enables future revision and expansion of NeXML.

  2. Application of XML in DICOM

    NASA Astrophysics Data System (ADS)

    You, Xiaozhen; Yao, Zhihong

    2005-04-01

    As a standard for the communication and storage of medical digital images, DICOM has been playing a very important role in the integration of hospital information. In DICOM, tags are expressed by numbers, and only standard data elements can be shared by looking them up in the Data Dictionary, while private tags cannot. As such, a DICOM file's readability and extensibility are limited. In addition, reading DICOM files needs special software. In our research, we introduced XML into DICOM, defining an XML-based DICOM special transfer format, XML-DCM, and a DICOM storage format, X-DCM, as well as developing a program package to realize format interchange among DICOM, XML-DCM, and X-DCM. XML-DCM is based on the DICOM structure while replacing numeric tags with accessible XML character string tags. The merits are as follows: a) every character string tag of XML-DCM has an explicit meaning, so users can understand standard data elements and private data elements easily without looking up the Data Dictionary; in this way, the readability and data sharing of DICOM files are greatly improved; b) according to requirements, users can set new character string tags with explicit meaning in their own system to extend the capacity of data elements; c) users can read the medical image and associated information conveniently through IE, ultimately enlarging the scope of data sharing. The application of the storage format X-DCM will reduce data redundancy and save storage memory. The result of practical application shows that XML-DCM favors the integration and sharing of medical image data among different systems or devices.

  3. NeXML: Rich, Extensible, and Verifiable Representation of Comparative Data and Metadata

    PubMed Central

    Vos, Rutger A.; Balhoff, James P.; Caravas, Jason A.; Holder, Mark T.; Lapp, Hilmar; Maddison, Wayne P.; Midford, Peter E.; Priyam, Anurag; Sukumaran, Jeet; Xia, Xuhua; Stoltzfus, Arlin

    2012-01-01

    Abstract In scientific research, integration and synthesis require a common understanding of where data come from, how much they can be trusted, and what they may be used for. To make such an understanding computer-accessible requires standards for exchanging richly annotated data. The challenges of conveying reusable data are particularly acute in regard to evolutionary comparative analysis, which comprises an ever-expanding list of data types, methods, research aims, and subdisciplines. To facilitate interoperability in evolutionary comparative analysis, we present NeXML, an XML standard (inspired by the current standard, NEXUS) that supports exchange of richly annotated comparative data. NeXML defines syntax for operational taxonomic units, character-state matrices, and phylogenetic trees and networks. Documents can be validated unambiguously. Importantly, any data element can be annotated, to an arbitrary degree of richness, using a system that is both flexible and rigorous. We describe how the use of NeXML by the TreeBASE and Phenoscape projects satisfies user needs that cannot be satisfied with other available file formats. By relying on XML Schema Definition, the design of NeXML facilitates the development and deployment of software for processing, transforming, and querying documents. The adoption of NeXML for practical use is facilitated by the availability of (1) an online manual with code samples and a reference to all defined elements and attributes, (2) programming toolkits in most of the languages used commonly in evolutionary informatics, and (3) input–output support in several widely used software applications. An active, open, community-based development process enables future revision and expansion of NeXML. PMID:22357728

  4. An XML-Based Mission Command Language for Autonomous Underwater Vehicles (AUVs)

    DTIC Science & Technology

    2003-06-01

    P. XML: How To Program. Prentice Hall, Inc., Upper Saddle River, New Jersey, 2001. Digital Signature Activity Statement, W3C, www.w3.org/Signature...languages because it does not directly specify how information is to be presented, but rather defines the structure (and thus semantics) of the...command and control (C2) aspects of using XML to increase the utility of AUVs. XML programming will be addressed. Current mine warfare doctrine will be

  5. A comparison of database systems for XML-type data.

    PubMed

    Risse, Judith E; Leunissen, Jack A M

    2010-01-01

    In the field of bioinformatics, interchangeable data formats based on XML are widely used. XML-type data is also at the core of most web services. With the increasing amount of data stored in XML comes the need for storing and accessing the data. In this paper we analyse the suitability of different database systems for storing and querying large datasets in general and Medline in particular. All reviewed database systems perform well when tested with small to medium sized datasets; however, when the full Medline dataset is queried, a large variation in query times is observed. There is not one system that is vastly superior to the others in this comparison and, depending on the database size and the query requirements, different systems are most suitable. The best all-round solution is the Oracle 11g database system using the new binary storage option. Alias-i's Lingpipe is a more lightweight, customizable and sufficiently fast solution. It does however require more initial configuration steps. For data with a changing XML structure, Sedna and BaseX as native XML database systems, or MySQL with an XML-type column, are suitable.

  6. XML Flight/Ground Data Dictionary Management

    NASA Technical Reports Server (NTRS)

    Wright, Jesse; Wiklow, Colette

    2007-01-01

    A computer program generates Extensible Markup Language (XML) files that effect coupling between the command- and telemetry-handling software running aboard a spacecraft and the corresponding software running in ground support systems. The XML files are produced by use of information from the flight software and from flight-system engineering. The XML files are converted to legacy ground-system data formats for command and telemetry, transformed into Web-based and printed documentation, and used in developing new ground-system data-handling software. Previously, the information about telemetry and command was scattered in various paper documents that were not synchronized. The process of searching and reading the documents was time-consuming and introduced errors. In contrast, the XML files contain all of the information in one place. XML structures can evolve in such a manner as to enable the addition, to the XML files, of the metadata necessary to track the changes and the associated documentation. The use of this software has reduced the extent of manual operations in developing a ground data system, thereby saving considerable time and removing errors that previously arose in the translation and transcription of software information from the flight to the ground system.

  7. Using XML and XSLT for flexible elicitation of mental-health risk knowledge.

    PubMed

    Buckingham, C D; Ahmed, A; Adams, A E

    2007-03-01

    Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented by a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the numbers of experts associated with each part of it. The knowledge was refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Changing knowledge elicitation requirements were accommodated by flexible transformations of XML data using XSLT, which also facilitated generation of multiple data-gathering tools suiting different assessment circumstances and levels of mental-health knowledge.
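
    A minimal sketch of the rendering mechanism described above (transforming an XML knowledge fragment with an XSLT stylesheet), using lxml; the cue structure and stylesheet are toy stand-ins, not the study's actual knowledge base.

```python
from lxml import etree

xml_doc = etree.fromstring("<cues><cue experts='12'>hopelessness</cue></cues>")
xslt_doc = etree.fromstring("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/cues"><ul><xsl:apply-templates/></ul></xsl:template>
  <xsl:template match="cue">
    <li><xsl:value-of select="."/> (<xsl:value-of select="@experts"/> experts)</li>
  </xsl:template>
</xsl:stylesheet>""")

transform = etree.XSLT(xslt_doc)
print(str(transform(xml_doc)))   # renders the cue list as HTML
```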

  8. OSCAR/Surface: Metadata for the WMO Integrated Observing System WIGOS

    NASA Astrophysics Data System (ADS)

    Klausen, Jörg; Pröscholdt, Timo; Mannes, Jürg; Cappelletti, Lucia; Grüter, Estelle; Calpini, Bertrand; Zhang, Wenjian

    2016-04-01

    The World Meteorological Organization (WMO) Integrated Global Observing System (WIGOS) is a key WMO priority underpinning all WMO Programs and new initiatives such as the Global Framework for Climate Services (GFCS). It does this by better integrating WMO and co-sponsored observing systems, as well as partner networks. For this, an important aspect is the description of the observational capabilities by way of structured metadata. The 17th Congress of the World Meteorological Organization (Cg-17) has endorsed the semantic WIGOS metadata standard (WMDS) developed by the Task Team on WIGOS Metadata (TT-WMD). The standard comprises a set of metadata classes that are considered to be of critical importance for the interpretation of observations and the evolution of observing systems relevant to WIGOS. The WMDS serves all recognized WMO Application Areas, and its use for all internationally exchanged observational data generated by WMO Members is mandatory. The standard will be introduced in three phases between 2016 and 2020. The Observing Systems Capability Analysis and Review (OSCAR) platform operated by MeteoSwiss on behalf of WMO is the official repository of WIGOS metadata and an implementation of the WMDS. OSCAR/Surface deals with all surface-based observations from land, air and oceans, combining metadata managed by a number of complementary, more domain-specific systems (e.g., GAWSIS for the Global Atmosphere Watch, JCOMMOPS for the marine domain, the WMO Radar database). It is a modern, web-based client-server application with extended information search, filtering and mapping capabilities including a fully developed management console to add and edit observational metadata. In addition, a powerful application programming interface (API) is being developed to allow machine-to-machine metadata exchange. The API is based on an ISO/OGC-compliant XML schema for the WMDS using the Observations and Measurements (ISO19156) conceptual model. The purpose of the presentation is to acquaint the audience with OSCAR, the WMDS and the current XML schema, and to explore the relationship to the INSPIRE XML schema. Feedback from experts in the various disciplines of meteorology, climatology, atmospheric chemistry, and hydrology on the utility of the new standard and the XML schema will be solicited and will guide WMO in further evolving the WMDS.

  9. "The Wonder Years" of XML.

    ERIC Educational Resources Information Center

    Gazan, Rich

    2000-01-01

    Surveys the current state of Extensible Markup Language (XML), a metalanguage for creating structured documents that describe their own content, and its implications for information professionals. Predicts that XML will become the common language underlying Web, word processing, and database formats. Also discusses Extensible Stylesheet Language…

  10. An XML-Based Knowledge Management System of Port Information for U.S. Coast Guard Cutters

    DTIC Science & Technology

    2003-03-01

    using DTDs was not chosen. XML Schema performs many of the same functions as SQL-type schemas, but differs due to the unique structure of XML documents...to access data from content files within the developed system. XPath is not equivalent to SQL. While XPath is very powerful at reaching into an XML...document and finding nodes or node sets, it is not a complete query language. For operations like joins, unions, intersections, etc., SQL is far

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kopeliovich, B. Z.; Institut fuer Theoretische Physik der Universitaet, Philosophenweg 19, D-69120 Heidelberg; Potashnikova, I. K.

    Two novel QCD effects, double-color filtering and mutual boosting of the saturation scales in colliding nuclei, affect the transparency of the nuclei for quark dipoles in comparison with proton-nucleus collisions. The former effect increases the survival probability of the dipoles, since color filtering in one nucleus makes the other one more transparent. The second effect acts in the opposite direction and is stronger; it makes the colliding nuclei more opaque than in the case of pA collisions. As a result of parton saturation in nuclei the effective scale is shifted upward, which leads to an increase of the gluon density at small x. This in turn leads to a stronger transverse momentum broadening in AA compared with pA collisions, i.e., to an additional growth of the saturation momentum. Such a mutual boosting leads to a system of reciprocity equations, which result in a saturation scale a few times higher in AA than in pA collisions at the energies of the Large Hadron Collider (LHC). Since the dipole cross section is proportional to the saturation momentum squared, the nuclei become much more opaque for dipoles in AA than in pA collisions. For the same reason gluon shadowing turns out to be boosted to a larger magnitude compared with the product of the gluon shadowing factors in each of the colliding nuclei. All these effects make it more difficult to establish a baseline for anomalous J/ψ suppression in heavy ion collisions at high energies.

  12. Simultaneous integrated boost (SIB) radiation therapy of right sided breast cancer with and without flattening filter - A treatment planning study.

    PubMed

    Maier, Johannes; Knott, Bernadette; Maerz, Manuel; Loeschel, Rainer; Koelbl, Oliver; Dobler, Barbara

    2016-08-31

    The aim of the study was to compare the two irradiation modes, with flattening filter (FF) and without flattening filter (FFF), for three different treatment techniques for simultaneous integrated boost radiation therapy of patients with right sided breast cancer. An Elekta Synergy linac with Agility collimating device is used to simulate the treatment of 10 patients. Six plans were generated in Monaco 5.0 for each patient treating the whole breast and a simultaneous integrated boost (SIB) volume: intensity modulated radiation therapy (IMRT), volumetric modulated arc therapy (VMAT) and a tangential arc VMAT (tVMAT), each with and without flattening filter. Plan quality was assessed considering target coverage, sparing of the contralateral breast, the lungs, the heart and the normal tissue. All plans were verified by a 2D-ionisation-chamber-array and delivery times were measured and compared. The Wilcoxon test was used for statistical analysis with a significance level of 0.05. Significantly best target coverage and homogeneity was achieved using VMAT FFF with V95% = (98.7 ± 0.8) % and HI = (8.2 ± 0.9) % for the SIB and V95% = (98.3 ± 0.7) % for the PTV, whereas tVMAT showed significantly lowest doses to the contralateral organs at risk with a Dmean of (0.7 ± 0.1) Gy for the contralateral lung, (1.0 ± 0.2) Gy for the contralateral breast and (1.4 ± 0.2) Gy for the heart. All plans passed the gamma evaluation with a mean passing rate of (99.2 ± 0.8) %. Delivery times were significantly reduced for VMAT and tVMAT but increased for IMRT, when FFF was used. Lowest delivery times were observed for tVMAT FFF with (1:20 ± 0:07) min. Balancing target coverage, OAR sparing and delivery time, VMAT FFF and tVMAT FFF are considered the preferable treatment options among those investigated for simultaneous integrated boost irradiation of right sided breast cancer with the combination of an Elekta Synergy linac with Agility and the treatment planning system Monaco 5.0.

  13. Modeling the Arden Syntax for medical decisions in XML.

    PubMed

    Kim, Sukil; Haug, Peter J; Rocha, Roberto A; Choi, Inyoung

    2008-10-01

    A new model expressing Arden Syntax with the eXtensible Markup Language (XML) was developed to increase its portability. Every example was manually parsed and reviewed until the schema and the style sheet were considered to be optimized. When the first schema was finished, several MLMs in Arden Syntax Markup Language (ArdenML) were validated against the schema. They were then transformed to HTML formats with the style sheet, during which they were compared to the original text version of their own MLM. When faults were found in the transformed MLM, the schema and/or style sheet was fixed. This cycle continued until all the examples were encoded into XML documents. The original MLMs were encoded in XML according to the proposed XML schema and reverse-parsed MLMs in ArdenML were checked using a public domain Arden Syntax checker. Two hundred seventy-seven examples of MLMs were successfully transformed into XML documents using the model, and the reverse-parse yielded the original text version of the MLMs. Two hundred sixty-five of the 277 MLMs showed the same error patterns before and after transformation, and all 11 errors related to statement structure were resolved in the XML version. The model uses two syntax checking mechanisms, first an XML validation process, and second, a syntax check using an XSL style sheet. Now that we have a schema for ArdenML, we can also begin the development of style sheets for transforming ArdenML into other languages.
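
    As a hedged sketch of the first checking mechanism (validating an XML document against an XSD schema), using lxml with a toy schema standing in for the ArdenML XSD:

```python
from lxml import etree

# Toy schema, illustrative only; the real ArdenML XSD is far richer
xsd = etree.XMLSchema(etree.fromstring("""
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="mlm">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""))

good = etree.fromstring("<mlm><title>Penicillin allergy alert</title></mlm>")
bad = etree.fromstring("<mlm><unknown/></mlm>")
print(xsd.validate(good))   # True
print(xsd.validate(bad))    # False; details are in xsd.error_log
```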

  14. A Novel Navigation Paradigm for XML Repositories.

    ERIC Educational Resources Information Center

    Azagury, Alain; Factor, Michael E.; Maarek, Yoelle S.; Mandler, Benny

    2002-01-01

    Discusses data exchange over the Internet and describes the architecture and implementation of an XML document repository that promotes a navigation paradigm for XML documents based on content and context. Topics include information retrieval and semistructured documents; and file systems as information storage infrastructure, particularly XMLFS.…

  15. An Expressive and Efficient Language for XML Information Retrieval.

    ERIC Educational Resources Information Center

    Chinenyanga, Taurai Tapiwa; Kushmerick, Nicholas

    2002-01-01

    Discusses XML and information retrieval and describes a query language, ELIXIR (expressive and efficient language for XML information retrieval), with a textual similarity operator that can be used for similarity joins. Explains the algorithm for answering ELIXIR queries to generate intermediate relational data. (Author/LRW)

  16. XML Content Finally Arrives on the Web!

    ERIC Educational Resources Information Center

    Funke, Susan

    1998-01-01

    Explains extensible markup language (XML) and how it differs from hypertext markup language (HTML) and standard generalized markup language (SGML). Highlights include features of XML, including better formatting of documents, better searching capabilities, multiple uses for hyperlinking, and an increase in Web applications; Web browsers; and what…

  17. How Does XML Help Libraries?

    ERIC Educational Resources Information Center

    Banerjee, Kyle

    2002-01-01

    Discusses XML, how it has transformed the way information is managed and delivered, and its impact on libraries. Topics include how XML differs from other markup languages; the document object model (DOM); style sheets; practical applications for archival materials, interlibrary loans, digital collections, and MARC data; and future possibilities.…

  18. The Essen Learning Model--A Step towards a Representation of Learning Objectives.

    ERIC Educational Resources Information Center

    Bick, Markus; Pawlowski, Jan M.; Veith, Patrick

    The importance of the Extensible Markup Language (XML) technology family in the field of Computer Assisted Learning (CAL) cannot be denied. The Instructional Management Systems Project (IMS), for example, provides a learning resource XML binding specification. Considering this specification and other implementations using XML to represent…

  19. Get It Together: Integrating Data with XML.

    ERIC Educational Resources Information Center

    Miller, Ron

    2003-01-01

    Discusses the use of XML for data integration to move data across different platforms, including across the Internet, from a variety of sources. Topics include flexibility; standards; organizing databases; unstructured data and the use of meta tags to encode it with XML information; cost effectiveness; and eliminating client software licenses.…

  20. XML Schema Languages: Beyond DTD.

    ERIC Educational Resources Information Center

    Ioannides, Demetrios

    2000-01-01

    Discussion of XML (extensible markup language) and the traditional DTD (document type definition) format focuses on efforts of the World Wide Web Consortium's XML schema working group to develop a schema language to replace DTD that will be capable of defining the set of constraints of any possible data resource. (Contains 14 references.) (LRW)

  1. XML: A Publisher's Perspective.

    ERIC Educational Resources Information Center

    Andrews, Timothy M.

    1999-01-01

    Explains eXtensible Markup Language (XML) and describes how Dow Jones Interactive is using it to improve the news-gathering and dissemination process through intranets and the World Wide Web. Discusses benefits of using XML, the relationship to HyperText Markup Language (HTML), lack of available software tools and industry support, and future…

  2. jmzML, an open-source Java API for mzML, the PSI standard for MS data.

    PubMed

    Côté, Richard G; Reisinger, Florian; Martens, Lennart

    2010-04-01

    We here present jmzML, a Java API for the Proteomics Standards Initiative mzML data standard. Based on the Java Architecture for XML Binding and an XPath-based XML indexer for random-access parsing, jmzML can handle arbitrarily large files in minimal memory, allowing easy and efficient processing of mzML files using the Java programming language. jmzML also automatically resolves internal XML references on-the-fly. The library (which includes a viewer) can be downloaded from http://jmzml.googlecode.com.

  3. The Cadmio XML healthcare record.

    PubMed

    Barbera, Francesco; Ferri, Fernando; Ricci, Fabrizio L; Sottile, Pier Angelo

    2002-01-01

    The management of clinical data is a complex task. Patient-related information reported in patient folders is a set of heterogeneous and structured data accessed by different users having different goals (in local or geographical networks). The XML language provides a mechanism for describing, manipulating, and visualising structured data in web-based applications. XML ensures that the structured data is managed in a uniform and transparent manner, independently of the applications and their providers, guaranteeing a degree of interoperability. Extracting data from the healthcare record and structuring them according to XML makes the data available through browsers. This paper describes the MIC/MIE model (Medical Information Category/Medical Information Elements), which allows the definition and management of healthcare records and is used in CADMIO, a HISA-based project, using XML to allow the data to be visualised through web browsers.

  4. Advanced Math: Closing the Equity Gap. Math Works

    ERIC Educational Resources Information Center

    Achieve, Inc., 2013

    2013-01-01

    Minority and low-income students are less likely to have access to, enroll in and succeed in higher-level math courses in high school than their more advantaged peers. Under these circumstances, higher-level math courses function not as the intellectual and practical boost they should be, but as a filter that screens students out of the pathway to…

  5. A boosted optimal linear learner for retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Poletti, E.; Grisan, E.

    2014-03-01

    Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and evaluation of retinopathy. At variance with available methods, we propose a data-driven approach, in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an AdaBoost sample-weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the vessel appearance changes at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to provide a classification of each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images available from the DRIVE dataset. We show that the segmentation performance yields an accuracy of 0.94.
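
    As a rough illustration of the classification stage only: assuming per-pixel responses of a matched-filter bank are already available as a feature matrix, a boosted ensemble of weak learners can be trained on them. The paper estimates its own optimal convolution kernels inside the boosting loop; a stock AdaBoost classifier over decision stumps merely stands in for that here, and the data below are synthetic placeholders.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(5000, 12))                  # placeholder filter-bank responses per pixel
        y = (X[:, :4].sum(axis=1) > 0).astype(int)       # placeholder vessel/background labels

        # The default base learner is a depth-1 decision tree (a stump); boosting
        # reweights the training samples after each round, as in the AdaBoost scheme.
        clf = AdaBoostClassifier(n_estimators=50).fit(X, y)
        print("training accuracy:", accuracy_score(y, clf.predict(X)))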

  6. Representing nested semantic information in a linear string of text using XML.

    PubMed

    Krauthammer, Michael; Johnson, Stephen B; Hripcsak, George; Campbell, David A; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text mark up. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information.

  7. Representing nested semantic information in a linear string of text using XML.

    PubMed Central

    Krauthammer, Michael; Johnson, Stephen B.; Hripcsak, George; Campbell, David A.; Friedman, Carol

    2002-01-01

    XML has been widely adopted as an important data interchange language. The structure of XML enables sharing of data elements with variable degrees of nesting as long as the elements are grouped in a strict tree-like fashion. This requirement potentially restricts the usefulness of XML for marking up written text, which often includes features that do not properly nest within other features. We encountered this problem while marking up medical text with structured semantic information from a Natural Language Processor. Traditional approaches to this problem separate the structured information from the actual text mark up. This paper introduces an alternative solution, which tightly integrates the semantic structure with the text. The resulting XML markup preserves the linearity of the medical texts and can therefore be easily expanded with additional types of information. PMID:12463856

  8. The carbohydrate sequence markup language (CabosML): an XML description of carbohydrate structures.

    PubMed

    Kikuchi, Norihiro; Kameyama, Akihiko; Nakaya, Shuuichi; Ito, Hiromi; Sato, Takashi; Shikanai, Toshihide; Takahashi, Yoriko; Narimatsu, Hisashi

    2005-04-15

    Bioinformatics resources for glycomics are very poor as compared with those for genomics and proteomics. The complexity of carbohydrate sequences makes it difficult to define a common language to represent them, and the development of bioinformatics tools for glycomics has not progressed. In this study, we developed a carbohydrate sequence markup language (CabosML), an XML description of carbohydrate structures. The language definition (XML Schema) and an experimental database of carbohydrate structures using an XML database management system are available at http://www.phoenix.hydra.mki.co.jp/CabosDemo.html kikuchi@hydra.mki.co.jp.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurst, Aaron M.

    A data structure based on an eXtensible Markup Language (XML) hierarchy according to experimental nuclear structure data in the Evaluated Nuclear Structure Data File (ENSDF) is presented. A Python-coded translator has been developed to interpret the standard one-card records of the ENSDF datasets, together with their associated quantities defined according to field position, and generate corresponding representative XML output. The quantities belonging to this mixed-record format are described in the ENSDF manual. Of the 16 ENSDF records in total, XML output has been successfully generated for 15 records. An XML translation for the Comment Record is yet to be implemented; this will be considered in a separate phase of the overall translation effort. Continuation records, not yet implemented, will also be treated in a future phase of this work. Several examples are presented in this document to illustrate the XML schema and methods for handling the various ENSDF data types. However, the proposed nomenclature for the XML elements and attributes need not necessarily be considered as a fixed set of constructs. Indeed, better conventions may be suggested and a consensus can be achieved amongst the various groups of people interested in this project. The main purpose here is to present an initial phase of the translation effort to demonstrate the feasibility of interpreting ENSDF datasets and creating a representative XML-structured hierarchy for data storage.
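
    To make the translation idea concrete, here is an illustrative sketch of slicing a fixed-width card by field position and emitting an XML element. The column layout used below (nuclide identifier, record type, remaining payload) is a simplified stand-in, not the actual ENSDF field definition, and the sample card is made up.

        import xml.etree.ElementTree as ET

        def card_to_xml(card: str) -> ET.Element:
            """Turn one fixed-width card into an XML element by field position."""
            elem = ET.Element("record")
            elem.set("nucid", card[0:5].strip())     # assumed nuclide-id columns
            elem.set("type", card[5:9].strip())      # assumed record-type columns
            ET.SubElement(elem, "payload").text = card[9:].rstrip()
            return elem

        card = "152GD  L   344.2789   "              # made-up example card
        print(ET.tostring(card_to_xml(card), encoding="unicode"))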

  10. Partial Automation of Requirements Tracing

    NASA Technical Reports Server (NTRS)

    Hayes, Jane; Dekhtyar, Alex; Sundaram, Senthil; Vadlamudi, Sravanthi

    2006-01-01

    Requirements Tracing on Target (RETRO) is software for after-the-fact tracing of textual requirements to support independent verification and validation of software. RETRO applies one of three user-selectable information-retrieval techniques: (1) term frequency/inverse document frequency (TF/IDF) vector retrieval, (2) TF/IDF vector retrieval with simple thesaurus, or (3) keyword extraction. One component of RETRO is the graphical user interface (GUI) for use in initiating a requirements-tracing project (a pair of artifacts to be traced to each other, such as a requirements spec and a design spec). Once the artifacts have been specified and the IR technique chosen, another component constructs a representation of the artifact elements and stores it on disk. Next, the IR technique is used to produce a first list of candidate links (potential matches between the two artifact levels). This list, encoded in Extensible Markup Language (XML), is optionally processed by a filtering component designed to make the list somewhat smaller without sacrificing accuracy. Through the GUI, the user examines a number of links and returns decisions (yes, these are links; no, these are not links). Coded in XML, these decisions are provided to a "feedback processor" component that prepares the data for the next application of the IR technique. The feedback reduces the incidence of erroneous candidate links. Unlike related prior software, RETRO does not require the user to assign keywords, and automatically builds a document index.
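
    A minimal sketch (not the RETRO code base) of the TF/IDF candidate-link step described above: elements of the two artifact levels are vectorized, cosine similarities are computed, and pairs above a filtering threshold are kept as candidate links. The requirement and design strings and the threshold value are invented for illustration.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        requirements = ["The system shall log every login attempt.",
                        "The system shall encrypt stored passwords."]
        design = ["The audit module records authentication events.",
                  "Password hashing uses a salted one-way function."]

        vec = TfidfVectorizer()
        matrix = vec.fit_transform(requirements + design)
        sims = cosine_similarity(matrix[:len(requirements)], matrix[len(requirements):])

        THRESHOLD = 0.05                    # filtering step: drop weak candidates
        for i, row in enumerate(sims):
            for j, score in enumerate(row):
                if score >= THRESHOLD:
                    print(f"requirement {i} -> design element {j}  (score {score:.2f})")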

  11. Integrated Syntactic/Semantic XML Data Validation with a Reusable Software Component

    ERIC Educational Resources Information Center

    Golikov, Steven

    2013-01-01

    Data integration is a critical component of enterprise system integration, and XML data validation is the foundation for sound data integration of XML-based information systems. Since B2B e-commerce relies on data validation as one of the critical components for enterprise integration, it is imperative for financial industries and e-commerce…

  12. Adding XML to the MIS Curriculum: Lessons from the Classroom

    ERIC Educational Resources Information Center

    Wagner, William P.; Pant, Vik; Hilken, Ralph

    2008-01-01

    eXtensible Markup Language (XML) is a new technology that is currently being extolled by many industry experts and software vendors. Potentially it represents a platform independent language for sharing information over networks in a way that is much more seamless than with previous technologies. It is extensible in that XML serves as a "meta"…

  13. Data Manipulation in an XML-Based Digital Image Library

    ERIC Educational Resources Information Center

    Chang, Naicheng

    2005-01-01

    Purpose: To help to clarify the role of XML tools and standards in supporting transition and migration towards a fully XML-based environment for managing access to information. Design/methodology/approach: The Ching Digital Image Library, built on a three-tier architecture, is used as a source of examples to illustrate a number of methods of data…

  14. Adaptive Hypermedia Educational System Based on XML Technologies.

    ERIC Educational Resources Information Center

    Baek, Yeongtae; Wang, Changjong; Lee, Sehoon

    This paper proposes an adaptive hypermedia educational system using XML technologies, such as XML, XSL, XSLT, and XLink. Adaptive systems are capable of altering the presentation of the content of the hypermedia on the basis of a dynamic understanding of the individual user. The user profile can be collected in a user model, while the knowledge…

  15. EquiX-A Search and Query Language for XML.

    ERIC Educational Resources Information Center

    Cohen, Sara; Kanza, Yaron; Kogan, Yakov; Sagiv, Yehoshua; Nutt, Werner; Serebrenik, Alexander

    2002-01-01

    Describes EquiX, a search language for XML that combines querying with searching to query the data and the meta-data content of Web pages. Topics include search engines; a data model for XML documents; search query syntax; search query semantics; an algorithm for evaluating a query on a document; and indexing EquiX queries. (LRW)

  16. TME2/342: The Role of the EXtensible Markup Language (XML) for Future Healthcare Application Development

    PubMed Central

    Noelle, G; Dudeck, J

    1999-01-01

    Two years after the World Wide Web Consortium (W3C) published the first specification of the eXtensible Markup Language (XML), there already exist concrete tools and applications for working with XML-based data. In particular, new-generation Web browsers offer great opportunities to develop new kinds of medical, web-based applications. Several data-exchange formats have been established in medicine in recent years: HL-7, DICOM, EDIFACT and, in the case of Germany, xDT. Whereas communication and information exchange become increasingly important, the development of appropriate and necessary interfaces causes problems and raises costs and effort. It has also been recognised that it is difficult to define a standardised interchange format for one of the major future developments in medical telematics: the electronic patient record (EPR) and its availability on the Internet. Whereas XML, especially in an industrial environment, is celebrated as a generic standard and a solution for all problems concerning e-commerce, in a medical context only a few applications have been developed. Nevertheless, the medical environment is an appropriate area for building XML applications: as information and communication management becomes increasingly important in medical businesses, the role of the Internet changes quickly from an information to a communication medium. The first XML-based applications in healthcare show the advantages of a future engagement of the healthcare industry in XML: such applications are open, easy to extend and cost-effective. Additionally, XML is much more than a simple new data interchange format: many proposals for data query (XQL), data presentation (XSL) and other extensions have been proposed to the W3C and partly realised in medical applications.

  17. XML-based approaches for the integration of heterogeneous bio-molecular data.

    PubMed

    Mesiti, Marco; Jiménez-Ruiz, Ernesto; Sanz, Ismael; Berlanga-Llavori, Rafael; Perlasca, Paolo; Valentini, Giorgio; Manset, David

    2009-10-15

    Today's public database infrastructure spans a very large collection of heterogeneous biological data, opening new opportunities for molecular biology, bio-medical and bioinformatics research, but also raising new problems for their integration and computational processing. In this paper we survey the most interesting and novel approaches for the representation, integration and management of different kinds of biological data by exploiting XML and the related recommendations and approaches. Moreover, we present new and interesting cutting-edge approaches for the appropriate management of heterogeneous biological data represented through XML. XML has succeeded in the integration of heterogeneous biomolecular information, and has established itself as the syntactic glue for biological data sources. Nevertheless, a large variety of XML-based data formats have been proposed, making effective integration of bioinformatics data schemes difficult. The adoption of a few semantically rich standard formats is urgent to achieve a seamless integration of the current biological resources.

  18. Transformerless dc-Isolated Converter

    NASA Technical Reports Server (NTRS)

    Rippel, Wally E.

    1987-01-01

    Efficient voltage converter employs capacitive instead of transformer coupling to provide dc isolation. Offers buck/boost operation, minimal filtering, and low parts count, with possible applications in photovoltaic power inverters, power supplies, and battery chargers. In the photovoltaic inverter circuit with the transformerless converter, Q2, Q3, Q4, and Q5 form a line-commutated inverter. Switching losses and stresses are negligible because switching is performed when the current is zero.

  19. A Practical Introduction to the XML, Extensible Markup Language, by Way of Some Useful Examples

    ERIC Educational Resources Information Center

    Snyder, Robin

    2004-01-01

    XML, Extensible Markup Language, is important as a way to represent and encapsulate the structure of underlying data in a portable way that supports data exchange regardless of the physical storage of the data. This paper (and session) introduces some useful and practical aspects of XML technology for sharing information in an educational setting…

  20. Alternatives to relational database: comparison of NoSQL and XML approaches for clinical data storage.

    PubMed

    Lee, Ken Ka-Yin; Tang, Wai-Choi; Choi, Kup-Sze

    2013-04-01

    Clinical data are dynamic in nature, often arranged hierarchically and stored as free text and numbers. Effective management of clinical data and the transformation of the data into structured format for data analysis are therefore challenging issues in electronic health records development. Despite the popularity of relational databases, the scalability of the NoSQL database model and the document-centric data structure of XML databases appear to be promising features for effective clinical data management. In this paper, three database approaches--NoSQL, XML-enabled and native XML--are investigated to evaluate their suitability for structured clinical data. The database query performance is reported, together with our experience in the databases development. The results show that NoSQL database is the best choice for query speed, whereas XML databases are advantageous in terms of scalability, flexibility and extensibility, which are essential to cope with the characteristics of clinical data. While NoSQL and XML technologies are relatively new compared to the conventional relational database, both of them demonstrate potential to become a key database technology for clinical data management as the technology further advances. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
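
    To illustrate the data shapes being compared, here is a small sketch showing the same hierarchical clinical record held as an XML fragment (queryable by path) and as a nested document of the kind a NoSQL document store would hold; the element names and values are invented and are not the study's schema.

        import xml.etree.ElementTree as ET

        xml_record = ET.fromstring(
            "<patient id='p1'>"
            "  <encounter date='2012-03-01'>"
            "    <observation code='BP'>120/80</observation>"
            "  </encounter>"
            "</patient>")
        # Path-based access into the hierarchical XML record.
        print(xml_record.find("./encounter/observation[@code='BP']").text)

        # The same record as a nested document, as a NoSQL document store would keep it.
        nosql_record = {"_id": "p1",
                        "encounters": [{"date": "2012-03-01",
                                        "observations": [{"code": "BP", "value": "120/80"}]}]}
        print(nosql_record["encounters"][0]["observations"][0]["value"])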

  1. The future application of GML database in GIS

    NASA Astrophysics Data System (ADS)

    Deng, Yuejin; Cheng, Yushu; Jing, Lianwen

    2006-10-01

    In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. More and more applications in geospatial data sharing and interoperability now depend on GML. The primary purpose of designing GML is the exchange and transport of geo-information through standard modeling and encoding of geographic phenomena. However, the problems of how to organize and access large amounts of GML data effectively arise in applications. Research on GML databases focuses on these problems. The effective storage of GML data is a hot topic in GIS communities today. A GML Database Management System (GDBMS) mainly deals with the problem of storage and management of GML data. Two types of XML database are commonly distinguished, namely native XML databases and XML-enabled databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, and management systems, and then move on to GML databases. Finally, the future prospects of GML databases in GIS applications are presented.

  2. An adaptable XML based approach for scientific data management and integration

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-03-01

    Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform supporting scientific data management and integration, built on a distributed architecture with a central server, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to model complex scientific data by representing them as XML documents. The documents capture not only hierarchical structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make it convenient for quick browsing. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so it is possible to quickly adapt the system to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort provides a flexible and powerful platform for sharing scientific data among scientific research communities, and it has been successfully used in both biomedical research and clinical trials.

  3. Querying archetype-based EHRs by search ontology-based XPath engineering.

    PubMed

    Kropf, Stefan; Uciteli, Alexandr; Schierle, Katrin; Krücken, Peter; Denecke, Kerstin; Herre, Heinrich

    2018-05-11

    Legacy data and new structured data can be stored in a standardized format as XML-based EHRs in XML databases. Querying documents in these databases is crucial for answering research questions. Instead of using free-text searches, which lead to false positive results, precision can be increased by constraining the search to certain parts of documents. A search ontology-based specification of queries on XML documents defines search concepts and relates them to parts of the XML document structure. Such a query specification method is introduced in practice and evaluated by applying concrete research questions, formulated in natural language, to a data collection for information retrieval purposes. The search is performed by search ontology-based XPath engineering that reuses ontologies and XML-related W3C standards. The key result is that the specification of research questions can be supported by the use of search ontology-based XPath engineering. A deeper recognition of entities and a semantic understanding of the content are necessary for further improvement of precision and recall. A key limitation is that applying the introduced process requires skills in ontology and software development. In the future, the time-consuming ontology development could be overcome by implementing a new clinical role: the clinical ontologist. The introduced Search Ontology XML extension connects search terms to certain parts of XML documents and enables an ontology-based definition of queries. Search ontology-based XPath engineering can support research question answering by the specification of complex XPath expressions without deep syntax knowledge of XPath.
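
    The core idea, binding a named search concept to an XPath expression that points at a specific part of the document so the query runs over that part rather than over free text, can be sketched as follows. The element names and the concept-to-path mapping are invented for illustration and are not the archetype structure used in the paper.

        from lxml import etree

        ehr = etree.fromstring(
            "<composition>"
            "  <section name='diagnosis'><entry>chronic lymphocytic leukemia</entry></section>"
            "  <section name='medication'><entry>ibrutinib</entry></section>"
            "</composition>")

        # The heart of the approach: each search concept maps to an XPath expression.
        search_ontology = {
            "Diagnosis": "//section[@name='diagnosis']/entry/text()",
            "Medication": "//section[@name='medication']/entry/text()",
        }
        print(ehr.xpath(search_ontology["Diagnosis"]))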

  4. An Adaptable XML Based Approach for Scientific Data Management and Integration.

    PubMed

    Wang, Fusheng; Thiel, Florian; Furrer, Daniel; Vergara-Niedermayr, Cristobal; Qin, Chen; Hackenberg, Georg; Bourgue, Pierre-Emmanuel; Kaltschmidt, David; Wang, Mo

    2008-02-20

    Increased complexity of scientific research poses new challenges to scientific data management. Meanwhile, scientific collaboration is becoming increasingly important and relies on integrating and sharing data from distributed institutions. We develop SciPort, a Web-based platform supporting scientific data management and integration, built on a distributed architecture with a central server, where researchers can easily collect, publish, and share their complex scientific data across multiple institutions. SciPort provides an XML-based general approach to model complex scientific data by representing them as XML documents. The documents capture not only hierarchical structured data, but also images and raw data through references. In addition, SciPort provides an XML-based hierarchical organization of the overall data space to make it convenient for quick browsing. To provide generality, schemas and hierarchies are customizable with XML-based definitions, so it is possible to quickly adapt the system to different applications. While each institution can manage documents on a Local SciPort Server independently, selected documents can be published to a Central Server to form a global view of shared data across all sites. By storing documents in a native XML database, SciPort provides high schema extensibility and supports comprehensive queries through XQuery. By providing a unified and effective means for data modeling, data access and customization with XML, SciPort provides a flexible and powerful platform for sharing scientific data among scientific research communities, and it has been successfully used in both biomedical research and clinical trials.

  5. TMATS/ IHAL/ DDML Schema Validation

    DTIC Science & Technology

    2017-02-01

    The task was to create a method for performing IRIG eXtensible Markup Language (XML) schema validation for the TMATS, IHAL, and DDML (Data Display Markup Language) schemas, as opposed to XML instance document validation.

  6. 78 FR 28732 - Revisions to Electric Quarterly Report Filing Process; Availability of Draft XML Schema

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-16

    ... posting CSV file samples. Order No. 770 revised the process for filing EQRs. Pursuant to Order No. 770, one of the new processes for filing allows EQRs to be filed using an XML file. The XML schema that is needed to file EQRs in this manner is now posted on the Commission's Web site at http://www.ferc.gov/docs...

  7. Optimized Beam Sculpting with Generalized Fringe-rate Filters

    NASA Astrophysics Data System (ADS)

    Parsons, Aaron R.; Liu, Adrian; Ali, Zaki S.; Cheng, Carina

    2016-03-01

    We generalize the technique of fringe-rate filtering, whereby visibilities measured by a radio interferometer are re-weighted according to their temporal variation. As the Earth rotates, radio sources traverse through an interferometer’s fringe pattern at rates that depend on their position on the sky. Capitalizing on this geometric interpretation of fringe rates, we employ time-domain convolution kernels to enact fringe-rate filters that sculpt the effective primary beam of antennas in an interferometer. As we show, beam sculpting through fringe-rate filtering can be used to optimize measurements for a variety of applications, including mapmaking, minimizing polarization leakage, suppressing instrumental systematics, and enhancing the sensitivity of power-spectrum measurements. We show that fringe-rate filtering arises naturally in minimum variance treatments of many of these problems, enabling optimal visibility-based approaches to analyses of interferometric data that avoid systematics potentially introduced by traditional approaches such as imaging. Our techniques have recently been demonstrated in Ali et al., where new upper limits were placed on the 21 cm power spectrum from reionization, showcasing the ability of fringe-rate filtering to successfully boost sensitivity and reduce the impact of systematics in deep observations.

  8. Efficient and Scalable Graph Similarity Joins in MapReduce

    PubMed Central

    Chen, Yifan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning, and near duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135

  9. Efficient and scalable graph similarity joins in MapReduce.

    PubMed

    Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning, and near duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.
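
    A toy illustration of the filtering-verification pattern described above: cheap signatures prune graph pairs that cannot satisfy the edit-distance threshold, and only the surviving candidates reach the expensive verification step. The signature scheme, the count-filtering condition, and the verify() stand-in below are deliberate simplifications of the paper's graph signatures, spectral Bloom filters, and GED computation.

        from itertools import combinations

        graphs = {"g1": {("A", "B"), ("B", "C")},
                  "g2": {("A", "B"), ("B", "D")},
                  "g3": {("X", "Y"), ("Y", "Z")}}
        TAU = 1                                   # edit-distance threshold

        def verify(a, b):                         # stand-in for true graph edit distance
            return len(graphs[a] ^ graphs[b]) // 2

        candidates = []
        for a, b in combinations(graphs, 2):      # filtering: require enough signature overlap
            if len(graphs[a] & graphs[b]) >= len(graphs[a]) - TAU:
                candidates.append((a, b))

        results = [(a, b) for a, b in candidates if verify(a, b) <= TAU]
        print("candidates:", candidates)
        print("joined pairs:", results)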

  10. Labeling RDF Graphs for Linear Time and Space Querying

    NASA Astrophysics Data System (ADS)

    Furche, Tim; Weinzierl, Antonius; Bry, François

    Indices and data structures for web querying have mostly considered tree shaped data, reflecting the view of XML documents as tree-shaped. However, for RDF (and when querying ID/IDREF constraints in XML) data is indisputably graph-shaped. In this chapter, we first study existing indexing and labeling schemes for RDF and other graph datawith focus on support for efficient adjacency and reachability queries. For XML, labeling schemes are an important part of the widespread adoption of XML, in particular for mapping XML to existing (relational) database technology. However, the existing indexing and labeling schemes for RDF (and graph data in general) sacrifice one of the most attractive properties of XML labeling schemes, the constant time (and per-node space) test for adjacency (child) and reachability (descendant). In the second part, we introduce the first labeling scheme for RDF data that retains this property and thus achieves linear time and space processing of acyclic RDF queries on a significantly larger class of graphs than previous approaches (which are mostly limited to tree-shaped data). Finally, we show how this labeling scheme can be applied to (acyclic) SPARQL queries to obtain an evaluation algorithm with time and space complexity linear in the number of resources in the queried RDF graph.
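
    The constant-time reachability property that tree labeling schemes provide, and that the chapter seeks to retain for graph-shaped RDF, can be illustrated on a tree: one depth-first traversal assigns each node a (pre, post) interval, after which an ancestor/descendant test is just two integer comparisons. This is a generic textbook sketch, not the chapter's RDF labeling scheme.

        # Interval (pre/post-order) labeling of a small tree.
        tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": [], "a1": [], "a2": []}

        labels, counter = {}, [0]

        def dfs(node):
            pre = counter[0]; counter[0] += 1
            for child in tree[node]:
                dfs(child)
            labels[node] = (pre, counter[0]); counter[0] += 1

        dfs("root")

        def is_descendant(u, v):
            """True if v lies in the subtree rooted at u (constant-time interval test)."""
            return labels[u][0] <= labels[v][0] and labels[v][1] <= labels[u][1]

        print(is_descendant("root", "a2"))   # True
        print(is_descendant("a", "b"))       # False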

  11. SU-E-T-327: The Update of a XML Composing Tool for TrueBeam Developer Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Y; Mao, W; Jiang, S

    2014-06-01

    Purpose: To introduce a major upgrade of a novel XML beam composing tool to scientists and engineers who strive to translate certain capabilities of TrueBeam Developer Mode into future clinical benefits of radiation therapy. Methods: TrueBeam Developer Mode provides users with a test bed for unconventional plans utilizing certain unique features not accessible in the clinical mode. To access the full set of capabilities, an XML beam definition file accommodating all parameters, including kV/MV imaging triggers, can be locally loaded in this mode; however, it is difficult and laborious to compose one in a text editor. In this study, a stand-alone interactive XML beam composing application, TrueBeam TeachMod, was developed on Windows platforms to assist users in making their unique plans in a WYSIWYG manner. A conventional plan can be imported as a DICOM RT object as the start of the beam editing process, in which trajectories of all axes of a TrueBeam machine can be modified to the intended values at any control point. TeachMod also includes libraries of predefined imaging and treatment procedures to further expedite the process. Results: The TeachMod application is a major upgrade of the TeachMod module within DICOManTX. It fully supports TrueBeam 2.0. Trajectories of all axes, including all MLC leaves, can be graphically rendered and edited as needed. The time for XML beam composing has been reduced to a negligible amount regardless of the complexity of the plan. A good understanding of the XML language and the TrueBeam schema is not required, though preferred. Conclusion: Creating XML beams manually in a text editor would be a lengthy, error-prone process for sophisticated plans. An XML beam composing tool is highly desirable for R&D activities. It will bridge the gap between the scope of TrueBeam capabilities and their clinical application potential.

  12. James Webb Space Telescope XML Database: From the Beginning to Today

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Fatig, Curtis C.

    2005-01-01

    The James Webb Space Telescope (JWST) Project has been defining, developing, and exercising the use of a common eXtensible Markup Language (XML) for the command and telemetry (C&T) database structure. JWST is the first large NASA space mission to use XML for databases. The JWST project started developing the concepts for the C&T database in 2002. The database will need to last at least 20 years since it will be used beginning with flight software development, continuing through Observatory integration and test (I&T) and through operations. Also, a database tool kit has been provided to the 18 various flight software development laboratories located in the United States, Europe, and Canada that allows the local users to create their own databases. Recently the JWST Project has been working with the Jet Propulsion Laboratory (JPL) and Object Management Group (OMG) XML Telemetry and Command Exchange (XTCE) personnel to provide all the information needed by JWST and JPL for exchanging database information using an XML standard structure. The lack of standardization requires custom ingest scripts for each ground system segment, increasing the cost of the total system. Providing a non-proprietary standard for the telemetry and command database definition format will allow dissimilar systems to communicate without the need for expensive mission-specific database tools and testing of the systems after the database translation. The various ground system components that would benefit from a standardized database are the telemetry and command systems, archives, simulators, and trending tools. JWST has successfully exchanged the XML database with the Eclipse, EPOCH, and ASIST ground systems, the Portable Spacecraft Simulator (PSS), a front-end system, and the Integrated Trending and Plotting System (ITPS). This paper will discuss how JWST decided to use XML, the barriers to a new concept, experiences utilizing the XML structure, exchanging databases with other users, and issues that have been experienced in creating databases for the C&T system.

  13. Comparison of the filtering models for airborne LiDAR data by three classifiers with exploration on model transfer

    NASA Astrophysics Data System (ADS)

    Ma, Hongchao; Cai, Zhan; Zhang, Liang

    2018-01-01

    This paper discusses airborne light detection and ranging (LiDAR) point cloud filtering (a binary classification problem) from the machine learning point of view. We compared three supervised classifiers for point cloud filtering, namely Adaptive Boosting, support vector machine, and random forest (RF). Nineteen features were generated from the raw LiDAR point cloud based on height and other geometric information within a given neighborhood. The test datasets issued by the International Society for Photogrammetry and Remote Sensing (ISPRS) were used to evaluate the performance of the three filtering algorithms; RF showed the best results, with an average total error of 5.50%. The paper also makes a tentative exploration of applying transfer learning theory to point cloud filtering, which, to the authors' knowledge, has not previously been introduced into the LiDAR field. We performed filtering of three datasets from real projects carried out in China with RF models constructed by learning from the 15 ISPRS datasets and then transferred with little to no change of the parameters. Reliable results were achieved, especially in rural areas (overall accuracy reached 95.64%), indicating the feasibility of model transfer in the context of point cloud filtering for both easy automation and acceptable accuracy.

  14. Robust control of the DC-DC boost converter based on the uncertainty and disturbance estimator

    NASA Astrophysics Data System (ADS)

    Oucheriah, Said

    2017-11-01

    In this paper, a robust non-linear controller based on the uncertainty and disturbance estimator (UDE) scheme is successfully developed and implemented for the output voltage regulation of the DC-DC boost converter. System uncertainties, external disturbances and unknown non-linear dynamics are lumped as a signal that is accurately estimated using a low-pass filter and their effects are cancelled by the controller. This methodology forms the basis of the UDE-based controller. A simple procedure is also developed that systematically determines the parameters of the controller to meet certain specifications. Using simulation, the effectiveness of the proposed controller is compared against the sliding-mode control (SMC). Experimental tests also show that the proposed controller is robust to system uncertainties, large input and load perturbations.
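
    As a schematic of the general UDE idea (not the paper's converter-specific design), consider a plant whose dynamics include a lumped term d collecting model uncertainty, external disturbance and unknown non-linearities; the estimator recovers d through a low-pass filter and the control law cancels it:

        % Generic UDE structure, stated here under simplifying single-input assumptions.
        \dot{x} = f(x) + b\,u + d, \qquad
        \hat{d} = g_f(t) * \bigl(\dot{x} - f(x) - b\,u\bigr), \qquad
        u = \frac{1}{b}\bigl(v - f(x) - \hat{d}\bigr),

        % where $g_f$ is the impulse response of the low-pass filter $G_f(s)$,
        % $*$ denotes convolution, and $v$ specifies the desired closed-loop dynamics.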

  15. Automating Disk Forensic Processing with SleuthKit, XML and Python

    DTIC Science & Technology

    2009-05-01

    We have developed a program called fiwalk which...files themselves. We show how it is relatively simple to create automated disk forensic applications using a Python module we have written that reads...software that the portable device may contain. Keywords: Computer Forensics; XML; Sleuth Kit; Python.

  16. 76 FR 17413 - Contract Reporting Requirements of Intrastate Natural Gas Companies; Notice of Technical Workshop...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-29

    ... the PDF or XML version of Form No. 549D as a method to eFile quarterly data and whether they intend on... fillable Form No. 549D PDF and XML to file data pursuant to Order Nos. 735 and 735-A... PDF ( http://www.ferc.gov/docs-filing/forms/form-549d/form-549d.pdf ) or XML ( http://www.ferc.gov/docs...

  17. Evaluation of Efficient XML Interchange (EXI) for Large Datasets and as an Alternative to Binary JSON Encodings

    DTIC Science & Technology

    2015-03-01

    ...fall in the lossy category (Gonzalez, Woods, & Eddins, 2009, p. 420). For the textual or numeric data in XML, however, lossy compression is...

  18. XML Schema Guide for Primary CDR Submissions

    EPA Pesticide Factsheets

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.7 XML schema. Please note that the order of the elements must match the schema.

  19. Using XML/HTTP to Store, Serve and Annotate Tactical Scenarios for X3D Operational Visualization and Anti-Terrorist Training

    DTIC Science & Technology

    2003-03-01


  20. Building VoiceXML-Based Applications

    DTIC Science & Technology

    2002-01-01

    The Busline systems were primarily developed using an early implementation of VoiceXML; the NBA Update Line was developed using VoiceXML... traveling in and out of Pittsburgh’s university neighborhood. The second project is the NBA Update Line, which provides callers with real-time information about NBA basketball games... The target user of this system is a fairly knowledgeable basketball fan; the system must therefore be able to provide detailed…

  1. New Directions in the NOAO Observing Proposal System

    NASA Astrophysics Data System (ADS)

    Gasson, David; Bell, Dave

    For the past eight years NOAO has been refining its on-line observing proposal system. Virtually all related processes are now handled electronically. Members of the astronomical community can submit proposals through email, web form, or via the Gemini Phase I Tool. NOAO staff can use the system to do administrative tasks, scheduling, and compilation of various statistics. In addition, all information relevant to the TAC process is made available on-line, including the proposals themselves (in HTML, PDF and PostScript) and technical comments. Grades and TAC comments are entered and edited through web forms, and can be sorted and filtered according to specified criteria. Current developments include a move away from proprietary solutions, toward open standards such as SQL (in the form of the MySQL relational database system), Perl, PHP and XML.

  2. Optimization of heterologous DNA-prime, protein boost regimens and site of vaccination to enhance therapeutic immunity against human papillomavirus-associated disease.

    PubMed

    Peng, Shiwen; Qiu, Jin; Yang, Andrew; Yang, Benjamin; Jeang, Jessica; Wang, Joshua W; Chang, Yung-Nien; Brayton, Cory; Roden, Richard B S; Hung, Chien-Fu; Wu, T-C

    2016-01-01

    Human papillomavirus (HPV) has been identified as the primary etiologic factor of cervical cancer as well as subsets of anogenital and oropharyngeal cancers. The two HPV viral oncoproteins, E6 and E7, are uniquely and consistently expressed in all HPV infected cells and are therefore promising targets for therapeutic vaccination. Both recombinant naked DNA and protein-based HPV vaccines have been demonstrated to elicit HPV-specific CD8+ T cell responses that provide therapeutic effects against HPV-associated tumor models. Here we examine the immunogenicity in a preclinical model of priming with HPV DNA vaccine followed by boosting with filterable aggregates of HPV 16 L2E6E7 fusion protein (TA-CIN). We observed that priming twice with an HPV DNA vaccine followed by a single TA-CIN booster immunization generated the strongest antigen-specific CD8+ T cell response compared to other prime-boost combinations tested in C57BL/6 mice, whether naïve or bearing the HPV16 E6/E7 transformed syngeneic tumor model, TC-1. We showed that the magnitude of antigen-specific CD8+ T cell response generated by the DNA vaccine prime, TA-CIN protein vaccine boost combinatorial strategy is dependent on the dose of TA-CIN protein vaccine. In addition, we found that a single booster immunization comprising intradermal or intramuscular administration of TA-CIN after priming twice with an HPV DNA vaccine generated a comparable boost to E7-specific CD8+ T cell responses. We also demonstrated that the immune responses elicited by the DNA vaccine prime, TA-CIN protein vaccine boost strategy translate into potent prophylactic and therapeutic antitumor effects. Finally, as seen for repeat TA-CIN protein vaccination, we showed that the heterologous DNA prime and protein boost vaccination strategy is well tolerated by mice. Our results provide rationale for future clinical testing of HPV DNA vaccine prime, TA-CIN protein vaccine boost immunization regimen for the control of HPV-associated diseases.

  3. Computing a Comprehensible Model for Spam Filtering

    NASA Astrophysics Data System (ADS)

    Ruiz-Sepúlveda, Amparo; Triviño-Rodriguez, José L.; Morales-Bueno, Rafael

    In this paper, we describe the application of the Decision Tree Boosting (DTB) learning model to spam email filtering. This classification task implies learning in a high-dimensional feature space, so it is an example of how the DTB algorithm performs in such feature-space problems. In [1], it has been shown that hypotheses computed by the DTB model are more comprehensible than the ones computed by other ensemble methods. Hence, this paper tries to show that the DTB algorithm maintains the same comprehensibility of hypotheses in high-dimensional feature-space problems while achieving the performance of other ensemble methods. Four traditional evaluation measures (precision, recall, F1 and accuracy) have been considered for performance comparison between DTB and other models usually applied to spam email filtering. The size of the hypothesis computed by DTB is smaller and more comprehensible than the hypotheses computed by AdaBoost and Naïve Bayes.
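
    For context, a minimal sketch of the kind of experiment described above, with scikit-learn's AdaBoost over decision stumps standing in for the DTB learner (DTB itself is not reimplemented here): bag-of-words features for a handful of made-up messages, then the four reported measures.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

        emails = ["win a free prize now", "meeting agenda for monday",
                  "cheap meds online", "project status report attached"]
        labels = [1, 0, 1, 0]                        # 1 = spam, 0 = ham

        X = CountVectorizer().fit_transform(emails)   # high-dimensional sparse feature space
        clf = AdaBoostClassifier(n_estimators=25).fit(X, labels)
        pred = clf.predict(X)

        for name, fn in [("precision", precision_score), ("recall", recall_score),
                         ("F1", f1_score), ("accuracy", accuracy_score)]:
            print(name, fn(labels, pred))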

  4. CytometryML, an XML format based on DICOM and FCS for analytical cytology data.

    PubMed

    Leif, Robert C; Leif, Suzanne B; Leif, Stephanie H

    2003-07-01

    Flow Cytometry Standard (FCS) was initially created to standardize the software researchers use to analyze, transmit, and store data produced by flow cytometers and sorters. Because of the clinical utility of flow cytometry, it is necessary to have a standard consistent with the requirements of medical regulatory agencies. We extended the existing mapping of FCS to the Digital Imaging and Communications in Medicine (DICOM) standard to include list-mode data produced by flow cytometry, laser scanning cytometry, and microscopic image cytometry. FCS list-mode was mapped to the DICOM Waveform Information Object. We created a collection of Extensible Markup Language (XML) schemas to express the DICOM analytical cytologic text-based data types except for large binary objects. We also developed a cytometry markup language, CytometryML, in an open environment subject to continuous peer review. The feasibility of expressing the data contained in FCS, including list-mode in DICOM, was demonstrated; and a preliminary mapping for list-mode data in the form of XML schemas and documents was completed. DICOM permitted the creation of indices that can be used to rapidly locate in a list-mode file the cells that are members of a subset. DICOM and its coding schemes for other medical standards can be represented by XML schemas, which can be combined with other relevant XML applications, such as Mathematical Markup Language (MathML). The use of XML format based on DICOM for analytical cytology met most of the previously specified requirements and appears capable of meeting the others; therefore, the present FCS should be retired and replaced by an open, XML-based, standard CytometryML. Copyright 2003 Wiley-Liss, Inc.

  5. Efficient XML Interchange (EXI) Compression and Performance Benefits: Development, Implementation and Evaluation

    DTIC Science & Technology

    2010-03-01

    ...to a graphics card, and not the redesign of XML. The justification is that if XML is going to be prevalent, special optimized hardware is... the answer, similar to the specialized functions of a video card. Given Moore's law, that processing power doubles every few years, let the... and numerous multimedia players such as iTunes from Apple. These applications are free to use, but the source is restricted by software licenses

  6. A browser-based tool for conversion between Fortran NAMELIST and XML/HTML

    NASA Astrophysics Data System (ADS)

    Naito, O.

    A browser-based tool for conversion between Fortran NAMELIST and XML/HTML is presented. It runs on an HTML5 compliant browser and generates reusable XML files to aid interoperability. It also provides a graphical interface for editing and annotating variables in NAMELIST, hence serves as a primitive code documentation environment. Although the tool is not comprehensive, it could be viewed as a test bed for integrating legacy codes into modern systems.
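
    As a small sketch of the conversion direction mentioned above (not the tool itself, which runs in the browser): parse one Fortran NAMELIST group with a regular expression and emit its variables as XML elements. Real NAMELIST syntax (arrays, repeat counts, strings containing commas) is richer than this illustration handles, and the group below is made up.

        import re
        import xml.etree.ElementTree as ET

        namelist = "&plasma ne0 = 5.0e19, te0 = 2.3, ngrid = 101 /"   # made-up NAMELIST group

        m = re.match(r"&(\w+)\s+(.*?)\s*/", namelist)                  # group name, then body
        root = ET.Element("namelist", name=m.group(1))
        for item in m.group(2).split(","):
            key, value = (s.strip() for s in item.split("="))
            ET.SubElement(root, "var", name=key).text = value

        print(ET.tostring(root, encoding="unicode"))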

  7. XML Schema Guide for Secondary CDR Submissions

    EPA Pesticide Factsheets

    This document presents the extensible markup language (XML) schema guide for the Office of Pollution Prevention and Toxics’ (OPPT) e-CDRweb tool. E-CDRweb is the electronic, web-based tool provided by Environmental Protection Agency (EPA) for the submission of Chemical Data Reporting (CDR) information. This document provides the user with tips and guidance on correctly using the version 1.1 XML schema for the Joint Submission Form. Please note that the order of the elements must match the schema.

  8. The SGML Standardization Framework and the Introduction of XML

    PubMed Central

    Grütter, Rolf

    2000-01-01

    Extensible Markup Language (XML) is on its way to becoming a global standard for the representation, exchange, and presentation of information on the World Wide Web (WWW). More than that, XML is creating a standardization framework, in terms of an open network of meta-standards and mediators that allows for the definition of further conventions and agreements in specific business domains. Such an approach is particularly needed in the healthcare domain; XML promises to especially suit the particularities of patient records and their lifelong storage, retrieval, and exchange. At a time when change rather than steadiness is becoming the faithful feature of our society, standardization frameworks which support a diversified growth of specifications that are appropriate to the actual needs of the users are becoming more and more important; and efforts should be made to encourage this new attempt at standardization to grow in a fruitful direction. Thus, the introduction of XML reflects a standardization process which is neither exclusively based on an acknowledged standardization authority, nor a pure market standard. Instead, a consortium of companies, academic institutions, and public bodies has agreed on a common recommendation based on an existing standardization framework. The consortium's process of agreeing to a standardization framework will doubtlessly be successful in the case of XML, and it is suggested that it should be considered as a generic model for standardization processes in the future. PMID:11720931

  9. The SGML standardization framework and the introduction of XML.

    PubMed

    Fierz, W; Grütter, R

    2000-01-01

    Extensible Markup Language (XML) is on its way to becoming a global standard for the representation, exchange, and presentation of information on the World Wide Web (WWW). More than that, XML is creating a standardization framework, in terms of an open network of meta-standards and mediators that allows for the definition of further conventions and agreements in specific business domains. Such an approach is particularly needed in the healthcare domain; XML promises to especially suit the particularities of patient records and their lifelong storage, retrieval, and exchange. At a time when change rather than steadiness is becoming the faithful feature of our society, standardization frameworks which support a diversified growth of specifications that are appropriate to the actual needs of the users are becoming more and more important; and efforts should be made to encourage this new attempt at standardization to grow in a fruitful direction. Thus, the introduction of XML reflects a standardization process which is neither exclusively based on an acknowledged standardization authority, nor a pure market standard. Instead, a consortium of companies, academic institutions, and public bodies has agreed on a common recommendation based on an existing standardization framework. The consortium's process of agreeing to a standardization framework will doubtlessly be successful in the case of XML, and it is suggested that it should be considered as a generic model for standardization processes in the future.

  10. XML at the ADC: Steps to a Next Generation Data Archive

    NASA Astrophysics Data System (ADS)

    Shaya, E.; Blackwell, J.; Gass, J.; Oliversen, N.; Schneider, G.; Thomas, B.; Cheung, C.; White, R. A.

    1999-05-01

    The eXtensible Markup Language (XML) is a document markup language that allows users to specify their own tags, to create hierarchical structures to qualify their data, and to support automatic checking of documents for structural validity. It is being intensively supported by nearly every major corporate software developer. Under the funds of a NASA AISRP proposal, the Astronomical Data Center (ADC, http://adc.gsfc.nasa.gov) is developing an infrastructure for importation, enhancement, and distribution of data and metadata using XML as the document markup language. We discuss the preliminary Document Type Definition (DTD, at http://adc.gsfc.nasa.gov/xml) which specifies the elements and their attributes in our metadata documents. This attempts to define both the metadata of an astronomical catalog and the `header' information of an astronomical table. In addition, we give an overview of the planned flow of data through automated pipelines from authors and journal presses into our XML archive and retrieval through the web via the XML-QL Query Language and eXtensible Style Language (XSL) scripts. When completed, the catalogs and journal tables at the ADC will be tightly hyperlinked to enhance data discovery. In addition one will be able to search on fragmentary information. For instance, one could query for a table by entering that the second author is so-and-so or that the third author is at such-and-such institution.

  11. XML and its impact on content and structure in electronic health care documents.

    PubMed Central

    Sokolowski, R.; Dudeck, J.

    1999-01-01

    Worldwide information networks have the requirement that electronic documents must be easily accessible, portable, flexible and system-independent. With the development of XML (eXtensible Markup Language), the future of electronic documents, health care informatics and the Web itself are about to change. The intent of the recently formed ASTM E31.25 subcommittee, "XML DTDs for Health Care", is to develop standard electronic document representations of paper-based health care documents and forms. A goal of the subcommittee is to work together to enhance existing levels of interoperability among the various XML/SGML standardization efforts, products and systems in health care. The ASTM E31.25 subcommittee uses common practices and software standards to develop the implementation recommendations for XML documents in health care. The implementation recommendations are being developed to standardize the many different structures of documents. These recommendations are in the form of a set of standard DTDs, or document type definitions that match the electronic document requirements in the health care industry. This paper discusses recent efforts of the ASTM E31.25 subcommittee. PMID:10566338

  12. XML schemas for common bioinformatic data types and their application in workflow systems

    PubMed Central

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-01-01

    Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, and the BioDOM library can be obtained at http://biodom.sourceforge.net. Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823

  13. XGI: a graphical interface for XQuery creation.

    PubMed

    Li, Xiang; Gennari, John H; Brinkley, James F

    2007-10-11

    XML has become the default standard for data exchange among heterogeneous data sources, and in January 2007 XQuery (XML Query language) was recommended by the World Wide Web Consortium as the query language for XML. However, XQuery is a complex language that is difficult for non-programmers to learn. We have therefore developed XGI (XQuery Graphical Interface), a visual interface for graphically generating XQuery. In this paper we demonstrate the functionality of XGI through its application to a biomedical XML dataset. We describe the system architecture and the features of XGI in relation to several existing querying systems, we demonstrate the system's usability through a sample query construction, and we discuss a preliminary evaluation of XGI. Finally, we describe some limitations of the system, and our plans for future improvements.

  14. Weighted hybrid technique for recommender system

    NASA Astrophysics Data System (ADS)

    Suriati, S.; Dwiastuti, Meisyarah; Tulus, T.

    2017-12-01

    Recommender systems have become very popular and play an important role in information systems and web pages nowadays. A recommender system tries to predict which items a user may like based on the user's activity on the system. Familiar techniques for building a recommender system include content-based filtering and collaborative filtering. Content-based filtering does not involve human opinions in making the prediction, while collaborative filtering does, so collaborative filtering can predict more accurately. However, collaborative filtering cannot give predictions for items which have never been rated by any user. In order to cover the drawbacks of each approach with the advantages of the other, both approaches can be combined in what is known as a hybrid technique. The hybrid technique used in this work is a weighted technique in which the prediction score is a linear combination of the scores produced by the combined techniques. The purpose of this work is to show how a weighted hybrid technique combining content-based filtering and item-based collaborative filtering can work in a movie recommender system, and to compare the performance when both approaches are combined against each approach working alone. Three experiments were done in this work, combining both techniques with different parameters. The results show that the weighted hybrid technique does not substantially boost performance, but it does provide prediction scores for unrated movies that could not be recommended using collaborative filtering alone.
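
    A minimal sketch of the weighted hybrid scoring described above (Python; the weight alpha, the cold-start fallback, and the example scores are illustrative assumptions, not the paper's parameters):

        def weighted_hybrid_score(content_score, collaborative_score, alpha=0.5):
            """Linear combination of a content-based and a collaborative score."""
            if collaborative_score is None:
                # Item never rated by any user: fall back to the content-based
                # score alone, covering the cold-start gap noted in the abstract.
                return content_score
            return alpha * content_score + (1.0 - alpha) * collaborative_score

        # Example: content-based score 4.2, item-based collaborative score 3.6
        print(weighted_hybrid_score(4.2, 3.6, alpha=0.4))   # -> 3.84

    The single weight alpha stands in for the combination parameters varied across the three experiments.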

  15. Space Communications Emulation Facility

    NASA Technical Reports Server (NTRS)

    Hill, Chante A.

    2004-01-01

    Establishing space communication between ground facilities and other satellites is a painstaking task that requires many precise calculations dealing with relay time, atmospheric conditions, and satellite positions, to name a few. The Space Communications Emulation Facility (SCEF) team here at NASA is developing a facility that will approximately emulate the conditions in space that impact space communication. The emulation facility comprises a 32-node distributed cluster of computers, each node representing a satellite or ground station. The objective of the satellites is to observe the topography of the Earth (water, vegetation, land, and ice) and relay this information back to the ground stations. Software originally designed by the University of Kansas, labeled the Emulation Manager, controls the interaction of the satellites and ground stations, as well as handling the recording of data. The Emulation Manager is installed on a Linux operating system, employing both Java and C++ code. The emulation scenarios are written in the eXtensible Markup Language (XML). XML documents are designed to store, carry, and exchange data. With XML documents, data can be exchanged between incompatible systems, which makes XML ideal for this project because Linux, Mac, and Windows operating systems are all used. Unfortunately, XML documents cannot display data like HTML documents. Therefore, the SCEF team uses XML Schema Definition (XSD), or simply schema, to describe the structure of an XML document. Schemas are very important because they have the capability to validate the correctness of data, define restrictions on data, define data formats, and convert data between different data types, among other things. At this time, in order for the Emulation Manager to open and run an XML emulation scenario file, the user must first establish a link between the schema file and the directory under which the XML scenario files are saved. This procedure takes place on the command line of the Linux operating system. Once this link has been established, the Emulation Manager validates all the XML files in that directory against the schema file before the actual scenario is run. Using sophisticated commercial software called the Satellite Tool Kit (STK) installed on the Linux box, the Emulation Manager is able to display the data and graphics generated by the execution of an XML emulation scenario file. The Emulation Manager software is written in Java. Since the SCEF project is in the developmental stage, the source code for this software is being modified to better fit the requirements of the SCEF project. Some parameters for the emulation are hard-coded, set at fixed values. Members of the SCEF team are altering the code to allow the user to choose the values of these hard-coded parameters by inserting a toolbar onto the preexisting GUI.
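
    The schema-validation step described above can be illustrated with a short Python sketch (lxml stands in for the Emulation Manager's own validation; the schema and directory names are placeholders):

        from pathlib import Path
        from lxml import etree

        schema = etree.XMLSchema(etree.parse("scenario.xsd"))   # hypothetical XSD file

        for scenario in Path("scenarios").glob("*.xml"):
            doc = etree.parse(str(scenario))
            if schema.validate(doc):
                print(f"{scenario.name}: valid")
            else:
                print(f"{scenario.name}: INVALID -> {schema.error_log.last_error}")

    Validating every scenario file against the schema before a run is what catches malformed data, restriction violations, and type errors early.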

  16. Integrated Design and Production Reference Integration with ArchGenXML V1.00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barter, R H

    2004-07-20

    ArchGenXML is a tool that allows easy creation of Zope products through the use of Archetypes. The Integrated Design and Production Reference (IDPR) should be highly configurable in order to meet the needs of a diverse engineering community. Ease of configuration is key to the success of IDPR. The purpose of this paper is to describe a method of using a UML diagram editor to configure IDPR through ArchGenXML and Archetypes.

  17. PDF for Healthcare and Child Health Data Forms.

    PubMed

    Zuckerman, Alan E; Schneider, Joseph H; Miller, Ken

    2008-11-06

    PDF-H is a new best practices standard that uses XFA forms and embedded JavaScript to combine PDF forms with XML data. Preliminary experience with AAP child health forms shows that the combination of PDF with XML is a more effective method to visualize familiar data on paper and the web than the traditional use of XML and XSLT. Both PDF-H and HL7 Clinical Document Architecture can co-exist using the same data for different display formats.

  18. Follow the Leader Tracking by Autonomous Underwater Vehicles (AUVs) Using Acoustic Communications and Ranging

    DTIC Science & Technology

    2003-09-01

    [Report excerpts:] 590-595, September 1996. Deitel, H.M., Deitel, P.J., Nieto, T.R., Lin, T.M., Sadhu, P., XML: How to Program, Prentice Hall, 2001. Du, Y. ... "communications will result in a total track following error equal to the sum of the errors for the two vehicles" (p. 48). Figure 36: Test point programming. "Refer to (Hunter 2000), (Deitel 2001), or similar references for additional information regarding the XML standard." Figure 17: XML example.

  19. Catalog Descriptions Using VOTable Files

    NASA Astrophysics Data System (ADS)

    Thompson, R.; Levay, K.; Kimball, T.; White, R.

    2008-08-01

    Additional information is frequently required to describe database table contents and make it understandable to users. For this reason, the Multimission Archive at Space Telescope (MAST) creates "description files" for each table/catalog. After trying various XML and CSV formats, we finally chose VOTable. These files are easy to update via an HTML form, easily read using an XML parser such as (in our case) the PHP5 SimpleXML extension, and have found multiple uses in our data access/retrieval process.

  20. A gradient-boosting approach for filtering de novo mutations in parent-offspring trios.

    PubMed

    Liu, Yongzhuang; Li, Bingshan; Tan, Renjie; Zhu, Xiaolin; Wang, Yadong

    2014-07-01

    Whole-genome and -exome sequencing on parent-offspring trios is a powerful approach to identifying disease-associated genes by detecting de novo mutations in patients. Accurate detection of de novo mutations from sequencing data is a critical step in trio-based genetic studies. Existing bioinformatic approaches usually yield high error rates due to sequencing artifacts and alignment issues, which may either miss true de novo mutations or call too many false ones, making downstream validation and analysis difficult. In particular, current approaches have much worse specificity than sensitivity, and developing effective filters to discriminate genuine from spurious de novo mutations remains an unsolved challenge. In this article, we curated 59 sequence features in whole genome and exome alignment context which are considered to be relevant to discriminating true de novo mutations from artifacts, and then employed a machine-learning approach to classify candidates as true or false de novo mutations. Specifically, we built a classifier, named De Novo Mutation Filter (DNMFilter), using gradient boosting as the classification algorithm. We built the training set using experimentally validated true and false de novo mutations as well as collected false de novo mutations from an in-house large-scale exome-sequencing project. We evaluated DNMFilter's theoretical performance and investigated relative importance of different sequence features on the classification accuracy. Finally, we applied DNMFilter on our in-house whole exome trios and one CEU trio from the 1000 Genomes Project and found that DNMFilter could be coupled with commonly used de novo mutation detection approaches as an effective filtering approach to significantly reduce false discovery rate without sacrificing sensitivity. The software DNMFilter implemented using a combination of Java and R is freely available from the website at http://humangenome.duke.edu/software. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
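
    A hedged sketch of the classification idea (not DNMFilter itself, which is implemented in Java and R): a gradient-boosting classifier trained on per-candidate alignment features, shown here with scikit-learn and synthetic stand-in data:

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 59))      # 59 alignment-context features per candidate
        y = rng.integers(0, 2, size=500)    # 1 = validated true de novo mutation

        clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
        print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

    In practice, candidate calls from a standard de novo mutation caller would be scored by such a model and low-scoring candidates filtered out before downstream validation.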

  1. TH-C-12A-12: Veritas: An Open Source Tool to Facilitate User Interaction with TrueBeam Developer Mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Varian Medical Systems, Palo Alto, CA; Lewis, J

    2014-06-15

    Purpose: To address the challenges of creating delivery trajectories and imaging sequences with TrueBeam Developer Mode, a new open-source graphical XML builder, Veritas, has been developed, tested and made freely available. Veritas eliminates most of the need to understand the underlying schema and write XML scripts, by providing a graphical menu for each control point specifying the state of 30 mechanical/dose axes. All capabilities of Developer Mode are accessible in Veritas. Methods: Veritas was designed using QT Designer, a ‘what-you-see-is-what-you-get’ (WYSIWYG) tool for building graphical user interfaces (GUI). Different components of the GUI are integrated using QT's signals and slots mechanism. Functionalities are added using PySide, an open source, cross platform Python binding for the QT framework. The XML code generated is immediately visible, making it an interactive learning tool. A user starts from an anonymized DICOM file or XML example and introduces delivery modifications, or begins their experiment from scratch, then uses the GUI to modify control points as desired. The software automatically generates XML plans following the appropriate schema. Results: Veritas was tested by generating and delivering two XML plans at Brigham and Women's Hospital. The first example was created to irradiate the letter ‘B’ with a narrow MV beam using dynamic couch movements. The second was created to acquire 4D CBCT projections for four minutes. The delivery of the letter ‘B’ was observed using a 2D array of ionization chambers. Both deliveries were generated quickly in Veritas by non-expert Developer Mode users. Conclusion: We introduced a new open source tool Veritas for generating XML plans (delivery trajectories and imaging sequences). Veritas makes Developer Mode more accessible by reducing the learning curve for quick translation of research ideas into XML plans. Veritas is an open source initiative, creating the possibility for future developments and collaboration with other researchers. I am an employee of Varian Medical Systems.
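
    A rough illustration of emitting an XML plan as a sequence of control points, in the spirit of what Veritas generates (Python standard library; the element and attribute names are invented for the example and do not follow the actual TrueBeam Developer Mode schema):

        import xml.etree.ElementTree as ET

        plan = ET.Element("Plan", name="demo")
        for i, (gantry, couch_lat, mu) in enumerate([(0.0, 0.0, 0), (10.0, 1.5, 20)]):
            cp = ET.SubElement(plan, "ControlPoint", index=str(i))
            ET.SubElement(cp, "GantryRtn").text = str(gantry)   # one of the ~30 axes
            ET.SubElement(cp, "CouchLat").text = str(couch_lat)
            ET.SubElement(cp, "MU").text = str(mu)

        ET.ElementTree(plan).write("demo_plan.xml", xml_declaration=True, encoding="utf-8")

    Veritas's contribution is to hide exactly this kind of schema bookkeeping behind a per-control-point graphical menu.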

  2. Photodegradation of avobenzone: stabilization effect of antioxidants.

    PubMed

    Afonso, S; Horita, K; Sousa e Silva, J P; Almeida, I F; Amaral, M H; Lobão, P A; Costa, P C; Miranda, Margarida S; Esteves da Silva, Joaquim C G; Sousa Lobo, J M

    2014-11-01

    Avobenzone is one of the most common UVA filters in sunscreens, and is known to be photounstable. Some of the strategies used to stabilize this filter have drawbacks such as photosensitization reactions. Antioxidants are widely used as cosmetic ingredients that prevent photoageing and complement the photoprotection offered by UV filters by preventing or reducing photogenerated reactive species. The purpose of this work was to study the effect of antioxidants on the photostabilization of avobenzone. The filter, dissolved in dimethyl sulfoxide or incorporated in a sunscreen formulation, was irradiated with simulated solar radiation (750 W/m²). The tested antioxidants were vitamin C, vitamin E, and ubiquinone. The area under the curve of the absorption spectrum for the UVA range and the sun protection factor (SPF) were calculated. Vitamin E (1:2), vitamin C (1:0.5), and ubiquinone (1:0.5) were the most effective concentrations for increasing the photostability of avobenzone. In sunscreen formulations, the most effective photostabilizer was ubiquinone, which also promoted an increase in SPF. This knowledge is important for improving the effectiveness of sunscreen formulations. Antioxidants can be valuable ingredients for sunscreens, with a triple activity of filter stabilization, SPF boosting and photoageing prevention. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Essential Annotation Schema for Ecology (EASE)-A framework supporting the efficient data annotation and faceted navigation in ecology.

    PubMed

    Pfaff, Claas-Thido; Eichenberg, David; Liebergesell, Mario; König-Ries, Birgitta; Wirth, Christian

    2017-01-01

    Ecology has become a data-intensive science over the last decades, one that often relies on the reuse of data in cross-experimental analyses. However, finding data which qualifies for reuse in a specific context can be challenging. It requires good quality metadata and annotations as well as efficient search strategies. To date, full text search (often on the metadata only) is the most widely used search strategy although it is known to be inaccurate. Faceted navigation provides a filter mechanism which is based on fine-granular metadata, categorizing search objects along numeric and categorical parameters relevant for their discovery. Selecting from these parameters during a full text search creates a system of filters which allows users to refine and improve the relevance of the results. We developed a framework for efficient annotation and faceted navigation in ecology. It consists of an XML schema for storing the annotation of search objects and is accompanied by a vocabulary focused on ecology to support the annotation process. The framework consolidates ideas which originate from widely accepted metadata standards, textbooks, scientific literature, and vocabularies as well as from expert knowledge contributed by researchers from ecology and adjacent disciplines.

  4. XML DTD and Schemas for HDF-EOS

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Yang, Jingli

    2008-01-01

    An Extensible Markup Language (XML) document type definition (DTD) standard for the structure and contents of HDF-EOS files and their contents, and an equivalent standard in the form of schemas, have been developed.

  5. MedlinePlus Milestones: 1998-present

    MedlinePlus

    ... page links and information daily and also offers access to this full XML content through its Web ... search-based Web service that allows developers to access MedlinePlus health topic data in XML format. MedlinePlus ...

  6. ForConX: A forcefield conversion tool based on XML.

    PubMed

    Lesch, Volker; Diddens, Diddo; Bernardes, Carlos E S; Golub, Benjamin; Dequidt, Alain; Zeindlhofer, Veronika; Sega, Marcello; Schröder, Christian

    2017-04-05

    The conversion of a force field from one MD program to another is exhausting and error-prone. Although single conversion tools from one MD program to another exist, not every combination, nor both directions of conversion, is available for the popular MD programs Amber, Charmm, Dl-Poly, Gromacs, and Lammps. We present here a general tool for force field conversion on the basis of an XML document. The force field is converted to and from this XML structure, facilitating the implementation of new MD programs for the conversion. Furthermore, the XML structure is human-readable and can be manipulated before continuing the conversion. We report, as test cases, the conversions of topologies for acetonitrile, dimethylformamide, and 1-ethyl-3-methylimidazolium trifluoromethanesulfonate, comprising also Urey-Bradley and Ryckaert-Bellemans potentials. © 2017 Wiley Periodicals, Inc.

  7. Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration

    NASA Technical Reports Server (NTRS)

    Lin, Risheng; Afjeh, Abdollah A.

    2003-01-01

    This paper discusses the detailed design of an XML databinding framework for aircraft engine simulation. The framework provides an object interface to access and use engine data, while at the same time preserving the meaning of the original data. The language-independent representation of engine component data enables users to move XML data around using HTTP through disparate networks. The application of this framework is demonstrated via a web-based turbofan propulsion system simulation using the World Wide Web (WWW). A Java Servlet based web component architecture is used for rendering XML engine data into HTML format and dealing with input events from the user, which allows users to interact with simulation data from a web browser. The simulation data can also be saved to a local disk for archiving or to restart the simulation at a later time.
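
    A small sketch of the databinding idea: engine-component XML is parsed and bound to a plain object whose fields preserve the meaning of the original data (Python; the element names and values are placeholders, and in the paper the XML would arrive over HTTP rather than from a literal string):

        import xml.etree.ElementTree as ET
        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            pressure_ratio: float

        xml_doc = "<component><name>fan</name><pressureRatio>1.7</pressureRatio></component>"
        root = ET.fromstring(xml_doc)
        fan = Component(root.findtext("name"), float(root.findtext("pressureRatio")))
        print(fan)   # Component(name='fan', pressure_ratio=1.7)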

  8. Representing Information in Patient Reports Using Natural Language Processing and the Extensible Markup Language

    PubMed Central

    Friedman, Carol; Hripcsak, George; Shagina, Lyuda; Liu, Hongfang

    1999-01-01

    Objective: To design a document model that provides reliable and efficient access to clinical information in patient reports for a broad range of clinical applications, and to implement an automated method using natural language processing that maps textual reports to a form consistent with the model. Methods: A document model that encodes structured clinical information in patient reports while retaining the original contents was designed using the extensible markup language (XML), and a document type definition (DTD) was created. An existing natural language processor (NLP) was modified to generate output consistent with the model. Two hundred reports were processed using the modified NLP system, and the XML output that was generated was validated using an XML validating parser. Results: The modified NLP system successfully processed all 200 reports. The output of one report was invalid, and 199 reports were valid XML forms consistent with the DTD. Conclusions: Natural language processing can be used to automatically create an enriched document that contains a structured component whose elements are linked to portions of the original textual report. This integrated document model provides a representation where documents containing specific information can be accurately and efficiently retrieved by querying the structured components. If manual review of the documents is desired, the salient information in the original reports can also be identified and highlighted. Using an XML model of tagging provides an additional benefit in that software tools that manipulate XML documents are readily available. PMID:9925230

  9. XML schemas for common bioinformatic data types and their application in workflow systems.

    PubMed

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-11-06

    Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data--therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.

  10. A distributed computing system for magnetic resonance imaging: Java-based processing and binding of XML.

    PubMed

    de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D

    2004-03-01

    Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.

  11. Flight Dynamic Model Exchange using XML

    NASA Technical Reports Server (NTRS)

    Jackson, E. Bruce; Hildreth, Bruce L.

    2002-01-01

    The AIAA Modeling and Simulation Technical Committee has worked for several years to develop a standard by which the information needed to develop physics-based models of aircraft can be specified. The purpose of this standard is to provide a well-defined set of information, definitions, data tables and axis systems so that cooperating organizations can transfer a model from one simulation facility to another with maximum efficiency. This paper proposes using an application of the eXtensible Markup Language (XML) to implement the AIAA simulation standard. The motivation and justification for using a standard such as XML is discussed. Necessary data elements to be supported are outlined. An example of an aerodynamic model as an XML file is given. This example includes definition of independent and dependent variables for function tables, definition of key variables used to define the model, and axis systems used. The final steps necessary for implementation of the standard are presented. Software to take an XML-defined model and import/export it to/from a given simulation facility is discussed, but not demonstrated. That would be the next step in final implementation of standards for physics-based aircraft dynamic models.

  12. Spreadsheets for Analyzing and Optimizing Space Missions

    NASA Technical Reports Server (NTRS)

    Some, Raphael R.; Agrawal, Anil K.; Czikmantory, Akos J.; Weisbin, Charles R.; Hua, Hook; Neff, Jon M.; Cowdin, Mark A.; Lewis, Brian S.; Iroz, Juana; Ross, Rick

    2009-01-01

    XCALIBR (XML Capability Analysis LIBRary) is a set of Extensible Markup Language (XML) database and spreadsheet-based analysis software tools designed to assist in technology-return-on-investment analysis and optimization of technology portfolios pertaining to outer-space missions. XCALIBR is also being examined for use in planning, tracking, and documentation of projects. An XCALIBR database contains information on mission requirements and technological capabilities, which are related by use of an XML taxonomy. XCALIBR incorporates a standardized interface for exporting data and analysis templates to an Excel spreadsheet. Unique features of XCALIBR include the following: It is inherently hierarchical by virtue of its XML basis. The XML taxonomy codifies a comprehensive data structure and data dictionary that includes performance metrics for spacecraft, sensors, and spacecraft systems other than sensors. The taxonomy contains >700 nodes representing all levels, from system through subsystem to individual parts. All entries are searchable and machine readable. There is an intuitive Web-based user interface. The software automatically matches technologies to mission requirements. The software automatically generates, and makes the required entries in, an Excel return-on-investment analysis software tool. The results of an analysis are presented in both tabular and graphical displays.

  13. Shuttle-Data-Tape XML Translator

    NASA Technical Reports Server (NTRS)

    Barry, Matthew R.; Osborne, Richard N.

    2005-01-01

    JSDTImport is a computer program for translating native Shuttle Data Tape (SDT) files from American Standard Code for Information Interchange (ASCII) format into databases in other formats. JSDTImport solves the problem of organizing the SDT content, affording flexibility to enable users to choose how to store the information in a database to better support client and server applications. JSDTImport can be dynamically configured by use of a simple Extensible Markup Language (XML) file. JSDTImport uses this XML file to define how each record and field will be parsed, its layout and definition, and how the resulting database will be structured. JSDTImport also includes a client application programming interface (API) layer that provides abstraction for the data-querying process. The API enables a user to specify the search criteria to apply in gathering all the data relevant to a query. The API can be used to organize the SDT content and translate into a native XML database. The XML format is structured into efficient sections, enabling excellent query performance by use of the XPath query language. Optionally, the content can be translated into a Structured Query Language (SQL) database for fast, reliable SQL queries on standard database server computers.

  14. Input-current shaped ac to dc converters

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The problem of achieving near unity power factor while supplying power to a dc load from a single phase ac source of power is examined. Power processors for this application must perform three functions: input current shaping, energy storage, and output voltage regulation. The methods available for performing each of these three functions are reviewed. Input current shaping methods are either active or passive, with the active methods divided into buck-like and boost-like techniques. In addition to large reactances, energy storage methods include resonant filters, active filters, and active storage schemes. Fast voltage regulation can be achieved by post regulation or by supplementing the current shaping topology with an extra switch. Some indications of which methods are best suited for particular applications concludes the discussion.

  15. Tongue Scrapers Only Slightly Reduce Bad Breath

    MedlinePlus

    ... information you need from the Academy of General Dentistry ... study in the September/October issue of General Dentistry, the Academy of General Dentistry ...

  16. PDB data curation.

    PubMed

    Wang, Yanchao; Sunderraman, Rajshekhar

    2006-01-01

    In this paper, we propose two architectures for curating PDB data to improve its quality. The first one, the PDB Data Curation System, is developed by adding two parts, a Checking Filter and a Curation Engine, between the User Interface and the Database. This architecture supports basic PDB data curation. The other one, the PDB Data Curation System with XCML, is designed for further curation and adds four more parts (PDB-XML, PDB, OODB, Protin-OODB) to the previous one. This architecture uses the XCML language to automatically check errors in PDB data, which makes PDB data more consistent and accurate. These two tools can be used for cleaning existing PDB files and creating new PDB files. We also show some ideas on how to add constraints and assertions with XCML to get better data. In addition, we discuss data provenance, which may affect data accuracy and consistency.

  17. XML in an Adaptive Framework for Instrument Control

    NASA Technical Reports Server (NTRS)

    Ames, Troy J.

    2004-01-01

    NASA Goddard Space Flight Center is developing an extensible framework for instrument command and control, known as Instrument Remote Control (IRC), that combines the platform independent processing capabilities of Java with the power of the Extensible Markup Language (XML). A key aspect of the architecture is software that is driven by an instrument description, written using the Instrument Markup Language (IML). IML is an XML dialect used to describe interfaces to control and monitor the instrument, command sets and command formats, data streams, communication mechanisms, and data processing algorithms.

  18. Stationary Wavelet Transform and AdaBoost with SVM Based Pathological Brain Detection in MRI Scanning.

    PubMed

    Nayak, Deepak Ranjan; Dash, Ratnakar; Majhi, Banshidhar

    2017-01-01

    This paper presents an automatic classification system for segregating pathological brains from normal brains in magnetic resonance imaging scanning. The proposed system employs a contrast limited adaptive histogram equalization scheme to enhance the diseased region in brain MR images. A two-dimensional stationary wavelet transform is harnessed to extract features from the preprocessed images. The feature vector is constructed using the energy and entropy values computed from the level-2 SWT coefficients. Then, the relevant and uncorrelated features are selected using a symmetric uncertainty ranking filter. Subsequently, the selected features are given as input to the proposed AdaBoost with support vector machine classifier, where SVM is used as the base classifier of the AdaBoost algorithm. To validate the proposed system, three standard MR image datasets, Dataset-66, Dataset-160, and Dataset-255, have been utilized. Results from 5 runs of k-fold stratified cross-validation indicate that the suggested scheme offers better performance than other existing schemes in terms of accuracy and number of features. The proposed system achieves ideal classification over Dataset-66 and Dataset-160; whereas, for Dataset-255, an accuracy of 99.45% is achieved. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
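
    The AdaBoost-with-SVM classifier can be sketched with scikit-learn (random stand-ins replace the SWT energy/entropy features; note that the parameter is named estimator in recent scikit-learn releases and base_estimator in older ones):

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(160, 14))      # e.g. energy/entropy per SWT subband
        y = rng.integers(0, 2, size=160)    # 0 = normal brain, 1 = pathological

        ada = AdaBoostClassifier(estimator=SVC(kernel="rbf"),
                                 n_estimators=20, algorithm="SAMME")
        print(cross_val_score(ada, X, y, cv=5).mean())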

  19. A boost and bounce theory of temporal attention.

    PubMed

    Olivers, Christian N L; Meeter, Martijn

    2008-10-01

    What is the time course of visual attention? Attentional blink studies have found that the 2nd of 2 targets is often missed when presented within about 500 ms from the 1st target, resulting in theories about relatively long-lasting capacity limitations or bottlenecks. Earlier studies, however, reported quite the opposite finding: Attention is transiently enhanced, rather than reduced, for several hundreds of milliseconds after a relevant event. The authors present a general theory, as well as a working computational model, that integrate these findings. There is no central role for capacity limitations or bottlenecks. Central is a rapidly responding gating system (or attentional filter) that seeks to enhance relevant and suppress irrelevant information. When items sufficiently match the target description, they elicit transient excitatory feedback activity (a "boost" function), meant to provide access to working memory. However, in the attentional blink task, the distractor after the target is accidentally boosted, resulting in subsequent strong inhibitory feedback response (a "bounce"), which, in effect, closes the gate to working memory. The theory explains many findings that are problematic for limited-capacity accounts, including a new experiment showing that the attentional blink can be postponed.

  20. KAT: A Flexible XML-based Knowledge Authoring Environment

    PubMed Central

    Hulse, Nathan C.; Rocha, Roberto A.; Del Fiol, Guilherme; Bradshaw, Richard L.; Hanna, Timothy P.; Roemer, Lorrie K.

    2005-01-01

    As part of an enterprise effort to develop new clinical information systems at Intermountain Health Care, the authors have built a knowledge authoring tool that facilitates the development and refinement of medical knowledge content. At present, users of the application can compose order sets and an assortment of other structured clinical knowledge documents based on XML schemas. The flexible nature of the application allows the immediate authoring of new types of documents once an appropriate XML schema and accompanying Web form have been developed and stored in a shared repository. The need for a knowledge acquisition tool stems largely from the desire for medical practitioners to be able to write their own content for use within clinical applications. We hypothesize that medical knowledge content for clinical use can be successfully created and maintained through XML-based document frameworks containing structured and coded knowledge. PMID:15802477

  1. ADASS Web Database XML Project

    NASA Astrophysics Data System (ADS)

    Barg, M. I.; Stobie, E. B.; Ferro, A. J.; O'Neil, E. J.

    In the spring of 2000, at the request of the ADASS Program Organizing Committee (POC), we began organizing information from previous ADASS conferences in an effort to create a centralized database. The beginnings of this database originated from data (invited speakers, participants, papers, etc.) extracted from HyperText Markup Language (HTML) documents from past ADASS host sites. Unfortunately, not all HTML documents are well formed and parsing them proved to be an iterative process. It was evident at the beginning that if these Web documents were organized in a standardized way, such as XML (Extensible Markup Language), the processing of this information across the Web could be automated, more efficient, and less error prone. This paper will briefly review the many programming tools available for processing XML, including Java, Perl and Python, and will explore the mapping of relational data from our MySQL database to XML.
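
    The relational-to-XML mapping mentioned at the end can be illustrated with a few lines of Python (sqlite3 stands in for MySQL, and the table and element names are invented):

        import sqlite3
        import xml.etree.ElementTree as ET

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE papers (author TEXT, title TEXT)")
        con.execute("INSERT INTO papers VALUES ('Barg', 'ADASS Web Database XML Project')")

        root = ET.Element("papers")
        for author, title in con.execute("SELECT author, title FROM papers"):
            p = ET.SubElement(root, "paper")
            ET.SubElement(p, "author").text = author
            ET.SubElement(p, "title").text = title
        print(ET.tostring(root, encoding="unicode"))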

  2. Suggestions for Improvement of User Access to GOCE L2 Data

    NASA Astrophysics Data System (ADS)

    Tscherning, C. C.

    2011-07-01

    ESA has required that most GOCE L2 products be delivered in XML format. This creates difficulties for users because a parser written in Perl is needed to convert the files to files without XML tags. However, several products, such as the spherical harmonic coefficients, are made available in standard form through the International Center for Global Gravity Field Models. The variance-covariance information for the gravity field models is only available without XML tags. It is suggested that all XML products be made available in the Virtual Data Archive as files without tags. Besides making the data directly usable by a FORTRAN program, this would also reduce the size (storage requirements) of the products to about 30%. A further reduction in storage could be achieved by tuning the number of digits for the individual quantities in the products so that it corresponds to the actual number of significant digits.

  3. An XML-based method for astronomy software designing

    NASA Astrophysics Data System (ADS)

    Liao, Mingxue; Aili, Yusupu; Zhang, Jin

    An XML-based method for standardizing software design is introduced, analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for eliciting time information from the new digital clock, FT206, in the antenna control program is introduced. With FT206, the need to compute with sophisticated formulas how many centuries have passed since a certain day is eliminated, and it is no longer necessary to set the correct UT time on the computer holding control over the antenna, because the information about year, month, and day is deduced from the Julian day held in FT206 rather than from the computer time. With an XML-based method and standard for software design, various existing design methods are unified, communications and collaborations between developers are facilitated, and thus an Internet-based mode of developing software becomes possible. The trend of development of the XML-based design method is predicted.

  4. Utilizing the Structure and Content Information for XML Document Clustering

    NASA Astrophysics Data System (ADS)

    Tran, Tien; Kutty, Sangeetha; Nayak, Richi

    This paper reports on the experiments and results of a clustering approach used in the INEX 2008 document mining challenge. The clustering approach utilizes both the structure and content information of the Wikipedia XML document collection. A latent semantic kernel (LSK) is used to measure the semantic similarity between XML documents based on their content features. The construction of a latent semantic kernel involves computing a singular value decomposition (SVD). On a large feature space matrix, the computation of the SVD is very expensive in terms of time and memory requirements. Thus, in this clustering approach, the dimension of the document space of the term-document matrix is reduced before performing the SVD. The document space reduction is based on the common structural information of the Wikipedia XML document collection. The proposed clustering approach has been shown to be effective on the Wikipedia collection in the INEX 2008 document mining challenge.
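
    The latent semantic kernel step can be sketched as a truncated SVD of the (already reduced) term-document matrix followed by cosine similarity in the latent space (Python/NumPy; the matrix sizes are toy values, not the Wikipedia collection):

        import numpy as np

        A = np.random.default_rng(2).random((300, 40))   # terms x documents
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        P = U[:, :10]                                    # rank-10 latent term space

        def lsk_similarity(d1, d2):
            """Cosine similarity of two term vectors mapped into the latent space."""
            x, y = P.T @ d1, P.T @ d2
            return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

        print(lsk_similarity(A[:, 0], A[:, 1]))

    Reducing the document space before the SVD, as the approach above does, is what keeps this decomposition tractable for a large collection.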

  5. XML Based Scientific Data Management Facility

    NASA Technical Reports Server (NTRS)

    Mehrotra, P.; Zubair, M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of a lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself and the transformation functions, and to apply the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.
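
    The transformation step can be illustrated with a toy XSLT application (Python/lxml here rather than the Xalan engine used in XDMF; the document and stylesheet are invented):

        from lxml import etree

        doc = etree.XML("<dataset><field name='T' units='K'/></dataset>")
        xslt = etree.XML("""
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/dataset">
            <table><xsl:for-each select="field">
              <column><xsl:value-of select="@name"/> [<xsl:value-of select="@units"/>]</column>
            </xsl:for-each></table>
          </xsl:template>
        </xsl:stylesheet>""")

        print(str(etree.XSLT(xslt)(doc)))   # yields <table><column>T [K]</column></table>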

  6. XTCE GOVSAT Tool Suite 1.0

    NASA Technical Reports Server (NTRS)

    Rice, J. Kevin

    2013-01-01

    The XTCE GOVSAT software suite contains three tools: validation, search, and reporting. The Extensible Markup Language (XML) Telemetric and Command Exchange (XTCE) GOVSAT Tool Suite is written in Java for manipulating XTCE XML files. XTCE is a Consultative Committee for Space Data Systems (CCSDS) and Object Management Group (OMG) specification for describing the format and information in telemetry and command packet streams. These descriptions are files that are used to configure real-time telemetry and command systems for mission operations. XTCE's purpose is to exchange database information between different systems. XTCE GOVSAT consists of rules for narrowing the use of XTCE for missions. The Validation Tool is used to syntax-check GOVSAT XML files. The Search Tool is used to search the GOVSAT XML files (e.g., for command and telemetry mnemonics) and view the results. Finally, the Reporting Tool is used to create command and telemetry reports. These reports can be displayed or printed for use by the operations team.

  7. Standardization of XML Database Exchanges and the James Webb Space Telescope Experience

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Detter, Ryan; Jones, Ron; Fatig, Curtis C.

    2007-01-01

    Personnel from the National Aeronautics and Space Administration (NASA) James Webb Space Telescope (JWST) Project have been working with various standards communities such as the Object Management Group (OMG) and the Consultative Committee for Space Data Systems (CCSDS) to assist in the definition of a common eXtensible Markup Language (XML) database exchange format. The CCSDS and OMG standards are intended for the exchange of core command and telemetry information, not for all database information needed to exercise a NASA space mission. The mission-specific database, containing all the information needed for a space mission, is translated from/to the standard using a translator. The standard is meant to provide a system that encompasses 90% of the information needed for command and telemetry processing. This paper will discuss standardization of the XML database exchange format, the tools used, and the JWST experience, as well as future work with XML standards groups, both commercial and government.

  8. XML-Based Generator of C++ Code for Integration With GUIs

    NASA Technical Reports Server (NTRS)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increases in susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit for XML is storing input data in a structured manner. More importantly, XML allows not just storing of data but also describing what each of the data items are. That XML file contains information useful for rendering the data by other applications. It also then generates data structures in the C++ language that are to be used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: 1. As an executable, it generates the corresponding C++ classes and 2. As a library, it automatically fills the objects with the input data values.
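
    A toy illustration of the XML-to-C idea, reading parameter definitions from an XML input description and emitting a matching C++ struct (Python; the element names and the generated class are invented, not the program's actual output):

        import xml.etree.ElementTree as ET

        xml_src = """<inputs><param name="bias"  type="double"/>
                             <param name="steps" type="int"/></inputs>"""
        root = ET.fromstring(xml_src)
        fields = "\n".join(f"  {p.get('type')} {p.get('name')};"
                           for p in root.findall("param"))
        print(f"struct Inputs {{\n{fields}\n}};")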

  9. Trick Simulation Environment 07

    NASA Technical Reports Server (NTRS)

    Lin, Alexander S.; Penn, John M.

    2012-01-01

    The Trick Simulation Environment is a generic simulation toolkit used for constructing and running simulations. This release includes a Monte Carlo analysis simulation framework and a data analysis package. It produces all auto documentation in XML. Also, the software is capable of inserting a malfunction at any point during the simulation. Trick 07 adds variable server output options and error messaging and is capable of using and manipulating wide characters for international support. Wide character strings are available as a fundamental type for variables processed by Trick. A Trick Monte Carlo simulation uses a statistically generated, or predetermined, set of inputs to iteratively drive the simulation. Also, there is a framework in place for optimization and solution finding where developers may iteratively modify the inputs per run based on some analysis of the outputs. The data analysis package is capable of reading data from external simulation packages such as MATLAB and Octave, as well as the common comma-separated values (CSV) format used by Excel, without the use of external converters. The file formats for MATLAB and Octave were obtained from their documentation sets, and Trick maintains generic file readers for each format. XML tags store the fields in the Trick header comments. For header files, XML tags for structures and enumerations, and the members within are stored in the auto documentation. For source code files, XML tags for each function and the calling arguments are stored in the auto documentation. When a simulation is built, a top level XML file, which includes all of the header and source code XML auto documentation files, is created in the simulation directory. Trick 07 provides an XML to TeX converter. The converter reads in header and source code XML documentation files and converts the data to TeX labels and tables suitable for inclusion in TeX documents. A malfunction insertion capability allows users to override the value of any simulation variable, or call a malfunction job, at any time during the simulation. Users may specify conditions, use the return value of a malfunction trigger job, or manually activate a malfunction. The malfunction action may consist of executing a block of input file statements in an action block, setting simulation variable values, call a malfunction job, or turn on/off simulation jobs.

  10. Space platform power system hardware testbed

    NASA Technical Reports Server (NTRS)

    Sable, D.; Patil, A.; Sizemore, T.; Deuty, S.; Noon, J.; Cho, B. H.; Lee, F. C.

    1991-01-01

    The scope of the work on the NASA Space Platform includes the design of a multi-module, multi-phase boost regulator, and a voltage-fed, push-pull autotransformer converter for the battery discharger. A buck converter was designed for the charge regulator. Also included is the associated mode control electronics for the charger and discharger, as well as continued development of a comprehensive modeling and simulation tool for the system. The design of the multi-module boost converter is discussed for use as a battery discharger. An alternative battery discharger design is discussed using a voltage-fed, push-pull autotransformer converter. The design of the charge regulator is explained using a simple buck converter. The design of the mode controller and effects of locating the bus filter capacitor bank 20 feet away from the power ORU are discussed. A brief discussion of some alternative topologies for battery charging and discharging is included. The power system modeling is described.

  11. QCD for Postgraduates (5/5)

    ScienceCinema

    None

    2018-05-14

    We will introduce and discuss in some detail the two main classes of jets: cone type and sequential-recombination type. We will discuss their basic properties, as well as more advanced concepts such as jet substructure, jet filtering, ways of optimizing the jet radius, ways of defining the areas of jets, and of establishing the quality measure of the jet-algorithm in terms of discriminating power in specific searches. Finally we will discuss applications for Higgs searches involving boosted particles.

  12. The XML approach to implementing space link extension service management

    NASA Technical Reports Server (NTRS)

    Tai, W.; Welz, G. A.; Theis, G.; Yamada, T.

    2001-01-01

    A feasibility study has been conducted at JPL, ESOC, and ISAS to assess the possible applications of the eXtensible Mark-up Language (XML) capabilities to the implementation of the CCSDS Space Link Extension (SLE) Service Management function.

  13. XTCE. XML Telemetry and Command Exchange Tutorial

    NASA Technical Reports Server (NTRS)

    Rice, Kevin; Kizzort, Brad; Simon, Jerry

    2010-01-01

    An XML Telemetry and Command Exchange (XTCE) tutorial oriented towards packets or minor frames is presented. The contents include: 1) The Basics; 2) Describing Telemetry; 3) Describing the Telemetry Format; 4) Commanding; 5) Forgotten Elements; 6) Implementing XTCE; and 7) GovSat.

  14. XML Schema Versioning Policies Version 1.0

    NASA Astrophysics Data System (ADS)

    Harrison, Paul; Demleitner, Markus; Major, Brian; Dowler, Pat; Harrison, Paul

    2018-05-01

    This note describes the recommended practice for the evolution of IVOA standard XML schemata that are associated with IVOA standards. The criteria for deciding what might be considered major and minor changes and the policies for dealing with each case are described.

  15. XML technologies for the Omaha System: a data model, a Java tool and several case studies supporting home healthcare.

    PubMed

    Vittorini, Pierpaolo; Tarquinio, Antonietta; di Orio, Ferdinando

    2009-03-01

    The eXtensible markup language (XML) is a metalanguage which is useful to represent and exchange data between heterogeneous systems. XML may enable healthcare practitioners to document, monitor, evaluate, and archive medical information and services into distributed computer environments. Therefore, the most recent proposals on electronic health records (EHRs) are usually based on XML documents. Since none of the existing nomenclatures were specifically developed for use in automated clinical information systems, but were adapted to such use, numerous current EHRs are organized as a sequence of events, each represented through codes taken from international classification systems. In nursing, a hierarchically organized problem-solving approach is followed, which hardly couples with the sequential organization of such EHRs. Therefore, the paper presents an XML data model for the Omaha System taxonomy, which is one of the most important international nomenclatures used in the home healthcare nursing context. Such a data model represents the formal definition of EHRs specifically developed for nursing practice. Furthermore, the paper delineates a Java application prototype which is able to manage such documents, shows the possibility to transform such documents into readable web pages, and reports several case studies, one currently managed by the home care service of a Health Center in Central Italy.

  16. Augmenting distractor filtering via transcranial magnetic stimulation of the lateral occipital cortex.

    PubMed

    Eštočinová, Jana; Lo Gerfo, Emanuele; Della Libera, Chiara; Chelazzi, Leonardo; Santandrea, Elisa

    2016-11-01

    Visual selective attention (VSA) optimizes perception and behavioral control by enabling efficient selection of relevant information and filtering of distractors. While focusing resources on task-relevant information helps counteract distraction, dedicated filtering mechanisms have recently been demonstrated, allowing neural systems to implement suitable policies for the suppression of potential interference. Limited evidence is presently available concerning the neural underpinnings of these mechanisms, and whether neural circuitry within the visual cortex might play a causal role in their instantiation, a possibility that we directly tested here. In two related experiments, transcranial magnetic stimulation (TMS) was applied over the lateral occipital cortex of healthy humans at different times during the execution of a behavioral task which entailed varying levels of distractor interference and need for attentional engagement. While earlier TMS boosted target selection, stimulation within a restricted time epoch close to (and in the course of) stimulus presentation engendered selective enhancement of distractor suppression, by affecting the ongoing, reactive instantiation of attentional filtering mechanisms required by specific task conditions. The results attest to a causal role of mid-tier ventral visual areas in distractor filtering and offer insights into the mechanisms through which TMS may have affected ongoing neural activity in the stimulated tissue. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Extending Correlation Filter-Based Visual Tracking by Tree-Structured Ensemble and Spatial Windowing.

    PubMed

    Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin

    2017-11-01

    Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm of complexity O(D) based on a large ensemble of CFB trackers. The ensemble, of size O(2^D), is organized over a binary tree of depth D, and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.
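
    The greedy descent that keeps the per-frame cost at O(D) can be illustrated with the toy numpy sketch below; it is an illustration of the tree-organized expert idea under simplifying assumptions (grayscale patches, circular correlation, a single greedy path), not the authors' tracker.

      import numpy as np

      def peak_response(patch, filt):
          # Peak of the circular correlation between patch and filter (FFT domain).
          corr = np.fft.ifft2(np.fft.fft2(patch) * np.conj(np.fft.fft2(filt))).real
          return corr.max()

      def select_expert(patch, filters, depth):
          # Walk a binary tree of expert filters keyed by path strings
          # ('0'/'1' children of the root, '00', '01', ... below), descending
          # toward the child with the stronger response: O(depth) levels.
          node = ""
          for _ in range(depth):
              left, right = node + "0", node + "1"
              node = left if peak_response(patch, filters[left]) >= peak_response(patch, filters[right]) else right
          return node

      # Toy ensemble: random 32x32 "filters" for every node of a depth-3 tree.
      rng = np.random.default_rng(0)
      depth = 3
      filters = {format(i, "b").zfill(d): rng.standard_normal((32, 32))
                 for d in range(1, depth + 1) for i in range(2 ** d)}
      patch = rng.standard_normal((32, 32))
      print("selected expert:", select_expert(patch, filters, depth))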

  18. XML: An Introduction.

    ERIC Educational Resources Information Center

    Lewis, John D.

    1998-01-01

    Describes XML (extensible markup language), a new language classification submitted to the World Wide Web Consortium that is defined in terms of both SGML (Standard Generalized Markup Language) and HTML (Hypertext Markup Language), specifically designed for the Internet. Limitations of PDF (Portable Document Format) files for electronic journals…

  19. Integration of M&S (Modeling and Simulation), Software Design and DoDAF (Department of Defense Architecture Framework (RT 24)

    DTIC Science & Technology

    2012-04-09

    Only table-of-contents fragments are captured in this record: mappings between BPMN, SysML, and Arena; capabilities, activities, resources, and performers; a proof of concept for a BPMN 2.0 XML to Arena converter; and a figure excerpt of a BPMN 2.0 XML StartEvent.

  20. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry.

    PubMed

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2015-01-01

    In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS.
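
    A minimal usage sketch of the indexed-access pattern described above, assuming the pyopenms bindings of OpenMS; the class and method names (OnDiscMSExperiment, openFile, getNrSpectra, getSpectrum, MzMLFile.load) follow the pyopenms documentation but should be checked against the installed version, and "example.mzML" is a placeholder path.

      import pyopenms

      # Indexed, on-disc access: spectra are read on demand instead of loading
      # a potentially multi-gigabyte mzML file into memory.
      on_disc = pyopenms.OnDiscMSExperiment()
      on_disc.openFile("example.mzML")              # placeholder file name

      print("spectra in file:", on_disc.getNrSpectra())
      spectrum = on_disc.getSpectrum(0)             # random access to one spectrum
      mz, intensity = spectrum.get_peaks()
      print("first spectrum holds", len(mz), "peaks")

      # For small files, the whole experiment can instead be loaded into memory.
      in_memory = pyopenms.MSExperiment()
      pyopenms.MzMLFile().load("example.mzML", in_memory)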

  1. Fast and Efficient XML Data Access for Next-Generation Mass Spectrometry

    PubMed Central

    Röst, Hannes L.; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2015-01-01

    Motivation In mass spectrometry-based proteomics, XML formats such as mzML and mzXML provide an open and standardized way to store and exchange the raw data (spectra and chromatograms) of mass spectrometric experiments. These file formats are being used by a multitude of open-source and cross-platform tools which allow the proteomics community to access algorithms in a vendor-independent fashion and perform transparent and reproducible data analysis. Recent improvements in mass spectrometry instrumentation have increased the data size produced in a single LC-MS/MS measurement and put substantial strain on open-source tools, particularly those that are not equipped to deal with XML data files that reach dozens of gigabytes in size. Results Here we present a fast and versatile parsing library for mass spectrometric XML formats available in C++ and Python, based on the mature OpenMS software framework. Our library implements an API for obtaining spectra and chromatograms under memory constraints using random access or sequential access functions, allowing users to process datasets that are much larger than system memory. For fast access to the raw data structures, small XML files can also be completely loaded into memory. In addition, we have improved the parsing speed of the core mzML module by over 4-fold (compared to OpenMS 1.11), making our library suitable for a wide variety of algorithms that need fast access to dozens of gigabytes of raw mass spectrometric data. Availability Our C++ and Python implementations are available for the Linux, Mac, and Windows operating systems. All proposed modifications to the OpenMS code have been merged into the OpenMS mainline codebase and are available to the community at https://github.com/OpenMS/OpenMS. PMID:25927999

  2. Comparative Study of Speckle Filtering Methods in PolSAR Radar Images

    NASA Astrophysics Data System (ADS)

    Boutarfa, S.; Bouchemakh, L.; Smara, Y.

    2015-04-01

    Images acquired by polarimetric SAR (PolSAR) radar systems are characterized by the presence of a noise called speckle. This noise has a multiplicative nature and corrupts both the amplitude and phase images, which complicates data interpretation, degrades segmentation performance, and reduces the detectability of targets. Hence the need to preprocess the images with adapted filtering methods before analysis. In this paper, we present a comparative study of implemented methods for reducing speckle in PolSAR images. The developed filters are: the refined Lee filter, based on minimum mean square error (MMSE) estimation; an improved Sigma filter with detection of strong scatterers, based on the coherency matrix, which identifies the different scatterers in order to preserve the polarization signature and maintain structures needed for image interpretation; filtering by the stationary wavelet transform (SWT), using multi-scale edge detection and the sum of squared coefficients (SSC) technique for improving the wavelet coefficients; and a Turbo filter, which combines two complementary filters, the refined Lee filter and the SWT, so that each can reinforce the results of the other. The originality of our work lies in applying these methods to several types of images (amplitude, intensity, and complex, from satellite and airborne radars) and in optimizing the wavelet filtering by adding a parameter to the threshold calculation. This parameter controls the filtering effect and yields a good compromise between smoothing homogeneous areas and preserving linear structures. The methods are applied to fully polarimetric RADARSAT-2 images (HH, HV, VH, VV) acquired over Algiers, Algeria, in C-band, and to three polarimetric E-SAR images (HH, HV, VV) acquired over the Oberpfaffenhofen area near Munich, Germany, in P-band. To evaluate the performance of each filter, we used the following criteria: smoothing of homogeneous areas, preservation of edges, and preservation of polarimetric information. Experimental results are included to illustrate the different implemented methods.
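
    To make the MMSE principle behind the Lee-type filters concrete, the sketch below implements the basic (unrefined) Lee filter for a single amplitude channel; it is only a simplified illustration, not the refined Lee, Sigma, SWT, or Turbo filters evaluated in the paper.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def lee_filter(img, window=7, looks=4):
          """Basic Lee MMSE speckle filter for a single-channel image."""
          mean = uniform_filter(img, window)                 # local mean
          mean_sq = uniform_filter(img * img, window)
          var = mean_sq - mean ** 2                          # local variance
          noise_var = (1.0 / looks) * mean ** 2              # multiplicative speckle variance
          weight = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
          return mean + weight * (img - mean)                # MMSE combination

      # Toy usage on synthetic multiplicative speckle (4-look intensity).
      rng = np.random.default_rng(1)
      clean = np.ones((128, 128)); clean[32:96, 32:96] = 4.0
      speckled = clean * rng.gamma(shape=4, scale=0.25, size=clean.shape)
      filtered = lee_filter(speckled)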

  3. Castles Made of Sand: Building Sustainable Digitized Collections Using XML.

    ERIC Educational Resources Information Center

    Ragon, Bart

    2003-01-01

    Describes work at the University of Virginia library to digitize special collections. Discusses the use of XML (Extensible Markup Language); providing access to original source materials; DTD (Document Type Definition); TEI (Text Encoding Initiative); metadata; XSL (Extensible Style Language); and future possibilities. (LRW)

  4. At-sea demonstration of RF sensor tasking using XML over a worldwide network

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Lee, Tom; Dumas, Diane; Raggo, Barbara

    2003-07-01

    As part of an At-Sea Demonstration for Space and Naval Warfare Command (SPAWAR, PMW-189), a prototype RF sensor for signal acquisition and direction finding queried and received tasking via a secure worldwide Automated Data Network System (ADNS). Using eXtensible Markup Language (XML) constructs, both mission and signal tasking were available for push and pull Battlespace management. XML tasking was received by the USS Cape St George (CG-71) during an exercise along the Gulf Coast of the US from a test facility at SPAWAR, San Diego, CA. Although only one ship was used in the demonstration, the intent of the software initiative was to show that a network of different RF sensors on different platforms with different capabilities could be tasked by a common web agent. A sensor software agent interpreted the XML task to match the sensor's capability. Future improvements will focus on enlarging the domain of mission tasking and incorporating report management.

  5. Performance evaluation of continuity of care records (CCRs): parsing models in a mobile health management system.

    PubMed

    Chen, Hung-Ming; Liou, Yong-Zan

    2014-10-01

    In a mobile health management system, mobile devices act as the application hosting devices for personal health records (PHRs), and healthcare servers are constructed to exchange and analyze PHRs. One of the most popular PHR standards is the continuity of care record (CCR). The CCR is expressed in XML format. However, parsing is an expensive operation that can degrade XML processing performance. Hence, the objective of this study was to identify different operational and performance characteristics for CCR parsing models, including the XML DOM parser, the SAX parser, the PULL parser, and the JSON parser applied to JSON data converted from XML-based CCR. Thus, developers can make sensible choices for their target PHR applications when parsing CCRs on mobile devices or servers with different system resources. Furthermore, simulation experiments for four case studies are conducted to compare the parsing performance on Android mobile devices and on the server with large quantities of CCR data.
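
    The DOM-versus-streaming trade-off at the heart of such comparisons can be illustrated with Python's standard library; the repeated <Result> elements below are a stand-in rather than the actual ASTM CCR schema, and the timings are only indicative of the kind of measurement involved.

      import io
      import time
      import xml.dom.minidom as minidom
      import xml.etree.ElementTree as ET

      # Stand-in document: many repeated result elements, loosely CCR-shaped.
      doc = b"<ContinuityOfCareRecord><Body>" + b"".join(
          b"<Result><Test>Glucose</Test><Value>%d</Value></Result>" % i
          for i in range(20000)) + b"</Body></ContinuityOfCareRecord>"

      t0 = time.perf_counter()
      dom = minidom.parseString(doc)                 # DOM: whole tree in memory
      n_dom = len(dom.getElementsByTagName("Result"))
      t1 = time.perf_counter()

      n_stream = 0
      for _, elem in ET.iterparse(io.BytesIO(doc), events=("end",)):
          if elem.tag == "Result":                   # streaming: visit and discard
              n_stream += 1
              elem.clear()
      t2 = time.perf_counter()

      print(f"DOM:       {n_dom} results in {t1 - t0:.3f} s")
      print(f"iterparse: {n_stream} results in {t2 - t1:.3f} s")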

  6. Chemical markup, XML and the World-Wide Web. 3. Toward a signed semantic chemical web of trust.

    PubMed

    Gkoutos, G V; Murray-Rust, P; Rzepa, H S; Wright, M

    2001-01-01

    We describe how a collection of documents expressed in XML-conforming languages such as CML and XHTML can be authenticated and validated against digital signatures which make use of established X.509 certificate technology. These can be associated either with specific nodes in the XML document or with the entire document. We illustrate this with two examples. An entire journal article expressed in XML has its individual components digitally signed by separate authors, and the collection is placed in an envelope and again signed. The second example involves using a software robot agent to acquire a collection of documents from a specified URL, to perform various operations and transformations on the content, including expressing molecules in CML, and to automatically sign the various components and deposit the result in a repository. We argue that these operations can used as components for building what we term an authenticated and semantic chemical web of trust.
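
    A much-simplified illustration of the underlying idea (sign the canonicalized bytes of one node so that changes elsewhere in the document do not affect the signature), using lxml's C14N serialization and the cryptography package; it omits the W3C XML-Signature envelope and the X.509 certificate handling that the authors' system relies on, and the CML-like fragment is a stand-in.

      from lxml import etree
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      # Toy fragment standing in for a molecule node; not real CML.
      doc = etree.fromstring(
          '<article><molecule id="m1"><atom elementType="C"/></molecule></article>')
      molecule = doc.find("molecule")

      # Canonicalize exactly the node being signed.
      c14n_bytes = etree.tostring(molecule, method="c14n")

      key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      signature = key.sign(c14n_bytes, padding.PKCS1v15(), hashes.SHA256())

      # Verification raises InvalidSignature if the canonical bytes changed.
      key.public_key().verify(signature, c14n_bytes, padding.PKCS1v15(), hashes.SHA256())
      print("molecule node signature verified")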

  7. XML, Ontologies, and Their Clinical Applications.

    PubMed

    Yu, Chunjiang; Shen, Bairong

    2016-01-01

    The development of information technology has resulted in its penetration into every area of clinical research. Various clinical systems have been developed, which produce increasing volumes of clinical data. However, saving, exchanging, querying, and exploiting these data are challenging issues. The development of Extensible Markup Language (XML) has allowed the generation of flexible information formats to facilitate the electronic sharing of structured data via networks, and it has been used widely for clinical data processing. In particular, XML is very useful in the fields of data standardization, data exchange, and data integration. Moreover, ontologies have been attracting increased attention in various clinical fields in recent years. An ontology is the basic level of a knowledge representation scheme, and various ontology repositories have been developed, such as Gene Ontology and BioPortal. The creation of these standardized repositories greatly facilitates clinical research in related fields. In this chapter, we discuss the basic concepts of XML and ontologies, as well as their clinical applications.

  8. Semi-automated XML markup of biosystematic legacy literature with the GoldenGATE editor.

    PubMed

    Sautter, Guido; Böhm, Klemens; Agosti, Donat

    2007-01-01

    Today, digitization of legacy literature is a big issue. This also applies to the domain of biosystematics, where this process has just started. Digitized biosystematics literature requires a very precise and fine grained markup in order to be useful for detailed search, data linkage and mining. However, manual markup on sentence level and below is cumbersome and time consuming. In this paper, we present and evaluate the GoldenGATE editor, which is designed for the special needs of marking up OCR output with XML. It is built in order to support the user in this process as far as possible: Its functionality ranges from easy, intuitive tagging through markup conversion to dynamic binding of configurable plug-ins provided by third parties. Our evaluation shows that marking up an OCR document using GoldenGATE is three to four times faster than with an off-the-shelf XML editor like XML-Spy. Using domain-specific NLP-based plug-ins, these numbers are even higher.

  9. XML Translator for Interface Descriptions

    NASA Technical Reports Server (NTRS)

    Boroson, Elizabeth R.

    2009-01-01

    A computer program defines an XML schema for specifying the interface to a generic FPGA from the perspective of software that will interact with the device. This XML interface description is then translated into header files for C, Verilog, and VHDL. User interface definition input is checked via both the provided XML schema and the translator module to ensure consistency and accuracy. Currently, programming used on both sides of an interface is inconsistent. This makes it hard to find and fix errors. By using a common schema, both sides are forced to use the same structure by using the same framework and toolset. This makes for easy identification of problems, which leads to the ability to formulate a solution. The toolset contains constants that allow a programmer to use each register, and to access each field in the register. Once programming is complete, the translator is run as part of the make process, which ensures that whenever an interface is changed, all of the code that uses the header files describing it is recompiled.
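
    The single-source-of-truth idea can be sketched as follows; the register-description layout is hypothetical (the article's actual schema is not reproduced in the record) and only the C-header side of the translation is shown.

      import xml.etree.ElementTree as ET

      # Hypothetical interface description; the article's schema differs.
      desc = ET.fromstring("""
      <interface name="FPGA_CTRL">
        <register name="STATUS" offset="0x00">
          <field name="READY" bit="0"/>
          <field name="ERROR" bit="1"/>
        </register>
        <register name="COMMAND" offset="0x04"/>
      </interface>
      """)

      name = desc.get("name")
      lines = [f"#ifndef {name}_H", f"#define {name}_H", ""]
      for reg in desc.findall("register"):
          lines.append(f"#define {name}_{reg.get('name')}_OFFSET {reg.get('offset')}")
          for field in reg.findall("field"):
              lines.append(f"#define {name}_{reg.get('name')}_{field.get('name')}_BIT {field.get('bit')}")
      lines += ["", "#endif", ""]

      # Regenerated on every build so software and HDL stay consistent.
      with open("fpga_ctrl.h", "w") as fh:
          fh.write("\n".join(lines))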

  10. Profex: a graphical user interface for the Rietveld refinement program BGMN.

    PubMed

    Doebelin, Nicola; Kleeberg, Reinhard

    2015-10-01

    Profex is a graphical user interface for the Rietveld refinement program BGMN. Its interface focuses on preserving BGMN's powerful and flexible scripting features by giving direct access to BGMN input files. Very efficient workflows for single or batch refinements are achieved by managing refinement control files and structure files, by providing dialogues and shortcuts for many operations, by performing operations in the background, and by providing import filters for CIF and XML crystal structure files. Refinement results can be easily exported for further processing. State-of-the-art graphical export of diffraction patterns to pixel and vector graphics formats allows the creation of publication-quality graphs with minimum effort. Profex reads and converts a variety of proprietary raw data formats and is thus largely instrument independent. Profex and BGMN are available under an open-source license for Windows, Linux and OS X operating systems.

  11. Profex: a graphical user interface for the Rietveld refinement program BGMN

    PubMed Central

    Doebelin, Nicola; Kleeberg, Reinhard

    2015-01-01

    Profex is a graphical user interface for the Rietveld refinement program BGMN. Its interface focuses on preserving BGMN’s powerful and flexible scripting features by giving direct access to BGMN input files. Very efficient workflows for single or batch refinements are achieved by managing refinement control files and structure files, by providing dialogues and shortcuts for many operations, by performing operations in the background, and by providing import filters for CIF and XML crystal structure files. Refinement results can be easily exported for further processing. State-of-the-art graphical export of diffraction patterns to pixel and vector graphics formats allows the creation of publication-quality graphs with minimum effort. Profex reads and converts a variety of proprietary raw data formats and is thus largely instrument independent. Profex and BGMN are available under an open-source license for Windows, Linux and OS X operating systems. PMID:26500466

  12. Dynamic XML-based exchange of relational data: application to the Human Brain Project.

    PubMed

    Tang, Zhengming; Kadiyska, Yana; Li, Hao; Suciu, Dan; Brinkley, James F

    2003-01-01

    This paper discusses an approach to exporting relational data in XML format for data exchange over the web. We describe the first real-world application of SilkRoute, a middleware program that dynamically converts existing relational data to a user-defined XML DTD. The application, called XBrain, wraps SilkRoute in a Java Server Pages framework, thus permitting a web-based XQuery interface to a legacy relational database. The application is demonstrated as a query interface to the University of Washington Brain Project's Language Map Experiment Management System, which is used to manage data about language organization in the brain.

  13. voevent-parse: Parse, manipulate, and generate VOEvent XML packets

    NASA Astrophysics Data System (ADS)

    Staley, Tim D.

    2014-11-01

    voevent-parse, written in Python, parses, manipulates, and generates VOEvent XML packets; it is built atop lxml.objectify. Details of transients detected by many projects, including Fermi, Swift, and the Catalina Sky Survey, are currently made available as VOEvents, which is also the standard alert format adopted by future facilities such as LSST and SKA. However, working with XML and adhering to the sometimes lengthy VOEvent schema can be a tricky process. voevent-parse provides convenience routines for common tasks, while allowing the user to utilise the full power of the lxml library when required. An earlier version of voevent-parse was part of the pysovo (ascl:1411.002) library.
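
    Because voevent-parse is built atop lxml.objectify, the attribute-style access it enables can be illustrated directly with objectify on a heavily stripped-down packet; this is not the voevent-parse API itself, and real VOEvent 2.0 packets carry namespaces and schema-mandated Who/What/WhereWhen content omitted here.

      from lxml import objectify

      packet = b"""
      <VOEvent role="test" ivorn="ivo://example/stream#1">
        <Who><Date>2014-11-01T00:00:00</Date></Who>
        <What><Param name="mag" value="18.3"/></What>
      </VOEvent>
      """

      voevent = objectify.fromstring(packet)
      print("role: ", voevent.get("role"))
      print("ivorn:", voevent.get("ivorn"))
      print("date: ", voevent.Who.Date)
      print("mag:  ", voevent.What.Param.get("value"))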

  14. Symmetric Key Services Markup Language (SKSML)

    NASA Astrophysics Data System (ADS)

    Noor, Arshad

    Symmetric Key Services Markup Language (SKSML) is an eXtensible Markup Language (XML)-based protocol being standardized by the OASIS Enterprise Key Management Infrastructure Technical Committee for requesting and receiving symmetric encryption cryptographic keys within a Symmetric Key Management System (SKMS). This protocol is designed to be used between clients and servers within an Enterprise Key Management Infrastructure (EKMI) to secure data, independent of the application and platform. Building on many security standards such as XML Signature, XML Encryption, Web Services Security and PKI, SKSML provides a standards-based capability to allow any application to use symmetric encryption keys, while maintaining centralized control. This article describes the SKSML protocol and its capabilities.

  15. The Surgical Simulation and Training Markup Language (SSTML): an XML-based language for medical simulation.

    PubMed

    Bacon, James; Tardella, Neil; Pratt, Janey; Hu, John; English, James

    2006-01-01

    Under contract with the Telemedicine & Advanced Technology Research Center (TATRC), Energid Technologies is developing a new XML-based language for describing surgical training exercises, the Surgical Simulation and Training Markup Language (SSTML). SSTML must represent everything from organ models (including tissue properties) to surgical procedures. SSTML is an open language (i.e., freely downloadable) that defines surgical training data through an XML schema. This article focuses on the data representation of the surgical procedures and organ modeling, as they highlight the need for a standard language and illustrate the features of SSTML. Integration of SSTML with software is also discussed.

  16. Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining.

    PubMed

    Hettne, Kristina M; Williams, Antony J; van Mulligen, Erik M; Kleinjans, Jos; Tkachenko, Valery; Kors, Jan A

    2010-03-23

    Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text based on a number of publicly available databases and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated what impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only 1/3 to 1/4 the size of Chemlist, at around 300 k names. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. We conclude the following: (1) The ChemSpider dictionary achieved the best precision but the Chemlist dictionary had a higher recall and the best F-score; (2) Rule-based filtering and disambiguation is necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely available as an XML file in Simple Knowledge Organization System format on the web at http://www.biosemantics.org/chemlist.
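
    The conclusion about F-scores follows directly from the post-filtering precision/recall figures quoted above, as the short check below illustrates.

      def f1(precision, recall):
          return 2 * precision * recall / (precision + recall)

      # Post-filtering/disambiguation figures from the abstract.
      print(f"ChemSpider F1: {f1(0.87, 0.19):.2f}")   # ~0.31
      print(f"Chemlist   F1: {f1(0.67, 0.40):.2f}")   # ~0.50, the higher F-score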

  17. Automatic vs. manual curation of a multi-source chemical dictionary: the impact on text mining

    PubMed Central

    2010-01-01

    Background Previously, we developed a combined dictionary dubbed Chemlist for the identification of small molecules and drugs in text based on a number of publicly available databases and tested it on an annotated corpus. To achieve an acceptable recall and precision we used a number of automatic and semi-automatic processing steps together with disambiguation rules. However, it remained to be investigated what impact an extensive manual curation of a multi-source chemical dictionary would have on chemical term identification in text. ChemSpider is a chemical database that has undergone extensive manual curation aimed at establishing valid chemical name-to-structure relationships. Results We acquired the component of ChemSpider containing only manually curated names and synonyms. Rule-based term filtering, semi-automatic manual curation, and disambiguation rules were applied. We tested the dictionary from ChemSpider on an annotated corpus and compared the results with those for the Chemlist dictionary. The ChemSpider dictionary of ca. 80 k names was only 1/3 to 1/4 the size of Chemlist, at around 300 k names. The ChemSpider dictionary had a precision of 0.43 and a recall of 0.19 before the application of filtering and disambiguation and a precision of 0.87 and a recall of 0.19 after filtering and disambiguation. The Chemlist dictionary had a precision of 0.20 and a recall of 0.47 before the application of filtering and disambiguation and a precision of 0.67 and a recall of 0.40 after filtering and disambiguation. Conclusions We conclude the following: (1) The ChemSpider dictionary achieved the best precision but the Chemlist dictionary had a higher recall and the best F-score; (2) Rule-based filtering and disambiguation is necessary to achieve a high precision for both the automatically generated and the manually curated dictionary. ChemSpider is available as a web service at http://www.chemspider.com/ and the Chemlist dictionary is freely available as an XML file in Simple Knowledge Organization System format on the web at http://www.biosemantics.org/chemlist. PMID:20331846

  18. XBRL: Beyond Basic XML

    ERIC Educational Resources Information Center

    VanLengen, Craig Alan

    2010-01-01

    The Securities and Exchange Commission (SEC) has recently announced a proposal that will require all public companies to report their financial data in Extensible Business Reporting Language (XBRL). XBRL is an extension of Extensible Markup Language (XML). Moving to a standard reporting format makes it easier for organizations to report the…

  19. Querying and Ranking XML Documents.

    ERIC Educational Resources Information Center

    Schlieder, Torsten; Meuss, Holger

    2002-01-01

    Discussion of XML, information retrieval, precision, and recall focuses on a retrieval technique that adopts the similarity measure of the vector space model, incorporates the document structure, and supports structured queries. Topics include a query model based on tree matching; structured queries and term-based ranking; and term frequency and…

  20. DeepBlue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets

    PubMed Central

    Albrecht, Felipe; List, Markus; Bock, Christoph; Lengauer, Thomas

    2016-01-01

    Large amounts of epigenomic data are generated under the umbrella of the International Human Epigenome Consortium, which aims to establish 1000 reference epigenomes within the next few years. These data have the potential to unravel the complexity of epigenomic regulation. However, their effective use is hindered by the lack of flexible and easy-to-use methods for data retrieval. Extracting region sets of interest is a cumbersome task that involves several manual steps: identifying the relevant experiments, downloading the corresponding data files and filtering the region sets of interest. Here we present the DeepBlue Epigenomic Data Server, which streamlines epigenomic data analysis as well as software development. DeepBlue provides a comprehensive programmatic interface for finding, selecting, filtering, summarizing and downloading region sets. It contains data from four major epigenome projects, namely ENCODE, ROADMAP, BLUEPRINT and DEEP. DeepBlue comes with a user manual, examples and a well-documented application programming interface (API). The latter is accessed via the XML-RPC protocol supported by many programming languages. To demonstrate usage of the API and to enable convenient data retrieval for non-programmers, we offer an optional web interface. DeepBlue can be openly accessed at http://deepblue.mpi-inf.mpg.de. PMID:27084938
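
    A minimal client sketch for the XML-RPC interface mentioned above, using Python's standard library; the endpoint URL, the echo and list_genomes command names, and the "anonymous_key" access key are assumptions based on the project's public documentation and should be verified against the current server.

      import xmlrpc.client

      # Assumed endpoint and commands; check deepblue.mpi-inf.mpg.de for the
      # authoritative URL, command list, and access-key policy.
      server = xmlrpc.client.ServerProxy("http://deepblue.mpi-inf.mpg.de/xmlrpc",
                                         allow_none=True)
      user_key = "anonymous_key"

      status, message = server.echo(user_key)           # connectivity check
      print(status, message)

      status, genomes = server.list_genomes(user_key)   # available reference genomes
      if status == "okay":
          for genome_id, name in genomes:
              print(genome_id, name)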

  1. Essential Annotation Schema for Ecology (EASE)—A framework supporting the efficient data annotation and faceted navigation in ecology

    PubMed Central

    Eichenberg, David; Liebergesell, Mario; König-Ries, Birgitta; Wirth, Christian

    2017-01-01

    Ecology has become a data-intensive science over the last decades, one that often relies on the reuse of data in cross-experimental analyses. However, finding data that qualifies for reuse in a specific context can be challenging. It requires good quality metadata and annotations as well as efficient search strategies. To date, full text search (often on the metadata only) is the most widely used search strategy although it is known to be inaccurate. Faceted navigation provides a filter mechanism based on fine-granular metadata, categorizing search objects along numeric and categorical parameters relevant for their discovery. Selecting from these parameters during a full text search creates a system of filters that allows the results to be refined and improved toward greater relevance. We developed a framework for the efficient annotation and faceted navigation in ecology. It consists of an XML schema for storing the annotation of search objects and is accompanied by a vocabulary focused on ecology to support the annotation process. The framework consolidates ideas which originate from widely accepted metadata standards, textbooks, scientific literature, and vocabularies as well as from expert knowledge contributed by researchers from ecology and adjacent disciplines. PMID:29023519

  2. New NED XML/VOtable Services and Client Interface Applications

    NASA Astrophysics Data System (ADS)

    Pevunova, O.; Good, J.; Mazzarella, J.; Berriman, G. B.; Madore, B.

    2005-12-01

    The NASA/IPAC Extragalactic Database (NED) provides data and cross-identifications for over 7 million extragalactic objects fused from thousands of survey catalogs and journal articles. The data cover all frequencies from radio through gamma rays and include positions, redshifts, photometry and spectral energy distributions (SEDs), sizes, and images. NED services have traditionally supplied data in HTML format for connections from Web browsers, and a custom ASCII data structure for connections by remote computer programs written in the C programming language. We describe new services that provide responses from NED queries in XML documents compliant with the international virtual observatory VOtable protocol. The XML/VOtable services support cone searches, all-sky searches based on object attributes (survey names, cross-IDs, redshifts, flux densities), and requests for detailed object data. Initial services have been inserted into the NVO registry, and others will follow soon. The first client application is a Style Sheet specification for rendering NED VOtable query results in Web browsers that support XML. The second prototype application is a Java applet that allows users to compare multiple SEDs. The new XML/VOtable output mode will also simplify the integration of data from NED into visualization and analysis packages, software agents, and other virtual observatory applications. We show an example SED from NED plotted using VOPlot. The NED website is: http://nedwww.ipac.caltech.edu.
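
    On the client side, one way to consume such VOtable responses is astropy's VOTable reader (astropy is not part of the NED services described above, so this is merely one possible consumer); "ned_result.xml" is a placeholder for a saved query result.

      from astropy.io.votable import parse_single_table

      # Placeholder for a VOtable document returned by a NED query and saved locally.
      table = parse_single_table("ned_result.xml")
      data = table.array                        # numpy masked array of the rows

      print([field.name for field in table.fields])   # column metadata from FIELD elements
      print(len(data), "rows")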

  3. Automating data acquisition into ontologies from pharmacogenetics relational data sources using declarative object definitions and XML.

    PubMed

    Rubin, Daniel L; Hewett, Micheal; Oliver, Diane E; Klein, Teri E; Altman, Russ B

    2002-01-01

    Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time-consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source, and this interface is modeled in the ontology and implemented using XML schema. Data is imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission with an XML schema. We have implemented this approach in PharmGKB (http://www.pharmgkb.org/), a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order to make the diversity of pharmacogenetics data amenable to computational analysis.
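
    The schema-validation step that guards the import can be sketched with lxml; the inline schema and submission below are stand-ins, not the actual PharmGKB interface schema.

      from lxml import etree

      schema_doc = etree.fromstring(b"""
      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="sequence">
          <xs:complexType>
            <xs:attribute name="gene" type="xs:string" use="required"/>
            <xs:attribute name="length" type="xs:positiveInteger" use="required"/>
          </xs:complexType>
        </xs:element>
      </xs:schema>
      """)
      schema = etree.XMLSchema(schema_doc)

      submission = etree.fromstring(b'<sequence gene="CYP2D6" length="4382"/>')
      schema.assertValid(submission)            # raises DocumentInvalid on bad data
      print("submission accepted")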

  4. The XBabelPhish MAGE-ML and XML translator.

    PubMed

    Maier, Don; Wymore, Farrell; Sherlock, Gavin; Ball, Catherine A

    2008-01-18

    MAGE-ML has been promoted as a standard format for describing microarray experiments and the data they produce. Two characteristics of the MAGE-ML format compromise its use as a universal standard: First, MAGE-ML files are exceptionally large - too large to be easily read by most people, and often too large to be read by most software programs. Second, the MAGE-ML standard permits many ways of representing the same information. As a result, different producers of MAGE-ML create different documents describing the same experiment and its data. Recognizing all the variants is an unwieldy software engineering task, resulting in software packages that can read and process MAGE-ML from some, but not all producers. This Tower of MAGE-ML Babel bars the unencumbered exchange of microarray experiment descriptions couched in MAGE-ML. We have developed XBabelPhish - an XQuery-based technology for translating one MAGE-ML variant into another. XBabelPhish's use is not restricted to translating MAGE-ML documents. It can transform XML files independent of their DTD, XML schema, or semantic content. Moreover, it is designed to work on very large (> 200 Mb.) files, which are common in the world of MAGE-ML. XBabelPhish provides a way to inter-translate MAGE-ML variants for improved interchange of microarray experiment information. More generally, it can be used to transform most XML files, including very large ones that exceed the capacity of most XML tools.

  5. 106-17 Telemetry Standards Metadata Configuration Chapter 23

    DTIC Science & Technology

    2017-07-01

    Excerpt from Chapter 23 (Metadata Configuration) of the 106-17 Telemetry Standards, which defines the Metadata Description Language (MDL); only chapter headings and front-matter acronym definitions are captured in the record (HTML Hypertext Markup Language, MDL Metadata Description Language, PCM pulse code modulation, TMATS Telemetry Attributes Transfer Standard, W3C World Wide Web Consortium, XML eXtensible Markup Language, XSD XML schema document, Telemetry Network Standard).

  6. Agility: Agent-Ility Architecture

    DTIC Science & Technology

    2002-10-01

    existing and emerging standards (e.g., distributed objects, email, web, search engines, XML, Java, Jini). Three agent system components resulted from...agents and other Internet resources and operate over the web (AgentGram), a yellow pages service that uses Internet search engines to locate XML ads for agents and other Internet resources (WebTrader).

  7. The semantic planetary data system

    NASA Technical Reports Server (NTRS)

    Hughes, J. Steven; Crichton, Daniel; Kelly, Sean; Mattmann, Chris

    2005-01-01

    This paper will provide a brief overview of the PDS data model and the PDS catalog. It will then describe the implementation of the Semantic PDS, including the development of the formal ontology, the generation of RDFS/XML and RDF/XML data sets, and the building of the semantic search application.

  8. Bridging the Gap in Port Security; Network Centric Theory Applied to Public/Private Collaboration

    DTIC Science & Technology

    2007-03-01

    Only footnote fragments are captured in this record: citations of the CBP C-TPAT port security guidelines (www.cbp.gov/xp/cgov/import/commercial_enforcement/ctpat/security_guideline/guideline_port.xml, accessed January 2, 2007) and a truncated reference to the four core elements of CSI.

  9. Converting from XML to HDF-EOS

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A computer program recreates an HDF-EOS file from an Extensible Markup Language (XML) representation of the contents of that file. This program is one of two programs written to enable testing of the schemas described in the immediately preceding article to determine whether the schemas capture all details of HDF-EOS files.

  10. A Leaner, Meaner Markup Language.

    ERIC Educational Resources Information Center

    Online & CD-ROM Review, 1997

    1997-01-01

    In 1996 a working group of the World Wide Web Consortium developed and released a simpler form of markup language, Extensible Markup Language (XML), combining the flexibility of Standard Generalized Markup Language (SGML) and the Web suitability of HyperText Markup Language (HTML). Reviews SGML and discusses XML's suitability for journal…

  11. DoD Business Mission Area Service-Oriented Architecture to Support Business Transformation

    DTIC Science & Technology

    2008-10-01

    Notation (BPMN). The research also found strong support across vendors for the Business Process Execution Language standard, though there is also...emerging support for direct execution of BPMN through the use of the XML Process Definition Language, an XML serialization of BPMN. Many vendors also

  12. Multisensor fusion for 3D target tracking using track-before-detect particle filter

    NASA Astrophysics Data System (ADS)

    Moshtagh, Nima; Romberg, Paul M.; Chan, Moses W.

    2015-05-01

    This work presents a novel fusion mechanism for estimating the three-dimensional trajectory of a moving target using images collected by multiple imaging sensors. The proposed projective particle filter avoids explicit target detection prior to fusion. In the projective particle filter, particles that represent the posterior density (of the target state in a high-dimensional space) are projected onto the lower-dimensional observation space. Measurements are generated directly in the observation space (image plane) and a marginal (sensor) likelihood is computed. The particle states and weights are updated using the joint likelihood computed from all the sensors. The 3D state estimate of the target (the system track) is then generated from the states of the particles. This approach is similar to track-before-detect particle filters that are known to perform well in tracking dim and stealthy targets in image collections. Our approach extends the track-before-detect approach to 3D tracking using the projective particle filter. The performance of this measurement-level fusion method is compared with that of a track-level fusion algorithm using the projective particle filter. In the track-level fusion algorithm, the 2D sensor tracks are generated separately and transmitted to a fusion center, where they are treated as measurements to the state estimator. The 2D sensor tracks are then fused to reconstruct the system track. A realistic synthetic scenario with a boosting target was generated and used to study the performance of the fusion mechanisms.
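
    A toy sketch of the measurement-level idea (project each 3D particle into a sensor's image plane and weight it by the image evidence found there); it assumes an idealized pinhole camera and an intensity-like evidence map, and is not the authors' projective particle filter.

      import numpy as np

      def project(points_3d, focal=500.0, center=(320.0, 240.0)):
          """Idealized pinhole projection of Nx3 points (camera looks along +z)."""
          x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
          return np.stack([focal * x / z + center[0], focal * y / z + center[1]], axis=1)

      def update_weights(particles, weights, image):
          """Weight each particle by the image evidence at its projected pixel;
          measurement-level fusion would repeat this per sensor and multiply
          the per-sensor likelihoods before normalizing."""
          px = np.round(project(particles)).astype(int)
          h, w = image.shape
          inside = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
          likelihood = np.full(len(particles), 1e-9)
          likelihood[inside] = image[px[inside, 1], px[inside, 0]] + 1e-9
          weights = weights * likelihood
          return weights / weights.sum()

      # Toy scene: particles scattered around a target at (0, 0, 50) metres.
      rng = np.random.default_rng(2)
      particles = rng.normal([0.0, 0.0, 50.0], [1.0, 1.0, 2.0], size=(500, 3))
      weights = np.full(500, 1.0 / 500)
      frame = rng.random((480, 640))            # stand-in sensor evidence map
      weights = update_weights(particles, weights, frame)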

  13. Quality Management, Certification and Rating of Health Information on the Net with MedCERTAIN: Using a medPICS/RDF/XML metadata structure for implementing eHealth ethics and creating trust globally

    PubMed Central

    Eysenbach, Gunther; Yihune, Gabriel; Lampe, Kristian; Cross, Phil; Brickley, Dan

    2000-01-01

    MedCERTAIN (MedPICS Certification and Rating of Trustworthy Health Information on the Net, http://www.medcertain.org/) is a recently launched international project funded under the European Union's (EU) "Action Plan for safer use of the Internet". It provides a technical infrastructure and a conceptual basis for an international system of "quality seals", ratings and self-labelling of Internet health information, with the final aim of establishing a global "trustmark" for networked health information. Digital "quality seals" are evaluative metadata (using standards such as PICS, the Platform for Internet Content Selection, now being replaced by RDF/XML) assigned by trusted third-party raters. The project also enables and encourages self-labelling with descriptive metainformation by web authors. Together these measures will help consumers as well as professionals to identify high-quality information on the Internet. MedCERTAIN establishes a fully functional demonstrator for a self- and third-party rating system enabling consumers and professionals to filter harmful health information and to positively identify and select high-quality information. We aim to provide a trustmark system which allows citizens to place greater confidence in networked information, to encourage health information providers to follow best-practice guidelines such as the Washington eHealth Code of Ethics, to provide effective feedback and law enforcement channels to handle user complaints, and to stimulate medical societies to develop standards for patient information. The project further proposes and identifies standards for interoperability of rating and description services (such as libraries or national health portals) and fosters a worldwide collaboration to guide consumers to high-quality information on the web.

  14. Integration of HTML documents into an XML-based knowledge repository.

    PubMed

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler/viewer/editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML and then loaded into a database for use by EPIG.

  15. A Survey and Analysis of Access Control Architectures for XML Data

    DTIC Science & Technology

    2006-03-01

    Only excerpts are captured in this record: a table-of-contents entry for XML query engines and fragments of the thesis text comparing layered information protection to a castle and the drawbridge over its moat ("Extending beyond the visual analogy, there are many key components to the protection of information and ... technology"), ending with the truncated observation that while XML's original intent was to enable large-scale electronic publishing over the internet, its functionality is firmly rooted in its...

  16. Personalization of XML Content Browsing Based on User Preferences

    ERIC Educational Resources Information Center

    Encelle, Benoit; Baptiste-Jessel, Nadine; Sedes, Florence

    2009-01-01

    Personalization of user interfaces for browsing content is a key concept to ensure content accessibility. In this direction, we introduce concepts that result in the generation of personalized multimodal user interfaces for browsing XML content. User requirements concerning the browsing of a specific content type can be specified by means of…

  17. XML and Bibliographic Data: The TVS (Transport, Validation and Services) Model.

    ERIC Educational Resources Information Center

    de Carvalho, Joaquim; Cordeiro, Maria Ines

    This paper discusses the role of XML in library information systems at three major levels: as a representation language that enables the transport of bibliographic data in a way that is technologically independent and universally understood across systems and domains; as a language that enables the specification of complex validation rules…

  18. A Conversion Tool for Mathematical Expressions in Web XML Files.

    ERIC Educational Resources Information Center

    Ohtake, Nobuyuki; Kanahori, Toshihiro

    2003-01-01

    This article discusses the conversion of mathematical equations into Extensible Markup Language (XML) on the World Wide Web for individuals with visual impairments. A program is described that converts the presentation markup style to the content markup style in MathML to allow browsers to render mathematical expressions without other programs.…

  19. An Electronic Finding Aid Using Extensible Markup Language (XML) and Encoded Archival Description (EAD).

    ERIC Educational Resources Information Center

    Chang, May

    2000-01-01

    Describes the development of electronic finding aids for archives at the University of Illinois, Urbana-Champaign that used XML (extensible markup language) and EAD (encoded archival description) to enable more flexible information management and retrieval than using MARC or a relational database management system. EAD template is appended.…

  20. Strategic Industrial Alliances in Paper Industry: XML- vs Ontology-Based Integration Platforms

    ERIC Educational Resources Information Center

    Naumenko, Anton; Nikitin, Sergiy; Terziyan, Vagan; Zharko, Andriy

    2005-01-01

    Purpose: To identify cases related to design of ICT platforms for industrial alliances, where the use of Ontology-driven architectures based on Semantic web standards is more advantageous than application of conventional modeling together with XML standards. Design/methodology/approach: A comparative analysis of the two latest and the most obvious…

  1. An XML-based Generic Tool for Information Retrieval in Solar Databases

    NASA Astrophysics Data System (ADS)

    Scholl, Isabelle F.; Legay, Eric; Linsolas, Romain

    This paper presents the current architecture of the 'Solar Web Project', now in its development phase. This tool will provide scientists interested in solar data with a single web-based interface for browsing distributed and heterogeneous catalogs of solar observations. The main goal is to have a generic application that can be easily extended to new sets of data or to new missions with a low level of maintenance. It is developed in Java, and XML is used as a powerful configuration language. The server, independent of any database scheme, can communicate with a client (the user interface) and several local or remote archive access systems (such as existing web pages, ftp sites or SQL databases). Archive access systems are externally described in XML files. The user interface is also dynamically generated from an XML file containing the window building rules and a simplified database description. This project is developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France). Successful tests have been conducted with other solar archive access systems.

  2. Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects

    NASA Astrophysics Data System (ADS)

    Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander

    Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created that have to be synchronized and updated. This is a complex and time-consuming task because BOs vary considerably in their structure according to the distribution, number and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered for minimizing the processing time during synchronization. Predicting the parsing time of BOs is therefore important for selecting an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. Our experimental evaluation, incorporating four different XML parsers, examines the dependencies between the distribution of elements and the parsing time. Finally, a general cost model will be validated and simplified according to the results of the experimental setup.
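
    A small harness of the kind such an evaluation relies on, written with ElementTree; the generated documents only mimic the varying number and size of elements in business objects, and absolute timings depend on the parser and machine.

      import time
      import xml.etree.ElementTree as ET

      def make_bo(n_items, payload_len):
          """Generate a business-object-like document with n_items child elements."""
          payload = "x" * payload_len
          items = "".join(f"<item id='{i}'><data>{payload}</data></item>"
                          for i in range(n_items))
          return f"<businessObject>{items}</businessObject>"

      for n_items, payload_len in [(1000, 10), (10000, 10), (10000, 200)]:
          doc = make_bo(n_items, payload_len)
          t0 = time.perf_counter()
          ET.fromstring(doc)                    # full parse of the document
          dt = time.perf_counter() - t0
          print(f"{n_items:>6} items, payload {payload_len:>3} B: {dt * 1000:7.1f} ms")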

  3. Rock.XML - Towards a library of rock physics models

    NASA Astrophysics Data System (ADS)

    Jensen, Erling Hugo; Hauge, Ragnar; Ulvmoen, Marit; Johansen, Tor Arne; Drottning, Åsmund

    2016-08-01

    Rock physics modelling provides tools for correlating physical properties of rocks and their constituents to the geophysical observations we measure on a larger scale. Many different theoretical and empirical models exist to cover the range of different types of rocks. However, upon reviewing these, we see that they are all built around a few main concepts. Based on this observation, we propose a format for digitally storing the specifications for rock physics models, which we have named Rock.XML. It contains not only data about the various constituents, but also the theories and how they are used to combine these building blocks to make a representative model for a particular rock. The format is based on the Extensible Markup Language (XML), making it flexible enough to handle complex models as well as scalable towards extending it with new theories and models. This technology has great advantages for documenting and exchanging models in an unambiguous way between people and between software. Rock.XML can become a platform for creating a library of rock physics models, making them more accessible to everyone.

  4. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.

  5. Transparent ICD and DRG Coding Using Information Technology: Linking and Associating Information Sources with the eXtensible Markup Language

    PubMed Central

    Hoelzer, Simon; Schweiger, Ralf K.; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or “semantically associated” parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach. PMID:12807813

  6. Progress on an implementation of MIFlowCyt in XML

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Stephanie H.

    2015-03-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). The CytometryML schemas are based in part upon the Flow Cytometry Standard and the Digital Imaging and Communications in Medicine (DICOM) standards. CytometryML has been and will be extended and adapted to include MIFlowCyt, as well as to serve as a common standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, as was the rest of CytometryML, in the XML Schema Definition Language (XSD1.1). Individual major elements of the MIFlowCyt schema were translated into XML and filled with reasonable data. A small section of the code was formatted with HTML formatting elements. Results: The differences in the amount of detail to be recorded for 1) users of standard techniques, including data analysts, and 2) others, such as method and device creators, laboratory and other managers, engineers, and regulatory specialists, required that separate data-types be created to describe the instrument configuration and components. A very substantial part of the element that describes the Experimental Overview section of MIFlowCyt, and substantial parts of several other major elements, have been developed. Conclusions: The future use of structured XML tags and web technology should facilitate searching of experimental information, its presentation, and inclusion in structured research, clinical, and regulatory documents, as well as demonstrate in publications adherence to the MIFlowCyt standard. The use of CytometryML together with XML technology should also result in the textual and numeric data being published using web technology without any change in composition. Preliminary testing indicates that CytometryML XML pages can be directly formatted with the combination of HTML and CSS.

  7. Application of XML to Journal Table Archiving

    NASA Astrophysics Data System (ADS)

    Shaya, E. J.; Blackwell, J. H.; Gass, J. E.; Kargatis, V. E.; Schneider, G. L.; Weiland, J. L.; Borne, K. D.; White, R. A.; Cheung, C. Y.

    1998-12-01

    The Astronomical Data Center (ADC) at the NASA Goddard Space Flight Center is a major archive for machine-readable astronomical data tables. Many ADC tables are derived from published journal articles. Article tables are reformatted to be machine-readable and documentation is crafted to facilitate proper reuse by researchers. The recent switch of journals to web-based electronic format has resulted in the generation of large amounts of tabular data that could be captured into machine-readable archive format at fairly low cost. The large data flow of the tables from all major North American astronomical journals (a factor of 100 greater than the present rate at the ADC) necessitates the development of rigorous standards for the exchange of data between researchers, publishers, and the archives. We have selected a suitable markup language that can fully describe the large variety of astronomical information contained in ADC tables. The eXtensible Markup Language XML is a powerful internet-ready documentation format for data. It provides a precise and clear data description language that is both machine- and human-readable. It is rapidly becoming the standard format for business and information transactions on the internet and it is an ideal common metadata exchange format. By labelling, or "marking up", all elements of the information content, documents are created that computers can easily parse. An XML archive can easily and automatically be maintained, ingested into standard databases or custom software, and even totally restructured whenever necessary. Structuring astronomical data into XML format will enable efficient and focused search capabilities via off-the-shelf software. The ADC is investigating XML's expanded hyperlinking power to enhance connectivity within the ADC data/metadata and developing XSL display scripts to enhance display of astronomical data. The ADC XML Document Type Definition (DTD) can be viewed at http://messier.gsfc.nasa.gov/dtdhtml/DTD-TREE.html
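
    A minimal Python sketch of the table mark-up idea, assuming invented tag names rather than the actual ADC document type: each table row becomes a record element whose children hold the column values, yielding output that standard XML tooling can parse, ingest, or restructure.

        # Mark up a small journal-style table as XML (illustrative tags, not the ADC DTD).
        import xml.etree.ElementTree as ET

        rows = [
            {"name": "NGC 224", "ra": "00 42 44.3", "dec": "+41 16 09", "vmag": "3.4"},
            {"name": "NGC 598", "ra": "01 33 50.9", "dec": "+30 39 37", "vmag": "5.7"},
        ]

        table = ET.Element("table", attrib={"source": "journal article"})
        for row in rows:
            record = ET.SubElement(table, "record")
            for field, value in row.items():
                ET.SubElement(record, field).text = value

        # Machine-readable output that downstream software can parse directly.
        print(ET.tostring(table, encoding="unicode"))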

  8. Development of the Plate Tectonics and Seismology markup languages with XML

    NASA Astrophysics Data System (ADS)

    Babaie, H.; Babaei, A.

    2003-04-01

    The Extensible Markup Language (XML) and its specifications, such as the XSD Schema, allow geologists to design discipline-specific vocabularies such as the Seismology Markup Language (SeismML) or the Plate Tectonics Markup Language (TectML). These languages make it possible to store and interchange structured geological information over the Web. Development of a geological markup language requires mapping geological concepts, such as "Earthquake" or "Plate", into a UML object model using a modeling and design environment. We have selected four inter-related geological concepts: earthquake, fault, plate, and orogeny, and developed four XML Schema Definitions (XSD) that define the relationships, cardinalities, hierarchies, and semantics of these concepts. In such a geological concept model, the UML object "Earthquake" is related to one or more "Wave" objects, each arriving at a seismic station at a specific "DateTime" and relating to a specific "Epicenter" object that lies at a unique "Location". The "Earthquake" object occurs along a "Segment" of a "Fault" object, which is related to a specific "Plate" object. The "Fault" has its own associations with such things as "Bend", "Step", and "Segment", and could be of any kind (e.g., "Thrust", "Transform"). The "Plate" is related to many other objects such as "MOR", "Subduction", and "Forearc", and is associated with an "Orogeny" object that relates to "Deformation", "Strain", and several other objects. These UML objects were mapped into XML Metadata Interchange (XMI) formats, which were then converted into four XSD Schemas. The schemas were used to create and validate the XML instance documents, and to create a relational database hosting the plate tectonics and seismological data in the Microsoft Access format. The SeismML and TectML allow seismologists and structural geologists, among others, to submit and retrieve structured geological data on the Internet. A seismologist, for example, can submit peer-reviewed and reliable data about a specific earthquake to a Java Server Page on our web site hosting the XML application. Other geologists can readily retrieve the submitted data, saved in files or special tables of the designed database, through a search engine designed with J2EE (JSP, servlet, Java Bean) and XML specifications such as XPath, XPointer, and XSLT. When extended to include all the important concepts of seismology and plate tectonics, the two markup languages will make global interchange of geological data a reality.

  9. A Flexible Online Metadata Editing and Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilar, Raul; Pan, Jerry Yun; Gries, Corinna

    2010-01-01

    A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool, or schema walker, used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema and provides the designer with options to combine input fields into online forms and give the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time; second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. However, the customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button, independently of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third-party online and desktop management tools. However, access to metadata files in the application introduced here is managed in a custom online module, using a MySQL backend accessed by a simple Java Server Faces front end. A flexible system with three grouping options (organization, group, and single editing access) is provided. Three levels were chosen to distribute administrative responsibilities and handle the common situation of an information manager entering the bulk of the metadata but leaving specifics to the actual data provider.

  10. Report of Official Foreign Travel to Germany, May 16-June 1, 2001

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. D. Mason

    2001-06-18

    The Department of Energy (DOE) and associated agencies have moved rapidly toward electronic production, management, and dissemination of scientific and technical information. The World-Wide Web (WWW) has become a primary means of information dissemination. Electronic commerce (EC) is becoming the preferred means of procurement. DOE, like other government agencies, depends on and encourages the use of international standards in data communications. Like most government agencies, DOE has expressed a preference for openly developed standards over proprietary designs promoted as "standards" by vendors. In particular, there is a preference for standards developed by organizations such as the International Organization for Standardization (ISO) and the American National Standards Institute (ANSI) that use open, public processes to develop their standards. Among the most widely adopted international standards is the Standard Generalized Markup Language (SGML, ISO 8879:1986, FIPS 152), to which DOE long ago made a commitment. Besides the official commitment, which has resulted in several specialized projects, DOE makes heavy use of coding derived from SGML: Most documents on the WWW are coded in HTML (Hypertext Markup Language), which is an application of SGML. The World-Wide Web Consortium (W3C), with the backing of major software houses like Adobe, IBM, Microsoft, Netscape, Oracle, and Sun, is promoting XML (eXtensible Markup Language), a class of SGML applications, for the future of the WWW and the basis for EC. In support of DOE's use of these standards, I have served since 1985 as Chairman of the international committee responsible for SGML and related standards, ISO/IEC JTC1/SC34 (SC34) and its predecessor organizations. During my May 2001 trip, I chaired the spring 2001 meeting of SC34 in Berlin, Germany. I also attended XML Europe 2001, a major conference on the use of SGML and XML sponsored by the Graphic Communications Association (GCA), and chaired a meeting of the International SGML/XML Users' Group (ISUG). In addition to the widespread use of the WWW among DOE's plants and facilities in Oak Ridge and among DOE sites across the nation, there have been several past and present SGML- and XML-based projects at the Y-12 National Security Complex (Y-12). Our local project team has done SGML and XML development at Y-12 and Oak Ridge National Laboratory (ORNL) since the late 1980s. SGML is a component of the Weapons Records Archiving and Preservation (WRAP) project at Y-12 and is the format for catalog metadata chosen for weapons records by the Nuclear Weapons Information Group (NWIG). The "Ferret" system for automated classification analysis uses XML to structure its knowledge base. The Ferret team also provides XML consulting to OSTI and DOE Headquarters, particularly the National Nuclear Security Administration (NNSA). Supporting standards development allows DOE and Y-12 the opportunity both to provide input into the process and to benefit from contact with some of the leading experts in the subject matter. Oak Ridge has been for some years the location to which other DOE sites turn for expertise in SGML, XML, and related topics.

  11. Modelling and Simulation of Grid Connected SPV System with Active Power Filtering Features

    NASA Astrophysics Data System (ADS)

    Saroha, Jaipal; Pandove, Gitanjali; Singh, Mukhtiar

    2017-09-01

    In this paper, detailed simulation studies for a grid-connected solar photovoltaic (SPV) system have been presented. The power electronics devices, namely the DC-DC boost converter and the grid interfacing inverter, are the most important components of the proposed system. Here, the DC-DC boost converter is controlled to extract maximum power out of the SPV under different irradiation levels, while the grid interfacing inverter is utilized to evacuate the active power and feed it into the grid at synchronized voltage and frequency. Moreover, the grid interfacing inverter is also controlled to sort out the issues related to power quality by compensating the reactive power and harmonic current components of the nearby load at the point of common coupling. Besides, detailed modeling of the various components utilized in the proposed system is also presented. Finally, extensive simulations have been performed under different irradiation levels with various kinds of load to validate the aforementioned claims. The overall system design and simulation have been performed using the Sim Power System toolbox available in the MATLAB library.
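
    The abstract does not state which maximum power point tracking (MPPT) method drives the boost converter. The sketch below shows perturb-and-observe, a common choice, expressed as an update to a PV voltage reference that an inner control loop would translate into the converter duty cycle (an assumption for illustration, not the paper's controller).

        # Perturb-and-observe MPPT sketch (not the paper's specific control scheme).
        def perturb_and_observe(v_ref, v_prev, p_prev, v, i, step=0.1):
            """Return (new voltage reference, latest voltage, latest power)."""
            p = v * i
            if (p > p_prev) == (v > v_prev):
                v_ref += step        # last perturbation raised power: keep going the same way
            else:
                v_ref -= step        # last perturbation lowered power: back off
            return v_ref, v, p

        # Example update with placeholder measurements (volts, amps).
        v_ref, v_prev, p_prev = 30.0, 0.0, 0.0
        v_ref, v_prev, p_prev = perturb_and_observe(v_ref, v_prev, p_prev, v=30.2, i=5.1)
        print("updated PV voltage reference:", v_ref)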

  12. Application of LogitBoost Classifier for Traceability Using SNP Chip Data

    PubMed Central

    Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok

    2015-01-01

    Consumer attention to food safety has increased rapidly due to animal-related diseases; therefore, it is important to identify their places of origin (POO) for safety purposes. However, only a few studies have addressed this issue and focused on machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, a greater level of accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrated the applicability of a machine learning-based approach using SNP chip data for practical traceability. PMID:26436917
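
    scikit-learn does not ship a LogitBoost implementation, so the sketch below substitutes a gradient-boosted classifier to illustrate the same boosted-classification workflow on synthetic 0/1/2 genotype data; the farm labels and dimensions are placeholders, not the study's dataset.

        # Boosted classification of synthetic SNP genotypes (illustrative substitute for LogitBoost).
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(500, 200))   # 500 animals x 200 SNPs (allele counts 0/1/2)
        y = rng.integers(0, 5, size=500)          # 5 hypothetical farms of origin

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        clf = GradientBoostingClassifier(n_estimators=100).fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))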

  13. Application of LogitBoost Classifier for Traceability Using SNP Chip Data.

    PubMed

    Kim, Kwondo; Seo, Minseok; Kang, Hyunsung; Cho, Seoae; Kim, Heebal; Seo, Kang-Seok

    2015-01-01

    Consumer attention to food safety has increased rapidly due to animal-related diseases; therefore, it is important to identify their places of origin (POO) for safety purposes. However, only a few studies have addressed this issue and focused on machine learning-based approaches. In the present study, classification analyses were performed using a customized SNP chip for POO prediction. To accomplish this, 4,122 pigs originating from 104 farms were genotyped using the SNP chip. Several factors were considered to establish the best prediction model based on these data. We also assessed the applicability of the suggested model using a kinship coefficient-filtering approach. Our results showed that the LogitBoost-based prediction model outperformed other classifiers in terms of classification performance under most conditions. Specifically, a greater level of accuracy was observed when a higher kinship-based cutoff was employed. These results demonstrated the applicability of a machine learning-based approach using SNP chip data for practical traceability.

  14. One-dimensional photonic crystals for eliminating cross-talk in mid-IR photonics-based respiratory gas sensing

    NASA Astrophysics Data System (ADS)

    Fleming, L.; Gibson, D.; Song, S.; Hutson, D.; Reid, S.; MacGregor, C.; Clark, C.

    2017-02-01

    Mid-IR carbon dioxide (CO2) gas sensing is critical for monitoring in respiratory care, and is finding increasing importance in surgical anaesthetics where nitrous oxide (N2O) induced cross-talk is a major obstacle to accurate CO2 monitoring. In this work, a novel, solid state mid-IR photonics based CO2 gas sensor is described, and the role that 1-dimensional photonic crystals, often referred to as multilayer thin film optical coatings [1], play in boosting the sensor's capability of gas discrimination is discussed. Filter performance in isolating CO2 IR absorption is tested on an optical filter test bed and a theoretical gas sensor model is developed, with the inclusion of a modelled multilayer optical filter to analyse the efficacy of optical filtering on eliminating N2O induced cross-talk for this particular gas sensor architecture. Future possible in-house optical filter fabrication techniques are discussed. As the actual gas sensor configuration is small, it would be challenging to manufacture a filter of the correct size; dismantling the sensor and mounting a new filter for different optical coating designs each time would prove to be laborious. For this reason, an optical filter testbed set-up is described and, using a commercial optical filter, it is demonstrated that cross-talk can be considerably reduced; cross-talk is minimal even for very high concentrations of N2O, which are unlikely to be encountered in exhaled surgical anaesthetic patient breath profiles. A completely new and versatile system for breath emulation is described and the capability it has for producing realistic human exhaled CO2 vs. time waveforms is shown. The cross-talk inducing effect that N2O has on realistic emulated CO2 vs. time waveforms as measured using the NDIR gas sensing technique is demonstrated and the effect that optical filtering will have on said cross-talk is discussed.
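
    As textbook background on why optical filtering suppresses cross-talk (not the paper's specific sensor model), NDIR sensing follows the Beer-Lambert law; for a path of length L through a CO2/N2O mixture the detected intensity can be written as

        % Beer-Lambert relation for a two-gas NDIR path (textbook form, illustrative only)
        I(\lambda) = I_0(\lambda)\,\exp\!\big[ -\big( \alpha_{\mathrm{CO_2}}(\lambda)\, c_{\mathrm{CO_2}}
                     + \alpha_{\mathrm{N_2O}}(\lambda)\, c_{\mathrm{N_2O}} \big)\, L \big]

    so a filter that confines the passband to wavelengths where the N2O absorption coefficient is negligible leaves the CO2 term to dominate the measured attenuation.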

  15. A Semantic Analysis of XML Schema Matching for B2B Systems Integration

    ERIC Educational Resources Information Center

    Kim, Jaewook

    2011-01-01

    One of the most critical steps to integrating heterogeneous e-Business applications using different XML schemas is schema matching, which is known to be costly and error-prone. Many automatic schema matching approaches have been proposed, but the challenge is still daunting because of the complexity of schemas and immaturity of technologies in…

  16. 76 FR 4897 - Submission for OMB Review; OMB Control No. 3090-0291; FSRS Registration and Prime Awardee Entity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-27

    ... three methods for submitting multiple FFATA subaward reports: a batch upload template using Microsoft Excel, an XML report submission template, and an XML web service. These methods do take advantage of the...

  17. Integration of HTML Documents into an XML-Based Knowledge Repository

    PubMed Central

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler/viewer/editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML and then loaded into a database for use by EPIG. PMID:16779384

  18. International Data | Geospatial Data Science | NREL

    Science.gov Websites

    International Data: These datasets detail solar and wind resources for select countries, for example India 10-km monthly direct normal and global horizontal irradiance (Annual.xml, Monthly.xml; Zip, 4.68 MB, 04/25/2013). The 50-m hub-height wind datasets have been validated by NREL and wind energy ...

  19. Nassi-Schneiderman Diagram in HTML Based on AML

    ERIC Educational Resources Information Center

    Menyhárt, László

    2013-01-01

    In an earlier work I defined an extension of XML called Algorithm Markup Language (AML) for easy and understandable coding in an IDE which supports XML editing (e.g. NetBeans). The AML extension contains annotations and native language (English or Hungarian) tag names used when coding our algorithm. This paper presents a drawing tool with which…

  20. Compression-based aggregation model for medical web services.

    PubMed

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations such as hospitals have adopted Cloud Web services in applying their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), the basic communication protocol of Cloud Web services, is an XML-based protocol. Generally, Web services often suffer congestion and bottlenecks as a result of the high network traffic that is caused by the large XML overhead size. At the same time, the massive load on Cloud Web services, in terms of the large demand of client requests, has resulted in the same problem. In this paper, two XML-aware aggregation techniques that exploit compression concepts are proposed in order to aggregate the medical Web messages and achieve higher message size reduction.
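
    A minimal Python sketch of the underlying intuition, with invented message contents rather than the paper's aggregation model: compressing many similar SOAP envelopes together lets the redundant XML markup be encoded once, so the aggregated payload is far smaller than the sum of individually compressed messages.

        # Compare per-message compression with aggregate-then-compress for similar SOAP envelopes.
        import zlib

        messages = [
            "<soap:Envelope><soap:Body><GetPatient id='%d'/></soap:Body></soap:Envelope>" % i
            for i in range(50)
        ]

        individual = sum(len(zlib.compress(m.encode())) for m in messages)
        aggregated = len(zlib.compress("".join(messages).encode()))
        print("bytes, compressed one by one:", individual)
        print("bytes, aggregated then compressed:", aggregated)  # typically much smaller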

  1. An exponentiation method for XML element retrieval.

    PubMed

    Wichaiwong, Tanakorn

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structure queries and assigning the weight are significant for the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the baseline BM25 in terms of MAP.
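
    The Exponentiation function itself is not specified in the abstract; as a reference point, the following Python sketch implements only the plain BM25 baseline over a toy element collection, with conventional k1 and b defaults.

        # Plain BM25 scoring over a toy collection (baseline only, not the MEXIR extension).
        import math
        from collections import Counter

        docs = ["xml element retrieval", "structured xml search", "content and structure queries"]
        tokenized = [d.split() for d in docs]
        avgdl = sum(len(d) for d in tokenized) / len(tokenized)
        N = len(tokenized)

        def bm25(query, doc, k1=1.2, b=0.75):
            tf = Counter(doc)
            score = 0.0
            for term in query.split():
                df = sum(1 for d in tokenized if term in d)   # document frequency
                if df == 0:
                    continue
                idf = math.log((N - df + 0.5) / (df + 0.5) + 1)
                score += idf * tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
            return score

        for d in tokenized:
            print(" ".join(d), "->", round(bm25("xml search", d), 3))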

  2. Representation of thermal infrared imaging data in the DICOM using XML configuration files.

    PubMed

    Ruminski, Jacek

    2007-01-01

    The DICOM standard has become a widely accepted and implemented format for the exchange and storage of medical imaging data. Different imaging modalities are supported; however, there is no dedicated solution for thermal infrared imaging in medicine. In this article we propose new ideas and improvements to the final proposal of the new DICOM Thermal Infrared Imaging structures and services. Additionally, we designed, implemented and tested software packages for universal conversion of existing thermal imaging files to the DICOM format using XML configuration files. The proposed solution works fast and requires a minimal number of user interactions. The XML configuration file makes it possible to compose a set of attributes for any source file format of a thermal imaging camera.
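
    A minimal Python sketch of the configuration-file idea, assuming invented attribute names and offsets: an XML file declares where each DICOM-relevant field lives in a camera's proprietary header, and the converter simply reads that mapping before extracting the values.

        # Parse a camera-specific XML configuration into a field mapping (names and offsets invented).
        import xml.etree.ElementTree as ET

        CONFIG = """<converter camera="ExampleCam">
          <attribute dicom="Rows" offset="16" type="uint16"/>
          <attribute dicom="Columns" offset="18" type="uint16"/>
          <attribute dicom="EmissivityCorrection" offset="64" type="float32"/>
        </converter>"""

        mapping = [
            (a.get("dicom"), int(a.get("offset")), a.get("type"))
            for a in ET.fromstring(CONFIG).iter("attribute")
        ]
        for name, offset, dtype in mapping:
            print(f"read {dtype} at byte {offset} -> DICOM attribute {name}")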

  3. Bioinformatics data distribution and integration via Web Services and XML.

    PubMed

    Li, Xiao; Zhang, Yizheng

    2003-11-01

    It is widely recognized that the exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data is not yet solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use for an appropriate solution to the problem of bioinformatics data exchange and integration.

  4. Astronomical Instrumentation System Markup Language

    NASA Astrophysics Data System (ADS)

    Goldbaum, Jesse M.

    2016-05-01

    The Astronomical Instrumentation System Markup Language (AISML) is an Extensible Markup Language (XML) based file format for maintaining and exchanging information about astronomical instrumentation. The factors behind the need for an AISML are first discussed followed by the reasons why XML was chosen as the format. Next it's shown how XML also provides the framework for a more precise definition of an astronomical instrument and how these instruments can be combined to form an Astronomical Instrumentation System (AIS). AISML files for several instruments as well as one for a sample AIS are provided. The files demonstrate how AISML can be utilized for various tasks from web page generation and programming interface to instrument maintenance and quality management. The advantages of widespread adoption of AISML are discussed.

  5. QuakeML - An XML Schema for Seismology

    NASA Astrophysics Data System (ADS)

    Wyss, A.; Schorlemmer, D.; Maraini, S.; Baer, M.; Wiemer, S.

    2004-12-01

    We propose an extensible format-definition for seismic data (QuakeML). Sharing data and seismic information efficiently is one of the most important issues for research and observational seismology in the future. The eXtensible Markup Language (XML) is playing an increasingly important role in the exchange of a variety of data. Due to its extensible definition capabilities, its wide acceptance and the existing large number of utilities and libraries for XML, a structured representation of various types of seismological data should in our opinion be developed by defining a 'QuakeML' standard. Here we present the QuakeML definitions for parameter databases and further efforts, e.g. a central QuakeML catalog database and a web portal for exchanging codes and stylesheets.
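
    A minimal Python sketch of what a QuakeML-style event record could look like; the element names follow the general shape described above but are simplified and not tied to any released QuakeML schema version.

        # Build a simplified QuakeML-like event record (illustrative element names only).
        import xml.etree.ElementTree as ET

        event = ET.Element("event", attrib={"publicID": "smi:example/event/1"})
        origin = ET.SubElement(event, "origin")
        ET.SubElement(origin, "time").text = "2004-12-01T10:15:30Z"
        ET.SubElement(origin, "latitude").text = "46.8"
        ET.SubElement(origin, "longitude").text = "8.2"
        ET.SubElement(origin, "depth").text = "8000"
        magnitude = ET.SubElement(event, "magnitude")
        ET.SubElement(magnitude, "mag").text = "4.3"

        print(ET.tostring(event, encoding="unicode"))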

  6. A High-Availability, Distributed Hardware Control System Using Java

    NASA Technical Reports Server (NTRS)

    Niessner, Albert F.

    2011-01-01

    Two independent coronagraph experiments that require 24/7 availability with different optical layouts and different motion control requirements are commanded and controlled with the same Java software system executing on many geographically scattered computer systems interconnected via TCP/IP. High availability of a distributed system requires that the computers have a robust communication messaging system, making the mix of TCP/IP (a robust transport) and XML (a robust message) a natural choice. XML also adds configuration flexibility. Java then adds object-oriented paradigms, exception handling, heavily tested libraries, and many third party tools for implementation robustness. The result is a software system that provides users with 24/7 access to two diverse experiments, with XML files defining the differences.

  7. Definition of an XML markup language for clinical laboratory procedures and comparison with generic XML markup.

    PubMed

    Saadawi, Gilan M; Harrison, James H

    2006-10-01

    Clinical laboratory procedure manuals are typically maintained as word processor files and are inefficient to store and search, require substantial effort for review and updating, and integrate poorly with other laboratory information. Electronic document management systems could improve procedure management and utility. As a first step toward building such systems, we have developed a prototype electronic format for laboratory procedures using Extensible Markup Language (XML). Representative laboratory procedures were analyzed to identify document structure and data elements. This information was used to create a markup vocabulary, CLP-ML, expressed as an XML Document Type Definition (DTD). To determine whether this markup provided advantages over generic markup, we compared procedures structured with CLP-ML or with the vocabulary of the Health Level Seven, Inc. (HL7) Clinical Document Architecture (CDA) narrative block. CLP-ML includes 124 XML tags and supports a variety of procedure types across different laboratory sections. When compared with a general-purpose markup vocabulary (CDA narrative block), CLP-ML documents were easier to edit and read, less complex structurally, and simpler to traverse for searching and retrieval. In combination with appropriate software, CLP-ML is designed to support electronic authoring, reviewing, distributing, and searching of clinical laboratory procedures from a central repository, decreasing procedure maintenance effort and increasing the utility of procedure information. A standard electronic procedure format could also allow laboratories and vendors to share procedures and procedure layouts, minimizing duplicative word processor editing. Our results suggest that laboratory-specific markup such as CLP-ML will provide greater benefit for such systems than generic markup.
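
    Because CLP-ML is expressed as a DTD, a document instance can be checked mechanically; the Python sketch below validates a toy procedure against a toy DTD with lxml (the element names are illustrative, not the actual 124-tag CLP-ML vocabulary).

        # Validate a laboratory-procedure document against a DTD (illustrative elements only).
        from io import StringIO
        from lxml import etree

        dtd = etree.DTD(StringIO("""
        <!ELEMENT procedure (title, principle, specimen)>
        <!ELEMENT title (#PCDATA)>
        <!ELEMENT principle (#PCDATA)>
        <!ELEMENT specimen (#PCDATA)>
        """))

        doc = etree.fromstring(
            "<procedure><title>Serum glucose</title>"
            "<principle>Hexokinase method</principle>"
            "<specimen>Serum, 0.5 mL</specimen></procedure>"
        )
        print("valid:", dtd.validate(doc))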

  8. Proper Plugin Protocols

    DTIC Science & Technology

    2011-12-28

    specify collaboration constraints that occur in Java and XML frameworks and that the collaboration constraints from these frameworks matter in practice. (a...programming language boundaries, and Chapter 6 and Appendix A demonstrate that Fusion can specify constraints across both Java and XML in practice. (c...designed JUnit, Josh Bloch designed Java Collections, and Krzysztof Cwalina designed the .NET Framework APIs. While all of these frameworks are very

  9. Tomcat, Oracle & XML Web Archive

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cothren, D. C.

    2008-01-01

    The TOX (Tomcat Oracle & XML) web archive is a foundation for development of HTTP-based applications using Tomcat (or some other servlet container) and an Oracle RDBMS. Use of TOX requires coding primarily in PL/SQL, JavaScript, and XSLT, but also in HTML, CSS and potentially Java. Coded in Java and PL/SQL itself, TOX provides the foundation for more complex applications to be built.

  10. 77 FR 28541 - Request for Comments on the Recommendation for the Disclosure of Sequence Listings Using XML...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-15

    ... (EPO) as the lead, to propose a revised standard for the filing of nucleotide and/or amino acid.... ST.25 uses a controlled vocabulary of feature keys to describe nucleic acid and amino acid sequences... patent data purposes. The XML standard also includes four qualifiers for amino acids. These feature keys...

  11. Integrating XQuery-Enabled SCORM XML Metadata Repositories into an RDF-Based E-Learning P2P Network

    ERIC Educational Resources Information Center

    Qu, Changtao; Nejdl, Wolfgang

    2004-01-01

    Edutella is an RDF-based E-Learning P2P network that is aimed to accommodate heterogeneous learning resource metadata repositories in a P2P manner and further facilitate the exchange of metadata between these repositories based on RDF. Whereas Edutella provides RDF metadata repositories with a quite natural integration approach, XML metadata…

  12. Using Extensible Markup Language (XML) for the Single Source Delivery of Educational Resources by Print and Online: A Case Study

    ERIC Educational Resources Information Center

    Walsh, Lucas

    2007-01-01

    This article seeks to provide an introduction to Extensible Markup Language (XML) by looking at its use in a single source publishing approach to the provision of teaching resources in both hardcopy and online. Using the development of the International Baccalaureate Organisation's online Economics Subject Guide as a practical example, this…

  13. WaterML: an XML Language for Communicating Water Observations Data

    NASA Astrophysics Data System (ADS)

    Maidment, D. R.; Zaslavsky, I.; Valentine, D.

    2007-12-01

    One of the great impediments to the synthesis of water information is the plethora of formats used to publish such data. Each water agency uses its own approach. XML (eXtensible Markup Language) dialects are generalizations of the Hypertext Markup Language used to communicate specific kinds of information via the internet. WaterML is an XML language for water observations data - streamflow, water quality, groundwater levels, climate, precipitation and aquatic biology data, recorded at fixed, point locations as a function of time. The Hydrologic Information System project of the Consortium of Universities for the Advancement of Hydrologic Science, Inc (CUAHSI) has defined WaterML and prepared a set of web service functions called WaterOneFlow that use WaterML to provide information about observation sites, the variables measured there, and the values of those measurements. WaterML has been submitted to the Open GIS Consortium for harmonization with its standards for XML languages. Academic investigators at a number of testbed locations in the WATERS network are providing data in WaterML format using WaterOneFlow web services. The USGS and other federal agencies are also working with CUAHSI to similarly provide access to their data in WaterML through WaterOneFlow services.
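
    A minimal Python sketch of consuming a WaterML-like response; the element and attribute names are simplified placeholders rather than the exact CUAHSI WaterML schema.

        # Extract time-value pairs from a simplified WaterML-style response.
        import xml.etree.ElementTree as ET

        RESPONSE = """<timeSeriesResponse>
          <timeSeries siteCode="08158000" variable="Streamflow">
            <value dateTime="2007-12-01T00:00:00">132.0</value>
            <value dateTime="2007-12-01T01:00:00">128.5</value>
          </timeSeries>
        </timeSeriesResponse>"""

        root = ET.fromstring(RESPONSE)
        for series in root.iter("timeSeries"):
            for v in series.iter("value"):
                print(series.get("siteCode"), series.get("variable"),
                      v.get("dateTime"), float(v.text))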

  14. Transformation of HDF-EOS metadata from the ECS model to ISO 19115-based XML

    NASA Astrophysics Data System (ADS)

    Wei, Yaxing; Di, Liping; Zhao, Baohua; Liao, Guangxuan; Chen, Aijun

    2007-02-01

    Nowadays, geographic data, such as NASA's Earth Observation System (EOS) data, are playing an increasing role in many areas, including academic research, government decisions and even people's everyday lives. As the quantity of geographic data becomes increasingly large, a major problem is how to fully make use of such data in a distributed, heterogeneous network environment. In order for a user to effectively discover and retrieve the specific information that is useful, the geographic metadata should be described and managed properly. Fortunately, the emergence of XML and Web Services technologies greatly promotes information distribution across the Internet. The research effort discussed in this paper presents a method and its implementation for transforming Hierarchical Data Format (HDF)-EOS metadata from the NASA ECS model to ISO 19115-based XML, which will be managed by the Open Geospatial Consortium (OGC) Catalogue Services - Web Profile (CSW). Using XML and international standards rather than domain-specific models to describe the metadata of those HDF-EOS data, and further using CSW to manage the metadata, can allow metadata information to be searched and interchanged more widely and easily, thus promoting the sharing of HDF-EOS data.

  15. Activate/Inhibit KGCS Gateway via Master Console EIC Pad-B Display

    NASA Technical Reports Server (NTRS)

    Ferreira, Pedro Henrique

    2014-01-01

    My internship consisted of two major projects for the Launch Control System. The purpose of the first project was to implement the Application Control Language (ACL) to Activate Data Acquisition (ADA) and Inhibit Data Acquisition (IDA) on the Kennedy Ground Control Sub-Systems (KGCS) Gateway, to update the existing Pad-B End Item Control (EIC) Display to program the ADA and IDA buttons with the new ACL, and to test and release the ACL Display. The second project consisted of unit testing all of the Application Services Framework (ASF) by March 21st. The XmlFileReader was unit tested and reached 100% coverage. The XmlFileReader class is used to grab information from XML files and use it to initialize elements in the other framework elements by using the Xerces C++ XML Parser, which is open-source, commercial off-the-shelf software. The ScriptThread was also tested. ScriptThread manages the creation and activation of script threads. A large amount of the time was spent initializing the environment, learning how to set up unit tests, and getting familiar with the specific segments of the project that were assigned to us.

  16. An Exponentiation Method for XML Element Retrieval

    PubMed Central

    2014-01-01

    XML documents are now widely used for modelling and storing structured documents. The structure is very rich and carries important information about contents and their relationships, for example, in e-Commerce. XML data-centric collections require query terms allowing users to specify constraints on the document structure; mapping structure queries and assigning the weight are significant for the set of possibly relevant documents with respect to structural conditions. In this paper, we present an extension to the MEXIR search system that supports the combination of structural and content queries in the form of content-and-structure queries, which we call the Exponentiation function. It has been shown that the structural information improves the effectiveness of the search system by up to 52.60% over the baseline BM25 in terms of MAP. PMID:24696643

  17. Light at Night Markup Language (LANML): XML Technology for Light at Night Monitoring Data

    NASA Astrophysics Data System (ADS)

    Craine, B. L.; Craine, E. R.; Craine, E. M.; Crawford, D. L.

    2013-05-01

    Light at Night Markup Language (LANML) is a standard, based upon XML, useful in acquiring, validating, transporting, archiving and analyzing multi-dimensional light at night (LAN) datasets of any size. The LANML standard can accommodate a variety of measurement scenarios including single spot measures, static time-series, web based monitoring networks, mobile measurements, and airborne measurements. LANML is human-readable, machine-readable, and does not require a dedicated parser. In addition, LANML is flexible, ensuring that future extensions of the format will remain backward compatible with analysis software. The XML technology is at the heart of communicating over the internet and can be equally useful at the desktop level, making this standard particularly attractive for web based applications, educational outreach and efficient collaboration between research groups.

  18. Web-Based Customizable Viewer for Mars Network Overflight Opportunities

    NASA Technical Reports Server (NTRS)

    Gladden, Roy E.; Wallick, Michael N.; Allard, Daniel A.

    2012-01-01

    This software displays a full summary of information regarding the overflight opportunities between any set of lander and orbiter pairs that the user has access to view. The information display can be customized, allowing the user to choose which fields to view/hide and filter. The software works from a Web browser on any modern operating system. A full summary of information pertaining to an overflight is available, including the proposed, tentative, requested, planned, and implemented states. This gives the user a chance to quickly check for inconsistencies and fix any problems. Overflights from multiple lander/orbiter pairs can be compared instantly, and information can be filtered through the query and shown/hidden, giving the user a customizable view of the data. The information can be exported to a CSV (comma separated value) or XML (extensible markup language) file. The software only grants access to users who are authorized to view the information. This application is an addition to the MaROS Web suite. Prior to this addition, information pertaining to overflight opportunities would have a limited amount of data (displayed graphically) and could only be shown in strict temporal ordering. This new display shows more information, allows direct comparisons between overflights, and allows the data to be manipulated in ways that were not possible in the past. The current software solution is to use CSV files to view the overflight opportunities.

  19. DeepBlue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets.

    PubMed

    Albrecht, Felipe; List, Markus; Bock, Christoph; Lengauer, Thomas

    2016-07-08

    Large amounts of epigenomic data are generated under the umbrella of the International Human Epigenome Consortium, which aims to establish 1000 reference epigenomes within the next few years. These data have the potential to unravel the complexity of epigenomic regulation. However, their effective use is hindered by the lack of flexible and easy-to-use methods for data retrieval. Extracting region sets of interest is a cumbersome task that involves several manual steps: identifying the relevant experiments, downloading the corresponding data files and filtering the region sets of interest. Here we present the DeepBlue Epigenomic Data Server, which streamlines epigenomic data analysis as well as software development. DeepBlue provides a comprehensive programmatic interface for finding, selecting, filtering, summarizing and downloading region sets. It contains data from four major epigenome projects, namely ENCODE, ROADMAP, BLUEPRINT and DEEP. DeepBlue comes with a user manual, examples and a well-documented application programming interface (API). The latter is accessed via the XML-RPC protocol supported by many programming languages. To demonstrate usage of the API and to enable convenient data retrieval for non-programmers, we offer an optional web interface. DeepBlue can be openly accessed at http://deepblue.mpi-inf.mpg.de. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
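
    A minimal Python sketch of calling an XML-RPC service such as DeepBlue from the standard library; the endpoint path, the command name and the anonymous user key below are assumptions based on the DeepBlue documentation, so consult the documented API before relying on them.

        # Query an XML-RPC epigenomic data server (method name and key are assumed, not verified).
        import xmlrpc.client

        url = "http://deepblue.mpi-inf.mpg.de/xmlrpc"          # assumed XML-RPC endpoint path
        server = xmlrpc.client.ServerProxy(url, allow_none=True)

        # DeepBlue commands are documented as returning a (status, result) pair.
        status, genomes = server.list_genomes("anonymous_key")  # hypothetical call for illustration
        print(status, genomes[:3])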

  20. Behavioral study and design of a digital interpolator filter for wireless reconfigurable transmitters

    NASA Astrophysics Data System (ADS)

    Ferragina, V.; Frassone, A.; Ghittori, N.; Malcovati, P.; Vigna, A.

    2005-06-01

    The behavioral analysis and the design in a 0.13 μm CMOS technology of a digital interpolator filter for wireless applications are presented. The proposed block is designed to be embedded in the baseband part of a reconfigurable transmitter (WLAN 802.11a, UMTS) to operate as a sampling frequency boost between the digital signal processor (DSP) and the digital-to-analog converter (DAC). In recent designs the DAC of such transmitters usually operates at high conversion frequencies (to allow a relaxed implementation of the following analog reconstruction filter), while the DSP output flows at low frequencies (typically the Nyquist rate). Thus a block able to increase the digital data rate, like the one proposed, is needed before the DAC. For example, in the WLAN case, an interpolation factor of 4 has been used, allowing the digital data frequency to rise from 20 MHz to 80 MHz. Using a time-domain model of the TX chain, a behavioral analysis has been performed to determine the impact of the filter performance on the quality of the signal at the antenna. This study has led to the evaluation of the z-domain filter transfer function, together with the specifications concerning a finite precision implementation. A VHDL description has allowed an automatic synthesis of the circuit in a 0.13 μm CMOS technology (with a supply voltage of 1.2 V). Post-synthesis simulations have confirmed the effectiveness of the proposed study.
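
    A minimal numerical sketch of interpolation by 4 (20 MHz in, 80 MHz out): zero-stuff the input and low-pass filter to suppress the spectral images. The FIR design here is generic, not the z-domain transfer function derived in the paper.

        # Interpolate a 20 MHz-rate signal by 4 using zero-stuffing plus an FIR low-pass filter.
        import numpy as np
        from scipy import signal

        fs_in, L = 20e6, 4                        # input sample rate and interpolation factor
        t = np.arange(256) / fs_in
        x = np.sin(2 * np.pi * 2e6 * t)           # 2 MHz test tone

        up = np.zeros(len(x) * L)
        up[::L] = x                               # zero-stuffing raises the rate to 80 MHz
        taps = signal.firwin(63, cutoff=0.9 / L)  # low-pass near the original Nyquist band
        y = L * signal.lfilter(taps, 1.0, up)     # gain L restores the tone amplitude

        print("output samples:", len(y), "at", fs_in * L / 1e6, "MHz")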

  1. Automated Test Methods for XML Metadata

    DTIC Science & Technology

    2017-12-28

    Group under RCC Task TG-147. This document (Volume VI of the RCC Document 118 series) describes procedures used for evaluating XML metadata documents...including TMATS, MDL, IHAL, and DDML documents. These documents contain specifications or descriptions of artifacts and systems of importance to...the collection and management of telemetry data. The methods defined in this report provide a means of evaluating the suitability of such a metadata

  2. Aircraft

    DTIC Science & Technology

    2003-01-01

    Teal Group Corp.; Aviation Week and Space Technology, 18 March 2003, 1; Babak Minovi, "Turbine Industry Struggles with Weak Markets" ... what several executives referred to as the "perfect storm" now blowing through the aviation market. With this information many questions remain: Will

  3. Schema for Spacecraft-Command Dictionary

    NASA Technical Reports Server (NTRS)

    Laubach, Sharon; Garcia, Celina; Maxwell, Scott; Wright, Jesse

    2008-01-01

    An Extensible Markup Language (XML) schema was developed as a means of defining and describing a structure for capturing spacecraft command-definition and tracking information in a single location in a form readable by both engineers and software used to generate software for flight and ground systems. A structure defined within this schema is then used as the basis for creating an XML file that contains command definitions.

  4. Integrating digital educational content created and stored within disparate software environments: an extensible markup language (XML) solution in real-world use.

    PubMed

    Frank, M S; Schultz, T; Dreyer, K

    2001-06-01

    To provide a standardized and scalable mechanism for exchanging digital radiologic educational content between software systems that use disparate authoring, storage, and presentation technologies. Our institution uses two distinct software systems for creating educational content for radiology. Each system is used to create in-house educational content as well as commercial educational products. One system is an authoring and viewing application that facilitates the input and storage of hierarchical knowledge and associated imagery, and is capable of supporting a variety of entity relationships. This system is primarily used for the production and subsequent viewing of educational CD-ROMS. Another software system is primarily used for radiologic education on the world wide web. This system facilitates input and storage of interactive knowledge and associated imagery, delivering this content over the internet in a Socratic manner simulating in-person interaction with an expert. A subset of knowledge entities common to both systems was derived. An additional subset of knowledge entities that could be bidirectionally mapped via algorithmic transforms was also derived. An extensible markup language (XML) object model and associated lexicon were then created to represent these knowledge entities and their interactive behaviors. Forward-looking attention was exercised in the creation of the object model in order to facilitate straightforward future integration of other sources of educational content. XML generators and interpreters were written for both systems. Deriving the XML object model and lexicon was the most critical and time-consuming aspect of the project. The coding of the XML generators and interpreters required only a few hours for each environment. Subsequently, the transfer of hundreds of educational cases and thematic presentations between the systems can now be accomplished in a matter of minutes. The use of algorithmic transforms results in nearly 100% transfer of context as well as content, thus providing "presentation-ready" outcomes. The automation of knowledge exchange between dissimilar digital teaching environments magnifies the efforts of educators and enriches the learning experience for participants. XML is a powerful and useful mechanism for transferring educational content, as well as the context and interactive behaviors of such content, between disparate systems.

  5. The CMIP5 Model Documentation Questionnaire: Development of a Metadata Retrieval System for the METAFOR Common Information Model

    NASA Astrophysics Data System (ADS)

    Pascoe, Charlotte; Lawrence, Bryan; Moine, Marie-Pierre; Ford, Rupert; Devine, Gerry

    2010-05-01

    The EU METAFOR Project (http://metaforclimate.eu) has created a web-based model documentation questionnaire to collect metadata from the modelling groups that are running simulations in support of the Coupled Model Intercomparison Project - 5 (CMIP5). The CMIP5 model documentation questionnaire will retrieve information about the details of the models used, how the simulations were carried out, how the simulations conformed to the CMIP5 experiment requirements and details of the hardware used to perform the simulations. The metadata collected by the CMIP5 questionnaire will allow CMIP5 data to be compared in a scientifically meaningful way. This paper describes the life-cycle of the CMIP5 questionnaire development, which starts with relatively unstructured input from domain specialists and ends with formal XML documents that comply with the METAFOR Common Information Model (CIM). Each development step is associated with a specific tool. (1) Mind maps are used to capture information requirements from domain experts and build a controlled vocabulary, (2) a Python parser processes the XML files generated by the mind maps, (3) Django (Python) is used to generate the dynamic structure and content of the web-based questionnaire from the processed XML and the METAFOR CIM, (4) Python parsers ensure that information entered into the CMIP5 questionnaire is output as CIM-compliant XML, (5) CIM-compliant output allows automatic information capture tools to harvest questionnaire content into databases such as the Earth System Grid (ESG) metadata catalogue. This paper will focus on how Django (Python) and XML input files are used to generate the structure and content of the CMIP5 questionnaire. It will also address how the choice of development tools listed above provided a framework that enabled working scientists (who would never ordinarily interact with UML and XML) to be part of the iterative development process and ensure that the CMIP5 model documentation questionnaire reflects what scientists want to know about the models. Keywords: metadata, CMIP5, automatic information capture, tool development

  6. EOS ODL Metadata On-line Viewer

    NASA Astrophysics Data System (ADS)

    Yang, J.; Rabi, M.; Bane, B.; Ullman, R.

    2002-12-01

    We have recently developed and deployed an EOS ODL metadata on-line viewer. The EOS ODL metadata viewer is a web server that takes: 1) an EOS metadata file in Object Description Language (ODL), and 2) parameters, such as which metadata to view and what style of display to use, and returns an HTML or XML document displaying the requested metadata in the requested style. This tool was developed to address widespread complaints by the science community that the EOS Data and Information System (EOSDIS) metadata files in ODL are difficult to read, by allowing users to upload and view an ODL metadata file in different styles using a web browser. Users can choose to view all the metadata or part of the metadata, such as Collection metadata, Granule metadata, or Unsupported Metadata. Choices of display styles include 1) Web: a mouseable display with tabs and turn-down menus, 2) Outline: formatted and colored text, suitable for printing, 3) Generic: simple indented text, a direct representation of the underlying ODL metadata, and 4) None: no stylesheet is applied and the XML generated by the converter is returned directly. Not all display styles are implemented for all the metadata choices. For example, the Web style is only implemented for Collection and Granule metadata groups with known attribute fields, but not for Unsupported, Other, and All metadata. The overall strategy of the ODL viewer is to transform an ODL metadata file to a viewable HTML in two steps. The first step is to convert the ODL metadata file to XML using a Java-based parser/translator called ODL2XML. The second step is to transform the XML to HTML using stylesheets. Both operations are done on the server side. This allows a lot of flexibility in the final result, and is very portable cross-platform. Perl CGI behind the Apache web server is used to run the Java ODL2XML, and then run the results through an XSLT processor. The EOS ODL viewer can be accessed from either a PC or a Mac using Internet Explorer 5.0+ or Netscape 4.7+.

  7. BEASTling: A software tool for linguistic phylogenetics using BEAST 2

    PubMed Central

    Forkel, Robert; Kaiping, Gereon A.; Atkinson, Quentin D.

    2017-01-01

    We present a new open source software tool called BEASTling, designed to simplify the preparation of Bayesian phylogenetic analyses of linguistic data using the BEAST 2 platform. BEASTling transforms comparatively short and human-readable configuration files into the XML files used by BEAST to specify analyses. By taking advantage of Creative Commons-licensed data from the Glottolog language catalog, BEASTling allows the user to conveniently filter datasets using names for recognised language families, to impose monophyly constraints so that inferred language trees are backward compatible with Glottolog classifications, or to assign geographic location data to languages for phylogeographic analyses. Support for the emerging cross-linguistic linked data format (CLDF) permits easy incorporation of data published in cross-linguistic linked databases into analyses. BEASTling is intended to make the power of Bayesian analysis more accessible to historical linguists without strong programming backgrounds, in the hopes of encouraging communication and collaboration between those developing computational models of language evolution (who are typically not linguists) and relevant domain experts. PMID:28796784
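
    A minimal Python sketch of the config-to-XML expansion idea; the section keys and XML element names are illustrative only and do not follow real BEASTling or BEAST 2 syntax.

        # Expand a short INI-style configuration into a (very simplified) XML analysis file.
        import configparser
        import xml.etree.ElementTree as ET

        CONFIG = """[model indo_european]
        data = ielex.csv
        model = covarion
        """

        parser = configparser.ConfigParser()
        parser.read_string(CONFIG)

        beast = ET.Element("beast")
        for section in parser.sections():
            model = ET.SubElement(beast, "model", attrib={"id": section.split()[-1]})
            for key, value in parser[section].items():
                ET.SubElement(model, key).text = value

        print(ET.tostring(beast, encoding="unicode"))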

  8. BEASTling: A software tool for linguistic phylogenetics using BEAST 2.

    PubMed

    Maurits, Luke; Forkel, Robert; Kaiping, Gereon A; Atkinson, Quentin D

    2017-01-01

    We present a new open source software tool called BEASTling, designed to simplify the preparation of Bayesian phylogenetic analyses of linguistic data using the BEAST 2 platform. BEASTling transforms comparatively short and human-readable configuration files into the XML files used by BEAST to specify analyses. By taking advantage of Creative Commons-licensed data from the Glottolog language catalog, BEASTling allows the user to conveniently filter datasets using names for recognised language families, to impose monophyly constraints so that inferred language trees are backward compatible with Glottolog classifications, or to assign geographic location data to languages for phylogeographic analyses. Support for the emerging cross-linguistic linked data format (CLDF) permits easy incorporation of data published in cross-linguistic linked databases into analyses. BEASTling is intended to make the power of Bayesian analysis more accessible to historical linguists without strong programming backgrounds, in the hopes of encouraging communication and collaboration between those developing computational models of language evolution (who are typically not linguists) and relevant domain experts.

  9. The Biological Connection Markup Language: a SBGN-compliant format for visualization, filtering and analysis of biological pathways.

    PubMed

    Beltrame, Luca; Calura, Enrica; Popovici, Razvan R; Rizzetto, Lisa; Guedez, Damariz Rivero; Donato, Michele; Romualdi, Chiara; Draghici, Sorin; Cavalieri, Duccio

    2011-08-01

    Many models and analyses of signaling pathways have been proposed. However, none of them takes into account that a biological pathway is not a fixed system; instead, it depends on the organism, tissue and cell type as well as on physiological, pathological and experimental conditions. The Biological Connection Markup Language (BCML) is a format to describe, annotate and visualize pathways. BCML is able to store multiple types of information, permitting a selective view of the pathway as it exists and/or behaves in specific organisms, tissues and cells. Furthermore, BCML can be automatically converted into data formats suitable for analysis and into a fully SBGN-compliant graphical representation, making it an important tool that can be used by both computational biologists and 'wet lab' scientists. The XML schema and the BCML software suite are freely available under the LGPL for download at http://bcml.dc-atlas.net. They are implemented in Java and supported on MS Windows, Linux and OS X.

  10. XTCE and XML Database Evolution and Lessons from JWST, LandSat, and Constellation

    NASA Technical Reports Server (NTRS)

    Gal-Edd, Jonathan; Kreistle, Steven; Fatig, Curtis; Jones, Ronald

    2008-01-01

    The database organizations within three different NASA projects have advanced current practices by creating database synergy between the various spacecraft life cycle stakeholders and educating users in the benefits of the Consultative Committee for Space Data Systems (CCSDS) XML Telemetry and Command Exchange (XTCE) format. The combination of XML for managing program data and CCSDS XTCE for exchange is a robust approach that will meet all user requirements using standards and non-proprietary tools. COTS tools for XTCE/XML are very wide and varied. Combining various low-cost and free tools can be more expensive in the long run than choosing a more expensive COTS tool that meets all the needs. This was especially important when deploying in 32 remote sites with no need for licenses. A common mission XTCE/XML format between dissimilar systems is possible and is not difficult. Command XTCE/XML is more complex than telemetry, and the use of XTCE/XML metadata to describe pages and scripts is needed due to the proprietary nature of most current ground systems. Other mission and science products such as spacecraft loads, science image catalogs, and mission operation procedures can all be described with XML as well to increase their flexibility as systems evolve and change. Figure 10 is an example of a spacecraft table load. The word is out and the XTCE community is growing. The first XTCE user group was held in October and, in addition to ESA/ESOC, SC02000, and CNES, identified several systems based on XTCE. The second XTCE user group is scheduled for March 10, 2008, with LDMC and others joining. As the experience with XTCE grows and the user community receives the promised benefits of using XTCE and XML, the interest is growing fast.

  11. XSemantic: An Extension of LCA Based XML Semantic Search

    NASA Astrophysics Data System (ADS)

    Supasitthimethee, Umaporn; Shimizu, Toshiyuki; Yoshikawa, Masatoshi; Porkaew, Kriengkrai

    One of the most convenient ways to query XML data is a keyword search because it does not require any knowledge of XML structure or learning a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose an XML semantic search based on keywords called XSemantic. On the one hand, we give three definitions to complete the query semantics. Firstly, through semantic term expansion, our system is made robust to ambiguous keywords by using a domain ontology. Secondly, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Thirdly, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that the search results with higher relevance are presented to the users first. On the other hand, in the LCA and proximity search approaches, we investigated the problem of how much information is included in the search results. Therefore, we introduce the notion of the Lowest Common Element Ancestor (LCEA) and define a simple rule without any requirement on schema information such as the DTD or XML Schema. The first experiment indicated that XSemantic not only properly infers the return information but also generates compact meaningful results. Additionally, the benefits of our proposed semantics are demonstrated by the second experiment.
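
    A minimal sketch of the basic LCA idea that XSemantic builds on, using Python's standard xml.etree.ElementTree and a toy document (element names and keywords are illustrative, not taken from the paper): find one node matching each keyword and return their lowest common ancestor as the answer root.

        # Sketch of LCA-based XML keyword search (toy example; not the XSemantic code).
        import xml.etree.ElementTree as ET

        DOC = """
        <bib>
          <article>
            <title>XML stream filtering</title>
            <author>Smith</author>
          </article>
          <article>
            <title>Boosting methods</title>
            <author>Jones</author>
          </article>
        </bib>
        """

        def build_parents(root):
            """Map each element to its parent so ancestor paths can be walked."""
            return {child: parent for parent in root.iter() for child in parent}

        def path_to_root(node, parents):
            path = [node]
            while node in parents:
                node = parents[node]
                path.append(node)
            return path

        def lca(nodes, parents):
            """Lowest common ancestor of a list of elements."""
            paths = [list(reversed(path_to_root(n, parents))) for n in nodes]
            common = None
            for level in zip(*paths):
                if all(e is level[0] for e in level):
                    common = level[0]
                else:
                    break
            return common

        root = ET.fromstring(DOC)
        parents = build_parents(root)
        # One matching node per keyword (first occurrence, for simplicity).
        matches = [next(e for e in root.iter() if e.text and kw in e.text)
                   for kw in ("filtering", "Smith")]
        print(lca(matches, parents).tag)   # -> 'article', the subtree connecting both keywords

    XSemantic refines this baseline with term expansion, inferred return nodes and semantic ranking, none of which is reproduced here.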

  12. Comparing Emerging XML Based Formats from a Multi-discipline Perspective

    NASA Astrophysics Data System (ADS)

    Sawyer, D. M.; Reich, L. I.; Nikhinson, S.

    2002-12-01

    This paper analyzes the similarities and differences among several examples of an emerging generation of Scientific Data Formats that are based on XML technologies. Some of the factors evaluated include the goals of these efforts, the data models and XML technologies used, and the maturity of currently available software. This paper then investigates the practicality of developing a single set of structural data objects and basic scientific concepts, such as units, that could be used across discipline boundaries and extended by disciplines and missions to create Scientific Data Formats for their communities. This analysis is partly based on an effort sponsored by the ESDIS office at GSFC to compare the Earth Science Markup Language (ESML) and the eXtensible Data Format (XDF), two members of this new generation of XML based Data Description Languages that have been developed by NASA funded efforts in recent years. This paper adds FITSML and potentially CDFML to the list of XML based Scientific Data Formats discussed. This paper draws heavily on a Formats Evolution Process Committee (http://ssdoo.gsfc.nasa.gov/nost/fep/) draft white paper primarily developed by Lou Reich, Mike Folk and Don Sawyer to assist the Space Science community in understanding Scientific Data Formats. One of the primary conclusions of that paper is that a scientific data format object model should be examined along two basic axes. The first is the complexity of the computer/mathematical data types supported and the second is the level of scientific domain specialization incorporated. This paper also discusses several of the issues that affect the decision on whether to implement a discipline or project specific Scientific Data Format as a formal extension of a general purpose Scientific Data Format or to implement the APIs independently.

  13. State of the art techniques for preservation and reuse of hard copy electrocardiograms.

    PubMed

    Lobodzinski, Suave M; Teppner, Ulrich; Laks, Michael

    2003-01-01

    Baseline examinations and periodic reexaminations in longitudinal population studies, together with ongoing surveillance for morbidity and mortality, provide unique opportunities for seeking ways to enhance the value of electrocardiography (ECG) as an inexpensive and noninvasive tool for prognosis and diagnosis. We used a newly developed optical ECG waveform recognition (OEWR) technique capable of extracting raw waveform data from legacy hard copy ECG recordings. Hardcopy ECG recordings were scanned and processed by the OEWR algorithm. The extracted ECG datasets were formatted into a newly proposed, vendor-neutral ECG XML data format. An Oracle database was used as a repository for ECG records in XML format. The proposed technique for XML encapsulation of OEWR-processed hard copy records resulted in an efficient method for inclusion of paper ECG records into research databases, thus providing for their preservation, reuse and accession.
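
    A minimal sketch of the encapsulation step, assuming Python's standard library and entirely hypothetical element names (the vendor-neutral schema proposed in the paper is not reproduced here): digitized samples recovered from a scanned trace are wrapped in a small XML envelope ready for database storage.

        # Sketch: wrap digitized ECG samples in a simple XML envelope.
        # Element and attribute names are hypothetical, not the schema proposed in the paper.
        import xml.etree.ElementTree as ET

        def ecg_to_xml(patient_id, lead_name, sample_rate_hz, samples_uv):
            rec = ET.Element("ecgRecord", patient=patient_id)
            lead = ET.SubElement(rec, "lead", name=lead_name,
                                 sampleRateHz=str(sample_rate_hz), unit="uV")
            lead.text = " ".join(str(v) for v in samples_uv)
            return ET.tostring(rec, encoding="unicode")

        # Example: a few samples recovered by an OEWR-like step from a scanned trace.
        print(ecg_to_xml("P0001", "II", 500, [12, 15, 20, 180, 40, 10]))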

  14. Clustering XML Documents Using Frequent Subtrees

    NASA Astrophysics Data System (ADS)

    Kutty, Sangeetha; Tran, Tien; Nayak, Richi; Li, Yuefeng

    This paper presents an experimental study conducted over the INEX 2008 Document Mining Challenge corpus using both the structure and the content of XML documents for clustering them. The concise common substructures known as the closed frequent subtrees are generated using the structural information of the XML documents. The closed frequent subtrees are then used to extract the constrained content from the documents. A matrix containing the term distribution of the documents in the dataset is developed using the extracted constrained content. The k-way clustering algorithm is applied to the matrix to obtain the required clusters. In spite of the large number of documents in the INEX 2008 Wikipedia dataset, the proposed frequent subtree-based clustering approach was successful in clustering the documents. This approach significantly reduces the dimensionality of the terms used for clustering without much loss in accuracy.

  15. Using XML Configuration-Driven Development to Create a Customizable Ground Data System

    NASA Technical Reports Server (NTRS)

    Nash, Brent; DeMore, Martha

    2009-01-01

    The Mission Data Processing and Control Subsystem (MPCS) is being developed as a multi-mission Ground Data System with the Mars Science Laboratory (MSL) as the first fully supported mission. MPCS is a fully featured, Java-based Ground Data System (GDS) for telecommand and telemetry processing based on Configuration-Driven Development (CDD). The eXtensible Markup Language (XML) is the ideal language for CDD because it is easily readable and editable by all levels of users and is also backed by a World Wide Web Consortium (W3C) standard and numerous powerful processing tools that make it uniquely flexible. The CDD approach adopted by MPCS minimizes changes to compiled code by using XML to create a series of configuration files that provide both coarse and fine-grained control over all aspects of GDS operation.
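
    A minimal illustration of configuration-driven development with XML, assuming Python and a made-up configuration layout (MPCS's actual configuration schema is not described in this abstract): the behavior of the processing code is chosen from the file rather than from compiled code.

        # Sketch of XML configuration-driven behavior (file layout invented for illustration).
        import xml.etree.ElementTree as ET

        CONFIG = """
        <gdsConfig mission="demo">
          <telemetry frameLengthBytes="1115" checksum="crc16"/>
          <downlink port="5010" protocol="tcp"/>
        </gdsConfig>
        """

        def load_config(xml_text):
            root = ET.fromstring(xml_text)
            return {
                "mission": root.get("mission"),
                "frame_length": int(root.find("telemetry").get("frameLengthBytes")),
                "checksum": root.find("telemetry").get("checksum"),
                "downlink": (root.find("downlink").get("protocol"),
                             int(root.find("downlink").get("port"))),
            }

        # The processing code reads everything it needs from the configuration,
        # so changing the frame length or the port requires no recompilation.
        print(load_config(CONFIG))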

  16. A Framework for Resilient Remote Monitoring

    DTIC Science & Technology

    2014-08-01

    of low-level observables are available, audited, and recorded. This establishes the need for a remote monitoring framework that can integrate with... Security, WS-Policy, SAML, XML Signature, and XML Encryption. Pearson Higher Education, 2004. [3] OMG, "Common Secure Interoperability Protocol... www.darpa.mil/Our_Work/I2O/Programs/Integrated_Cyber_Analysis_System_%28ICAS%29.aspx. [8] D. Miller and B. Pearson, Security information and event man...

  17. XTCE: XML Telemetry and Command Exchange Tutorial, XTCE Version 1

    NASA Technical Reports Server (NTRS)

    Rice, Kevin; Kizzort, Brad

    2008-01-01

    These presentation slides are a tutorial on XML Telemetry and Command Exchange (XTCE). The goal of XTCE is to provide an industry standard mechanism for describing telemetry and command streams (particularly from satellites). It will lower cost and increase validation over traditional formats, and support exchange of native formats. XTCE is designed to describe bit streams that are typical of telemetry and command in the historic space domain.

  18. Toward XML Representation of NSS Simulation Scenario for Mission Scenario Exchange Capability

    DTIC Science & Technology

    2003-09-01

    app.html Deitel, H. M., Deitel, P. J., Nieto, T. R., Lin, T. M., Sadhu, P. (2001). XML How to Program. Upper Saddle River: Prentice Hall... Combat XXI Program... 2. Transition NSS to a Java Environment... 3. Shift to an... STATEMENT The Naval Simulation System (NSS) is a powerful computer program developed by the Navy to provide a force-on-force modeling and simulation

  19. Applying Semantic Web Concepts to Support Net-Centric Warfare Using the Tactical Assessment Markup Language (TAML)

    DTIC Science & Technology

    2006-06-01

    SPARQL: SPARQL Protocol and RDF Query Language; SQL: Structured Query Language; SUMO: Suggested Upper Merged Ontology; SW... Query optimization algorithms are implemented in the Pellet reasoner in order to ensure querying a knowledge base is efficient. These algorithms... memory as a treelike structure in order for the data to be queried. XML Query (XQuery) is the standard language used when querying XML

  20. Chemical markup, XML, and the world wide web. 6. CMLReact, an XML vocabulary for chemical reactions.

    PubMed

    Holliday, Gemma L; Murray-Rust, Peter; Rzepa, Henry S

    2006-01-01

    A set of components (CMLReact) for managing chemical and biochemical reactions has been added to CML. These can be combined to support most of the strategies for the formal representation of reactions. The elements, attributes, and types are formally defined as XMLSchema components, and their semantics are developed. New syntax and semantics in CML are reported and illustrated with 10 examples.

  1. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan

    PubMed Central

    Kinjo, Akira R.; Yamashita, Reiko; Nakamura, Haruki

    2010-01-01

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served for the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in the way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/ PMID:20798081

  2. PDBj Mine: design and implementation of relational database interface for Protein Data Bank Japan.

    PubMed

    Kinjo, Akira R; Yamashita, Reiko; Nakamura, Haruki

    2010-08-25

    This article is a tutorial for PDBj Mine, a new database and its interface for Protein Data Bank Japan (PDBj). In PDBj Mine, data are loaded from files in the PDBMLplus format (an extension of PDBML, PDB's canonical XML format, enriched with annotations), which are then served for the user of PDBj via the worldwide web (WWW). We describe the basic design of the relational database (RDB) and web interfaces of PDBj Mine. The contents of PDBMLplus files are first broken into XPath entities, and these paths and data are indexed in the way that reflects the hierarchical structure of the XML files. The data for each XPath type are saved into the corresponding relational table that is named as the XPath itself. The generation of table definitions from the PDBMLplus XML schema is fully automated. For efficient search, frequently queried terms are compiled into a brief summary table. Casual users can perform simple keyword search, and 'Advanced Search' which can specify various conditions on the entries. More experienced users can query the database using SQL statements which can be constructed in a uniform manner. Thus, PDBj Mine achieves a combination of the flexibility of XML documents and the robustness of the RDB. Database URL: http://www.pdbj.org/
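
    A toy sketch of the indexing idea described in the two records above, breaking an XML document into XPath entities and storing (path, value) rows, using Python's standard library and SQLite. The real PDBj Mine generates one relational table per XPath type from the PDBMLplus schema; the single generic table here is a simplification.

        # Sketch: shred an XML document into (xpath, value) rows.
        import sqlite3
        import xml.etree.ElementTree as ET

        DOC = "<entry id='1ABC'><title>Example</title><cell><a>61.2</a><b>61.2</b></cell></entry>"

        def shred(elem, path=""):
            """Yield (xpath-like path, value) pairs for element text and attributes."""
            here = f"{path}/{elem.tag}"
            if elem.text and elem.text.strip():
                yield here, elem.text.strip()
            for key, value in elem.attrib.items():
                yield f"{here}/@{key}", value
            for child in elem:
                yield from shred(child, here)

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE xpath_data (path TEXT, value TEXT)")
        conn.executemany("INSERT INTO xpath_data VALUES (?, ?)", shred(ET.fromstring(DOC)))
        for row in conn.execute("SELECT * FROM xpath_data WHERE path LIKE '%/cell/%'"):
            print(row)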

  3. ISO, FGDC, DIF and Dublin Core - Making Sense of Metadata Standards for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Jones, P. R.; Ritchey, N. A.; Peng, G.; Toner, V. A.; Brown, H.

    2014-12-01

    Metadata standards provide common definitions of metadata fields for information exchange across user communities. Despite the broad adoption of metadata standards for Earth science data, there are still heterogeneous and incompatible representations of information due to differences between the many standards in use and how each standard is applied. Federal agencies are required to manage and publish metadata in different metadata standards and formats for various data catalogs. In 2014, the NOAA National Climatic Data Center (NCDC) managed metadata for its scientific datasets in ISO 19115-2 in XML, GCMD Directory Interchange Format (DIF) in XML, DataCite Schema in XML, Dublin Core in XML, and Data Catalog Vocabulary (DCAT) in JSON, with more standards and profiles of standards planned. Of these standards, the ISO 19115-series metadata is the most complete and feature-rich, and for this reason it is used by NCDC as the source for the other metadata standards. We will discuss the capabilities of metadata standards and how these standards are being implemented to document datasets. Successful implementations include developing translations and displays using XSLTs, creating links to related data and resources, documenting dataset lineage, and establishing best practices. Benefits, gaps, and challenges will be highlighted with suggestions for improved approaches to metadata storage and maintenance.
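
    A small sketch of the translation approach mentioned above (XSLTs between metadata standards), assuming the third-party lxml package and a deliberately toy stylesheet; real ISO 19115 to DIF transformations are far larger and more careful than this.

        # Toy XSLT translation between two metadata vocabularies (illustrative only).
        from lxml import etree

        SOURCE = etree.XML("<record><title>Sea surface temperature</title></record>")
        XSLT = etree.XML("""
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/record">
            <DIF><Entry_Title><xsl:value-of select="title"/></Entry_Title></DIF>
          </xsl:template>
        </xsl:stylesheet>
        """)

        transform = etree.XSLT(XSLT)
        # Prints a DIF-style record built from the source title.
        print(str(transform(SOURCE)))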

  4. Cancer diagnosis marker extraction for soft tissue sarcomas based on gene expression profiling data by using projective adaptive resonance theory (PART) filtering method

    PubMed Central

    Takahashi, Hiro; Nemoto, Takeshi; Yoshida, Teruhiko; Honda, Hiroyuki; Hasegawa, Tadashi

    2006-01-01

    Background Recent advances in genome technologies have provided an excellent opportunity to determine the complete biological characteristics of neoplastic tissues, resulting in improved diagnosis and selection of treatment. To accomplish this objective, it is important to establish a sophisticated algorithm that can deal with large quantities of data such as gene expression profiles obtained by DNA microarray analysis. Results Previously, we developed the projective adaptive resonance theory (PART) filtering method as a gene filtering method. This is one of the clustering methods that can select specific genes for each subtype. In this study, we applied the PART filtering method to analyze microarray data that were obtained from soft tissue sarcoma (STS) patients for the extraction of subtype-specific genes. The performance of the filtering method was evaluated by comparison with other widely used methods, such as signal-to-noise, significance analysis of microarrays, and nearest shrunken centroids. In addition, various combinations of filtering and modeling methods were used to extract essential subtype-specific genes. The combination of the PART filtering method and boosting – the PART-BFCS method – showed the highest accuracy. Seven genes among the 15 genes that are frequently selected by this method – MIF, CYFIP2, HSPCB, TIMP3, LDHA, ABR, and RGS3 – are known prognostic marker genes for other tumors. These genes are candidate marker genes for the diagnosis of STS. Correlation analysis was performed to extract marker genes that were not selected by PART-BFCS. Sixteen genes among those extracted are also known prognostic marker genes for other tumors, and they could be candidate marker genes for the diagnosis of STS. Conclusion A two-step procedure, consisting of the PART-BFCS method followed by the correlation analysis, was proposed. The results suggest that novel diagnostic and therapeutic targets for STS can be extracted by a procedure that includes the PART filtering method. PMID:16948864

  5. Dynamic tunable notch filters for the Antarctic Impulsive Transient Antenna (ANITA)

    NASA Astrophysics Data System (ADS)

    Allison, P.; Banerjee, O.; Beatty, J. J.; Connolly, A.; Deaconu, C.; Gordon, J.; Gorham, P. W.; Kovacevich, M.; Miki, C.; Oberla, E.; Roberts, J.; Rotter, B.; Stafford, S.; Tatem, K.; Batten, L.; Belov, K.; Besson, D. Z.; Binns, W. R.; Bugaev, V.; Cao, P.; Chen, C.; Chen, P.; Chen, Y.; Clem, J. M.; Cremonesi, L.; Dailey, B.; Dowkontt, P. F.; Hsu, S.; Huang, J.; Hupe, R.; Israel, M. H.; Kowalski, J.; Lam, J.; Learned, J. G.; Liewer, K. M.; Liu, T. C.; Ludwig, A. B.; Matsuno, S.; Mulrey, K.; Nam, J.; Nichol, R. J.; Novikov, A.; Prohira, S.; Rauch, B. F.; Ripa, J.; Romero-Wolf, A.; Russell, J.; Saltzberg, D.; Seckel, D.; Shiao, J.; Stockham, J.; Stockham, M.; Strutt, B.; Varner, G. S.; Vieregg, A. G.; Wang, S.; Wissel, S. A.; Wu, F.; Young, R.

    2018-06-01

    The Antarctic Impulsive Transient Antenna (ANITA) is a NASA long-duration balloon experiment with the primary goal of detecting ultra-high-energy (> 10^18 eV) neutrinos via the Askaryan Effect. The fourth ANITA mission, ANITA-IV, recently flew from Dec 2 to Dec 29, 2016. For the first time, the Tunable Universal Filter Frontend (TUFF) boards were deployed for mitigation of narrow-band, anthropogenic noise with tunable, switchable notch filters. The TUFF boards also performed second-stage amplification by approximately 45 dB to boost the ∼μV-level radio frequency (RF) signals to ∼mV-level for digitization, and supplied power via bias tees to the first-stage, antenna-mounted amplifiers. The other major change in signal processing in ANITA-IV is the resurrection of the 90° hybrids deployed previously in ANITA-I, in the trigger system, although in this paper we focus on the TUFF boards. During the ANITA-IV mission, the TUFF boards were successfully operated throughout the flight. They contributed to a factor of 2.8 higher total instrument livetime on average in ANITA-IV compared to ANITA-III due to reduction of narrow-band, anthropogenic noise before a trigger decision is made.

  6. Joint Battlespace Infosphere: Information Management Within a C2 Enterprise

    DTIC Science & Technology

    2005-06-01

    using. In version 1.2, we support both MySQL and Oracle as underlying implementations where the XML metadata schema is mapped into relational tables in... Identity Servers, Role-Based Access Control, and Policy Representation – Databases: Oracle, MySQL, TigerLogic, Berkeley XML DB... Instrumentation Services... converted to SQL for execution. Invocations are then forwarded to the appropriate underlying IOR core components that have the responsibility of issuing

  7. eMontage: An Architecture for Rapid Integration of Situational Awareness Data at the Edge

    DTIC Science & Technology

    2013-05-01

    Request Response Interaction: Android Client, Dispatcher Servlet, Spring MVC Controller, Camel Producer Template, Camel Route, Remote Data Service (REST)... SATURN 2013, © 2013 Carnegie Mellon University... Publish Subscribe Interaction: Android Client, Dispatcher Servlet, Spring MVC Controller, Remote Data... Parse XML into a single Java XML Document object... Software Engineering

  8. C3I and Modelling and Simulation (M&S) Interoperability

    DTIC Science & Technology

    2004-03-01

    customised Open Source products. The technical implementation is based on the use of the eXtensible Markup Language (XML) and Python. XML is developed... to structure, store and send information. The language focuses on the description of data. Python is a portable, interpreted, object-oriented... programming language. A huge variety of usable Open Source projects have been issued by the Python community. 3.1 Phase 1: Feasibility Studies Phase 1 was

  9. Chemical Markup, XML, and the World Wide Web. 7. CMLSpect, an XML vocabulary for spectral data.

    PubMed

    Kuhn, Stefan; Helmus, Tobias; Lancashire, Robert J; Murray-Rust, Peter; Rzepa, Henry S; Steinbeck, Christoph; Willighagen, Egon L

    2007-01-01

    CMLSpect is an extension of Chemical Markup Language (CML) for managing spectral and other analytical data. It is designed to be flexible enough to contain a wide variety of spectral data. The paper describes the CMLElements used and gives practical examples for common types of spectra. In addition it demonstrates how different views of the data can be expressed and what problems still exist.

  10. CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 10, October 2008

    DTIC Science & Technology

    2008-10-01

    proprietary modeling offerings, there is considerable convergence around Business Process Modeling Notation (BPMN). The research also found strong... support across vendors for the Business Process Execution Language standard, though there is also emerging support for direct execution of BPMN through... the use of the XML Process Definition Language, an XML serialization of BPMN. Many vendors also provide the needed monitoring of those processes at

  11. Internet-based data interchange with XML

    NASA Astrophysics Data System (ADS)

    Fuerst, Karl; Schmidt, Thomas

    2000-12-01

    In this paper, a complete concept for Internet Electronic Data Interchange (EDI) - a well-known buzzword in the area of logistics and supply chain management to enable the automation of the interactions between companies and their partners - using XML (eXtensible Markup Language) will be proposed. This approach is based on Internet and XML, because the implementation of traditional EDI (e.g. EDIFACT, ANSI X.12) is mostly too costly for small and medium sized enterprises, which want to integrate their suppliers and customers in a supply chain. The paper will also present the results of the implementation of a prototype for such a system, which has been developed for an industrial partner to improve the current situation of parts delivery. The main functions of this system are an early warning system to detect problems during the parts delivery process as early as possible, and a transport following system to pursue the transportation.
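
    A minimal, entirely hypothetical example of the kind of XML business document such a system might exchange, built with Python's standard library; the prototype's actual message vocabulary is not described in the abstract, so all element and field names here are invented.

        # Sketch of a small XML delivery-notice message for Internet EDI (field names invented).
        import xml.etree.ElementTree as ET

        notice = ET.Element("deliveryNotice", number="2024-0042")
        ET.SubElement(notice, "supplier").text = "ACME Components"
        item = ET.SubElement(notice, "item", partNo="BRK-17")
        ET.SubElement(item, "quantity").text = "250"
        ET.SubElement(item, "eta").text = "2000-12-18T08:00"

        message = ET.tostring(notice, encoding="unicode")
        # The receiving side parses the message and could raise an early warning
        # if the promised arrival time slips past the production schedule.
        parsed = ET.fromstring(message)
        print(parsed.find("item/eta").text)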

  12. CytometryML with DICOM and FCS

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.

    2018-02-01

    Abstract: The Flow Cytometry Standard, FCS, and the Digital Imaging and Communications in Medicine standard, DICOM, are based on extensive, superb domain knowledge. However, they are isolated systems: they do not take advantage of data structures, require special programs to read and write the data, and lack the capability to interoperate or work with other standards; FCS also lacks many of the datatypes necessary for clinical laboratory data. The large overlap between imaging and flow cytometry provides strong evidence that both modalities should be covered by the same standard. Method: The XML Schema Definition Language, XSD 1.1, was used to translate FCS and/or DICOM objects. A MIFlowCyt file was tested with published values. Results: Previously, a significant part of an XML standard based upon a combination of FCS and DICOM has been implemented and validated with MIFlowCyt data. Strongly typed translations of FCS keywords have been constructed in XML. These keywords contain links to their DICOM and FCS equivalents.

  13. Ordered Backward XPath Axis Processing against XML Streams

    NASA Astrophysics Data System (ADS)

    Nizar M., Abdul; Kumar, P. Sreenivasa

    Processing of backward XPath axes against XML streams is challenging for two reasons: (i) data is not cached for future access, and (ii) the query contains steps specifying navigation to data that has already passed by. While there are some attempts to process the parent and ancestor axes, there are very few proposals to process the ordered backward axes, namely preceding and preceding-sibling. For ordered backward axis processing, the algorithm, in addition to overcoming the limitations on data availability, has to take care of the ordering constraints imposed by these axes. In this paper, we show how ordered backward axes can be effectively represented using forward constraints. We then discuss an algorithm for XML stream processing of XPath expressions containing ordered backward axes. The algorithm uses a layered cache structure to systematically accumulate query results. Our experiments show that the new algorithm gains a remarkable speed-up over the existing algorithm without compromising on buffer-space requirements.
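
    A streaming toy sketch of the general idea of turning a backward axis into a forward constraint, assuming Python's xml.etree.iterparse and an invented query (//item/preceding-sibling::note); this is not the paper's layered-cache algorithm, only an illustration of answering a backward axis without looking backwards, by buffering the siblings already seen under each open parent.

        # Streaming sketch: answer //item/preceding-sibling::note in one forward pass.
        import io
        import xml.etree.ElementTree as ET

        DOC = b"""<doc>
          <sec>
            <note>a</note><note>b</note><item>first</item><note>c</note><item>second</item>
          </sec>
        </doc>"""

        sibling_notes = [[]]          # one buffer per currently open parent element
        for event, elem in ET.iterparse(io.BytesIO(DOC), events=("start", "end")):
            if event == "start":
                if elem.tag == "item":
                    # preceding-sibling::note = notes already buffered under the same parent
                    print("item found; preceding-sibling::note =", list(sibling_notes[-1]))
                sibling_notes.append([])      # open a buffer for this element's own children
            else:  # end
                sibling_notes.pop()
                if elem.tag == "note":
                    sibling_notes[-1].append(elem.text)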

  14. ART-ML - a novel XML format for the biological procedures modeling and the representation of blood flow simulation.

    PubMed

    Karvounis, E C; Tsakanikas, V D; Fotiou, E; Fotiadis, D I

    2010-01-01

    The paper proposes a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and the reuse of models of blood flow, mass transport and plaque formation, exported by ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in easy-to-handle 3D representations. The platform incorporates efficient algorithms which are able to perform blood flow simulation. In addition, atherosclerotic plaque development is estimated taking into account morphological, flow and genetic factors. ART-ML provides an XML format that enables the representation and management of embedded models within the ARTool platform and the storage and interchange of well-defined information. This approach facilitates model creation, model exchange, model reuse and result evaluation.

  15. HDF-EOS Web Server

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A shell script has been written as a means of automatically making HDF-EOS-formatted data sets available via the World Wide Web. ("HDF-EOS" and variants thereof are defined in the first of the two immediately preceding articles.) The shell script chains together some software tools developed by the Data Usability Group at Goddard Space Flight Center to perform the following actions: Extract metadata in Object Definition Language (ODL) from an HDF-EOS file, Convert the metadata from ODL to Extensible Markup Language (XML), Reformat the XML metadata into human-readable Hypertext Markup Language (HTML), Publish the HTML metadata and the original HDF-EOS file to a Web server and an Open-source Project for a Network Data Access Protocol (OPeN-DAP) server computer, and Reformat the XML metadata and submit the resulting file to the EOS Clearinghouse, which is a Web-based metadata clearinghouse that facilitates searching for, and exchange of, Earth-Science data.

  16. XML Based Scientific Data Management Facility

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Zubair, M.; Ziebartt, John (Technical Monitor)

    2001-01-01

    The World Wide Web Consortium has developed the Extensible Markup Language (XML) to support the building of better information management infrastructures. The scientific computing community, realizing the benefits of XML, has designed markup languages for scientific data. In this paper, we propose an XML-based scientific data management facility, XDMF. The project is motivated by the fact that even though a lot of scientific data is being generated, it is not being shared because of lack of standards and infrastructure support for discovering and transforming the data. The proposed data management facility can be used to discover the scientific data itself, the transformation functions, and also for applying the required transformations. We have built a prototype system of the proposed data management facility that can work on different platforms. We have implemented the system using Java and the Apache XSLT engine Xalan. To support remote data and transformation functions, we had to extend the XSLT specification and the Xalan package.

  17. JCMT observatory control system

    NASA Astrophysics Data System (ADS)

    Rees, Nicholas P.; Economou, Frossie; Jenness, Tim; Kackley, Russell D.; Walther, Craig A.; Dent, William R. F.; Folger, Martin; Gao, Xiaofeng; Kelly, Dennis; Lightfoot, John F.; Pain, Ian; Hovey, Gary J.; Redman, Russell O.

    2002-12-01

    The JCMT, the world's largest sub-mm telescope, has had essentially the same VAX/VMS based control system since it was commissioned. For the next generation of instrumentation we are implementing a new Unix/VxWorks based system, based on the successful ORAC system that was recently released on UKIRT. The system is now entering the integration and testing phase. This paper gives a broad overview of the system architecture and includes some discussion on the choices made. (Other papers in this conference cover some areas in more detail). The basic philosophy is to control the sub-systems with a small and simple set of commands, but passing detailed XML configuration descriptions along with the commands to give the flexibility required. The XML files can be passed between various layers in the system without interpretation, and so simplify the design enormously. This has all been made possible by the adoption of an Observation Preparation Tool, which essentially serves as an intelligent XML editor.

  18. Non-invasive lightweight integration engine for building EHR from autonomous distributed systems.

    PubMed

    Angulo, Carlos; Crespo, Pere; Maldonado, José A; Moner, David; Pérez, Daniel; Abad, Irene; Mandingorra, Jesús; Robles, Montserrat

    2007-12-01

    In this paper we describe Pangea-LE, a message-oriented lightweight data integration engine that allows homogeneous and concurrent access to clinical information from dispersed and heterogeneous data sources. The engine extracts the information and passes it to the requesting client applications in a flexible XML format. The XML response message can be formatted on demand by appropriate Extensible Stylesheet Language (XSL) transformations in order to meet the needs of client applications. We also present a real deployment in a hospital where Pangea-LE collects and generates an XML view of all the available patient clinical information. The information is presented to healthcare professionals in an Electronic Health Record (EHR) viewer Web application with patient search and EHR browsing capabilities. Deployment in a real setting has been a success due to the non-invasive nature of Pangea-LE, which respects the existing information systems.

  19. Bottom-Up Evaluation of Twig Join Pattern Queries in XML Document Databases

    NASA Astrophysics Data System (ADS)

    Chen, Yangjun

    Since the extensible markup language XML emerged as a new standard for information representation and exchange on the Internet, the problem of storing, indexing, and querying XML documents has been among the major issues of database research. In this paper, we study the twig pattern matching and discuss a new algorithm for processing ordered twig pattern queries. The time complexity of the algorithm is bounded by O(|D|·|Q| + |T|·leaf_Q) and its space overhead by O(leaf_T·leaf_Q), where T stands for a document tree, Q for a twig pattern, and D is the largest data stream associated with a node q of Q, which contains the database nodes that match the node predicate at q; leaf_T (leaf_Q) denotes the number of leaf nodes of T (resp. Q). In addition, the algorithm can be adapted to an indexing environment with XB-trees being used.

  20. A Simple XML Producer-Consumer Protocol

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy

    2000-01-01

    This document describes a simple XML-based protocol that can be used for producers of events to communicate with consumers of events. The protocol described here is not meant to be the most efficient protocol, the most logical protocol, or the best protocol in any way. This protocol was defined quickly and its intent is to give us a reasonable protocol that we can implement relatively easily and then use to gain experience in distributed event services. This experience will help us evaluate proposals for event representations, XML-based encoding of information, and communication protocols. The next section of this document describes how we represent events in this protocol and then defines the two events that we choose to use for our initial experiments. These definitions are made by example so that they are informal and easy to understand. The following section then proceeds to define the producer-consumer protocol we have agreed upon for our initial experiments.
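
    A minimal sketch of the producer/consumer exchange of a simple XML event, in Python; the memo defines its own events by example, and they are not reproduced here, so the event type and field names below are invented.

        # Sketch of a producer/consumer exchange of a simple XML event (names invented).
        import time
        import xml.etree.ElementTree as ET

        def produce_event(kind, **fields):
            ev = ET.Element("event", type=kind, timestamp=str(time.time()))
            for name, value in fields.items():
                ET.SubElement(ev, name).text = str(value)
            return ET.tostring(ev, encoding="unicode")

        def consume_event(message):
            ev = ET.fromstring(message)
            return ev.get("type"), {child.tag: child.text for child in ev}

        # A producer emits an event; a consumer (here, the same process) decodes it.
        wire = produce_event("job.started", host="node07", jobId="1234")
        print(consume_event(wire))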

  1. Techniques for integrating ‐omics data

    PubMed Central

    Akula, Siva Prasad; Miriyala, Raghava Naidu; Thota, Hanuman; Rao, Allam Appa; Gedela, Srinubabu

    2009-01-01

    The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving the productivity of stored data. Through proper data integration tools and algorithms, researchers may correlate relationships that enable them to make better and faster decisions. Data integration is essential for the present -omics community because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one of the important techniques often used is XML. XML is used to provide a document markup language that is easier to learn, retrieve, store and transmit. It is semantically richer than HTML. Here, we describe bio-warehousing, database federation and controlled vocabularies, highlighting the application of XML to store, migrate and validate -omics data. PMID:19255651

  2. Techniques for integrating -omics data.

    PubMed

    Akula, Siva Prasad; Miriyala, Raghava Naidu; Thota, Hanuman; Rao, Allam Appa; Gedela, Srinubabu

    2009-01-01

    The challenge for -omics research is to tackle the problem of fragmentation of knowledge by integrating several sources of heterogeneous information into a coherent entity. It is widely recognized that successful data integration is one of the keys to improving the productivity of stored data. Through proper data integration tools and algorithms, researchers may correlate relationships that enable them to make better and faster decisions. Data integration is essential for the present -omics community because -omics data are currently spread worldwide in a wide variety of formats. These formats can be integrated and migrated across platforms through different techniques, and one of the important techniques often used is XML. XML is used to provide a document markup language that is easier to learn, retrieve, store and transmit. It is semantically richer than HTML. Here, we describe bio-warehousing, database federation and controlled vocabularies, highlighting the application of XML to store, migrate and validate -omics data.

  3. An XML-based system for the flexible classification and retrieval of clinical practice guidelines.

    PubMed Central

    Ganslandt, T.; Mueller, M. L.; Krieglstein, C. F.; Senninger, N.; Prokosch, H. U.

    2002-01-01

    Beneficial effects of clinical practice guidelines (CPGs) have not yet reached expectations due to limited routine adoption. Electronic distribution and reminder systems have the potential to overcome implementation barriers. Existing electronic CPG repositories like the National Guideline Clearinghouse (NGC) provide individual access but lack standardized computer-readable interfaces necessary for automated guideline retrieval. The aim of this paper was to facilitate automated context-based selection and presentation of CPGs. Using attributes from the NGC classification scheme, an XML-based metadata repository was successfully implemented, providing document storage, classification and retrieval functionality. Semi-automated extraction of attributes was implemented for the import of XML guideline documents using XPath. A hospital information system interface was exemplarily implemented for diagnosis-based guideline invocation. Limitations of the implemented system are discussed and possible future work is outlined. Integration of standardized computer-readable search interfaces into existing CPG repositories is proposed. PMID:12463831
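
    A toy sketch of the semi-automated attribute extraction step described above, assuming Python's ElementTree with its limited XPath support and invented element names (the NGC classification fields used by the authors are not reproduced here): classification attributes are pulled out of an XML guideline document so that they can be stored in a metadata repository and used for context-based invocation.

        # Sketch: extract classification attributes from an XML guideline (names invented).
        import xml.etree.ElementTree as ET

        GUIDELINE = """
        <guideline id="cpg-017">
          <classification>
            <specialty>Surgery</specialty>
            <diagnosis code="K35">Acute appendicitis</diagnosis>
          </classification>
        </guideline>
        """

        root = ET.fromstring(GUIDELINE)
        metadata = {
            "id": root.get("id"),
            "specialty": root.findtext("classification/specialty"),
            "diagnosis_code": root.find("classification/diagnosis").get("code"),
        }
        # A hospital information system could use the diagnosis code to invoke this guideline.
        print(metadata)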

  4. XML: James Webb Space Telescope Database Issues, Lessons, and Status

    NASA Technical Reports Server (NTRS)

    Detter, Ryan; Mooney, Michael; Fatig, Curtis

    2003-01-01

    This paper will present the current concept of using the eXtensible Markup Language (XML) as the underlying structure for the James Webb Space Telescope (JWST) database. The purpose of using XML is to provide a JWST database that is independent of any portion of the ground system, yet still compatible with the various systems using a variety of different structures. The testing of the JWST Flight Software (FSW) started in 2002, yet the launch is scheduled for 2011 with a planned 5-year mission and a 5-year follow-on option. The initial database and ground system elements, including the commands, telemetry, and ground system tools, will be used for 19 years, plus post-mission activities. During the Integration and Test (I&T) phases of the JWST development, 24 distinct laboratories, each geographically dispersed, will have local database tools with an XML database. Each of these laboratories' database tools will be used for the exporting and importing of data both locally and to a central database system, inputting data to the database certification process, and providing various reports. A centralized certified database repository will be maintained by the Space Telescope Science Institute (STScI), in Baltimore, Maryland, USA. One of the challenges for the database is to be flexible enough to allow for the upgrade, addition or changing of individual items without affecting the entire ground system. Also, using XML should allow for altering the import and export formats needed by the various elements, tracking the verification/validation of each database item, allowing many organizations to provide database inputs, and merging the many existing database processes into one central database structure throughout the JWST program. Many National Aeronautics and Space Administration (NASA) projects have attempted to take advantage of open source and commercial technology. Often this causes a greater reliance on the use of Commercial-Off-The-Shelf (COTS) software, which is often limiting. In our review of the database requirements and the COTS software available, only very expensive COTS software would meet 90% of the requirements. Even with the high projected initial cost of COTS, the development and support for custom code over the 19-year mission period was forecasted to be higher than the total licensing costs. A group did look at reusing existing database tools and formats. If the JWST database were already in a mature state, reuse would make sense, but with the database still needing to handle the addition of different types of command and telemetry structures, define new spacecraft systems, and accept input from and export to systems that have not yet been defined, XML provided the flexibility desired. It remains to be determined whether the XML database will reduce the overall cost for the JWST mission.

  5. Internet Patient Records: new techniques

    PubMed Central

    Moehrs, Sascha; Anedda, Paolo; Tuveri, Massimiliano; Zanetti, Gianluigi

    2001-01-01

    Background The ease by which the Internet is able to distribute information to geographically-distant users on a wide variety of computers makes it an obvious candidate for a technological solution for electronic patient record systems. Indeed, second-generation Internet technologies such as the ones described in this article - XML (eXtensible Markup Language), XSL (eXtensible Style Language), DOM (Document Object Model), CSS (Cascading Style Sheet), JavaScript, and JavaBeans - may significantly reduce the complexity of the development of distributed healthcare systems. Objective The demonstration of an experimental Electronic Patient Record (EPR) system built from those technologies that can support viewing of medical imaging exams and graphically-rich clinical reporting tools, while conforming to the newly emerging XML standard for digital documents. In particular, we aim to promote rapid prototyping of new reports by clinical specialists. Methods We have built a prototype EPR client, InfoDOM, that runs in both the popular web browsers. In this second version it receives each EPR as an XML record served via the secure SSL (Secure Socket Layer) protocol. JavaBean software components manipulate the XML to store it and then to transform it into a variety of useful clinical views. First a web page summary for the patient is produced. From that web page other JavaBeans can be launched. In particular, we have developed a medical imaging exam Viewer and a clinical Reporter bean parameterized appropriately for the particular patient and exam in question. Both present particular views of the XML data. The Viewer reads image sequences from a patient-specified network URL on a PACS (Picture Archiving and Communications System) server and presents them in a user-controllable animated sequence, while the Reporter provides a configurable anatomical map of the site of the pathology, from which individual "reportlets" can be launched. The specification of these reportlets is achieved using standard HTML forms and thus may conceivably be authored by clinical specialists. A generic JavaScript library has been written that allows the seamless incorporation of such contributions into the InfoDOM client. In conjunction with another JavaBean, that library renders graphically-enhanced reporting tools that read and write content to and from the XML data-structure, ready for resubmission to the EPR server. Results We demonstrate the InfoDOM experimental EPR system that is currently being adapted for test-bed use in three hospitals in Cagliari, Italy. For this we are working with specialists in neurology, radiology, and epilepsy. Conclusions Early indications are that the rapid prototyping of reports afforded by our EPR system can assist communication between clinical specialists and our system developers. We are now experimenting with new technologies that may provide services to the kind of XML EPR client described here. PMID:11720950

  6. SU-E-T-406: Use of TrueBeam Developer Mode and API to Increase the Efficiency and Accuracy of Commissioning Measurements for the Varian EDGE Stereotactic Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, S; Gulam, M; Song, K

    2014-06-01

    Purpose: The Varian EDGE machine is a new stereotactic platform, combining Calypso and VisionRT localization systems with a stereotactic linac. The system includes TrueBeam Developer Mode, making possible the use of XML scripting for automation of linac-related tasks. This study details the use of Developer Mode to automate commissioning tasks for the Varian EDGE, thereby improving efficiency and measurement consistency. Methods: XML scripting was used for various commissioning tasks, including couch model verification, beam scanning, and isocenter verification. For couch measurements, point measurements were acquired for several field sizes (2×2, 4×4, 10×10 cm²) at 42 gantry angles for two couch models. Measurements were acquired with variations in couch position (rails in/out, couch shifted in each of the motion axes) and compared to treatment planning system (TPS)-calculated values, which were logged automatically through advanced planning interface (API) scripting functionality. For beam scanning, XML scripts were used to create custom MLC apertures. For isocenter verification, XML scripts were used to automate various Winston-Lutz-type tests. Results: For couch measurements, the time required for each set of angles was approximately 9 minutes. Without scripting, each set required approximately 12 minutes. Automated measurements required only one physicist, while manual measurements required at least two physicists to handle linac positions/beams and data recording. MLC apertures were generated outside of the TPS, and with the .xml file format, double-checking without use of the TPS or operator console was possible. Similar time efficiency gains were found for isocenter verification measurements. Conclusion: The use of XML scripting in TrueBeam Developer Mode allows for efficient and accurate data acquisition during commissioning. The efficiency improvement is most pronounced for iterative measurements, exemplified by the time savings for couch modeling measurements (approximately 10 hours). The scripting also allowed for creation of the files in advance without requiring access to the TPS. The API scripting functionality enabled efficient creation/mining of TPS data. Finally, automation reduces the potential for human error in entering linac values at the machine console, and the script provides a log of measurements acquired for each session. This research was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.

  7. Ionospheric Correction of InSAR for Accurate Ice Motion Mapping at High Latitudes

    NASA Astrophysics Data System (ADS)

    Liao, H.; Meyer, F. J.

    2016-12-01

    Monitoring the motion of the large ice sheets is of great importance for determining ice mass balance and its contribution to sea level rise. Recently the first comprehensive ice motion maps of Greenland and Antarctica have been generated with InSAR. However, these studies have indicated that the performance of InSAR-based ice motion mapping is limited by the presence of the ionosphere. This is particularly true at high latitudes and for low-frequency SAR data. Filter-based and empirical methods (e.g., removing polynomials), which have often been used to mitigate ionospheric effects, are often ineffective in these areas due to the typically strong spatial variability of ionospheric phase delay at high latitudes and due to the risk of removing true deformation signals from the observations. In this study, we will first present an outline of our split-spectrum InSAR-based ionospheric correction approach and particularly highlight how our method improves upon published techniques, such as the multiple sub-band approach to boost estimation accuracy as well as advanced error correction and filtering algorithms. We applied our workflow to a large number of ionosphere-affected datasets over the large ice sheets to estimate the benefit of ionospheric correction on ice motion mapping accuracy. Appropriate test sites over Greenland and the Antarctic have been chosen through cooperation with authors (UW, Ian Joughin) of previous ice motion studies. To demonstrate the magnitude of ionospheric noise and to showcase the performance of ionospheric correction, we will show examples of ionosphere-affected InSAR data and our ionosphere-corrected results for visual comparison. We also quantitatively compared the corrected phase data to known ice velocity fields for the analyzed areas, provided by experts in ice velocity mapping. From our studies we found that ionospheric correction significantly reduces biases in ice velocity estimates and boosts accuracy by a factor that depends on a set of system parameters (range bandwidth, temporal and spatial baseline) and processing parameters (e.g., filtering strength and sub-band configuration). A case study in Greenland is attached below.
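
    For reference, a commonly cited form of the range split-spectrum separation (a textbook formulation from the InSAR literature; the exact variant used in this study is not given in the abstract): with sub-band interferometric phases phi_L and phi_H at sub-band centre frequencies f_L and f_H, the non-dispersive (deformation plus troposphere) and dispersive (ionospheric) components at the full-band centre frequency f_0 are

        \varphi_{\mathrm{nd}}   = \frac{f_0\,(\varphi_H f_H - \varphi_L f_L)}{f_H^2 - f_L^2},
        \qquad
        \varphi_{\mathrm{iono}} = \frac{f_L f_H\,(\varphi_L f_H - \varphi_H f_L)}{f_0\,(f_H^2 - f_L^2)}.

    These follow directly from modelling the phase at frequency f as phi(f) = phi_nd * f/f_0 + phi_iono * f_0/f and solving the two sub-band equations for the two unknowns; the narrow sub-band separation is what makes the estimate noisy and motivates the error correction and filtering steps mentioned above.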

  8. The Development of Target-Specific Pose Filter Ensembles To Boost Ligand Enrichment for Structure-Based Virtual Screening.

    PubMed

    Xia, Jie; Hsieh, Jui-Hua; Hu, Huabin; Wu, Song; Wang, Xiang Simon

    2017-06-26

    Structure-based virtual screening (SBVS) has become an indispensable technique for hit identification at the early stage of drug discovery. However, the accuracy of current scoring functions is not high enough to confer success to every target and thus remains to be improved. Previously, we had developed binary pose filters (PFs) using knowledge derived from the protein-ligand interface of a single X-ray structure of a specific target. This novel approach had been validated as an effective way to improve ligand enrichment. Continuing from it, in the present work we attempted to incorporate knowledge collected from diverse protein-ligand interfaces of multiple crystal structures of the same target to build PF ensembles (PFEs). Toward this end, we first constructed a comprehensive data set to meet the requirements of ensemble modeling and validation. This set contains 10 diverse targets, 118 well-prepared X-ray structures of protein-ligand complexes, and large benchmarking actives/decoys sets. Notably, we designed a unique workflow of two-layer classifiers based on the concept of ensemble learning and applied it to the construction of PFEs for all of the targets. Through extensive benchmarking studies, we demonstrated that (1) coupling PFE with Chemgauss4 significantly improves the early enrichment of Chemgauss4 itself and (2) PFEs show greater consistency in boosting early enrichment and larger overall enrichment than our prior PFs. In addition, we analyzed the pairwise topological similarities among cognate ligands used to construct PFEs and found that it is the higher chemical diversity of the cognate ligands that leads to the improved performance of PFEs. Taken together, the results so far prove that the incorporation of knowledge from diverse protein-ligand interfaces by ensemble modeling is able to enhance the screening competence of SBVS scoring functions.

  9. Pixel-based image fusion with false color mapping

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Mao, Shiyi

    2003-06-01

    In this paper, we propose a pixel-based image fusion algorithm that combines the gray-level image fusion method with false color mapping. This algorithm integrates two gray-level images representing different sensor modalities or different frequencies and produces a fused false-color image. The resulting image has higher information content than each of the original images. The objects in the fused color image are easy to recognize. This algorithm has three steps: first, obtaining the fused gray-level image of the two original images; second, generating the generalized high-boost filtered images between the fused gray-level image and the two source images respectively; third, generating the fused false-color image. We use the hybrid averaging and selection fusion method to obtain the fused gray-level image. The fused gray-level image provides better detail than the two original images and reduces noise at the same time. However, the fused gray-level image cannot contain all the detail information in the two source images, and details in a gray-level image cannot be discerned as easily as in a color image. So a color fused image is necessary. In order to create color variation and enhance details in the final fused image, we produce three generalized high-boost filtered images. These three images are displayed through the red, green and blue channels respectively. A fused color image is produced finally. This method is used to fuse two SAR images acquired over the San Francisco area (California, USA). The result shows that the fused false-color image enhances the visibility of certain details. The resolution of the final false-color image is the same as the resolution of the input images.
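
    A compact numpy sketch of the three-step scheme described above, assuming two co-registered grayscale arrays; the averaging rule, the smoothing kernel and the way the high-boost images are combined with each source are simplified placeholders for the paper's exact choices, and the channel assignment is likewise only illustrative.

        # Sketch of pixel-based fusion with false color mapping (simplified choices throughout).
        import numpy as np

        def high_boost(img, boost=1.5):
            """High-boost filtering: a weighted original minus a low-pass (3x3 box blur) version."""
            padded = np.pad(img, 1, mode="edge")
            low = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)) / 9.0
            return np.clip(boost * img - low, 0, 255)

        def fuse_false_color(img_a, img_b):
            # Step 1: fused gray-level image (plain averaging stands in for averaging/selection).
            fused = (img_a.astype(float) + img_b.astype(float)) / 2.0
            # Step 2: high-boost images relating the fused image to each source
            # (the paper's exact combination rule is not reproduced; this is a stand-in).
            hb_a = high_boost(fused - img_a + 128.0)
            hb_b = high_boost(fused - img_b + 128.0)
            # Step 3: map the three images onto the R, G, B channels of the false-color result.
            return np.dstack([hb_a, fused, hb_b]).astype(np.uint8)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            a = rng.integers(0, 256, (64, 64)).astype(float)
            b = rng.integers(0, 256, (64, 64)).astype(float)
            print(fuse_false_color(a, b).shape)   # (64, 64, 3)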

  10. C2 Product-Centric Approach to Transforming Current C4ISR Information Architectures

    DTIC Science & Technology

    2004-06-01

    each type of environment. For the “Cultural Feature” Entity Kind only the “Land” Domain is defined, and for the “Environmental” Entity Kind only the... take advantage of both worlds. In particular, the unifying concept of a Model-Driven Architecture (MDA) under development by the Object Management... Exchange Requirements (IER) to an XML environment. FCS [4] developers have embraced both UML and XML for their architectures and MIP [5] too is migrating

  11. Leveraging Small-Lexicon Language Models

    DTIC Science & Technology

    2016-12-31

    shown in Figure 1. This “easy to use” XML build (from a lexicon.xml file) bakes in source and language metadata, shows both raw (“copper”) and... requires it (e.g. used as standoff annotation), or some or all metadata can be baked into each and every set. Please let us know if a custom... interpretations are plausible, they are pipe-separated: bake#v#1|toast#v#1. • several word classes have been added (with all items numbered #1): d

  12. Internet Economics IV

    DTIC Science & Technology

    2004-08-01

    components, and B2B/B2C aspects of those in a technical and economic snapshot. Talk number six discusses the trade-off between quality and cost, which... web services have been defined. The fifth talk summarizes key aspects of XML (Extended Markup Language), Web Services and their components, and B2B... Internet is Run: A Worldwide Perspective (Christoph Pauls); 5 XML, Web Services and B2C/B2B: A Technical and Economical Snapshot (Matthias Pitt); 6

  13. Design and implementation of the mobility assessment tool: software description.

    PubMed

    Barnard, Ryan T; Marsh, Anthony P; Rejeski, Walter Jack; Pecorella, Anthony; Ip, Edward H

    2013-07-23

    In previous work, we described the development of an 81-item video-animated tool for assessing mobility. In response to criticism levied during a pilot study of this tool, we sought to develop a new version built upon a flexible framework for designing and administering the instrument. Rather than constructing a self-contained software application with a hard-coded instrument, we designed an XML schema capable of describing a variety of psychometric instruments. The new version of our video-animated assessment tool was then defined fully within the context of a compliant XML document. Two software applications--one built in Java, the other in Objective-C for the Apple iPad--were then built that could present the instrument described in the XML document and collect participants' responses. Separating the instrument's definition from the software application implementing it allowed for rapid iteration and easy, reliable definition of variations. Defining instruments in a software-independent XML document simplifies the process of defining instruments and variations and allows a single instrument to be deployed on as many platforms as there are software applications capable of interpreting the instrument, thereby broadening the potential target audience for the instrument. Continued work will be done to further specify and refine this type of instrument specification with a focus on spurring adoption by researchers in gerontology and geriatric medicine.
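
    A toy sketch of the separation the authors describe, assuming Python and an invented, much simpler instrument schema (the actual mobility-assessment schema is richer): the instrument lives entirely in an XML document, and a small application merely interprets it, so the same document could equally drive a Java or iPad client.

        # Sketch: administer a questionnaire defined entirely in XML (schema invented).
        import xml.etree.ElementTree as ET

        INSTRUMENT = """
        <instrument name="demo">
          <item id="q1" text="How difficult is it to walk one block?">
            <response value="1">Not difficult</response>
            <response value="2">Somewhat difficult</response>
            <response value="3">Cannot do</response>
          </item>
        </instrument>
        """

        def administer(xml_text, answer_fn):
            """Present each item and collect a response; the instrument itself is pure data."""
            root = ET.fromstring(xml_text)
            answers = {}
            for item in root.findall("item"):
                choices = {r.get("value"): r.text for r in item.findall("response")}
                answers[item.get("id")] = answer_fn(item.get("text"), choices)
            return answers

        # A trivial console-style front end that always picks the first choice.
        print(administer(INSTRUMENT, lambda text, choices: next(iter(choices))))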

  14. Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care.

    PubMed

    Menezes, Pedro Monteiro; Cook, Timothy Wayne; Cavalini, Luciana Tricai

    2016-01-01

    To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. This study demonstrated the semantic enrichment of HL7v2 messages for intensive-software telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures the compliance of the HL7v2 standard in Semantic Web technologies.

  15. Construction of a nasopharyngeal carcinoma 2D/MS repository with Open Source XML database--Xindice.

    PubMed

    Li, Feng; Li, Maoyu; Xiao, Zhiqiang; Zhang, Pengfei; Li, Jianling; Chen, Zhuchu

    2006-01-11

    Many proteomics initiatives require integration of all information with uniform criteria, from collection of samples and data display to publication of experimental results. Integrating and exchanging these data, which vary in format and structure, poses a great challenge. XML technology shows promise for handling this task due to its simplicity and flexibility. Nasopharyngeal carcinoma (NPC) is one of the most common cancers in southern China and Southeast Asia, with marked geographic and racial differences in incidence. Although some cancer proteome databases exist, there is still no NPC proteome database. The raw NPC proteome experiment data were captured into one XML document with the Human Proteome Markup Language (HUP-ML) editor and imported into the native XML database Xindice. The 2D/MS repository of the NPC proteome was constructed with Apache, PHP and Xindice to provide access to the database via the Internet. On our website, two query methods, keyword query and click query, are provided for accessing the entries of the NPC proteome database. Our 2D/MS repository can be used to share the raw NPC proteomics data generated from gel-based proteomics experiments. The database, as well as the PHP source code for constructing users' own proteome repositories, can be accessed at http://www.xyproteomics.org/.

  16. Design and implementation of the mobility assessment tool: software description

    PubMed Central

    2013-01-01

    Background In previous work, we described the development of an 81-item video-animated tool for assessing mobility. In response to criticism levied during a pilot study of this tool, we sought to develop a new version built upon a flexible framework for designing and administering the instrument. Results Rather than constructing a self-contained software application with a hard-coded instrument, we designed an XML schema capable of describing a variety of psychometric instruments. The new version of our video-animated assessment tool was then defined fully within the context of a compliant XML document. Two software applications—one built in Java, the other in Objective-C for the Apple iPad—were then built that could present the instrument described in the XML document and collect participants’ responses. Separating the instrument’s definition from the software application implementing it allowed for rapid iteration and easy, reliable definition of variations. Conclusions Defining instruments in a software-independent XML document simplifies the process of defining instruments and variations and allows a single instrument to be deployed on as many platforms as there are software applications capable of interpreting the instrument, thereby broadening the potential target audience for the instrument. Continued work will be done to further specify and refine this type of instrument specification with a focus on spurring adoption by researchers in gerontology and geriatric medicine. PMID:23879716

  17. New Tools to Document and Manage Data/Metadata: Example NGEE Arctic and ARM

    NASA Astrophysics Data System (ADS)

    Crow, M. C.; Devarakonda, R.; Killeffer, T.; Hook, L.; Boden, T.; Wullschleger, S.

    2017-12-01

    Tools used for documenting, archiving, cataloging, and searching data are critical pieces of informatics. This poster describes tools being used in several projects at Oak Ridge National Laboratory (ORNL), with a focus on the U.S. Department of Energy's Next Generation Ecosystem Experiment in the Arctic (NGEE Arctic) and Atmospheric Radiation Measurements (ARM) project, and their usage at different stages of the data lifecycle. The Online Metadata Editor (OME) is used for the documentation and archival stages while a Data Search tool supports indexing, cataloging, and searching. The NGEE Arctic OME Tool [1] provides a method by which researchers can upload their data and provide original metadata with each upload while adhering to standard metadata formats. The tool is built upon a Java SPRING framework to parse user input into, and from, XML output. Many aspects of the tool require use of a relational database including encrypted user-login, auto-fill functionality for predefined sites and plots, and file reference storage and sorting. The Data Search Tool conveniently displays each data record in a thumbnail containing the title, source, and date range, and features a quick view of the metadata associated with that record, as well as a direct link to the data. The search box incorporates autocomplete capabilities for search terms and sorted keyword filters are available on the side of the page, including a map for geo-searching. These tools are supported by the Mercury [2] consortium (funded by DOE, NASA, USGS, and ARM) and developed and managed at Oak Ridge National Laboratory. Mercury is a set of tools for collecting, searching, and retrieving metadata and data. Mercury collects metadata from contributing project servers, then indexes the metadata to make it searchable using Apache Solr, and provides access to retrieve it from the web page. Metadata standards that Mercury supports include: XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115.

  18. User-Friendly Interface Developed for a Web-Based Service for SpaceCAL Emulations

    NASA Technical Reports Server (NTRS)

    Liszka, Kathy J.; Holtz, Allen P.

    2004-01-01

    A team at the NASA Glenn Research Center is developing a Space Communications Architecture Laboratory (SpaceCAL) for protocol development activities for coordinated satellite missions. SpaceCAL will provide a multiuser, distributed system to emulate space-based Internet architectures, backbone networks, formation clusters, and constellations. As part of a new effort in 2003, building blocks are being defined for an open distributed system to make the satellite emulation test bed accessible through an Internet connection. The first step in creating a Web-based service to control the emulation remotely is providing a user-friendly interface for encoding the data into a well-formed and complete Extensible Markup Language (XML) document. XML provides coding that allows data to be transferred between dissimilar systems. Scenario specifications include control parameters, network routes, interface bandwidths, delay, and bit error rate. Specifications for all satellites, instruments, and ground stations in a given scenario are also included in the XML document. For the SpaceCAL emulation, the XML document can be created using XForms, a Web-based forms language for data collection. Unlike older forms technology, the interactive user interface emphasizes the science rather than the data representation. Required versus optional input fields, default values, automatic calculations, data validation, and reuse will help researchers quickly and accurately define missions. XForms can apply any XML schema defined for the test mission to validate data before forwarding it to the emulation facility. New instrument definitions, facilities, and mission types can be added to the existing schema. The first prototype user interface incorporates components for interactive input and form processing. Internet address, data rate, and the location of the facility are implemented with basic form controls with default values provided for convenience and efficiency using basic XForms operations. Because different emulation scenarios will vary widely in their component structure, more complex operations are used to add and delete facilities.
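
    To make the idea of a well-formed scenario document concrete, here is a minimal sketch that assembles one with Python's ElementTree; the element and attribute names (satellite, groundstation, link and their fields) are assumptions for illustration, not the actual SpaceCAL schema.

```python
import xml.etree.ElementTree as ET

def build_scenario():
    """Assemble an illustrative emulation scenario document."""
    scenario = ET.Element("scenario", name="demo-constellation")
    sat = ET.SubElement(scenario, "satellite", id="sat-1")
    ET.SubElement(sat, "instrument", type="radiometer")
    ET.SubElement(scenario, "groundstation", id="gs-1", location="Cleveland")
    ET.SubElement(scenario, "link", src="sat-1", dst="gs-1",
                  bandwidth="10Mbps", delay="120ms", ber="1e-7")
    return scenario

if __name__ == "__main__":
    root = build_scenario()
    ET.indent(root)  # pretty-print; requires Python 3.9+
    print(ET.tostring(root, encoding="unicode"))
```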

  19. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data

    PubMed Central

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest. PMID:26958859

  20. Comparing the Performance of NoSQL Approaches for Managing Archetype-Based Electronic Health Record Data.

    PubMed

    Freire, Sergio Miranda; Teodoro, Douglas; Wei-Kleiner, Fang; Sundvall, Erik; Karlsson, Daniel; Lambrix, Patrick

    2016-01-01

    This study provides an experimental performance evaluation on population-based queries of NoSQL databases storing archetype-based Electronic Health Record (EHR) data. There are few published studies regarding the performance of persistence mechanisms for systems that use multilevel modelling approaches, especially when the focus is on population-based queries. A healthcare dataset with 4.2 million records stored in a relational database (MySQL) was used to generate XML and JSON documents based on the openEHR reference model. Six datasets with different sizes were created from these documents and imported into three single machine XML databases (BaseX, eXistdb and Berkeley DB XML) and into a distributed NoSQL database system based on the MapReduce approach, Couchbase, deployed in different cluster configurations of 1, 2, 4, 8 and 12 machines. Population-based queries were submitted to those databases and to the original relational database. Database size and query response times are presented. The XML databases were considerably slower and required much more space than Couchbase. Overall, Couchbase had better response times than MySQL, especially for larger datasets. However, Couchbase requires indexing for each differently formulated query and the indexing time increases with the size of the datasets. The performances of the clusters with 2, 4, 8 and 12 nodes were not better than the single node cluster in relation to the query response time, but the indexing time was reduced proportionally to the number of nodes. The tested XML databases had acceptable performance for openEHR-based data in some querying use cases and small datasets, but were generally much slower than Couchbase. Couchbase also outperformed the response times of the relational database, but required more disk space and had a much longer indexing time. Systems like Couchbase are thus interesting research targets for scalable storage and querying of archetype-based EHR data when population-based use cases are of interest.

  1. An XML-Based Protocol for Distributed Event Services

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A recent trend in distributed computing is the construction of high-performance distributed systems called computational grids. One difficulty we have encountered is that there is no standard format for the representation of performance information and no standard protocol for transmitting this information. This limits the types of performance analysis that can be undertaken in complex distributed systems. To address this problem, we present an XML-based protocol for transmitting performance events in distributed systems and evaluate the performance of this protocol.
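
    The abstract does not reproduce the wire format, but the core idea (performance events serialized as small XML messages and pushed to a collector) can be sketched as follows; the event fields and the collector endpoint are assumptions for illustration.

```python
import socket
import time
import xml.etree.ElementTree as ET

def make_event(source, metric, value):
    """Serialize one performance event as a small XML message (illustrative schema)."""
    event = ET.Element("event", source=source, timestamp=f"{time.time():.6f}")
    m = ET.SubElement(event, "metric", name=metric)
    m.text = str(value)
    return ET.tostring(event)

def send_event(payload, host="localhost", port=9999):
    """Push the event to a collector assumed to be listening on host:port."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload + b"\n")

if __name__ == "__main__":
    send_event(make_event("compute-node-07", "cpu_load", 0.42))
```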

  2. Research on Heterogeneous Data Exchange based on XML

    NASA Astrophysics Data System (ADS)

    Li, Huanqin; Liu, Jinfeng

    Integration of multiple data sources is becoming increasingly important for enterprises that cooperate closely with their partners for e-commerce. OLAP enables analysts and decision makers fast access to various materialized views from data warehouses. However, many corporations have internal business applications deployed on different platforms. This paper introduces a model for heterogeneous data exchange based on XML. The system can exchange and share the data among the different sources. The method used to realize the heterogeneous data exchange is given in this paper.

  3. Tactical Web Services: Using XML and Java Web Services to Conduct Real-Time Net-Centric Sonar Visualization

    DTIC Science & Technology

    2004-09-01

    Rosetti USN U.S. Navy Chesterton, IN 6. Erik Chaum NUWC Newport, RI 7. David Bellino NPRI Newport, RI 8. Dick Nadolink NUWC Newport, RI...found at (http://www.parallelgraphics.com/products/cortona). G. JFREECHART JFreeChart is an open source Java API created by David Gilbert and...www.xj3d.org/. Accessed 3 September 2004. Hunter, David , Kurt Cagle, and Chris Dix, eds. Beginning XML, Second Edition. Indianapolis, IN

  4. Toxics Release Inventory Chemical Hazard Information Profiles (TRI-CHIP) Dataset

    EPA Pesticide Factsheets

    The Toxics Release Inventory (TRI) Chemical Hazard Information Profiles (TRI-CHIP) dataset contains hazard information about the chemicals reported in TRI. Users can use this XML-format dataset to create their own databases and hazard analyses of TRI chemicals. The hazard information is compiled from a series of authoritative sources including the Integrated Risk Information System (IRIS). The dataset is provided as a downloadable .zip file that when extracted provides XML files and schemas for the hazard information tables.

  5. 2035 Air Dominance Requirements for State-On-State Conflict

    DTIC Science & Technology

    2011-02-16

    story.jsp?id=news/ awst /2011/01/10/AW_01_10_ 2011_p58-280833.xml&channel=misc (accessed 3 Jan 2011). Bilbro, James. “Technology Readiness Levels.” JB...channel = awst &id =news/aw121508p2.xml&headline=null&prev=10. (accessed 3 January 2011). Global Security.org. “Miniature Air-Launched Decoy (MALD...Battle.” Center for Strategic and Budgetary Assessment, http://www.csba online.com/4Publications/PubLibrary/ R .20100518.Slides_AirSea_Batt/ R .20100518

  6. Generating GraphML XML Files for Graph Visualization of Architectures and Event Traces for the Monterey Phoenix Program

    DTIC Science & Technology

    2012-09-01

    Thesis Advisor: Mikhail Auguston. Second Reader: Terry Norbraten. ...Language (GraphML). MPGrapher compiles well-formed XML files that conform to the yEd GraphML schema. These files will be opened and analyzed using...

  7. Ensemble Deep Learning for Biomedical Time Series Classification

    PubMed Central

    2016-01-01

    Ensemble learning has been shown, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828
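
    For readers unfamiliar with the baselines named at the end of the abstract, a generic scikit-learn comparison of Bagging and AdaBoost on synthetic data is sketched below; it is not the authors' deep ensemble, and the synthetic features only stand in for the ECG recordings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a feature matrix extracted from biomedical time series.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)

baselines = {
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
    "Bagging": BaggingClassifier(n_estimators=200, random_state=0),
}
for name, clf in baselines.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```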

  8. An Extended Petri-Net Based Approach for Supply Chain Process Enactment in Resource-Centric Web Service Environment

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Zhang, Xiaoyu; Cai, Hongming; Xu, Boyi

    Enacting a supply-chain process involves various partners and different IT systems. REST receives increasing attention for distributed systems with loosely coupled resources. Nevertheless, resource model incompatibilities and conflicts prevent effective process modeling and deployment in resource-centric Web service environments. In this paper, a Petri-net based framework for supply-chain process integration is proposed. A resource meta-model is constructed to represent the basic information of resources. Then, based on the resource meta-model, XML schemas and documents are derived, which represent resources and their states in the Petri-net. Thereafter, XML-net, a high-level Petri-net, is employed for modeling the control and data flow of the process. From the process model in XML-net, RESTful services and choreography descriptions are deduced. Therefore, a unified resource representation and RESTful service descriptions are proposed for more effective cross-system integration. A case study is given to illustrate the approach, and its desirable features are discussed.

  9. Ontology aided modeling of organic reaction mechanisms with flexible and fragment based XML markup procedures.

    PubMed

    Sankar, Punnaivanam; Aghila, Gnanasekaran

    2007-01-01

    The mechanism models for primary organic reactions encoding the structural fragments undergoing substitution, addition, elimination, and rearrangements are developed. In the proposed models, each and every structural component of mechanistic pathways is represented with flexible and fragment based markup technique in XML syntax. A significant feature of the system is the encoding of the electron movements along with the other components like charges, partial charges, half bonded species, lone pair electrons, free radicals, reaction arrows, etc. needed for a complete representation of reaction mechanism. The rendering of reaction schemes described with the proposed methodology is achieved with a concise XML extension language interoperating with the structure markup. The reaction scheme is visualized as 2D graphics in a browser by converting them into SVG documents enabling the desired layouts normally perceived by the chemists conventionally. An automatic representation of the complex patterns of the reaction mechanism is achieved by reusing the knowledge in chemical ontologies and developing artificial intelligence components in terms of axioms.

  10. Multipurpose Controller with EPICS integration and data logging: BPM application for ESS Bilbao

    NASA Astrophysics Data System (ADS)

    Arredondo, I.; del Campo, M.; Echevarria, P.; Jugo, J.; Etxebarria, V.

    2013-10-01

    This work presents a multipurpose configurable control system which can be integrated into an EPICS control network, with this functionality configured through an XML configuration file. The core of the system is the so-called Hardware Controller, which is in charge of control hardware management, the setup and communication with the EPICS network, and data storage. The reconfigurable nature of the controller is based on a single XML file, allowing any final user to easily modify and adjust the control system to any specific requirement. The selected Java development environment ensures multiplatform operation and great versatility, even regarding the control hardware to be controlled. Specifically, this paper, focused on fast control based on a high-performance FPGA, also describes an application approach for ESS Bilbao's Beam Position Monitoring system. The implementation of the XML configuration file and the satisfactory performance outcome achieved are presented, as well as a general description of the Multipurpose Controller itself.

  11. Towards health care process description framework: an XML DTD design.

    PubMed Central

    Staccini, P.; Joubert, M.; Quaranta, J. F.; Aymard, S.; Fieschi, D.; Fieschi, M.

    2001-01-01

    The development of health care and hospital information systems has to meet users' needs as well as requirements such as the tracking of all care activities and the support of quality improvement. The use of process-oriented analysis is of value in providing analysts with: (i) a systematic description of activities; (ii) the elicitation of the data needed to perform and record care tasks; (iii) the selection of relevant decision-making support. But paper-based tools are not a very suitable way to manage and share the documentation produced during this step. The purpose of this work is to propose a method to implement the results of process analysis according to XML techniques (eXtensible Markup Language). It is based on the IDEF0 activity modeling language (Integration DEfinition for Function modeling). A hierarchical description of a process and its components has been defined through a flat XML file with a grammar of proper metadata tags. Perspectives of this method are discussed. PMID:11825265

  12. How Magnetic Disturbance Influences the Attitude and Heading in Magnetic and Inertial Sensor-Based Orientation Estimation.

    PubMed

    Fan, Bingfei; Li, Qingguo; Liu, Tao

    2017-12-28

    With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more and more accurate, lightweight, smaller in size as well as low-cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect the attitude and heading estimation for a magnetic and inertial sensor. First, we reviewed four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetic reading, gyro bias estimation, adaptive strategies of compensating magnetic disturbance and sensor fusion algorithms. We reviewed and analyzed the features of the existing methods for each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented, including gradient descent algorithms, the improved explicit complementary filter, the dual-linear Kalman filter, and the extended Kalman filter. Finally, a new standardized testing procedure has been developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods were easily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing a new sensor fusion method.
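
    As background for the fusion algorithms compared above, the simplest member of the family is a complementary filter that blends integrated gyroscope rate with a noisy attitude reference (here an accelerometer tilt angle, standing in for the magnetic heading reference); the numpy sketch below is a one-axis toy model, not any of the four implemented methods.

```python
import numpy as np

def complementary_filter(gyro_rate, ref_angle, dt=0.01, alpha=0.98):
    """Blend integrated gyro rate (rad/s) with a noisy angle reference (rad)."""
    angle = ref_angle[0]
    estimate = np.empty_like(ref_angle)
    for k in range(len(ref_angle)):
        # High-pass the gyro integration, low-pass the reference measurement.
        angle = alpha * (angle + gyro_rate[k] * dt) + (1.0 - alpha) * ref_angle[k]
        estimate[k] = angle
    return estimate

# Synthetic one-axis motion with gyro bias and reference noise.
t = np.arange(0.0, 10.0, 0.01)
true_angle = 0.3 * np.sin(0.5 * t)
gyro = np.gradient(true_angle, t) + 0.02                 # biased rate sensor
reference = true_angle + 0.05 * np.random.randn(t.size)  # noisy angle reference
estimate = complementary_filter(gyro, reference)
print("RMS error:", np.sqrt(np.mean((estimate - true_angle) ** 2)))
```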

  13. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application

    PubMed Central

    Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-01-01

    Background The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. Objective The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Methods Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. Results The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. Conclusions This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. PMID:28903894

  14. Convergence of Health Level Seven Version 2 Messages to Semantic Web Technologies for Software-Intensive Systems in Telemedicine Trauma Care

    PubMed Central

    Cook, Timothy Wayne; Cavalini, Luciana Tricai

    2016-01-01

    Objectives To present the technical background and the development of a procedure that enriches the semantics of Health Level Seven version 2 (HL7v2) messages for software-intensive systems in telemedicine trauma care. Methods This study followed a multilevel model-driven approach for the development of semantically interoperable health information systems. The Pre-Hospital Trauma Life Support (PHTLS) ABCDE protocol was adopted as the use case. A prototype application embedded the semantics into an HL7v2 message as an eXtensible Markup Language (XML) file, which was validated against an XML schema that defines constraints on a common reference model. This message was exchanged with a second prototype application, developed on the Mirth middleware, which was also used to parse and validate both the original and the hybrid messages. Results Both versions of the data instance (one pure XML, one embedded in the HL7v2 message) were equally validated and the RDF-based semantics recovered by the receiving side of the prototype from the shared XML schema. Conclusions This study demonstrated the semantic enrichment of HL7v2 messages for intensive-software telemedicine systems for trauma care, by validating components of extracts generated in various computing environments. The adoption of the method proposed in this study ensures the compliance of the HL7v2 standard in Semantic Web technologies. PMID:26893947

  15. XML Based Markup Languages for Specific Domains

    NASA Astrophysics Data System (ADS)

    Varde, Aparna; Rundensteiner, Elke; Fahrenholz, Sally

    A challenging area in web based support systems is the study of human activities in connection with the web, especially with reference to certain domains. This includes capturing human reasoning in information retrieval, facilitating the exchange of domain-specific knowledge through a common platform and developing tools for the analysis of data on the web from a domain expert's angle. Among the techniques and standards related to such work, we have XML, the eXtensible Markup Language. This serves as a medium of communication for storing and publishing textual, numeric and other forms of data seamlessly. XML tag sets are such that they preserve semantics and simplify the understanding of stored information by users. Often domain-specific markup languages are designed using XML, with a user-centric perspective. Standardization bodies and research communities may extend these to include additional semantics of areas within and related to the domain. This chapter outlines the issues to be considered in developing domain-specific markup languages: the motivation for development, the semantic considerations, the syntactic constraints and other relevant aspects, especially taking into account human factors. Illustrating examples are provided from domains such as Medicine, Finance and Materials Science. Particular emphasis in these examples is on the Materials Markup Language MatML and the semantics of one of its areas, namely, the Heat Treating of Materials. The focus of this chapter, however, is not the design of one particular language but rather the generic issues concerning the development of domain-specific markup languages.

  16. RGG: A general GUI Framework for R scripts

    PubMed Central

    Visne, Ilhami; Dilaveroglu, Erkan; Vierlinger, Klemens; Lauss, Martin; Yildiz, Ahmet; Weinhaeusel, Andreas; Noehammer, Christa; Leisch, Friedrich; Kriegner, Albert

    2009-01-01

    Background R is the leading open source statistics software with a vast number of biostatistical and bioinformatical analysis packages. To exploit the advantages of R, extensive scripting/programming skills are required. Results We have developed a software tool called R GUI Generator (RGG) which enables the easy generation of Graphical User Interfaces (GUIs) for the programming language R by adding a few Extensible Markup Language (XML) tags. RGG consists of an XML-based GUI definition language and a Java-based GUI engine. GUIs are generated at runtime from defined GUI tags that are embedded into the R script. User-GUI input is returned to the R code and replaces the XML-tags. RGG files can be developed using any text editor. The current version of RGG is available as a stand-alone software (RGGRunner) and as a plug-in for JGR. Conclusion RGG is a general GUI framework for R that has the potential to introduce R statistics (R packages, built-in functions and scripts) to users with limited programming skills and helps to bridge the gap between R developers and GUI-dependent users. RGG aims to abstract the GUI development from individual GUI toolkits by using an XML-based GUI definition language. Thus RGG can be easily integrated in any software. The RGG project further includes the development of a web-based repository for RGG-GUIs. RGG is an open source project licensed under the Lesser General Public License (LGPL) and can be downloaded freely at PMID:19254356

  17. ParaView visualization of Abaqus output on the mechanical deformation of complex microstructures

    NASA Astrophysics Data System (ADS)

    Liu, Qingbin; Li, Jiang; Liu, Jie

    2017-02-01

    Abaqus® is a popular software suite for finite element analysis. It delivers linear and nonlinear analyses of mechanical and fluid dynamics and includes multi-body system and multi-physics coupling. However, the visualization capability of Abaqus using its CAE module is limited. Models from microtomography have extremely complicated structures, and datasets of Abaqus output are huge, requiring a visualization tool more powerful than Abaqus/CAE. We convert Abaqus output into the XML-based VTK format by developing a Python script and then using ParaView to visualize the results. Capabilities such as volume rendering, tensor glyphs, superior animation and other filters allow ParaView to offer excellent visualizations. ParaView's parallel visualization makes it possible to visualize very large datasets. To support full parallel visualization, the Python script achieves data partitioning by reorganizing all nodes, elements and the corresponding results on those nodes and elements. The data partition scheme minimizes data redundancy and works efficiently. Given its good readability and extendibility, the script can be extended to handle a wider range of Abaqus problems. We share the script with Abaqus users on GitHub.
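
    The authors' converter is shared on GitHub and is not reproduced here; the fragment below is only a hand-rolled illustration of the target XML-based VTK format, writing a single tetrahedron with one nodal scalar to a .vtu file that ParaView can open.

```python
def write_vtu(path, points, tets, scalars, name="U"):
    """Write an ASCII VTK XML UnstructuredGrid file containing tetrahedra."""
    pts = " ".join(f"{x} {y} {z}" for x, y, z in points)
    conn = " ".join(str(i) for cell in tets for i in cell)
    offsets = " ".join(str(4 * (i + 1)) for i in range(len(tets)))
    types = " ".join("10" for _ in tets)  # 10 = VTK_TETRA
    data = " ".join(str(v) for v in scalars)
    with open(path, "w") as f:
        f.write(f'<?xml version="1.0"?>\n'
                f'<VTKFile type="UnstructuredGrid" version="0.1" byte_order="LittleEndian">\n'
                f'  <UnstructuredGrid>\n'
                f'    <Piece NumberOfPoints="{len(points)}" NumberOfCells="{len(tets)}">\n'
                f'      <Points>\n'
                f'        <DataArray type="Float32" NumberOfComponents="3" format="ascii">{pts}</DataArray>\n'
                f'      </Points>\n'
                f'      <Cells>\n'
                f'        <DataArray type="Int32" Name="connectivity" format="ascii">{conn}</DataArray>\n'
                f'        <DataArray type="Int32" Name="offsets" format="ascii">{offsets}</DataArray>\n'
                f'        <DataArray type="UInt8" Name="types" format="ascii">{types}</DataArray>\n'
                f'      </Cells>\n'
                f'      <PointData Scalars="{name}">\n'
                f'        <DataArray type="Float32" Name="{name}" format="ascii">{data}</DataArray>\n'
                f'      </PointData>\n'
                f'    </Piece>\n'
                f'  </UnstructuredGrid>\n'
                f'</VTKFile>\n')

write_vtu("tet.vtu", [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
          [(0, 1, 2, 3)], [0.0, 0.1, 0.2, 0.3])
```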

  18. Clinical image processing engine

    NASA Astrophysics Data System (ADS)

    Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald

    2009-02-01

    Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: reduce the amount of time and manpower required to process medical images, and reduce the turnaround time for responding. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved the efficiency dramatically.

  19. A distance-driven deconvolution method for CT image-resolution improvement

    NASA Astrophysics Data System (ADS)

    Han, Seokmin; Choi, Kihwan; Yoo, Sang Wook; Yi, Jonghyon

    2016-12-01

    The purpose of this research is to achieve high spatial resolution in CT (computed tomography) images without hardware modification. The main idea is to use a geometric optics model, which provides an approximate blurring PSF (point spread function) kernel that varies according to the distance from the X-ray tube to each point. The FOV (field of view) is divided into several band regions based on the distance from the X-ray source, and each region is deconvolved with a different deconvolution kernel. As the number of subbands increases, the overshoot of the MTF (modulation transfer function) curve increases at first. After that, the overshoot begins to decrease while still showing a larger MTF than the normal FBP (filtered backprojection). The case of five subbands seems to show balanced performance between MTF boost and overshoot minimization. As the number of subbands increases, the noise (standard deviation) also tends to decrease. The results show that spatial resolution in CT images can be improved without using high-resolution detectors or focal spot wobbling. The proposed algorithm shows promising results in improving spatial resolution while avoiding excessive noise boost.
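
    A rough numpy sketch of the banded idea follows, assuming Gaussian blur kernels whose width grows with distance from the source and a simple frequency-domain Wiener deconvolution per band; the paper's actual PSF model and reconstruction chain are not reproduced here.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """2-D Gaussian PSF centered in the image, normalized to unit sum."""
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_deconv(image, psf, k=1e-2):
    """Frequency-domain Wiener deconvolution with regularization constant k."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

def banded_deconv(image, n_bands=5, sigma_near=0.8, sigma_far=2.0):
    """Deconvolve each distance band with its own PSF, then reassemble."""
    bands = np.array_split(np.arange(image.shape[0]), n_bands)  # bands along the distance axis
    out = np.zeros_like(image)
    for i, rows in enumerate(bands):
        sigma = sigma_near + (sigma_far - sigma_near) * i / max(n_bands - 1, 1)
        restored = wiener_deconv(image, gaussian_psf(image.shape, sigma))
        out[rows, :] = restored[rows, :]
    return out

slice_img = np.random.rand(128, 128)  # stand-in for a reconstructed CT slice
print(banded_deconv(slice_img).shape)
```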

  20. Thermoelectric properties of periodic quantum structures in the Wigner-Rode formalism

    NASA Astrophysics Data System (ADS)

    Kommini, Adithya; Aksamija, Zlatan

    2018-01-01

    Improving the thermoelectric Seebeck coefficient, while simultaneously reducing thermal conductivity, is required in order to boost thermoelectric (TE) figure of merit (ZT). A common approach to improve the Seebeck coefficient is electron filtering where ‘cold’ (low energy) electrons are restricted from participating in transport by an energy barrier (Kim and Lundstrom 2011 J. Appl. Phys. 110 034511, Zide et al 2010 J. Appl. Phys. 108 123702). However, the impact of electron tunneling through thin barriers and resonant states on TE properties has been given less attention, despite the widespread use of quantum wells and superlattices (SLs) in TE applications. In our work, we develop a comprehensive transport model using the Wigner-Rode formalism. We include the full electronic bandstructure and all the relevant scattering mechanisms, allowing us to simulate both energy relaxation and quantum effects from periodic potential barriers. We study the impact of barrier shape on TE performance and find that tall, sharp barriers with small period lengths lead to the largest increase in both Seebeck coefficient and conductivity, thus boosting power factor and TE efficiency. Our findings are robust against additional elastic scattering such as atomic-scale roughness at side-walls of SL nanowires.
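
    For reference, the figure of merit being boosted here combines the Seebeck coefficient S, electrical conductivity σ, absolute temperature T, and thermal conductivity κ (a standard definition, not specific to this paper):

```latex
ZT = \frac{S^{2}\,\sigma\,T}{\kappa},
\qquad \text{power factor} = S^{2}\sigma
```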

  1. Knowledge representation for fuzzy inference aided medical image interpretation.

    PubMed

    Gal, Norbert; Stoicu-Tivadar, Vasile

    2012-01-01

    Knowledge defines how an automated system transforms data into information. This paper suggests a representation method of medical imaging knowledge using fuzzy inference systems coded in XML files. The imaging knowledge incorporates features of the investigated objects in linguistic form and inference rules that can transform the linguistic data into information about a possible diagnosis. A fuzzy inference system is used to model the vagueness of the linguistic medical imaging terms. XML files are used to facilitate easy manipulation and deployment of the knowledge into the imaging software. Preliminary results are presented.

  2. Application of Architectural Patterns and Lightweight Formal Method for the Validation and Verification of Safety Critical Systems

    DTIC Science & Technology

    2013-09-01

    to an XML file, code that Bonine in [21] developed for a similar purpose. Using the StateRover XML log file import tool, we are able to generate a...C. Bonine, M. Shing, T.W. Otani, “Computer-aided process and tools for mobile software acquisition,” NPS, Monterey, CA, Tech. Rep. NPS-SE-13...C10P07R05–075, 2013. [21] C. Bonine, “Specification, validation and verification of mobile application behavior,” M.S. thesis, Dept. Comp. Science, NPS

  3. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    PubMed

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool for the complete, structured download of information from relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.
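
    YAdumper itself is Java-based and template-driven; purely to illustrate the task it addresses (streaming relational rows into a structured flat XML file), a minimal Python sketch with an in-memory SQLite table could look like this.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Tiny in-memory table standing in for a large relational source.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE protein (acc TEXT, name TEXT)")
db.executemany("INSERT INTO protein VALUES (?, ?)",
               [("P69905", "Hemoglobin subunit alpha"),
                ("P68871", "Hemoglobin subunit beta")])

root = ET.Element("proteins")
for acc, name in db.execute("SELECT acc, name FROM protein"):
    entry = ET.SubElement(root, "protein", accession=acc)
    ET.SubElement(entry, "name").text = name

# Write the structured flat file.
ET.ElementTree(root).write("dump.xml", encoding="utf-8", xml_declaration=True)
```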

  4. Middleware Trade Study for NASA Domain

    NASA Technical Reports Server (NTRS)

    Bowman, Dan

    2007-01-01

    This presentation reports preliminary results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and the Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are: the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.

  5. Performance Comparison of Relational and Native-XML Databases using the Semantics of the Land Command and Control Information Exchange Data Model (LC2IEDM)

    DTIC Science & Technology

    2005-09-01

    aid this thesis would not have come to existence. First, we would like to thank Eric Chaum for his enthusiasm and recommendation for doing this...by David Hina [HINA 00], discusses the use of available Commercial-Off-The-Shelf (COTS) XML technology to provide for the exchange of data between...air and maritime components and therefore forms the basis of a joint command and control data model [ CHAUM 04]. The data model is one of the two

  6. XML-BSPM: an XML format for storing Body Surface Potential Map recordings.

    PubMed

    Bond, Raymond R; Finlay, Dewar D; Nugent, Chris D; Moore, George

    2010-05-14

    The Body Surface Potential Map (BSPM) is an electrocardiographic method, for recording and displaying the electrical activity of the heart, from a spatial perspective. The BSPM has been deemed more accurate for assessing certain cardiac pathologies when compared to the 12-lead ECG. Nevertheless, the 12-lead ECG remains the most popular ECG acquisition method for non-invasively assessing the electrical activity of the heart. Although data from the 12-lead ECG can be stored and shared using open formats such as SCP-ECG, no open formats currently exist for storing and sharing the BSPM. As a result, an innovative format for storing BSPM datasets has been developed within this study. The XML vocabulary was chosen for implementation, as opposed to binary for the purpose of human readability. There are currently no standards to dictate the number of electrodes and electrode positions for recording a BSPM. In fact, there are at least 11 different BSPM electrode configurations in use today. Therefore, in order to support these BSPM variants, the XML-BSPM format was made versatile. Hence, the format supports the storage of custom torso diagrams using SVG graphics. This diagram can then be used in a 2D coordinate system for retaining electrode positions. This XML-BSPM format has been successfully used to store the Kornreich-117 BSPM dataset and the Lux-192 BSPM dataset. The resulting file sizes were in the region of 277 kilobytes for each BSPM recording and can be deemed suitable for example, for use with any telemonitoring application. Moreover, there is potential for file sizes to be further reduced using basic compression algorithms, i.e. the deflate algorithm. Finally, these BSPM files have been parsed and visualised within a convenient time period using a web based BSPM viewer. This format, if widely adopted could promote BSPM interoperability, knowledge sharing and data mining. This work could also be used to provide conceptual solutions and inspire existing formats such as DICOM, SCP-ECG and aECG to support the storage of BSPMs. In summary, this research provides initial ground work for creating a complete BSPM management system.
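
    The XML-BSPM vocabulary itself is not given in the abstract, so the sketch below only illustrates the closing remark about compression: a small, invented electrode-map document is built with ElementTree and shrunk with the deflate algorithm via zlib.

```python
import xml.etree.ElementTree as ET
import zlib

# Illustrative structure only; element names are not the actual XML-BSPM vocabulary.
bspm = ET.Element("bspm", configuration="Lux-192", samplingRate="500")
electrodes = ET.SubElement(bspm, "electrodes")
for i in range(192):
    ET.SubElement(electrodes, "electrode", id=str(i), x=str(i % 16), y=str(i // 16))

xml_bytes = ET.tostring(bspm, encoding="utf-8")
deflated = zlib.compress(xml_bytes, level=9)  # deflate, as suggested in the abstract
print(f"{len(xml_bytes)} bytes -> {len(deflated)} bytes")
```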

  7. Intelligent Visualization of Geo-Information on the Future Web

    NASA Astrophysics Data System (ADS)

    Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.

    2012-04-01

    Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "Data that cannot be seen, does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), but even the 2D visualization capabilities of HTML (e.g. images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows for compactly describing 2D and 3D data directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of Javascript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as a key input for standardization within the W3C Community Group on "Declarative 3D for the Web" chaired by the DFKI and has recently been selected as one of the Generic Enablers for the EU Future Internet initiative.

  8. Information persistence using XML database technology

    NASA Astrophysics Data System (ADS)

    Clark, Thomas A.; Lipa, Brian E. G.; Macera, Anthony R.; Staskevich, Gennady R.

    2005-05-01

    The Joint Battlespace Infosphere (JBI) Information Management (IM) services provide information exchange and persistence capabilities that support tailored, dynamic, and timely access to required information, enabling near real-time planning, control, and execution for DoD decision making. JBI IM services will be built on a substrate of network centric core enterprise services and when transitioned, will establish an interoperable information space that aggregates, integrates, fuses, and intelligently disseminates relevant information to support effective warfighter business processes. This virtual information space provides individual users with information tailored to their specific functional responsibilities and provides a highly tailored repository of, or access to, information that is designed to support a specific Community of Interest (COI), geographic area or mission. Critical to effective operation of JBI IM services is the implementation of repositories, where data, represented as information, is represented and persisted for quick and easy retrieval. This paper will address information representation, persistence and retrieval using existing database technologies to manage structured data in Extensible Markup Language (XML) format as well as unstructured data in an IM services-oriented environment. Three basic categories of database technologies will be compared and contrasted: Relational, XML-Enabled, and Native XML. These technologies have diverse properties such as maturity, performance, query language specifications, indexing, and retrieval methods. We will describe our application of these evolving technologies within the context of a JBI Reference Implementation (RI) by providing some hopefully insightful anecdotes and lessons learned along the way. This paper will also outline future directions, promising technologies and emerging COTS products that can offer more powerful information management representations, better persistence mechanisms and improved retrieval techniques.

  9. Information Model Translation to Support a Wider Science Community

    NASA Astrophysics Data System (ADS)

    Hughes, John S.; Crichton, Daniel; Ritschel, Bernd; Hardman, Sean; Joyner, Ronald

    2014-05-01

    The Planetary Data System (PDS), NASA's long-term archive for solar system exploration data, has just released PDS4, a modernization of the PDS architecture, data standards, and technical infrastructure. This next generation system positions the PDS to meet the demands of the coming decade, including big data, international cooperation, distributed nodes, and multiple ways of analysing and interpreting data. It also addresses three fundamental project goals: providing more efficient data delivery by data providers to the PDS, enabling a stable, long-term usable planetary science data archive, and enabling services for the data consumer to find, access, and use the data they require in contemporary data formats. The PDS4 information architecture is used to describe all PDS data using a common model. Captured in an ontology modeling tool it supports a hierarchy of data dictionaries built to the ISO/IEC 11179 standard and is designed to increase flexibility, enable complex searches at the product level, and to promote interoperability that facilitates data sharing both nationally and internationally. A PDS4 information architecture design requirement stipulates that the content of the information model must be translatable to external data definition languages such as XML Schema, XMI/XML, and RDF/XML. To support the semantic Web standards we are now in the process of mapping the contents into RDF/XML to support SPARQL capable databases. We are also building a terminological ontology to support virtually unified data retrieval and access. This paper will provide an overview of the PDS4 information architecture focusing on its domain information model and how the translation and mapping are being accomplished.
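
    To make the RDF/XML target concrete, a small rdflib sketch is shown below; the namespace and predicate names are placeholders rather than the real PDS4 vocabulary.

```python
from rdflib import Graph, Literal, Namespace, URIRef  # third-party: pip install rdflib

PDS = Namespace("http://example.org/pds4#")  # placeholder, not the actual PDS4 namespace

g = Graph()
product = URIRef("http://example.org/products/observational-001")
g.add((product, PDS.title, Literal("Sample observational product")))
g.add((product, PDS.targetName, Literal("Mars")))

# RDF/XML serialization suitable for loading into a SPARQL-capable store.
print(g.serialize(format="xml"))
```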

  10. Structured Course Objects in a Digital Library

    NASA Technical Reports Server (NTRS)

    Maly, K.; Zubair, M.; Liu, X.; Nelson, M.; Zeil, S.

    1999-01-01

    We are developing an Undergraduate Digital Library Framework (UDLF) that will support creation/archiving of courses and reuse of existing course material to evolve courses. UDLF supports the publication of course materials for later instantiation for a specific offering and allows the addition of time-dependent and student-specific information and structures. Instructors and, depending on permissions, students can access the general course materials or the materials for a specific offering. We are building a reference implementation based on NCSTRL+, a digital library derived from NCSTRL. Digital objects in NCSTRL+ are called buckets, self-contained entities that carry their own methods for access and display. Current bucket implementations have a two level structure of packages and elements. This is not a rich enough structure for course objects in UDLF. Typically, courses can only be modeled as a multilevel hierarchy and among different courses, both the syntax and semantics of terms may vary. Therefore, we need a mechanism to define, within a particular library, course models, their constituent objects, and the associated semantics in a flexible, extensible way. In this paper, we describe our approach to define and implement these multilayered course objects. We use XML technology to emulate complex data structures within the NCSTRL+ buckets. We have developed authoring and browsing tools to manipulate these course objects. In our current implementation a user downloading an XML based course bucket also downloads the XML-aware tools: an applet that enables the user to edit or browse the bucket. We claim that XML provides an effective means to represent multi-level structure of a course bucket.

  11. SU-F-P-36: Automation of Linear Accelerator Star Shot Measurement with Advanced XML Scripting and Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, N; Knutson, N; Schmidt, M

    Purpose: To verify a method used to automatically acquire jaw, MLC, collimator and couch star shots for a Varian TrueBeam linear accelerator utilizing Developer Mode and an Electronic Portal Imaging Device (EPID). Methods: An XML script was written to automate motion of the jaws, MLC, collimator and couch in TrueBeam Developer Mode (TBDM) to acquire star shot measurements. The XML script also dictates MV imaging parameters to facilitate automatic acquisition and recording of integrated EPID images. Since couch star shot measurements cannot be acquired using a combination of EPID and jaw/MLC collimation alone due to a fixed imager geometry, a method utilizing a 5 mm wide steel ruler placed on the table and centered within a 15×15 cm2 open field to produce a surrogate of the narrow field aperture was investigated. Four individual star shot measurements (X jaw, Y jaw, MLC and couch) were obtained using both our proposed method and the traditional film-based method. Integrated EPID images and scanned measurement films were analyzed and compared. Results: Star shot (X jaw, Y jaw, MLC and couch) measurements were obtained in a single 5-minute delivery using the TBDM XML script method, compared to 60 minutes for the equivalent traditional film measurements. Analysis of the images and films demonstrated comparable isocentricity results, agreeing within 0.3 mm of each other. Conclusion: The presented automatic approach of acquiring star shot measurements using TBDM and EPID has proven to be more efficient than the traditional film approach, with equivalent results.

  12. Poster — Thur Eve — 55: An automated XML technique for isocentre verification on the Varian TrueBeam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asiev, Krum; Mullins, Joel; DeBlois, François

    2014-08-15

    Isocentre verification tests, such as the Winston-Lutz (WL) test, have gained popularity in recent years as techniques such as stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments are more commonly performed on radiotherapy linacs. These highly conformal treatments require frequent monitoring of the geometrical accuracy of the isocentre to ensure proper radiation delivery. At our clinic, the WL test is performed by acquiring with the EPID a collection of 8 images of a WL phantom fixed on the couch for various couch/gantry angles. This set of images is later analyzed to determine the isocentre size. The current work addresses the acquisition process. A manual WL test acquisition performed by an experienced physicist takes on average 25 minutes and is prone to user manipulation errors. We have automated this acquisition on a Varian TrueBeam STx linac (Varian, Palo Alto, USA). The Varian developer mode allows the execution of custom-made XML script files to control all aspects of linac operation. We have created an XML-WL script that cycles through each couch/gantry combination, taking an EPID image at each position. This automated acquisition is done in less than 4 minutes. The reproducibility of the method was verified by repeating the execution of the XML file 5 times. The analysis of the images showed variation of the isocentre size of less than 0.1 mm along the X, Y and Z axes, which compares favorably to a manual acquisition, for which we typically observe variations of up to 0.5 mm.

  13. An effective XML based name mapping mechanism within StoRM

    NASA Astrophysics Data System (ADS)

    Corso, E.; Forti, A.; Ghiselli, A.; Magnoni, L.; Zappi, R.

    2008-07-01

    In a Grid environment the naming capability allows users to refer to specific data resources in a physical storage system using a high level logical identifier. This logical identifier is typically organized in a file system like structure, a hierarchical tree of names. Storage Resource Manager (SRM) services map the logical identifier to the physical location of data evaluating a set of parameters as the desired quality of services and the VOMS attributes specified in the requests. StoRM is a SRM service developed by INFN and ICTP-EGRID to manage file and space on standard POSIX and high performing parallel and cluster file systems. An upcoming requirement in the Grid data scenario is the orthogonality of the logical name and the physical location of data, in order to refer, with the same identifier, to different copies of data archived in various storage areas with different quality of service. The mapping mechanism proposed in StoRM is based on a XML document that represents the different storage components managed by the service, the storage areas defined by the site administrator, the quality of service they provide and the Virtual Organization that want to use the storage area. An appropriate directory tree is realized in each storage component reflecting the XML schema. In this scenario StoRM is able to identify the physical location of a requested data evaluating the logical identifier and the specified attributes following the XML schema, without querying any database service. This paper presents the namespace schema defined, the different entities represented and the technical details of the StoRM implementation.

  14. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    NASA Technical Reports Server (NTRS)

    2005-01-01

    The purpose of grant NCC3-966 was to investigate and evaluate the interchange of application-specific data among multiple programs each carrying out part of the analysis and design task. This has been carried out previously by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, data of interest is described using the XML markup language that allows the data to be stored in a text string. Software to transform output data of a task into an XML string and software to read an XML string and extract all or a portion of the data needed for another application is used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, and turbine-blade data produced by an independent blade design program (UD0300).
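
    A hedged sketch of the interchange pattern in Python: one application serialises its results into an XML string, and a second application reads the string back and extracts only the fields it needs. The tag names and values are illustrative, not the actual Lapin data dictionary.

      import xml.etree.ElementTree as ET

      def to_xml_string(results):
          # Serialise a dictionary of task outputs into an XML text string.
          root = ET.Element("analysis_results")
          for name, value in results.items():
              ET.SubElement(root, "item", name=name).text = str(value)
          return ET.tostring(root, encoding="unicode")

      def extract(xml_string, wanted):
          # Read the string back and keep only the requested items.
          root = ET.fromstring(xml_string)
          return {item.get("name"): float(item.text)
                  for item in root.findall("item") if item.get("name") in wanted}

      payload = to_xml_string({"blade_chord": 0.042, "tip_radius": 0.31, "rpm": 14000})
      print(extract(payload, wanted={"blade_chord", "tip_radius"}))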

  15. Sharing of secondary electrons by in-lens and out-lens detector in low-voltage scanning electron microscope equipped with immersion lens.

    PubMed

    Kumagai, Kazuhiro; Sekiguchi, Takashi

    2009-03-01

    To understand secondary electron (SE) image formation with in-lens and out-lens detectors in low-voltage scanning electron microscopy (LV-SEM), we have evaluated the SE signals of an in-lens and an out-lens detector in LV-SEM. From the energy distribution spectra of SEs with various boosting voltages of the immersion lens system, we revealed that the electrostatic field of the immersion lens mainly collects electrons with energy lower than 40 eV, acting as a low-pass filter. This effect is also observed as a contrast change in LV-SEM images taken by in-lens and out-lens detectors.

  16. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    NASA Astrophysics Data System (ADS)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images for various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used for dimensionality reduction. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
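
    The comparison set-up can be sketched as follows (a sketch under stated assumptions: images are already speckle-filtered, orientation-normalised and flattened into feature vectors, and random data stands in for the MSTAR chips):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 1024))     # placeholder for preprocessed image vectors
      y = rng.integers(0, 3, size=300)     # placeholder class labels

      classifiers = {"decision tree": DecisionTreeClassifier(),
                     "SVM": SVC(),
                     "AdaBoost": AdaBoostClassifier(),
                     "random forest": RandomForestClassifier()}
      for name, clf in classifiers.items():
          pipe = make_pipeline(PCA(n_components=50), clf)   # PCA for dimensionality reduction
          scores = cross_val_score(pipe, X, y, cv=5)
          print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")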

  17. RF to millimeter wave integration and module technologies

    NASA Astrophysics Data System (ADS)

    Vähä-Heikkilä, T.

    2015-04-01

    Radio Frequency (RF) consumer applications have boosted silicon integrated circuits (ICs) and the corresponding technologies. More and more functions are integrated into ICs and their performance is also increasing. However, RF front-end modules with filters and switches, as well as antennas, still need other means of integration. This paper focuses on RF front-end module and antenna developments as well as on the integration of millimeter-wave radios. VTT Technical Research Centre of Finland has developed both Low Temperature Co-fired Ceramics (LTCC) and Integrated Passive Devices (IPD) integration platforms for RF and millimeter-wave integrated modules. In addition to in-house technologies, VTT is using module and component technologies from other commercial sources.

  18. Content-Aware DataGuide with Incremental Index Update using Frequently Used Paths

    NASA Astrophysics Data System (ADS)

    Sharma, A. K.; Duhan, Neelam; Khattar, Priyanka

    2010-11-01

    The size of the WWW is increasing day by day. Due to the absence of structured data on the Web, it becomes very difficult for information retrieval tools to fully utilize the Web's information. As a solution to this problem, XML pages come into play, which provide structural information to users to some extent. Without efficient indexes, query processing can be quite inefficient due to an exhaustive traversal of the XML data. In this paper, an improved content-centric approach to the Content-Aware DataGuide, an indexing technique for XML databases, is proposed that uses frequently used paths from historical query logs to improve query performance. The index can be updated incrementally according to changes in the query workload, and thus the overhead of reconstruction can be minimized. Frequently used paths are extracted by running a sequential pattern mining algorithm over the queries in the query workload. After this, the data structures are incrementally updated. This indexing technique proves to be efficient, as partial matching queries can be executed efficiently and users retrieve more relevant documents in the results.
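
    As a loose illustration of the frequent-path idea (simple prefix counting stands in for the sequential pattern mining step the paper calls for; the query log and support threshold are made up):

      from collections import Counter

      query_log = ["/catalog/book/title", "/catalog/book/author",
                   "/catalog/book/title", "/catalog/cd/price"]

      def prefixes(path):
          parts = path.strip("/").split("/")
          return ["/" + "/".join(parts[:i]) for i in range(1, len(parts) + 1)]

      counts = Counter(p for q in query_log for p in prefixes(q))
      min_support = 3
      frequent_paths = {p for p, c in counts.items() if c >= min_support}
      print(frequent_paths)   # paths worth indexing eagerly, e.g. /catalog/book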

  19. An XML-based interchange format for genotype-phenotype data.

    PubMed

    Whirl-Carrillo, M; Woon, M; Thorn, C F; Klein, T E; Altman, R B

    2008-02-01

    Recent advances in high-throughput genotyping and phenotyping have accelerated the creation of pharmacogenomic data. Consequently, the community requires standard formats to exchange large amounts of diverse information. To facilitate the transfer of pharmacogenomics data between databases and analysis packages, we have created a standard XML (eXtensible Markup Language) schema that describes both genotype and phenotype data as well as associated metadata. The schema accommodates information regarding genes, drugs, diseases, experimental methods, genomic/RNA/protein sequences, subjects, subject groups, and literature. The Pharmacogenetics and Pharmacogenomics Knowledge Base (PharmGKB; www.pharmgkb.org) has used this XML schema for more than 5 years to accept and process submissions containing more than 1,814,139 SNPs on 20,797 subjects using 8,975 assays. Although developed in the context of pharmacogenomics, the schema is of general utility for exchange of genotype and phenotype data. We have written syntactic and semantic validators to check documents using this format. The schema and code for validation are available to the community at http://www.pharmgkb.org/schema/index.html (last accessed: 8 October 2007). (c) 2007 Wiley-Liss, Inc.
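
    Syntactic validation of a submission against an XML Schema can be sketched in a few lines (a minimal sketch assuming a local copy of the schema; genotype.xsd and submission.xml are placeholder file names, not PharmGKB artefacts):

      from lxml import etree

      schema = etree.XMLSchema(etree.parse("genotype.xsd"))
      doc = etree.parse("submission.xml")

      if schema.validate(doc):
          print("document is schema-valid")
      else:
          for error in schema.error_log:
              print(error.line, error.message)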

  20. Forward and backward tone mapping of high dynamic range images based on subband architecture

    NASA Astrophysics Data System (ADS)

    Bouzidi, Ines; Ouled Zaid, Azza

    2015-01-01

    This paper presents a novel High Dynamic Range (HDR) tone mapping (TM) system based on sub-band architecture. Standard wavelet filters of the Daubechies, Symlet, Coiflet and biorthogonal families were used to estimate the proposed system's performance in terms of Low Dynamic Range (LDR) image quality and reconstructed HDR image fidelity. During the TM stage, the HDR image is first decomposed into sub-bands using a symmetrical analysis-synthesis filter bank. The transform coefficients are then rescaled using a predefined gain map. The inverse Tone Mapping (iTM) stage is straightforward. Indeed, the LDR image passes through the same sub-band architecture, but instead of reducing the dynamic range, the LDR content is boosted to an HDR representation. Moreover, our TM scheme includes an optimization module to select the gain map components that minimize the reconstruction error, consequently resulting in high-fidelity HDR content. Comparisons with recent state-of-the-art methods have shown that our method provides better results in terms of visual quality and HDR reconstruction fidelity using objective and subjective evaluations.
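
    The sub-band structure of the TM stage can be sketched with a standard wavelet library (a simplified sketch: a single global gain stands in for the optimised gain map, and a synthetic array stands in for the HDR luminance):

      import numpy as np
      import pywt

      hdr = np.abs(np.random.default_rng(0).normal(size=(256, 256))) * 1e4   # placeholder HDR luminance

      coeffs = pywt.wavedec2(np.log1p(hdr), wavelet="db4", level=3)    # analysis filter bank
      gain = 0.5                                                       # crude stand-in for the gain map
      rescaled = [coeffs[0] * gain] + [tuple(d * gain for d in detail)
                                       for detail in coeffs[1:]]
      ldr = np.expm1(pywt.waverec2(rescaled, wavelet="db4"))           # synthesis filter bank
      print(ldr.min(), ldr.max())                                      # reduced dynamic range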

  1. Towards disparity joint upsampling for robust stereoscopic endoscopic scene reconstruction in robotic prostatectomy

    NASA Astrophysics Data System (ADS)

    Luo, Xiongbiao; McLeod, A. Jonathan; Jayarathne, Uditha L.; Pautler, Stephen E.; Schlacta, Christopher M.; Peters, Terry M.

    2016-03-01

    Three-dimensional (3-D) scene reconstruction from stereoscopic binocular laparoscopic videos is an effective way to expand the limited surgical field and augment the structure visualization of the organ being operated on in minimally invasive surgery. However, currently available reconstruction approaches are limited by image noise, occlusions, and textureless and blurred structures. In particular, an endoscope inside the body has only a limited light source, resulting in illumination non-uniformities in the visualized field. These limitations unavoidably deteriorate the stereo image quality and hence lead to low-resolution and inaccurate disparity maps, resulting in blurred edge structures in the 3-D scene reconstruction. This paper proposes an improved stereo correspondence framework that integrates cost-volume filtering with joint upsampling for robust disparity estimation. Joint bilateral upsampling, joint geodesic upsampling, and tree filtering upsampling were compared to enhance the disparity accuracy. The experimental results demonstrate that joint upsampling provides an effective way to boost the disparity estimation and hence to improve the surgical endoscopic scene 3-D reconstruction. Moreover, joint bilateral upsampling generally outperforms the other two upsampling methods in disparity estimation.

  2. NASA Constellation Distributed Simulation Middleware Trade Study

    NASA Technical Reports Server (NTRS)

    Hasan, David; Bowman, James D.; Fisher, Nancy; Cutts, Dannie; Cures, Edwin Z.

    2008-01-01

    This paper presents the results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.

  3. A standard format and a graphical user interface for spin system specification.

    PubMed

    Biternas, A G; Charnock, G T P; Kuprov, Ilya

    2014-03-01

    We introduce a simple and general XML format for spin system description that is the result of extensive consultations within the Magnetic Resonance community and unifies under one roof all major existing spin interaction specification conventions. The format is human-readable, easy to edit and easy to parse using standard XML libraries. We also describe a graphical user interface that was designed to facilitate the construction and visualization of complicated spin systems. The interface is capable of generating input files for several popular spin dynamics simulation packages. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  4. How Magnetic Disturbance Influences the Attitude and Heading in Magnetic and Inertial Sensor-Based Orientation Estimation

    PubMed Central

    Li, Qingguo

    2017-01-01

    With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more and more accurate, lightweight, smaller in size as well as low-cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect the attitude and heading estimation for a magnetic and inertial sensor. First, we reviewed four major components dealing with magnetic disturbance, namely decoupling attitude estimation from the magnetic reading, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms. We review and analyze the features of existing methods for each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented, including a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter and an extended Kalman filter. Finally, a new standardized testing procedure has been developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods were easily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing a new sensor fusion method. PMID:29283432
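
    One of the reviewed ideas, decoupling attitude from the magnetometer, can be illustrated with a bare-bones complementary filter that fuses only gyroscope and accelerometer data for pitch/roll, so that a magnetic disturbance cannot corrupt the attitude estimate (an illustrative sketch, not one of the four implementations evaluated in the paper):

      import numpy as np

      def complementary_filter(gyro_rates, accels, dt=0.01, alpha=0.98):
          # gyro_rates, accels: (N, 3) arrays in rad/s and m/s^2; returns (N, 2) pitch/roll.
          pitch = roll = 0.0
          out = []
          for (gx, gy, _), (ax, ay, az) in zip(gyro_rates, accels):
              # accelerometer-only attitude (valid when total acceleration ~ gravity)
              acc_pitch = np.arctan2(-ax, np.hypot(ay, az))
              acc_roll = np.arctan2(ay, az)
              # blend gyro integration (short term) with accelerometer (long term)
              pitch = alpha * (pitch + gy * dt) + (1 - alpha) * acc_pitch
              roll = alpha * (roll + gx * dt) + (1 - alpha) * acc_roll
              out.append((pitch, roll))
          return np.array(out)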

  5. Automating Data Submission to a National Archive

    NASA Astrophysics Data System (ADS)

    Work, T. T.; Chandler, C. L.; Groman, R. C.; Allison, M. D.; Gegg, S. R.; Biological; Chemical Oceanography Data Management Office

    2010-12-01

    In late 2006, the U.S. National Science Foundation (NSF) funded the Biological and Chemical Oceanographic Data Management Office (BCO-DMO) at Woods Hole Oceanographic Institution (WHOI) to work closely with investigators to manage oceanographic data generated from their research projects. One of the final data management tasks is to ensure that the data are permanently archived at the U.S. National Oceanographic Data Center (NODC) or other appropriate national archiving facility. In the past, BCO-DMO submitted data to NODC as an email with attachments including a PDF file (a manually completed metadata record) and one or more data files. This method is no longer feasible given the rate at which data sets are contributed to BCO-DMO. Working with collaborators at NODC, a more streamlined and automated workflow was developed to keep up with the increased volume of data that must be archived at NODC. We describe our new workflow: a semi-automated approach for contributing data to NODC that includes a Federal Geographic Data Committee (FGDC) compliant Extensible Markup Language (XML) metadata file accompanied by comma-delimited data files. The FGDC XML file is populated from information stored in a MySQL database. A crosswalk described by an Extensible Stylesheet Language Transformation (XSLT) is used to transform the XML-formatted MySQL result set to an FGDC-compliant XML metadata file. To ensure data integrity, the MD5 algorithm is used to generate a checksum and manifest of the files submitted to NODC for permanent archive. The revised system supports preparation of detailed, standards-compliant metadata that facilitate data sharing and enable accurate reuse of multidisciplinary information. The approach is generic enough to be adapted for use by other data management groups.
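
    Two steps of the workflow lend themselves to a short sketch (hedged: the stylesheet and file names are placeholders, not the BCO-DMO crosswalk itself): applying an XSLT crosswalk to an XML-serialised database result set, and writing an MD5 manifest of the files destined for the archive.

      import hashlib
      from lxml import etree

      # 1. XML result set -> FGDC-style metadata via an XSLT crosswalk
      transform = etree.XSLT(etree.parse("mysql_to_fgdc.xsl"))
      fgdc = transform(etree.parse("result_set.xml"))
      fgdc.write("dataset_fgdc.xml", xml_declaration=True, encoding="utf-8")

      # 2. MD5 checksum manifest for the submission package
      with open("manifest.md5", "w") as manifest:
          for path in ("dataset_fgdc.xml", "dataset.csv"):
              digest = hashlib.md5(open(path, "rb").read()).hexdigest()
              manifest.write(f"{digest}  {path}\n")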

  6. Prototype Development: Context-Driven Dynamic XML Ophthalmologic Data Capture Application.

    PubMed

    Peissig, Peggy; Schwei, Kelsey M; Kadolph, Christopher; Finamore, Joseph; Cancel, Efrain; McCarty, Catherine A; Okorie, Asha; Thomas, Kate L; Allen Pacheco, Jennifer; Pathak, Jyotishman; Ellis, Stephen B; Denny, Joshua C; Rasmussen, Luke V; Tromp, Gerard; Williams, Marc S; Vrabec, Tamara R; Brilliant, Murray H

    2017-09-13

    The capture and integration of structured ophthalmologic data into electronic health records (EHRs) has historically been a challenge. However, the importance of this activity for patient care and research is critical. The purpose of this study was to develop a prototype of a context-driven dynamic extensible markup language (XML) ophthalmologic data capture application for research and clinical care that could be easily integrated into an EHR system. Stakeholders in the medical, research, and informatics fields were interviewed and surveyed to determine data and system requirements for ophthalmologic data capture. On the basis of these requirements, an ophthalmology data capture application was developed to collect and store discrete data elements with important graphical information. The context-driven data entry application supports several features, including ink-over drawing capability for documenting eye abnormalities, context-based Web controls that guide data entry based on preestablished dependencies, and an adaptable database or XML schema that stores Web form specifications and allows for immediate changes in form layout or content. The application utilizes Web services to enable data integration with a variety of EHRs for retrieval and storage of patient data. This paper describes the development process used to create a context-driven dynamic XML data capture application for optometry and ophthalmology. The list of ophthalmologic data elements identified as important for care and research can be used as a baseline list for future ophthalmologic data collection activities. ©Peggy Peissig, Kelsey M Schwei, Christopher Kadolph, Joseph Finamore, Efrain Cancel, Catherine A McCarty, Asha Okorie, Kate L Thomas, Jennifer Allen Pacheco, Jyotishman Pathak, Stephen B Ellis, Joshua C Denny, Luke V Rasmussen, Gerard Tromp, Marc S Williams, Tamara R Vrabec, Murray H Brilliant. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.09.2017.

  7. PRIDE Inspector Toolsuite: Moving Toward a Universal Visualization Tool for Proteomics Data Standard Formats and Quality Assessment of ProteomeXchange Datasets.

    PubMed

    Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; Del-Toro, Noemi; Dianes, Jose A; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2016-01-01

    The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data before data submission or already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and a direct access to private (password protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX "complete" submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak lists formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  8. Semantic e-Science: From Microformats to Models

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Freemantle, J. R.; Aldridge, K. D.

    2009-05-01

    A platform has been developed to transform semi-structured ASCII data into a representation based on the eXtensible Markup Language (XML). A subsequent transformation allows the XML-based representation to be rendered in the Resource Description Framework (RDF). Editorial metadata, expressed as external annotations (via the XML Pointer Language), also survives this transformation process (e.g., Lumb et al., http://dx.doi.org/10.1016/j.cageo.2008.03.009). Because the XML-to-RDF transformation uses XSLT (eXtensible Stylesheet Language Transformations), semantic microformats ultimately encode the scientific data (Lumb & Aldridge, http://dx.doi.org/10.1109/HPCS.2006.26). In building the relationship-centric representation in RDF, a Semantic Model of the scientific data is extracted. The systematic enhancement in the expressivity and richness of the scientific data results in representations of knowledge that are readily understood and manipulated by intelligent software agents. Thus scientists are able to draw upon various resources within and beyond their discipline to use in their scientific applications. Since the resulting Semantic Models are independent conceptualizations of the science itself, the representation of scientific knowledge and interaction with the same can stimulate insight from different perspectives. Using the Global Geodynamics Project (GGP) for the purpose of illustration, the introduction of GGP microformats enables a Semantic Model for the GGP that can be semantically queried (e.g., via SPARQL, http://www.w3.org/TR/rdf-sparql-query). Although the present implementation uses the Open Source Redland RDF Libraries (http://librdf.org/), the approach is generalizable to other platforms and to projects other than the GGP (e.g., Baker et al., Informatics and the 2007-2008 Electronic Geophysical Year, Eos Trans. Am. Geophys. Un., 89(48), 485-486, 2008).
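
    Once the data are in RDF, the kind of semantic query described above can be sketched with rdflib (assumptions: the XSLT transformation has already produced an RDF file, and the file name and predicate URI below are illustrative rather than GGP vocabulary):

      from rdflib import Graph

      g = Graph()
      g.parse("ggp_station_data.rdf")     # output of the XML-to-RDF transformation

      results = g.query("""
          PREFIX ex: <http://example.org/ggp#>
          SELECT ?station ?gravity
          WHERE { ?station ex:residualGravity ?gravity }
          LIMIT 10
      """)
      for station, gravity in results:
          print(station, gravity)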

  9. PRIDE Inspector Toolsuite: Moving Toward a Universal Visualization Tool for Proteomics Data Standard Formats and Quality Assessment of ProteomeXchange Datasets*

    PubMed Central

    Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; del-Toro, Noemi; Dianes, Jose A.; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2016-01-01

    The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data before data submission or already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and a direct access to private (password protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX “complete” submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak lists formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. PMID:26545397

  10. Lapin Data Interchange Among Database, Analysis and Display Programs Using XML-Based Text Files

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The purpose was to investigate and evaluate the interchange of application-specific data among multiple programs each carrying out part of the analysis and design task. This has been carried out previously by creating a custom program to read data produced by one application and then write that data to a file whose format is specific to the second application that needs all or part of that data. In this investigation, data of interest is described using the XML markup language that allows the data to be stored in a text string. Software to transform output data of a task into an XML string and software to read an XML string and extract all or a portion of the data needed for another application is used to link two independent applications together as part of an overall design effort. This approach was initially used with a standard analysis program, Lapin, along with standard applications (a spreadsheet program, a relational database program, and a conventional dialog and display program) to demonstrate the successful sharing of data among independent programs. See Engineering Analysis Using a Web-Based Protocol by J.D. Schoeffler and R.W. Claus, NASA TM-2002-211981, October 2002. Most of the effort beyond that demonstration has been concentrated on the inclusion of more complex display programs. Specifically, a custom-written windowing program organized around dialogs to control the interactions has been combined with an independent CAD program (Open Cascade) that supports sophisticated display of CAD elements such as lines, spline curves, and surfaces, and turbine-blade data produced by an independent blade design program (UD0300).

  11. Design of an RF System for Electron Bernstein Wave Studies in MST

    NASA Astrophysics Data System (ADS)

    Kauffold, J. X.; Seltzman, A. H.; Anderson, J. K.; Nonn, P. D.; Forest, C. B.

    2010-11-01

    Motivated by the possibility of current profile control, a 5.5 GHz RF system for EBW is being developed. The central component is a standard radar klystron with 1.2 MW peak power and a 4 μs typical pulse length. Meaningful experiments require RF pulse lengths similar to the characteristic electron confinement times in MST, necessitating the creation of a power supply providing 80 kV at 40 A for 10 ms. A low-inductance IGBT network switches power at 20 kHz from an electrolytic capacitor bank into the primary of a three-phase resonant transformer system that is then rectified and filtered. The system uses three magnetically separate transformers with microcrystalline iron cores to provide suitable volt-seconds and low hysteresis losses. Each phase has a secondary with a large leakage inductance and a parallel capacitor providing a boost ratio greater than 60:1 with a physical turns ratio of 13.5:1. A microprocessor feedback control system varies the drive frequency around resonance to regulate the boost ratio and provide a stable output as the storage bank discharges. The completed system will deliver RF to the plasma boundary where coupling to the Bernstein mode and subsequent heating and current drive can occur.

  12. SU-F-T-476: Performance of the AS1200 EPID for Periodic Photon Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeMarco, J; Fraass, B; Yang, W

    2016-06-15

    Purpose: To assess the dosimetric performance of a new amorphous silicon flat-panel electronic portal imaging device (EPID) suitable for high-intensity, flattening-filter-free delivery mode. Methods: An EPID-based QA suite was created with automation to periodically monitor photon central-axis output and two-dimensional beam profile constancy as a function of gantry angle and dose rate. A Varian TrueBeam linear accelerator installed with Developer Mode was used to customize and deliver XML script routines for the QA suite using the dosimetry-mode image acquisition of an aS1200 EPID. Automatic post-processing software was developed to analyze the resulting DICOM images. Results: The EPID was used to monitor photon beam output constancy (central-axis), flatness, and symmetry over a period of 10 months for four photon beam energies (6x, 15x, 6xFFF, and 10xFFF). EPID results were consistent with those measured with a standard daily QA check device. At the four cardinal gantry angles, the standard deviation of the EPID central-axis output was <0.5%. Likewise, EPID measurements were independent of dose rate over the wide range studied (including up to 2400 MU/min for 10xFFF), with a standard deviation of <0.8% relative to the nominal dose rate for each energy. Also, profile constancy and field size measurements showed good agreement with the reference acquisition at 0° gantry angle and nominal dose rate. XML script files were also tested for MU linearity and picket-fence delivery. Using Developer Mode, the test suite was delivered in <60 minutes for all 4 photon energies with 4 dose rates per energy and 5 picket-fence acquisitions. Conclusion: Dosimetry image acquisition using the new EPID was found to be accurate for standard and high-intensity photon beams over a broad range of dose rates over 10 months. Developer Mode provided an efficient platform to customize the EPID acquisitions using custom script files, which significantly reduced the time required. This work was funded in part by Varian Medical Systems.

  13. A Platform to Build Mobile Health Apps: The Personal Health Intervention Toolkit (PHIT).

    PubMed

    Eckhoff, Randall Peter; Kizakevich, Paul Nicholas; Bakalov, Vesselina; Zhang, Yuying; Bryant, Stephanie Patrice; Hobbs, Maria Ann

    2015-06-01

    Personal Health Intervention Toolkit (PHIT) is an advanced cross-platform software framework targeted at personal self-help research on mobile devices. Following the subjective and objective measurement, assessment, and plan methodology for health assessment and intervention recommendations, the PHIT platform lets researchers quickly build mobile health research Android and iOS apps. They can (1) create complex data-collection instruments using a simple extensible markup language (XML) schema; (2) use Bluetooth wireless sensors; (3) create targeted self-help interventions based on collected data via XML-coded logic; (4) facilitate cross-study reuse from the library of existing instruments and interventions such as stress, anxiety, sleep quality, and substance abuse; and (5) monitor longitudinal intervention studies via daily upload to a Web-based dashboard portal. For physiological data, Bluetooth sensors collect real-time data with on-device processing. For example, using the BinarHeartSensor, the PHIT platform processes the heart rate data into heart rate variability measures, and plots these data as time-series waveforms. Subjective data instruments are user data-entry screens, comprising a series of forms with validation and processing logic. The PHIT instrument library consists of over 70 reusable instruments for various domains including cognitive, environmental, psychiatric, psychosocial, and substance abuse. Many are standardized instruments, such as the Alcohol Use Disorder Identification Test, Patient Health Questionnaire-8, and Post-Traumatic Stress Disorder Checklist. Autonomous instruments such as battery and global positioning system location support continuous background data collection. All data are acquired using a schedule appropriate to the app's deployment. The PHIT intelligent virtual advisor (iVA) is an expert system logic layer, which analyzes the data in real time on the device. This data analysis results in a tailored app of interventions and other data-collection instruments. For example, if a user anxiety score exceeds a threshold, the iVA might add a meditation intervention to the task list in order to teach the user how to relax, and schedule a reassessment using the anxiety instrument 2 weeks later to re-evaluate. If the anxiety score exceeds a higher threshold, then an advisory to seek professional help would be displayed. Using the easy-to-use PHIT scripting language, the researcher can program new instruments, the iVA, and interventions to their domain-specific needs. The iVA, instruments, and interventions are defined via XML files, which facilitates rapid app development and deployment. The PHIT Web-based dashboard portal provides the researcher access to all the uploaded data. After a secure login, the data can be filtered by criteria such as study, protocol, domain, and user. Data can also be exported into a comma-delimited file for further processing. The PHIT framework has proven to be an extensible, reconfigurable technology that facilitates mobile data collection and health intervention research. Additional plans include instrument development in other domains, additional health sensors, and a text messaging notification system.

  14. A Platform to Build Mobile Health Apps: The Personal Health Intervention Toolkit (PHIT)

    PubMed Central

    2015-01-01

    Personal Health Intervention Toolkit (PHIT) is an advanced cross-platform software framework targeted at personal self-help research on mobile devices. Following the subjective and objective measurement, assessment, and plan methodology for health assessment and intervention recommendations, the PHIT platform lets researchers quickly build mobile health research Android and iOS apps. They can (1) create complex data-collection instruments using a simple extensible markup language (XML) schema; (2) use Bluetooth wireless sensors; (3) create targeted self-help interventions based on collected data via XML-coded logic; (4) facilitate cross-study reuse from the library of existing instruments and interventions such as stress, anxiety, sleep quality, and substance abuse; and (5) monitor longitudinal intervention studies via daily upload to a Web-based dashboard portal. For physiological data, Bluetooth sensors collect real-time data with on-device processing. For example, using the BinarHeartSensor, the PHIT platform processes the heart rate data into heart rate variability measures, and plots these data as time-series waveforms. Subjective data instruments are user data-entry screens, comprising a series of forms with validation and processing logic. The PHIT instrument library consists of over 70 reusable instruments for various domains including cognitive, environmental, psychiatric, psychosocial, and substance abuse. Many are standardized instruments, such as the Alcohol Use Disorder Identification Test, Patient Health Questionnaire-8, and Post-Traumatic Stress Disorder Checklist. Autonomous instruments such as battery and global positioning system location support continuous background data collection. All data are acquired using a schedule appropriate to the app’s deployment. The PHIT intelligent virtual advisor (iVA) is an expert system logic layer, which analyzes the data in real time on the device. This data analysis results in a tailored app of interventions and other data-collection instruments. For example, if a user anxiety score exceeds a threshold, the iVA might add a meditation intervention to the task list in order to teach the user how to relax, and schedule a reassessment using the anxiety instrument 2 weeks later to re-evaluate. If the anxiety score exceeds a higher threshold, then an advisory to seek professional help would be displayed. Using the easy-to-use PHIT scripting language, the researcher can program new instruments, the iVA, and interventions to their domain-specific needs. The iVA, instruments, and interventions are defined via XML files, which facilitates rapid app development and deployment. The PHIT Web-based dashboard portal provides the researcher access to all the uploaded data. After a secure login, the data can be filtered by criteria such as study, protocol, domain, and user. Data can also be exported into a comma-delimited file for further processing. The PHIT framework has proven to be an extensible, reconfigurable technology that facilitates mobile data collection and health intervention research. Additional plans include instrument development in other domains, additional health sensors, and a text messaging notification system. PMID:26033047

  15. Making journals accessible to the visually impaired: the future is near

    PubMed Central

    GARDNER, John; BULATOV, Vladimir; KELLY, Robert

    2010-01-01

    The American Physical Society (APS) has been a leader in using markup languages for publishing. ViewPlus has led development of innovative technologies for graphical information accessibility by people with print disabilities. APS, ViewPlus, and other collaborators in the Enhanced Reading Project are working together to develop the necessary technology and infrastructure for APS to publish its journals in the DAISY (Digital Accessible Information SYstem) eXtensible Markup Language (XML) format, in which all text, math, and figures would be accessible to people who are blind or have other print disabilities. The first APS DAISY XML publications are targeted for late 2010. PMID:20676358

  16. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2010-04-06

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.
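
    The final step described in the claim, linking retrieved documents by similarity before laying them out as tree nodes, can be loosely illustrated as follows (TF-IDF with cosine similarity is an assumed stand-in, not necessarily the patented scoring method):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      docs = ["reactor coolant pump vibration report",
              "coolant pump maintenance summary",
              "quarterly budget overview"]
      similarity = cosine_similarity(TfidfVectorizer().fit_transform(docs))
      # similarity[i, j] near 1.0 -> documents i and j belong close together in the tree
      print(similarity.round(2))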

  17. System for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E.; Elmore, Mark Thomas; Reed, Joel Wesley; Treadwell, Jim N.; Samatova, Nagiza Faridovna

    2006-07-04

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  18. Method for gathering and summarizing internet information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Elmore, Mark Thomas [Oak Ridge, TN; Reed, Joel Wesley [Knoxville, TN; Treadwell, Jim N [Louisville, TN; Samatova, Nagiza Faridovna [Oak Ridge, TN

    2008-01-01

    A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

  19. DbMap: improving database interoperability issues in medical software using a simple, Java-Xml based solution.

    PubMed Central

    Karadimas, H.; Hemery, F.; Roland, P.; Lepage, E.

    2000-01-01

    In medical software development, the use of databases plays a central role. However, most of the databases have heterogeneous encoding and data models. To deal with these variations in the application code directly is error-prone and reduces the potential reuse of the produced software. Several approaches to overcome these limitations have been proposed in the medical database literature, which will be presented. We present a simple solution, based on a Java library, and a central Metadata description file in XML. This development approach presents several benefits in software design and development cycles, the main one being the simplicity in maintenance. PMID:11079915

  20. The inclusion of an online journal in PubMed central - a difficult path.

    PubMed

    Grech, Victor

    2016-01-01

    The indexing of a journal in a prominent database (such as PubMed) is an important imprimatur. Journals accepted for inclusion in PubMed Central (PMC) are automatically indexed in PubMed but must provide the entire contents of their publications as XML-tagged (Extensible Markup Language) data files compliant with PubMed's document type definition (DTD). This paper describes the various attempts that the journal Images in Paediatric Cardiology made in its efforts to convert the journal contents (including all of the extant backlog) to PMC-compliant XML for archiving and indexing in PubMed after the journal was accepted for inclusion by the database.

  1. Minimum information required for a DMET experiment reporting.

    PubMed

    Kumuthini, Judit; Mbiyavanga, Mamana; Chimusa, Emile R; Pathak, Jyotishman; Somervuo, Panu; Van Schaik, Ron Hn; Dolzan, Vita; Mizzi, Clint; Kalideen, Kusha; Ramesar, Raj S; Macek, Milan; Patrinos, George P; Squassina, Alessio

    2016-09-01

    The aim is to provide pharmacogenomics reporting guidelines and the information and tools required for reporting to public omic databases. For effective DMET data interpretation, sharing, interoperability, reproducibility and reporting, we propose the Minimum Information required for a DMET Experiment (MIDE) reporting guidelines. MIDE provides reporting guidelines and describes the information required for reporting, data storage and data sharing in the form of XML. The MIDE guidelines will benefit the scientific community working with pharmacogenomics experiments, including the reporting of pharmacogenomics data from other technology platforms, by providing tools that ease and automate the generation of such reports using the standardized MIDE XML schema, thereby facilitating the sharing, dissemination and reanalysis of datasets through accessible and transparent pharmacogenomics data reporting.

  2. jqcML: an open-source java API for mass spectrometry quality control data in the qcML format.

    PubMed

    Bittremieux, Wout; Kelchtermans, Pieter; Valkenborg, Dirk; Martens, Lennart; Laukens, Kris

    2014-07-03

    The awareness that systematic quality control is an essential factor to enable the growth of proteomics into a mature analytical discipline has increased over the past few years. To this aim, a controlled vocabulary and document structure have recently been proposed by Walzer et al. to store and disseminate quality-control metrics for mass-spectrometry-based proteomics experiments, called qcML. To facilitate the adoption of this standardized quality control routine, we introduce jqcML, a Java application programming interface (API) for the qcML data format. First, jqcML provides a complete object model to represent qcML data. Second, jqcML provides the ability to read, write, and work in a uniform manner with qcML data from different sources, including the XML-based qcML file format and the relational database qcDB. Interaction with the XML-based file format is obtained through the Java Architecture for XML Binding (JAXB), while generic database functionality is obtained by the Java Persistence API (JPA). jqcML is released as open-source software under the permissive Apache 2.0 license and can be downloaded from https://bitbucket.org/proteinspector/jqcml .

  3. Ajax Architecture Implementation Techniques

    NASA Astrophysics Data System (ADS)

    Hussaini, Syed Asadullah; Tabassum, S. Nasira; Baig, Tabassum, M. Khader

    2012-03-01

    Today's rich Web applications use a mix of JavaScript and asynchronous communication with the application server. This mechanism is also known as Ajax: Asynchronous JavaScript and XML. The intent of Ajax is to exchange small pieces of data between the browser and the application server, and in doing so, use partial page refresh instead of reloading the entire Web page. AJAX (Asynchronous JavaScript and XML) is a powerful Web development model for browser-based Web applications. Technologies that form the AJAX model, such as XML, JavaScript, HTTP, and XHTML, are individually widely used and well known. However, AJAX combines these technologies to let Web pages retrieve small amounts of data from the server without having to reload the entire page. This capability makes Web pages more interactive and lets them behave like local applications. Web 2.0 enabled by the Ajax architecture has given rise to a new level of user interactivity through web browsers. Many new and extremely popular Web applications have been introduced such as Google Maps, Google Docs, Flickr, and so on. Ajax Toolkits such as Dojo allow web developers to build Web 2.0 applications quickly and with little effort.

  4. Enabling model customization and integration

    NASA Astrophysics Data System (ADS)

    Park, Minho; Fishwick, Paul A.

    2003-09-01

    Until fairly recently, the ideas of dynamic model content and presentation were treated as synonymous. For example, if one were to take a data flow network, which captures the dynamics of a target system in terms of the flow of data through nodal operators, then one would often standardize on rectangles and arrows for the model display. The increasing web emphasis on XML, however, suggests that the network model can have its content specified in an XML language, and then the model can be represented in a number of ways depending on the chosen style. We have developed a formal method, based on styles, that permits a model to be specified in XML and presented in 1D (text), 2D, and 3D. This method allows customization and personalization to exert their benefits beyond e-commerce, to the area of model structures used in computer simulation. This customization leads naturally to solving the bigger problem of model integration - the act of taking models of a scene and integrating them with that scene so that there is only one unified modeling interface. This work focuses mostly on customization, but we address the integration issue in the future work section.

  5. jmzReader: A Java parser library to process and visualize multiple text and XML-based mass spectrometry data formats.

    PubMed

    Griss, Johannes; Reisinger, Florian; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2012-03-01

    We here present the jmzReader library: a collection of Java application programming interfaces (APIs) to parse the most commonly used peak list and XML-based mass spectrometry (MS) data formats: DTA, MS2, MGF, PKL, mzXML, mzData, and mzML (based on the already existing API jmzML). The library is optimized to be used in conjunction with mzIdentML, the recently released standard data format for reporting protein and peptide identifications, developed by the HUPO proteomics standards initiative (PSI). mzIdentML files do not contain spectra data but contain references to different kinds of external MS data files. As a key functionality, all parsers implement a common interface that supports the various methods used by mzIdentML to reference external spectra. Thus, when developing software for mzIdentML, programmers no longer have to support multiple MS data file formats but only this one interface. The library (which includes a viewer) is open source and, together with detailed documentation, can be downloaded from http://code.google.com/p/jmzreader/. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. An enhanced security solution for electronic medical records based on AES hybrid technique with SOAP/XML and SHA-1.

    PubMed

    Kiah, M L Mat; Nabi, Mohamed S; Zaidan, B B; Zaidan, A A

    2013-10-01

    This study aims to provide security solutions for implementing electronic medical records (EMRs). E-Health organizations could utilize the proposed method and implement recommended solutions in medical/health systems. The majority of the required security features of EMRs were noted. The methods used were tested against each of these security features. In implementing the system, the combination that satisfied all of the security features of EMRs was selected. Secure implementation and management of EMRs facilitate the safeguarding of the confidentiality, integrity, and availability of e-health organization systems. Health practitioners, patients, and visitors can use the information system facilities safely and with confidence anytime and anywhere. After critically reviewing security and data transmission methods, a new hybrid method was proposed to be implemented on EMR systems. This method will enhance the robustness, security, and integration of EMR systems. The hybrid of simple object access protocol/extensible markup language (XML) with the advanced encryption standard and secure hash algorithm version 1 has achieved the security requirements of an EMR system with the capability of integrating with other systems through the design of XML messages.
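
    The hybrid can be sketched, under simplifying assumptions (AES-GCM as the symmetric cipher, ad hoc element names, and no key management), as encrypting the record body, attaching a SHA-1 digest of the plaintext, and wrapping both in a SOAP-like XML envelope:

      import base64, hashlib, os
      import xml.etree.ElementTree as ET
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      record = b"<patient><id>123</id><dx>hypertension</dx></patient>"
      key, nonce = AESGCM.generate_key(bit_length=256), os.urandom(12)

      ciphertext = AESGCM(key).encrypt(nonce, record, None)   # AES-encrypted payload
      digest = hashlib.sha1(record).hexdigest()               # SHA-1 integrity digest

      envelope = ET.Element("Envelope")
      body = ET.SubElement(envelope, "Body")
      ET.SubElement(body, "EncryptedRecord").text = base64.b64encode(nonce + ciphertext).decode()
      ET.SubElement(body, "IntegrityDigest", algorithm="SHA-1").text = digest
      print(ET.tostring(envelope, encoding="unicode"))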

  7. RUBE: an XML-based architecture for 3D process modeling and model fusion

    NASA Astrophysics Data System (ADS)

    Fishwick, Paul A.

    2002-07-01

    Information fusion is a critical problem for science and engineering. There is a need to fuse information content specified as either data or model. We frame our work in terms of fusing dynamic and geometric models, to create an immersive environment where these models can be juxtaposed in 3D, within the same interface. The method by which this is accomplished fits well into other eXtensible Markup Language (XML) approaches to fusion in general. The task of modeling lies at the heart of the human-computer interface, joining the human to the system under study through a variety of sensory modalities. I overview modeling as a key concern for the Defense Department and the Air Force, and then follow with a discussion of past, current, and future work. Past work began with a package written in C and has progressed, in current work, to an implementation in XML. Our current work is defined within the RUBE architecture, which is detailed in subsequent papers devoted to key components. We have built RUBE as a next generation modeling framework using our prior software, with research opportunities in immersive 3D and tangible user interfaces.

  8. SBMLeditor: effective creation of models in the Systems Biology Markup Language (SBML)

    PubMed Central

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-01-01

    Background The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. Results SBMLeditor is written in JAVA using JCompneur, a library providing interfaces to easily display an XML document as a tree. This dramatically decreases the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows an easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. Conclusion SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors. PMID:17341299

  9. SBMLeditor: effective creation of models in the Systems Biology Markup language (SBML).

    PubMed

    Rodriguez, Nicolas; Donizelli, Marco; Le Novère, Nicolas

    2007-03-06

    The need to build a tool to facilitate the quick creation and editing of models encoded in the Systems Biology Markup Language (SBML) has been growing with the number of users and the increased complexity of the language. SBMLeditor tries to answer this need by providing a very simple, low level editor of SBML files. Users can create and remove all the necessary bits and pieces of SBML in a controlled way that maintains the validity of the final SBML file. SBMLeditor is written in JAVA using JCompneur, a library providing interfaces to easily display an XML document as a tree. This dramatically decreases the development time for a new XML editor. The possibility to include custom dialogs for different tags allows a lot of freedom for the editing and validation of the document. In addition to Xerces, SBMLeditor uses libSBML to check the validity and consistency of SBML files. A graphical equation editor allows an easy manipulation of MathML. SBMLeditor can be used as a module of the Systems Biology Workbench. SBMLeditor contains many improvements compared to a generic XML editor, and allows users to create an SBML model quickly and without syntactic errors.

  10. XML-Based Visual Specification of Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Al-Theneyan, Ahmed; Jakatdar, Amol; Mehrotra, Piyush; Zubair, Mohammad

    2001-01-01

    The advancements in the Internet and Web technologies have fueled a growing interest in developing a web-based distributed computing environment. We have designed and developed Arcade, a web-based environment for designing, executing, monitoring, and controlling distributed heterogeneous applications, which is easy to use and access, portable, and provides support through all phases of the application development and execution. A major focus of the environment is the specification of heterogeneous, multidisciplinary applications. In this paper we focus on the visual and script-based specification interface of Arcade. The web/browser-based visual interface is designed to be intuitive to use and can also be used for visual monitoring during execution. The script specification is based on XML to: (1) make it portable across different frameworks, and (2) make the development of our tools easier by using the existing freely available XML parsers and editors. There is a one-to-one correspondence between the visual and script-based interfaces allowing users to go back and forth between the two. To support this we have developed translators that translate a script-based specification to a visual-based specification, and vice-versa. These translators are integrated with our tools and are transparent to users.

  11. CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises

    NASA Astrophysics Data System (ADS)

    Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.

    2011-12-01

    JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel, research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel® spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of a cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, due to manual metadata entry into individual websites by administrators. The other is the differing data types and representations of metadata on each website. To solve these problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, an Enterprise Application Integration (EAI) software package, and a web-based interface. The XML database is used because of its flexibility with respect to any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then daily differential uptake of metadata from the XML database to the databases of several distribution websites is automatically processed using a converter defined by the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.

  12. Electronic stethoscope with frequency shaping and infrasonic recording capabilities.

    PubMed

    Gordon, E S; Lagerwerff, J M

    1976-03-01

    A small electronic stethoscope with variable frequency response characteristics has been developed for aerospace and research applications. The system includes a specially designed piezoelectric pickup and amplifier with an overall frequency response from 0.7 to 5,000 Hz (-3 dB points) and selective bass and treble boost or cut of up to 15 dB. A steep-slope high-pass filter can be switched in for ordinary clinical auscultation without overload distortion from strong infrasonic signal inputs. A commercial stethoscope-type headset, selected for best overall response, is used which can adequately handle up to 100 mW of audio power delivered from the amplifier. The active components of the amplifier consist of only four op-amp-type integrated circuits.
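
    The switchable steep-slope high-pass behaviour can be sketched digitally with SciPy; the 8th-order Butterworth design, the 20 Hz cutoff and the 8 kHz sampling rate below are illustrative assumptions, not the paper's actual analog circuit values.

      # Sketch of a steep-slope high-pass filter that suppresses infrasonic
      # components before auscultation; all numeric values are assumptions.
      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 8000.0        # assumed sampling rate (Hz)
      cutoff = 20.0      # assumed high-pass cutoff (Hz)
      b, a = butter(8, cutoff, btype="highpass", fs=fs)   # steep roll-off

      t = np.arange(0, 1.0, 1.0 / fs)
      # Test signal: 2 Hz infrasonic drift plus a 100 Hz heart-sound-like tone.
      x = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 100 * t)
      y = filtfilt(b, a, x)   # zero-phase filtering removes the drift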

  13. MSAViewer: interactive JavaScript visualization of multiple sequence alignments.

    PubMed

    Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E; Rost, Burkhard; Goldberg, Tatyana

    2016-11-15

    The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is 'web ready': written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh. © The Author 2016. Published by Oxford University Press.

  14. MSAViewer: interactive JavaScript visualization of multiple sequence alignments

    PubMed Central

    Yachdav, Guy; Wilzbach, Sebastian; Rauscher, Benedikt; Sheridan, Robert; Sillitoe, Ian; Procter, James; Lewis, Suzanna E.; Rost, Burkhard; Goldberg, Tatyana

    2016-01-01

    Summary: The MSAViewer is a quick and easy visualization and analysis JavaScript component for Multiple Sequence Alignment data of any size. Core features include interactive navigation through the alignment, application of popular color schemes, sorting, selecting and filtering. The MSAViewer is ‘web ready’: written entirely in JavaScript, compatible with modern web browsers and does not require any specialized software. The MSAViewer is part of the BioJS collection of components. Availability and Implementation: The MSAViewer is released as open source software under the Boost Software License 1.0. Documentation, source code and the viewer are available at http://msa.biojs.net/. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: msa@bio.sh PMID:27412096

  15. A fast recognition method of warhead target in boost phase using kinematic features

    NASA Astrophysics Data System (ADS)

    Chen, Jian; Xu, Shiyou; Tian, Biao; Wu, Jianhua; Chen, Zengping

    2015-12-01

    The number of radar targets increases from one to several when a ballistic missile separates its lower-stage rocket, casts off covers, or sheds other components. It is vital to quickly identify the warhead among these multiple targets for radar tracking. A fast recognition method for the warhead target is proposed to solve this problem using kinematic features, a fuzzy comprehensive evaluation method, and information fusion. In order to weaken the influence of radar measurement noise, an extended Kalman filter with a constant jerk model (CJEKF) is applied to obtain more accurate target motion information. Simulations show the validity of the algorithm and the effect of radar measurement precision on the algorithm's performance.
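
    The constant-jerk motion model at the core of the CJEKF can be sketched for a single axis as follows; the update interval, noise levels and position-only measurement are assumptions, and the paper's actual filter is an extended Kalman filter driven by radar measurements.

      # Single-axis sketch of a Kalman filter with a constant-jerk model;
      # state = [position, velocity, acceleration, jerk]. Values are assumed.
      import numpy as np

      T = 0.1   # assumed radar update interval (s)
      F = np.array([[1, T, T**2 / 2, T**3 / 6],
                    [0, 1, T,        T**2 / 2],
                    [0, 0, 1,        T       ],
                    [0, 0, 0,        1       ]])
      H = np.array([[1.0, 0.0, 0.0, 0.0]])   # position-only measurement
      Q = 1e-3 * np.eye(4)                   # assumed process noise
      R = np.array([[25.0]])                 # assumed measurement noise

      def kf_step(x, P, z):
          # Predict with the constant-jerk model, then update with z.
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
          x_new = x_pred + K @ (np.atleast_2d(z) - H @ x_pred)
          P_new = (np.eye(4) - K @ H) @ P_pred
          return x_new, P_new

      x, P = np.zeros((4, 1)), np.eye(4) * 100.0
      x, P = kf_step(x, P, z=1200.0)   # one predict/update cycle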

  16. Multiradar tracking for theater missile defense

    NASA Astrophysics Data System (ADS)

    Sviestins, Egils

    1995-09-01

    A prototype system for tracking tactical ballistic missiles using multiple radars has been developed. The tracking is based on measurement-level fusion ('true' multi-radar tracking). Strobes from passive sensors can also be used. We describe various features of the system, with some emphasis on the filtering technique. This is based on the Interacting Multiple Model framework, where the states are Free Flight, Drag, Boost, and Auxiliary. Measurement error modeling includes the dependence on signal-to-noise ratio; outliers and miscorrelations are handled in the same way. The launch point is calculated within one minute of the detection of the missile. The impact point, and its uncertainty region, is calculated continually by extrapolating the track state vector using the equations of planetary motion.

  17. A comparison of Data Driven models of solving the task of gender identification of author in Russian language texts for cases without and with the gender deception

    NASA Astrophysics Data System (ADS)

    Sboev, A.; Moloshnikov, I.; Gudovskikh, D.; Rybka, R.

    2017-12-01

    In this work we compare several data-driven approaches to the task of author gender identification for texts with or without gender imitation. The data corpus was specially gathered with crowdsourcing for this task. The best models are a convolutional neural network with morphological input data (F1-measure: 88% ± 3) for texts without imitation, and a gradient boosting model with a vector of character n-gram frequencies as input (F1-measure: 64% ± 3) for texts with gender imitation. A method to filter the crowdsourced corpus using a limited reference sample of texts, in order to increase the accuracy of the result, is also discussed.
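
    A minimal scikit-learn sketch of the second model (gradient boosting over character n-gram frequencies) is shown below; the toy texts, labels and hyper-parameters are placeholders, not the corpus or settings used in the paper.

      # Sketch: gender classification from character n-gram frequencies.
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.pipeline import make_pipeline

      texts = ["пример текста один", "пример текста два",
               "ещё один пример", "и последний пример"]   # placeholder corpus
      labels = ["f", "m", "f", "m"]                        # placeholder labels

      model = make_pipeline(
          CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
          GradientBoostingClassifier(n_estimators=200, max_depth=3),
      )
      model.fit(texts, labels)
      print(model.predict(["новый текст"]))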

  18. ADDJUST - An automated system for steering Centaur launch vehicles in measured winds

    NASA Technical Reports Server (NTRS)

    Swanson, D. C.

    1977-01-01

    ADDJUST (Automatic Determination and Dissemination of Just-Updated Steering Terms) is an automated computer and communication system designed to provide Atlas/Centaur and Titan/Centaur launch vehicles with booster-phase steering data on launch day. Wind soundings are first obtained, from which a smoothed wind velocity vs. altitude relationship is established. Designing for conditions at the end of the boost phase, with initial pitch and yaw maneuvers followed by zero total angle of attack through the filtered wind, establishes the required vehicle attitude as a function of altitude. Polynomial coefficients for pitch and yaw attitude vs. altitude are determined and are transmitted for validation and loading into the Centaur airborne computer. The system has enabled 14 consecutive launches without a flight wind delay.
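
    The final step, fitting polynomial attitude-versus-altitude coefficients, can be sketched with NumPy; the wind profile, ascent speed and fifth-order fit below are illustrative assumptions rather than ADDJUST's actual models.

      # Sketch: derive polynomial pitch-attitude coefficients vs. altitude
      # from a smoothed wind profile. All numbers are illustrative.
      import numpy as np

      altitude_m = np.linspace(0.0, 12000.0, 60)              # sounding levels
      wind_east = 10.0 + 15.0 * np.sin(altitude_m / 4000.0)   # smoothed wind (m/s)
      vehicle_speed = 80.0 + 0.04 * altitude_m                # assumed ascent speed

      # Assumed mapping: pitch that nulls the angle of attack through the wind.
      pitch_deg = 85.0 - np.degrees(np.arctan2(wind_east, vehicle_speed))

      coeffs = np.polyfit(altitude_m, pitch_deg, deg=5)   # pitch-vs-altitude fit
      pitch_poly = np.poly1d(coeffs)
      print(pitch_poly(5000.0))   # commanded pitch attitude at 5 km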

  19. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimensions of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest-neighbor classifier with a cosine-distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
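
    The final classification step can be sketched as a cosine-distance nearest-neighbour search; the Gabor feature vectors below are random placeholders standing in for the responses sampled on the regular grid over the normalized face image.

      # Sketch: nearest-neighbour expression classification, cosine distance.
      import numpy as np

      rng = np.random.default_rng(0)
      labels = ["anger", "disgust", "fear", "joy",
                "sadness", "surprise", "neutral"]
      gallery = rng.normal(size=(7, 640))   # placeholder prototype vectors
      probe = rng.normal(size=640)          # placeholder probe feature vector

      def cosine_distance(a, b):
          return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

      distances = [cosine_distance(probe, g) for g in gallery]
      print(labels[int(np.argmin(distances))])   # nearest prototype wins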

  20. pyOpenMS: a Python-based interface to the OpenMS mass-spectrometry algorithm library.

    PubMed

    Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars

    2014-01-01

    pyOpenMS is an open-source, Python-based interface to the C++ OpenMS library, providing facile access to a feature-rich, open-source algorithm library for MS-based proteomics analysis. It contains Python bindings that allow raw access to the data structures and algorithms implemented in OpenMS, specifically those for file access (mzXML, mzML, TraML, mzIdentML among others), basic signal processing (smoothing, filtering, de-isotoping, and peak-picking) and complex data analysis (including label-free, SILAC, iTRAQ, and SWATH analysis tools). pyOpenMS thus allows fast prototyping and efficient workflow development in a fully interactive manner (using the interactive Python interpreter) and is also ideally suited for researchers not proficient in C++. In addition, our code to wrap a complex C++ library is completely open-source, allowing other projects to create similar bindings with ease. The pyOpenMS framework is freely available at https://pypi.python.org/pypi/pyopenms while the autowrap tool to create Cython code automatically is available at https://pypi.python.org/pypi/autowrap (both released under the 3-clause BSD licence). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. The Biological Connection Markup Language: a SBGN-compliant format for visualization, filtering and analysis of biological pathways

    PubMed Central

    Rizzetto, Lisa; Guedez, Damariz Rivero; Donato, Michele; Romualdi, Chiara; Draghici, Sorin; Cavalieri, Duccio

    2011-01-01

    Motivation: Many models and analyses of signaling pathways have been proposed. However, none of them takes into account that a biological pathway is not a fixed system; instead, it depends on the organism, tissue and cell type as well as on physiological, pathological and experimental conditions. Results: The Biological Connection Markup Language (BCML) is a format to describe, annotate and visualize pathways. BCML is able to store multiple types of information, permitting a selective view of the pathway as it exists and/or behaves in specific organisms, tissues and cells. Furthermore, BCML can be automatically converted into data formats suitable for analysis and into a fully SBGN-compliant graphical representation, making it an important tool that can be used by both computational biologists and ‘wet lab’ scientists. Availability and implementation: The XML schema and the BCML software suite are freely available under the LGPL for download at http://bcml.dc-atlas.net. They are implemented in Java and supported on MS Windows, Linux and OS X. Contact: duccio.cavalieri@unifi.it; sorin@wayne.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21653523

  2. Science-Filters Study of Martian Rock Sees Hematite

    NASA Image and Video Library

    2017-11-01

    This false-color image demonstrates how use of special filters available on the Mast Camera (Mastcam) of NASA's Curiosity Mars rover can reveal the presence of certain minerals in target rocks. It is a composite of images taken through three "science" filters chosen for making hematite, an iron-oxide mineral, stand out as exaggerated purple. This target rock, called "Christmas Cove," lies in an area on Mars' "Vera Rubin Ridge" where Mastcam reconnaissance imaging (see PIA22065) with science filters suggested a patchy distribution of exposed hematite. Bright lines within the rocks are fractures filled with calcium sulfate minerals. Christmas Cove did not appear to contain much hematite until the rover team conducted an experiment on this target: Curiosity's wire-bristled brush, the Dust Removal Tool, scrubbed the rock, and a close-up with the Mars Hand Lens Imager (MAHLI) confirmed the brushing. The brushed area is about 2.5 inches (6 centimeters) across. The next day -- Sept. 17, 2017, on the mission's Sol 1819 -- this observation with Mastcam and others with the Chemistry and Camera (ChemCam) showed a strong hematite presence that had been subdued beneath the dust. The team is continuing to explore whether the patchiness in the reconnaissance imaging may result more from variations in the amount of dust cover than from variations in hematite content. Curiosity's Mastcam combines two cameras: one with a telephoto lens and the other with a wider-angle lens. Each camera has a filter wheel that can be rotated in front of the lens for a choice of eight different filters. One filter for each camera is clear to all visible light, for regular full-color photos, and another is specifically for viewing the Sun. Some of the other filters were selected to admit wavelengths of light that are useful for identifying iron minerals. Each of the filters used for this image admits light from a narrow band of wavelengths, extending only about 5 nanometers longer or shorter than the filter's central wavelength. Three observations are combined for this image, each through one of the filters centered at 751 nanometers (in the near-infrared part of the spectrum just beyond red light), 527 nanometers (green) and 445 nanometers (blue). Usual color photographs from digital cameras -- such as a Mastcam one of this same place (see PIA22067) -- also combine information from red, green and blue filtering, but the filters are in a microscopic grid in a "Bayer" filter array situated directly over the detector behind the lens, with wider bands of wavelengths. Mastcam's narrow-band filters used for this view help to increase spectral contrast, making blues bluer and reds redder, particularly with the processing used to boost contrast in each of the component images of this composite. Fine-grained hematite preferentially absorbs sunlight in the green portion of the spectrum, around 527 nanometers. That gives it the purple look, from a combination of red and blue light reflected by the hematite and reaching the camera through the other two filters. https://photojournal.jpl.nasa.gov/catalog/PIA22066

  3. Earth Science Datacasting v2.0

    NASA Technical Reports Server (NTRS)

    Bingham, Andrew W.; Deen, Robert G.; Hussey, Kevin J.; Stough, Timothy M.; McCleese, Sean W.; Toole, Nicholas T.

    2012-01-01

    The Datacasting software, which consists of a server and a client, has been developed as part of the Earth Science (ES) Datacasting project. The goal of ES Datacasting is to provide scientists the ability to automatically and continuously download Earth science data that meets a precise, predefined need, and then to instantaneously visualize it on a local computer. This is achieved by applying the concept of podcasting to deliver science data over the Internet using RSS (Really Simple Syndication) XML feeds. By extending the RSS specification, scientists can filter a feed and only download the files that are required for a particular application (for example, only files that contain information about a particular event, such as a hurricane or flood). The extension also provides the ability for the client to understand the format of the data and visualize the information locally. The server part enables a data provider to create and serve basic Datacasting (RSS-based) feeds. The user can subscribe to any number of feeds, view the information related to each item contained within a feed (including browse pre-made images), manually download files associated with items, and place these files in a local store. The client-server architecture enables users to: a) Subscribe and interpret multiple Datacasting feeds (same look and feel as a typical mail client), b) Maintain a list of all items within each feed, c) Enable filtering on the lists based on different metadata attributes contained within the feed (list will reference only data files of interest), d) Visualize the reference data and associated metadata, e) Download files referenced within the list, and f) Automatically download files as new items become available.
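
    Client-side filtering of such a feed can be sketched with Python's standard library; the feed URL, the <dc:event> extension element and its namespace are hypothetical, since the abstract does not spell out the actual Datacasting schema.

      # Sketch: download a Datacasting-style RSS feed and keep only items
      # tagged with a given event type. Element names are hypothetical.
      import urllib.request
      import xml.etree.ElementTree as ET

      FEED_URL = "https://example.org/datacast/feed.xml"   # placeholder feed
      NS = {"dc": "http://example.org/datacasting"}        # hypothetical namespace

      with urllib.request.urlopen(FEED_URL) as resp:
          root = ET.fromstring(resp.read())

      for item in root.iter("item"):
          event = item.findtext("dc:event", default="", namespaces=NS)
          if event == "hurricane":                         # filter criterion
              title = item.findtext("title", default="")
              enclosure = item.find("enclosure")           # data file reference
              url = enclosure.get("url") if enclosure is not None else ""
              print(title, url)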

  4. Learning-based image preprocessing for robust computer-aided detection

    NASA Astrophysics Data System (ADS)

    Raghupathi, Laks; Devarakota, Pandu R.; Wolf, Matthias

    2013-03-01

    Recent studies have shown that low dose computed tomography (LDCT) can be an effective screening tool to reduce lung cancer mortality. Computer-aided detection (CAD) would be a beneficial second reader for radiologists in such cases. Studies demonstrate that while iterative reconstruction (IR) improves LDCT diagnostic quality, it significantly degrades CAD performance (increased false positives) when applied directly. For improving CAD performance, solutions such as retraining with newer data or applying a standard preprocessing technique may not suffice, owing to the high prevalence of CT scanners and non-uniform acquisition protocols. Here, we present a learning-based framework that can adaptively transform a wide variety of input data to boost the performance of an existing CAD system. This enhances not only its robustness but also its applicability in clinical workflows. Our solution consists of automatically applying a suitable pre-processing filter to a given image based on its characteristics. This requires the preparation of ground truth (GT) for choosing an appropriate filter that results in improved CAD performance. Accordingly, we propose an efficient consolidation process with a novel metric. Using key anatomical landmarks, we then derive consistent feature descriptors for the classification scheme, which uses a priority mechanism to automatically choose an optimal preprocessing filter. We demonstrate CAD prototype performance improvement using hospital-scale datasets acquired from North America, Europe and Asia. Though we demonstrate our results for a lung nodule CAD, this scheme is straightforward to extend to other post-processing tools dedicated to other organs and modalities.

  5. Athermal and wavelength-trimmable photonic filters based on TiO₂-cladded amorphous-SOI.

    PubMed

    Lipka, Timo; Moldenhauer, Lennart; Müller, Jörg; Trieu, Hoc Khiem

    2015-07-27

    Large-scale integrated silicon photonic circuits suffer from two inevitable issues that boost the overall power consumption. First, fabrication imperfections, even on a sub-nm scale, result in spectral device non-uniformity that requires fine-tuning during device operation. Second, the photonic devices need to be actively corrected to compensate for thermal drifts. As a result, a significant amount of power is wasted if no athermal and wavelength-trimmable solutions are utilized. Consequently, in order to minimize the total power requirement of photonic circuits in a passive way, trimming methods are required to correct the device inhomogeneities from manufacturing, and athermal solutions are essential to oppose temperature fluctuations of the passive/active components during run-time. We present an approach to fabricate CMOS backend-compatible and athermal passive photonic filters that can be corrected for fabrication inhomogeneities by UV-trimming, based on low-loss amorphous-SOI waveguides with TiO2 cladding. The trimming of highly confined 10 μm ring resonators is proven over a free spectral range while retaining athermal operation. The athermal functionality of 2nd-order 5 μm add/drop microrings is demonstrated over 40°C, covering a broad wavelength interval of 60 nm.

  6. Predicting online ratings based on the opinion spreading process

    NASA Astrophysics Data System (ADS)

    He, Xing-Sheng; Zhou, Ming-Yang; Zhuo, Zhao; Fu, Zhong-Qian; Liu, Jian-Guo

    2015-10-01

    Predicting users' online ratings is a challenging issue and has drawn much attention. In this paper, we present a rating prediction method that combines the user opinion spreading process with the collaborative filtering algorithm, where user similarity is defined by measuring the amount of opinion a user transfers to another based on the primitive user-item rating matrix. The proposed method produces a more precise rating prediction for each unrated user-item pair. In addition, we introduce a tunable parameter λ to regulate the preferential diffusion relevant to the degree of both the opinion sender and the receiver. The numerical results for the Movielens and Netflix data sets show that this algorithm has better accuracy than the standard user-based collaborative filtering algorithm using Cosine and Pearson correlation, without increasing computational complexity. By tuning λ, our method can further boost the prediction accuracy when using Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as measurements. In the optimal cases, on the Movielens and Netflix data sets, the algorithmic accuracy (MAE and RMSE) is improved by 11.26% and 8.84%, and by 13.49% and 10.52%, respectively, compared to the item average method.
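
    The flavour of the degree-dependent weighting and the two accuracy measures can be sketched as follows; the toy rating matrix and the simplified similarity are illustrative stand-ins, not the exact opinion-spreading rule of the paper.

      # Sketch: degree-weighted user similarity with a tunable lambda,
      # plus MAE/RMSE. Simplified stand-in for the opinion-spreading rule.
      import numpy as np

      R = np.array([[5, 3, 0, 1],    # toy user-item ratings (0 = unrated)
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)
      lam = 0.5                      # tunable diffusion parameter (assumed)
      A = (R > 0).astype(float)
      k = A.sum(axis=1)              # user degrees

      sim = (A @ A.T) / (np.outer(k ** lam, k ** (1.0 - lam)) + 1e-12)
      np.fill_diagonal(sim, 0.0)

      # Predict user 0's rating of item 2 as a similarity-weighted average.
      raters = R[:, 2] > 0
      pred = sim[0, raters] @ R[raters, 2] / (sim[0, raters].sum() + 1e-12)

      truth, est = np.array([4.0]), np.array([pred])
      mae = np.mean(np.abs(truth - est))
      rmse = np.sqrt(np.mean((truth - est) ** 2))
      print(pred, mae, rmse)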

  7. An object-oriented approach for harmonization of multimedia markup languages

    NASA Astrophysics Data System (ADS)

    Chen, Yih-Feng; Kuo, May-Chen; Sun, Xiaoming; Kuo, C.-C. Jay

    2003-12-01

    An object-oriented methodology is proposed to harmonize several different markup languages in this research. First, we adopt the Unified Modelling Language (UML) as the data model to formalize the concept and the process of the harmonization process between the eXtensible Markup Language (XML) applications. Then, we design the Harmonization eXtensible Markup Language (HXML) based on the data model and formalize the transformation between the Document Type Definitions (DTDs) of the original XML applications and HXML. The transformation between instances is also discussed. We use the harmonization of SMIL and X3D as an example to demonstrate the proposed methodology. This methodology can be generalized to various application domains.

  8. Managing and Querying Image Annotation and Markup in XML.

    PubMed

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach and on supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid.

  9. Development of clinical contents model markup language for electronic health records.

    PubMed

    Yun, Ji-Hyun; Ahn, Sun-Ju; Kim, Yoon

    2012-09-01

    To develop dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Based on analysis of the structure and characteristics of CCM in the clinical domain, we designed extensible markup language (XML) based CCM markup language (CCML) schema manually. CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied for existing electronic health record systems.

  10. Standardized Semantic Markup for Reference Terminologies, Thesauri and Coding Systems: Benefits for distributed E-Health Applications.

    PubMed

    Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim

    2005-01-01

    With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the extensible Markup Language (XML) that facilitates the integration in current information systems or coding software taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.

  11. Managing and Querying Image Annotation and Markup in XML

    PubMed Central

    Wang, Fusheng; Pan, Tony; Sharma, Ashish; Saltz, Joel

    2010-01-01

    Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach and on supporting complex image and annotation queries through a native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid. PMID:21218167

  12. FAIMS Mobile: Flexible, open-source software for field research

    NASA Astrophysics Data System (ADS)

    Ballsun-Stanton, Brian; Ross, Shawn A.; Sobotkova, Adela; Crook, Penny

    2018-01-01

    FAIMS Mobile is a native Android application supported by an Ubuntu server facilitating human-mediated field research across disciplines. It consists of 'core' Java and Ruby software providing a platform for data capture, which can be deeply customised using 'definition packets' consisting of XML documents (data schema and UI) and Beanshell scripts (automation). Definition packets can also be generated using an XML-based domain-specific language, making customisation easier. FAIMS Mobile includes features allowing rich and efficient data capture tailored to the needs of fieldwork. It also promotes synthetic research and improves transparency and reproducibility through the production of comprehensive datasets that can be mapped to vocabularies or ontologies as they are created.

  13. TOMML: A Rule Language for Structured Data

    NASA Astrophysics Data System (ADS)

    Cirstea, Horatiu; Moreau, Pierre-Etienne; Reilles, Antoine

    We present the TOM language, which extends Java with the purpose of providing high-level constructs inspired by the rewriting community. TOM thus bridges the gap between a general purpose language and high-level specifications based on rewriting. This approach was motivated by the promotion of rule-based techniques and their integration in large-scale applications. Powerful matching capabilities along with a rich strategy language are among TOM's strong features that make it easy to use and competitive with respect to other rule-based languages. TOM is thus a natural choice for querying and transforming structured data and in particular XML documents [1]. We present here its main XML-oriented features and illustrate their use on several examples.

  14. WITH: a system to write clinical trials using XML and RDBMS.

    PubMed Central

    Fazi, Paola; Luzi, Daniela; Manco, Mariarosaria; Ricci, Fabrizio L.; Toffoli, Giovanni; Vignetti, Marco

    2002-01-01

    The paper illustrates the WITH system (Write on Internet clinical Trials in Haematology), which supports the writing of a clinical trial (CT) document. The requirements of this system were defined by analysing the writing process of a CT and then modelling the content of its sections together with their logical and temporal relationships. The WITH system allows: a) editing the document text; b) re-using the text; and c) facilitating cooperation and collaborative writing. It is based on the XML markup language and on an RDBMS. This choice guarantees: a) process standardisation; b) process management; c) efficient delivery of information-based tasks; and d) an explicit focus on process design. PMID:12463823

  15. An Extensible Information Grid for Risk Management

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Bell, David G.

    2003-01-01

    This paper describes recent work on developing an extensible information grid for risk management at NASA - a RISK INFORMATION GRID. This grid is being developed by integrating information grid technology with risk management processes for a variety of risk related applications. To date, RISK GRID applications are being developed for three main NASA processes: risk management - a closed-loop iterative process for explicit risk management, program/project management - a proactive process that includes risk management, and mishap management - a feedback loop for learning from historical risks that escaped other processes. This is enabled through an architecture involving an extensible database, structuring information with XML, schemaless mapping of XML, and secure server-mediated communication using standard protocols.

  16. MASCOT HTML and XML parser: an implementation of a novel object model for protein identification data.

    PubMed

    Yang, Chunguang G; Granite, Stephen J; Van Eyk, Jennifer E; Winslow, Raimond L

    2006-11-01

    Protein identification using MS is an important technique in proteomics as well as a major generator of proteomics data. We have designed the protein identification data object model (PDOM) and developed a parser based on this model to facilitate the analysis and storage of these data. The parser works with HTML or XML files saved or exported from MASCOT MS/MS ions search in peptide summary report or MASCOT PMF search in protein summary report. The program creates PDOM objects, eliminates redundancy in the input file, and has the capability to output any PDOM object to a relational database. This program facilitates additional analysis of MASCOT search results and aids the storage of protein identification information. The implementation is extensible and can serve as a template to develop parsers for other search engines. The parser can be used as a stand-alone application or can be driven by other Java programs. It is currently being used as the front end for a system that loads HTML and XML result files of MASCOT searches into a relational database. The source code is freely available at http://www.ccbm.jhu.edu and the program uses only free and open-source Java libraries.

  17. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML.

    PubMed

    Zimmerman, Stefan L; Kim, Woojin; Boonn, William W

    2011-01-01

    Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.

  18. The Graphical Representation of the Digital Astronaut Physiology Backbone

    NASA Technical Reports Server (NTRS)

    Briers, Demarcus

    2010-01-01

    This report summarizes my internship project with the NASA Digital Astronaut Project to analyze the Digital Astronaut (DA) physiology backbone model. The Digital Astronaut Project (DAP) applies integrated physiology models to support space biomedical operations, and to assist NASA researchers in closing knowledge gaps related to human physiologic responses to space flight. The DA physiology backbone is a set of integrated physiological equations and functions that model the interacting systems of the human body. The current release of the model is HumMod (Human Model) version 1.5 and was developed over forty years at the University of Mississippi Medical Center (UMMC). The physiology equations and functions are scripted in an XML schema specifically designed for physiology modeling by Dr. Thomas G. Coleman at UMMC. Currently it is difficult to examine the physiology backbone without being knowledgeable of the XML schema. While investigating and documenting the tags and algorithms used in the XML schema, I proposed a standard methodology for a graphical representation. This standard methodology may be used to transcribe graphical representations from the DA physiology backbone. In turn, the graphical representations can allow examination of the physiological functions and equations without the need to be familiar with the computer programming languages or markup languages used by DA modeling software.

  19. Context- and Template-Based Compression for Efficient Management of Data Models in Resource-Constrained Systems.

    PubMed

    Macho, Jorge Berzosa; Montón, Luis Gardeazabal; Rodriguez, Roberto Cortiñas

    2017-08-01

    The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that might suppose an overload for the resource-constrained devices that are typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted to resource-constrained devices, which allows reducing the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C), and it is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices.

  20. Developing and integrating an adverse drug reaction reporting system with the hospital information system.

    PubMed

    Kataoka, Satoshi; Ohe, Kazuhiko; Mochizuki, Mayumi; Ueda, Shiro

    2002-01-01

    We have developed an adverse drug reaction (ADR) reporting system and integrated it with the Hospital Information System (HIS) of the University of Tokyo Hospital. Since this system is written in Java, it is portable without recompilation to any operating system on which a Java virtual machine runs. In this system, we implemented an automatic data-filling function using XML (eXtensible Markup Language) files generated by the HIS. This new specification should decrease the time needed for physicians and pharmacists to fill in spontaneous ADR reports. By clicking a button, the report is sent to the text database through Simple Mail Transfer Protocol (SMTP) electronic mail. The destination of the report mail can be changed arbitrarily by administrators, which gives the system more flexibility in practical operation. Although we tried our best to use the SGML-based (Standard Generalized Markup Language) ICH M2 guideline to follow the global standard for case reports, we eventually adopted XML as the output report format. This is because we found some problems in handling two-byte characters with the ICH guideline, and XML has many useful features. According to our pilot survey conducted at the University of Tokyo Hospital, many physicians answered that our idea of integrating the ADR reporting system with the HIS would increase the number of ADR reports.
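
    The report-by-mail step can be sketched with Python's standard library; the XML element names, addresses and mail server below are hypothetical placeholders, not the fields of the actual report format.

      # Sketch: assemble an XML ADR report and send it over SMTP.
      # Element names, addresses and server are hypothetical placeholders.
      import smtplib
      import xml.etree.ElementTree as ET
      from email.mime.text import MIMEText

      report = ET.Element("adrReport")
      ET.SubElement(report, "patientId").text = "P-0001"
      ET.SubElement(report, "drug").text = "example-drug"
      ET.SubElement(report, "reaction").text = "rash"
      xml_body = ET.tostring(report, encoding="unicode")

      msg = MIMEText(xml_body, "xml", "utf-8")
      msg["Subject"] = "ADR report"
      msg["From"] = "pharmacist@example-hospital.jp"
      msg["To"] = "adr-database@example-hospital.jp"   # configurable destination

      with smtplib.SMTP("mail.example-hospital.jp") as smtp:
          smtp.send_message(msg)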

  1. A New Publicly Available Chemical Query Language, CSRML ...

    EPA Pesticide Factsheets

    A new XML-based query language, CSRML, has been developed for representing chemical substructures, molecules, reaction rules, and reactions. CSRML queries are capable of integrating additional forms of information beyond the simple substructure (e.g., SMARTS) or reaction transformation (e.g., SMIRKS, reaction SMILES) queries currently in use. Chemotypes, a term used to represent advanced CSRML queries for repeated application, can be encoded not only with connectivity and topology, but also with properties of atoms, bonds, electronic systems, or molecules. The CSRML language has been developed in parallel with a public set of chemotypes, i.e., the ToxPrint chemotypes, which are designed to provide excellent coverage of environmental, regulatory and commercial-use chemical space, as well as to represent features and frameworks believed to be especially relevant to toxicity concerns. A software application, ChemoTyper, has also been developed and made publicly available to enable chemotype searching and fingerprinting against a target structure set. The public ChemoTyper houses the ToxPrint chemotype CSRML dictionary, as well as a reference implementation, so that the query specifications may be adopted by other chemical structure knowledge systems. The full specifications of the XML standard used in CSRML-based chemotypes are publicly available to facilitate and encourage the exchange of structural knowledge. The paper details specifications for this new XML-based query language.

  2. Distribution of immunodeficiency fact files with XML--from Web to WAP.

    PubMed

    Väliaho, Jouni; Riikonen, Pentti; Vihinen, Mauno

    2005-06-26

    Although biomedical information is growing rapidly, it is difficult to find and retrieve validated data, especially for rare hereditary diseases. There is an increased need for services capable of integrating and validating information as well as providing it in a logically organized structure. An XML-based language enables the creation of open source databases for storage, maintenance and delivery across different platforms. Here we present a new data model called the fact file and an XML-based specification, Inherited Disease Markup Language (IDML), that were developed to facilitate disease information integration, storage and exchange. The data model was applied to primary immunodeficiencies, but it can be used for any hereditary disease. Fact files integrate biomedical, genetic and clinical information related to hereditary diseases. IDML and fact files were used to build a comprehensive Web- and WAP-accessible knowledge base, ImmunoDeficiency Resource (IDR), available at http://bioinf.uta.fi/idr/. A fact file is a user-oriented interface that serves as a starting point for exploring information on hereditary diseases. The IDML enables the seamless integration and presentation of genetic and disease information resources on the Internet. IDML can be used to build information services for all kinds of inherited diseases. The open source specification and related programs are available at http://bioinf.uta.fi/idml/.

  3. Context- and Template-Based Compression for Efficient Management of Data Models in Resource-Constrained Systems

    PubMed Central

    Montón, Luis Gardeazabal

    2017-01-01

    The Cyber Physical Systems (CPS) paradigm is based on the deployment of interconnected heterogeneous devices and systems, so interoperability is at the heart of any CPS architecture design. In this sense, the adoption of standard and generic data formats for data representation and communication, e.g., XML or JSON, effectively addresses the interoperability problem among heterogeneous systems. Nevertheless, the verbosity of those standard data formats usually demands system resources that might suppose an overload for the resource-constrained devices that are typically deployed in CPS. In this work we present Context- and Template-based Compression (CTC), a data compression approach targeted to resource-constrained devices, which allows reducing the resources needed to transmit, store and process data models. Additionally, we provide a benchmark evaluation and comparison with current implementations of the Efficient XML Interchange (EXI) processor, which is promoted by the World Wide Web Consortium (W3C), and it is the most prominent XML compression mechanism nowadays. Interestingly, the results from the evaluation show that CTC outperforms EXI implementations in terms of memory usage and speed, keeping similar compression rates. As a conclusion, CTC is shown to be a good candidate for managing standard data model representation formats in CPS composed of resource-constrained devices. PMID:28763013

  4. Using the eXtensible Markup Language (XML) in a regional electronic patient record for patients with malignant diseases.

    PubMed

    Wolff, A C; Mludek, V; van der Haak, M; Bork, W; Bülzebruck, H; Drings, P; Schmücker, P; Wannenmacher, M; Haux, R

    2001-01-01

    Communication between different institutions that are responsible for the treatment of the same patient is of outstanding significance, especially in the field of tumor diseases. Regional electronic patient records could support the co-operation of different institutions by providing access to all necessary information, whether it belongs to their own institution or to a partner. The Department of Medical Informatics, University of Heidelberg is carrying out a project in co-operation with the Thoraxclinic-Heidelberg and the Department of Clinical Radiology, University of Heidelberg, with the following goals: to define an architectural concept for interlinking the electronic patient records of the two clinical institutions to build a common virtual electronic patient record, and to carry out an exemplary implementation; to examine the composition, structure and content of medical documents for tumor patients with the aim of defining an XML-based markup language allowing summarizing overviews and suitable granularities; and to integrate clinical practice guidelines and other external knowledge with the electronic patient record using XML technologies to support the physician in the daily decision process. This paper shows how a regional electronic patient record could be built at an architectural level and describes elementary steps towards a content-oriented structuring of medical records.

  5. A structured interface to the object-oriented genomics unified schema for XML-formatted data.

    PubMed

    Clark, Terry; Jurek, Josef; Kettler, Gregory; Preuss, Daphe

    2005-01-01

    Data management systems are fast becoming required components in many biology laboratories as the role of computer-based information grows. Although the need for data management systems is on the rise, their inherent complexities can deter the full and routine use of their computational capabilities. The significant undertaking to implement a capable production system can be reduced in part by adapting an established data management system. In such a way, we are leveraging the Genomics Unified Schema (GUS) developed at the Computational Biology and Informatics Laboratory at the University of Pennsylvania as a foundation for managing and analysing DNA sequence data in centromere research projects around Arabidopsis thaliana and related species. Because GUS provides a core schema that includes support for genome sequences, mRNA and its expression, and annotated chromosomes, it is ideal for synthesising a variety of parameters to analyse these repetitive and highly dynamic portions of the genome. Despite this, production-strength data management frameworks are complex, requiring dedicated efforts to adapt and maintain. The work reported in this article addresses one component of such an effort, namely the pivotal task of marshalling data from various sources into GUS. In order to harness GUS for our project, and motivated by efficiency needs, we developed a structured framework for transferring data into GUS from outside sources. This technology is embodied in a GUS object-layer processor, XMLGUS. XMLGUS facilitates incorporating data into GUS by (i) formulating an XML interface that includes relational database key constraint definitions, (ii) regularising traversal through that XML, (iii) realising automatic processing of the XML with database key constraints and (iv) allowing for special processing of input data within the framework for automated processing. The application of XMLGUS to production pipeline processing for a sequencing project and inputting the Arabidopsis genome into GUS is discussed. XMLGUS is available from the Flora website (http://flora.ittc.ku.edu/).

  6. TU-CD-304-11: Veritas 2.0: A Cloud-Based Tool to Facilitate Research and Innovation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mishra, P; Patankar, A; Etmektzoglou, A

    Purpose: We introduce Veritas 2.0, a cloud-based, non-clinical research portal, to facilitate the translation of radiotherapy research ideas into new delivery techniques. The ecosystem of research tools includes web apps for a research beam builder for TrueBeam Developer Mode, an image reader for compressed and uncompressed XIM files, and a trajectory-log-file-based QA/beam delivery analyzer. Methods: The research beam builder can generate TrueBeam-readable XML files either from scratch or from pre-existing DICOM-RT plans. A DICOM-RT plan is first converted to XML format, and then the researcher can interactively modify or add control points to it. The delivered beam can be verified by reading the generated images and analyzing the trajectory log files. The image reader can read both uncompressed and HND-compressed XIM images. The trajectory log analyzer lets researchers plot expected vs. actual values and deviations among 30 mechanical axes. The analyzer gives an animated view of MLC patterns for the beam delivery. Veritas 2.0 is freely available, and its advantages over standalone software are: i) no software installation or maintenance needed, ii) easy accessibility across all devices, iii) seamless upgrades, and iv) OS independence. Veritas is written using open-source tools such as Twitter Bootstrap, jQuery, Flask, and Python-based modules. Results: In the first experiment, an anonymized 7-beam DICOM-RT IMRT plan was converted to an XML beam containing 1400 control points. kV and MV imaging points were inserted into this XML beam. In another experiment, a binary log file was analyzed to compare actual vs. expected values and deviations among axes. Conclusions: Veritas 2.0 is a public cloud-based web app that hosts a pool of research tools for facilitating research from conceptualization to verification. It is aimed at providing a platform for facilitating research and collaboration. The author is a full-time employee of Varian Medical Systems, Palo Alto.

  7. The relation between global migration and trade networks

    NASA Astrophysics Data System (ADS)

    Sgrignoli, Paolo; Metulini, Rodolfo; Schiavo, Stefano; Riccaboni, Massimo

    2015-01-01

    In this paper we develop a methodology to analyze and compare multiple global networks, focusing our analysis on the relation between human migration and trade. First, we identify the subset of products for which the presence of a community of migrants significantly increases trade intensity; to assure comparability across networks, we apply a hypergeometric filter that lets us identify those links whose intensity is significantly higher than expected. Next, proposing a new way to define country neighbors based on the most intense links in the trade network, we use spatial econometrics techniques to measure the effect of migration on international trade while controlling for network interdependences. Overall, we find that migration significantly boosts trade across countries, and we are able to identify product categories for which this effect is particularly strong.
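
    For a single link, the hypergeometric filter amounts to a one-sided tail test; a minimal SciPy sketch is shown below with purely illustrative counts.

      # Sketch: is the observed volume on one link significantly larger than
      # the hypergeometric null expectation? All counts are illustrative.
      from scipy.stats import hypergeom

      total_volume = 100000    # total traded units in the network (assumed)
      out_strength = 4000      # exporter's total outgoing volume (assumed)
      in_strength = 2500       # importer's total incoming volume (assumed)
      observed = 250           # observed volume on this link (assumed)

      # P(X >= observed) under the hypergeometric null.
      p_value = hypergeom.sf(observed - 1, total_volume, out_strength, in_strength)
      keep_link = p_value < 0.01   # retain only significantly intense links
      print(p_value, keep_link)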

  8. 3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment

    NASA Astrophysics Data System (ADS)

    Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil

    In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception capability. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources such as white noise, sound effects, speech, and music samples. The tests show that the degree of perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
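
    The spectral-notch cue can be sketched with a standard IIR notch filter; the 8 kHz centre frequency and Q of 10 are assumptions, not the values used in the paper.

      # Sketch: attenuate a narrow band around an assumed elevation-cue notch.
      import numpy as np
      from scipy.signal import iirnotch, lfilter

      fs = 44100.0
      notch_hz = 8000.0                  # assumed pinna-related notch frequency
      b, a = iirnotch(notch_hz, Q=10.0, fs=fs)

      t = np.arange(0, 0.05, 1.0 / fs)
      x = np.random.default_rng(0).normal(size=t.size)   # white-noise source
      y = lfilter(b, a, x)                               # notch-filtered output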

  9. Boosting TAG Accumulation with Improved Biodiesel Production from Novel Oleaginous Microalgae Scenedesmus sp. IITRIND2 Utilizing Waste Sugarcane Bagasse Aqueous Extract (SBAE).

    PubMed

    Arora, Neha; Patel, Alok; Pruthi, Parul A; Pruthi, Vikas

    2016-09-01

    This investigation utilized sugarcane bagasse aqueous extract (SBAE), a nontoxic, cost-effective medium, to boost triacylglycerol (TAG) accumulation in the novel fresh water microalgal isolate Scenedesmus sp. IITRIND2. Maximum lipid productivity of 112 ± 5.2 mg/L/day was recorded in microalgae grown in SBAE compared to modified BBM (26 ± 3 %). The carotenoid to chlorophyll ratio was 12.5 ± 2 % higher than in the photoautotrophic control, indicating an increase in photosystem II activity and thereby an increased growth rate. The fatty acid methyl ester (FAME) profile revealed the presence of C14:0 (2.29 %), C16:0 (15.99 %), C16:2 (4.05 %), C18:0 (3.41 %), C18:1 (41.55 %), C18:2 (12.41 %), and C20:0 (1.21 %) as the major fatty acids. Cetane number (64.03), cold filter plugging point (-1.05 °C), and oxidative stability (12.03 h) indicated quality biodiesel abiding by the ASTM D6751 and EN 14214 fuel standards. The results consolidate the candidature of the novel freshwater microalgal isolate Scenedesmus sp. IITRIND2, cultivated in SBAE, an aqueous extract made from copious agricultural waste sugarcane bagasse, for increasing lipid productivity, and it could further be utilized for cost-effective biodiesel production.

  10. Power inversion design for ocean wave energy harvesting

    NASA Astrophysics Data System (ADS)

    Talebani, Anwar N.

    The need for energy sources is increasing day by day because of several factors, such as oil depletion and global climate change due to higher levels of CO2, so the exploration of various renewable energy sources is a very promising area of study. The available ocean waves can be utilized as a free source of energy, as water covers 70% of the Earth's surface. This thesis presents ocean wave energy as a source of renewable energy. It addresses the problem of designing an efficient power electronics system to deliver 5 kW from the induction generator to the grid with as few losses and harmonics as possible, and to control the current fed to the grid, so as to successfully harvest ocean wave energy. We design an AC-DC full-bridge rectifier converter and a DC-DC boost converter to convert the harvested wave energy from AC to regulated DC. In order to increase the design efficiency, we need to increase the power factor from (0.5-0.6) to 1. This is accomplished by designing the boost converter with power factor correction in continuous conduction mode, with an RC circuit as the input to the power-factor-correction stage. This design results in a phase shift between the input current and voltage of the full-bridge rectifier, generating a small amount of reactive power. The reactive power is injected into the induction generator to maintain its functionality by generating a magnetic field in its stator. Next, we design a single-phase pulse-width-modulated full-bridge voltage-source DC-AC grid-tied inverter to deliver the regulated DC wave energy to the AC grid. The designed inverter is modulated by an inner current loop to control the current injected into the grid with minimal filter components, maintaining power quality at the grid. The simulation results show that our design successfully controls the current level fed to the grid. It is noteworthy that the simulated efficiency is higher than the calculated one, since we used an ideal switch in the simulated circuit.
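
    In continuous conduction mode an ideal boost stage obeys Vout = Vin / (1 - D); the small calculation below uses assumed input and DC-link voltages together with the 5 kW target, not the thesis's actual design values.

      # Ideal boost-converter relations in continuous conduction mode.
      # Input/output voltages are illustrative assumptions.
      v_in = 150.0     # rectified input voltage (V), assumed
      v_out = 400.0    # regulated DC-link voltage (V), assumed
      p_out = 5000.0   # power delivered to the grid-tied inverter (W)

      duty = 1.0 - v_in / v_out   # D = 1 - Vin/Vout for an ideal boost stage
      i_in = p_out / v_in         # average input current for a lossless stage
      print(f"duty cycle = {duty:.2f}, input current = {i_in:.1f} A")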

  11. An XML-Based Manipulation and Query Language for Rule-Based Information

    NASA Astrophysics Data System (ADS)

    Mansour, Essam; Höpfner, Hagen

    Rules are utilized to assist in the monitoring process that is required in activities such as disease management and customer relationship management. These rules are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules; few focus on managing these rules as one object that has a management life-cycle. This paper presents our manipulation and query language, which was developed to facilitate the maintenance of this object during its life-cycle and to query the information contained in it. The language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.

  12. Networking observers and observatories with remote telescope markup language

    NASA Astrophysics Data System (ADS)

    Hessman, Frederic V.; Tuparev, Georg; Allan, Alasdair

    2006-06-01

    Remote Telescope Markup Language (RTML) is an XML-based protocol for the transport of the high-level description of a set of observations to be carried out on a remote, robotic or service telescope. We describe how RTML is being used in a wide variety of contexts: the transport of service and robotic observing requests in the Hands-On Universe TM, ACP, eSTAR, and MONET networks; how RTML is easily combined with other XML protocols for more localized control of telescopes; RTML as a secondary observation report format for the IVOA's VOEvent protocol; the input format for a general-purpose observation simulator; and the observatory-independent means for carrying out request transactions for the international Heterogeneous Telescope Network (HTN).

  13. Mapping DICOM to OpenDocument format

    NASA Astrophysics Data System (ADS)

    Yu, Cong; Yao, Zhihong

    2009-02-01

    In order to enhance the readability, extensibility and sharing of DICOM files, we have introduced XML into the DICOM file system (SPIE Volume 5748)[1] and a multilayer tree structure into DICOM (SPIE Volume 6145)[2]. In this paper, we propose mapping DICOM to ODF (OpenDocument Format), as it is also based on XML. As a result, the new format realizes the separation of content (including text content and images) from display style. Meanwhile, since OpenDocument files take the form of a ZIP-compressed archive, the new kind of DICOM files can benefit from ZIP's lossless compression to reduce file size. Moreover, this open format can also guarantee long-term access to data without legal or technical barriers, making medical images accessible to various fields.
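
    Because an OpenDocument file is a ZIP package with a mimetype entry and a content.xml stream, the packaging side can be sketched with Python's zipfile module; the content below is a placeholder, not a real DICOM mapping.

      # Sketch: write an ODF-style ZIP package holding an XML content stream.
      import zipfile

      content_xml = "<?xml version='1.0' encoding='UTF-8'?><document/>"  # placeholder

      with zipfile.ZipFile("mapped.odt", "w") as pkg:
          # ODF expects the mimetype entry first and stored uncompressed.
          pkg.writestr("mimetype", "application/vnd.oasis.opendocument.text",
                       compress_type=zipfile.ZIP_STORED)
          pkg.writestr("content.xml", content_xml,
                       compress_type=zipfile.ZIP_DEFLATED)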

  14. On Logic and Standards for Structuring Documents

    NASA Astrophysics Data System (ADS)

    Eyers, David M.; Jones, Andrew J. I.; Kimbrough, Steven O.

    The advent of XML has been widely seized upon as an opportunity to develop document representation standards that lend themselves to automated processing. This is a welcome development and much good has come of it. That said, present standardization efforts may be criticized on a number of counts. We explore two issues associated with document XML standardization efforts. We label them (i) the dynamic point and (ii) the logical point. Our dynamic point is that in many cases experience has shown that the search for a final, or even reasonably permanent, document representation standard is futile. The case is especially strong for electronic data interchange (EDI). Our logical point is that formalization into symbolic logic is materially helpful for understanding and designing dynamic document standards.

  15. Model tool to describe chemical structures in XML format utilizing structural fragments and chemical ontology.

    PubMed

    Sankar, Punnaivanam; Alain, Krief; Aghila, Gnanasekaran

    2010-05-24

    We have developed a model structure-editing tool, ChemEd, programmed in Java, which allows drawing chemical structures on a graphical user interface (GUI) by selecting appropriate structural fragments defined in a fragment library. The terms representing the structural fragments are organized in a fragment ontology to provide conceptual support. ChemEd describes the chemical structure in an XML document (ChemFul) with rich semantics, explicitly encoding the details of the chemical bonding, the hybridization status, and the electron environment around each atom. The document can be further processed through suitable algorithms and with the support of external chemical ontologies to generate understandable reports about the functional groups present in the structure and their specific environment.

  16. Development of Clinical Contents Model Markup Language for Electronic Health Records

    PubMed Central

    Yun, Ji-Hyun; Kim, Yoon

    2012-01-01

    Objectives: To develop a dedicated markup language for clinical contents models (CCM) to facilitate the active use of CCM in electronic health record systems. Methods: Based on an analysis of the structure and characteristics of CCM in the clinical domain, we manually designed an extensible markup language (XML)-based CCM markup language (CCML) schema. Results: CCML faithfully reflects CCM in both the syntactic and semantic aspects. As this language is based on XML, it can be expressed and processed in computer systems and can be used in a technology-neutral way. Conclusions: CCML has the following strengths: it is machine-readable and highly human-readable, it does not require a dedicated parser, and it can be applied to existing electronic health record systems. PMID:23115739

  17. CISN Display Progress to Date - Reliable Delivery of Real-Time Earthquake Information, and ShakeMap to Critical End Users

    NASA Astrophysics Data System (ADS)

    Rico, H.; Hauksson, E.; Thomas, E.; Friberg, P.; Frechette, K.; Given, D.

    2003-12-01

    The California Integrated Seismic Network (CISN) has collaborated to develop a next-generation earthquake notification system that is nearing its first operations-ready release. The CISN Display actively alerts users of seismic data, and vital earthquake hazards information following a significant event. It will primarily replace the Caltech/USGS Broadcast of Earthquakes (CUBE) and Rapid Earthquake Data Integration (REDI) Display as the principal means of delivering geographical seismic data to emergency operations centers, utility companies and media outlets. A subsequent goal is to provide automated access to the many Web products produced by regional seismic networks after an earthquake. Another aim is to create a highly configurable client, allowing user organizations to overlay infrastructure data critical to their roles as first-responders, or lifeline operators. And the final goal is to integrate these requirements into a package offering several layers of reliability to ensure delivery of services. Central to the CISN Display's role as a gateway to Web-based earthquake products is its comprehensive XML-messaging schema. The message model uses many of the same attributes in the CUBE format, but extends the old standard by provisioning additional elements for products currently available, and others yet to be considered. The client consumes these XML messages, sorts them through a resident Quake Data Merge filter, and posts updates that also include hyperlinks associated with specific event IDs on the display map. Earthquake products available for delivery to the CISN Display are ShakeMap, focal mechanisms, waveform data, felt reports, aftershock forecasts and earthquake commentaries. By design the XML-message schema can evolve as products and information needs change, without breaking existing applications that rely on it. The latest version of the CISN Display can also automatically download ShakeMaps and display shaking intensity within the GIS system. This can give emergency response managers the information needed to allocate limited personnel and resources after a major event. The shaking intensity shape files may be downloaded out-of-band to the client computer, and with the GIS mapping tool, users can plot organizational assets on the CISN Display map and analyze their inventory against potentially damaged areas. Lastly, in support of a robust design is a well-established and reliable set of communication protocols. To achieve a stateful server connection and messaging via a signaling channel, the application uses the Common Object Request Broker Architecture (CORBA). The client responds to keep-alive signals from the server, and alerts users of changes in the connection status. This full-featured messaging service will allow the system to trigger a reconnect strategy whenever the client detects a loss of connectivity. This sets the CISN Display apart from its predecessors, which do not provide a failover mechanism or a connection state. Thus by building on past programming successes and advances in proven Internet technologies, the CISN Display will augment the emergency responder's ability to make informed decisions following a potentially damaging earthquake.

  18. Computer simulations of optimum boost and buck-boost converters

    NASA Technical Reports Server (NTRS)

    Rahman, S.

    1982-01-01

    The development of mathematical models suitable for minimum weight boost and buck-boost converter designs is presented. The facility of an augmented Lagrangian (ALAG) multiplier-based nonlinear programming technique is demonstrated for minimum weight design optimizations of boost and buck-boost power converters. ALAG-based computer simulation results for those two minimum weight designs are discussed. Certain important features of ALAG are presented in the framework of a comprehensive design example for boost and buck-boost power converter design optimization. The study provides refreshing design insight into power converters and presents such information as weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.
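
    For context, augmented Lagrangian techniques of this kind minimize an objective (here, converter weight) subject to design constraints by repeatedly minimizing an augmented Lagrangian function. A generic textbook statement of that function, not the paper's specific formulation, is:

    $$L_A(x, \lambda, \rho) = f(x) + \sum_i \lambda_i\, g_i(x) + \frac{\rho}{2} \sum_i g_i(x)^2,$$

    where $f(x)$ is the weight to be minimized, the $g_i(x)$ collect the converter design constraints written in equality form, the $\lambda_i$ are multiplier estimates updated between successive minimizations, and $\rho$ is a penalty parameter.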

  19. Use of Gabor filters and deep networks in the segmentation of retinal vessel morphology

    NASA Astrophysics Data System (ADS)

    Leopold, Henry A.; Orchard, Jeff; Zelek, John; Lakshminarayanan, Vasudevan

    2017-02-01

    The segmentation of retinal morphology has numerous applications in assessing ophthalmologic and cardiovascular disease pathologies. The early detection of many such conditions is often the most effective method for reducing patient risk. Computer-aided segmentation of the vasculature has proven to be a challenge, mainly due to inconsistencies such as noise and variations in hue and brightness that can greatly reduce the quality of fundus images. Accurate fundus and/or retinal vessel maps give rise to longitudinal studies able to utilize multimodal image registration and disease/condition status measurements, as well as applications in surgery preparation and biometrics. This paper further investigates the use of a Convolutional Neural Network as a multi-channel classifier of retinal vessels using the Digital Retinal Images for Vessel Extraction database, a standardized set of fundus images used to gauge the effectiveness of classification algorithms. The CNN has a feed-forward architecture and varies from other published architectures in its combination of: max-pooling, zero-padding, ReLU layers, batch normalization, two dense layers and finally a Softmax activation function. Notably, the use of Adam to optimize training the CNN on retinal fundus images has not been found in prior review. This work builds on prior work of the authors, exploring the use of Gabor filters to boost the accuracy of the system to 0.9478 during post processing. The mean of a series of Gabor filters with varying frequencies and sigma values is applied to the output of the network and used to determine whether a pixel represents a vessel or non-vessel.
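
    A rough sketch of that post-processing idea follows: average the responses of a small Gabor filter bank applied to a vessel-probability map and threshold the result. The filter parameters, the 0.5 threshold, and the random stand-in for the CNN output are all invented for illustration; the paper's actual frequencies, sigmas and decision rule are not reproduced here.

```python
# Illustrative Gabor-bank post-processing of a vessel-probability map.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(size, sigma, theta, wavelength, gamma=0.5, psi=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * \
           np.cos(2 * np.pi * xr / wavelength + psi)

def gabor_mean_response(prob_map, sigmas=(1.5, 2.5), wavelengths=(4, 8), n_theta=8):
    responses = []
    for s in sigmas:
        for w in wavelengths:
            for t in np.linspace(0, np.pi, n_theta, endpoint=False):
                responses.append(convolve(prob_map, gabor_kernel(15, s, t, w)))
    return np.mean(responses, axis=0)

prob_map = np.random.rand(64, 64)              # stand-in for the CNN output
# Kernels are not normalised, so the 0.5 threshold is purely illustrative.
vessel_mask = gabor_mean_response(prob_map) > 0.5
```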

  20. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    NASA Astrophysics Data System (ADS)

    Victorine, John; Watney, W. Lynn; Bhattacharya, Saibal

    2005-11-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling.

  1. The ProteoRed MIAPE web toolkit: A User-friendly Framework to Connect and Share Proteomics Standards*

    PubMed Central

    Medina-Aunon, J. Alberto; Martínez-Bartolomé, Salvador; López-García, Miguel A.; Salazar, Emilio; Navajas, Rosana; Jones, Andrew R.; Paradela, Alberto; Albar, Juan P.

    2011-01-01

    The development of the HUPO-PSI's (Proteomics Standards Initiative) standard data formats and MIAPE (Minimum Information About a Proteomics Experiment) guidelines should improve proteomics data sharing within the scientific community. Proteomics journals have encouraged the use of these standards and guidelines to improve the quality of experimental reporting and ease the evaluation and publication of manuscripts. However, there is an evident lack of bioinformatics tools specifically designed to create and edit standard file formats and reports, or embed them within proteomics workflows. In this article, we describe a new web-based software suite (The ProteoRed MIAPE web toolkit) that performs several complementary roles related to proteomic data standards. First, it can verify that the reports fulfill the minimum information requirements of the corresponding MIAPE modules, highlighting inconsistencies or missing information. Second, the toolkit can convert several XML-based data standards directly into human readable MIAPE reports stored within the ProteoRed MIAPE repository. Finally, it can also perform the reverse operation, allowing users to export from MIAPE reports into XML files for computational processing, data sharing, or public database submission. The toolkit is thus the first application capable of automatically linking the PSI's MIAPE modules with the corresponding XML data exchange standards, enabling bidirectional conversions. This toolkit is freely available at http://www.proteored.org/MIAPE/. PMID:21983993

  2. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Format (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.

  3. Conversion of Radiology Reporting Templates to the MRRT Standard.

    PubMed

    Kahn, Charles E; Genereaux, Brad; Langlotz, Curtis P

    2015-10-01

    In 2013, the Integrating the Healthcare Enterprise (IHE) Radiology workgroup developed the Management of Radiology Report Templates (MRRT) profile, which defines both the format of radiology reporting templates using an extension of Hypertext Markup Language version 5 (HTML5), and the transportation mechanism to query, retrieve, and store these templates. Of 200 English-language report templates published by the Radiological Society of North America (RSNA), initially encoded as text and in an XML schema language, 168 have been converted successfully into MRRT using a combination of automated processes and manual editing; conversion of the remaining 32 templates is in progress. The automated conversion process applied Extensible Stylesheet Language Transformation (XSLT) scripts, an XML parsing engine, and a Java servlet. The templates were validated for proper HTML5 and MRRT syntax using web-based services. The MRRT templates allow radiologists to share best-practice templates across organizations and have been uploaded to the template library to supersede the prior XML-format templates. By using MRRT transactions and MRRT-format templates, radiologists will be able to directly import and apply templates from the RSNA Report Template Library in their own MRRT-compatible vendor systems. The availability of MRRT-format reporting templates will stimulate adoption of the MRRT standard and is expected to advance the sharing and use of templates to improve the quality of radiology reports.
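
    The automated step described above applies XSLT stylesheets to XML-encoded templates. The sketch below shows the same mechanism with lxml using a toy stylesheet and a toy source document; the real RSNA/MRRT stylesheets and template structures are far richer and are not reproduced here.

```python
# Toy XSLT transformation: turn a minimal XML template into an HTML5-like fragment.
from lxml import etree

XSL = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/template">
    <section><xsl:apply-templates select="field"/></section>
  </xsl:template>
  <xsl:template match="field">
    <p><label><xsl:value-of select="@label"/></label></p>
  </xsl:template>
</xsl:stylesheet>"""

SRC = b"<template><field label='Findings'/><field label='Impression'/></template>"

transform = etree.XSLT(etree.fromstring(XSL))
result = transform(etree.fromstring(SRC))
print(etree.tostring(result, pretty_print=True).decode())
```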

  4. Use of XML and Java for collaborative petroleum reservoir modeling on the Internet

    USGS Publications Warehouse

    Victorine, J.; Watney, W.L.; Bhattacharya, S.

    2005-01-01

    The GEMINI (Geo-Engineering Modeling through INternet Informatics) is a public-domain, web-based freeware that is made up of an integrated suite of 14 Java-based software tools to accomplish on-line, real-time geologic and engineering reservoir modeling. GEMINI facilitates distant collaborations for small company and academic clients, negotiating analyses of both single and multiple wells. The system operates on a single server and an enterprise database. External data sets must be uploaded into this database. Feedback from GEMINI users provided the impetus to develop Stand Alone Web Start Applications of GEMINI modules that reside in and operate from the user's PC. In this version, the GEMINI modules run as applets, which may reside in local user PCs, on the server, or Java Web Start. In this enhanced version, XML-based data handling procedures are used to access data from remote and local databases and save results for later access and analyses. The XML data handling process also integrates different stand-alone GEMINI modules enabling the user(s) to access multiple databases. It provides flexibility to the user to customize analytical approach, database location, and level of collaboration. An example integrated field-study using GEMINI modules and Stand Alone Web Start Applications is provided to demonstrate the versatile applicability of this freeware for cost-effective reservoir modeling. © 2005 Elsevier Ltd. All rights reserved.

  5. The application of geography markup language (GML) to the geological sciences

    NASA Astrophysics Data System (ADS)

    Lake, Ron

    2005-11-01

    GML 3.0 became an adopted specification of the Open Geospatial Consortium (OGC) in January 2003, and is rapidly emerging as the world standard for the encoding, transport and storage of all forms of geographic information. This paper looks at the application of GML to one of the more challenging areas of automated geography, namely the geological sciences. Specific features of GML of interest to geologists are discussed and then illustrated through a series of geological case studies. We conclude the paper with a discussion of anticipated geological web services that GML will enable. GML is written in XML and makes use of XML Schema for extensibility. It can be used both to represent or model geographic objects and to transport them across the Internet. In this way it serves as the foundation for all manner of geographic web services. Unlike vertical application grammars such as LandXML, GML was intended to define geographic application languages, and hence is applicable to any geographic domain including forestry, environmental sciences, geology and oceanography. This paper provides a review of the basic features of GML that are fundamental to the geological sciences including geometry, coverages, observations, reference systems and temporality. These constructs are then employed in a series of simple geological case studies including structural geological description, surficial geology, representation of geological time scales, mineral occurrences, geohazards and geochemical reconnaissance.

  6. Representing Human Expertise by the OWL Web Ontology Language to Support Knowledge Engineering in Decision Support Systems.

    PubMed

    Ramzan, Asia; Wang, Hai; Buckingham, Christopher

    2014-01-01

    Clinical decision support systems (CDSSs) often base their knowledge and advice on human expertise. Knowledge representation needs to be in a format that can be easily understood by human users as well as supporting ongoing knowledge engineering, including evolution and consistency of knowledge. This paper reports on the development of an ontology specification for managing knowledge engineering in a CDSS for assessing and managing risks associated with mental-health problems. The Galatean Risk and Safety Tool, GRiST, represents mental-health expertise in the form of a psychological model of classification. The hierarchical structure was directly represented in the machine using an XML document. Functionality of the model and knowledge management were controlled using attributes in the XML nodes, with an accompanying paper manual for specifying how end-user tools should behave when interfacing with the XML. This paper explains the advantages of using the web-ontology language, OWL, as the specification, details some of the issues and problems encountered in translating the psychological model to OWL, and shows how OWL benefits knowledge engineering. The conclusions are that OWL can have an important role in managing complex knowledge domains for systems based on human expertise without impeding the end-users' understanding of the knowledge base. The generic classification model underpinning GRiST makes it applicable to many decision domains and the accompanying OWL specification facilitates its implementation.

  7. Lorentz boosted frame simulation technique in Particle-in-cell methods

    NASA Astrophysics Data System (ADS)

    Yu, Peicheng

    In this dissertation, we systematically explore the use of a simulation method for modeling laser wakefield acceleration (LWFA) using the particle-in-cell (PIC) method, called the Lorentz boosted frame technique. In the lab frame the plasma length is typically four orders of magnitude larger than the laser pulse length. Using this technique, simulations are performed in a Lorentz boosted frame in which the plasma length, which is Lorentz contracted, and the laser length, which is Lorentz expanded, are now comparable. This technique has the potential to reduce the computational needs of a LWFA simulation by more than four orders of magnitude, and is useful if there is no or negligible reflection of the laser in the lab frame. To realize the potential of Lorentz boosted frame simulations for LWFA, the first obstacle to overcome is a robust and violent numerical instability, called the Numerical Cerenkov Instability (NCI), that leads to unphysical energy exchange between relativistically drifting particles and their radiation. This leads to unphysical noise that dwarfs the real physical processes. In this dissertation, we first present a theoretical analysis of this instability, and show that the NCI comes from the unphysical coupling of the electromagnetic (EM) modes and Langmuir modes (both main and aliasing) of the relativistically drifting plasma. We then discuss the methods to eliminate them. However, the use of FFTs can lead to parallel scalability issues when there are many more cells along the drifting direction than in the transverse direction(s). We then describe an algorithm that has the potential to address this issue by using a higher order finite difference operator for the derivative in the plasma drifting direction, while using the standard second order operators in the transverse direction(s). The NCI for this algorithm is analyzed, and it is shown that the NCI can be eliminated using the same strategies that were used for the hybrid FFT/Finite Difference solver. This scheme also requires a current correction and filtering which require FFTs. However, we show that in this case the FFTs can be done locally on each parallel partition. We also describe how the use of the hybrid FFT/Finite Difference or the hybrid higher order finite difference/second order finite difference methods permit combining the Lorentz boosted frame simulation technique with another "speed up" technique, called the quasi-3D algorithm, to gain unprecedented speed up for the LWFA simulations. In the quasi-3D algorithm the fields and currents are defined on an r--z PIC grid and expanded in azimuthal harmonics. The expansion is truncated with only a few modes so it has similar computational needs of a 2D r--z PIC code. We show that NCI has similar properties in r--z as in z-x slab geometry and show that the same strategies for eliminating the NCI in Cartesian geometry can be effective for the quasi-3D algorithm leading to the possibility of unprecedented speed up. We also describe a new code called UPIC-EMMA that is based on fully spectral (FFT) solver. The new code includes implementation of a moving antenna that can launch lasers in the boosted frame. We also describe how the new hybrid algorithms were implemented into OSIRIS. Examples of LWFA using the boosted frame using both UPIC-EMMA and OSIRIS are given, including the comparisons against the lab frame results. 
We also describe how to efficiently obtain the boosted frame simulation data needed to generate the transformed lab frame data, as well as how to use a moving window in the boosted frame. The NCI is also a major issue for modeling relativistic shocks with the PIC algorithm. In relativistic shock simulations, two counter-propagating plasmas drifting at relativistic speeds collide against each other. We show that the strategies for eliminating the NCI developed in this dissertation enable such simulations to run for much longer simulation times, which should open a path for major advances in relativistic shock research. (Abstract shortened by ProQuest.)
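
    For orientation, the scale disparity the boosted-frame technique removes follows from the standard Lorentz transformations; these are generic textbook relations, not formulas quoted from the dissertation. For a boost with Lorentz factor $\gamma_b$ and velocity $\beta_b c$ along the propagation direction,

    $$L_{\text{plasma}}' = \frac{L_{\text{plasma}}}{\gamma_b}, \qquad L_{\text{laser}}' \approx (1+\beta_b)\,\gamma_b\, L_{\text{laser}},$$

    so the ratio of plasma length to laser pulse length shrinks by roughly $(1+\beta_b)\gamma_b^2 \approx 2\gamma_b^2$, which is the origin of the orders-of-magnitude reduction in computational cost cited above.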

  8. morph

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodall, John; Iannacone, Mike; Athalye, Anish

    2013-08-01

    Morph is a framework and domain-specific language (DSL) that helps parse and transform structured documents. It currently supports several file formats including XML, JSON, and CSV, and custom formats are usable as well.

  9. Cost-effectiveness assessment of lumpectomy cavity boost in elderly women with early stage estrogen receptor positive breast cancer receiving adjuvant radiotherapy.

    PubMed

    Lester-Coll, Nataniel H; Rutter, Charles E; Evans, Suzanne B

    2016-04-01

    Breast radiotherapy (RT) for elderly women with estrogen receptor positive early stage breast cancer (ER+ESBC) improves local recurrence (LR) rates without benefitting overall survival. Breast boost is a common practice, although the absolute benefit decreases with age. Consequently, an analysis of its cost-effectiveness in the elderly ESBC population is warranted. A Markov model was used to compare the cost-effectiveness of RT with or without a boost in elderly ER+ESBC patients. The ten-year probability of LR with boost was derived from the CALGB 9343 trial and adjusted by the hazard ratio for LR from boost radiotherapy trial data, yielding the LR rate without boost. Remaining parameters were estimated using published data. Boost RT was associated with an increase in mean cost ($7139 vs $6193) and effectiveness (5.66 vs 5.64 quality-adjusted life years; QALYs) relative to no boost. The incremental cost-effectiveness ratio (ICER) for boost was $55,903 per QALY. On one-way sensitivity analysis, boost remained cost-effective if the hazard ratio of LR with boost was <0.67. Boost RT for ER+ESBC patients was cost-effective over a wide range of assumptions and inputs at commonly accepted willingness-to-pay thresholds, particularly in women at higher risk for LR. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
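
    For reference, the incremental cost-effectiveness ratio used here follows the standard definition (a generic formula, not one quoted from the paper):

    $$\mathrm{ICER} = \frac{C_{\text{boost}} - C_{\text{no boost}}}{E_{\text{boost}} - E_{\text{no boost}}},$$

    with costs in dollars and effectiveness in QALYs; note that the rounded means quoted in the abstract will not reproduce the published ratio exactly.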

  10. Can you boost your metabolism?

    MedlinePlus

    Weight-loss boost metabolism; Obesity - boost metabolism; Overweight - boost metabolism ... Cowley MA, Brown WA, Considine RV. Obesity. In: Jameson JL, De Groot ... and Pediatric . 7th ed. Philadelphia, PA: Elsevier Saunders; ...

  11. A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems

    NASA Astrophysics Data System (ADS)

    Li, Yu; Oberweis, Andreas

    Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.

  12. Design and implementation of a health data interoperability mediator.

    PubMed

    Kuo, Mu-Hsing; Kushniruk, Andre William; Borycki, Elizabeth Marie

    2010-01-01

    The objective of this study is to design and implement a common-gateway oriented mediator to solve the health data interoperability problems that exist among heterogeneous health information systems. The proposed mediator has three main components: (1) a Synonym Dictionary (SD) that stores a set of global metadata and terminologies to serve as the mapping intermediary, (2) a Semantic Mapping Engine (SME) that can be used to map metadata and instance semantics, and (3) a DB-to-XML module that translates source health data stored in a database into XML format and back. A routine admission notification data exchange scenario is used to test the efficiency and feasibility of the proposed mediator. The study results show that the proposed mediator can make health information exchange more efficient.
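
    A minimal sketch of the role the DB-to-XML module plays: serialising a database row to XML for exchange and reading it back. The table and element names (admission, patient_id, ward) are invented for illustration and are not taken from the paper.

```python
# Illustrative DB-row <-> XML round trip for a data-exchange scenario.
import xml.etree.ElementTree as ET

def row_to_xml(table, row):
    """row: dict of column name -> value."""
    record = ET.Element(table)
    for column, value in row.items():
        ET.SubElement(record, column).text = str(value)
    return ET.tostring(record, encoding="unicode")

def xml_to_row(xml_text):
    record = ET.fromstring(xml_text)
    return record.tag, {child.tag: child.text for child in record}

msg = row_to_xml("admission", {"patient_id": "P001", "ward": "3B", "admitted": "2010-01-05"})
print(msg)
print(xml_to_row(msg))
```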

  13. Using SWE Standards for Ubiquitous Environmental Sensing: A Performance Analysis

    PubMed Central

    Tamayo, Alain; Granell, Carlos; Huerta, Joaquín

    2012-01-01

    Although smartphone applications represent the most typical data consumer tool from the citizen perspective in environmental applications, they can also be used for in-situ data collection and production in varied scenarios, such as geological sciences and biodiversity. The use of standard protocols, such as SWE, to exchange information between smartphones and sensor infrastructures brings benefits such as interoperability and scalability, but their reliance on XML is a potential problem when large volumes of data are transferred, due to limited bandwidth and processing capabilities on mobile phones. In this article we present a performance analysis about the use of SWE standards in smartphone applications to consume and produce environmental sensor data, analysing to what extent the performance problems related to XML can be alleviated by using alternative uncompressed and compressed formats.
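
    The trade-off studied above, XML payload size versus a compressed alternative, can be illustrated with a few lines of Python; the O&M-like element names and the batch size are invented for the example and the measurement says nothing about the paper's actual figures.

```python
# Compare raw vs gzip-compressed size of a synthetic XML observation batch.
import gzip

observation_xml = (
    "<om:Observation xmlns:om='http://www.opengis.net/om/1.0'>"
    "<om:phenomenonTime>2012-05-01T10:00:00Z</om:phenomenonTime>"
    "<om:result uom='Cel'>21.4</om:result>"
    "</om:Observation>"
).encode("utf-8") * 200   # simulate a batch of 200 observations

compressed = gzip.compress(observation_xml)
print("raw bytes:", len(observation_xml))
print("gzip bytes:", len(compressed))
print("ratio: %.2f" % (len(observation_xml) / len(compressed)))
```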

  14. Dealing with Diversity in Computational Cancer Modeling

    PubMed Central

    Johnson, David; McKeever, Steve; Stamatakos, Georgios; Dionysiou, Dimitra; Graf, Norbert; Sakkalis, Vangelis; Marias, Konstantinos; Wang, Zhihui; Deisboeck, Thomas S.

    2013-01-01

    This paper discusses the need for interconnecting computational cancer models from different sources and scales within clinically relevant scenarios to increase the accuracy of the models and speed up their clinical adaptation, validation, and eventual translation. We briefly review current interoperability efforts drawing upon our experiences with the development of in silico models for predictive oncology within a number of European Commission Virtual Physiological Human initiative projects on cancer. A clinically relevant scenario, addressing brain tumor modeling that illustrates the need for coupling models from different sources and levels of complexity, is described. General approaches to enabling interoperability using XML-based markup languages for biological modeling are reviewed, concluding with a discussion on efforts towards developing cancer-specific XML markup to couple multiple component models for predictive in silico oncology. PMID:23700360

  15. Method and system to discover and recommend interesting documents

    DOEpatents

    Potok, Thomas Eugene; Steed, Chad Allen; Patton, Robert Matthew

    2017-01-31

    Disclosed are several examples of systems that can read millions of news feeds per day about topics (e.g., your customers, competitors, markets, and partners), and provide a small set of the most relevant items to read to keep current with the overwhelming amount of information currently available. Topics of interest can be chosen by the user of the system for use as seeds. The seeds can be vectorized and compared with the target documents to determine their similarity. The similarities can be sorted from highest to lowest so that the most similar seed and target documents are at the top of the list. This output can be produced in XML format so that an RSS Reader can format the XML. This allows for easy Internet access to these recommendations.
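
    A hedged sketch of that pipeline follows: vectorise seed topics and target documents, rank targets by similarity, and emit simple RSS-style XML. It is not the patented implementation; the seeds, documents and use of TF-IDF with cosine similarity are illustrative choices.

```python
# Illustrative seed-vs-target ranking with TF-IDF and cosine similarity, emitted as RSS-like XML.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import xml.etree.ElementTree as ET

seeds = ["particle filter tracking", "xml data interchange standards"]
targets = ["A survey of XML schema languages for data exchange",
           "Deep learning for object tracking with particle filters",
           "History of medieval trade routes"]

vec = TfidfVectorizer()
matrix = vec.fit_transform(seeds + targets)
sims = cosine_similarity(matrix[:len(seeds)], matrix[len(seeds):]).max(axis=0)

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
for score, doc in sorted(zip(sims, targets), reverse=True):
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = doc
    ET.SubElement(item, "description").text = "similarity %.3f" % score

print(ET.tostring(rss, encoding="unicode"))
```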

  16. Effects of biodiesel on continuous regeneration DPF characteristics

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Xie, Hui; Gao, Guoyou; Wang, Wei; Hui, Chun

    2017-06-01

    A critical requirement for the implementation of a DPF on a modern engine is the determination of the break-even temperature (BET), which is defined as the temperature at which particulate deposition on the filter is balanced by particulate oxidation on the filter. In order to study the influence of biodiesel on the regeneration characteristics of a continuously regenerating DPF, bench tests were carried out to investigate the BET of a continuously regenerating DPF assembled with a diesel engine fueled with neat diesel and biodiesel. Test results show that at the same engine operating conditions the fuel consumption is higher for the biodiesel case, and the intake air quantity and boost pressure are lower; the BET for the diesel fuel is about 310 °C, while it is about 250 °C for the biodiesel case. When the engine is at the low torque and low exhaust temperature operating condition, the CO conversion rate is extremely low and the NO2/NOX ratio is small; with the increase of torque and exhaust temperature, the CO conversion and NO2/NOX ratio increase significantly, and the maximum NO2/NOX ratio (about 35%) was measured at 350 °C. In addition, the DPF has better filtration efficiency for biodiesel PM, and the use of biodiesel in an engine assembled with a DPF has significant benefits.

  17. Design of a Wireless Sensor System with the Algorithms of Heart Rate and Agility Index for Athlete Evaluation

    PubMed Central

    Li, Meina; Kim, Youn Tae

    2017-01-01

    Athlete evaluation systems can effectively monitor daily training and boost performance to reduce injuries. Conventional heart-rate measurement systems can be easily affected by artifact movement, especially in the case of athletes. Significant noise can be generated owing to high-intensity activities. To improve the comfort for athletes and the accuracy of monitoring, we have proposed to combine robust heart rate and agility index monitoring algorithms into a small, light, and single node. A band-pass-filter-based R-wave detection algorithm was developed. The agility index was calculated by preprocessing with band-pass filtering and employing the zero-crossing detection method. The evaluation was conducted under both laboratory and field environments to verify the accuracy and reliability of the algorithm. The heart rate and agility index measurements can be wirelessly transmitted to a personal computer in real time by the ZigBee telecommunication system. The results show that the error rate of measurement of the heart rate is within 2%, which is comparable with that of the traditional wired measurement method. The sensitivity of the agility index, which could be distinguished as the activity speed, changed slightly. Thus, we confirmed that the developed algorithm could be used in an effective and safe exercise-evaluation system for athletes. PMID:29039763
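
    The two signal-processing steps mentioned above, band-pass filtering followed by zero-crossing counting, can be sketched generically as below. The Butterworth design, cut-off frequencies, sampling rate and synthetic signal are illustrative assumptions, not the parameters used in the paper.

```python
# Band-pass filter a synthetic signal, then count zero crossings as a crude activity index.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                                     # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)

def bandpass(x, low, high, fs, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

filtered = bandpass(signal, 0.5, 5.0, fs)

# Zero-crossing detection: count sign changes per second.
crossings = np.count_nonzero(np.diff(np.sign(filtered)) != 0)
agility_index = crossings / t[-1]
print("zero crossings per second: %.2f" % agility_index)
```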

  18. The relative weights of direct and indirect experiences in the formation of environmental risk beliefs.

    PubMed

    Viscusi, W Kip; Zeckhauser, Richard J

    2015-02-01

    Direct experiences, we find, influence environmental risk beliefs more than the indirect experiences derived from outcomes to others. This disparity could have a rational basis. Or it could be based on behavioral proclivities in accord with the well-established availability heuristic or the vested-interest heuristic, which we introduce in this article. Using original data from a large, nationally representative sample, this article examines the perception of, and responses to, morbidity risks from tap water. Direct experiences have a stronger and more consistent effect on different measures of risk belief. Direct experiences also boost the precautionary response of drinking bottled water and drinking filtered water, while indirect experiences do not. These results are consistent with the hypothesized neglect of indirect experiences in other risk contexts, such as climate change. © 2014 Society for Risk Analysis.

  19. Automatic choroid cells segmentation and counting in fluorescence microscopic image

    NASA Astrophysics Data System (ADS)

    Fei, Jianjun; Zhu, Weifang; Shi, Fei; Xiang, Dehui; Lin, Xiao; Yang, Lei; Chen, Xinjian

    2016-03-01

    In this paper, we proposed a method to automatically segment and count the rhesus choroid-retinal vascular endothelial cells (RF/6A) in fluorescence microscopic images which is based on shape classification, bottleneck detection and accelerated Dijkstra algorithm. The proposed method includes four main steps. First, a thresholding filter and morphological operations are applied to reduce the noise. Second, a shape classifier is used to decide whether a connected component is needed to be segmented. In this step, the AdaBoost classifier is applied with a set of shape features. Third, the bottleneck positions are found based on the contours of the connected components. Finally, the cells segmentation and counting are completed based on the accelerated Dijkstra algorithm with the gradient information between the bottleneck positions. The results show the feasibility and efficiency of the proposed method.

  20. A Simple XML Producer-Consumer Protocol

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computing Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section will provide some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
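
    The general idea of layering XML event messages over TCP can be sketched as follows. The event element names, the 4-byte length-prefix framing, and the producer address are all illustrative choices and not the exact protocol proposed in the paper.

```python
# Illustrative length-prefixed XML event messaging over TCP.
import socket
import struct
import xml.etree.ElementTree as ET

def encode_event(source, name, value):
    event = ET.Element("event", source=source, name=name)
    event.text = str(value)
    body = ET.tostring(event)
    return struct.pack("!I", len(body)) + body    # 4-byte big-endian length prefix

def send_event(host, port, payload):
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

def read_event(sock):
    size = struct.unpack("!I", sock.recv(4))[0]
    data = b""
    while len(data) < size:
        data += sock.recv(size - len(data))
    return ET.fromstring(data)

# Example producer call (consumer address is hypothetical):
# send_event("monitor.example.org", 9100, encode_event("node-12", "cpu.load", 0.73))
```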

  1. A Generic Metadata Editor Supporting System Using Drupal CMS

    NASA Astrophysics Data System (ADS)

    Pan, J.; Banks, N. G.; Leggott, M.

    2011-12-01

    Metadata handling is a key factor in preserving and reusing scientific data. In recent years, standardized structural metadata has become widely used in Geoscience communities. However, there exist many different standards in Geosciences, such as the current version of the Federal Geographic Data Committee's Content Standard for Digital Geospatial Metadata (FGDC CSDGM), the Ecological Markup Language (EML), the Geography Markup Language (GML), and the emerging ISO 19115 and related standards. In addition, there are many different subsets within the Geoscience subdomain such as the Biological Profile of the FGDC (CSDGM), or for geopolitical regions, such as the European Profile or the North American Profile in the ISO standards. It is therefore desirable to have a software foundation to support metadata creation and editing for multiple standards and profiles, without re-inventing the wheel. We have developed a software module as a generic, flexible software system to do just that: to facilitate the support for multiple metadata standards and profiles. The software consists of a set of modules for the Drupal Content Management System (CMS), with minimal inter-dependencies to other Drupal modules. There are two steps in using the system's metadata functions. First, an administrator can use the system to design a user form, based on an XML schema and its instances. The form definition is named and stored in the Drupal database as an XML blob. Second, users in an editor role can then use the persisted XML definition to render an actual metadata entry form, for creating or editing a metadata record. Behind the scenes, the form definition XML is transformed into a PHP array, which is then rendered via the Drupal Form API. When the form is submitted, the posted values are used to modify a metadata record. Drupal hooks can be used to perform custom processing on the metadata record before and after submission. It is trivial to store the metadata record as an actual XML file or in a storage/archive system. We are working on adding many features to help editor users, such as auto completion, pre-populating of forms, partial saving, as well as automatic schema validation. In this presentation we will demonstrate a few sample editors, including an FGDC editor and a bare-bones editor for ISO 19115/19139. We will also demonstrate the use of templates during the definition phase, with the support of export and import functions. Form pre-population and input validation will also be covered. These modules are available as open-source software from the Islandora software foundation, as a component of a larger Drupal-based data archive system. They can be easily installed as a stand-alone system, or plugged into other existing metadata platforms.

  2. Metadata and Service at the GFZ ISDC Portal

    NASA Astrophysics Data System (ADS)

    Ritschel, B.

    2008-05-01

    The online service portal of the GFZ Potsdam Information System and Data Center (ISDC) is an access point for all manner of geoscientific geodata, its corresponding metadata, scientific documentation and software tools. At present almost 2000 national and international users and user groups have the opportunity to request Earth science data from a portfolio of 275 different product types and more than 20 million single data files with an added volume of approximately 12 TByte. The majority of the data and information the portal currently offers to the public are global geomonitoring products such as satellite orbit and Earth gravity field data as well as geomagnetic and atmospheric data for exploration. These products for Earth's changing system are provided via state-of-the-art retrieval techniques. The data product catalog system behind these techniques is based on the extensive usage of standardized metadata, which describe the different geoscientific product types and data products in a uniform way. Whereas all ISDC product types are specified by NASA's Directory Interchange Format (DIF), Version 9.0 parent XML DIF metadata files, the individual data files are described by extended DIF metadata documents. Depending on the beginning of the scientific project, one part of the data files is described by extended DIF, Version 6 metadata documents and the other part is specified by data child XML DIF metadata documents. Both the product-type-dependent parent DIF metadata documents and the data-file-dependent child DIF metadata documents are derived from a base-DIF.xsd XML schema file. The ISDC metadata philosophy defines a geoscientific product as a package consisting of mostly one, or sometimes more than one, data file plus one extended DIF metadata file. Because NASA's DIF metadata standard was developed to specify a collection of data only, the extension of the DIF standard consists of new and specific attributes, which are necessary for an explicit identification of single data files and the set-up of a comprehensive Earth science data catalog. The huge ISDC data catalog is realized by product-type-dependent tables filled with data-file-related metadata, which have relations to corresponding metadata tables. The product-type-describing parent DIF XML metadata documents are stored and managed in ORACLE's XML storage structures. In order to improve the interoperability of the ISDC service portal, the existing proprietary catalog system will be extended by an ISO 19115-based web catalog service. In addition to this development, there is an ISDC-related semantic network of different kinds of metadata resources, such as standardized and non-standardized metadata documents and literature, as well as Web 2.0 user-generated information derived from tagging activities and social navigation data.

  3. A PIPO Boost Converter with Low Ripple and Medium Current Application

    NASA Astrophysics Data System (ADS)

    Bandri, S.; Sofian, A.; Ismail, F.

    2018-04-01

    This paper proposes a Parallel Input Parallel Output (PIPO) boost converter to increase the power capability of the converter and reduce the inductor currents. The proposed technique distributes the current over n parallel inductors and switching components. Four parallel boost converters are implemented with an input voltage of 20.5 Vdc to generate an output voltage of 28.8 Vdc. The PIPO boost converter applies phase-shift pulse width modulation, which is compared with a conventional PIPO boost converter that uses an identical pulse for every switching component. The reduction in current ripple shows the advantage of the PIPO boost converter over the conventional boost converter. Various loads and duty cycles are simulated and analyzed to verify the performance of the PIPO boost converter. Finally, the imbalance of the inductor currents is verified over four regions of duty cycle below 0.6.

  4. Storm Prediction Center Convective Outlooks

    Science.gov Websites


  5. MedlinePlus Connect: Frequently Asked Questions (FAQs)

    MedlinePlus

    ... topic data in XML format. Using the Web service, software developers can build applications that utilize MedlinePlus health topic information. The service accepts keyword searches as requests and returns relevant ...

  6. The PDS-based Data Processing, Archiving and Management Procedures in Chang'e Mission

    NASA Astrophysics Data System (ADS)

    Zhang, Z. B.; Li, C.; Zhang, H.; Zhang, P.; Chen, W.

    2017-12-01

    PDS is adopted as the standard format for scientific data and the foundation of all data-related procedures in the Chang'e mission. Unlike the geographically distributed nature of the Planetary Data System, all procedures of data processing, archiving, management and distribution are carried out at the headquarters of the Ground Research and Application System of the Chang'e mission in a centralized manner. The RAW data acquired by the ground stations are transmitted to and processed by the data preprocessing subsystem (DPS) for the production of PDS-compliant Level 0 to Level 2 data products using established algorithms, with each product file being well described using an attached label; all products with the same orbit number are then put together into a scheduled archiving task along with an XML archive list file recording all product files' properties, such as file name and file size. After receiving the archive request from the DPS, the data management subsystem (DMS) is invoked to parse the XML list file to validate all the claimed files and their compliance with PDS using a prebuilt data dictionary, and then to extract metadata for each data product file from its PDS label and the fields of its normalized filename. Various requirements of data management, retrieval, distribution and application can be well met using the flexible combination of the rich metadata empowered by the PDS. In the forthcoming CE-5 mission, the design of data structures and procedures will be updated from PDS version 3, used in the previous CE-1, CE-2 and CE-3 missions, to the new version 4. The main changes are: 1) a dedicated detached XML label will be used to describe the corresponding scientific data acquired by the 4 instruments carried, and the XML parsing framework used in archive list validation will be reused for the label after some necessary adjustments; 2) all the image data acquired by the panorama camera, landing camera and lunar mineralogical spectrometer should use an Array_2D_Image/Array_3D_Image object to store image data and a Table_Character object to store the image frame header; the tabulated data acquired by the lunar regolith penetrating radar should use a Table_Binary object to store measurements.
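
    The archive-list check described above can be sketched generically: parse an XML manifest of product files and verify that each listed file exists on disk with the expected size. The manifest layout (<file name=... size=...>) and the paths are invented for illustration, not the actual Chang'e schema.

```python
# Illustrative XML manifest validation against files on disk.
import os
import xml.etree.ElementTree as ET

def validate_manifest(manifest_path, data_dir):
    problems = []
    root = ET.parse(manifest_path).getroot()
    for entry in root.iter("file"):
        name = entry.get("name")
        expected_size = int(entry.get("size"))
        path = os.path.join(data_dir, name)
        if not os.path.isfile(path):
            problems.append("missing: " + name)
        elif os.path.getsize(path) != expected_size:
            problems.append("size mismatch: " + name)
    return problems

# issues = validate_manifest("archive_list.xml", "/data/orbit_0421")   # hypothetical paths
# print(issues or "manifest OK")
```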

  7. Java Application Shell: A Framework for Piecing Together Java Applications

    NASA Technical Reports Server (NTRS)

    Miller, Philip; Powers, Edward I. (Technical Monitor)

    2001-01-01

    This session describes the architecture of Java Application Shell (JAS), a Swing-based framework for developing interactive Java applications. Java Application Shell is being developed by Commerce One, Inc. for NASA Goddard Space Flight Center Code 588. The purpose of JAS is to provide a framework for the development of Java applications, providing features that enable the development process to be more efficient, consistent and flexible. Fundamentally, JAS is based upon an architecture where an application is considered a collection of 'plugins'. In turn, a plug-in is a collection of Swing actions defined using XML and packaged in a jar file. Plug-ins may be local to the host platform or remotely-accessible through HTTP. Local and remote plugins are automatically discovered by JAS upon application startup; plugins may also be loaded dynamically without having to re-start the application. Using Extensible Markup Language (XML) to define actions, as opposed to hardcoding them in application logic, allows easier customization of application-specific operations by separating application logic from presentation. Through XML, a developer defines an action that may appear on any number of menus, toolbars, and buttons. Actions maintain and propagate enable/disable states and specify icons, tool-tips, titles, etc. Furthermore, JAS allows actions to be implemented using various scripting languages through the use of IBM's Bean Scripting Framework. Scripted action implementation is seamless to the end-user. In addition to action implementation, scripts may be used for application and unit-level testing. In the case of application-level testing, JAS has hooks to assist a script in simulating end-user input. JAS also provides property and user preference management, JavaHelp, Undo/Redo, Multi-Document Interface, Single-Document Interface, printing, and logging. Finally, Jini technology has also been included into the framework by means of a Jini services browser and the ability to associate services with actions. Several Java technologies have been incorporated into JAS, including Swing, Internal Frames, Java Beans, XML, JavaScript, JavaHelp, and Jini. Additional information is contained in the original extended abstract.

  8. National Space Science Data Center Information Model

    NASA Astrophysics Data System (ADS)

    Bell, E. V.; McCaslin, P.; Grayzeck, E.; McLaughlin, S. A.; Kodis, J. M.; Morgan, T. H.; Williams, D. R.; Russell, J. L.

    2013-12-01

    The National Space Science Data Center (NSSDC) was established by NASA in 1964 to provide for the preservation and dissemination of scientific data from NASA missions. It has evolved to support distributed, active archives that were established in the Planetary, Astrophysics, and Heliophysics disciplines through a series of Memoranda of Understanding. The disciplines took over responsibility for working with new projects to acquire and distribute data for community researchers while the NSSDC remained vital as a deep archive. Since 2000, NSSDC has been using the Archive Information Package to preserve data over the long term. As part of its effort to streamline the ingest of data into the deep archive, the NSSDC developed and implemented a data model of desired and required metadata in XML. This process, in use for roughly five years now, has been successfully used to support the identification and ingest of data into the NSSDC archive, most notably those data from the Planetary Data System (PDS) submitted under PDS3. A series of software packages (X-ware) were developed to handle the submission of data from the PDS nodes utilizing a volume structure. An XML submission manifest is generated at the PDS provider site prior to delivery to NSSDC. The manifest ensures the fidelity of PDS data delivered to NSSDC. Preservation metadata is captured in an XML object when NSSDC archives the data. With the recent adoption by the PDS of the XML-based PDS4 data model, there is an opportunity for the NSSDC to provide additional services to the PDS such as the preservation, tracking, and restoration of individual products (e.g., a specific data file or document), which was unfeasible in the previous PDS3 system. The NSSDC is modifying and further streamlining its data ingest process to take advantage of the PDS4 model, an important consideration given the ever-increasing amount of data being generated and archived by orbiting missions at the Moon and Mars, other active projects such as BRRISON, LADEE, MAVEN, INSIGHT, OSIRIS-REX and ground-based observatories. Streamlining the ingest process also benefits the continued processing of PDS3 data. We will report on our progress and status.

  9. The tissue microarray data exchange specification: A document type definition to validate and enhance XML data

    PubMed Central

    Nohle, David G; Ayers, Leona W

    2005-01-01

    Background The Association for Pathology Informatics (API) Extensible Mark-up Language (XML) TMA Data Exchange Specification (TMA DES) proposed in April 2003 provides a community-based, open source tool for sharing tissue microarray (TMA) data in a common format. Each tissue core within an array has separate data including digital images; therefore an organized, common approach to produce, navigate and publish such data facilitates viewing, sharing and merging TMA data from different laboratories. The AIDS and Cancer Specimen Resource (ACSR) is a HIV/AIDS tissue bank consortium sponsored by the National Cancer Institute (NCI) Division of Cancer Treatment and Diagnosis (DCTD). The ACSR offers HIV-related malignancies and uninfected control tissues in microarrays (TMA) accompanied by de-identified clinical data to approved researchers. Exporting our TMA data into the proposed API specified format offers an opportunity to evaluate the API specification in an applied setting and to explore its usefulness. Results A document type definition (DTD) that governs the allowed common data elements (CDE) in TMA DES export XML files was written, tested and evolved and is in routine use by the ACSR. This DTD defines TMA DES CDEs which are implemented in an external file that can be supplemented by internal DTD extensions for locally defined TMA data elements (LDE). Conclusion ACSR implementation of the TMA DES demonstrated the utility of the specification and allowed application of a DTD to validate the language of the API specified XML elements and to identify possible enhancements within our TMA data management application. Improvements to the specification have additionally been suggested by our experience in importing other institution's exported TMA data. Enhancements to TMA DES to remove ambiguous situations and clarify the data should be considered. Better specified identifiers and hierarchical relationships will make automatic use of the data possible. Our tool can be used to reorder data and add identifiers; upgrading data for changes in the specification can be automatically accomplished. Using a DTD (optionally reflecting our proposed enhancements) can provide stronger validation of exported TMA data. PMID:15871741
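
    DTD-based validation of an exported XML file, as described above, can be sketched with lxml. The toy DTD and document below merely stand in for the TMA DES common data elements, which are not reproduced here.

```python
# Validate an XML document against a DTD and report any errors.
from io import StringIO
from lxml import etree

DTD_TEXT = """
<!ELEMENT tma_block (core+)>
<!ELEMENT core (histology, image?)>
<!ELEMENT histology (#PCDATA)>
<!ELEMENT image (#PCDATA)>
"""
XML_TEXT = "<tma_block><core><histology>Kaposi sarcoma</histology></core></tma_block>"

dtd = etree.DTD(StringIO(DTD_TEXT))
doc = etree.fromstring(XML_TEXT)

if dtd.validate(doc):
    print("document conforms to the DTD")
else:
    for error in dtd.error_log.filter_from_errors():
        print(error.line, error.message)
```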

  10. Boosting BCG-primed responses with a subunit Apa vaccine during the waning phase improves immunity and imparts protection against Mycobacterium tuberculosis.

    PubMed

    Nandakumar, Subhadra; Kannanganat, Sunil; Dobos, Karen M; Lucas, Megan; Spencer, John S; Amara, Rama Rao; Plikaytis, Bonnie B; Posey, James E; Sable, Suraj B

    2016-05-13

    Heterologous prime-boosting has emerged as a powerful vaccination approach against tuberculosis. However, the optimal timing to boost BCG-immunity using subunit vaccines remains unclear in clinical trials. Here, we followed the adhesin Apa-specific T-cell responses in BCG-primed mice and investigated its potential as a BCG booster. The Apa-specific T-cell response peaked 32-52 weeks after parenteral or mucosal BCG-priming but waned significantly by 78 weeks. A subunit-Apa-boost during the contraction-phase of the BCG-response had a greater effect on the magnitude and functional quality of specific cellular and humoral responses compared to a boost at the peak of the BCG-response. The cellular response increased following the mucosal BCG-prime-Apa-subunit-boost strategy compared to the Apa-subunit-prime-BCG-boost approach. However, parenteral BCG-prime-Apa-subunit-boost by a homologous route was the most effective strategy in terms of enhancing specific T-cell responses during waning in the lung and spleen. Two Apa-boosters markedly improved waning BCG-immunity and significantly reduced Mycobacterium tuberculosis burdens post-challenge. Our results highlight the challenges of optimizing prime-boost regimens in mice, where BCG drives persistent immune-activation, and suggest that boosting with a heterologous vaccine may be ideal once the specific persisting effector responses have contracted. Our results have important implications for the design of prime-boost regimens against tuberculosis in humans.

  11. Linear high-boost fusion of Stokes vector imagery for effective discrimination and recognition of real targets in the presence of multiple identical decoys

    NASA Astrophysics Data System (ADS)

    El-Saba, Aed; Sakla, Wesam A.

    2010-04-01

    Recently, the use of imaging polarimetry has received considerable attention for use in automatic target recognition (ATR) applications. In military remote sensing applications, there is a great demand for sensors that are capable of discriminating between real targets and decoys. Accurate discrimination of decoys from real targets is a challenging task and often requires the fusion of various sensor modalities that operate simultaneously. In this paper, we use a simple linear fusion technique known as the high-boost fusion (HBF) method for effective discrimination of real targets in the presence of multiple decoys. The HBF assigns more weight to the polarization-based imagery in forming the final fused image that is used for detection. We have captured both intensity and polarization-based imagery from an experimental laboratory arrangement containing a mixture of sand/dirt, rocks, vegetation, and other objects for the purpose of simulating scenery that would be acquired in a remote sensing military application. A target object and three decoys that are identical in physical appearance (shape, surface structure and color) but different in material composition were also placed in the scene. We use the wavelet-filter joint transform correlation (WFJTC) technique to perform detection between the input scenery and the target object. Our results show that use of the HBF method increases the correlation performance metrics associated with the WFJTC-based detection process when compared to using either the traditional intensity or polarization-based images alone.
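    The fusion step itself is a pixel-wise weighted sum. The following numpy sketch illustrates the idea only; the boost factor k and the normalization are assumptions, not the weights used in the paper.

      # Sketch of linear high-boost fusion of an intensity image with a
      # polarization-derived image (e.g., degree of linear polarization).
      # k > 1 weights the polarization channel more heavily; its value here
      # is an illustrative assumption.
      import numpy as np

      def high_boost_fuse(intensity, polarization, k=2.0):
          i = intensity.astype(np.float64)
          p = polarization.astype(np.float64)
          fused = i + k * p                  # polarization gets the larger weight
          fused -= fused.min()               # rescale to [0, 1] before correlation
          return fused / max(fused.max(), 1e-12)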

  12. Forward collision warning based on kernelized correlation filters

    NASA Astrophysics Data System (ADS)

    Pu, Jinchuan; Liu, Jun; Zhao, Yong

    2017-07-01

    Vehicle detection and tracking systems are indispensable for reducing the occurrence of traffic accidents, and the nearest vehicle is the one most likely to cause harm. This paper therefore focuses on the nearest vehicle in the region of interest (ROI). The basic requirements for such a system are high accuracy, real-time operation, and autonomy. We build a system that combines the kernelized correlation filter (KCF) tracking algorithm with a Haar-AdaBoost detection algorithm. The KCF algorithm reduces computation time and increases speed through cyclic shifts and diagonalization, which satisfies the real-time requirement; Haar features offer the same advantages of simple operation and fast detection. Combining the two algorithms yields a clear improvement in running rate over previous work. The detection result of the Haar-AdaBoost classifier provides the initial bounding box for the KCF tracker, removing the need for manual marking of the car in the initialization phase. Haar detection and KCF tracking with Histogram of Oriented Gradients (HOG) features ensure the accuracy of the system. We evaluate the framework on a self-collected dataset. The experimental results demonstrate that the proposed method is robust and runs in real time; it adapts effectively to illumination variation and, even at night, meets the detection and tracking requirements, which is an improvement over previous work.
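    The detector-initializes-tracker pipeline described above can be sketched with OpenCV. This is a rough illustration under stated assumptions: it requires the contrib tracking module, the cascade file name "cars.xml" is a placeholder rather than a bundled model, and the tracker factory spelling (cv2.TrackerKCF_create vs. cv2.legacy.TrackerKCF_create) varies between OpenCV versions.

      # Sketch: a Haar cascade provides the initial bounding box, then a KCF tracker
      # follows the vehicle frame to frame. Assumes opencv-contrib-python and a
      # pre-trained vehicle cascade; file names are placeholders.
      import cv2

      detector = cv2.CascadeClassifier("cars.xml")
      cap = cv2.VideoCapture("road_video.mp4")
      tracker, tracking = None, False

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          if not tracking:
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
              if len(boxes) > 0:
                  x, y, w, h = boxes[0]          # e.g., nearest vehicle in the ROI
                  tracker = cv2.TrackerKCF_create()
                  tracker.init(frame, (int(x), int(y), int(w), int(h)))
                  tracking = True
          else:
              tracking, box = tracker.update(frame)
              if tracking:
                  x, y, w, h = map(int, box)
                  cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          cv2.imshow("FCW", frame)
          if cv2.waitKey(1) == 27:               # Esc quits
              break

    When the tracker loses the target, the loop falls back to detection on the next frame, which mirrors the re-initialization role the Haar-AdaBoost stage plays in the paper.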

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shusharina, N; Khan, F; Sharp, G

    Purpose: To determine the dose level and timing of the boost in locally advanced lung cancer patients with confirmed tumor recurrence by comparing how different dose escalation strategies improve the therapeutic ratio. Methods: We selected eighteen patients with advanced NSCLC and confirmed recurrence. For each patient, a base IMRT plan to 60 Gy prescribed to the PTV was created. We then compared three dose escalation strategies: a uniform escalation to the original PTV, and an escalation to a PET-defined target planned either sequentially or concurrently. The PET-defined targets were delineated as biologically-weighted regions on a pre-treatment 18F-FDG PET. The maximal achievable dose, without violating the OAR constraints, was identified for each boosting method. The EUD for the target, spinal cord, combined lung, and esophagus was compared for each plan. Results: The average prescribed dose was 70.4±13.9 Gy for the uniform boost, 88.5±15.9 Gy for the sequential boost and 89.1±16.5 Gy for the concurrent boost. The size of the boost planning volume was 12.8% (range: 1.4 – 27.9%) of the PTV. The most prescription-limiting dose constraint was the V70 of the esophagus. The EUD within the target increased by 10.6 Gy for the uniform boost, by 31.4 Gy for the sequential boost and by 38.2 Gy for the concurrent boost. The EUD for OARs increased by the following amounts: spinal cord, 3.1 Gy for uniform boost, 2.8 Gy for sequential boost, 5.8 Gy for concurrent boost; combined lung, 1.6 Gy for uniform, 1.1 Gy for sequential, 2.8 Gy for concurrent; esophagus, 4.2 Gy for uniform, 1.3 Gy for sequential, 5.6 Gy for concurrent. Conclusion: Dose escalation to a biologically-weighted gross tumor volume defined on a pre-treatment 18F-FDG PET may provide an improved therapeutic ratio without breaching predefined OAR constraints. A sequential boost provides better sparing of OARs than a concurrent boost.
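    For context, the equivalent uniform dose (EUD) compared here is conventionally computed from the differential dose-volume histogram as a generalized mean with a tissue-specific exponent a (the abstract does not state the exponent values used):

      \mathrm{EUD} = \Bigl( \sum_i v_i \, D_i^{\,a} \Bigr)^{1/a}

    where v_i is the fractional volume of the structure receiving dose D_i.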

  14. Milne boost from Galilean gauge theory

    NASA Astrophysics Data System (ADS)

    Banerjee, Rabin; Mukherjee, Pradip

    2018-03-01

    The physical origin of Milne boost invariance of Newton-Cartan spacetime is traced to the effect of local Galilean boosts on its metric structure, using Galilean gauge theory. Notably, no gauge field is required to understand Milne boost invariance.

  15. Storm Prediction Center Storm Reports

    Science.gov Websites

    Index of Storm Prediction Center products with XML/RSS feeds, including storm reports, current watches, mesoscale discussions, convective outlooks, thunderstorm outlooks, and fire weather outlooks (NOAA / National Weather Service).

  16. Robust boosting via convex optimization

    NASA Astrophysics Data System (ADS)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. o How to adapt boosting to regression problems? Boosting methods are originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties. o Can boosting techniques be useful in practice? The presented theoretical results are guided by simulation results either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis and a drug discovery process. --- Note: The author received the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam, for the best doctoral thesis of 2001/2002. This thesis considers statistical learning problems. Learning machines extract information from a given set of training examples so that they are able to predict properties of previously unseen examples, e.g. a class label. We consider the case in which the resulting classification or regression rule is composed of simple rules, the base hypotheses. The so-called boosting algorithms iteratively generate a weighted sum of base hypotheses that predicts well on unseen examples.
    The thesis addresses the following topics: o The statistical learning theory suited to analyzing boosting methods. We study learning-theoretic guarantees for estimating the prediction quality on unseen examples. Large-margin classification techniques, in particular boosting and support vector machines, have recently emerged as a practical result of this theory. A large margin implies a high prediction quality of the decision rule. We therefore analyze how large the margins in boosting are and propose an improved algorithm that efficiently generates maximum-margin rules. o What is the relationship between boosting and convex optimization techniques? To analyze the properties of the resulting classification or regression rules, it is very important to understand whether and under which conditions iterative algorithms such as boosting converge. We show that such algorithms can be used to solve very large constrained optimization problems whose solutions can be well characterized. To this end, connections to the field of convex optimization are established and exploited to give convergence guarantees for a large family of boosting-like algorithms. o Can boosting be made robust to measurement errors and outliers in the data? A problem of existing boosting methods is their relatively high sensitivity to measurement inaccuracies and errors in the training data. To address this problem, the soft-margin idea already used in support vector learning is transferred to boosting, leading to theoretically well-motivated, regularized algorithms with a high degree of robustness. o How can the applicability of boosting be extended to regression problems? Boosting methods were originally developed for classification problems. To extend their applicability to regression, the preceding convergence results are used and new boosting-like algorithms for regression are developed. We show that these algorithms have good theoretical and practical properties. o Is boosting useful in practice? The theoretical results presented are accompanied by simulation results, either to illustrate properties of the algorithms or to show that they work well in practice and can be applied directly. The practical relevance of the developed methods is illustrated by the analysis of chaotic time series and by industrial applications such as a power consumption monitoring system and the development of new drugs.
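    The central construction, an iteratively re-weighted linear combination of base hypotheses, can be illustrated with a minimal AdaBoost over decision stumps. This is a generic textbook sketch, not the maximum-margin, regularized, or regression variants developed in the thesis.

      # Minimal AdaBoost sketch with decision stumps, labels y in {-1, +1}.
      # Textbook construction for illustration only.
      import numpy as np

      def stump_predict(X, feature, threshold, polarity):
          return polarity * np.where(X[:, feature] <= threshold, 1.0, -1.0)

      def fit_adaboost(X, y, n_rounds=20):
          y = np.asarray(y, dtype=float)
          n = len(y)
          w = np.full(n, 1.0 / n)                      # example weights
          ensemble = []
          for _ in range(n_rounds):
              best = None
              for f in range(X.shape[1]):              # exhaustive stump search
                  for t in np.unique(X[:, f]):
                      for pol in (1.0, -1.0):
                          err = np.sum(w * (stump_predict(X, f, t, pol) != y))
                          if best is None or err < best[0]:
                              best = (err, f, t, pol)
              err, f, t, pol = best
              err = max(err, 1e-12)
              alpha = 0.5 * np.log((1.0 - err) / err)  # weight of this hypothesis
              w *= np.exp(-alpha * y * stump_predict(X, f, t, pol))
              w /= w.sum()                             # re-normalize example weights
              ensemble.append((alpha, f, t, pol))
          return ensemble

      def predict(ensemble, X):
          score = sum(a * stump_predict(X, f, t, pol) for a, f, t, pol in ensemble)
          return np.sign(score)

    The quantity y * score, taken before the sign, is the (unnormalized) margin whose size the abstract relates to generalization performance.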

  17. Heterologous Prime-Boost Immunisation Regimens Against Infectious Diseases

    DTIC Science & Technology

    2006-08-01

    of these cells by boosting. DNA vaccines are good priming agents since they are internalised by antigen presenting cells and can induce antigen...presentation via both MHC class I and class II, thereby inducing both cytotoxic T lymphocytes and type 1-helper T lymphocytes. Successful boosting agents...assessing prime-boost vaccine combinations for protection against infectious agents. • In a number of prime-boost studies, the inclusion of growth

  18. 75 FR 57748 - Combined Notice of Filings No. 2

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-22

    ...: Cameron Interstate Pipeline, LLC. Description: Cameron Interstate Pipeline, LLC submits an eTariff XML...-mail FERCOnlineSupport@ferc.gov , or call (866) 208-3676 (toll free). For TTY, call (202) 502-8659...

  19. XML Encoding of Features Describing Rule-Based Modeling of Reaction Networks with Multi-Component Molecular Complexes

    PubMed Central

    Blinov, Michael L.; Moraru, Ion I.

    2011-01-01

    Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833
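    As a toy illustration of what such a rule-based encoding might contain, the sketch below serializes one BioNetGen-style binding rule (roughly "L(r) + R(l,Y~U) -> L(r!1).R(l!1,Y~U)") with Python's standard library. All element and attribute names are invented for this sketch and are not the conventions proposed in the paper or adopted by SBML.

      # Toy sketch of an XML encoding for one rule-based reaction.
      # Element and attribute names are illustrative assumptions only.
      import xml.etree.ElementTree as ET

      rule = ET.Element("reactionRule", id="R1", name="ligand_binding")
      reactants = ET.SubElement(rule, "listOfReactantPatterns")
      lig = ET.SubElement(reactants, "moleculePattern", molecule="L")
      ET.SubElement(lig, "component", name="r", bindingState="free")
      rec = ET.SubElement(reactants, "moleculePattern", molecule="R")
      ET.SubElement(rec, "component", name="l", bindingState="free")
      ET.SubElement(rec, "component", name="Y", state="U")
      products = ET.SubElement(rule, "listOfProductPatterns")
      ET.SubElement(products, "bond", site1="L.r", site2="R.l")
      ET.SubElement(rule, "rateLaw", constant="kon")

      print(ET.tostring(rule, encoding="unicode"))

    The point of such a convention is that the rule, not the enumerated species list, is the unit of exchange, so the combinatorial expansion can be regenerated by any compliant tool.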

  20. Boosting BCG-primed responses with a subunit Apa vaccine during the waning phase improves immunity and imparts protection against Mycobacterium tuberculosis

    PubMed Central

    Nandakumar, Subhadra; Kannanganat, Sunil; Dobos, Karen M.; Lucas, Megan; Spencer, John S.; Amara, Rama Rao; Plikaytis, Bonnie B.; Posey, James E.; Sable, Suraj B.

    2016-01-01

    Heterologous prime–boosting has emerged as a powerful vaccination approach against tuberculosis. However, the optimal timing to boost BCG-immunity using subunit vaccines remains unclear in clinical trials. Here, we followed the adhesin Apa-specific T-cell responses in BCG-primed mice and investigated its potential as a BCG booster. The Apa-specific T-cell response peaked 32–52 weeks after parenteral or mucosal BCG-priming but waned significantly by 78 weeks. A subunit-Apa-boost during the contraction-phase of the BCG-response had a greater effect on the magnitude and functional quality of specific cellular and humoral responses compared to a boost at the peak of the BCG-response. The cellular response increased following the mucosal BCG-prime–Apa-subunit-boost strategy compared to the Apa-subunit-prime–BCG-boost approach. However, parenteral BCG-prime–Apa-subunit-boost by a homologous route was the most effective strategy in terms of enhancing specific T-cell responses during waning in the lung and spleen. Two Apa-boosters markedly improved waning BCG-immunity and significantly reduced Mycobacterium tuberculosis burdens post-challenge. Our results highlight the challenges of optimizing prime–boost regimens in mice, where BCG drives persistent immune-activation, and suggest that boosting with a heterologous vaccine may be ideal once the specific persisting effector responses have contracted. Our results have important implications for the design of prime–boost regimens against tuberculosis in humans. PMID:27173443
